Remote Instrumentation for eScience and Related Aspects
Franco Davoli • Marcin Lawenda • Norbert Meyer • Roberto Pugliese • Jan Węglarz • Sandro Zappatore
Editors
Remote Instrumentation for eScience and Related Aspects
Editors

Franco Davoli
DIST-University of Genoa
Via Opera Pia 13
16145 Genova, Italy
[email protected]

Marcin Lawenda
Poznań Supercomputing and Networking Center
Noskowskiego 10
61-704 Poznań, Poland
[email protected]

Norbert Meyer
Poznań Supercomputing and Networking Center
Noskowskiego 10
61-704 Poznań, Poland
[email protected]

Roberto Pugliese
Sincrotrone Trieste S.c.p.A.
Department of Information Technology
S.S. 14, km 163.5, Area Science Park
34012 Basovizza, Trieste, Italy
[email protected]

Jan Węglarz
Poznań Supercomputing and Networking Center
Noskowskiego 10
61-704 Poznań, Poland
[email protected]

Sandro Zappatore
DIST-University of Genoa
Via Opera Pia 13
16145 Genova, Italy
[email protected]
ISBN 978-1-4614-0507-8
e-ISBN 978-1-4614-0508-5
DOI 10.1007/978-1-4614-0508-5
Springer New York Dordrecht Heidelberg London

Library of Congress Control Number: 2011938142

© Springer Science+Business Media, LLC 2012
All rights reserved. This work may not be translated or copied in whole or in part without the written permission of the publisher (Springer Science+Business Media, LLC, 233 Spring Street, New York, NY 10013, USA), except for brief excerpts in connection with reviews or scholarly analysis. Use in connection with any form of information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed is forbidden.
The use in this publication of trade names, trademarks, service marks, and similar terms, even if they are not identified as such, is not to be taken as an expression of opinion as to whether or not they are subject to proprietary rights.

Printed on acid-free paper

Springer is part of Springer Science+Business Media (www.springer.com)
Preface
e-Science is a complex set of disciplines requiring computationally intensive distributed operations, high-speed networking, and collaborative working tools. As such, it is most often (and correctly) associated with grid- and cloud-computing infrastructures and middleware. However, an essential component is sometimes overlooked in this picture: the scientific instrumentation providing either the very source of data, or special-purpose analytical tools (as in chemical or biological laboratories) and processing tools (as in electronic Very Long Baseline Interferometry, or eVLBI, and in the use of various measurement devices). Making scientific instruments an essential manageable resource over distributed computing infrastructures, such as the grid, has been the focus in recent years of a specific research area referred to as Remote Instrumentation. European research has been quite active in it, through projects like GRIDCC (Grid Enabled Remote Instrumentation with Distributed Control and Computation), RINGrid (Remote Instrumentation in Grid environment) and DORII (Deployment of Remote Instrumentation Infrastructure), among others. Since 2005, these projects have been accompanied by a Workshop series named INGRID ("Instrumenting" the Grid). This book stems from the fifth Workshop in this series, which was held in May 2010 in Poznań, Poland. The contributions touch the main theme of remote instrumentation, along with related technologies that enable the implementation of truly distributed and coordinated laboratories.
The content deals with remote instrumentation and sensors' infrastructure, virtual laboratories and observatories, e-Science applications over the e-Infrastructure, architectural middleware elements, advanced grid and cloud computing, scientific visualization, workflows, virtual machines, instrument simulation, security, network performance monitoring, data mining in virtual laboratories, distance learning tools, and software platforms, among other topics. The book is organized into three parts. Part I, Sensors' Infrastructure, contains six contributions. R. Borghes, R. Pugliese, G. Kourousias, A. Curri, M. Prica, D. Favretto and A. Del Linz examine the use of the functionalities of the Instrument Element (IE), the basic abstraction that allows exposing instruments (of any kind) as manageable resources, to perform computing tasks in a Synchrotron Radiation
Facility beamline dedicated to medical imaging. F. Davoli, L. Berruti, S. Vignola and S. Zappatore evaluate the performance of the IE's embedded publish/subscribe mechanism in acquiring measurement data and delivering them to a multiplicity of users. M. Dias de Assunção, J.-P. Gelas, L. Lefèvre and A.-C. Orgerie present a large-scale energy-sensing infrastructure with software components that allow users to precisely measure and understand the energy usage of their systems. P. Gamba and M. Lanati describe the steps and results toward the customization of a seismic early warning system for its porting to grid technology. The contribution by I. Coterillo, M. Campo, J. Marco De Lucas, J. A. Monteoliva, A. Monteoliva, A. Monnà, M. Prica and A. Del Linz deals with a mobile floating autonomous platform supporting an extensive set of sensors to monitor a water reservoir, and with its integration in the existing grid infrastructure. M. Adamski, G. Frankowski, M. Jerzak, D. Stokłosa and M. Rzepka examine the security issues of virtual laboratories, also with reference to the use case of a sensor infrastructure. The second part, Software Platforms, includes four contributions dedicated to various supporting software tools and applications, particularly in the field of distance learning. A. Cheptsov and B. Koller present tools for performance analysis of parallel applications. The other three contributions are related to distance learning platforms centered on the grid, remote instrumentation and virtual laboratories. D. Stokłosa, D. Kaliszan, T. Rajtar, N. Meyer, F. Koczorowski, M. Procyk, C. Mazurek and M. Stroiński present the Kiwi Remote Instrumentation Platform and its usage for phenology observations. A. Grosso, D. Anghinolfi, C. Vecchiola and A. Boccalatte describe ExpertGrid, a software infrastructure for the development of decision support tools to train crisis managers. L. Caviglione, M. Coccoli, and E.
Punta discuss the issues related to the adoption of virtual laboratories for education in schools and universities, and for the training of professionals, also by using examples in networked control systems. The third part is dedicated to the grid infrastructure, the basis upon which e-Science applications build. A. Merlo, A. Corana, V. Gianuzzi and A. Clematis address the issue of Quality of Service (QoS), and present a high-level simulator to evaluate QoS-specific tools. D. Adami, C. Callegari, S. Giordano and M. Pagano propose a distributed resource allocation algorithm that is aware not only of grid resources, but also of the network status and of the latencies to reach them, thereby allowing the choice to be optimized on the basis of multiple criteria. A. Monari, A. Scemama and M. Caffarel discuss a grid implementation of a massively parallel Quantum Monte Carlo (QMC) code on the EGEE grid architecture, and analyze its performance. B. Strug, I. Ryszka, E. Grabska and G. Ślusarczyk introduce a layered graph approach for the generation of a grid structure and its parameters. T. Hopp, M. Hardt, N. Ruiter, M. Zapf, G. Borges, I. Campos and J. Marco present an interactive approach that allows Application Program Interface (API) style communication with grid resources and execution of parallel applications using the Message Passing Interface (MPI) paradigm. M. Sutter, V. Hartmann, M. Götter, J. van Wezel, A. Trunov, T. Jejkal and R. Stotzka examine access technologies usable for the development of a Large Scale Data Facility to meet the requirements of various scientific projects, and describe a first implementation of a uniform user
interface. D. Król, R. Słota, B. Kryza, D. Nikolow, W. Funika and J. Kitowski introduce a novel approach to data management within the Grid environment based on user-defined storage policies, and discuss its implementation in the PL-GRID infrastructure. M. Brescia, S. Cavuoti, R. D'Abrusco, O. Laurino and G. Longo present a distributed web-based data mining infrastructure specialized in the exploration of Massive Data Sets with Soft Computing methods. Finally, the contribution by D. Adami, A. Chepstov, F. Davoli, M. Lanati, I. Liabotis, S. Vignola, S. Zappatore and A. Zafeiropoulos describes the network monitoring platform deployed in support of grid applications within the DORII project, and presents the measurement results of two selected applications. The editors wish to thank all authors, organizers and participants of the INGRID 2010 Workshop, and particularly the two keynote speakers, Prof. Geoffrey Fox from Indiana University, Bloomington, IN, USA, and Dr. Monika Kącik from the European Commission, Information Society and Media DG, Unit F3 "GÉANT & e-Infrastructure".

Franco Davoli
Marcin Lawenda
Norbert Meyer
Roberto Pugliese
Jan Węglarz
Sandro Zappatore
Contents
Part I  Sensors' Infrastructure

Grid Computations without the Computing Element: Interfacing
Control Systems for On-Line Computations ..... 3
Roberto Borghes, Roberto Pugliese, George Kourousias, Alessio Curri,
Milan Prica, and Andrea Del Linz

Performance Evaluation of the DORII Instrument Element Data
Transfer Capabilities ..... 15
Luca Berruti, Franco Davoli, Stefano Vignola, and Sandro Zappatore

The Green Grid'5000: Instrumenting and Using a Grid
with Energy Sensors ..... 25
Marcos Dias de Assunção, Jean-Patrick Gelas, Laurent Lefèvre,
and Anne-Cécile Orgerie

Porting a Seismic Network to the Grid ..... 43
Paolo Gamba and Matteo Lanati

Integrating a Multisensor Mobile System in the Grid Infrastructure ..... 59
Ignacio Coterillo, Maria Campo, Jesús Marco de Lucas,
Jose Augusto Monteoliva, Agustín Monteoliva, A. Monnà,
M. Prica, and A. Del Linz

Defence in Depth Strategy: A Use Case Scenario of Securing
a Virtual Laboratory ..... 75
Marcin Adamski, Gerard Frankowski, Marcin Jerzak,
Dominik Stokłosa, and Michał Rzepka

Part II  Software Platforms

Performance Analysis Framework for Parallel Application Support
on the Remote Instrumentation Grid ..... 105
Alexey Cheptsov and Bastian Koller

New Technologies in Environmental Science: Phenology Observations
with the Kiwi Remote Instrumentation Platform ..... 117
Dominik Stokłosa, Damian Kaliszan, Tomasz Rajtar, Norbert Meyer,
Filip Koczorowski, Marcin Procyk, Cezary Mazurek, and Maciej Stroiński

An Agent Service Grid for Supporting Open and Distance Learning ..... 129
Alberto Grosso, Davide Anghinolfi, Antonio Boccalatte,
and Christian Vecchiola

Education and Training in Grid-Enabled Laboratories
and Complex Systems ..... 145
Luca Caviglione, Mauro Coccoli, and Elisabetta Punta

Part III  Grid Infrastructure

SoRTSim: A High-Level Simulator for the Evaluation of QoS Models
on Grid ..... 161
Alessio Merlo, Angelo Corana, Vittoria Gianuzzi, and Andrea Clematis

MRA3D: A New Algorithm for Resource Allocation
in a Network-Aware Grid ..... 177
Davide Adami, Christian Callegari, Stefano Giordano, and Michele Pagano

Large-Scale Quantum Monte Carlo Electronic Structure Calculations
on the EGEE Grid ..... 195
Antonio Monari, Anthony Scemama, and Michel Caffarel

Generating a Virtual Computational Grid by Graph Transformations ..... 209
Barbara Strug, Iwona Ryszka, Ewa Grabska, and Grażyna Ślusarczyk

Interactive Grid Access with MPI Support Using GridSolve
on gLite-Infrastructures ..... 227
Torsten Hopp, Marcus Hardt, Nicole Ruiter, Michael Zapf,
Gonçalo Borges, Isabel Campos, and Jesús Marco

File Systems and Access Technologies for the Large Scale
Data Facility ..... 239
M. Sutter, V. Hartmann, M. Götter, J. van Wezel, A. Trunov,
T. Jejkal, and R. Stotzka

Policy Driven Data Management in PL-Grid Virtual Organizations ..... 257
Dariusz Król, Renata Słota, Bartosz Kryza, Darin Nikolow,
Włodzimierz Funika, and Jacek Kitowski

DAME: A Distributed Data Mining and Exploration Framework
Within the Virtual Observatory ..... 267
Massimo Brescia, Stefano Cavuoti, Raffaele D'Abrusco, Omar Laurino,
and Giuseppe Longo

Network Performance Monitoring for Remote Instrumentation Services:
The DORII Platform Test Case ..... 285
Davide Adami, Alexey Chepstov, Franco Davoli, Matteo Lanati,
Ioannis Liabotis, Stefano Vignola, Sandro Zappatore,
and Anastasios Zafeiropoulos

Index ..... 305
Part I
Sensors’ Infrastructure
Grid Computations without the Computing Element: Interfacing Control Systems for On-Line Computations Roberto Borghes, Roberto Pugliese, George Kourousias, Alessio Curri, Milan Prica, and Andrea Del Linz
Abstract Traditionally, computations on the Grid take place on the Computing Element. Along the same line, the Instrument Element is meant to Grid-enable instrumentation. In this chapter we introduce a non-classical use of the Instrument Element, in which it serves as a virtual instrument for performing a computational task. Specifically, it has been used as the interface to a Control System that executes a series of High Throughput Computing tasks in an on-line manner. This was done to meet the special requirements of an application in the Synchrotron Radiation Facility (SRF) Elettra. Instrument control in such institutions is often done through Distributed Control Systems; one such system is TANGO, on which Elettra, among other synchrotrons, is heavily based. The application was for a beamline working in medical imaging (SYRMEP) and aimed to improve an established Computed Tomography (CT) workflow. The task was the generation, in parallel, of sinograms of a specific data format based on the acquired X-ray absorption data. The target was the availability of the complete sinogram data set in a Storage Element by the time the CT scan completed. Grid-related latencies, such as job submission and queuing, would have been an issue given the near-real-time requirements. Moreover, the inclusion of a set of TANGO devices was necessary, and a generic gLite Worker Node (WN) would not have been as suitable as a dedicated system. Despite the avoidance of certain Grid parts, the Grid Security infrastructure was required to be fully utilised in the final solution. The design followed a bottom-up approach: (a) design and preparation of a dedicated system based on virtualisation; (b) development of a parallel sinogram generator; (c) deployment of suitable TANGO devices for controlling the data acquisition, the generator, and the on-line progress; (d) a TANGO-to-IE bridge to export the devices as IM; (e) utilisation of a Grid Web Portal (VCR) to serve as the end-user GUI for the application. In this contribution (I) we introduce a novel concept where computation may take place outside the CE, (II) we design an architecture where a Distributed Control System is piloted by an Instrument Manager through the Grid, and (III) we discuss a working implementation of the system.

This work has been partially supported by the DORII EU project (European Commission within the 7th Framework Programme (FP7/2007-2013) under grant agreement no. RI-213110), www.dorii.eu.

The authors would like to thank Diego Dreossi, a senior SR X-ray CT and Medical Imaging specialist working for the SYRMEP@Elettra beamline, for his valuable feedback and cooperation during all the phases of the project.

R. Borghes • R. Pugliese () • G. Kourousias • A. Curri • M. Prica • A.D. Linz
Scientific Computing Team, IT Group, Sincrotrone Trieste S.C.p.A., Strada Statale 14 - km 163,5 in AREA Science Park, 34149 Basovizza, Trieste, Italy
e-mail: [email protected]

F. Davoli et al. (eds.), Remote Instrumentation for eScience and Related Aspects, DOI 10.1007/978-1-4614-0508-5_1, © Springer Science+Business Media, LLC 2012
1 Introduction

The domain of the application is that of scientific computing for a large multidisciplinary Physics research establishment, a Synchrotron Radiation Facility [3]. The proposed architecture has been realised to satisfy certain computational needs in the third-generation Synchrotron Radiation Facility Elettra.1 Specifically, the deployment was for a beamline/laboratory that specialises in advanced Computed Tomography and Medical Imaging. The SciComp team of Elettra has successfully utilised the Grid Computing paradigm [9] on various occasions [1, 19]. The power of the Grid has proved useful, but it has various limitations [11] that will be discussed later in this chapter. In order to overcome certain Grid-related problems, we utilise a new architecture that is only partially Grid-based. In general, computation on the Grid is done on the Worker Nodes (WN) of a Computing Element (CE). There are various "logistics" in the process that the middleware, in our case gLite [4], takes care of. It has been observed that there are quite high latencies during the job submission phase. For certain applications these latencies are not a problem, but for others they are. The proposed solution carries out the computation in an atypical Grid manner while maintaining other Grid features such as security and I/O to storage.
Elettra, like many other synchrotrons, bases its control on a Distributed Control System (DCS) [2]. This control core manages more than 10,000 devices, including sensors, detectors, PSUs, and motors. The DCS of choice is TANGO [10]. The proposed solution assumes that a set of TANGO devices is part of the system. In order to blend TANGO with the Grid technologies, we used the Instrument Element (IE) [5, 14], a middleware extension that allows instrument integration on the Grid. The IE has been utilised in a non-standard way: it does not control an actual instrument but a software-controlling TANGO device. This has been possible due to the extensible nature of its architecture.

1 URL: http://www.elettra.trieste.it. Note that the establishment also includes a fourth-generation light source based on a free-electron laser, FERMI@Elettra.
In order to satisfy the specific needs of the application, a suitable set of virtualisation technologies [21] was deployed. The actual imaging algorithm and application were designed to take advantage of this setup, yielding satisfactory performance. The user interaction with the application meets good standards, since the final solution is based on a modern Grid portal, the VCR [13].
2 The Application

2.1 Beamlines and Online Processing

A beamline (BL) in a synchrotron is a kind of specialised laboratory. Many of the experiments that take place in such environments have specialised and often high computing needs. Such needs may include the design of novel algorithms, software, hardware, control, and user interaction. The SRF Elettra has 24 beamlines providing more than 105,000 h of user time to visiting and in-house scientists. A mode of operation that is in demand is that of "on-line" processing (Fig. 1), which takes place during the acquisition phase of the experiment. Due to the long duration of certain experiments, such as scans exceeding 4 h, this mode of operation can provide early feedback that allows the specialist to preliminarily evaluate the correctness of the experiment's setup.
2.2 Imaging for a Tomography Workflow

The requested application aimed to serve as an improvement to an established Computed Tomography (CT) [8] workflow for a Medical Imaging beamline. An established workflow implies that the beamline is already operating and functional: there are various stages of processing (including a few Grid-based ones [6]), some sequential, others parallel, and data formats and instrumentation control are already in place.
Fig. 1 ONLINE takes place during the experiment and ends with it, while OFFLINE starts when the dataset is complete. Where the situation permits, ONLINE is the preferred mode, since it saves time
Fig. 2 Non-normalised X-ray absorption and the sinogram line it contributes to. PSGen processes all the lines in parallel
This increased the challenge, since the improvements had to be a transparent part that would integrate well and yield advantages. An example technical challenge was the on-the-fly conversion of Lempel-Ziv-Welch [22] compressed 12-bit datasets of integers to a RAW 16-bit format. The conversion may not have been an optimal engineering solution,2 but it had been strictly requested so that the I/O formats would remain exactly the same as prior to the introduction of our solution. This resulted in an undisturbed workflow where the data processing procedures, before and after the new system, remained the same. It was identified from the early System Analysis stages that the mode of operation would be online. Before our solution, the specific process in the beamline operated in an offline mode. In Fig. 1, we see that online processing can provide a complete output dataset by the end of the experiment. Eventually, the online system that this paper describes saves 2 h per dataset; therefore, more data can be acquired for a specific experiment. The application had to collect X-ray absorption data during a CT scan, process them, and produce the corresponding sinogram dataset (Fig. 2). The resulting application was PSGen, a Parallel Sinogram Generator. Through the architecture described in the following sections, it achieves rapid, sub-second absorption data processing, including parallel I/O to multiple hundreds of sinogram files, with a total I/O of approximately 5 GB per dataset; all in near-real-time.
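The core data reorganisation that a sinogram generator performs can be illustrated with a minimal sketch (plain Python with toy array sizes and invented names; the real PSGen is a parallel implementation handling hundreds of files): each newly acquired projection of R detector rows contributes one row, at the current angle index, to each of the R per-slice sinograms.

```python
def scatter_projection(projection, sinograms, angle_index):
    """Distribute one projection (a list of detector rows) into per-slice
    sinograms: detector row r of the projection at angle a becomes
    row a of sinogram r."""
    for r, detector_row in enumerate(projection):
        sinograms[r][angle_index] = detector_row

# Toy dataset: 3 angles, 2 detector rows, 4 columns per row.
n_angles, n_rows, n_cols = 3, 2, 4
sinograms = [[None] * n_angles for _ in range(n_rows)]

for a in range(n_angles):
    # A fake projection whose pixel values encode (angle, row, column).
    projection = [[100 * a + 10 * r + c for c in range(n_cols)]
                  for r in range(n_rows)]
    scatter_projection(projection, sinograms, a)

# The sinogram of slice 1 at angle 2 is exactly row 1 of projection 2.
print(sinograms[1][2])  # [210, 211, 212, 213]
```

This is why the online mode pays off: every sinogram row can be written as soon as its projection arrives, so the complete sinogram set is ready when the scan ends.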
3 The Distributed Control System

TANGO is an open-source, object-oriented control system developed mainly for controlling accelerators and experiments. It can be used for almost any kind of hardware, but deployment on a standard gLite WN may be difficult. The system is actively developed by a consortium of synchrotron radiation institutes: Alba, Desy, Elettra, ESRF, and Soleil.

2 For large scientific datasets, modern formats like HDF5 should be considered.
Fig. 3 There are many ways to control and monitor the TANGO devices, but even the popular GUI-based tool JIVE would not be user-friendly enough for the end-user, the beamline scientist
TANGO uses the omniORB implementation of CORBA [7] as its network protocol in order to provide network access to hardware. Hardware can range from single bits of digital input/output up to sophisticated detector systems or entire plant control systems. Hardware access is programmed in a process called a Device Server. The device server implements device classes, which implement the hardware access. At runtime the device server creates devices, which represent logical instances of hardware. The object model in TANGO supports methods, attributes and properties. The client-server communication model can be synchronous, asynchronous or event-driven. Clients and Device Servers can be written in C++, Java or Python. The choice for the PSGen devices was C++, in order to achieve optimal performance. For the online-processing application, two TANGO device servers have been developed: the Watchdog and the Worker. The main task of the Watchdog is signalling the presence of new data to the Worker device. The Worker, after receiving the signal, performs the data processing step. The system is highly configurable and provides feedback and status information at all stages of the processing. The devices have such an open and extensible architecture that they can be deployed in other online-processing applications with minimal effort. Example settings for such a device include the Watchdog frequency, the dataset size indicator, and self-stopping parameters. In Fig. 3, a generic interaction is achieved by using JIVE, a graphical tool for viewing and modifying the TANGO db and testing devices.3
3 The GUI tool JIVE for TANGO that we are referring to should not be confused with the homonymous EU FP7 project.
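The Watchdog/Worker split can be sketched as follows (plain Python with hypothetical names; the real device servers are TANGO C++ classes communicating over CORBA): a watchdog polls a data source for newly completed items and signals a worker, which performs one processing step per signal and stops on a self-stopping condition.

```python
import queue
import threading

class Watchdog:
    """Polls a data source and signals the Worker when new data appears.
    In the real system this is a TANGO device server polling the
    acquisition output; here the 'source' is just a growing list."""
    def __init__(self, source, signal_queue):
        self.source = source
        self.signal_queue = signal_queue
        self.seen = 0

    def poll_once(self):
        # Signal every item that appeared since the last poll.
        while self.seen < len(self.source):
            self.signal_queue.put(self.source[self.seen])
            self.seen += 1

class Worker:
    """Performs one data processing step per signal from the Watchdog."""
    def __init__(self, signal_queue):
        self.signal_queue = signal_queue
        self.processed = []

    def run(self):
        while True:
            item = self.signal_queue.get()
            if item is None:          # self-stopping sentinel
                break
            self.processed.append(f"sinogram rows for {item}")

q = queue.Queue()
acquired = []                         # stands in for files on disk
watchdog, worker = Watchdog(acquired, q), Worker(q)
t = threading.Thread(target=worker.run)
t.start()

for chunk in ("proj_000", "proj_001"):
    acquired.append(chunk)            # acquisition writes a new projection
    watchdog.poll_once()              # the watchdog notices and signals

q.put(None)                           # stop the worker
t.join()
print(worker.processed)
```

The decoupling through a signal queue is what makes the pattern reusable in other online-processing applications: only the polling predicate and the processing step change.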
4 Where Computation Takes Place

The TANGO system described in the previous section has requirements that the typical gLite Worker Node (WN) may not satisfy, or may satisfy only with impractical difficulty. These requirements are usually OS-related, such as user privileges and kernel tuning, and network-related, such as inbound connectivity. In addition to the TANGO requirements, PSGen itself requires a hardware and software setup that is not present in a general gLite CE deployment, including custom and cutting-edge versions of libraries and dynamic memory provisioning. An additional design decision was to provide the application with direct, fast, low-latency access to the storage, which can at all times be accessed securely through the GridFTP protocol [20]. The system we designed for deploying the application, coupled with the TANGO devices, is hosted in the Elettra Cloud Platform. This infrastructure is based on a virtualisation solution, the XEN hypervisor. It provides a flexible way to add, remove, upgrade, and change resources such as CPUs, memory, and network connectivity. In the future the application may see an increase in its computational requirements (e.g. as CCDs get larger), so scalability and upgradability are necessary [16]. The system, which we named "Dynamite", has been configured in paravirtualised mode, and the mixed PSGen-specific performance hit has been measured to be less than 4.2%. The current setup of Dynamite is a 64-bit OS (CentOS 5.4) with many upgraded libraries. The 64-bit OS permits access to the 14 GB of available memory, allowing massive RAM caching and large in-memory array operations. Ten CPU cores can be dedicated, but the setup can easily scale and be upgraded. The current performance satisfies the processing needs of PSGen, resulting in the calculation of approximately 15.6 × 10^6 16-bit integers per second.
An additional consideration of the system's performance is that of I/O. The system does not use local storage at any stage. The input and output datasets are stored in a high-performance Storage Area Network (SAN). The connection between Dynamite and the front-end server of the SAN is 4 Gb/s. The available storage for the specific beamline/lab is 20 TB. The overall performance, including calculation and I/O, has been fast and stable enough for near-real-time computation (Fig. 4).
Fig. 4 In theory, each PSGen cycle for a given dataset should take the same time. The actual numbers confirm the stability and constant performance of Dynamite, showing insignificant variation. The initial cycles reflect cached I/O
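The quoted figures can be cross-checked with a little arithmetic (a sketch; the rate and dataset size come straight from the text, the derived time is only an order-of-magnitude estimate): at roughly 15.6 × 10^6 16-bit integers per second, a ~5 GB dataset of 16-bit values is processed in a few minutes, comfortably inside a multi-hour scan.

```python
# Back-of-the-envelope check of PSGen's quoted throughput.
rate_ints_per_s = 15.6e6   # 16-bit integers per second (from the text)
dataset_bytes = 5e9        # ~5 GB total I/O per dataset (from the text)
bytes_per_int = 2          # one 16-bit integer occupies two bytes

n_ints = dataset_bytes / bytes_per_int
seconds = n_ints / rate_ints_per_s
print(f"{seconds / 60:.1f} minutes per dataset")  # ~2.7 minutes
```

Spread across a scan lasting several hours, this per-dataset processing budget is what makes the near-real-time, on-line mode feasible.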
5 The IE not for Instrument Control

The Instrument Element (IE) is an advanced Grid middleware extension that enables the inclusion of instrumentation in the Grid. Its exact details are beyond the scope of this contribution, but the reader may refer to the literature for a better understanding [14]. In general, it is a set of technologies that permit the easy inclusion of instruments like sensors, motors and cameras in a gLite infrastructure. The main objective is the remote operation [15, 18] and control of the instrument through a secure channel that respects the Grid security protocols. The Instrument Element is an open-source middleware implemented in Java as an Axis SOAP web service running in the Tomcat web server. To interface the actual instruments and sensors, it uses so-called Instrument Managers (IM). An IM is suitably structured Java client code that communicates with the physical instrument or, better, with its control system. Instrument Managers run inside the IE framework, which provides a number of common services: in particular, the authentication and authorisation mechanism, based on user credentials and VOMS roles according to the GSI/gLite schema, and a common interface to all attached instruments (Fig. 5). The architecture of the IE is extensible, and its scope may be stretched beyond its original objectives. Specifically, the IE can be seen as an Integration Element by exploiting the abstraction of the instrument concept [17, 19]. When the concept is generalised, it may include software applications too; in our case we treat as an instrument the underlying TANGO device server that manages PSGen. Thus, the full control of the PSGen application and its accompanying TANGO
Fig. 5 The Instrument Element architecture. It is an extensible and expandable model, so it can be used for purposes beyond its original scope, such as steering the TANGO devices that control PSGen
Fig. 6 The IE for TANGO Watchdog of PSGen can be controlled from a web portal through an automatically generated graphical panel
devices can be interfaced and provided as an IE via an Instrument Manager. At this point, the IE controls a computing entity, the PSGen application, without accessing a traditional CE. This irregular use of the IE demonstrates a case where it may even compute or, more precisely, steer a computation. Alternative Web-Services-based technologies could be used as well, but beyond the overhead of new development, the main disadvantage would be disconnecting the application from the rest of the Grid infrastructure that it profits from, such as the SE (Fig. 6).
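The bridging role of the Instrument Manager described above can be sketched as follows (plain Python with entirely hypothetical class and command names; the real IM is Java code inside the IE framework, and the device proxy is an omniORB/CORBA client): generic IE-side operations are translated into control-system commands, and status is read back from device attributes.

```python
class FakeTangoDevice:
    """Stand-in for a TANGO device proxy; exposes commands and
    readable attributes, as a real device server would."""
    def __init__(self):
        self.attrs = {"State": "STANDBY", "Progress": 0}

    def command(self, name):
        if name == "Start":
            self.attrs["State"] = "RUNNING"
        elif name == "Stop":
            self.attrs["State"] = "STANDBY"

class InstrumentManager:
    """Sketch of an IM bridging Grid-side calls to the control system:
    the IE invokes generic operations, and the IM translates them
    into device commands and attribute reads."""
    def __init__(self, device):
        self.device = device

    def execute(self, operation):
        # Map generic IE operations onto device-specific commands.
        self.device.command({"start": "Start", "stop": "Stop"}[operation])

    def status(self):
        return self.device.attrs["State"]

im = InstrumentManager(FakeTangoDevice())
im.execute("start")
print(im.status())  # RUNNING
im.execute("stop")
print(im.status())  # STANDBY
```

Because the IM only sees operations and attributes, whether the "instrument" behind it is a motor or a computation such as PSGen is irrelevant to the Grid side, which is exactly the generalisation the chapter exploits.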
6 The User Interaction

The Scientific Computing team of the IT Group of the Synchrotron Elettra has in recent years been involved in R&D on technologies that aim to realise single sign-on and Web-based access to various resources. One of the latest products is the VCR. The VCR was initially designed to serve as a portal for easy access to Grid resources. During the DORII project [12], the VCR was further developed to include workflow management, remote rendering (GVid), collaboration features, and others. The latest version of the VCR had to be suitably updated in order to accommodate the novel hybrid approach for applications described in the previous
Fig. 7 The end-user, the beamline scientist, interacts with the application through a web browser in a simple manner
sections. An updated Application Manager allowed easy integration of the TANGO subsystem and rapid application deployment. PSGen, at this point a VCR application, can be accessed by users that have access to the portal, subject to fine-grained Grid-related security controls such as certificates, VOs, and the VCR-side TAGS.4 The PSGen user can start the application at the beginning of the experiment and receive feedback at any stage, even after logoff and re-login. The input and output datasets are stored in Storage Elements (SE) that can be browsed from the web portal. In addition, the user may start other applications that make full use of the Grid in the standard manner: online and offline, single, parametric, MPI, and interactive jobs, and any combination permitted by gLite, the Common Library, and the rest of the technologies developed during the DORII project. Finally, the scientists may use features like the logbook for note-taking while using PSGen. Additional work is in progress to allow richer GUI elements that will permit fuller interaction with the remote application (Fig. 7).
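The TAGS-based association of users with applications can be illustrated with a minimal sketch (invented data and names; the real VCR stores these associations in the portal): each application carries a set of tags, each user holds a set of granted tags, and an application is listed for a user only when the sets intersect.

```python
# Hypothetical VCR-style tag filtering: an application is visible to a
# user only if the user holds at least one of the application's tags.
applications = {
    "PSGen": {"syrmep", "tomography"},
    "GenericJobSubmit": {"grid"},
}
user_tags = {"tomography", "grid"}

visible = [name for name, tags in applications.items() if tags & user_tags]
print(sorted(visible))  # ['GenericJobSubmit', 'PSGen']
```

Tag filtering of this kind complements, rather than replaces, the certificate- and VO-based checks: it only shapes which applications a portal user sees.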
7 Conclusions

The Grid can be useful in many scenarios, but there are cases where its limitations become obvious. One such limitation is constituted by the various latencies, which usually occur during the job submission phase. There is a large family of applications for which these latencies are not a problem, but other cases can be hindered by them. The work presented here concerned an application with near-real-time requirements for on-line processing in a tomography laboratory of a synchrotron facility. The presented architecture and the related R&D demonstrated a hybrid approach where storage
⁴ TAGS in the VCR domain can be seen as flags that allow a flexible association of a specific user or user-group with a set of applications.
R. Borghes et al.
(I/O on the SEs), interaction (the VCR portal), and security are handled on the Grid, while the computation is done in a non-CE-based infrastructure controlled by an IE. Such a hybrid approach is novel in the field and may in the future be extended in other directions.
References

1. R. Pugliese, M. Prica, G. Kourousias, A. Del Linz, A. Curri, Integrating instruments in the grid for on-line and off-line processing in a synchrotron radiation facility, Computational Methods in Science and Technology 15 (2009), no. 1, 21–30.
2. A. Butkovskiĭ, Distributed Control Systems, Elsevier Publishing Company, 1969.
3. E. Koch, G. Marr, T. Sasaki, H. Winick, Handbook on Synchrotron Radiation, 1983.
4. E. Laure, S. Fisher, A. Frohner, C. Grandi, P. Kunszt, A. Krenek, O. Mulmo, F. Pacini, F. Prelz, J. White, et al., Programming the grid with gLite, Computational Methods in Science and Technology 12 (2006), no. 1, 33–45.
5. E. Frizziero, M. Gulmini, F. Lelli, G. Maron, A. Oh, S. Orlando, A. Petrucci, S. Squizzato, S. Traldi, Instrument Element: a new grid component that enables the control of remote instrumentation, doi:10.1109/CCGRID.2006.146, 2006.
6. F. Brun, G. Kourousias, D. Dreossi, L. Mancini, An improved method for ring artifacts removing in reconstructed tomographic images, IFMBE World Congress on Medical Physics and Biomedical Engineering, vol. 24, Springer, 2009, pp. 926–929.
7. M. Henning, The rise and fall of CORBA, Queue 4 (2006), no. 5, 34.
8. G.T. Herman, Fundamentals of Computerized Tomography: Image Reconstruction from Projections, Springer Verlag, 2009.
9. I. Foster, C. Kesselman, The Grid: Blueprint for a New Computing Infrastructure, Morgan Kaufmann, 2004.
10. J.M. Chaize, A. Götz, W.D. Klotz, J. Meyer, M. Perez, E. Taurel, TANGO – an object-oriented control system based on CORBA, Proceedings of the International Conference on Accelerator and Large Experimental Physics Control Systems (ICALEPCS99), 2000.
11. L. Ilijašić, Computational Grids as Complex Networks, Master's thesis, Scuola di Dottorato in Scienza e Alta Tecnologia, 2010.
12. M. Płóciennik, D. Adami, Á. David, G. Barceló, I. Coterillo, F. Davoli, P. Gamba, R. Keller, D. Kranzlmüller, I. Labotis, DORII – Deployment of Remote Instrumentation Infrastructure, Relation 10 (2009), no. 1.109, 800.
13. M. Prica, R. Pugliese, A. Del Linz, G. Kourousias, A. Curri, D. Favretto, F. Bonaccorso, An advanced web portal for accessing grid resources with virtual collaboration features, 5th EGEE User Forum, Uppsala, Sweden, April 2010.
14. M. Prica, R. Pugliese, A. Del Linz, A. Curri, Adapting the Instrument Element to support a remote instrumentation infrastructure, Remote Instrumentation and Virtual Laboratories: Service Architecture and Networking (2010), 11.
15. M. Prica, R. Pugliese, C. Scafuri, L.D. Cano, F. Asnicar, A. Curri, Remote operations of an accelerator using the grid, Grid Enabled Remote Instrumentation (2009), 527–536.
16. R. Pugliese, G. Kourousias, A. Curri, A quantitative method for the projective approximation of computational requirements, Tech. Report 090500:01, Sincrotrone Elettra, 2009.
17. R. Pugliese, G. Kourousias, M. Prica, A. Del Linz, A. Curri, An infrastructure for the integration of geoscience instruments and sensors on the grid, Geophysical Research Abstracts, vol. 11, EGU, 2009.
18. R. Pugliese, M. Prica, Remote operations of an accelerator using the grid, International Conference on Accelerator and Large Experimental Physics Control Systems (ICALEPCS), WOPA02, 2007, pp. 303–306.
19. R. Pugliese, M. Prica, G. Kourousias, A. Del Linz, A. Curri, The grid as a software application provider in a synchrotron radiation facility, Remote Instrumentation Services on the eInfrastructure, vol. X, Springer, 2009.
20. I.J. Taylor, From P2P to Web Services and Grids: Peers in a Client/Server World, Springer-Verlag New York Inc., 2005.
21. T.L. Borden, J.P. Hennessy, J.W. Rymarczyk, Multiple operating systems on one processor complex, IBM Systems Journal 28 (1989), no. 1, 104–123.
22. T.A. Welch, A technique for high performance data compression, IEEE Computer 17 (1984), no. 6.
Performance Evaluation of the DORII Instrument Element Data Transfer Capabilities

Luca Berruti, Franco Davoli, Stefano Vignola, and Sandro Zappatore
Abstract The possibility of accessing and managing remote laboratories by exploiting the facilities offered by the Internet represents a challenging issue. To this purpose, many research projects have been funded and different hardware/software solutions have been proposed. A possible approach to the problem may consist of adopting a grid-based platform to expose and share the resources present in a group of laboratories in a uniform way. Recent projects have proposed a complex and complete architecture that enables Internet users to remotely control instrumentation and perform experiments. According to this architecture, the instrument element (IE) represents the basic abstraction that allows physical instrumentation to be exposed as a web service and to become a manageable resource in a grid environment. This chapter aims at evaluating the performance of the transfer of data generated by the instruments from the IE to a multiplicity of users. The evaluation is performed under different traffic load values, generated by the acquisition of measurements, with different numbers of clients connected to the system, and by means of two different data transfer methodologies supported by the IE. The results related to a large number of tests are presented and discussed.
1 Introduction

Remotely controlling a laboratory and the related instruments and devices, sending commands, and acquiring measurements are not new activities in the instrumentation and measurement scenario – they have been performed in a whole range of different applications. However, the goals of recent remote instrumentation service
L. Berruti • F. Davoli () • S. Vignola • S. Zappatore
DIST-University of Genoa/CNIT, University of Genoa Research Unit, Genoa, Italy
e-mail: [email protected]; [email protected]; [email protected]; [email protected]
F. Davoli et al. (eds.), Remote Instrumentation for eScience and Related Aspects, DOI 10.1007/978-1-4614-0508-5_2, © Springer Science+Business Media, LLC 2012
paradigms, such as those proposed by GRIDCC [1], RINGrid [2], and DORII [3], three recent projects funded by the European Community, are more ambitious. Indeed, they aim at:
• Providing a set of standard capabilities to perform whatever functionality may be required
• Constructing suitable abstractions of the remote instrumentation, in order to make it visible as a manageable resource
• Presenting the user with standard interfaces, which allow browsing the "distributed laboratory space," choosing different pieces of equipment, configuring their interconnection, orchestrating experiment executions, and collecting, processing, and analyzing the results – all while also providing built-in security services
In order to accomplish such tasks to a full extent, laboratories, devices, and instruments should be exposed as a set of web services, whose interfaces may be compliant with some grid architecture, much in the same way as computing and storage devices are. In this way, they would take advantage of: (a) isolation from and relative independence of the underlying networking infrastructure providing connectivity, (b) tools for resource allocation and management, (c) standard user interfaces, and (d) nontrivial quality of service (QoS) control.
All the previously mentioned concerns have been the subject of recent interest, strictly connected to the issue of remote instrumentation services (RIS). The widespread diffusion of such services can foster the use of sophisticated and costly scientific equipment and of eScience applications. Furthermore, it should be highlighted that this paradigm does not only apply to large-scale laboratories and devices; it can be fruitfully employed even with smaller and relatively widespread measurement instrumentation, as often adopted in engineering applications. The interest in this field is witnessed by a number of papers (see, e.g., [4–13]) available in the literature and dealing with these topics.
A possible approach for controlling and managing remote laboratories and instrumentation may consist of employing the grid-based architecture initially developed with the GRIDCC project. According to this architecture, the main role is played by a software component called the instrument element (IE). It represents the basic abstraction that allows physical instrumentation to be exposed as a web service and to become a manageable resource in a grid environment. Through the presence of one or more instrument managers (IMs), the IE actually interfaces the instrumentation, both hiding the details of the drivers from users, and possibly embedding popular proprietary solutions (e.g., LabView), to provide a unified standard interface. After initial implementations, the IE has been redesigned within the DORII project to simplify and better organize its structure and enhance its performance. A similar enhancement has been undertaken for another basic component of remote instrumentation services, namely, the VCR (virtual control room), a web portal that provides user access to the distributed environment. The present chapter aims at evaluating the performance of the transfer of data generated by the instruments from the IE to a multiplicity of users, under two
different data transfer methodologies supported by the IE. To this purpose, we emulate the generation of measurement data from an instrument as an array of variables, under different load values and with different numbers of subscribing clients. Java message service (JMS) is the publish/subscribe mechanism adopted, and the generation of data is effected by using JMeter, a Java application designed to measure performance. There are basically two ways in which the combination of these architectural elements can be used to interact with the physical instruments: (a) configuring and setting parameters and (b) collecting measurement data from the instruments. With respect to the second point, possible choices are polling the instrument from the client, by traversing the whole web service protocol stack (which might hinder performance), or using a publish/subscribe approach, whereby clients that have subscribed to a certain topic are notified asynchronously by means of a messaging system. This chapter is structured as follows. Section 2 illustrates the overall DORII architecture, pointing out the role played by its principal components. Section 3 describes the experimental setup employed to carry out the measurement campaigns devoted to evaluating the performance of the data collection from the IE. The results obtained are presented and discussed in Sect. 4. Finally, in the last section conclusions are drawn.
2 Overall DORII Architecture

The overall DORII architecture is diagrammatically depicted in Fig. 1. The architecture stems from the grid-based platform adopted within the GRIDCC project and represents the natural evolution of the latter. Though the GRIDCC platform may include components (called "elements") of different kinds, in the context of distributed remotely controlled laboratories the main roles are played by the instrument elements (IEs) and the virtual control room (VCR). The latter constitutes the actual interface exploited by the user to carry out any sort of actions and/or measurements that the laboratory, handled by the IE, is able to perform. The IE consists of a web service module embedded in a grid middleware, specifically gLite. The IE actually controls all the devices/instruments present in the laboratory through a set of instrument managers (IMs). The IE runs in the Apache Tomcat container as an Axis web service, and it includes two ancillary sub-components (the access control manager – ACM – and the information and monitor service – IMS), plus a set of IMs that actually handle the instrumentation. The grid framework also provides another data exchange protocol (GridFTP), which allows IMs to send their outputs (in a batch fashion) to other grid elements (e.g., storage elements, SEs). The main task of an IM is to drive and handle a specific class of devices or instruments; obviously, each class of devices requires a specific IM implementation. Under this perspective, an oscilloscope, independent of its manufacturer, will require an "oscilloscope IM," just as a spectrum analyzer
Fig. 1 Overall DORII architecture (the VCR web GUI communicates with the IE via web services and JMS publishing of data/info, alarms, and events; the IE comprises the Control Manager, Access Control Manager, Information and Monitor Service, and the Instrument Managers, built on an Instrument Abstraction Layer driving the device drivers; GridFTP links the IE to Storage and Compute Elements on the Grid)
will need a "spectrum analyzer IM." In order to handle the instrumentation of a laboratory, the IM exploits a set of application program interfaces (viz., APIs) implemented as a software library, called the instrument abstraction layer (IAL). The adoption of the IAL permits us to develop an IM without considering the specific technology used to interface the real device/instrument; in other words, an IM programmer does not need to take care of the hardware specifications at the physical layer, the low-level communication protocol, and so on. As regards command and data exchange, the requesters (viz., users at the VCR end) can send SOAP requests and get SOAP responses from the IE over an HTTP connection. Upon receiving a message (i.e., a request), the IE dispatches it toward the IM referenced in the message. According to this mechanism, a possible way a user can read the current value of a certain quantity (namely, the "attribute" of a "variable") acquired by a device/instrument consists of triggering a read operation (specifically, a getAttribute method) by means of a user request: hence, near real-time monitoring may involve continuous polling of the IE and, in turn, of the appropriate IM. Employing the getAttribute method is surely more suitable for asynchronous (nonpersistent) reading of an attribute, since continuous polling may overburden the system hosting the IE/IM and waste resources overall. If continuous (though still asynchronous) monitoring is needed, the IE may take advantage of a publish/subscribe mechanism, such as the one provided
by a JMS server (sometimes also called a JMS broker), in order to deliver data messages quickly from the instrument managers to the users or to the VCR. According to the JMS terminology [14], the messaging system provided by a JMS server may use a managed object, called a "Topic," to identify a specific message flow. A message producer acquires a reference to a JMS topic on the server and sends messages to that Topic. When a message arrives, the JMS provider is responsible for notifying all consumers who have subscribed to that Topic of the presence of a new message. According to this scheme, if an attribute is declared "subscribable," the IM internally polls the variable associated with the attribute and, only upon changes of the variable, the IM publishes the new value by means of the services offered by a JMS server "coupled" with the IE.
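The subscribable-attribute behaviour described above can be sketched in a few lines. The following Python model is purely illustrative (the real system is Java/JMS; the class and method names here are ours, not from the IE code base): a topic object notifies every subscriber on publish, and a simulated IM polls its variable internally but publishes only when the value changes.

```python
class Topic:
    """Minimal stand-in for a JMS topic: notifies all subscribers on publish."""
    def __init__(self, name):
        self.name = name
        self.subscribers = []

    def subscribe(self, callback):
        self.subscribers.append(callback)

    def publish(self, message):
        # A JMS provider would deliver asynchronously; here we call back directly.
        for cb in self.subscribers:
            cb(message)


class InstrumentManager:
    """Polls a variable internally and publishes only when its value changes."""
    def __init__(self, topic, read_variable):
        self.topic = topic
        self.read_variable = read_variable
        self._last = object()  # sentinel: the first read always publishes

    def poll_once(self):
        value = self.read_variable()
        if value != self._last:
            self._last = value
            self.topic.publish(value)


# Usage: two clients subscribe; the IM polls a variable that changes once.
received = []
topic = Topic("attribute/voltage")
topic.subscribe(lambda v: received.append(("client-1", v)))
topic.subscribe(lambda v: received.append(("client-2", v)))

samples = iter([3.3, 3.3, 3.4])  # variable value at three successive internal polls
im = InstrumentManager(topic, lambda: next(samples))
for _ in range(3):
    im.poll_once()
# Only two distinct values were published (3.3 once, then 3.4),
# so each client received two notifications rather than three.
```

In the real system the JMS provider delivers messages asynchronously to remote consumers; the direct callbacks above only mirror the notification logic, not the transport.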
3 Experimental Setup

The experimental setup adopted to evaluate the performance of the data collection from the IE is sketched in Fig. 2. The PC denoted as "Grid-Node" hosts the IE (ie-release2.1), its related IMs, and the JMS broker (Sun JMS Message Queue 4.3). The possible actions a user can perform, both directly and through a VCR, are mimicked by a JMeter (jakarta-jmeter-2.3.4) application [15] hosted in the PC denoted as "Client Station." More precisely, the Grid-Node is a Fujitsu-Siemens "Celsius 460" workstation running Windows XP SP3, and the Client Station is a Dell XPS 1330 laptop running Debian GNU/Linux 5.0. The two PCs are directly linked via an Ethernet interface at 100 Mbit/s. It should be highlighted that, in order to effectively estimate the performance of data gathering, no real device is connected to the IMs, thus avoiding possible
Fig. 2 Experimental setup used during the performance measuring tests (the Client Station, running the JMeter suite tool, is linked via 100 Mbit/s Ethernet to the Grid-Node, which hosts the IE, its IMs, the JMS broker, and the mock devices)
bottlenecks ascribable to the protocols and/or technologies employed to connect the IMs to the devices. Therefore, each IM sends commands and receives responses and data to/from a "mock" device that, upon receiving a data request from the IM, simply assembles a vector of doubles (8 bytes each on a 32-bit x86 Intel architecture) of predefined length. A specific application of the IE/IM abstractions to physical instrumentation can be found in [12, 13]. JMeter is a pure Java application originally written to test functionalities of static and dynamic resources, such as Java objects, database queries, and Servlets, and successively expanded to evaluate the performance of a wide class of objects, including SOAP/web services. Nowadays, JMeter represents a quasi-standard, well-known tool for performance evaluation, able to run in a variety of environments, to provide fast operation and precise timings, and to allow great personalization.
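To make the mock-device idea concrete, here is a small Python sketch (the function names are ours, not from the IE code base) that assembles a vector of doubles of predefined length and reports its size on the wire at 8 bytes per value, as in the setup above.

```python
import struct

def mock_device_response(length):
    """Emulate a data request: return `length` doubles, as the mock device does."""
    return [float(i) for i in range(length)]

def payload_size_bytes(vector):
    """Size of the vector packed as IEEE-754 doubles (8 bytes per value)."""
    return len(struct.pack(f"{len(vector)}d", *vector))

vec = mock_device_response(2500)
print(payload_size_bytes(vec))  # 2,500 doubles -> 20,000 bytes, as in the chapter
```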
4 Performance Evaluation

The performance of the data collection from the IE has been evaluated in two different operating contexts. In the former operative mode (hereafter referred to as "Polling Mode"), JMeter mimics a certain number of users that continuously poll the IE, which transmits back a vector of doubles. The term "continuously" means "at the maximum speed supported by the system"; viz., upon receiving a response from the IE, the client promptly sends a new request to the IE. Obviously, the maximal frequency of user requests will depend on several factors, such as the number of clients issuing the requests, the length of each answer, the computational power of the computer hosting the IE, and so on. The purpose is to estimate the average response time (viz., the time spent by a user to receive a vector of data from the IE) versus the number of users and the length of the vector of doubles, and to evaluate the overall traffic offered to the network. In the latter operative context (hereafter referred to as "JMS Subscribing Mode"), the JMS broker is exploited to deliver the data quickly from the IE to the users. Thus, the users have to subscribe to a topic corresponding to the attribute to be monitored. Subsequently, any user can get the attribute directly from the topic whenever a new value is published by the IE. In this case, our measurement campaign is aimed at evaluating the overall traffic present in the network, by varying both the number of platform users and the frequency at which the attribute changes. It is worth noting that the frequencies of attribute renewal have been chosen as the inverse of the average response time estimated when the JMS broker was not active.
To better detail this point, let us suppose that, under certain specific conditions and with the JMS disabled (i.e., with the system operating in polling mode), the average response time estimated by JMeter is Tr and Lp is the average traffic load measured over the network. Then, the goal of our measurement campaign is to estimate the network traffic load Lj and compare it with Lp when the JMS broker is enabled and 1/Tr is the renewal frequency of the attribute handled by the IM.
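A small helper makes this comparison concrete. The figures below are the chapter's own measurements for five clients and 2,500-double vectors (Tr ≈ 1.52 s, Lp ≈ 1,137 kB/s, Lj ≈ 165 kB/s, from Fig. 3 and Table 1); the function itself is plain illustrative arithmetic.

```python
def jms_comparison(t_r_seconds, l_p_kbytes, l_j_kbytes):
    """Renewal frequency 1/Tr and the polling-vs-JMS traffic ratio Lp/Lj."""
    renewal_hz = 1.0 / t_r_seconds
    load_ratio = l_p_kbytes / l_j_kbytes
    return renewal_hz, load_ratio

renewal, ratio = jms_comparison(1.52, 1137, 165)
print(f"renewal frequency: {renewal:.3f} Hz, load ratio Lp/Lj: {ratio:.2f}")
# ~0.658 Hz and a ratio near 6.9, i.e. the factor-of-about-six reduction
# discussed later in the chapter.
```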
Fig. 3 Response time versus the number of clients when the "Polling Mode" is used (mean values with lower and upper 99% confidence bounds, for vectors of 2,500 and 1,250 doubles)
Figure 3 reports the behavior of the response time versus the number of clients that poll the IE in order to get a data vector. The upper line (marked with triangles) refers to the case where the IE response consists of a vector of 2,500 doubles (i.e., a vector of 20,000 bytes); the lower line (marked with squares) shows the response time when the IE returns a vector of 1,250 doubles. The two dotted lines on either side of each solid line represent the borders of the confidence interval at 99%. It is worth noting that: (a) the response time increases almost linearly with the number of clients and (b) the time spent by a requester to obtain a response from the IE could be unacceptable for near real-time applications in most cases, even if only a limited number of users are accessing the laboratory controlled by the IE. Figure 4 shows the response rate (viz., the inverse of the response time, Tr) versus the number of clients (Nc) when the response of the IE consists of a vector of 2,500 or 1,250 doubles. The solid and dotted curves represent the best polynomial interpolating functions of the measured data. The interpolating functions are almost hyperbolas, thus highlighting that the product Rr·Nc is approximately equal to a constant Kl, which, in turn, depends only weakly on the length of the data vector returned by the IE. Observing the data in Table 1 can shed light on the factors affecting the overall IE performance. Indeed, Table 1 reports the traffic load (kbytes/s) offered to the transmission network by varying the number of clients (a) connected to the IE when the "Polling Mode" is in use or (b) picking data from the JMS broker when the "JMS Subscribing Mode" is enabled. It should be noted that, when the "Polling Mode" is active, the traffic load practically does not depend on the number of clients accessing the IE: actually, the load is almost constant and equal to about 1 MB/s, i.e., 8 Mb/s. Hence, the IE performance
Fig. 4 Response rate versus the number of clients in the "Polling Mode" (best-fit power laws: f(x) = 8.4x^(-1.11) for 1,250 doubles and f(x) = 3.65x^(-1.08) for 2,500 doubles)

Table 1 Average network traffic load (kbytes/s) under different operative conditions
Number of clients:    5      10     20     40
Polling 1250          1,214  1,063  1,003  971
JMS 1250              199    161    150    145
Polling 2500          1,137  1,027  1,000  966
JMS 2500              165    154    151    149
is surely not limited by the network capacity (100 Mb/s during our tests), but mainly by the IE's specific implementation, the computational capabilities of the grid node hosting the IE, and partially by JMeter. Furthermore, Table 1 presents the traffic load when the JMS broker is exploited by the IE and the IM (of the IE) internally polls the device at a sample frequency equal to the average response rate achievable when the "Polling Mode" is adopted. To better clarify this point, in the "Polling Mode" when five stations are active and the IE produces a response vector of 2,500 doubles (assumed to be the payload of a packet at the network layer), the average response rate amounts to 0.658 vectors/s (corresponding to an average response time of 1,520 ms, see Fig. 3) and the total traffic offered to the network is about 1,137 kB/s (see Table 1). By contrast, in the "JMS Subscribing Mode," when five stations simultaneously read data vectors (of 2,500 doubles) published, via the JMS broker, by the IM every 1.52 s, the total traffic load amounts to 165 kB/s. Analyzing Table 1 highlights how the adoption of the "JMS Subscribing Mode" permits us to scale down the traffic load by a factor of about six; alternatively, keeping the traffic load fixed, the IM can sample the device/instrument about six times faster than in the case when the "Polling Mode" is adopted. The better performance achievable by the "JMS Subscribing Mode" can be ascribed to two main factors. The first and the most relevant one depends on the different data
structures employed by the IE in polling mode and by the JMS broker to dispatch data to the clients. As a matter of fact, the IE polling mode exploits the data structures provided by XML/SOAP, which are about 6.7 and 5.7 times more "space"-consuming than the data structures adopted by the JMS broker in the cases of 1,250 and 2,500 doubles, respectively. The second factor, which becomes more evident when a larger quantity of data is managed (viz., in the case of 2,500 doubles), likely depends on the more efficient implementation of the JMS broker with respect to that of the IE in polling mode. In the case of 2,500 doubles, the data in Table 1 show a ratio of at least 6.48 between the network traffic loads produced by the "Polling Mode" and the "JMS Subscribing Mode," while the ratio between the memory space used by the data structures is about 5.7. In any case, in many situations the benefits resulting from the adoption of a JMS broker may significantly improve the realism perceived by users while handling remote instrumentation.
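The first factor, the verbosity of the XML/SOAP encoding, is easy to reproduce in spirit. The Python sketch below compares a deliberately naive XML text encoding of a vector of doubles with a packed binary encoding; the exact 6.7 and 5.7 ratios reported above depend on the actual SOAP envelopes and JMS message framing, so this only illustrates the order of magnitude.

```python
import struct

def xml_size(vector):
    """Naive XML encoding: one <d>...</d> element per value, repr-precision text."""
    body = "".join(f"<d>{v!r}</d>" for v in vector)
    return len(f"<vector>{body}</vector>".encode("utf-8"))

def binary_size(vector):
    """Packed IEEE-754 doubles, 8 bytes per value."""
    return len(struct.pack(f"{len(vector)}d", *vector))

vec = [i * 0.123456789 for i in range(1250)]
ratio = xml_size(vec) / binary_size(vec)
print(f"XML/binary size ratio: {ratio:.1f}")
```

Even this stripped-down text encoding, with no SOAP envelope at all, inflates the payload several times over the binary form, which is consistent with the direction of the measured ratios.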
5 Conclusion

In this chapter, we have evaluated the data gathering performance of the IE, the main software component of the GRIDCC/DORII architecture devoted to the remote control of the resources present in real laboratories. During all the tests, the JMeter tool suite has been employed to estimate the time spent by a user to get a measure under the two different operational modes provided by the IE. The measurement campaign revealed that the "Polling Mode" may often be inadequate for getting long data arrays from instrumentation, even if only a limited number of users access the laboratory. This mode seems to be more suitable for obtaining atomic, asynchronous data. On the contrary, when the user requires continuous monitoring of a data array (e.g., an oscilloscope trace), the "JMS Subscribing Mode" is surely effective, as it allows reducing the total traffic load while increasing the reactivity of the entire platform that manages the remote laboratory. Specifically, the performance tests highlighted that the "JMS Subscribing Mode" permits us to scale down the traffic load by a factor of about six and, conversely, keeping the traffic offered to the network fixed, the user can require the IE to interrogate the device/instrument about six times faster than in the case when the "Polling Mode" is in use. Future work will be aimed at evaluating the possible use of a JMS broker to gather data from a multitude of small sensor nodes that can be seen as a sort of distributed probe of a complex instrument, whose computation capabilities reside in the IE.

Acknowledgments This work was supported by the European Commission, Information Society and Media DG, Unit F3 'GÉANT & e-Infrastructure', in the framework of the DORII FP7 Project (contract no. 213110).
References

1. GRIDCC project website, http://www.gridcc.org.
2. RINGrid project website, http://www.ringrid.eu.
3. DORII project website, http://www.dorii.eu.
4. V.J. Harward et al., "The iLab shared architecture: A Web Services infrastructure to build communities of Internet accessible laboratories", Proc. IEEE, vol. 96, no. 6, pp. 931–950, June 2008.
5. F. Davoli, N. Meyer, R. Pugliese, S. Zappatore, Eds., Grid-Enabled Remote Instrumentation, Springer, New York, NY, 2008; ISBN 978-0-387-09662-9.
6. F. Lelli, E. Frizziero, M. Gulmini, G. Maron, S. Orlando, A. Petrucci, S. Squizzato, "The many faces of the integration of instruments and the grid", International Journal of Web and Grid Services, vol. 3, no. 3, 2007, pp. 239–266.
7. F. Davoli, S. Palazzo, S. Zappatore, Eds., Distributed Cooperative Laboratories: Networking, Instrumentation, and Measurements, Springer, New York, NY, 2006; ISBN 0-387-29811-8.
8. I. Foster, "Service-oriented science", Science Mag., vol. 308, no. 5723, May 2005, pp. 814–817.
9. D.F. McMullen, R. Bramley, K. Chiu, H. Davis, T. Devadithya, J.C. Huffman, K. Huffman, T. Reichherzer, "The Common Instrument Middleware Architecture", in F. Davoli, N. Meyer, R. Pugliese, S. Zappatore, Eds., Grid Enabled Remote Instrumentation, Springer, New York, NY, 2009, pp. 393–407; ISBN 978-0-387-09662-9.
10. The RINGrid project team, "Remote Instrumentation Whitepaper", available at http://www.ringrid.eu.
11. L. Berruti, F. Davoli, G. Massei, A. Scarpiello, S. Zappatore, "Remote laboratory experiments in a Virtual Immersive Learning environment", Advances in Multimedia, vol. 2008 (2008), Article ID 426981, 11 pages, doi:10.1155/2008/426981.
12. L. Berruti, F. Davoli, M. Perrando, S. Vignola, S. Zappatore, "Engineering applications on the eInfrastructure: The case of telecommunication measurement instrumentation", Computational Methods in Science and Technology, Special Issue on "Instrumentation for e-Science", vol. 15, no. 1, pp. 41–48, 2009.
13. L. Caviglione, L. Berruti, F. Davoli, M. Polizzi, S. Vignola, S. Zappatore, "On the integration of telecommunication measurement devices within the framework of an instrumentation grid", in F. Davoli, N. Meyer, R. Pugliese, S. Zappatore, Eds., Grid Enabled Remote Instrumentation, Springer, New York, NY, 2009, pp. 282–300; ISBN 978-0-387-09662-9.
14. K. Haase, "JMS Tutorial", available at http://download.oracle.com/docs/cd/E17477_01/javaee/1.3/jms/tutorial/1_3_1-fcs/doc/jms_tutorialTOC.html.
15. "Apache JMeter", The Apache Jakarta Project, available at http://jakarta.apache.org/jmeter/.
The Green Grid’5000: Instrumenting and Using a Grid with Energy Sensors

Marcos Dias de Assunção, Jean-Patrick Gelas, Laurent Lefèvre, and Anne-Cécile Orgerie
Abstract Targeting mostly application performance, distributed systems have constantly increased in size and in the computing power of their resources. The power supply requirements of these systems have increased in a similar fashion, which has raised concerns about the energy they consume. This paper presents the Green Grid’5000 approach¹, a large-scale energy-sensing infrastructure with software components that allow users to precisely measure and understand the energy usage of their system. It also discusses a set of use-cases describing how an energy-instrumented platform can be utilised by various categories of users, including administrators, Grid component designers, and distributed application end-users.
1 Introduction

Large-scale distributed systems have become essential to assist scientists and practitioners in addressing a wide range of challenges, from DNA sequence analysis to economic forecasting. Driven mostly by application performance, these systems have constantly increased in size and computing power of their resources. The power supply requirements of these systems have increased in a similar fashion, which has raised concerns about the electrical power they consume and fostered research on improving their efficiency.
¹ Some experiments of this paper were performed on the Grid’5000 platform, an initiative from the French Ministry of Research through the ACI GRID incentive action, INRIA, CNRS, RENATER, and other contributing partners (http://www.grid5000.fr).
M.D. de Assunção (✉) • J.-P. Gelas • L. Lefèvre • A.-C. Orgerie
INRIA, LIP Laboratory (UMR CNRS, ENS, INRIA, UCB), École Normale Supérieure of Lyon, University of Lyon – 46 allée d'Italie, 69364 Lyon Cedex 07, France
e-mail: marcos.dias.de.assuncao; jean-patrick.gelas; laurent.lefevre; [email protected]

F. Davoli et al. (eds.), Remote Instrumentation for eScience and Related Aspects, DOI 10.1007/978-1-4614-0508-5_3, © Springer Science+Business Media, LLC 2012
A range of techniques can be utilised to make computing infrastructures more energy efficient, including better cooling technologies, temperature-aware scheduling [8], Dynamic Voltage and Frequency Scaling (DVFS) [11], and resource virtualisation [3]. To benefit from these techniques and devise mechanisms for curbing the energy consumed by large-scale distributed systems [5], it is important to instrument the hardware infrastructure. Such instrumentation can provide application and system designers with the information they need to understand the energy consumed by their applications and to design more energy-efficient mechanisms and policies for resource management. However, monitoring large-scale distributed systems such as Grids remains a challenge. Although various solutions have been proposed [2, 6, 12, 13] for monitoring the usage of resources (e.g. CPU, storage, and network) to improve the performance of applications, only a few systems monitor the energy usage of infrastructures [9, 10].
This chapter presents the Green Grid'5000 approach, a large-scale energy-sensing infrastructure with software components that allow users to precisely measure and understand the energy usage of their systems. When architecting the energy-sensing infrastructure and deciding on the required data management techniques, the following goals were considered:
• Increase users' awareness of the energy consumed by their applications
• Improve the design of user applications
• Obtain data to advance Grid middleware
• Help managers in implementing power management policies
• Create an energy-data repository for further studies
The rest of this chapter is organised as follows. Section 2 presents the deployed energy sensing hardware infrastructure, whereas Sect. 3 describes the design of software components. Section 4 presents some concrete use-cases benefiting from the Green Grid’5000 solutions. Conclusion and future work are presented in Sect. 5.
2 Instrumenting a Distributed Infrastructure with Energy Sensors

The Grid'5000 platform [1], depicted in Fig. 1, is an experimental testbed for research in distributed computing. It offers 5,000 processors geographically distributed across nine sites in France (Bordeaux, Grenoble, Lille, Lyon, Nancy, Nice, Orsay, Rennes, and Toulouse), all linked by dedicated high-speed networks. The platform can be described as a highly reconfigurable, controllable, and monitorable experimental Grid. Its utilisation is specific: each user can reserve a number of nodes on which she deploys a system image; hence, a node is entirely dedicated to the user during her reservation. The Resource Management System (RMS) used by Grid'5000 is called OAR [4]. OAR is an open-source batch scheduler that provides a simple and flexible interface for reserving and using cluster
Fig. 1 The Grid’5000 infrastructure in France
resources.2 Users can submit best-effort jobs, which are started by OAR when resources are available, as well as advance reservations, for which users specify the desired start time of their requests. Figure 2 illustrates one of the interfaces provided by the Grid'5000 API, showing the status of the platform. The API also gives users access to information about each node, such as CPU, memory, network, disk, and swap usage. To obtain this information, however, a daemon must run on each reserved node. As running this daemon is optional (it can interfere with user experiments), the information may not be available for all reservation requests. Although Grid'5000 differs from a production infrastructure, the concerns surrounding its energy consumption are similar to those of production environments. Therefore, solutions proposed to tackle challenges in Grid'5000 can fit both experimental and production infrastructures.
The energy-sensing infrastructure3 has been deployed on three Grid'5000 sites (Lyon, Grenoble, and Toulouse), monitoring in total 160 computing and networking resources. The infrastructure comprises a large set of wattmeters manufactured by OMEGAWATT.4 Each wattmeter is connected to a data collector via a serial link. A dedicated server stores the gathered data and generates live graphs of energy usage. At the Lyon site, where the whole infrastructure is monitored, the wattmeters periodically measure the energy consumed by 135 nodes and a router. Four OMEGAWATT boxes were required: one 48-port box, two 42-port boxes, and one 8-port box. Each box works as a multi-plug measurement device. For example, in the case of the 8-port box, eight physical nodes can be plugged into it and the energy data is transmitted to a data collector (i.e. a remote server) via a serial link
2 http://oar.imag.fr/index.html.
3 The infrastructure is financially supported by the INRIA ARC GREEN-NET initiative and the ALADDIN/Grid'5000 consortium.
4 http://www.omegawatt.fr/gb/index.php.
Fig. 2 Example of information provided by Grid’5000 API
(i.e. RS-232). Hence, at this site, the energy data-collector server is connected to four boxes through four different serial links. As the collector does not have four serial ports, it uses USB-RS-232 converters. Another problem with the serial links was that the node racks are not next to each other but facing each other; thus, two boxes are connected to the energy data collector via 6-meter-long USB cables that run under the floor. Although USB cables have a theoretical limit of 5 m, beyond which they are not guaranteed to operate reliably, we have not observed failures during the 10 months that this infrastructure has been in operation.
On the data collector, a daemon for each box is responsible for collecting the energy data. Every second, this daemon sends a data request, encoded in hexadecimal with a CRC, to the box to ask for the energy consumption of all the resources monitored by that box. The response is a hexadecimal code with a CRC in the form of "packets", each packet containing the instantaneous power consumption of six nodes. The software responsible for communicating with the wattmeters to obtain the data was not provided with the boxes; we had to design it ourselves to collect, every second, the measurements of 135 nodes, including CRC checking, hexadecimal decoding, and storage of the values. The data gathering process should not exceed one second.
As the goal of instrumenting the Grid infrastructure was to provide energy consumption information to different groups of users and system administrators, several data management techniques had to be applied. For example, the energy data is stored in three different ways:
• Raw data, including a timestamp indicating when a measurement was performed (one log file per resource).
• Last values: one file (in an in-memory file system) per resource, whose value is overwritten every second.
• Round-Robin Databases5: one file per monitored node.
The data collection and storage process for the Lyon site is illustrated in Fig. 3. The usage of each type of storage by the different categories of users is detailed in the following sections. As each category of user has different requirements, the frequency of measurement is important and must be calibrated depending on what one wants to observe. For administrators who want to observe global energy trends, a few measurements per hour or per day can suffice. Users who want to understand the impact of usage and applications on energy consumption demand several measurements per minute.
For middleware designers who want to understand peak-usage phenomena (during a system deployment phase, for example), multiple measurements per minute are mandatory. Moreover, in this case the data must be stored so as to allow one to evaluate trends and patterns that can aid the improvement of middleware tools. In the Green Grid'5000 context, one live measurement (in Watts) is performed each second for each monitored resource, with a precision of 0.125 W. These measurements are displayed live and stored in specific repositories. One year of energy logs for a 150-node platform corresponds to approximately 70 GB of data. These energy logs can be correlated with the reservation logs (provided by OAR) and the resource usage logs provided by the Grid'5000 API in order to obtain a more complete view of the energy usage.
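The per-box collection loop described above can be sketched as follows. The actual OMEGAWATT frame layout and CRC algorithm are proprietary and not documented here, so the format below (six big-endian 16-bit deciwatt readings followed by a one-byte XOR checksum) is purely an illustrative assumption:

```python
import struct


def xor_checksum(payload: bytes) -> int:
    """Toy stand-in for the wattmeter CRC: XOR of all payload bytes."""
    check = 0
    for b in payload:
        check ^= b
    return check


def encode_packet(watts):
    """Build a hex frame carrying six instantaneous power readings (W)."""
    payload = b"".join(struct.pack(">H", round(w * 10)) for w in watts)
    return (payload + bytes([xor_checksum(payload)])).hex()


def decode_packet(frame_hex):
    """Validate the checksum and return the six power values in watts."""
    raw = bytes.fromhex(frame_hex)
    payload, crc = raw[:-1], raw[-1]
    if xor_checksum(payload) != crc:
        raise ValueError("CRC mismatch: discarding packet")
    return [v / 10.0 for (v,) in struct.iter_unpack(">H", payload)]
```

A real collector would poll each box once per second over the serial link, decode the packets, and append each reading to the raw log, the last-value file, and the RRD database of the corresponding node.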
5 RRDtool is the open-source industry-standard high-performance data logging and graphing system for time-series data (http://oss.oetiker.ch/rrdtool/).
Fig. 3 Energy data collector deployed at the Lyon site
3 Displaying Information on Energy Consumption

Measuring energy consumption and storing the data are only part of the process when the goal is to reduce the amount of electricity utilised by an infrastructure. Ideally, the obtained information on energy consumption should drive optimisations in the resource management systems; inform users of the cost of their applications in terms of electricity spent; and possibly offer means to raise users' awareness and adapt their behaviour so that they make more energy-aware choices when executing their applications. In addition, the information should be presented to users in a way that lets them measure the impact of their choices on energy consumption when they run their applications differently from the manner they are used to.
Towards achieving these goals, we have developed a set of tools for visualising the data collected from instrumenting the Grid platform. This section describes a graphical interface, termed ShowWatts, for displaying the measurements in near real-time, and a set of Web pages and graphs for viewing historical data on energy consumption. The first interface allows users to view the instantaneous energy consumption of the whole platform and check the immediate impact of executing an application on a set of resources, whereas the Web pages and graphs enable the analysis of energy consumption over longer periods and the identification of patterns and application behaviours.
Fig. 4 Screenshot of ShowWatts graphical interface
3.1 ShowWatts: Live Display

ShowWatts comprises a Java Swing application for displaying the energy consumption of the instrumented platform in nearly real-time (see Fig. 4), and a module responsible for aggregating the energy consumption information and transferring it to the node where it is displayed. It follows the client-server model: a daemon is launched on the server side and another by the client; the client daemon uses the "last values" described in Sect. 2. The two processes communicate through an SSH tunnel. The server-side part is optimised to minimise the data volume sent: only the values that differ from the previous transfer are transmitted.
From a system administrator's point of view, this interface can be used to monitor the whole platform or the consumption of specific resources. The administrator can select the resources in which she is interested and view their share of the overall energy consumption. In addition, it is possible to view other statistics, such as a rough estimate of the money spent on electricity. The application is also customisable and can be adapted to display information on other platforms. This is achieved by implementing the module that aggregates the consumption information and changing two XML files: one that describes general configuration and display preferences, and another that specifies the monitored resources. For demonstration purposes, we have implemented a reservation scheduler in Python, which shows the impact on energy consumption of switching machines off and on and of aggregating reservation requests.
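The server-side optimisation mentioned above (sending only the readings that changed since the previous transfer) can be sketched as follows; the function name and data layout are illustrative, not the actual ShowWatts code:

```python
def delta_update(previous, current):
    """Return only the (node, watts) pairs that changed since the last push.

    `previous` and `current` map node names to their latest power reading,
    as read from the per-node "last values" files described in Sect. 2.
    """
    return {node: watts for node, watts in current.items()
            if previous.get(node) != watts}
```

On an otherwise stable platform most readings are unchanged from one second to the next, so the dictionary pushed through the SSH tunnel stays small.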
Fig. 5 Web interface for energy usage visualisation
3.2 Portable Web-Based Visualisation

As mentioned earlier, in addition to displaying the instantaneous energy consumption, visualisation tools must provide means whereby users and system administrators can analyse past consumption. By visualising the energy consumed by individual resources or groups of resources over longer periods, administrators and users can better understand the behaviour of their platform, design more energy-efficient allocation policies, and investigate possible system optimisations. We have developed a set of scripts that automatically generate Web pages and graphs displaying the energy consumption of individual resources and groups of resources. Figure 5 shows an example of a Web page containing a list of graphs with measurements taken from a Grid'5000 site. These graphs are updated every minute so that the user has an up-to-date view of the platform's energy consumption.
3.3 Collecting and Providing Energy Logs

Measuring the power consumption of a Grid is the first step towards understanding the energy usage of the platform; the following step is to collect and store these data and give users access to them. As discussed in Sect. 2, to collect the energy measurements, we have developed software that interacts with the wattmeters and stores the received information in different formats: RRD databases and text files.
RRD databases offer the advantage of having a fixed size. They use average values to generate graphs covering long periods and serve to provide an overview of the platform's energy consumption. They offer different views of each node's energy consumption (per hour, per day, per week, per month, and per year). Each view presents maximum, minimum, and average values. The current value per node (in Watts) is also displayed and updated every second (by an Ajax script), as is the global value for the whole site. These tools are particularly suitable for administrators: they provide a quick overview of the whole platform with in-depth access to the most recent values for each node. An example of usage of this kind of energy log is given in Sect. 4.4.
These aggregated views, however, are not precise enough to understand the link between running an application on a node and the energy it consumes. Hence, text files are used to store each measurement (i.e. one value per node per second, with a timestamp). These text files are archived and compressed each week in order to reduce the disk space required; this allows us to keep all the logs.
While collecting and storing energy logs is important, the main goal is to provide these logs to users, so that they can see the impact of their applications on the energy consumption of the platform. This may increase users' awareness of the energy consumed by large-scale infrastructures. Therefore, the monitoring Website includes log-on-demand features. As shown in Fig. 6, the Web portal includes a page where a user queries the energy consumption of servers by specifying the period, the time increment (i.e. the interval in seconds between two measurements), and the nodes. These tools are especially designed for users and middleware developers who want to precisely profile their applications in terms of energy usage and to correlate the energy data with resource (e.g. CPU, memory, disk, network, etc.)
utilisation, in order to view the impact of their programming decisions on energy consumption. Examples of such applications and uses of these tools are given in Sects. 4.1, 4.2 and 4.3. The architecture responsible for collecting, storing and exposing the logs is summarised in Fig. 7. The logs and graphs are available to all Grid'5000 users and administrators.
Figure 8 presents the typical result of a log request. First, global information about the request is given, such as the total consumption of the requested nodes during the given time frame. Moreover, for each node, an energy graph is provided to show the energy profile of the user's request. For each request, we provide a directory (with a permanent address) that contains:
• A log file for each requested node with all the measurements collected during the requested time period (with the requested increment between two measurements).
• An SVG graph for each requested node plotting its energy profile.
• A log file with the global consumption of each requested node during the requested time period and the total consumption of the requested nodes.
• A .tar.gz file containing all the files above.
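The server-side handling of such a request can be sketched as below: the raw per-second samples of a node are filtered to the requested period, thinned to the requested increment, and summed to obtain the energy in joules (with one reading per second, a sample's value in watts equals the joules consumed during that second). The function name and data layout are illustrative assumptions, not the actual portal code:

```python
def query_log(samples, start, end, increment):
    """Answer a log-on-demand request for one node.

    samples:   sorted list of (timestamp, watts), one reading per second
    start/end: requested period (inclusive timestamps)
    increment: requested seconds between two returned measurements
    Returns (thinned_samples, total_energy_joules).
    """
    window = [(t, w) for t, w in samples if start <= t <= end]
    thinned = window[::increment]          # keep one sample every `increment` s
    joules = sum(w for _, w in window)     # 1 s per reading: sum of W is J
    return thinned, joules
```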
Fig. 6 Web interface for energy-log requests
Fig. 7 Overall architecture to collect, store and expose the energy data
Fig. 8 Web page displaying the result of an energy-log request
4 Green Grid'5000 Usage Examples

This section presents some examples of how the energy consumption data can be utilised by different types of users.
Fig. 9 Energy consumption during typical node applications
4.1 Energy Profiling of Applications

The energy monitoring architecture is often utilised by Grid'5000 users to study the energy profile of their applications. Figure 9 illustrates the typical life-cycle of a user application on a Grid'5000 node (a Sun Fire V20z with two 2.4-GHz CPUs). The different phases of the application correspond to:
• The boot of the node, which lasts approximately 120 s and is characterised by power consumption peaks.
• An idle period, during which the node still has a high energy consumption (around 240 W) even though it does nothing.
• An hdparm6 experiment, which represents intensive disk accesses (243 W on average).
• An iperf7 experiment, which makes intensive use of the network (using UDP, it consumes 263 W on average, but also uses the CPU).
• A cpuburn8 run, which fully loads one CPU (267 W on average).
• A stress9 run, which fully uses all the CPUs and also performs I/O operations (277 W on average).
• A halt, during which a small power consumption peak can be observed.
• The node is off, yet still consumes around 10 W.
6 hdparm is a command-line utility for Linux to set and view the hardware parameters of IDE hard disks.
7 iperf is a commonly used network tool that can create TCP and UDP data streams to measure TCP and UDP bandwidth performance.
8 cpuburn is a tool designed to heavily load CPU chips in order to test their reliability.
9 stress is a tool to impose load on and stress-test systems.
Fig. 10 Basic MPI application scenario
The logs-on-demand tool was used to create Fig. 9. As one can observe, the tool provides detailed information as well as a summary: the experiment lasted 474 s, and during that period the node consumed 111,296.34 Joules. Energy profiling of user applications can greatly increase users' awareness, as well as help programmers improve the energy efficiency of their applications.
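The summary figures quoted above can be cross-checked with simple arithmetic: average power is energy divided by duration.

```python
def average_power(joules, seconds):
    """Mean power in watts over an experiment."""
    return joules / seconds


# Figures from the experiment of Fig. 9: 111,296.34 J over 474 s,
# i.e. roughly 235 W on average, consistent with the per-phase
# readings (240-277 W) listed above once the low-power boot/off
# phases are averaged in.
mean_watts = average_power(111296.34, 474)
```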
4.2 Improving Distributed Applications

As end-users are able to profile the energy usage of their applications, it becomes possible to design more efficient distributed applications by applying some "green" programming concepts. As an illustrative scenario, a programmer designs an MPI client-server application, as depicted in Fig. 10, where three processes run concurrently:
• One server assigns tasks to two clients.
• Client 1 computes small tasks (S tasks).
• Client 2 computes XL tasks (1 XL task = 2 S tasks).
In a first attempt, the programmer designs a basic application where the server assigns tasks to the clients synchronously. This application is neither optimal in terms of performance nor energy efficient. As the tasks are heterogeneous, periods of inactivity are observed on client 1 and energy is wasted (Fig. 11): the client spends long periods of inactivity with an idle consumption of 179 W. By using the Green Grid'5000 infrastructure, the end-user is able to detect these idle periods and re-design her application.
The programmer then decides to use asynchronous MPI communication. As soon as a client has finished computing its tasks, it sends the results back to the server, which is then able to send new tasks to the client. We can observe that this more aggressive approach is better in terms of both energy and performance, as it reduces
Fig. 11 Power consumption of client 1 during the non-energy efficient MPI application
Fig. 12 Power consumption of client 1 during the energy-efficient MPI application
inactivity periods (Fig. 12). In one minute, the client can compute more tasks than in the previous example. By profiling applications, designers can explore ways to improve their energy efficiency.
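A back-of-the-envelope model of the two schemes shows why the asynchronous version wins on energy per task. The power figures reuse measurements quoted in the text (267 W busy, from the cpuburn phase of Sect. 4.1, and 179 W idle); the task durations are illustrative, and this is a toy model, not actual MPI code:

```python
def energy_per_s_task(p_busy=267.0, p_idle=179.0, s_cost=1.0, xl_cost=2.0):
    """Joules spent by client 1 per S task under both assignment schemes.

    Durations are in seconds: one S task takes s_cost, one XL task xl_cost,
    with 1 XL task = 2 S tasks (so xl_cost = 2 * s_cost here).
    """
    # Synchronous rounds: client 1 computes one S task, then idles until
    # client 2 finishes its XL task -> 1 task per round of length xl_cost.
    sync = s_cost * p_busy + (xl_cost - s_cost) * p_idle
    # Asynchronous hand-out: client 1 receives a new S task as soon as it
    # reports back -> 2 tasks per round, no idle time.
    asyn = (xl_cost * p_busy) / 2
    return sync, asyn
```

With the default figures this gives 446 J per S task synchronously against 267 J asynchronously, in line with the improvement visible between Figs. 11 and 12.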
4.3 Improving the Energy Efficiency of Grid Middleware

As discussed earlier, recent computer hardware and operating systems offer a range of techniques to manage the power drawn by servers. The challenge posed to middleware designers is how to benefit from these techniques for
Fig. 13 Energy consumed by a server during part of an interactive reservation
making distributed systems more energy efficient, hence reducing both their energy footprint and the costs associated with managing a Grid infrastructure. Tackling this challenge requires an understanding of how users utilise the infrastructure and how their applications behave in terms of resource usage and energy consumption. A previous study on the energy consumption of Grid'5000 has shown that substantial energy savings can be achieved if users agree to change the start or finish times of their reservations, so that reservations can be aggregated, creating free windows during which unused servers can be switched off [9]. Additional savings can be made by analysing the energy consumption data and crossing it with information from other system components, such as the RMS or scheduler.
We use the "interactive" advance reservations of Grid'5000 as an example to illustrate how Grid middleware can be improved when information on the energy consumption of servers and resource utilisation logs are made available to middleware developers. Under an interactive reservation, a user reserves a group of servers on which she deploys an environment that contains a whole operating system customised to her application. Figure 13 shows the energy consumed by a server of the Lyon site during part of a reservation that starts at 10 a.m. on a working day. The noisy measurements obtained after 1,900 s are typical of an environment-deployment phase, during which the disk image is copied to the node, which is later rebooted with the new operating system. The figure shows that during the period preceding the deployment, the server was not utilised, hence wasting resources and energy. If mechanisms for identifying and predicting these behaviours are incorporated into middleware design, unused servers can be switched off, thus minimising resource wastage and consequently improving the energy efficiency of the infrastructure.
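A minimal sketch of such an identification mechanism, assuming access to a node's per-second power trace and the 179 W idle baseline quoted earlier (the tolerance and minimum length are illustrative assumptions):

```python
def idle_windows(trace, idle_watts=179.0, tol=5.0, min_len=300):
    """Return (start, end) sample-index pairs of stretches where the power
    stays within `tol` watts of the idle baseline for at least `min_len`
    seconds; such stretches are candidates for switching the node off."""
    windows, start = [], None
    for i, watts in enumerate(trace):
        if abs(watts - idle_watts) <= tol:
            if start is None:
                start = i
        else:
            if start is not None and i - start >= min_len:
                windows.append((start, i))
            start = None
    if start is not None and len(trace) - start >= min_len:
        windows.append((start, len(trace)))
    return windows
```

A predictive variant would additionally cross these windows with reservation logs, switching a node off only when no deployment is expected soon.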
Fig. 14 Energy consumption and utilisation of nodes over 6 months
4.4 Administrator's Use Case

Collecting and analysing energy consumption information can aid system administrators in evaluating the power management techniques offered by the hardware, observing the impact of different policies for curbing the energy consumption of the infrastructure, and guiding capacity planning decisions. However, there is only so much that energy consumption data alone can provide. As illustrated in the example above, it is important to cross the data on energy consumption with information from other system components, such as cluster schedulers (e.g. information on node failures, job arrivals, and job scheduling) and information on resource utilisation (e.g. CPU, memory, storage, and network usage).
Figure 14 presents the energy in kWh per day consumed by the Grid'5000 site in Lyon. The blue line shows the resource utilisation according to the site scheduler (i.e. OAR) [4]; the utilisation indicates the percentage of reserved nodes, and hence does not imply that CPU, storage, or network resources were used by reservations at the same rate. Although this is a simple example, it illustrates the type of information required by managers to evaluate the efficiency of the infrastructure and identify correlations between resource usage and energy consumption. Managing and analysing these data can provide system administrators with hints on how to provision and manage resources more efficiently.
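One simple way to quantify the kind of correlation suggested by Fig. 14 is a Pearson coefficient between daily energy and daily reservation rate; the data values in the example below are made up for illustration, not taken from the Lyon logs:

```python
def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sx = sum((x - mean_x) ** 2 for x in xs) ** 0.5
    sy = sum((y - mean_y) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)


# Hypothetical week of data: daily energy (kWh) vs. node reservation rate (%)
kwh_per_day = [410, 395, 430, 460, 455, 380, 370]
utilisation = [55, 48, 62, 75, 72, 35, 30]
r = pearson(kwh_per_day, utilisation)  # close to 1 if they move together
```

A low coefficient despite high utilisation would hint that reserved nodes sit idle, which is exactly the situation the interactive-reservation example of Sect. 4.3 exposed.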
5 Conclusion and Future Work

This chapter presented the steps involved in instrumenting and monitoring the energy consumption of an experimental Grid. As observed, having a configurable sensing infrastructure is one of the basic components required for designing energy
efficient software frameworks. The Green Grid'5000 is the first available large-scale platform whose energy consumption is monitored per plug every second. Current research on energy efficiency in large-scale distributed systems is made possible by the instrumented platform [7, 9]. As of the writing of this contribution, the energy-sensing infrastructure has been in operation for over 10 months.
Several techniques have been applied to obtain, store, and manage the data resulting from monitoring the energy consumed by the infrastructure. The different manners in which the data has been stored and reported aim to serve the needs of several categories of users, including application developers, middleware designers, system administrators, and infrastructure managers. In addition to energy consumption data, obtaining and analysing information from multiple sources (e.g. schedulers and resource utilisation) is essential to understand the usage of the infrastructure and the behaviour of applications. This analysis can help system designers and managers identify how to devise and implement techniques for reducing the cost of energy consumption. It also allows them to evaluate the return on investment of deploying the sensing infrastructure itself. At present, instrumenting a platform is expensive and its cost can surpass the savings achieved by striving to improve the energy efficiency of the overall infrastructure.
As future work, we intend to deploy energy sensors at several Grid'5000 sites and monitor the consumption of both computing and network equipment. In addition, we are in the process of creating a repository where the data on energy consumption will be made publicly available to the research community.
References

1. Cappello, F., et al.: Grid'5000: A large scale, reconfigurable, controllable and monitorable grid platform. In: 6th IEEE/ACM International Workshop on Grid Computing, Grid'2005. Seattle, Washington, USA (2005)
2. Andreozzi, S., De Bortoli, N., Fantinel, S., Ghiselli, A., Rubini, G.L., Tortone, G., Vistoli, M.C.: GridICE: a monitoring service for grid systems. Future Generation Computer Systems 21(4), 559–571 (2005). DOI http://dx.doi.org/10.1016/j.future.2004.10.005
3. Barham, P., Dragovic, B., Fraser, K., Hand, S., Harris, T., Ho, A., Neugebauer, R., Pratt, I., Warfield, A.: Xen and the art of virtualization. In: 19th ACM Symposium on Operating Systems Principles (SOSP '03), pp. 164–177. ACM Press, New York, USA (2003). DOI http://doi.acm.org/10.1145/945445.945462
4. Capit, N., Da Costa, G., Georgiou, Y., Huard, G., Martin, C., Mounié, G., Neyron, P., Richard, O.: A batch scheduler with high level components. In: Cluster Computing and Grid 2005 (CCGrid05) (2005). URL http://oar.imag.fr/papers/oar_ccgrid05.pdf
5. Da Costa, G., Gelas, J.P., Georgiou, Y., Lefèvre, L., Orgerie, A.C., Pierson, J.M., Richard, O., Sharma, K.: The GREEN-NET framework: Energy efficiency in large scale distributed systems. In: HPPAC 2009: High Performance Power Aware Computing Workshop, in conjunction with IPDPS 2009. Rome, Italy (2009)
6. Gu, J., Luo, J.: Reliability analysis approach of grid monitoring architecture. Annual ChinaGrid Conference 0, 3–9 (2009). DOI http://doi.ieeecomputersociety.org/10.1109/ChinaGrid.2009.28
7. Lefèvre, L., Orgerie, A.C.: Designing and evaluating an energy efficient cloud. The Journal of Supercomputing 51(3), 352–373 (2010)
8. Moore, J., Chase, J., Ranganathan, P., Sharma, R.: Making scheduling "cool": Temperature-aware workload placement in data centers. In: USENIX Annual Technical Conference (ATEC 2005), pp. 5–5. USENIX Association, Berkeley, CA, USA (2005)
9. Orgerie, A.C., Lefèvre, L., Gelas, J.P.: Save watts in your grid: Green strategies for energy-aware framework in large scale distributed systems. In: 14th IEEE International Conference on Parallel and Distributed Systems (ICPADS). Melbourne, Australia (2008)
10. Singh, T., Vara, P.K.: Smart metering the clouds. IEEE International Workshops on Enabling Technologies 0, 66–71 (2009). DOI http://doi.ieeecomputersociety.org/10.1109/WETICE.2009.49
11. Snowdon, D.C., Ruocco, S., Heiser, G.: Power management and dynamic voltage scaling: Myths and facts. In: Proceedings of the 2005 Workshop on Power Aware Real-time Computing (2005)
12. Truong, H.L., Fahringer, T.: SCALEA-G: A unified monitoring and performance analysis system for the grid. Sci. Program. 12(4), 225–237 (2004)
13. Zanikolas, S., Sakellariou, R.: A taxonomy of grid monitoring systems. Future Generation Computer Systems 21(1), 163–188 (2005). DOI http://dx.doi.org/10.1016/j.future.2004.07.002
Porting a Seismic Network to the Grid

Paolo Gamba and Matteo Lanati
Abstract The chapter describes the experience and the lessons learnt during the customization of a seismic early-warning system for grid technology. Our goal is to shorten the workflow of an experiment, so that final users have direct access to data sources, i.e. seismic sensors, without intermediaries and without leaving the environment employed for the analysis. We rely strongly on the remote instrumentation capabilities of the grid, a feature that makes this platform very attractive for scientific communities aiming at blending computational procedures and data access in a single tool. The expected outcome is a distributed virtual laboratory working in a secure way, regardless of the distance between, or the number of, participants. We started to set up the application and the infrastructure as part of the DORII (Deployment of Remote Instrumentation Infrastructure) project. In the following sections we explain the steps that led to the integration, the testers' experience, the results obtained so far, and future perspectives.
1 Introduction

The aim of the application described in this chapter is to retrieve data from a seismic sensor network and process them to obtain useful information about earthquakes. At the current state of the technology, it is rather unlikely that the grid can effectively support an early-warning system: the time available to detect seismic waves and broadcast the alarm message is too short. This time window varies with the distance
P. Gamba
University of Pavia, Department of Electronics, via Ferrata 1, 27100 Pavia, Italy
e-mail: [email protected]

M. Lanati (✉)
Eucentre, via Ferrata 1, 27100 Pavia, Italy
e-mail: [email protected]

F. Davoli et al. (eds.), Remote Instrumentation for eScience and Related Aspects, DOI 10.1007/978-1-4614-0508-5_4, © Springer Science+Business Media, LLC 2012
between the epicenter and the area being monitored; however, a good estimate ranges from 30 to 60 s. It is common practice to put sensors near relevant buildings or infrastructures (railways, nuclear or industrial plants) and perform locally all the computations needed to detect P seismic waves, forecasting the earthquake arrival. All the necessary grid-related procedures, i.e., identification or resource matching, can dramatically delay this phase. On the contrary, the post-event phase opens more interesting scenarios, especially for fast evaluation and assessment. The grid offers enough free space to host a wide catalog of events, always available to be processed. Computation on large quantities of data is a natural task for this platform and can be exploited, for example, to produce risk analysis maps. We also have to remember the added value of recent developments such as remote instrument control, which brings the source of information directly into the grid. New ways of collaboration and resource sharing become possible, in particular the set-up of virtual and distributed laboratories. The idea of using computational grids to interpret historical data sets is quite straightforward, see [1], but nowadays a deeper integration is fundamental. Web-based grid services can play a relevant role in state-of-the-art efforts to mitigate or forecast damages brought about by natural disasters. They are good candidates for broad collaboration, from both the geographical and the interdisciplinary point of view, due to their intrinsic remote and distributed nature. For example, QuakeSim [2] makes available a web portal capable of merging sensor data sources (satellite images and GPS readings from ground stations, both real-time and recorded) with multiscale models to study fault-system dynamics, focusing in particular on the California case.
The potential impact of these activities could help in sharing information more easily, improving the cooperation experience and reducing response time during critical situations. The intermediate layer adapting the devices to the graphical interface proposed to the final users generalizes the instrument model. The proprietary protocol solutions adopted by the manufacturers are hidden by implementing independent dialogue boxes, commands, and configuration parameters. The chapter is organized as follows: after a brief review of the grid services supporting our work, some space is reserved for the description of the monitored seismic sensor network and its access protocol. Section 4 explains the theoretical basis of the analysis carried out, and all the previous concepts come together in Sect. 5, where all the details about the integration are specified. Finally, a discussion about user evaluation and future perspectives is proposed.
2 VCR and IE: New Services for the Grid

The User Interface (UI), a set of command-line tools to access the grid, from identity to resource management, still represents a quick way to enter this world and keeps some advantages. It can hardly be replaced during the debugging phase or for fast application prototyping; moreover, it is not resource-hungry. However, it is not ideal for interactive tasks and it can appear unfriendly to people approaching the
grid without aiming to become developers. This is the target audience the DORII project is dedicated to: scientists belonging to the experimental science, earthquake, and earth observation communities, demanding computation/storage resources in strong connection with instrument availability. Since they are not specialists, the demand for an easy-to-use and easy-to-learn environment is taken into great consideration. All these problems have been deeply investigated in previous projects such as GRIDCC and RINGrid, for collaborative environments involving remote instrumentation, or int.eu.grid for interaction aspects. DORII builds on these experiences to provide an integrated multiplatform environment to deal with real devices and to perform data analysis. This goes beyond simply putting legacy results together: the goal is to create an e-Infrastructure that can serve different communities while sharing the same fundamental software components. The infrastructure's front-end is the Virtual Control Room (VCR) [3], a web portal making available all the basic grid commands through a point-and-click philosophy. First of all, potential users need only a web browser to start their work; moreover, certificate management is much easier and clearer: credentials have to be handled only at the first login and are then issued automatically. A dedicated applet lists all the resources supplied by the project partners; the most interesting ones are the Instrument Elements (IEs) and their Instrument Managers (IMs) [4]. The IE is the middleware element that exposes some fundamental web services virtualizing instrument functionality: configuration, monitoring, and management. The key components of an IE are its IMs; they act as protocol adapters and describe each device as a finite state machine.
The IM is quite autonomous: its Event Processor gets input from the user, typically a command, or from the sensor (errors and monitored parameters); then the Brain Module decides whether to change the state or interact with the instrument through the Resource Proxy. The behaviour of the IM is set in an XML file which defines the available commands, the monitored attributes, and the state transitions. The IM is thus a general, high-abstraction-level framework that needs customization. The most difficult part of the developer's work is interfacing the IM and the physical node, but this is a general problem, not a grid-specific one. Usually, manufacturers provide a development kit with libraries or a reference guide explaining the protocol adopted.
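The command/transition logic that the XML file encodes can be pictured as a small state machine. The sketch below is illustrative only: the state and command names are assumptions for demonstration, not the actual DORII IM API.

```python
# Minimal sketch of an Instrument Manager modelled as a finite state
# machine, in the spirit of the XML-driven IM described above.
# State and command names here are hypothetical, not the DORII API.

class InstrumentManager:
    # Transition table: (current state, command) -> next state,
    # the kind of mapping the IM's XML configuration file defines.
    TRANSITIONS = {
        ("off", "turn_on"): "on",
        ("on", "acquire"): "acquisition",
        ("on", "shutdown"): "off",
        ("acquisition", "stop"): "stopped",
        ("stopped", "turn_off"): "off",
    }

    def __init__(self):
        self.state = "off"

    def handle(self, command):
        """Mimic the Brain Module: decide whether the command triggers
        a state transition; commands invalid in this state raise."""
        key = (self.state, command)
        if key not in self.TRANSITIONS:
            raise ValueError(
                f"command {command!r} not valid in state {self.state!r}")
        self.state = self.TRANSITIONS[key]
        return self.state

im = InstrumentManager()
im.handle("turn_on")   # off -> on
im.handle("acquire")   # on -> acquisition
im.handle("stop")      # acquisition -> stopped
print(im.state)        # stopped
```

A real IM would also carry the Resource Proxy calls toward the physical instrument on each transition; here only the Brain Module decision is shown.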
3 The Instrumentation Put Into the Grid

The instruments involved in the application are seismic sensors, i.e., accelerometers inserted into a shock-proof package, measuring the ground velocity along the three Cartesian directions. Each sensor is usually associated with a seismic
1 DORII project web site: http://www.dorii.eu.
2 GRIDCC project web site: http://www.gridcc.org/.
3 RINGrid project web site: http://www.ringrid.eu/.
4 int.eu.grid project web site: http://www.interactive-grid.eu/.
Fig. 1 Sensor network deployment and communications
recorder, providing some basic services such as continuous power supply, network connection, geo-localization, and remote administration. The sensor network is owned and managed by Dip.Te.Ris (Dipartimento per lo Studio del Territorio e delle sue Risorse) at the University of Genoa and is spread throughout the Liguria region, in the north-west of Italy. The coverage area is kept reasonably small in order to ensure good monitoring quality. Moreover, the global overview of the phenomena given by the whole set is more valuable, so all the measurements are gathered by a single server. It is run by a proprietary software solution provided by Nanometrics Inc., responsible also for the communication with the sensors. A network node is connected to the central point by means of a wired or wireless (satellite, GPRS modem) link and data are exchanged over UDP, as depicted in Fig. 1. The inbound data flow [8], running from the nodes to the server, is split into packets identified by a unique sequence number and a timestamp, then encapsulated in UDP datagrams. A packet starts by reporting the oldest sequence number available as a 4-byte integer, while the remaining space is organized in bundles, 17-byte-long independent collections of data (timestamp, status message, or measurements). The first bundle in a packet acts as a header, carrying common information such as packet type, timestamp, sequence number, and instrument identifier (coding the serial number and model), plus other specific fields (e.g., a data packet header contains the sampling frequency and the value of the first measure). Among the different bundle types that can follow the header, the most common is the one carrying compressed data, so it is worth looking at its structure in detail. It contains from 4 to 16 samples, stored using a differential algorithm that refers back to the initial value placed in the data packet header.
The compressed data bundle is subdivided into four data sets of 4 bytes each, while the remaining space, 1 byte, encodes how data are packed: byte, word, or long differences. A single data set can thus be interpreted as four single-byte differences, two word differences, or one long difference. A user is not supposed to deal with bytes: packets related to the same source, for example a ground velocity component or the state-of-health messages of a station,
are bunched together in a channel. The list of these unique streams of information is then broadcast by the server. The number of bundles in a packet is variable, an odd value ranging from 1 to 255, to maximize network efficiency depending on the chosen carrier. However, once the packet dimension is fixed during deployment, it cannot be changed, which makes the protocol easy to implement even on devices with limited resources. Finally, reliability is guaranteed because wrong or missing packets are retransmitted. The central recording site stores and reorders what the sensors have transmitted, offering data access services over TCP (right side of Fig. 1) based on two protocols: Private Data Stream and Data Access Protocol. The Data Access Protocol defines the communication between a client and the Nanometrics DataServer component, making available historical recordings in the original format or as an uncompressed stream. Data types provided include triggers, events, state-of-health messages, time-series, and transparent serial data. A trigger is generated when the energy associated with a signal (channel) changes, probably due to an earthquake, while event packets describe the seismic phenomena in terms of duration and amplitude. State-of-health messages contain details about the current situation of a sensor, logs, or error conditions. Transparent serial data employ only the compressed format, and packets are sent when filled with a number of bytes specified by the user, unless a timeout occurs. In that case, the space left is padded; this causes no errors because the header bundle always specifies the number of valid bytes in the payload. Seismic data retrieved in time-series mode, on the other hand, can be expressed in the original format or as uncompressed 32-bit integer values.
In the first case, the DataServer forwards the packets received from the sensor, while in the second, data are arranged in fixed-length blocks, usually corresponding to a one-second time window. A typical connection goes through these steps:
• Open a socket to the DataServer on port 28002
• Read the connection time (4-byte integer)
• Send a connection request, including the connection time together with username and password, if necessary
• Wait for a Ready message
• Send a request message to retrieve data, state of health, events, triggers
• Receive answers from the server (this phase ends with a Ready message)
• Send a new request or terminate the subscription
• Close the socket
The server notifies the client that it has successfully carried out a request by sending a Ready message; from then on, it is available for a new query. If the client does not wait for this reply and immediately demands different information, the current request is cancelled (though still acknowledged by the Ready message) and the new one is taken into account. Moreover, if the client does not generate traffic for more than 20 s, the connection is closed by the server. The client can reset the timer by issuing a RequestPending packet and handling the subsequent Ready message.
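The first steps of the handshake above can be sketched as follows. The message layouts are simplified and illustrative (the real wire format is defined in the Nanometrics reference guide [8]); a tiny in-memory stand-in replaces the real socket so the sketch is self-contained.

```python
import struct

# Sketch of the DataServer handshake described above (port 28002).
# The connection-request layout below is an assumption for
# illustration; only the "4-byte connection time" step follows
# the text directly.

def read_connection_time(sock):
    """Step 2: the server first sends the connection time
    as a 4-byte big-endian integer."""
    (conn_time,) = struct.unpack(">i", sock.recv(4))
    return conn_time

def build_connect_request(conn_time, username=b"", password=b""):
    """Step 3: echo the connection time back, optionally with
    credentials (field layout purely illustrative)."""
    return struct.pack(">i", conn_time) + username + b":" + password

class FakeSocket:
    """In-memory stand-in for a TCP socket, for demonstration only."""
    def __init__(self, inbound):
        self.inbound = inbound
        self.sent = b""
    def recv(self, n):
        chunk, self.inbound = self.inbound[:n], self.inbound[n:]
        return chunk
    def sendall(self, data):
        self.sent += data

sock = FakeSocket(struct.pack(">i", 123456))
t = read_connection_time(sock)
sock.sendall(build_connect_request(t))
print(t)  # 123456
```

Against the real server, `FakeSocket` would be replaced by `socket.create_connection((host, 28002))`, and the remaining steps (request, Ready handling, RequestPending keep-alive) would follow the same pattern.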
The Private Data Stream protocol supervises online data access in near real time. The component involved is the NaqsServer, or more precisely its Stream Manager subsystem, which plays a role similar to that of the DataServer, but with a strict time schedule. Once again, events, triggers, state-of-health messages, time-series, and transparent serial data are supported. Compressed data channels (i.e., state of health, time-series, and transparent serial) can be subscribed to as raw or buffered streams. In a raw stream, the packets are sent to the client as they are received from the sensor node, i.e., without guarantees on correct ordering, and possibly with duplications or gaps in the flow. If a buffered stream is requested, the most recent packets available at the server for that channel before subscription time, usually 4 or 5, are forwarded to the client; in this sense, the collected flow starts in the past. When requesting a channel, it is also possible to define the short-term-completion time, that is, the interval the server waits to fill gaps in the measurements with retransmitted packets. This parameter, if not disabled, can vary from 0 to 300 s. At the cost of a tolerable delay, the Stream Manager provides with reasonable probability a continuous set of values, which is very important for the subsequent analysis, as it avoids interpolation. Finally, since the user is not allowed to change the sampling frequency set on the instrument, the server can apply a decimation filter to down-sample the data stream. In order to set up a connection according to the Private Data Stream protocol a client should:
• Open a socket to the NaqsServer on port 28000
• Send a connection request
• Receive the channel list
• Subscribe to one or more channels
• Receive and decode measurements, adding or removing subscriptions
• When done, send a terminate subscription and close the socket
The Stream Manager can handle multiple requests, so it is possible to subscribe to several channels at the same time, dynamically adding or removing them or changing parameters such as the short-term-completion time on the fly. The client should also prevent the server from closing the connection after 30 s of inactivity by sending a keep-alive message.
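The raw packets delivered over these streams carry the differentially compressed bundles described in the previous paragraphs (17 bytes: four 4-byte data sets plus one control byte). A minimal decoder sketch follows; the control-byte bit layout chosen here (two bits per data set: 1 = byte, 2 = word, otherwise long differences) is an assumption for illustration, not the documented Nanometrics encoding.

```python
import struct

# Sketch of differential decompression for a 17-byte compressed data
# bundle: 1 control byte + four 4-byte data sets. Each difference is
# added to the running value, starting from the initial value carried
# in the data packet header. The control-byte layout is ASSUMED.

def decode_bundle(bundle, previous):
    control = bundle[0]
    samples = []
    value = previous
    for i in range(4):
        dataset = bundle[1 + 4 * i: 5 + 4 * i]
        kind = (control >> (2 * i)) & 0x3     # assumed bit layout
        if kind == 1:                          # four 1-byte differences
            diffs = struct.unpack(">4b", dataset)
        elif kind == 2:                        # two 2-byte differences
            diffs = struct.unpack(">2h", dataset)
        else:                                  # one 4-byte difference
            diffs = struct.unpack(">i", dataset)
        for d in diffs:
            value += d
            samples.append(value)
    return samples

# All four data sets in byte mode -> 16 samples (the 4-to-16 range in
# the text comes from mixing byte, word, and long data sets).
bundle = bytes([0b01010101]) + struct.pack(">4b", 1, -1, 2, 0) * 4
print(decode_bundle(bundle, 100)[:4])  # [101, 100, 102, 102]
```

The same routine, with the documented bit layout substituted in, is essentially what the Java client library mentioned in Sect. 5 has to do for every packet.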
4 The Proposed Application

The application discussed in this chapter is split into two parts: interaction with instruments and data analysis. The instrument-oriented task consists of polling the central server to obtain the measurements, in real time or off-line. The two operational modes are equivalent in terms of results, but the former is closer to an early-warning system, while the latter is useful for historical analysis, e.g., to evaluate how parameters influence an algorithm.
The computational part of the application starts from the ground velocity provided by the sensors to extract the time history and calculate the Fourier amplitude spectrum together with the acceleration response spectrum. The first one is simply the plot of displacement, ground velocity, and acceleration versus time. Well-known algorithms for numerical integration [5] and differentiation [6] give the result for displacement and acceleration, respectively. The absolute value of the Fourier amplitude spectrum shows the acceleration in the frequency domain, obtained by a Fast Fourier Transform. Finally, the acceleration response spectrum is the most technical item discussed here and is broadly adopted in the seismic community, so some more explanation [7] is needed. It is defined as the locus of the acceleration response peaks (in absolute value) for a set of single-degree-of-freedom oscillators characterized by different natural periods and a fixed damping constant. If one of these systems is subject to the ground acceleration imposed by an earthquake, the equation describing its behaviour is

$$m\ddot{u} + c\dot{u} + ku = -m\ddot{u}_g \qquad (1)$$

where $\ddot{u}$, $\dot{u}$, and $u$ are, respectively, acceleration, velocity, and displacement; $m$ is the mass; $k$ the spring constant; $c$ the damping coefficient; and $\ddot{u}_g$ is the ground acceleration (in our case derived from the measured ground velocity). Equation (1) can also be written as

$$\ddot{u} + 2\zeta\omega_n\dot{u} + \omega_n^2 u = -\ddot{u}_g \qquad (2)$$

where the damping ratio

$$\zeta = \frac{c}{c_c} = \frac{c}{2\sqrt{km}} \qquad (3)$$

is given by the damping constant over the critical damping constant, or expressed as a function of $c$, $k$, and $m$. At the same time, the natural frequency

$$\omega_n = \frac{2\pi}{T_n} = \sqrt{\frac{k}{m}} \qquad (4)$$
depends on the mass and the spring constant. Even if derived from a simple model, the spectral acceleration obtained from the response spectrum is important because, multiplied by the mass, it represents the peak base shear of the structure. Figure 2 shows the interaction between the grid blocks; all the resources available in the e-Infrastructure are involved. The two basic steps (acquisition and analysis) are recognizable, and all the actions are routed through the VCR. The VCR and the IE are located at Eucentre, while Storage Elements (SEs) and Computing Elements (CEs) are selected from the DORII pool assigned to the generic catch-all VO. The National Research Networks take charge of the traffic generated, including the communication between the IE and the Nanometrics server (not shown here). Given
Fig. 2 Grid blocks involved in the application deployment
the central role of the VCR, new institutions that would like to use the application only have to join the VO and apply for an account on the web portal. Secure and coordinated instrument access is guaranteed through the VOMS architecture.
5 Integration Into the Grid

In order to exploit the grid's potential in the seismic application described in Sect. 4, it is fundamental to give access to the sensors from the chosen grid user interface. This is one of the most relevant points in the DORII project, whose final goal is to offer users an integrated environment in which to take measurements and perform computations. Although the Nanometrics solution is proprietary, a detailed reference guide [8] describes every aspect of the protocol, so it is quite straightforward to implement client software tailored to all needs. We wrote a Java client library covering most of the features described in Sect. 3: access to historical and real-time data, as raw or buffered time-series streams, in the original compressed format. The Java language was chosen to maximize integration with the DORII framework; however, the library was developed as a generic tool and can also be used outside the grid. Captured data are saved in two formats: ASCII plain text and binary. Text files are organized in two columns: the relative time from the beginning of the acquisition for that channel and the integer value produced by the A/D converter on the deployed station (quantized millivolts). Depending on the sensor, a conversion factor gives the ground velocity in meters per second. Binary files are written according to the SAC (Seismic Analysis Code) format, widely adopted in the earthquake community.
SAC Data File Format, http://seismolab.gso.uri.edu/savage/sac/sac format.html.
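Reading the two-column text output back and applying the sensor-dependent conversion factor can be sketched as follows; the conversion factor is a placeholder, not a real instrument sensitivity.

```python
# Sketch of converting the two-column ASCII output (relative time,
# A/D counts) to ground velocity in m/s. The sensitivity value below
# is HYPOTHETICAL; the real factor depends on sensor and digitizer.

COUNTS_PER_M_S = 4.0e8  # placeholder sensitivity, counts per (m/s)

def counts_to_velocity(lines, counts_per_ms=COUNTS_PER_M_S):
    """Parse 'time counts' lines, return (time, velocity) pairs."""
    out = []
    for line in lines:
        t_str, c_str = line.split()
        out.append((float(t_str), int(c_str) / counts_per_ms))
    return out

# Toy factor of 100 counts/(m/s) keeps the arithmetic visible.
rows = counts_to_velocity(["0.00 400", "0.01 -200"], counts_per_ms=100.0)
print(rows)  # [(0.0, 4.0), (0.01, -2.0)]
```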
Fig. 3 Finite state machine for the sensor network
Access to the seismic sensor network, or more precisely to the Nanometrics server, is hidden behind two IMs, one for the NaqsServer and one for the DataServer. Both of them are abstracted as finite state machines sharing the structure depicted in Fig. 3. From the implementation point of view, the IMs are simply client software based on the Java library, running in an application server (the IE), with some parameters fixed. For example, we chose to request only unbuffered streams, disabling the short-term-completion time, in order to keep the delay under control. Going into detail, the states and related commands, highlighting the differences between the two implementations, are:
Off: The connection to the Nanometrics server is not established yet; the command Turn on opens the socket and triggers the transition to the next state.
On: The client sets up the communication by means of a proper request and handles the answer. The NaqsServer provides the channel list, while the same information must be explicitly requested from the DataServer; in this case, a request for a precise list is issued, in order to get the time-window availability of the stored data for each channel. The IM now waits for a command from the user; meanwhile, keep-alive packets are sent regularly (every 15 or 25 s for the NaqsServer or the DataServer, respectively) to prevent the connection from closing. Two commands are available: Shutdown, closing the connection and pushing the state machine back to the Off condition, or Acquire. The latter allows a user to subscribe to a channel, in the NaqsServer implementation, or to request seismic measurements in a specified time window, for the Data Access Protocol.
Acquisition: Once the acquisition phase is started, the IM switches to this state. For near-real-time subscriptions, it is possible to add or remove channels from the current polling list with the Subscribe and Unsubscribe commands. The offline data access version of the IM enables other directives: Cancel, clearing
Fig. 4 IM for NaqsServer during acquisition phase
the current process on the server, and NewAcquisition, to retrieve a new time-windowed bunch of data. The action comes to a successful end if all previous requests have been satisfied, that is, if no other Ready messages are expected; otherwise, the execution fails and the IM remains in this state. All the measurements gathered are stored locally on the IE, while a set of VCR attributes gives feedback showing the last sample read (see Fig. 4).
Stopped: By issuing the Stop command in the previous state, the connection to the server is closed, the files holding the downloaded data are finalized, and this transition takes place. The Store command uploads the local files to a SE specified in a configuration file; the user enters only the existing remote folder on the selected resource. The IM can now be turned off or set back to the On state, opening a new connection.
Fig. 5 First form of the VCR application. The first argument is filled in by the user, the second one (SE address) is set by default but editable
Error: This is the target state in case an error occurs. It is not shown in Fig. 3 for the sake of clarity.
The analysis of seismic data is carried out completely on the grid, starting from the storage on a SE, as we saw. The code extracting the information described in Sect. 4 is written in C and should run for every file collected. A convenient solution is to use parametric jobs, where the file name is the parameter that changes each time. The job is very simple: it consists of downloading the file from the SE and then running the statically compiled executable included in the Input Sandbox. In order to access CEs from the VCR it is necessary to write a JDL (Job Description Language) file, which would require potential users to go into the details of grid technology. It is better to resort to VCR applications, that is, a form-oriented interface launching jython scripts and/or jobs. Our application needs only two steps. In the first one, shown in Fig. 5, the user is prompted for the remote folder on the SE where the IM saved the sensors' output. The script customizes a template and yields a JDL file, used to submit a job in the next step. The user has only to wait for the computation to complete; the output is then placed in the home folder on the VCR. This approach has been chosen to avoid lengthy typing and to hide details that scientists feel are unnecessary. Tests undertaken during the deployment phase of the DORII project showed that researchers are more focused on the scientific implications of the grid than on its technical aspects and are constantly looking for comfortable interfaces. Another tool going in this direction and under development in DORII is the Workflow Management System (WfMS) [9], a graphical environment to interact with all the grid nodes. Application porting to the WfMS is an ongoing activity.
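For reference, a parametric JDL file of the kind generated by the script might look like the sketch below. The executable and file names are purely illustrative, and the attribute set follows the usual gLite parametric-job conventions rather than the project's actual template.

```
[
  JobType        = "Parametric";
  Executable     = "spectrum_analysis";       // statically compiled, shipped in the sandbox
  Arguments      = "record_PARAM_.sac";       // _PARAM_ is substituted per job instance
  Parameters     = 10;                        // number of input files (illustrative)
  ParameterStart = 1;
  ParameterStep  = 1;
  InputSandbox   = {"spectrum_analysis"};
  StdOutput      = "out_PARAM_.txt";
  StdError       = "err_PARAM_.txt";
  OutputSandbox  = {"out_PARAM_.txt", "err_PARAM_.txt"};
]
```

The VCR application fills a template of this shape from the form input (the SE folder), so each sub-job processes one captured file independently.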
6 Discussion and Future Work

First of all, a brief overview of the output produced during the computational phase is given. Figure 6 plots the ground velocity, i.e., the raw data generated by the sensor, also saved in SAC format. This snapshot was captured in real time from
Fig. 6 Ground velocity in the vertical direction (ground speed [m/s] vs. time [s])
Fig. 7 Ground acceleration plots: (a) time domain (ground acceleration [m/s2] vs. time [s]); (b) frequency domain (acceleration vs. frequency [Hz])
the RORO.HHZ channel and represents the vertical component (HHZ) measured by the station whose identification code is RORO. The absolute values are very small because there was no significant seismic activity ongoing in the monitored area. Subsequent processing needs the acceleration, so the velocity is differentiated; the results are shown in Fig. 7a, b for the time and frequency domain, respectively. The absolute spectrum is a pure number, since the ratio with g, the gravitational acceleration, was calculated. Finally, the response spectrum is worked out, see Fig. 8. The natural period Tn on the horizontal axis is increased with a step equal to 0.025 s (meaning that the mass m and the spring constant k change according to (4)), while the damping ratio is fixed at 5%. It is worth noticing that in real cases the response spectrum should take into account natural periods extending to 50 s, and the analysis should be repeated for different damping ratios, up to 20%.
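The response-spectrum computation described above can be sketched as follows: a set of single-degree-of-freedom oscillators, with the natural period stepped by 0.025 s and a 5% damping ratio, each integrated with the Newmark average-acceleration scheme (one common choice for the numerical integration cited in [5]; the actual code ported to the grid may use a different scheme). The input here is a synthetic ground acceleration, not real sensor data.

```python
import math

# Peak absolute acceleration of a unit-mass SDOF oscillator under
# ground acceleration ag (sampled at step dt), natural period Tn and
# damping ratio zeta, via Newmark's average-acceleration method.
def sdof_peak_response(ag, dt, Tn, zeta=0.05):
    wn = 2.0 * math.pi / Tn
    k, c = wn * wn, 2.0 * zeta * wn            # per eqs. (3)-(4), m = 1
    beta, gamma = 0.25, 0.5                    # average acceleration
    a1 = 1.0 / (beta * dt * dt) + gamma * c / (beta * dt)
    a2 = 1.0 / (beta * dt) + (gamma / beta - 1.0) * c
    a3 = (0.5 / beta - 1.0) + dt * (0.5 * gamma / beta - 1.0) * c
    kh = k + a1
    u = v = 0.0
    a = -ag[0]                                 # eq. (2) at rest
    peak = abs(a + ag[0])                      # absolute acceleration
    for i in range(1, len(ag)):
        ph = -ag[i] + a1 * u + a2 * v + a3 * a
        un = ph / kh
        vn = gamma / (beta * dt) * (un - u) + (1 - gamma / beta) * v \
             + dt * (1 - 0.5 * gamma / beta) * a
        an = (un - u) / (beta * dt * dt) - v / (beta * dt) \
             - (0.5 / beta - 1) * a
        u, v, a = un, vn, an
        peak = max(peak, abs(a + ag[i]))       # track |u'' + u''_g|
    return peak

# Synthetic 2 Hz ground acceleration, 10 s at dt = 5 ms.
dt = 0.005
ag = [1e-5 * math.sin(2 * math.pi * 2.0 * n * dt) for n in range(2000)]
periods = [0.025 * n for n in range(1, 121)]   # 0.025 s ... 3 s
spectrum = [sdof_peak_response(ag, dt, Tn) for Tn in periods]
print(len(spectrum))  # 120
```

As expected, the ordinate peaks near the 0.5 s natural period (resonance with the 2 Hz input), while very stiff oscillators simply track the peak ground acceleration.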
Fig. 8 Response spectrum (acceleration [m/s2] vs. natural period [s])
The application has been proposed to some scientists belonging to the earthquake community in order to get an operational point of view on the grid approach. Some difficulties arose at the beginning, mainly related to the new platform. The testers were used to working with different tools, commonly installed on their personal laptops, where authentication is limited to a log-on procedure. The change of perspective triggered by the distributed environment requires a more accurate authorization management. This is done automatically, but it may take time, which is uncomfortable for the user. However, after the initial impact, the benefits were immediately perceived. A unique web portal, accessible from every device equipped with a browser supporting Java, allows every aspect of the experiment to be kept under control. Moreover, parametric jobs ensure an intrinsic parallelization, which is very important for a statistical study such as risk evaluation. The work described so far is the first step towards other goals. We are planning to continuously monitor some stations and store the accelerograms on a SE, creating a huge data collection. This is useful not only to test the reliability of the platform; it also extends the time window in which data are available (the current limitations are due to the fact that the Nanometrics server is a single machine). Users are then not restricted to accessing historical series, but can also speed up and broaden the response spectrum analysis. Another problem to which we think the framework we developed can be applied is the detection of seismic waves in earthquake recordings, an important issue for locating and analyzing the event [10]. In particular, in the coming months we will focus on P-wave picking employing the Akaike Information Criterion (AIC) [11]. The P-wave onset is associated with a sudden variation in amplitude and an increase in the energy carried at high frequencies.
According to [12], the seismogram can be modeled as a sum of auto-regressive processes, whose coefficients and order
change before and after the arrival. The AIC estimates how well real data fit the model, and the onset is identified by a minimum of the function. Since it is always possible to find a minimum, it is fundamental to set a proper time window in order to avoid errors. This can be accomplished by a filter, a Hilbert transform, or a neural network. Some code implementing the AIC with Hilbert transform filtering is ready, so we are going to port it to the grid together with a collection of seismograms recorded during some earthquakes. We will then launch a series of jobs to study the impact of the time window on the quality of auto-picking with respect to the same procedure executed by a human operator.

Acknowledgements The work described is carried out under the DORII project, supported by the European Commission with contract number 213110. The authors would like to thank Mirko Corigliano for the analysis software ported to the grid and for the related explanations. The cooperation of Dip.Te.Ris at the University of Genoa was fundamental to access the sensor network. In particular, the contribution received from Daniele Spallarossa and Gabriele Ferretti was very useful to refine the application. Finally, the work of Davide Silvestri in implementing the Instrument Managers and part of the Java library was highly appreciated.
References

1. Y. Guo, J. G. Liu, M. Ghanem, K. Mish, V. Curcin, C. Haselwimmer, D. Sotiriou, K. K. Muraleetharan, L. Taylor, "Bridging the Macro and Micro: A Computing Intensive Earthquake Study Using Discovery Net", Proc. of the 2005 ACM/IEEE SC 05 Conference (SC 05), Seattle, WA, USA, Nov. 2005.
2. A. Donnellan, J. Parker, C. Norton, G. Lyzenga, M. Glasscoe, G. Fox, M. Pierce, J. Rundle, D. McLeod, L. Grant, W. Brooks, T. Tullis, "QuakeSim: Enabling Model Interactions in Solid Earth Science Sensor Webs", Proc. of the 2007 IEEE Aerospace Conference, Big Sky, MT, USA, Mar. 2007.
3. R. Ranon, L. De Marco, A. Senerchia, S. Gabrielli, L. Chittaro, R. Pugliese, L. Del Cano, F. Asnicar, M. Prica, "A web-based tool for collaborative access to scientific instruments in cyberinfrastructures", in F. Davoli, N. Meyer, R. Pugliese, S. Zappatore, Eds., Grid Enabled Remote Instrumentation, Springer, New York, NY, 2008, pp. 237-251.
4. E. Frizziero, M. Gulmini, F. Lelli, G. Maron, A. Oh, S. Orlando, A. Petrucci, S. Squizzato, S. Traldi, "Instrument Element: a new Grid component that enables the control of remote instrumentation", Proc. 6th IEEE Internat. Symp. on Cluster Computing and the Grid Workshops (CCGRIDW 06), Singapore, May 2006.
5. N. M. Newmark, "A Method of Computation for Structural Dynamics", ASCE Journal of the Engineering Mechanics Division, Vol. 85, No. 3, 1959, pp. 67-94.
6. M. Abramowitz, I. A. Stegun, "Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables", Dover, New York, NY, 1964, Section 25.3.4.
7. A. K. Chopra, "Dynamics of Structures (Theory and Applications to Earthquake Engineering)", 3rd Edition, Prentice Hall, Upper Saddle River, NJ, 2007, pp. 208-217.
8. Nanometrics Data Formats - Reference Guide.
9. M. Okoń, D. Kaliszan, M. Lawenda, D. Stokłosa, T. Rajtar, N. Meyer, M. Stroiński, "Virtual Laboratory as a remote and interactive access to the scientific instrumentation embedded in Grid environment", Proc. 2nd IEEE Internat.
Conf. on e-Science and Grid Computing (e-Science 06), Amsterdam, The Netherlands, Dec. 2006.
10. H. Zhang, "Application of Multilayer Perceptron (MLP) Neural Network in Identification and Picking P-wave arrival", ECE539 Project Report, Department of Geology and Geophysics, University of Wisconsin-Madison, 2001.
11. H. Akaike, "Markovian representation of stochastic processes and its application to the analysis of autoregressive moving average processes", Annals of the Institute of Statistical Mathematics, Vol. 26, 1974, pp. 363-387.
12. R. Sleeman, T. V. Eck, "Robust automatic P-phase picking: an on-line implementation in the analysis of broadband seismogram recordings", Physics of the Earth and Planetary Interiors, Vol. 113, 1999, pp. 265-275.
Integrating a Multisensor Mobile System in the Grid Infrastructure Ignacio Coterillo, Maria Campo, Jesús Marco de Lucas, Jose Augusto Monteoliva, Agustín Monteoliva, A. Monná, M. Prica, and A. Del Linz
Abstract This document presents a mobile floating autonomous platform supporting an extensive set of sensors, with depth-profiling capabilities and ease of deployment anywhere within the limits of a water reservoir in a short time, and its integration in the existing Grid infrastructure. The platform equipment is remotely operable and provides real-time access to the measured data. Through the integration in the existing Grid infrastructure, by means of the infrastructure and middleware provided by the DORII project, a remote operator can not only control the platform system but also use the produced data in the desired simulations and analyses as soon as they are recorded. The instrumentation is installed on a mobile floating autonomous platform containing a power system (consisting of a pair of solar panels and a wind turbine, along with a series of deep-cycle batteries), a set of surface sensors, and a set of water-quality sensors integrated in a compact underwater cage, connected by a steel cable to a winch system situated on the platform surface, which provides water-column profiling capabilities. All the underwater sensors (plus one of the surface sensors) are locally controlled by a datalogger device capable of storing up to 14 days' worth of sampling data, constituting a first layer of data backup. An onboard computer serves as datalogger for the rest of the sensors and as a second-layer data backup. Communication with the land station is managed via a WiFi point-to-point
This work was carried out in the framework of the DORII project under grant agreement RI-211693 (7th Framework Programme). I. Coterillo • M. Campo • J.M. de Lucas Instituto de Física de Cantabria (IFCA, CSIC-UC), Santander, Spain e-mail:
[email protected];
[email protected];
[email protected] J.A. Monteoliva • A. Monteoliva • A. Monn´a Ecohydros S.L, Santander, Spain e-mail:
[email protected];
[email protected];
[email protected] M. Prica • A.D. Linz ELETTRA, Sincrotrone Trieste S.C.p.A, Trieste, Italy e-mail:
[email protected];
[email protected]; [email protected] F. Davoli et al. (eds.), Remote Instrumentation for eScience and Related Aspects, DOI 10.1007/978-1-4614-0508-5_5, © Springer Science+Business Media, LLC 2012
link, using standard consumer directional antennas on both sides of the link. With this configuration, distances of over 20 km are possible. In the land station, a server hosts both the primary database, where the platform computer stores all sensor data (replicated at a second location for additional redundancy), and the Instrument Element (IE) middleware that integrates the whole monitoring system in the Grid infrastructure and provides remote access to the data and control capabilities through the Virtual Control Room (VCR). From within the VCR an operator is able not only to control and configure the behaviour of the sensors and access the recorded data in real time, but also to employ these data in analysis jobs executing in the high-availability Grid environment.
1 Introduction Scientific activity is based on the collection of data through observation and experimentation, and on the formulation and testing of hypotheses [1]. Beyond this basic and broad definition, the different fields of science present highly heterogeneous data-acquisition requirements (where data acquisition refers not only to the acquisition process proper but also to the transport, storage, and curation of the acquired data), as well as different computing requirements for successfully processing their data. The DORII (Deployment of Remote Instrumentation Infrastructure) project aims to deploy e-Infrastructure for new scientific communities where ICT technology is not yet present but is needed to empower their daily work [2]. The project focuses mainly on groups of scientific users whose experimental equipment and instrumentation is currently not, or only partially, integrated in the European e-Infrastructures. The framework established by the DORII project allows existing instrumentation to be integrated in the current e-Infrastructure by providing an abstraction for instruments – the Instrument Element (IE) – and a web interface, the Virtual Control Room (VCR), which offers final users a collaborative environment with access to both the infrastructure resources (storage and computing resources in the Grid) and their instrumentation by means of the IE. With the current state of the art in communication and computing technologies, this kind of framework opens up opportunities for applications with a broad range of demands: multi-user access to instrumentation, remote control and operation of instrumentation, distributed sensor networks, high data throughput, computationally demanding processing applications, etc.
This document presents a brief, generic overview of how to integrate an application in the DORII e-Infrastructure using two components of the DORII middleware (the IE and the VCR), and then describes the integration of one of the applications of the DORII project's environmental community, Monitoring Inland Waters and Reservoirs: a multisensor, remotely operated, autonomous mobile platform used by Ecohydros S.L. [3] for monitoring water quality in the Cuerda del Pozo reservoir, located in Soria, Spain.
2 The DORII Framework In the kind of situation addressed by the DORII project objectives, a scientific community of any type would have existing instrumentation already deployed, in one of many possible working scenarios. A summary of the instrumentation setups in the scientific areas represented in the project is as follows: • The earthquake community, present with applications that acquire data from various sensor networks. • The experimental-science community, with applications such as a synchrotron and free-electron lasers requiring support for multi-user access to instrumentation, along with specific individual needs. • The environmental-science community, with applications such as Oceanographic and Coastal Observation and Modelling – Mediterranean Ocean Observing Network, which uses data from a vast network of floating sensors; Oceanographic and Coastal Observation and Modelling Using Imaging, which processes images obtained from several digital cameras; and Monitoring Inland Waters and Reservoirs, studied in more depth in the following sections, which integrates a number of heterogeneous sensors. It can be inferred that there are a number of use cases with very different combinations of requirements regarding mobility, availability, user access control and scheduling, bandwidth of the produced data, and number of sensors. Additionally, the computation requirements of the different applications range widely, as the set of simulations varies depending on the kind and size of the data to be processed in each application.
2.1 The Instrument Element The key element in the process of integrating each application's instrumentation is the Instrument Element (IE): a Java web service (exposing a client-access interface built with Apache Axis) that makes it possible to include scientific instruments and sensors in the Grid environment, which is typically composed of computing and storage resources plus additional support and structural systems such as file catalogues or information systems. The IE framework provides support for secure access control (using the gLite/GSI security model based on user proxy delegation), multiple-user access through concurrency control and locking, and simple Grid storage access through a GridFTP-based utility that allows the Instrument Managers to save and retrieve data to and from Grid storage elements.
Fig. 1 Instrument Element architecture (a) and Communication between IMs and instruments (b)
It also offers the possibility of using the Java Message Service (JMS) for asynchronous monitoring of instrument variables, enabling the framework to use signalled alarms and events (Fig. 1). Instruments and sensors are interfaced via Instrument Managers (IMs), which allow connection to the physical devices or, more precisely, to their control systems. Instrument Managers consist mainly of the Java client code for managing the instrumentation and should run inside the IE installation. Summing up, the IE offers a common interface for accessing instrumentation, and can be viewed as an extension of the gLite Grid middleware lying at the bottom of the DORII architecture. A more extended description can be found in [4].
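As an illustration of the concept only (not of the actual GRIDCC/DORII Java API, whose class names are not given here), the following minimal Python sketch shows the role an Instrument Manager plays: it wraps a device's control system behind a generic command interface and enforces the kind of locking-based concurrency control the IE provides. All names are hypothetical.

```python
class InstrumentManager:
    """Hypothetical IM mapping generic commands to device-specific calls."""

    def __init__(self, device):
        self.device = device      # handle to the physical device's control system
        self.locked_by = None     # simple exclusive-access control, as the IE offers

    def acquire(self, user):
        # Locking gives one user exclusive control of the instrument.
        if self.locked_by not in (None, user):
            raise RuntimeError(f"instrument locked by {self.locked_by}")
        self.locked_by = user

    def execute(self, user, command, **params):
        if self.locked_by != user:
            raise RuntimeError("acquire the instrument before issuing commands")
        return self.device.handle(command, params)


class FakeSensor:
    """Stand-in for a real control system; echoes the command it receives."""

    def handle(self, command, params):
        return {"command": command, "params": params, "status": "ok"}


im = InstrumentManager(FakeSensor())
im.acquire("alice")
result = im.execute("alice", "set_sampling", interval_s=60)
```

A second user attempting `im.execute("bob", ...)` before acquiring the lock would be rejected, mirroring the concurrency control described above.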
2.2 The Virtual Control Room The Virtual Control Room (VCR) is a collaborative environment based on GridSphere [5] and Web 2.0 technologies, offering a set of groupware tools that support scientific teamwork (logbook, chat, wiki help, people and resource browsers). It serves as both the main frontend to the IE and a window to the Grid. The VCR allows users to search, browse, control, and manage Grid resources (e.g. job submission through a specific Computing Element (CE) or Workflow Management System (WMS), workflow submission, file browsing and transfer to an SE via gsiftp, LFC catalogue browsing and operation, credential management, etc.) as well as remote instrumentation, letting the user interact with Instrument Elements by browsing and operating the managers offered by a given Instrument Element. To support application customization, it includes an application manager for creating simple application-specific forms and a scripting environment for creating and running simple workflows.
Fig. 2 User view of Grid resources and instruments in the VCR
The intent of the VCR is to present the final user of the instrumentation (who, according to the DORII objectives, would be a member of a community with no specific technical knowledge in the field of computing) with a seamless interface to both the instrumentation and the Grid. Figure 2 shows the typical view of the available resources once the user has logged in to the VCR.
3 A Use Case: Monitoring of Inland Waters and Reservoirs This section presents the practical process of integrating a real application in the DORII e-Infrastructure.
3.1 Motivation The Cuerda del Pozo water reservoir, in Soria, Spain, was built in 1941, with a total surface area of 2,176 ha, and has presented an average stored volume of 99 hm3 (43.23% of capacity). Its main uses are the irrigation of 26,000 ha of crops and the supply of drinking
water to the province of Soria and part of Valladolid. It has historically presented a water quality below the limits expected given its geographic situation and the characteristics of its watershed. A study of the trophic state of the reservoir, carried out in 1997 and updated in 2008, confirmed the eutrophication of the reservoir waters. This drew attention to the need for corrective actions to reverse this trend and its unwanted consequences, the main one being the appearance of cyanophyceae algal blooms, which, upon decay, can release toxins into the water, rendering it unfit for human use. As cyanobacteria blooms develop on short time scales (days) and with highly localized spatial differentiation inside the reservoir, monitoring via periodic sampling campaigns is not a suitable detection method; hence the need for a mobile monitoring platform with long-term deployment capabilities, multiple sensors, and power autonomy. Ecohydros S.L., a Spanish SME dedicated to environmental consulting and specialised in aquatic ecosystems in both coastal and inland areas, is in charge of developing and deploying such a platform, while creating a model of the relations between the presence of different nutrients and the appearance of cyanophyceae algae.
4 Sensor Platform To fulfill the requirement of moving the sensors around the reservoir, the equipment is installed on a floating platform, suitable for moored deployment or for towing by a small boat, which hosts the sensors and probes described below. The sensor suite is selected to monitor a variety of environmental parameters and the presence of different nutrients in the water (Fig. 3).
4.1 Underwater Sensors All underwater sensors are placed inside a steel cage, which provides protection and a structure to which the different sensor bodies are fixed. The set of underwater sensors includes: • An advanced CTD (Conductivity, Temperature, Depth) probe, with additional sensors measuring dissolved oxygen (DO), pH, redox potential, and salinity. • Fluorescence sensors for different groups of algae. • Hyperspectral irradiance sensors. A winch device holds the cage and manages the power-supply and communications cable. This allows the depth of operation to be changed and profile measurements to be executed over the 0–100 m depth range.
Fig. 3 Frontal view of the platform and the solar panels (a), the platform after installing the wind turbine and the winch (b), and a close-up of the underwater cage (c)
4.2 Surface Sensors The following sensors are installed on the body of the platform above the water surface: • A meteorological station measuring wind speed and direction; rain, hail, and snow events; air temperature; and relative humidity • A net radiometer • A hyperspectral irradiance sensor • A GPS receiver
Fig. 4 Sensor integration scheme
4.3 Sensor Integration The integration of the instruments is organized in a two-layer structure. The first (bottom) layer consists of a datalogger gathering the raw signals from all the sensors in the underwater cage, plus the surface hyperspectral irradiance sensor, and performing two important tasks. First, it digitizes the raw signal measurements from the sensors while storing an approximately two-week history of measurements, which serves as a first-level partial data backup; second, it offers a uniform interface to the different sensors in the cage using the Modbus [6] protocol over an Ethernet link. Behind this interface, the datalogger/cage pair can be abstracted as a single multiprobe instrument (and will be referred to as such in the following paragraphs). At the second layer, a computer running LabVIEW [7] works as the main datalogger, integrating all sensor data coming from the multiprobe instrument (via the Modbus-over-Ethernet link), the meteorological station (via an RS-232 serial connection, using standard ASCII as the protocol), the GPS (accessed via NMEA over USB), and the net radiometer (accessed via a serial connection using a proprietary protocol, for which an ad hoc communication module had to be developed) (Fig. 4). Similarly to the first-level datalogger, the LabVIEW software integrates the data-acquisition process. Additionally, it performs a second-level data backup locally with no practical storage limits, writes all sensor output to an external database and, above all, exposes a control interface to all the sensors using the LabVIEW Web Services interface (this is the key to the Grid integration of the whole system, as will be shown in the following sections).
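Since the datalogger exposes the cage sensors over Modbus, the request framing is fixed by the protocol itself, even though the platform's actual register map is not documented here. The Python sketch below builds a standard Modbus TCP "Read Holding Registers" (function 0x03) request frame; the transaction id, unit id, and register address are arbitrary example values.

```python
import struct

def modbus_read_holding_registers(transaction_id, unit_id, start_addr, quantity):
    """Build a Modbus TCP ADU for function 0x03 (Read Holding Registers)."""
    # PDU: function code (1 byte), starting address (2 bytes), quantity (2 bytes)
    pdu = struct.pack(">BHH", 0x03, start_addr, quantity)
    # MBAP header: transaction id, protocol id (always 0 for Modbus),
    # remaining length (unit id + PDU), unit id
    mbap = struct.pack(">HHHB", transaction_id, 0x0000, len(pdu) + 1, unit_id)
    return mbap + pdu

# Example: ask unit 17 for 3 registers starting at address 0x006B.
frame = modbus_read_holding_registers(1, 17, 0x006B, 3)
```

The device answers with a matching MBAP header followed by the register values; the second-level LabVIEW datalogger would issue requests of exactly this shape over the Ethernet link.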
The computer is also connected to an external high-transmission-power WiFi module, and this module is connected to a directional antenna used to link to the land station (both the land station and the communications setup are treated in the following sections).
4.4 Power System The platform is equipped with three sets of two 24-V batteries, allocated according to power-consumption demands: one set connected to the computer and surface sensors, one set connected to the winch, and one set connected to the underwater sensors. The batteries are recharged by two solar panels and a wind turbine; they support autonomous operation of the equipment for several days during winter, and continuously during spring and summer, when the solar panels are able to provide their nominal power (which is also the time of year when monitoring the reservoir conditions is especially relevant).
4.5 Land Station The land station is placed in a CHD office building located at the head of the reservoir. This station contains: • Communication equipment linking to the floating platform, consisting of a WiFi access point and a directional antenna. • A server hosting the following services: – The Instrument Element – Monitoring software – A database storing all acquired data • A network camera used to monitor the platform for security reasons
5 Networking and Communications The main factor conditioning the design of the network architecture for the whole system was the remoteness of the location, situated in a rural area far from any sizable population centre. At the time of planning and executing the initial deployment of instrumentation there was no GPRS/3G coverage in the area; thus, in order to allow remote access to the instrumentation, the following setup was employed.
Fig. 5 Platform Yagi WiFi antenna (a) and the land station patch WiFi antenna (b)
5.1 Platform to Land Station Communication As mentioned in the previous sections, communication between the land station and the platform is carried over a WiFi link using the following equipment: • A WiFi access point connected to an outdoor patch directional WiFi antenna (+14 dBi gain) placed at the land station. • A high-transmission-power external WiFi module connected to a Yagi directional WiFi antenna (+12 dBi gain) installed on the exterior of the platform housing (Fig. 5). With this setup it is possible to achieve link distances of up to 20 km, which is enough to cover the points of interest in the reservoir with the platform while maintaining communication with the land station.
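A rough free-space link-budget check makes the 20 km figure plausible. The antenna gains (+14 dBi and +12 dBi) come from the text; the 2.4 GHz channel frequency and the 20 dBm transmit power are assumptions about typical outdoor WiFi equipment, not documented values.

```python
import math

def fspl_db(distance_km, freq_mhz):
    """Free-space path loss in dB (Friis): 20 log d_km + 20 log f_MHz + 32.44."""
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.44

# Assumed figures: 2.4 GHz WiFi (channel 6, 2437 MHz), 20 dBm transmit power.
tx_dbm = 20.0
g_platform_dbi = 12.0   # Yagi antenna on the platform (from the text)
g_station_dbi = 14.0    # patch antenna at the land station (from the text)

loss_db = fspl_db(20, 2437)   # path loss over the 20 km link
rx_dbm = tx_dbm + g_platform_dbi + g_station_dbi - loss_db
```

Under these assumptions the received power lands around -80 dBm, above the roughly -90 dBm sensitivity of low-rate 802.11 modes, which is consistent with a feasible 20 km line-of-sight link.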
5.2 e-Infrastructure Network Communication For the same reason that GPRS/3G could not be used (the location being too remote), the only solution available at planning time for connecting the land station to the Internet – and thus to the DORII e-Infrastructure – was to deploy a WiMAX link offering 4 Mbps of bandwidth (Fig. 6).
5.3 Data Security and Backup Protections During the development, installation and testing of the different components, possible points of failure were considered (ordered by severity):
Fig. 6 Land station WiMAX patch antenna
1. Power-supply failure of any of the platform components. Typically, when the solar panels are not receiving enough sunlight (a frequent occurrence during autumn and winter at this location), the platform computer will drain its set of batteries before the underwater cage does, causing a loss of data from the surface sensors and isolating the underwater cage sensors, which remain in operation since their power consumption is low. 2. Failure of the communication link between the platform and the land station. If this link is broken, the final stage of data acquisition, in which data are written to the database, fails, and the platform becomes isolated from user access and control. 3. Failure of the communication link between the user and the land station. If the WiMAX link fails, the platform is isolated from user access and control. In the description of the instrument-integration architecture in the previous section, a data backup point at each of the two levels was mentioned. The backup at the multiprobe datalogger ensures that none of the data from the underwater sensors is lost if the most severe failure, type 1, occurs, as the datalogger is able to store up to two weeks' worth of data after the power supply is lost – an event that will be detected by the monitoring system. The second backup point ensures that none of the sensor data is lost when communication with the platform is temporarily lost: when communication is restored, the LabVIEW control software resumes operation as normal. A failure of the WiMAX link to the land station is quite improbable, but long delays or bandwidth narrowing that hinder user interaction with the system are possible. An asynchronously synchronized replica of the database is hosted at the CSIC-IFCA site and can be seen as a third level of backup. This
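The store-and-forward behaviour of the second-level backup can be sketched as follows. This is a simplified illustration of the described behaviour, not the actual LabVIEW implementation; all class and method names are invented. Samples are buffered locally, and the backlog drains to the land-station database once the link returns.

```python
from collections import deque

class BufferedWriter:
    """Sketch of the second-level backup: samples queue locally and are
    flushed to the land-station database whenever the link is up."""

    def __init__(self, db):
        self.db = db
        self.pending = deque()   # local buffer ("no practical storage limits")
        self.link_up = True

    def record(self, sample):
        self.pending.append(sample)
        if self.link_up:
            self.flush()

    def flush(self):
        while self.pending:
            try:
                self.db.insert(self.pending[0])
            except ConnectionError:
                self.link_up = False   # keep buffering; retry on restore
                return
            self.pending.popleft()


class FlakyDB:
    """Stand-in for the land-station database, with a switchable failure."""

    def __init__(self):
        self.rows = []
        self.fail = False

    def insert(self, row):
        if self.fail:
            raise ConnectionError("link to land station down")
        self.rows.append(row)


db = FlakyDB()
writer = BufferedWriter(db)
writer.record({"t": 1, "temp_c": 9.8})   # link up: written straight through
db.fail = True
writer.record({"t": 2, "temp_c": 9.7})   # link down: kept in the local buffer
db.fail = False
writer.link_up = True
writer.flush()                           # link restored: the backlog drains
```

After the final flush no samples are lost, which is the property the second backup point provides.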
Fig. 7 Complete use case infrastructure
replica is intended to be used when accessing data for simulation and processing purposes (specifically, when launching simulation jobs to the Grid, where the overall speed of the process benefits from having the database in the same local network as one of the infrastructure sites).
6 e-Infrastructure Integration As described in Sect. 2, the DORII middleware components, the IE and the VCR, are the key to integrating the instruments in the existing e-Infrastructure. In this application, as mentioned in Sect. 3, a uniform interface to each of the sensors is created using LabVIEW, and this interface is exposed through a Web Services interface (Fig. 7). The Instrument Element contains three Instrument Managers (divided by function), which are essentially Java clients for the exposed sensor interface. • myWSIM. Used to control the instruments. All actions that involve sending data to the instruments, such as starting/stopping individual sensors or setting specific parameters like sampling frequency, measurement units, etc., are carried out by this IM. • myDBIM. Monitors and controls the writing of data into the land-station database. In this case, the database is considered as the instrument. This IM makes it possible to set alarms related to values stored in the database. • myWINCHIM. Controls the winch, allowing the operator to remotely set the depth of operation for the underwater cage.1 The Virtual Control Room is hosted at the CSIC-IFCA site in Santander, Spain, which also hosts one of the computing and storage sites composing the DORII e-Infrastructure, and is the point of access to the system for the final user. The screenshots in Fig. 8 present the typical view available for managing the Instrument Managers inside the VCR.
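The IMs ultimately translate user actions into calls on the LabVIEW Web Services interface. The actual endpoint layout and parameter names of the platform are not given in the text, so the following sketch uses invented ones purely to illustrate the kind of HTTP request a control IM such as myWSIM would compose.

```python
from urllib.parse import urlencode, urljoin

# Hypothetical base URL: the real host and path are not documented here.
BASE = "http://platform.example.org:8080/sensors/"

def control_request(sensor, action, **params):
    """Compose the GET request an IM might issue to the LabVIEW web service."""
    query = urlencode({"action": action, **params})
    return urljoin(BASE, sensor) + "?" + query

# e.g. set the CTD probe's sampling interval to 60 s
url = control_request("ctd", "set", sampling_s=60)
```

An IM would send such a request and map the HTTP response back into the IE's instrument-state model, so that the VCR user only ever sees generic instrument operations.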
7 Conclusions This chapter presents the components and architecture required to integrate a set of instruments in the Grid e-Infrastructure, by explaining the scheme currently used in the Monitoring Inland Waters and Reservoirs application of the DORII project's environmental community; it also briefly introduces the key components of the DORII middleware that make it possible to integrate the instrumentation in the e-Infrastructure. 1
This IM is under development at the moment of writing this document.
Fig. 8 (a) myDBIM: generating a simulation file, (b) myWSIM: the control Instrument Manager, and (c) a close-up of myDBIM monitoring data
It is important to remark that, although this contribution focuses on the integration of the instrumentation, the VCR is not used only to control and monitor the instrumentation. Its final purpose is to present the available instrumental and computational resources to the user in an environment which makes it possible to easily create and manage workflows between acquired data and the Grid computational and storage resources for successfully processing those data.
References
1. Wikipedia: Scientific method, http://en.wikipedia.org/wiki/Scientific_method
2. DORII: Deployment of Remote Instrumentation Infrastructure, http://www.dorii.eu
3. Ecohydros S.L., http://www.ecohydros.com
4. M. Prica, A. Del Linz, "Instrument Element User Guide V2.4", Sincrotrone Trieste S.C.p.A., 2009
5. GridSphere portal framework, http://www.gridsphere.org/gridsphere/gridsphere
6. The Modbus Organization, http://www.modbus.org
7. National Instruments LabVIEW, http://www.ni.com/labview/
Defence in Depth Strategy: A Use Case Scenario of Securing a Virtual Laboratory Marcin Adamski, Gerard Frankowski, Marcin Jerzak, Dominik Stokłosa, and Michał Rzepka
Abstract The role of IT security is continuously growing. Complicated systems have to be protected against sophisticated attacks on different technical and logical levels. Achieving a sufficient security level becomes even more difficult for distributed heterogeneous environments involving valuable assets and data. A virtual laboratory, which enables remote access to unique scientific instruments, is a good example of a multi-layered environment where security must be introduced from the design stage and based on a number of solutions. We explain the significance of security issues in this environment, describe possible threats, and suggest an approach to addressing them, based on one of the threat-classification models and applied at Poznań Supercomputing and Networking Center in building the new Kiwi Remote Instrumentation Platform.
1 Introduction The role of security in IT infrastructures is continuously increasing. Informatization makes computer systems and software ubiquitous; this process is especially dynamic in developing countries. New solutions, including entire computational paradigms (e.g. grids), are now shared among research and scientific communities throughout the world. Every new technology is carefully investigated by people who might try to abuse it. There are numerous reports and estimations of the total costs of IT security breaches, of differing nature and scope. The Internet Crime Complaint Center, a partnership between the FBI and the National White Collar Crime Center (NW3C), received in 2009 over 336,000 complaints
M. Adamski • G. Frankowski • M. Jerzak • D. Stokłosa • M. Rzepka Poznań Supercomputing and Networking Center, Poland e-mail:
[email protected];
[email protected];
[email protected];
[email protected];
[email protected] F. Davoli et al. (eds.), Remote Instrumentation for eScience and Related Aspects, DOI 10.1007/978-1-4614-0508-5_6, © Springer Science+Business Media, LLC 2012
with reported losses of about 560 million USD – and that covers only US Internet frauds [1]. As for the whole "black IT market", AT&T's Chief Security Officer Edward Amoroso estimated in 2009 that the amount cyber criminals profit annually from hacking computers (again, only in the USA) may reach a trillion (10^12) USD [2]. Several factors increase the challenges that IT security faces. First of all, IT solutions are being applied in numerous new fields in order to ease the life and work of new groups of users. Not all new users are sufficiently aware of IT security issues, while their new infrastructures are often initially oriented towards functionality rather than security. Moreover, those infrastructures, thanks to new computational paradigms like grids and clouds, appear more powerful and thus more seductive to network attackers. Botnets – groups of infected computers that may be used for sending spam or conducting large-scale DDoS (distributed denial of service) attacks – may be sold by malicious actors for as little as 0.50 USD per bot [3], while single infrastructures, on the example of the EGEE grid [4], may consist of 150,000 CPUs, 14,000 registered users in 55 countries, and almost 70 petabytes of storage space. The resources themselves are valuable, but other types of cyber criminals might look for the data stored on them – especially in infrastructures intended for the cooperation of research communities. It is assumed that security, even once achieved, is neither a product nor a consistent state [5]. Given that systems change, and that new methods of network attacks and newly found vulnerabilities appear, it should rather be considered a process. It should be noted that some computational paradigms have dynamics built in: some parts of the infrastructure may be dynamically attached to its core and detached when unnecessary.
In such cases, appropriate mechanisms that maintain the desired security level should be implemented (see [6]), but in general protecting a dynamic infrastructure from network threats is a significantly harder goal to achieve. Another factor is embedded in the software itself. The complexity of today's programs, often combined with the pressure of releasing a version on time, means that up to 20–30 software bugs (of all types, not only security vulnerabilities) per 1,000 lines of source code (KLOC) can be expected [7]. This number varies with different factors; one of them is that software produced, for example, within R&D projects may suffer from the lack of a professional testing stage (with special respect to security tests). Indeed, there are approaches that can drastically decrease the number of bugs: NASA developers were able to achieve 0.004 bugs per KLOC for the space shuttle software, but it cost 850,000 USD per that amount of source code (compared with about 5 USD for typical industry software) [8]. Research communities cannot afford that. Obviously, there are not only threats, vulnerabilities, and attackers: IT security specialists inexhaustibly make the attackers' goal more difficult. Although it is practically unrealistic to build perfectly secure systems, in most cases this is not required. It is enough to prepare systems that are too expensive to attack. If a malicious hacker is required to spend more resources in order to break the system
being attacked than his or her expected gain from the attack, the system may be assumed secure [9]. Although this rule of security economics may occasionally be impossible to apply (e.g. when the value of the protected assets is hard to quantify, or when the attacker is driven by personal motives), it can be simplified as "the more obstacles for a malicious hacker, the safer the system". Introducing the desired level of security is supported by numerous methodologies, approaches, and security models. We will refer here to best practices of security testing [10]. This is reasonable, as the potential attacker is in effect a security tester who tends not to disclose or report his or her findings, but rather to take personal advantage of them. The tested application (or, more generally, a problem) should first be decomposed into basic components, distinguishing the interfaces between the components. Having decomposed the problem, it is reasonable to apply a specific threat model in order to obtain a well-structured description of the possible security threats, which are then easier to assess and prioritize. Then the structure of the data exchanged between interfaces should be identified and, finally, the actual tests should be started, based on attempts to provide unexpected data. That last stage obviously cannot be run before the whole system is ready. There are numerous security models and concepts. Information security is often described in terms of three components or objectives: confidentiality, integrity, and availability (CIA) [11]. However, while extremely useful as an introduction to the world of IT security, the above may not be enough for a detailed specification of the security threats of a complicated infrastructure. One of the more granular approaches to classifying computer security threats is the STRIDE model, developed by Microsoft [12].
It provides a mnemonic for security threats in six categories: spoofing of user identity, tampering with data, repudiation, information disclosure (both privacy breaches and data leaks), denial of service (DoS), and elevation of privilege. To follow STRIDE, it is necessary to decompose the system into relevant components, analyze each component for susceptibility to the threats, and mitigate the threats. The STRIDE model has been repeatedly applied within Poznań Supercomputing and Networking Center, e.g. for assuring the appropriate security level of the architecture of the National Data Storage R&D project [13]. Common security rules are helpful in mitigating the threats. Two main security principles considered when building sufficiently secure systems are the minimum-privileges principle and the security-in-depth rule. The minimum-privileges rule says that every user of a computer system must be granted only the privileges strictly required to perform the expected activities. The latter rule is a general guideline for introducing as much security as necessary: especially if the system has already been decomposed into layers and components, suitable security measures should be applied independently at all layers. Even if an attacker is able to break into the system at one of the layers, the consecutive security solutions will stop him or her a step later.
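The decompose-then-analyze step of STRIDE can be mechanized as a simple checklist: every component is paired with every threat category for review. The component names below are illustrative examples, not PSNC's actual decomposition.

```python
# The six STRIDE threat categories.
STRIDE = [
    "Spoofing", "Tampering", "Repudiation",
    "Information disclosure", "Denial of service", "Elevation of privilege",
]

def threat_matrix(components):
    """Pair every system component with every STRIDE category for review."""
    return [(component, threat) for component in components for threat in STRIDE]

# Example decomposition: 3 components x 6 categories = 18 pairs to assess.
matrix = threat_matrix(["web portal", "instrument driver", "database"])
```

Each resulting pair is then assessed for susceptibility and, where applicable, assigned a mitigation, yielding the well-structured threat description mentioned above.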
M. Adamski et al.
2 Virtual Laboratories

A virtual laboratory (VL) is a heterogeneous distributed environment which allows a group of scientists from different sites to work on one project [14]. As in all other laboratories, the equipment and techniques are specific to a given field of activity. Despite some interrelations with tele-immersive applications, the virtual laboratory does not presume the need to share the working environment. Virtual laboratories may implement different components, depending on the type of experiments to be conducted within them. The common parts of all virtual laboratories are:
• Access via the Internet through a Web portal
• A computational server – a high-performance computer which can handle large-scale simulations and data processing
• Databases containing application-specific information – such as initial simulations, boundary conditions, experimental observations, client requirements or production limitations. The databases also contain distributed, application-specific resources (e.g. repositories of the human genotype). The database content should be updated automatically. The databases may also be distributed. It should be presumed that they will contain a large amount of information
• Scientific equipment connected to the computational networks – for example, data from satellites, earthquake detectors, air pollution detectors and astronomical equipment (like the distributed astronomical research programme conducted by the National Radio Astronomy Observatory [15])
• Collaboration and communication tools, such as chat, audio and video conferences
• Applications – each virtual laboratory is built on specific software which supports process simulation, data analysis or visualization
There are several important aspects which have to be taken into consideration while creating a virtual laboratory system. Delay is a critical parameter for many applications.
This is the reason why a computational centre implementing the VL idea should have access to a high-throughput network. It will be helpful to connect the task scheduling system with throughput reservation services. Other critical aspects are multicast protocols and technology reliability, particularly for VL experiments in which people, resources and computations are highly distributed [16, 17]. In these experiments, the data streams may comprise voice, video, computational elements and huge portions of simulation and visualization data delivered in real time from the scientific equipment. Applications should allow access to data from several heterogeneous sources; for example, such information can come from real experiments or computational simulations. The main source of information will be the mass data storage systems – they are especially important for bioinformatics tasks [16]. These data storage systems may be based on dedicated databases implemented in national supercomputing centres, and may also include small personal databases on workstations and PCs. Thanks to the virtual laboratory, data processing will be controlled by an individual scientist or a distributed research team working in their own laboratories.
Defence in Depth Strategy: A Use Case Scenario of Securing a Virtual Laboratory
We will discuss a virtual laboratory (VLab) created at Pozna´n Supercomputing and Networking Center. The VLab [18] is a framework architecture aiming to provide remote access to various kinds of unique, and therefore expensive, scientific laboratory equipment. The main goal of the VLab was to embed remote facilities in grid environments, closely supporting the experimental processes with the computing and visualization infrastructure. Another purpose concerned the concept of a Workflow Management System (WfMS) [19]. The WfMS allows the measurement workflow to be defined in a way which is convenient for the user: from pre-processing, through executing the experiment, to the post-processing and visualization tasks. The virtual laboratory is not only a set of mechanisms to submit, monitor and execute jobs; it also offers access to the resources of digital library, communication and e-Learning systems. The VLab project introduced two different instruments: an NMR spectrometer and a radio telescope. The VLab system has been designed as a generic system able to serve different instrumentation from many domains. However, such an approach has one significant disadvantage: limited functionality, since it is not possible to create one common interface for every type of equipment. The experience and lessons learned from the VLab project allowed us to propose a new approach to building remote instrumentation systems: the Kiwi Remote Instrumentation Platform [20], conceived as a skeleton for future systems. This platform consists of several components with well-defined functionalities and communication interfaces. Carefully selected components from the Kiwi Platform will become the base for building new remote instrumentation systems. The Platform provides a set of components to control and manage scientific equipment and sensors, like cameras, weather, air pollution and water flow sensors, and others.
All the equipment connected to the Kiwi Platform can be controlled remotely through a single, uniform user interface. The Platform also allows users to design and run so-called observation workflows: it is possible to set up a sequence of operations running from data acquisition, through data processing, to visualization. Such a workflow can be launched periodically by the Kiwi Workflow Manager component with a desired frequency and at a desired time. For instance, the system administrator can plan a workflow with a camera taking pictures twice a day. The device can be set up so that several points of a scene are shot (Fig. 1). The interactive e-Learning system for environmental science teaching is an example of a system built on top of the Kiwi Platform. The main objective of the project is to develop an interactive e-Learning platform for areas like water management, air pollution, deadwood or phenology (Fig. 2). In order to make the study process more interesting, the Platform will be equipped with dynamic content, like data collected by measurement devices and sensors or high-quality images from the observation scenes. Kiwi is also used to control and manage all the sensors deployed in various locations in the forest near Pozna´n. The basic sensor suite measures wind speed and direction, pressure, temperature, relative humidity, precipitation, etc. Optional sensors can be added to measure, e.g., water level, soil/water temperature, and global and net solar radiation. It is believed that a successful e-Learning platform should provide means to add new devices and sensors to the existing infrastructure.
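The periodic observation workflow described above (acquisition, processing, visualization, launched with a desired frequency) might be sketched as follows; the stage functions and their names are illustrative assumptions, not the actual Kiwi Workflow Manager API:

```python
# Illustrative sketch of a periodic observation workflow (acquire ->
# process -> visualize); all names are hypothetical, not the Kiwi API.

def acquire():
    return {"image": "raw-frame"}          # e.g. a DSLR shot of the scene

def process(data):
    data["processed"] = True               # e.g. cropping, exposure correction
    return data

def visualize(data):
    return f"rendered:{data['image']}"     # e.g. publish to the portal

def run_workflow():
    return visualize(process(acquire()))

def schedule(workflow, runs_per_day):
    """Run the workflow a fixed number of times; a stand-in for a real
    scheduler that would sleep 24 h / runs_per_day between runs."""
    return [workflow() for _ in range(runs_per_day)]

results = schedule(run_workflow, 2)        # e.g. a camera shooting twice a day
print(results)
```

A production scheduler would of course trigger the runs at configured times of day rather than in a loop, but the acquire/process/visualize pipeline is the same.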
Fig. 1 Kiwi – remotely controlled DSLR camera
Fig. 2 A sample VL based on the Kiwi platform
An automated, remotely controlled system for fauna and flora observations has been introduced, based on the Kiwi Platform. High-quality DSLR photo cameras are used to provide a comprehensive view of a scene. The cameras can be set up to take high-resolution pictures with a desired frequency and at desired times. It is also possible to set up "points of interest" within an observation scene, each photographed with a desired frequency.
3 Security Issues in Virtual Laboratories

Besides the strictly technical problems, virtual laboratories involve a number of security problems associated with the informatization of access to scientific instruments. First of all, virtual laboratories are usually heterogeneous environments. They are usually not built from scratch, but rather based on providing remote access to already existing scientific devices. Those instruments usually have a local IT infrastructure (in most cases a local control workstation, a storage device or means for manipulating the hardware). This infrastructure is often relatively aged and, additionally, may vary completely among the controlled devices. Therefore, the infrastructure itself may not contain any built-in security solutions (as IT security was not considered previously), or those measures may be only general and insufficient. Moreover, adapting new security solutions to protect those aged infrastructures may be time-consuming and expensive. On the other hand, due to their heterogeneity, a solution able to secure one device may not be usable for another; thus, applying dedicated security solutions for the whole virtual laboratory will require even more resources. Another obstacle is associated with the human factor. People often have difficulties in accepting and understanding thorough changes in their working environment, especially when those changes are not directly associated with their research area. To be more concrete, when IT solutions are introduced, they are often perceived as hostile and incomprehensible (which is why it is so important for them to be user-friendly and fit the needs of their users – not always the case). Moreover, for new users of IT infrastructures, the issue of IT security is usually completely or partially new. The above applies especially in communities that are just starting to use networking solutions.
Virtual laboratories often merge scientific devices from different countries and continents. If the researchers (and/or the administrators) are not sufficiently educated through appropriate training, they will not be convinced to use encrypted e-mail, apply strong passwords, keep their networked workstations up to date, or avoid clicking malicious links and browsing Web pages that may contain dangerous content. We have experienced cases where it was impossible to convince a foreign research centre to use, for example, a secure way of sending passwords through the Internet. Although training and applying security solutions cost effort, they are unavoidable from the security point of view. However, IT security specialists cannot resolve this problem by themselves, as it is not purely technical.
As for technical threats, the goal of virtual laboratories is to provide access to rare and expensive scientific instruments. Therefore, an attacker might abuse the control infrastructure of a device and even damage it (a real danger for, e.g., the radio telescope in the observatory located in Piwnice near Toru´n, where issuing a specific sequence of commands may destroy its servomotors). Although damaging a radio telescope does not result in any direct gain for the attacker, his or her goal may simply be to cause losses to the research centre. Technically, and with respect to the STRIDE model, it would be a classical DoS attack (directed not at the data, but at the equipment). Several scenarios (like monitoring a forest) assume that the instruments are left unattended, so pure physical security matters as well. The scientific instruments are not the only valuable part of the infrastructure. Grids are often used in the virtual laboratories concept in order to increase the computational power of the whole infrastructure, which is required to be extremely large in applications like nuclear physics or weather modelling. Those resources, when successfully attacked, may be used by network attackers, e.g. to conduct DDoS attacks. A botnet built from a grid is attractive to attackers, as a great number of different IP addresses may be used during the attack (making it overwhelming to block them on firewalls) and, moreover, each host of the botnet contributes to the available network bandwidth. The computational power of the grid, especially when supported by its storage facilities, may also be abused, e.g. for cracking passwords. The concept of virtual laboratories is inseparably associated with research and science. In an age of competitiveness and industrial espionage, research results may be a significantly more attractive target than resources or hardware.
When not addressed by security solutions, they will be in danger (as will other data stored within the same infrastructures, e.g. password hashes, personal records and internal databases). Finally, like every IT infrastructure, a virtual laboratory may suffer from all the types of network attack spreading through the Internet. VLs are accessed through Web portals, therefore standard Web application threats apply (see, e.g., [21] for more information). Although a Web access portal, as the entry point to the VL structure, appears to be the most apparent attack vector, it must be remembered that another opportunity for an attacker may be a misconfigured server offering unnecessary and/or insecure services – like Telnet, FTP or even SSH with account names that are easy to guess and protected with weak passwords.
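As a trivial illustration of the last point, an exposure review can flag services that a VL server should not offer; the port/service classification below is an invented example, not an exhaustive policy:

```python
# Sketch of a trivial service-exposure review; the port classification
# is an illustrative example, not a complete hardening policy.

RISKY = {23: "telnet", 21: "ftp"}   # cleartext protocols: disable outright
REVIEW = {22: "ssh"}                # acceptable, but its configuration needs audit

def review_services(open_ports):
    """Classify each open port as 'risky', 'review' or 'ok'."""
    report = {}
    for port in open_ports:
        if port in RISKY:
            report[port] = f"risky: disable {RISKY[port]}"
        elif port in REVIEW:
            report[port] = f"review: harden {REVIEW[port]} (keys, no weak passwords)"
        else:
            report[port] = "ok"
    return report

print(review_services([21, 22, 443]))
```

In practice the list of open ports would come from a scan of the server, and "ok" would still mean "subject to the usual audit".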
4 Applying the Defence-in-Depth Strategy

As has already been discussed, in order to protect a sophisticated (large, distributed, heterogeneous) IT infrastructure, it should be logically decomposed into smaller, distinct subsystems. All subsystems should then be secured appropriately and, additionally, the interfaces between the subsystems have to be considered.
Fig. 3 Decomposition of a virtual laboratory into smaller layers
According to the general structure of virtual laboratories, we decompose them into separate layers: the instruments layer, the VLAB platform and the access platform. The first and last layers are the most distinct, while the VLAB platform, although a consistent subsystem, still has a relatively sophisticated structure in which subcomponents with interactions between them may be differentiated. Therefore, the VLAB platform has been divided into four lower-level layers, addressing systems, network, computational resources and digital libraries. Figure 3 shows the conceptual decomposition of the infrastructure. It is worth noting that there may exist security solutions addressing more than one logical layer (examples of such systems are described later).
4.1 Instruments Layer

The scientific instruments accessible in virtual laboratories are rare; otherwise they would not have to be accessed in that manner. As has already been mentioned, one specific threat is damaging a scientific device (either accidentally or intentionally). The instruments are usually remotely accessible via a dedicated interface. There are scenarios (e.g. the NMR use case) where the issued commands may have to be applied manually by a local operator of the device. This decreases the threat, although the local operator has to know exactly which commands or parameters may be inappropriate for the device. However, in other scenarios (radio astronomy), the interface may allow the device to be operated directly. In the majority of cases, the interface of the instrument does not contain any built-in remote access facilities. The interfaces are usually complicated for inexperienced users. Moreover, they are closely tied to the instrument, and it is unrealistic or unjustified to build alternative ones. Remote access
may thus be obtained by controlling the local management station, not the device itself (e.g. by a virtual desktop). Therefore, at least the following threats must be considered:
• The direct interface of the device may be too complicated for a remote user, and an inappropriate command may be issued by accident (not all interfaces contain appropriate software limitations prohibiting that)
• A malicious remote user may abuse the remote control interface in order to gain control over an inappropriately (insecurely) configured management station, thus, e.g., intercepting the research results of others, leaving a backdoor or switching off the device
• A malicious attacker may intercept the network traffic and read the commands sent, or modify them to be inappropriate; the legitimate user would then be accused of malicious activity
In scenarios applied several years ago within the VLab project, one of the solutions was based on RealVNC software, supported by Zebedee secure tunnel software for encrypting the communication channel. Other remote control software may be used as well, e.g. UltraVNC, which is free of charge and – supported by an additional plug-in – provides encryption as well. The above, however, protects only against intercepting and potentially influencing the network traffic. Protecting against abuse of the functionality of the remote control software is a complex task. It has to start with an appropriate selection of the remote control application: first, it should provide the functionality of controlling only a single application (the device interface). Unfortunately, this will only make the attack more difficult – even a single Windows desktop application may offer functionality that might be abused: e.g. by using the File | Open menu command one could browse directories and delete system files. However, restricting remote control to a single application conforms with the minimum privileges principle and thus should be applied.
The device management station itself should be configured according to the same rule. For example, the remote control application server should be run with the minimum possible privileges (never under the administrator account!). Significant operating system files (and also those of other users) must not be accessible to that account. In addition, appropriate and detailed logging and accounting facilities must be implemented. They might be helpful in detecting some attacks at an early stage, and may then be suitable for forensics. There still remains the problem of denying potentially dangerous parameters or sequences of commands issued to the device. Without implementing dedicated solutions, however, it may be only partially solved. For scenarios with access portals (or any other interface between VLAB users and instruments), the entered data should be carefully filtered on the server side. The sets of commands or values that may damage the device must be known to the security specialists and the developers (local instrument operators must be interviewed and the values put into an appendix to the suitable security policy). In addition, an intrusion detection system (IDS) might be applied.
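Server-side filtering of the entered data, as recommended above, can follow an allowlist approach that also refuses shell metacharacters. A minimal sketch – the command names, parameter ranges and tool name are hypothetical:

```python
import re

# Sketch of server-side allowlist filtering for instrument commands.
# Command names, parameter ranges and the tool name are hypothetical.

ALLOWED = {
    "move_azimuth":   (0.0, 360.0),
    "move_elevation": (5.0, 88.0),   # positions outside could stress servomotors
}

def validate(command, value):
    """Accept only allowlisted commands with in-range numeric parameters."""
    if command not in ALLOWED:
        return False                 # unknown command: reject by default
    lo, hi = ALLOWED[command]
    return lo <= value <= hi

def safe_argv(param):
    """Refuse shell metacharacters and pass the parameter as a separate
    argv element, so it can never be interpreted as a second command."""
    if not re.fullmatch(r"[\w.\-]+", param):
        raise ValueError("illegal characters in parameter")
    return ["./instrument_cli", param]   # placeholder tool name

assert validate("move_elevation", 45.0)
assert not validate("move_elevation", 2.0)   # would stress the servomotors
assert not validate("format_disk", 1.0)      # unknown command, rejected
```

The dangerous values collected from the local operators would populate (or further restrict) such an allowlist, with everything not explicitly permitted denied by default.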
4.2 VLAB Platform Layer

This layer covers the components directly associated with the core of the VL concept: remote access to scientific devices and maintaining the research results. The instruments themselves are not included, as they also function separately and should therefore be treated as a distinct layer. However, the described functionality is complex enough to be divided into separate subcomponents. Those smaller modules correspond to different facets of security – in terms of both the VL structure and its functionality: systems, communication channels, authorization policies and digital libraries (or, more generally, storing research data). This structure is mainly based on our experience from projects oriented towards virtual laboratories.
4.2.1 Systems and Communications

The security issues associated with these subcomponents are the most general of all discussed in this paper. The virtual laboratory application does not introduce any specific considerations here, and therefore the description provided is exceptionally brief. However, these issues still have to be remembered and addressed while building the whole infrastructure. Each system (server, computer, network hardware unit, etc.) has to be secured according to known security best practices for the given system type and the functionality provided. Security measures must be introduced on the system itself – its environment must not be trusted. Even if the environment provides sufficient security measures, it may change in the future. In the case of the Therac-25 device [22], the controlling software of the previous versions of the device was supported by many hardware safety interlocks. Unfortunately, in the new version of the Therac the software was not sufficiently improved to handle full control over the system, which finally resulted in cases of radiation overdose (including at least three lethal cases) [23]. Once the systems are secured, the interactions (interfaces) between them must be taken into consideration, especially the protection of the public communication channels. An appropriate level of confidentiality must be assured (encrypting the network traffic where necessary) and proper authentication mechanisms must be provided (e.g. a PKI based on X.509 certificates). It is recommended that mutual authentication be applied. Server-to-client authentication, which is mandatory [24], protects the user against connecting to an inappropriate remote system (whose owner could, e.g., intercept the parameters sent to the scientific device – a precious piece of information on the research being conducted by the user).
On the other hand, client-to-server authentication (which is optional and thus often omitted) would protect the management station: it would facilitate logging and accounting, and obstruct attacks from unknown sources (e.g. unauthorized "switch off" requests). There are appropriate open solutions (e.g. OpenSSL [25], OpenCA
[26]) that may be used without increasing the price of the whole infrastructure (although, naturally, effort is required to install and configure them properly). Finally, each system (and also the infrastructure as a whole) has to be the subject of a security audit conducted by independent security specialists. "Independent" means here that the auditors should not have been involved in the design or implementation of the analyzed system – not necessarily that they must not be from the same company or research unit.
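Mutual authentication as recommended above can be configured, for example, with OpenSSL-backed TLS; the sketch below uses Python's standard ssl module, and the certificate paths are placeholders for those issued by the laboratory's PKI:

```python
import ssl

# Sketch of a server-side TLS context that *requires* client certificates
# (mutual authentication). The file paths are placeholders; when omitted,
# the context is returned unconfigured, for inspection only.
def make_server_context(cert=None, key=None, client_ca=None):
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    if cert and key:
        ctx.load_cert_chain(certfile=cert, keyfile=key)   # server's own identity
    if client_ca:
        ctx.load_verify_locations(cafile=client_ca)       # CA that signed the clients
    ctx.verify_mode = ssl.CERT_REQUIRED   # reject clients without a certificate
    return ctx
```

With `verify_mode` set to `CERT_REQUIRED`, a client presenting no certificate (or one not signed by the configured CA) fails the handshake, which gives the management station the client-to-server authentication discussed above.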
4.2.2 Identities, Authorization and Grid

In order to work properly, virtual laboratories must provide sufficient identity management facilities (authorization and authentication mechanisms). This means not only differentiating between distinct users and their activities, but also implementing the minimum privileges principle: differentiating between separate roles in the infrastructure and granting those roles only the strictly necessary rights. During our work with virtual laboratories, the applied scenarios have usually assumed utilization of the grid computational model, especially to enable distributed processing of large amounts of data. There are several specific authorization mechanisms and scenarios used in grids, which are discussed below. We have developed three main use cases for virtual laboratories, all strongly connected to a grid infrastructure. This implies a sophisticated authorization policy and, therefore, usually an extended authorization system. Below, the connections between the use cases (based on their layered structure) and the authorization policies are explained in detail. For the first use case – NMR spectroscopy – three separate functional layers are defined: Access, Grid and Resources. This scenario uses an external authorization system: the Grid Authorization Service (GAS) [27]. The access layer provides the necessary interaction facilities between the system and its users. An internal security model has been prepared for this layer, containing items like laboratory groups (groups of users), user roles (superadmin/supervisor, admin, common user) and objects (sets of devices accessible to a specific laboratory group). All such policies are stored in the VL server database. In turn, the next layer – the grid layer – is connected to GAS. This service is responsible for storing grid policies and for evaluating authorization decisions based on known users, grid services and resource requests.
Figure 4 presents a sample authorization policy for the spectrometry laboratory. Two object definitions are created: laboratory and device. Several instances of these definitions are shown, together with the set of operations which can be performed on each object. Each instance of an object has a unique name. In addition, several groups and subgroups of users are defined in one structure, making it possible to distinguish user rights to a specific device. The GAS policy stores authorization knowledge as triplets: "user A can perform operation Op on object Ob". The last layer – resources – has an internal security policy, which is compared with the access rights of the operating system to which a device is connected. Access is granted based on local
Fig. 4 Authorization policy for NMR spectrometry scenario
user rights to run applications. The first scenario treats the authorization processes especially seriously, i.e. on each layer authorization is one of the most critical security aspects. One disadvantage is the missing communication between the layers from the authorization perspective: it results in separate management actions on each layer and the need for manual synchronization of the policies between layers. The second use case, EXPReS Radioastronomy [28], treats authorization in a different way. It is based on trust between users in a limited community. It covers three functional layers: Portal (an internal, Web-based system), Platform and Devices. As mentioned above, full trust inside the community is required. The platform (grid) layer is organized as a separate computing cluster, accessed only by the community members. The most critical layer – Devices – makes it possible for the community to use instruments like telescopes. Each action on such a device requires a manual confirmation by the telescope operator. Due to the fact that it is possible to damage a radio telescope by executing a specific set of operations, this scenario is limited to a closed and trusted community. The authorization policies are similar for each community member. There is only
Fig. 5 Authorization policy for the forest environment scenario
one authorization feature important for this scenario: each operation or set of operations on a device has to be accepted (and thus verified) by the device operator, as even a random set of operations can damage the device. As the last use case, the forest environment is described, applied to support a multimedia nature lesson in a school. This scenario takes several authorization aspects into consideration; therefore, it can be regarded as a proper reference solution from the authorization perspective. This type of solution will also have a multilayered architecture, which makes it possible to prepare the interaction between end users and devices. The extended authorization system will be used not only by one layer but by the whole system. The authorization system will contain several services connected together (for exchanging policy processes). In addition, on the lower layers (the resource and grid layers), it will be possible to embed the policy – e.g. proxy certificates for the components not connected directly to the authorization system. The policy for the topmost layer (the portal layer) will be defined inside the portal in the RBAC (role-based access control) [29] model and automatically propagated to the authorization system, which will facilitate easy management of the authorization policies. There are several portal implementations which can be used in this scenario: an example to be recommended as containing extended authorization functionality is the Liferay portal with its role management facilities [30]. A policy similar to that of the first described use case can be defined: for a specific user or for a group of users (e.g. a virtual organization). Each device and organization will have its own definition and a set of operations which may be performed in its context. Also, as shown in Fig.
5, it will be possible to define complex authorization scenarios which make the authorization decision dependent on the results obtained from previous steps or other modules (e.g. a pupil will be able to enter the next stage of work only after he or she has completed the previous one).
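The triplet form of the GAS policy ("user A can perform operation Op on object Ob") can be sketched as a simple set lookup; the user, operation and object names below are invented examples:

```python
# Sketch of a GAS-style authorization policy stored as (user, operation,
# object) triplets; all names below are hypothetical examples.

policy = {
    ("alice", "start_experiment", "nmr-spectrometer-1"),
    ("alice", "read_results",     "nmr-spectrometer-1"),
    ("bob",   "read_results",     "nmr-spectrometer-1"),
}

def is_authorized(user, operation, obj):
    """Grant access only if the exact triplet is present in the policy
    (minimum privileges: nothing is allowed by default)."""
    return (user, operation, obj) in policy

assert is_authorized("alice", "start_experiment", "nmr-spectrometer-1")
assert not is_authorized("bob", "start_experiment", "nmr-spectrometer-1")
```

Group definitions and object hierarchies, as in Fig. 4, would expand to such triplets before evaluation; the default-deny lookup itself stays the same.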
However, authorization and roles are not the only security-related facets of the grid. At least two other threats should be considered (besides the standard requirement of properly securing each node of the grid). The first of them is the opportunity to execute applications on remote grid nodes. Scenarios where a grid user may send his or her own application to be executed should be absolutely excluded (the VLab project did not accept the above). Moreover, even if the user is merely able to provide input parameters and/or data to one application out of a strictly defined set, those parameters should be exceptionally carefully sanitized. Otherwise a malicious user could execute remote code on the grid nodes. For example, if the user parameters were directly appended to a shell script call or application, one could provide legitimate parameters followed by a semicolon metacharacter (;) and a further, malicious command to be executed. The other issue, although probably significant only in a limited set of scenarios, is potential indirect information disclosure. Grid infrastructures are based on the distributed computing model, where one large task is divided into subtasks (jobs) that are executed on different systems. It is potentially possible that a single job, received by a node controlled by a potentially distrusted virtual organization, may reveal sensitive information about the research (e.g. the symbols of investigated chemical substances or the coordinates of a reviewed area of space). The very fact of conducting specified research may also be confidential for a researcher. In such cases, it must be considered whether some kind of encoding of sensitive data may be applied; if not, abandoning the distributed computational model should be considered.

4.2.3 Research Data and Digital Libraries

Virtual laboratories are built in order to provide access to valuable scientific devices also to those research centres that cannot afford an expensive infrastructure.
Therefore, especially in the distributed computational model of the grid, it is possible that VL users will tend to store their research results in external repositories (digital libraries), at least the part that is finally going to be published (but not necessarily expected to be accessible during the research itself). Such repositories are usually an excellent solution for the (intended) sharing of research results: e.g. a researcher may check whether similar work has already been conducted, or use some earlier results as a reference. The described situation requires applying suitable security solutions, although none of them is specific to digital libraries. Authorization policies should be applied to differentiate between data that are accessible to anyone (world readable, potentially also world writable), data that may be available to members of the same research group (or any other defined subset of users), and confidential data (readable and writable only by their owner). In addition, strict accounting and logging policies must be applied. Authorization solutions based on access control lists (ACLs) or RBAC should meet those requirements. Communication channels should be protected against eavesdropping (by encrypting network connections) and
impersonation (e.g. by an appropriate PKI implementation). Finally, the systems that compose the digital library must undergo an independent security audit, just as the computational nodes must.
4.3 Access Platform

The access platform is a key component of the virtual laboratory infrastructure in terms of security, because this is where the users enter their data in order to conduct their research. As mentioned, Web technologies are commonly used to provide access to VLs. This is an excellent approach in terms of interoperability and the accessibility of rare instruments: Web standards are relatively well established and exceptionally widespread. On the other hand, the basic Web standards (like HTTP) were developed many years ago and, although relatively simple, are not security-aware. Therefore, additional security measures have to be applied, addressing Web application vulnerabilities.

The Web access platform (an example of which is the Kiwi Remote Instrumentation Platform access portal) is intended to be deployed for numerous users and communities. Therefore, “security through obscurity” must be ruled out from the start. On the other hand, opening the platform to the world will expose the VL to more potential attackers. There are many types of well-known security vulnerabilities in Web applications (see [21]), and in addressing them virtual laboratories differ little from other types of portals and applications. However, the possibilities of identity theft and of compromising remote systems deserve particular emphasis. The former may lead to impersonation and to stealing the research results of another user, while the latter results in a complete takeover of users, instruments and data (the whole infrastructure may also be used for further attacks). HTTP is a stateless protocol, which makes applications based on it simple to implement, but requires additional mechanisms in order to differentiate between distinct users.
Usually, HTTP session facilities are used, with information on users stored in files or databases on the server side, and cookies (which may be understood as simple, short text files) maintained by the browsers on the client side. Although the whole mechanism is widely understood, in many cases it is implemented in its simplest form: session state is stored in files on the server side and the cookies are not associated with any particular attributes. In order to use HTTP sessions not only for differentiating between users, but also to increase the security level of the application, at least several good practices should be applied. First, the session identifier should be random and long enough (current PHP implementations already assure that, for example). The session lifetime, on the contrary, should be short enough – that is, the session should expire within a reasonable amount of time. The exact value depends on the particular case – for sensitive applications (such as remote administration of a radio telescope) it should be no longer than 5–10 min. To loosen the limitations, a sliding expiration
Defence in Depth Strategy: A Use Case Scenario of Securing a Virtual Laboratory
window may be considered (the session lifetime then counts not from the session's beginning, but from the last activity of the user). Logging out must cause immediate session invalidation on the server side. As an additional security measure, associating the session identifier with the IP address of the user might be considered. A survey of Polish e-commerce portals found this rarely implemented (only 2 out of 50 investigated [31]). This may be partially justified by the willingness to help mobile users; however, in scientific communities users can generally be expected to keep a static IP within a single session expiration interval (with the possible exception of a remote system administrator who is travelling while handling a user request). To strike an even better compromise between functionality and security, a mismatch between the session ID and the IP address need not expire the session, but might, e.g., merely raise the inspection level applied by an IDS to that particular user's activities. Furthermore, additional attributes should be applied to cookies: httponly and secure [32]. Although not all versions of popular browsers (especially older ones) support them, applying the attributes will never do any harm. The httponly attribute disables access to cookies from active scripting languages (such access is the main idea of XSS attacks), while secure ensures the cookie is transmitted only over the secure HTTPS protocol. The latter requires the additional effort of providing HTTPS services, but for applications that deal with user identities (which virtual laboratories evidently do), this should be one of the very first requirements to be defined. To conclude the topic of sessions and cookies, it is recommended to consider developing a custom session handling mechanism on the server side. Instead of simply storing the session data in file system objects, database facilities should rather be used.
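The practices listed above – a long random identifier, a short lifetime with a sliding expiration window, optional IP binding and immediate invalidation on logout – can be sketched as follows. This is an illustrative server-side store (a dictionary stands in for the database table); the cookie carrying the identifier would additionally be sent with the httponly and secure attributes.

```python
import secrets
import time

SESSION_LIFETIME = 600  # ten minutes, per the 5-10 min guideline for sensitive applications

class SessionStore:
    """Server-side session store; a dict stands in for the database table."""
    def __init__(self, lifetime=SESSION_LIFETIME, bind_ip=True):
        self._sessions = {}
        self.lifetime = lifetime
        self.bind_ip = bind_ip

    def create(self, user, ip, now=None):
        now = time.time() if now is None else now
        sid = secrets.token_hex(32)  # random and long enough
        self._sessions[sid] = {"user": user, "ip": ip, "last_seen": now}
        return sid

    def validate(self, sid, ip, now=None):
        """Return the user name for a valid session, or None."""
        now = time.time() if now is None else now
        session = self._sessions.get(sid)
        if session is None:
            return None
        if now - session["last_seen"] > self.lifetime:
            del self._sessions[sid]          # sliding window exceeded: expire
            return None
        if self.bind_ip and session["ip"] != ip:
            return None                      # IP mismatch: reject (or raise IDS inspection level)
        session["last_seen"] = now           # slide the expiration window
        return session["user"]

    def logout(self, sid):
        # Logging out must invalidate the session immediately on the server side.
        self._sessions.pop(sid, None)
```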
For example, the PHP scripting language provides highly configurable custom session handling routines (the session_set_save_handler library function plus additional callback functions invoked when necessary) that allow one, e.g., to implement additional logging functionality or IP address verification [33]. It has been noted that applying security attributes to cookies helps protect against Cross Site Scripting attacks. This is true, although Web application developers must be aware that this is only a partial, supplementary protection. The main countermeasure against not only XSS attacks, but also many other types (e.g. SQL Injection, Cross Site Request Forgery and Command Injection), is careful sanitization of all user input data. It should never be assumed that the infrastructure being built will never be attacked (or has never been attacked); detailed discussion of these (and several other) excuses used to justify putting little emphasis on security may be found in [34]. Functionality is crucial for the application users, but it has to be supported with built-in routines that increase the security level of the software. The designers of an application must include data sanitization functions and the developers must implement them properly. Sanitizing input data (which, in general, must be treated as malicious) prevents the majority of common Web attacks. Moreover, sanitization cannot be limited to the client side. It is useful for
the user's convenience (e.g. to point out that an entered value is improperly formatted or missing), but it can easily be bypassed with a proxy that modifies the HTTP request after it is sent. Several approaches may be applied in data filtering. Blacklisting is based on enumerating all known malicious sequences, whose presence is checked in the input data: if at least one sequence is found, the data are rejected, otherwise they are accepted. This approach is easier to apply to data that are hard to structure (e.g. comments on a forum where a number of HTML tags are allowed), but may become inefficient as the number of malicious strings grows, e.g. as new types of network attacks are introduced. Whitelisting appears to be a better approach: the definition of properly formatted data is known (e.g. a postal code in Poland: two digits – a dash – three digits) and every input string not conforming to the pattern is rejected. This type of input data processing is faster, more convenient and easier to implement; however, not all sorts of data fit it. Regular expressions are a great help for the developer: they may be used to define both accepted and rejected types of data. However, even those definitions must be designed with care, as there are attacks aimed at causing denial of service in regular expression parsers [35]. Finally, even if the format of the data is correct, their value still has to be checked. For instance, if a three-digit number denoted the angle by which a radio telescope should be turned, one should consider whether there is a risk of damage if a malicious user enters, say, 720. It should be ensured that an appropriate sanitization routine also exists on the server side (at the device management station). A detailed description of how to defend against all particular types of attacks is beyond the scope of this work. The last security issue associated with the Web access platform that will be mentioned here is information disclosure.
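The whitelisting and value-checking approach described above can be sketched as follows; the postal-code pattern and the telescope-angle example come from the text, while the turn_telescope command name is hypothetical.

```python
import re

# Whitelist patterns: the accepted format is defined, everything else is rejected.
POSTAL_CODE = re.compile(r"^\d{2}-\d{3}$")  # Polish postal code: two digits, dash, three digits
ANGLE = re.compile(r"^\d{1,3}$")

def validate_postal_code(text):
    if not POSTAL_CODE.fullmatch(text):
        raise ValueError("not a valid Polish postal code")
    return text

def validate_angle(text):
    """Whitelist the format first, then check the value against physical limits."""
    if not ANGLE.fullmatch(text):
        raise ValueError("angle must be one to three digits")
    value = int(text)
    if not 0 <= value <= 360:
        raise ValueError("angle out of range")  # reject e.g. 720 before it reaches the device
    return value

def build_turn_command(angle_text):
    angle = validate_angle(angle_text)
    # Building the command as an argument list (to be run without a shell,
    # e.g. subprocess.run(cmd)) defeats ';'-style command injection.
    return ["turn_telescope", "--angle", str(angle)]
```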
Network attackers, before actually trying to compromise a system, attempt to gather as much information about it as possible. Therefore, any misconfiguration that leaks even the smallest piece of information (like a server signature) may be harmful. Other examples are: switching on directory indexing (which allows the directory contents to be viewed in the user's browser), configuring the scripting language to display error messages in generated pages, or leaving unnecessary content in directories with easily guessed names (like old, test or backup – or even leaving the infamous phpinfo output under /phpinfo.php). More detailed information about avoiding common Web attacks may be found in numerous articles, papers and portals. The OWASP project may be recommended as a general and broad source of information, while the set of security best practices from the EGEE-II/EGEE-III projects [36] may be suggested as a condensed source of useful advice. And once again, as the final stage of securing the access portal, a thorough security audit should be conducted. The recommended form of the audit is a combination of penetration tests and a review of the configuration and source code.
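Part of such a configuration review can be automated. The sketch below flags two common information leaks in HTTP response headers; the header names are standard, but the checks are illustrative rather than exhaustive.

```python
def audit_response_headers(headers):
    """Flag response headers that leak implementation details to an attacker."""
    warnings = []
    server = headers.get("Server", "")
    # A detailed signature like "Apache/2.2.3 (CentOS)" aids fingerprinting;
    # a bare product name without version numbers reveals much less.
    if any(ch.isdigit() for ch in server):
        warnings.append("Server header reveals version information")
    if "X-Powered-By" in headers:
        warnings.append("X-Powered-By header reveals the scripting platform")
    return warnings
```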
4.4 Multi-layer Security Components

In the previous sections, a virtual laboratory has been decomposed into layers and security measures have been suggested for each layer. In this section, we would like to present additional security systems whose scope may cover several layers: an intrusion detection system (MetaIDS) and the SARA solution for administration, inventory and automatic reporting. Both systems have been designed and are being implemented and deployed at Poznań Supercomputing and Networking Center as part of national research projects.
4.4.1 MetaIDS: Intrusion Detection System

IDSs are often used to deal with dynamically changing security conditions. This additional security layer may react to the most recent threats in areas where standard security countermeasures cannot be applied effectively. An IDS detects and reports the malicious behaviour of an attacker rather than blocking the preconditions of the attack, but it nevertheless increases the security level of the protected infrastructure. Several usage scenarios of an IDS in the VL structure may be developed.

In the previous sections, we have described many potential threats and attack scenarios. As mentioned before, the attack surface is broad and includes numerous entry points. In order to monitor all of them properly, an infrastructure administrator would have to analyze the logs of many applications simultaneously. The logs are stored in separate files, scattered across the file system (or even across several systems). This complicates the task and makes correlating information extremely difficult and time consuming. An IDS may help the administrator tackle a large amount of distributed information and, moreover, recognize patterns of attack that are often too complicated to detect manually. In addition, the IDS analyzes data continuously and may issue an e-mail or SMS alert in an emergency. All the important data regarding a particular incident are then normalized and stored in one place, making them easy to work with (e.g. for later forensic purposes).

A standard host-based IDS works on a single machine and is therefore unable to detect more sophisticated attacks performed using many machines located in different parts of a corporate network or a distributed environment like a VL. For instance, an IDS installed on a Samba server will not be able to correlate events and detect the following attack:

1. The virtual laboratory Web access server is scanned from a particular IP address.
2. Malicious network traffic is sent to the VL Web access server.
3. A successful login to the Samba server occurs from the VL Web access server.

Port scanning is an activity which may be considered malicious, but it does not necessarily imply a successful break-in. Treating every port scanning event as a serious alert, an administrator would end up with numerous false positives and much effort
spent on verifying them. A successful login to the Samba server is not suspicious at all, but once a machine is marked as “compromised” (in this case, the VL Web access server), all its outgoing connections – whether to a Samba server, an FTP server or any other machine – are considered malicious and should be investigated immediately.

The distributed IDS MetaIDS, developed at Poznań Supercomputing and Networking Center as part of the Polish Platform for Homeland Security project [37], is able to detect intrusions and intrusion attempts based on information from many sources located across the protected network. In order to successfully monitor a particular environment – in this case a VL – it is crucial to link several information sources. All the important elements that work within the monitored infrastructure and store data in log files should be plugged into MetaIDS [38]. Because MetaIDS has a modular architecture, it is possible to implement, e.g., an additional module for a new and unique log source that was not included originally [38]. Basically, the system consists of a central management and analysis server and a set of agents installed on the protected systems. The agents send specified information to the server, where all data are analyzed and appropriate decisions are made. An agent (also called a “sensor”) is itself a lightweight, stateless monitoring program working on an unprivileged account. Detailed explanations are beyond the scope of this chapter and may be found, e.g., in [38], where the analysis and combination of different events are described as well. What is especially significant for distributed infrastructures is that MetaIDS offers facilities for cooperation with other IDSs. It therefore appears reasonable to deploy it in distributed infrastructures in which some parts are already protected by another IDS (e.g. when a new subnet with another scientific device is attached to the VL structure).
That approach assures better coverage of the overall attack surface. The communication may be bidirectional: IDS → MetaIDS and MetaIDS → IDS. In the first case (see Fig. 6), MetaIDS reads events detected by another IDS, understood there as a “rich sensor”. That type of configuration is especially valuable when MetaIDS cooperates with a network-based IDS like Snort [39]. MetaIDS sensors monitor events occurring in services running on servers, thus acting like a host-based IDS. Snort monitors events at a lower level and is therefore able to detect types of events that are not visible to host-based systems. By combining information from host and network sensors, MetaIDS may detect significantly more complex attacks. Monitoring another IDS requires a special MetaIDS module that filters the generated events and translates them into the form used by MetaIDS. Currently, events reported by Snort [39], Ossec [40] and any other IDS supporting the IDMEF [41] format (e.g. Prelude [42]) may be read. Communication in the opposite direction is also possible. In that case, MetaIDS stores detected incidents in a file, using the IDMEF format. The foreign IDS may read that file and use it for further analysis. If there is a need to “push” events
Fig. 6 MetaIDS in cooperation with another IDS system
directly to the foreign IDS (without the intermediate IDMEF file), it is possible to configure a MetaIDS action that automatically sends an IDMEF message using an external program.

Besides protecting, or participating in the protection of, distributed environments, MetaIDS may also serve virtual laboratories through dedicated sensor modules that monitor particular VL systems. For instance, it may be hard to install external protection software on the outdated systems that appear in research centres as device management stations; a simple agent that monitors logs or system calls may serve as a substitute. A similar sensor could also, e.g., intercept the values of parameters directed to the scientific device and check that they are not dangerous for the instrument.

4.4.2 SARA System for Support of Inventory and Monitoring

Because security is a process, in distributed and heterogeneous environments like VLs it is especially important to maintain appropriate software versions and suitably hardened configurations on all currently available nodes, as even a single vulnerable system may offer a broad entry point for an attacker. Dynamic environments, including VLs, are especially affected by this issue; as new nodes (computers, networking devices, laboratory equipment) appear within the infrastructure, they may introduce new attack vectors. Built-in security mechanisms are not enough to assure the desired security level (again, with special emphasis on heterogeneous environments). Therefore, as an additional security measure, we would like to propose a system for constant monitoring of a dynamic infrastructure with respect to the individual
security of its subsystems. Both active and passive control mechanisms are necessary, at different levels of granularity. A fast reaction mechanism that informs security officers about discovered vulnerabilities is required. The control should cover known (publicly disclosed) security holes and – whenever possible – take into account data from the repositories of software vendors. On the other hand, it is extremely useful to maintain a detailed registry of the systems used within the infrastructure. The structure and rules governing that registry must, in any case, be based on a consistent and detailed security policy. The problem of an accurate, formal and consistent description of the security state is itself non-trivial. The provided information should be appropriately structured, but also easy to understand and exchange (e.g. between different organizations within the virtual laboratory). A number of standards, specifications and repositories – some relatively new or still being established – may be utilized for those purposes; CVE [43], CPE [44], CVSS [45] and NVD [46] may serve as examples.

Currently, it is difficult to find a system offering the ability to control all systems within a highly heterogeneous infrastructure. Virtual laboratories are specific: the systems in use may be outdated due to software compatibility issues, and uncommon operating systems may also exist in such an environment. The Nagios software [47] with its plug-ins allows systems to be monitored, but it requires the installation of plug-ins on each of them. An additional difficulty for monitoring systems is the lack of agents (local programs controlling security on the nodes) for certain operating systems. The problem of agents was partially eliminated in Pakiti [48]; however, it uses the vulnerability repositories provided by some of the operating system vendors, which can significantly restrict the application area.
The SARA system takes a slightly different approach to the problem of correlating known security vulnerabilities with system configurations. A universal solution helps to minimize the significance of the human factor in controlling the security level under changing conditions (i.e. changing infrastructure contents and a changing set of known attacks). SARA combines information on systems with vulnerability data stored, with the help of several of the mentioned standards, in the National Vulnerability Database (which is updated every 2 h; SARA regularly checks for those updates). The standards make it possible to determine which systems may be affected and how critical the known vulnerabilities are. The system automatically issues alerts to administrators and security officers. Currently, SARA requires a certain amount of initial human work: information on systems has to be inserted and maintained manually. This overhead will be reduced in future versions, although it will probably never be eliminated entirely, especially in environments containing heterogeneous, uncommon and outdated systems (for such machines there will be no solution that reports installed software versions automatically). On the other hand, the system is an excellent inventory tool, recording the physical location of, and the responsibility for, each system. Unlike the existing solutions mentioned above, SARA can handle not only servers but also, e.g., network devices (Fig. 7).
Fig. 7 SARA system general architecture overview
The current version of the SARA system has been accepted for use in the national project PL-Grid [49] as an optional solution for deployment in various clusters. Furthermore, promising development plans have been outlined, expanding its capabilities and easing its interaction with users as well as with other security systems.
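The core of such a correlation – matching registered software versions against a vulnerability feed and producing alerts addressed to the responsible administrators – can be sketched as follows. The record format is a deliberately simplified, hypothetical one; the real SARA system works with CPE identifiers and CVSS scores taken from the NVD.

```python
def affected_systems(inventory, vulnerabilities, min_score=0.0):
    """Correlate registered systems with known vulnerabilities and collect alerts.

    inventory: registry entries with host, admin contact and installed software.
    vulnerabilities: feed entries with a CVE id, a severity score and the
    (product, version) pairs they affect.
    """
    alerts = []
    for system in inventory:
        installed = {(p["product"], p["version"]) for p in system["software"]}
        for vuln in vulnerabilities:
            if vuln["score"] < min_score:
                continue  # below the alerting threshold
            for product, version in vuln["affects"]:
                if (product, version) in installed:
                    alerts.append({
                        "host": system["host"],
                        "admin": system["admin_email"],
                        "cve": vuln["cve"],
                        "score": vuln["score"],
                    })
    return alerts
```

In a deployed system the `vulnerabilities` list would be refreshed from the NVD feed on each update cycle, and the alerts would be delivered by e-mail to the registered administrators.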
4.5 Security Considerations on Dynamic and Heterogeneous VLAB Environments

As already mentioned, security is dynamic even in a static, stable environment, because the knowledge about the security of particular systems or software evolves. The grid paradigm introduces another dimension of this dynamism, because new systems may be attached to the protected environment. In the case of a VLAB environment, the laboratory may be extended, e.g., with a new scientific device and an appropriate access point. The new element of the environment should be covered by the security systems as well – at least to a certain degree.
The KIWI remote instrumentation platform is intended to be accessible from the Internet (although it is not a problem to provide an installation limited to a corporate or campus network). However, the part of the environment that holds the devices should be granted a higher trust level than an arbitrary computer of a remote user of the KIWI platform. Therefore, this “internal” part, which may also be dynamic, must be strongly protected (in any case, the authors of the platform have no influence on the level of security applied by its users). If the trusted part of a virtual laboratory is to be extended, a trade-off between security and usability (understood as the ease of attaching new systems, potentially only temporarily) must be resolved. In particular, it may be forbidden or impracticable to introduce significant (or even any) changes in the attached systems, such as installing Nagios plug-ins, requiring specific properties of the infrastructure, or applying defined attachment procedures as proposed in the Clusterix grid project [20]. The multi-layer security components described in Sect. 4.4 address this issue in a different way. It must be noted that the deployment of MetaIDS in the new part of the infrastructure may be found unacceptable by the owner of that part. In addition, a plug-in for the particular operating system may not be available (however, installing only plug-ins on some additional systems, to extend the set of computers protected by the current MetaIDS server, would be relatively fast and easy). On the other hand, MetaIDS will obviously be able to immediately detect known attacks originating from the attached part of the environment. The SARA system, in turn, has been intended to implement the static approach to monitoring security and to complement other, dynamic security systems (or to assure a basic security level where dynamic systems are not applied).
A huge advantage of this approach is that, for securing the attached systems, SARA requires neither software installation nor specific configuration on the systems themselves. On the other hand, a certain amount of interaction with their administrator is necessary: the administrator has to register his or her systems on the SARA internal webpage and provide basic data on the software used, as well as an email address. This is enough to start receiving notifications about known security vulnerabilities.

Finally, the security systems may become outdated themselves – either because of a security vulnerability introduced by their developers, or because their knowledge bases need to be extended with the most recently discovered threats. Both of the proposed multi-layer security components provide software updating facilities. Note, however, that no SARA components are installed on the attached systems, and MetaIDS plug-ins would probably also be avoided on those systems; if ultimately installed, they may be updated by the MetaIDS server (which, like the SARA server, is controlled by the KIWI platform administrator). As for updating knowledge bases, SARA in general does not require that facility, as it uses NVD [46] – a public repository of software security vulnerabilities. However, there is a possibility of providing descriptions of vulnerabilities for software not covered by the NVD (which may be important for the highly specific
and rarely used systems that may occur in a virtual laboratory environment). MetaIDS plug-ins do not contain internal knowledge bases; they just send descriptions of the events to the MetaIDS server. With respect to the requirements of the owners of external systems, the described facilities may seem oriented more towards usability than security. On the other hand, a virtual laboratory is not a security service. A KIWI platform administrator may have no influence on organizational details in the dynamically attached part of the environment, but the platform offers basic security facilities with minimal (or even zero) changes made to the attached systems.
5 Summary

This chapter has attempted to aggregate all the security considerations that should be taken into account when building a distributed IT infrastructure, using the practical example of a virtual laboratory and drawing on the authors' experience. The reasons why security has to be considered thoroughly, and during all stages of project development, have been introduced. Virtual laboratories have then been described as heterogeneous and distributed environments, together with their general and specific security threats. In order to cope with possible attacks, the problem of securing VLs has been decomposed into several smaller pieces (layers) and the most significant security problems have been described for each. This chapter does not aim to be a complete review of all existing security issues, but numerous references have been provided for those who wish to apply the described recommendations in practice.

It has been emphasized that security should not be applied in a single place. The best results are obtained when security measures are present in different functional layers, and when solutions of different natures are combined. Even if a potential attacker is able to bypass or break one or two of them, the others will still stop (or at least limit) further damage. The described solutions and practices are and will be used to assure the appropriate security level of the KIWI Remote Instrumentation Platform, developed at Poznań Supercomputing and Networking Center as part of the WLIN (Virtual Laboratory of Interactive Teaching) research project [50]. The described security measures have been applied at an early stage of the KIWI platform's lifecycle; therefore, no direct comparison between unprotected and secured versions of KIWI could be provided.
On the other hand, providing built-in security from the very beginning makes it possible to stop the vast majority of network attacks immediately after the protected infrastructure is released.

Acknowledgement The authors thank Łukasz Olejnik (PSNC) for valuable advice and support in writing this chapter.
References

1. Internet Crime Complaint Center, 2009 Internet Crime Report, p. 2, http://www.ic3.gov/media/annualreport/2009_IC3Report.pdf
2. Eric Chabrow, Cyber Attacks Cost U.S. $1 Trillion a Year, 24 March 2009, http://blogs.govinfosecurity.com/posts.php?postID=159
3. Yuri Namestnikov, The Economics of Botnets, Kaspersky Lab, 2009, p. 10, http://www.securelist.com/en/downloads/pdf/ynam_botnets_0907_en.pdf
4. EGEE Project – EGEE Grid in Numbers: Infrastructure Status, October 2009, http://project.eu-egee.org/index.php?id=417
5. Bruce Schneier, Crypto-Gram Newsletter, May 15, 2000, http://www.schneier.com/crypto-gram-0005.html
6. Jan Kwiatkowski, Marcin Pawlik, Gerard Frankowski, Kazimierz Balos, Roman Wyrzykowski, Konrad Karczewski, Dynamic Clusters Available under Clusterix Grid, Lecture Notes in Computer Science, 2007, Vol. 4699/2007, pp. 819–829
7. Steve McConnell, Code Complete: A Practical Handbook of Software Construction (Second Edition), Microsoft Press, 2004, p. 548
8. Naresh Jain, Bala, Agile Overview – Embrace Uncertainty, http://www.slideshare.net/nashjain/agile-overview
9. Andrew Odlyzko, Economics, Psychology, and Sociology of Security, Lecture Notes in Computer Science, 2003, Vol. 2742, pp. 182–189
10. Michael Howard, David LeBlanc, Writing Secure Code, Microsoft Press, 2002, p. 347
11. FIPS PUB 199, Standards for Security Categorization of Federal Information and Information Systems, National Institute of Standards and Technology, US, February 2004, p. 2
12. Shawn Hernan, Scott Lambert, Tomasz Ostwald, Adam Shostack, Threat Modeling – Uncover Security Design Flaws Using the STRIDE Approach, MSDN Magazine, November 2006, http://msdn.microsoft.com/en-us/magazine/cc163519.aspx
13. National Data Storage project, http://nds.psnc.pl
14. Virtual Laboratory, http://edutechwiki.unige.ch/en/Virtual_laboratory
15. National Radio Astronomy Observatory, http://www.nrao.edu
16. Adamiak, R.W., Gdaniec, Z., Lawenda, M., Meyer, N., Popenda, Ł., Stroiński, M., Zieliński, Laboratorium Wirtualne w środowisku gridowym (Virtual Laboratory in a Grid Environment; in Polish), http://vlab.psnc.pl/pub/Laboratorium_Wirtualne_w_srodowisku_gridowym_ver.1.0.pdf
17. Lawenda, M., Meyer, N., Rajtar, T., Okon, M., Stoklosa, D., Stroinski, M., Popenda, L., Gdaniec, Z., Adamiak, R.W., General Conception of the Virtual Laboratory, International Conference on Computational Science 2004, LNCS 3038, pp. 1013–1016
18. VLab project home page, http://vlab.psnc.pl
19. Lawenda, M., Meyer, N., Rajtar, T., Okon, M., Stoklosa, D., Kaliszan, D., Kupczyk, M., Stroinski, M., Workflow with Dynamic Measurement Scenarios in the Virtual Laboratory, http://vlab.psnc.pl/pub/Workflow_With_Dynamic_Measurement_Scenarios_In_The_Virtual_Laboratory.pdf
20. Kiwi Platform, http://kiwi.man.poznan.pl
21. Open Web Application Security Project, OWASP Top 10 – 2010: The Ten Most Critical Web Application Security Risks, http://owasptop10.googlecode.com/files/OWASP%20Top%2010%20-%202010.pdf
22. Nancy Leveson, Clark S. Turner, An Investigation of the Therac-25 Accidents, IEEE Computer, Vol. 26, No. 7, July 1993, pp. 18–41
23. Therac-25 Case Materials – System Safety, ComputingCases.org, http://computingcases.org/case_materials/therac/supporting_docs/therac_case_narr/System_Safety.html
24. Pravir Chandra, Matt Messier, John Viega, Network Security with OpenSSL, O'Reilly, 2002, p. 109
25. OpenSSL: The Open Source Toolkit for SSL/TLS, http://www.openssl.org
26. OpenCA PKI Research Labs, http://www.openca.org
27. GAS – Grid Authorization Service, http://www.gridlab.org/WorkPackages/wp-6
28. Express Production Real-time e-VLBI Service (EXPReS), http://www.expres-eu.org
29. David F. Ferraiolo, D. Richard Kuhn, Role-Based Access Controls, 15th National Computer Security Conference (1992), Baltimore, MD, pp. 554–563, http://csrc.nist.gov/rbac/ferraiolo-kuhn-92.pdf
30. Pawan Modi, Liferay Portal Authorization & Role Management, http://www.scribd.com/doc/16804928/Liferay-Authorization-Role-Management
31. PSNC Security Team, E-commerce Security: Sessions and Cookies (in Polish), http://security.psnc.pl/reports/sklepy_internetowe_cookies.pdf
32. Ryan Barnett, Fixing Both Missing HTTPOnly and Secure Cookie Flags, December 2008, http://blog.modsecurity.org/2008/12/fixing-both-missing-httponly-and-secure-cookie-flags.html
33. Chris Shiflett, Storing Sessions in a Database, http://shiflett.org/articles/storing-sessions-in-a-database
34. Michael Howard, David LeBlanc, Writing Secure Code, Microsoft Press, 2002, pp. 434–437
35. Bryan Sullivan, Regular Expression Denial of Service Attacks and Defenses, MSDN Magazine, May 2010, http://msdn.microsoft.com/en-us/magazine/ff646973.aspx
36. PSNC Security Team, Security Best Practices for Administrators, Developers and Users of EGEE Infrastructure, 2008, https://edms.cern.ch/file/926685/1/EGEE_best_practices.pdf
37. PSNC in Polish Platform for Homeland Security, http://ppbw.pcss.pl/en
38. Jerzak, M., Wojtysiak, M., Distributed Intrusion Detection Systems – MetaIDS Case Study, Computational Methods in Science and Technology, Special Issue 2010, pp. 135–145
39. Snort intrusion detection system, http://www.snort.org
40. OSSEC – Open Source Security intrusion detection system, http://www.ossec.net
41. RFC 4765, The Intrusion Detection Message Exchange Format (IDMEF), http://www.ietf.org/rfc/rfc4765.txt
42. Prelude intrusion detection system, http://prelude.sourceforge.net
43.
Common Vulnerabilities and Exposures standard, http://cve.mitre.org 44. Common Platform Enumeration standard, http://cpe.mitre.org 45. Common Vulnerabilities Scoring System standard, http://www.first.org/cvss 46. National Vulnerability Database, http://nvd.nist.gov 47. Nagios monitoring system, http://www.nagios.org 48. Pakiti monitoring software, http://pakiti.sourceforge.net 49. Polish Infrastructure for Information Science Support in the European Research Space – PLGrid, http://www.plgrid.pl/en 50. Virtual Laboratory of Interactive Teaching (WLIN), http://www.man.poznan.pl/online/pl/ projekty/113/WLIN.html (in Polish)
Part II
Software Platforms
Performance Analysis Framework for Parallel Application Support on the Remote Instrumentation Grid Alexey Cheptsov and Bastian Koller
Abstract In recent years, the Grid has become one of the most progressive IT trends, enabling high-performance computing for a number of scientific domains. Large-scale infrastructures, such as the Distributed European Infrastructure for Supercomputing Applications set up in the frame of DEISA or the Remote Instrumentation Infrastructure deployed within the DORII EU project, have brought Grid technology into practice for many application areas of e-Science and have served as testbeds for challenging experiments, often involving results acquired from complex technical and laboratory equipment. However, as Grid technology has matured, attention has largely shifted towards optimizing the utilization of Grid resources by the applications. The performance analysis module set up within the DORII project offers scientific applications an advanced tool set for optimizing their performance characteristics on the Grid. This chapter presents the performance analysis tools adapted and the techniques elaborated within DORII for parallel applications, implemented for example by means of the Message-Passing Interface (MPI); they may be of great interest for the optimization of a wide variety of parallel scientific applications.
1 Motivation and Introduction

The widespread use of Internet and Web technology has resulted in leading-edge innovations allowing researchers to exploit advanced computational technology in a wide range of application areas of modern science and technology. Grid technology is a fundamental aspect of e-Science that has enabled many scientific communities to get access to high-performance e-Infrastructures in which virtual communities share, federate and exploit the collective power of scientific facilities.
A. Cheptsov (✉) • B. Koller
High-Performance Computing Center (HLRS), University of Stuttgart, Germany
e-mail: [email protected]; [email protected]

F. Davoli et al. (eds.), Remote Instrumentation for eScience and Related Aspects, DOI 10.1007/978-1-4614-0508-5_7, © Springer Science+Business Media, LLC 2012
Computation and data Grids are highly beneficial for achieving high application scalability, as they offer virtually unlimited computation resources and storage facilities. Moreover, e-Infrastructures open the Grid to a great variety of new scientific domains, which in turn pose new challenging requirements to the e-Infrastructure. For example, the e-Infrastructure set up by the DORII (Deployment of the Remote Instrumentation Infrastructure)1 project offers a promising way for modern Grid technology to enhance the usability of scientific applications, allowing them to have shared access to unique and/or distributed scientific facilities (including data, instruments, computing and communications), regardless of their type and geographical location. The DORII infrastructure consolidates 2,200 CPU cores for computation and offers a total of 147 TB for data storage. However, as our experience of porting scientific applications to Grid e-Infrastructures [1] has revealed, application performance suffers heavily on a single node due to the poor performance characteristics of a standard Grid resource – a generic cluster of workstations – as compared with dedicated high-performance computers. This especially concerns the network interconnect between the compute nodes and the characteristics of the file I/O system, which can considerably degrade the performance of many applications. The Message-Passing Interface (MPI) is a widespread standard for the implementation of parallel applications [2]. MPI applications also constitute an important part of the pilot applications ported to the e-Infrastructure in the frame of DORII. The message-passing mechanism allows an application to share the computation work over the nodes of a parallel computing system. The efficiency of this sharing is decisive for achieving high application performance on a single node as well as scalability when running on many nodes.
To make running MPI applications on the Grid even more efficient, special performance improvement techniques can be applied. These techniques rely mainly on an in-depth analysis of the implemented communication patterns. The most effective way to analyse performance characteristics is instrumentation of the application's source code. For MPI applications, instrumentation mainly means collecting the time stamps of the main communication events that occur in the application at run-time. There are many tools facilitating application instrumentation, as well as GUIs intended for the analysis of the application's run profile. However, support for those tools on the currently available infrastructures is very poor. On the other hand, there is no clear methodology for the consolidated use of different tools in the performance analysis of an application. In consequence, performance analysis of parallel MPI applications is quite awkward for developers. In most cases, the complexity of the performance analysis techniques prevents application providers from in-depth analysis and performance optimization of their applications. In order to facilitate performance analysis of applications ported to a Grid e-Infrastructure, a performance analysis module has been set up within the middleware architecture of the DORII project [3]. In this chapter, we introduce this module and its main components. We begin with a description of a practical use case coming from the DORII project, the OPATM-BFM application [4]. We then describe how to use the tools of the performance analysis module for collecting the communication profile of the test application. Finally, we generalize the experience obtained with the tested use case application and present some general performance improvement proposals that might also be of great interest for other parallel applications implemented by means of MPI.

1 http://www.dorii.eu.
2 Use Case

OPATM-BFM is a physical–biogeochemical simulation model developed at the Istituto Nazionale di Oceanografia e di Geofisica Sperimentale (OGS) and applied for short-term forecasts of key biogeochemical variables (among others, chlorophyll and salinity) for a wide range of coastal areas, in particular for the Mediterranean Sea; it is currently explored within the DORII project. The model solves the transport-reaction equation (1):

$$\frac{\partial c_i}{\partial t} + \mathbf{v} \cdot \nabla c_i = w_i \frac{\partial c_i}{\partial z} + k_h \nabla_h^2 c_i + \frac{\partial}{\partial z}\left(k_z \frac{\partial c_i}{\partial z}\right) + R_{\mathrm{bio}}(c_i; c_1 \ldots c_N, T, I, \ldots), \qquad (1)$$

where v is the current velocity, wi is the sinking velocity, kh and kz are the eddy diffusivity constants, and Rbio is the biogeochemical reactor, which depends, in general, on the other concentrations and on temperature T, short-wave radiation I and other physical variables. The complexity of OPATM-BFM lies in the great number of prognostic variables to be integrated, the dimension of the analysed ecosystems and the number of steps of the numerical solution required for the forecasted period. In this context, OPATM-BFM poses several challenging scenarios for the efficient usage of modern HPC systems. OPATM-BFM is parallelized by means of MPI, based on domain decomposition over longitudinal elements. The MPI implementation enables OPATM-BFM to utilize massively parallel computing resources. The number of domains corresponds to the number of computing nodes the application is running on. Consistency of the parallelized, domain-decomposed computation is ensured by an inter-domain communication pattern implemented for the exchange of the data cells residing on the domain bounds. Obtaining maximal performance (in terms of execution time) and scalability (in terms of speed-up when running on an increasing number of computing nodes) is mandatory for OPATM-BFM's practical usability for tasks of real complexity (in particular, long-term simulation).
Moreover, the application poses a great challenge for different HPC architectures with regard to both the optimal
utilization of resources for performing the identified complex tasks of environmental simulation and the development of algorithms enabling such efficient utilization [5]. The application's standard run consists of three major phases: initialization (where the model is initialized and input files are read), the main simulation loop (an iterative solver; the number of iterations depends on the length of the forecasted period, and each step corresponds to half an hour of the simulated system's behaviour) and data storage (file storage operations, at the end of the simulation or after every 48th step of the numerical solution). Analysis of the application's time characteristics in the identified phases with regard to computation on a single node and communication between the nodes (notably, the duration of MPI functions), performed in [1], revealed that scalability over different numbers of nodes (in the current experiment, execution on 32 and 64 nodes has been analysed) is quite poor for some operations (see Table 1). However, it is important to highlight that the time distribution among the phases of execution in the tested use case (second column in Table 1) can differ hugely from a real long-term simulation use case (third column in Table 1) due to the different impact of iteratively repeated operations on the total time (the last row in Table 1). The application's performance speed-up when running on a bigger number of nodes also changes according to the use case. Nevertheless, the internal characteristics of the iterative phase are iteration-independent and valid not only for the test case but for all use cases. There are several software tools that facilitate instrumentation of the source code and collection of the details about the occurred events, as well as further analysis of those events. Some of the tools (e.g.
the Valgrind tool suite [6]) are useful for analysis of a single application process on a single computing node, while others aim at parallel applications [7] and focus on analysis of the MPI communication between the nodes of the parallel computer (e.g. Vampir [8] and Paraver [9]). The DORII project's applications, including the presented OPATM-BFM, can greatly benefit from the use of both categories of tools. Whereas profiling a single MPI process allows the user to identify the source code regions where the most time-consuming communication takes place, detailed information about the interactions (messages, for MPI applications) between the different processes of a parallel program is provided by the communication analysis tools (Fig. 1).
3 Performance Analysis Tools and Techniques

Practical attempts to design a scalable and easy-to-use performance measurement and monitoring software environment for supercomputing applications running on the remote instrumentation infrastructure resulted in a special module provided within the middleware architecture of the DORII project. In the following, we give a brief description of the main tools comprised in the module – Valgrind and Vampir/VampirTrace – and highlight their main usage scenarios for the example use case from the previous section, the OPATM-BFM application.
Table 1 Application performance on changing the number of computing nodes

                           # Iterations   32 nodes, one process      64 nodes, one process      Scalability coefficient,
Phases of execution        test   real    per node (s)               per node (s)               t64/t32
                           case   case    Comput.  MPI calls  Total  Comput.  MPI calls  Total  Comput.  MPI calls  Total
1. Initialization, input     1      1       939        6       945    2,238       6     2,244     2.4       1        2.4
2. Main simulation loop      3    816         7        5         8        3       5         7     0.4       1.25     0.9
3. Data storage              1     17         3      204       213        2     170       173     0.7       0.8      0.8
Total time (s)               –      –       949      226     1,175    2,243     181     2,424     2.4       0.8      2
Fig. 1 Using performance analysis tools for OPATM-BFM
Valgrind [6] is an instrumentation framework for building dynamic analysis tools for debugging and profiling sequential applications (or single processes of a parallel application), with the aim of speeding up application performance. The Valgrind distribution is open source and currently includes several production-quality tools, among them memory error and thread error detectors, a cache and branch-prediction profiler, and a call-graph generating cache profiler. In order to proceed with the parallel application analysis efficiently, the phases of application execution should first be identified. Localizing the most computation- and communication-intensive phases without the help of any profiling tools is a non-trivial and quite complicated task that requires deep understanding of the application's source code as well as of the implemented algorithms. However, Valgrind is capable of building a so-called application call-graph, which is sufficient for a basic understanding of the dependencies between the application's code regions as well as of the time characteristics of the communication events in those regions. This is done by means of the Callgrind tool, which is included in the current Valgrind distribution. Moreover, details on the I1, D1 and L2 CPU caches are provided by Callgrind as well. In addition, Valgrind provides a powerful visualizer for the data produced by Callgrind, namely KCachegrind. For example, Fig. 2 shows a fragment of the OPATM-BFM call-graph, visualized through KCachegrind, based on which the main phases of the application run (Table 1) have been identified.
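As a minimal sketch of this step (the executable name `opatm_bfm` is a placeholder for the application binary; the options shown are standard Callgrind flags), a single process can be profiled and its call-graph inspected as follows:

```shell
# Profile one process under Callgrind; --cache-sim=yes additionally
# collects the I1/D1/L2 cache statistics mentioned above
valgrind --tool=callgrind --cache-sim=yes ./opatm_bfm

# Callgrind writes its profile to callgrind.out.<pid>;
# print a flat per-function summary on the console ...
callgrind_annotate callgrind.out.*

# ... or browse the call-graph interactively
kcachegrind callgrind.out.*
```

Note that Callgrind slows execution down considerably, so this step is usually applied to a reduced test case rather than a full production run.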
Fig. 2 A fragment of the OPATM-BFM call-graph (visualized with KCachegrind)
Analysis of a single process's run profile is also an excellent starting point for profiling the communication between the MPI processes of a parallel application. There are several tools designed for large-scale application analysis. In the frame of the remote instrumentation infrastructure, we chose the VampirTrace tool [10] because of its tight integration with the Open MPI library, which is used for the implementation of parallel applications by the DORII consortium. However, the introduced performance analysis module does not preclude the use of other well-known tools, including Paraver [9] or Scalasca [11] for post-mortem analysis, or Periscope for analysis at run-time. VampirTrace is an application tracing package that collects a very fine-grained event trace of a sequential or parallel program. The traces can be visualized with Vampir or any other tool that reads the Open Trace Format (Fig. 3). Another advantage of VampirTrace is that no changes to the application's code are required to switch on the profile collection functionality; only recompilation with the corresponding VampirTrace libraries (or use of the compiler wrapper provided with Open MPI) is needed.
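As an illustration of this instrumentation step (a sketch assuming the VampirTrace compiler wrappers are installed; the source, binary and trace file names are placeholders), tracing an MPI Fortran application requires only recompiling with the wrapper:

```shell
# Compile with the VampirTrace wrapper instead of the plain MPI compiler;
# the wrapper links the tracing library, no source changes are needed
vtf90 -o opatm_bfm opatm_bfm.f90

# An ordinary run now records an event trace in the Open Trace Format
mpirun -np 32 ./opatm_bfm

# Open the resulting trace in Vampir for visual analysis
vampir opatm_bfm.otf
```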
Fig. 3 Example of the visualized application’s communication profile (performed with Vampir tool). Communication between the nodes is shown with arrows. The operations are presented with different colours
To sum up, the analysis of an MPI application's communication patterns, as well as of the time distribution among the main phases of a single process, is largely based on the profile collected at the application's run-time. The profile is stored in trace files of a special format and can be visualized by means of dedicated graphical front-ends after execution is completed. When using performance analysis tools, it is highly important to note that the size of the obtained trace files is proportional to the number of events that occurred. For large-scale parallel applications, which often perform a significant number of function calls, in particular to the MPI library, the large size of the collected trace files poses a serious drawback for their further exploration with the visualization front-ends. In such cases, a preliminary analysis step is required, which aims at detecting the regions of application execution with the communications having the maximal impact on execution time. In general, the low-priority communication events should be discarded from profiling, which in turn decreases the size of the trace files with the collected events. This can be done either by filtering the events in the defined regions or by launching the application for special use cases with a limited number of iterations. However, this task is not trivial, and the approaches that allow the user to filter the events in the trace files differ from application to application. Nevertheless, it can greatly be supported by the tools of the first category, in particular through call-graph analysis. For example, even for a standard (short-term) use case, the OPATM-BFM application presented above performs more than 250,000 calls to the MPI library. As a consequence, the size of the trace data grew up to several tens of gigabytes. Thanks to preliminary analysis of the call-graph, the trace file size was reduced to only 200 MB, which allowed us to proceed with post-processing analysis even on a low-performance machine. Therefore, the combination of the tools presented above on a joint basis, provided by the performance analysis module, offers the user a complete environment for efficient and effective application performance analysis.
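For the event-filtering approach, VampirTrace supports run-time filter files (a sketch; the function names and limits below are purely illustrative): each line lists function name patterns and the maximum number of their invocations to record, with 0 discarding them entirely.

```shell
# Hypothetical filter file: record at most 10,000 point-to-point calls,
# drop a high-frequency helper routine completely
cat > opatm.filter <<'EOF'
MPI_Isend;MPI_Irecv;MPI_Wait -- 10000
trc_interp_* -- 0
EOF

# Point VampirTrace at the filter before running the instrumented binary
export VT_FILTER_SPEC=$PWD/opatm.filter
mpirun -np 32 ./opatm_bfm
```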
4 Some Optimization Results

There are several techniques that can be applied to MPI applications with the aim of improving their performance. As an outcome of the exemplary use of the performance analysis module for OPATM-BFM, the following main improvement proposals have been specified:
• Use of MPI-IO operations for parallel access/storage of NetCDF data (currently the application I/O has been ported to the PNetCDF library)
• Use of collective MPI communication for the inter-domain communication
• Encapsulation of the transmitted data chunks inside a single MPI communication
The realization of the optimization proposals for the application's communication and I/O patterns elaborated in [1] for OPATM-BFM allowed us to improve the performance dramatically for the test case (three steps of the numerical solution), as shown in Table 2. Whereas file I/O operations dominate the application execution for a small number of simulation steps (three steps in the test case), the overall performance improvement due to the optimization of the MPI communication becomes significant only for a long-term simulation (816 steps in the real case). As Table 2 shows, the total of the realized optimizations allowed us to reduce the duration of the application execution for the test case on 32 nodes from the initially measured 1,175 s down to 514.5 s. Furthermore, the optimization gain for the increasing number of nodes
Table 2 Comparison of application time characteristics before and after optimization

                           32 nodes                    64 nodes
                           Total duration (s)          Total duration (s)
Phases of execution        Initial    Optimized        Initial    Optimized
1. Initialization, input      945        300             2,244       360
2. Main simulation loop         8          7                 7         6
3. Data storage               213        207.5             173       172
Total                       1,175        514.5           2,424       538
from 32 up to 64 grew up to 78% (by reduction of the execution time from 2,424 s down to only 538 s). The application speed-up when scaling the number of compute nodes from 32 to 64 was improved as well.
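The third proposal above, encapsulating the data chunks in a single MPI communication, can be motivated with a simple latency/bandwidth cost model (a sketch; the parameter values below are illustrative, not measured on the DORII testbed):

```python
def transfer_time(n_messages, bytes_per_message, latency, bandwidth):
    """Time to send n messages under a linear latency/bandwidth model:
    each message costs a fixed latency plus its size over the bandwidth."""
    return n_messages * (latency + bytes_per_message / bandwidth)

# Illustrative parameters: 50 us per-message latency, 1 GB/s bandwidth,
# 1,000 halo-exchange chunks of 8 KB each
latency = 50e-6          # seconds
bandwidth = 1e9          # bytes per second
chunks, chunk_size = 1000, 8192

separate = transfer_time(chunks, chunk_size, latency, bandwidth)
aggregated = transfer_time(1, chunks * chunk_size, latency, bandwidth)

# Sending one aggregated message pays the latency only once
print(f"separate:   {separate * 1e3:.1f} ms")   # 1000 * (50 us + 8.192 us)
print(f"aggregated: {aggregated * 1e3:.1f} ms") # 50 us + 8.192 ms
```

Under these (assumed) parameters, aggregation cuts the transfer time by roughly a factor of seven, which is why message encapsulation pays off on latency-bound cluster interconnects.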
5 Conclusions and Future Directions

The main aim of this chapter was to show how different performance analysis tools and strategies can be consolidated and applied to supercomputing applications. The DORII project enables Grid technology for many new applications, whose performance properties play an important role in their practical usability within modern Grid environments. Performance analysis is therefore essential for the improved running of the applications, in particular for use in production. The integration of all the described tools in the common performance analysis module introduced in this chapter allowed us to holistically investigate the performance of one of the most challenging applications deployed on the Remote Instrumentation Infrastructure, OPATM-BFM. The performance improvement techniques applied to the application allowed us to optimize its characteristics when running on a standard EGEE Grid site. The performance analysis module will be used to tune the performance of academic and industrial simulation applications, not limited to the ones coming from the DORII consortium.
References

1. Alexey Cheptsov, Kiril Dichev, Rainer Keller, Paolo Lazzari and Stefano Salon: Porting the OPATM-BFM Application to a Grid e-Infrastructure – Optimization of Communication and I/O Patterns, Computational Methods in Science and Technology, 15(1), 9–19, 2009.
2. The MPI standard, http://www.mcs.anl.gov/research/projects/mpi/standard.html
3. Adami, D., Cheptsov, A., Davoli, F., Liabotis, I., Pugliese, R., and Zafeiropoulos, A.: The DORII Project Test Bed: Distributed eScience Applications at Work, in: Proceedings of the 5th International Conference on Testbeds and Research Infrastructures for the Development of Networks & Communities and Workshops (TridentCom 2009), Washington DC, USA, 6–8 April 2009, 1–4, doi:10.1109/TRIDENTCOM.2009.4976247, 2009.
4. A. Crise, P. Lazzari, S. Salon, and A. Teruzzi: MERSEA deliverable D11.2.1.3 – Final report on the BFM OGS-OPA Transport module, 21 pp., 2008.
5. Alexey Cheptsov: Enabling grid-driven supercomputing for oceanographic applications – theory and deployment of a hybrid OpenMP + MPI parallel model for the OPATM-BFM application. Proceedings of the HPC-Europa project's Transnational Access Meeting, Montpellier, October 14th–16th, 2009.
6. Nicholas Nethercote and Julian Seward: Valgrind: A Framework for Heavyweight Dynamic Binary Instrumentation. Proceedings of the ACM SIGPLAN 2007 Conference on Programming Language Design and Implementation (PLDI 2007), San Diego, California, USA, June 2007.
7. F. Wolf: Performance Tools for Petascale Systems. inSiDE, Vol. 7, No. 2, pp. 38–39, 2009.
8. A. Knüpfer, H. Brunst, J. Doleschal, M. Jurenz, M. Lieber, H. Mickler, M.S. Müller, and W.E. Nagel: The Vampir Performance Analysis Tool-Set. Tools for High Performance Computing, Springer, 2008, 139–156.
9. Vincent Pillet, Jesús Labarta, Toni Cortes, Sergi Girona: PARAVER: A Tool to Visualize and Analyze Parallel Code. In WoTUG-18, 1995.
10. A.G. Sunderland, R.J. Allan: An Example of Parallel Performance Analysis using VAMPIR. Retrieved from http://www.cse.scitech.ac.uk/arc/vampir.shtml
11. Markus Geimer, Felix Wolf, Brian J. N. Wylie, Daniel Becker, David Böhme, Wolfgang Frings, Marc-André Hermanns, Bernd Mohr, Zoltán Szebenyi: Recent Developments in the Scalasca Toolset. In Tools for High Performance Computing 2009, 3rd International Workshop on Parallel Tools for High Performance Computing, pages 39–51, Dresden, ZIH, Springer, 2010.
New Technologies in Environmental Science: Phenology Observations with the Kiwi Remote Instrumentation Platform

Dominik Stokłosa, Damian Kaliszan, Tomasz Rajtar, Norbert Meyer, Filip Koczorowski, Marcin Procyk, Cezary Mazurek, and Maciej Stroiński
Abstract Advances in e-learning create a base for the creative use of new technologies in virtually any field of science. Whether or not this results in improvements in the quality of education depends mostly on the attractiveness of the final solution. This chapter describes a possible way of increasing the attractiveness of learning tutorials through the Kiwi remote instrumentation platform, with the main concern on environmental science domains like phenology, meteorology, water circulation and its chemistry or, finally, air pollution. The chapter is focussed on the technical aspects of conducting phenology observations with a remotely controlled DSLR camera managed by the Kiwi platform.
1 Introduction

The continuous development of human civilization, the exploration of natural resources and the pollution of the environment have an enormous impact on the balance of the biological life of fauna and flora. Over the past decades, however, people have tried to minimise this harmful impact on the environment and to repair the errors made while exploiting it. A variety of means, like advanced and eco-friendly production technologies, the establishment of new forms of nature conservation and wide-ranging environmental education among children, are used nowadays to protect the ecosystem and spread the knowledge. The main objective of the project is to develop a virtual laboratory of interactive teaching, which will be controlled by the Kiwi remote instrumentation platform.
D. Stokłosa (✉) • D. Kaliszan • T. Rajtar • N. Meyer • F. Koczorowski • M. Procyk • C. Mazurek • M. Stroiński
Poznań Supercomputing and Networking Center, Poznań, Poland
e-mail: [email protected]; [email protected]; [email protected]; [email protected]; [email protected]; [email protected]; [email protected]; [email protected]

F. Davoli et al. (eds.), Remote Instrumentation for eScience and Related Aspects, DOI 10.1007/978-1-4614-0508-5_8, © Springer Science+Business Media, LLC 2012
Kiwi has been designed to provide an environment where different scientific, video or measurement pieces of equipment can be easily attached to the system, and also to build an interactive e-learning platform for students. We are planning to build a pilot installation of the system with a set of different measuring devices installed in a forest at two selected locations near Poznań. All devices will be integrated and remotely controlled by the Kiwi platform. The data from devices and sensors will be stored automatically in the system and will be available within the e-learning platform. Moreover, data from the observation equipment will be incorporated into e-learning lessons, which will be available to students and citizens.
2 Scope

The project addresses five selected areas: air pollution, water management, meteorology, dead wood and, finally, phenology (see Fig. 1). Phenology is described in detail in Sect. 4; however, the automated, remotely controlled observation system is the main focus of this chapter. The description of the e-learning platform is beyond the scope of this chapter.
Fig. 1 List of addressed areas
Fig. 2 Old mill pond
(a) Air Pollution
Two weather stations will be installed near Poznań and integrated with the Kiwi platform. Educational content about air pollution will be enriched with live data coming from air quality sensors. This will allow us to visualise the influence of city-generated air pollution on the nearby region.
(b) Water Management
Since water circulation is strictly connected with weather conditions, online content will be enriched with data from weather stations, as well as from a water flow measurement device installed at the estuary of a river running through a pond located near an old water mill close to Poznań. This solution will help us to observe the relationship between the changing amounts of atmospheric precipitation, insolation, temperature and the water flows. Moreover, it is planned to build an educational path next to the pond (see Figs. 2 and 3) explaining the role of water in the environment. Students and pupils will have a chance to see the devices and sensors integrated with the Kiwi platform in action on-site, and also to expand their knowledge of the water management area.
(c) Meteorology
Weather stations will be installed at two locations near Poznań. A weather station is a compact system for hydro-meteorological monitoring. It is a flexible and modular system which can be extended with different sensors. The sensors allow us to measure wind speed and direction, air pressure, air and ground temperature, and humidity.
Fig. 3 Pond estuary
(d) Role of Deadwood
In popular knowledge, deadwood is ranked close to waste. That, however, could not be further from the truth. Dozens of species live only in deadwood microenvironments. Deadwood is even considered to be the richest habitat in a healthy forest [5]. Learning materials concerning deadwood will be accompanied by a live video stream from a site inhabited by Osmoderma eremita, a European beetle in the family Scarabaeidae. The video stream will be supplied by a remotely controlled panoramic video camera equipped with a large-zoom-range lens that will allow showing both the site as a whole and single hollows.
3 Kiwi Platform

As presented before, the main objective of the project is to develop an interactive e-learning platform for areas like water management, air pollution, deadwood or phenology. In order to make the study process more interesting, the platform will be equipped with dynamic content, like data collected by measurement devices or high-quality images from the observation scenes. We believe that a successful e-learning platform should provide means to add new devices and sensors to the existing infrastructure. The platform should also allow us to manage and monitor devices and to control data acquisition processes. In order to achieve these goals, a special system is required which will be responsible
New Technologies in Environmental Science: Phenology Observations...
Fig. 4 Kiwi platform
for all the work behind the scenes. We propose to split the system into two parts, with an access layer in between (see Fig. 4).
• Layer I: e-learning platform – user interface and access point to the learning tutorials
• Layer II: Kiwi remote instrumentation platform
Layer I is responsible for providing the interface for online lessons and tutorials. The platform is based on a custom learning management system built by extending open-source content management system software. The e-learning platform layer is focussed solely on providing lessons and tutorials. It does not deal with issues related to accessing and controlling devices or sensors, nor does it have the knowledge of how to do so. All the instruments are managed and controlled by the Kiwi remote instrumentation platform (see Fig. 4). The Kiwi platform is a framework for building remote instrumentation systems. The platform provides a set of components to control and manage scientific equipment and sensors, such as cameras and weather, air-pollution and water-flow sensors. All the equipment connected to the Kiwi platform can be controlled remotely through a single, uniform user interface. The platform also allows users to design and run so-called observation workflows: it is possible to set up a sequence of operations starting from data acquisition, through data processing, and ending with visualisation. Such a workflow can be launched periodically by the Kiwi workflow manager component with a desired frequency and time. For instance, the system administrator can plan a
workflow with a camera taking pictures twice a day at noon. The device can be set up so that several points of a scene are shot. The Kiwi remote instrumentation platform is the successor of a distributed workgroup environment, the virtual laboratory system [8], developed by the Poznań Supercomputing and Networking Center. Devices and sensors produce different output data. Most sensors produce numerical data, which will be visualised by the e-learning platform; IP cameras and photo-cameras, however, produce images and videos. In order to make the system flexible, we have designed one common, generic interface for retrieving data from the instruments – the Kiwi access layer. This is the only place where the two layers interact with each other. The same interface is used when dealing with different equipment, i.e. whether the temperature at a given location is requested or images are downloaded from a photo-camera.
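The idea of a single, generic retrieval interface shared by numeric sensors and cameras can be sketched as follows. The class and method names are illustrative assumptions, not the actual Kiwi access layer API:

```python
from abc import ABC, abstractmethod

class InstrumentAdapter(ABC):
    """Hypothetical sketch of the access-layer idea: every instrument,
    whether a numeric sensor or a camera, is read through the same call."""

    @abstractmethod
    def read(self):
        """Return the latest measurement or media object."""

class TemperatureSensor(InstrumentAdapter):
    def read(self):
        # Numeric sensors return tagged values ready for visualisation.
        return {"type": "numeric", "unit": "C", "value": 18.4}

class PhotoCamera(InstrumentAdapter):
    def read(self):
        # Cameras return media objects through the very same interface.
        return {"type": "image", "format": "jpeg", "data": b"..."}

# The e-learning layer only ever calls read(), regardless of device type.
readings = [dev.read() for dev in (TemperatureSensor(), PhotoCamera())]
```

The payoff of this design is that adding a new device type requires only a new adapter; the e-learning layer remains unchanged.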
4 Phenology Observations
Phenology is the study of the timing of life cycle events of plants and animals [1]. Common examples include the date migrating birds return, the first flower dates for plants, and the date on which a lake freezes in the autumn or opens in the spring. The advent of advanced weather-forecasting systems pushed this area of research aside. However, phenological records can be a useful proxy for temperature analysis in historical climatology [2], especially in the study of climate change and global warming. Phenological observations are one of the simplest methods of observing climate change, requiring no special skills or equipment. The only rule is to select a site, then select a plant or an animal, and record whether or not the so-called phenophases are occurring. The strict requirement is to record observations regularly, as often as every two or three days. Additionally, the observation sites should ideally be far from urban areas. This means that even simple phenology observations require quite a bit of time. It also means that the observations may benefit from new technologies such as remotely controlled cameras and video streaming. The Chinese are thought to have kept the first written records, dating back to around 974 BC. For the past 1,200 years, observations of the timing of peak cherry blossoms in Japan have been recorded. Phenological observations have a long tradition in Poland and are among the oldest in Europe: the first networks of stations and outposts were founded in Poland in the middle of the nineteenth century [9]. Phenological observations are still carried out today; Action 725 of the European Cooperation in the field of Scientific and Technical Research [6] and the USA National Phenology Network [7] are the largest-scale observation programmes.
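The recording rule described above – pick a site and a species, then note phenophase occurrence every few days – can be captured in a minimal data model. This is a hypothetical sketch; the field names and dates are illustrative, not taken from any actual observation network:

```python
from dataclasses import dataclass
import datetime

@dataclass
class PhenologyRecord:
    """One regular observation: was a given phenophase occurring that day?"""
    site: str
    species: str
    phenophase: str       # e.g. "bud burst", "first flower", "leaf fall"
    observed: bool
    date: datetime.date

# A short observation log, recorded every few days as the text prescribes.
log = [
    PhenologyRecord("Zielonka Forest", "maple", "bud burst", False,
                    datetime.date(2011, 4, 1)),
    PhenologyRecord("Zielonka Forest", "maple", "bud burst", True,
                    datetime.date(2011, 4, 4)),
]

# The onset date of a phenophase is the first date on which it was observed.
onset = min(r.date for r in log if r.observed)
```

Comparing onset dates across years is exactly the temperature-proxy analysis mentioned in the text.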
5 Automated Phenology Observations
Fauna and flora react rapidly to climate change. Therefore, plant and animal observations can be valuable in natural science education. However, they are very time consuming and require special diligence during the observation phase. Regular, stationary phenology observations therefore seem very hard to conduct with a group of students. We have introduced an automated, remotely controlled system for phenology observations based on the Kiwi remote instrumentation platform. High-quality photo-cameras are used for a comprehensive view of the scene. Cameras can be set up to take high-resolution pictures with a desired frequency and time. The photo equipment has been integrated with the Kiwi platform. The platform allows controlling the cameras remotely and also provides tools to manage the observation process, i.e. changing the time and frequency of pictures being shot. The system allows controlling and monitoring all the observation equipment remotely. All photos taken during nature surveillance will be available to students and pupils for study or further analysis in the e-learning portal.
6 Surveillance Equipment
The most important factors of successful digital phenology observations are high-quality images and a high level of detail. The observation scenario is as follows: an operator defines a set of scenes and sets up cameras to shoot certain points of interest on each scene. Next, the operator sets the time and frequency of pictures being shot. After that, the entire process is automated: the photo-cameras are triggered by the Kiwi platform at the specified time, and the resulting photos are stored in the Kiwi platform and made available to students through the e-learning portal. We have to make sure that the resulting pictures are of high enough quality to allow users to zoom in or out and observe tiny details of their target of interest, like a leaf or a bud. This is why we have conducted a series of tests allowing us to choose the best equipment. A maple tree was chosen as our test object. We shot three types of pictures – wide angle, medium and maximum zoom – using devices such as video cameras, video cameras with high-quality lenses and photo-cameras. The comparison is shown in Figs. 5–7, presenting pictures taken with a compact camera, a digital reflex camera and a high-definition video camera. We have divided the pictures into three categories: wide angle (if a user wants to see the entire scene and its surroundings), medium zoom (note that for some devices this was the maximum zoom available) and maximum zoom.
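The scenario above – operator-defined scenes, each with points of interest, expanded against a shot schedule – might be sketched as follows. All names and preset values are hypothetical, not the actual Kiwi configuration format:

```python
from dataclasses import dataclass, field

@dataclass
class Scene:
    """An operator-defined scene: a named view plus the camera presets
    (here assumed to be pan/tilt/zoom triples) to shoot within it."""
    name: str
    points: list = field(default_factory=list)

maple = Scene("maple tree")
maple.points.append((10.0, -5.0, 3.0))   # whole crown, wide angle
maple.points.append((12.5, -2.0, 20.0))  # single branch, strong close-up

def plan_shots(scenes, times):
    """Expand scenes x points x scheduled times into the concrete shot
    list that the platform would then trigger automatically."""
    return [(s.name, p, t) for s in scenes for p in s.points for t in times]

shots = plan_shots([maple], ["12:00"])
```

Once the shot list exists, the operator's job is done; the platform only has to walk the list at the scheduled times.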
Fig. 5 Image quality comparison – wide angle
Fig. 6 Image quality comparison – medium zoom
The user is mostly interested in the maximum-zoom quality comparison, which is presented in Fig. 7. For instance, measuring the daily increase in leaf size requires a strong close-up. Our tests have shown that even expensive, high-quality video cameras did not turn out to be useful for phenology observations. As one can see in the comparison, the resulting photos are not detailed enough. Moreover, the Sony
Fig. 7 Image quality comparison – max zoom
Fig. 8 Image quality comparison – compact cameras
PMW-EX3 Full-HD camcorder, which was used in our tests, reaches its maximum zoom very quickly; the maximum zoom available is presented in the medium-zoom comparison. Photo-cameras, as our tests show, are much better suited for this kind of usage. Tests in the field have shown that even consumer-level compact cameras deliver satisfactory image quality. Figure 8 shows the quality comparison between two different compact cameras: a Panasonic DMC-TX3 and a Nikon Coolpix L12. However, photo-cameras have one significant disadvantage compared to video cameras: the lack of remote-control functionality and of software for controlling the camera remotely through a web browser. This feature is provided even by low-end video cameras but is usually not available in digital SLR or compact cameras. We have proposed a solution where a digital reflex camera is integrated with a turnplate device and can be controlled remotely using the Kiwi platform.
7 Remotely Controlled Reflex Camera
As mentioned before, a digital reflex camera allows us to obtain high-quality, high-resolution pictures. However, the remote-control functionality is missing. In order to base the phenology observations on reflex cameras, we had to implement a set of missing functionalities: camera movements, setting picture properties, shooting pictures and, finally, storing a camera position. To achieve this, we had to integrate the camera with other devices such as a turnplate, a heater and a control computer, as presented in Fig. 9. We have designed a special casing to hold the camera and other necessary equipment. The casing is resistant to water and other outdoor conditions, and is equipped with a temperature sensor and a heater, which is triggered by the sensor. Since there is no option to connect a digital camera directly to the network, a small control computer is required. The camera is connected to the computer and managed by the Kiwi components, which are installed there as well. The Kiwi components are responsible for controlling the camera, taking pictures and downloading them from the device. The camera case is mounted on top of a turnplate device (see Fig. 9). This means that we do not move the camera itself, but the entire casing with all the equipment installed in it. The turnplate is connected to the control computer and managed by the Kiwi component. We have also prepared a graphical user interface for end users – the Kiwi Instrument – which allows users to take advantage of the implemented functionality (Fig. 10). The interface integrates camera control, turnplate control and camera live preview in one place.
Fig. 9 Remotely controlled camera components
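The interplay of camera, turnplate and control computer described above can be sketched as a minimal simulation. The classes below are illustrative stand-ins for the Kiwi components, not the actual implementation; since the whole casing is rotated, a stored camera position reduces to a plate angle:

```python
class Turnplate:
    """Hypothetical turnplate driver: rotating the plate moves the whole
    camera casing, so a position is simply an angle in degrees."""
    def __init__(self):
        self.angle = 0.0
    def rotate_to(self, angle):
        self.angle = angle % 360.0

class ReflexCamera:
    """Stand-in for the USB-attached reflex camera."""
    def shoot(self):
        return b"raw-image-bytes"

class RemoteCameraStation:
    """Sketch of the control computer's role: store named positions,
    move the casing, shoot, and hand the picture back to the platform."""
    def __init__(self):
        self.plate = Turnplate()
        self.camera = ReflexCamera()
        self.positions = {}                 # name -> stored plate angle

    def store_position(self, name):
        self.positions[name] = self.plate.angle

    def shoot_at(self, name):
        self.plate.rotate_to(self.positions[name])
        return self.camera.shoot()

station = RemoteCameraStation()
station.plate.rotate_to(45.0)
station.store_position("maple-branch")
photo = station.shoot_at("maple-branch")
```

In the real system the platform would call `shoot_at` on schedule and download the resulting image over the network.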
Fig. 10 Kiwi instrument – camera control interface
8 Summary
At the time of writing, the project is under development; the first release was completed at the beginning of 2011. All devices from the areas presented in this article – air pollution, water management, meteorology and deadwood – will be integrated with the Kiwi platform. Moreover, two remotely controlled photo-cameras will be deployed in the Zielonka Forest near Poznań. The cameras will be set up to observe the same type of tree to give students the opportunity to compare the results and broaden their knowledge of phenology.
References
1. About Phenology. Available on-line at http://www.usanpn.org/about/phenology
2. M. Molga. Meteorologia rolnicza. PWRiL, 1983.
3. K. Piotrowicz. Historia obserwacji fenologicznych w Galicji. IMiGW, 2007.
4. Water cycle. Available on-line at http://en.wikipedia.org/wiki/Water_circulation
5. Deadwood – living forests. The importance of veteran trees and deadwood to biodiversity. WWF Report, 2004. Available on-line at http://wwf.panda.org/about_our_earth/all_publications/?15899/Deadwood-living-forests-The-importance-of-veteran-trees-and-deadwood-to-biodiversity
6. COST 725: Establishing a European Phenological Data Platform for Climatological Applications. Available on-line at www.fsd.nl/cost725
7. USA National Phenology Network. Available on-line at http://www.usanpn.org/home
8. Virtual Laboratory PSNC. Available on-line at http://vlab.psnc.pl
9. Phenology History. Available on-line at http://www.budburst.ucar.edu/phenology_history.php
An Agent Service Grid for Supporting Open and Distance Learning Alberto Grosso, Davide Anghinolfi, Antonio Boccalatte, and Christian Vecchiola
Abstract This chapter describes ExpertGrid, a software infrastructure for the development of decision support tools to train crisis managers. The system handles a rich variety of realistic critical scenarios addressed by the trainees, who, by means of the system, will develop effective countermeasures. ExpertGrid is based on agent-oriented technology, making it dynamic, flexible, and open, and it relies on a distributed computing grid to support large-scale scenario computations and to incorporate a larger body of knowledge. This contribution presents the design of the system together with its prototype implementation and initial results.
1 Introduction
Decision support systems [1] constitute a valuable tool for helping humans make decisions in critical conditions. Critical situations happen unexpectedly and require immediate response, where the outcome of a wrong decision can be extremely costly. In these scenarios, the ability to react promptly is fundamental; the support given by an automated tool filtering out inapplicable or inopportune courses of action can help reduce the decision time, thus making the reaction more effective. Moreover, response to crisis scenarios often requires heterogeneous expertise to fully address the critical situation. It generally includes knowledge of regulatory bodies, psychology, medicine, structural engineering, chemical and electrical hazards, and specific knowledge about the crisis site (e.g., urban, maritime, or airborne). Such expertise is both human, matured over years of experience, and technical. In these conditions, a crisis executive committee will rely on a large
A. Grosso () • D. Anghinolfi • A. Boccalatte • C. Vecchiola
Department of Communication, Computer and System Sciences, University of Genova, Via Opera Pia 13, 16145 Genova, Italy
e-mail: [email protected]; [email protected]; [email protected]; [email protected]
F. Davoli et al. (eds.), Remote Instrumentation for eScience and Related Aspects, DOI 10.1007/978-1-4614-0508-5_9, © Springer Science+Business Media, LLC 2012
number of experts to cover the full range of skills required for an effective and appropriate decision. This might not be a practical solution given the stringent time constraints, which do not leave room for large-scale debates on the best course of action. A training system able to prepare crisis managers to respond actively to emergency situations could help speed up the decision process, thus making the reaction more effective and prompt. In this chapter, we propose a distributed e-learning tool, called ExpertGrid, for training crisis managers to act readily in critical conditions. The tool integrates statistical analysis, crisis modelling, and scenario management. More precisely, it leverages the body of knowledge of real experts and provides a simulation environment where crisis scenarios designed and validated by experts are used to train crisis managers. ExpertGrid uses a "learn by mistake" approach: the solutions devised by the trainees are compared to the ones given by the experts, and crisis managers can refine their skills incrementally. The system is intended to help municipalities, regional authorities, and state governments face unexpected emergencies. ExpertGrid enables local response, recovery, and optimized damage control. Moreover, by relying on a distributed and widely accessible infrastructure, it can be used as a general repository of knowledge: pre-existing and already tested solutions to similar scenarios can be adapted to the current crisis condition. Moreover, mutual feedback from local crisis handling creates a robust, versatile, and fast-reacting crisis response network managed by highly trained crisis managers. Fundamental for the effectiveness of the system is the creation of a community of experts and crisis managers around it. In order to make this happen, ExpertGrid has been designed to be widely accessible and distributed.
This allows the system to incorporate a larger body of knowledge as a result of the contributions of the whole community of experts around the world. Essential for the tool is the ability to scale as the number of users increases and to support large scenario simulations. To address this issue, ExpertGrid relies on Grid computing, which provides the ability to scale and to leverage multiple processing nodes for large scenario simulations, and on agent technology, which confers the flexibility, openness, and dynamism required to keep the system continually updated and to integrate new policies and attributes to evaluate. The rest of the chapter is organized as follows: Sect. 2 provides a brief description of the background technologies used to develop the system, such as grid computing and software agents; Sect. 3 discusses the motivations, functionalities, and design solutions of ExpertGrid. Conclusions, along with a description of ongoing activities, follow.
2 Background Technologies
2.1 Grid Computing
Grid Computing [2] provides access to large computational power, huge storage facilities, and a variety of additional services by harnessing geographically disparate computing sites connected through high-capacity network connections. These
services are accessed and consumed as a utility, like electricity, gas, and water. Like the power grid [3], a computing Grid provides its services uninterruptedly, and it does so by leveraging a large distributed infrastructure combining resources belonging to different administrative domains, which are presented to the users as a single virtual infrastructure. In complete analogy with the power grid, such resources are easily accessible to end users simply by connecting to the Grid, just as we are used to getting electricity from the socket. In the case of Grid Computing, the "socket" is represented by the software interfaces and services that allow integrating it into existing applications. Grid Computing is a very popular and established research area as well as a reliable technology for several applications, especially in the field of scientific computing, where the needs for large data storage and immense computing capacity are common. A fundamental role in the development of Grid Computing research and technologies is played by the Open Grid Forum (OGF), a community of users, developers, and vendors leading the global standardization effort for Grid Computing. The activity of OGF has produced several successful standards such as OGSA [4], OGSI [5], and JSDL [6]. From a technological point of view, the Globus Alliance has worked actively to compose a community of individuals and organizations to build effective technologies for Grid Computing. The most important outcome of the Globus Alliance is the Globus Toolkit [7], an open-source toolkit for building Grid systems and applications that has become the de facto standard in industry and academia. The Globus Toolkit implements the OGF specifications and offers middleware for harnessing disparate resources together, as well as APIs for developing Grid services and for managing and interacting with the distributed system constituting the Grid.
Other solutions for the development and management of Grids include Condor-G [8], gLite [9], XtremWeb [10], UNICORE [11], and Alchemi [12]. Currently, several computing and data Grids are deployed around the world as a testament to the success of this technology and approach. Among the most relevant we can list EGEE and EGEE-II [13], TeraGrid [14], the Open Science Grid [15], and NorduGrid [16]; many others exist.
2.2 Grid Computing and e-Learning
Grid Computing provides access to immense computational power and huge storage facilities, which can be easily plugged into existing applications and services. It has been designed and conceived to serve the needs of compute-intensive or data-intensive applications. E-learning systems by themselves do not have such needs. As defined by Eklund et al. [17], electronic learning (e-learning) is a "wide set of applications and processes, which use available electronic media (and tools) to deliver vocational education and training". E-learning applications foster the creation of a shared environment where learners, authors, experts, and administrators interact and collaborate to create and deliver the learning process. As a result, an e-learning application is naturally represented by a multi-user system that
may well be distributed. Nowadays, e-learning technologies strongly leverage the Internet, and more specifically the World Wide Web, to deliver the learning experience. E-learning systems are implemented as web applications where learners, authors of content, and experts interact from all over the world. The widespread diffusion of the World Wide Web provides a solid infrastructure through which it is possible to make learning objects available and to develop the learning experience. This scenario naturally leads to the requirement to scale efficiently in terms of number of connected users, data storage allowance, and computing capability. Scalability becomes a fundamental element in making the learning experience effective under peak load conditions. Grid Computing can be a valid support to address these needs, by operating behind the scenes and thus providing the desired performance of the system in terms of response time and data capacity, thereby improving the end-user experience. Several research works have investigated the use of Grid Computing for e-learning. Pankratius and Vossen [18] discussed in detail the advantages of using Grid Computing to support e-learning and outlined a reference architecture for an e-learning Grid. Abbas et al. [19] proposed SELF, a framework for semantic Grid-based e-learning, and also discussed how the enablers of e-learning can be easily found in Grid Computing. Nassiry and Kardan [20] introduced the concept of a Grid Learning Architecture and provided a detailed analysis of it by mapping the different operations and concepts of an e-learning system onto Grid components and services. As discussed by these research works, Grid Computing definitely constitutes a valid support for e-learning systems that need to scale efficiently as they become more popular and heavily used. More specifically, particular classes of e-learning applications naturally require access to large computing capacity.
As an example, e-learning applications in the field of medicine and biology can require the interactive visualization of parts of the human body or of complex protein structures. These tasks can become prohibitive without the use of on-demand distributed computing facilities. The same happens in the case discussed in this chapter, where the decision support system needs to perform compute-intensive simulations as a result of the learning process of the users.
2.3 AgentService Suite for Supporting Grid Computing
Multi-agent systems are, by definition, distributed systems whose autonomous entities are software agents [21]. AgentService [22] is an agent programming framework built on top of the Common Language Infrastructure [23]. We can see the AgentService ecosystem at different levels of granularity: from the point of view of agents, as distributed entities supported by platforms during their execution, and from the point of view of platforms, intended in turn as distributed entities acting in a federation of platforms. AgentService provides services to agents at the level of distributed platforms, in order to support their dynamic life-cycle
and, mainly, their interactions with remote peers [22]. The AgentService platforms support distributed agent platforms in terms of:
1. Node discovery: the system must provide a way to discover new nodes that appear on the network.
2. Topology management: the system must ensure that the topology of the federation is always up to date.
3. Load balancing: the system must prevent overloading of certain platforms while others are still free.
4. Remote administration: the system must allow administrators to remotely manage the platform and agent lifecycles.
Considering the aforementioned features, we can state that the AgentService platform federation is a Grid computing system intended as a PaaS (Platform as a Service) implemented in a hybrid peer-to-peer architecture. The following points clarify these assertions:
1. Grid computing: the system is first of all a federation of multi-agent platforms.
2. From the administrator's point of view, the complexity of the federation topology is totally hidden, as is the disposition of resources. Only the developer needs to take into account the fact that the multi-agent system is distributed over a federation and that it is not a simple in-process application. In this way, the federation is seen as a cloud where jobs, agents, resources, and platforms are allocated and managed in a transparent way.
3. PaaS (Platform as a Service): the federation of platforms ties in with the definition of PaaS, because a multi-agent platform is precisely a set of services and libraries.
4. Hybrid peer-to-peer: a federation of platforms is intended as a hybrid peer-to-peer network in which every node has to discover the peers available on the network.
Let us briefly analyse the architectural details of the AgentService Federation Management Suite. First of all, every platform-computer pair is a peer in the network, directly connected to the rest of the federation.
It relies on a Windows service that acts as the controller of each operation on the platform. Every peer interacts with other peers through a WCF (Windows Communication Foundation) web service (exposed by the Windows service), which allows a peer to participate in the federation and to share its current knowledge about the federation topology. This web service allows master nodes to monitor the performance of every other node in order to locate the best node in terms of performance. The constant monitoring of peers is useful when an administrator establishes a connection with a master node and submits a request for the instantiation of a new agent. The administrator has at his disposal two graphical interfaces: the remote platform manager and the federation manager. An important feature is the platform discovery service. Once the service is launched, it starts a thread which constantly monitors the network (namely, the subnet where the node is deployed) in order to find new nodes that have recently appeared on the network and to monitor the availability of connected peers. Every master
node of the Grid cyclically polls each known peer in order to collect its performance statistics. The evaluated measures are the free RAM, the CPU usage, and the number of active agents. Every node calculates an average value for each measure over a configurable time slot. Hence, the service formulates a ranking of the best nodes in terms of actual performance. The relevance of the performance evaluation is related to the creation of new agent instances: the involved master node first evaluates the performance of the peers and then verifies whether the selected peer has in its storage facility the necessary assemblies containing the agent template. During the development of this suite we have created two tools which enable administrators to manage the AgentService platform. The first is focused on the management of a single platform, performing the usual administration operations on it. The second is intended to support distributed federations of platforms.
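The polling-and-ranking logic described above – a sliding window of samples per peer (free RAM, CPU usage, active agents), averaged and combined into a score – can be sketched as follows. The scoring weights and window size are illustrative assumptions, not the actual AgentService heuristics:

```python
from collections import deque

class PeerMonitor:
    """Sketch of the master node's role: keep a sliding window of samples
    per peer and rank peers by the averages over that window."""

    def __init__(self, window=5):
        self.samples = {}            # peer -> deque of (ram_mb, cpu_pct, agents)
        self.window = window         # configurable time slot, in samples

    def record(self, peer, free_ram, cpu, agents):
        self.samples.setdefault(peer, deque(maxlen=self.window)).append(
            (free_ram, cpu, agents))

    def score(self, peer):
        s = self.samples[peer]
        ram = sum(x[0] for x in s) / len(s)
        cpu = sum(x[1] for x in s) / len(s)
        agents = sum(x[2] for x in s) / len(s)
        # More free RAM is better; CPU load and agent count count against
        # a peer (weights 10 and 50 are arbitrary illustration values).
        return ram - 10 * cpu - 50 * agents

    def best_peer(self):
        # The peer chosen to host the next agent instance.
        return max(self.samples, key=self.score)

mon = PeerMonitor()
mon.record("node-a", 2048, 80, 10)   # loaded node
mon.record("node-b", 4096, 10, 2)    # mostly idle node
```

In the real suite the master would additionally verify that the chosen peer already holds the assemblies containing the agent template before instantiating there.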
3 ExpertGrid: The Agent Grid Supporting e-Learning in Disaster Management
3.1 Motivations and Challenges
This chapter refers to a research project aimed at developing advanced technologies and frameworks to efficiently simulate and create models of crises concerning, for instance, energy infrastructures and transport networks, using proper scenario-building tools. In this work, an efficient scenario-building tool will be combined with an effective human training tool and a decision support methodology. Critical decisions during a crisis are by nature costly (in terms of lives lost and/or financially) if they happen to be wrong, and there is not much margin for error, if any [1]. Another difficulty is that a crisis needs a quick resolution. The main issues of concern during resolution are uncertainty and lack of information. Crisis managers must respond to a fast-developing crisis without having the necessary information at their disposal. If all the relevant information about the situation is present, the solution can be simplified algorithmically. The probabilistic nature of some events can also be optimized mathematically. The lack of some relevant information can be introduced into the system as uncertainty. Another difficulty is that solutions should be able to draw on past experiences, where an expert on the topic may have the best global view. This notion that the community of relevant experts is the best source of wisdom to handle the challenge of a crisis is the foundation of the tool proposed here. Training of crisis managers should focus on several important key items. The system used for training needs to model the relevant crisis situations in great detail, including the probabilistic nature of events. Simulation of the designed scenario should be as close to real life as it can be. Finally, the decision methodology needs to help the manager come up with the right decision as quickly as possible.
The novelty of the approach will be including
expert information into the decision methodology during a crisis. For that purpose we designed ExpertGrid, a grid of software agents modelling the elements cited above; furthermore, the adoption of a historical event database also helps to bring previous experiences into the decision. In this section the design of the software presentation layer, compliant with the service-based architecture defined for the TRIPLAN system, is detailed. Following the considerations reported above, the proposed TRIPLAN Web Interface software infrastructure is composed of a dedicated Presentation Business Logic tier (PBL tier) and of a set of user interaction services (UI services). The role of the Presentation Business Logic access tier is to provide access to the service-oriented business logic by abstracting the invocation of the Business Logic services; it can also manipulate the resulting data, offering a programmatic interface that can be consumed by the UI services. The PBL is instantiated once on the server side and communicates with the client side through AJAX technology. The user interaction services model the interoperation with the user; each service is bound to a dedicated presentation interface and to one or more specific services of the PBL. The UI services can be composed within a web portal and thus consumed through the web by the users. In order to manage user-profiling issues, n instances of the UI services can be instantiated per user group or membership.
3.2 Objectives of the Work
As explained above, the state of the art for uncertainty handling in general, and crisis management in particular, is the division between human uncertainty (controversies) and data uncertainty (extrapolation algorithms). The aim of this work is to contribute to the fusion of these two categories, training the crisis manager to utilize both. Crises have two difficult features: they are unexpected, and they require a fast response. The first feature means that one cannot anticipate the particularities of a crisis. Crises come when least convenient, they involve targets that are a surprise, they employ means that are not appreciated beforehand, and they unfold along a pathway that was never charted or considered ahead of time. This happens with natural crises, like earthquakes, and definitely with man-made crises: terrorists aim to surprise on all counts (and unfortunately they succeed). Being unable to anticipate a crisis in detail means that it cannot be prepared for algorithmically. This runs against the other feature of crises: they must be handled under a tight constraint of time. This in turn means that the crisis manager does not have the time for large conferences where a great number of participants will hash things out, discuss, propose, counter-propose, and the like. In other words, a crisis manager needs to rely on human experts (not formulas or algorithms), he or she needs plenty of them (to cover every possible needed area of expertise), and that manager cannot engage all those experts in a lengthy conference (he or she must come up with a decisive answer right away). These constraints lead to a crisis management strategy based on the following concept: a small executive team evaluates the situation and
A. Grosso et al.
comes up with one or more solution concepts, expressed as action scenarios. These scenarios will subsequently be considered by a large group of well-initiated experts. The opinions of these experts will be integrated into a summary judgment for every scenario. This summary judgment will rank the evaluated scenarios according to the wisdom of the many relevant experts. The ranking will be forwarded to the crisis manager for the final decision. This strategy will accomplish three critical objectives:
1. All the relevant areas of expertise will contribute to the final decision.
2. The crisis manager will not be burdened with large committees and endless debates.
3. The participants will have a powerful training tool to practice over virtual scenarios.
The proposed strategy requires valid, relevant, high-quality action scenarios to work on. Collecting all relevant information during a crisis poses a critical challenge, as the response time is critical and a wrong decision may have devastating consequences. Our belief is that the project will overcome these difficulties and answer challenging questions. Scenario creation and execution are focal points: the scenario should include all related details, and the execution should be as close to real time as possible. The scenario builder will be designed as a dynamically growing tool, since it is very difficult to estimate and enter all parameters of all possible events. Each event that actually occurs will help the builder gather more detailed information about an event, so that these features can be included in the next scenario. Finally, another relevant aspect is the decision process, since it should include a past-event database and the opinions of current crisis experts, which should enter the decision in an efficient way. The general idea is to bring community wisdom to bear on the issue at hand, without overwhelming the planning team, which must be small, tight, and focused.
ExpertGrid does this by having the planning team develop action scenarios, and then referring these scenarios to the larger group of event experts. The various opinions of the group members are integrated into a single summary opinion. Such integration is possible by asking all the experts to express their opinions as binary votes. Each vote is fed into the system, and a dedicated software agent integrates them into a single result. The impact of each vote depends on pre-selected factors (configuration parameters for the given scenario).
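A minimal sketch of this vote-integration step follows; the weighting scheme and approval threshold are hypothetical configuration parameters, not taken from ExpertGrid.

```python
# Hedged sketch of the vote-integration agent: fold weighted binary votes
# into one summary result (weights and threshold are invented examples).

def integrate_votes(votes, weights, threshold=0.5):
    """votes:   expert -> True/False (approve/reject the action scenario)
    weights: expert -> pre-selected impact factor for the given scenario
    Returns (approved, score), where score is the weighted approval ratio."""
    total = sum(weights[e] for e in votes)
    score = sum(weights[e] for e, v in votes.items() if v) / total
    return score >= threshold, score

votes = {"logistics": True, "medical": True, "security": False}
weights = {"logistics": 1.0, "medical": 2.0, "security": 1.0}
print(integrate_votes(votes, weights))  # (True, 0.75)
```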
3.3 Project Outline

The following scheme portrays the configuration of the proposed crisis training system. A brief description of the components depicted in Fig. 1 follows.
An Agent Service Grid for Supporting Open and Distance Learning
Fig. 1 The whole CMS architecture
3.3.1 Historical and Event Database

This component addresses the problem of using information about past experiences and related decisions. The database is divided into two parts: one database (the historical database, HDB) keeps all past events and decisions; the outcomes of the decisions are stored as well, forming a baseline for future events. The other database (the event database, EDB) keeps all relevant information about current and possible future events. The structure of each database will improve with each included crisis or event.

3.3.2 Decision Methodology Using Expert Information

The decision methodology has been implemented through a dedicated software agent, where the final decision is the combination of opinions.

3.3.3 Scenario Building and Execution

The Scenario Builder is based on the analysis of data about the current crisis collected over time in the EDB. It uses knowledge, such as classification rules, acquired from the analysis of reference cases stored in the HDB or provided by domain experts, to identify and rank possible response scenarios.
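The two-database split of Sect. 3.3.1 could be modelled along the following lines. This is a hedged Python sketch; the field names and the similarity test are invented for illustration.

```python
# Toy model of the HDB/EDB split: the HDB keeps past events, decisions and
# their outcomes (the baseline), the EDB keeps current/future events.
from dataclasses import dataclass, field

@dataclass
class HistoricalRecord:
    event: str
    decision: str
    outcome: str          # stored so it can serve as a baseline later

@dataclass
class HistoricalDatabase:
    records: list = field(default_factory=list)

    def baseline_for(self, event_kind):
        """Past decisions for similar events, to inform future ones
        (naive substring matching stands in for real case retrieval)."""
        return [r for r in self.records if event_kind in r.event]

@dataclass
class EventDatabase:
    current_events: list = field(default_factory=list)

hdb = HistoricalDatabase()
hdb.records.append(HistoricalRecord("rail derailment", "evacuate line", "contained"))
print([r.decision for r in hdb.baseline_for("derailment")])  # ['evacuate line']
```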
3.3.4 Decision Under Uncertainty

As identified above, uncertainty is at the heart of the challenge for the crisis manager. Hitherto, the technologies for uncertainty handling have been divided into two main categories: human and data. The former deals with controversial issues where credible experts find themselves opposing each other; the latter deals with missing data that has to be modelled and guessed. The system handles the problem of merging these two categories. Indeed, a crisis manager will need tools to complement missing data, as well as tools to resolve human controversies.
3.3.5 Visualization Tools

A web-based user interface is included to display the risk analysis results in the form of a crisis analysis map with all risk probabilities and responses.
3.4 The Role of the Information Manager

This unit is the repository for storing data on past crises or relevant case studies, and for recording the evolution of the simulated training sessions, including emerged response scenarios, decisions/actions, consequences, and qualified evaluations. This component is based on a suitable Ontological Model (OM) allowing a formal description of static data (parameters, localizations, quantities, and so on) and dynamic data (the evolution of the involved processes). Hence, the OM provides the system with flexibility, so that it can be extended and applied to different contexts through appropriate customization. The OM will also be the base for the analytical process on which the Scenario Builder module rests. Basically, to handle a crisis in a real or simulated context, one needs to collect (and store) all the data arriving over time from the environment, from any available information source. The data can be relevant to a number of different aspects, such as occurrences of facts from the crisis field, forecasts, and availability of resources, and can be affected by uncertainty, so that they may be changed (refined, corrected) over time. Each crisis is different, the unexpected and the surprise are prevalent, so building a crisis database, as well as a relevant ontological model, is a special challenge. Their construction should be derived from the entities, fact occurrences, and data parameters that are most important for the crisis manager when deciding on countermeasures. The basic parameters are easy to identify: every crisis has to be defined by the journalistic formula of what? where? who? when? why? These parameters should be identified in as much detail and specificity as possible. Then the database has to feature softer data: escalation scenarios and their probabilities, side effects and their probabilities, political, social, and other ramifications and their probabilities, etc.
The crisis event database has to allow for a large number of sources to register
their information. This information is expected to be mutually inconsistent and of varying credibility. It should be organized in a clear way to allow the crisis manager to reach an optimal decision. Furthermore, ontological models can be exploited to perform further reasoning activities: consistency checking, to ensure that an ontology does not contain any contradictory facts; concept satisfiability, to determine whether a class (ontological concept) is unsatisfiable, which would make the whole ontology inconsistent; classification, computing the subclass relations between every pair of named classes to create the complete class hierarchy; and realization, to find the most specific classes that an individual belongs to.
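Two of the reasoning services listed above, classification and concept satisfiability, can be illustrated on a toy, hand-rolled ontology. A real system would delegate this to an OWL reasoner; the class names below are invented.

```python
# Toy illustration of classification (transitive subclass closure) and
# concept satisfiability (a class below two disjoint classes is empty).

def classify(subclass_of):
    """Classification: compute the transitive subclass closure."""
    closure = {c: set(parents) for c, parents in subclass_of.items()}
    changed = True
    while changed:
        changed = False
        for c, parents in closure.items():
            for p in list(parents):
                extra = closure.get(p, set()) - parents
                if extra:
                    parents |= extra
                    changed = True
    return closure

def unsatisfiable(closure, disjoint_pairs):
    """Satisfiability check: classes whose ancestors contain a disjoint pair."""
    return {c for c, ancestors in closure.items()
            for a, b in disjoint_pairs
            if a in ancestors | {c} and b in ancestors | {c}}

sub = {"FreightTrain": {"Train"}, "Train": {"Vehicle"},
       "GhostTrain": {"Train", "Building"}}
closure = classify(sub)
print(sorted(closure["FreightTrain"]))                       # ['Train', 'Vehicle']
print(sorted(unsatisfiable(closure, [("Vehicle", "Building")])))  # ['GhostTrain']
```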
3.5 Building the Training Scenarios

The Scenario Builder develops response scenarios based on the best available data. Such scenarios are proposed to the trainee to be analysed, possibly changed, and selected. The Scenario Builder offers facilities that help the trainer create a training scenario (Scenario Editor) and execute (simulate) the given scenario (Scenario Execution). The Scenario Builder is based on the analysis of data about the current crisis collected over time in the EDB. It uses knowledge, such as classification rules, acquired from the analysis of reference cases stored in the HDB or provided by domain experts, to identify and rank possible response scenarios. To this end, knowledge extraction and data mining procedures have been developed to (a) classify events, (b) identify recurrent sequences of happenings associated with classes of events, and (c) identify promising rules for event treatment according to past experience; in the current implementation, case-based and decisional binary-tree extraction methods can be exploited to achieve this goal. The formal description of the simulated crisis under concern by means of the designed OM eases this task, so that the structure adopted to represent knowledge can be flexible and adaptable to different contexts. Pattern-matching procedures are designed to assign the incumbent event, with a computed confidence level, to the reference event classes, and then to activate knowledge-base rules for scenario generation. Multiple-criteria decision algorithms are used to rank the emerged scenarios. With reference to a railway crisis situation, a sketched ontology describing the entities involved in this kind of scenario has been developed (an example is shown in Fig. 2).
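In miniature, the pattern-matching and ranking steps might look as follows. Feature-overlap matching and a weighted-sum multiple-criteria ranking are stand-ins for whatever the actual implementation uses; all names and numbers are hypothetical.

```python
# Toy sketch: assign an incumbent event to a reference class with a
# confidence level, then rank candidate scenarios by weighted criteria.

def match_confidence(event_features, class_features):
    """Confidence = fraction of the reference class's features observed."""
    overlap = len(set(event_features) & set(class_features))
    return overlap / len(set(class_features))

def rank_scenarios(scenarios, weights):
    """Multiple-criteria ranking by weighted sum of criterion scores."""
    def score(s):
        return sum(weights[c] * v for c, v in s["criteria"].items())
    return sorted(scenarios, key=score, reverse=True)

event = ["rail", "fire", "night"]
print(match_confidence(event, ["rail", "fire"]))  # 1.0

scenarios = [
    {"name": "evacuate", "criteria": {"safety": 0.9, "cost": 0.2}},
    {"name": "reroute",  "criteria": {"safety": 0.6, "cost": 0.8}},
]
ranked = rank_scenarios(scenarios, {"safety": 2.0, "cost": 0.5})
print([s["name"] for s in ranked])  # ['evacuate', 'reroute']
```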
Relying on this ontology, the trainer is able to instantiate a training session (namely a crisis scenario), filling the Event Database with information about the context (railway line, stations, trains, passengers, etc.) and about the temporal evolution of the crisis (the workflow describing the main steps of the crisis scenario). Contextual information and actions to carry out are instances of the concepts described in the
Fig. 2 Example of ontological model
railway ontology. Hence, if the concept “train” is formally defined, the trainer must create an instance with the details and attributes of a particular train (number of wagons, passengers, etc.). These instances are linked to the states of the workflow and gradually displayed to the trainee, who traces the flow of activities and events and takes a decision when the system queries her/him. The task of the trainer is then to construct the workflow modelling the crisis scenario and to instantiate the concepts of the railway ontology, in order to provide the trainee and the system with information about the current training session (Fig. 3).
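The linkage between workflow states and ontology instances could be sketched as follows; this is a toy Python illustration with invented names, not the actual system.

```python
# Toy sketch: ontology instances are attached to ordered workflow states and
# revealed step by step; a state may carry a query that the trainee answers.

class TrainingWorkflow:
    def __init__(self):
        self._states = []   # ordered (state_name, instances, query) tuples
        self._cursor = 0

    def add_state(self, name, instances, query=None):
        self._states.append((name, instances, query))

    def step(self):
        """Reveal the next state's contextual information to the trainee."""
        name, instances, query = self._states[self._cursor]
        self._cursor += 1
        return {"state": name, "instances": instances, "query": query}

wf = TrainingWorkflow()
wf.add_state("departure", [{"concept": "train", "wagons": 8, "passengers": 240}])
wf.add_state("derailment", [{"concept": "alarm", "km": 163.5}],
             query="Which response scenario do you select?")
first, second = wf.step(), wf.step()
print(first["state"], "->", second["state"])  # departure -> derailment
```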
3.6 The Agents Platform Federation

The system described above is entirely implemented on the AgentService Platforms Federation, in order to make the system distributed, scalable, and easy to access. We can distinguish two kinds of nodes in the grid: the manager node and the simulation node. In each manager node of the federation, an instance of the Information Manager is deployed, in order to handle at least one specific scenario. Hence, this node manages the data and tools related to a given context scenario and orchestrates the decisional workflow of the emerged response scenarios. The manager nodes can be replicated, enhancing robustness by means of the AgentService platform features. The payload required for simulating a crisis scenario is divided among the
Fig. 3 The scenario building process
agents deployed over the grid of agent platforms in the so-called simulation nodes. The number of agents and simulation nodes can vary dynamically at runtime, scaling to follow scenario complexity and client requests.
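A toy sketch of dividing the simulation payload among agents on simulation nodes follows. Round-robin placement is an assumption made purely for illustration; AgentService presumably applies its own placement policies.

```python
# Toy round-robin placement of simulation agents onto simulation nodes;
# nodes and agents can be added at runtime and the payload re-divided.
from itertools import cycle

def assign_agents(agent_ids, simulation_nodes):
    placement = {node: [] for node in simulation_nodes}
    for agent, node in zip(agent_ids, cycle(simulation_nodes)):
        placement[node].append(agent)
    return placement

agents = [f"agent-{i}" for i in range(5)]
print(assign_agents(agents, ["sim-node-1", "sim-node-2"]))
# {'sim-node-1': ['agent-0', 'agent-2', 'agent-4'], 'sim-node-2': ['agent-1', 'agent-3']}
```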
4 Conclusions, Current Developments, and Future Work

In this chapter, we have presented ExpertGrid, a distributed and scalable system for distance learning in crisis management. ExpertGrid is a multi-agent society composed of a grid of AgentService platforms. Thanks to the AgentService federation suite, the system is scalable in the number of nodes and agent instances, and robust by means of the persistence of the agent state and the replication of platform nodes. Ontologies and workflows are adopted in order to solve data consistency issues when integrating distributed information, and to enhance the expressiveness available to experts and managers in modelling crisis scenarios and decisional processes. ExpertGrid adopts the “learn by mistake” approach, exploiting a network of crisis response experts who prepare the learning scenarios and whose advice and decisions are mediated through a voting mechanism. At present, the implementation of the ExpertGrid software infrastructure has been completed, relying on the AgentService suite. The next steps of the project will consist in modelling crisis scenarios, monitoring and collecting data about them, and finally starting a real training phase for
real crisis management personnel. At the moment, we have taken the highway traffic system as the reference scenario, and we are working with the help of experts on modelling the related ontology and workflows. The objective of this work is to build a relevant set of crisis scenario models related to the highway traffic context, in order to populate the knowledge base of the system. Once completed, the system can be applied in the training process; this will give us feedback about the effectiveness of the proposed e-learning strategies, which is in some sense independent of the good results achieved at the software infrastructure level.
Education and Training in Grid-Enabled Laboratories and Complex Systems Luca Caviglione, Mauro Coccoli, and Elisabetta Punta
Abstract The development of grid-oriented technologies revamps the idea of using virtual laboratories for education in schools and universities, and for the training of professionals. This is a challenging idea, both from the pedagogical and the technical point of view. The model of hands-on experience can be implemented while reducing the need for physical machinery, thus removing inherent boundaries and limitations. Moreover, this decreases costs and promotes the sharing of exclusive resources. In this perspective, the chapter presents possible architectural blueprints for the usage of distributed simulation and emulation facilities for e-learning purposes, as well as for scientific research, resulting in a novel grid-supported collaborative learning environment.
1 Introduction

The idea and implementation of remote laboratories are evolving as Internet technologies change. In particular, it is worth considering the improvements that can be achieved through the exploitation of grid-oriented architectures and peer-to-peer (p2p) overlays. The use of virtual laboratories, simulated devices, and plants can be considered the remote version of the traditional “hands-on lab,” based on the “learning by doing” strategy (see, e.g., [1]).
L. Caviglione () • E. Punta Institute of Intelligent Systems for Automation (ISSIA) – Genoa Branch, Italian National Research Council (CNR), Via de Marini 6, I-16149, Genova, Italy e-mail:
[email protected];
[email protected] M. Coccoli Department of Communications, Computer and Systems Science (DIST), University of Genoa, Via Opera Pia 13, I-16145, Genova, Italy e-mail:
[email protected] F. Davoli et al. (eds.), Remote Instrumentation for eScience and Related Aspects, DOI 10.1007/978-1-4614-0508-5 10, © Springer Science+Business Media, LLC 2012
L. Caviglione et al.
This is recognized to be a powerful educational model, since it traces the way people learn. The process is also natural, since it happens through direct experience. As demonstrated, this model can embed knowledge in long-term memory, creating unconscious competence and scientific intuition in learners [2]. According to Schank [3], “Effective e-learning requires real experience,” “We learn best from reality,” and “There is no substitute for natural learning by doing.” Education and training can therefore take great advantage of a combination of simulation and emulation techniques. Simulation is widely adopted in complex and hazardous work environments, for example, healthcare and medicine, avionics, and industrial and nuclear power plants. To pursue this vision, we base our framework on the technological pool already used to merge the grid with virtual laboratories. By exploiting the intrinsic “resource-sharing” attitude of grid middleware, we can envisage a seamless multiuser experience, where many students conduct the same activity simultaneously. This can push the educational paradigm of knowledge construction to enhance the information transfer process [4] with a social learning model, based on the idea of human peers actively collaborating. Accordingly, learning can be considered a social process in which knowledge is shared and reused among participants “playing” for a common objective. This can be efficiently supported by a grid computing architecture, as proposed in [5]. We also consider p2p as an enabler to empower the information transfer process (see, e.g., [6], and references therein). The integration between the p2p communication paradigm and grid architectures has already been investigated in the literature. Apart from the utilization of “classic” file-sharing services to move data across grid nodes (or, at least, to a remote collector), their exploitation in elaborate tasks remains only theoretical.
In fact, many management duties (such as “job assignment”) are too complex. Conversely, p2p architectures have proven effective at handling distributed searches in a scalable way. As an example, we mention the cooperative look-up performed via Distributed Hash Tables (DHTs). Thus, we still envisage relying upon p2p to manage searches, and to publish and retrieve remote and sparse resources or users. For the specific case of e-learning, [7] proposes a p2p framework for organizing the cooperation and set-up phases of networked laboratories for didactical purposes. Additionally, to avoid the overlap of too many technologies, mostly incompatible, heterogeneous, and outside any standardization pipeline, the authors use the Session Initiation Protocol (SIP) to build the overall infrastructure. This also guarantees an abstract namespace for adding/removing laboratories and equipment on the fly and for using them. Taking into account the aforementioned technologies, we can focus on sensors, devices, and complex systems and control, evaluating how the educational objectives can be merged with the necessity of expanding the number of experiments. We underline that the goal of this chapter is not the investigation of implementation issues at the middleware level, but the explanation of how the middleware can be used to enable new learning paradigms, and of how pre-existing functionalities can offer learners access to, and control of, complex systems and machinery. Summarizing, we concentrate on the “space of possibilities” achievable through the different grid features.
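As a toy illustration of the DHT-style cooperative look-up, the following sketch hashes resources and nodes onto the same identifier ring and picks the responsible node. The node names are invented; real DHTs such as Chord add multi-hop routing and churn handling on top of this idea.

```python
# Toy consistent-hashing look-up: a resource key is owned by the first node
# whose hash lies at or after the key's hash on the ring (with wrap-around).
import hashlib

def _h(key, space=2**16):
    return int(hashlib.sha1(key.encode()).hexdigest(), 16) % space

def responsible_node(resource_key, node_ids):
    ring = sorted((_h(n), n) for n in node_ids)
    k = _h(resource_key)
    for node_hash, node in ring:
        if node_hash >= k:
            return node
    return ring[0][1]  # wrap around the ring

nodes = ["lab-genoa", "lab-poznan", "lab-trieste"]
node = responsible_node("robot-arm-dataset", nodes)
print(node in nodes)  # True: every key maps to exactly one live node
```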
The remainder of this chapter is structured as follows: Sect. 2 introduces possible applications, ideas, and open issues of virtual laboratories, remote instrumentation, and plants for education and training purposes. Section 3 showcases different case studies about the adoption of virtualized resources to conduct experiments. Finally, Sect. 4 concludes the chapter, presenting issues and possible future developments.
2 Virtual Laboratories, Remote Instrumentation, and Complex Systems

The use of grid technology for building e-learning environments for control education is a suitable solution to cope with performance issues. In fact, it offers computing as a utility, making the simulation (or emulation) of complex systems feasible [8]. An example of the mixed usage of a traditional e-learning environment with a simulated laboratory relying on grid middleware is presented in [9], where the grid is exploited to run separate experiments, or several instances of a simulation task. This allows building complex and rich environments that can also be extended with additional elements, owing to the intrinsic scalability of the grid. General concepts and key architectural elements of virtual laboratories are reported in [10]. Both laboratories and related facilities are core components in the education of students, especially in scientific disciplines [11]. They enable students to learn through hands-on experience, which is the basis for many educational activities as well as for training at the workplace. This can be done through two different grid technologies. Specifically:
– Computational grid: it provides access to a shared pool of resources so that high-throughput applications can be performed over distributed machines.
– Data grid: a particular kind of architecture specifically tweaked for handling data storage as the shared resource. Thanks to this type of grid, the storage capacity can be enhanced and accessed in a unified way, also exploiting different types of permanent storage systems and devices.
Summing up, grid computing can guarantee users access to a relevant amount of computing resources when required, or when large sets of data have to be processed. Additionally, we mention a further realization related to network-enabled sensors and devices that can instrument the grid. We will address this particular use case in Sect. 3.
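The pattern of running several instances of a simulation task, as in [9], can be mimicked locally with a worker pool standing in for grid nodes. This is a deliberately simplified analogue: the toy linear-congruential iteration is a placeholder for a real simulation kernel.

```python
# Toy local analogue of dispatching independent simulation instances to
# grid nodes: each seed is one experiment, run concurrently by the pool.
from concurrent.futures import ThreadPoolExecutor

def run_experiment(seed):
    x = seed
    for _ in range(1000):
        x = (1103515245 * x + 12345) % 2**31  # placeholder "dynamics"
    return seed, x

with ThreadPoolExecutor(max_workers=4) as pool:
    results = dict(pool.map(run_experiment, range(4)))
print(sorted(results))  # [0, 1, 2, 3]
```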
2.1 Virtual Laboratories

Traditional hands-on laboratories stand in contrast to remotely operated ones. While powerful computers are available at low cost, sophisticated instruments are not. Simulations can mitigate this issue, but the interaction with real devices has no equivalent. As possible examples of complex machinery, let us consider
the following use cases: (a) hospital equipment for surgery and the need to train doctors to operate it, and (b) practice on experimental setups in the field of robotics. Possible tradeoffs, which become feasible thanks to advancements in networking and grid frameworks, are: (a) to accurately emulate a complex hardware deployment, thus exchanging “real hardware” for computing power, resulting in a virtual device, or (b) to make a real plant remotely available through the Internet, resulting in a remote device. Accordingly, the definition of a remote or virtual laboratory is then straightforward. To develop a virtual laboratory, a suitable architecture has to be identified, so as to provide appropriate access to, and control of, the remote hardware, and to guarantee the requested functionality. A common solution is a custom Web-based facade with control panels and instruments, which may optionally include reports of measurements, data, or audio/visual feedback. We also mention the possibility of creating sophisticated Graphical User Interfaces (GUIs) by means of Web 2.0 paradigms, such as Asynchronous Javascript And XML (AJAX). More sophisticated requirements could be: (a) management of multiple devices and experiments; (b) operation of the facility by a large number of users; (c) reusable and equipment-independent software (see, e.g., [12] for a prototypal implementation with such properties). Since the grid heavily relies upon the network infrastructure, we can envisage deploying additional services, for instance, using a portion of the available bandwidth to implement learning strategies based on close interaction among students, which is part of the laboratory experimentation process. Therefore, a proper network (or overlay) infrastructure empowers this aspect and can support the student-to-equipment, student-to-student, and student-to-instructor relationships [13].
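The equipment-independent requirement (c) can be sketched as a thin command-dispatch layer behind the Web facade: panel actions are translated into generic commands, so the same UI code can drive different devices. All device names and commands below are hypothetical.

```python
# Toy sketch of an equipment-independent control-panel layer: the facade
# routes generic panel actions to whichever device driver is registered.

class DeviceDriver:
    """Equipment-specific backend behind a generic command interface."""
    def __init__(self, name):
        self.name = name
        self.state = {}

    def execute(self, command, value=None):
        if command == "set":
            self.state.update(value)
            return "ok"
        if command == "read":
            return dict(self.state)
        raise ValueError(f"unsupported command: {command}")

class LabFacade:
    """Reusable control-panel layer; devices are registered at runtime."""
    def __init__(self):
        self.devices = {}

    def register(self, device):
        self.devices[device.name] = device

    def panel_action(self, device_name, command, value=None):
        return self.devices[device_name].execute(command, value)

facade = LabFacade()
facade.register(DeviceDriver("oscilloscope-1"))
facade.panel_action("oscilloscope-1", "set", {"volts_per_div": 0.5})
print(facade.panel_action("oscilloscope-1", "read"))  # {'volts_per_div': 0.5}
```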
2.2 Simulation and Emulation of Complex Systems

Through simulation one can achieve a real-time interactive experience and the ability to work in safe and controlled environments. It is also possible to realize a wide variety of use cases and scenarios, also by mixing multiple learning strategies and styles. Additionally, a simulated environment guarantees many important features, such as reliability, reproducibility of experiments, and availability of feedback signals. It also allows data collection and analysis, and the adoption of trial-and-error methodologies. As a consequence, students are motivated and willing to work in remote labs [14]. According to classical educational models, a simulation allows learners to perceive an event as real. The main objectives of a simulation-based e-learning strategy are as follows [15]:
1. To contextualize the learning process in a real-life scenario (e.g., one learners may face on the job)
2. To provide a safe virtual environment where learners have the opportunity to practice their skills without fear of real-life consequences
3. To use remedial feedback to explain the consequences of mistakes and to reinforce best practices
4. To simplify and control reality by removing complexities existing in real life, so that the learner can focus on the knowledge to be learnt effectively and efficiently
To simulate physical phenomena, mathematical models must be designed. To this aim, we need the support of computing resources, whose demand increases with the complexity of the models to be investigated. With the objective of simulating complex systems, the real-world devices composing them must be modeled, even if with some approximations, according to their dynamics. Subsequently, they must be properly arranged (or merged) to reflect the overall system. The more complex the system and the higher the sampling rate, the more accurate the model, and thus the more powerful the computing system has to be. When real-time simulation is needed, all of these aspects are further amplified. Another key point is the control system and the relevant control and decision strategies. These are often derived from the “past history” of the system itself; thus a large amount of data has to be delivered across the network and continuously/iteratively processed. To make this vision feasible, we identify grid-computing technologies and architectures as the solution for obtaining the needed “raw” power in a scalable and cost-effective manner.
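The composition of per-device models into an overall system model, stepped at a fixed sampling rate, can be illustrated with two chained first-order lags integrated by forward Euler. This is a toy model, not a real plant; halving the time step doubles the work for the same horizon, which is the accuracy/compute tradeoff mentioned above.

```python
# Toy modular simulation: each device is modeled separately, then the
# models are chained (each output feeds the next) and stepped together.

def make_first_order(tau):
    """One modeled device: first-order lag dx/dt = (u - x) / tau."""
    def deriv(x, u):
        return (u - x) / tau
    return deriv

def simulate(devices, dt, steps, u=1.0):
    """Forward-Euler integration of the chained device models."""
    states = [0.0] * len(devices)
    for _ in range(steps):
        inp = u
        for i, deriv in enumerate(devices):
            states[i] += dt * deriv(states[i], inp)
            inp = states[i]
    return states

# 10 s horizon at a 100 Hz sampling rate; both states settle near u = 1.
final = simulate([make_first_order(1.0), make_first_order(0.5)], dt=0.01, steps=1000)
print(all(0.9 < s <= 1.0 for s in final))  # True
```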
2.3 Control in Networked Systems

Many industrial companies and institutions have shown interest in exploiting the possibilities offered by virtual laboratories, not only for training but also for activities supporting the development and design of complex systems. Performing control over a network has some attractive benefits. However, some control problems arise in networked control systems. Regardless of the type of network used, performance will degrade due to network delays, jitter, and the dynamics introduced by the protocol architecture in the control loop. Besides, the presence of a complex layered software architecture, as in grid systems, accounts for additional delays, e.g., data percolation across multiple software modules. In the worst case, the introduced delays can destabilize the system by reducing the stability region. If the controlled plant is linear (or its linearized model provides a good dynamic approximation), every methodology can be applied. However, if a plant is nonlinear, only suitably designed robust control methodologies can be applied. The performance of classical control methods can be substantially degraded by delays; thus specific controllers have to be designed [16]. As an example, let us consider a simple positioning control problem. A mass must be moved from its initial position to the final desired one (e.g., the origin of the space). A dissipative viscous frictional force acts against the motion of the mass. This system is controlled over a network, which is modeled as a bounded
L. Caviglione et al.
time delay, i.e., a delay bandwidth pipe network model without jitter. A control designed according to a second-order sliding mode algorithm [17] is applied. The system trajectories converge to a limit cycle, which is orbitally asymptotically stable and globally attractive [18]. Instead, if a suitably modified second-order sliding mode algorithm [19] is applied, the control law guarantees a step-by-step reduction of the size of the limit cycle, through the choice of a time-dependent, piecewise constant control modulus. Such results are depicted in Fig. 1. The proposed control algorithm assures a faster damping of the oscillations of both position and velocity, as shown in Fig. 2. Network delays may not significantly affect an open-loop control system; however, the open-loop configuration may not be appropriate for high-performance control applications, which require feedback data sent across the network. Existing constant time-delay control methodologies may not be directly suitable for controlling a system over the network, since network delays are usually time-varying. Therefore, to handle network delays in a closed-loop control system, advanced methodologies are required. These techniques involve the design of both suitable control strategies [20] and observers of the system's state vector [21].
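The delay-induced limit cycle described above is easy to reproduce numerically. The following sketch is our own illustration, not the authors' code: it uses a simple first-order relay control rather than the second-order sliding mode algorithms of [17-19], and all parameters (mass friction, control amplitude, initial position) are invented for the example.

```python
from collections import deque

def simulate(delay_steps, t_final=30.0, dt=0.01):
    """Mass with viscous friction, driven toward the origin by a relay
    (sliding-mode-like) control received through a network modeled as a
    pure bounded delay of delay_steps * dt seconds. Parameters are
    illustrative, not taken from [17-19]."""
    x1, x2 = 10.0, 0.0                 # position, velocity
    c, U = 0.5, 2.0                    # viscous friction, control amplitude
    buf = deque([0.0] * delay_steps)   # network delay line for the control
    for _ in range(int(t_final / dt)):
        sigma = x1 + x2                # sliding surface
        buf.append(-U if sigma > 0 else U)
        u = buf.popleft()              # the plant sees a delayed control
        x1 += x2 * dt                  # forward-Euler integration
        x2 += (-c * x2 + u) * dt
    return x1, x2
```

With a nonzero delay, the state no longer reaches the origin exactly but settles into a small, bounded oscillation around it, which is the qualitative behaviour the limit-cycle results above describe.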
3 Case Studies

Depending on the available equipment, hardware, and network resources, various configurations are possible. One can simulate the control strategy for the real plant, or the entire plant to be controlled. It may also be necessary to train personnel in the use of equipment via remote controllers. In this section, some case studies are presented as possible usage patterns of grids in remotely operated laboratories and for simulation/emulation purposes.
3.1 Remote Control on a Real Plant

To make remote control operations on real devices and/or industrial plants feasible, we propose the reference system architecture presented in Fig. 3. The idea at the basis of this case study is the availability of a complex system that can be used for educational purposes, such as experimental activities or the training of specialized personnel. The input signals for the real plant are provided by a suitable control system, designed according to the specific needs; it can be either a real or an emulated device. Let us consider the case in which, for research and/or training purposes, the control system is simulated. The required performance in terms of sampling frequency and response time may cause very high computational requests, and
Education and Training in Grid-Enabled Laboratories and Complex Systems
Fig. 1 The system's trajectory under the action of the suitably modified second-order sliding mode control algorithm
non-negligible usage of network resources. The grid architecture is exploited so that the computational activity (mainly number crunching) can be distributed over the grid itself, thus reducing the load per single node. It also makes it possible to exploit load-balancing policies, or to aggregate the needed computing resources,
Fig. 2 Comparison between simulation results of the networked control system. Case 1 (dotted line): a classical second-order sliding mode control is applied. Case 2 (solid line): a suitably modified second-order sliding mode control is applied
which are not available locally. As concerns the network, proper mechanisms can be envisaged for providing the needed degree of Quality of Service (QoS) to specific flows.
3.2 Remote Access to Sensor Networks

We now consider a different situation, i.e., the remote access to a Sensor Network (SN), as depicted in Fig. 4. In this case, the SN is the source of the external data driving the behavior of the plant, while also sharing the same network infrastructure. We point out that our aim is not to investigate all the possibilities and related interoperability issues of joining SNs and the grid. Rather, we would like to analyze, albeit in abstract terms, the "functional space" that SNs and the grid enable, so as to enrich learning facilities and virtual laboratories. This makes it possible to stress the access to a peripheral device, which can also be the aggregation/composition of a complex set of machinery. In this perspective, a paradigmatic example can be found in an SN.
Fig. 3 Remote control of a real plant through a simulated complex control system
Fig. 4 A reference deployment where data are provided by remotely accessing a sensor network
In fact, SNs are often remotely accessed to acquire data to be used for different purposes. Nevertheless, this also highlights the following technological aspects:
– SNs are commonly implemented using wireless technologies, such as the IEEE 802.11 family of standards, Bluetooth, and ZigBee. Consequently, joining one or several SNs results in a more heterogeneous network deployment. Since grid-based infrastructures are often deployed over wired technologies, this requires adopting countermeasures, both in terms of protocols and of infrastructural elements.
– SNs are usually "hidden" behind an ad hoc software infrastructure, as well as architectural components (e.g., a centralized data sink or collector), so as to be perceived as a standard data source, accessed via well-known paradigms, e.g., Web Services. Thus, proper software components or protocol adapters for accessing SN data with the needed requirements must be present within the overall grid architecture.
In a nutshell, an SN can be the data provider for a plant; this enables simulating, for didactical purposes, different contexts starting from real data. The stimuli can be routed through the grid infrastructure to the virtual plant and used to feed particular didactical models or practical exercises, for instance those aimed at understanding the tweaking of critical parameters. Conversely, an SN can also be emulated, to enable people to gain more insight into what can happen when using data provided by this kind of environment. A very simple "virtual" SN can be roughly implemented via a database, which can be remotely accessed with the proper protocols and mechanisms. However, since SNs are complex tools, and given the dynamics injected by the remote access, a simple database may not be sufficient to provide a proper understanding of their impact on the complete system. In fact, sensors can fail due to power-draining issues or network outages.
To this aim, the impact of having a complex remote-sensing infrastructure can be "simulated" by having a recorded data set that is delivered to the virtual plant after some specific processing. For instance, data can be pruned so as to represent the malfunctioning of some sensors, delayed or completely lost, and sent in irregular bursts so as to simulate problems in the data bearer devoted to delivering the quantities of interest. With respect to "race conditions," we can also imagine sharing a real SN to feed multiple virtual plants, allowing students to carry out their learning assignments by using real data. In the case of "races" over the SN, we can introduce the following procedural countermeasures: (a) using multiple databases to "emulate" the availability of different SNs, or (b) using a real SN to feed a database farm, thus solving all race issues. We point out that another possible application of a grid integrated with SNs is related to data fusion and analysis. As a final consideration, we underline that the joint utilization of SN and grid environments has already been investigated in the literature, even if not always from the perspective of e-learning (e.g., to enrich virtual laboratories). Concerning the issues in performing experiments on SNs in grid-based virtual laboratories, [22] showcases both the pedagogical and technical challenges of deploying a distributed laboratory for didactical aspects of SNs.
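The "pruning" of a recorded data set described above can be sketched very simply. The function below is our own illustration (the parameter names drop_prob and max_delay are invented): each recorded reading is either dropped, to emulate a failed sensor or a network outage, or delivered late, to emulate irregular bursts.

```python
import random

def degrade(readings, drop_prob=0.1, max_delay=3, seed=42):
    """Turn a recorded data set into an unreliable 'virtual SN' feed:
    each reading is dropped with probability drop_prob (a failed sensor
    or a network outage) or delivered up to max_delay slots late
    (irregular bursts). Parameter names are our own invention.
    Returns (delivery_slot, reading) pairs sorted by delivery slot."""
    rng = random.Random(seed)          # seeded: runs are reproducible
    delivered = []
    for slot, value in enumerate(readings):
        if rng.random() < drop_prob:
            continue                   # this sample is lost
        delivered.append((slot + rng.randint(0, max_delay), value))
    return sorted(delivered)
```

Because the random generator is seeded, the same degraded feed can be replayed to multiple virtual plants, which also sidesteps the race conditions discussed above.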
Fig. 5 Remote multiple access to simulated devices and/or control systems
Moreover, in [23] the case of Wireless Sensor Networks (WSNs) is discussed. This scenario is of great interest for e-learning purposes, e.g., when a merge between wireless technologies and virtual systems must be provided to learners. Finally, [24] (and references therein) surveys the usage of wireless grids to facilitate e-learning tasks, also taking into account interactions and contact points with WSNs.
3.3 Remote Multiple Access to Simulated Devices

The third use case, depicted in Fig. 5, represents the scenario in which plants or devices are not available to users. To cope with this, a simulated instance is implemented, based on mathematical models and known dynamics. Then, by using our reference model, access to the facility is granted, even if in a virtualized flavor; through a grid-based infrastructure, it is possible to create as many instances as needed. This guarantees remote multiple access to the device or plant. The reference architecture discussed in this use case can also "embed" (and, consequently, implement) the control system within the same environment. Additionally, the devices to be controlled, or automatically operated, can be real or emulated. In this case, lean Internet clients can be used to drive and control the simulated plant, resulting in a full virtualization of the deployment.
4 Conclusions and Future Work

In this chapter, through reference examples and possible high-level solutions, an overview of the realization of grid-enabled laboratories and of the simulation and control of complex systems has been presented. Specifically, we showcased issues and architectural choices to empower virtual laboratories accessed through the Internet and "juxtaposed" over a grid environment. As we presented, grids are evolving, and the concept of online learning through remote experiments is gaining consensus, favoring the realization of remote and/or virtual laboratories and training systems. At the same time, they make it easy to manage pools of resources shared across a large or remote population of heterogeneous users. One of the main reasons to adopt the grid as a technological enabler is also related to the improved bandwidth availability, which enables a better transmission of video (or multimedia) as feedback from remotely controlled devices. Grid services can be exploited to ease interaction and to enhance social activity among students and teachers, also by collaborating on experiments or by comparing the collected results of didactical experiences. By using grid middleware, SNs and actuators can be easily integrated into simulated systems, and real-world equipment can be linked to software components, such as soft-control systems: this is another step into the "Internet of Things" [25]. It would be interesting to implement real testbeds to evaluate the effectiveness of the proposed architectures and solutions, but this is difficult. At the time of writing, the majority of real plants of interest are both complex and expensive; consequently, they are not easily accessible, even for scientific purposes. At the same time, many of them are used for industrial purposes and are thus regarded only as "black-box" software components, i.e., modules to be plugged into simulated environments.
To cope with such drawbacks, future work aims at producing prototypal testbeds, to better quantify the pros and cons of the solutions presented in this chapter.
References

1. Ko, C.C., Chen, B.M., Hu, S., Ramakrishnan, V., Cheng, C.D., Zhuang, Y., Chen, J.: A Web-based Virtual Laboratory on a Frequency Modulation Experiment. IEEE Transactions on Systems, Man, and Cybernetics, Part C: Applications and Reviews, vol. 31, no. 3, pp. 295-303 (2001)
2. Cheetham, G., Chivers, G.: How Professionals Learn in Practice: an Investigation of Informal Learning Amongst People Working in Professions. Journal of European Industrial Training, vol. 25, no. 5, pp. 247-292 (2001)
3. Schank, R.C.: Designing World Class E-learning. McGraw-Hill (2002)
4. Carchiolo, V., Longheu, A., Malgeri, M., Mangioni, G.: A Model for a Web-based Learning System. Information Systems Frontiers, vol. 9, no. 2-3, pp. 267-282 (2007)
5. Vassiliadis, B., Xenos, M., Stefani, A.: Enabling Enhanced Learning in an Open University Environment: a Grid-aware Architecture. International Journal of Information and Communication Technology Education, vol. 5, no. 3, pp. 59-73 (2009)
6. Caviglione, L., Coccoli, M.: Peer-to-peer Infrastructures to Support the Delivery of Learning Objects. Proceedings of 2nd International Conference on Education Technology and Computer (ICETC 2010), pp. 176-180 (2010)
7. Caviglione, L., Veltri, L.: A p2p Framework for Distributed and Cooperative Laboratories. In: F. Davoli, S. Palazzo, S. Zappatore (eds.), Distributed Cooperative Laboratories: Networking, Instrumentation and Measurements, Springer, Norwell, MA, pp. 309-319 (2006)
8. Foster, I., Kesselman, C.: The Grid: Blueprint for a New Computing Infrastructure. Morgan Kaufmann, San Francisco (1999)
9. Szczytowski, P., Schmid, C.: Grid Technologies for Virtual Control Laboratories. Proceedings of 2006 IEEE International Conference on Control Applications, pp. 2286-2291 (2006)
10. Okon, M., Kaliszan, D., Lawenda, M., Stoklosa, D., Rajtar, T., Meyer, N., Stroinski, M.: Virtual Laboratory as a Remote and Interactive Access to the Scientific Instrumentation Embedded in Grid Environment. Proceedings of 2nd IEEE International Conference on e-Science and Grid Computing, pp. 124-128 (2006)
11. Corter, J.E., Nickerson, J.V., Esche, S.K., Chassapis, C., Im, S., Ma, J.: Constructing Reality: a Study of Remote, Hands-on, and Simulated Laboratories. ACM Transactions on Computer-Human Interaction, vol. 14, no. 2 (2007)
12. Bochicchio, M.A., Longo, A.: Hands-on Remote Labs: Collaborative Web Laboratories as a Case Study for IT Engineering Classes. IEEE Transactions on Learning Technologies, vol. 2, no. 4 (2009)
13. Lowe, D., Murray, S., Lindsay, E., Liu, D.: Evolving Remote Laboratory Architectures to Leverage Emerging Internet Technologies. IEEE Transactions on Learning Technologies, vol. 2, no. 4 (2009)
14. Ma, J., Nickerson, J.V.: Hands-on, Simulated, and Remote Laboratories: a Comparative Literature Review. ACM Computing Surveys, vol. 38 (2006)
15. Alessi, S.M., Trollip, S.R.: Computer Based Instruction: Methods and Development. Prentice Hall (1991)
16. Richard, J.-P., Dambrine, M., Gouaisbaut, F., Perruquetti, W.: Systems with Delays: an Overview of some Recent Advances. SACTA, vol. 3, no. 1, pp. 3-23 (2000)
17. Bartolini, G., Ferrara, A., Usai, E.: Output Tracking Control of Uncertain Nonlinear Second Order Systems. Automatica, vol. 33, no. 12 (1997)
18. Levaggi, L., Punta, E.: Analysis of a Second Order Sliding Mode Algorithm in Presence of Input Delays. IEEE Transactions on Automatic Control, vol. 51, no. 8 (2006)
19. Levaggi, L., Punta, E.: Variable Structure Control with Unknown Input-delay and Nonlinear Dissipative Disturbance. Proceedings of 16th IFAC World Congress (2005)
20. Polushin, I.G., Liu, P.X., Lung, C.-H.: On the Model-based Approach to Nonlinear Networked Control Systems. Automatica, vol. 44, pp. 2409-2414 (2008)
21. Cacace, F., Germani, A., Manes, C.: An Observer for a Class of Nonlinear Systems with Time Varying Observation Delay. Systems & Control Letters, vol. 59, pp. 305-312 (2010)
22. Christou, I.T., Efremidis, S., Tiropanis, T., Kalis, A.: Grid-based Virtual Laboratory Experiments for a Graduate Course on Sensor Networks. IEEE Transactions on Education, vol. 50, no. 1, pp. 17-26 (2007)
23. Panda, M., Panigrahi, T., Khilar, P.M., Panda, G.: Learning with Distributed Data in Wireless Sensor Network. Proceedings of 1st International Conference on Parallel Distributed and Grid Computing (PDGC 2010), pp. 256-259 (2010)
24. Li, G., Sun, H., Gao, H., Yu, H., Cai, Y.: A Survey on Wireless Grids and Clouds. Proceedings of International Conference on Grid and Cloud Computing, pp. 261-267 (2009)
25. Gershenfeld, N., Krikorian, R., Cohen, D.: The Internet of Things. Scientific American, vol. 291, no. 4, pp. 46-51 (2004)
Part III
Grid Infrastructure
SoRTSim: A High-Level Simulator for the Evaluation of QoS Models on Grid

Alessio Merlo, Angelo Corana, Vittoria Gianuzzi, and Andrea Clematis

Abstract We present SoRTSim, a high-level simulator designed to evaluate tools built on top of the middleware to support additional services besides the standard ones, particularly to allow the execution on Grid platforms of applications with strict QoS requirements, up to Soft Real-Time. SoRTSim has been designed to be quite general and to be easily customized for different tools and classes of applications. We describe the SoRTSim architecture and its implementation, highlighting its main features. The simulator allows the easy generation of a high number of abstract Grid scenarios, starting from a set of constraints; supports the definition of the various Grid entities and their behaviour; and simplifies the evaluation of performance, based on user-defined metrics and statistics. As a case study, we show how SoRTSim can be used to simulate SoRTGrid, a framework that we previously developed for managing time constraints on a Service-oriented Grid.
Abstract We present SoRTSim, a high-level simulator designed to evaluate tools built over the middleware to support additional services besides the standard ones, particularly to allow the execution on Grid platforms of applications with strict QoS requirements, up to Soft Real-Time. SoRTSim has been designed to be quite general and to be easily customized for different tools and classes of applications. We describe the SoRTSim architecture and its implementation, highlighting its main features. The simulator allows the easy generation of a high number of abstract Grid scenarios, starting from a set of constraints; supports the definition of the various Grid entities and their behaviour; and makes simple the evaluation of performance, based on user-defined metrics and statistics. As a case study, we show how SoRTSim can be used to simulate SoRTGrid, a framework that we previously developed for managing time constraints on a Service-oriented Grid.
A. Merlo () • A. Corana
IEIIT-CNR, Via De Marini 6, 16149 Genova (Italy)
e-mail: [email protected]; [email protected]

V. Gianuzzi • A. Clematis
IMATI-CNR, Via De Marini 6, 16149 Genova (Italy)
e-mail: [email protected]

F. Davoli et al. (eds.), Remote Instrumentation for eScience and Related Aspects, DOI 10.1007/978-1-4614-0508-5_11, © Springer Science+Business Media, LLC 2012

1 Introduction

In recent years, the evolution of the Grid paradigm has been characterized by an increasing demand for Quality of Service (QoS), to support the execution of applications with performance constraints. Indeed, the QoS issue in Grid computing is very important, as its satisfactory solution would greatly extend the exploitation of the Grid to a number of new classes of applications (e.g., efficient execution of workflows [1], computational steering, urgent computing [2], time-constrained jobs). In this regard, many proposals have been made to extend basic Grid middleware, as a rule
developed with the main goal of allowing resource sharing for best-effort jobs, with some kind of QoS support (e.g., [3, 4]). In this context, a hard question is how to understand and model in a precise way the QoS requirements of different applications, so that it is possible to evaluate both the feasibility of executing an application on the Grid and the impact of the dynamism of the Grid on the QoS requirements. In order to investigate such aspects, there is the need for suitable high-level Grid simulators [5] that allow testing the behaviour of different Grid scenarios independently of the Grid architecture (e.g., WS-based/Pre-WS) and middleware (e.g., Globus Toolkit [6], gLite [7]). For instance, current Grid simulators such as GridSim [8] simulate the Grid at every abstraction layer, and it is not possible to obtain a Grid reproduction without directly managing the simulated Grid structures (e.g., Index Services, Grid resources). This kind of simulation is heavy and time-consuming, especially when a high number of trials has to be performed to evaluate different Grid scenarios and/or application classes, since several variables at the middleware and fabric layers are involved. So, we devised the necessity of a discrete Grid simulator, SoRTSim, able to simulate a Grid at a higher abstraction level than the current state of the art of Grid simulators. In particular, SoRTSim allows the generation of a high number of abstract Grid scenarios in an automated or semi-automated way, starting from a set of constraints. Moreover, it allows the definition of proper Grid entities and their behaviour, and supports user-defined metrics and statistics. In this way, we can easily obtain various indices describing in a global way the performance of the system, both at the user side (e.g., percentage of jobs that have their requirements fulfilled) and at the system side (e.g., percentage of resource usage).
Thus, SoRTSim is useful to understand the influence of the main architectural parameters and of the resource-management policies on the system behaviour, avoiding the need to deal with too many low-level parameters. In order to demonstrate the contribution of SoRTSim to the problem of Grid simulation, we present a porting of the SoRTGrid model to SoRTSim. SoRTGrid [9] is a framework, which we previously developed, for managing time constraints on a Service-oriented Grid. First we describe the porting of the SoRTGrid model to SoRTSim, and the set of related entities and metrics. Then we present a heterogeneous set of computing scenarios, from very simple to more complex ones, chosen so as to evaluate the SoRTGrid model in a quite comprehensive way. Finally, we outline some simulation results. The chapter is organized as follows: Section 2 describes the motivations that induced us to develop SoRTSim and some related work. Section 3 describes the SoRTSim architecture, its Java implementation, and its main features. In Sect. 4, we show how the proposed tool can be used to evaluate the SoRTGrid framework for the execution of Soft Real-Time jobs. Section 5 reports some concluding remarks, together with some indications for future work.
2 Motivating Scenario and Related Work

The evaluation of an architecture/tool in a real Grid is difficult for several reasons. Indeed, to obtain significant results the Grid has to be large enough, and it is difficult to have at one's disposal a large Grid for all the trials that must be performed. Moreover, real Service-Oriented Grids require the installation and configuration of the middleware, the registration of Grid Services to Index Services, the explicit management of security issues, and so on. This results in a significant complication in building Grid scenarios for simulation purposes only. A further problem is that a real Grid is a highly dynamic working architecture on which it is quite impossible to make assumptions, because of its non-deterministic behaviour. As a consequence, trials are complex and time-consuming, and it is not possible to assume full correctness of the behaviour of the configured architecture or to assure the repeatability of results. So, the use of simulators of Grid architectures and tools and of their interactions with applications is very common. Simulation reduces the complexity and the time required for building up an effective working scenario, avoiding the need to manage aspects of the Grid that are useless for the simulation purpose. Moreover, it allows the repeatability of results and an easy extension to further scenarios. Generally speaking, a simulator is based on an abstract model of the system under consideration. The abstraction level is usually quite high and results are fully reproducible. Through simulation, it is possible to test an architecture in conditions difficult to obtain on a real system. Currently, there is a large number of Grid simulators. In [5, 10], some of the most interesting ones are briefly described and compared. Important features are abstraction level, scalability, execution time, and target (e.g., general-purpose Grids, data Grids, QoS-oriented Grids).
Probably, the most complete and most used Grid simulators are OptorSim [11], developed as part of the EU DataGrid project and mainly focused on data storage and transfer issues; SimGrid [12], a general-purpose simulator; and GridSim [8, 13], suitable for both compute-intensive and data-intensive applications, which allows CPU reservation and supports the economy model. GridSim is written in Java, simulates the Grid structure, particularly a Service-oriented Grid, offers basic QoS support, and is quite easily customizable. The choice of a Grid simulator depends on the features it supports and on the characteristics of the application to simulate. Regarding the first point, we consider a "good" logical Grid simulator one able to support services for simulating large Grids easily, allowing a simple and quick definition of Grid scenarios and offering many possibilities of customizing the Grid behaviour. Indeed, the previously considered simulators offer a lot of features, most of which are useless if we are only interested in a simulation at a logical level. In particular, they model an effective Grid and, in general, a distributed system, so the level of abstraction for the resource scenario that we want is impossible to obtain. The results that we expect from the simulation have to be accurate enough to allow the evaluation of architectures/tools for the considered class of applications, but at this stage we are not interested in a very precise characterization of the various hardware and software Grid components.
So, we chose to consider an abstract representation of the Grid, and to neglect some middleware-dependent aspects. In this way, we obtain a simple enough simulator, able to evaluate the global behaviour of the Grid for different scenarios and portable across different middleware. For this reason, and because experimenting over an abstracted representation of a Grid is fundamental in order to reduce the number of variables, we chose to implement our own simulator, SoRTSim, based on threads to simulate the behaviour of the Grid architecture.
3 The SoRTSim Simulator

3.1 SoRTSim Architecture

SoRTSim is a discrete overlay-network simulator that allows both specific customization and the easy, automatic generation of large and complex scenarios referable to Grids. Each entity of the overlay network, and the connections among entities, is simulated at a very high abstraction level. SoRTSim is based on a discrete time, in order to define a universal duration for any operation and to share a common clock, granting better control of operations and reproducibility of results, without reducing the validity of the simulations. The simulation is driven both by the interactions among entities of the overlay network and by events. Some aspects of SoRTSim simulations are based on probability distributions. In particular, we distinguish between probability and configuration distributions: probabilities regard events that can happen during the simulation, while configurations regard the definition of the characteristics of the set of entities. At every instant, a probability value is associated with a single event to determine whether the event happens. Overlay networks modeled in SoRTSim are made of active entities (e.g., the nodes of the overlay network) that can be connected manually or automatically through different algorithms (e.g., mesh, random). The overlay network operates over a static scenario, made of passive entities (e.g., physical resources), on which the single active entities operate. Passive entities are partitioned among the active ones. The choices and behaviour of active entities during the simulation can be related to the state of the scenario. In SoRTSim, any active entity is defined by a set of variables determining its behaviour and the operations it can perform in a single instant of the simulation, as a result of interactions with the other entities.
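The per-instant probability check described above can be sketched as follows. This is our own minimal illustration, not SoRTSim code; function and parameter names are assumed. At every discrete instant, one draw against the configured probability decides whether the event (say, a resource failure) occurs at that instant.

```python
import random

def event_instants(p_event, horizon, seed=0):
    """At every discrete instant, one draw against the configured
    probability value decides whether the event happens then.
    Returns the list of instants at which it occurred."""
    rng = random.Random(seed)          # seeded for reproducible runs
    return [t for t in range(horizon) if rng.random() < p_event]
```

Seeding the generator is what makes probability-driven runs reproducible, matching the reproducibility goal stated above.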
Passive entities are defined by a set of variables describing their state, which can change during the simulation only as a result of external events or of actions made by active entities. Among the active entities, the Simulation Manager (SM) is the core of the simulation, acting as a centralized scheduler of the activities of the other entities
Fig. 1 Internal architecture of SoRTSim
(the Generic Entities (GEs)) and as a time synchronizer. It allows communication among the parts and controls the whole simulation. The architectural model of SoRTSim is shown in Fig. 1.
3.2 SoRTSim Implementation and Interface SoRTSim architecture and behaviour has been mainly implemented with Java Thread technology. Both SM and GE entities are defined by a proper interface (ISM and IGE) containing methods for their supported operations. Entities classes extend the Java Thread class. The choice to define them starting from the Thread class has been motivated to obtain an effective concurrent activity of the entities during a single discrete instant. The behaviour of each entity in a single instant of the simulation is defined in the run() method that is a function containing the concurrent code. For a GE, the method defines all possible interactions with SM and other GEs, as a result of decisions made on their current internal state. Regarding SM, the run() method is more complex since it manages the whole simulation and the synchronization between the various entities. In particular, it waits for all the entities to finish their run() before proceeding on the next instant. When the simulation ends, it stops all activities and calculates statistics and summarizing graphs. Threads corresponding to GEs interact with the SM one through message exchanging; SM directly modifies their status for granting the advancement of the simulation. The configuration of scenarios in SoRTSim can be complex, due to the high number of parameters involved. Moreover, for the reproducibility of results and further analysis, it is important to maintain a track of the simulated scenarios. For this, the core simulator of SoRTSim is integrated with a GUI and an XML plug-in. The first grants an easy way to buildup scenarios, supporting functionalities for automatically building the sets of entities. It maintains all the information concerning the simulation and updates statistic graphs during the simulation execution.
Fig. 2 SoRTSim GUI for configuring the simulation
The second extension provides a universal way to save, export, and import scenarios and simulation data from and to the SoRTSim simulator. Figures 2 and 3 show examples of the GUI and of the graphs for evaluating the trend of the simulation.
4 Simulation of the SoRTGrid Framework

4.1 SoRTGrid Outline

The above-described simulator has been used to simulate SoRTGrid (Soft Real-Time Grid) [9], a framework designed to supply several services and features for the execution of applications with time constraints (e.g., Soft Real-Time jobs) on Grid platforms. SoRTGrid is based on the direct matching of application requirements, described in a proper Job Requirement Manifest (JRM), with the available resource bids, and employs a distributed bid Repository coupled with an overlay architecture. The SoRTGrid tool has been designed following these guidelines: it has to be non-invasive with respect to the middleware, i.e., the additional functionalities needed to manage time-constrained jobs must be offered on top of the middleware, which remains general-purpose; and it must be suitable for an economic approach.
Fig. 3 SoRTSim GUI for summarizing the simulation results
A key point for time-constrained applications is that a selected resource must be effectively available when the job needs to execute. The User must be able to evaluate the maximum length of its deterministic time-constrained job. The resource Owner must be able to control its resources directly, without the intermediation of any Grid Metascheduler. In SoRTGrid, Users and Owners (or their Agents) are independent in their actions and pursue different objectives: respectively, the correct execution of the job within the required deadline, and an efficient use of their resources. SoRTGrid implements functionalities assuring that: (a) up-to-date and consistent information on resources that are effectively available is provided; (b) the provided information remains valid for a time interval sufficient to perform negotiation (timed pre-reservation); (c) the selected resources remain available for the whole requested time interval; and (d) the same resources are not offered simultaneously to different users (no overbooking). We consider the Grid as constituted by the union of several sub-portions, each of which corresponds to an administrative domain (Physical Organization (PO)). SoRTGrid (see Fig. 4) is an overlay Grid network composed of several connected Facilitator Agents (FAs). Each PO is managed by an FA, which controls all the Grid resources shared by the PO and to which all resources in the PO refer. Each FA is composed of two Grid services, namely Bid Management (BidMan) and Discovery Performer (DiPe), and relies on the underlying Grid middleware, such as Globus Toolkit [6], for basic services.
168
A. Merlo et al.
Fig. 4 SoRTGrid as a set of interacting Facilitator Agents
The resource owner directly offers its resources to Grid users through a suitable extension and customization of the SLA bid mechanism [14], which we call SoRT-Bid. The BidMan service allows an owner to upload (publish) and remove SoRT-Bids in the local Repository, and to obtain from the Repository information such as the total, pre-selected and free SoRT-Bids. There is a single instance of the BidMan Grid service for each PO, which receives the offers from all the Owners belonging to the PO. The User Agent provides a list of job requirements (JRM), the number of required SoRT-Bids and a maximum time for the discovery activity. A DiPe instance is created at every user request; it performs the local discovery and, if necessary, the remote discovery. This second phase, based on a completely distributed approach, is carried out only if the first one does not find all the requested bids, through an interaction with other DiPe services belonging to neighbour FAs. The set of SoRT-Bids matching the requirements is then returned to the User Agent, who chooses a subset of the retrieved SoRT-Bids, contacts the respective Owner Agents to negotiate them within the pre-reservation time, and finally executes its job on the negotiated resources. The number of steps in the remote discovery phase is bounded by log_neighbour(N_FA), where N_FA denotes the number of FAs and the base of the logarithm is the number of neighbours of each FA. The SoRTGrid architecture thus assures a fast resource discovery time.
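The logarithmic bound on the remote discovery phase follows from the fact that each discovery step multiplies the number of reached FAs by the number of neighbours per FA. A small illustrative helper (hypothetical, not part of SoRTGrid) computes the bound:

```python
import math

def max_discovery_steps(n_fa, neighbours):
    """Bound on remote-discovery steps: each step reaches `neighbours`
    times more FAs, so all n_fa FAs are covered within
    ceil(log(n_fa) / log(neighbours)) steps."""
    if n_fa <= 1:
        return 0
    return math.ceil(math.log(n_fa) / math.log(neighbours))
```

For instance, with 1000 FAs and 3 neighbours per FA, at most 7 remote steps suffice.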
The behaviour of SoRTGrid can be customized by setting various parameters (e.g. the Grid size, the kind of resource requests and resource bid proposals, allowed search time, number of neighbours).
4.2 SoRTGrid Simulation

The simulation of SoRTGrid is based on the following assumptions: the Grid is represented abstractly, by properly defining the sets of JRMs and bids; only computational resources (CPU) are considered; the total number of resources is fixed, although only a part of them may be active at any instant; all resources are supposed to be fault tolerant. Such assumptions reduce the number of variables, improving the analysis and the readability of results.
4.2.1 The Soft Real-Time System

Let us consider a Grid platform with M resources where we want to execute N Soft Real-Time (SRT) jobs. A Soft Real-Time System (SRTS) is composed of a set of Soft Real-Time jobs ({J_n}, n = 1, ..., N) that need to be executed within a given deadline d_n, and a set of resource bids ({R_m}, m = 1, ..., M) that grant access to the physical resources where the jobs can execute. The set of jobs represents the tasks of a given set of Users; the bids are offered by different resource Owners. SoRTGrid jobs are deterministic, i.e. it is possible to appraise a maximum job duration (worst case), expressed in terms of Millions of Instructions (MI_n), although a job may execute fewer instructions than MI_n. The job activation time E_n is unknown and unpredictable, and the job has to be dispatched on an available resource as soon as it is activated; once activated, the job is considered successful if it terminates within the deadline d_n. The absolute deadline of the job is DL_n = E_n + d_n. So, an SRT job is defined by the triple J_n = <MI_n, d_n, E_n>. A computational resource bid is defined by the couple R_m = <MIPS_m, Exp_m>, where MIPS_m is the amount of Millions of Instructions per Second offered by the resource and Exp_m is the absolute expiration time for the computational power provisioning. The set of resource bids represents, at every moment, the resources on which it is possible to dispatch a job activated in that moment. Such a set varies continuously depending on bid offering, bid acquisition and bid expiration.
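Under these definitions, a bid is sufficient for a job when the worst-case run, started at activation, meets both the absolute deadline and the bid expiration. A minimal sketch of this check (function and parameter names are ours, introduced for illustration):

```python
def is_sufficient(mi_n, d_n, e_n, mips_m, exp_m):
    """Check whether bid R_m = <MIPS_m, Exp_m> is sufficient for job
    J_n = <MI_n, d_n, E_n>: the worst-case run, started at activation,
    must finish within DL_n = E_n + d_n and before the bid expires."""
    worst_case = mi_n / mips_m   # worst-case duration (MI / MIPS)
    finish = e_n + worst_case    # the job is dispatched as soon as it activates
    return finish <= e_n + d_n and finish <= exp_m
```

Note that a bid with ample power can still be insufficient if it expires before DL_n, which is why the expiration time Exp_m is part of the bid.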
4.2.2 SoRTGrid Simulation and Probabilities

In the SoRTGrid context, P(a)_n is the probability that a pending inactive job activates in the current instant, whereas P(t)_n is the probability that the execution of a running job terminates in that instant, after fewer instructions than the maximum duration MI_n.
Probability values are assigned through proper distributions:
• Certainty. This distribution allows the evaluation of extreme scenarios. Regarding job activation, any job activates as soon as it becomes the current one; moreover, any job always has its maximum length (MI_n).
• Random probability. This is used both for simulating the behaviour of very heterogeneous Grid scenarios and for evaluating the influence of randomness in the simulations.
• Incremental probability. This distribution increases the probability that the event happens (the job activates, in the activation case, or the job terminates, during the execution) as instants pass. Indeed, it is reasonable that the probability of activation increases during the instants in which the job remains inactive, and that the job terminates when the number of executed instructions approaches MI_n. When the event happens, the probability returns to a base value from the following instant.
Configuration distributions are used for configuring the sets of Jobs and Bids, assigning values to four parameters (MI_n, d_n, MIPS_m, ExpBase_m). The last one is a base value for the duration of the bid, the actual bid duration depending on the Owner behaviour. The values are distributed to the set of Users and Owners through two ranged functions: a Hash function and a Discrete Gaussian (Poissonian) function. The same distributions can also be used to assign Users and Owners to different logical SMs.
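The incremental distribution described above can be sketched as a stateful generator whose probability grows while the event does not fire and resets once it does. This is a simplified illustration with invented names and a linear growth step, one of several reasonable choices:

```python
import random

class IncrementalProbability:
    """Event probability that grows by `step` at each instant the event
    does not happen, and falls back to `base` once it does."""

    def __init__(self, base, step, rng=None):
        self.base, self.step, self.p = base, step, base
        self.rng = rng if rng is not None else random.Random()

    def tick(self):
        """Advance one instant; return True if the event happened."""
        happened = self.rng.random() < self.p
        self.p = self.base if happened else min(1.0, self.p + self.step)
        return happened
```

Calling `tick()` once per simulated instant reproduces the intended behaviour: long inactivity makes activation increasingly likely, and each occurrence resets the odds.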
4.2.3 Entities and Behaviours

We assume that any job requires a single resource only and that the jobs in the pool of a User have the same MI_n and d_n requirements. The User is able to execute the job correctly only if it has previously acquired a bid granting immediate access to a sufficient resource (a resource is sufficient if it can support the correct execution of the job within the deadline in the worst-case duration). Such a situation is manageable by the User only through a preventive acquisition of resources. When a job must be dispatched, it is considered regularly activated if the User has a sufficient resource; if the User has available resources that are not sufficient, the job is considered potentially degraded, and meeting the deadline depends on the effective duration of the job. Finally, if there are no resources available, the job fails and is dropped without executing. A User can perform three possible operations:
1. Resource discovery. The User checks the state of the acquired resources in order to determine whether they would be sufficient should the current inactive job activate. If they are not sufficient, it queries the SM for new bids. If better bids are found, the old ones are released at the end of the instant. The release is notified to the SM, which marks the resources as free.
2. Job activation check. If the current job is inactive, a proper probabilistic function is called to determine whether the job activates in the current instant.
3. Job termination check. If the current job is active, the User checks whether the execution completes in the current instant. If so, the job is considered normally completed. If the execution is interrupted (because the deadline is reached or the resource expires before the job completes), the job is considered degraded.
Presently, SoRTSim supports three kinds of deterministic behaviours for Users: Shrewd, if the User always tries to keep a single sufficient resource reserved; Speculative, if the User maintains up to two sufficient reserved resources at a time; Selfish, if the User can potentially select and acquire any amount of resources (used only in the Economic mode). Owners perform two kinds of operations: (1) they collect statistical information on the historical trend of their previous offers, to find hints for offering more attractive bids than the previous ones; (2) they create and upload new bids depending on different variables, if necessary using statistical algorithms based on the historical database built during the simulation. SoRTSim supports the following behaviours for the Owner:
• Fixed behaviour. The Owner chooses a fixed duration for its bids, constant for the whole simulation. At the beginning of the simulation, durations are distributed to Owners through hash or probabilistic functions. This behaviour is typical of a PO that has a well-defined planning of its participation in SoRTGrid.
• Variable behaviour. An Owner provides different expirations for different bids. This behaviour can be typical of a PO that does not have a deep interest in offering high-QoS resources and participates in SoRTGrid only when its resources are not otherwise used.
• Learning-based behaviour. Owners define new bids by analyzing the historical trend of the previous ones. Statistical learning algorithms (e.g.
K-Means and Mean Shift) can be used to analyze the data and define a publication strategy that takes into account both the duration of the bid and the most suitable instant of publication. This behaviour can be adopted by POs strongly motivated to participate in SoRTGrid. At any instant, the SM receives the bid uploads from Owners and a set of requests from Users. For every request, it performs a discovery activity based on the matching degree between bids and the request. To simulate the pre-selection mode of the SoRTGrid FA, the SM selects the first n bids that match the User request and waits for the final selection from the User. SoRTSim is able to simulate both the local and the remote discovery of SoRTGrid: a User request is managed, at first, as a Local Discovery (considering only the bids of its SM instance) and is carried on, if necessary, as a set of parallel Remote Discoveries with neighbours in successive instants. Selection, negotiation and job dispatching operations are atomic and performed within a single instant.
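The SM pre-selection described above ("the first n bids that match the User request") can be sketched as a bounded filter over the published bids. The matching rule and field names below are our simplifying assumptions, reusing the worst-case sufficiency test of Sect. 4.2.1:

```python
def pre_select(bids, jrm, n):
    """SM pre-selection sketch: scan the published bids in order and
    return the first n whose offered power lets the worst-case run
    finish within the deadline and before the bid expires."""
    selected = []
    for bid in bids:
        worst_case = jrm["mi"] / bid["mips"]
        if worst_case <= jrm["deadline"] and worst_case <= bid["exp"]:
            selected.append(bid)
            if len(selected) == n:
                break
    return selected
```

The early exit after n matches mirrors the pre-selection mode: the SM does not rank the whole Repository, it just hands the first n matching bids to the User for the final choice.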
4.2.4 Metrics

To evaluate the performance of SoRTGrid, we implement the following metrics:
• Average Degrade Percentage (ADP). At any instant, it provides the percentage of jobs that have been degraded or failed over the total of the jobs activated until that instant. ADP is the main measurement of efficiency and involves the User, Owner and infrastructure sides. A higher number of failed jobs means that, in general, there is not a sufficient amount of bids to satisfy the User requests; a majority of degraded jobs shows a lack of sufficiency in the proposed bids. The degrade can depend both on the suitability of bids and on the efficiency of the selections made by Users and middleware.
• Reservation Percentage (RP). It provides, at any instant, the percentage of published bids that have been selected by the User counterpart over the total of published bids. This quantity measures the efficiency of the choices made at the Owner side.
• Real Usage Percentage (RUP). It is given by the number of instants in which reserved resources execute a job over the total time during which the same resources are reserved. This metric indicates the effective use of reserved resources and provides a measurement of the efficiency of User selection.
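The three metrics are simple ratios over simulation counters. A direct transcription (with guard clauses for the empty case, an assumption of ours since the text does not specify it):

```python
def adp(degraded, failed, activated):
    """Average Degrade Percentage: degraded plus failed jobs over all
    jobs activated so far."""
    return 100.0 * (degraded + failed) / activated if activated else 0.0

def rp(selected, published):
    """Reservation Percentage: selected bids over published bids."""
    return 100.0 * selected / published if published else 0.0

def rup(busy_instants, reserved_instants):
    """Real Usage Percentage: instants spent executing over instants
    during which the same resources were reserved."""
    return 100.0 * busy_instants / reserved_instants if reserved_instants else 0.0
```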
4.3 Experiments and Results

We are mainly interested in balanced scenarios, where the total request of MIs from the User side is similar to the full offer of computational power proposed by the Owner side. Similarly, we assume that the average bid duration is comparable with the average job deadline. Let us now present the variables involved in the simulation, divided into User, Owner and SM variables.
User variables. N is the total number of User entities and ActU% the percentage of the total Users active at any instant. A single User is defined by User = (MI, d, P(a), P(t), UBeh), giving, respectively, the amount of MI for each job, the deadline, the kind of probability for activation and termination (Certainty, Random, Incremental) and the User behaviour (Shrewd, Speculative, Selfish).
Owner variables. M is the number of Owner entities and ActO% the percentage of active entities at every instant. An Owner is defined by Owner = (MIPS, ExpBase, OBeh), giving the amount of provided MIPS, the base value for the bid expiration and the Owner behaviour (Fixed, Variable, Learning-Based).
SM variables. The characteristics of the overlay network are modeled through the variable SMNum, which defines the number of simulated FAs, and SelPol, which defines the policy used by each SM to make selections (e.g. BestFirst, CheapestFirst).
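Grouping the User, Owner and SM variables gives a single configuration record per simulation run. A hypothetical container (the class and field names are ours, mirroring the variables in the text) could look like:

```python
from dataclasses import dataclass

@dataclass
class SimConfig:
    """Hypothetical container mirroring the (User, Owner, SM) variables;
    field names follow the text, not any actual SoRTSim API."""
    n: int                 # N: number of User entities
    act_u: float           # ActU%: fraction of Users active per instant
    user_params: tuple     # (MI, d, P(a), P(t), UBeh)
    m: int                 # M: number of Owner entities
    act_o: float           # ActO%: fraction of Owners active per instant
    owner_params: tuple    # (MIPS, ExpBase, OBeh)
    sm_num: int            # SMNum: number of simulated FAs
    sel_pol: str           # SelPol: e.g. "BestFirst", "CheapestFirst"
```

One such record fully determines a trial, which is what makes the step-by-step campaign of Fig. 5 reproducible.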
Steps (USER: MI, d, P(a), P(t), set of active Users, behaviors | OWNER: MIPS, ExpBase, set of active Owners, behaviors | Mode | N° of logical GODs):
1. Dedicated Heterogeneous Cluster: X, X, [C, V, I], [C, V, I], Fixed, [Sh., Sp.] | X, [H, G], Fixed, [Fi., Va.] | Meta | Single
2. Shared Heterogeneous Cluster: [H, G], [H, G], [C, V, I], [C, V, I], Fixed, [Sh., Sp.] | [H, G], [H, G], Fixed, [Fi., Va.] | Meta | Single
3. Generic Distributed System: [H, G], [H, G], [C, V, I], [C, V, I], Fixed, [Sh., Sp.] | [H, G], [H, G], Fixed, [Fi., Va.] | FA | Single
4. Dedicated Physical Organization: X, X, [C, V, I], [C, V, I], Variable, [Sh., Sp.] | [H, G], [H, G], Variable, [Fi., Va.] | Meta | Single
5. Generic Physical Organization: [H, G], [H, G], [C, V, I], [C, V, I], Variable, [Sh., Sp.] | [H, G], [H, G], Variable, [Fi., Va.] | Meta | Single
6. Virtual Organization: [H, G], [H, G], [C, V, I], [C, V, I], Variable, [Sh., Sp.] | [H, G], [H, G], Variable, [Fi., Va.] | FA | Single
7. Grid: [H, G], [H, G], [C, V, I], [C, V, I], Variable, [Sh., Sp.] | [H, G], [H, G], Variable, [Fi., Va.] | FA | Many
8. Economic Grid: [H, G], [H, G], [C, V, I], [C, V, I], Variable, [Sh., Sp., Se.] | [H, G], [H, G], Variable, [Fi., Va., L-b] | FA | Many
Fig. 5 Simulation steps for evaluating SoRTGrid. The [H, G] range is related to the set of Owners and Users and underlines that both Hash and Gaussian (Poissonian) functions have been used to distribute values for MI, d , MIPS and ExpBase; C, V, I denote the kind of probability for job activation and termination; Sh., Sp., Se. and Fi., Va., L-b indicate the three possible User and Owner behaviors; Mode indicates the Simulation Manager behaviour (like the SoRTGrid Facilitator Agent or as a Metascheduler)
A general configuration is then described by three parts:
User side: (N, ActU%, (MI, d, P(a), P(t), UBeh)_N)
Owner side: (M, ActO%, (MIPS, ExpBase, OBeh)_M)
SM side: (SMNum, SelPol).
It is important to define an experimental strategy that allows the evaluation of the most important features of the overlay architecture. In particular, we performed step-by-step simulations on different user and resource scenarios, from simple to increasingly complex ones (cluster, PO, Virtual Organization, Grid). In this way, we are able to evaluate the impact of incremental modifications on the performance of the architecture. Figure 5 summarizes the considered scenarios, defining the characteristics of the configuration of each step. For a complete description and analysis of results, see [15]. Here, we briefly present some aggregate results. Figure 6 shows the values of Average Degrade Percentage and Usage Percentage for the various environments considered in our simulations, obtained by averaging the various trials performed for each environment. It is interesting to note that the Average Degrade Percentage, the main metric measuring the effectiveness of a time-constrained system, diminishes as the system complexity increases; this fact can be explained by observing that the more complex the computing platform, the higher the utility of an approach based on a direct management of user requirements and owner bids. Since ADP is particularly low for the Grid environment, our simulations show that the SoRTGrid overlay improves the management of time-constrained jobs (like the SRTS ones) on the large, distributed and heterogeneous scenarios referable to Grids. Also, the Usage Percentage remains at very satisfying levels for the Grid and Economic Grid environments.
Fig. 6 Average degrade and usage percentage for the various environments
5 Conclusions

We presented a simulator for distributed computing systems, especially designed to simulate Grid platforms enhanced with additional services beyond the basic ones offered by typical middleware. The simulator works at a high abstraction level, in order to capture the behaviour of the system without the need to manage low-level aspects related to the middleware implementation and the resource level. This approach allows a reasonably fast execution of tests with good results. Moreover, the simulator can be easily adapted to simulate different tools, independently of the underlying middleware. As a case study, we showed how SoRTSim can be used to simulate SoRTGrid, a framework we previously developed to address the problem of adding QoS provision to general-purpose Grid middleware, so as to be able to manage applications with demanding QoS requirements, up to Soft Real-Time ones. We performed a large number of incremental trials, from simple platforms (local cluster) to Grid platforms, possibly enhanced with an economic approach. A first analysis of the results shows that SoRTSim is able to provide insight into the behaviour of the simulated architecture, confirming the suitability of SoRTGrid for the management of Soft Real-Time jobs on Grid platforms. SoRTSim simulations can help in finding the optimal configuration of SoRTGrid for different application scenarios. As future work, we plan to perform a deeper analysis of SoRTSim results, pointing out the influence of probability distributions, user and owner behaviours, and different management policies. Moreover, to further validate the obtained results, it could be interesting to select a significant subset of the scenarios we simulated with SoRTSim and to simulate them with a lower-level, and therefore more expensive, Grid simulator such as GridSim, able to model the interactions with effective logical Grid structures.
References
1. Y. Wang, L. Wang, and G. Dai. QGWEngine: A QoS-aware grid workflow engine. Proc. of the 2008 IEEE Congress on Services, pages 561–566, 2008.
2. P. Beckman, S. Nadella, N. Trebon, and I. Beschastnikh. SPRUCE: A system for supporting urgent high-performance computing. In Proceedings of the IFIP WoCo9 Conference, 2006.
3. K.C. Nainwal, J. Lakshmi, S.K. Nandy, R. Narayan, and K. Varadarajan. A framework for QoS adaptive grid meta scheduling. Proc. of the 16th International Workshop on Database and Expert Systems Applications (DEXA'05), 2005.
4. R. Al-Ali, K. Amin, G. von Laszewski, O. Rana, and D. Walker. An OGSA-based quality of service framework. In Proceedings of the 2nd International Workshop on Grid and Cooperative Computing, Shanghai, pages 529–540, 2003.
5. J. Gustedt, E. Jeannot, and M. Quinson. Experimental methodologies for large-scale systems: A survey. Parallel Processing Letters, 19:399–418, 2009.
6. I. Foster. Globus Toolkit version 4: Software for service-oriented systems. Journal of Computer Science and Technology, 21(4):513–520, July 2006.
7. gLite middleware. http://glite.web.cern.ch/glite/.
8. R. Buyya, M. Murshed, and D. Abramson. GridSim: A toolkit for the modeling and simulation of distributed resource management and scheduling for grid computing. Concurrency and Computation: Practice and Experience, 14:1175–1220, 2002.
9. A. Merlo, A. Clematis, A. Corana, D. D'Agostino, V. Gianuzzi, and A. Quarati. SoRTGrid: a grid framework compliant with soft real-time requirements. Remote Instrumentation and Virtual Laboratories: Service Architecture and Networking, pages 145–161, Springer, Berlin, 2010.
10. A. Sulistio, U. Cibej, S. Venugopal, B. Robic, and R. Buyya. A toolkit for modelling and simulating data grids: an extension to GridSim. Concurrency and Computation: Practice and Experience, 20(13):1591–1609, 2008.
11. W.H. Bell et al. OptorSim: A grid simulator for studying dynamic data replication strategies. Int. Journal of High Performance Computing Applications, 17(4):403–416, 2003.
12. SimGrid. http://simgrid.gforge.inria.fr.
13. GridSim. http://www.buyya.com/gridsim.
14. R. Ranjan, A. Harwood, and R. Buyya. SLA-based coordinated superscheduling scheme for computational grids. In Proceedings of the 8th IEEE International Conference on Cluster Computing (Cluster 2006), IEEE Computer Society Press, September 27–30, 2006, Barcelona, Spain.
15. A. Merlo. An architectural approach to the management of applications with QoS requirements on Grid. Ph.D. Thesis in Computer Science, DISI, University of Genoa (available at www.disi.unige.it/dottorato/THESES/2010-04/Merlo.pdf and http://sealab.disi.unige.it/Krakatoa/MerloA/thesis.pdf), April 2010.
MRA3D: A New Algorithm for Resource Allocation in a Network-Aware Grid Davide Adami, Christian Callegari, Stefano Giordano, and Michele Pagano
Abstract A Remote Instrumentation Services e-Infrastructure, such as the one deployed in the DORII project, allows end-users' applications to easily and securely access heterogeneous resources (e.g., instrument, computing and storage elements). Since Remote Instrumentation Services are often characterized by huge data transfers and high computational loads, the selection and the allocation of grid resources dramatically affect their performance. This chapter proposes a distributed resource allocation algorithm, referred to as MRA3D, capable of handling multiple resource requirements for jobs/tasks that arrive at the grid computing environment of the e-Infrastructure. More specifically, MRA3D aims at minimizing the execution time of data-intensive applications by taking into account both the system and the connectivity status of the computational resources. Simulations have been carried out to compare the performance of MRA3D with that of other resource allocation algorithms in a realistic environment, by using synthetic as well as real workload traces.
1 Introduction

The e-Infrastructure for Remote Instrumentation Services (RISs), deployed in the DORII project [1], allows end-users' applications to easily and securely access instrumentation resources (e.g., expensive experimental equipment, sensors for data acquisition, mobile devices) integrated in a grid environment with high-performance computation and storage facilities.
D. Adami () CNIT – Department of Information Engineering, University of Pisa, Italy e-mail:
[email protected] C. Callegari • S. Giordano • M. Pagano Department of Information Engineering, University of Pisa, Italy e-mail:
[email protected];
[email protected]; [email protected] F. Davoli et al. (eds.), Remote Instrumentation for eScience and Related Aspects, DOI 10.1007/978-1-4614-0508-5_12, © Springer Science+Business Media, LLC 2012
Scheduling Algorithms
- Local
- Global
  - Static
    - Optimal
    - Sub-optimal
      - Approximate
      - Heuristic
  - Dynamic
    - Physically Distributed
      - Cooperative: Optimal, Sub-optimal
      - Non-Cooperative: Optimal, Sub-optimal
    - Physically Non-Distributed
      - Optimal
      - Sub-Optimal
        - Approximate
        - Heuristic
Fig. 1 Taxonomy of grid scheduling algorithms
The middleware of the DORII e-Infrastructure is based on gLite [2] as well as on specific services developed within the project. More specifically, while gLite provides basic grid services, such as Information, Job Management, Data Management and Security Services, the end-users may interact with the instruments through a new abstraction, called Instrument Element (IE), which, already introduced in the GridCC project [3], has been re-designed and implemented from scratch in the DORII project. Information about the resources and services provided by the infrastructure is made available by the Berkeley Database Information Index (BDII). The Workload Management System (WMS) is the gLite service responsible for the distribution and management of tasks across grid resources, so that applications are conveniently, efficiently and effectively executed. The core component of the Workload Management System is the Workload Manager (WM), whose purpose is to accept and satisfy requests for job management coming from its clients. For a computation job there are two main types of request: submission and cancellation. In particular, a submission request passes the responsibility for the job to the WM. The WM will then pass the job to an appropriate Computing Element (CE) for execution, taking into account the requirements and the preferences expressed in the job description. The decision of which resource should be used is the outcome of a matchmaking process between submission requests and available resources. It is well known that the general scheduling problem is NP-complete [4]. However, the design of efficient scheduling algorithms is even more challenging in a grid environment, due to the heterogeneity and autonomy of the resources, the performance dynamism and the separation between data and computation in the resource selection phase.
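The matchmaking step performed by the WM can be pictured as a filter-then-rank operation over the available CEs. The sketch below is purely illustrative: the attribute names and the ranking rule (most free CPUs wins) are our assumptions, not the actual gLite matchmaking logic, which evaluates requirement and rank expressions from the job description:

```python
def matchmake(requirements, computing_elements):
    """Toy matchmaking step: keep the CEs that satisfy the job's
    requirements, rank them (here, by free CPUs) and return the best
    one, or None when no CE matches."""
    eligible = [ce for ce in computing_elements
                if ce["free_cpus"] >= requirements["cpus"]
                and ce["memory_mb"] >= requirements["memory_mb"]]
    if not eligible:
        return None
    return max(eligible, key=lambda ce: ce["free_cpus"])
```

MRA3D, presented below, can be read as a replacement for the ranking half of this process, with a richer, network-aware notion of "best".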
Concerning grid scheduling algorithms, a survey of the state of the art is provided in [5], where a hierarchical taxonomy for general-purpose distributed and parallel computing systems is also proposed (see Fig. 1). Since the grid is a special kind of such systems, scheduling algorithms for grids represent a subset of this taxonomy. Nevertheless, this classification from the system's viewpoint is not sufficient to cover all the features of grid scheduling algorithms, since many other characteristics should be taken into account, such as the goal of scheduling (application-centric versus resource-centric), the capability of the scheduler to adapt to the status of the resources, the dependency or independency among tasks in an application, the influence of QoS requirements on the scheduler behaviour, etc. In this chapter, we propose a new algorithm, called Multiple Resources Allocation Three-Dimensional (MRA3D), that selects the best resource for each submission request according to a global, distributed, and heuristic approach. The main strength of the algorithm is its capability of taking into account the computational power and the dynamism of the resources, as well as the network connectivity status in terms of available bandwidth and latency. This is especially relevant for data-intensive applications, whose performance depends not only on the processing time, but also on the time for data movement. MRA3D is an application-centric scheduling algorithm. Unlike resource-centric approaches, which aim at optimizing the utilization of the resources in a particular time period, MRA3D seeks to satisfy the applications' requirements. For example, most current grid applications' concerns are about time, such as the time spent from the first task to the end of the last task in a job. In other cases, the end-user's requests should be satisfied by minimizing the economic cost.
Starting from the Multi-Resource Scheduler (MRS) [6, 7], a coordinates-based (x, y) resource allocation strategy for grid environments, MRA3D matches the performance indexes of each resource with a point in a three-dimensional space. Next, the best resource is selected by solving a minimization problem. The main contributions of this work are as follows:
1. Unlike the MRS algorithm, the computational index of MRA3D takes into account both static (CPU number, CPU speed, storage capacity) and dynamic (current CPU load, number of idle CPUs) properties of the grid resources.
2. Unlike the MRS algorithm, the data index of MRA3D is not used for task migration; rather, together with a new index that takes into account the network latency, it allows choosing the resource with the best performance in terms of bandwidth and delay.
3. When required by the end-user, MRA3D allows selecting the best resource that satisfies performance (execution time, bandwidth, delay) and/or cost constraints by using the ε-constraint method.
4. The algorithm has been implemented as a new module in GridSim, an open-source toolkit available at [8]. Moreover, all the extensions necessary to collect information about the resources and the network status have been added.
The chapter is organized as follows. Section 2 introduces the grid reference model and formalizes the problem. Section 3 discusses the performance indexes and describes the MRA3D resource allocation scheme, whereas Sect. 4 shows the results
of the simulations carried out to evaluate the performance of MRA3D with respect to other common resource allocation schemes. Finally, Sect. 5 concludes the chapter with some final remarks.
2 Problem Statement and Grid Reference Model

From the point of view of scheduling systems, a higher-level abstraction of the grid can be applied by ignoring some infrastructure components, such as authentication, authorization, resource discovery, and access control. Thus, in this work the following definition of the term grid is adopted: a type of parallel and distributed system that enables the sharing, selection and aggregation of geographically distributed autonomous and heterogeneous resources dynamically at runtime, depending on their availability [9]. Suppose that an application (or job, i.e., a set of atomic tasks that will be carried out on a set of heterogeneous resources) consists of t tasks T_1, ..., T_t (a task being an atomic unit to be scheduled by the scheduler and assigned to a resource) that arrive according to a pre-defined order, established, e.g., by a workflow, a scheduling discipline or a priority policy. To optimize the performance of the application, it is required to minimize the overall execution time, possibly taking into account QoS or cost requirements. In a grid environment, any member of the Virtual Organization (VO), independently of its location, can access the VO grid resources for data processing, data storage, etc. In the following, in accordance with the Data Grid paradigm, we assume that the computational resources also function as data storage resources, thus fulfilling a dual role. Therefore, the grid computing environment is characterized as follows:
• The computational resources may consist of individual servers, clusters or large multiprocessor systems. The Local Resource Manager (LRM) is responsible for the communications to individual nodes in the cluster and therefore allows exposing them as a single aggregated resource.
• The computational resources have heterogeneous and dynamic capabilities, information on which is instantaneously disseminated throughout the grid.
In a real grid computing environment, this information is accurate only within a limited time period.
• No computational resource ever fails.
• The network infrastructure interconnecting the computational resources and the grid application sites consists of links with different bandwidths and delays.
• All resources may be accessed by every node in the grid, because they belong to the same VO and, when required, users have already been successfully authenticated.
• The computational capacity is expressed in MIPS (Millions of Instructions per Second) throughout the grid, in order to standardize the performance of the different CPU architectures in the various sites.
Concerning the task execution environment, the following assumptions hold:
• Tasks can be generated and scheduled for execution anywhere within the grid computing environment.
• Resource requirements of a task do not change during execution.
• The execution time of a task is computed from the time the task is submitted until the end of its output file stage-out procedure. This includes the time needed for the data to be staged in for execution and the data transfer time.
• Task migration is not allowed: once a task has been allocated to a resource, it will be executed by that resource.
• Resources are locked for task execution.
• In addition to computational requirements (usually in the order of Giga-Instructions), each task also has data requirements, thus the data file size is stated.
• The processing time of each task is estimated by taking into account the computational capacity of the idle CPUs and enforcing the resources' space-shared scheduling discipline.
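Under these assumptions, the execution time of a task decomposes into stage-in and stage-out transfer times plus the processing time on the idle CPUs. A hedged back-of-the-envelope sketch (the function, parameter names and the symmetric-transfer simplification are ours):

```python
def task_execution_time(data_mb, bandwidth_mbps, giga_instructions, idle_mips):
    """Execution time under the stated assumptions: stage-in plus
    stage-out transfer time, plus processing time on the idle CPUs
    (space-shared, so the task gets the idle capacity exclusively).
    Simplification: input and output files have the same size."""
    transfer = 2 * (data_mb * 8.0) / bandwidth_mbps       # seconds, both directions
    processing = giga_instructions * 1000.0 / idle_mips   # GI -> MI, then MI / MIPS
    return transfer + processing
```

For a data-intensive task the transfer term can dominate, which is exactly why MRA3D weighs network status alongside computational power.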
3 The MRA3D Algorithm

This new resource allocation algorithm aims at making more efficient use of grid resources, in order to minimize the execution time of the application and, if required, to satisfy performance as well as economic cost constraints. Therefore, the algorithm takes into account three performance indexes, analyzed in more detail in the next subsection:
• The computational index, which indicates the capability of a resource to efficiently carry out a computational job/task. This capability depends on many factors, such as the number and speed of CPUs and the amount of RAM.
• The data index, which provides an estimate of the minimum time necessary to transfer the application data from the site where they reside to the grid resource site. The data index is strictly related to the available bandwidth along the path between the two sites and heavily affects the performance of data-intensive applications.
• The latency, which provides an estimate of the network delay from the application site to the resource site. In our scenario, the latency is calculated by measuring the Round Trip Time (RTT) of packets traveling between the application site and the resource site. High and/or highly variable delays affect the performance of data-intensive applications and significantly reduce the throughput.
D. Adami et al.
3.1 Performance Indexes

• Computational Index

Given a set of r grid resources R = {R_1, R_2, ..., R_r}, we suppose that:
• Each resource R_k consists of m machines M = {M_1, M_2, ..., M_m}, and each of them is composed of a set of n processing elements (PEs) PE = {PE_1, PE_2, ..., PE_n}.
• The overall computational power of the k-th grid resource is given by P_k = Σ_{j=1..m} Σ_{i=1..n} P_ij, where P_ij is the computational power of the i-th PE belonging to the j-th machine.
• The overall available computational power of the k-th grid resource at time t is given by P̄_k = Σ_{j=1..m} Σ_{i=1..n} P̄_ij, where P̄_ij is the available computational power at time t of the i-th PE belonging to the j-th machine.

Taking into account only the number and speed of CPUs, the Computational Index (CI) at time t of the k-th resource may then be expressed as follows:

CI_k = P̄_k / P_k
• Data Index

This parameter allows evaluating the time necessary to move data to and from a grid resource. The Data Index (DI) is therefore a relevant performance metric for data-intensive grid applications, used to find the grid resource that matches their network requirements. Given a task T_i that requires moving S_i bits from the application site x to the grid resource site y, DI_i may be evaluated as follows:

DI_i = S_i / B_xy
where B_xy is the minimum upload bandwidth of the links along the path from x to y.

• Latency

By measuring the average RTT from the application site x to the grid resource site y, and assuming that the delay is the same in both directions, it is possible to select the grid resources that assure the lowest latency (L_i). This can be done using ping or statistics collected by latency monitoring tools (e.g., Smokeping [10]).

Figure 2 shows how the performance indexes can be mapped into a virtual three-dimensional space.
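As a minimal sketch, the three indexes can be computed as below. Function names and argument conventions are illustrative; the CI ratio follows our reading of the definition above (available over total power), and the one-way latency uses the stated symmetric-path assumption:

```python
def computational_index(available_power, total_power):
    # CI_k: ratio of the currently available computational power of
    # resource k to its overall power (both in MIPS).
    return available_power / total_power

def data_index(task_bits, bottleneck_bw_bps):
    # DI_i = S_i / B_xy: estimated time (s) to move the task's S_i bits over
    # the minimum-bandwidth link of the path from site x to site y.
    return task_bits / bottleneck_bw_bps

def latency(rtt_s):
    # L: one-way delay, taken as half the measured RTT under the
    # symmetric-path assumption stated above.
    return rtt_s / 2.0
```

In practice the RTT would come from ping or a tool such as Smokeping, and the bottleneck bandwidth from network monitoring; here they are plain inputs.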
Fig. 2 The MRA3D indexes
3.2 The MRA3D Resource Allocation Scheme

MRA3D consists of the following steps (see Fig. 3):
1. Retrieve the resources list R_VO = {R_1, R_2, ..., R_k} from the VO BDII.
2. For each task T_i of the application, repeat steps 3 and 4.
3. For each resource R_i:
(a) Check whether R_i is reachable by using ping and, if the test is successful, put R_i in the list of available resources R = {R_1, R_2, ..., R_r}.
(b) Retrieve the static and dynamic features of R_i and then compute CI_i.
(c) Estimate the currently available bandwidth from the application site to the resource site (i.e., the maximum throughput that the path can provide to the application, given the path's current cross-traffic load), and then compute DI_i.
(d) Estimate the latency L_i from the application site to the resource site.
(e) Calculate the economic cost C_i of processing T_i on R_i.
(f) Check whether the performance and cost constraints are satisfied:

CI_i ≤ CI*, DI_i ≤ DI*, L_i ≤ L*, C_i ≤ C*

(g) If the constraints are satisfied, put R_i in the candidate resources pool and calculate the value of the following function:

d(i) = sqrt( w_CI · CI_i² + w_DI · DI_i² + w_L · L_i² )
Fig. 3 MRA3D flow diagram (for each CE in the pool: run the reachability test; if the CE is reachable, compute CI and DI, estimate L, compute the cost, and run the constraints test; if the constraints are satisfied, put the CE in the candidate pool; finally, choose the best CE)
where w_CI, w_DI (s⁻²) and w_L (s⁻²) can be used to assign a different weight to each performance index. In our experiments, the weights take the same values.
4. Choose the best resource R_i as follows: min_i [d(i)].

Step 3.f is skipped when the user aims at minimizing the execution time of each task without searching for a resource that also matches cost and/or performance constraints. It is worth highlighting that MRA3D selects the resource corresponding to the point with the minimum distance from the origin of the three-dimensional space shown in Fig. 2.
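The constraint filtering and minimum-distance selection described above can be sketched as follows. The resource tuples, the `limits` layout, and all names are illustrative assumptions; only the distance function and the constraint test follow the text:

```python
import math

def mra3d_select(resources, weights=(1.0, 1.0, 1.0), limits=None):
    """Pick the resource minimizing d(i) = sqrt(wCI*CI² + wDI*DI² + wL*L²).

    resources maps a name to a tuple (CI, DI, L, cost); limits, if given,
    is (CI*, DI*, L*, C*) and realizes the constraint test of step 3.f.
    """
    w_ci, w_di, w_l = weights
    candidates = {}
    for name, (ci, di, lat, cost) in resources.items():
        if limits is not None:
            ci_max, di_max, l_max, c_max = limits
            if ci > ci_max or di > di_max or lat > l_max or cost > c_max:
                continue  # constraints not satisfied: discard this resource
        candidates[name] = math.sqrt(w_ci * ci**2 + w_di * di**2 + w_l * lat**2)
    if not candidates:
        return None  # no resource satisfies the constraints
    return min(candidates, key=candidates.get)

pool = {
    "R1": (1.2, 3.2, 0.050, 6.0),
    "R2": (2.0, 1.6, 0.010, 4.0),
    "R3": (1.5, 2.0, 0.015, 6.0),
}
best = mra3d_select(pool)                          # unconstrained pick
capped = mra3d_select(pool, limits=(3, 4, 1, 5))   # with a cost cap of 5
```

With equal weights the cost does not enter d(i): it acts only as a filter, exactly as in step 3.f.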
4 Simulations

This section describes the setup and the results of the simulations carried out to compare the performance of MRA3D with the following resource allocation schemes:
• Random: each task is allocated to a randomly chosen resource, without taking into account either its computational power or its network connectivity status and properties. Therefore, some resources may be overloaded and their network links congested, while others may be idle and underutilized.
• PseudoRandom: each task of the application is assigned to a different resource following a round-robin scheduling discipline, seeking to balance the number of tasks allocated to the resources. The execution time is not optimized.
• MRA2D: this resource allocation strategy takes into account both the static and dynamic features of the resources (CI) and the available bandwidth (DI). Each resource is reported as a point (CI, DI) on a two-dimensional virtual map, and the resource with the minimum distance from the origin of the axes is selected. In some cases, this strategy leads to overloading the resource with the maximum computational power and congesting its network links.
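The two baseline dispatchers can be sketched in a few lines; the factory-function shape and all names are illustrative:

```python
import itertools
import random

def make_pseudo_random(resources):
    # "PseudoRandom" baseline: round-robin over the resource list, balancing
    # task counts while ignoring load and network state.
    cycle = itertools.cycle(resources)
    return lambda: next(cycle)

def make_random(resources, rng=None):
    # "Random" baseline: each task goes to a uniformly chosen resource.
    rng = rng or random.Random()
    return lambda: rng.choice(resources)

dispatch = make_pseudo_random(["R1", "R2", "R3"])
assignments = [dispatch() for _ in range(6)]  # R1, R2, R3, R1, R2, R3
```

Both baselines ignore CI, DI and L entirely, which is exactly why they serve as the lower bound for the comparison.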
4.1 Simulations Setup

Simulations have been carried out by means of GridSim, an open source toolkit that allows modeling and simulating entities (i.e., users, applications, resources, and resource brokers – schedulers) in parallel and distributed computing systems, for the design and evaluation of resource allocation algorithms.
• Simulations Setting
Table 1 Resource specification

Resource Name | Processing capability (MIPS) | Number of nodes | Storage (TB) | Cost per MIPS (G$)
1             | 8000                         | 7               | 0.35         | 6
2             | 5000                         | 4               | 0.10         | 4
3             | 6200                         | 5               | 0.25         | 3
Resources. GridSim provides a comprehensive facility to create different classes of heterogeneous resources that can be aggregated by using resource brokers, so as to run compute- and data-intensive applications. The processing nodes within a resource can be heterogeneous in terms of processing capability, configuration, and availability. A resource can be a single processor or a multi-processor with shared or distributed memory, managed by time-shared or space-shared schedulers. In the simulations, a VO consisting of three resources has been taken into account. The resource settings (number of processors, processing capacity and disk capacity) were obtained from the characteristics of the real WLCG (Worldwide LHC Computing Grid) testbed [11], a global collaboration linking grid infrastructures and computer centers worldwide. Data about the resources were scaled down to reduce the time and the amount of memory required to perform the simulations: the computing capacities by a factor of 10 and the storage capacities by a factor of 20. Table 1 summarizes the relevant resource information.
Network Topology and Links. Resources and other entities (e.g., the GIS) have been connected in the network topology shown in Fig. 4, where the bandwidth of each link is also reported. Moreover, we have assumed that some parameters are identical for all network links, i.e., the Maximum Transmission Unit (MTU) is 1,500 bytes and the latency is 10 ms.
Background Traffic. GridSim incorporates a background network traffic functionality based on a probabilistic distribution [12]. This has been used to carry out simulations in the presence of background traffic.
Users. In the simulations, two different scenarios have been considered. In the first one, we analyze the behaviour of the different resource allocation algorithms when only one user has tasks to be processed. In the second one, we compare their performance when concurrent requests come from two users.
Workload Traces.
GridSim incorporates a functionality that reads workload traces taken from supercomputers, for simulating a realistic grid environment. Therefore, workload traces of the DAS2 5-Cluster Grid have been downloaded from the Parallel Workload Archive [13] and used in our simulations.
Resource Broker. Resource brokers use scheduling algorithms or policies to map jobs/tasks to resources in order to optimize system or user objectives. In this work, MRA2D and MRA3D have been implemented and integrated into the GridSim Toolkit by extending the AllocPolicy class.
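Parallel Workload Archive traces use the Standard Workload Format (SWF): lines starting with ';' are header comments, and each job is one line of whitespace-separated fields, with submit time, run time and allocated processors in fields 2, 4 and 5. A minimal reader for such traces might look like this (the returned dictionary layout is our own choice):

```python
def read_swf(lines):
    """Parse jobs from a Standard Workload Format (SWF) trace, the format
    used by the Parallel Workload Archive."""
    jobs = []
    for line in lines:
        line = line.strip()
        if not line or line.startswith(";"):
            continue  # skip blank lines and header comments
        f = line.split()
        jobs.append({
            "job_id": int(f[0]),     # field 1: job number
            "submit": float(f[1]),   # field 2: submit time (s)
            "runtime": float(f[3]),  # field 4: run time (s)
            "procs": int(f[4]),      # field 5: allocated processors
        })
    return jobs

trace = [
    "; UnixStartTime: 1009839600",
    "1 0 10 120 4 -1 -1 4 300 -1 1 12 1 1 1 -1 -1 -1",
    "2 30 0 60 1 -1 -1 1 100 -1 1 12 1 1 1 -1 -1 -1",
]
parsed = read_swf(trace)
```

GridSim's own trace reader performs the equivalent parsing internally; this sketch only illustrates the file layout.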
Fig. 4 Simulation scenario (users, resources and routers R1–R5 at sites 1–5, interconnected by 2.5 Gbps and 10 Gbps links)
4.2 Simulation Results

• Case 1: Single User, Simulated Workload Traces, MRA3D without constraints

The first set of simulations was aimed at comparing the behaviour of MRA3D with the Random, PseudoRandom and MRA2D resource allocation strategies in a single-user scenario with synthetic workload traces. The computational power required by each task was supposed to be uniformly distributed in the range [10, 50] GIPS, whereas the size of the input and output files was 1 GB. The background traffic was bursty: each burst consisted of a number of packets uniformly distributed in the range [30, 100], the length of each packet was 1,500 bytes, and the start time of each burst was distributed according to a Poisson process with rate 0.1 s⁻¹. Moreover, the link between site 5 and site 1 was congested. Figure 5 shows how the resources are utilized when the total number of tasks is 30. The PseudoRandom and Random resource allocation strategies ensure that tasks are almost fairly distributed among the resources, but the execution time of the application is not minimized. MRA2D overloads resource 1, since it has the highest CI and the same DI as resource 3, but it does not minimize the execution time, since the latency is high due to link congestion. MRA3D outperforms all the other algorithms, because it better utilizes the available resources, also taking into account their network connectivity features.
Fig. 5 Average resource utilization – single user case
• Case 2: Two Users, Simulated Workload Traces, MRA3D without constraints

To investigate the behaviour of MRA3D when multiple users compete for the same computational resources, a second set of simulations was carried out. The same settings as in case 1 were used, but two users located at different sites were considered (see Fig. 4), and the simulations were repeated with a number of tasks varying from 5 to 50. The execution time for each user as a function of the number of application tasks is shown in Fig. 6. MRA3D always outperforms all the other algorithms for both users because, as in case 1, it is able to better utilize the available resources and to minimize the execution time, which depends not only on the processing time, but also on the data transfer time. It is relevant to highlight that MRA2D performs worse than Random and PseudoRandom. This is not surprising, because both users, when using MRA2D, send most of their tasks to resource 1 (see Fig. 7), thereby congesting both the resource and the network connection towards it. MRA3D guarantees a more uniform utilization of the resources, thus improving the overall performance of the applications for both users.

• Case 3: Real Workload Traces, MRA3D with constraints

This set of simulations aimed at analyzing the behaviour of MRA3D under a cost constraint. The processing cost per MIPS was expressed in Grid Dollars (G$). The reference costs were taken from the WWG testbed: it seemed reasonable to assume that the more powerful resources were also the most expensive per MIPS, but the cost was kept under 10 G$ for all the resources (see Table 1).
Fig. 6 Execution time versus tasks number – two users case
Fig. 7 Total number of tasks for each resource – two users case
First, simulations were performed by considering a single user and varying the cost constraint. Figures 8 and 9 show the total cost and the execution time, respectively.
Fig. 8 Total cost versus cost constraint – single user
Fig. 9 Execution time versus cost constraint – single user
Fig. 10 Execution time versus cost constraint – two users
If the cost constraint is too low (C* ≤ 10,000 G$), no resource is found, whereas if C* = 20,000 G$ only one resource satisfies the constraint and, therefore, all the algorithms provide the same result. When C* ≥ 30,000 G$, MRA3D always minimizes the execution time. Indeed, when the cost constraint can be satisfied by several resources, MRA3D allows selecting the resource that achieves the best performance in terms of processing and data transfer time, even though the cost is sometimes not the minimum. It is relevant to highlight that if C* ≥ 50,000 G$, when MRA3D is adopted, the total cost and the execution time remain constant, because all the resources are used in the most effective way. In some cases (e.g., C* = 50,000 G$), Random and PseudoRandom may achieve a lower cost than MRA3D, but the execution time is greater.
The second set of simulations was performed to evaluate the performance of MRA3D with cost constraints when two users are competing for the same resources. As shown in Fig. 10, for both users, regardless of the resource allocation algorithm, the execution time decreases when the constraint increases, because multiple resources are able to meet the constraint. Moreover, for both users, when C* ≥ 30,000 G$ MRA3D minimizes the execution time, and if C* ≥ 50,000 G$ the execution time is constant. As in the single-user case, the Random or PseudoRandom resource allocation algorithms may sometimes minimize the cost, but not the execution time.
5 Conclusions

This chapter has addressed the problem of dynamic resource allocation for grid applications whose execution time is highly affected by both the processing and the data transfer time. More specifically, MRA3D is an application-centric algorithm that aims at minimizing the execution time of data-intensive applications by taking into account the current status of the computational resources in terms of computational load and network performance metrics. Moreover, MRA3D is also able to select the resource that satisfies performance and/or cost constraints specified by the end-user. A large set of simulations has been carried out to evaluate the performance of MRA3D and other common resource allocation algorithms in a realistic grid environment (i.e., the DORII project infrastructure). Tasks generated by a data-intensive application have been modelled by using synthetic as well as real workload traces. The simulation results clearly show that MRA3D, taking advantage of its knowledge of the grid infrastructure status, always outperforms the other algorithms considered in this work and achieves better resource utilization from the application's viewpoint.

Acknowledgements This work was supported by the European Commission under the DORII project (contract no. 213110). The authors would like to thank Dr. Leonardo Fortuna for his work in support of the research activity carried out in this paper.
References

1. EU DORII Project Home Page, http://www.dorii.eu
2. gLite Home Page, http://glite.web.cern.ch/glite/
3. EU GridCC Project Home Page, http://www.gridcc.org/cms/
4. El Rewini, H., Lewis, T., Ali, H.: Task Scheduling in Parallel and Distributed Systems, ISBN: 0130992356, PTR Prentice Hall, 1994
5. Dong, F., Akl, S.G.: Scheduling Algorithms for Grid Computing: State of the Art and Open Problems, Technical Report No. 2006-504, School of Computing, Queen's University, Kingston, Ontario, Canada, January 2006
6. Khoo, B.B., Veeravalli, B., Hung, T., Simon See, C.W.: Co-ordinate Based Resource Allocation Strategy for Grid Environments, CCGrid06 – Resource Management & Scheduling, Singapore, 16–19 May 2006
7. Khoo, B.B., Veeravalli, B., Hung, T., Simon See, C.W.: Multi-dimensional scheduling scheme in a Grid computing environment, Journal of Parallel and Distributed Computing, 67(6), June 2007
8. GridSim Home Page, http://sourceforge.net/projects/gridsim/
9. Baker, M., Buyya, R., Laforenza, D.: Grids and Grid Technologies for Wide-Area Distributed Computing, Software: Practice and Experience, 32(15), 1437–1466, December 2002
10. Smokeping Home Page, http://oss.oetiker.ch/smokeping/
11. Worldwide LHC Computing Grid, http://lcg.web.cern.ch/LCG/
12. Sulistio, A., Poduval, G., Buyya, R., Tham, C.: On Incorporating Differentiated Levels of Network Service into GridSim, Future Generation Computer Systems (FGCS), ISSN: 0167-739X, 23(4), 606–615, Elsevier Science, Amsterdam, The Netherlands, May 2007
13. Parallel Workload Archive Home Page, http://www.cs.huji.ac.il/labs/parallel/workload
Large-Scale Quantum Monte Carlo Electronic Structure Calculations on the EGEE Grid Antonio Monari, Anthony Scemama, and Michel Caffarel
Abstract A grid implementation of a massively parallel quantum Monte Carlo (QMC) code on the EGEE grid architecture is discussed. Technical details allowing an efficient implementation are presented, and the grid performance (number of queued, running, and executed tasks as a function of time) is discussed. Finally, we present a very accurate Li2 potential energy curve obtained by running several hundred tasks simultaneously on the grid.
1 Introduction

Quantum Monte Carlo (QMC) methods are known to be powerful stochastic approaches for solving the Schrödinger equation [1]. Although they have been widely used in computational physics during the last 20 years, they are still of marginal use in computational chemistry [2]. Two major reasons can be invoked for this: (a) the N-body problem encountered in chemistry is particularly challenging (a set of strongly interacting electrons in the field of highly attractive nuclei), and (b) the level of numerical accuracy required is very high (the so-called "chemical accuracy"). In computational chemistry, the two standard approaches used at present are Density Functional Theory (DFT) and the various post-Hartree–Fock wavefunction-based methods (Configuration Interaction, Coupled Cluster, etc.). In practice, DFT methods are the most popular, essentially because they combine a reasonable accuracy with a favorable scaling of the computational effort as a function of the number of electrons. On the other hand,
A. Monari • A. Scemama • M. Caffarel
Laboratoire de Chimie et Physique Quantiques, CNRS-IRSAMC, Université de Toulouse, France
e-mail: [email protected]; [email protected]; [email protected]
F. Davoli et al. (eds.), Remote Instrumentation for eScience and Related Aspects, DOI 10.1007/978-1-4614-0508-5_13, © Springer Science+Business Media, LLC 2012
post-HF methods are also employed, since they provide greater and much more controlled accuracy than DFT. Unfortunately, the price to pay for such accuracy is too high for them to be of practical use for large molecular systems. QMC appears as a third, promising alternative, essentially because it combines the advantages of both approaches: favorable scaling together with very good accuracy. In addition, and this is the central point of the present note, the QMC approaches – in sharp contrast with DFT and post-HF methods – are ideally suited to High-Performance Computing (HPC) and, more specifically, to massively parallel computations, either on homogeneous multi-processor platforms or on heterogeneous grid infrastructures. As in most "classical" or "quantum" Monte Carlo approaches, the algorithm is essentially of the number-crunching type, the central memory requirements remain small and bounded, and the I/O flows are essentially marginal. Due to these extremely favorable computational aspects, plus the rapid evolution of computational infrastructures towards ever more numerous and efficient processors, QMC is likely to play a growing role in computational chemistry in the coming years.
In the present study, the first implementation of our quantum Monte Carlo program on a large-scale grid – the European EGEE-III grid¹ – is presented. As a scientific application, we have chosen to compute with very high accuracy the potential energy curve (PEC) of the Li2 molecule (total energy of the system as a function of the Li–Li distance). To the best of our knowledge, the curve presented here is the most accurate PEC ever published for this system. In order to reach such an accuracy, two conditions need to be fulfilled. First, large enough Monte Carlo statistics have to be accumulated to reduce the final statistical error down to the desired precision.
Second, accurate enough trial wavefunctions must be employed to reduce as much as possible the so-called "fixed-node" error (the only systematic error left in a QMC calculation, see [2]). The first condition is easy to fulfill, since the system is small (only six electrons) and accumulating statistics is just a matter of making enough Monte Carlo steps and using enough processors (a "brute force" approach). The second condition is much more challenging, since we need to introduce trial wavefunctions with a controlled nodal quality that can be improved in a systematic way. Here, we have achieved this latter aspect by considering wavefunctions issued from Full Configuration Interaction (FCI) calculations in a large basis set (technically, the cc-pVQZ basis set [3]). Such an FCI trial wavefunction is expected to have a very good nodal structure. However, there is a price to pay: handling such a function is quite expensive. More precisely, the FCI trial wavefunction used here is expressed as a sum of 16,138 products of two 3×3 determinants (three α-electrons and three β-electrons) and, at each Monte Carlo step, this wavefunction and its first and second derivatives have to be computed. Note that the computational cost in terms of CPU time is directly proportional to
¹ http://www.eu-egee.org/
the number of products in the trial wavefunction expansion. To the best of our knowledge, this is the first time that such a high number of determinants has been used in a QMC calculation.
In Sect. 2, some technical details related to the implementation of a quantum Monte Carlo simulation and the use of our QMC=Chem [4] program are presented. Section 3 presents the computational strategy employed in our application to the Li2 molecule. Section 4 gives the results and discusses the performance. Finally, some conclusions are presented in Sect. 5.
2 Technical Details

A walker is a vector X of the 3N-dimensional space containing the entire set of three-dimensional coordinates of the N electrons of the system. During the simulation, a walker (or a population of walkers) samples the 3N-dimensional space via a Markov chain Monte Carlo process, according to some target probability density (the precise density may vary from one QMC method to another). From a practical point of view, the averages of the quantities of interest (energy, densities, etc.) are calculated over a set of independent random walks that is as large as possible. Random walks differ from each other only in the initial electron positions X0 and in the initial random seed S0 determining the entire series of random numbers used.
In the QMC=Chem code used here, the main computational object is a block. In a block, Nwalk independent walkers realize random walks of length Nstep, and the quantities of interest are averaged over all the steps of each random walk. If Nstep is significantly larger than the auto-correlation time (which is usually rather small), the positions of the walkers at the end of the block can be considered independent of their initial positions, and a new block can be sampled using these configurations as X0 and the current random seed as S0. The final Monte Carlo result is obtained by averaging the results obtained for each block. If the data associated with each block are saved on disk, the averages can be calculated as a post-processing of the data, and the calculation can easily be restarted using the last positions of the walkers and the last random seed. Note that the computation of the averages does not require any time ordering. If the user provides a set of Nproc different initial conditions (walker positions and random seeds), the blocks can be computed in parallel. In Fig. 1, we give a pictorial representation of four independent processors computing blocks sequentially, each block having different initial conditions.
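The block structure lends itself to a compact sketch. The uniform-variate "estimator" below is a toy stand-in for the real local-energy sampling, and all names are illustrative; only the averaging scheme (independent blocks, standard error over block means) follows the text:

```python
import random
import statistics

def run_block(seed, n_walk, n_step):
    # Toy stand-in for a QMC block: Nwalk walkers each realize Nstep Monte
    # Carlo steps; the block returns the average of the sampled quantity.
    rng = random.Random(seed)
    total = sum(rng.uniform(0.0, 1.0) for _ in range(n_walk * n_step))
    return total / (n_walk * n_step)

def combine_blocks(block_means):
    # Blocks are independent, so the final estimate is the plain average of
    # the block means and the error bar is their standard error.
    mean = statistics.fmean(block_means)
    error = statistics.stdev(block_means) / len(block_means) ** 0.5
    return mean, error

# 20 "processors", each contributing one block from its own initial seed:
means = [run_block(seed, n_walk=10, n_step=100) for seed in range(20)]
estimate, error_bar = combine_blocks(means)
```

Because the blocks are exchangeable, the combination step is order-independent, which is what allows the post-processing and restart behavior described above.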
2.1 Design of the QMC=Chem Program

The QMC=Chem program was designed specifically to run on heterogeneous clusters via the Message Passing Interface (MPI) API [6], and also in grid environments
via Python² scripts. The memory requirements, disk input/output and network communications were minimized as much as possible, and the code was written to allow asynchronous processes. This section presents the general design of the program.

Fig. 1 Graphical representation of a QMC simulation. Each process generates blocks, each block being composed of Nwalk walkers realizing Nstep Monte Carlo steps

The behavior of the program is the following: a main Python program spawns three processes, an observer, a computation engine, and a data server (see Fig. 2).
2.1.1 The Observer

The observer keeps a global view of the whole calculation (current values of the computed averages, total CPU time, wall time, etc.). It updates the results from the calculated blocks at regular intervals of time and checks whether a stopping condition has been reached, informing the data server whether the code should continue or stop. The stopping condition can be a maximum value of the total CPU time, the wall time, or the number of blocks, or a threshold on the statistical error bar of any Monte Carlo estimate.
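The stopping-condition check can be sketched as below; the `state` dictionary, the keyword names, and the function itself are hypothetical, mirroring only the four conditions listed above:

```python
import time

def should_stop(state, max_cpu=None, max_wall=None,
                max_blocks=None, target_error=None):
    # Hypothetical observer check mirroring the stopping conditions above:
    # total CPU time, wall time, number of blocks, or statistical error bar.
    if max_cpu is not None and state["cpu_time"] >= max_cpu:
        return True
    if max_wall is not None and time.time() - state["start"] >= max_wall:
        return True
    if max_blocks is not None and state["blocks"] >= max_blocks:
        return True
    if target_error is not None and state["error"] <= target_error:
        return True
    return False
```

Any condition left as `None` is simply not enforced, so a run can be bounded by wall time alone, as in the grid deployment described later.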
² http://www.python.org/
Fig. 2 Inter-process communication of the QMC=Chem program
2.1.2 The Computation Engine

The computation engine starts the Fortran MPI executable. The master MPI process broadcasts the common input data to the slaves and enters the main loop. In the main loop, the program computes one block and sends the results to the data server via a trivial Python XML-RPC client. The reply of the data server determines whether the main loop should be exited or another block should be computed. When the main loop is exited, there is an MPI synchronization barrier where all the slave processes send their last walker positions and their last random seed to the master, which writes them to disk.
A linear feedback shift register (LFSR) pseudo-random number generator [5] is implemented in the code. A pool of 7,000 initial random seeds was prepared beforehand, each random seed being separated from the previous one by 6×10¹¹ draws. Every time a random number is drawn, a counter is incremented. If the counter reaches 6×10¹¹, the next free random seed is used. This mechanism guarantees that the parallel processes will never use the same sequence of random numbers.
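The seed-exhaustion mechanism can be sketched as follows. The `SeedStream` class is our own construction, and Python's generator stands in for the LFSR of the real code; only the counter-and-stride logic follows the text:

```python
import random

class SeedStream:
    """Each process owns a pool of pre-separated seeds; after STRIDE draws it
    moves on to the next free seed, so two processes drawing from disjoint
    pools can never share a subsequence of the generator."""
    STRIDE = 6 * 10**11  # draws separating two consecutive seeds in the pool

    def __init__(self, seed_pool):
        self.pool = iter(seed_pool)
        self.rng = random.Random(next(self.pool))
        self.count = 0

    def draw(self):
        if self.count >= self.STRIDE:
            self.rng = random.Random(next(self.pool))  # next free seed
            self.count = 0
        self.count += 1
        return self.rng.random()
```

With 7,000 seeds spaced 6×10¹¹ draws apart, each process can consume its share of the stream without ever overlapping another process's sequence.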
2.1.3 The Data Server

The data server is a Python XML-RPC server whose role is to receive the computed data during a simulation and save it into files. Each file is a few kilobytes in size and contains the averages of interest computed over one block. The data server also computes an MD5 key [7] from the critical input values, to guarantee that the computed blocks belong to the simulation and that the input data have not been corrupted.
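The integrity key can be sketched with the standard library; the function name, the dictionary of "critical input values", and the canonical serialization are illustrative assumptions:

```python
import hashlib

def simulation_key(critical_input: dict) -> str:
    # MD5 digest of the critical input values, stored alongside each block so
    # that blocks produced from a different (or corrupted) input are rejected.
    # Sorting the keys makes the digest independent of dict insertion order.
    payload = "\n".join(f"{k}={critical_input[k]}" for k in sorted(critical_input))
    return hashlib.md5(payload.encode()).hexdigest()

inp = {"molecule": "Li2", "n_walk": 100, "n_step": 1000}
key = simulation_key(inp)
```

Any change to a critical value yields a different key, so stale or foreign block files are detected before they can pollute the averages.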
2.1.4 Generation of the Output

The output file of the program is not created during the run, which only produces block files via the data server. A separate script analyzes the blocks written to disk to produce an output. This script can be executed at any time: while a calculation is running, to check the current values, or when the simulation has finished. The consistency of the input data with the blocks is checked using the previously mentioned MD5 key.
2.1.5 Adaptation to Grid Environments

A script was written to prepare as many files as there are requested tasks on the grid, each file name containing the index of the task (for example, 12.tar.gz). Each file is a gzipped tar archive of a directory containing all the needed input data. The only difference between the input files of two distinct tasks is their vector X0 and their random seed S0. The script also generates a gzipped tar file qmcchem.tar.gz, which contains a statically linked non-MPI Fortran i686-linux executable and all the needed Python files (the XML-RPC data server, the observer, etc.). A third generated file is a shell script, qmcchem_grid.sh, which runs the job.
Another script reverses this work. First, it unpacks all the tar.gz files containing the input files and the block files. Then it collects all the block files into a common directory for the future production of the output file.
Termination system signals (SIGKILL, SIGTERM, etc.) are intercepted by the QMC=Chem program. If any of these signals is caught, the program tries to finish the current block and terminates in a clean way.
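The per-task packaging step can be sketched with the standard `tarfile` module; the function, the in-archive file names (`walkers.txt`, `seed.txt`, `input.qmc`), and the in-memory construction are all illustrative assumptions:

```python
import io
import tarfile

def pack_task(index, common_files, x0, seed):
    """Build <index>.tar.gz in memory, containing the shared input plus
    this task's own X0 and random seed (the only fields that differ
    between tasks)."""
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w:gz") as tar:
        files = dict(common_files)
        files["walkers.txt"] = x0      # this task's initial positions X0
        files["seed.txt"] = str(seed)  # this task's initial random seed S0
        for name, text in files.items():
            data = text.encode()
            info = tarfile.TarInfo(name=f"task_{index}/{name}")
            info.size = len(data)
            tar.addfile(info, io.BytesIO(data))
    return buf.getvalue()

archive = pack_task(12, {"input.qmc": "molecule Li2"}, x0="0.1 0.2 0.3", seed=42)
```

The reverse script then only has to unpack every archive and gather the block files into one directory, since the MD5 key ties each block back to its input.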
2.2 Advantages of Such a Design

2.2.1 Asynchronous Processes

Note that the transmission of the computed data is not realized through MPI in the main loop of the computation engine, but with a Python script instead. This avoids the need for MPI synchronization barriers inside the main loop and allows the code to run on heterogeneous clusters with a minimal use of MPI statements. The main advantage of this design is that, if the MPI processes are sent to machines with different types of processors, each process will always use 100% of its CPU (except at the final synchronization barrier), and fast processors will simply send more blocks to the data server than slower ones.
2.2.2 Analysis of the Results

Analyzing the blocks in a post-processing step (see Sect. 2.1) has major advantages. First, analyzing the data with graphical user interfaces is trivial, since all the raw data are present in the block files, which are easy to read by programs, as opposed to traditional output files, which are written for users. The degree of verbosity of the output can be changed on request by the user, even after the end of the calculation; this spares the user from reading a large file to find only one small piece of information, while the verbose output remains accessible. The last and most important feature is that the production of the output does not impose any synchronization on the parallel processes, so they can run naturally in grid environments.
2.2.3 Failure of a Process As all the processes are independent, if one task dies it does not affect the others. For instance, if a large job is sent to the grid and one machine has a power failure, the user may not even notice that part of the work has not been computed. Moreover, as process signals are intercepted, if a batch queuing system tries to kill a task (e.g., because it has exceeded the maximum wall time of the queue), the job is likely to end gracefully and the computed blocks will be saved. This fail-safe feature is essential in a grid environment, where it is almost unpredictable whether all the requested tasks will end as expected.
2.2.4 Flexibility The duration of a block can be tuned by the user, since it is proportional to the number of steps per trajectory. In the present work, the stopping condition was chosen to be a wall-time limit, which is convenient in grid environments with various types of processors. When the stopping condition is reached, if the same job is sent again to the queuing system, it automatically continues from the last walker positions and random seeds, and uses the previously computed blocks to calculate the running averages. Note that between two subsequent simulations there is no constraint to use the same number or the same types of processors.
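A minimal sketch of such a restart mechanism, assuming an illustrative blocks.json state file rather than QMC=Chem's actual on-disk format:

```python
import json
import os

def run_with_restart(workdir, compute_block, blocks_per_run):
    """Continue a simulation from the blocks saved by earlier submissions.

    Each (re)submission loads whatever blocks previous runs left in
    `workdir`, appends new ones, and updates the running average.
    """
    path = os.path.join(workdir, "blocks.json")
    if os.path.exists(path):
        with open(path) as f:
            blocks = json.load(f)       # blocks computed by previous runs
    else:
        blocks = []
    for _ in range(blocks_per_run):     # stands in for the wall-time limit
        blocks.append(compute_block())
    with open(path, "w") as f:
        json.dump(blocks, f)            # saved state for the next submission
    return sum(blocks) / len(blocks)    # running average over all runs
```

Resubmitting the same job simply extends the list of blocks, so the precision of the averages improves across submissions with no constraint on the hardware used each time.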
3 Computational Strategy and Details A quantum Monte Carlo study has been performed on the Li2 molecule. The choice of such a system allowed us to consider a Full Configuration Interaction (FCI) wave function as the trial wave function, i.e., a virtually exact solution of the Schrödinger
A. Monari et al.
equation in the subspace spanned by the Gaussian orbital basis. Recall that the computational cost of the FCI problem scales combinatorially with the number of basis functions and electrons; therefore, high-quality FCI wave functions are practically impossible to obtain for much larger systems.
3.1 The Q5Cost Common Data Format Due to the inherent heterogeneity of grid architectures and to the necessity of using different codes, a common format for data interchange and interoperability is mandatory in the context of distributed computation. For this reason, we have previously developed a specific data format and library for quantum chemistry [8], and its use for single-processor and distributed calculations has already been reported [9]. Q5Cost is based on the HDF5 format, which makes the binary files portable across platforms. Moreover, the compression features of HDF5 are exploited to significantly reduce the file size while keeping all the relevant information and metadata. Q5Cost organizes chemistry-related objects in a hierarchical structure within a logical containment relationship. A library to write and access Q5Cost files has been released [8]. The library, built on top of the HDF5 API, uses chemical concepts to access the different file objects. This makes its inclusion in quantum chemistry codes simple and straightforward, leaving the low-level HDF5 details completely transparent to the chemical software developer. Q5Cost has emerged as an efficient tool to facilitate communication and interoperability; it is particularly useful in distributed environments and is therefore well adapted to the grid.
3.2 Computational Details All the preliminary FCI calculations were performed on a single processor using the Bologna FCI code [10, 11]. The code was interfaced with Molcas [12] to obtain the necessary one- and two-electron molecular integrals. The FCI computations considered in the present work involved up to 16,138 symmetry-adapted and partially spin-adapted determinants. All the communication between the different codes was handled through the Q5Cost format and library [8]. In particular, a module was added to the Molcas code to produce a Q5Cost file containing the information on the molecular system and the atomic and molecular (self-consistent-field level) integrals. The Q5Cost file was read directly by the FCI code, and the final FCI wave function was added to the same file in a standardized way. The actual QMC=Chem input was prepared by a Python script reading the Q5Cost file content. Before running the QMC calculation
on the grid, an equilibration step was performed (i.e., building "good" starting configurations X0 for the walkers) by doing a quick variational QMC run (see [1] and [2]) in single-processor mode. The QMC computations were run on the EGEE grid over different computing elements in a massively parallel way. Typically, for each potential energy curve point we requested 1,000 nodes, obtaining at least about 500 tasks running concurrently. Once the job on each node was completed, the results were retrieved and the output file was produced by the post-processing script to obtain the averaged QMC energy. Owing to the inherent flexibility of the QMC implementation, the fact that different tasks on different nodes terminated after different numbers of blocks caused no difficulty, since the output file is produced independently of the computation phase. Moreover, the failure or abortion of some tasks did not significantly impact the quality of the results.
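Since the output is produced from whatever blocks exist, tasks that computed different numbers of blocks, or failed outright, can simply be pooled together; `pool_blocks` below is a hypothetical helper, not part of the actual code:

```python
def pool_blocks(per_task_blocks):
    """Pool block averages from tasks that completed unequal numbers of blocks.

    Each entry is the list of blocks one grid task managed to compute
    before finishing, being cancelled, or failing; failed tasks just
    contribute an empty list and are silently ignored.
    """
    pooled = [b for task in per_task_blocks for b in task]
    return sum(pooled) / len(pooled), len(pooled)
```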
4 Results and Grid Performance Our results for the potential energy curve of the Li2 molecule are presented in Fig. 3 (graphical form) and Table 1 (raw data and error bars). Results are given for a set of 31 inter-nuclear distances. Let us emphasize that the data are of very high quality, and the energy curve presented is, to the best of our knowledge, the most accurate ever published. To illustrate this point, the dissociation energy, defined as De = E(Req) − E(R = ∞), is found here to be De = 0.0395(2) a.u., in excellent agreement with the experimental result
Fig. 3 The QMC Li2 potential energy curve: energy (a.u.) versus Li-Li distance (a.u.)
Table 1 Quantum Monte Carlo energies (atomic units) as a function of the Li-Li distance (atomic units). Values in parentheses correspond to the statistical error on the last two digits

Distance (a.u.)   Energy (a.u.)      Distance (a.u.)   Energy (a.u.)
2.2               −14.81854(44)      5.2               −14.99491(29)
2.4               −14.85192(36)      5.4               −14.99431(30)
2.6               −14.87861(36)      5.6               −14.99279(30)
2.8               −14.90269(37)      5.8               −14.99090(30)
3.0               −14.92195(35)      6.0               −14.99018(32)
3.2               −14.93861(38)      6.4               −14.98541(26)
3.4               −14.95324(37)      6.8               −14.98088(29)
3.6               −14.96428(38)      7.2               −14.97804(29)
3.8               −14.97320(38)      7.6               −14.97281(28)
4.0               −14.98250(29)      8.0               −14.96984(28)
4.2               −14.98629(27)      10.0              −14.95951(27)
4.4               −14.99016(30)      12.0              −14.95747(15)
4.6               −14.99285(29)      14.0              −14.95624(12)
4.8               −14.99358(29)      16.0              −14.95606(14)
5.0               −14.99479(29)      18.0              −14.95569(17)
5.051             −14.99492(17)      …                 …
                                     100.0             −14.95539(11)
of De = 0.03928 a.u. [13]. A much more detailed discussion of these results and additional data will be presented elsewhere (Monari et al.: A quantum Monte Carlo study of the fixed-node Li2 potential energy curve using Full Configuration Interaction nodes, unpublished). For this application, only 11 MB of memory per Fortran process was needed to compute the blocks. The benefit is twofold. First, our jobs were selected rapidly by the batch queuing systems, since very few resources were requested. Second, as the memory requirements are very low, the code was able to run on any kind of machine, which allowed our jobs to enter both the 32-bit (x86) and 64-bit (x86_64) queues.
4.1 Grid Performance A typical analysis of the grid performance can be seen in Fig. 4. Here, we report the number of queued, running, and executed tasks as a function of time for the submission of a parametric job of 1,000 tasks. One can see that after a very limited amount of time (less than 1 h) almost half of the tasks were already running. Correspondingly, the number of queued tasks undergoes a very rapid decay, indicating that a considerable fraction of the submitted tasks did not spend a significant amount of time in the queue. This behaviour is a consequence of the very limited amount of resources requested by the job,
Fig. 4 Number of tasks in the queued, running, and done states as a function of wall time (hours)
and by the fact that virtually all the queues could be used; it should therefore be ascribed to the high flexibility of our approach. Correspondingly, the number of running tasks reaches a maximum at about 1 h. It is also important to notice the high asymmetry of the peak: while the maximum number of running tasks is achieved quite rapidly, a high efficiency (number of running tasks) is maintained for a considerable amount of time before degrading. The number of completed tasks also increases quite rapidly. After about 4 h the number of running tasks remains constant and quite low, and consequently only a small variation is observed in the number of completed tasks. The decay of the number of running jobs is due to the fact that the vast majority of tasks had computed the desired number of blocks. The remaining jobs (about 50) can be ascribed to tasks running on very slow processors, which therefore required more time to complete their blocks; moreover, some of the jobs could be considered "ghosts," i.e., labelled as running although stalled. After 9 h the job was cancelled, since the desired precision level had been achieved even though only about 700-800 tasks had actually completed. The overall process required on average about 1,000 CPU hours for each geometry, equivalent to about 40 days of CPU time. It is also worth commenting on the distribution of the CPU time per block for each task (Fig. 5); its spread reflects the fact that some tasks were performed on slower nodes. Again, this asymmetry did not influence the final result, since the final statistics were computed over the total set of blocks.
Fig. 5 Histogram of the average CPU time per block (seconds), showing the heterogeneity of the processors used
5 Conclusions An efficient grid implementation of a massively parallel quantum Monte Carlo code has been presented, and the strategy employed has been described in detail. Test applications performed on the EGEE-III grid architecture show the efficiency and flexibility of our implementation. As shown, our approach makes it possible to exploit the computational power and resources of the grid for the solution of a nontrivial N-body problem of chemistry. It must be emphasized that the very significant computational gain obtained here concerns the simulation of the Schrödinger equation for a single fixed nuclear geometry (a single-point energy calculation). This is in sharp contrast with the common situation in computational chemistry, where parallelism is not used (or only partially used) for solving the problem at hand (algorithms are in general poorly parallelized) but rather for running independent simulations at different nuclear geometries (trivial parallelization based on different inputs). We believe that the quantum Monte Carlo approach, based on Markov chain processes and on the accumulation of statistics for independent events, can represent an ideal test bed for the use of grid environments in computational chemistry. Finally, we note that the combination of grid computing power and the ability of QMC to treat chemical problems with a high level of accuracy can open the way to studying fascinating problems (from nano-sciences to biological systems) that are presently out of reach.
Acknowledgments Support from the French CNRS and from University of Toulouse is gratefully acknowledged. The authors also wish to thank the French PICS action 4263 and European COST in Chemistry D37 “GridChem”. Acknowledgments are also due to the EGEE-III grid organization and to the CompChem virtual organization.
References
1. W.M.C. Foulkes, L. Mitáš, R.J. Needs, G. Rajagopal: Quantum Monte Carlo simulations of solids. Rev. Mod. Phys. 73, 33 (2001)
2. B.L. Hammond, W.A. Lester Jr., P.J. Reynolds: Monte Carlo Methods in Ab Initio Quantum Chemistry. World Scientific (1994)
3. T.H. Dunning Jr.: Gaussian basis sets for use in correlated molecular calculations. I. The atoms boron through neon and hydrogen. J. Chem. Phys. 90, 1007 (1989)
4. QMC=Chem is a general-purpose quantum Monte Carlo code for electronic structure calculations, developed by M. Caffarel, A. Scemama and collaborators at Lab. de Chimie et Physique Quantiques, CNRS and Université de Toulouse, http://qmcchem.ups-tlse.fr
5. P. L'Ecuyer: Tables of maximally equidistributed combined LFSR generators. Math. Comput. 68, 261-269 (1999)
6. W. Gropp, E. Lusk, N. Doss, A. Skjellum: A high-performance, portable implementation of the MPI message passing interface standard. Parallel Computing 22, 789-828 (1996)
7. R.L. Rivest: Technical report. Internet Activities Board, April (1992)
8. A. Scemama, A. Monari, C. Angeli, S. Borini, S. Evangelisti, E. Rossi: Common format for quantum chemistry interoperability: Q5Cost format and library. In: O. Gervasi et al. (eds.) Computational Science and Its Applications, ICCSA 2008, Part I, LNCS 5072, 1094-1107 (2008)
9. V. Vetere, A. Monari, A. Scemama, G.L. Bendazzoli, S. Evangelisti: A theoretical study of linear beryllium chains: Full Configuration Interaction. J. Chem. Phys. 130, 024301 (2009)
10. G.L. Bendazzoli, S. Evangelisti: A vector and parallel full configuration interaction algorithm. J. Chem. Phys. 98, 3141-3150 (1993)
11. L. Gagliardi, G.L. Bendazzoli, S. Evangelisti: Direct-list algorithm for configuration interaction calculations. J. Comput. Chem. 18, 1329 (1997)
12. G. Karlström, R. Lindh, P.-Å. Malmqvist, B.O. Roos, U. Ryde, V. Veryazov, P.-O. Widmark, M. Cossi, B. Schimmelpfennig, P. Neogrady, L. Seijo: MOLCAS: a program package for computational chemistry. Computational Material Science 28, 222 (2003)
13. C. Filippi, C.J. Umrigar: Multiconfiguration wave functions for quantum Monte Carlo calculations of first-row diatomic molecules. J. Chem. Phys. 105, 213 (1996)
Generating a Virtual Computational Grid by Graph Transformations
Barbara Strug, Iwona Ryszka, Ewa Grabska, and Grażyna Ślusarczyk
Abstract This chapter aims to contribute to a better understanding of grid generation and simulation problems. To this end, we propose a new graph structure called layered graphs. This approach enables us to use attributed graph grammars as a tool to generate a grid structure and its parameters at the same time. To illustrate our method, an example of a grid generated by means of graph grammar rules is presented. The obtained results allow us to investigate properties of a grid in a more general way.
1 Introduction Grid computing is based on the distributed computing concept. The latter term refers to a model of computer processing that assumes the simultaneous execution of separate parts of a program on a collection of individual computing devices. The availability of the Internet and of high-performance computing makes it possible to execute large-scale computations and to use data-intensive computing applications in science, engineering, industry, and commerce. This idea led to the emergence of the concept of Grid computing. The term "Grid" originated in the mid-1990s to describe a collection of geographically distributed resources that can solve large-scale problems. In the foundational paper "The Anatomy of the Grid: Enabling Scalable Virtual Organizations," Ian Foster, Carl Kesselman, and Steve Tuecke introduced the paradigm of the Grid and its main features. According to them, the Grid concept can be regarded as coordinated resource sharing and problem solving in dynamic, multi-institutional virtual organizations [3]. B. Strug • I. Ryszka • E. Grabska • G. Ślusarczyk Faculty of Physics, Astronomy and Applied Computer Science, Jagiellonian University, Reymonta 4, Cracow, Poland e-mail:
[email protected];
[email protected];
[email protected];
[email protected] F. Davoli et al. (eds.), Remote Instrumentation for eScience and Related Aspects, DOI 10.1007/978-1-4614-0508-5 14, © Springer Science+Business Media, LLC 2012
B. Strug et al.
The Grid integrates and provides seamless access to a wide variety of geographically distributed computational resources (such as supercomputers, storage systems, instruments, data, services, people) from different administrative areas and presents them as a unified resource. Sharing of resources is restricted and highly controlled, with resource providers and consumers defining their sharing rules clearly and carefully. A dynamic collection of individuals, multiple groups, or institutions defined by such restrictions and sharing the computing resources of the Grid for a common purpose is called a Virtual Organization (VO) [3]. Moreover, in the Grid environment standard, open, general-purpose protocols and interfaces should be used. The use of open standards provides interoperability and integration facilities. These standards must be applied for resource discovery, resource access, and resource coordination [9]. Another basic requirement of a Grid computing system is the ability to provide the quality of service (QoS) required by the end-user community. The Grid allows its constituent resources to be used in a coordinated fashion to deliver various qualities of service, such as response-time measures; aggregated performance; security fulfillment; resource scalability; availability; autonomic features, e.g., event correlation and configuration management; and partial fail-over mechanisms [10]. The Grid is used for many different applications, and it seems important to be able to represent its structure appropriately. As it is a heterogeneous, changing structure based on clusters distributed over different geographical locations, simple graphs are not powerful enough to represent it. We propose to use a new graph structure called a layered graph, which is based on a hierarchical graph.
This flexible and yet powerful representation can be used to implement a grid simulator, which would allow testing different types of grids and different grid configurations before actually implementing them. Such an approach could lower the costs of establishing and running a grid. Moreover, an adequate representation of a grid would allow us to investigate its properties in a more general way and thus better understand its working and behaviour. Grid generation is discussed by Lu and Dinda [2]. Their approach separates topology generation and annotations, while the method proposed here, which uses graph grammars, makes it possible to generate the structure together with the annotations (attributes) [6]. Section 2 presents the layered structure of the Grid. In Sect. 3, Grid simulation is discussed. In Sect. 4, definitions concerning hierarchical graphs are presented, while in Sect. 5 layered graphs are defined. The grid structure represented by a layered graph and the rules generating such a representation are discussed in Sects. 6 and 7, respectively.
2 Layered Architecture of the Grid The Grid can be seen as a framework composed of several layers. Its architecture can be compared to the OSI model [8]. The lowest layer in the Grid is a Fabric Layer which is responsible for providing physical resources that can be shared by the Grid:
Fig. 1 Comparison between the Grid architecture and the OSI model
computational power, storage, network, and also the local services associated with them. In the OSI model, the counterparts of the Grid Fabric Layer are the Data Link Layer and the Physical Layer, which together ensure proper communication between different types of media. The next layer in the Grid is responsible for connectivity issues. It defines Grid services such as communication and authentication protocols which support transactions in the grid environment. A similar role in the OSI model is assigned to the Network and Transport Layers, whose main function is the logical and physical transmission of data between endpoints in a standard way (Fig. 1). The scope of responsibilities of the Resource Layer in the Grid architecture includes using the protocols and security functions offered by the Connectivity Layer to perform and manage operations on individual resources in the Grid (for example initiating, securing, and terminating operations). Similar functionality is provided by the Session Layer in the OSI model, which enables creating and managing sessions between system elements. Above the Resource Layer in the Grid architecture, the Collective Layer can be distinguished. It is tightly coupled with the previous one, as it offers collective protocols, APIs, and services which are used to manage access to multiple resources. In a similar way, the Presentation Layer in the OSI model links elements from the lower layers and provides them as a cooperating environment to the highest layer. On top of both architectures there is the Application Layer. In the case of the Grid, in this layer toolkits, APIs, and Software Development Kits (SDKs) can be defined and used to create applications dedicated to working in a grid environment. Analogously, in the OSI structure this layer provides the interface for cooperating with end users.
In the above-mentioned architecture of the Grid, particular attention is paid to resource sharing and to supporting cooperation between geographically distributed computers in common problem solving. The layered architecture is a good starting point for further outlining the key elements of each layer. From the end user's perspective, the first two layers (Connectivity and Fabric) can be regarded as the core physical elements which, coupled together, build the environment. These layers are managed by middleware components belonging to the next two layers. Management can be discussed from two different points of view. The first one takes into account only one resource at a time (the Resource Layer); there should therefore exist middleware components that perform operations such as authentication, authorization, or security services. As the main advantage of the Grid is cooperation, the other middleware components should control the interoperability between low-level elements in such a way that the user's requirements are met. These components are part of the Collective Layer, for instance, the Grid Information Service and the Resource Broker.
3 Grid Simulation Building the Grid environment is a long-lasting and complex process, due to the necessity of adjusting all elements to the expected user requirements. Simulators are generally regarded as a support tool during work on the target environment, enabling quick verification of proposed solutions. However, in the case of high-performance computing systems such as the Grid, a simulation cannot cover all features of the Grid simultaneously. Therefore, many different types of simulators have been proposed, and in this section we discuss some existing solutions. The authors of the Brick Grid simulator were particularly interested in the analysis and comparison of various scheduling policies in a high-performance global computing setting [14]. The tool makes it possible to verify a wide range of resource scheduling algorithms and the behavior of networks. The main advantage of this solution is the possibility of incorporating existing global computing components via its foreign interface; a drawback is the limited complexity of the network topologies that can be built. The GridSim toolkit focuses on the investigation of effective resource allocation techniques based on computational economy [13]. The system provides the possibility to create a wide range of types of heterogeneous resources. Additionally, resource brokers have been introduced, and appropriate scheduling algorithms for mapping jobs to resources are used in order to optimize system or user objectives according to their goals. The simulator has also been used to verify the influence of the local economy and of global positioning on securing jobs under various pricing and demand/supply situations. On top of GridSim, the Grid Scheduling Simulator (GSSIM) has been built [7]. Its authors were interested in overcoming two issues: the lack of tools that generate
workloads, events, or resources, and the simultaneous modelling of different Grid levels, i.e., resource brokers and local-level scheduling systems. These goals were achieved by introducing multilevel scheduling architectures with plugged-in algorithms for the Grid and local schedulers. Additionally, the simulator makes it possible to read existing real workloads and to generate synthetic ones on the basis of given probabilistic distributions and constraints. Another example of a simulator is the SimGrid project [1]. The main goal of the project is to facilitate research in the area of distributed and parallel application scheduling on distributed computing platforms, ranging from a simple network of workstations to Computational Grids. The authors are concerned mainly with network topology and the flow of data over the available network bandwidth. However, the lack of modelling of job decomposition and task parallelization characteristics can be seen as the main disadvantage of the toolkit. To summarize, research in the area of grid simulation touches a wide range of problems existing in real environments. However, their complexity forces the search for new solutions to these issues.
4 Hierarchical Graphs Hierarchical graphs are used to represent complex objects in different domains of computer science [11, 12]. Their ability to represent the structure of an object, as well as relations of different types between its components, makes them useful for representing different parts of the structure of the grid. They can represent such parts at different levels of detail at different stages of the grid construction, thus hiding unnecessary low-level data and showing only a relevant part of the structure. Hierarchical Graphs (HGs) can be seen as an extension of traditional, simple graphs. They consist of nodes and edges. What makes them different from "flat" graphs is that nodes in HGs can contain internal nodes. These nodes, called children, can in turn contain other internal nodes and can be connected to any other nodes, with the only exception being their ancestors. This property makes hierarchical graphs exceptionally well fitted to represent the structure of the grid, which consists of a number of geographically distributed networks that can in turn contain other networks. In this paper, a hierarchical node is defined as a pair v_i = (i, C_{v_i}), where i is a node identifier and C_{v_i} is the set of the node's children. Nodes which have no children are called simple, and are sometimes referred to as the lowest-level nodes. An edge e is a pair (v_i, v_j), where v_i and v_j are the nodes connected by e. For the remaining part of this paper, let X be a set of hierarchical nodes with a finite number of children. Sets of children, ancestors, and a hierarchical graph are defined in the following way.

Definition 1 (Children).
1. ∀v ∈ X: Ch^0(v) = {v},
2. Ch^n(v) = {w : ∃z ∈ Ch^(n−1)(v), w ∈ C_z},
3. Ch(v) = Ch^1(v),
4. Ch^+(v) = ⋃_{i=1}^∞ Ch^i(v), Ch^*(v) = ⋃_{i=0}^∞ Ch^i(v).

Definition 2 (Direct ancestor). The direct ancestor of the node v is defined as a node w such that v ∈ C_w, and is denoted anc(v). The ancestor of a node v that is not a child of any node will be denoted by v↑. Thus:
anc(v) = w if ∃w : v ∈ C_w, and anc(v) = v↑ if no such w exists.

Definition 3 (Ancestor). The i-th level (i ≥ 1) ancestor of the node v is defined in the following way: anc^1(v) = anc(v), and for i > 1,
anc^i(v) = anc(anc^(i−1)(v)) if anc^(i−1)(v) ≠ v↑, and anc^i(v) = v↑ if anc^(i−1)(v) = v↑.

Definition 4 (Hierarchical graph). A hierarchical graph G is defined as a pair (V, E), where:
• V ⊆ X is a set of nodes, such that node identifiers are unique and a node may have only one direct ancestor,
• E is a set of edges, which do not connect nodes related hierarchically.

It is important to note that the edges between nodes are by no means limited to edges between descendants of the same node. In other words, there may exist edges between nodes having different ancestors. Such a feature can be very useful in a grid representation, where nodes representing components of different parts of the grid (and hence having different ancestors) can be connected to each other in different ways. For example, there can be a direct connection to a node representing a storage unit from outside its subnetwork. Such a connection may be represented in a hierarchical graph by an edge connecting the nodes representing the above-mentioned elements. Parts of an object represented by a hierarchical graph correspond to subgraphs.

Definition 5 (Subgraph). A subgraph of a hierarchical graph G = (V, E) is a hierarchical graph g = (V_g, E_g), such that V_g ⊆ V, E_g ⊆ E, and
• if a node belongs to the subgraph then all its children also do: ∀v ∈ V_G, ∀w ∈ Ch(v): v ∈ V_g ⇒ w ∈ V_g,
• if two nodes belong to the subgraph then the edge between them also does: ∀v_i, v_j ∈ V_G, ∀e = (v_i, v_j): v_i, v_j ∈ V_g ⇒ e ∈ E_g.

If a subgraph is removed from the graph, the remaining graph (called the rest graph) contains all nodes that do not belong to the subgraph and the edges that connect them. During this operation, all edges that connect nodes of the subgraph with nodes of the rest graph are also removed. The set of all such edges is called the embedding.
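The children and ancestor operators above can be illustrated with a small Python sketch (the class and function names are ours, for illustration only, not part of the chapter's formalism):

```python
class HNode:
    """Hierarchical node v = (i, C_v): an identifier plus a list of children."""

    def __init__(self, ident, children=()):
        self.ident = ident
        self.children = list(children)
        self.parent = None
        for c in self.children:
            c.parent = self          # a node has only one direct ancestor

def ch(v, n):
    """Ch^n(v): descendants of v exactly n levels below (Ch^0(v) = {v})."""
    level = [v]
    for _ in range(n):
        level = [w for z in level for w in z.children]
    return level

def ch_plus(v):
    """Ch^+(v): all proper descendants of v, at every depth."""
    result, level = [], [v]
    while True:
        level = [w for z in level for w in z.children]
        if not level:
            return result
        result += level

def anc(v, i=1):
    """anc^i(v): the i-th level ancestor; a top node is its own ancestor."""
    for _ in range(i):
        v = v.parent if v.parent is not None else v
    return v
```

For a grid, the top-level nodes would represent whole networks, and the children would represent subnetworks and individual machines.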
Definition 6 (Embedding). Let G = (V, E) be a hierarchical graph and g = (V_g, E_g) its subgraph. The set of edges Em is called the embedding of g in G if
• Em ⊆ E,
• every edge e = (v_i, v_j) such that either v_i ∈ V − V_g and v_j ∈ V_g, or v_j ∈ V − V_g and v_i ∈ V_g, belongs to Em.

Nodes and edges in hierarchical graphs can be labelled and attributed. Labels are assigned to nodes and edges by means of node and edge labelling functions, respectively, and attributes by node and edge attributing functions. Attributes represent properties of the components and relations represented by nodes and edges. Formally, an attribute is a function a: V → D_a, which assigns elements of D_a, the domain of attribute a, to elements of V. For the rest of this chapter, let R_V and R_E be sets of node and edge labels, respectively. Let A and B be sets of node and edge attributes, and let D_A and D_B be the domains of node and edge attributes, respectively.

Definition 7 (Labelled attributed hierarchical graph). A labelled attributed hierarchical graph is defined as a 6-tuple aHG = (V, E, lab_V, lab_E, att_V, att_E), where:
1. (V, E) is a hierarchical graph,
2. lab_V: V → R_V is a node labelling function,
3. lab_E: E → R_E is an edge labelling function,
4. att_V: V → P(A) is a node attributing function assigning sets of attributes (i.e., functions a: V → D_a) to nodes, and
5. att_E: E → P(B) is an edge attributing function assigning sets of attributes (i.e., functions b: E → D_b) to edges.
A subgraph g of a labelled attributed hierarchical graph G is defined in the same way as in Definition 5, with the labelling and attributing functions of g defined as restrictions of the respective functions of G. A labelled attributed hierarchical graph defined as above may represent a potentially infinite number of grids: a given hierarchical graph G can represent a potentially infinite subset of grids having the same structure. To represent an actual grid, we must define an instance of a graph. An instance of a hierarchical graph is a hierarchical labelled attributed graph in which each attribute a has been assigned a value from its set of possible values. In the following, a hierarchical graph, a subgraph, and an instance will mean a labelled attributed hierarchical graph, its subgraph, and an instance of a labelled attributed hierarchical graph, respectively.
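A minimal dictionary-based sketch of a labelled attributed hierarchical graph and the instance test can look as follows (purely illustrative; the labels and attribute names such as cpu_count are hypothetical):

```python
class AttrHG:
    """Sketch of aHG = (V, E, node/edge labels, node/edge attributes)."""

    def __init__(self):
        self.node_labels = {}   # lab_V: node identifier -> label
        self.edges = []         # (node, node, label) triples
        self.node_attrs = {}    # att_V: node -> {attribute: value or None}

    def add_node(self, ident, label, **attrs):
        self.node_labels[ident] = label
        self.node_attrs[ident] = dict(attrs)  # None marks an unset attribute

    def add_edge(self, a, b, label):
        self.edges.append((a, b, label))

    def is_instance(self):
        # An instance assigns a concrete value to every attribute.
        return all(value is not None
                   for attrs in self.node_attrs.values()
                   for value in attrs.values())
```

Here g.add_node("n1", "compNode", cpu_count=None) describes the structure only; once every attribute such as cpu_count receives a value, the graph becomes an instance representing one actual grid.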
5 Layered Graphs

A computational grid contains different types of elements, such as physical nodes (computers, computational elements), virtual ones (software, operating systems, applications), and storage elements, which can be treated either as physical
B. Strug et al.
elements (designated hard drives) or virtual elements (residing on other physical elements). Such a structure requires graphs which can represent both types of elements. Moreover, some virtual elements of a grid are responsible for performing computational tasks sent to the grid, while other elements (services) are only responsible for managing the grid structure data, behaviour, and the flow of the tasks. Thus, a structure used as a representation of a grid should be able to reflect all the elements, their interconnections and interdependences.
In this contribution, we introduce the notion of a layer composed of hierarchical graphs as a formal description of one part of a grid; for example, we can have a physical layer, a management layer or a resource layer. Each layer consists of one or more graphs representing parts of a grid. For example, at the physical layer a graph may represent a network located at one site. As such parts of a grid are independent of each other, they are represented by disjoint graphs. On the other hand, each such graph consists of elements that can communicate with each other, so they are represented by connected graphs. Formally, a layer is defined in the following way:

Definition 8 (Layer). Let Ly = {G_1, G_2, ..., G_n}, n ∈ N, be a family of labelled attributed hierarchical graphs such that
1. G_i = (V_i, E_i) is a connected graph, 1 ≤ i, j ≤ n,
2. V_i ∩ V_j = ∅ for i ≠ j,
3. E_i ∩ E_j = ∅ for i ≠ j.
Such a family will be called a layer.

A grid is a dynamic structure. New resources can be added and removed from the overall structure at any time. Thus, many operations are performed not on the whole structure but on parts of it. In order to be able to define such operations we first have to introduce the notion of a sublayer. A sublayer consists of one or more graphs, each of them either belonging to a layer or being a subgraph of a graph belonging to a layer. Formally, such a structure is defined as:

Definition 9 (Sublayer). Let Ly = {G_1, G_2, ..., G_n} be a layer; then ly = {g_1, g_2, ..., g_m} is a sublayer of Ly if
• m ≤ n,
• g_i is a subgraph of G_i.

We propose to represent each of the grid layers as a layer composed of hierarchical graphs. The graph layers are connected by inter-layer edges, which represent how the communication between different layers of a grid is carried out. As the layered graph represents the structure (topology) of a grid, a semantics is needed to define the meaning, and thus the possible uses, of its elements. Such information may include the operating system installed on a given computational element, specific software installed, the size of memory or storage available, the type of
Generating a Virtual Computational Grid by Graph Transformations
resource represented by a given element, etc. This information is usually encoded in attributes assigned to the elements of a graph. Each attribute is actually a function assigning values to attribute names. In a graph representing a given grid, each node can have several attributes assigned, each of them having one value. Let Σ_E be a set of edge labels and A_E a set of edge attributes.

Definition 10 (Layered graph). A layered graph is a tuple GL = ({Ly^1, Ly^2, ..., Ly^k}, E, I_L, I_A), where
1. E = {e | e = (v_i, v_j), v_i ∈ V_{G_i}, v_j ∈ V_{G_j} ∧ G_i ≠ G_j ∧ G_i ∈ Ly^i ∧ G_j ∈ Ly^j ∧ i ≠ j},
2. I_L : E → Σ_E is an edge labelling function,
3. I_A : E → P(A_E) is an edge attributing function.
Elements of E are called inter-layer edges.

In the above definition, the layers are numbered with superscripts and the graphs making up layers with subscripts. Throughout this chapter, the notation G_i^j will be used to denote that a graph G_i belongs to a layer Ly^j. Moreover, the notation Ly^i_g will be used to denote that layer i belongs to a layered graph g. Having defined the notion of a layered graph, a layered subgraph has to be defined formally in order to be able to define operations on parts of a grid at different levels of abstraction. The definition of a layered subgraph makes use of the sublayer definition and the traditional definition of a subgraph.

Definition 11 (Layered subgraph). A layered subgraph of GL = ({Ly^1, Ly^2, ..., Ly^n}, E, I_L, I_A) is a layered graph gl = ({ly^1_gl, ly^2_gl, ..., ly^m_gl}, E_g, I_Lg, I_Ag) where
• m ≤ n,
• ly^i_gl is a sublayer of Ly^i,
• E_g ⊆ E,
• I_Lg = I_L|_{E_g} is an edge labelling function,
• I_Ag = I_A|_{E_g} is an edge attributing function.
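A minimal sketch of how the layer conditions of Definition 8 (connected member graphs with pairwise disjoint node and edge sets) could be checked; the encoding of each graph as a (node set, edge set) pair is our own simplification, not the chapter's:

```python
# Illustrative check of the layer conditions (Definition 8).

def is_connected(nodes, edges):
    """Breadth-first search over an undirected edge set."""
    if not nodes:
        return True
    adj = {v: set() for v in nodes}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    seen, stack = set(), [next(iter(nodes))]
    while stack:
        v = stack.pop()
        if v not in seen:
            seen.add(v)
            stack.extend(adj[v] - seen)
    return seen == set(nodes)

def is_layer(graphs):
    """graphs: list of (node_set, edge_set) pairs."""
    for i, (vi, ei) in enumerate(graphs):
        if not is_connected(vi, ei):
            return False
        for vj, ej in graphs[i + 1:]:
            if vi & vj or ei & ej:   # node/edge sets must be disjoint
                return False
    return True

g1 = ({"a", "b"}, {("a", "b")})
g2 = ({"c", "d"}, {("c", "d")})
assert is_layer([g1, g2])
assert not is_layer([g1, ({"a", "x"}, set())])  # overlaps g1, not a layer
```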
6 Grid Representation

The layered graph defined in the previous section can contain any number of layers. In the description of the grid we will use three-layered graphs. These layers will represent the resource layer, the management layer and the computing (physical) layer. As we have only three layers, they will be denoted by RL, ML, and CL, respectively, instead of the Ly^i notation.
Let R_V = {C, CE, RT, ST, CM, index, services, broker, job scheduling, monitoring} (where C stands for a computer, CE for a computational element, RT for a router, ST for storage and CM for a managing unit) be the set of node labels
Fig. 2 A layered graph representing a grid
used in a grid representation. Let R_E = {isLocatedAt, hasACopyAt, actionPassing, infoPassing, taskPassing} be the set of edge labels. An example of a layered graph representing a simple grid is depicted in Fig. 2. The top layer of this graph, layer RL, represents the main resources/services responsible for task distribution/assignment/allocation and general grid behaviour. The second layer represents the elements responsible for grid management. Each node labelled CM represents a management system for a part of a grid, for example for a given subnetwork/computing element. The management elements CM can be hierarchical, as shown in the example. Such a hierarchy represents a situation in which data received from the grid services is distributed internally
Fig. 3 A part of the graph from Fig. 2 representing a single computational element
to lower-level managing components, and each of them in turn is responsible for some computational units. At the same time, each CM element can be responsible for managing one or more computational elements. The labels of edges are not written in this figure for clarity; instead, different styles of lines representing edges are used.
Each node label describes the type of grid element represented by a given node. But grid elements, depending on their type, have some additional properties. In a graph-based representation these properties are represented by attributes. Let attributes of nodes be defined on the basis of node labels. We also assume that attributes are defined for low-level nodes only; the attributes of their ancestors are computed on the basis of the children's attributes. Thus, let the set of node attributes be A = {capacity, RAM, OS, apps, CPU, class, type, load, size}. Let att_V be the attributing function for the layered graph depicted in Fig. 2. The sets of attributes are determined according to the two following rules:

(R1). For v such that ¬∃w : v = anc(w) (i.e. for the lowest-level nodes):
att_V(v) = {RAM, OS, CPU, apps} if ξ_V(v) = C,
att_V(v) = {capacity, type} if ξ_V(v) = ST,
att_V(v) = {class} if ξ_V(v) = RT,
att_V(v) = {load} if ξ_V(v) = CM,
att_V(v) = {load} if ξ_V(v) = broker,
att_V(v) = {size} if ξ_V(v) = index.

(R2). For all v such that ∃w : v = anc(w): att_V(v) = ⋃_{w ∈ Ch⁺(v)} att_V(w).

In Fig. 3 a part of the graph from Fig. 2 is shown. It represents one computational element, which is a part of the computational layer. For this graph the sets of attributes are determined according to rule R1 in the following way:
att_V(v_i) = {RAM, OS, CPU, apps} for i = 2, 3, 4,
att_V(v_i) = {capacity, type} for i = 5,
att_V(v_i) = {class} for i = 6.
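Rules R1 and R2 translate directly into a small bottom-up computation. The sketch below is illustrative (the dictionary encoding is ours): leaf attribute sets follow the node label, and a hierarchical node's set is the union over its children:

```python
# Illustrative implementation of rules R1 (leaves) and R2 (ancestors).

R1 = {
    "C": {"RAM", "OS", "CPU", "apps"},
    "ST": {"capacity", "type"},
    "RT": {"class"},
    "CM": {"load"},
    "broker": {"load"},
    "index": {"size"},
}

def attr_set(label, children_labels=()):
    """Empty children_labels -> leaf, rule R1; otherwise rule R2:
    the union of the children's attribute sets."""
    if not children_labels:
        return R1[label]
    out = set()
    for cl in children_labels:
        out |= attr_set(cl)
    return out

# The computational element of Fig. 3: children labelled C, C, C, ST, RT.
assert attr_set("CE", ["C", "C", "C", "ST", "RT"]) == \
    {"RAM", "OS", "CPU", "apps", "capacity", "type", "class"}
```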
For node v_1, which is a higher-level node, the attributes are computed on the basis of the children's attributes according to rule R2. Thus, att_V(v_1) = {RAM, OS, CPU, apps, capacity, type, class}.
Appropriate values then have to be assigned to the node attributes. Firstly, a domain for each attribute has to be defined. In this example, let D_OS = {Win, Lin, MacOS, GenUnix}, D_apps = {app1, app2, app3, app4}, D_CPU = {cpu1, cpu2, cpu3}, D_class = {c1, c2, c3, c4}, D_RAM = {n : n ∈ N, n ∈ [0...64]}, D_capacity = {m : m ∈ N, m ∈ [0...1000]}, and D_type = {FAT, FAT32, NTFS, ext3, ext4, xfs}. The RAM and capacity attributes are expressed in gigabytes available. In the example, the values of the attributes are as follows: RAM(v_2) = 4, RAM(v_3) = 8, RAM(v_4) = 2, CPU(v_2) = cpu1, CPU(v_3) = cpu2, CPU(v_4) = cpu1, OS(v_2) = Win, OS(v_3) = Win, OS(v_4) = Lin, apps(v_2) = app1, apps(v_3) = app1, apps(v_4) = app2, capacity(v_5) = 500, type(v_5) = FAT32, class(v_6) = c2. For the hierarchical node v_1, the values of the properties CPU, OS, and apps form a collection containing all the values of the children of v_1. In the case of numerical properties, the value is the sum of the values of the children.
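The aggregation of attribute values for a hierarchical node can be sketched as follows (illustrative Python; the encoding of children as dictionaries is our own): categorical values are collected into sets, numerical values are summed:

```python
# Illustrative value aggregation for a hierarchical node (v1 of Fig. 3).

children = [
    {"RAM": 4, "OS": "Win", "CPU": "cpu1", "apps": "app1"},   # v2
    {"RAM": 8, "OS": "Win", "CPU": "cpu2", "apps": "app1"},   # v3
    {"RAM": 2, "OS": "Lin", "CPU": "cpu1", "apps": "app2"},   # v4
    {"capacity": 500, "type": "FAT32"},                        # v5
    {"class": "c2"},                                           # v6
]

def aggregate(children):
    out = {}
    for attrs in children:
        for name, value in attrs.items():
            if isinstance(value, (int, float)):          # numerical: sum
                out[name] = out.get(name, 0) + value
            else:                                        # categorical: collect
                out.setdefault(name, set()).add(value)
    return out

v1 = aggregate(children)
assert v1["RAM"] == 14 and v1["capacity"] == 500
assert v1["OS"] == {"Win", "Lin"} and v1["CPU"] == {"cpu1", "cpu2"}
```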
7 Grid Generation

The grid represented by a layered graph can be generated by means of a graph grammar. Graph grammars are systems of graph transformations called productions. Each production is composed of two graphs named the left-hand side and the right-hand side (in short, left side and right side). The right side graph can replace the left side one if the latter appears in the graph to be transformed. A graph grammar allows us to generate a potentially large number of grid structures. In order to restrict the number of generated solutions, a control diagram can be introduced. Such a diagram defines the order in which grammar productions are to be applied and thus limits the number of possible replacement operations. Moreover, a grammar contains a starting element called the axiom. In the case of graph grammars, the axiom can be either a graph or a single node. It is then changed by subsequent applications of suitable productions of the grammar. Different types of graph grammars have been investigated and their ability to generate different structures has been proved [4, 5, 11, 12].
In a grid generation process, there is a need for two categories of productions, which can further be divided into five subtypes. The productions operating on only one layer can be based on traditional productions used in graph grammars. But, as we use a more complex structure, there is also a need for productions operating on several layers. Moreover, we need productions that can both add and remove elements from the grid represented by the layered graph. To make sure the functionality of the grid is preserved, the productions removing elements also have to be able to invoke rules responsible for maintaining the stability of the affected elements. The application of a production can also require some actions to be invoked. For example, if a production removes a node representing a computational element containing a virtual resource, all other elements using this resource must be redirected to its copy. If such a copy
Fig. 4 The production responsible for making a copy of a resource
does not exist, it must be generated on another element before the node can be removed from the grid. The productions used to generate a grid can be divided into five main types:
• Working on two layers, but without adding edges or nodes (e.g. dividing a manager)
• Working on two layers, adding only edges (e.g. making a copy of a resource)
• Working on two layers, adding nodes and edges (e.g. adding a computational element)
• Working on one layer and not requiring additional actions (e.g. adding a computer or a storage unit)
• Working on one layer with additional actions required (e.g. removing a computer, removing a manager)
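The replacement step common to all these production types can be sketched as follows. This is an illustrative simplification — the encoding and function names are ours, and real productions also locate the left side by subgraph isomorphism, which is omitted here:

```python
# Illustrative production application: remove the matched left-side elements
# that the right side does not keep, then add the right-side elements.
# Edges to the rest of the graph (the embedding) are left untouched.

def apply_production(graph, lhs_match, rhs_nodes, rhs_edges):
    """graph: dict with 'nodes' (id -> label) and 'edges' (set of id pairs).
    lhs_match: ids of matched nodes to be removed, together with their
    incident edges, before the right-side elements are added."""
    for v in lhs_match:
        del graph["nodes"][v]
    graph["edges"] = {e for e in graph["edges"]
                      if e[0] not in lhs_match and e[1] not in lhs_match}
    graph["nodes"].update(rhs_nodes)
    graph["edges"] |= rhs_edges

g = {"nodes": {"r": "RT", "c1": "C"}, "edges": {("r", "c1")}}
# Production "add a computer": keep everything, add node c2 and edge (r, c2).
apply_production(g, lhs_match=set(), rhs_nodes={"c2": "C"},
                 rhs_edges={("r", "c2")})
assert g["nodes"]["c2"] == "C" and ("r", "c2") in g["edges"]
```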
7.1 Productions Working on Two Layers

Making a copy of a resource is an example of a production operating on two layers, as the resource of which a copy is to be made is located on the resource layer, while the computer it is placed on and the one where the copy is to be placed are located on the computing layer. This production is depicted in Fig. 4. The left side of the production consists of two layers, CL and RL. On the resource layer a node labelled service is depicted. This node can be matched with any node representing a service and belonging to the resource layer. On the computing layer two nodes
Fig. 5 The production responsible for dividing a manager
labelled C are shown. The one connected to the service node represents the node on which this service is located. The second one represents any other computer on the computing layer. Such a definition of the left side guarantees that the copy will be made on a computer different from the one on which the considered service is already present. When a subgraph isomorphic to the left side graph is found in a graph representing a grid, it can be replaced by the right side graph in a process called production application. By applying this production a new edge is added to the current graph. This edge represents the fact that a copy of the resource is placed on the computer to which this edge connects the service node. It must be noted here that when the subgraph isomorphic to the left side is found in the current graph, there may exist other edges connecting the nodes matched to the nodes of the left side with other nodes in the graph. These edges are not affected by the application of the production.
Another type of production working on two layers is depicted in Fig. 5. This production is used to model the division of a manager. It can be used in a situation when a single manager is responsible for too many computational units and one of the units is to be transferred to a different manager to speed up grid operation. The node labelled CM with two edges connected to it can be matched within the considered graph to any node labelled CM with at least two edges connected to it. The node without edges connected to it can be matched to any node labelled CM other than the one matched to the first manager. No other edges are affected by this production and no additional actions are required.
Fig. 6 The production responsible for adding a computer to the grid
One more type comprises all productions working on two layers and adding both nodes and edges. An example of such a production is one adding a computational element (i.e. a hierarchical node).
7.2 Productions Working on One Layer

A production adding a computer, depicted in Fig. 6, is a good example of a production working on one layer. A computer is always added to a computational element containing at least a router. So the left side graph is a hierarchical one, with a node representing a computational element, labelled CE, with one child, labelled RT, representing a router. Once an occurrence of the left side graph is found in a graph representing a grid, it can be replaced by the right side graph. By applying this production a new edge and a new node are added to the current graph. The edge represents the fact that a new computer is connected to the router, and the node represents the new computer. As with the previous production, no other elements of the considered graph are affected. The new node that is added must also be attributed. The attributes here are transferred from the right side graph, so each new element is assigned the attributes of the corresponding right side element. As shown above, the attributes of hierarchical nodes are computed on the basis of their children, so adding a child may result in changing the attributes of all ancestors up the hierarchy to the topmost node.
A similar production is used to add a storage element to the computational unit. As the only difference is in the label of one node, this production is not shown in this chapter.
Removing a computer can also be simulated using a production working on one layer. Such a production is depicted in Fig. 7. There must be at least two computers belonging to the same computational element for this production to be applied. This requirement is based on the assumption that each computational unit must contain at least one computer, and it thus ensures that the assumption is still satisfied after the production is applied. Application of this production results in deleting a
Fig. 7 The production responsible for removing a computer from the grid
node representing the computer being removed, together with the edge connecting it to the node representing a router. After applying the considered production some additional actions may be required. There are two types of actions that may have to be carried out. The first concerns the attributing of the ancestors of the removed node. As in the case of adding a computer, because the attributes of hierarchical nodes are computed on the basis of the attributes of their children, removing one child may result in changing the attributes of all ancestors. This change has to be propagated up the hierarchy to the topmost node. The second type of action concerns inter-layer connections. For example, some services or their copies could have been placed on the computer being removed. To ensure the stability of the grid, the copy of each such service has to be activated and/or a new one made. The decision whether this action is actually needed is made on the basis of the embedding of the removed node. Let E_m = {e_1, e_2, ..., e_n} be the embedding of the removed node, with e_i = (w, v_i), where w is the node being removed and v_i the node connected to w by e_i, and let I_L(e_i) be the label of this edge. If there exists an e_i such that I_L(e_i) = hasACopyAt, then the production making a copy of the resource has to be applied immediately after the one removing the computer; this production has to make a copy of the resource represented in the graph by v_i. If there is an edge e_i in the embedding such that I_L(e_i) = isLocatedAt, a copy of the service has to be found and activated, and a new copy has to be made.
By applying the productions of the grid grammar a grid structure can be generated. This approach also enables us to modify the grid to model its changing nature. As the layered graph represents not only the structure but also the interconnections, it can be used to simulate the working paradigm of the grid.
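The embedding-based decision described above can be sketched as a simple check over the labels of the removed node's embedding edges (illustrative; the action strings are our own shorthand for the productions to invoke):

```python
# Illustrative follow-up check after removing a computer: inspect the labels
# of the removed node's embedding edges and decide which productions to apply.

def follow_up_actions(embedding_labels):
    actions = []
    for label in embedding_labels:
        if label == "hasACopyAt":
            actions.append("apply copy-of-resource production")
        elif label == "isLocatedAt":
            actions.append("activate existing copy and make a new one")
    return actions

assert follow_up_actions(["hasACopyAt"]) == [
    "apply copy-of-resource production"]
assert follow_up_actions(["isLocatedAt", "hasACopyAt"]) == [
    "activate existing copy and make a new one",
    "apply copy-of-resource production"]
```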
The productions described above are responsible for the generation and modification of the grid structure, i.e. its mainly static side. To be able to model the behaviour of the grid, that is its dynamic side, one more type of graph production is planned. These productions will be based on a notion similar to token passing in Petri nets and will make the simulation of the processing of user tasks possible. They will be capable of simulating the actual workings of the grid.
8 Conclusions

In this chapter, a new approach to grid representation has been described. It makes use of graph structures. Layered graphs have been defined and an example of such a graph as a grid representation has been shown. Using a graph-based representation enables us to use graph grammars as a tool to generate a grid. A grid graph grammar has been described and some of its productions have been depicted and explained. As layered graphs are attributed, both the structure and the parameters of the grid can be generated at the same time.
The productions presented in this work are used to generate the grid. The next step will consist in adding productions that will be able to simulate the work and behaviour of the grid as well. Then a new grid simulator will be implemented. As we have the possibility of modelling the structure of the environment in a dynamic way, the design of the simulator focuses on building the topology and adapting it at runtime. The basic requirement is to enable the generation of a wide range of different grid structures described using the proposed grid grammar for simulation purposes. Additionally, some research is also planned in the area of job execution. The explicitly defined management layer gives us the opportunity to check specific configurations or scheduling algorithms. The simulator will be deployed and tested using a cluster environment at our institution. We plan to use a computer which has 576 cores and a modular structure. Each of its six computational units has 256 GB of memory and works as a shared-memory multiprocessor. Moreover, it allows for running tasks based on OpenMP technology.
Acknowledgments The authors would like to thank Wojciech Grabski for the graphic concept of layered graph visualization used in many figures of this chapter.
References

1. Casanova H., Legrand A. and Quinson M., SimGrid: a Generic Framework for Large-Scale Distributed Experiments, 10th IEEE International Conference on Computer Modeling and Simulation, 2008.
2. Dong Lu, Peter A. Dinda, GridG: Generating Realistic Computational Grids, SIGMETRICS Performance Evaluation Review, vol. 30, no. 4, pp. 33-40, 2003.
3. Foster I., Kesselman C., and Tuecke S.: The Anatomy of the Grid: Enabling Scalable Virtual Organizations, International Journal of Supercomputer Applications, 2001, pp. 200-220.
4. Grabska E., Graphs and designing, Lecture Notes in Computer Science, 776 (1994).
5. Grabska E., Palacz W., Hierarchical graphs in creative design, MG&V, 9(1/2), 115-123 (2000).
6. Grabska E., Strug B., Applying Cooperating Distributed Graph Grammars in Computer Aided Design, Lecture Notes in Computer Science, vol. 3911, pp. 567-574, Springer, 2005.
7. Grid Scheduling Simulator, http://www.gssim.org.
8. Ihssan A., Sandeep G.: Grid Computing: The Trend of the Millennium, Review of Business Information Systems, vol. 11, no. 2, 2007.
9. Joseph J., Ernest M., and Fellenstein C.: Evolution of grid computing architecture and grid adoption models (http://www.research.ibm.com/journal/sj/434/joseph.pdf).
10. Joseph J. and Fellenstein C.: Grid Computing, IBM Press, 2004.
11. Rozenberg G., Handbook of Graph Grammars and Computing by Graph Transformation, vol. 1: Foundations, World Scientific, London (1997).
12. Rozenberg G., Handbook of Graph Grammars and Computing by Graph Transformation, vol. 2: Applications, Languages and Tools, World Scientific, London (1999).
13. Sulistio A., Cibej U., Venugopal S., Robic B. and Buyya R., A Toolkit for Modelling and Simulating Data Grids: An Extension to GridSim, Concurrency and Computation: Practice and Experience (CCPE), Online ISSN: 1532-0634, Printed ISSN: 1532-0626, 20(13): 1591-1609, Wiley Press, New York, USA, Sep. 2008.
14. Takefusa A., Matsuoka S., Nakada H., Aida K., and Nagashima U., Overview of a performance evaluation system for global computing scheduling algorithms, in Proceedings of the 8th IEEE International Symposium on High Performance Distributed Computing (HPDC8), 1999, pp. 97-104.
Interactive Grid Access with MPI Support Using GridSolve on gLite-Infrastructures

Torsten Hopp, Marcus Hardt, Nicole Ruiter, Michael Zapf, Gonçalo Borges, Isabel Campos, and Jesús Marco
Abstract Grid infrastructures provide large computing resources for grand challenge applications. The Interactive European Grid project developed mechanisms for the interactive and MPI-parallel usage of these resources. This chapter presents in detail an interactive approach that allows API-style communication with grid resources and the execution of parallel applications using MPI. The RPC API of GridSolve and the middleware gLite are integrated by the software Giggle for resource allocation. Furthermore, we describe the adaptation of Giggle and GridSolve to allow the remote execution of MPI applications on an RPC basis. Performance measurements are carried out to benchmark the solution. We show an overhead of only a few seconds, in contrast to a job submitted via gLite, which can take minutes. We find that our approach is mainly limited by the data transfer rates to the different computing sites.
1 Introduction Grid [1] infrastructures provide large computing resources for grand challenge applications. For rapid development of scientific software, many scientists use problem solving environments like MATLAB.
T. Hopp (✉) • M. Hardt • N. Ruiter • M. Zapf
Karlsruhe Institute of Technology, Karlsruhe, Germany
e-mail: [email protected]; [email protected]; [email protected]; [email protected]

G. Borges
Laboratório de Instrumentação de Física Experimental de Partículas (LIP), Lisbon, Portugal
e-mail: [email protected]

I. Campos • J. Marco
Instituto de Física de Cantabria (IFCA), Santander, Spain
e-mail: [email protected]; [email protected]

F. Davoli et al. (eds.), Remote Instrumentation for eScience and Related Aspects, DOI 10.1007/978-1-4614-0508-5 15, © Springer Science+Business Media, LLC 2012
T. Hopp et al.
Several national and international scientific projects work on the development and installation of computing clusters and appropriate middleware systems to resolve the challenges of heterogeneity in loosely coupled environments [2]. The Interactive European Grid project (int.eu.grid) [3] provides a distributed computing infrastructure based on the middleware gLite. Currently, gLite itself supports neither interactive nor parallel applications. Therefore, int.eu.grid developed and provides extensions for interactive and parallel usage. One of the interactive approaches is the integration of the middleware GridSolve as described in [4], which allows the use of Grid infrastructures from within MATLAB via an API. For parallel applications, the gLite middleware was extended by the int.eu.grid project to allow the execution of implementations using the Message Passing Interface (MPI).
1.1 Motivation

However, with the current approach described in [4], only serial GridSolve services can be called remotely; there is no support for parallel implementations using MPI. Additionally, addressable GridSolve services have to be implemented using particular templates and interface descriptions. With the gLite extensions of the int.eu.grid project, the support for parallel applications cannot be used interactively. The Message Passing Interface has not been used together with GridSolve and gLite until now.
In this chapter, we describe how MPI applications can be executed generically on the basis of remote procedure calls (RPC). Using this solution, developers and scientists can use the computing power of Grid resources remotely without leaving their favorite problem solving environment. Especially in a scientific environment this can be beneficial, e.g., if parameter studies are carried out. Based on our solution, we present performance analyses that show a reduced overhead of our RPC approach in comparison with the classic use of parallel applications in int.eu.grid, which is of interest in scientific software development.
In Sect. 2, we give a short overview of the state of the art in Grid computing; then the main developments of our RPC-based method to execute MPI programs on Grid resources are discussed (Sect. 3). Performance measurements are presented in Sect. 4. A conclusion is given in Sect. 5.
1.2 Interactive European Grid and gLite

The int.eu.grid project started in 2006 with the aim of providing a grid infrastructure for scientific projects, allowing interactive and parallel applications. Currently nine computing sites in five European countries offer about 2000 CPU cores. int.eu.grid is based on the middleware gLite, which provides compatibility with other large grid projects like EGEE [5]. gLite was extended to support parallel applications. The int.eu.grid project provides two implementations [6] of MPI: Open MPI [7] and PACX-MPI [8].
Fig. 1 Architecture of GridSolve: three main components (client, server, agent)
Open MPI is an implementation of the MPI 1.2 and MPI 2.0 standards provided by the MPI Forum [14]. Open MPI focuses on the usage of heterogeneous infrastructures. int.eu.grid uses Open MPI for the processing of jobs within a single computing site. PACX-MPI is a software library which allows heterogeneous clusters to be allocated simultaneously. Therefore, int.eu.grid uses PACX-MPI for MPI jobs running across multiple computing sites. It takes into account the different latencies and bandwidths of communication via local area networks (LAN) and wide area networks (WAN), respectively.
1.3 GridSolve

GridSolve is a high-level middleware providing remote access to Grid resources. The goal is to abstract the complexities of Grid infrastructures behind a simple API that users can call from their favorite software development environment. It is the only tool that allows API-style Grid access from MATLAB. GridSolve is a client–agent–server system which provides remote access to hardware and software resources through a variety of client interfaces. The architecture consists of four main components (Fig. 1):
• The client, which wants to execute an RPC. Implementations of the GridSolve client are available for C and Fortran as well as for interactive problem solving environments like MATLAB or Octave.
• The server, which receives GridSolve RPCs from the clients and executes them on their behalf. Administrators can implement addressable GridSolve services.
• The agent, which schedules the GridSolve RPCs to servers based on workload measurements. It maintains a list of available resources and available GridSolve services, which is made available to the user via an API and a web interface.
• The proxy, an optional component that can be used to overcome network partitions introduced, for example, by network address translation (NAT).
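The agent's brokered decision can be illustrated with a toy model (this is not GridSolve's actual scheduler, which uses richer workload measurements; all names here are invented): choose the least-loaded server among those offering the requested service:

```python
# Toy model of an agent scheduling an RPC to the least-loaded server
# that offers the requested service (illustrative only).

servers = [
    {"host": "wn01", "load": 0.7, "services": {"matmul", "fft"}},
    {"host": "wn02", "load": 0.2, "services": {"matmul"}},
    {"host": "wn03", "load": 0.1, "services": {"fft"}},
]

def schedule(service, servers):
    candidates = [s for s in servers if service in s["services"]]
    if not candidates:
        raise LookupError("no server offers " + service)
    return min(candidates, key=lambda s: s["load"])["host"]

assert schedule("matmul", servers) == "wn02"
assert schedule("fft", servers) == "wn03"
```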
1.4 Giggle

Giggle is a software tool which integrates GridSolve RPCs with the gLite infrastructure. The principle is to start GridSolve servers on gLite worker nodes. With this approach, GridSolve services can be executed remotely on gLite resources without the overhead of a job submission via gLite. However, parallel applications using MPI have not been supported so far. A detailed introduction to the integration with Giggle is given in Sect. 3.
2 State of the Art

There are several Grid projects worldwide, mostly used for grand challenge applications [2]. The biggest multidisciplinary project is Enabling Grids for E-sciencE (EGEE), combining more than 250 computing sites in 48 countries [5]. EGEE develops and uses the middleware gLite, currently available in version 3.2. Another frequently used middleware is the Globus Toolkit [9]. Neither gLite nor the Globus Toolkit provides interactive access to grid resources via RPCs.
The independently developed GridSolve is an implementation of the GridRPC API [10]. GridRPC is a standard which describes a portable and simple RPC mechanism for Grid computing [11]. The standardization is promoted by the Open Grid Forum (OGF); in September 2007, GridRPC became an OGF standard. It provides unified access to computing resources allocated by different GridRPC-compliant implementations like GridSolve, Ninf [12] or the Distributed Interactive Engineering Toolbox (DIET) [13]. In int.eu.grid, GridRPC methods are available via the integration of GridSolve and gLite using the software Giggle [4].
MPI is the method of choice for communication between processes in high performance computing. The MPI standard appointed by the MPI Forum [14] is currently available in version 2.2 and combines MPI and MPI-2. Implementations of the MPI standard are available for different programming languages including C, C++, and Fortran. Established implementations specially adapted to the characteristics of Grids are MPICH and its enhancements MPICH-G [15] and MPICH-G2 [16], GridMPI [17] and Open MPI [7].
Interactive Grid Access with MPI Support Using GridSolve on gLite-Infrastructures
3 Integration of GridSolve and gLite with MPI Support To understand our RPC approach supporting parallel applications, the architecture of the currently used system, Giggle, is first presented in Sect. 3.1. Based on this, our recent development is described in Sect. 3.2.
3.1 Architecture of Giggle The integration of GridSolve and gLite uses the software Giggle. The aim of Giggle is to start GridSolve servers on gLite worker nodes. To this end, gLite jobs are created and submitted to the batch system. Scheduled and running jobs then download a software package including GridSolve and the GridSolve services from a web server (Fig. 2). The Giggle infrastructure defines three different roles: • The users work on developer workstations, where Giggle and the GridSolve client are installed. • The web server is used to store software components that will be installed on the GridSolve servers by Giggle. • The service host runs the GridSolve agent and (optionally) the GridSolve proxy. To allocate computing resources via Giggle, the following steps have to be carried out: 1. The user implements a GridSolve service using the Service creator and Service builder, and deploys the service to a web server.
Fig. 2 Architecture of Giggle using GridSolve and gLite infrastructures
T. Hopp et al.
Fig. 3 Startup of the GridSolve server using Giggle. The master node (process-ID 0) starts the GridSolve server while all other nodes (ID 1, 2) abort the startup in the gs-wrapper-start-server.sh script
2. The user starts the GridSolve agent and the GridSolve proxy on the service host. 3. The user uses the Giggle starter on his workstation to submit a given number of gLite jobs via the gLite user interface. 4. The scheduled jobs run the specified executable, which in this case is a script that downloads Giggle, GridSolve and the GridSolve services from the web server and installs them on the worker node. 5. The worker nodes run Giggle, which starts the GridSolve servers. 6. The GridSolve servers connect to the GridSolve agent. Afterwards, the GridSolve services can be called by the GridSolve client via RPCs, which are scheduled by the GridSolve agent to appropriate GridSolve servers.
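Step 3 submits ordinary gLite jobs described by JDL files. An illustrative fragment of such a job description might look as follows; all values, the host name and the script name are assumptions for illustration, not the contents of the actually generated giggle.jdl:

```
// Illustrative JDL fragment; all values are assumed
Executable    = "gs-wrapper-start-server.sh";
Arguments     = "http://webserver.example.org/giggle/";
StdOutput     = "std.out";
StdError      = "std.err";
InputSandbox  = {"gs-wrapper-start-server.sh"};
OutputSandbox = {"std.out", "std.err"};
```

The executable named here corresponds to the download-and-install script of step 4.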
3.2 MPI Support Recently, we developed a solution offering RPC-based support for MPI applications on Grid infrastructures, building on the software Giggle. Running MPI jobs on the gLite infrastructure of int.eu.grid causes the job scheduler to set up the Open MPI environment and allocate the specified number of worker nodes. The executable specified in the job description is executed by the mpiexec command. With the approach presented in Sect. 3.1, a GridSolve server would be started on each worker node allocated for the MPI job; hence that approach does not allow MPI applications to be run via GridSolve. For the use of MPI, the startup of GridSolve servers via Giggle has to be adapted (Fig. 3). The master node of the allocated configuration is recognized by the Giggle scripts. Only this node starts the GridSolve server (5), while all other nodes stop running the Giggle scripts after having installed the required files. This resolves the problem of multiple GridSolve servers being started. The mechanism is
Fig. 4 Architecture and workflow of the GridSolve service burn
supported by automatically generated job description files (1) for the job submission via gLite (2), which makes the complexity of the implementation transparent to the user. The benefit of this approach compared to gLite-based job submission of parallel applications is that the startup of servers has to be done only once; afterwards, the resources are available and can be reused for an unlimited time. In contrast, gLite jobs have to be started for every submission of an application. The next problem to solve requires understanding how MPI jobs are started. MPI jobs executed by the mpiexec command on the master node are started on all allocated worker nodes via the MPI environment. In detail, the PBS_NODEFILE lists all hosts allocated by the batch system. The actual problem is that GridSolve RPCs can only call GridSolve services, which run on a single host – an API for starting MPI programs does not exist. Therefore we have chosen a generic approach that provides remote access to the command line of the master node via GridSolve. We have implemented this in the GridSolve service burn (Fig. 4). The first parameter passed to burn specifies the URL of an executable or archive file on a web server. The second parameter describes the command to be executed remotely on the node running the service; the command refers to an executable contained in the archive file. The idea is to package the MPI executable and required libraries in an archive file, which is then uploaded to a web server. The GridSolve service burn downloads the archive file (2) and unpacks it to a temporary folder. The command line (4) given as the second parameter may then invoke the unpacked executable (5). Additionally, input data can be passed to the GridSolve service, which stores the data on the master node running the service (3).
The GridSolve service may also return output data, which is read (6) from a specified file and sent back to the GridSolve client (7). To make the executable available to all worker nodes participating in an MPI job, all scripts for downloading and unpacking are
called with the mpiexec command on the specified number of nodes. Usability of the GridSolve service is ensured by encapsulating the RPCs in local functions implemented in any of the languages supported by GridSolve. In summary, adapting an MPI program to run remotely on Grid resources using our approach requires the following steps: 1. Compiling and linking the MPI application with the Open MPI compiler to an executable, e.g. on the user interface node of the Grid. 2. Packing the application together with libraries and additional files into an archive file and uploading it to a web server. 3. Allocating GridSolve servers with MPI support via Giggle. 4. Calling the GridSolve service burn with appropriate parameters to execute the MPI application on the specified number of nodes.
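The core of burn as described above can be sketched in Python. This is a simplified, illustrative model of the workflow in Fig. 4, not the actual service implementation; only the file names infile.dat and outfile.dat are taken from the figure, everything else is an assumption:

```python
import os
import subprocess
import tarfile
import tempfile
import urllib.request

def burn(url, command, indata=b""):
    """Sketch of the burn service on the master node: download and unpack
    an archive, store the input data, run the command, return the output."""
    workdir = tempfile.mkdtemp()
    archive = os.path.join(workdir, "package.tar.gz")
    urllib.request.urlretrieve(url, archive)        # (2) download the archive
    with tarfile.open(archive) as tar:              # unpack to a temporary folder
        tar.extractall(workdir)
    with open(os.path.join(workdir, "infile.dat"), "wb") as f:
        f.write(indata)                             # (3) store input data locally
    subprocess.run(command, shell=True,             # (5) execute the command line,
                   cwd=workdir, check=True)         #     e.g. an mpiexec invocation
    outfile = os.path.join(workdir, "outfile.dat")
    with open(outfile, "rb") as f:                  # (6) read the output data ...
        return f.read()                             # (7) ... and return it
```

In the real service, the command would typically be an mpiexec invocation, and the download and unpack steps are additionally executed on all worker nodes of the MPI job.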
4 Results To evaluate the performance of the approach, we set up measurements at four computing sites of int.eu.grid: the local computing site at Karlsruhe Institute of Technology (KIT), the Instituto de Física de Cantabria in Santander (IFCA), the Laboratório de Instrumentação e Física Experimental de Partículas in Lisbon (LIP), and the Centro de Supercomputación de Galicia in Santiago de Compostela (CESGA). To each computing site, a sample application reconstructing images of a 3D Ultrasound Computer Tomograph [18] based on diffraction tomography was submitted, with 16 MB of input data and 28 kB of output. The submissions were done via gLite job submission as well as via our approach using GridSolve RPCs. The calculations of the sample application were performed with one and with ten processes. The times for the startup of the gLite jobs and of the GridSolve RPCs, as well as for the data transfer, were measured and are referred to as overhead; additionally, the computing time was measured. Figure 5 shows the results of the measurement at LIP. The overhead in the GridSolve case is about ten times smaller than in the gLite case (22 s vs. 300 s). For one process, the computing time of the application is approximately 870 s; hence the overhead amounts to approximately 2% of the overall execution time in the GridSolve case, in contrast to approximately 25% in the gLite case. With ten CPUs, the ratio between overhead and computing time is significantly larger: the computing time of the application is reduced to 110 s, whereas the overheads stay constant. The percentage of overhead is then approximately 16% in the GridSolve case, in contrast to approximately 73% in the gLite case. Throughout all measurements at the four computing sites, the GridSolve overheads stay on the same level of around 10–30 s, varying only because of different WAN connections. The major part of the overhead in the GridSolve case can be
Fig. 5 Measurement of computing time and overheads by gLite and GridSolve RPC
attributed to the data transfer, while in the gLite case it also depends on the workload of the computing sites and especially on the computing element (CE), which allocates resources and schedules the jobs to appropriate worker nodes. In a second measurement, the data transfer rates were tested by varying the amount of data passed from the local workstation (100 Mbit Ethernet connection) to the GridSolve service. The results show data rates of several MB/s, depending on the location of the computing site (10 MB/s to KIT, 3 MB/s to CESGA). In summary, the GridSolve RPC significantly reduces the startup overhead of the calculation, because computing resources are preallocated and can be reused.
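The percentages quoted for the LIP measurement follow directly from the measured times; the following snippet reproduces them (times taken from the text):

```python
def overhead_share(overhead_s, compute_s):
    """Fraction of the overall execution time spent on startup and transfer."""
    return overhead_s / (overhead_s + compute_s)

# LIP, 1 process: ~870 s compute; ~22 s GridSolve vs. ~300 s gLite overhead
gs_1  = overhead_share(22, 870)    # ~0.02 (the ~2 % quoted in the text)
gl_1  = overhead_share(300, 870)   # ~0.26 (the ~25 % quoted in the text)
# LIP, 10 processes: compute time drops to ~110 s, overheads stay constant
gs_10 = overhead_share(22, 110)    # ~0.17 (the ~16 % quoted in the text)
gl_10 = overhead_share(300, 110)   # ~0.73 (the ~73 % quoted in the text)
```

The shrinking compute time with ten processes explains why the overhead share grows even though the absolute overheads are unchanged.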
5 Conclusion In this chapter, we presented a new approach and implementation for the interactive submission of parallel applications to Grid resources via simple RPC calls. The resources allocated by gLite are used to start GridSolve servers, which can be accessed by GridSolve clients from different programming languages. To provide support for MPI applications, the existing software Giggle was adapted to detect a master process. The recently developed GridSolve service burn allows
the remote execution of arbitrary UNIX commands. With this solution, MPI applications compiled and linked for the specific platform can be executed via RPC from all programming languages supported by GridSolve. Input data passed to burn (via GridSolve) is stored on the remote host and can be used by the MPI application. The distribution of the MPI application to all worker nodes attending the computation is guaranteed by burn. Wrapper functions in the programming language of choice hide the complexity of the GridSolve RPCs and of the server startup from the user. In contrast to the classic use of parallel applications, the user can now call his application from his problem solving environment and choose the number of computing nodes. The resources are allocated once and can then be reused for an unlimited time. The performance results show that the overhead in the case of GridSolve RPCs is significantly lower than for an equivalent gLite job; it is reduced by a factor of ten. This improves the ratio between overhead and computing time, which is very beneficial for scientists, because application testing on Grid resources is accelerated. The data transfer time is the limiting factor in the case of GridSolve RPCs and depends on the location and connection of the participating computing sites of int.eu.grid. The interactive and parallel approach presented is of interest, e.g., for outsourcing computation-intensive kernels of scientific software to the Grid while other parts are executed on the local workstation. For such a scenario, it is used by the 3D Ultrasound Computer Tomography project at KIT. The approach is currently limited to use with Open MPI; in future work, Giggle will be adapted to PACX-MPI. Furthermore, performance measurements will be extended to obtain hints for further acceleration of the GridSolve service burn.
Concerning the data transfer, a study on the integration of high-performance storage resources available in the Grid will be carried out.
References 1. Foster, I., Kesselman, C.: The Grid: Blueprint for a New Computing Infrastructure. Morgan Kaufmann, San Francisco (1999) 2. Grey, F., Heikkurinen, M., Mondardini, R., Prabhu, R.: GridCafé, http://www.gridcafe.org/ 3. Gomes, J. et al.: A Grid infrastructure for parallel and interactive applications, Computing and Informatics 27, 173-185 (2008) 4. Hardt, M., Seymour, K., Dongarra, J., Zapf, M., Ruiter, N.: Interactive Grid-Access Using GridSolve and Giggle, Computing and Informatics 27, No. 2, 233-248 (2008) 5. Laure, E., Jones, B.: Enabling Grids for e-Science: The EGEE Project, in: Grid Computing: Infrastructure, Service, and Application. CRC Press, EGEE-Publication 2009-001 (2009) 6. Dichev, K., Stork, S., Keller, R.: MPI Support on the Grid, Computing and Informatics 27, 1-10 (2008) 7. Graham, R.L., Shipman, G.M., Barrett, B.W., Castain, R.H., Bosilca, G., Lumsdaine, A.: Open MPI: A High-Performance, Heterogeneous MPI, Proceedings of the Fifth International Workshop on Algorithms, Models and Tools for Parallel Computing on Heterogeneous Networks (2006)
8. Gabriel, E., Resch, M., Beisel, T., Keller, R.: Distributed computing in a heterogeneous computing environment, Proceedings of the 5th European PVM/MPI Users' Group Meeting on Recent Advances in Parallel Virtual Machine and Message Passing Interface, 180-187 (1998) 9. Foster, I.: Globus Toolkit Version 4: Software for Service-Oriented Systems, IFIP International Conference on Network and Parallel Computing, 2-13 (2005) 10. Seymour, K., Nakada, H., Matsuoka, S., Dongarra, J., Lee, C., Casanova, H., Parashar, M. (ed.): Overview of GridRPC: A Remote Procedure Call API for Grid Computing, GRID 2002, 274-278 (2002) 11. Dongarra, J., Seymour, K., YarKhan, A.: Users' Guide to GridSolve Version 0.15, http://icl.cs.utk.edu/netsolve/documents/gug.pdf (2006) 12. Nakada, H., Sato, M., Sekiguchi, S.: Design and Implementations of Ninf: Towards a Global Computing Infrastructure, Future Generation Computing Systems, Metacomputing Issue, 15, 649-658 (1999) 13. Caron, E., Desprez, F., Lombard, F., Nicod, J.-M., Philippe, L., Quinson, M., Suter, F.: A scalable approach to network enabled servers, Lecture Notes in Computer Science 2400 (2002) 14. MPI Forum: Website, http://www.mpi-forum.org/ 15. Foster, I., Karonis, N.T.: A Grid-Enabled MPI: Message Passing in Heterogeneous Distributed Computing Systems, ACM Press (1998) 16. Karonis, N.T., Toonen, B., Foster, I.: MPICH-G2: A Grid-enabled implementation of the Message Passing Interface, Journal of Parallel and Distributed Computing, 63, 551-563 (2003) 17. Takano, R., Matsuda, M., Kudoh, T., Kodama, Y., Okazaki, F., Ishikawa, Y., Yoshizawa, Y.: High Performance Relay Mechanism for MPI Communication Libraries Run on Multiple Private IP Address Clusters, Eighth IEEE International Symposium on Cluster Computing and the Grid (CCGrid2008), 401-408 (2008) 18. Gemmeke, H., Ruiter, N.: 3D Ultrasound Computer Tomography for Medical Imaging, Nuclear Instruments & Methods in Physics Research, 1057-1065 (2007)
File Systems and Access Technologies for the Large Scale Data Facility M. Sutter, V. Hartmann, M. Götter, J. van Wezel, A. Trunov, T. Jejkal, and R. Stotzka
Abstract Research projects produce huge amounts of data, which have to be stored and analyzed immediately after acquisition. Storing and analyzing such high data rates is normally not possible within the detectors, and the situation gets worse if several detectors with similar data rates are used within a project. In order to store the data for analysis, it has to be transferred to an appropriate infrastructure, where it is accessible at any time and from different clients. The Large Scale Data Facility (LSDF), currently under development at KIT, is designed to fulfill the requirements of data-intensive scientific experiments and applications. Currently, the LSDF consists of a testbed installation for evaluating different technologies. From a user's point of view, the LSDF is a huge data sink, providing 6 PB of storage in its initial state, which will be accessible via a couple of interfaces. As a user is not interested in learning dozens of APIs for accessing data, a generic API, the ADALAPI, has been designed, providing uniform interfaces for transparent access to the LSDF over different technologies. The present contribution evaluates technologies usable for the development of the LSDF to meet the requirements of various scientific projects. Also, the ADALAPI and the first GUI based on it are introduced.
M. Sutter () • V. Hartmann • M. Götter • T. Jejkal • R. Stotzka Karlsruhe Institute of Technology, Institute for Data Processing and Electronics, Hermann-von-Helmholtz-Platz 1, 76344 Eggenstein-Leopoldshafen, Germany e-mail:
[email protected];
[email protected];
[email protected];
[email protected];
[email protected]; J. van Wezel • A. Trunov Karlsruhe Institute of Technology, Steinbuch Centre for Computing, Hermann-von-Helmholtz-Platz 1, 76344, Eggenstein-Leopoldshafen, Germany e-mail:
[email protected];
[email protected] F. Davoli et al. (eds.), Remote Instrumentation for eScience and Related Aspects, DOI 10.1007/978-1-4614-0508-5 16, © Springer Science+Business Media, LLC 2012
M. Sutter et al.
1 Introduction Research produces huge amounts of measurement and experimental data, which have to be handled in an efficient manner. Such data are subsequently used for processing and analysis, and stored for later (re-)use. For a successful analysis and the achievement of new scientific results, access to the data must be provided for national and international partners. Additionally, the data have to be archived for at least the legally required archival period for scientific data (which differs from country to country), to allow the verification of the results obtained from them. In contrast to data stored in today's Storage Elements, experiment data is often organized in huge trees of small files, causing inefficient data handling if standard transport protocols, e.g. GridFTP [1], are used. The reason for the inefficient handling lies in the design of those protocols: they are optimized for moving huge amounts of data, where "small" files are normally about 100 MB, as described for GridFTP in [2] – often a multiple of the size of individual experiment data files. Another problem can arise if instruments are used for measuring data within remote data acquisition systems. If the instrument does not provide sufficient storage capabilities for handling the measured data, the data has to be streamed from the instrument directly to a storage infrastructure capable of handling the provided data rates. This becomes worse if multiple instruments are used and multiple continuous data streams have to be merged and analyzed. Also, if sufficient computing power is not available within the instrument, or the inputs of multiple instruments are required for immediate analysis in near real-time, measured or monitoring data must not be buffered in files on the instruments, since the accumulation may cause intolerable delays. Once again, the only solution is to stream the data directly to a site capable of processing them.
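The small-file penalty mentioned above can be made concrete with a toy cost model: every file pays a fixed per-transfer setup cost on top of its payload time, so a dataset split into many small files is dominated by setup costs. The link speed and setup cost below are illustrative assumptions, not GridFTP measurements:

```python
def transfer_time_s(n_files, file_mb, bandwidth_mb_s=100.0, setup_s=0.5):
    """Total time to move n_files of file_mb each: per-file setup + payload."""
    return n_files * (setup_s + file_mb / bandwidth_mb_s)

# 1 GB as one large file vs. the same 1 GB as 10,000 files of 100 KB:
one_big    = transfer_time_s(1, 1024.0)    # ~10.7 s, payload-dominated
many_small = transfer_time_s(10_000, 0.1)  # ~5010 s, setup-dominated
```

Under these assumptions, the fragmented dataset takes several hundred times longer to transfer, which is the effect the LSDF has to counter for huge trees of small files.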
In both cases, the instrument properties directly influence the particular requirements on the resources, e.g. data rates and processing capabilities. The problem is that handling continuous data streams is a feature normally not provided by existing storage infrastructure solutions, making the development and integration of streaming support for different packages necessary. To fulfill these requirements, a Large Scale Data Facility (LSDF) supporting data-intensive scientific experiments is currently being developed at the Karlsruhe Institute of Technology (KIT). The LSDF will allow storing and retrieving data in an efficient manner using different technologies. On the one hand, the LSDF is designed to allow the streaming of data, access to data with different technologies, and the handling of huge datasets consisting of files of only a few MB. On the other hand, the LSDF is a huge data sink, starting with an initial capacity of 6 PB and growing by several PB every year.
2 Fundamentals In order to build the LSDF with all its features and to allow access from remote data acquisition systems, the development is divided into different steps. First, a requirement analysis and evaluation defines the boundary conditions for data transfer, storage and pre-processing. After evaluating the existing technologies, the results will identify feasible networking, transfer and processing technologies and their margins of fluctuation. The last step is the definition of software interfaces to access the storage systems of the LSDF transparently.
2.1 Large Scale Data Facility at KIT An LSDF for data-intensive experiments and applications is currently being developed at KIT. The LSDF is intended to support scientific experiments and instruments that collect huge amounts of data (up to several PB) during their lifetime and need to process the data in near real-time. Nevertheless, instruments are not the only user community for the LSDF. Currently, projects ranging from systems biology over material testing to microscopy and beamline facilities are involved in the development process of the LSDF. The capabilities for the development of the LSDF are based on several years of experience with projects in Grid computing. From the heterogeneous requirements of the CampusGrid project to the petascale storage of the GridKa project [3], KIT plays a key role in providing reliable long-time storage of scientific data for the world-wide LHC community [4] and the German scientific Grid community (D-Grid) [5]. The activities are focused on the deployment, operation and support of methods for long-time and high-throughput data storage. The spectrum of research and development ranges from the study of disk, tape and file systems to databases, networking equipment, and protocols. The planned IT infrastructure, organization and technical support of data management and archival will assist different projects to facilitate access to world-wide data processing in Grid and cloud computing centers, enhance cooperation through easier data exchange, and improve scientific practice by faster evaluation of results. The need for the LSDF arises from the observation that in many research areas acquired data is becoming available at rates similar to the exponential growth of computer power. Methods of experimental data acquisition have been improved and operate increasingly automatically and in parallel.
At the same time, the required storage duration has grown, because not all data can be processed immediately and analysis methods improve over time. This means that the raw data remains a valuable resource over the whole lifetime of the project. Additionally, legal demands and international scientific cooperation require continuous on-line availability of the raw data for up to several decades.
Fig. 1 The schematic design of the LSDF. On the left side possible client technologies for accessing data in the LSDF and the usage of those technologies by applications are shown. The right side shows the server part of the LSDF for storing and archiving the data. Also, the analysis infrastructure for processing the data is shown next to the storage infrastructure
The LSDF will solve these problems by developing and provisioning technologies supporting the effective handling of huge amounts of small files, provisioning data storage (up to EB) for data-intensive science, and supporting interactive and transparent access via various interfaces. The interfaces will allow high-throughput access by different clients and by directly attached distributed computing infrastructures (DCIs) to access, preprocess and analyze the data. The LSDF will also handle metadata, to ease locating a specific data set among thousands of data sets, and allow the long-time storage and archiving of data (Fig. 1).
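The idea of one uniform interface over several access technologies, which the ADALAPI pursues, can be illustrated with a minimal adapter sketch. This is purely illustrative; the class and method names are invented and do not describe the actual ADALAPI:

```python
from abc import ABC, abstractmethod

class StorageBackend(ABC):
    """Common interface hiding the concrete transfer technology."""
    @abstractmethod
    def get(self, path: str) -> bytes: ...
    @abstractmethod
    def put(self, path: str, data: bytes) -> None: ...

class InMemoryBackend(StorageBackend):
    """Stand-in for a real protocol adapter (e.g. GridFTP, HTTP, NFS)."""
    def __init__(self):
        self._store = {}
    def get(self, path):
        return self._store[path]
    def put(self, path, data):
        self._store[path] = data

def copy(src: StorageBackend, dst: StorageBackend, path: str):
    """Client code works against the interface, not the technology."""
    dst.put(path, src.get(path))
```

With such a layer, applications stay unchanged when a dataset moves to a storage system reached over a different protocol; only the backend implementation is swapped.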
2.2 File Systems In order to provide the storage infrastructure of the LSDF, file systems have to be evaluated against the requirements, e.g. which file systems are accessible over the network, which can provide the huge storage capacities intended for the LSDF, and which allow efficient handling of small files. A selection of existing file systems is given below; the list is not exhaustive: • The Network File System (NFS) [6] is a protocol for accessing files over the network as easily as if they were stored on a local hard disk. Up to NFS version 3, only client computers are authenticated at the server; authentication of users is possible since version 4 of the protocol.
• Hadoop [7] is a software framework supporting data-intensive distributed applications. The processing of data is based on Google's MapReduce [8]. The Hadoop file system consists of a cluster of data nodes, each serving blocks of data over the network using a block protocol. Every file system instance requires one unique server, called the name node, which is necessary for accessing data. The Hadoop file system can also serve data over HTTP, allowing access to all content from a web browser or other clients. • The General Parallel File System (GPFS) [9] from IBM is a shared-disk clustered file system. For the provisioning of data, GPFS manages access via multiple servers, mapped to a global namespace. GPFS is used by many supercomputers on the Top 500 list of the most powerful supercomputers, but can also be used with AIX 5L clusters, Linux clusters, Microsoft Windows Server, or a heterogeneous cluster of AIX, Linux and Windows nodes. • The Server Message Block (SMB) protocol [10] provides access to files, printers and server services and is integrated in Microsoft operating systems. The Common Internet File System (CIFS) was introduced in 1996 as an extended version of SMB, adding e.g. Windows Remote Procedure Calls (RPC) and NT domain services. Nowadays, CIFS replaces SMB in Windows. A well-known open source implementation of SMB/CIFS is Samba [11], allowing e.g. access to Windows network shares from Linux or UNIX. • The Andrew File System (AFS) [12] is a distributed file system developed at Carnegie Mellon University. AFS offers a client-server architecture for federated file sharing and replicated read-only content distribution. Special features are location independence, scalability, security, and transparent migration capabilities. Deployments with tens of thousands of users and hundreds of data servers are usual, even if they are spread over the Internet.
For AFS, implementations exist for a broad range of heterogeneous systems including UNIX, Linux, Mac OS X, and Microsoft Windows. • The Gluster file system (GlusterFS) [13] is a distributed file system offering the storage resources of several servers as a single file system. The servers (cluster nodes) and clients communicate over TCP/IP. A peculiarity of GlusterFS is that NAS (network-attached storage) systems can be integrated directly into the cluster via Infiniband, and a redundant connection to the storage space over TCP/IP, Infiniband Verbs or Infiniband SDP (Socket Direct Protocol) is possible. Data on all cluster nodes can be read and written, and changes are written directly to all servers. • The Parallel Virtual File System (PVFS) [14] is a distributed file system built on top of local file systems. The focus of PVFS is on high-performance access to large data sets, and it was designed for use in large-scale computing clusters. The data is distributed across multiple servers and open to concurrent access by multiple clients. Specific to the client library is the usage of the Message Passing Interface (MPI) for accessing files. Some of the file systems, e.g. NFS or SMB/CIFS, only offer the local hard disks of one node over the network, meaning that at the moment they can only
provide some TB of capacity, depending on the hard disks. Also, if only one server is involved, the infrastructure has a single point of failure: if the server goes down, the file system is no longer available. Other systems, e.g. GlusterFS or GPFS, group a large number of hard disks across multiple nodes over the network, offering a better solution for the LSDF, as PBs of storage become available. A possible way to overcome the single point of failure of NFS and SMB/CIFS is to export them over GPFS from all nodes of a GPFS cluster, thus exporting the whole namespace from every point of entry. Hadoop goes another way: it offers data from multiple nodes, but its single point of failure is the name node. Nevertheless, the file systems not spread over several nodes are also interesting, as the LSDF will allow access over different technologies, meaning that the data does not have to be spread over multiple nodes in all cases. This depends on the user group that will access the LSDF.
2.3 Data Management Systems Different data management systems are introduced below. The features of some of these systems sound very similar to those of systems from the previous section, e.g. GPFS or GlusterFS. The difference is that the following systems can communicate with the storage infrastructures of Grid environments, or were developed especially for Grid environments. The available systems are: • The Storage Resource Broker (SRB) provides a hierarchical logical namespace and a storage repository abstraction for the organization of data, and can be used as a Data Grid Management System [15]. The logical namespace is provided via a Distributed Logical File System, allowing the user to access data even if it is stored in a distributed way across multiple storage systems. The integration of tape systems and a wide variety of client applications are an advantage of SRB. SRB has been ported to a variety of UNIX platforms including Linux, Mac OS X, AIX, Solaris, SunOS and SGI Irix, and also to Windows; not the whole functionality is available on Windows, but data can be stored and retrieved. • The Integrated Rule-Oriented Data System (iRODS) is a Data Grid Management System based on the expertise gained from the development of SRB and is more or less the replacement of SRB, even though SRB is still supported [16]. The main difference is that the management policies and consistency constraints in iRODS are expressed as so-called iRODS rules and state information, which are interpreted by the iRODS core to decide how requests and conditions should be resolved. This rule-based design differs from other data management systems, which normally implement the requests and conditions directly within the software, making an adaption of the software necessary for every new condition.
• The dCache [17] project provides a system for storing and retrieving data, where the data can be distributed over a large number of server nodes. For
File Systems and Access Technologies for the Large Scale Data Facility
this purpose, dCache offers a single virtual file system tree, which is accessible over several technologies and hides the physical location of the data from the user. Another feature of dCache is the migration of data from one server to another, allowing server nodes to be added or removed and the system to grow or shrink accordingly.
• The CERN Advanced STORage manager (CASTOR) [18] is a hierarchical storage management system. It was developed at CERN to handle the measured data of the different data acquisition systems running at CERN. CASTOR provides a UNIX-like file system structure over a virtual namespace, where the data can be accessed via command line tools or applications built on top of the data transfer structures.
• The StoRM [19] project is a storage resource manager for disk-based storage solutions, providing Storage Resource Manager (SRM) functionality and supporting guaranteed space reservation, direct data access, and standard libraries for I/O. SRM [20] is a specification designed for the integration of storage managers in Grid environments. StoRM can benefit from high-performance parallel file systems, e.g. GPFS, but any POSIX file system, e.g. ext3, can also be used.
• The Gfarm project is a reference implementation of the Grid Datafarm architecture and provides the Gfarm Grid file system [21], a virtual file system that integrates the disks of compute/file system nodes. The aim of Gfarm is to provide an alternative to NFS. The Gfarm file system daemon runs on every integrated node to facilitate remote file operations with access control in the Gfarm file system, as well as file replication, fast invocation, and node resource status monitoring.
• Xrootd is a data access daemon for fast and scalable data access, organized as a hierarchical file-system-like namespace based on the concept of directories [22].
The purpose of xrootd is to allow the setup of storage infrastructures with no reasonable size limit and with linear scaling capabilities, i.e. the performance of an installation should increase by the same factor by which the installation grows. Nowadays, xrootd is also called Scalla (Structured Cluster Architecture for Low Latency Access) and provides an xrootd server for data access and a cmsd server for building scalable xrootd clusters [22]. The xrootd/SE system is an enhancement of the xrootd system and integrates additional features, e.g. an SRM interface via BeStMan. BeStMan is a full implementation of the SRM v2.2 specification for disk-based and mass storage systems. Xrootd/SE was designed to work on top of a UNIX file system and has been reported to work on file systems such as NFS, AFS, GFS, GPFS, PNFS, HFS+, and Lustre [23].
• The Disk Pool Manager (DPM) [24] is a lightweight solution for disk storage management, mainly developed for the Large Hadron Collider (LHC) experiment at CERN and available within the LHC Computing Grid. DPM implements the required SRM interfaces for a storage resource manager and allows access to data over those interfaces (Table 1).
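The common idea behind SRB's logical namespace and dCache's virtual file system tree, namely one logical path hiding several physical locations, can be sketched as a tiny replica catalog. This is a hypothetical illustration, not code from any of these systems; paths and node names are invented:

```python
# Minimal replica catalog: a logical path maps to physical replicas on
# different storage nodes; the user only ever sees the logical path.
# (Hypothetical sketch; SRB/dCache keep this state in real databases.)

catalog = {
    "/lsdf/bio/scan001.tif": ["node3:/data/a1/scan001.tif",
                              "node7:/data/b9/scan001.tif"],
    "/lsdf/bio/scan002.tif": ["node5:/data/c2/scan002.tif"],
}

def resolve(logical_path, preferred_node=None):
    """Pick a physical replica for a logical path, preferring one node."""
    replicas = catalog[logical_path]
    if preferred_node:
        for replica in replicas:
            if replica.startswith(preferred_node + ":"):
                return replica
    return replicas[0]

loc = resolve("/lsdf/bio/scan001.tif", preferred_node="node7")
```

Adding or migrating replicas only changes the catalog; the logical path the user works with stays stable, which is exactly the property dCache exploits when moving data between servers.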
M. Sutter et al.

Table 1 The table shows the integration of SRM, the data transport protocols GridFTP, xroot, and dCap, NFS (POSIX) access, and communication with tape storage systems (MSS/HSM backend) for the data management systems discussed above: SRB, iRODS, dCache, CASTOR, StoRM, Gfarm, xrootd/Scalla, xrootd/SE (which provides SRM through BeStMan), and DPM
2.4 Access Technologies

The provisioning of storage is not the only feature the LSDF has to fulfill, as it is not enough to provide only the storage capacities. It must also be possible to upload data to and download data from the provided storage in an efficient way. Some of the already introduced file or data management systems offer access to the data in a POSIX-compliant way, making the access very easy. Such systems are, e.g., NFS, SMB/CIFS, and GPFS. Nevertheless, POSIX-compliant access is not the only possibility for retrieving data. A couple of different technologies are introduced in the following:
• The File Transfer Protocol (FTP) [25] is a network protocol for the transport of data over IP networks. FTP is used for up- and download of data between clients and servers. Additionally, the protocol allows the creation and listing of folders as well as the renaming and deletion of folders and files. For this, FTP uses a control channel for sending commands and a data channel for the transport of data. The whole communication is unsecured; only a user name and password are needed for authentication, if configured on the server side.
• FTPS [26] is an extension to FTP. It adds support for the Transport Layer Security (TLS) and Secure Sockets Layer (SSL) protocols to the FTP protocol. FTPS encrypts the whole FTP connection, both control and data channels, allowing secure communication with the server.
• FTP over SSH, sometimes also called Secure FTP, uses an SSH [27] connection for tunneling a normal FTP session. As FTP uses multiple TCP connections, one for the control channel and one for every data channel, it is particularly difficult to tunnel FTP over SSH. The SSH client has to be aware of every new data channel in order to protect it. Therefore, some clients only protect the control channel and deliver the data completely unprotected, but solutions also exist that protect both control and data channels.
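Python's standard library mirrors the FTP/FTPS distinction directly: `ftplib.FTP` speaks plain FTP, while `ftplib.FTP_TLS` upgrades the control channel to TLS and can encrypt the data channels too. The sketch below shows the pattern without contacting a real server; any host name in the comments is a placeholder:

```python
from ftplib import FTP, FTP_TLS

# FTP_TLS extends FTP: same commands, but the control channel is upgraded
# to TLS via AUTH TLS, and prot_p() requests encryption of data channels too.
def make_client(secure: bool):
    """Return an unconnected FTP client; connect()/login() would follow."""
    if secure:
        client = FTP_TLS()   # control channel will be TLS-protected
        # after client.connect("ftp.example.org") and client.login(...),
        # client.prot_p() would also encrypt the data channels
    else:
        client = FTP()       # everything, including the password, in cleartext
    return client

plain, secure = make_client(False), make_client(True)
```

Note that `FTP_TLS` still needs the explicit `prot_p()` call, matching the text's point that FTPS secures control and data channels separately.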
• Secure Copy (SCP) is a protocol for the delivery of data between a client and a server or between two servers; it is also the name of the corresponding program for transferring the data. The protocol is based on SSH, which is used for authentication and encryption, and on RCP (Remote Copy Protocol), which is tunneled through the SSH protocol: the data transport is handled with RCP, while authentication and encryption are handled by SSH.
• The SSH File Transfer Protocol (SFTP) is a further development of SCP by the Internet Engineering Task Force (IETF) and an extension of the SSH protocol. SCP only allows file transfers, whereas SFTP also allows operations on remote files. The additional capabilities compared to SCP include resuming interrupted transfers, directory listings, and remote file removal.
• The GridFTP protocol is also based on FTP and is a high-performance, secure, reliable data transfer protocol optimized for high-bandwidth wide-area networks [1]. GridFTP implements extensions that were either already specified in the FTP specification but not commonly implemented, or were especially developed for it. One of those extensions is the Globus-based security for authentication of users.
• Xroot is a protocol designed for access to large data sets and optimized for analysis purposes. To allow access to large data sets it is enhanced with extensions for High Performance Computing. Soon after development started, the protocol was integrated into ROOT from CERN for distributed data access.
• The dCap protocol is intended for local, trusted access and data transport, because authentication is not integrated. For access from outside, the GSIdCap protocol can be used, which is the dCap protocol with GSI authentication [28].
• Hadoop is not only a distributed file system; it also has clients for accessing data in the file system and a generic Application Programming Interface (API) for accessing data or running MapReduce jobs.
The API is available for Java, together with a JNI-based C API for talking to the Hadoop file system and a compatible C++ API for MapReduce jobs.
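The MapReduce model behind Hadoop jobs can be illustrated without Hadoop at all: a map function emits key/value pairs, the framework groups them by key, and a reduce function folds each group. The classic word-count example, here as a plain-Python sketch of the model rather than the Hadoop API:

```python
from collections import defaultdict

def map_phase(records):
    """Emit (word, 1) for every word in every input record."""
    for record in records:
        for word in record.split():
            yield (word, 1)

def shuffle(pairs):
    """Group values by key, as the MapReduce framework does between phases."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Sum the emitted counts for each word."""
    return {word: sum(counts) for word, counts in groups.items()}

counts = reduce_phase(shuffle(map_phase(["to be or not", "to be"])))
```

In Hadoop, the map and reduce functions would be user code, while the shuffle step and the distribution over cluster nodes are handled by the framework [8].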
3 Accessing the Large Scale Data Facility

Section 2 presents many technologies and systems that can be useful for fulfilling the requirements of the different user groups willing to store their data in the LSDF. The requirements range from simple communication with the LSDF, over storage and retrieval of data from the provided resources, to automatic analysis of data and long-term archival of already processed data. Providing access will be hard, as the user groups from different scientific areas have varying knowledge about storage and computation infrastructures and how to access them. Some groups even have no knowledge about these topics at all: they only want to store data and do not want to learn anything related to the underlying technologies and how to use them. This includes the usage of the APIs of the corresponding systems and can
Fig. 2 Schematic view of different user groups willing to access the LSDF
be even worse if security is used and the corresponding user group has no background knowledge. All the groups are interested in is how to store, access, or process their data on the LSDF (Fig. 2). As the LSDF is still under development, not all supported technologies have been chosen yet. Currently, the LSDF consists of a testbed installation for the evaluation of different technologies. The testbed has a size of 100 TB and is accessible via GridFTP and SSH, whereas the SSH access is only available for developers to monitor the software development. For computation purposes, a small Hadoop cluster of 10 nodes is deployed directly next to the storage testbed to allow access to the data with minimal network delay. Nevertheless, the first “production” version of the hardware for the LSDF has already been ordered and will be deployed. This installation will consist of 6 PB of storage, accessible via GPFS, and a Hadoop cluster of 232 cores. Access will additionally be possible via Samba. Every further access method is up to the involved user groups and their corresponding requirements, but still under discussion.
3.1 Abstract Data Access Layer API

Nevertheless, different access technologies and systems lead to another problem: it is more difficult to access data, as every system has its own APIs. In order to use the APIs, every user has to be familiar with the underlying technologies and how to
use them, including the security features and how they fit into the whole infrastructure. This makes it extremely hard for scientists, as they are only interested in processing the data, not in learning about different access technologies. To overcome this problem, access to the LSDF will be possible over the Abstract Data Access Layer API (ADALAPI), which will provide platform-independent data access to the LSDF. The ADALAPI implements a logical software layer for accessing the LSDF and hides specific details of the underlying technologies. If a scientist has to access data in the LSDF, the easiest way is to submit his request over the interfaces provided by the ADALAPI. In order to successfully access data, the ADALAPI will provide a login panel to the user and prompt for the details needed to access the corresponding parts of the LSDF. Such details can be, e.g., a user name with password or the valid credentials of an X.509 certificate. After successful submission of the required credentials, access is granted and the user can start to use the data or perform operations on it. If a user wants to interact with the LSDF, it is only necessary to call the common methods of the ADALAPI with the currently established connection. The common methods hide the details of the implementation and allow the listing of folders and the storage and retrieval of data for every implemented technology. This makes it very easy from the user's point of view, as he only has to connect via a specified method and is then able to access the data via the same interfaces for every system he communicates with. If another access method is needed, e.g. for another part of the LSDF, only the configuration for the part to access has to be changed; everything else stays the same for all implemented technologies. The ADALAPI is designed to be generic and can be extended with new technologies, which might be integrated in the LSDF in the future.
The design also allows the ADALAPI to be reused in several contexts, e.g. by graphical user interfaces or computing infrastructure. Especially the access from computing infrastructure is necessary, as immediate data analysis is one main aim of the development of the LSDF. As the LSDF currently allows access only via GridFTP, this is the only technology currently implemented in the ADALAPI. The implementation is based on the CoG Kit [29] of the Globus Toolkit [30] and provides the standard features of the CoG Kit, like storing and retrieving data. Nevertheless, the implementation offers a couple of additional features; e.g., it is possible to upload, download, and delete folders recursively.
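The idea of such an abstraction layer, i.e. one set of common methods in front of protocol-specific backends, can be sketched as follows. This is a hypothetical illustration of the pattern in Python, not the actual (Java-based) ADALAPI; all class and method names are invented:

```python
from abc import ABC, abstractmethod

class DataAccess(ABC):
    """Common interface hiding the concrete transfer technology."""
    @abstractmethod
    def connect(self, credentials): ...
    @abstractmethod
    def list_folder(self, path): ...
    @abstractmethod
    def put(self, local, remote): ...

class InMemoryBackend(DataAccess):
    """Stand-in backend; a real one would wrap e.g. a GridFTP client."""
    def __init__(self):
        self.files, self.connected = {}, False

    def connect(self, credentials):
        # A real backend would validate a password or X.509 credentials here.
        self.connected = bool(credentials)

    def list_folder(self, path):
        return sorted(f for f in self.files if f.startswith(path))

    def put(self, local, remote):
        self.files[remote] = local

backend: DataAccess = InMemoryBackend()   # swap the backend, keep the calls
backend.connect({"user": "alice"})
backend.put("scan.tif", "/lsdf/bio/scan.tif")
listing = backend.list_folder("/lsdf/bio")
```

Client code only ever sees `DataAccess`; switching, say, from a GridFTP backend to an HDFS backend would change the configuration, not the calling code.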
4 Data Browser

The High Throughput Microscopy user group produces up to 10 TB of data for one experiment, which takes some hours. As they perform a couple of experiments every week, storing the data on the microscopy computer or even in the microscopy facility is not possible. The biologists need an additional infrastructure for storing these data. The LSDF is predestined for this, as it makes it possible for scientists to store their data without any restrictions.
Of course, biologists are specialists in their field of investigation and are not willing to learn additional technologies needed for storing or retrieving data from the LSDF. As a result there are two different positions: on the one hand the biologists, who want to store a large amount of data, and on the other hand the computer scientists, who are able to store these data. Nevertheless, there is no link between both positions. It is nearly impossible for a biologist to handle different protocols and/or data management systems and to deal with additional security features. To overcome these problems, the Data Browser was developed. The Data Browser is an application supporting access to the LSDF and additionally allowing the usage of meta information for every data set. The handling of meta information is extremely important as, e.g., nobody is willing to search for a 2 GB data set in a data sink of, say, 3 PB. By using the Data Browser the user only needs a few mouse clicks and does not have to think about the underlying protocols or how they are used.
4.1 Status

In short, the Data Browser is an application for managing, analyzing, and visualizing large amounts of data from different scientific projects in a transparent manner. At the moment, users are able to up- and download the experiment data using the Data Browser. For accessing the LSDF, the Data Browser integrates the ADALAPI, allowing communication via uniform interfaces. As the additional meta data is extremely important, every microscope data set carries additional meta information, e.g. the kind of microscope used or the date of acquisition. This meta data is needed to distinguish data sets in the LSDF, especially if hundreds or even thousands of them are available. Therefore, the Data Browser supports the handling of meta data, which is partly extracted from the available data set but can also be extended manually by the user, and couples the data set with its meta data. If the user wants to use a specific data set, the Data Browser allows searching over the meta data. After the specific data set is located, the user is able to select and display images from it, as the microscope data sets consist only of images (Fig. 3).
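Such a meta-data search amounts to filtering a catalog of key/value records. A minimal sketch, using hypothetical field names rather than the Data Browser's actual schema:

```python
# Toy meta-data catalog for microscope data sets (field names are invented;
# the Data Browser stores this information in a database behind Hibernate).
datasets = [
    {"id": "ds-001", "microscope": "confocal",  "date": "2010-03-14"},
    {"id": "ds-002", "microscope": "widefield", "date": "2010-03-15"},
    {"id": "ds-003", "microscope": "confocal",  "date": "2010-04-02"},
]

def search(catalog, **criteria):
    """Return ids of all data sets whose meta data matches every criterion."""
    return [d["id"] for d in catalog
            if all(d.get(key) == value for key, value in criteria.items())]

hits = search(datasets, microscope="confocal")
```

Locating one 2 GB data set among thousands then means narrowing the criteria until the hit list is small, instead of browsing PB of raw storage.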
4.2 Architecture of the Data Browser

The main idea of the Data Browser is to use one application for all purposes, ideally with a single sign-on if authentication/authorization is necessary. Figure 4 shows the modules of the Data Browser. The Data Browser (in the middle) is a connector to the different tools. On the right, the LSDF is the data sink with more or less unlimited disk space. The database
Fig. 3 Screenshot of the Data Browser. The upper panel shows the information about the certificate which is used for authentication. On the lower left panel all data sets (projects) are listed. On the lower right panel the selected data set can be browsed on the LSDF
Fig. 4 The different modules of the Data Browser, shown in the middle. On the right side are the LSDF and the access to it over the ADALAPI. On the left side is the database containing the meta data, accessed via Hibernate. The Data Analyzer part of the Data Browser is responsible for the processing of data. The Computing Infrastructure can access the data via the ADALAPI or, if the protocol is known, directly without using the ADALAPI. This depends on how the data will be processed via the Data Analyzer and on what Computing Infrastructure will be used
on the left contains the meta data of the experiment data to allow quick queries on the data sets. The Data Analyzer (not yet included in the Data Browser) will analyze the data in parallel over Grid and Cloud infrastructures. The Data Browser is written in Java. It therefore runs on all platforms supporting Java, but is currently only tested under Windows and Linux. Before the Data Browser is able to access the data stored on the LSDF, the user has to authenticate himself, which is provided via the ADALAPI. After successful authentication, the user is able to browse the data and to view, e.g., images online. The uploading of data is also possible. Before uploading data, the meta data can be linked to the data set. This meta data is stored in a database and can be accessed via Hibernate [31], which is an open source Java persistence framework. An additional module for analyzing and browsing results is planned. This module will be able to start processing jobs on the data stored in the LSDF over a computing infrastructure: it generates a job for every data set and sends the job to the computing infrastructure. If the processing is triggered via the Data Browser, the user is able to monitor the status of the jobs and to browse the results after successful calculation.
5 Performance Analysis

In order to evaluate the transfer rates achievable via the Data Browser, we performed a series of performance tests. As the Data Browser is based on the ADALAPI and the only protocol currently implemented is GridFTP, results are only available for this protocol.
5.1 Test Description

GridFTP is designed to transfer huge files; it is not the best solution for small files unless the pipelining mode is used [2], which is currently not implemented in Java. To measure the maximal transfer rate of the Data Browser implementation using the GridFTP protocol, 14 data sets with a total size of 512 MB each were created. To determine how the transfer rate depends on the file size, each data set consists of files of a different size (Table 2). This means:

Table 2 The data sets used for measuring the transfer rates. Each data set sums to 512 MB but consists of a different number of files
Dataset number:    1    2    3    4   ...
Size of file (MB): 512  256  128  64  ...
Number of files:   1    2    4    8   ...
Sum (MB):          512  512  512  512 ...
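The construction of these data sets follows a simple halving scheme: data set i uses files of 512/2^(i-1) MB, so twice as many files are needed to reach the same 512 MB total. A small sketch reproducing the scheme (our own illustration of the published setup):

```python
def benchmark_datasets(total_mb=512, n_datasets=14):
    """(file_size_mb, file_count) per data set; every set sums to total_mb."""
    sets = []
    for i in range(n_datasets):
        size = total_mb / 2 ** i   # 512, 256, 128, ... MB per file
        count = 2 ** i             # 1, 2, 4, ... files
        sets.append((size, count))
    return sets

sets = benchmark_datasets()
# the 14th data set consists of 8192 files of 0.0625 MB (64 KB) each
```

The last data sets therefore stress the per-file overhead of the protocol rather than its raw bandwidth, which is exactly the effect the measurement is after.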
Fig. 5 Transfer rates for the different data sets under Linux and Windows. The performance under Windows is lower than under Linux, and the bigger the files are, the better the performance
5.2 Test Environment

5.2.1 GridFTP Server
Linux (openSUSE 11.1), Intel Core2Quad Q9550 with 4×2.8 GHz, 4 GB RAM

5.2.2 Clients
Windows: Windows XP Professional SP3, Intel Core2Quad Q6600 with 4×2.4 GHz, 3 GB RAM
Linux: Linux (openSUSE 11.0), Intel Core2Duo E8500 with 2×3.2 GHz, 4 GB RAM

All computers are connected with a 1 Gbit/s link.
5.3 Test Results

With the given hardware, the theoretical maximum transfer rate is about 120 MB/s. Due to variable network traffic, the results vary for higher transfer rates. The results are the mean values of 10 runs each. The Y error bars in Fig. 5 show the standard
deviation of the time needed for transferring the data set(s). As anticipated, the achievable transfer rate using GridFTP depends on the file size. On Linux, the transfer rate for large files is about 50% of the theoretical maximum bandwidth. Files of 32 MB and smaller are transferred with significantly lower bandwidth; for these files the transfer time is nearly proportional to the number of files. The behavior on Windows is similar: while the transfer rate is lower than on Linux, the bandwidth is stable for files larger than 32 MB, and for smaller files the transfer time is again proportional to the number of files. As the bandwidth under Windows is less than half the bandwidth under Linux, the first suspicion was a problem with the Windows client used. We therefore checked the results with a couple of Windows clients, including other XP, Vista, and Windows 7 machines. For all clients, the bandwidth was of more or less the same order of magnitude, so we could not explain why the bandwidth is reduced so drastically. For large files, GridFTP seems to be a convenient protocol. Nevertheless, for data sets consisting of thousands of small files it is not usable. One possible solution is to implement the pipelining mode for GridFTP in the CoG Kit, as introduced in [2].
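The observed proportionality between transfer time and file count is what a simple per-file-overhead model predicts: each file costs a roughly constant setup time t_o on top of its payload time, so for n files of total size S the rate is S / (n·t_o + S/B). A sketch of this model, where t_o and B are assumed example values, not the measured ones:

```python
def transfer_rate(total_mb, n_files, overhead_s=0.5, bandwidth_mb_s=60.0):
    """Model: each file adds a fixed setup cost on top of the payload time."""
    time_s = n_files * overhead_s + total_mb / bandwidth_mb_s
    return total_mb / time_s

one_big = transfer_rate(512, 1)        # close to the 60 MB/s payload rate
many_small = transfer_rate(512, 8192)  # dominated by the per-file overhead
```

For large n the payload term becomes negligible, so the total time grows linearly with the number of files, matching the measurements for files of 32 MB and below.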
6 Discussion and Future Work

In the present chapter, we introduced the LSDF, which, from the user's point of view, will be a huge data sink in its final state. The LSDF will not only allow the storage of data; it is also responsible for long-term archival and near-real-time processing of data. For processing purposes, the computing infrastructure will be deployed next to the LSDF and allow seamless access to the data. It is also possible to copy data to the LSDF and simultaneously start processing that data. For the development of the LSDF, different technologies were introduced that can be necessary to fulfill the requirements of different user groups. As the development of the LSDF is in progress, it is not yet possible to say which technologies will be included in the production version. At the moment we have a test environment with 100 TB of disk space. This test environment is accessible via GridFTP and SSH and serves only for evaluating technologies and how they can be incorporated in the production system. For data processing purposes, a small Hadoop installation consisting of 10 nodes with some TB of disk space is currently deployed and used. The first version of the production system will provide 6 PB of storage via GPFS. Nevertheless, the plan for the development of the LSDF is to increase the amount of storage continuously, up to the EB range in the coming years. The provisioning and growth of the storage space is not the only part we are working on in order to successfully deploy the LSDF. As the LSDF provides a huge amount of storage capacity, it must be possible to store or retrieve data over the network in an efficient way. At the moment, the connection of our institute to
the data center hosting the test environment is only 1 Gbit/s for the whole institute of about 60 persons. Nevertheless, the network connection will be extended in the next months to 10 Gbit/s for our working group alone. Increasing the hardware and network capacity is not the only open point for a successful deployment of the LSDF; the software parts also have to be evaluated and enhanced continuously. This includes the implementation of new features for the ADALAPI; the current schedule includes the integration of the Hadoop file system and the xroot protocol. The technology gap that has to be resolved by the Data Browser and the underlying ADALAPI is the transfer of small files. In order to transfer such data sets in an efficient way, an appropriate protocol has to be evaluated and included in the LSDF and the ADALAPI. Nevertheless, at the moment we have no concrete protocol fulfilling our requirements in that case. There are two possible candidates: the first one is the pipelining mode of GridFTP [2] and the second one is the usage of an efficient network protocol, for which Samba is a strong candidate. However, the pipelining mode is not implemented for the Java GridFTP client in the CoG Kit [29], and the Samba connection will be tested after the upgrade of the network link and the deployment of the first production release of the LSDF.
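Why pipelining helps with many small files can be seen from a round-trip model: without pipelining, every file transfer waits for a full command/acknowledgement exchange before the next one starts, while pipelining sends the transfer commands back to back so the per-file round trip is largely amortized. A sketch with assumed example numbers, not measurements:

```python
def total_time(n_files, file_mb, rtt_s=0.05, bandwidth_mb_s=60.0,
               pipelined=False):
    """Crude model of n sequential GridFTP transfers with/without pipelining."""
    payload = n_files * file_mb / bandwidth_mb_s
    if pipelined:
        return rtt_s + payload            # command exchanges overlap transfers
    return n_files * rtt_s + payload      # one full round trip per file

slow = total_time(8192, 0.0625)               # 8192 files of 64 KB each
fast = total_time(8192, 0.0625, pipelined=True)
```

In this model the pipelined transfer saves (n-1) round trips, which for thousands of small files dwarfs the payload time itself; this is the effect described for GridFTP pipelining in [2].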
References

1. W. Allcock, J. Bresnahan, R. Kettimuthu, and M. Link, “The Globus Striped GridFTP Framework and Server,” in Supercomputing, 2005. Proceedings of the ACM/IEEE SC 2005 Conference, November 2005, pp. 54–54.
2. J. Bresnahan, M. Link, R. Kettimuthu, D. Fraser, and I. Foster, “GridFTP Pipelining,” in Proceedings of the 2007 TeraGrid Conference, June 2007.
3. S. Hermann, H. Marten, and J. v. Wezel, “Operating a TIER1 centre as part of a grid environment,” in Proceedings of the Conference on Computing in High Energy and Nuclear Physics (CHEP 2006), February 2006.
4. T. S. Pettersson and P. Lefèvre, “The large hadron collider: conceptual design,” CERN, Geneva, Tech. Rep. CERN-AC-95-05 LHC, October 1995.
5. The D-Grid project. (2010, January) D-Grid-Initiative. [Online]. Available: http://www.d-grid.de/index.php?id=1&L=
6. Sun Microsystems, “NFS: Network file system protocol specification,” The Internet Engineering Task Force, Tech. Rep. RFC 1094, March 1989.
7. The Apache Software Foundation. (2010, January) Welcome to Apache Hadoop! [Online]. Available: http://hadoop.apache.org/
8. J. Dean and S. Ghemawat, “MapReduce: Simplified data processing on large clusters,” in Proceedings of the 6th Symposium on Operating Systems Design and Implementation, 2004, pp. 137–149.
9. F. Schmuck and R. Haskin, “GPFS: A shared-disk file system for large computing clusters,” in Proceedings of the 2002 Conference on File and Storage Technologies (FAST), 2002, pp. 231–244.
10. Microsoft Corporation. (2010, January) Microsoft SMB Protocol and CIFS Protocol Overview (Windows). [Online]. Available: http://msdn.microsoft.com/en-us/library/aa365233%28VS.85%29.aspx
11. The Samba team. (2010, January) Samba - opening windows to a wider world. [Online]. Available: http://samba.org/
12. J. H. Howard, M. L. Kazar, S. G. Menees, D. A. Nichols, M. Satyanarayanan, R. N. Sidebotham, and M. J. West, “Scale and performance in a distributed file system,” ACM Transactions on Computer Systems, vol. 6, no. 1, pp. 51–81, 1988.
13. Gluster Software India. (2010, January) Gluster: Open Source Clustered Storage. Easy-to-Use Scale-Out File System. [Online]. Available: http://www.gluster.com/
14. P. H. Carns, W. B. Ligon III, R. B. Ross, and R. Thakur, “PVFS: A parallel file system for linux clusters,” in Proceedings of the 4th Annual Linux Showcase and Conference, Atlanta, GA, October 2000, pp. 317–327.
15. C. Baru, R. Moore, A. Rajasekar, and M. Wan, “The SDSC storage resource broker,” in CASCON ’98: Proceedings of the 1998 conference of the Centre for Advanced Studies on Collaborative research. IBM Press, 1998, p. 5.
16. A. Rajasekar, M. Wan, R. Moore, and W. Schroeder, “A Prototype Rule-based Distributed Data Management System,” in HPDC workshop on “Next Generation Distributed Data Management”, May 2006.
17. M. Ernst, P. Fuhrmann, M. Gasthuber, T. Mkrtchyan, and C. Waldman, “dCache, a Distributed Storage Data Caching System,” in Proceedings of the International CHEP 2001, Beijing, China, September 2001.
18. G. Lo Presti, O. Barring, A. Earl, R. Garcia Rioja, S. Ponce, G. Taurelli, D. Waldron, and M. Coelho Dos Santos, “CASTOR: A Distributed Storage Resource Facility for High Performance Data Processing at CERN,” 24th IEEE Conference on Mass Storage Systems and Technologies (MSST 2007), pp. 275–280, September 2007.
19. F. Donno, A. Ghiselli, L. Magnoni, and R. Zappi, “StoRM: GRID middleware for disk resource management,” in Proceedings of the International CHEP 2004, Interlaken, Switzerland, October 2004.
20. A. Shoshani, A. Sim, and J. Gu, “Storage Resource Managers,” in Grid Resource Management: State of the Art and Future Trends, 1st ed. Springer, 2003, ch. 20, pp. 321–340.
21. O. Tatebe, Y. Morita, S. Matsuoka, N. Soda, and S. Sekiguchi, “Grid Datafarm Architecture for Petascale Data Intensive Computing,” in 2nd IEEE/ACM International Symposium on Cluster Computing and the Grid, May 2002.
22. The xrootd team. (2009, January) The Scalla Software Suite: xrootd/cmsd. [Online]. Available: http://xrootd.slac.stanford.edu/
23. Lawrence Berkeley National Laboratory Scientific Data Management Research Group. (2009, January) BeStMan. [Online]. Available: http://datagrid.lbl.gov/bestman/
24. L. Abadie, P. Badino, J.-P. Baud, J. Casey, A. Frohner, G. Grosdidier, S. Lemaitre, G. Mccance, R. Mollon, K. Nienartowicz, D. Smith, and P. Tedesco, “Grid-Enabled Standards-based Data Management,” 24th IEEE Conference on Mass Storage Systems and Technologies (MSST 2007), pp. 60–71, September 2007.
25. J. Postel and J. Reynolds, “File transfer protocol (FTP),” The Internet Engineering Task Force, Tech. Rep. RFC 959, October 1985.
26. M. Horowitz and S. Lunt, “FTP security extensions,” The Internet Engineering Task Force, Tech. Rep. RFC 2228, October 1997.
27. T. Ylonen and C. Lonvick, “The secure shell (SSH) authentication protocol,” The Internet Engineering Task Force, Tech. Rep. RFC 4252, January 2006.
28. dCache.org. (2009, January) The dCache Book. [Online]. Available: http://www.dcache.org/manuals/Book/
29. G. von Laszewski, I. Foster, J. Gawor, P. Lane, N. Rehn, and M. Russell, “A Java commodity grid kit,” Concurrency and Computation: Practice and Experience, vol. 13, no. 8, pp. 645–662, 2001.
30. I. Foster, “Globus Toolkit Version 4: Software for Service-Oriented Systems,” in IFIP International Conference on Network and Parallel Computing, Springer-Verlag LNCS 3779, 2005, pp. 2–13.
31. C. Bauer and G. King, Java Persistence with Hibernate. Greenwich, CT, USA: Manning Publications Co., 2006.
Policy Driven Data Management in PL-Grid Virtual Organizations

Dariusz Król, Renata Słota, Bartosz Kryza, Darin Nikolow, Włodzimierz Funika, and Jacek Kitowski
Abstract In this chapter, we introduce a novel approach to data management within the Grid environment based on user-defined storage policies. This approach aims at enabling Grid application developers to specify requirements concerning the storage elements which will be exploited by an application at runtime. Most of the existing Grid middleware focuses on unifying access to the available storage elements, e.g. by applying various virtualization techniques. While this is suitable for many Grid applications, there is a category of applications, namely data-intensive ones, which often have more specific needs. The chapter outlines research and development work carried out in the PL-Grid and OntoStor projects to solve this issue within the PL-Grid infrastructure.
1 Introduction

Modern scientific as well as business-oriented applications are becoming more and more complex, involving a multitude of parties and their heterogeneous resources, including hardware, services, and data. This poses several problems when trying to deploy such applications in Grid environments, such as the problem of defining and managing a Virtual Organization which spans several different IT infrastructures. Currently, one of the main bottlenecks in fostering the adoption of Grid environments in scientific and business applications is the complex process of configuring all necessary middleware for existing applications, including the configuration of
D. Król () • B. Kryza • D. Nikolow • J. Kitowski ACC CYFRONET AGH, ul. Nawojki 11, 30-950 Kraków, Poland e-mail:
[email protected];
[email protected];
[email protected];
[email protected] R. Słota • W. Funika • J. Kitowski Department of Computer Science, AGH University of Science and Technology, Mickiewicza 30, 30-059, Kraków, Poland e-mail:
[email protected];
[email protected];
[email protected] F. Davoli et al. (eds.), Remote Instrumentation for eScience and Related Aspects, DOI 10.1007/978-1-4614-0508-5 17, © Springer Science+Business Media, LLC 2012
Fig. 1 Virtual Organizations in PL-Grid
monitoring, data management, as well as security services with respect to the requirements of particular applications. In the Grid, such applications are developed within custom Virtual Organizations (Fig. 1), deployed in order to limit the set of resources available to particular applications and users, as well as to control the access of the application's users to these resources. Within the PL-Grid project, these issues are addressed in order to allow end-users to create their applications on the Grid by defining requirements which will be mapped to a proper VO configuration. In particular, the VO-specific requirements should be taken into account by the data management layer of the Grid middleware. This includes such issues as preferred storage type, required access latency, or replication for the purpose of increased data security. To accomplish this, the VO management framework must provide means for defining high-level requirements related to data management, and the data management layer must be able to infer from these requirements the particular actions which should be performed during data storage and retrieval. In this chapter, we present our results on developing a framework for data management in Grid environments in the context of Virtual Organizations. The novelty of our approach is that users can specify Quality of Service (QoS) requirements for the storage resources in a given VO. Our framework uses those requirements to automatically create and manage the VO. The framework consists of several components. First of all, a component called FiVO is responsible for the definition of the VO and the users' requirements, and for the automatic deployment and management
of the VO according to the defined policy. Data management components use the predefined policies in order to optimize data access and best meet the users' expectations, using proper storage monitoring and data access estimation methods. The rest of the chapter is organized as follows: the next section presents related work in the field of grid data management. The third section gives some details about VO management within PL-Grid. The fourth section discusses the data management challenges regarding the QoS of data access in PL-Grid, the available relevant technologies, and example use cases. Implementation notes are included in the fifth section, and the last section concludes the chapter.
2 Related Work The presented research aims to develop an approach to data management within the Grid environment driven by a user-defined policy. Most of the existing solutions, however, try to streamline the data management process by choosing storage elements on behalf of the user, rather than supporting users in selecting the storage elements which suit their requirements. This section describes various projects related to this issue in different environments. The dCache system [1] consolidates various types of data sources, e.g. hierarchical storage management systems or distributed file systems, in order to provide users with a single virtual namespace. Moreover, dCache replicates existing data to different data sources to increase throughput and data availability. Mechanisms for moving data to the location where it is actually used are also being intensively developed within the project. All of these features are transparent from the user's point of view, although some aspects of the system's behavior, e.g. the division of data sources into pools or the replica manager, can be configured. The autonomic, agent-based approach presented in [2] exploits features such as autonomicity and self-awareness known from other agent-based systems. It is therefore suitable for dynamic environments such as the grid, where storage elements can be added or removed at runtime. The agent-based approach ensures that such a change will be quickly discovered and that adapting actions, e.g. adjusting a load-balancer agent, will be performed. Each agent is assigned a different role, e.g. data source accessor or storage element chooser; thus, the responsibilities are well divided between loosely coupled objects. It is worth mentioning that in this approach metadata can be assigned to each storage element, which can then be exploited when finding the most suitable storage for application data.
However, such attributes are static and contain only information which does not change over time. Many challenges in the data management area have resurfaced in the Cloud computing era. The most important ones include: storing and retrieving enormous amounts of data, accessing data by different users from geographically distributed
locations, synchronization between different types of devices, and consolidating various storage types, among others [3]. While there are no effective solutions available for all of the mentioned problems yet, there is some very interesting ongoing work in this area. One of the interesting observations about applying Cloud computing is the necessity of shifting the approach to storing data from relational-based to less-structured storage, which scales better in a highly distributed environment [4]. Reference [5] describes a production-ready cloud architecture deployed by Yahoo!, along with explanations of several data management decisions that were taken, e.g. the necessity of applying the map-reduce paradigm to perform data-intensive jobs, or replacing relational databases with more schema-flexible counterparts.
3 Virtual Organization Management in PL-Grid One of the main objectives of the PL-Grid project [7] is to support Polish scientists by providing a hardware and software infrastructure which will meet the requirements of scientific applications. The infrastructure will be accessible in the form of a Grid environment that encompasses various computational and storage resources located in several computer centers in Poland. The proposed system for VO management, called FiVO, will support PL-Grid users in defining the requirements for their applications in a unified, semantic way, which will then be translated by our system into the configuration of particular middleware components (e.g. VOMS, replica manager) in order to make the process of VO creation as automatic as possible (Fig. 2). Such an approach will enable abstraction over the heterogeneous resources belonging to a particular domain of a Virtual Organization (e.g. services or data), as well as over the heterogeneous types of middleware used by different sites of the Grid infrastructure, including gLite and UNICORE. This process assumes that the resources available to a VO are described in a semantic way. Since this is usually a very strong requirement, we are developing a system, X2R, which will allow semi-automatic translation of legacy metadata sources such as RDBMS, LDAP, or XML databases into an ontological knowledge base founded on a provided mapping [6]. This will overcome the interoperability problem in describing the resources available to a Virtual Organization. In particular, an important aspect of the work involves optimization of data access at the VO level based on the semantic agreement reached by the partners during the VO inception.
This will involve custom monitoring of data access statistics within a VO, as well as the development of extensions to existing monitoring systems capable of monitoring data access metrics and verifying them against the particular Service Level Agreement (SLA) present within a VO.
Fig. 2 Virtual Organizations management architecture (components shown: User Interface; FiVO; X2R; Resource description layer; Security Configurator; Monitoring Configurator; and the PL-Grid Infrastructure with its Security layer, Monitoring layer, and Data management layer comprising Access time estimators and a Replica manager)
4 Data Management Challenges in PL-Grid For users and applications with specific data access quality-of-service requirements, an appropriate VO should be created respecting the data storage performance demands specified in the SLA. There are a couple of challenges which need to be addressed when creating and running such VOs. The first one is automatic configuration of the VO and assignment of storage resources to it. We assume that heterogeneous storage resources are available within the grid environment. The appropriate resources and storage policies need to be selected which best match the user requirements. More than one storage resource can be selected, depending on the storage policy (e.g. replication). The second one is controlling the storage performance and storage resource allocation in order to guarantee the fulfillment of user requirements with respect to storage performance within the already created VO. One technique which can be used in this case is replication of data, in order to avoid storage node overload for popular data sets. Another technique, which increases the performance of a single data access, is distributing (striping) the data and accessing the stripes in parallel. A relevant storage resource selection and scheduling based on a given user
performance profile is needed to keep track of the resource usage and to keep the performance within the specified limits. Therefore, our research aims at providing grid application developers with a mechanism to define non-functional requirements for the storage, e.g. a required throughput or a desired availability level. The requirements can be divided into two categories: soft ones and hard ones. The former category represents constraints which should be maintained at runtime, e.g. a maximum access time; if some of them are violated, the application should not be stopped, but it will most probably run longer due to unoptimized storage management. A hard requirement means that any violation will result in an application failure, e.g. one related to storage capacity: an application which tries to dump data on a storage element that does not provide enough free space will, in most situations, be interrupted and an error code will be returned. It is important to mention that some properties of storage elements can change over time, e.g. the free hard drive capacity or the mean access time; thus, a monitoring system has to be exploited in order to obtain information about the current state of the available storage resources. In order to build and deploy a system providing the functionalities described above, some low-level storage monitoring services and relevant performance prediction methods are needed. Such technology is being developed as part of another ongoing research project, codenamed "OntoStor" [10], which is briefly presented in the following section.
4.1 Storage Monitoring The OntoStor project aims at developing an ontology-based data access methodology [11] for heterogeneous Mass Storage Systems (MSS) in a grid environment. Within the project, a library of storage monitoring methods for various types of storage systems has been developed. MSSs differ from each other, so special monitoring methods are necessary for each supported system. In order to make the monitoring of heterogeneous storage systems easier, a common set of performance-related parameters has been specified and an appropriate CIM-based model (C2SM) has been developed. Since the model is based on standard parameters, it allows for easier integration and development of monitoring software. Currently, three types of storage systems are supported by the model: HSM systems, disk arrays, and local disks. Specific monitoring software for each storage system needs to be developed, since such software is storage-system dependent. This software has been implemented in Java as a library package and can be included as monitoring sensors in more general monitoring systems like Nagios, Ganglia, or Gemini. The monitoring systems provide, according to the C2SM model, static (e.g. total storage capacity) and dynamic (e.g. MSS load) performance-related parameters, allowing performance prediction for a given moment.
4.2 Methods for Storage Performance Prediction Storage system performance is characterized by two main parameters: data transfer rate and data access latency. The performance prediction concerns a given single request, not the average for the given MSS. Within the OntoStor project, three types of performance prediction methods are taken into consideration: statistical prediction, rule-based prediction, and prediction based on MSS simulation. The simulation-based method is the most advanced and accurate, but it needs a more detailed description of the current state of the MSS. Statistical prediction is the simplest but fastest method. These methods are implemented as grid services. Depending on the user profile and the goal of prediction (replica selection or container selection), an appropriate service is chosen for the given request. By using the mentioned prediction services, it is possible to manage data more intelligently and to assign the best storage elements according to the user requirements. During the prediction process, the appropriate prediction service is selected based on the ontological description of the service.
4.3 Sample Use Case As the PL-Grid project is user-focused, it is crucial to gather requirements and needs directly from potential users. The result of the requirements analysis stage is a description of the two typical use cases which are the most important ones from the user's point of view. We present them in this section to better demonstrate how the presented system (in particular the programming library element) works in typical scenarios. The first use case describes a scenario in which functions provided by the programming library are explicitly called from an application. It encompasses all situations where a new data-intensive application is created and the application developer wants to apply a special storage strategy based on his/her experience and the nature of the generated data. This type of application often occurs in science, where solving complex problems requires running complicated algorithms on many very large data sets. Another field of science where a large amount of data is generated at runtime is simulation: depending on the problem scale and the desired level of detail, a simulation can generate hundreds of terabytes of data per day. The second use case is intended to handle scenarios where an application already exists, e.g. in binary form, and therefore cannot be modified. In order to apply our storage management mechanism in this use case, a proxy object has to be exploited. Here, a modified version of the C++ standard library serves as the proxy, delegating all file creation requests to our programming library; the application therefore need not be aware of any additional element. The necessary information about the storage policy which should be applied is retrieved from a VO knowledge base. The knowledge base contains the properties of the default storage strategy, which can be applied to applications of any user belonging to the VO. At runtime, the programming library determines to which VO the user belongs and then retrieves the necessary information. The rest of the use case is the same as the previous one.
Fig. 3 Architecture of data management system enhanced by user-defined requirements
Figure 3 depicts an overview of the system schema. The central point of the system is a module (called the Data storage manager) which handles data creation requests from an application according to the defined use cases. The VO knowledge base element is exploited only when no storage policy has been given. The Monitoring system module provides information about the current environment state, so as to better select, from the available storage elements, the ones where the data will actually be stored.
5 Implementation Notes An important choice from the data management point of view is the deployment of the Lustre file system [8] as the primary distributed file system on Storage Elements. Lustre provides a file striping capability which can be used to distribute data between different storage resources, e.g. single hard drives or disk arrays, that are gathered to provide a single, logical namespace to the user. Another interesting feature is the pool mechanism, which is a way of grouping different storage elements, e.g. RAID arrays or partitions from several hard drives, into a single category. Once a pool is defined, the user can explicitly request that some data be stored on this pool only.
Apart from the functionality provided by the programming library, the exposed API is very important from the application developer's point of view. The API should be as clean, easy to use, and intuitive as possible. In the presented library, the context of use was relevant when designing the API. The developed library is intended to be used to create files in the same way as the standard library does; the difference is in the physical location of the created files. While the standard library creates files on the local hard drive or, in the best case, within a distributed file system with default properties, our programming library dynamically adjusts, e.g., the striping strategy while creating files, according to the given storage policy and the actual state of the Storage Elements. In addition, the library should provide functions to retrieve information about the currently exploited strategy, along with the possibility to change the policy (and thus the striping strategy) of a specified file. The most important functions of the API are as follows:
– int createFile(char *filename, StoragePolicy *policy) – the main function of the API, which creates a new file in the Lustre file system according to the given StoragePolicy object. The policy object contains storage-related requirements defined by a user. At runtime, these requirements are mapped to the striping strategy applied by the Lustre system. It is worth mentioning that, besides the static information about user-defined requirements, the actual state of the runtime environment retrieved from the monitoring system (see Sect. 4.1) is taken into account. A file descriptor related to the given name is returned.
– int openFile(char *filename) / void closeFile(char *filename) – counterparts to the standard open()/close() functions from the standard library. The only reason to include these functions is to maintain the consistency of the API.
– void changeStoragePolicy(char *filename, StoragePolicy *newPolicy) – can be executed when one wants to change the storage policy of an existing file. As a result, the file can be transferred to a different storage element if that is more suitable for the new policy or the current state of the environment.
– int getStripeCounter(char *filename) / void setStripeCounter(int newCounter, char *filename) – the first function simply returns the number of stripes into which a file is divided; the second changes the number of stripes. Exposing these two functions is dictated by the importance of the striping mechanism: it is the most important feature provided by the Lustre file system, exploited to reduce file access time as well as to increase the availability level of a file.
By replacing the standard functions for creating files with the presented ones, application developers can locate their application data more precisely. Also, the learning curve of the API should be very gentle, due to the analogy with the standard library. Implementation work is currently in progress. We have chosen the C++ language to provide high performance and to be able to efficiently exploit the native Lustre library, which is written in pure C. To communicate with FiVO [9] and
the storage monitoring system, the Web Services technology was selected. This set of development tools ensures that the final implementation of the presented library can be deployed on the production PL-Grid infrastructure.
6 Conclusions In this chapter, we have presented our approach to data management driven by specific user requirements in grid-based Virtual Organizations. The system comprises several components, including VO management, data management, and data access estimation layers. Due to their integration, it is possible to enhance data management within the Grid not only on the global scale, but especially with respect to particular applications running within the context of Virtual Organizations. Future work will include evaluation of the system within the framework of the PL-Grid project. Acknowledgements The research presented in this paper has been partially supported by the European Union within the European Regional Development Fund program no. POIG.02.03.00-00-007/08-00 as part of the PL-Grid Project (www.plgrid.pl) and ACC Cyfronet AGH grant 500-08. MNiSW grant no. N N516 405535 is also acknowledged.
References
1. G. Behrmann, P. Fuhrmann, M. Gronager, and J. Kleist, A distributed storage system with dCache, Journal of Physics: Conference Series, 2008
2. Z. Liu, Autonomic Agent-based Storage Management for Grid Data Resource, in: Second International Conference on Semantics, Knowledge, and Grid, 2006
3. D. J. Abadi, Data Management in the Cloud: Limitations and Opportunities, IEEE Data Engineering Bulletin, 2009, pp. 3-12
4. R. L. Grossman and Y. Gu, On the Varieties of Clouds for Data Intensive Computing, IEEE Data Engineering Bulletin, 2009, pp. 44-50
5. B. Cooper, E. Baldeschwieler, R. Fonseca, J. Kistler, P. Narayan, C. Neerdaels, T. Negrin, R. Ramakrishnan, A. Silberstein, U. Srivastava, and R. Stata, Building a Cloud for Yahoo!, IEEE Data Engineering Bulletin, 2009, pp. 36-43
6. A. Mylka, A. Swiderska, B. Kryza, and J. Kitowski, Supporting Grid metadata management and resource matchmaking with OWL, Computing and Informatics, in preparation
7. PL-Grid project page, http://www.plgrid.pl
8. Lustre file system project wiki, http://wiki.lustre.org
9. B. Kryza, L. Dutka, R. Słota, and J. Kitowski, Dynamic VO Establishment in Distributed Heterogeneous Business Environment, in: G. Allen, J. Nabrzyski, E. Seidel, G. D. van Albada, J. Dongarra, and P. M. A. Sloot (Eds.), Computational Science – ICCS 2009, 9th International Conference, Baton Rouge, LA, USA, May 25-27, 2009, Proceedings, Part II, LNCS 5545, Springer, 2009, pp. 709-718
10. The OntoStor project, http://www.icsr.agh.edu.pl/ontostor/
11. D. Nikolow, R. Słota, and J. Kitowski, Knowledge Supported Data Access in Distributed Environment, in: M. Bubak, M. Turala, K. Wiatr (Eds.), Proceedings of Cracow Grid Workshop – CGW'08, October 13-15, 2008, ACC-Cyfronet AGH, Kraków, 2009, pp. 320-325
DAME: A Distributed Data Mining and Exploration Framework Within the Virtual Observatory Massimo Brescia, Stefano Cavuoti, Raffaele D’Abrusco, Omar Laurino, and Giuseppe Longo
Abstract Nowadays, many scientific areas share the same broad requirement of being able to deal with massive and distributed datasets while, where possible, being integrated with services and applications. In order to close the growing gap between the incremental generation of data and our understanding of it, we need to know how to access, retrieve, analyze, mine, and integrate data from disparate sources. One of the fundamental aspects of any new-generation data mining software tool or package which really wants to become a service for the community is the possibility to use it within complex workflows which each user can fine-tune in order to match the specific demands of his scientific goal. These workflows often need to access different resources (data, providers, computing facilities, and packages) and require strict interoperability on (at least) the client side. The DAME (data mining and exploration) project arises from these requirements by providing a distributed Web-based data mining infrastructure specialized in massive data set exploration with soft computing methods. Originally designed to deal with astrophysical use cases, where the first scientific application examples have demonstrated its effectiveness, the
M. Brescia () INAF – Osservatorio Astronomico di Capodimonte, Via Moiariello 16, 80131 Napoli, Italy e-mail:
[email protected] S. Cavuoti • G. Longo Dipartimento di Fisica, Università degli Studi Federico II, Via Cintia 26, 80125 Napoli, Italy e-mail:
[email protected];
[email protected] R. D’Abrusco Center for Astrophysics – Smithsonian Astrophysical Observatory, 60 Garden Street, Cambridge, 02138 MA, USA e-mail:
[email protected] O. Laurino INAF – Osservatorio Astronomico di Trieste, Via Tiepolo 11, 34143 Trieste, Italy e-mail:
[email protected] F. Davoli et al. (eds.), Remote Instrumentation for eScience and Related Aspects, DOI 10.1007/978-1-4614-0508-5 18, © Springer Science+Business Media, LLC 2012
DAME Suite has proven to be a multi-disciplinary, platform-independent tool, fully compliant with modern KDD (knowledge discovery in databases) requirements and Information and Communication Technology trends.
1 Introduction Modern ICT (information and communication technology) allows us to capture and store huge quantities of data. Finding and summarizing the trends, patterns, and outliers in these data sets is one of the big challenges of the information age. There has been important progress in data mining and machine learning in the last decade. Machine learning, data mining, or, more generally, the KDD (knowledge discovery in databases) discipline is a burgeoning new technology for mining knowledge from data, a methodology that a lot of heterogeneous communities are starting to take seriously. Strictly speaking, KDD is about algorithms for inferring knowledge from data and ways of validating it, so the main challenge lies in the applications. Wherever there is data, information can be gleaned from it. Whenever there is too much data or, more generally, a representation in more than three dimensions (the limit for the human brain to grasp), the mechanism of learning will have to be automatic. When a dataset is too large for a particular algorithm to be applied, there are basically three ways to make learning feasible. The first one is trivial: instead of applying the scheme to the full dataset, use just a small subset of the available data for training. Obviously, in this case information is easily lost, and the loss is rarely negligible in terms of discovering correlations between data. The second method consists of parallelization techniques, but the problem is being able to derive a parallelized version of the learning algorithm. Sometimes this is feasible due to the intrinsic nature of the learning rule (as with genetic algorithms). However, parallelization is only a partial remedy because, with a fixed number of available CPUs, the algorithm's asymptotic time complexity cannot be improved [1]. Background knowledge (the third method) can make it possible to reduce the amount of data that needs to be processed by a learning rule.
In some cases, most of the attributes in a huge dataset might turn out to be irrelevant when background knowledge is taken into account. But in many exploration cases, especially data mining as opposed to data analysis problems, the background knowledge simply does not exist, or could introduce a wrongly biased knowledge into the discovery process. In this scenario, the DAME (data mining and exploration) project, starting from the requirements of the astrophysics domain, has investigated massive data set (MDS) exploration by producing a taxonomy of data mining applications (hereinafter called functionalities) and collecting a set of machine learning algorithms (hereinafter called models). Each functionality-model association represents what we define simply as a "use case", easily configurable by the user through specific tutorials. At a low level, any experiment launched on the DAME framework, externally configurable through dynamic interactive Web pages, is treated in a standard way, making completely transparent to the user the
specific computing infrastructure used and the specific data format given as input. As described in what follows, the result is a distributed data mining infrastructure, perfectly scalable in terms of data dimension and computing power requirements, originally tested and successfully validated on astrophysical science cases, able to perform supervised and unsupervised learning, and revealing a multi-disciplinary data exploration capability. In practice, a specific software module of the Suite, called the driver management system (DRMS), a sub-system of the DRIVER (DR) component, has been implemented to delegate at runtime the choice of the computing infrastructure on which the experiment should be launched. Currently, the choice is between the GRID and a stand-alone multi-thread platform, which could also be replaced by a CLOUD infrastructure (moreover, the DRMS is engineered in an easily extensible way, so the deployment of our Suite on a multi-core platform, based on the GPU/CUDA computing technique, is also under investigation). The mechanism is simple, consisting of a threshold-based evaluation of the input dataset dimensions and of the status of the GRID job scheduler at execution startup time. This can reduce both the execution time of a single experiment and the overall job execution scheduling.
2 Modern E-Science Requirements E-Science communities have recently started to face the deluge of data produced by the new generation of scientific instruments and by numerical simulations (widely used to model physical processes and compare them with measured ones). Data is commonly organized in scientific repositories. Data providers have implemented Web access to their repositories and http links between them. This data network, which consists of huge volumes of highly distributed, heterogeneous data, has opened up many new research possibilities and greatly improved the efficiency of doing science. But it has also posed new problems regarding cross-correlation capabilities and mining techniques on these MDS to improve scientific results. The most important advance we expect is a dramatic improvement in the ease with which e-science communities can use distributed e-infrastructures. We pursue a scenario where users sit down at their desks and, through a few mouse clicks, select and activate the most suitable scientific gateway for their specific applications, or gain access to detailed documentation or tutorials. We call a scientific gateway an e-infrastructure which is able to offer remote access to, and navigation of, distributed data repositories, together with Web services and applications able to explore, analyze, and mine data. It does not require any software installation or execution on the user's local PC, it permits asynchronous connection and launching of jobs, and it hides from the user any computing infrastructure configuration or management. In this way, the scientific communities will expand their use of the e-infrastructure and benefit from a fundamental tool to undertake research, develop collaborations, and increase their scientific productivity and the quality of research outputs. Only if this scenario becomes reality will the barrier currently placed between the community of users and the technology disappear.
M. Brescia et al.
2.1 The Case of Astrophysics From the scientific point of view, the DAME project arises from the astrophysical domain, where the understanding of the universe beyond the Solar System is based on just a few information carriers: photons at several wavelengths, cosmic rays, neutrinos and gravitational waves. Each of these carriers has its peculiarities and weaknesses from the scientific point of view: they sample different energy ranges, endure different kinds and levels of interference during their cosmic journey (e.g. photons are absorbed, while charged cosmic rays (CRs) are deflected by magnetic fields), sample different physical phenomena (e.g. thermal, non-thermal and stimulated emission mechanisms), and require very different technologies for their detection. Hence, the international community needs modern infrastructures for the exploitation of the ever increasing amount of data (of the order of Petabytes/year) produced by the new generation of telescopes and space-borne instruments, as well as by numerical simulations of ever-growing complexity. Extending these requirements to other application fields, the main goal of the DAME project can be summarized in two items: • The need for a “federation” of experimental data, by collecting them from several worldwide archives and by defining a series of standards for their formats and access protocols • The implementation of reliable, user-friendly, scalable and, as far as possible, asynchronous computing instruments for data exploration, mining and knowledge extraction These topics require powerful, computationally distributed and adaptive tools able to explore, extract and correlate knowledge from multi-variate massive datasets in a multi-dimensional parameter space (Fig. 1). The latter is a typical data mining requirement, common to many scientific, social and technological environments. Concerning the specific astrophysical aspects, the problem can be analytically expressed as follows.
Any observed (or simulated) datum defines a point (region) in a subset of R^N, whose coordinates include: R.A., DEC, time, wavelength, experimental setup (i.e. spatial and/or spectral resolution, limiting magnitude, brightness, etc.), fluxes, polarization, spectral response of the instrument, and PSF. Every time a new technology enlarges the parameter space or allows a better sampling of it, new discoveries are bound to take place. Hence, the scientific exploitation of a multi-band (D bands), multi-epoch (K epochs) universe implies searching for patterns and trends among N points in a D × K dimensional parameter space, where N > 10^9, D >> 100, K > 10. The problem also requires a multi-disciplinary approach, covering aspects belonging to the Astronomy, Physics, Biology, Information Technology, Artificial Intelligence, Engineering and Statistics environments. In the last decade, the Astronomy and Astrophysics communities participated in a number of initiatives related to the use and development of
DAME: A Distributed Data Mining and Exploration Framework Within the...
Fig. 1 The data multi-dimensional parameter space in Astrophysics problems
e-infrastructures for science and research (e.g. EGEE1, EuroVO2, grid.IT3), giving astronomers the possibility to develop well-established and successful VRCs (virtual research communities). A support cluster dedicated to A&A has been set up and funded in the framework of the EGEE-III project. Surveys of the requirements of the A&A community concerning data management, job management, distributed tools and more general services have been carried out in the framework of the EGEE-II and EGEE-III projects. The requirements focus on the need to integrate astronomical databases in a computing grid, to create proper science gateways underpinning the use of the infrastructure, and to bridge heterogeneous e-infrastructures (i.e. EGEE and EuroVO). Moreover, astronomers envisage some advanced functionalities in the use of new computing architectures (such as shared memory systems or GPU computing), and therefore the ability to gridify applications that require running many independent tasks or parallel jobs. This dynamic process demonstrates the effectiveness of the two basic requirements mentioned above. As for data, the concept of “distributed archives” is already familiar to the average astrophysicist. The leap forward in this case is to be able to organize the
1 http://www.eu-egee.org/.
2 http://www.euro-vo.org/.
3 http://www.grid.it/.
data repositories to allow efficient, transparent and uniform access: these are the basic goals of the VO or VObs (virtual observatory). In more than one sense, the VO is an extension of the classical computational grid; it fits perfectly the data grid concept, being based on storage and processing systems, and on metadata and communications management services. The VO is a paradigm for using multiple archives of astronomical data in an interoperating, integrated and logically centralized way, so as to be able to “observe a virtual sky” by position, wavelength and time. Not only actually observed data are included in this concept: theoretical and diagnostic data can be included as well. The VO represents a new type of scientific organization for the era of information abundance: • It is inherently distributed and Web-centric. • It is fundamentally based on a rapidly developing technology. • It transcends the traditional boundaries between different wavelength regimes and agency domains. • It has an unusually broad range of constituents and interfaces. • It is inherently multi-disciplinary. The International VO (cf. the IVO Alliance, or IVOA4) has opened a new frontier to astronomy. In fact, by making available, at the click of a mouse, an unprecedented wealth of data, and by implementing common standards and procedures, the VObs allows a new generation of scientists to tackle complex problems which were almost unthinkable only a decade ago [2]. Astronomers may now access a “virtual” parameter space of increasing complexity (hundreds or thousands of measured parameters per object) and size (billions of objects). However, the link between data mining applications and the VObs is currently defined only partially. As a matter of fact, the IVOA has so far concentrated its standardization efforts mainly on data, and the definition of mechanisms to access general purpose interoperable tools for “server side” MDS manipulation is still a matter of discussion within the IVOA.
Behind this consideration, there is the crucial requirement to harmonize all recent efforts spent in the fields of VObs, GRID and HPC computing, and data mining.
3 Data Mining and the Fourth Paradigm of Science X-Informatics (such as bio-informatics, geo-informatics and astro-informatics) is increasingly recognized as the fourth leg of scientific research after experiment, theory and simulations [3]. It arises from the pressing need to acquire the multi-disciplinary expertise needed to deal with the ongoing burst of data complexity and to perform data mining on MDS. The crucial role played by such tasks in astrophysics research has recently been certified by the constitution, within the IVOA, of an Interest Group on Knowledge Discovery in Data Bases (KDD-IG)
4 http://ivoa.net.
which is seen as the main interface between the IVOA technical infrastructure and VO-enabled science. In this context, the DAME project intends: • To provide the VO with an extensible, integrated environment for DAME • To support the VO standards and formats, especially for application interoperability (SAMP) • To abstract the application deployment and execution, so as to provide the VO with a general purpose computing platform taking advantage of modern technologies (e.g. Grid, Cloud, etc.) Following the fourth paradigm of science, the need is now emerging worldwide (cf. the US AVO community and the recent meeting on Astro-informatics at the 215th AAS) for all components (both hardware and software) of the Astro-informatics infrastructure to be integrated or, at least, made fully interoperable. In other words, the various infrastructure components (data, computational resources and paradigms, software environments, applications) should interact seamlessly, exchanging information, and be based on a strong underlying network component.
4 The DAME Approach to Distributed Data Mining The DAME project aims at creating a distributed e-infrastructure to guarantee integrated and asynchronous access to data collected by very different experiments and scientific communities, in order to correlate them and improve their scientific usability. The project consists of a data mining framework with powerful software instruments capable of working on MDS in a distributed computing environment. The VObs has defined a set of standards to allow interoperability among different archives and databases in the astrophysics domain, and keeps them updated through the activity of dedicated working groups. So far, most of the implementation effort for the VO has concerned the storage, standardization and interoperability of the data, together with the computational infrastructures. Our project extends this fundamental target by integrating it in an infrastructure joining service-oriented software and resource-oriented hardware paradigms, including the implementation of advanced tools for KDD purposes. The DAME design also takes into account the fact that the average scientist cannot, and/or does not want to, become an expert in Computer Science or in the fields of algorithms and ICT. In most cases, the r.m.s. scientist (our end user) already possesses his or her own algorithms for data processing and analysis, and has implemented private routines/pipelines to solve specific problems. These tools, however, are often not scalable to distributed computing environments, or are too difficult to migrate to a GRID infrastructure. DAME also aims at providing a user-friendly scientific gateway to ease the access, exploration, processing and understanding of the MDS federated under standards according to VObs rules. We wish to emphasize that standardization needs to be extended to data analysis and mining methods and to algorithm development. The natural computing environment for such MDS processing is a distributed infrastructure
(GRID/CLOUD); but, again, we need to define standards in the development of higher level interfaces, in order to: • Isolate the end user from the technical details of VO and GRID/CLOUD use and configuration • Make it easier to combine existing services and resources into experiments Data mining is usually conceived as an application (deterministic/stochastic algorithm) to extract unknown information from noisy data. This is basically true, but in some way too reductive with respect to the wide range covered by the data mining concept. More precisely, in DAME, data mining is intended as a set of data exploration techniques, based on the combination of parameter space filtering, machine learning and soft computing techniques, associated with a functional domain. The term functional domain arises from the conceptual taxonomy of research modes applicable to data. Dimensional reduction, classification, regression, prediction, clustering and filtering are examples of functionalities belonging to the data mining conceptual domain, in which the various methods (models and algorithms) can be applied to explore data under a particular aspect, connected to the associated functionality scope.
4.1 Design Architecture DAME is based on five main components: Front End (FE), Framework (FW), Registry and Data Base (REDB), Driver (DR) and Data Mining Models (DMM). The FW is the core of the Suite. It handles all communication flow from/to the FE (i.e. the end user) and the rest of the components, in order to register the user, to show working session information, to configure and execute all user experiments, and to report output and status/log information about the applications running or already finished. One of the most critical aspects of the FW component is the interaction of a newly created experiment with the GRID environment. The FW needs to create and configure the plug-in (hereinafter called DMPlugin) associated with the experiment. After the DMPlugin is configured, the DR component needs to run the experiment by calling the run method of the plug-in. When executed on the GRID, the process needs to migrate to a Worker Node (WN). To implement this migration, we have chosen to serialize the DMPlugin into a file. Serialization is the process of converting an object into a sequence of bits, so that it can be stored on a storage medium. Our tests on the GRID environment indicate that this solution works well and that the JDL file needed to manage the whole process is very simple. The FE component includes the main GUI (graphical user interface) of the Suite; it is based on dynamical Web pages, rendered by the Google Web Toolkit (GWT), able to interface the end users with the applications, models and facilities used to launch scientific experiments. The interface foresees an authentication procedure, which redirects the user to a personal session environment collecting uploaded
Fig. 2 Communication interface schema between FE and FW
data, checking experiment status, and providing guided procedures to configure and execute new scientific experiments, using all available data mining algorithms and tools. From the engineering point of view, the FE is organized by means of a bidirectional information exchange, through XML files, with the FW component, the Suite engine, as shown in Fig. 2. The DR component is the package responsible for the physical implementation of the HW resources handled by the other components at a virtual level. It provides the abstraction of the real platform (HW environment and related operating system calls) to the rest of the Suite software components, including the I/O interface (file loading/storing), intermediate formatting and conversion of user data (ASCII, CSV, FITS, VO-TABLE), job scheduling, memory management and process redirection (Fig. 3). In more detail, a specific sub-system of the DR component, called DRMS, has been implemented to delegate at runtime the choice of which computing infrastructure should be selected to launch the experiment. The REDB component is the knowledge base repository of the Suite. It is a registry in the sense that it contains all information related to user registration and accounts, working sessions and related experiments. It is also a database containing information about experiment input/output data and all temporary/final data coming from user jobs and applications (Fig. 4). The DMM component is the package implementing all the data processing models and algorithms available in the Suite. These comprise supervised and unsupervised models, coming from soft computing, self-adaptive, statistical and deterministic computing environments. It is structured as a package of libraries (Java APIs) covering the following items: • Data mining model libraries (multi-layer perceptron, support vector machine, genetic algorithms, self organizing maps, etc.) • Visualization tools • Statistical tools
Fig. 3 The DRIVER component as interface with computing infrastructure
Fig. 4 REDB architecture
Fig. 5 DAME functional infrastructure
• List of functionalities (classification, regression, clustering, etc.) • Custom libraries required by the user The following scheme shows the component diagram of the entire suite (Fig. 5).
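The plug-in serialization mechanism described in this section (configuring a DMPlugin, writing it to a file, and restoring it on a Worker Node before invoking its run method) can be sketched with standard Java serialization. The `DMPlugin` interface and `EchoPlugin` class below are simplified stand-ins, not the actual DAME classes.

```java
import java.io.*;

// Simplified stand-in for the DAME plug-in contract: serializable, with run().
interface DMPlugin extends Serializable { void run(); }

// A toy plug-in carrying its experiment configuration.
class EchoPlugin implements DMPlugin {
    private final String experimentId;
    EchoPlugin(String experimentId) { this.experimentId = experimentId; }
    public void run() { System.out.println("running experiment " + experimentId); }
}

public class PluginMigration {
    // User Interface side: persist the configured plug-in to a file.
    public static void serialize(DMPlugin p, File f) throws IOException {
        try (ObjectOutputStream out = new ObjectOutputStream(new FileOutputStream(f))) {
            out.writeObject(p);
        }
    }

    // Worker Node side: restore the plug-in and execute it.
    public static DMPlugin deserialize(File f) throws IOException, ClassNotFoundException {
        try (ObjectInputStream in = new ObjectInputStream(new FileInputStream(f))) {
            return (DMPlugin) in.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        File f = File.createTempFile("dmplugin", ".ser");
        serialize(new EchoPlugin("exp-001"), f);
        deserialize(f).run(); // prints: running experiment exp-001
    }
}
```

In the real Suite, the serialized file is shipped to the Worker Node by the GRID job described in the JDL file; here both sides run in one process for illustration.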
4.2 Distributed Environment As underlined in the previous sections, the processing of huge quantities of data is a typical requirement of e-science communities. The amount of computation needed to process the data is impressive, but often “embarrassingly parallel”, since it is based on local operators, with a coarse-grained level of parallelism. In such cases, the “memory footprint” of the applications allows subdividing the data into chunks, so as to fit the RAM available on the individual CPUs and to have each CPU perform a single processing unit. In most cases, “distributed supercomputers”, i.e. a local cluster of PCs such as a Beowulf machine, or a set of HPC computers distributed over the network, can be an effective solution to the problem. In this case, the GRID paradigm can be considered an important step forward in the provision of the computing power needed to tackle the new challenges. The main concept of the distributed data mining applications embedded in the DAME package Suite is based on three issues (Fig. 6):
Fig. 6 The concept of distributed infrastructure in DAME
• Virtual organization of data: this is the extension of the already mentioned basic feature of the VObs • Hardware resource-oriented: this is obtained by using computing infrastructures, like the GRID, whose solutions enable parallel processing of tasks, using idle capacity. The paradigm in this case is to obtain large numbers of work requests running for short periods of time • Software service-oriented: this is the basis of the typical CLOUD computing paradigm. The data mining applications run on top of virtual machines, seen at the user level as services (specifically Web services), standardized in terms of data management and workflow Our scientific community needs not only “traditional” computations, but also complex data operations that require on-line access to databases, mainly mediated through a set of domain-specific Web services (e.g. VObs), and the use of HPC resources to run in silico (numerical) experiments. The DAME Suite is deployed on a multi-environment platform including both CLOUD and GRID solutions (Fig. 7). In particular, concerning the GRID side, the Suite exploits the S.Co.P.E. GRID infrastructure. The S.Co.P.E. project [4], aimed at the construction and activation of a Data Center now fully integrated in the national and international GRID initiatives, hosts 300 eight-core blade servers and 220 Terabytes of storage.
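A minimal sketch of the coarse-grained, embarrassingly parallel model described above: the dataset is split into chunks sized to fit a worker's memory, and each chunk is processed independently. A local thread pool stands in here for the GRID Worker Nodes; the chunk size and the per-chunk operation (a sum) are purely illustrative.

```java
import java.util.*;
import java.util.concurrent.*;

// Sketch of chunked, embarrassingly parallel processing: split the data into
// memory-sized chunks, process each chunk as an independent unit, combine.
public class ChunkedProcessing {
    // Split an array into chunks of at most chunkSize elements.
    static List<double[]> chunk(double[] data, int chunkSize) {
        List<double[]> chunks = new ArrayList<>();
        for (int i = 0; i < data.length; i += chunkSize) {
            chunks.add(Arrays.copyOfRange(data, i, Math.min(i + chunkSize, data.length)));
        }
        return chunks;
    }

    public static void main(String[] args) throws Exception {
        double[] data = new double[10_000];
        Arrays.fill(data, 1.0);
        ExecutorService pool = Executors.newFixedThreadPool(4); // stand-in for Worker Nodes
        List<Future<Double>> partials = new ArrayList<>();
        for (double[] c : chunk(data, 1024)) {
            partials.add(pool.submit(() -> Arrays.stream(c).sum())); // local operator per chunk
        }
        double total = 0;
        for (Future<Double> p : partials) total += p.get();
        pool.shutdown();
        System.out.println(total); // 10000.0
    }
}
```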
Fig. 7 The DAME Suite deployed on the GRID architecture
The acronym stands for “Cooperative System for Multidisciplinary Scientific Computations”, that is, a collaborative system for scientific applications in many areas of research. Owing to its generality, the DAME Suite is also used for applications outside astronomy (such as chemistry, bioinformatics and the social sciences).
4.3 Soft Computing Applications The KDD scheme adopted in the DAME package is based on soft computing methods, belonging to the typical dichotomy (supervised/unsupervised) of machine learning methods. The first type makes use of prior knowledge to group samples into different classes. In the second type, instead, little or no a priori knowledge is required, and the patterns are classified using only their statistical properties and some similarity measure, which can be quantified through a mathematical clustering
objective function, based on a properly selected distance measure. In the first release, the DMM implements the models listed in the following table.

Model                                        Category      Functionality
MLP + back propagation learning rule [5]     Supervised    Classification, regression
MLP with GA learning rule [5, 6]             Supervised    Classification, regression
SVM [7]                                      Supervised    Classification, regression
SOM [8]                                      Unsupervised  Clustering
Probabilistic Principal Surfaces (PPS) [9]   Unsupervised  Dimensional reduction, pre-clustering
MLP with Quasi Newton (MLPQNA) [10]          Supervised    Classification, regression
Depending on the specific experiment, any of the models listed above can be executed with a greater or lesser degree of parallelization. All the models require some parameters that cannot be defined a priori, making iterated experiment sessions necessary in order to find the best tuning. Moreover, not all the models can be developed under the message passing interface (MPI) paradigm. However, the possibility to execute several jobs at once (in the specific GRID case) intrinsically exploits the multi-processor architecture.
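The iterated tuning sessions mentioned above can be sketched as a parameter sweep in which each setting runs as an independent job. The loss function and the learning-rate parameter below are purely illustrative stand-ins for a real model evaluation; the thread pool again stands in for parallel GRID jobs.

```java
import java.util.*;
import java.util.concurrent.*;

// Sketch of parameter tuning via independent jobs: launch one evaluation per
// candidate setting in parallel, then keep the best-scoring configuration.
public class ParameterSweep {
    // Hypothetical evaluation of one setting; a toy objective with minimum at 0.01.
    static double loss(double learningRate) {
        return Math.abs(learningRate - 0.01);
    }

    public static void main(String[] args) throws Exception {
        double[] candidates = {0.1, 0.05, 0.01, 0.005, 0.001};
        ExecutorService pool = Executors.newFixedThreadPool(candidates.length);
        Map<Double, Future<Double>> runs = new LinkedHashMap<>();
        for (double lr : candidates) {
            runs.put(lr, pool.submit(() -> loss(lr))); // one independent job per setting
        }
        double bestLr = Double.NaN, bestLoss = Double.POSITIVE_INFINITY;
        for (Map.Entry<Double, Future<Double>> e : runs.entrySet()) {
            double l = e.getValue().get();
            if (l < bestLoss) { bestLoss = l; bestLr = e.getKey(); }
        }
        pool.shutdown();
        System.out.println("best learning rate: " + bestLr); // best: 0.01
    }
}
```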
5 First Scientific and Technological Results During the design and development phases of the project, some prototypes have been implemented in order to verify the project requirements and to validate the selected DMM from the scientific point of view. In this context, a Java-based plug-in wizard (DMPlugin) for custom experiment setup has been designed to extend the DAME Suite features with users' own algorithms, to be applied to scientific cases by encapsulating them inside the Suite (Fig. 8). This facility5 extends the canonical use of the Suite: a user can upload and build datasets, configure the available DMM, execute different experiments in service mode, and load graphical views of partial/final results. Moreover, a prototype of the framework Suite has been developed (Fig. 9). The prototype (recently replaced by the official beta release of the Web application) is a Web application implementing minimal DAME features and requirements, developed in parallel with the project advancement in order to perform a scientific validation of the models and algorithms foreseen in the main Suite and to verify all
http://dame.dsf.unina.it/dmplugin.html.
Fig. 8 The DMPlugin Java application to extend functionalities of the DAME Suite
basic project features related to the scientific pipeline workflow. The beta release of the Web application is publicly accessible.6 The prototype implements the basic user interface functionalities: a virtual file store for each registered user is physically allocated on the machine that serves the Web application. Users can upload, delete and visualize their files: the system tries to recognize the file type and shows images or text contextually. Any astronomical data analysis and/or data mining experiment to be executed on the prototype can be organized as a data processing pipeline, in which the use of the prototype is integrated with pre- and post-processing tools available among the virtual observatory Web services. The prototype has been tested on three different science cases which make use of MDS: • Photometric redshifts for the SDSS galaxies7: It makes use of a nested chain of MLPs (multi-layer perceptrons) and allowed deriving the photometric redshifts
6 http://dame.dsf.unina.it/beta_info.html.
7 http://dame.dsf.unina.it/dame_photoz.html.
Fig. 9 The DAME prototype page example
for ca. 30 million SDSS galaxies with an accuracy of 0.02 in redshift. This result, which appeared in the Astrophysical Journal [11], was also crucial for a further analysis of low multiplicity groups of galaxies (Shakhbazian) in the SDSS sample • Search for candidate quasars in the SDSS: The work was performed using the PPS (probabilistic principal surfaces) module applied to the SDSS and SDSS + UKIDS data. It consisted of the search for candidate quasars in the absence of a priori constraints and in a high dimensionality photometric parameter space [12] • AGN classification in the SDSS [13]: Using GRID-S.Co.P.E. to execute 110 jobs on 110 WNs, the SVM model is employed to produce a classification of different types of AGN, using the photometric data from the SDSS and the base of knowledge provided by the SDSS spectroscopic subsamples. A paper on the results is in preparation Concerning the results achieved, and recalling what was already mentioned in Sect. 4.2, using the hybrid architecture it is possible to execute simultaneous experiments that, taken together, yield the best results. Even if the single job is not parallelized, we obtain a running-time improvement approaching the limit value N of Amdahl's law:
1 / ((1 - P) + P/N),
where, if P is the proportion of a program that can be made parallel (i.e. benefit from parallelization) and (1 - P) is the proportion that cannot be parallelized (remains serial), then the maximum speed-up that can be achieved by using N processors is given by the expression above. For example, in the case of the AGN classification experiment (cited above), each of the 110 jobs runs for about a week on a single processor. By exploiting the GRID, the experiment running time can be reduced to about one week, instead of more than 2 years (110 weeks).
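The figures above can be checked numerically with Amdahl's law; the 5% serial-fraction case below is an added illustration of the law's sensitivity, not a measurement from the DAME experiments.

```java
// Amdahl's law: maximum speed-up with N processors when a fraction P of the
// program is parallelizable.
public class Amdahl {
    public static double speedup(double p, int n) {
        return 1.0 / ((1.0 - p) + p / n);
    }

    public static void main(String[] args) {
        // Fully independent jobs (P = 1): 110 weeks of serial work collapse to ~1 week.
        System.out.println(speedup(1.0, 110));  // ~110
        // Even a 5% serial fraction caps the gain well below N.
        System.out.println(speedup(0.95, 110)); // ~17
    }
}
```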
6 Conclusion Generally speaking, applications for KDD will come not from computer programs, nor from machine learning experts, nor from the data itself, but from the people and communities who work with the data and with the problems from which it arises. That is why we have designed and provided the DAME infrastructure: to empower those who are not machine learning experts to apply these techniques to the problems that arise in daily working life. The DAME project was born as an astrophysical data exploration and mining tool, originating from the very simple consideration that, with the data obtained by the new generation of instruments, we have reached the physical limit of observations (single photon counting) at almost all wavelengths. If extended to other scientific or applied research disciplines, the opportunity to gain new knowledge will depend mainly on the capability to recognize patterns or trends in the parameter space (not limited to 3-D human visualization) from very large datasets. In this sense, the DAME approach can be easily and widely applied to other scientific, social, industrial and technological scenarios. Our project has recently passed the R&D phase, de facto entering the implementation and commissioning step, while performing in parallel the scientific testing of the first infrastructure prototype, accessible, after a simple authentication procedure, through the official project Web site.8 The first scientific test results confirm the soundness of the theoretical approach and of the technological strategy. Acknowledgements DAME is funded by the Italian Ministry of Foreign Affairs, by the European project VOTECH (Virtual Observatory Technological Infrastructures9), and by the Italian PON-S.Co.P.E.10 Official partners of the project are: • Department of Physics – University Federico II of Napoli, Italy
8 http://dame.dsf.unina.it.
9 http://www.eurovotech.org/.
10 http://www.scope.unina.it.
• INAF National Institute of Astrophysics – Astronomical Observatory of Capodimonte, Italy • California Institute of Technology, Pasadena, USA
References 1. Dunham, M., 2002. Data Mining: Introductory and Advanced Topics. Prentice-Hall. 2. Smareglia, R. et al., 2006. The Virtual Observatory in Italy: status and prospects. Mem. SAIt Suppl., Vol. 9, p. 423. 3. Hey, T. et al., 2009. The Fourth Paradigm. Microsoft Research, Redmond, Washington, USA. 4. Merola, L., 2008. The S.Co.P.E. Project. Proceedings of the Final Workshop of Grid Projects “PON Ricerca 2000–2006, Avviso 1575”. Catania, Italy. 5. Bishop, C. M., 1995. Neural Networks for Pattern Recognition. Oxford University Press, GB. 6. Mitchell, M., 1998. An Introduction to Genetic Algorithms. The MIT Press, Cambridge, Massachusetts. 7. Chang, C. C., Lin, C. J., 2001. Training Support Vector Classifiers: Theory and Algorithms. Neural Computation, Vol. 13, pp. 2119–2147. 8. Kohonen, T., 2007. Self-Organizing Maps. Vol. 30, 2nd ed. Springer, Heidelberg. 9. Chang, K. Y., Ghosh, J., 2000. SPIE, 3962, 192. 10. Bishop, C. M., Svensen, M., Williams, C. K. I., 1998. Neural Computation, pp. 215–234. 11. D'Abrusco, R. et al., 2007. Mining the SDSS Archive I. Photometric Redshifts in the Nearby Universe. Astrophysical Journal, Vol. 663, pp. 752–764. 12. D'Abrusco, R. et al., 2009. Quasar Candidate Selection in the Virtual Observatory Era. In press, MNRAS. 13. Brescia, M., Cavuoti, S. et al., 2009. Astrophysics in S.Co.P.E. Mem. SAIt Suppl., Vol. 13, p. 56.
Network Performance Monitoring for Remote Instrumentation Services: The DORII Platform Test Case Davide Adami, Alexey Chepstov, Franco Davoli, Matteo Lanati, Ioannis Liabotis, Stefano Vignola, Sandro Zappatore, and Anastasios Zafeiropoulos
Abstract Remote Instrumentation Services go far beyond simply offering networked access to remote instrument resources. They are asserting themselves as a way of fully integrating instruments (including laboratory equipment, large-scale experimental facilities, and sensor networks) in a Service Oriented Architecture, where users can view and operate them in the same fashion as computing and storage resources. The deployment of test beds for a large basis of scientific instrumentation and e-Science applications is mandatory in order to develop new functionalities to be embedded in the existing middleware to enable such integration, to test them in the field, and to promote their usage in scientific communities. The DORII (Deployment of Remote Instrumentation Infrastructure) project is a major effort in this direction. Since the performance of the network infrastructure interconnecting the instrumental, computing and storage resources heavily affects the behaviour of DORII applications, a network monitoring platform has been designed and set up. After describing the network monitoring platform, the paper discusses results concerning two selected applications, with the aim of finding a correlation between performance metrics at the network and application levels.
D. Adami • F. Davoli () • S. Vignola • S. Zappatore CNIT, University of Pisa/University of Genoa Research Units, Italy e-mail:
[email protected];
[email protected];
[email protected];
[email protected] A. Chepstov High Performance Computing Center Stuttgart (HLRS), University of Stuttgart, Germany e-mail:
[email protected] M. Lanati EUCENTRE, Via Ferrata 1, 27100 Pavia, Italy e-mail:
[email protected] I. Liabotis • A. Zafeiropoulos GRNET, 56 Mesogion Ave., GR 115 27 Athens, Greece e-mail:
[email protected];
[email protected] F. Davoli et al. (eds.), Remote Instrumentation for eScience and Related Aspects, DOI 10.1007/978-1-4614-0508-5 19, © Springer Science+Business Media, LLC 2012
1 Introduction Almost all scientific areas use specialized instrumentation, such as laboratory equipment, measurement devices, large- and small-scale experimental facilities, and sensor networks for data acquisition, in addition to computational and storage resources. The complex of activities that allows automated data processing and analysis, by exploiting distributed computational services like those offered by Grid architectures and cloud computing, and that can be referred to as e-Science (or, with a more precise term, Service-Oriented Science [1]), would greatly benefit from the full integration of such experimental instrumentation with the computational infrastructure into one powerful pool of resources that can be searched, selected, composed and configured, accessed, and controlled by its users. The extension of the e-Infrastructure with this complex of activities, which can be termed Remote Instrumentation Services (RIS) [2], is not straightforward, and requires addressing a number of issues in middleware and network architectural design, middleware development, and instrumentation- and measurement-related aspects. Recently, a number of European research projects, among others, have been dedicated to it, and a community of researchers in the field has been forming groups and actively investigating all aspects of this field (see, e.g., [3, 4]). An important aspect in the advancement of these concepts is the deployment of the infrastructure and of test beds that allow user communities to become acquainted with the related technology, to perform experiments on-line, and to be involved in the development of new applications and in the extension of existing ones. This is one of the main tasks of the DORII (Deployment Of Remote Instrumentation Infrastructure) project [5, 6], funded by the European Commission under the 7th Framework Programme. From the architectural point of view, the e-Infrastructure addressed by DORII can be depicted as shown in Fig. 1.
The general architecture, the deployed applications and the test bed organization of DORII were described in [6]. In the present chapter, we detail the currently deployed DORII e-Infrastructure and the customization and deployment of the performance monitoring tools, and we present the results of monitoring two selected applications in the earthquake engineering and environmental communities. The chapter is organized as follows. Sections 2 and 3 describe the e-Infrastructure and the monitoring tools, respectively. Section 4 presents the deployment of the selected applications, and Sect. 5 reports the related experimental results. Section 6 contains the conclusions.
2 Overview of the e-Infrastructure

One of the main requirements posed by applications in many strategic areas of science and technology (such as those specified by ESFRI, the European Strategy Forum on Research Infrastructures [7]) is to design a service-oriented IT architecture which
Network Performance Monitoring...
Fig. 1 The Remote Instrumentation Infrastructure, conceptual view
should allow users to manage, maintain and exploit diverse instrumentation and acquisition devices, together with heterogeneous computation and storage facilities granted by the traditional Grid, such as those set up by EGEE (Enabling Grids for E-sciencE) [8], DEISA (Distributed European Infrastructure for Supercomputing Applications) [9] and many other Grid projects. Unlike the traditional Grid, the e-Infrastructure should enable access to remote instrumentation in high-performance computing and storage environments, and allow users and their applications easy and secure access to various remote instrumentation resources, supported by high-performance Grid computation and storage facilities. The e-Infrastructure achieves these goals by providing standardized services to access integrated instrumentation resources (including expensive experimental equipment, but also smaller network-connected sensors and mobile devices) in a unified way with the traditional Grid services (e.g., as provided by gLite [10, 11]). The DORII e-Infrastructure is based both on EGEE and its middleware of choice, gLite, and on specific middleware services built within the DORII project. The gLite middleware offers basic grid services such as Information, Job, and Data Management, along with Security Services. The interaction of the users with the instruments is effected via the Instrument Element (IE), originally conceived in the GRIDCC [12] project and then re-designed within DORII. Information about the resources
Table 1 Computational and storage resources (vo.dorii.eu)

Country   Partner name   Site name        CPU cores   Storage (TB)
Poland    PSNC           PSNC             2128        4
Spain     CSIC           IFCA-CSIC        1680        230
                         IFCA I2G         1680        230
Italy     ELETTRA        ELETTRA          160         20
                         SISSA-Trieste    24          0.5
                         INFN-Trieste     276         0
Greece    GRNET          HG-01-GRNET      64          4.78
                         HG-02-IASA       118         3.14
                         HG-03-AUTH       120         3.13
                         HG-04-CTI-CEID   114         2.87
                         HG-05-FORTH      120         2.33
                         HG-06-EKT        628         7.76
and services of the infrastructure are provided by the Berkeley Database Information Index (BDII), which uses standard LDAP (Lightweight Directory Access Protocol) databases populated by an update process. The Workload Management System (WMS) is the service responsible for the distribution and management of tasks across Grid resources, in such a way that applications are conveniently, efficiently and effectively executed. The LCG (LHC – Large Hadron Collider – Computing Grid) Computing Element (LCG-CE) is responsible for submitting jobs to the underlying local cluster of Worker Nodes (WNs). Storage Elements (SEs) are responsible for data storage and management, while the LCG File Catalogue (LFC) offers a hierarchical view of files to users, with a UNIX-like client interface. From the security perspective, the Virtual Organization Membership Service (VOMS) is a full-fledged Attribute Authority, whose job is to assign attributes like group membership and role ownership to the members of a Virtual Organization (VO), so that other Grid services can make informed decisions based on those attributes, with levels of granularity ranging from extremely coarse to extremely fine [13–19]. At the time of writing, the DORII e-Infrastructure consisted of nine sites offering computational and storage resources distributed among the partners of the project. Table 1 shows the sites that support the catch-all DORII VO, where most of the DORII applications have been deployed. Table 2 presents the instruments that have been deployed in the DORII infrastructure and the corresponding applications that use them. Three scientific communities are involved: (a) environmental observation and monitoring; (b) earthquake engineering; and (c) experimental science.
Table 2 Deployed applications and instruments in the DORII e-Infrastructure

Application   VO           Instrument(s)             Instrument Element URL                                                       VCR used
OCOMMOON      vo.dorii.eu  Glider, Float             http://eva.ogs.trieste.it:8443/testIE2.3/services/IEservice                  http://adamo.ogs.trieste.it:8080
HORUS Bench   vo.dorii.eu  Digital Cameras           http://puerpc79.caminos.unican.es:8443/testIE                                http://puerpc79.caminos.unican.es:8080
SAXS          gridats      SAXS Imaging Detector     http://elettraie.grid.elettra.trieste.it:8443/testIE/services/IEService      http://lights.grid.elettra.trieste.it
SYRMEP        gridats      SYRMEP Imaging Detector   http://elettraie.grid.elettra.trieste.it:8443/demoSecIE/services/IEService   http://lights.grid.elettra.trieste.it
NCSS/EEWS     vo.dorii.eu  Actuators, Strain Gauges  http://ie-01.eucentre.it:8443/InstrumentElement/services/IEService           https://dorii-vcr.grid.elettra.trieste.it
SMIWR         vo.dorii.eu  CTD, Optical Sensors      http://doriiie01.ifca.es                                                     https://dorii-vcr.grid.elettra.trieste.it

3 Monitoring the Network and the Infrastructure

An advanced network monitoring infrastructure is implemented in the DORII network, with components installed both at the DORII application sites and at the DORII Grid sites. The DORII network monitoring infrastructure has been designed and deployed so as to identify end-to-end connectivity problems, service interruptions and bottlenecks, and to verify whether the Quality of Service (QoS) performance metrics specified by the application requirements are within acceptable limits. Since each site is connected to its national research and education network (NREN) and interconnected with the other sites through the GÉANT2 network, end-to-end traffic is monitored and network statistics are collected. Special attention has been given to the distributed nature of the DORII network, which comprises (a) the Local Area Networks of the participating institutions, including highly heterogeneous data collection segments (sensor networks, satellite links, ADSL, high-speed data transfer), (b) the corresponding NRENs, providing access to each national research network and to the Internet, and (c) GÉANT, the backbone interconnecting all the NRENs. For this purpose, multi-domain traffic monitoring tools have been installed; they are described in detail in the following sub-sections. It is important to note that part of the DORII network, including the applications' sites, is IPv6-enabled; thus, end-to-end native IPv6 connectivity is available for some DORII applications and DORII Grid sites. The network monitoring platform deployed for the DORII project consists of the following tools:
• Smokeping, for network latency measurement
• Pathload, for the estimation of the available bandwidth along a network path
• SNMP-based Web applications, for monitoring network interface utilization
3.1 Smokeping

Smokeping [20] is a software tool for measuring network latency. More specifically, a Smokeping probe sends test packets out to the network and measures the time they need to travel to a target host and back. RRDtool (Round-Robin Database tool) [21] is used to maintain a long-term data store of latency measurements, and the data are presented on the web by means of a CGI (Common Gateway Interface) with some AJAX (Asynchronous JavaScript and XML) capabilities for interactive graph exploration. In the framework of the DORII project, Smokeping is used in master/slave mode: Smokeping probes (slaves) run remotely and perform latency measurements from multiple locations to the target hosts. As shown in Fig. 2, Smokeping has been deployed as follows:
• The Smokeping master is located at CNIT [22]; it maintains a configuration file with a specific section for each slave, and it stores and presents all monitoring data collected by the slaves. The slaves connect via HTTP to the server monitor2.cnit.it with a username and a password that must be present in the password file on the master server.
• Remote probes (Smokeping slaves) have been installed at the DORII partners' sites EUCENTRE, ELETTRA, GRNET, CSIC-IFCA, OGS and PSNC. If the authentication phase with the master is successful, each probe retrieves the configuration file from the Smokeping master and, based on the settings it contains (e.g., measurement utility, target host address, measurement length and period), performs latency measurements and sends the results back to the master server over HTTP.
In the DORII network infrastructure, the following targets have been identified: Computing Elements (CEs), Storage Elements (SEs), Instrument Elements (IEs), and the remote sites' Access Gateways (AGs).
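The probe-side measurement loop can be illustrated with a minimal, self-contained sketch. This is plain Python, not Smokeping itself, and all names are hypothetical: a probe repeatedly measures the round-trip time to a target (here a TCP connect to a locally started listener standing in for a remote host) and keeps all samples, as a master would before archiving them in an RRD.

```python
import socket
import statistics
import threading
import time

def start_dummy_target(host="127.0.0.1"):
    """Stand-in for a remote target host; returns (server socket, port)."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((host, 0))           # let the OS pick a free port
    srv.listen(5)
    port = srv.getsockname()[1]

    def accept_loop():
        while True:
            try:
                conn, _ = srv.accept()
                conn.close()
            except OSError:       # server socket closed: stop accepting
                break

    threading.Thread(target=accept_loop, daemon=True).start()
    return srv, port

def probe_latency(host, port, samples=5):
    """One Smokeping-style round: several probes, keep every RTT sample (ms)."""
    rtts = []
    for _ in range(samples):
        t0 = time.perf_counter()
        with socket.create_connection((host, port), timeout=2):
            pass                  # connection established: round trip done
        rtts.append((time.perf_counter() - t0) * 1000.0)
    return rtts

srv, port = start_dummy_target()
rtts = probe_latency("127.0.0.1", port)
srv.close()
print(f"median RTT: {statistics.median(rtts):.3f} ms over {len(rtts)} samples")
```

Keeping all samples of a round, rather than a single average, is what lets Smokeping render its characteristic "smoke" bands of latency spread.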
Fig. 2 Smokeping and Pathload deployment for the DORII project infrastructure
3.2 Pathload

Pathload [23] is a monitoring tool that estimates the available bandwidth of a network path. The basic idea behind Pathload is that the one-way delays of a periodic packet stream show an increasing trend when the stream rate is larger than the available bandwidth. Pathload is based on a client–server architecture and consists of two main components:
• pathload_snd, which listens on TCP port 55002 and acts as a traffic generator
• pathload_rcv, which starts a Pathload session and acts as a traffic receiver
Pathload has been customized for the DORII project. In addition to the above components, some scripts have been introduced to monitor the status of the sender and receiver processes and to automatically export the measurement data collected by the receiver, via HTTP, to the management station located at CNIT, where the data are stored in an RRD database. In the DORII project, Pathload has been installed as follows (see Fig. 2):
• Pathload sender: at each site hosting CEs and/or SEs of the DORII e-Infrastructure (GRNET, PSNC, CSIC-IFCA)
• Pathload receiver: at each site where IEs are deployed (EUCENTRE, OGS, ELETTRA, UC, etc.) and, therefore, DORII applications are running
This way, the bandwidth available from the sites hosting CEs and SEs to the sites with applications (IEs and VCR – Virtual Control Room) can be estimated.
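Pathload's core inference, that one-way delays trend upward when the probing rate exceeds the available bandwidth, can be sketched with a simplified pairwise-comparison trend test. The function name, the 0.55 threshold and the sample delay series below are illustrative choices, not Pathload's actual implementation.

```python
def increasing_trend(delays, threshold=0.55):
    """Pairwise comparison test: fraction of consecutive one-way-delay
    pairs that increase. A high fraction suggests the probing stream
    rate exceeds the available bandwidth on the path."""
    if len(delays) < 2:
        return False
    increases = sum(1 for a, b in zip(delays, delays[1:]) if b > a)
    return increases / (len(delays) - 1) > threshold

# A stream sent faster than the available bandwidth queues up steadily:
congested = [1.0, 1.4, 1.9, 2.3, 3.0, 3.6]   # one-way delays (ms), growing
idle      = [1.1, 1.0, 1.1, 0.9, 1.0, 0.9]   # one-way delays (ms), no trend
print(increasing_trend(congested), increasing_trend(idle))  # → True False
```

The real tool iterates this test over many stream rates, binary-searching for the highest rate that does not induce a trend; that rate is the available-bandwidth estimate.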
3.3 SNMP-Based Network Monitoring

Various applications exist to collect and consolidate network usage information. At a basic level, such applications (also called managers) use SNMP to read statistics from each monitored device (router or host) on which an SNMP agent is configured and running. A standard Management Information Base (MIB) provides counters of the number of datagrams and bytes sent and received on each interface of a device, as well as the number of packets discarded because of congestion. An SNMP application can periodically poll each device and convert the returned information into a view of usage across the whole network. SNMP also helps to identify network interface failures or outage conditions. In the framework of DORII, SNMP is required to be enabled on IEs, CEs, SEs and routers. Data are collected by an SNMP manager and interfaced with a Web server through ad-hoc CGI programs.
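The arithmetic a manager applies to two successive interface-counter samples can be sketched as follows; the counter readings are hypothetical, and a real poller would fetch them via SNMP (e.g., ifInOctets from the standard interfaces MIB) rather than hard-code them. Utilization is the byte delta converted to bits per second, divided by the interface speed, with 32-bit counter wrap handled explicitly.

```python
COUNTER32_MAX = 2**32

def utilization(octets_t0, octets_t1, interval_s, if_speed_bps):
    """Fraction of link capacity used between two polls of an octet counter.
    Handles a single wrap of a 32-bit Counter32 between polls."""
    delta = octets_t1 - octets_t0
    if delta < 0:                      # Counter32 wrapped between the polls
        delta += COUNTER32_MAX
    bits_per_second = delta * 8 / interval_s
    return bits_per_second / if_speed_bps

# Two polls 300 s apart on a 100 Mb/s interface (hypothetical readings):
u = utilization(1_200_000_000, 1_575_000_000, 300, 100_000_000)
print(f"{u:.2%}")  # 375e6 octets * 8 / 300 s = 10 Mb/s -> 10.00%
```

Polling faster than the wrap period of the counter at line rate (or using 64-bit counters where available) is what keeps the wrap correction valid.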
3.4 Nagios for the Monitoring of Computational, Storage and Instrument Resources

Nagios [24] is an open-source monitoring system providing comprehensive and scalable monitoring of all mission-critical infrastructure components, including applications, services, operating systems, system metrics, network protocols and infrastructure. Nagios is integrated in the monitoring framework of the DORII e-Infrastructure, providing information on problems and incidents related to the computational, storage and instrument resources of this infrastructure, and monitoring services such as the CE, SE, BDII, WMS, and IE.
4 Performance Evaluation Procedure and Monitoring of Selected DORII Applications

To evaluate how the behaviour of DORII applications is affected by the network performance, the evaluation procedure illustrated in Fig. 3 has been defined. The basic idea behind our approach is to infer the impact of the network on the performance of the applications by measuring QoS metrics at the network layer and correlating them with metrics observed at the application level (Quality of Experience, QoE).
Fig. 3 Performance evaluation procedure
More specifically, while each application under test is running, network statistics are collected: the network-monitoring platform provides delay and bandwidth measurements between specific pairs of DORII sites, whereas the IEs, CEs and SEs are monitored by means of SNMP. At the application level, for each step, the end-user measures the performance metrics (e.g., the overall execution time) that have to be optimized to improve the QoE from the user's perspective.
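The correlation step of this procedure can be sketched as follows: given paired observations of a network-layer metric (e.g., per-run average latency) and an application-layer metric (e.g., overall execution time), a sample correlation coefficient indicates how strongly the network drives the observed QoE. The paired values below are purely illustrative, not measurements from this chapter.

```python
import math

def pearson(xs, ys):
    """Sample Pearson correlation between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Illustrative paired samples: average latency (ms) vs execution time (s)
latency_ms  = [10, 20, 40, 60, 80]
exec_time_s = [100, 112, 150, 205, 260]
r = pearson(latency_ms, exec_time_s)
print(f"r = {r:+.2f}")
```

A value of r close to +1 would suggest that the application's execution time is dominated by the network path, whereas r near 0 points at computation or middleware overhead instead.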
4.1 First Case Study: EEWS (Earthquake Early Warning System)

The EEWS (Earthquake Early Warning System) aims at recording seismic data from sensors, possibly in real time, and at processing them in order to extract the time histories of ground speed, ground acceleration and displacement, the amplitude spectrum and the response spectrum. All the operations are performed remotely and on the grid, employing the DORII infrastructure. The VCR plays the role of the user interface: all the actions are performed and all the resources are accessed through this web portal (Fig. 4). The IE is located at EUCENTRE (Pavia, Italy) and hosts the Instrument Manager (IM) that accesses the server collecting data from the sensors. This node is in Genoa, Italy, while the seismic sensors are spread over the Liguria Region.
Fig. 4 Earthquake Early Warning System data storage and processing
Measurements from each channel are time-stamped and saved locally on the IE in a separate file, and then moved to an SE. Finally, a CE retrieves each file from the SE and performs the computation. This task is carried out by a single binary, statically compiled and specified in the Input Sandbox. Since the same computation is repeated for each input file, the job is parametric, the parameters being the file names. The JDL (Job Description Language) file characterizing the job is created by a VCR application, a Jython script that customizes a given template. The user is asked only to select the input folder on the right SE. The output is downloaded to the home folder on the VCR. An alternative approach is offered by the Workflow Manager, a graphical, user-friendly interface for specifying the parameters. In the EEWS application, the user (preferably a scientist rather than a developer) is asked to join the reference VO and to handle his/her personal certificate by means of the VCR credential manager tool, obviously after having applied for access to the portal. Then he/she has to open the IE located at EUCENTRE and select the IM related to the seismic sensors from the drop-down list. Once the IM is turned on, the acquisition phase can be started. The first channel to be subscribed is chosen from a predefined list. A panel summarizes the status, showing the total number of subscribed channels, their names and the last value read, in meters per second. At any time, one or more channels can be added to or removed from the list. When the user is satisfied with the acquisition, the IM is stopped and all the files are moved to the target SE. Then the computational phase is started, and the user can choose either a VCR application or the Workflow Manager. The first solution is a semi-automatic procedure in which only a few fields have to be filled in, for example to specify the input folder.
The final result is a grid job launched and monitored from the same interface. The Workflow Manager achieves the same result by using a graphical representation of the resources.
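A parametric job of the kind described above can be expressed in a JDL file along the following lines. This is a hedged sketch only: the executable name, sandbox contents and channel file names are hypothetical, and the actual template customized by the VCR's Jython script may differ; the `_PARAM_` placeholder is gLite's substitution marker, replaced once per entry of `Parameters` to produce one child job per input file.

```
[
  JobType       = "Parametric";
  Executable    = "process_channel";     // statically compiled binary
  Arguments     = "_PARAM_";             // each child job processes one file
  Parameters    = {"channel1.dat", "channel2.dat"};
  InputSandbox  = {"process_channel"};
  StdOutput     = "std_PARAM_.out";
  StdError      = "std_PARAM_.err";
  OutputSandbox = {"std_PARAM_.out", "std_PARAM_.err"};
]
```

Submitting such a file to the WMS yields one parent job with one child node per parameter, which matches the two-child behaviour reported for the EEWS runs in Sect. 5.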
Fig. 5 Sensor Network communication channels
4.2 The Instruments and the Instrument Manager in EEWS

A set of seismic sensors is connected to a central point through the UDP protocol, by means of wired or wireless links (Fig. 5). Each device measures the ground speed along the three Cartesian directions, so each station broadcasts at least three channels, plus state-of-health information. All the data are gathered by a central server, which manages replicated and out-of-order packets. The reconstructed stream is stored and made available as a data service: the user can access the historical data series, specifying the starting point of the information flow and the time window of interest. If, on the contrary, a user or an application needs near-real-time access, the NAQS (Nanometrics Acquisition Service) is more suitable: provided that some parameters are properly tuned, the service can be configured to forward the original packets from the instruments to the application, minimising the delay. Data are organized in uniquely identified, sequenced and time-stamped packets, each made up of an odd number, from 1 to 255, of 17-byte bundles. This solution allows adapting the packet size to the network; however, once decided during the instrument's configuration phase, the number of bundles cannot be changed.
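The packet-size arithmetic implied above can be sketched as follows. This is a simple model of only what the text states (an odd bundle count between 1 and 255, 17 bytes per bundle), not the full Nanometrics packet format; function and constant names are ours.

```python
BUNDLE_BYTES = 17

def payload_size(num_bundles):
    """Payload size (bytes) of a data packet made of `num_bundles` bundles.
    The bundle count is fixed at configuration time and must be odd, 1..255."""
    if not (1 <= num_bundles <= 255) or num_bundles % 2 == 0:
        raise ValueError("bundle count must be an odd number from 1 to 255")
    return num_bundles * BUNDLE_BYTES

print(payload_size(1), payload_size(255))  # → 17 4335
```

The two printed values bound the tuning range: a few tens of bytes per packet for low-delay links, up to a few kilobytes where throughput matters more than latency.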
4.3 Second Case Study: Oceanographic and Coastal Observation and Modelling – Mediterranean Ocean Observing Network: An Integrated System from Sensors to Model Predictions (OPATM-BFM)

This application serves users belonging to the oceanographic modelling community: they work with numerical models and with their data to obtain information about the past, present or future state of the marine environment, or to study a
specific process over limited space or time scales. The tests consist of the following steps:
1. Registration: the user applies to the Italian Institute of Nuclear Physics (INFN) for a personal certificate.
2. Login: the user opens the DORII VCR page hosted by OGS and asks for an account.
3. Credential Management: once the user has both a valid certificate and a valid VCR account, he/she can upload the public and private keys to the VCR in order to create a valid proxy at each connection.
4. The workflow (from melisa.man.poznan.pl) may launch the simulation model with either (a) the most recent data or (b) historical data already stored on an SE.
(a) In this case, the workflow asks the user to select the SE and the path where the input data have to be uploaded from the IE (eva.ogs.trieste.it), the CE where the simulations have to be run and, finally, the SE and the path where the output data have to be downloaded. To optimize the performance of the application it is necessary to minimize the overall execution time. Therefore, from the end-user's perspective, the following metrics have to be measured:
• Time to upload the input data from IE to SE
• Time to download the input data from SE to CE
• Time to process the input data at CE
• Time to upload the output data from CE to SE
(b) In this case, the workflow asks the user to select the SE and the path where the input data are located, the CE where the simulations have to be run and, finally, the SE and the path where the output data have to be downloaded. Again, to optimize the performance of the application it is necessary to minimize the overall execution time; from the end-user's perspective, the following metrics have to be measured:
• Time to download the input data from SE to CE
• Time to process the input data at CE
• Time to upload the output data from CE to SE
5. Logout
The logic diagram depicted in Fig. 6 shows how data are transferred among the grid resources used by OPATM-BFM for retrieving the input data, running the simulations and archiving the output data.
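Under this model, the end-to-end time perceived by the user is simply the sum of the stage times. The sketch below composes case (a) and case (b) from the individual transfer and processing durations reported later in Sect. 5 (Tables 7–10); note that those measurements were actually taken on different days, so treating them as the stages of one run is an illustration, not a reported result.

```python
def overall_time(stages):
    """Overall execution time (s) as the sum of the measured stage times."""
    return sum(stages.values())

# Case (a): fresh data, so the IE->SE upload is part of the run
case_a = {"upload IE->SE": 892, "download SE->CE": 159,
          "process at CE": 360, "upload CE->SE": 1240}
# Case (b): input already on the SE, so the first stage disappears
case_b = {k: v for k, v in case_a.items() if k != "upload IE->SE"}

print(overall_time(case_a), overall_time(case_b))  # → 2651 1759
```

The breakdown makes the optimization target explicit: in both cases the data movement stages, not the computation, dominate the total.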
5 Experimental Results

We consider EEWS first. The user task list previously described represents the starting point for the test bed set-up employed in this work. Moreover, network performance over the grid infrastructure is monitored during the entire life cycle
Fig. 6 Flow diagram of OPATM-BFM: (1) input data transfer from the IE/IM (eva.ogs.trieste.it, at OGS, fed by the CINECA repository) to the SE (se01.kallisto.hellasgrid.gr, at GRNET); (2) data retrieval from the SE to the CE (ce.reef.man.poznan.pl, at PSNC); (3) data processing at the CE; (4) output data transfer back to the SE. Monitored network metrics: B = bandwidth, D = delay; the CE, SE and IE have SNMP-enabled interfaces
of the application execution. In our experiments, we skipped the acquisition phase, since the server gathering the sensors' data is not part of the infrastructure, unlike the IE sending the query. The required network resources are very limited: in fact, a single station channel only needs a few kbps. The file containing the initial data set for the computation is about 115 MB and corresponds to measurement data coming from two channels, collected over a whole day. The archive is stored on a server at EUCENTRE that acts as IE and VCR; then, it is transferred to an SE at GRNET (se01.isabella.grnet.gr). The average throughput is about 7.5 Mb/s. The analysis is carried out by a parametric job, whose parameters correspond to different settings for the two seismic stations being monitored, so that a single execution of the application produces two child nodes. The job is launched three times, with target CEs located at different sites: GRNET (ce01.athena.hellasgrid.gr), PSNC (ce.reef.man.poznan.pl) and IFCA-CSIC (egeece01.ifca.es). Table 3 reports the time necessary to upload the file from the SE to each CE; the last column reports the latency measured by using Smokeping. If the user chooses a local CE (ce01.athena.hellasgrid.gr), the latency is on the order of a few ms and is therefore not reported in the table; in the other two cases the latency is also very low, less than 100 ms. Figure 7 shows the output of Smokeping from GRNET to DORII CEs over 24 h of observation. The job output consists of files whose size is comparable with that of the input file. Table 4 reports the computation time. Finally, the output files are retrieved by using the VCR at EUCENTRE. Table 5 reports the time necessary to download the output files stored by each CE from EUCENTRE. The last two columns contain the available bandwidth (estimated
Table 3 Upload time from se01.isabella.grnet.gr to each CE

CE                         Upload start time    Upload time (s)   Throughput (Mb/s)   Average latency (ms)
ce01.athena.hellasgrid.gr  27/11/2009, 12.59    2.0               478.4               –
ce01.athena.hellasgrid.gr  27/11/2009, 12.59    2.1               460.2               –
ce.reef.man.poznan.pl      27/11/2009, 13.07    351               2.6                 59.2
ce.reef.man.poznan.pl      27/11/2009, 13.04    241               3.9                 59.2
egeece01.ifca.es           27/11/2009, 13.46    95                9.8                 94.3
egeece01.ifca.es           27/11/2009, 13.46    82                11.7                94.3
Fig. 7 Latency measurements from GRNET to DORII CEs
Table 4 Computation time

CE                         Computation start time   Computation time (s)
ce01.athena.hellasgrid.gr  27/11/2009, 12.59        26
ce01.athena.hellasgrid.gr  27/11/2009, 09.18        65
ce.reef.man.poznan.pl      27/11/2009, 13.07        16
ce.reef.man.poznan.pl      27/11/2009, 13.04        47
egeece01.ifca.es           27/11/2009, 13.48        22
egeece01.ifca.es           27/11/2009, 13.48        45
by Pathload) and the average latency (measured by Smokeping) from each CE to EUCENTRE VCR. For example, Fig. 8 shows the behaviour of the available bandwidth estimated from GRNET to EUCENTRE in a measurement period of about 2 h.
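The relation between file size, throughput and transfer time used throughout this section can be sketched as follows (decimal megabytes assumed, consistently with the reported throughputs); the ~123 s figure it yields for the 115 MB archive transfer at 7.5 Mb/s is implied by those numbers, not separately reported in the text.

```python
def transfer_time_s(size_mb, throughput_mbps):
    """Time (s) to move size_mb megabytes at throughput_mbps megabits/s."""
    return size_mb * 8 / throughput_mbps

# 115 MB archive moved from EUCENTRE to the SE at GRNET at ~7.5 Mb/s
print(f"{transfer_time_s(115, 7.5):.0f} s")  # → 123 s
```

The same relation checks the table entries, e.g. 115 MB at 2.6 Mb/s gives about 354 s, in line with the 351 s upload reported for ce.reef.man.poznan.pl in Table 3.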
Table 5 Download time from each CE to EUCENTRE VCR

CE                         Getting output start time   Download time (s)   Throughput (Mb/s)   Available bandwidth (Mb/s)   Average latency (ms)
ce01.athena.hellasgrid.gr  27/11/2009, 13.18           36                  26.8                92                           49.2
ce01.athena.hellasgrid.gr  27/11/2009, 13.19           32                  30.2                92                           49.2
ce.reef.man.poznan.pl      27/11/2009, 13.22           33                  29.3                102                          29.9
ce.reef.man.poznan.pl      27/11/2009, 13.23           89                  10.9                102                          29.9
egeece01.ifca.es           27/11/2009, 14.16           32                  30.2                96                           46.1
egeece01.ifca.es           27/11/2009, 14.17           32                  30.2                96                           46.1
Fig. 8 Pathload output: available bandwidth estimated from GRNET to EUCENTRE
It is worth highlighting that the throughput is significantly lower than the available bandwidth: the communication protocols are not efficient enough to exploit the capacity of the communication channel. Finally, Table 6 reports the overall execution time of the application. As the table clearly shows, the time spent exchanging data may significantly affect the performance of the application and represents (except in the second case) the major component of the overall execution time. We now turn to the measurements performed on the OPATM-BFM application. First, the input data file (a tar archive actually containing eight different files) is transferred from the IE to the SE. Table 7 reports the data transfer time and the average throughput at the application level, whereas Fig. 9 shows the input data rate at se01.kallisto.hellasgrid.gr measured by using SNMP.
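The gap between achieved throughput and available bandwidth can be expressed as a path-efficiency ratio; the sketch below (our own metric, not one defined in the text) uses the first row of Table 5, a 26.8 Mb/s download against 92 Mb/s of estimated available bandwidth.

```python
def path_efficiency(throughput_mbps, available_mbps):
    """Fraction of the estimated available bandwidth actually achieved."""
    return throughput_mbps / available_mbps

# First row of Table 5: output download from ce01.athena.hellasgrid.gr
eff = path_efficiency(26.8, 92.0)
print(f"{eff:.0%} of the available bandwidth was used")
```

Ratios around 30%, as here, are what motivates the remark above: the bottleneck is protocol behaviour (e.g., single-stream TCP dynamics), not the network path itself.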
Table 6 Total time

CE                         Total time (s)   Computation time (s)   Communication time (s)
ce01.athena.hellasgrid.gr  64               26                     38
ce01.athena.hellasgrid.gr  99               65                     34
ce.reef.man.poznan.pl      400              16                     384
ce.reef.man.poznan.pl      377              47                     330
egeece01.ifca.es           149              22                     127
egeece01.ifca.es           159              45                     114

Table 7 From IE/IM to SE (eight files, total size: 8 GB)

SE                           Start upload time    Data transfer time
se01.kallisto.hellasgrid.gr  Jun 17 2010, 10:30   892 s (8.9 MB/s)
Fig. 9 SNMP Input/Output rate at se01.kallisto.hellasgrid.gr on June 17, 2010

Table 8 From SE to CE (one file, size: 0.9 GB)

CE                     Start download time   Data transfer time
ce.reef.man.poznan.pl  Jun 23 2010, 10:30    159 s (5.9 MB/s)
Next (one week later), one of the files included in the archive was transferred from the SE (se01.kallisto.hellasgrid.gr) to the CE at PSNC (ce.reef.man.poznan.pl) for processing. Table 8 reports the time necessary to download the file and the average throughput at the application level, whereas Fig. 10 shows the available bandwidth between PSNC and GRNET measured during the experiment by using Pathload. It is worth highlighting that the application is not able to 'fill the pipe' by using all the available bandwidth, possibly owing to limitations of GridFTP, which is used to perform the file transfer. Table 9 reports the processing time at the CE. Both the processing time and the size of the output file depend on the length of the prediction time.
Fig. 10 Available bandwidth between GRNET and PSNC
Table 9 Processing time at CE

CE                     Start computation time   Processing time
ce.reef.man.poznan.pl  Jun 23 2010, 10:33       360 s

Table 10 From CE to SE (one file, size: 4.6 GB)

SE                           Start upload time    Data transfer time
se01.kallisto.hellasgrid.gr  Jun 23 2010, 10:40   1240 s (3.7 MB/s)
Fig. 11 SNMP Input/Output rate at se01.kallisto.hellasgrid.gr on June 23, 2010
Finally, the output data file (4.6 GB) is transferred to the SE (again, se01.kallisto.hellasgrid.gr). Table 10 reports the data transfer time and the average throughput, which is much lower than the available bandwidth (see Fig. 10). Figure 11 shows the input and output data rates measured at se01.kallisto.hellasgrid.gr; both file transfers (from and to the SE) are visible. SNMP statistics are collected at the network interface every 5 min.
6 Conclusions

The DORII project has created and managed a Virtual Organization for the deployment of applications in a number of different scientific fields. Applications that were traditionally executed locally have been ported to a grid environment, where they can discover and request computational and storage resources, and carry out the data processing operations they require in an efficient and timely fashion. More importantly, the DORII middleware enables scientists to expose and access their instrumental resources on the grid, by means of universal abstractions that apply to diverse e-Science domains. At the same time, a ubiquitous monitoring service has been deployed, allowing the continuous evaluation of the distributed system's performance and the discovery of possible problems and bottlenecks. The architecture and the main features of the network-monitoring infrastructure have been presented in this chapter, together with two examples of performance monitoring of selected applications.

Acknowledgement This work was supported by the European Commission under the DORII project (contract no. 213110).
References

1. I. Foster, "Service-oriented science", Science, vol. 308, no. 5723, pp. 814–817, May 2005.
2. The RINGrid Consortium, "Whitepaper on Remote Instrumentation", Computational Methods in Science and Technol., vol. 15, no. 1, pp. 119–138, 2009.
3. F. Davoli, N. Meyer, R. Pugliese, S. Zappatore, Eds., Grid-Enabled Remote Instrumentation, Springer, New York, NY, 2008; ISBN 978-0-387-09662-9.
4. F. Davoli, N. Meyer, R. Pugliese, S. Zappatore, Eds., Remote Instrumentation and Virtual Laboratories, Springer, New York, NY, 2010; ISBN 978-1-4419-5595-1.
5. DORII Project Home Page http://www.dorii.eu
6. D. Adami, A. Cheptsov, F. Davoli, I. Liabotis, R. Pugliese, A. Zafeiropoulos, "The DORII project test bed: Distributed eScience applications at work", Proc. 1st Internat. Workshop on Pervasive Computing Systems and Infrastructures (PCSI), Washington, DC, April 2009.
7. ESFRI Home Page http://cordis.europa.eu/esfri/
8. EGEE Project Home Page http://www.eu-egee.org
9. DEISA Project Home Page http://www.deisa.eu
10. gLite Home Page http://glite.web.cern.ch/glite/
11. E. Laure, S. M. Fisher, A. Frohner, C. Grandi, P. Kunszt, A. Krenek, O. Mulmo, F. Pacini, F. Prelz, J. White, M. Barroso, P. Buncic, F. Hemmer, A. Di Meglio, A. Edlund, "Programming the Grid with gLite", Computational Methods in Science and Technol., vol. 12, no. 1, pp. 33–45, 2006.
12. GRIDCC Project Home Page http://www.gridcc.org
13. A. Cheptsov, B. Koller, D. Kranzlmüller, T. Koeckerbauer, S. Mueller, N. Meyer, F. Davoli, D. Adami, S. Salon, P. Lazzari, "Remote Instrumentation Infrastructure for e-Science. Approach of the DORII project", Proc. IEEE Internat. Workshop on Intelligent Data Acquisition and Advanced Computing Systems: Technology and Applications (IDAACS 2009), Rende (Cosenza), Italy, Sept. 2009.
14. E. Frizziero, M. Gulmini, F. Lelli, G. Maron, A. Oh, S. Orlando, A. Petrucci, S. Squizzato, S. Traldi, "Instrument Element: a new Grid component that enables the control of remote instrumentation", Proc. 6th IEEE Internat. Symp. on Cluster Computing and the Grid Workshops (CCGRIDW'06), Singapore, May 2006.
15. R. Ranon, L. De Marco, A. Senerchia, S. Gabrielli, L. Chittaro, R. Pugliese, L. Del Cano, F. Asnicar, M. Prica, "A web-based tool for collaborative access to scientific instruments in cyberinfrastructures", in F. Davoli, N. Meyer, R. Pugliese, S. Zappatore, Eds., Grid Enabled Remote Instrumentation, Springer, New York, NY, 2008, pp. 237–251.
16. H. Kornmayer, M. Stümpert, M. Knauer, P. Wolniewicz, "g-Eclipse – an integrated workbench tool for Grid application users, Grid operators and Grid application developers", Proc. Cracow Grid Workshop '06, Cracow, Poland, Oct. 2006.
17. M. Okoń, D. Kaliszan, M. Lawenda, D. Stoklosa, T. Rajtar, N. Meyer, M. Stroinski, "Virtual Laboratory as a remote and interactive access to the scientific instrumentation embedded in Grid environment", Proc. 2nd IEEE Internat. Conf. on e-Science and Grid Computing (e-Science'06), Amsterdam, The Netherlands, Dec. 2006.
18. T. Köckerbauer, M. Polak, T. Stütz, A. Uhl, "GVid – Video Coding and Encryption for Advanced Grid Visualization", Proc. 1st Austrian Grid Symp., Schloß Hagenberg, Austria, Dec. 2005.
19. R. Keller, M. Liebing, "Using PACX-MPI in MetaComputing applications", Proc. ASIM 2005 – 18th Symp. on Simulation Technique (Symposium Simulationstechnik), Erlangen, Germany, Sept. 2005.
20. Smokeping Home Page http://oss.oetiker.ch/smokeping
21. RRDtool Home Page http://oss.oetiker.ch/rrdtool/index.en.html
22. DORII Monitoring Platform Home Page http://monitor2.cnit.it
23. Pathload Home Page http://www.cc.gatech.edu/fac/Constantinos.Dovrolis/bw-est/pathload.html
24. Nagios Home Page http://www.nagios.org
Index
A
Abstract data access layer API (ADALAPI), 248–249
AgentService Suite, 132–134
Agents Platform Federation, 140–141
Andrew file system (AFS), 243
Average degrade percentage (ADP), 172
B
Bid management (BidMan), 167
C
Candidate resources pool, 183
CERN Advanced STORage manager (CASTOR), 245
D
Damping ratio, 49
Data browser
  architecture, 250–252
  status, 250
Datalogger, 66
Data mining and exploration (DAME) project
  astrophysics, 270–272
  distributed data mining design
    architecture, 274–277
    distributed environment, 277–279
  scientific and technological results, 280–283
  soft computing applications, 279–280
  X-Informatics, 272–273
Data storage manager, 264
dCache system, 245, 259
dCap protocol, 247
Defence in-depth strategy, VLAB
  access platform
    cookies, 91
    regular expressions, 92
    security through obscurity, 90
  decomposition, infrastructure, 83
  instruments layer, 83–84
  multi-layer security components
    MetaIDS, 93–95
    SARA system, 95–97
  platform layer, 85–90
  security considerations, 97–99
Deployment of the Remote Instrumentation Infrastructure (DORII), 106
DIET. See Distributed Interactive Engineering Toolbox (DIET)
Discovery performer (DiPe), 167
Disk pool manager (DPM), 245
Distributed computing infrastructures (DCIs), 242
Distributed Interactive Engineering Toolbox (DIET), 230
DORII. See Deployment of the Remote Instrumentation Infrastructure (DORII)
DORII instrument element
  aim and advantages, 16
  applications (see Remote instrumentation services)
  experimental setup
    Fujitsu-Siemens, 19
    grid-node, 19
    java application, 20
    mock, 20
  overall architecture
    GRIDCC, 17
    grid middleware, 17
F. Davoli et al. (eds.), Remote Instrumentation for eScience and Related Aspects, DOI 10.1007/978-1-4614-0508-5, © Springer Science+Business Media, LLC 2012
DORII instrument element (cont.)
  JMS broker, 19
  oscilloscope IM, 17
  performance evaluation
    average response time, 20
    interpolating functions, 21
    polling mode, 20
Dynamic Voltage and Frequency Scaling (DVFS), 26
E
Early warning system, 43
Earthquake Early Warning System (EEWS), 293–295
Economic grid environments, 173
Enabling grids for e-science (EGEE), 230
Energy logs, 32–35
Environmental science
  automated phenology observations, 123
  kiwi platform
    access layer, 121
    observation workflows, 79
  phenology observations
    phenophases, 122
    technical research, 122
  remotely controlled reflex camera, 126–127
  scope
    air pollution, 119
    meteorology, 119
    water management, 119
  surveillance equipment, pictures category, 123–125
Environmental science community, 61
Experimental science community, 61
ExpertGrid
  AgentService Suite, 132–134
  Agents Platform Federation, 140–141
  e-learning, 131–132
  grid computing, 130–131
  information manager, 138–139
  motivations and challenges, 134–135
  objectives, 135–136
  project outline, 136–138
  scenario builder, 139–140
F
File transfer protocol (FTP), 246
FiVO, 258
G
General parallel file system (GPFS), 243
Gfarm project, 245
Giggle, architecture of, 231
Globus toolkit, 167
Gluster file system (GlusterFS), 243
Green Grid’5000
  distributed infrastructure with energy sensors
    Lyon site, 30
    OAR, 27
    packets, 29
    resource management system (RMS), 26
    wattmeters, 29
  energy consumption
    energy logs, 32–35
    ShowWatts, 31–32
    web-based visualisation, 32
  goals, data management technique, 26
  use cases
    administrator’s use case, 40
    energy profile, applications, 36–37
    grid middleware, 38–40
    improve distributed applications, 37–38
Grid e-learning. See Enforcing team cooperation (ETC) project
GridFTP protocol, 8, 247
Grid middleware, 38–40
GridSolve on gLite-infrastructure
  giggle, 230
  gridsolve client–agent–server system, 229
  network address translation, 230
  gridsolve/gLite vs MPI Support
    giggle, architecture of, 231–232
    MPI support, 231–234
  interactive European grid and gLite
    local area networks, 229
    open MPI, 229
    PACX-MPI, 229
    wide area networks, 229
  motivation
    gLite extensions, 228
    grid computing, 228
  state of the art
    enabling grids for e-science, 230
    OGF standard, 230
GridSolve RPCs approach, 234
Grid Web Portal (VCR), 4
H
Hadoop, 243
I
Information managers (IMs)
  DORII project, 17
  ExpertGrid, 138–139
  and instruments, 62
  VCR, 45
Integrated rule-oriented data system (iRODS), 244
IT security
  challenges, 76
  Internet Crime Complaint Center, 75–76
  principles, 77
  STRIDE model, 77
J
Java Message Service (JMS)
  instrument element, 62
  JMeter, 17
  subscribing mode, 20
Job requirement manifest (JRM), 166
K
Karlsruhe Institute of Technology (KIT), 240
KIT. See Karlsruhe Institute of Technology (KIT)
Kiwi Platform
  environmental science, 120–122
  VLab system, 79, 97
Knowledge discovery in databases (KDD). See Data mining and exploration (DAME) project
L
LabVIEW, 66
Large Scale Data Facility (LSDF)
  accessing, abstract data access layer API, 248–249
  access technologies, 246–247
  data browser, 249–252
  data management systems, 244–246
  development, 241
  file systems, 242–244
  future work, 254–255
  KIT, 241–242
  needs, 241
  performance analysis
    test description, 252
    test environment, 253
    test results, 253–254
  schematic design, 242
  screenshot data browser, 251
Logs on demand tool, 37
M
Mass storage systems (MSS), 262
MATLAB vs. API, 178
Maximum Transmission Unit (MTU), 186
Message passing interface (MPI)
  DORII, 105
  gridsolve service, 233
  implementation, 230
  mpiexec-command, 233
  program step to run, 234
  use of, 232
Million of Instructions per Second (MIPS), 180
MPI. See Message passing interface (MPI)
MSS. See Mass storage systems (MSS)
Multiple Resources Allocation Three Dimensional (MRA3D)
  algorithm
    flow diagram, 184
    performance indexes, 181–183
    resource allocation scheme, 183, 185
  computing element (CE), 178
  instrument element (IE), 178
  minimization problem, 179
  multi-resource scheduler (MRS), 179
  problem statement and grid reference model, 180–181
  remote instrumentation services (RISs), 177
  simulations
    results, 187–191
    setup, 185–187
Multisensor mobile system integration
  DORII framework
    instrument element (IE), 61–62
    virtual control room, 62–63
  e-infrastructure integration, 71
  inland waters and reservoirs, 63–64
  networking and communications
    data security and backup protections, 68–71
    e-infrastructure network communication, 68
    land station communication, 68
  sensor platform
    land station, 67
    power system, 67
Multisensor mobile system integration (cont.)
  sensor integration, 66–67
  surface sensors, 65–66
  underwater sensors, 64–65
myDBIM, 71
myWINCHIM, 71
myWSIM, 71
N
Nagios, 292
Nanometrics protocol, 47
Network file system (NFS), 242
O
OntoStor, 262
OPATM-BFM call-graph, 111
P
Parallel virtual file system (PVFS), 243
Pathload tool, 291–292
Performance analysis
  e-infrastructure, 106
  MPI communication
    application time characteristics comparison, 113
    MPI improvement, 113
    MPI techniques, 106
  simulation model
    OPATM-BFM, 107
    use, 108
  tools and techniques
    application call-graph, 110
    valgrind’s distribution, 110
    vampirtrace advantage, 111
    vampirtrace libraries, 111
Platform yagi WIFI antenna, 68
PL-Grid virtual organizations
  data management challenges
    methods for storage performance, 263
    sample use case, 263–264
    storage monitoring, 262
    types, storage systems, 262
  data management process, 259
  implementation
    API function, 265
    Lustre file system, 264
  management, 260
Poisson process, 187
Polling mode, 20–23
PseudoRandom simulations, 187
PSGen cycle, 8
Q
QMC calculation, EGEE grid
  architecture
    computational strategy, 201–203
    grid performance, 203–206
  QMC=Chem program, 197–200
  technical details, 197
R
Random resource allocation, 191
Real usage percentage (RUP), 172
Remote instrumentation services
  conceptual view, 287
  earthquake engineering, environmental communities, 286
  e-infrastructure
    enabling grids for E-science (EGEE), 287
    European Strategy Forum on Research Infrastructure (ESFRI), 286
  experimental results
    computation time, 298
    EUCENTRE, 297
    fill the pipe, 90
    grid infrastructure, 86
    GRNET, PSNC, IFCA-CSIC sites, 291
    kallisto.hellasgrid.gr, 300
    prediction time, 300
  network monitoring infrastructure
    nagios, 292
    pathload, 291–292
    smokeping, 290–291
    SNMP-based network monitoring, 82
  performance evaluation procedure
    Earthquake Early Warning System (EEWS), 83–85
    OPATM-BFM, 85–86
Remotely controlled reflex camera, 285–287
Remote procedure calls (RPC), 178
Reservation percentage (RP), 108
Round trip time (RTT), 117
S
SAN. See Storage area network (SAN)
Scenario builder, ExpertGrid, 139–140
Seismic sensor network
  earthquakes, 43
  finite state machine, 51
  instrumentation
    data access protocol, 47
    naqsServer, 48
    private data stream, 48
  integration
    acquisition, 51
    error, 53
    on and off, 51
    stopped, 52
  proposed application, 48–50
  VCR and IE
    advantages, 44
    user interface, 45
Sensor integration scheme, 66
Server message block (SMB), 243
Service level agreement (SLA), 260
ShowWatts, 31–32
Simulation manager (SM), 164
Smokeping, 182, 290–291
SNMP-based network monitoring, 82
Soft real time system, 105
SoRT-Bids, 104
SoRTGrid framework
  experiment results, 172–174
  outline, 166–169
  simulation
    entities and behaviours, 170–171
    metrics, 172
    probabilities, 169–170
  soft real-time system, 169
SoRTSim
  motivating scenario, 163–164
  quality of service (QoS), 161, 162
  simulation
    implementation and interface, 165–166
    internal architecture, 165
  SoRTGrid framework
    experiment results, 172–174
    outline, 166–169
    simulation, 169–171
SSH file transfer protocol (SFTP), 247
Storage resource broker (SRB), 244
StoRM, 235
Synchrotron Radiation Facility Elettra
  beamlines and online processing, 5
  computation
    GridFTP protocol, 8
    PSGen cycle, 9
    SAN, 8
  distributed control system, 7
  instrument element (IE), 9
  tomography workflow, imaging, 5–6
  user interaction, 10–11
T
TANGO, 4
V
VCR. See Virtual control room (VCR)
Virtual computational grid
  grid generation
    productions working on one layer, 223–224
    productions working on two layers, 221–223
  grid representation, 217–220
  hierarchical graphs, 213–215
  layered architecture, 210–212
  layered graphs, 215–217
  simulation, 212–213
Virtual control room (VCR), 45
Virtual laboratory (VL)
  defence in-depth strategy
    access platform, 90–92
    decomposition, infrastructure, 83
    instruments layer, 83–84
    multi-layer security components, 93–97
    platform layer, 85–90
    security considerations, 97–99
  disadvantage, 79
  Kiwi Platform, 79–81
  parts, 78
  security issues, 81–82
  WfMS, 79
Virtual Organizations management architecture, 261
W
WIFI module, 67
WiMAX patch antenna, 69
Workflow Management System (WMS), 62
Worldwide LHC Computing Grid (WLCG), 186
X
Xrootd, 245
Xrootd/SE system, 245