Concurrent Systems Engineering Series Series Editors: M.R. Jane, J. Hulskamp, P.H. Welch, D. Stiles and T.L. Kunii
Volume 68 Previously published in this series: Volume 67, Communicating Process Architectures 2009 (WoTUG-32), P.H. Welch, H.W. Roebbers, J.F. Broenink, F.R.M. Barnes, C.G. Ritson, A.T. Sampson, G.S. Stiles and B. Vinter Volume 66, Communicating Process Architectures 2008 (WoTUG-31), P.H. Welch, S. Stepney, F.A.C. Polack, F.R.M. Barnes, A.A. McEwan, G.S. Stiles, J.F. Broenink and A.T. Sampson Volume 65, Communicating Process Architectures 2007 (WoTUG-30), A.A. McEwan, S. Schneider, W. Ifill and P.H. Welch Volume 64, Communicating Process Architectures 2006 (WoTUG-29), P.H. Welch, J. Kerridge and F.R.M. Barnes Volume 63, Communicating Process Architectures 2005 (WoTUG-28), J.F. Broenink, H.W. Roebbers, J.P.E. Sunter, P.H. Welch and D.C. Wood Volume 62, Communicating Process Architectures 2004 (WoTUG-27), I.R. East, J. Martin, P.H. Welch, D. Duce and M. Green Volume 61, Communicating Process Architectures 2003 (WoTUG-26), J.F. Broenink and G.H. Hilderink Volume 60, Communicating Process Architectures 2002 (WoTUG-25), J.S. Pascoe, P.H. Welch, R.J. Loader and V.S. Sunderam Volume 59, Communicating Process Architectures 2001 (WoTUG-24), A. Chalmers, M. Mirmehdi and H. Muller Volume 58, Communicating Process Architectures 2000 (WoTUG-23), P.H. Welch and A.W.P. Bakkers Volume 57, Architectures, Languages and Techniques for Concurrent Systems (WoTUG-22), B.M. Cook Volumes 54–56, Computational Intelligence for Modelling, Control & Automation, M. Mohammadian Volume 53, Advances in Computer and Information Sciences ’98, U. Güdükbay, T. Dayar, A. Gürsoy and E. Gelenbe Transputer and OCCAM Engineering Series Volume 45, Parallel Programming and Applications, P. Fritzson and L. Finmo Volume 44, Transputer and Occam Developments (WoTUG-18), P. Nixon Volume 43, Parallel Computing: Technology and Practice (PCAT-94), J.P. Gray and F. Naghdy Volume 42, Transputer Research and Applications 7 (NATUG-7), H. Arabnia
ISSN 1383-7575 ISSN 1879-8039
Communicating Process Architectures 2011 WoTUG-33
Edited by
Peter H. Welch University of Kent, UK
Adam T. Sampson University of Abertay Dundee, UK
Jan B. Pedersen University of Nevada, Las Vegas, USA
Jon Kerridge Edinburgh Napier University, UK
Jan F. Broenink University of Twente, the Netherlands
and
Frederick R.M. Barnes University of Kent, UK
Proceedings of the 33rd WoTUG Technical Meeting, 19–22 June 2011, University of Limerick, Ireland
Preface This thirty-third Communicating Process Architectures conference, CPA 2011, takes place at the University of Limerick, 19-22 June, 2011. It is hosted by Lero, the Irish Software Engineering Research Centre, and (as for CPA 2009) co-located with FM 2011 (the 17th International Symposium on Formal Methods). Also co-located this year are SEW-34 (the 34th Annual IEEE Software Engineering Workshop) and several specialist Workshops. We are very pleased this year to have Gavin Lowe, Professor of Computer Science at the University of Oxford Computing Laboratory for our keynote speaker. His research over the past two decades has made significant contributions to the field of concurrency, with special emphasis on CSP and the formal modelling of computer security. His paper addresses a long-standing and crucial issue for this community: the verified implementation of CSP external choice, with no restrictions. We have also received a good set of papers covering many of the key issues in modern computer science, which all seem to concern concurrency in one form or another these days. Inside, you will find papers on concurrency models and their theory, pragmatics (the effective use of multicores), language ideas and implementation (for mobile processes, generalised forms of choice), tools to assist verification and performance, applications (large scale simulation, robotics, web servers), benchmarks (for scientific and distributed computing) and, perhaps most importantly, education. They reflect the increasing relevance of concurrency both to express and manage complex problems as well as to exploit readily available parallel hardware. Authors from all around the world, old hands and new faces, PhD students and professors will be gathered here this week. We hope everyone will have a good time and engage in many stimulating discussions and much learning – both in the formal sessions of the conference and in the many opportunities afforded by the evening receptions and dinners, which are happening every night, and into the early hours beyond. We thank the authors for their submissions and the Programme Committee for their hard work in reviewing the papers. We also thank Mike Hinchey at Lero for inviting CPA 2011 to be part of the week of events surrounding FM 2011 and for being so helpful during the long months of planning. Finally, we thank Patsy Finn and Susan Mitchell, also at Lero, for all the detailed – and extra – work they put in researching and making all the special arrangements we requested for CPA.
Peter Welch (University of Kent), Adam Sampson (University of Abertay Dundee), Frederick Barnes (University of Kent), Jan B. Pedersen (University of Nevada, Las Vegas), Jan Broenink (University of Twente), Jon Kerridge (Edinburgh Napier University).
Editorial Board Dr. Frederick R.M. Barnes, School of Computing, University of Kent, UK Dr. Jan F. Broenink, Control Engineering, Faculty EEMCS, University of Twente, The Netherlands Prof. Jon Kerridge, School of Computing, Edinburgh Napier University, UK Prof. Jan B. Pedersen, School of Computer Science, University of Nevada, Las Vegas, USA Dr. Adam T. Sampson, Institute of Arts, Media and Computer Games, University of Abertay Dundee, UK Prof. Peter H. Welch, School of Computing, University of Kent, UK (Chair)
Reviewing Committee Dr. Alastair R. Allen, Aberdeen University, UK Mr. Philip Armstrong, University of Oxford, UK Dr. Paul S. Andrews, University of York, UK Dr. Rick Beton, Equal Experts, UK Dr. John Markus Bjørndalen, University of Tromsø, Norway Dr. Jim Bown, University of Abertay Dundee, UK Dr. Phil Brooke, University of Teesside, UK Mr. Neil C.C. Brown, University of Kent, UK Dr. Kevin Chalmers, Edinburgh Napier University, UK Dr. Barry Cook, 4Links Ltd., UK Mr. Martin Ellis, University of Kent, UK Dr. Oliver Faust, Altreonic, Belgium Dr. Bill Gardner, University of Guelph, Canada Prof. Michael Goldsmith, University of Warwick, UK Mr. Marcel Groothuis, University of Twente, The Netherlands Dr. Gerald Hilderink, The Netherlands Dr. Kohei Honda, Queen Mary & Westfield College, UK Mr. Jason Hurt, University of Nevada, Las Vegas, USA Ms. Ruth Ivimey-Cook, UK Prof. Matthew Jadud, Allegheny College, USA Mr. Brian Kauke, University of Nevada, Las Vegas, USA Prof. Gavin Lowe, University of Oxford, UK Dr. Jeremy M.R. Martin, GlaxoSmithKline, UK Dr. Alistair McEwan, University of Leicester, UK Dr. Fiona A.C. Polack, University of York, UK Mr. Carl G. Ritson, University of Kent, UK Mr. Herman Roebbers, TASS Technology Solutions BV, the Netherlands Mr. Mike Rogers, University of Nevada, Las Vegas, USA Mr. David Sargeant, University of Nevada, Las Vegas, USA Prof. Steve Schneider, University of Surrey, UK Prof. Marc L. Smith, Vassar College, USA Prof. Susan Stepney, University of York, UK Mr. Bernard Sufrin, University of Oxford, UK Dr.ir. Johan P.E. Sunter, TASS, The Netherlands Dr. Øyvind Teig, Autronica Fire and Security, Norway Dr. Gianluca Tempesti, University of Surrey, UK Dr. Helen Treharne, University of Surrey, UK Dr. Kevin Vella, University of Malta, Malta Prof. Brian Vinter, Copenhagen University, Denmark Prof. Alan Wagner, University of British Columbia, Canada Prof. Alan Winfield, University of the West of England, UK Mr. Doug N. Warren, University of Kent, UK Prof. George C. Wells, Rhodes University, South Africa
Contents Preface Peter Welch, Adam Sampson, Frederick Barnes, Jan B. Pedersen, Jan Broenink and Jon Kerridge
v
Editorial Board
vi
Reviewing Committee
vii
Implementing Generalised Alt – A Case Study in Validated Design Using CSP Gavin Lowe
1
Verification of a Dynamic Channel Model Using the SPIN Model Checker Rune Møllegaard Friborg and Brian Vinter
35
Programming the CELL-BE Using CSP Kenneth Skovhede, Morten N. Larsen and Brian Vinter
55
Static Scoping and Name Resolution for Mobile Processes with Polymorphic Interfaces
Jan Bækgaard Pedersen and Matthew Sowders
71
Prioritised Choice over Multiway Synchronisation
Douglas N. Warren
87
An Analysis of Programmer Productivity Versus Performance for High Level Data Parallel Programming
Alex Cole, Alistair McEwan and Satnam Singh
111
Experiments in Multicore and Distributed Parallel Processing Using JCSP
Jon Kerridge
131
Evaluating an Emergent Behaviour Algorithm in JCSP for Energy Conservation in Lighting Systems
Anna Kosek, Aly Syed and Jon Kerridge
143
LUNA: Hard Real-Time, Multi-Threaded, CSP-Capable Execution Framework M.M. Bezemer, R.J.W. Wilterdink and J.F. Broenink
157
Concurrent Event-Driven Programming in occam-π for the Arduino Christian L. Jacobsen, Matthew C. Jadud, Omer Kilic and Adam T. Sampson
177
Fast Distributed Process Creation with the XMOS XS1 Architecture James Hanlon and Simon J. Hollis
195
Serving Web Content with Dynamic Process Networks in Go James Whitehead II
209
Performance of the Distributed CPA Protocol and Architecture on Traditional Networks Kevin Chalmers
227
Object Store Based Simulation Interworking Carl G. Ritson, Paul S. Andrews and Adam T. Sampson
243
A Model for Concurrency Using Single-Writer Single-Assignment Variables Matthew Huntbach
255
The Computation Time Process Model Martin Korsgaard and Sverre Hendseth
273
SystemVerilogCSP: Modeling Digital Asynchronous Circuits Using SystemVerilog Interfaces
Arash Saifhashemi and Peter A. Beerel
287
Process-Oriented Subsumption Architectures in Swarm Robotic Systems
Jeremy C. Posso, Adam T. Sampson, Jonathan Simpson and Jon Timmis
303
A Systems Re-Engineering Case Study: Programming Robots with occam and Handel-C
Dan Slipper and Alistair A. McEwan
317
The Flying Gator: Towards Aerial Robotics in occam-π
Ian Armstrong, Michael Pirrone-Brusse, Anthony Smith and Matthew Jadud
329
CONPASU-Tool: A Concurrent Process Analysis Support Tool Based on Symbolic Computation
Yoshinao Isobe
341
Development of an ML-Based Verification Tool for Timed CSP Processes Takeshi Yamakawa, Tsuneki Ohashi and Chikara Fukunaga
363
Mobile Processes and Call Channels with Variant Interfaces (a Duality) Eric Bonnici and Peter H. Welch
377
Adding Formal Verification to occam-π Peter H. Welch, Jan B. Pedersen, Fred R.M. Barnes, Carl G. Ritson and Neil C.C. Brown
Implementing Generalised Alt A Case Study in Validated Design using CSP Gavin LOWE Department of Computer Science, University of Oxford, Wolfson Building, Parks Road, Oxford, OX1 3QD, UK; e-mail [email protected] Abstract. In this paper we describe the design and implementation of a generalised alt operator for the Communicating Scala Objects library. The alt operator provides a choice between communications on different channels. Our generalisation removes previous restrictions on the use of alts that prevented both ends of a channel from being used in an alt. The cost of the generalisation is a much more difficult implementation, but one that still gives very acceptable performance. In order to support the design, and greatly increase our confidence in its correctness, we build CSP models corresponding to our design, and use the FDR model checker to analyse them. Keywords. Communicating Scala Objects, alt, CSP, FDR.
Introduction Communicating Scala Objects (CSO) [14] is a library of CSP-like communication primitives for the Scala programming language [12]. As a simple example, consider the following code: val c = OneOne[String]; def P = proc{ c! ”Hello world!” ; } def Q = proc{ println (c?); } (P || Q)();
The first line defines a (synchronous) channel c that can communicate Strings (intended to be used by one sender and one receiver—hence the name OneOne; CSO also has channels whose ends can be shared); the second and third lines define processes (more accurately, threads) that, respectively, send and receive a value over the channel; the final line combines the processes in parallel, and runs them. CSO —inspired by occam [9]— includes a construct, alt, to provide a choice between communicating on different channels. In this paper we describe the design and implementation of a generalisation of the alt operator. We begin by describing the syntax and (informal) semantics of the operator in more detail. As an initial example, the code alt ( c −−> { println(”c: ”+(c ?)); } | d −−> { println(”d: ”+(d?)); } )
tests whether the environment is willing to send this process a value on either c or d, and if so fires an appropriate branch. Note that the body of each branch is responsible for performing the actual input: the alt just performs the selection, based on the communications offered by the environment. Channels may be closed, preventing further communication; each alt considers only its open channels.
Each branch of an alt may have a boolean guard. For example, in the alt: alt ( (n >= 0 &&& c) −−> { println(”c: ”+(c?)); } | d −−> { println(”d: ”+(d?)); } )
the communication on c is enabled only if n >= 0. An alt may also have a timeout branch, for example: alt ( c −−> { println(”c: ”+(c ?)); } | after (500) −−> { println(”timeout” ); } )
If no communication has taken place on a different branch within the indicated time (in milliseconds) then the alt times out and selects the timeout branch. Finally, an alt may have an orelse branch, for example: alt ( (n >= 0 &&& c) −−> { println(”c: ”+(c?)); } | orelse −−> { println(”orelse” ); } )
If every other branch is disabled —that is, the guard is false or the channel is closed— then the orelse branch is selected. (By contrast, if there is no orelse branch and all the other branches are disabled, then the alt throws an Abort exception.) Each alt may have at most one timeout or orelse branch. In the original version of CSO —as in occam— alts could perform selections only between input ports (the receiving ends of channels, known as InPorts). Later this was extended to include output ports (the sending ends of channels, known as OutPorts), for example: alt ( in −?−> { println(”in: ”+(in ?)); } | out −!−> { out!2011; } )
The different arrows −?−> and −!−> show whether the InPort or OutPort of the channel is to be used; the simple arrow −−> can be considered syntactic sugar for −?−>. Being able to combine inputs and outputs in the same alt can be useful in a number of circumstances. The following example comes from the bag-of-tasks pattern [4]. A server process maintains a collection of tasks (in this case, in a stack) to be passed to worker processes on channel toWorker. Workers can return (sub-)tasks to the server on channel fromWorker. In addition, a worker can indicate that it has completed its last task on channel done; the server maintains a count, busyWorkers, of the workers who are currently busy. The main loop of the server can be defined as follows: serve( (! stack.isEmpty &&& toWorker) −!−> { toWorker!(stack.pop) ; busyWorkers += 1; } | (busyWorkers>0 &&& fromWorker) −?−> { stack.push(fromWorker?); } | (busyWorkers>0 &&& done) −?−> { done? ; busyWorkers −= 1 } )
The construct serve represents an alt that is repeatedly executed until all its branches are disabled — in this case, assuming no channels are closed, when the stack is empty and busyWorkers = 0. In the above example, it is possible to replace the output branch (the first branch) by one where the server receives a request from a worker (on channel req) before sending the task (! stack.isEmpty &&& req) −?−> { req?; toWorker!(stack.pop) ; busyWorkers += 1; }
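For orientation, the worker side of this pattern (not shown above) might be structured roughly as follows. This is only an illustrative sketch: solve and split stand for application-specific code, the channel names are those of the server fragment above, and the exact element types carried by the channels are immaterial here.

  // Hypothetical worker loop complementing the bag-of-tasks server above.
  // solve and split are placeholders for application logic, not CSO operations.
  def worker = proc {
    while (true) {
      val task = toWorker?;       // take the next task from the server
      for (sub <- split(task))    // return any newly generated sub-tasks
        fromWorker!sub;
      solve(task);                // do the work itself
      done!(true);                // signal completion (the value sent is immaterial)
    }
  }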
However, such a solution adds complexity for the programmer; a good API should hide such complexities. Further, such a solution is not always possible. However, the existing implementation of alt has the following restriction [15]: A channel’s input and output ports may not both simultaneously participate in alts.
This restriction makes the implementation of alts considerably easier. It means that at least one end of each communication will be unconditional, i.e. that offer to communicate will not be withdrawn once it is made. However, the restriction can prove inconvenient in practice, preventing many natural uses of alts. For example, consider a ring topology, where each node may pass data to its clockwise neighbour or receive data from its anticlockwise neighbour; this pattern can be used to adapt the above bag-of-tasks to a distributed-bag-of-tasks as follows, where give and get are aliases for the channels connecting this node to its neighbours:1 serve( (! stack.isEmpty &&& toWorker) −!−> { toWorker!(stack.pop); workerBusy = true; } | (workerBusy &&& fromWorker) −?−> { stack.push(fromWorker?); } | (workerBusy &&& done) −?−> { done?; workerBusy = false; } | (! stack.isEmpty &&& give) −!−> { give!(stack.pop); } | ((! workerBusy && stack.isEmpty) &&& get) −?−> { stack.push(get?); } )
However, now the InPorts and OutPorts of channels connecting nodes are both participating in alts, contrary to the above restriction. One goal of this paper is to present a design and implementation for a generalised alt operator, that overcomes the above restriction. McEwan [11] presents a formal model for a solution to this problem, based on a twophase commit protocol, with the help of a centralised controller. Welch et al. [17,18] implement a generalised alt, within the JCSP library. The implementation makes use of a single (system-wide) Oracle server process, which arbitrates in all alts that include an output branch or a barrier branch (which allows multi-way synchronisation); alts that use only input branches can be implemented without the Oracle. This is a pragmatic solution, but has the disadvantage of the Oracle potentially being a bottleneck. Brown [1] adopted the same approach within the initial version of the CHP library. However, later versions of CHP built upon Software Transactional Memory [6] and so was decentralised in that alts offering to communicate on disjoint channels did not need to interact; see [3,2]. Our aim in this paper is to investigate an alternative, more scalable design. In particular, we are aiming for a design with no central controller, and that does not employ additional channels internally. However, coming up with a correct design is far from easy. Our development strategy, described in later sections, was to build CSP [13] models of putative designs, and then to analyse them using FDR [5]. In most cases, our putative designs turned out to be incorrect: FDR revealed subtle interactions between the components that led to incorrect behaviour. Debugging CSP models using FDR is very much easier than debugging code by testing for a number of reasons: • FDR does exhaustive state space exploration, whereas execution of code explores the state space nondeterministically, and so may not detect errors; • The counterexamples returned by FDR are of minimal length (typically about 20 in this work), whereas counterexamples found by testing are likely to be much longer (maybe a million times longer, based on our experience of a couple of bugs that did crop up in the code); • CSP models are more abstract and so easier to understand than code. 1 This design ignores the problem of distributed termination; a suitable distributed termination protocol can be layered on top of this structure.
A second goal of this paper, then, is to illustrate the use of CSP in such a development. One factor that added to the difficulty was that we were aiming for an implementation using the concurrency primitives provided by the Scala programming language, namely monitors. A third goal of this paper is an investigation of the relationship between abstract CSP processes and implementations using monitors: what CSP processes can be implemented using monitors, and what design patterns can we use? One may use formal analysis techniques with various degrees of rigour. Our philosophy in this work has been pragmatic rather than fully rigorous. Alts and channels are components, and do not seem to have abstract specifications against which the designs can be verified. The best we can do is analyse systems built from the designs, and check that they act as expected. We have analysed a few such systems; this gives us a lot of confidence that other systems would be correct — but does not give us an absolute guarantee of that. Further, the translation from the CSP models to Scala code has been done informally, because, in our opinion, it is fairly obvious. The rest of this paper is structured as follows. Below we present a brief overview of CSP and of monitors. In Section 1 we present an initial attempt at a design; this design will be incorrect, but presenting it will help to illustrate some of the ideas, and indicate some of the difficulties. In Section 2 we present a correct design, but omitting timeouts and closing of channels; we validate the design using FDR. That design, however, does not seem amenable to direct implementation using a monitor. Hence, in Section 3, we refine the design, implementing each alt as the parallel composition of two processes, each of which could be implemented as a monitor. In Section 4 we extend the design, to include timeouts and the closing of channels; this development requires the addition of a third component to each alt. In Section 5 we describe the implementation: each of the three processes in the CSP model of the alt can be implemented using a monitor. We sum up in Section 6. CSP In this section we give a brief overview of the syntax for the fragment of CSP that we will be using in this paper. We then review the relevant aspects of CSP semantics, and the use of the model checker FDR in verification. For more details, see [7,13]. CSP is a process algebra for describing programs or processes that interact with their environment by communication. Processes communicate via atomic events. Events often involve passing values over channels; for example, the event c.3 represents the value 3 being passed on channel c. Channels may be declared using the keyword channel; for example, channel c : Int declares c to be a channel that passes an Int. The notation {|c|} represents the set of events over channel c. In this paper we will have to talk about both CSP channels and CSO channels: we will try to make clear which we mean in each case. The simplest process is STOP, which represents a deadlocked process that cannot communicate with its environment. The process a → P offers its environment the event a; if the event is performed, the process then acts like P. The process c?x → P is initially willing to input a value x on channel c, i.e. it is willing to perform any event of the form c.x; it then acts like P (which may use x). Similarly, the process c?x:X → P is willing to input any value x from set X on channel c, and then act like P (which may use x). 
The process c!x → P outputs value x on channel c. Inputs and outputs may be mixed within the same communication, for example c?x!y → P. The process P □ Q can act like either P or Q, the choice being made by the environment: the environment is offered the choice between the initial events of P and Q; hence the alt operator in CSO is very similar to the external choice operator of CSP. By contrast, P ⊓ Q may act like either P or Q, with the choice being made internally, not under the control of the environment. □ x:X • P(x) and ⊓ x:X • P(x) are indexed versions of these operators, with the
choice being made over the processes P(x) for x in X. The process P ▷ Q represents a sliding choice or timeout: it initially acts like P, but if no event is performed then it can internally change state to act like Q. The process if b then P else Q represents a conditional. It will prove convenient to write assertions in our CSP models, similar in style to assertions in code. We define Assert(b)(P) as shorthand for if b then P else error → STOP; we will later check that the event error cannot occur, ensuring that all assertions are true.

The process P [| A |] Q runs P and Q in parallel, synchronising on events from A. The process P ||| Q interleaves P and Q, i.e. runs them in parallel with no synchronisation. The process ||| x:X • P(x) represents an indexed interleaving. The process P \ A acts like P, except the events from A are hidden, i.e. turned into internal, invisible events. Prefixing (→) binds tighter than each of the binary choice operators, which in turn bind tighter than the parallel operators.

A trace of a process is a sequence of (visible) events that a process can perform. We say that P is refined by Q in the traces model, written P ⊑T Q, if every trace of Q is also a trace of P. FDR can test such refinements automatically, for finite-state processes. Typically, P is a specification process, describing what traces are acceptable; this test checks whether Q has only such acceptable traces.

Traces refinement tests can only ensure that no "bad" traces can occur: they cannot ensure that anything "good" actually happens; for this we need the stable failures or failures-divergences models. A stable failure of a process P is a pair (tr, X), which represents that P can perform the trace tr to reach a stable state (i.e. where no internal events are possible) where X can be refused, i.e., where none of the events of X is available. We say that P is refined by Q in the stable failures model, written P ⊑F Q, if every trace of Q is also a trace of P, and every stable failure of Q is also a stable failure of P.

We say that a process diverges if it can perform an infinite number of internal (hidden) events without any intervening visible events. In this paper, we will restrict ourselves to specification processes that cannot diverge. If P is such a process then we say that P is refined by Q in the failures-divergences model, written P ⊑FD Q, if Q also cannot diverge, and every stable failure of Q is also a stable failure of P (which together imply that every trace of Q is also a trace of P). This test ensures that if P can stably offer an event a, then so can Q; hence such tests can be used to ensure Q makes useful progress. Again, such tests can be performed using FDR.

Monitors

A monitor is a program module —in Scala, an object— with a number of procedures that are intended to be executed under mutual exclusion. A simple monitor in Scala typically has a shape as below.

  object Monitor{
    private var x, ...;                             // private variables
    def procedure1(arg1 : T1) = synchronized{...};
    ...
    def proceduren(argn : Tn) = synchronized{...};
  }
The keyword synchronized indicates a synchronized block: before a thread can enter the block, it must acquire the lock on the object; when it leaves the block, it releases the lock; hence at most one thread at a time can be executing within the code of the monitor.
It is sometimes necessary for a thread to suspend part way through a procedure, to wait for some condition to become true. It can do this by performing the command wait(); it releases the object’s lock at this point. Another thread can wake it up by performing the command notify(); this latter thread retains the object’s lock at this point, and the awoken thread must wait to re-obtain the lock. The following producer-consumer example illustrates this technique. Procedures are available to put a piece of data into a shared slot, and to remove that data; each procedure might have to suspend, to wait for the slot to be emptied or filled, respectively. object Slot{ private var value = 0; // the value in the slot private var empty = true; // is the slot empty? def put(v : Int ) = synchronized{ while(!empty) wait(); // wait until space is available value = v; empty = false; // store data notify (); // wake up consumer } def get : Int = synchronized{ while(empty) wait(); // wait until value is available val result = value; empty = true; // get and clear value notify (); // wake up producer return result ; } }
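As a small usage example (not from the paper), the Slot monitor above can be exercised by a producer thread and a consumer thread using plain JVM threads:

  // Minimal illustration of Slot: one thread puts five values, another gets them.
  object SlotExample {
    def main(args: Array[String]): Unit = {
      val producer = new Thread(new Runnable {
        def run(): Unit = for (i <- 1 to 5) Slot.put(i)
      })
      val consumer = new Thread(new Runnable {
        def run(): Unit = for (_ <- 1 to 5) println(Slot.get)
      })
      producer.start(); consumer.start()
      producer.join(); consumer.join()
    }
  }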
An unfortunate feature of the implementation of wait within the Java Virtual Machine (upon which Scala is implemented) is that sometimes a process will wake up even if no other process has performed a notify, a so-called spurious wake-up. It is therefore recommended that all waits are guarded by a boolean condition that is unset by the awakening thread; for example: waiting = true ; while(waiting) wait ();
with awakening code: waiting = false ; notify ();
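Putting the two fragments together, a complete (if simplified) monitor using this guarded-wait idiom might look as follows; the Gate object and its interface are illustrative only, not part of CSO:

  // Illustrative one-shot gate: await blocks until open has been called.
  // The boolean guard protects against spurious wake-ups, as recommended above.
  object Gate {
    private var waiting = true

    def await(): Unit = synchronized {
      while (waiting) wait()      // re-test the guard after every wake-up
    }

    def open(): Unit = synchronized {
      waiting = false             // unset the guard first ...
      notify()                    // ... then wake a waiting thread
    }
  }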
1. Initial Design

In this section we present our initial design for the generalised alt. The design is not correct; however, our aims in presenting it are:

• to act as a stepping-stone towards a correct design;
• to illustrate some of the difficulties in producing a correct design;
• to introduce some features of the CSP models;
• to illustrate how model checking can discover flaws in a design.
For simplicity, we do not consider timeouts or the closing of channels within this model. We begin by describing the idea of the design informally, before presenting the CSP model and the analysis.
In order for an alt to fire a particular branch, say the branch for channel c, there must be another process —either another alt or not— willing to communicate on the other port of c. In order to ascertain this, an alt will register with the channel for each of its branches.

• If another process is already registered with channel c's other port, and ready to communicate, then c will respond to the registration request with YES, and the alt will select that branch. The act of registration represents a promise by the alt, that if it receives an immediate response of YES it will communicate.

• However, if no other process is registered with c's other port and ready to communicate, then c responds with NO, and the alt will continue to register with its other channels. In this case, the registration does not represent a firm promise to communicate, since it may select a different branch: it is merely an expression of interest.

If an alt has registered with each of its channels without receiving a positive response, then it waits to hear back from one of them. This process is illustrated in the first few steps of Figure 1: Alt1 registers with Chan1 and Chan2, receiving back a response of NO, before waiting.
Figure 1. First sequence diagram
When a channel receives another registration attempt, it checks whether any of the alts already registered on its other port is able to commit to a communication. If any such alt agrees, the channel returns a positive response to the registering alt; at this point, both alts deregister from all other channels, and the communication goes ahead. However, if none of the registered alts is able to commit, then the channel returns a negative result to the registering alt. This process is illustrated in the last few steps of Figure 1. Alt2 registers with Chan1; Chan1 checks whether Alt1 can commit, and receives a positive answer, which is passed on to Alt2. In the Scala implementation, our aim will be to implement the messages between components as procedure calls and returns. For example, the commit messages will be implemented by a procedure in the alt, also called commit; the responses will be implemented by the values returned from that procedure. A difference between the two types of components is that each alt will be thread-like: a thread will be executing the code of the alt (although at times that thread will be within procedure calls to other components); by contrast, channels will be object-like: they will be mostly passive, but willing to receive procedure calls from active threads.
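To make the intended shape of the implementation concrete, the messages in Figure 1 might correspond to interfaces roughly like the following; the names and signatures here are illustrative guesses for exposition, not the actual CSO API:

  // Illustrative sketch only: the registration/commit protocol as procedure calls.
  sealed trait Response
  case object YES extends Response
  case object NO  extends Response

  trait AltLike {
    def commit(): Response               // called by a channel: will you take this branch now?
  }

  trait PortLike {
    def register(a: AltLike): Response   // expression of interest; YES means the branch fires
    def deregister(a: AltLike): Unit     // withdraw interest once another branch is chosen
  }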
1.1. CSP Model

Each CSP model will be defined in two parts: a definition of a (generic) alt and channel; and the combination of several alts and channels into a system. The definition of each system will include two integer values, numAlts and numChannels, giving the number of alts and CSO channels, respectively. Given these, we can define the identities of alts and channels:

  AltId     = {1..numAlts}      -- IDs of Alts
  ChannelId = {1..numChannels}  -- IDs of channels
We can further define a datatype of ports, and a datatype of responses:

  datatype Port = InPort.ChannelId | OutPort.ChannelId
  datatype Resp = YES | NO
We can now declare the CSP channels used in the model. The register, commit and deregister channels, and response channels for the former two, are declared as follows2 . channel channel channel channel channel
r e g i s t e r : A l t I d . Port r e g i s t e r R e s p : P o r t . A l t I d . Resp commit : P o r t . A l t I d commitResp : A l t I d . P o r t . Resp deregister : A l t I d . Port
We also include a CSP channel on which each alt can signal that it thinks that it is executing a branch corresponding to a particular CSO channel; this will be used for specification purposes. channel s i g n a l : A l t I d . ChannelId
The process Alt(me, ps) represents an alt with identity me with branches corresponding to the ports ps. It starts by registering with each of its ports. Below, reged is the set of ports with which it has registered, and toReg is the set of ports with which it still needs to register. It chooses (nondeterministically, at this level of abstraction) a port with which to register, and receives back a response; this is repeated until either it receives a positive response, or has registered with all the ports. A l t (me, ps ) = AltReg (me, ps , { } , ps ) AltReg (me, ps , reged , toReg ) = i f toReg =={} then A l t W a i t (me, ps , reged ) else p : toReg • r e g i s t e r .me. p → r e g i s t e r R e s p ?p ’ ! me? resp → A s s e r t ( p ’ = = p ) ( i f resp ==YES then A l t D e r e g (me, ps , remove ( reged , p ) , p ) else AltReg (me, ps , add ( reged , p ) , remove ( toReg , p ) ) )
Here we use two helper functions, to remove an element from a set, and to add an element to a set:3 remove ( xs , x ) = d i f f ( xs , { x } ) add ( xs , x ) = union ( xs , { x } ) 2 3
deregister does not return a result, and can be treated as atomic, so we do not need a response channel diff and union are the machine-readable CSP functions for set difference and union.
If the alt registers unsuccessfully with each of its ports, then it waits to receive a commit message from a port, which it accepts.

  AltWait(me, ps, reged) =
    commit?p : reged!me → commitResp.me.p!YES → AltDereg(me, ps, remove(reged, p), p)
Once an alt has committed to a particular port, p, it deregisters with each of the other ports, and then signals, before returning to its initial state. During the same time, if the alt receives a commit event, it responds negatively.

  AltDereg(me, ps, toDereg, p) =
    if toDereg == {} then signal.me.chanOf(p) → Alt(me, ps)
    else
      ((⊓ p1 : toDereg • deregister.me.p1 → AltDereg(me, ps, remove(toDereg, p1), p))
       □ commit?p1 : aports(me)!me → commitResp.me.p1!NO → AltDereg(me, ps, toDereg, p))
Here chanOf returns the channel corresponding to a port:

  chanOf(InPort.c)  = c
  chanOf(OutPort.c) = c
We now consider the definition of a channel. The process Channel(me, reged) represents a channel with identity me, where reged is a set of (port, alt) pairs, showing which alts have registered at its two ports.

  Channel(me, reged) =
    register?a?port : ports(me) →
      (let toTry = {(p, a1) | (p, a1) ← reged, p == otherP(port)}
       within ChannelCommit(me, a, port, reged, toTry))
    □ deregister?a?p : ports(me) → Channel(me, remove(reged, (p, a)))
Here, ports(me) gives the ports corresponding to this channel:

  ports(me) = {InPort.me, OutPort.me}
The set toTry, above, represents all the previous registrations with which this new registration might be matched; otherP(port) returns this channel's other port.

  otherP(InPort.me)  = OutPort.me
  otherP(OutPort.me) = InPort.me
The channel now tries to find a previous registration with which this new one can be paired. The parameter toTry represents those previous registrations with which the channel still needs to check. The channel chooses (nondeterministically) a previous registration to try, and sends a commit message. It repeats until either (a) it receives back a positive response, in which case it sends a positive response to the registering alt a, or (b) it has exhausted all possibilities, in which case it sends back a negative response.4

4 The notation pa' @@ (port', a') : toTry binds the identifier pa' to an element of toTry, and also binds the identifiers port' and a' to the two components of pa'.
  ChannelCommit(me, a, port, reged, toTry) =
    if toTry == {} then  -- None can commit
      registerResp.port.a!NO → Channel(me, add(reged, (port, a)))
    else
      ⊓ pa' @@ (port', a') : toTry •
        commit.port'.a' → commitResp.a'.port'?resp →
        if resp == YES then registerResp.port.a!YES → Channel(me, remove(reged, pa'))
        else ChannelCommit(me, a, port, remove(reged, pa'), remove(toTry, pa'))
1.2. Analysing the Design
Figure 2. A simple configuration
We consider a simple configuration of two alts and two channels, as in Figure 2 (where the arrows indicate the direction of dataflow, so Alt(1) accesses Channel(1)'s inport and Channel(2)'s outport, for example). This system can be defined as follows.

  numAlts = 2
  numChannels = 2

  Channels = ||| me : ChannelId • Channel(me, {})

  aports(1) = {InPort.1, OutPort.2}
  aports(2) = {InPort.2, OutPort.1}

  Procs = ||| me : AltId • Alt(me, aports(me))

  System =
    let internals = {| register, registerResp, commit, commitResp, deregister |}
    within (Channels [| internals |] Procs) \ internals
The two processes should agree upon which channel to communicate; that is, they should (repeatedly) signal success on the same channel. Further, no error events should occur. This requirement is captured by the following CSP specification.

  Spec = ⊓ c : ChannelId •
           (signal.1.c → signal.2.c → Spec □ signal.2.c → signal.1.c → Spec)
When we use FDR to test if System refines Spec in the traces model, the test succeeds. However, when we do the corresponding test in the stable failures model, the test fails, because System deadlocks. Using the FDR debugger shows that the deadlock occurs after the system (without the hiding) has performed
  <register.2.InPort.2, register.1.InPort.1, registerResp.InPort.2.2.NO,
   registerResp.InPort.1.1.NO, register.1.OutPort.2, register.2.OutPort.1>
This is illustrated in Figure 3. Each alt has registered with one channel, and is trying to
Figure 3. The behaviour leading to deadlock
register with its other channel. In the deadlocked state, Channel(1) is trying to send a commit message to Alt(1), but Alt(1) refuses this because it is waiting for a response to its last register event; Channel(2) and Alt(2) are behaving similarly. The following section investigates how to overcome this problem. 2. Improved Design The counterexample in the previous section shows that alts should be able to accept commit messages while waiting for a response to a register. But how should an alt deal with such a commit? It would be wrong to respond with YES, for then it would be unable to deal with a response of YES to the register message (recall that an alt must respect a response of YES to a register message). It would also be wrong to respond NO to the commit, for then the chance to communicate on this channel would be missed. Further, a little thought shows that delaying replying to the commit until after a response to the register has been received would also be wrong: in the example of the last section, this would again lead to a deadlock. Our solution is to introduce a different response, MAYBE, that an alt can send in response to a commit; informally, the response of MAYBE means “I’m busy right now; please call back later”. The sequence diagram in Figure 4 illustrates the idea. Alt1 receives a commit from Chan1 while waiting for a response to a register. It sends back a response of MAYBE, which gets passed back to the initiating Alt2. Alt2 pauses for a short while (to give Alt1 a chance to finish what it’s doing), before again trying to register with Chan1. Note that it is the alt’s responsibility to retry, rather than the channel’s, because we are aiming for an implementation where the alt is thread-like, but the channel is object-like. 2.1. CSP Model We now adapt the CSP model from the previous section to capture this idea. First, we expand the type of responses to include MAYBE: datatype Resp = YES | NO | MAYBE
When a channel pauses before retrying, it will signal on the channel pause; we will later use this for specification purposes.

  channel pause : AltId
Figure 4. Using MAYBE
An alt again starts by registering with each of its channels. It may now receive a response of MAYBE; the parameter maybes below stores those ports for which it has received such a response. Further, it is willing to receive a commit message during this period, in which case it responds with MAYBE.

  Alt(me, ps) = AltReg(me, ps, {}, ps, {})

  AltReg(me, ps, reged, toReg, maybes) =
    if toReg == {} then
      (if maybes == {} then AltWait(me, ps, reged)
       else pause.me → AltPause(me, ps, reged, maybes))
      □ commit?p : aports(me)!me → commitResp.me.p!MAYBE → AltReg(me, ps, reged, toReg, maybes)
    else
      (⊓ p : toReg • register.me.p → AltReg'(me, ps, reged, toReg, maybes, p))
      □ commit?p : aports(me)!me → commitResp.me.p!MAYBE → AltReg(me, ps, reged, toReg, maybes)

  -- Waiting for response from p
  AltReg'(me, ps, reged, toReg, maybes, p) =
    (registerResp?p'!me?resp →
       Assert(p' == p)(
         if resp == YES then AltDereg(me, ps, remove(reged, p), p)
         else if resp == NO then AltReg(me, ps, add(reged, p), remove(toReg, p), maybes)
         else  -- resp == MAYBE
           AltReg(me, ps, reged, remove(toReg, p), add(maybes, p))))
    □ commit?p1 : aports(me)!me → commitResp.me.p1!MAYBE → AltReg'(me, ps, reged, toReg, maybes, p)
If an alt receives no positive response, and at least one MAYBE, it pauses for a short while before retrying. However, it accepts any commit request it receives in the mean time.5

  AltPause(me, ps, reged, maybes) =
    (STOP ▷ AltReg(me, ps, reged, maybes, {}))
    □ commit?p : aports(me)!me → commitResp.me.p!YES → AltDereg(me, ps, remove(reged, p), p)
If an alt receives only negative responses to its register messages, it again waits.

  AltWait(me, ps, reged) =
    commit?p : aports(me)!me → commitResp.me.p!YES → AltDereg(me, ps, remove(reged, p), p)
Once the alt has committed, it deregisters the other ports, and signals, as in the previous model.

  AltDereg(me, ps, toDereg, p) =
    if toDereg == {} then signal.me.chanOf(p) → Alt(me, ps)
    else
      ((⊓ p1 : toDereg • deregister.me.p1 → AltDereg(me, ps, remove(toDereg, p1), p))
       □ commit?p1 : aports(me)!me → commitResp.me.p1!NO → AltDereg(me, ps, toDereg, p))
The definition of a channel is a fairly straightforward adaptation from the previous model. In the second process below, the parameter maybeFlag is true if any alt has responded MAYBE. The port is registered at the channel only if each register message received a response of NO.

  Channel(me, reged) =
    register?a?port : ports(me) →
      (let toTry = {(p, a1) | (p, a1) ← reged, p == otherP(port)}
       within ChannelCommit(me, a, port, reged, toTry, false))
    □ deregister?a.p → Channel(me, remove(reged, (p, a)))

  ChannelCommit(me, a, port, reged, toTry, maybeFlag) =
    if toTry == {} then  -- None can commit
      if maybeFlag then registerResp.port.a!MAYBE → Channel(me, reged)
      else registerResp.port.a!NO → Channel(me, add(reged, (port, a)))
    else
      ⊓ pa' @@ (port', a') : toTry •
        commit.port'.a' → commitResp.a'.port'?resp →
        if resp == YES then registerResp.port.a!YES → Channel(me, remove(reged, pa'))
        else if resp == MAYBE then ChannelCommit(me, a, port, reged, remove(toTry, pa'), true)
        else  -- resp == NO
          ChannelCommit(me, a, port, remove(reged, pa'), remove(toTry, pa'), maybeFlag)

5 CSP-cognoscenti may point out that the "STOP ▷" does not affect the behaviour of the process; we include it merely to illustrate the desired behaviour of our later Scala implementation.
2.2. Analysing the Design

We can again combine these alts and channels into various configurations. First, we consider the configuration in Figure 2; this is defined as earlier, but also hiding the pause events. FDR can then be used to verify that this system refines the specification Spec, in both the traces and the stable failures model.
However, the refinement does not hold in the failures-divergences model, since the system can diverge. The divergence can happen in a number of different ways; one possibility is shown in Figure 5.6 Initially, each alt registers with one channel. When each alt tries to register with the other channel, a commit message is sent to the other alt, receiving a response of MAYBE; each alt then pauses. These attempts to register (marked "∗" in the diagram) can be repeated arbitrarily many times, causing a divergence. The problem is that the two alts are behaving symmetrically, each sending its register events at about the same time: if one alt were to send its register while the other is pausing, it would receive back a response of YES, and the symmetry would be broken. In the implementation, the pause will be of a random amount of time, to ensure the symmetry is eventually broken (with probability 1). We can check that the only way that the system can diverge is through repeated pauses and retries. We can show that the system without the pause events hidden refines the following specification: each alt keeps on pausing until both signal.

  SpecR = (⊓ p : ChannelId •
             signal.1.p → SpecR1(p) □ signal.2.p → SpecR2(p))
          ⊓ pause.1 → SpecR ⊓ pause.2 → SpecR
  SpecR1(p) = signal.2.p → SpecR ⊓ pause.1 → SpecR1(p)
  SpecR2(p) = signal.1.p → SpecR ⊓ pause.2 → SpecR2(p)
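In the implementation, the pause can simply be a short random sleep; a minimal sketch follows (the bound of 10 ms is an arbitrary illustrative choice, not a value prescribed by the design):

  import scala.util.Random

  // Illustrative randomised back-off: two symmetric alts are unlikely to keep
  // retrying in lock-step if each sleeps for a different random interval.
  object Backoff {
    private val random = new Random
    def pause(): Unit = Thread.sleep(1 + random.nextInt(10))
  }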
We have built other configurations, including those in Figure 6. For each, we have used

6 In fact, FDR finds a slightly simpler divergence, where only one alt repeatedly tries to register; in the implementation, this would correspond to the other alt being starved of the processor; we consider the example in the figure to be more realistic.
Figure 6. Three test configurations
FDR to check that it refines a suitable specification that ensures that suitable signal events are available, in particular that if an alt signals at one port of a channel then another signals at the other port. We omit the details in the interests of brevity. But as the alts and channels are components, we would really like to analyse all systems built from them: this seems a particularly difficult case of the parameterised model checking problem, beyond the capability of existing techniques. 3. Compound Alts The model in the previous section captures the desired behaviour of an alt. However, it does not seem possible to implement this behaviour using a single monitor. We would like to implement the main execution of the alt as a procedure apply, and to implement the commit and commitResp events as a procedure commit and its return. However, these two procedures will need to be able to run concurrently, so cannot be implemented in a single monitor. Instead we implement the alt using two monitors. • The MainAlt will implement the apply procedure, to register with the channels, deregister at the end, execute the appropriate branch of the alt, and generally control the execution. • The Facet will provide the commit procedure, responding appropriately; it will receive messages from the MainAlt, informing it of its progress; if the Facet receives a call to commit while the MainAlt is waiting, the Facet will wake up the MainAlt. The definition of a channel remains the same as in the previous section. Figure 7 illustrates a typical scenario, illustrating how the two components cooperate together to achieve the behaviour of Alt1 from Figure 1. The MainAlt starts by initialising the Facet, and then registers with Chan1. When the Facet receives a commit message from Chan1, it replies with MAYBE, since it knows the MainAlt is still registering with channels. When the MainAlt finishes registering, it informs the Facet, and then waits. When the Facet subsequently receives another commit message, it wakes up the MainAlt, passing the identity of Chan1, and returns YES to Chan1. The MainAlt deregisters the other channels, and informs the Facet. In addition, if the Facet had received another commit message after sending YES to Chan1, it would have replied with NO.
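In Scala terms, the split described above might be realised along the following lines: one object driven by the alt's own thread, and one whose commit procedure is called from other threads. All names and details here are illustrative guesses for exposition, not the actual implementation.

  // Illustrative outline of the Facet half of the MainAlt/Facet split.
  sealed trait Status
  case object Registering   extends Status
  case object Waiting       extends Status
  case object Deregistering extends Status

  sealed trait Response
  case object YES   extends Response
  case object NO    extends Response
  case object MAYBE extends Response

  class Facet {
    private var status: Status = Registering
    private var chosen: Option[Int] = None          // port whose commit succeeded

    // Called by the MainAlt as it moves between phases.
    def changeStatus(s: Status): Unit = synchronized { status = s }

    // Called by a channel, from some other thread.
    def commit(port: Int): Response = synchronized {
      status match {
        case Registering   => MAYBE                 // still registering: call back later
        case Waiting       =>
          if (chosen.isEmpty) { chosen = Some(port); notify(); YES } else NO
        case Deregistering => NO
      }
    }

    // Called by the MainAlt once it has registered everywhere and must wait.
    def awaitCommit(): Int = synchronized {
      while (chosen.isEmpty) wait()
      chosen.get
    }
  }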
Figure 7. Expanding the alt
As noted earlier, if the MainAlt receives any reply of MAYBE when trying to register with channels, it pauses for a short while, before retrying; Figures 8 and 9 illustrate this for the compound alt (starting from the point where the alt tries to register with Chan2). Before pausing, the MainAlt informs the Facet. If the Facet receives a commit in the meantime, it replies YES (and would reply NO to subsequent commits). When the MainAlt finishes pausing, it checks back with the Facet to find out if any commit was received, getting a positive answer in Figure 8, and a negative one in Figure 9.
Figure 8. A commit received while pausing
3.1. CSP Model

We now describe a CSP model that captures the behaviour described informally above. We define a datatype and channel by which the MainAlt informs the Facet of changes of status.

  datatype Status = Init | Pause | Wait | Dereg | Done
  channel changeStatus : Status
Figure 9. Pausing before retrying
When the Facet wakes up the MainAlt, it sends the identity of the port whose branch should be run, on channel wakeUp.

  channel wakeUp : Port
When the MainAlt finishes pausing, it either receives from the Facet on channel getToRun the identity of a port from whom a commit has been received, or receives a signal getToRunNo that indicates that no commit has been received.

  channel getToRun : Port
  channel getToRunNo
The alt is constructed from the two components, synchronising on and hiding the internal communications:

  Alt(me, ps) =
    let A = {| wakeUp, changeStatus, getToRun, getToRunNo |}
    within (MainAlt(me, ps) [| A |] Facet(me)) \ A
The definition of the MainAlt is mostly similar to the definition of the alt in Section 2, so we just outline the differences here. The MainAlt does not receive the commit messages, but instead receives notifications from the Facet. When it finishes pausing (state MainAltPause below), it either receives from the Facet the identity of the branch to run on channel getToRun, or receives on channel getToRunNo an indication that no commit event has been received. When it is waiting (state MainAltWait), it waits until it receives a message from the Facet on channel wakeUp, including the identity of the process to run.

  MainAlt(me, ps) = changeStatus!Init → MainAltReg(me, ps, {}, ps, {})

  MainAltReg(me, ps, reged, toReg, maybes) =
    if toReg == {} then
      if maybes == {} then MainAltWait(me, ps, reged)
      else pause.me → changeStatus!Pause → MainAltPause(me, ps, reged, maybes)
    else
      ⊓ p : toReg •
        register.me.p → registerResp?p'!me?resp →
        Assert(p' == p)(
          if resp == YES then
            changeStatus!Dereg → MainAltDereg(me, ps, remove(reged, p), p)
          else if resp == NO then MainAltReg(me, ps, add(reged, p), remove(toReg, p), maybes)
          else  -- resp == MAYBE
            MainAltReg(me, ps, reged, remove(toReg, p), add(maybes, p)))
  MainAltPause(me, ps, reged, maybes) =
    STOP ▷ (getToRunNo → MainAltReg(me, ps, reged, maybes, {})
            □ getToRun?p → MainAltDereg(me, ps, remove(reged, p), p))

  MainAltWait(me, ps, reged) =
    changeStatus!Wait → wakeUp?p : reged → MainAltDereg(me, ps, remove(reged, p), p)

  MainAltDereg(me, ps, toDereg, p) =
    if toDereg == {} then changeStatus!Done → signal.me.chanOf(p) → MainAlt(me, ps)
    else ⊓ p1 : toDereg • deregister.me.p1 → MainAltDereg(me, ps, remove(toDereg, p1), p)
The Facet tracks the state of the MainAlt; below we use similar names for the states of the Facet as for the corresponding states of MainAlt. When the MainAlt is pausing, the Facet responds YES to the first commit it receives (state FacetPause), and NO to subsequent ones (state FacetPause'); it passes on this information on getToRun or getToRunNo. When the MainAlt is waiting, if the Facet receives a commit message, it wakes up the MainAlt (state FacetWait).

  Facet(me) = changeStatus.Init → FacetReg(me)

  FacetReg(me) =
    commit?p : aports(me)!me → commitResp.me.p!MAYBE → FacetReg(me)
    □ changeStatus?s →
        if s == Wait then FacetWait(me)
        else if s == Dereg then FacetDereg(me)
        else Assert(s == Pause)(FacetPause(me))

  FacetPause(me) =
    commit?p : aports(me)!me → commitResp.me.p!YES → FacetPause'(me, p)
    □ getToRunNo → FacetReg(me)

  FacetPause'(me, p) =
    commit?p1 : aports(me)!me → commitResp.me.p1!NO → FacetPause'(me, p)
    □ getToRun!p → FacetDereg(me)

  FacetWait(me) =
    commit?p : aports(me)!me → wakeUp!p → commitResp.me.p!YES → FacetDereg(me)

  FacetDereg(me) =
    commit?p : aports(me)!me → commitResp.me.p!NO → FacetDereg(me)
    □ changeStatus?s → Assert(s == Done)(Facet(me))
3.2. Analysing the Design We have built configurations, using this compound alt, as in Figures 2 and 6. We have again used FDR to check that each refines a suitable specification. In fact, the compound alt defined in this section is not equivalent to, or even a refinement of, the sequential alt defined in the previous section. The compound alt has a number of behaviours that the sequential alt does not, caused by the fact that it takes some time for information to propagate through the former. For example, the compound alt can register with each of its ports, receiving NO in each case, and then return MAYBE in response to a commit message (whereas the sequential alt would return YES), because the (internal) changeStatus.Wait event has not yet happened. We see the progression from the sequential to the compound alt as being a step of development rather than formal refinement: such (typically small) changes in behaviour are common in software development. 4. Adding Timeouts and Closing of Channels We now extend our compound model from the previous section to capture two additional features of alts, namely timeouts and the closing of channels. We describe these features separately from the main operation of alts, since they are rather orthogonal. Further, this follows the way we developed the implementation, and how we would recommend similar developments are carried out: get the main functionality right, then add the bells and whistles. We describe the treatment of timeouts first. If the alt has a timeout branch, then the waiting stage from the previous design is replaced by a timed wait. If the Facet receives a commit during the wait, it can wake up the MainAlt, much as in Figure 7. Alternatively, if the timeout time is reached, the alt can run the timeout branch. However, there is a complication: the Facet may receive a commit at almost exactly the same time as the timeout is reached — a race condition. In order to resolve this race, we introduce a third component into the compound alt: the Arbitrator will arbitrate in the event of such a race, so that the Facet and MainAlt proceed in a consistent way. Figure 10 corresponds to the earlier Figure 7. The WAIT message informs the Facet that the MainAlt is performing a wait with a timeout. When the Facet subsequently receives a commit message, it checks with the Arbitrator that this commit has not been preempted by a timeout. In the figure, it receives a returned value of true, indicating that there was no race, and so the commit request can be accepted. Figure 11 considers the case where the timeout is reached without a commit message being received in the meantime. The MainAlt checks with the Arbitrator that indeed no commit message has been received, and then deregisters all channels before running the timeout branch. Figures 12 and 13 consider cases where the timeout happens at about the same time as a commit is received. The MainAlt and the Facet both contact the Arbitrator; whichever does so first “wins” the race, so the action it is dealing with is the one whose branch will be executed. If the Facet wins, then the MainAlt waits for the Facet to wake it up (Figure 12). If the MainAlt wins, then the Facet replies NO to the commit, and waits for the MainAlt to finish deregistering channels (Figure 13). We now consider the treatment of channels closing. Recall that if there is no timeout branch and all the channels close, then the alt should run its orelse branch, if there is one, or throw an Abort exception. 
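The race itself is naturally expressed as a tiny monitor whose first caller wins; the following is a hypothetical sketch of such an arbitrator, not the actual CSO code:

  // Illustrative first-caller-wins arbitrator for the commit/timeout race.
  class Arbitrator {
    private var decided = false

    // Returns true to whichever of the Facet and the MainAlt calls first,
    // and false to the loser of the race.
    def checkRace(): Boolean = synchronized {
      if (decided) false
      else { decided = true; true }
    }

    // Reset before the next execution of the alt.
    def reset(): Unit = synchronized { decided = false }
  }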
However, if there is a timeout branch, then it doesn't matter if all the branches are closed: the timeout branch will eventually be selected.

When a channel closes, it sends a chanClosed message to each alt that is registered with it; this message is received by the Facet, which keeps track of the number of channels that have closed.
Figure 10. Expanding the alt (message sequence chart between the MainAlt, Arbitrator, Facet and channels, showing registration, the WAIT-TO message, a commit checked with the Arbitrator, wake-up, deregistration and the final signal).
Figure 11. After a timeout (message sequence chart: the MainAlt times out, checks with the Arbitrator, and deregisters its channels before running the timeout branch).
If an alt subsequently tries to register with the closed channel, the channel returns a response of CLOSED. When the MainAlt is about to do a non-timed wait, it sends the Facet a setReged message (replacing the WAIT message in Figure 7), including a count of the number of channels with which it has registered. The Facet returns a boolean that indicates whether all the channels have closed. If so, the MainAlt runs its orelse branch or throws an Abort exception. Otherwise, if subsequently the Facet receives sufficient chanClosed messages such that all channels are closed, it wakes up the MainAlt by sending it an allClosed message; again, the MainAlt either runs its orelse branch or throws an Abort exception.
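The Scala side of this closed-channel bookkeeping is only outlined later (Figure 14), so the following is a minimal sketch of how the Facet might realise it as a monitor; the field names and the use of -1 to mean "no wait in progress" are illustrative assumptions rather than the paper's code. The sketch sits alongside the MainAlt object inside the Alt class.

  private object Facet {
    private var nClosed = 0   // registered channels that have closed so far
    private var nReged  = -1  // set when the MainAlt starts an untimed wait; -1 = not waiting

    // Called by the MainAlt before an untimed wait (the setReged message);
    // returns true if every registered channel has already closed.
    def setReged(n: Int): Boolean = synchronized {
      nReged = n
      nClosed == nReged
    }

    // Called by a channel when it closes (the chanClosed message); if the
    // MainAlt is waiting and the last registered channel has now closed,
    // wake it up with allClosed.
    def chanClosed(n: Int): Unit = synchronized {
      nClosed += 1
      if (nReged >= 0 && nClosed == nReged) MainAlt.allClosed
    }
  }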
Figure 12. A commit beating a timeout in a race (message sequence chart).
Figure 13. A timeout beating a commit in a race (message sequence chart).
4.1. CSP Model

We now describe a CSP model that captures the behaviour described informally above. We extend the types of ports, responses and status values appropriately.

datatype Port = InPort.ChannelId
datatype Resp = YES | NO | MAYBE | CLOSED
datatype Status = Init | Pause | WaitTO | Dereg | Done | Timedout | Commit
We extend the type of signals to include timeouts and orelse. We also include events to indicate that a process has aborted, and (to help interpret debugging traces) that a process has timed out.

TIMEOUTSIG = 0
ORELSESIG = -1
channel signal : AltId . union(ChannelId, {TIMEOUTSIG, ORELSESIG})
channel abort : AltId
channel timeout : AltId
Finally we add channels for a (CSO) channel to inform an alt that it has closed, and for communications with the Arbitrator (for simplicity, the latter channel captures communications in both directions using a single event).

channel chanClosed : Port.AltId
channel checkRace : Status.Bool
The alt is constructed from the three components, synchronising on and hiding the internal communications:

Alt(me, ps) =
  let A = {| wakeUp, changeStatus, getToRun, getToRunNo |}
  within
    ( (MainAlt(me, ps) [| A |] Facet(me)) [| {| checkRace |} |] Arbitrator(Init) )
      \ union(A, {| checkRace |})
The definition of the MainAlt is mostly similar to that in Section 3, so we just describe the main differences here. It starts by initialising the other two components, before registering with channels as earlier.

MainAlt(me, ps) =
  changeStatus!Init → checkRace.Init?b → MainAltReg(me, ps, {}, ps, {})

MainAltReg(me, ps, reged, toReg, maybes) =
  if toReg == {} then
    if maybes == {} then
      if member(TIMEOUT, ps) then MainAltWaitTimeout(me, ps, reged)
      else MainAltWait(me, ps, reged)
    else retry.me → changeStatus!Pause → MainAltPause(me, ps, reged, maybes)
  else
    |~| p : toReg •
      if p == TIMEOUT or p == ORELSE then
        MainAltReg(me, ps, reged, remove(toReg, p), maybes)
      else
        register.me.p → registerResp?p'!me?resp →
          Assert(p' == p)(
            if resp == YES then
              changeStatus!Dereg → MainAltDereg(me, ps, remove(reged, p), p)
            else if resp == NO then
              MainAltReg(me, ps, add(reged, p), remove(toReg, p), maybes)
            else -- resp == MAYBE
              MainAltReg(me, ps, reged, remove(toReg, p), add(maybes, p)) )

MainAltPause(me, ps, reged, maybes) =
  STOP
  [> ( getToRunNo → MainAltReg(me, ps, reged, maybes, {})
       [] getToRun?p → MainAltDereg(me, ps, remove(reged, p), p) )
Before doing an untimed wait, the MainAlt sends a message to the Facet on setReged, giving the number of registered channels, and receiving back a boolean indicating whether all branches are closed. If so (state MainAltAllClosed) it runs the orelse branch if there is one, or aborts. If not all branches are closed, it waits to receive either a wakeUp or allClosed message.
MainAltWait(me, ps, reged) =
  setReged!card(reged)?allBranchesClosed →
    if allBranchesClosed then MainAltAllClosed(me, ps, reged)
    else -- wait for signal from Facet
      wakeUp?p : reged → MainAltDereg(me, ps, remove(reged, p), p)
      [] allClosed → MainAltAllClosed(me, ps, reged)

MainAltAllClosed(me, ps, reged) =
  if member(ORELSE, ps) then changeStatus!Dereg → MainAltDereg(me, ps, reged, ORELSE)
  else abort.me → STOP
The state MainAltWaitTimeout describes the behaviour of waiting with the possibility of selecting a timeout branch. The MainAlt can again be woken up by a wakeUp event; we also model the possibility of an allClosed event, but signal an error if one occurs (subsequent analysis with FDR verifies that they can't occur). We signal a timeout on the timeout channel. The MainAlt then checks with the Arbitrator whether it has lost a race with a commit; if not (then branch) it runs the timeout branch; otherwise (else branch) it waits to be woken by the Facet.

MainAltWaitTimeout(me, ps, reged) =
  changeStatus!WaitTO →
    ( ( wakeUp?p : reged → MainAltDereg(me, ps, remove(reged, p), p)
        [] allClosed → error → STOP )
      [> timeout.me → checkRace.Timedout?resp →
           if resp then changeStatus!Dereg → MainAltDereg(me, ps, reged, TIMEOUT)
           else wakeUp?p : reged → MainAltDereg(me, ps, remove(reged, p), p) )

MainAltDereg(me, ps, toDereg, p) =
  if toDereg == {} then
    changeStatus!Done → signal.me.chanOf(p) → MainAlt(me, ps)
  else
    |~| p1 : toDereg • deregister.me.p1 → MainAltDereg(me, ps, remove(toDereg, p1), p)
The model of the Facet is a fairly straightforward extension of that in Section 3, dealing with the closing of channels and communications with the Arbitrator as described above.

Facet(me) = changeStatus?s → Assert(s == Init)(FacetReg(me, 0))

FacetReg(me, closed) =
  commit?p : aports(me)!me → commitResp.me.p!MAYBE → FacetReg(me, closed)
  [] changeStatus?s →
       ( if s == WaitTO then FacetWaitTimeout(me, closed)
         else if s == Dereg then FacetDereg(me)
         else Assert(s == Pause)(FacetPause(me, closed)) )
  [] chanClosed?p : aports(me)!me → Assert(closed ...
FacetPause(me, closed) =
  commit?p : aports(me)!me → commitResp.me.p!YES → FacetPause'(me, closed, p)
  [] getToRunNo → FacetReg(me, closed)
  [] chanClosed?p : aports(me)!me → Assert(closed ...
FacetPause'(me, p) =
  commit?p1 : aports(me)!me → commitResp.me.p1!NO → FacetPause'(me, p)
  [] getToRun!p → FacetDereg(me)
  [] chanClosed?p1 : aports(me)!me → FacetPause'(me, p)

FacetWait(me, closed, nreged) =
  commit?p : aports(me)!me → checkRace.Commit?resp →
    ( if resp then -- wake up body of MainAlt
        wakeUp!p → commitResp.me.p!YES → FacetDereg(me)
      else commitResp.me.p!NO → FacetWait(me, closed, nreged) )
  [] changeStatus?s → Assert(s == Dereg)(FacetDereg(me))
  [] chanClosed?p : aports(me)!me → Assert(closed ...
The Arbitrator keeps track of whether it has received a Timedout or Commit message, or neither, on this round; it replies true to the first Timedout or Commit, indicating no race, and false to any subsequent one.

Arbitrator(status) =
  checkRace.Init!false → Arbitrator(Init)
  [] checkRace.Timedout!(status == Init) →
       Assert(status == Init or status == Commit)(Arbitrator(Timedout))
  [] checkRace.Commit!(status == Init) →
       Assert(status == Init or status == Timedout)(
         if (status == Init) then Arbitrator(Commit) else Arbitrator(Timedout) )
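The Scala body of Arbitrator.checkRace is elided in the implementation outline of Section 5, so the following is a hedged sketch of one way to realise the behaviour just described, assuming integer status constants INIT, TIMEDOUT and COMMIT that mirror the CSP values; it is an illustration, not the paper's code.

  private object Arbitrator {
    private var status = INIT

    // Replies true to the first TIMEDOUT or COMMIT of a round (no race) and
    // false to a later, competing one; a call with INIT starts a new round.
    def checkRace(s: Int): Boolean = synchronized {
      if (s == INIT) { status = INIT; false }
      else {
        val noRace = (status == INIT)   // the first arbitration request wins
        if (noRace) status = s          // record the winner of the round
        noRace
      }
    }
  }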
The definition of a channel is very similar to before, except in the initial state it may receive a command to close; it then sends a chanClosed message to each of the alts registered with it, and then sends a CLOSED response to any subsequent attempt to register.

Channel(me, reged) =
  ... -- as before
  [] close.me → ChannelClosing(me, reged)

ChannelClosing(me, reged) =
  if reged == {} then ChannelClosed(me)
  else
    |~| pa' @@ (port', a') : reged •
      chanClosed!port'.a' → ChannelClosing(me, remove(reged, pa'))

ChannelClosed(me) =
  register?a : calts(me)?port : inter(ports(me), aports(a)) →
    registerResp.port.a!CLOSED → ChannelClosed(me)
  [] deregister?a?p : ports(me) → ChannelClosed(me)
4.2. Analysing the Design

We have built configurations, using this extended compound alt, as in Figures 2 and 6. We have again used FDR to check that each refines a suitable specification, both including and excluding the possibility of timeouts and of channels closing. This gives us very strong confidence that the design is also correct in other configurations.

5. Code

In this section we outline the Scala implementation of the alt. We omit the full code for reasons of space, and because most of it is a fairly straightforward implementation of the CSP model. We give the code for the MainAlt in the appendix. Most of the CSP events are implemented by a call to a procedure with the same name, or the return from a procedure. The implementation needs to deal with a number of issues that we have ignored in the CSP models.

• In the CSP models, each alt registered with its channels in a nondeterministic order. Our analysis has, therefore, considered all possible orders for registering. The implementation supports two different orders: if the alt is a prialt, then the channels are registered in the order given in the program; otherwise, on the first execution of the alt, the channels are registered in the order given in the program, and on subsequent executions, they are registered starting from the one after the one registered last on the previous execution. (We discuss in the Conclusions how alts with different priorities interact.)
• The alt has to evaluate the guards. Our CSP analysis ignored guards; this abstraction is sound, as it is equivalent to the branches whose guards are false being filtered out prior to registration.
• The implementation of standard reading and writing on channels has to be adapted to interact with alts: if a thread attempts to do a standard read or write on a channel, the channel checks whether any alt registered at the other port is able to commit.
• The implementation also supports buffered channels. This part of the implementation is very similar to that for synchronous channels, except the OutPort will always reply YES to a register if the buffer is not full, and the InPort will always reply YES to a register if the buffer is not empty.

5.1. The Alt Class

The structure of the Alt class is as in Figure 14. The class is parameterised by a sequence of Events, defining its branches. In the case of standard branches, the Event acts as a wrapper around the port, the guard and the command; the Events for timeout and orelse branches are similar; see the class diagram in Figure 15. The Alt class also has a boolean parameter priAlt indicating whether it is a prialt. It provides a default constructor, initialising to a non-prialt.

class Alt(events: Seq[Alt.Event], priAlt: Boolean){
  def this(events: Seq[Alt.Event]) = this(events, false)
  def apply(): Unit = MainAlt.apply();
  def repeat = CSO.repeat { this(); }
  private [cso] def commit(n: Int): Int = Facet.commit(n);
  private [cso] def chanClosed(n: Int) = Facet.chanClosed(n);

  private object MainAlt extends Pausable{
    def apply(): Unit = synchronized {...}
    def wakeUp(n: Int) = synchronized {...}
    def allClosed = synchronized{...}
  }

  private object Facet {
    private var status = INIT;
    def commit(n: Int): Int = synchronized{...}
    def chanClosed(n: Int) = synchronized{...}
    def changeStatus(s: Int) = synchronized {...}
    def setReged(nReged: Int): Boolean = synchronized{...}
    def getToRun: Int = synchronized{...}
  }

  private object Arbitrator {
    def checkRace(s: Int): Boolean = synchronized{...}
  }
}

Figure 14. The structure of the Alt class
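For orientation, a client might construct and run an alt along the following lines; this usage sketch assumes the CSO -?-> event notation that appears again in Section 5.3, and the channel names are purely illustrative.

  // Read from whichever of c1, c2 becomes ready first (illustrative only).
  val a = new Alt(List(c1 -?-> { val x = c1?; println(x) },
                       c2 -?-> { val y = c2?; println(y) }))
  a()        // execute the alt once (the apply procedure)
  a.repeat   // or execute it repeatedly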
At the top level the class provides two public procedures: apply, which corresponds to executing the alt, and which is implemented within the MainAlt; and repeat, which simply repeatedly executes the alt. It also has two procedures that are private to the CSO package and called by channels, commit and chanClosed; both correspond to the CSP events with the same name, and are implemented within the Facet. Each Event has procedures
Figure 15. Class diagram for Events (Event and its subclasses InPortEvent, OutPortEvent, TimeoutEvent and OrElseEvent; the port and channel classes InPort, OutPort, Chan, SyncChan and BufImpl).
def register(a: Alt, n: Int): Int = {...}
def deregister(a: Alt, n: Int) = {...}
to allow an alt to register and deregister with the underlying channel. When the alt calls register, it passes the index n of the corresponding branch. For standard Events (i.e. excluding timeout and orelse Events), these calls are forwarded on to the appropriate channels. The index n is returned in subsequent calls to the commit and chanClosed procedures. The Facet has a procedure changeStatus, which the MainAlt uses to inform it when it moves between the different stages of its operation: registering with channels, pausing, waiting, waiting with a timeout, deregistering with channels, and executing the appropriate branch; the status variable records the current status. We explain the other procedures of the Facet and Arbitrator below.

When MainAlt.apply is called, it calls register on each of its Events for which the guard is true (cf. state MainAltReg in the CSP model). If any call of register returns YES, that Event's command can be executed; see below. If none returns YES, and at least one returns MAYBE, then it enters the pausing phase.

In the pausing phase (cf. state MainAltPause), the MainAlt calls a procedure pause, within the Pausable trait. This performs a binary exponential back-off algorithm, inspired by the IEEE 802.3 Ethernet Protocol (see [8] or, e.g., [16]). The call to pause sleeps for a random amount of time. The maximum possible length of pause doubles on each call, to increase the chance of two alts in contention getting out of sync. (A final call to a procedure resetPause resets the maximum delay to a starting value of 1ns.) After the call to pause completes, the MainAlt finds out from the Facet whether it has received any commit call in the meantime, by calling the getToRun procedure (this plays the role of both the getToRun and getToRunNo events; a negative result represents the latter); if that returns a positive result, the appropriate Event's command can be executed; see below. Otherwise, the MainAlt again tries to register with the remaining channels.

If and when the MainAlt receives a reply of NO from each of its standard Events, it needs to wait for one to become ready.
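The Pausable trait itself is not listed in the paper, so the following is an illustrative sketch of the binary exponential back-off just described; the random sleep, the doubling bound and the 1ns starting value follow the text, while the field names and the use of Thread.sleep are assumptions.

  trait Pausable {
    private var maxDelay = 1L              // current maximum delay, in nanoseconds
    private val rand = new scala.util.Random

    // Binary exponential back-off: sleep for a random time up to maxDelay,
    // then double maxDelay for the next call (no upper bound in this sketch).
    protected def pause: Unit = {
      val delay = 1L max (rand.nextDouble() * maxDelay).toLong
      Thread.sleep(delay / 1000000L, (delay % 1000000L).toInt)
      maxDelay = maxDelay * 2
    }

    // Reset the maximum delay to its starting value of 1ns.
    protected def resetPause: Unit = { maxDelay = 1L }
  }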
Consider, first, the case where there is no timeout Event (cf. state MainAltWait). The MainAlt first calls Facet.setReged, passing in the number of registered Events. A boolean is returned that indicates whether all the channels have closed in the meantime. If so, then no Event is enabled, so the alt executes the orelse branch if there is one, or throws an Abort exception. If at least one Event is enabled, the MainAlt sets a boolean flag waiting, and executes

  while(waiting) wait()
waiting for one of two things to happen.

• If the Facet receives a call to commit, it wakes up the MainAlt by calling the following procedure, passing in the index of the Event:

  def wakeUp(n: Int) = synchronized {
    assert(waiting); toRun = n; waiting = false; notify();
  }
• If the Facet finds that all the channels have been closed, it wakes up the MainAlt by calling the following procedure:

  def allClosed = synchronized {
    assert(waiting); allBranchesClosed = true; waiting = false; notify();
  }
When the MainAlt is awoken, it checks the value of allBranchesClosed; if it's true (so allClosed was called), it executes the orelse branch if there is one, or throws an Abort exception; otherwise (so wakeUp was called) it can run the process indicated by toRun.

If there is a timeout branch, then things proceed much as above, except the MainAlt performs a timed wait, and the Facet will not call allClosed. When the MainAlt wakes up, it can tell, by inspecting the waiting variable, whether the timeout was reached, or if it was awoken by the Facet calling wakeUp. In the latter case, things proceed as above. If the timeout was reached, then the MainAlt checks whether there was a race with a received commit by calling Arbitrator.checkRace; if there was no race, then it proceeds as above; if there was a race, it performs another (untimed) wait, waiting to be woken by the Facet. (Likewise, if the Facet receives a commit while the MainAlt is waiting, it checks whether there was a race with a timeout by calling Arbitrator.checkRace.)

Finally, once the Event to execute has been selected, the MainAlt calls deregister on each of the other registered Events, and executes the command of the selected Event.

5.2. Testing

We have tested the implementation on configurations corresponding to Figures 2 and 6. The implementation seems robust and efficient. For example, the configuration in Figure 2 achieves over 40,000 communications per second (over 20,000 on each channel) on a standard quad-core PC. We have also used it in our Concurrent Programming course.

The binary back-off algorithm seems to work well, giving rather small delays. We performed informal testing of a simple configuration based on Figure 2; in this configuration, the two alts are likely to register with channels at the same time, leading to more pausing than in most other configurations. On average, only about 0.1% of the total time was spent in the pause procedure.

5.3. Restrictions

The current implementation is subject to the following restrictions on usage:
1. If a shared port (e.g. of a ManyMany channel, whose ports can be shared by several senders and by several receivers) is involved in an alt, it must not simultaneously be read or written by a non-alt process;
2. An alt may not have two simultaneously enabled branches using the same channel (although it may have two branches using the same channel with disjoint guards);
3. The current implementation does not cover network channels.

The reason for the first restriction is that —as mentioned in the introduction— the alt itself does not perform a read or write: the body of the selected branch is responsible for this. Consider, for example, P || Q || R where

  def P = proc{ alt ( c -?-> { val x = c?; ... } | ... ) }
  def Q = proc{ val y = c?; ... }
  def R = proc{ c!3; ... }
There is a danger that the alt in P selects its first branch, but then the read in that branch is pre-empted by the read in Q. It seems impossible to avoid this, without the alt performing the read or write.

The reason for the second restriction is that without it the system can livelock in certain circumstances. Consider, for example, P || Q where

  def P = proc{ alt ( c -?-> { ... } | c -?-> { ... } ) }
  def Q = proc{ c!3; ... }
Suppose P has already registered with c corresponding to the first branch. Now suppose Q executes c!3, and so locks c. At this point, P tries to register with c corresponding to the second branch, but is blocked, because it cannot get the lock on c. Meanwhile, within Q, c will repeatedly call P's commit method, but repeatedly get a response of MAYBE, because P has not finished registering. The system is stuck, since Q's write on c will never complete, so P will never get the lock on c to finish registering.

We believe the extension to network channels would be reasonably straightforward: essentially the same design can be used, although with register and commit messages being sent across the network.

6. Conclusions

In this paper we have described the development of a generalised alt operator, using CSP models and FDR analysis to develop a correct design. We consider the use of CSP and FDR to have been invaluable in this work: we believe we would not have ended up with a working implementation without such an approach.

Recall that we initially (Section 2) used a sequential model of an alt, and then (Section 3) developed this into a parallel model, corresponding to the composition of several objects. We consider this approach to have been useful: first considering the inter-component protocol, and then considering the intra-component protocol. (In fact, some of the latter stage proved inconsistent with the former, so we subsequently revised the models from the former stage, so as to tell a coherent story in this paper.) The next development steps were to add the features of channels closing and timeouts (Section 4); separating these features from the main line of development helped to clarify the ideas. The final step was to refine the models to code (Section 5) and test; as noted in Section 5.2, the implementation gives very acceptable performance.

Despite the formal development, this step was not completely straightforward. Besides the inevitable small errors, one issue proved difficult to resolve. A feature of the Scala implementation is that when each object
of a class has an inner object —like the Facet inside each Alt— the inner object is created at the point of first use. Unfortunately, if the object is multi-threaded, it is possible for two threads to create this object at almost the same time, meaning the inner object is not unique! An earlier implementation of Alt fell foul of this, so that sometimes (once every few million iterations) an Alt ended up with two Facets, leading to incorrect behaviour. This problem is now avoided by MainAlt.apply initialising the Facet (and Arbitrator) at the outset, before any other thread can call a procedure.

6.1. CSP and Monitors

One of the goals of this paper was to investigate the relationship between CSP processes and implementations using monitors; we discuss this here. Recall that our aim is to avoid using internal channels (or anything that is channel-like, or has the same overheads in terms of context switches). Some may object to this restriction, and argue for using channels; we would agree with this point of view in many applications; but sometimes it is necessary to code at a lower level of abstraction in order to achieve greater efficiency.

A simple monitor that does not perform wait and has procedures f1,...,fn corresponds to a process with shape

  Proc(state) =
    f1?arg1 → ... → fResp1!resp1 → Proc(state1)
    [] ...
    [] fn?argn → ... → fRespn!respn → Proc(staten)
Here arg1,...,argn correspond to the arguments of the procedures, and resp1,...,respn correspond to the returned values; state captures the state of the monitor, and state1,...,staten capture how the state is updated. The elided part mostly corresponds to calls to and returns from other monitors, the call and return events being consecutive; the elided part may also include signal events that are used within the specification for the FDR analysis.

Some CSP processes cannot be written in the above form, but can still be implemented as a monitor by using wait. For example, suppose we have a CSP process with a state of the following form, as an intermediate state within the part of the process corresponding to a procedure call.

  e1?x1 → P1 [] ... [] en?xn → Pn
This process is waiting to receive a message from another process. The process can often be implemented by the code

  waiting = true;
  while(waiting) wait();   // wait to be woken up
  wakeUpType match {       // which wake-up event happened?
    case 1 => ...          // code for P1
    ...
    case n => ...          // code for Pn
  }
and by providing procedures of the following form, for k = 1,...,n (corresponding to events of the form ek!arg in the other process).

  def ek(arg: Tk) = synchronized{
    assert(waiting);
    wakeUpType = k; xk = arg;   // pass data
    waiting = false; notify();  // wake up waiting process
  }
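As a concrete, purely illustrative instance of this pattern (for n = 1), a small monitor that hands an integer from one process to another waiting process might be written as follows; get contains the wait loop, and put plays the role of the ek procedures. The class and names are assumptions made for the example, not part of the paper's code.

  class Handoff {
    private var waiting = false
    private var x = 0                 // data passed from put to get

    // The waiting state: block until another process calls put.
    def get(): Int = synchronized {
      waiting = true
      while (waiting) wait()          // wait to be woken up
      x
    }

    // Corresponds to an event e!arg offered by the other process; only
    // called when the partner is known to be waiting.
    def put(arg: Int): Unit = synchronized {
      assert(waiting)
      x = arg; waiting = false
      notify()                        // wake up the waiting process
    }
  }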
Here waiting, wakeUpType and xk (k = 1,...,n) are private variables of the monitor. In order for this to work, we need to ensure that other processes try to perform one of e1,...,en only when this process is in this waiting state. Further, we need to be sure that no other process calls one of the main procedures f1,...,fn while this process is in this state. We can test for both of these requirements within our CSP models.

The restrictions in the previous paragraph prevent many processes from being directly implemented as monitors. In such cases we believe that we can often follow the pattern corresponding to the use of the Facet: having one monitor that performs most of the functionality, and a second monitor (like the Facet) that keeps track of the state of the main monitor, receives procedure calls, and passes data on to the main monitor where appropriate. In some such cases, it will also be necessary to follow the pattern corresponding to the use of the Arbitrator, to arbitrate in the case of race conditions. We leave further investigation of the relationship between CSP and monitors for future work.

6.2. Priorities

An interesting question concerns the behaviour of a system built as the parallel composition of prialts with differing priorities, such as P || Q where:

  def P = proc{ prialt ( c1 -!-> { c1!1; } | c2 -!-> { c2!2; } ) }
  def Q = proc{ prialt ( c2 -?-> { println(c2?); } | c1 -?-> { println(c1?); } ) }
It is clear to us that such a system should be able to communicate on either c1 or c2, since both components are; but we should be happy whichever way the choice between the channels is resolved. Consider the implementation in this paper. Suppose that P runs first, and registers with both of its channels before Q runs; then when Q tries to register with c2, it will receive a response of YES, so that branch will run: in other words, Q's priority will be followed. Similarly, if Q runs first, then P's priority will be followed. If both run at the same time, so they both receive a response of MAYBE to their second registration attempt, then they will both pause; which channel is chosen depends upon the relative length of their pauses.

6.3. Future Plans

Finally, we have plans for developing the implementation of alts further. We would like to change the semantics of alt, so that the alt operator is responsible for performing the read or write of the branch it selects. This will remove the first restriction discussed at the end of Section 5. (This would also remove a possible source of bugs, where the programmer forgets to read or write the channel in question.) This would not change the basic protocol described in this paper.

A barrier synchronisation [10] allows n processes to synchronise together, for arbitrary n. It would be useful to extend alts to allow branches to be guarded by barrier synchronisations, as is allowed in JCSP [17].

Acknowledgements

We would like to thank Bernard Sufrin for implementing CSO and so interesting us in the subject, and also for numerous discussions involving the intended semantics for alts. We would also like to thank the anonymous referees for a number of useful comments and suggestions.
References

[1] Neil Brown. Communicating Haskell Processes: Composable explicit concurrency using monads. In Communicating Process Architectures (CPA 2008), pages 67–83, 2008.
[2] Neil Brown. Choice over events using STM. http://chplib.wordpress.com/2010/03/04/choice-over-events-using-stm/, 2010.
[3] Neil Brown. Conjoined events. In Proceedings of the Advances in Message Passing Workshop, 2010. http://twistedsquare.com/Conjoined.pdf.
[4] N. Carriero, D. Gelernter, and J. Leichter. Distributed data structures in Linda. In Proc. Thirteenth ACM Symposium on Principles of Programming Languages, pages 236–242, 1986.
[5] Formal Systems (Europe) Ltd. Failures-Divergence Refinement—FDR 2 User Manual, 1997. Available via URL http://www.formal.demon.co.uk/FDR2.html.
[6] Tim Harris, Simon Marlow, Simon Peyton Jones, and Maurice Herlihy. Composable memory transactions. In PPoPP '05, pages 48–60, 2005.
[7] C. A. R. Hoare. Communicating Sequential Processes. Prentice Hall, 1985.
[8] IEEE 802.3 Ethernet Working Group website, http://www.ieee802.org/3/.
[9] INMOS Ltd. The occam Programming Language. Prentice Hall, 1984.
[10] H. F. Jordan. A special purpose architecture for finite element analysis. In Proc. 1978 Int. Conf. on Parallel Processing, pages 263–6, 1978.
[11] Alistair A. McEwan. Concurrent Program Development. DPhil, Oxford University, 2006.
[12] Martin Odersky, Lex Spoon, and Bill Venners. Programming in Scala. Artima Press, 2008.
[13] A. W. Roscoe. The Theory and Practice of Concurrency. Prentice Hall, 1997.
[14] Bernard Sufrin. Communicating Scala Objects. In Proceedings of Communicating Process Architectures (CPA 2008), 2008.
[15] Bernard Sufrin. CSO API documentation. http://users.comlab.ox.ac.uk/bernard.sufrin/CSO/doc/, 2010.
[16] Andrew S. Tanenbaum. Computer Networks. Prentice Hall, 1996.
[17] Peter Welch, Neil Brown, James Moores, Kevin Chalmers, and Bernhard Sputh. Integrating and extending JCSP. In Communicating Process Architectures (CPA 2007), 2007.
[18] Peter Welch, Neil Brown, James Moores, Kevin Chalmers, and Bernhard Sputh. Alting barriers: synchronisation with choice in Java using CSP. Concurrency and Computation: Practice and Experience, 22:1049–1062, 2010.
A. Code Listing

We give here the code for the MainAlt.

private object MainAlt extends Pausable{
  private var waiting = false;            // flag to indicate the alt is waiting
  private var toRun = -1;                 // branch that should be run
  private var allBranchesClosed = false;  // are all branches closed?
  private var n = 0;                      // index of current event

  /* Execute the alt */
  def apply(): Unit = synchronized {
    Facet.changeStatus(INIT); Arbitrator.checkRace(INIT);
    var enabled = new Array[Boolean](eventCount); // values of guards
    var reged = new Array[Boolean](eventCount);   // is event registered?
    var nReged = 0;           // number of registered events
    var done = false;         // Have we registered all ports or found a match?
    var success = false;      // Have we found a match?
    var maybes = false;       // have we received a MAYBE?
    var timeoutMS : Long = 0; // delay for timeout
    var timeoutBranch = -1;   // index of timeout branch
    var orElseBranch = -1;    // index of orelse branch
    if (priAlt) n = 0;
    toRun = -1; allBranchesClosed = false;
    // Evaluate guards; this must happen before registering with channels
    for(i <- 0 until eventCount) enabled(i) = events(i).guard();
    while(!done){
      var count = 0;          // number of events considered so far
      while(count < eventCount && !done){
        if (!reged(n)){       // if event(n) not already registered
          val event = events(n);
          if (enabled(n)){
            event match {
              case Alt.TimeoutEvent(tf, _) =>
                if (timeoutBranch >= 0 || orElseBranch >= 0)
                  throw new RuntimeException("Multiple timeout/orelse branches in alt");
                else{ timeoutMS = tf(); timeoutBranch = n; reged(n) = true; }
              case Alt.OrElseEvent(_, _) =>
                if (timeoutBranch >= 0 || orElseBranch >= 0)
                  throw new RuntimeException("Multiple timeout/orelse branches in alt");
                else{ orElseBranch = n; reged(n) = true; }
              case _ => {     // InPortEvent or OutPortEvent
                event.register(theAlt, n) match{
                  case YES => { Facet.changeStatus(DEREG); toRun = n; done = true; success = true; }
                  case NO => { reged(n) = true; nReged += 1; }
                  case MAYBE => maybes = true;
                  case CLOSED => enabled(n) = false; // channel has just closed
                } // end of event.register(theAlt, n) match
              } // end of case
            } // end of event match
          } // end of if (enabled(n))
        } // end of if (!reged(n))
        n = (n+1)%eventCount; count += 1;
      } // end of inner while
      if (!done)              // All registered, without finding a match
        if (maybes){
          // Random length pause to break symmetry
          Facet.changeStatus(PAUSE); pause;
          // see if a commit has come in
          toRun = Facet.getToRun;
          if (toRun < 0) maybes = false; // No, so reset variables for next round
          else{ done = true; success = true; } // done
        } // end of if (maybes)
        else done = true;
    } // end of outer while
    resetPause;
    // All events now registered with their channels
    if (!success){            // No registration returned YES
      if (timeoutMS == 0){    // no timeout
        if (nReged == 0)      // no event enabled
          if (orElseBranch >= 0) toRun = orElseBranch else throw new Abort;
        // Need to wait for a channel to become ready
        waiting = true;
        allBranchesClosed = Facet.setReged(nReged);
        if (!allBranchesClosed) while(waiting) wait(); // wait to be awoken
      }
      else{                   // with timeout
        Facet.changeStatus(WAITTO);
        waiting = true; wait(timeoutMS); // wait to be awoken or for timeout
        if (waiting){ // assume timeout was reached (this could be a spurious wakeup)
          if (Arbitrator.checkRace(TIMEDOUT)){ waiting = false; toRun = timeoutBranch; }
          else // A commit was received just before the timeout.
            while(waiting) wait() // Wait to be woken
        } // end of if (waiting)
      } // end of else (with timeout)
    } // end of if (!success)
    // Can now run branch toRun, unless allBranchesClosed
    if (allBranchesClosed)
      if (orElseBranch >= 0) toRun = orElseBranch else throw new Abort;
    // Deregister events
    Facet.changeStatus(DEREG);
    for(n <- 0 until eventCount)
      if (n != toRun && reged(n)) events(n).deregister(theAlt, n)
    // Finally, run the selected branch
    Facet.changeStatus(DONE);
    events(toRun).cmd();
  } // end of apply

  /* Implementation of WakeUp events; called by Facet to wake up the MainAlt */
  def wakeUp(n:Int) = synchronized {
    assert(waiting);  // use of Arbitrator should ensure this
    toRun = n; waiting = false;
    notify();         // wake up MainAlt
  }

  /* Receive notification from Facet that all branches have been closed */
  def allClosed = synchronized{
    assert(waiting); allBranchesClosed = true; waiting = false; notify();
  }
}
Verification of a Dynamic Channel Model using the SPIN Model Checker

Rune Møllegaard FRIBORG a,1 and Brian VINTER b
a eScience Center, University of Copenhagen
b Niels Bohr Institute, University of Copenhagen
1 Corresponding Author: Rune Møllegaard Friborg, eScience Center, University of Copenhagen, DK-2100 Copenhagen, Denmark. Tel.: +45 3532 1421; Fax: +45 3521 1401; E-mail: [email protected].

Abstract. This paper presents the central elements of a new dynamic channel leading towards a flexible CSP design suited for high-level languages. This channel is separated into three models: a shared-memory channel, a distributed channel and a dynamic synchronisation layer. The models are described such that they may function as a basis for implementing a CSP library, though many of the common features known in available CSP libraries have been excluded from the models. The SPIN model checker has been used to check for the absence of deadlocks, livelocks, starvation and race conditions, and for correct channel communication behaviour. The three models are separately verified for a variety of different process configurations. This verification is performed automatically by doing an exhaustive verification of all possible transitions using SPIN. The joint result of the models is a single dynamic channel type which supports both local and distributed any-to-any communication. This model has not been verified and the large state-space may make it unsuited for exhaustive verification using a model checker. An implementation of the dynamic channel will be able to change the internal synchronisation mechanisms on-the-fly, depending on the number of channel-ends connected or their location.

Keywords. CSP, PyCSP, Distributed Computing, Promela, SPIN.
Introduction

Most middleware designers experience situations where they need to choose between generality and performance. To most experienced programmers this dilemma is natural, since high-performance implementations are typically based on assumptions that, from the usage point of view, translate into limitations. The PyCSP project has since the beginning had a strict focus on generality, attempting to present the programmer with only one channel type and one process type. The single channel type has succeeded, while the single process type has been attempted through individual PyCSP packages with separate process types. These process types are: greenlets (co-routines), threads and processes, as seen from the operating system view. The reason for the three different packages was to enable PyCSP applications to have up to 100,000 CSP processes (co-routines), posix threads for cross-platform support and OS processes executing in parallel while not being limited by the CPython Global Interpreter Lock [1].

We would like to reach the point of only one process type and one channel type, but for this to be possible we need a channel type which preserves the qualities of the previous channel type, i.e. be of the kind any-to-any and support external choice in both directions as well as offer both channel poisoning and retirement like the existing PyCSP channels. The new channel must support three possible process locations: within the same PyCSP process,
on a different PyCSP process within the same compute-node and, finally, a different PyCSP process on a different compute-node. It is of course trivial to make a common channel type based on the lowest common denominator, i.e. a networked channel, since this will also function within a compute-node and even within a process. However, the downside is self-evident, since an algorithm deployed on shared-nothing mechanisms is much slower than one that can employ shared state. The solution to the proposed problem is a channel that can start out with the strongest possible requirements, i.e. exist within a single process, and then dynamically decrease the requirements as needed, while at the same time employing more complex, and more costly, algorithms. The present work is a presentation of such a dynamic channel type. The algorithms that are employed grow quite complex to ensure maximum performance in any given scenario, thus much work has been put into verifying the different levels a channel may reach. Using the SPIN model checker, we perform an exhaustive verification of the local and distributed channel levels for a closed set of process configurations. These include any-to-any channels with input and output guards in six different combinations.

Background

PyCSP is currently a mix of four implementations providing one shared API, such that the user can swap between them manually. The four implementations do not share any code and the different channel implementations can not function as guards for a single external choice, since they are not compatible.

Listing 1. A simple PyCSP example demonstrating the concurrent nature in CSP upholding to unbounded non-determinism and protected against race conditions during termination.

# Select implementation
import sys
import pycsp.threads as pycsp
# (alternatives: pycsp.processes, pycsp.greenlets, pycsp.net)

@pycsp.process
def source(chan_out, N):
    for i in range(N):
        chan_out("Hello (%d)\n" % (i))
    pycsp.retire(chan_out)          # The channel has one less writer

@pycsp.process
def sink(chan_in):
    # The loop terminates on the signal that announces that all
    # writers have retired
    while True:
        sys.stdout.write(chan_in())

ch = pycsp.Channel()
pycsp.Parallel(                     # Run in parallel
    5 * source(ch.writer(), 10),    # Five source processes
    5 * sink(ch.reader()))          # Five sink processes
In listing 1, we show a simple application using the current PyCSP where channels are any-to-any, synchronous and uni-directional. Processes can commit to reading or writing from single channels or they can commit to a set of distinct channels using the alt (external choice) construct. Committing to the alt construct means that exactly one of the channel
operations will be accepted and all others are ignored. The alt construct allows a mix of read and write operations. The first model, which we present in section 2.1, is a model of the current channel implementation for threads in PyCSP.

In [2] we presented PyCSP for scientific users as a means of creating scalable scientific software. The users of a CSP library should not have to think about whether they might be sending a channel-end to a process that might be running in a remote location, or how to work around an external choice on channels that do not support it. One of the powerful characteristics of CSP is that every process is isolated, which means that we can move it anywhere and, as long as the channels still work, the process will execute. Because of this, processes can easily be reused, since all the inputs and outputs are known.

The network-enabled PyCSP implementation is a prototype and uses a single channel server to handle all channel traffic. The single channel server runs the thread implementation of PyCSP internally, which creates a temporary thread for every request. The server is a serious bottle-neck for the channel communication and has limited the type of parallel applications implemented in PyCSP. In this paper we present a distributed channel model and check its correctness using the SPIN Model Checker.

Promela and the SPIN Model Checker

Promela (Process Meta Language) is a process modeling language whose purpose is to verify the logic in concurrent systems. In Promela models, processes can be created dynamically and can communicate through synchronous or asynchronous message channels. Also, the processes are free to communicate through shared memory. If variables or channels are created globally, then they are available through shared memory to all Promela processes. Promela has a basic set of types for variables: bit, bool, byte, mtype (similar to enum), short and int. The models presented in this paper use shared memory when modeling internal communication, and message channels when modeling distributed communication.

In Promela, every statement is evaluated to one of two states: it is either enabled or blocked. Statements such as assignments, declarations, skip or break are always enabled, while conditions evaluated to false are blocked. When a statement is blocked, the execution for that process halts until the statement becomes enabled. The following is an example of a blocked statement following an enabled statement:

value = 1;      /* enabled */
value == 0;     /* blocked */
When executing (simulating) a Promela model, the statements in concurrent processes are selected randomly to simulate a concurrent environment. To allow modeling synchronisation mechanisms, a sequence of statements can be indicated as atomic, by using the atomic keyword and enclosing the statements in curly brackets:

atomic {
  /* The first statement in an atomic region is allowed to block. */
  process_lock == 0;
  process_lock = 1;
}
To organise the code in Promela we use inline functions. When declaring inline functions, the parameters in the parameter list have no types. The inline functions are used exclusively as a replace-pattern when generating the complete model, with a single body for each thread. All values passed to an inline function are passed by reference. There is no return construct in Promela, thus values must be returned by updating variables through the parameter list.
Control flows in Promela can be defined using either if .. fi or do .. od constructs. The latter executes the former repeatedly until the break statement is executed. Listing 2 shows two examples of the constructs: the first, where else is taken only when there are no other enabled guards, and another where an always enabled condition might never be executed.

Listing 2. Example of control flows in Promela.

if
:: (A == true) -> printf("A is true, B is unknown");
:: (B == true) -> printf("B is true, A is unknown");
:: else -> printf("A and B are false");
fi

do
:: (skip) -> printf("If A is always true, then this may never be printed.");
   break;   /* breaks the do loop */
:: (A == true) -> printf("A is true"); i = i + 1;
od
If the SPIN model checker performs an automatic verification of the above code, then it will visit every possible state until it aborts with the error: "max search depth too small". The reason is that there is no deterministic set of values for i, thus the system state space can never be completely explored. It is crucial that all control flows have a valid end-state; otherwise SPIN cannot verify the model.

The SPIN model checker can verify models written in Promela. In 1986, Vardi and Wolper [3] published the foundation for SPIN, an automata-theoretic approach to automatic program verification. SPIN [4] can verify a model for correctness by generating a C program that performs an exhaustive verification of the system state space. During simulation and verification SPIN checks for the absence of deadlocks, livelocks, race conditions, unspecified receptions and unexecutable code. The model checker can also be used to show the correctness of system invariants, find non-progress execution cycles and linear time temporal constraints, though we have not used any of those features for the model checking in this paper.

1. Related Work

Various possibilities for synchronous communication can be found in most network libraries, but we focus exclusively on network-enabled communication libraries that support Hoare's CSP algebra [5,6]. Several projects have investigated how to do CSP in a distributed environment. JCSP [7], Pony/occam-π [8] and C++CSP [9] provide network-enabled channels. Common to all three is that they use a specific naming for the channels, such that channels are reserved for one-to-one, one-to-any, network-enabled and so on. JCSP and C++CSP2 have the limitation that they can only do external choice (alt) on some channel types. Pony enables transparent network support for occam-π. Schweigler and Sampson [8] write: "As long as the interface between components (i.e. processes) is clearly defined, the programmer should not need to distinguish whether the process on the other side of the interface is located on the same computer or on the other end of the globe". Unfortunately the pony implementation in occam-π is difficult to use as basis for a CSP library in languages like C++, Java or Python, as it relies heavily on the internal workings of occam-π. Pony/occam-π does
not currently have support for networked buffered channels. The communication overhead in Python is quite high, thus we are especially interested in fast one-to-one buffered networked channels, because they have the potential to hide the latency of the network. This would, for large parallel computations, make it possible to overlap computation with communication.

2. The Dynamic Channel

We present the basis for a dynamic channel type that combines multiple channel synchronisation mechanisms. The interface of the dynamic channel resembles a single channel type. When the channel is first created, it may be an any-to-any specialised for co-routines. The channel is then upgraded on request, depending on whether it participates in an alt and on the number of channel-ends connected. The next synchronisation level for the channel may be an optimised network-enabled one-to-one with no support for alt. Every upgrade stalls the communication on the channel momentarily while all active requests for a read or write are transformed to a higher synchronisation level. The upgrades continue, until the lowest common denominator (a network-enabled any-to-any with alt support) is reached.

This paper presents three models that are crucial parts in the dynamic channel design. These are: a local channel synchronisation model for shared memory, a distributed synchronisation model and the model for on-the-fly switching between synchronisation levels. We have excluded the following features to avoid state-explosion during automatic verification: mobility of channel ends, termination handling, buffered channels, skip / timeout guards and a discovery service for channel homes. Basically, we have simplified a larger model as much as possible and left out important parts, to focus on the synchronisation model handling the communication. The different models are written in Promela to verify the design using the SPIN model checker. The verification phase is presented in section 3 where the three models are model-checked successfully. The full model-checked models are available at the PyCSP repository [10]. After the following overview, the models are described in detail:

• the local synchronisation model is built around the two-phase locking protocol. It provides a single CSP channel type supporting any-to-any communication with basic read / write and external choice (alt).
• the distributed synchronisation model is developed from the local model, providing the same set of constructs. The remote communication is similar to asynchronous sockets.
• the transition model enables the combination of a local (and faster) synchronisation model with more advanced distributed models. Channels are able to change synchronisation mechanisms, for example based on the location of channel ends, making it a dynamic channel.

For all models presented we do not handle operating system errors that cause threads to terminate or lose channel messages. We assume that all models are implemented on top of systems that provide reliable threads and message protocols.

2.1. Channel Synchronisation with Two-Phase Locking

The channel model presented here is similar to the PyCSP implementation (threads and processes) from 2009 [11] and will work as a verification of the method used in [11,12]. It is a single CSP channel type supporting any-to-any communication with basic read / write and external choice (alt). In figure 1 we show an example of how the matching of channel operations comes about.
Four processes are shown communicating on two channels using the presented design for negotiating read, write and external choice. Three requests have been posted to channel A
and two requests to channel B. During an external choice, a request is posted on multiple channels. Process 2 has posted its request to multiple channels and has been matched. Process 1 is waiting for a successful match. Process 3 has been matched and is going to remove its request. Process 4 is waiting for a successful match. In the future, process 1 and process 4 are going to be matched. The matching is initiated by both, but only one process marks the match as successful.
Figure 1. Example of four processes matching channel operations on two channels (read and write queues on channels A and B, with Process 1 reading from A, Process 2 performing an external choice over reading from B and writing to A, Process 3 writing to B, and Process 4 writing to A).
Listing 3. Simple model of a mutex lock with a condition variable. This is the minimum functionality, which can be expected from any multi-threading library.

typedef processtype {
  mtype state;
  bit lock;
  bit waitX;
};
processtype proc[THREADS];

inline acquire(lock_id) {
  atomic { (proc[lock_id].lock == 0); proc[lock_id].lock = 1; }
}
inline release(lock_id) {
  proc[lock_id].lock = 0;
}
inline wait(lock_id) {
  assert(proc[lock_id].lock == 1);   /* lock must be acquired */
  atomic {
    release(lock_id);
    proc[lock_id].waitX = 0;         /* reset wait condition */
  }
  (proc[lock_id].waitX == 1);        /* wait */
  acquire(lock_id);
}
inline notify(lock_id) {
  assert(proc[lock_id].lock == 1);   /* lock must be acquired */
  proc[lock_id].waitX = 1;           /* wake up waiting process */
}
We use the two-phase locking protocol for channel synchronisation. When two processes are requesting to communicate on a channel, we accept the communication by first acquiring
the two locks, then checking the state of the two requests and, if successful, updating; finally the two locks are released. This method requires many lock requests, resulting in a large overhead, but it has the advantage that it never has to roll back from trying to update a shared resource.

To perform the local synchronisation between threads, we implement the simple lock model shown in listing 3. This is straightforward to model in Promela, as every statement in Promela must be executable and will block the executing thread until it becomes executable. The implemented lock model is restricted to a single process calling wait. If multiple processes called wait, then the second could erase a recent notify. For the models in the paper, we never have more than one waiting process on each lock.
Channel.read
Alt
Remove request from all involved channels
Channel.remove_write
Channel.remove_read
Channel.write
Initialise request and then post the request to all involved channels
Channel.post_write
Channel.offer - Test matched requests for possible success Request.state: SUCCESS
Lock.notify - Wake up sleeping process
Request.state: READY
Channel.post_read
Channel.match Read and write requests
ditio con ked led c o l b b a Set to en
Lock.wait - Sleep if no match could be made
n
Figure 2. Pseudo call graph for the local channel synchronisation.
In write (Listing 4), a write request is posted to the write queue of the channel and again removed after a successful match with a write request. The corresponding functions read , post read and remove read are not shown since they are similar, except that remove read returns the read value.
Listing 4. The write construct and the functions for posting and removing write requests. The process index _pid contains the Promela thread id.

inline write(ch_id, msg) {
  proc[_pid].state = READY;
  post_write(ch_id, msg);

  /* if no success, then wait for success */
  acquire(_pid);
  if
  :: (proc[_pid].state == READY) -> wait(_pid);
  :: else skip;
  fi;
  release(_pid);

  assert(proc[_pid].state == SUCCESS);
  remove_write(ch_id)
}

inline post_write(ch_id, msg_to_write) {
  /* acquire channel lock */
  atomic { (ch[ch_id].lock == 0) -> ch[ch_id].lock = 1; }
  match(ch_id);
  ch[ch_id].lock = 0;   /* release channel lock */
}

inline remove_write(ch_id) {
  /* acquire channel lock */
  atomic { (ch[ch_id].lock == 0) -> ch[ch_id].lock = 1; }
  ch[ch_id].lock = 0;   /* release channel lock */
}
When matching read and write requests on a channel we use the two-phase locking protocol, where the locks of both involved processes are acquired before the system state is changed. To handle cases where multiple processes have posted multiple read and write requests, a global ordering of the locks (Roscoe's deadlock rule 7 [13]) must be used to make sure they are always acquired in the same order. In this local thread system we order the locks based on their memory address. This is both quick and ensures that the ordering never changes during execution. An alternative for a distributed system would be to generate an index as a combination of the node address and the memory address.

Listing 5. Matching pairs of read and write requests for the two-phase locking.

inline match(ch_id) {
  w = 0;
  r = 0;
  do /* Matching all reads to all writes */
  :: (r < ch[ch_id].rlen) ->
       w = 0;
       do
       :: (w < ch[ch_id].wlen) ->
            offer(ch_id, r, w);
            w = w + 1;
       :: else break;
       od;
       r = r + 1;
  :: else break;
  od;
}
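Outside the Promela model, the same ordering rule is easy to express in an implementation language. The sketch below is illustrative only, assuming POSIX mutexes; it acquires two locks in the order of their memory addresses, so every process agrees on the acquisition order.

#include <pthread.h>

/* Illustrative sketch: acquire two mutexes in a global order based on
   their memory addresses, so all processes lock them in the same order. */
static void acquire_pair(pthread_mutex_t *a, pthread_mutex_t *b) {
    if ((void *)a < (void *)b) {
        pthread_mutex_lock(a);
        pthread_mutex_lock(b);
    } else {
        pthread_mutex_lock(b);
        pthread_mutex_lock(a);
    }
}

static void release_pair(pthread_mutex_t *a, pthread_mutex_t *b) {
    /* release in the reverse of the acquisition order */
    if ((void *)a < (void *)b) {
        pthread_mutex_unlock(b);
        pthread_mutex_unlock(a);
    } else {
        pthread_mutex_unlock(a);
        pthread_mutex_unlock(b);
    }
}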
The two-phase locking in offer (Listing 6) is executed for every possible pair of read and write requests found by match (Listing 5). The first phase acquires locks and the second phase releases locks. Between the two phases, updates can be made. When a match is successful, three things are updated: the condition lock of both processes is notified, the message is transferred from the writer to the reader, and proc[id].state is updated. One disadvantage of the two-phase locking is that we may have to acquire the locks of many read and write requests that are not in a ready state. The impact of this problem can easily be reduced by testing the state variable before acquiring the lock. Normally, such a test would introduce a race condition. However, once a request has been committed it never changes back to the ready state while it remains posted on the channel. Because of this, the state can be tested before acquiring the lock, in order to decide whether time should be spent acquiring the lock at all. When the lock has been acquired, the state must be checked again to ensure the request is still in the ready state. PyCSP [10] uses this approach in a similar offer method to reduce the number of acquired locks.

Listing 6. The offer function offering a possible successful match between two requests.

inline offer(ch_id, r, w) {
  r_pid = ch[ch_id].rqueue[r].id;
  w_pid = ch[ch_id].wqueue[w].id;

  if /* acquire locks using global ordering */
  :: (r_pid < w_pid) -> acquire(r_pid); acquire(w_pid);
  :: else -> acquire(w_pid); acquire(r_pid);
  fi;

  if /* Do the two processes match? */
  :: (proc[r_pid].state == READY && proc[w_pid].state == READY) ->
       proc[r_pid].state = SUCCESS;
       proc[w_pid].state = SUCCESS;
       /* Transfer message */
       ch[ch_id].rqueue[r].msg = ch[ch_id].wqueue[w].msg;
       ch[ch_id].wqueue[w].msg = NULL;
       proc[r_pid].result_ch = ch_id;
       proc[w_pid].result_ch = ch_id;
       notify(r_pid);
       notify(w_pid);
       /* break match loop by updating w and r */
       w = LEN;
       r = LEN;
  :: else skip;
  fi;

  if /* release locks using reverse global ordering */
  :: (r_pid < w_pid) -> release(w_pid); release(r_pid);
  :: else -> release(r_pid); release(w_pid);
  fi;
}
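As a sketch of the "test before locking" optimisation described above, the following C fragment is illustrative and not part of the verified model; the request type, field names and state values are assumptions made for the example.

#include <pthread.h>

/* Illustrative sketch: test the state without the lock first, then
   re-check it after the lock has been taken. A committed request never
   becomes READY again, so the cheap test cannot reject a valid match. */
enum req_state { READY, SUCCESS };

struct request {
    pthread_mutex_t lock;
    enum req_state  state;
};

static int try_commit(struct request *req) {
    if (req->state != READY)          /* cheap test without the lock */
        return 0;

    pthread_mutex_lock(&req->lock);
    int ok = (req->state == READY);   /* re-check while holding the lock */
    if (ok)
        req->state = SUCCESS;
    pthread_mutex_unlock(&req->lock);
    return ok;
}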
The alt construct shown in listing 7 is basically the same as a read or a write, except that the same process state is posted to multiple channels, thus ensuring that only one of them will be matched. The alt construct should scale linearly with the number of guards. For the verification of the model we simplify alt to accept only two guards. If the model is model-checked successfully
with two guards, we expect an extended model to model-check successfully with more than two guards. Adding more guards to the alt construct in listing 7 is a very simple task, but it enlarges the system state-space and is unnecessary for the results presented in this paper.

Listing 7. The alt construct.

inline alt(ch_id1, op1, msg1, ch_id2, op2, msg2, result_chan, result) {
  proc[_pid].state = READY;
  result = NULL;

  if
  :: (op1 == READ) -> post_read(ch_id1);
  :: else post_write(ch_id1, msg1);
  fi;
  if
  :: (op2 == READ) -> post_read(ch_id2);
  :: else post_write(ch_id2, msg2);
  fi;

  acquire(_pid);
  /* if no success, then wait for success */
  if
  :: (proc[_pid].state == READY) -> wait(_pid);
  :: else skip;
  fi;
  release(_pid);

  assert(proc[_pid].state == SUCCESS);

  if
  :: (op1 == READ) -> remove_read(ch_id1, result);
  :: else remove_write(ch_id1);
  fi;
  if
  :: (op2 == READ) -> remove_read(ch_id2, result);
  :: else remove_write(ch_id2);
  fi;

  result_chan = proc[_pid].result_ch;
}
2.2. Distributed Channel Synchronisation

The local channel synchronisation described in the previous section has a process waiting until a match has been made. The matching protocol performs a continuous two-phase locking for all pairs, so the waiting process is repeatedly examined even though it is passive. This method is not possible in a distributed model with no shared memory; instead, an extra process is created to function as a remote lock, protecting updates of the posted channel requests. Similar to the local channel synchronisation, we must lock both processes in the offer function and retrieve the current process state from each process. Finally, when a match is found, both processes are notified and their process states are updated. In figure 3, an overview of the distributed model is shown. The communicating process can call read, write or alt to communicate on channels. These then post the necessary requests to the involved channels through a Promela message channel. The channel home (channelThread) receives the request and initiates the matching algorithm to search for a successful offer amongst all matching pairs. During an offer, the channel home communicates with the lock processes (lockThread) to ensure that no other channel home conflicts. Finally, a matching pair results in a success and the lock process can notify the waiting process. In listing 8 all Promela channels are created with a buffer size of 10 to model an asynchronous connection. We have chosen a buffer size of 10, as this is large enough to never get filled during the verification in section 3. Every process communicating on a channel is required to have an associated lock process (Listing 9) to handle the socket communication arriving on the proc_*_chan channels.
Figure 3. Pseudo call graph for the distributed channel synchronisation.
Listing 8. Modeling asynchronous sockets.

/* Direction: communicating process -> channelThread */
chan ch_cmd_chan[C] = [10] of { byte, byte, byte };   /* cmd, pid, msg */
#define POST_WRITE 1
#define POST_READ 2
#define REMOVE_WRITE 3
#define REMOVE_READ 4

/* Direction: channelThread -> communicating process */
chan proc_cmd_chan[P] = [10] of { byte, byte, byte };   /* cmd, ch, msg */
#define REMOVE_ACK 9

/* Direction: channelThread -> lockThread */
chan proc_acquire_lock_chan[P] = [10] of { byte };   /* ch */

/* Direction: lockThread -> channelThread */
chan ch_accept_lock_chan[C] = [10] of { byte, byte };   /* pid, proc_state */

/* Direction: channelThread -> lockThread */
chan proc_release_lock_chan[P] = [10] of { byte, byte, byte }   /* cmd, ch, msg */
#define RELEASE_LOCK 7
#define NOTIFY_SUCCESS 8
The lockThread in listing 9 handles the remote locks for reading and updating the process state from the channel home thread. The two functions remote_acquire and remote_release are called from the channel home process during the offer procedure. The lockThread and the communicating process use the mutex lock operations from listing 3 for synchronisation.

Listing 9. The lock process for a communicating process.

proctype lockThread(byte id) {
  byte ch_id, cmd, msg;
  byte ch_id2;
  bit locked;
  do
  :: proc_acquire_lock_chan[id] ? ch_id ->
       ch_accept_lock_chan[ch_id] ! id, proc[id].state;
       locked = 1;
       do
       :: proc_release_lock_chan[id] ? cmd, ch_id2, msg ->
            if
            :: cmd == RELEASE_LOCK ->
                 assert(ch_id == ch_id2);
                 break;
            :: cmd == NOTIFY_SUCCESS ->
                 assert(ch_id == ch_id2);
                 acquire(id);                 /* mutex lock op */
                 proc[id].state = SUCCESS;
                 proc[id].result_ch = ch_id2;
                 proc[id].result_msg = msg;
                 notify(id);                  /* mutex lock op */
                 release(id);                 /* mutex lock op */
            fi;
       od;
       locked = 0;
  :: proc_cmd_chan[id] ? cmd, ch_id, msg ->
       if
       :: cmd == REMOVE_ACK -> proc[id].waiting_removes--;
       fi;
  :: timeout ->
       assert(locked == 0);
       assert(proc[id].waiting_removes == 0);
       break;
  od;
}

inline remote_acquire(ch_id, lock_pid, get_state) {
  proc_acquire_lock_chan[lock_pid] ! ch_id;
  ch_accept_lock_chan[ch_id] ? id, get_state;
  assert(lock_pid == id);
}

inline remote_release(ch_id, lock_pid) {
  proc_release_lock_chan[lock_pid] ! RELEASE_LOCK, ch_id, NULL;
}
The offer function in listing 10 performs a distributed version of the function in listing 6. In this model we transfer the message from the write request to the read request, update the process state to SUCCESS, notify the condition lock and release the lock process, all in one transmission on the Promela channel proc_release_lock_chan. We may still have to acquire the locks of many read and write requests that are not in a ready state. Acquiring the locks is now more expensive than in the local channel model, and it happens more often due to the latency of getting old requests removed. If an extra flag is added to a request, the offer function can update the flag on success. If the flag is set, we know that the request has already been accepted and we avoid the extra remote lock operations. If the flag is not set, the request may still be old and not ready, as it might have been accepted by another process.

Listing 10. The offer function for distributed channel communication.

inline offer(ch_id, r, w) {
  r_pid = ch[ch_id].rqueue[r].id;
  w_pid = ch[ch_id].wqueue[w].id;

  if /* acquire locks using global ordering */
  :: (r_pid < w_pid) ->
       remote_acquire(ch_id, r_pid, r_state);
       remote_acquire(ch_id, w_pid, w_state);
  :: else ->
       remote_acquire(ch_id, w_pid, w_state);
       remote_acquire(ch_id, r_pid, r_state);
  fi;

  if /* Do the two processes match? */
  :: (r_state == READY && w_state == READY) ->
       proc_release_lock_chan[r_pid] ! NOTIFY_SUCCESS, ch_id, ch[ch_id].wqueue[w].msg;
       proc_release_lock_chan[w_pid] ! NOTIFY_SUCCESS, ch_id, NULL;
       w = LEN;
       r = LEN;   /* break match loop */
  :: else skip;
  fi;

  if /* release locks using reverse global ordering */
  :: (r_pid < w_pid) ->
       remote_release(ch_id, w_pid);
       remote_release(ch_id, r_pid);
  :: else ->
       remote_release(ch_id, r_pid);
       remote_release(ch_id, w_pid);
  fi;
}
Every channel must have a channel home, where the read and write requests for communication are held and the offers are made. The channel home invokes the matching algorithm for every posted request, as the post_* functions did in the local channel model. In this model every channel home is a process (Listing 11). In another implementation there might only be one process per node, maintaining multiple channel homes through a simple channel dictionary.
Listing 11. The channel home process.
proctype channelThread(byte ch_id) {
  DECLARE_LOCAL_CHANNEL_VARS
  do
  :: ch_cmd_chan[ch_id] ? cmd, id, msg ->
       if
       :: cmd == POST_WRITE -> match(ch_id);
       :: cmd == POST_READ -> match(ch_id);
       :: cmd == REMOVE_WRITE -> proc_cmd_chan[id] ! REMOVE_ACK, ch_id, NULL;
       :: cmd == REMOVE_READ -> proc_cmd_chan[id] ! REMOVE_ACK, ch_id, NULL;
       fi;
  :: timeout ->
       /* controlled shutdown */
       /* read and write queues must be empty */
       assert(ch[ch_id].rlen == 0 && ch[ch_id].wlen == 0);
       break;
  od;
}
The functions read, write and alt for the distributed channel model are identical to those in the local channel model. We can now transfer a message locally using the local channel model or between nodes using the distributed channel model.

2.3. Dynamic Synchronisation Layer

The following model allows channels to change their synchronisation mechanism on-the-fly. This means that a local channel can be upgraded to become a distributed channel. Activation of the upgrade may be caused by a remote process requesting to connect to the local channel. The model presented in this section cannot detect which synchronisation mechanism to use; it must be set explicitly. If channel-ends were part of the implementation, a channel could keep track of the location of all channel-ends and would thus know which mechanism to use. A feature of the dynamic synchronisation mechanism is that specialised channels can be used, such as a low-latency one-to-one channel resulting in improved communication time and lower latency. The specialised channels may not support constructs like external choice (alt), but if an external choice occurs the channel is upgraded. The upgrade procedure adds an overhead, but since channels are often used more than once this is an acceptable overhead. Figure 4 shows an overview of the transition model. In the figure, the communicating process calls read or write to communicate on channels. These then call the enter, wait and leave functions. The enter function posts the request to the channel. The wait function ensures that the request is posted at the correct synchronisation level, and otherwise calls the transcend function. The leave function is called when the request has been matched successfully. The model includes a thread that may at any time activate a switch in synchronisation level and thus force a call to the transcend function.
Figure 4. Pseudo call graph for the dynamic synchronisation layer.
To model the transition between two levels (layers) we set up two groups of channel request queues and a synchronisation level variable per channel. Every access to a channel variable includes the channel id and the new synchronisation level variable sync_level. Every communicating process is viewed as a single channel-end and is provided with a proc_sync_level. This way the communicating process knows the synchronisation level it is currently at, even though the sync_level variable for the channel changes. The synchronisation level of a channel may change at any time using the switch_sync_level function in listing 12. The match and offer functions from section 2.1 have been extended with a sync_level parameter used to access the channel container. The post_* functions update the proc_sync_level variable to the channel synchronisation level before posting a request, while the remove_* functions read the proc_sync_level variable and use the methods of that level to remove the request. Other than that, the functions match, offer, post_* and remove_* are similar to the ones from the local channel model. The switching of synchronisation level in listing 12 works by notifying all processes with a request for communication posted to the channel. The channel sync_level variable is changed before notifying the processes. In listing 14, when a process either tries to enter wait or is awoken by the notification, it checks that the proc_sync_level variable of the posted request still matches the sync_level variable of the channel. If these do not match, we activate the transcend function (Listing 13). During a transition, the proc_state variable is temporarily changed to SYNC, so that the request is not matched by another process between release and leave_read. The leave_read function calls remove_read, which uses the proc_sync_level variable to remove the request, and enter_read calls post_read, which uses the updated channel sync_level variable.
Listing 12. Switching the synchronisation level of a channel.
inline switch_sync_level(ch_id, to_level) {
  byte SL;
  byte r, w, r_pid, w_pid;

  SL = ch[ch_id].sync_level;
  atomic { (ch[ch_id].lvl[SL].lock == 0) -> ch[ch_id].lvl[SL].lock = 1; }   /* acquire */
  ch[ch_id].sync_level = to_level;

  /* Notify connected processes */
  r = 0;
  do
  :: (r < ch[ch_id].lvl[SL].rlen) ->
       r_pid = ch[ch_id].lvl[SL].rqueue[r];
       acquire(r_pid);
       if
       :: proc_state[r_pid] == READY -> notify(r_pid);   /* Notify process to transcend */
       :: else -> skip;
       fi;
       release(r_pid);
       r = r + 1;
  :: else break;
  od;

  w = 0;
  do
  :: (w < ch[ch_id].lvl[SL].wlen) ->
       w_pid = ch[ch_id].lvl[SL].wqueue[w];
       acquire(w_pid);
       if
       :: proc_state[w_pid] == READY -> notify(w_pid);   /* Notify process to transcend */
       :: else -> skip;
       fi;
       release(w_pid);
       w = w + 1;
  :: else break;
  od;

  ch[ch_id].lvl[SL].lock = 0;   /* release */
}
Listing 13. The transition mechanism for upgrading posted requests.

inline transcend_read(ch_id) {
  proc_state[_pid] = SYNC;
  release(_pid);
  leave_read(ch_id);
  enter_read(ch_id);
  acquire(_pid);
}
In listing 14 the read function from the local channel model (Section 2.1) is split into an enter, a wait and a leave part. To upgrade blocking processes we use the transition mechanism in listing 13, which can only be used between the enter and leave parts. All synchronisation levels must have an enter part, a wait/notify state and a leave part.
Listing 14. The read function is split into an enter, wait and leave part.

inline enter_read(ch_id) {
  proc_state[_pid] = READY;
  post_read(ch_id);
}

inline wait_read(ch_id) {
  /* if no success, then wait for success */
  acquire(_pid);
  do
  :: (proc_sync_level[_pid] == ch[ch_id].sync_level) && (proc_state[_pid] == READY) ->
       wait(_pid);
  :: (proc_sync_level[_pid] != ch[ch_id].sync_level) && (proc_state[_pid] == READY) ->
       transcend_read(ch_id);
  :: else break;
  od;
  release(_pid);
}

inline leave_read(ch_id) {
  assert(proc_state[_pid] == SUCCESS || proc_state[_pid] == SYNC);
  remove_read(ch_id);
}

inline read(ch_id) {
  enter_read(ch_id);
  wait_read(ch_id);
  leave_read(ch_id);
}
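The enter/wait/leave split suggests a natural implementation-level interface. The following C sketch is hypothetical and not taken from an existing code base; it merely illustrates how the six operations a synchronisation level must provide to the dynamic layer could be grouped.

/* Hypothetical sketch: one record of function pointers per synchronisation
   level, providing the six operations required by the dynamic layer. */
typedef struct sync_level_ops {
    void (*enter_read)(int ch_id);
    void (*wait_read)(int ch_id);
    void (*leave_read)(int ch_id);
    void (*enter_write)(int ch_id, void *msg);
    void (*wait_write)(int ch_id);
    void (*leave_write)(int ch_id);
} sync_level_ops;

/* A read is then the fixed sequence enter -> wait -> leave,
   independent of which level the channel is currently at. */
static void channel_read(const sync_level_ops *ops, int ch_id) {
    ops->enter_read(ch_id);
    ops->wait_read(ch_id);
    ops->leave_read(ch_id);
}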
The three models presented can be used separately for new projects, or they can be combined into the following: a CSP library for a high-level programming language where channel-ends are mobile and can be sent to remote locations. The channel is automatically upgraded, which means that the communicating processes can exist as co-routines, threads and nodes. Specialised channel implementations can be used without the awareness of the communicating processes. Any channel implementation working at a synchronisation level in the dynamic channel must provide six functions to the dynamic synchronisation layer: enter_read, wait_read, leave_read, enter_write, wait_write and leave_write.

3. Verification Using SPIN

The commands in listing 15 verify the state-space of a SPIN model written in Promela. The verification process checks for the absence of deadlocks, livelocks, race conditions, unspecified receptions and unexecutable code, and it checks user-specified assertions. One of these user-specified assertions checks that the message is correctly transferred for a channel communication. All verifications were run in a single thread on an Intel Xeon E5520 with 24 GB of DDR3 ECC memory.

Listing 15. The commands for running an automatic verification of the models.

spin -a model.p
gcc -o pan -O2 -DVECTORSZ=4196 -DMEMLIM=24000 -DSAFETY \
    -DCOLLAPSE -DMA=1112 pan.c
./pan
The local and the distributed channel models are verified for six process configurations and the transition model is verified for three process configurations. The results from running the SPIN model checker to verify the models are listed in table 1. The automatic verification of the models found no errors. The "threads in model" column shows the threads needed for running the configuration in the specific model. The number of transitions in table 1 does not relate to how a real implementation of the model performs, but is the total number of different transitions between states. If the number of transitions is high, then the model allows a large number of statements to happen in parallel. The SPIN model checker tries every transition possible, and if all transitions are legal the model is verified successfully for a process configuration. This means that for the verified configuration, the model has no deadlocks, no livelocks, no starvation, no race conditions and does not fail with an invalid end-state. The longest running verification which completed was the distributed model for the configuration in figure 5(f). This configuration completed after verifying the full state-space in 9 days. This means that adding an extra process to the model would multiply the total number of states to a level where we would not be able to complete a verification of the full state-space. The DiVinE model checker [14] is a parallel LTL model checker that should be able to handle larger models than SPIN by performing a distributed verification. DiVinE has not been used with the models presented in this paper.

Table 1. The results from using the SPIN model checker to verify models.
The process configurations in figure 5 cover a wide variety of possible transitions for the local and distributed models. None of the configurations checks a construct with more than two processes, but we expect the configurations to remain correct for more than two processes. The synchronisation mechanisms are the same for a reading process and a writing process in the presented models. Based on this, we can expect that all the configurations in figure 5 can be mirrored and model-checked successfully. The local one-to-one communication is handled by the configuration in figure 5(a). The configurations in figure 5(c) and figure 5(d) cover the one-to-any and any-to-any cases, and we expect any-to-one to also be correct since it is a mirrored version of one-to-any. The alt construct supports both input and output guards, so figure 5(b) presents an obvious configuration to verify. In CSP networks this configuration does not make sense, but the verification of the configuration in figure 5(b) shows that two competing alts configured with the worst-case priority do not cause any livelocks. We must also model-check when an alt communicates with reads or writes (Figure 5(e)).
Figure 5. Process configurations used for verification.
Finally, the configuration in figure 5(f) verifies the case where alts communicate on one-to-any and any-to-one channels. These configurations cover most situations for up to two processes.
4. Conclusions

We have presented three building blocks for a dynamic channel capable of transforming its internal synchronisation mechanisms during execution. The change in synchronisation mechanism is a basic part of the channel and can come about at any time. In the worst case, the communicating processes will see a delay caused by having to repost a communication request to the channel. Three models have been presented and model-checked: the shared memory channel synchronisation model, the distributed channel synchronisation model and the dynamic synchronisation layer. The SPIN model checker has been used to perform an automatic verification of these models separately. During the verification it was checked, using assertions, that the communicated messages were transferred correctly. All models were found to verify with no errors for a variety of configurations with communicating sequential processes. The full model of the dynamic channel has not been verified, since the large state-space may make it unsuited for exhaustive verification using a model checker. With the results from this paper, we can also conclude that the synchronisation mechanism in the current PyCSP [11,12] can be model-checked successfully by SPIN. The current PyCSP uses the two-phase locking approach with total ordering of locks, which has now been shown to work correctly for both the shared memory model and the distributed model.

4.1. Future Work

The equivalence between the dynamic channel presented in this paper and CSP channels, as defined in the CSP algebra, needs to be shown. Through equivalence, it can also be shown that networks of dynamic channels function correctly. The models presented in this paper will be the basis for a new PyCSP channel, which can start out as a simple pipe and evolve into a distributed channel spanning multiple nodes. This channel will support mobility of channel ends, termination handling, buffering, scheduling of lightweight processes, skip and timeout guards and a discovery service for channel homes.
5. Acknowledgements

The authors would like to extend their gratitude for the rigorous review of this paper, including numerous constructive proposals from the reviewers.

References

[1] David Beazley. Understanding the Python GIL. http://dabeaz.com/python/UnderstandingGIL.pdf. Presented at PyCon 2010.
[2] Rune M. Friborg and Brian Vinter. Rapid Development of Scalable Scientific Software Using a Process Oriented Approach. Journal of Computational Science, page 11, March 2011.
[3] Moshe Y. Vardi and Pierre Wolper. An Automata-Theoretic Approach to Automatic Program Verification. Proc. First IEEE Symp. on Logic in Computer Science, pages 322–331, 1986.
[4] Gerard J. Holzmann. The Model Checker Spin. IEEE Trans. on Software Engineering, pages 279–295, May 1997.
[5] C.A.R. Hoare. Communicating Sequential Processes. Communications of the ACM, pages 666–676, August 1978.
[6] C.A.R. Hoare. Communicating Sequential Processes. Prentice-Hall, 1985.
[7] Peter H. Welch, Neil Brown, James Moores, Kevin Chalmers, and Bernhard Sputh. Integrating and Extending JCSP. In A.A. McEwan, S. Schneider, W. Ifill, and P. Welch, editors, Communicating Process Architectures 2007, July 2007.
[8] M. Schweigler and A. Sampson. pony – the occam-π Network Environment. Communicating Process Architectures 2006, pages 77–108, January 2006.
[9] Neil C. Brown. C++CSP Networked. In Ian R. East, David Duce, Mark Green, Jeremy M. R. Martin, and Peter H. Welch, editors, Communicating Process Architectures 2004, pages 185–200, September 2004.
[10] PyCSP distribution. http://code.google.com/p/pycsp.
[11] Rune M. Friborg, John Markus Bjørndalen, and Brian Vinter. Three Unique Implementations of Processes for PyCSP. In Communicating Process Architectures 2009, pages 277–292, 2009.
[12] Brian Vinter, John Markus Bjørndalen, and Rune M. Friborg. PyCSP Revisited. In Communicating Process Architectures 2009, pages 263–276, 2009.
[13] A.W. Roscoe. The Theory and Practice of Concurrency. Prentice-Hall International Series in Computer Science, 2005.
[14] J. Barnat, L. Brim, M. Češka, and P. Ročkai. DiVinE: Parallel Distributed Model Checker. In Parallel and Distributed Methods in Verification 2010, pages 4–7, 2010.
Programming the CELL-BE using CSP

Kenneth SKOVHEDE*, Morten N. LARSEN and Brian VINTER
eScience Center, Niels Bohr Institute, University of Copenhagen
* Corresponding Author: E-mail: .

Abstract. The current trend in processor design seems to focus on using multiple cores, similar to a cluster-on-a-chip model. These processors are generally fast and power efficient, but due to their highly parallel nature, they are notoriously difficult to program for most scientists. One such processor is the CELL broadband engine (CELL-BE) which is known for its high performance, but also for a complex programming model which makes it difficult to exploit the architecture to its full potential. To address this difficulty, this paper proposes to change the programming model to use the principles of CSP design, thus making it simpler to program the CELL-BE and avoid livelocks, deadlocks and race conditions. The CSP model described here comprises a thread library for the synergistic processing elements (SPEs) and a simple channel based communication interface. To examine the scalability of the implementation, experiments are performed with both scientific computational cores and synthetic workloads. The implemented CSP model has a simple API and is shown to scale well for problems with significant computational requirements.

Keywords. CELL-BE, CSP, Programming
Introduction

The CELL-BE processor is an innovative architecture that attempts to tackle the problems that prevent processors from achieving higher performance [1,2,3]. The limitations in traditional processors are primarily problems relating to heat, clock frequency and memory speed. Instead of using the traditional chip design, the CELL-BE consists of multiple units, effectively making it a cluster-on-a-chip processor with high interconnect speed. The CELL-BE processor consists of a single PowerPC (PPC) based processor connected to eight SPEs (Synergistic Processing Elements, RISC based processors) through a 204.8 GB/s EIB (Element Interconnect Bus) [4]. The computing power of a CELL-BE chip is well investigated [5,6], and a single CELL blade with two CELL-BE processors can yield as much as 460 GFLOPS [7] at one GFLOPS per Watt [7]. Unfortunately, the computing power comes at the price of a very complex programming model. As there is no cache coherent shared memory in the CELL-BE, the processes must explicitly transfer data between the units using a DMA model which resembles a form of memory mapped IO [8,4]. Furthermore, to fully utilize the CELL-BE, the application must use task-, memory-, data- and instruction-level (SIMD, Single Instruction Multiple Data) parallelization [5]. A number of papers discuss various computational problems on the CELL-BE, illustrating that achieving good performance is possible, but the process is complex [5,9,10]. In this paper we focus on the communication patterns and disregard instruction-level and data parallelization methods because they depend on application specific computations and cannot be easily generalized.
C.A.R. Hoare introduced the CSP model in 1978, along with the concept of explicit communication through well-defined channels. Using only channel based communication, each
participating process becomes a sequential program [11,12]. It is possible to prove that a CSP based program is free from deadlocks and livelocks [11] using CSP algebra. Furthermore, CSP based programs are easy to understand, because the processes consist of sequential code and channels which handle communication between the processes. This normally means that the individual processes have very little code, but the total number of processes is very high. This work uses the CSP design rules and not the CSP algebra itself. By using a CSP-like interface, we can hide the underlying complexity from the programmer, giving the illusion that all transfers are simply channel communications. We believe that this abstraction greatly simplifies the otherwise complex CELL-BE programming model. By adhering to the CSP model, the implementation automatically obtains properties from CSP, such as being free of race conditions and having detectable deadlocks. Since the library does not use the CSP algebra, the programmer does not have to learn a new language but can still achieve many of the CSP benefits.

1. Related Work

A large number of programming models for the CELL-BE are available [13,14,15,16], illustrating the need for a simpler interface to the complex machine. Most general purpose libraries cannot be directly used on the CELL-BE, because the SPEs use a different instruction set than the PPC. Furthermore, the limited amount of memory available on the SPEs makes it difficult to load a general purpose library onto them.

1.1. Programming Libraries for the CELL-BE

The ALF [13] system allows the programmer to build a set of dependent tasks which are then scheduled and distributed automatically according to their dependencies. The OpenMP [14] and CellSs [15] systems provide automatic parallelization in otherwise sequential code through the use of code annotation. As previously published [16], the Distributed Shared Memory for the CELL-BE (DSMCBE) is a distributed shared memory system that gives the programmer the "illusion" that the memory in a cluster of CELL-BE machines is shared. The channel based communication system described in this paper uses the communication system from DSMCBE, but does not use any DSM functionality. It is possible to use both communication models at the same time, however this is outside the scope of this paper. The CellCSP [17] library shares the goals of the channel based system described in this paper, but focuses on scheduling independent processes rather than on communication.

1.2. CSP Implementations

The Transterpreter [18] is a virtual machine that can run occam-π programs. By modifying the Transterpreter to run on the SPEs [19], it becomes possible to execute occam-π on the CELL-BE processor and also utilize the SPEs. The Transterpreter implementation that runs on the CELL-BE [19] has been extended to allow programs running in the virtual machine to access some of the SPE hardware. A similar project, trancell [20], allows a subset of occam-π to run on the SPU by translating Extended Transputer Code to SPU binary code. Using occam-π requires that the programmer learns and understands the occam-π programming language and model, and also requires that the programs are re-written in occam-π. The Transterpreter for CELL-BE has an extension that allows callbacks to native code [19], which can mitigate this issue to some extent. A number of other CSP implementations are available, such as C++CSP [21], JCSP [22] and PyCSP [23].
Although these may work on the CELL-BE processor, they can currently
only utilize the PPC and not the high performing SPEs. We have used the simplified channel interface in the newest version of PyCSP [24] as a basis for developing the channel communication interface. Since DSMCBE [16] is written in C, we have produced a flattened and non-object oriented interface.

2. Implementation

This section gives a short introduction to DSMCBE and describes some design and implementation details of the CSP library. For a more detailed description and evaluation of the DSMCBE system see previous work [16].

2.1. Distributed Shared Memory for the CELL-BE (DSMCBE)

As mentioned in the introduction, the basis for the implementation is the DSMCBE system. The main purpose of DSMCBE is to provide the user with a simple API that establishes a distributed shared memory system on the CELL-BE architecture. Apart from its main purpose, the underlying framework can also be adjusted to serve as a more generic platform for communication between the PowerPC Element (PPE) and the Synergistic Processing Elements (SPEs). Figure 1 shows the DSMCBE model along with the components involved. The DSMCBE system consists of four elements which we describe below:
Figure 1. DSMCBE Internal Structure.
The DSMCBE PPE/SPE modules contain the DSMCBE functions which the programmer will call from the user code. To manipulate objects in the system, the programmer will use the functions from the modules to create, acquire and release objects. In addition, the two modules are responsible for communicating with the main DSMCBE modules, which are located on the PPC. The PPE handler is responsible for handling communication between the PPC user code and the request coordinator (see below). Like the PPE handler, the SPE handler is responsible for handling communication between user code on the SPEs and the request coordinator (see below). However, the SPE handler also manages allocation and deallocation of Local Store (LS) memory, which enables the SPE handler to perform memory management without interrupting the SPEs.
The DSMCBE library uses a single processing thread, called the request coordinator, which is responsible for servicing requests from the other modules. Components communicate with the request coordinator by supplying a target for the answer. Using this single thread approach makes it simpler to execute atomic operations and reduces the number of locks to a pair per participating component. Each PPC thread and SPE unit functions as a single component, which means that the request coordinator cannot determine whether the participant is a PPC thread or an SPE. As most requests must pass through the request coordinator, an obvious drawback to this method is that it easily becomes a bottleneck. With this communication framework it is easier to implement channel based communication, as the request coordinator can simply be extended to handle channel requests.

2.2. Extending DSMCBE with Channel Based Communication for CELL-BE

This section describes how we propose to extend the DSMCBE model with channel based communication. We have used the DSMCBE system as a framework to ensure atomicity and enable memory transfers within the CELL-BE processor. The implementation does not use any DSM methods and consists of a separate set of function calls. We have intentionally made the programming model very simple; it consists of only six functions for creating and poisoning channels, reading and writing channels, and creating and freeing transferable items.
All functions return a status code which describes the outcome of the call.

2.2.1. Channel Communication

The basic idea in the communication model is to use channels to communicate. There are two operations defined for this: dsmcbe_csp_channel_read and dsmcbe_csp_channel_write. As in other CSP implementations, the read and write operations block until a matching request arrives, making each communication a synchronized atomic event. When writing to a channel, the calling process must supply a pointer to the data area. The result of a read operation is a pointer to a data area, as well as the size of the data area. After receiving a pointer the caller is free to read and write the contents of the area. As the area is exclusively owned by the process, there is no possibility of a race condition. As it is possible to write to arbitrary memory locations when using C, it is the programmer's responsibility not to use the data area after a call to write. Logically, the caller can consider the write operation as transferring the data and ownership of the area to the recipient. After receiving a pointer from a read operation, and possibly modifying the data area, the process may forward the pointer again with another write. As the reading process has exclusive ownership of the data area, it is also responsible for freeing the data area if it is no longer needed. The operations give the same result regardless of which CELL-BE processor the call originates from. If both processes are in the same memory space the data is not copied, ensuring maximal speed. If the data requires a transfer, the library will attempt to do so in the most efficient manner.
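As a small sketch of these ownership rules, the process below reads an integer, increments it and passes it on. It is illustrative only and uses the call forms that appear in Listings 1 and 2 later in this paper; the process name is chosen for the example.

#include <dsmcbe_csp.h>

/* Illustrative sketch: the read transfers ownership of the area to this
   process, which may modify it and then hand it on with a write. After the
   write the area must not be touched again by this process. */
int increment(GUID in, GUID out) {
    void *data;
    size_t size;

    while (1) {
        CSP_SAFE_CALL("read", dsmcbe_csp_channel_read(in, &size, &data));
        *(int *)data += 1;   /* the area is exclusively ours at this point */
        CSP_SAFE_CALL("write", dsmcbe_csp_channel_write(out, data));
        /* data must not be used after the write */
    }
}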
2.2.2. Transferable Items

The CELL-BE processor requires that data is aligned and has certain block sizes, a constraint that is not normally encountered by a programmer. We have chosen to expose a simple pair of functions that mimic the well-known malloc and free functions, called dsmcbe_csp_item_create and dsmcbe_csp_item_free, respectively. A process wishing to communicate can allocate a block of memory by calling the create function and get a standard pointer to the allocated data area. The process is then free to write data into the allocated area. After a process has used a memory block, it can either forward the block to another channel or release the resources it holds by calling dsmcbe_csp_item_free.

2.2.3. Channel Creation

When the programmer wants to use a channel, it is necessary to create it by calling the channel create function. To distinguish channels, the create function must be called with a unique number, similar to a channel name or channel object in other CSP systems. This channel number is used to uniquely identify the channel in all subsequent communication operations. The create function allows the caller to set a buffer size on the channel, thus allowing the channel writers to write data into the channel without awaiting a matching reader. A buffer in the CSP model works by generating a sequence of processes where each process simply reads and writes an element. The number of processes in the chain determines the size of the buffer. The semantics of the implemented buffer are the same as a chain of processes, but the implementation uses a more efficient method with a queue. The channel type specifies the expected use of the channel, with the following options: one-to-one, one-to-any, any-to-one, any-to-any and one-to-one-simple. Using the channel type it is possible to verify that the communication patterns correspond to the intended use. In situations where the participating processes do not change, it is possible to enable "low overhead" communication by using the channel type one-to-one-simple. Section 2.2.8 describes this optimization in more detail. A special convention borrowed from the DSMCBE model is that read or write operations on non-existing channels will cause the caller to block if the channel is not yet created. Since a program must call the create function exactly once for each channel, some start-up situations are difficult to handle without this convention. Once a process has created the channel, it processes all the pending operations as if they occurred after the channel creation.

2.2.4. Channel Poison

As all calls are blocking, they can complicate the shutdown phase of a CSP network. The current CSP implementations support a channel poison state, which causes all pending and following operations on that channel to return the poison. To poison a channel, a process calls the poison function with the id of an existing channel. When using poison, it is important to check the return value of the read and write operations, as they may return the poison status. A macro named CSP_SAFE_CALL can be used to check the return value and exit the current function when poison is encountered. However, the programmer is still fully responsible for making the program handle and distribute poison correctly.

2.2.5. External Choice

As a read operation is blocking, it is not possible to wait for data on more than one channel, nor is it possible to probe a channel for its content. If a process could see whether or not a channel has content, a race condition could be introduced, as a second process could read the item right after the probe, resulting in a blocking read. To solve this issue, CSP uses the concept of external choice, where a process can request data from multiple channels and then gets a response once a channel is ready.
To use external choice, the process must call a variation of the function named
dsmcbe_csp_channel_read_alt, where alt is short for "alternation", the term used in C.A.R. Hoare's original paper [25]. Using this function, the process can block for a read operation on multiple channels. When one of the channels has data, the data is returned, as with the normal read operation, along with the channel id of the originating channel. This way of dealing with reads ensures that race conditions cannot occur. With the channel selection done externally, the calling process has no way of controlling which channel to read, should there be multiple available choices. To remedy this, the calling process must also specify what strategy to use if multiple channels are ready. The JCSP library offers three strategies: arbitrary, priority and fair. Arbitrary picks a channel at random, whereas priority chooses the first available channel, prioritized by the order in which the channels are given. Fair selection keeps count of the number of times each channel has been selected and attempts to even out the usage of channels. The current implementation of CSP channels for CELL-BE only supports priority select, but the programmer can emulate the two other modes. Similar to the alternation read, a corresponding alternation write function allows a process to write to the first available channel. This function also supports a selection strategy and returns the id of the channel written to. There is currently no mechanism to support the simultaneous selection of channel readers and writers, though there are other ways of engineering this.

2.2.6. Guards

To prevent a call from blocking, the calling function can supply a guard which is invoked when no data is available. The implementation defines a reserved channel number which can be given as a channel id when requesting read or write from multiple channels. If the operation would otherwise block, the function returns a NULL pointer and the reserved channel number as the channel value. Other CSP implementations also offer a time-out guard, which performs a skip, but only if the call blocks for a certain period. This functionality is not available in the current implementation, but could be added without much complication.

2.2.7. Processes for CELL-BE

The hardware in the CELL-BE is limited to a relatively low number of physical SPEs, which prevents the generation of a large number of CSP processes. To remedy this situation, the implementation also supports running multiple processes on each SPE. Since the SPEs have little support for timed interrupts, the implementation is purely based on cooperative switching. To allow multiple processes on the SPE, we have used an approach similar to CELLMT [26], basically implementing a user-mode thread library based on standard C functions. The CSP threading library implements the main entry point and allocates ABI compliant stacks for each of the processes when started. After setting up the multithreading environment, the scheduler is activated, which transfers control to the first process. Since the entry point is implemented by the library, the user code must instead implement a process entry function, which is activated for each process in turn. This means that all processes running on a single SPE must use the same function, but each process can call a function
and thus obtain a unique id, which can be used to determine what code the process will execute. When a process is executing, it can cooperatively yield control by calling a yield function, which will save the process state and transfer control to the next available process. Whenever a process is waiting for an API response, the library will automatically call a similar internal yield function. The
effect of this is that each API call appears to be blocking, allowing the programmer to write a fully sequential program and transparently run multiple processes. As there is no preemptive scheduling of threads, it is possible for a single process to prevent other processes from executing. This is a common trade-off between allowing the SPE to execute code at full speed and ensuring progress in all processes. It can be remedied by inserting yield calls inside computationally heavy code, which allows the programmer to balance single process execution and overall system progress in a fine grained manner. The scheduler is a simple round-robin scheduler using a ready queue and a waiting queue. The number of threads possible is limited primarily by the amount of available LS memory, which is shared among program code, stack and data. The running time of the scheduler is O(N), which we deem sufficient, given that all processes share the limited LS, making more than 8 processes per SPE unrealistic.

2.2.8. SPE-to-SPE Communication

Since the PPC is rarely a part of the actual problem solving, the memory blocks can often be transferred directly from SPE to SPE without moving them into main memory. If an SPE is writing to a buffered channel, the data may not be read immediately after the write. Thus, the SPE may run out of memory, since the data is kept on the SPE in anticipation of an SPE-to-SPE transfer. To remedy this, the library will flush data to main memory if an allocation would fail. This is in effect a caching system, and as such it is subject to the regular benefits and drawbacks of a cache. One noticeable drawback is that due to the limited available memory, the SPEs are especially prone to memory fragmentation, which happens more often when using a cache, as the memory stays fully populated for longer periods. If the channel is created with the type one-to-one-simple, the first communication will be used to determine the most efficient communication pattern and thus remove some of the internal synchronization required. If two separate SPEs are communicating, this means that the communication will be handled locally in the SPE handler shown in Figure 1, eliminating the need to pass messages through the request coordinator. A similar optimization is employed if two processes on the same SPE communicate. In this case the data is kept on the SPE, and all communication is handled locally on the SPE in the DSMCBE SPE module shown in Figure 1. Due to the limited amount of memory available on the SPE, data may be flushed out if the channel has large buffers or otherwise exhausts the available memory. These optimizations can only work if the communication is done in a one-to-one fashion where the participating processes never change. Should the user code attempt to use such a channel in an unsupported manner, an error code will be returned.

2.2.9. Examples

To illustrate the usage of the channel-based communication, Listing 1 shows four simple CSP processes. Listing 2 presents a simple example that uses the alternation method to read two channels and writes the sum to an output channel.

3. Experiments

When evaluating system performance, we focus mainly on the scalability aspect. If the system scales well, further optimizations may be made specific to the application, utilizing the SIMD capabilities of the SPEs. The source code for the experiments is available from .
#include <dsmcbe_csp.h>

int delta1(GUID in, GUID out) {
    void* value;

    while (1) {
        CSP_SAFE_CALL("read", dsmcbe_csp_channel_read(in, NULL, &value));
        CSP_SAFE_CALL("write", dsmcbe_csp_channel_write(out, value));
    }
}

int delta2(GUID in, GUID outA, GUID outB) {
    void *inValue, *outValue;
    size_t size;

    while (1) {
        CSP_SAFE_CALL("read", dsmcbe_csp_channel_read(in, &size, &inValue));
        CSP_SAFE_CALL("allocate", dsmcbe_csp_item_create(&outValue, size));
        memcpy(outValue, inValue, size);   // Copy contents as we need two copies

        CSP_SAFE_CALL("write A", dsmcbe_csp_channel_write(outA, inValue));
        CSP_SAFE_CALL("write B", dsmcbe_csp_channel_write(outB, outValue));
    }
}

int prefix(GUID in, GUID out, void* data) {
    CSP_SAFE_CALL("write", dsmcbe_csp_channel_write(out, data));

    return delta1(in, out);
}

int tail(GUID in, GUID out) {
    void* tmp;

    CSP_SAFE_CALL("read", dsmcbe_csp_channel_read(in, NULL, &tmp));
    CSP_SAFE_CALL("free", dsmcbe_csp_item_free(tmp));

    return delta1(in, out);
}
Listing 1. Four simple CSP processes.

int add(GUID inA, GUID inB, GUID out) {
    void *data1, *data2;

    GUID channelList[2];
    channelList[0] = inA;
    channelList[1] = inB;

    GUID chan;

    while (1) {
        dsmcbe_csp_channel_read_alt(CSP_ALT_MODE_PRIORITY, channelList, 2, &chan, NULL, &data1);
        dsmcbe_csp_channel_read(chan == inA ? inB : inA, NULL, &data2);

        *(int*)data1 = *((int*)data1) + *((int*)data2);

        dsmcbe_csp_item_free(data2);
        dsmcbe_csp_channel_write(out, data1);
    }
}
Listing 2. Reading from two channels with alternation read and external choice. To better fit the layout of the article, the macro is omitted.
All experiments were performed on an IBM QS22 blade, which contains 2 connected CELL-BE processors, giving access to 4 PPE cores and 16 SPEs.

3.1. CommsTime

A common benchmark for any CSP implementation is the CommsTime application, which sets up a ring of processes that simply forward a single message. The conceptual setup is shown in Figure 2. This benchmark measures the communication overhead of the channel operations, since there is almost no computation required in the processes. To better measure the scalability of the system, we have deviated slightly from the normal CommsTime implementation by inserting extra successor processes as needed. This means that each extra participating process will add an extra channel, and thus produce a longer communication ring. Figure 3 shows the CommsTime results when communicating among SPE processes. The PPE records the time between each received message, thus measuring the time it takes for the message to traverse the ring. The time shown is an average over 10 runs of 10,000 iterations. As can be seen, the times seem to stabilize around 80 μseconds when using one thread per SPE. When using two, three, or four threads per SPE, the times stabilize around 38, 27, and 20 μseconds, respectively. When using multiple threads, part of the communication is performed internally on the SPEs, which has minimal overhead and therefore causes the average communication overhead to decrease.
Figure 2. Conceptual setup for the CommsTime experiment with 4 SPEs.
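To make the ring structure concrete, the following minimal JCSP-style sketch in Java shows the kind of successor ring that CommsTime builds; it is an illustration only (the measured implementation is the DSMCBE C code above), and the class names Prefix, Successor and CommsTimeRing are ours.

import org.jcsp.lang.*;

// A minimal, self-contained CommsTime-style ring sketch in JCSP (Java).
public class CommsTimeRing {

    // Forwards the token unchanged: almost no computation, so the loop time is
    // dominated by channel overhead.
    static class Successor implements CSProcess {
        private final ChannelInputInt in;
        private final ChannelOutputInt out;
        Successor(ChannelInputInt in, ChannelOutputInt out) { this.in = in; this.out = out; }
        public void run() { while (true) out.write(in.read()); }
    }

    // Injects the first token, then behaves like a successor; it also timestamps each
    // lap, roughly what the PPE does in the experiment above.
    static class Prefix implements CSProcess {
        private final ChannelInputInt in;
        private final ChannelOutputInt out;
        Prefix(ChannelInputInt in, ChannelOutputInt out) { this.in = in; this.out = out; }
        public void run() {
            out.write(0);
            while (true) {
                int token = in.read();
                System.out.println(System.nanoTime() + " lap completed");
                out.write(token);
            }
        }
    }

    public static void main(String[] args) {
        final int n = 4;                                   // ring size: one process per channel
        One2OneChannelInt[] ring = new One2OneChannelInt[n];
        for (int i = 0; i < n; i++) ring[i] = Channel.one2oneInt();

        CSProcess[] procs = new CSProcess[n];
        procs[0] = new Prefix(ring[0].in(), ring[1].out());
        for (int i = 1; i < n; i++) procs[i] = new Successor(ring[i].in(), ring[(i + 1) % n].out());

        new Parallel(procs).run();
    }
}

Adding a Successor (and its channel) lengthens the ring by one hop, which is exactly how the experiment scales the number of participating processes.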
We have executed the CommsTime sample from the JCSP library v.1.1rc4 on the PPE. The JCSP sample uses four processes in a setup similar to Figure 2, but with all processes placed on the PPE. Each communication took on average 63 μseconds, which is faster than our implementation, which runs at 145 μseconds on the PPE. Even though JCSP is faster, it does not utilize the SPEs, and so cannot utilize the full potential of the CELL-BE.

3.2. Prototein Folding

Prototeins are a simplified 2D model of a protein, with only two amino acids and only 90 degree folds [27]. Folding a prototein is computationally simpler than folding a full protein, but exhibits the same computational characteristics. Prototein folding can be implemented with a bag-of-tasks type solution, illustrated in Figure 4, where partially folded prototeins are placed in the bag. The partially folded prototeins have no interdependencies, but may differ in the required number of combinations and thus in the required computational time. As seen in Figure 5, the problem scales very close to linearly with the number of SPEs, which is to be expected for this type of problem. This indicates that the communication latency is not a limiting factor, which also explains why the number of SPE threads has very little effect on the scalability.
Figure 3. CommsTime using 2-16 SPEs with 1-4 threads per SPE.
Figure 4. Conceptual setup for Prototein folding with 3 SPEs.

Figure 5. Speedup of prototein folding using 1-16 SPEs.
3.3. k Nearest Neighbors (kNN)

The kNN application is a port of a similar application written for PyCSP [28]. Where the PyCSP model is capable of handling an extreme number of concurrent processes, the library is limited by the number of available SPEs and the number of threads each SPE can accommodate. Due to this, the source code for the two applications is hard to compare, but the
overall approach and communication patterns are the same. Figure 6 shows a conceptual ring-based setup for finding the kNN.
Figure 6. Conceptual setup for the kNN experiment with 4 SPEs, each running 2 threads.
This ring-based approach means that each process communicates only with its neighbor. To support problems of arbitrary size, one of the channels is buffered. The underlying system will attempt to keep data on the SPE in anticipation of a transfer, but as the SPE runs out of memory, the data will be swapped to main memory. This happens completely transparently to the process, but adds an unpredictable overhead to the communication. This construction allows us to run the same problem size on one to 16 SPEs.
Figure 7. Speedup of the k Nearest Neighbors problem using 1-16 SPEs to search for 10 nearest neighbors in a set with 50k elements with 72 dimensions.
As seen in Figure 7, this does not scale linearly, but given the interdependencies we consider this to be a fairly good result. Figure 7 also shows that using threads to run multiple solver processes on each SPE offers a performance gain, even though the processes compete for the limited LS memory. This happens because the threads implement an implicit form of double buffering, allowing each SPE to mask communication delays with computation. The achieved speedup indicates that there is a good balance between the communication and computation performed in the experiment. The speedup for both graphs is calculated based on the measured time for running the same problem size on a single SPE with a single solver thread.
3.4. Communication to Computation Ratio

The ring-based communication model used in the kNN experiment is quite common for problems that use an n² approach. However, the scalability of such a setup is highly dependent on the amount of work required in each subtask. To quantify the communication to computation ratio required for a well-scaling system, we have developed a simple ring-based program that allows us to adjust the number of floating point operations performed between communications. The computation performed is adjustable and does not depend on the size of the transmitted data, allowing us to freely experiment with the computational workload. The setup for this communication system is shown in Figure 8. The setup is identical to the one used in the kNN experiment, but instead of having two communicating processes on the same SPE, the processes are spread out. This change causes the setup to lose the very fast internal SPE communication channels, which causes more load on the PPE and thus gives a more realistic measurement of the communication delays.
Figure 8. Conceptual setup for non-structured ring based communication.
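As an illustration of the kind of worker used in this benchmark, the Java/JCSP-style sketch below forwards a token around a ring after performing an adjustable amount of floating point work. It is schematic only (the actual benchmark is C code against DSMCBE), and the class name RingWorker and the field flopsPerHop are ours.

import org.jcsp.lang.*;

// Ring worker: performs `flopsPerHop` floating point operations, then passes the
// token on. The workload is independent of the (small) message size, mirroring
// the experiment.
class RingWorker implements CSProcess {
    private final ChannelInputInt in;
    private final ChannelOutputInt out;
    private final long flopsPerHop;

    RingWorker(ChannelInputInt in, ChannelOutputInt out, long flopsPerHop) {
        this.in = in; this.out = out; this.flopsPerHop = flopsPerHop;
    }

    public void run() {
        double x = 1.0;
        while (true) {
            int token = in.read();
            for (long i = 0; i < flopsPerHop; i++) {
                x = x * 1.000001 + 0.5;       // dummy floating point work between communications
            }
            out.write(token);
        }
    }
}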
As seen in Figure 9, the implementation scales well if the computation performed in each ring iteration is around 100 Mflop. Comparing the two graphs in Figure 9 shows that increasing the number of threads on the SPEs results in a decrease in performance. This happens because the extra processes introduce more communication. This increase in communication causes a bigger strain on the PPE, which results in more latency than the processes can hide. In other words, the threads cause more latency than they can hide in this setup. The speedup for both graphs in Figure 9 is calculated based on measurements from a run with the same data size on a single SPE with a single thread.

Comparing the Communication to Computation experiment with the kNN experiment reveals that the use of optimized channels reduces the latency of requests to a level where the threads are unable to hide the remaining latency. In other words, the latency becomes so low that the thread switching overhead is larger than the latency it attempts to hide. This is consistent with the results from the CommsTime experiment, which reveals that the communication time is very low when performing inter-SPE communication. This does not mean that the latency is as low as it can be, but it means that the extra communication generated by the threads increases the amount of latency that must be hidden.

4. Future Work

The main problem with any communication system is the overhead introduced by the communication. As the experiments show, this overhead exists but can be hidden, because the CELL-BE and library are capable of performing the communication and computation simultaneously. But this hiding only works if the computational part of a program has a sufficient size. To remedy this, the communication overhead should be reduced significantly.
[Figure 9 plots: speedup against number of SPEs (2-16) for 1 and 2 threads per SPE, with per-communication workloads of 0.2, 2, 10, 20, 100, 200 and 400 Mflop.]
Figure 9. Communication To Computation ratio, 16 bytes of data.
The decision to use the request coordinator to handle the synchronization simplifies the implementation, but also introduces two performance problems. One problem is that if the system becomes overwhelmed with requests, the execution will be sequential, as the processes will only progress as fast as the request coordinator responds to messages. The other problem is that the requests pass through both the SPU handler and the request coordinator, which adds load to the system and latency to each communication operation.

4.1. Reduce Request Latency

Since the SPEs are the main workhorses of the CELL-BE, it makes sense to move much of the decision logic into the SPU handler rather than handle it in the request coordinator. The request coordinator is a legacy item from the DSM system, but there is nothing that prevents participating PPE processes from communicating directly with the SPU handler.

4.2. Increase Parallelism

Even if the request coordinator is removed completely, the PPE can still be overwhelmed with requests, which will make everything run sequentially rather than in parallel. It is not possible to completely remove a single synchronization point, but many communication operations involve exactly two processes. In the common case where these two processes reside on separate SPEs, it is possible to perform direct SPE-to-SPE communication through the use of signals and DMA transfers. If this is implemented, it will greatly reduce the load on the PPE for all the presented experiments.
4.3. Improve Performance of the SPU Handler

The current implementation uses a shared spinning thread that constantly checks for SPE and request coordinator messages. It is quite possible that this can be improved by using a thread for each SPE, which uses the SPE events rather than spinning. Experiments performed for the DSMCBE [16] system show that improving the SPU handler can improve the overall system performance.

4.4. Improve Memory Exhaustion Handling

When the communication is handled by the SPEs internally, it is likely that they will run out of memory. If the SPU handler is involved, such situations are detected and handled gracefully. Since this is essentially a cache system, a cache policy can greatly improve the performance of the system by selectively choosing which elements to remove from the LS and when such an operation is initiated.

4.5. Process Migration

The processes are currently bound to the SPE that started them, but it may turn out that the setup is ineffective and can be improved by moving communicating processes closer together, i.e. to the same SPE. There is limited support for this in the CELL-BE architecture itself, but the process state can be encapsulated to involve only the current thread stack and active objects. However, it may prove to be impossible to move a process, as data may occupy the same LS area. Since the C language uses pointers, the data locations cannot be changed during a switch from one SPE to another. One solution to this could be to allocate processes in slots, such as those used in CELL CSP [17].

4.6. Multiple Machines

The DSMCBE system already supports multiple machines, using standard TCP/IP communication. It would be desirable to also support multiple machines for CSP. The main challenge with multiple machines is to implement a well-scaling version of the alternation operations, because the involved channels can span multiple machines. This could use the cross-bar approach used in JCSP [29].
5. Conclusion

In this paper we have described a CSP-inspired communication model and a thread library that can help programmers handle the complex programming model of the CELL-BE. We have shown that even though the presented models introduce some overhead, it is possible to get good speedup for most problems. On the other hand, Figure 9 shows that if the computation to communication ratio is too low - meaning too little computation per communication - it is very hard to scale the problems to utilize all 16 SPEs. However, we believe that for most programmers solving reasonably sized problems, the tools provided can significantly simplify the writing of programs for the CELL-BE architecture. We have also shown that threads can be used to mask some latency, but at the same time they generate some latency, which limits their usefulness to certain problems. DSMCBE and the communication model described in this paper are open source software under the LGPL license, and are available from .
Acknowledgements The authors acknowledge the Danish National Advanced Technology Foundation (grant number 09-067060) and the innovation consortium (grant number 09-052139) for supporting this research project. Furthermore the authors acknowledge Georgia Institute of Technology, its Sony-Toshiba-IBM Center of Competence, and the National Science Foundation, for the use of Cell Broadband Engine resources that have contributed to this research.
References
[1] Wm. A. Wulf and Sally A. McKee. Hitting the Memory Wall: Implications of the Obvious. Computer Architecture News, 23:20–24, 1995.
[2] J. A. Kahle, M. N. Day, H. P. Hofstee, C. R. Johns, T. R. Maeurer, and D. Shippy. Introduction to the Cell multiprocessor. IBM J. Res. Dev., 49(4/5):589–604, 2005.
[3] Gordon E. Moore. Cramming more components onto integrated circuits. In Readings in Computer Architecture, pages 56–59. Morgan Kaufmann Publishers Inc., San Francisco, CA, USA, 2000.
[4] Thomas Chen. Cell Broadband Engine Architecture and its first implementation - A Performance View, 2005. Accessed 26 July 2010.
[5] Martin Rehr. Application Porting and Tuning on the Cell-BE Processor, 2008. Accessed 26 July 2010.
[6] Mohammed Jowkar. Exploring the Potential of the Cell Processor for High Performance Computing, 2007. Accessed 26 July 2010.
[7] IBM. IBM Doubles Down on Cell Blade, 2007. Accessed 26 July 2010.
[8] IBM. Cell BE Programming Handbook Including PowerXCell 8i, 2008. Accessed 26 July 2010.
[9] Jakub Kurzak, Alfredo Buttari, and Jack Dongarra. Solving Systems of Linear Equations on the CELL Processor Using Cholesky Factorization. IEEE Trans. Parallel Distrib. Syst., 19(9):1175–1186, 2008.
[10] Asim Munawar, Mohamed Wahib, Masaharu Munetomo, and Kiyoshi Akama. Solving Large Instances of Capacitated Vehicle Routing Problem over Cell BE. In HPCC '08: Proceedings of the 2008 10th IEEE International Conference on High Performance Computing and Communications, pages 131–138, Washington, DC, USA, 2008. IEEE Computer Society.
[11] C.A.R. Hoare. Communicating Sequential Processes. Prentice-Hall, London, 1985. ISBN: 0-131-532715.
[12] A.W. Roscoe, C.A.R. Hoare, and R. Bird. The theory and practice of concurrency, volume 216. Citeseer, 1998.
[13] IBM. Accelerated Library Framework Programmer's Guide and API Reference, 2009. Accessed 26 July 2010.
[14] Kevin O'Brien, Kathryn O'Brien, Zehra Sura, Tong Chen, and Tao Zhang. Supporting OpenMP on Cell. In IWOMP '07: Proceedings of the 3rd international workshop on OpenMP, pages 65–76, Berlin, Heidelberg, 2008. Springer-Verlag.
[15] Pieter Bellens, Josep M. Perez, Rosa M. Badia, and Jesus Labarta. CellSs: a Programming Model for the Cell BE Architecture. In ACM/IEEE Conference on Supercomputing, page 86. ACM, 2006.
[16] Morten N. Larsen, Kenneth Skovhede, and Brian Vinter. Distributed Shared Memory for the Cell Broadband Engine (DSMCBE). In ISPDC '09: Proceedings of the 2009 Eighth International Symposium on Parallel and Distributed Computing, pages 121–124, Washington, DC, USA, 2009. IEEE Computer Society.
[17] Mads Alhof Kristiansen. CELL CSP Sourcecode, 2009. Accessed 26 July 2010.
[18] Christian L. Jacobsen and Matthew C. Jadud. The Transterpreter: A Transputer Interpreter. In Ian R. East, David Duce, Mark Green, Jeremy M. R. Martin, and Peter H. Welch, editors, Communicating Process Architectures 2004, volume 62 of Concurrent Systems Engineering Series, pages 99–106, Amsterdam, September 2004. IOS Press.
[19] Damian J. Dimmich, Christian L. Jacobsen, and Matthew C. Jadud. A Cell Transterpreter. In Peter Welch, Jon Kerridge, and Fred Barnes, editors, Communicating Process Architectures 2006, volume 29 of Concurrent Systems Engineering Series, pages 215–224, Amsterdam, September 2006. IOS Press.
[20] Ulrik Schou Jørgensen and Espen Suenson. trancell - an Experimental ETC to Cell BE Translator. In Alistair A. McEwan, Wilson Ifill, and Peter H. Welch, editors, Communicating Process Architectures 2007, pages 287–298, July 2007.
[21] Alistair A. McEwan, Steve Schneider, Wilson Ifill, Peter Welch, and Neil Brown. C++CSP2: A Many-to-Many Threading Model for Multicore Architectures, 2007.
[22] P. H. Welch, A. W. P. Bakkers (eds.), and Nan C. Schaller. Using Java for Parallel Computing - JCSP versus CTJ. In Communicating Process Architectures 2000, pages 205–226, 2000.
[23] Otto J. Anshus, John Markus Bjørndalen, and Brian Vinter. PyCSP - Communicating Sequential Processes for Python. In Alistair A. McEwan, Wilson Ifill, and Peter H. Welch, editors, Communicating Process Architectures 2007, pages 229–248, July 2007.
[24] Brian Vinter, John Markus Bjørndalen, and Rune Møllegaard Friborg. PyCSP Revisited, 2009. Accessed 26 July 2010.
[25] C. A. R. Hoare. Communicating sequential processes. Commun. ACM, 21(8):666–677, 1978.
[26] Vicenç Beltran, David Carrera, Jordi Torres, and Eduard Ayguadé. CellMT: A cooperative multithreading library for the Cell/B.E. In HiPC, pages 245–253, 2009.
[27] Brian Hayes. Prototeins. American Scientist, 86(3):216–, 1998.
[28] Rune Møllegaard Friborg. PyCSP kNN implementation, 2010. Accessed 26 July 2010.
[29] P.H. Welch and B. Vinter. Cluster Computing and JCSP Networking. Communicating Process Architectures 2002, 60:203–222, 2002.
Static Scoping and Name Resolution for Mobile Processes with Polymorphic Interfaces

Jan Bækgaard PEDERSEN 1, Matthew SOWDERS
School of Computer Science, University of Nevada, Las Vegas

Abstract. In this paper we consider a refinement of the concept of mobile processes in a process oriented language. More specifically, we investigate the possibility of allowing resumption of suspended mobile processes with different interfaces. This is a refinement of the approach taken currently in languages like occam-π. The goal of this research is to implement varying resumption interfaces in ProcessJ, a process oriented language being developed at UNLV.

Keywords. ProcessJ, process oriented programming, mobile processes, static name resolution
Introduction

In this paper we redefine static scoping rules for mobile processes with polymorphic (multiple possible varying) suspend/resume interfaces, and develop an algorithm to perform correct name resolution. One of the core ideas behind mobile processes is the ability to suspend execution (almost) anywhere in the code and return control to the caller, who can then treat the suspended process as a piece of data that can be transmitted to a different (physical) location and, at a later point in time, resumed to continue executing from where it left off. We shall use the word start for the first time a mobile procedure is executed/invoked, and resume for all subsequent executions/invocations.

Let us illustrate the problem with an example from occam-π. In occam-π [16], mobile processes are all initially started and subsequently resumed with the original (procedure) interface; that is, every resumption requires the same parameter list, even if some of these parameters have no meaning for the code that is to be executed. An example from [17] is shown in Figure 1. The reindelf process only uses the initialise channel (line 1) in the in station compound (initialise local state) code block (line 7). For each subsequent resumption (lines 11, 13, and 15) of this process, a 'dummy' channel-end must be passed as the first parameter. The channel end represents a channel on which no communication is ever going to happen. Not only does that make the code harder to read, but it also opens the possibility of incorrect code should the channel be used for communication in the subsequent code blocks. Similarly, should subsequent resumptions of the process require different channels, the initial call must provide 'dummy' values for these the first time the process is called.

1 Corresponding Author: Jan Bækgaard Pedersen, University of Nevada Las Vegas, 4505 Maryland Parkway, Las Vegas, NV, 89154, United States of America. Tel.: +1 702 895 2557; Fax: +1 702 895 2639; E-mail: [email protected].
 1: MOBILE PROC reindelf (CHAN AGENT.INITIALIZE initialize?,
 2:                       SHARED CHAN AGENT.MESSAGE report!,
 3:                       SHARED CHAN INT santa.a!, santa.b!)
 4:   IMPLEMENTS AGENT
 5:   ... local state declarations
 6:   SEQ
 7:     ... in station compound (initialise local state)
 8:     WHILE TRUE
 9:       SEQ
10:         ... in station compound
11:         SUSPEND          -- move to gathering place
12:         ... in the gathering place
13:         SUSPEND          -- move to santa's grotto
14:         ... in santa's grotto
15:         SUSPEND          -- move to compound
16: :
Figure 1. occam-π example.
For ProcessJ [13], a process oriented language being developed at the University of Nevada, Las Vegas, we propose a different approach to mobile process resumption. When a process explicitly suspends, it defines with which interface it should be resumed. This of course means that parameters from the previous resumption are no longer valid. Static scoping analysis as we know it no longer suffices to perform name resolution. In this paper we present a new approach to name resolution for mobile processes with polymorphic interfaces.

In ProcessJ, a suspend point is represented by the three keywords suspend resume with, followed by a parameter list in parentheses (like a formal parameter list for a procedure as found in most languages). A suspended mobile process is resumed by a simple invocation using the name of the variable holding the reference to it, followed by a list of actual parameters (like a regular procedure call). For example, if a suspended mobile is held in a variable f, and the interface defines one integer parameter, then f(42) is a valid resumption. Let us start with a small example without any channels or local variables:
1: mobile void foo(int x, int y) {
2:   B1
3:   while (B2) {
4:     B3
5:     suspend resume with (int z);
6:     B4
7:   }
8:   B5
9: }
Figure 2. Simple ProcessJ example.
The first (and only) time B1 is executed, it has access to the parameters x and y from the original interface (line 1). The first time B2 is executed will be immediately after the execution of B1; that is, following the execution of B1, which had access to the parameters x and y. B2 cannot access x or y, as we will see shortly. If B2 evaluates to true the first time it is reached, the process will execute B3 and suspend itself. B4 will be executed when the process is resumed through the interface that declares the parameter z (line 5). The previous parameters x and y are now no longer valid. To realize why these parameters should no longer be valid, imagine they held channels to the previous local environment (the caller's
environment) in which the process was executed, but in which it no longer resides; these channels can no longer be used, so it is imperative that the parameters holding references to them not be used again. Therefore, B4 can only reference the z parameter, and not x and y. But what happens now when B2 is reached a second time? x and y are no longer valid, but what about z? Naturally z cannot be referenced by B2 either, as the first time B2 was reached, the process was started through the original interface and there was no z in that interface. Furthermore, if we look closely at the code, we also realize that the first time the code in block B3 is reached, just like B2, the parameters from the latest process resumption (which here is also the first) would be x and y. The second time the code block B3 is executed will be during the second execution of the body of the while loop. This means that foo has been suspended and resumed once, and since the interface of the suspend statement has just one parameter, namely z, and not x and y, neither can be referenced. So in general, we cannot guarantee that x and y can be referenced anywhere except block B1. The same argument holds for z in block B4.

We can illustrate this by creating a table with a trace of the program and by listing with which parameters the most recent resumption of the process happened. Table 1 shows a trace of the process where B2 is evaluated three times, the first two times to true, and the last time to false. By inspecting Table 1, we see that both B2 and B3 can be reached with disjoint sets of parameters, therefore disallowing references to x and y as well as z. B5 could have been reached with the parameters x and y had B2 evaluated to false the first time it was evaluated, thus we can draw the same conclusion for B5 as we did for B2 and B3.

Table 1. Trace of sample execution.

  Started/resumed interface    Remarks
  foo(x, y)                    foo(int x, int y)
                               B2 = true
                               suspend resume with (int z);
                               B2 = true
                               suspend resume with (int z);
                               B2 = false
Table 2 shows in which blocks (Bi) the three interface parameters can be referenced. Later on we shall add local variables to the code and redo the analysis.

Table 2. Parameters that can be referenced in various blocks.

  Parameter   Blocks that may reference it
  x           B1
  y           B1
  z           B4
If we had changed z to x (and retained their shared type int), all of a sudden, x would now also be a valid reference in the blocks B2 , B3 , and B5 ; that is, everywhere in the body of the procedure.
We start by examining the parameters of the interfaces, and later return to incorporate the local variables (for which regular static scoping rules apply) into a single name resolution pass containing both parameters and local variables. In the next section we look at related work, and then proceed in section 2 to present a method for constructing a control flow graph (CFG) based on the ProcessJ source code. In section 3 we define sets of declarations to be used in the computation of valid references, in section 4 we illustrate how to compute these sets, and in section 5 we present the new name resolution algorithm for mobile processes with polymorphic interfaces. Finally we wrap up with a result section and some thoughts about future work.

1. Related Work

The idea of code mobility has been around for a long time. In 1969 Jeff Rulifson introduced a language called the Decode-Encode-Language (DEL) [15]. One could download a DEL program from a remote machine, and the program would control communication and efficiently use limited bandwidth between the local and remote hosts [4]. Though not exactly similar to how a ProcessJ process can be sent to different computational environments, DEL could be considered the beginning of mobile agents. Resumable processes are similar to mobile agents. In [5], Chess et al. provide a classification of Mobile Code Languages. In a Mobile Code Language, a process can move from one computational environment to another. A computational environment is a container of components, not necessarily a host. For example, two Java Virtual Machines running on the same host would be considered two different computational environments. The term Strong Mobility [5] is used when the process code, state, and control state are saved before passing them to another process to resume at the same control state and with the same variable state in a potentially different computational environment. The term Weak Mobility in contrast does not preserve control state. Providing mobility transparently means the programmer will not need to save the state before sending the process. All that is needed is to define the positions where the process can return control, using a suspend statement or a suspend resume statement. The process scheduling is also transparent to the end programmer because mobile processes are scheduled the same as normal processes.

1.1. The Join Calculus and Chords

The Join Calculus [9] is a process algebra that extends Milner's π-calculus [12] and that models distributed and mobile programming. Mobility is treated slightly differently in the Join Calculus. The Join Calculus has the concept of Locality, or the computational environment [5] where the process is executed. Locality is inherent to the system, and a process can define its locality rather than using the suspend-send-resume approach used in occam-π. Cω [3] is a language implementation of the Join Calculus and an extension of the C# programming language. Cω uses chords, a method with multiple interfaces that can be invoked in any order. The body of the method will not execute until every interface has been invoked at least once. ProcessJ does not treat multiple interfaces this way; only one interface is correct at a time, and the process can only be resumed with that exact interface. Therefore, we are forced to either implement run-time errors, or allow querying the suspended mobile about which interface it is ready to accept.
1.2. The Actor Model

ProcessJ also differs from Hewitt's actor model [2,10,11] in the same way; in the actor model, any valid interface can be invoked, and the associated code will execute. Again, for ProcessJ, only the interface that the suspended process is ready to accept can be invoked.
A modern example of the Actor Model is Erlang actors. Erlang uses pattern matching and receive to respond to messages sent. Figure 3 is a basic actor that takes several differing message types and acts according to each message sent. It is possible to specify a wild card '_' message that will match all other messages, so there is a defined default behavior. Erlang also has the ability to dynamically load code on all nodes in a cluster using the nl command [1], or to send a message to a process running on another node. A combination of these features could be used to implement a type of weak mobility in Erlang; this is illustrated in Figure 3.
loop() ->
    receive
        % If I receive a string "a" print "a" to standard out
        "a" ->
            io:format("a"),
            loop();
        % If I receive a process id and a string "b"
        % write "echo" to the given process id
        {Pid, "b"} ->
            Pid ! "echo",
            loop();
        % handle any other message I might receive
        _ ->
            io:format("do not know what to do."),
            loop()
    end.
Figure 3. Erlang Actors can respond to multiple message interfaces.
1.3. Delimited Continuations and Swarm

In 2009, Ian Clarke created a project called Swarm [6]. Swarm is a framework for transparent scaling of distributed applications utilizing delimited continuations in Scala through the use of a Scala compiler plug-in. A delimited continuation, also known as a functional continuation [8], is a functional representation of the control state of a process. The goal of Swarm is to deploy an application to an environment with distributed data and move the computations to where the data resides instead of moving the data to where the process resides. This approach is similar to that used in MapReduce [7], though it is more broadly applicable because not every application can map to the MapReduce paradigm.

1.4. occam-π Versus ProcessJ Mobiles

The occam-π language has built-in support for mobile processes [16]. The method adopted by occam-π allows processes to suspend rather than always needing to complete. A suspended process can then be communicated on a channel and resumed from the same state in which it was suspended, providing strong mobility. In occam-π, a mobile process must implement a mobile process type [16]; this is to assure that the process receiving the (suspended) mobile will have the correct set of resources to re-animate the mobile. Mobile processes in ProcessJ with polymorphic interfaces cannot make use of such a technique, as there is no way of guaranteeing that the receiving process will resume the mobile with the correct interface. Naturally, this can be rather detrimental to the further execution of the code; a runtime error would be generated if the mobile is not in a state to accept the interface with which it is resumed. The runtime check added by the
compiler is inexpensive and is similar in use to an ArrayOutOfBoundsException in Java. In ProcessJ we approach this problem (though not within the scope of this paper, it is worth mentioning) in the following way: it is possible to query a mobile process about its next interface (the one waiting to be invoked); this can be done as illustrated in Figure 4.
MobileProc p = c.read();        // Receive a mobile on channel c
if (p.accepts(chan.read)) {     // is p's interface (chan.read) ?
  chan intChan;
  par {
    p(intChan.read);            // Resume p with a reading channel end
    c.write(42);
  }
}
Figure 4. Runtime check to determine if a process accepts a specific interface.
If a process is not in a state in which it is capable of accepting a resumption with a certain interface, the check will evaluate to false, and no such resumption is performed. This kind of check is necessarily a runtime check.

2. Control Flow Graphs and Rewriting Rules

The key idea in determining which parameters can be referenced in a block is to consider all paths from interfaces leading into that block. If all paths to a block include a definition from an interface of a parameter with the same name and type, then this parameter can be referenced in that block. This can be achieved by computing the intersection of all the parameters declared in interfaces that can flow into a block (directly or indirectly through other nodes). We will develop this technique through the example code in Figure 2. The first step is to generate a source code-based control flow graph (CFG), which can be achieved using a number of simple graph construction rules for control diverting statements (these are if-, while-, do-, for-, switch-, and alt-statements as well as break and continue). These rules are illustrated in Figure 5.

For the sake of completeness, it should be noted that the depiction of the switch statement in Figure 5 is based on each case having a break statement at its end; that is, there are no fall-through cases. If, for example, B1 could fall through to B2, the graph would have an arc from e to B1, from e to B2, and, to represent the fall-through case, an arc from B1 to B2. continue statements in loops add an extra arc to the boolean expression controlling the loop, and a break in an if statement would skip the rest of the nodes from it to the end of the statement by adding an arc directly to the next node in the graph.

If we apply the CFG construction rules from Figure 5, in which we treat procedure calls and suspend/resume statements as non-control-diverting statements (the original process interface can be thought of as a resume point and will thus be the first 'statement' in the first block in the CFG), we get the control flow graph shown in Figure 6. Note, the I0 before B1 represents the original procedure interface, and the I1 between B3 and B4 represents the suspend/resume interface. Having the initial interface and the suspend/resume statements mixed with the regular block commands will not work for the analysis to come, so we need to separate those out. This can be done using a simple graph rewriting rule; each interface gets its own node. This rewriting rule is illustrated in Figure 7.
[Figure 5 diagrams: CFG construction rules for the if-then, if-then-else, while, do, for, switch and alt statements.]
Figure 5. CFG construction rules.
Figure 6. CFG for the example code in Figure 2.
We will refer to the nodes representing interfaces as interface nodes and all others (with code) as code nodes. With an interface node we associate a set of name/type/interface triples (ni ti Ii), namely the name (ni) of the parameter, its type (ti) and the interface (Ii) in which it was declared. In addition, we introduce a comparison operator =̂ between triples, defined in the following way: (ni ti Ii) =̂ (nj tj Ij) ⇔ (ni = nj ∧ ti = tj). The corresponding set intersection operator is denoted ∩̂. We introduce interface nodes for suspend/resume points into the graph in the following manner: if a code block Bi has m suspend/resume statements, then split Bi into m + 1 new code blocks Bi1, ..., Bim+1 interspersed with interface nodes Ii1, ..., Iim.

Figure 7. CFG rewriting rule.
Bi1 and/or Bim+1 might be empty code nodes (technically, so might all the other code nodes, but that would be a little strange, as that would signify 2 or more suspend statements following each other without any code in between). Also, since the parameters of the procedure interface technically also make up an interface, we need to add an interface node for these as well. This is also covered by the rewriting rule in Figure 7, and in this case Bi1 will be empty and Ii2 will be I0. Rewriting the CFG from Figure 6 results in the graph depicted in Figure 8.

We now have a CFG with code and interface nodes. Each interface node has information about the parameters it declares, as well as their types. This CFG is a directed graph (VCFG, ECFG), where the vertices in V are either interface nodes (Ii) or code nodes (Bi). An edge in ECFG is a pair of nodes (N, M) representing a directed edge in the CFG from N to M; that is, if (N, M) ∈ ECFG, then the control flows from the code represented by vertex N to the code represented by the vertex M in the program.

3. In and Out Sets

For the nodes representing an interface, Ii, we are not interested in the incoming arcs. Since a suspend/resume point represented by an interface node re-defines which parameters can be accessed, they will overwrite any existing parameters. We can now define, for each node in the CFG, sets representing incoming and outgoing parameters. We define two sets for each node N (N is either a code node (Bi) or an interface node (Ii)) in the CFG, namely the in set (Ik(N)) and the out set (Ok(N)).
Figure 8. The altered CFG of the example in Figure 6.
Each of these sets is subscripted with a k denoting a generation. Generations of in and out sets depend on the previous generations. The in set of a code block ultimately represents the parameters that can be referenced in that block. The out set for a code block is a copy of the in set; while technically not necessary, it makes the algorithm that we will present later look nicer. For interface nodes, in sets are ignored (there is no code in an interface node). We can now define the following generation 0 sets for an interface node Ii (representing an interface (ti,1 ni,1, ..., ti,ki ni,ki)) and a code node Bi:

  I0(Ii) := { }
  O0(Ii) := {(ni,1 ti,1 Ii), ..., (ni,ki ti,ki Ii)}
  I0(Bi) := { }
  O0(Bi) := { }
Since an interface node introduces a new set of parameters, we only define its out set. The (k+1)th generation of in and out sets can easily be computed based on the kth generation. Recall that a parameter (of a certain name and type) can only be referenced in a code block Bi if all interfaces Ij that have a path to Bi define it (both name and type must be the same!); this leads us to the following definition of the (k+1)th generation of in and out sets:

  Ik+1(Ii) := { }
  Ok+1(Ii) := Ok(Ii)
  Ik+1(Bi) := ∩̂_{(N,Bi) ∈ ECFG} Ok(N)
  Ok+1(Bi) := Ik+1(Bi)
That is, the (k+1)th generation of the in set of block Bi is the intersection of the out sets of all its immediate predecessors at generation k in the CFG. To determine the set of references that are valid within a code block, we repeatedly apply the four rules (only the two rules for the code blocks will change any sets after the first iteration) until no sets change. Table 3 shows the results after two generations; the third does not change anything, so the result can be observed in the column labeled I1. To see that neither x and y nor z can be referenced in block B2, consider the set I1(B2):

  I1(B2) := O0(B1) ∩̂ O0(B4) = {(x int I0), (y int I0)} ∩̂ {(z int I1)} = { }
Table 3. Result of in and out sets after 2 generations.

  Node   I0    O0                         I1                         O1
  I0     { }   {(x int I0), (y int I0)}   { }                        {(x int I0), (y int I0)}
  B1     { }   { }                        {(x int I0), (y int I0)}   {(x int I0), (y int I0)}
  B2     { }   { }                        { }                        { }
  B3     { }   { }                        { }                        { }
  I1     { }   {(z int I1)}               { }                        {(z int I1)}
  B4     { }   { }                        {(z int I1)}               {(z int I1)}
  B5     { }   { }                        { }                        { }
If two triples have the same name and type, both triples will be represented in the result set (with different interface numbers, of course). We can now formulate the algorithm for computing in and out sets.
4. Algorithm for In and Out Set Computation

Input: A ProcessJ mobile procedure.

Method:
1. Using the CFG construction rules from Figure 5, construct the control flow graph G.
2. For each interface node Ii and code node Bj in G = (V, E), initialize
     Ik+1(Ii) := { }
     Ok+1(Ii) := Ok(Ii)
     Ik+1(Bj) := ∩̂_{(N,Bj) ∈ E} Ok(N)
     Ok+1(Bj) := Ik+1(Bj)
3. Execute this code:
     done = false;
     while (!done) {
       done = true;
       for (B ∈ V) do {            // only for code nodes
         B' = ∩̂_{(N,B) ∈ E} O(N)
         if (B' ≠ I(B))
           done = false;
         O(B) = I(B) = B'
       }
     }

Result: In sets for all code blocks with valid parameter references.

It is worth pointing out that in the algorithm generations of in and out sets are not used. This does not impact the correctness of the computation (because the operator used is the intersection operator). If anything, it shortens the runtime by allowing sets from generation k+1 to be used in the computation of other generation k+1 sets. With this in hand, we can now turn to performing the actual scope resolution. This can be achieved using a regular static scope resolution algorithm with a small twist, as we shall see in the following section.
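For concreteness, the following self-contained Java sketch implements the fixed-point computation just described on a small CFG representation. All class and method names (Triple, Node, InOutSets) are ours; this is only an illustration of the algorithm, not part of the ProcessJ compiler, and it keeps one representative triple per (name, type) pair rather than both triples as the paper does.

import java.util.*;

// A parameter triple (name, type, interface); the =̂ comparison uses name and type only.
final class Triple {
    final String name, type, iface;
    Triple(String name, String type, String iface) { this.name = name; this.type = type; this.iface = iface; }
    String key() { return name + ":" + type; }                 // basis of the =̂ comparison
    public String toString() { return "(" + name + " " + type + " " + iface + ")"; }
}

final class Node {
    final String label;
    final boolean isInterface;
    final List<Triple> declares = new ArrayList<>();            // only used for interface nodes
    final List<Node> preds = new ArrayList<>();                  // incoming CFG edges
    Map<String, Triple> in = new HashMap<>(), out = new HashMap<>();
    Node(String label, boolean isInterface) { this.label = label; this.isInterface = isInterface; }
}

final class InOutSets {
    // ∩̂ : keep triples whose (name, type) occurs in every predecessor's out set.
    static Map<String, Triple> intersect(List<Node> preds) {
        if (preds.isEmpty()) return new HashMap<>();
        Map<String, Triple> result = new HashMap<>(preds.get(0).out);
        for (Node p : preds.subList(1, preds.size())) result.keySet().retainAll(p.out.keySet());
        return result;
    }

    static void compute(List<Node> cfg) {
        for (Node n : cfg)                                        // generation 0: interface out sets
            if (n.isInterface) for (Triple t : n.declares) n.out.put(t.key(), t);

        boolean done = false;
        while (!done) {                                           // iterate to a fixed point
            done = true;
            for (Node b : cfg) {
                if (b.isInterface) continue;                      // only code nodes change
                Map<String, Triple> next = intersect(b.preds);
                if (!next.keySet().equals(b.in.keySet())) done = false;
                b.in = next;
                b.out = new HashMap<>(next);
            }
        }
    }
}

Building the graph of Figure 8 (I0 → B1 → B2 → {B3, B5}, B3 → I1 → B4 → B2) and calling InOutSets.compute reproduces the in sets of Table 3: B1 ends up with {(x int I0), (y int I0)}, B4 with {(z int I1)}, and B2, B3 and B5 with the empty set.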
5. Static Name Resolution for Mobile Processes

Let us re-introduce the code from Figure 2, but this time with local variables added (lines 2, 5, and 8); this code can be found in Figure 9. Also note that the local variable z in line 8 has the same name as the parameter in the interface in line 7. Naturally, this means that the interface parameter is hidden by the local variable.
 1: mobile void foo(int x, int y) {
 2:   int a;
 3:   B1
 4:   while (B2) {
 5:     int q;
 6:     B3
 7:     suspend resume with (int z);
 8:     int w,z;
 9:     B4
10:   }
11:   B5
12: }
Figure 9. Simple ProcessJ example with local variables.
As briefly mentioned in the previous section, the regular static name resolution algorithm works almost as-is. The only differences are that we have to incorporate the in sets computed by the algorithm in the previous section in the resolution pass, and that the way scopes are closed will differ slightly. Different languages have different scoping rules, so let us briefly state the static scoping rules for parameters and locals in a procedure in ProcessJ.

• Local variables cannot be re-declared in the same scope.
• An interface/procedure declaration opens a scope in which only the parameters are held. The scoping rules of interface parameters are what we defined in this paper.
• The body of a procedure opens a scope for local variables. (This means that we can have parameters and locals named the same, but the parameters will be hidden by the local variables.)
• A block (a set of { }) opens a new scope. (Local variable names can now be reused, though re-declared local variables hide other local variables or parameters in enclosing scopes. The scope of a local variable declared in a block is from the point of declaration to the end of the block.)
• A for-statement opens a scope. (It is legal to declare variables in the initialization part of a for-statement. The scope of such variables is the rest of the for-statement.)
• A suspend/resume point opens a new scope for the new parameters. Since we treat a suspend/resume point's interface like the original procedure interface, an implicit block ensues immediately after, so a new scope is opened for that as well. (If we did not do this, we would break the rule that parameters and locals can have shared names, as they would in this situation reside in the same scope.)

A symbol table, in this context, is a two dimensional table mapping names to attributes. In addition, a symbol table has a parent (table), and an access list of block numbers that represent which blocks may perform look-ups in it. This access list contains the result of the algorithm that computed which blocks can access an interface's parameters. If the use of a name in block Bi requires a look-up in a table that does not list i in its access list, the look-up query is passed to the parent recursively, until either the name is successfully resolved, or the end of the chain of tables is reached, resulting in an unsuccessful lookup of that name.
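A minimal Java sketch of such a symbol table chain, with the access-list check described above, could look as follows; the names are ours and the real ProcessJ compiler implementation may differ.

import java.util.*;

// Symbol table with a parent chain and an access list of block numbers.
// A lookup made from block `blockId` may only read this table if the table's
// access list contains that block; otherwise the query is passed to the parent.
final class SymbolTable {
    private final SymbolTable parent;                              // null for the outermost scope
    private final Set<Integer> accessList;                         // blocks allowed to look up here
    private final Map<String, String> entries = new HashMap<>();   // name -> attribute (simplified)

    SymbolTable(SymbolTable parent, Set<Integer> accessList) {
        this.parent = parent;
        this.accessList = accessList;
    }

    void declare(String name, String attribute) { entries.put(name, attribute); }

    // Returns the attribute of `name` as seen from block `blockId`, or null if unresolved.
    String lookup(String name, int blockId) {
        if (accessList.contains(blockId) && entries.containsKey(name)) {
            return entries.get(name);
        }
        return (parent == null) ? null : parent.lookup(name, blockId);
    }
}

With the tables chained innermost-to-outermost, resolving z from block B4 finds the local z before any parameter table is consulted, matching the hiding behaviour summarised in Table 4 below, while a lookup of x from block B2 fails because the only table declaring x restricts access to block B1.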
Using the example from Figure 9, a total of 5 scopes are opened: two by interfaces (the original procedure's interface declaring parameters x and y, accessible only by code in block B1, and the suspend/resume point's interface declaring parameter z, accessible only by code in block B4), one by the main body of the procedure (declaring local variable a), one by a block (declaring local variable q), and one following the suspend/resume point (declaring the local variables w and z, of which z hides the parameter from the interface of the suspend/resume statement). In Figure 10, the code has been decorated with +Ti to mark where the ith scope is opened, and −Ti to mark where it is closed. Furthermore, the implicit scopes opened by the parameter list of an interface and the body following a suspend/resume statement have been added; these are the underlined brackets in lines 2, 12, 14, 17, 18, and 22.
mobile void foo {+T0 (int x, int y)
{+T1
  int a;
  B1
  while (B2) {+T2
    int q;
    B3
    suspend resume with {+T3 (int z);
    {+T4
      int w,z;
      B4
    }−T4 }−T3
  }−T2
  B5
}−T1 }−T0
Figure 10. Simple ProcessJ example annotated with scope information.
Note the closure of three scopes, −T4, −T3, −T2, at the end of the block making up the body of the while-loop. Since there are no explicit markers in the code that close down the scopes for suspend/resume points (T3) and the following scope (T4), these get closed automatically when an enclosing scope (T2) is closed. This is easily controlled when traversing the code (and not the CFG), as a typical name resolution pass would. Figure 11 illustrates the 5 symbol tables, the symbols they declare, their access lists, and the nodes in the CFG with which they are associated. We summarize in Table 4 which variables (locals and parameters) can be referenced in which blocks. Note, although block 4 appears in the access list of symbol table T3 in Figure 11 (and the parameter z is in O1(B4)), the local variable z in table T4 hides the parameter.
Figure 11. CFG with symbol tables.

Table 4. Final list of which variables/parameters can be accessed in which blocks.

  Block   Locals                            Parameters
  B1      a ∈ T1                            x ∈ T0, y ∈ T0
  B2      a ∈ T1                            −
  B3      q ∈ T2, a ∈ T1                    −
  B4      w ∈ T4, z ∈ T4, q ∈ T2, a ∈ T1    z ∈ T4
  B5      a ∈ T1                            −
6. Results and Conclusion

We have presented an algorithm that can be applied to create a control flow graph (CFG) at a source code level, and an algorithm to determine which procedure parameters and suspend/resume parameters can be referenced in the code of a mobile procedure. Additionally, we presented a method for performing static scope resolution on a mobile procedure (mobile process) in a process oriented language like ProcessJ. This analysis obeys the standard static scoping rules for local variables and also takes into account the new rules introduced by making a procedure mobile with polymorphic interfaces (and thus resumable in the 'middle of the code', immediately after the point of exit (suspend point)).

7. Future Work

The ProcessJ compiler generates Java code using JCSP to implement CSP primitives like channels, processes and alternations. Additional implementation work is required to integrate
the algorithm as well as the JCSP code generation into the ProcessJ compiler. A possible implementation of mobiles using Java/JCSP can follow the approach taken in [14], which unfortunately requires the generated (and compiled) bytecode to be rewritten; this involves reloading the bytecode and inserting new bytecode instructions, something that can be rather cumbersome. However, we do have a new approach, which does not require any bytecode rewriting at all. We expect to be able to report on this in a different paper in the very near future.

References
[1] Ericsson AB. Erlang STDLIB, 2010. http://www.erlang.org/doc/apps/stdlib/stdlib.pdf.
[2] Gul Agha. Actors: a model of concurrent computation in distributed systems. MIT Press, Cambridge, 1986.
[3] Nick Benton, Luca Cardelli, and Cedric Fournet. Modern Concurrency Abstractions for C#. In ACM Trans. Program. Lang. Syst., pages 415–440. Springer, 2002.
[4] Peter Braun and Wilhelm Rossak. Mobile Agents: Basic Concepts, Mobility Models, and the Tracy Toolkit. Morgan Kaufmann Publishers Inc., San Francisco, CA, USA, 2004.
[5] David Chess, Colin Harrison, and Aaron Kershenbaum. Mobile agents: Are they a good idea? In Jan Vitek and Christian Tschudin, editors, Mobile Object Systems: Towards the Programmable Internet, volume 1222 of Lecture Notes in Computer Science, pages 25–45. Springer Verlag, Berlin, 1997.
[6] Ian Clarke. swarm-dpl - A transparent scalable distributed programming language, 2008. http://code.google.com/p/swarm-dpl/.
[7] Jeffrey Dean and Sanjay Ghemawat. MapReduce: simplified data processing on large clusters. Commun. ACM, 51:107–113, January 2008.
[8] Matthias Felleisen. Beyond continuations. Computer Science Dept., Indiana University Bloomington, Bloomington IN, 1987.
[9] Cédric Fournet and Georges Gonthier. The Join Calculus: A Language for Distributed Mobile Programming. In Gilles Barthe, Peter Dybjer, Luís Pinto, and João Saraiva, editors, Applied Semantics, volume 2395 of Lecture Notes in Computer Science, pages 268–332. Springer Verlag Berlin / Heidelberg, 2000.
[10] Carl Hewitt. Viewing control structures as patterns of passing messages. Artificial Intelligence, 8(3):323–364, June 1977.
[11] Carl Hewitt, Peter Bishop, Irene Greif, Brian Smith, Todd Matson, and Richard Steiger. Actor induction and meta-evaluation. In ACM Symposium on Principles of Programming Languages, pages 153–168, 1973.
[12] Robin Milner. Communicating and mobile systems: the pi-calculus. Cambridge University Press, Cambridge, 1999.
[13] Jan B. Pedersen et al. The ProcessJ homepage, 2011. http://processj.cs.unlv.edu.
[14] Jan B. Pedersen and Brian Kauke. Resumable Java Bytecode - Process Mobility for the JVM. In The thirty-second Communicating Process Architectures Conference, CPA 2009, organised under the auspices of WoTUG, Eindhoven, The Netherlands, 1-6 November 2009, pages 159–172, 2009.
[15] Jeff Rulifson. DEL, 1969. http://www.ietf.org/rfc/rfc0005.txt.
[16] Peter H. Welch and Frederick R.M. Barnes. Communicating Mobile Processes: introducing occam-π. In Ali E. Abdallah, Cliff B. Jones, and Jeff W. Sanders, editors, 25 Years of CSP, volume 3525 of Lecture Notes in Computer Science, pages 175–210. Springer Verlag, April 2005.
[17] Peter H. Welch and Jan B. Pedersen. Santa Claus - with Mobile Reindeer and Elves. Fringe presentation at the Communicating Process Architectures conference, September 2008.
A. Appendix

To illustrate the construction of the CFG in more depth, Figure 13 shows the control flow graph for a for loop with conditional break and continue. The code from which the CFG in Figure 13 was generated is shown in Figure 12. In Figure 13, the body of the for loop is represented by the largest shaded box, the if statement containing the break statement is the box shaded with vertical lines, and the if statement containing the continue statement is the box shaded with horizontal lines.
for (i; b1; u) {
  B1
  if (b2) {
    B2
    break;
  }
  B3
  if (b3) {
    B4
    continue;
  }
  B5
}
Figure 12. Example code with conditional break and continue statements.
Figure 13. CFG for the example code shown in Figure 12.
Prioritised Choice over Multiway Synchronisation

Douglas N. WARREN
School of Computing, University of Kent, Canterbury, UK
Abstract. Previous algorithms for resolving choice over multiway synchronisations have been incompatible with the notion of priority. This paper discusses some of the problems resulting from this limitation and offers a subtle expansion of the definition of priority to make choice meaningful when multiway events are involved. Presented in this paper is a prototype extension to the JCSP library that enables prioritised choice over multiway synchronisations and which is compatible with existing JCSP Guards. Also discussed are some of the practical applications for this algorithm as well as its comparative performance. Keywords. CSP, JCSP, priority, choice, multiway synchronisation, altable barriers.
Introduction

CSP [1,2] has always been capable of expressing external choice over multiway synchronisation: the notion of more than one process being able to exercise choice over a set of shared events, such that all processes making that choice select the same events. For some time, algorithms for resolving such choices at run-time were unavailable and, when such algorithms were proposed, they were non-trivial [3,4]. Conversely priority, a notion not expressed in standard CSP, has been a part of CSP based languages from a very early stage. Priority, loosely, is the notion that a process may reliably select one event over another when both are available. Whilst priority is compatible with simple events such as channel inputs, algorithms for resolving choice over multiway synchronisation have been incompatible with it.

This paper introduces an algorithm for implementing Prioritised Choice over Multiway Synchronisation (PCOMS) in JCSP [5,6] through the use of the AltableBarrier class. This addition to the JCSP library allows entire process networks to be atomically paused or terminated by alting over such barriers. They also enable the suspension of sub-networks of processes, for the purposes of process mobility, with manageable performance overheads. This paper assumes some knowledge of both JCSP and occam-π [7,8] – the latter being the basis of most pseudo-code throughout the paper.

This paper intends to establish that using the AltableBarrier class simplifies certain problems of multiway synchronisation. However, there are no immediately obvious problems which require PCOMS per se. For example, graceful termination of process networks can be achieved using conventional channel communication. However, if such networks have a complicated layout, or if consistency is required at the time of termination, then graceful termination using channel communication becomes more complicated. This same problem using PCOMS requires only that all affected processes are enrolled on (and regularly ALT over) an AltableBarrier which they prioritise over other events.
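As background for readers less familiar with JCSP, the sketch below shows the existing form of prioritised choice over simple events (channel inputs) using Alternative.priSelect(). It does not use the AltableBarrier class introduced in this paper, whose interface is described later; the channel and process names are ours.

import org.jcsp.lang.*;

// Prioritised choice over two channel inputs: when both are ready, the guard
// with the lower index (here `control`) is always selected.
class PriorityReader implements CSProcess {
    private final AltingChannelInput control;   // high priority
    private final AltingChannelInput data;      // low priority

    PriorityReader(AltingChannelInput control, AltingChannelInput data) {
        this.control = control;
        this.data = data;
    }

    public void run() {
        final Alternative alt = new Alternative(new Guard[] { control, data });
        while (true) {
            switch (alt.priSelect()) {           // reliably favours `control` over `data`
                case 0:
                    Object c = control.read();
                    // ... handle control message (e.g. pause/terminate) ...
                    break;
                case 1:
                    Object d = data.read();
                    // ... handle ordinary work ...
                    break;
            }
        }
    }
}

This guarantee is currently available only for simple guards such as channel inputs; making a comparable guarantee meaningful when the guards are multiway barriers is the subject of this paper.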
Introduced in this paper are the background and limitations of existing multiway synchronisation algorithms. In Section 3 the limitations of existing notions of priority and readiness are discussed, and proposals for processes to pre-assert their readiness to synchronise on barriers are made. This section also proposes the notion of nested priority: the idea that several events may be considered to be of the same priority but to exist in a wider priority ordering. Section 5 details the interface that JCSP programmers need to use in order to include AltableBarriers in their programs. Section 6 details the inner workings of the algorithm itself. Section 7 details stress tests performed on the algorithm as well as comparative performance tests with the previous (unprioritisable) AltingBarrier algorithm. The results are discussed in Section 8, as well as some proposed patterns for implementing fair alting and for influencing the probability of certain events being selected through partial priority. Section 9 concludes the paper.
1. Background
This section considers some of the existing algorithms for resolving choice over multiway synchronisation, both where the set of events is limited and where the set of events may be arbitrarily large. Also considered are some of the attempts to model priority in CSP.
Some of the earliest algorithms resolving choice over multiway synchronisation are database transaction protocols such as the two-phase commit protocol [9]. Here the choice is between selecting a 'commit' event or one or more processes choosing to 'abort' an attempt to commit changes to the database. Initially such protocols were blocking. After the commit attempt was initiated, a coordinator would ask the enrolled nodes to commit to the transaction. If all nodes commit in this way then the transaction is confirmed by an acknowledgement; otherwise the nodes are informed that they should abort the transaction. In either case the network and the nodes themselves were considered to be reliable and responsive to such requests. Later incarnations were non-blocking and tolerated faults by introducing the possibility that transactions could time out [10]; these are sometimes referred to as three-phase commit protocols. The first phase asks nodes if they are in a position to synchronise (which the coordinator acknowledges); the second involves the processes actually committing to the synchronisation, subject to timeouts; the third ensures that the 'commit' or 'abort' is consistent for all nodes.
The protocols above are limited in that they can be considered to be choosing over two events, 'commit' and 'abort'. A more general solution was proposed by McEwan [3] which reduced the choice to state machines connected to a central controller. This was followed by an algorithm which coordinates all multiway synchronisations through a single central Oracle [11] and was implemented as a library extension for JCSP [5,4] in the form of the AltingBarrier class.
All of the above algorithms are incompatible with user defined priority. The database commit protocols are only compatible with priority to the extent that committing to a transaction is favoured over aborting it. The more general algorithms have no mechanism by which priority can be imposed and, in the case of JCSP AltingBarriers, this incompatibility is made explicit. There have been many attempts to formalise event priority in CSP.
Fidge [12] considers previous approaches which (either statically or dynamically) assign global absolute priority values to specific events; these approaches are considered to be less modular and compositional. Fidge instead proposes an asymmetric choice operator (written as the external choice symbol with an arrow over it) which favours the left operand. Such an operator is distinguished from the regular external choice operator in that
it excludes the traces of the right hand (low priority) operand where both are allowed by the system, i.e. the high priority event is always chosen where possible. While this might be considered ideal, in practice the arbitrary nature of scheduling may allow high priority events to not be ready, even when the system allows it. Therefore low priority events are not excluded in practice in CSP based languages. However, the introduction of readiness tests to CSP by Lowe [13] allows priority to be modelled as implemented in CSP based languages. Using this model, priority conflicts (an inevitable possibility with locally defined relative priority structure) are resolved by arbitrary selection; this is the same result as occurs with JCSP AltableBarriers (albeit with higher performance costs). However, Lowe treats readiness as a binary property of events; in Section 3.1 a case is presented for treating readiness as a (possibly false) assertion that all enrolled processes will be in a position to synchronise on an event in the near future. This distinction allows processes to pre-emptively wait for multiway synchronisations to occur. Section 2.2 establishes this as being necessary to implement meaningful priority.
2. Limitations of Existing External Choice Algorithms
Existing algorithms offering choice over multiway synchronisation do not offer any mechanism for expressing priority; they offer only arbitrary selection (which is all that standard CSP describes). Listed in this section are two (not intuitively obvious) ways in which repeated use of these selection algorithms can be profoundly unfair – although it is worth bearing in mind that CSP choice has no requirement for priority and repeated CSP choice has no requirement for fairness. As such, while these aspects of existing choice resolution algorithms are arguably undesirable, they all constitute valid implementations of external choice.
2.1. Arbitration by Barrier Size
Pre-existing algorithms for resolving choice over multiway synchronisation have in common an incompatibility with priority [4]. This means that event selection is considered to be arbitrary - in other words no guarantees are made about priority, fairness or avoiding starvation. It is therefore the responsibility of programmers to ensure that this limitation has no adverse effects on their code. While the problems of arbitrary selection may be relatively tractable for code involving channel communications, code containing barrier guards poses extra complications. Consider the following occam-π pseudo-code:

PROC P1 (BARRIER a, b)
  ALT
    SYNC a
      SKIP
    SYNC b
      SKIP
:

PROC P2 (BARRIER a)
  SYNC a
:

PROC P3 (BARRIER b)
  SYNC b
:
These are three different types of process: the first is enrolled on both 'a' and 'b', while the other two are each enrolled on one of the two barriers but not both. Consider a process network containing only P1 and P3 processes:
PROC main (VAL INT n, m)
  BARRIER a, b, c:
  PAR
    PAR i = 0 FOR n
      P1 (a, b)
    PAR i = 0 FOR m
      P3 (b)
:
In such a network event 'a' is favoured. In order for either event to happen all of the processes enrolled on that event must be offering it. Since the set of processes enrolled on 'a' is a subset of those enrolled on 'b', for 'b' to be ready implies that 'a' is also ready (although the reverse is not true). It is therefore necessary for all of the P3 processes to offer 'b' before all of the P1 processes offer 'a' and 'b' in order for synchronisation on 'b' to be possible (even then the final selection is arbitrary). However, as the ratio of P1 to P3 processes increases, this necessary (but not sufficient) condition becomes less and less likely.
This state of affairs may however be desirable to a programmer. For example, another process in the system may be enrolled on 'a' but also waiting for user input. Provided that processes P1 and P3 are looped ad infinitum, 'a' may represent a high priority, infrequently triggered event while 'b' is less important and is only serviced when 'a' is unavailable. A naive programmer may consider that this property will always hold true. However, consider what happens if P2 processes are dynamically added to the process network. Initially 'a' continues to be prioritised over 'b', but once the P2 processes outnumber the P3 processes it becomes more and more likely that 'b' will be picked over 'a', even if 'a' would otherwise be ready. For this reason a programmer needs to be aware not only of the overall structure of their program (in order to reason about which events are selected) but also of the numbers of processes enrolled on those events. This becomes even more difficult if these numbers of enrolled processes change dynamically.
2.2. Unselectable Barriers
As well as making the selection of events depend (to an extent) on the relative numbers of processes enrolled on competing barriers, existing algorithms for resolving external choice over multiway synchronisation can allow the selection of certain events to be not only unlikely but (for practical purposes) impossible. Consider the pseudo-code for the following two processes:

PROC P1 (BARRIER a, b)
  WHILE TRUE
    ALT
      SYNC a
        SKIP
      SYNC b
        SKIP
:

PROC P2 (BARRIER a, c)
  WHILE TRUE
    ALT
      SYNC a
        SKIP
      SYNC c
        SKIP
:
If a process network is constructed exclusively out of P1 and P2 processes then the sets of processes enrolled on 'a', 'b' and 'c' have some interesting properties. The set of processes enrolled on 'b' and those enrolled on 'c' are both strict subsets of those enrolled on 'a'. Further, the intersection of the sets for 'b' and 'c' is the empty set. Since choice resolution algorithms (like the Oracle algorithm used in JCSP AltingBarriers) always select events as soon as they are ready (i.e. all enrolled processes are in a position to synchronise on the event), this means that for an event to be selected it must become ready either at the same time as or before any competing events. However, because the set of processes enrolled on 'a' is a superset of those enrolled on 'b' and 'c', it would be necessary for 'a', 'b' and 'c' to become ready at the same time for 'a' to be selectable. This is impossible because only one process may make or retract offers at a time and no process offers 'a', 'b' and 'c' simultaneously. It is therefore impossible for 'a' to be selected, as either 'b' or 'c' must become ready first.
The impossibility of event 'a' being selected in the above scenario holds true for AltingBarrier events: each process offers its set of events atomically and the Oracle deals with each offer atomically (i.e. without interruption by other offers). However, this need not happen. If it is possible for processes to have offered event 'a' but to have not yet offered event 'b' or 'c', then 'a' may be selected if a sufficient number of processes have offered 'a' and are being slow about offering 'b' or 'c'. This gives a clue as to how priority can be introduced into a multiway synchronisation resolution algorithm.
3. Limitations of Existing Priority Models
3.1. Case for Redefining Readiness and Priority
As discussed in Section 2.2, selecting events as soon as all enrolled processes have offered to synchronise can cause serious problems for applying priority to choice over multiway synchronisation. As such, meaningful priority may be introduced by allowing processes to pre-emptively wait for synchronisations to occur or by suppressing the readiness of other events in favour of higher priority ones. Here, to pre-emptively wait on a given event means to offer only that event and to exclude the possibility of synchronising on any others that would otherwise be available in an external choice. Events which are not part of that external choice may be used to stop a process pre-emptively waiting; for example, a timeout elapsing may trigger this. Once a process stops pre-emptively waiting it is once again free to offer any of the events in an external choice. In other words, a process waits for the completion of one event over any other in the hope that it will be completed soon; if it is not, then the process may consider offering other events.
Waiting in this way requires the resolution of two problems. The first is that if processes wait indefinitely for synchronisations to occur, the network to which the process belongs may deadlock. The corollary to this is that where it is known in advance that an event cannot be selected, it should be possible for processes to bypass waiting for that event altogether (so as to avoid unnecessary delays). The second is that, as a consequence of the first problem, when a process does stop waiting for a high priority event and begins waiting for a lower priority one, it is possible that the higher priority event may become ready again.
Here, ready again means that the event once more merits the processes enrolled on it pre-emptively waiting for its completion. Thus it must be possible for a process to switch from pre-emptively waiting for a low priority synchronisation to a higher priority one. While there is an almost unlimited number of ways of pre-emptively determining the readiness of any event, it is proposed that PCOMS barriers use flags to pre-emptively assert the readiness of enrolled processes. Each process uses its flags (one per barrier that it is enrolled on) to assert whether or not it is in a position to synchronise on that event in the near future. It is a necessary condition that all enrolled processes assert their readiness for those
processes to begin waiting for that synchronisation to take place. If this condition becomes false during such a synchronisation attempt then that attempt is aborted. Conversely, if this condition becomes true then this triggers enrolled processes waiting for lower priority events to switch to the newly available event. For the purposes of causing synchronisation attempts to be aborted because of a timeout, such timeouts falsify the assertion of the absent processes that they are in a position to synchronise in the near future. Their flags are changed to reflect this, which in turn causes the synchronisation attempt as a whole to be aborted. In this way high priority events are given every opportunity to be selected over their lower priority counterparts, while the programmer is given every opportunity to avoid wasteful synchronisation attempts where it is known that such a synchronisation is unlikely.
3.2. Case for Nested Priority
While there are positive uses for prioritising some multiway synchronisations over others (graceful termination, pausing, etc.) there may be some circumstances where imposing a priority structure on existing arbitrary external choices can be undesirable. Consider the process network for the TUNA project's one-dimensional blood clotting model [11]. Each SITE process communicates with others through a 'tock' event and an array of 'pass' events, each process being enrolled on a pass event corresponding to itself as well as the two processes in front of it in a linear pipeline. Although the events offered at any given time depend on the SITE process' current state, it is a convenient abstraction to consider that the SITE process offers all events at all times, as in the following pseudo-code:

PROC site (VAL INT i)
  WHILE TRUE
    ALT
      ALT n = 0 FOR 3
        SYNC pass[i + n]
          SKIP
      SYNC tock
        SKIP
:
Here the SITE process makes an arbitrary selection over the events that it is enrolled on. Now suppose that the SITE processes also offer to synchronise on a 'pause' barrier. This barrier would need to be of higher priority than the other barriers and would presumably only be triggered occasionally by another process waiting for user interaction. A naive way of implementing this could be the following:

PROC site (VAL INT i)
  WHILE TRUE
    PRI ALT
      SYNC pause
        SKIP
      PRI ALT n = 0 FOR 3
        SYNC pass[i + n]
          SKIP
      SYNC tock
        SKIP
:
Here the SITE process prioritises the ‘pause’ barrier most highly, followed by the ‘pass’ barriers in numerical order, followed by the ‘tock’ barrier. This might not be initially considered a problem as any priority ordering is simply a refinement of an arbitrary selection scheme.
However, when more than one process like this is composed in parallel, problems begin to emerge: each individual SITE process, identified by the 'i' parameter passed to it, prefers the 'pass[i]' event over other pass events further down the pipeline. In other words SITE2 prefers 'pass[2]' over 'pass[3]', while SITE3 prefers 'pass[3]' over all others, and so on. This constitutes a priority conflict as there is no event consistently favoured by all processes enrolled on it. To paraphrase, each process wishes to select its own 'pass' event and will only consider lower priority events when it is satisfied that its own 'pass' event is not going to complete. Since no processes can agree on which event is to be prioritised, there is no event which can be selected which is consistent with every process' priority structure. There are two ways in which this can be resolved. The first is that the system deadlocks. The second is that each process wastes time waiting for its favoured event to complete, comes to the conclusion that the event will not complete and begins offering other events. This second option is effectively an (inefficient) arbitrary selection.
The proposed solution to this problem for PCOMS barriers is to allow groups of events in an external choice to have no internal priority but for that group to exist in a wider prioritised context. For the purposes of expressing this as occam-π pseudo-code, a group of guards in an ALT block is considered to have no internal priority structure, but if that block is embedded in a PRI ALT block then those events all fit into the wider priority context of the PRI ALT block. For example, in this code:

PROC site (VAL INT i)
  WHILE TRUE
    PRI ALT
      SYNC pause
        SKIP
      ALT
        ALT n = 0 FOR 3
          SYNC pass[i + n]
            SKIP
        SYNC tock
          SKIP
:
The 'pause' event is considered to have higher priority than all other events, but the 'pass' and 'tock' events are all considered to have the same priority, thereby eliminating any priority conflict. All processes instead are willing to offer any of the 'pass' or 'tock' events without wasting time waiting for the completion of any one event over any other.
4. Implementation Nomenclature
For the purposes of discussing both the interface and implementation of JCSP PCOMS barriers, it is necessary to describe a number of new and extant JCSP classes as well as some of their internal fields or states. A UML class diagram is shown in Figure 1.
PCOMS barrier: The generic name for any barrier which is capable of expressing nested priority and which can (when involved in an external choice) optimistically wait for synchronisation to occur (as opposed to requiring absolute readiness).
AltableBarrierBase: The name of a specific JCSP class representing a PCOMS barrier. An AltableBarrierBase contains references to all enrolled processes through their AltableBarrier front-ends.
AltableBarrier: A JCSP class representing a process's front-end for interacting with an AltableBarrierBase. There is exactly one AltableBarrier per process per AltableBarrierBase
that it is enrolled on. Henceforth, unless otherwise noted, the term barrier is used as a shorthand for an AltableBarrier. Further, an AltableBarrier may, in context, refer to the AltableBarrierBase to which it belongs. For example, a process which selects an AltableBarrier also selects the AltableBarrierBase to which it belongs.
GuardGroup: A collection of one or more AltableBarriers which are considered to be of equal priority.
BarrierFace: A class used to store important information about a process' current state regarding synchronisation attempts on AltableBarriers. It includes the AltableBarrier (if any) that a process is trying to synchronise on, the local lock which must be claimed in order to wake a waiting process, etc. There is a maximum of one BarrierFace per process.
'Status', PREPARED, UNPREPARED and PROBABLY READY: Each AltableBarrier has a 'status' flag which records whether a process is PREPARED or UNPREPARED to synchronise on that barrier in the near future. An AltableBarrierBase is considered PROBABLY READY iff all enrolled processes are PREPARED. Being PROBABLY READY is a prerequisite for a process to attempt a synchronisation on an AltableBarrier.
Alternative: An existing JCSP class which is the equivalent of an occam-π ALT. Calling its priSelect() method causes it to make a prioritised external choice over its collection of Guards.
altmonitor: A unique object stored in an Alternative. If an Alternative (when resolving external choice) checks all of its Guards and finds none of them ready, then the invoking process calls the wait() method on the altmonitor. The process then waits for any of the Guards to become ready before being woken up by a corresponding notify() call on the altmonitor.
Figure 1. UML diagram showing the relationship between new and existing JCSP classes.
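As an informal illustration of the PROBABLY READY condition defined above, the following Java sketch shows the test that an AltableBarrierBase would need to perform over its enrolled front-ends. It assumes a hypothetical getStatus() accessor on AltableBarrier (the interface in Section 5 only documents setStatus() and setDefaultStatus()), so this is an outline of the condition rather than library code.

// Sketch only: an AltableBarrierBase is PROBABLY READY iff every enrolled
// front-end currently asserts PREPARED. getStatus() is a hypothetical accessor.
static boolean isProbablyReady(java.util.List<AltableBarrier> enrolledFrontEnds) {
    for (AltableBarrier frontEnd : enrolledFrontEnds) {
        if (frontEnd.getStatus() != AltableBarrier.PREPARED) {
            return false;   // at least one process has not asserted readiness
        }
    }
    return true;            // a synchronisation attempt may now be started
}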
5. Description of PCOMS Interface
This section illustrates the interface programmers use to interact with AltableBarriers. The source code for all of the classes described in this section can be downloaded from a branch in the main JCSP Subversion repository [14]. All of these classes are contained in the org.jcsp.lang package.
5.1. Compatibility with Existing JCSP Implementation
The AltableBarrier class, although not directly extending the Guard class, is nevertheless designed to be used in conjunction with the Alternative class in JCSP. A single object of the class AltableBarrierBase, shared between all enrolled processes, is used to represent the actual PCOMS barrier. Each process then constructs its own AltableBarrier object, passing the AltableBarrierBase object to the constructor. This creates an individual front-end to the barrier for that process and enrols that process on the barrier. The AltableBarrier is included as a Guard in an Alternative by passing an array of AltableBarriers to the constructor of a GuardGroup object. This class extends the Guard class and functions as a collection of one or more AltableBarriers.

// construct a new barrier
AltableBarrierBase base = new AltableBarrierBase();
// enrol a process on a barrier
AltableBarrier bar = new AltableBarrier(base);
// create a GuardGroup containing only one barrier
GuardGroup group = new GuardGroup(new AltableBarrier[] {bar});
5.2. Mechanism for Expressing Nested Priority
Guards are passed to an Alternative constructor as an array; the order in which the elements are arranged determines the priority ordering. Since the GuardGroup class extends Guard, the relative priority of AltableBarriers is determined by the position of the GuardGroup to which they belong. However, a single GuardGroup can contain more than one AltableBarrier; such barriers have no priority ordering within the GuardGroup (the selection process is detailed later but may be considered arbitrary). In this way a group of barrier guards with no internal priority between themselves can be nested within a larger priority structure.

// various AltableBarriers intended to have different priorities.
// assume these variables have real AltableBarrier objects assigned
AltableBarrier highBar, midBar1, midBar2, lowBar;

// create 3 different GuardGroups, one for each priority level.
// note that mid has two AltableBarriers which are of equal priority
GuardGroup high = new GuardGroup(new AltableBarrier[] {highBar});
GuardGroup mid = new GuardGroup(new AltableBarrier[] {midBar1, midBar2});
GuardGroup low = new GuardGroup(new AltableBarrier[] {lowBar});

Guard[] guards = new Guard[] {high, mid, low};
Alternative alt = new Alternative(guards);
5.3. Mechanisms for Manipulating Readiness
As explained earlier (Section 3.1), the ability to express meaningful priority over multiway synchronisation requires the ability to express a future ability to engage on an event, as well as the ability to correct for false positive and negative readiness tests. With regard to the former, a PCOMS barrier is considered ready if all of the enrolled processes have advertised the fact that they are able to synchronise on that barrier in the near future. To this end, all AltableBarrier objects have a flag indicating whether a process is PREPARED or UNPREPARED to synchronise on that barrier. For a synchronisation to be attempted all enrolled processes must be PREPARED.
These flags do not reflect whether or not a process is actually offering an event at any given moment. Instead they indicate whether or not (in the programmer's opinion) that process will be in a position to offer that event within a reasonable time frame and that the process network as a whole will not deadlock if other processes act on this information. While a process is evaluating an Alternative's priSelect() method, the state of this flag is managed automatically. A process becomes PREPARED to synchronise on a given barrier as soon as it is encountered in that Alternative; likewise it is automatically made UNPREPARED if a synchronisation attempt is made but that process fails to engage on that event before a timeout elapses (this state persisting until that process actually is in a position to engage on that event again). At all other times a user defined default state holds for each individual AltableBarrier object. It is however possible for the programmer to override this state temporarily (i.e. until the state is changed automatically) by calling the AltableBarrier's setStatus() method, or more permanently by overriding its default state by calling its setDefaultStatus() method.
In general, any process which regularly evaluates an Alternative containing a given AltableBarrier (such as a server process) should set this default to PREPARED. Conversely, processes which act as clients or which wait for user or network input (and thus may be significantly delayed before attempting a synchronisation with a barrier) should set this default to UNPREPARED. While changes to the default after construction are left at the programmer's discretion, such changes should be unnecessary unless a significant change in the behaviour of a process occurs.

AltableBarrier bar1 = new AltableBarrier(base, AltableBarrier.UNPREPARED);
AltableBarrier bar2 = new AltableBarrier(base, AltableBarrier.PREPARED);

bar1.setStatus(AltableBarrier.PREPARED);
bar2.setDefaultStatus(AltableBarrier.UNPREPARED);
5.4. Discovery and Acknowledgement of Events After Selection
Once an AltableBarrier has been selected by a call to the priSelect() method, the index returned by that method will indicate the GuardGroup object to which that barrier belongs. Calling the lastSynchronised() method on that GuardGroup will reveal the specific AltableBarrier selected. By this point the actual synchronisation on the barrier will have taken place. Therefore, unlike with JCSP channel synchronisations, it is unnecessary for the programmer to do anything else to complete or acknowledge the synchronisation having occurred. To paraphrase, an AltableBarrier is used in the same way as an AltingBarrier with two exceptions. The first is that an AltableBarrier needs to be enclosed in a GuardGroup; this GuardGroup must be interrogated if the selected barrier is ambiguous. The second is that priority cannot be expressed using AltingBarriers.
int index = alt.priSelect();
Guard selectedGuard = guards[index];
AltableBarrier selected = null;
if (selectedGuard instanceof GuardGroup) {
    GuardGroup group = (GuardGroup) selectedGuard;
    selected = group.lastSynchronised();
}
// The synchronisation has already taken place at this point,
// no further action is required to acknowledge the event.
5.5. Current Limitations
JCSP AltableBarriers (via an enclosing GuardGroup object) can be used with any number of existing JCSP Guards in any combination, with two restrictions. The first is that no Alternative object may enclose both a GuardGroup and an AltingBarrier (the latter being the name of a class which implements the old Oracle algorithm). Code required to ensure consistency of selection for AltingBarriers can cause inconsistency for the new AltableBarriers. The second restriction is that only the priSelect() method of the Alternative class is considered safe for use with AltableBarriers; behaviour when using the select() or fairSelect() methods is not considered here.
It should also be noted that the existing AltableBarrier implementation lacks any mechanism for allowing processes to resign from a barrier. This restriction is not intended to be permanent. In the interim, processes wishing to resign from an AltableBarrier should spawn a new process and pass it the unwanted AltableBarrier object; this process should loop infinitely, offering to synchronise on that barrier on each iteration (a minimal sketch of this workaround follows). Finally, AltableBarriers are incompatible with the use of any boolean preconditions.
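For concreteness, the following is a minimal sketch of the interim workaround just described: a helper process that does nothing but repeatedly offer the unwanted barrier, so that the remaining enrolled processes can still complete synchronisations. The class name and structure are illustrative; only the AltableBarrier, GuardGroup and Alternative usage follows the interface described in this section.

import org.jcsp.lang.*;

// Illustrative helper (not part of the library): keeps offering an AltableBarrier
// on behalf of a process that no longer wants it, since resignation is not yet supported.
public class BarrierKeepAlive implements CSProcess {
    private final AltableBarrier unwanted;

    public BarrierKeepAlive(AltableBarrier unwanted) {
        this.unwanted = unwanted;
    }

    public void run() {
        GuardGroup group = new GuardGroup(new AltableBarrier[] { unwanted });
        Alternative alt = new Alternative(new Guard[] { group });
        while (true) {
            alt.priSelect();   // each selection completes one synchronisation on the barrier
        }
    }
}

A resigning process would hand over its AltableBarrier front-end and start this helper, for example with new ProcessManager(new BarrierKeepAlive(bar)).start().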
6. Description of PCOMS Algorithm
This section details the inner workings of the PCOMS algorithm as applied to JCSP. The algorithm is inspired in part by the three-phase commit protocol [10]. Specifically, the algorithm can be broken down into three distinct phases. The first concerns establishing whether or not an AltableBarrier (or group of AltableBarriers) is in a position for enrolled processes to begin pre-emptively waiting for a synchronisation to occur, and selecting such a barrier in a manner consistent with priority ordering. The second phase involves waiting for the synchronisation itself and includes details of mechanisms for ensuring consistency of selection between processes as well as of the mechanisms for aborting synchronisation attempts. The third phase involves ensuring that any synchronisations are consistently reported by all processes. Details are also given for processes which have begun waiting on the 'altmonitor', an object unique to each instance of the Alternative class used to wake processes waiting for any (even non-barrier) guards to become ready.
While the possibility that this algorithm could be simplified should not be ruled out, the relative complexity of this algorithm serves to prevent deadlock. Much of the complexity is required for compatibility with existing Alternative Guards. For example, special provisions must be made for processes waiting on the altmonitor object. This is because such processes may be woken up by either a successful barrier synchronisation or by a conventional JCSP Guard.
6.1. Phase 1: Readiness Testing
Figure 2 outlines the logic. All Guards in a JCSP Alternative have their readiness tested by a call to their enable() method: calling enable() on a GuardGroup initiates readiness tests on all of the AltableBarriers that that GuardGroup contains. A call to the enable() method of a GuardGroup returns true iff an attempt to synchronise on an AltableBarrier has been successful.
When enable() is called on a GuardGroup, it claims a global lock: this lock is required for all reading and writing operations to all data related to AltableBarrier synchronisations. This lock is not released until either a process begins waiting for a synchronisation to occur or the invoking enable() method has been completed and is ready to return. Once the global lock has been claimed, the process sets the status flag of all of the AltableBarriers contained in the GuardGroup (and all of those contained in higher priority GuardGroups) to PREPARED. (During the time between the evaluation of one GuardGroup and another it is possible for a synchronisation attempt on an AltableBarrier to have timed out; in such a case the currently running process may have had its status flag for that barrier set to UNPREPARED. Given that this process is now once again in a position to offer that event, such flags must be reset to PREPARED.)
The next step is to select a barrier on which to attempt synchronisation. For each GuardGroup encountered in the Alternative so far, in priority order, all of the AltableBarriers in each GuardGroup are examined to see if they are PROBABLY READY. If no AltableBarriers are PROBABLY READY then the next GuardGroup is examined. If no AltableBarriers are ready in any of the GuardGroups under consideration, then the enable() method releases the global lock and returns false. If one or more AltableBarriers are found to be PROBABLY READY, then they are each tested to see if any have been selected by other processes. If some of them have, then those that have not are eliminated from consideration for now. In either case, an AltableBarrier is arbitrarily selected from the list of PROBABLY READY barriers which remain. In this way, an AltableBarrier is selected which is PROBABLY READY, of equal or greater priority to other possible barriers and is, if possible, the same choice of barrier as selected by the process' peers.
6.2. Phase 2: Awaiting Completion
The process holding the global lock now has an AltableBarrier on which it intends to attempt a synchronisation. It is already the case that this barrier is one of (or the) highest priority barriers currently available and that (where applicable) it is also a barrier which has been selected by other processes. However, there may be other processes enrolled on this barrier currently attempting to synchronise on other lower priority barriers. In order for the barrier synchronisation to complete, it is necessary for those processes waiting on other barriers to be stolen (see Section 6.2.1). These processes, where they could be stolen, continue to wait but are now waiting for the 'stealing' barrier to complete. See Figure 3.
Having ensured maximum consistency between all processes attempting barrier synchronisations, the process holding the global lock checks to see if it is the last process required to complete the barrier synchronisation. If it is, then the waiting processes are informed of the successful synchronisation and woken up (see Section 6.3). If not, then the process will need to begin waiting – either for a successful synchronisation attempt or for the synchronisation to be aborted. If this is the only process currently attempting to synchronise on the barrier, then a timeout process is started (see Section 6.2.2) to ensure that any synchronisation attempt is not continued indefinitely.
Figure 2. Flow Chart showing Phase 1.
The BarrierFace is then updated to reflect the currently selected
barrier and an object representing a local lock used to wake the process once it has begun waiting. The object used for this local lock is the enclosing Alternative object itself: this has the virtue of being unique to each process and of being distinct from the Alternative’s altmonitor (this is to avoid the process being woken up by non-barrier guards becoming ready). Then, the process claims its local lock and, afterwards, releases the global lock. It then calls the local lock object’s wait() method, meaning that the process will sleep either until a barrier synchronisation is successful or its synchronisation attempt is aborted. During
this waiting time a process may be stolen (see Section 6.2.1) any number of times.
For the purposes of ensuring deadlock freedom, it is important to note that all processes which wait for synchronisations to complete – as well as processes which wake them up – always claim the global lock first, then claim the local lock before releasing the global lock. When waiting processes are woken up, they initially own their local lock and then claim the global lock; this inversion in the order in which locks are claimed can potentially cause deadlock. To counter this, there are two strict conditions imposed on waking processes:
1. A process must first be waiting before another process can attempt to wake it.
2. The BarrierFace of each process has a 'waking' flag which is set to true once a process has woken it up. No process will attempt to wake a process with a true 'waking' flag.
This means that locks are always claimed in the order global-then-local until a process is woken up, after which locks are claimed in the order local-then-global.
In summary, a process attempting a synchronisation will do one of two things. If it is the last process required to complete a synchronisation, it will do so. Otherwise it will begin waiting for the synchronisation to complete or for the attempt to be aborted. In any case, after this phase has been completed, the process in question will know whether or not it successfully synchronised on a barrier and, if so, which one. If synchronisation was successful, then phase 3 (Section 6.3) ensures that this is consistently reported by all processes involved.
Figure 3. Flow Chart showing Phase 2.
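The lock ordering just described can be summarised in the following sketch. It is an illustration only, not the JCSP source: a java.util.concurrent ReentrantLock stands in for the global lock, the BarrierFace bookkeeping is elided, and the method name is invented for the example.

import java.util.concurrent.locks.ReentrantLock;

// Sketch of the global-then-local claim order described above (assumptions noted in the text).
class LockOrderSketch {
    static final ReentrantLock GLOBAL = new ReentrantLock();   // guards all shared barrier state

    void awaitSynchronisation(Object localLock) throws InterruptedException {
        GLOBAL.lock();                     // 1. the global lock is always claimed first
        try {
            // ... record the selected barrier and localLock in this process's BarrierFace ...
            synchronized (localLock) {     // 2. the local lock is claimed before the global lock is released
                GLOBAL.unlock();           // 3. the global lock is released
                localLock.wait();          // 4. sleep until a peer reports success or an abort
            }
        } finally {
            if (GLOBAL.isHeldByCurrentThread()) {
                GLOBAL.unlock();           // only reached if something failed before step 3
            }
        }
    }
}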
6.2.1. Stealing
Stealing is the way in which processes enrolled on a given barrier, but currently waiting for the completion of different barriers, are switched from waiting on the latter to the former. For each of the processes enrolled on the stealing barrier the following tests are run; failing any of them means that the process is not stolen:
1. Is the process currently evaluating an Alternative?
2. Is it currently waiting for the completion of another barrier?
3. Does the stealing barrier have an equal or higher priority than the old barrier from the point of view of the process being stolen? (A process waiting for the completion of a barrier may be stolen by a barrier which it considers to be of equal priority. This is allowed because the process which initiated the stealing may have a specific priority ordering (where the process being stolen does not), or the stealing process may not be enrolled on the same set of events.)
4. Is the process' 'waking' flag still false?
If these conditions are met then the process is stolen by simply changing the AltableBarrier object recorded in the process' BarrierFace.
6.2.2. Timeouts
A timeout process is created when the first process enrolled on a barrier begins waiting for its completion; its purpose is to abort synchronisation attempts on a barrier which take too long. When created and started in its own Thread, a timeout process waits for a time period dependent on the number of processes enrolled on its corresponding barrier. Currently this time period is 500 milliseconds multiplied by the number of enrolled processes. This formula is entirely arbitrary but has proved to be a generous estimation of the time required to complete barrier synchronisations of any size. A more detailed analysis of PCOMS barrier performance would be required to minimise the time spent waiting for false-positive synchronisation attempts.
When the timeout period has elapsed the timeout process claims the global lock and examines an internal flag, the state of which depends on whether or not the barrier synchronisation was successful while the timeout process was asleep; if it was, then it releases the global lock and terminates. If the synchronisation attempt has yet to complete then the timeout process aborts the synchronisation attempt in the following way. Each process enrolled on that barrier but which is not currently attempting to synchronise on it has its status flag (associated with the timed-out barrier) set to UNPREPARED. In other words, that process' assertion that it will synchronise on that barrier in the near future has been proven false; it is therefore amended to UNPREPARED until such time as that process is in a position to synchronise on the barrier. Changing the status of some processes to UNPREPARED means that the barrier as a whole is no longer PROBABLY READY; such a change is the only way in which synchronisation attempts are aborted. All processes currently waiting on the aborted barrier have their BarrierFace objects amended to reflect that they are no longer waiting for any barrier. Normally, these processes also have their 'waking' flags set to true and are then awoken. If any of these processes are waiting on the altmonitor (see Section 6.4), they are not awoken. Currently there is no mechanism for the programmer to set or terminate these timeouts manually, nor to change the amount of time that an event takes to time out.
6.3. Phase 3: Ensuring Consistency
Having progressed past phases 1 and 2, a process will have selected a barrier to attempt a synchronisation on and will have either succeeded or failed to synchronise (in the interim it may
have been stolen by another barrier). In either case, the result must be acted on such that the process either continues searching for another guard to select, or acknowledges a successful synchronisation and ensures that the acknowledgement is consistent for all processes. At this stage, a process will have access to its BarrierFace, which will either contain the AltableBarrier on which the process has synchronised or will contain a null value in its place. If the latter is the case, then the synchronisation attempt was aborted; the process moves back to phase 1 and either attempts a new synchronisation or (if no AltableBarriers are PROBABLY READY) the enable() method terminates returning false.
If the process did synchronise, a number of things need to be done to ensure that this is reported correctly. The most important complication is that the enable() method invoked belongs to a specific GuardGroup, which in turn represents one or more AltableBarrier objects contained within. However, because a process may be stolen by a barrier in another GuardGroup, the Guard that the Alternative selects may be different from the GuardGroup whose enable() method has been called. The selected AltableBarrier has a reference to the GuardGroup which contains it: this GuardGroup has a field called 'lastSynchronised' and the selected AltableBarrier is assigned to this field. Whether or not the currently executing GuardGroup contains the selected AltableBarrier, the global lock is released and the enable() method returns true. Returning true here causes the previously enabled Guards to be disabled in reverse order. The disable() method of a GuardGroup (which also begins by claiming the global lock) changes the status of all the AltableBarriers it contains from PREPARED back to their default values. If the GuardGroup's 'lastSynchronised' field has been set to a non-null value (i.e. the selected AltableBarrier belongs to this GuardGroup), then the executing process releases the global lock and synchronises on a 'gatekeeper' barrier (this being a Barrier object with the same set of enrolled processes as its corresponding AltableBarrier). This prevents synchronised processes from proceeding until they have all been woken up and have executed the important parts of their disable() methods. The disable() method returns true iff its 'lastSynchronised' field is set to a non-null value. The Alternative class has also been subtly altered such that if a GuardGroup's disable() method ever returns true, then that GuardGroup's index is returned by the priSelect() method in preference to any non-barrier Guards which may have become ready in the interim.
In this way, all processes that have successfully synchronised on a barrier will have stopped their Alternative's enable sequence and begun disabling all previously enabled Guards. Only the GuardGroup that contains the successful AltableBarrier will return true when its disable() method is called; no process will be able to proceed until all other processes enrolled on that barrier have also woken up (this prevents processes from waking up, acknowledging the successful synchronisation and then immediately selecting the same event again in the same Alternative). The Alternative class itself has been subtly altered to prevent the readiness of non-barrier Guards from taking precedence over a GuardGroup.
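To summarise the disable() behaviour just described, the following outline may help. It is not the library source: the field names barriers, lastSynchronised, gatekeeper and globalLock, and the getDefaultStatus() accessor, are assumed for illustration; only the JCSP Barrier.sync() call and the AltableBarrier methods from Section 5 come from the described interfaces.

// Outline (assumed names, see above) of GuardGroup.disable() as described in Phase 3.
boolean disable() {
    globalLock.lock();                              // same global lock as in Phase 1
    try {
        for (AltableBarrier b : barriers) {
            b.setStatus(b.getDefaultStatus());      // PREPARED flags revert to their defaults
        }
        if (lastSynchronised == null) {
            return false;                           // the selected barrier lives in another GuardGroup
        }
    } finally {
        globalLock.unlock();                        // released before the gatekeeper synchronisation
    }
    gatekeeper.sync();                              // plain JCSP Barrier with the same enrolled processes
    return true;                                    // priSelect() will report this GuardGroup's index
}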
6.4. Behaviour when Waiting on the Altmonitor
When a process begins waiting for a barrier synchronisation during a call to enable() in a GuardGroup, that process can only be woken by the success or failure of barrier synchronisations. However, when an Alternative has enabled all of its Guards, it begins waiting on its altmonitor; when any of the enabled Guards becomes ready, the process is woken up (this includes non-barrier Guards). As such, the way in which processes waiting on their altmonitor are dealt with is subtly different. The following is a list of those differences:
1. After the last GuardGroup in the Alternative has its enable() method called, the global lock is not released. It remains claimed until just before the process begins waiting on the altmonitor. This eliminates the possibility that another process may attempt to
steal it in the interim; since this process is no longer in a position to initiate synchronisation attempts, it must always be in a position where it can be stolen.
2. Prior to waiting on the altmonitor, the process' BarrierFace indicates that the altmonitor is its local lock (for the purposes of waking the process) and that it is currently not attempting to synchronise on any event.
3. While the process is waiting, it is not possible for aborted synchronisation attempts to wake it. Only a successful synchronisation attempt or a non-barrier Guard becoming ready will wake the process.
4. When waking up, the process must immediately claim the global lock in order to check whether or not a barrier synchronisation has occurred. If it has, then the process's BarrierFace sets its 'waking' flag to true. If it has not, then the possibility remains open for any given barrier to be selected until such time as its GuardGroup's disable() method is called.
These changes modify the behaviour associated with waiting for barrier synchronisation to allow for the possibility of non-barrier guards being selected, while eliminating risks of inconsistency and/or deadlock.
7. Testing
This section outlines some of the tests run to build confidence in the deadlock freedom of AltableBarriers, as well as to compare AltableBarriers with the previous AltingBarrier algorithm. The source code for all of these tests is available in a branch of the main JCSP Subversion repository [15]. All of these tests are part of the org.jcsp.demos.altableBarriers package. For brevity, the pertinent sections of all of the test programs are rendered as occam-π pseudo-code. For the purposes of assessing performance, it should be noted that at the time of writing, the source code for several AltableBarrier related classes contains a significant quantity of debugging statements sent to the standard output. Also, no attempts have yet been made to optimise any of the source code.
7.1. Stress Testing
Since a formal proof of deadlock freedom has not been attempted, the VisualDemo class exists as a means of stress testing as much of the AltableBarrier's functionality as possible. It is designed to test the responsiveness of processes to infrequently triggered high priority events, compatibility with existing channel input guards, as well as the ability to permit the arbitrary selection of nested low priority events. The process network (Figure 4) centres around processes of the following type, connected in a ring via AltableBarriers labelled 'left' and 'right':

PROC node (BARRIER pause, left, right, CHAN SIGNAL mid)
  WHILE TRUE
    PRI ALT
      SYNC pause
        SYNC pause
      mid ? SIGNAL
        SKIP
      ALT
        SYNC left
          SKIP
        SYNC right
          SKIP
:
Figure 4. Process diagram showing the way in which node processes are connected in the VisualDemo class
The 'pause' barrier is enrolled on by all 'node' processes and by one further process which defaults to UNPREPARED. In an infinite loop it waits for 5 seconds before offering only the 'pause' barrier. As such, every 5 seconds all node processes synchronise on the 'pause' barrier and then wait a further 5 seconds before being unpaused. Each process is also connected via its 'mid' channel to another process (one each per node). This process, in an infinite loop, waits for a random timeout between 0 and 10 seconds then sends a signal to its corresponding node process via its 'mid' channel. Thus, when not synchronising on the 'pause' barrier, a node process may synchronise on an ordinary channel communication at random but relatively regular intervals. When not synchronising on the 'pause' or 'mid' events, node processes synchronise with either of their neighbouring node processes. The selection is arbitrary; however, where one of the node's neighbours has synchronised on its mid channel, the node process selects the other neighbour (so there is no excessive waiting for the other process to synchronise on its event). As the name suggests, the VisualDemo class offers a GUI interface which shows these events happening graphically in real time.
Although no timing analysis of this system has been attempted, at high numbers of processes (~100) the 'pause' barrier can take a palpably long time to complete a synchronisation (>1 second after the event first becomes available). This test can be left to run over several days and has yet to deadlock. While this does not eliminate the possibility of deadlock, it is the most complete test of the capabilities of AltableBarriers devised at the time of writing. As such the algorithm is considered to be provisionally deadlock free.
7.2. Comparison with Oracle
To compare the relative performance of AltingBarriers with AltableBarriers, a process network consisting of node processes connected in a ring is constructed. The number of node processes in the network is determined by a 'PROCESSES' field. Each node process is connected to a number of processes ahead of it by barriers; the number of nodes it is connected to is determined by the 'OVERLAP' field. Therefore each node is enrolled on 'OVERLAP'
number of barriers; this connects it with ('OVERLAP'-1) processes ahead of it in the process ring. The pseudo-code for each node is as follows:

PROC node (VAL INT id, []BARRIER bars)
  INITIAL INT count IS 0:
  INT start.time, end.time:
  TIMER tim:
  SEQ
    tim ? start.time
    WHILE TRUE
      SEQ
        ALT i = 0 FOR OVERLAP
          SYNC bars[i]
            count := count + 1
        IF
          ((count > ITERATIONS) AND (id = 0))
            SEQ
              tim ? end.time
              out.write ((end.time - start.time), out!)
              KILL  -- terminate all procs, test proc has finished
          TRUE
            SKIP
:
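To make the wiring concrete, the following Java sketch shows one way the barrier ring just described could be constructed: node i is enrolled on OVERLAP consecutive barriers starting at its own index, modulo the ring size. This is an assumption about the benchmark's layout for illustration, not a transcription of its source.

// Illustrative construction of the benchmark ring (assumed layout):
int PROCESSES = 50, OVERLAP = 2;
AltableBarrierBase[] bases = new AltableBarrierBase[PROCESSES];
for (int i = 0; i < PROCESSES; i++) {
    bases[i] = new AltableBarrierBase();
}
// node i holds front-ends for barriers i, i+1, ..., i+OVERLAP-1 (mod PROCESSES)
AltableBarrier[][] enrolled = new AltableBarrier[PROCESSES][OVERLAP];
for (int i = 0; i < PROCESSES; i++) {
    for (int n = 0; n < OVERLAP; n++) {
        enrolled[i][n] = new AltableBarrier(bases[(i + n) % PROCESSES]);
    }
}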
When the process network is started each node makes an arbitrary selection over the barriers that it is enrolled on. It then increments a counter for every iteration of this choice. Once the first node in the network (the node with an ID of 0) has reached a fixed number of iterations the entire network is terminated. The amount of time elapsed between the process network starting and it terminating can be used to compare the performance when the barriers are implemented as AltingBarriers versus when they are implemented as AltableBarriers.
The first set of results has a fixed ring of 50 processes, completing 100 iterations. The number of processes to which each node was connected was varied to highlight its effect on the speed at which these external choices are resolved. A network using AltingBarriers is tested, as is one using AltableBarriers where all processes default to PREPARED; finally, a network using AltableBarriers where all processes default to UNPREPARED is also used.

Table 1. Time (ms) for 50 processes to complete 100 iterations

Overlap   AltingBarrier   PREPARED   UNPREPARED
2         250             12867      19939
3         294             13622      33652
4         303             14093      57939
It is immediately apparent that the existing JCSP AltingBarrier algorithm is approximately two orders of magnitude faster than both versions of the AltableBarrier algorithm. The degree to which this difference is due to inherent algorithm complexity versus debugging statements, spawning of extra processes and a lack of optimisation is unclear. A detailed analysis of the effects of these factors is beyond the scope of this paper. Both the AltingBarrier and 'PREPARED' networks show modest increases in their completion times as the set of barriers evaluated increases, whereas the 'UNPREPARED' network shows a more dramatic increase. This discrepancy may be due to the need of the 'UNPREPARED' nodes to examine (and initially reject as unready) all barriers that they encounter until all enrolled processes are in a position to synchronise. Conversely, the 'PREPARED' nodes will select a barrier to attempt a synchronisation with immediately.
The next experiment uses the same AltingBarrier, 'PREPARED' and 'UNPREPARED' set-up as the previous one. However, the number of barriers each node is enrolled on is limited to two; the number of processes in the ring is instead varied to examine its effect on performance. As before, 100 iterations are required to terminate the process network.

Table 2. Time (ms) to complete 100 iterations for processes overlapped by two

Num processes   AltingBarrier   PREPARED   UNPREPARED
25              70              5818       13218
50              111             11066      28545
75              330             17957      34516
100             638             24432      44308

Here,
the AltingBarrier network shows a steeper (possibly quadratic) relationship between the number of processes and completion time. The two AltableBarrier implementations show a steadier (possibly linear) relation to the number of processes. As before, the 'PREPARED' network outperforms the 'UNPREPARED' one. In both experiments the older AltingBarrier algorithm is significantly faster than networks using AltableBarriers. In both experiments nodes which defaulted to being 'PREPARED' to synchronise on their barriers outperformed those which were 'UNPREPARED'.
7.3. Priority Conflict Resolution
The pre-existing JCSP AltingBarrier class lacked any mechanism for expressing priority over events. By adding such mechanisms the AltableBarrier class makes it possible for the unwary programmer to introduce priority conflicts. Since the priority of events in an external choice is determined locally, it is possible that these priorities can be defined in such a way as to conflict with each other. To test the behaviour of AltableBarriers under these conditions and to ensure that such code results in an arbitrary choice, the ConflictTest class creates a network of processes like the following:

PROC P1 (BARRIER a, b)
  WHILE TRUE
    PRI ALT
      SYNC a
        SKIP
      SYNC b
        SKIP
:

PROC P2 (BARRIER a, b)
  WHILE TRUE
    PRI ALT
      SYNC b
        SKIP
      SYNC a
        SKIP
:
Both P1 and P2 are enrolled on barriers 'a' and 'b'. P1 processes prefer to synchronise on 'a' over 'b', while the opposite is true of P2. In both cases all processes are considered to be PREPARED to synchronise on both barriers. So long as the process network as a whole contains at least one P1 and one P2 process, the behaviour of the program is the same. All P1 processes immediately begin to wait pre-emptively for event 'a' to complete while all P2 processes wait for 'b'. Both sets of processes deadlock until one of the barrier
synchronisation attempts times out; as an example, we will presume that 'a' times out first. As such, all processes not waiting for it to complete (all P2 processes) have their status with regard to 'a' set to UNPREPARED; all P1 processes then abort their synchronisation attempt on 'a'. Since all P1 processes have abandoned waiting for 'a', they are now in a position to consider 'b'. Either all P1 processes will synchronise on 'b' before its synchronisation attempt times out, or 'b' will time out and there will be a brief interval during which both 'a' and 'b' will be considered to be not PROBABLY READY. During this period a number of processes will reassert their readiness to synchronise on both barriers and begin waiting on the altmonitor until either event is ready. Since it will always be the case that there will be at least one process not waiting on the altmonitor, there will always be at least one process capable of arbitrating the conflict.
Any further iterations of this choice are likely to be resolved in the same way without the initial delay. Again assuming that 'b' was selected over 'a', all P2 processes are still considered UNPREPARED to synchronise on 'a'; since they have not encountered a guard containing 'a', they have no opportunity to reset their status flags to their default of PREPARED. This means that all P2 processes begin waiting on 'b' as usual. All P1 processes, seeing that 'a' is not PROBABLY READY, skip 'a' and immediately synchronise on 'b'. This means that although priority conflicts can be (and are) resolved as arbitrary selections, there can be significant performance delays associated with making that choice. It is therefore recommended that nested priority be used to avoid delays caused by priority conflicts. If both P1 and P2 consider 'a' and 'b' to be of the same priority then there are no delays in making a selection.
8. Discussion
Given the testing (Section 7) performed so far it is possible to provisionally conclude that process networks using AltableBarriers are robust and that they are not vulnerable to priority conflicts. The comparison tests with the existing AltingBarrier algorithm reveal that AltableBarriers should be avoided for performance reasons where the ability to prioritise barrier events is not required. If priority is required and if performance is not an issue, AltableBarriers are useful and offer trivial or manageable delays for modest process networks.
8.1. Future Work
Existing tests have already established that AltableBarriers can be used to atomically pause or terminate process networks and that (using nested priority) this need not affect the existing priority framework or introduce priority conflict. This section details as yet untested patterns for ensuring fairness, avoiding starvation and (possibly) affecting the probability of events being selected.
8.1.1. Fair Alting
Where nested priority is used, selection of the barriers within that block is considered to be arbitrary; therefore no guarantees are made about the fairness of that selection in general. Similarly, fair alting cannot be achieved in the same way that it is achieved using channel guards (imposing a priority ordering on all events with the last selected guard last). This is because imposing a priority ordering on all barriers, where those barriers have overlapping sets of enrolled processes, leads to priority conflicts. To get around this problem, code of the following type could be used to ensure a degree of fairness:
PROC fair.alter ([]BARRIER bars)
  BARRIER last.selected:
  WHILE TRUE
    PRI ALT
      ALT i = 0 FOR SIZE bars
        (NOT (bars[i] = last.selected)) && SYNC bars[i]
          last.selected := bars[i]
      SYNC last.selected
        SKIP
:
Care must be taken to choose an initially consistent ‘last.selected’ for all processes. It is also important to note that preconditions are not currently compatible with AltableBarriers and that the ‘last.selected’ barrier would need to be fully removed from the nested priority block. However, this system ensures that all processes consider the last selected barrier event to be of a lower priority than its peers without imposing a conflict-prone priority structure on the rest of the barriers. Further, because the selection of the low priority barrier is made on the basis of the last selected barrier, this change in the priority ordering is guaranteed to be consistent for all processes enrolled on that barrier and therefore does not cause a priority conflict. While this may prevent any one event dominating all others, it may not guarantee complete fairness. The possibility exists that, in sets of overlapping events larger than two, two events may consistently alternate as the last selected barrier.
8.1.2. Partial Priority
As well as allowing for a general priority structure while avoiding priority conflicts, nested priority may be useful in affecting the probability of one or more events being selected. This proposed scheme will be known as partial priority from this point onwards.
Consider the simplified model of the SITE processes in the TUNA blood clotting model [11] in Section 3.2, where no priority ordering is imposed on any of the events. In the case of the old JCSP AltingBarriers this meant that the ‘pass’ events were always selected over the ‘tock’ event. Using AltableBarriers also allows for arbitrary selection of events; in practice (and in the absence of preferences by other processes) the event initially selected by any process is the first one listed in a GuardGroup. As such, if the ‘pass’ events occur before the ‘tock’ event in a GuardGroup, the ‘pass’ events are naturally favoured over the ‘tock’ event. Now consider what happens if one process, selected at random, prioritises ‘tock’ over the ‘pass’ barriers:

PROC site ([]BARRIER pass, BARRIER tock)
  WHILE TRUE
    PRI ALT
      SYNC tock
        SKIP
      ALT i = 0 FOR SIZE pass
        SYNC pass[i]
          SKIP
:
Since the behaviour of processes with regards to priority is determined locally and since process scheduling is unpredictable in JCSP, it is reasonable to assume that a number of unprioritised SITE processes will be scheduled before the prioritised one. These processes will initially select ‘pass’ events to synchronise on. Eventually some of these ‘pass’ events will complete. However once the prioritised SITE process is scheduled it immediately selects the ‘tock’ event and steals any other processes waiting for other events. Thus, an unpredictable
(possibly random) number of processes will complete ‘pass’ events before all processes are made to synchronise on the ‘tock’ event. Using partial priority in this way may be another way in which starvation can be avoided in otherwise priority-free external choices. It may or may not be the case that using this approach will have a predictable effect on the probability of certain events being selected.
8.1.3. Modelling in CSP
While it is possible to provisionally assert that the AltableBarrier algorithm is deadlock free, given the stress tests run on it, it is not possible to guarantee this until the algorithm has been modelled in CSP. At the time of writing no such CSP models have been attempted. Despite this (and the relative complexity of the algorithm), modelling the AltableBarrier algorithm in CSP should not be considered intractable. Two different approaches to modelling the algorithm may be attempted. The first is to model the algorithm in detail; this would almost certainly require modelling individual fields as separate processes. The second is to strip the algorithm down to its barest essentials (more or less a model of the 3 phase commit protocol [10]) and identify the circumstances where such a simple system could deadlock. The rest of the verification process would then consist of proving that such circumstances are impossible (this may or may not be done using CSP).
9. Conclusion
The AltableBarriers algorithm presented in this paper, although noticeably slower than the existing JCSP AltingBarrier class, can be practically applied to the prioritisation of multiway synchronisation. This allows large, infrequently triggered barrier events with large sets of enrolled processes to be consistently selected over smaller barrier events as well as channel communications, without any major changes to existing JCSP classes. As such, AltableBarriers are applicable to such problems as graceful termination as well as atomically pausing entire process networks.
By allowing multiway synchronisations to be prioritised, it is no longer the case that events with small sets of enrolled processes are automatically favoured over events with large sets. Further, the ability to create groups of events with no internal priority within larger priority structures allows the programmer to avoid priority conflicts. While as yet untested, there also appears to be no reason why possible problems of starvation cannot be avoided. Partial priority as well as fair alting provide mechanisms for ensuring a degree of fairness in otherwise priority-free arbitrary selections.
Acknowledgements
The comments of the anonymous reviewers on this paper are gratefully appreciated. Credit is also due to Peter Welch and Fred Barnes (and to CPA's contributors in general), whose collective musings on the subject have helped to shape this research. This work is part of the CoSMoS project, funded by EPSRC grant EP/E053505/1.
References
[1] C.A.R. Hoare. Communicating Sequential Processes. Communications of the ACM, 21(8):666–677, August 1978.
[2] A.W. Roscoe. The Theory and Practice of Concurrency. Prentice Hall, 1997. ISBN: 0-13-674409-5.
[3] A.A. McEwan. Concurrent Program Development. D.Phil. thesis, University of Oxford, 2006.
[4] P.H. Welch, N.C.C. Brown, J. Moores, K. Chalmers, and B. Sputh. Alting Barriers: Synchronisation with Choice in Java using CSP. Concurrency and Computation: Practice and Experience, 22:1049–1062, 2010.
[5] P.H. Welch, N.C.C. Brown, J. Moores, K. Chalmers, and B. Sputh. Integrating and Extending JCSP. In Alistair A. McEwan, Steve Schneider, Wilson Ifill, and Peter Welch, editors, Communicating Process Architectures 2007, volume 65 of Concurrent Systems Engineering Series, pages 349–370, Amsterdam, The Netherlands, July 2007. IOS Press. ISBN: 978-1-58603-767-3.
[6] P.H. Welch and P.D. Austin. The JCSP (CSP for Java) Home Page, 1999. Accessed 1st May 2011: http://www.cs.kent.ac.uk/projects/ofa/jcsp/.
[7] P.H. Welch and F.R.M. Barnes. Communicating Mobile Processes: Introducing occam-pi. In A.E. Abdallah, C.B. Jones, and J.W. Sanders, editors, 25 Years of CSP, volume 3525 of Lecture Notes in Computer Science, pages 175–210. Springer Verlag, April 2005.
[8] P.H. Welch and F.R.M. Barnes. Mobile Barriers for occam-pi: Semantics, Implementation and Application. In J.F. Broenink, H.W. Roebbers, J.P.E. Sunter, P.H. Welch, and D.C. Wood, editors, Communicating Process Architectures 2005, volume 63 of Concurrent Systems Engineering Series, pages 289–316, Amsterdam, The Netherlands, September 2005. IOS Press. ISBN: 1-58603-561-4.
[9] C. Mohan and B. Lindsay. Efficient Commit Protocols for the Tree of Processes Model of Distributed Transactions. ACM SIGOPS Operating Systems Review, 19(2):40–52, 1985.
[10] D. Skeen and M. Stonebraker. A Formal Model of Crash Recovery in a Distributed System. IEEE Transactions on Software Engineering, SE-9:219–228, 1983.
[11] P.H. Welch, F.R.M. Barnes, and F.A.C. Polack. Communicating Complex Systems. In M.G. Hinchey, editor, Proceedings of the 11th IEEE International Conference on Engineering of Complex Computer Systems (ICECCS-2006), pages 107–117, Stanford, California, August 2006. IEEE. ISBN: 0-7695-2530-X.
[12] C.J. Fidge. A Formal Definition of Priority in CSP. ACM Transactions on Programming Languages and Systems, 15(4):681–705, 1993.
[13] G. Lowe. Extending CSP with Tests for Availability. Communicating Process Architectures, pages 325–347, 2009.
[14] D.N. Warren. PCOMS source code. Accessed 1st May 2011: http://projects.cs.kent.ac.uk/projects/jcsp/svn/jcsp/branches/dnw3_altbar/src/org/jcsp/lang/.
[15] D.N. Warren. PCOMS test code. Accessed 1st May 2011: http://projects.cs.kent.ac.uk/projects/jcsp/svn/jcsp/branches/dnw3_altbar/src/org/jcsp/demos/altableBarriers/.
An Analysis of Programmer Productivity versus Performance for High Level Data Parallel Programming

Alex COLE a, Alistair McEWAN a and Satnam SINGH b
a Embedded Systems Lab, University of Leicester
b Microsoft Research, Cambridge

Abstract. Data parallel programming provides an accessible model for exploiting the power of parallel computing elements without resorting to the explicit use of low level programming techniques based on locks, threads and monitors. The emergence of Graphics Processing Units (GPUs) with hundreds or thousands of processing cores has made data parallel computing available to a wider class of programmers. GPUs can be used not only for accelerating the processing of computer graphics but also for general purpose data-parallel programming. Low level data-parallel programming languages based on the Compute Unified Device Architecture (CUDA) provide an approach for developing programs for GPUs, but these languages require explicit creation and coordination of threads and careful data layout and movement. This has created a demand for higher level programming languages and libraries which raise the abstraction level of data-parallel programming and increase programmer productivity. The Accelerator system was developed by Microsoft for writing data parallel code in a high level manner which can execute on GPUs, multicore processors using SSE3 vector instructions and FPGA chips. This paper compares the performance and development effort of the high level Accelerator system against lower level systems which are more difficult to use but may yield better results. Specifically, we compare against the NVIDIA CUDA compiler and sequential C++ code, considering both the level of abstraction in the implementation code and the execution models. We compare the performance of these systems using several case studies. For some classes of problems, Accelerator has a performance comparable to CUDA, but for others its performance is significantly reduced; however, in all cases it provides a model which is easier to use and enables greater programmer productivity.

Keywords. GPGPU, Accelerator, CUDA, comparisons.
Introduction

The emergence of low cost and high performance GPUs has made data-parallel computing widely accessible. The hundreds or thousands of processing cores on GPUs can be used not only for rendering images but may also be subverted for general purpose data-parallel computations. To relieve the programmer from thinking in terms of graphics processing architectural features (e.g. textures, pixel shaders and vertex shaders), the manufacturers of GPUs have developed C-like programming languages that raise the abstraction level for GPU programming beyond pixel and vertex shaders to the level of data-parallel operations over arrays. However, these languages still require the programmer to think in fairly low level terms, e.g. to explicitly manage the creation and synchronization of threads as well as data layout and movement. The programmer also has to write code for the host processor to manage the movement of data to and from the graphics card and to correctly initiate a sequence of
operations.
For some programmers it may be important to control every aspect of the computation in order to achieve maximum performance, and it is justifiable to expend significant effort (weeks or months) to devise an extremely efficient solution. However, often one wishes to trade programmer productivity against performance – i.e. implement a data-parallel computation in a few minutes or hours and achieve around 80% of the performance that might be available from a higher cost solution that takes more effort to develop.
Originally, writing data-parallel programs for execution on GPUs required knowledge of graphics cards, graphics APIs and shaders to set up and pass the data and code correctly. Over time, libraries and languages were developed to abstract these details away. The Accelerator library from Microsoft [1] is one such library. Accelerator provides a high level interface for writing data parallel code using parallel array objects and operations on those arrays. This interface hides target details, with the array objects and operations representing generic data so that Accelerator can be retargeted with little effort. Note that a "target" is the device on which code is run, and Accelerator supports more than just a GPU. A Just In Time compiler (JIT) is used to convert high level descriptions to target-specific instructions at run time.
As the popularity of General Purpose GPU computing (GPGPU) increased, GPU manufacturers started to develop systems with dedicated hardware and associated GPGPU software, such as NVIDIA's CUDA [2]. Older methods encoded data and code in graphics terms and ran through the full graphics pipeline, including parts which were only relevant to graphics. These newer systems provide direct access to the major processing elements of a Graphics Processing Unit (GPU) and a native programming model. Although these systems provide more direct access, they have returned to requiring lower level knowledge of the (non-graphics specific) hardware. On the other hand, more abstract systems still require no detailed knowledge. CUDA code is often tightly matched to the device on which it is to run for optimisation purposes, whereas Accelerator code can be retargeted to different classes of hardware very quickly. The question is how these systems compare, and how they compare to no data parallel code at all (i.e. sequential C++ code). Are any penalties incurred by using the JIT in Accelerator, and how do development efforts compare for similar performances? Do the gains outweigh any penalties or is the abstraction simply too high?
This paper presents an overview of the history of data parallelism, with a focus on GPGPU, in Section 1. It then examines Accelerator and compares it with CUDA on a GPU and sequential C++ code on a multi-core Central Processing Unit (CPU). These comparisons are performed for a number of case studies (Section 2), namely convolution (Section 3) and electrostatic charge map estimation (Section 4). Each case study includes an introduction to the algorithm, results and a conclusion; the paper finishes with a discussion on development (Section 5) and conclusions (Section 6). The following contributions are made:
• A comparison of the programming models used in CUDA, Accelerator and C++ code executed on a regular processor.
• A demonstration that very good speed-ups can be obtained using Accelerator from descriptions that are high level and easier to write than their CUDA counterparts.
• A demonstration that the Accelerator model provides a higher level technique for data-parallel computing that permits good performance gains with a greater degree of programmer productivity than CUDA.

1. Background

1.1. Data Parallelism

Data-parallel programming is a model of computation where the same operation is performed on every element of some data-structure. The operation to be performed on each element is
typically sequential, although for nested data-parallel systems it may itself be a data-parallel operation. We limit ourselves to sequential operations. Furthermore, we limit ourselves to operations that are independent, i.e. it does not matter in which order we apply the operation over the data-structure. This allows us to exploit data-parallel hardware by performing several data-parallel operations simultaneously. Key distinguishing features of such data-parallel programs are that they are deterministic (i.e. every time you run them you get the same answer); they do not require the programmer to explicitly write with threads, locks and synchronization (this is done automatically by the compiler and run-time); and the programmer's model of the system in essence needs only a single 'program counter', i.e. this model facilitates debugging. This technique allows us to perform data-parallel operations over large data sets quickly but requires special hardware to exploit the parallel description.
There are multiple types of data parallel systems, including Single Instruction Multiple Data (SIMD) and Single Program Multiple Data (SPMD). The former is a single instance of a program operating on multiple sets of data at once; an example of this is the Streaming SIMD Extensions (SSE) instruction set on modern x86 processors. The latter is multiple instances of the same program running in parallel, with each instance operating on a subset of the data. The former performs multiple operations in lockstep, the latter may not. One important aspect of data parallel code is the independence of the data – performing the required calculation on one section of the data before another section should give the same results as performing the calculations in a different order.

1.2. Graphics Processing Units

To render an image on a screen may require the processing of millions of pixels at a sufficiently high rate to give a Frames Per Second (FPS) count which is smooth to the eye. For example, a 60Hz 1080p HDTV displays over 124 million pixels per second. This is a massively compute intensive task which involves converting 3D scene descriptions to 2D screen projections. A GPU performs this task in parallel by utilising a highly specialised processing pipeline which can apply custom rendering effects, in the form of shaders, to large numbers of pixels. The GPU can be part of a dedicated graphics card or integrated on to the main board of a PC, laptop or now even phones.
The processing pipeline of the GPU, and the memory in dedicated systems, are well suited to high data throughput, with emphasis on the number of parallel operations rather than the time for one operation. The memory system may take hundreds of clock cycles to complete a read request but then delivers a batch of data at once, and it can have multiple requests in various stages of completion at once.
Older GPUs contained multiple user programmable stages within the pipeline for custom processing (called "shaders"): one for calculating custom lighting and colouring effects across a complete primitive (triangle) and another for calculating per-pixel effects. The operations within these stages are relatively simple when compared to the capabilities of many modern CPUs; however, one GPU will contain hundreds or thousands of these simple cores (the NVIDIA GeForce GTX480 contains 480 [3], the ATI Radeon HD 5970 contains 3200 [4]). Each core is assigned a fraction of the total workload, leading to the desired high throughput.
More modern GPUs still contain this configurable ability, but the range of operations available to each stage has increased, leading to a convergence allowing the stages to be combined into a "unified shader".

1.3. General Purpose Graphics Processing Unit Computing

GPGPU is the use of a GPU for calculations other than just graphics, i.e. general purpose calculations [5,6]. This field largely took off with the advent of configurable graphics cards, though some work had been done as far back as 1978 [7] and more recently using fixed function
Figure 1. Expression Graph
pipelines (those without shaders) [8,9,10]. These configurable cards allowed a great deal of transformations to be applied to data, though only when presented as graphics data and graphics shader operations. This involved converting data to textures and operations to shaders, then passing both to a standard graphics pipeline to render an output which was saved as the result. Many systems, such as Sh [11], Brook [12] and Scout [13], were developed to abstract the graphics APIs used for the processing. These systems meant that users were not required to learn about the underlying hardware, and this is where Microsoft's Accelerator library [1] is targeted.
More modern GPGPU developments use a unified shader architecture. As the capabilities of the various pipeline shader stages increased, their abilities largely converged, allowing the physical hardware for the stages to be combined into a single block of cores. This allowed for better load balancing and design optimisation, but also led to the development of direct access systems which better utilised this shader for GPGPU. This model allows a programmer to pass data and code to the GPU shaders more directly, removing the reliance on the graphics pipeline and graphics concepts. Direct access systems include CUDA [2] by NVIDIA, Close To Metal (CTM)/Stream by ATI (discontinued), DirectCompute by Microsoft and OpenCL by the Khronos group. These systems are still very low level: to get the best performance requires writing code to take into account low level design issues such as memory location and thread layout.

1.4. Accelerator

Accelerator is based around a collection of data-parallel arrays and data-parallel operations which are used in a direct and intuitive style to express data-parallel programs. The Accelerator system performs a significant number of powerful optimizations to produce efficient code which is quickly JIT-ed into GPU code via DirectX or into SIMD SSE4 code using a customized JIT-er. The data-parallel computation to be instantiated on a target like an FPGA is represented as an expression tree that contains nodes for operations and memory transforms (e.g. see Figure 1). The Accelerator system supports several types of data-parallel arrays (floating point, integer, boolean and multi-dimensional arrays) and a rich collection of data-parallel operations. These include element-wise operations, reduction operations and rank changing operations. A very important aspect of Accelerator's design is the provision of
Section(b_i, c_i, s_i, b_j, c_j, s_j):   R_{i,j} = A_{b_i + s_i×i, b_j + s_j×j}
Shift(m, n):                             R_{i,j} = A_{i−m, j−n}
Rotate(m, n):                            R_{i,j} = A_{(i−m) mod M, (j−n) mod N}
Replicate(m, n):                         R_{i,j} = A_{i mod m, j mod n}
Expand(b_i, a_i, b_j, a_j):              R_{i,j} = A_{(i−b_i) mod M, (j−b_j) mod N}
Pad(m, a_i, m, a_j, c):                  R_{i,j} = A_{i−m, j−n} if in bounds, c otherwise
Transpose(1,0):                          R_{i,j} = A_{j,i}

Figure 2. Examples of transform operations of size M × N arrays
operators that specify memory access patterns, and these are exploited by each target to help produce efficient vector code, GPU code or FPGA circuits. Examples of memory transform operations are shown in Figure 2. Even on a low end graphics card, it is possible to get impressive results for a 2D convolver. All 24 cores of a 64-bit Windows 7 workstation are effectively exercised by the x64 multicore target, which exploits SIMD processor instructions and multithreading. Stencil-style computations [14] are examples of problems that map well to Accelerator.
As a concrete example, we show a very simple F# Accelerator program that performs the point-wise addition of two arrays using a GPU (Listing 1). When executed, this program uses the GPU to compute a result array containing the elements 7; 9; 11; 13; 15. To perform the same computation on a multicore processor system using vector instructions, we write the same program but specify a different target (Listing 2).

open System
open Microsoft.ParallelArrays

[<EntryPoint>]
let main (args) =
    let x = new FloatParallelArray (Array.map float32 [|1; 2; 3; 4; 5|])
    let y = new FloatParallelArray (Array.map float32 [|6; 7; 8; 9; 10|])
    let z = x + y
    use dx9Target = new DX9Target ()
    let zv = dx9Target.ToArray1D (z)
    printf "%A\n" zv
    0

Listing 1. F# Accelerator code targeting GPU.
use multicoreTarget = new X64MulticoreTarget ()
let zv = multicoreTarget.ToArray1D (z)

Listing 2. F# Accelerator code targeting the X64 CPU (only the two changed lines are shown).
The FPGA target does not work in an on-line mode and does not return a result. Instead it generates VHDL circuits which need to be implemented using FPGA vendor tools. A key point here is that we can take the same computation and instantiate it on three wildly different computational devices. Accelerator running on the GPU currently uses DirectX 9, which introduces a number of limitations to the target. First is the lack of data-type support, with only Float32 data-types that are not quite compliant with IEEE 754 (the floating point number standard). Secondly the code is quite limited in performance by both shader length limits, which restrict the amount of
code which can be run; and memory limitations. In DirectX 9 there is no local shared memory as in CUDA, and there are only limited register files and limited numbers of textures in which to encode input data.

1.5. CUDA

NVIDIA CUDA provides direct access to an NVIDIA GPU's many hundreds or thousands of parallel cores (termed the "streaming multiprocessor" in the context of GPGPU), rather than requiring code to be run through the graphics pipeline. In CUDA one writes programs as functions (called kernels) which operate on a single element, equivalent to the code in the inner loop of sequential array code. Threads are then spawned, one for every element, each running a single kernel instance. When a program is executed through CUDA on the GPU, the programmer first declares how many threads to spawn and how they are grouped. This allows the system to execute more threads than there are cores available by splitting the work up into "blocks". The low level nature of CUDA allows code to be very highly tailored to the device it is running on, leading to optimisations such as using thread local memory (which is faster than global memory), or configuring exactly how much work each thread does.
The CUDA kernel code to add two arrays, equivalent to the code in Section 1.4, is shown in Listing 3. This code adds a single pair of elements together after first determining the current thread ID for use as an index.

__global__ void DoAddition (float aCudaA[], float aCudaB[], float aCudaTot[], int iSize)
{
    const int index = (blockIdx.x * blockDim.x) + threadIdx.x;
    if (index < iSize) {
        aCudaTot[index] = aCudaA[index] + aCudaB[index];
    }
}
    // Copy the input data over.
    cudaMemcpy (cudaArrayOne, a1, size, cudaMemcpyHostToDevice);
    cudaMemcpy (cudaArrayTwo, a2, size, cudaMemcpyHostToDevice);

    // Call the GPU code from the host.
    dim3 dimBlocks (1), dimThreads (size);
    DoAddition <<< dimBlocks, dimThreads >>> (cudaArrayOne, cudaArrayTwo, cudaArrayOut, size);

    // Save the result.
    cudaMemcpy (ao, cudaArrayOut, size, cudaMemcpyDeviceToHost);
}

Listing 3. CUDA addition code
CUDA code has three parts: host code, which runs on the CPU and is regular C/C++; link code, which invokes a kernel and has custom, C-based syntax; and the kernel code itself, which also has custom syntax. The example code shows the kernel first ("DoAddition"), then the host code ("main") and finally the link code ("Link"). In this instance the link code initialises the CUDA arrays and spawns many instances of the kernel function, each of which calculates a single output result based on its unique thread and block ID. The "DoAddition<<<...>>>" invocation is custom CUDA syntax and requires a special compiler; however, everything else in that function is regular C code and could equally be placed in the device code.

2. Case Studies

The CUDA programming model is designed to allow maximum flexibility in code, requiring in-depth target knowledge but allowing for lots of optimizations. Conversely, the Accelerator programming model provides easy to use, high-level access to data but includes a JIT, which is an overhead not present in CUDA. We believe that despite this Accelerator gives reasonable speed for many classes of problems; we have instrumented the overhead of the JIT and found it to be very small (less than 3%) for realistic workloads. We also believe that the development effort involved is lower in Accelerator and so justifies some reduced performance. Indeed, it may be possible for the JIT-based scheme to produce faster code, because it can exploit extra information about the execution environment which is only available at run-time.
The motivation behind this work was to find out just how much of an improvement or overhead Accelerator has compared to other data parallel and sequential programming models. Two case studies were used to test Accelerator's performance against other systems: convolution and electrostatic charge map generation. Each algorithm was run in CUDA on a GPU, as C++ code on a CPU, and in Accelerator on both a GPU and a CPU. The Accelerator tests were run on both platforms using the same code for two different implementations. The CUDA tests were run for two different implementations and the C++ tests for three. These studies were run in a common framework to share as much code as possible and reduce the areas which could affect timings. The experiments were run on an AMD 1055T CPU at 2.8GHz (the Phenom II X6) with 8GB of DDR3 RAM and an NVIDIA GTX 460 GPU (the Palit Sonic) with 2GB of GDDR5 RAM. Every individual test was run ten times for every target, with the results displaying the totals for all ten runs. Memory and initialisations were all reset between every run.
The case studies are aimed at determining the speed ups available for a reasonable amount of effort. While it is technically possible to write code by hand that will match any-
thing generated by a compiler, the effort involved is unreasonable. JIT compilers can use advanced knowledge of the data to be processed, through techniques such as branch prediction and memory layout optimisation; writing generic code by hand to exploit such dynamic information is very difficult, although some optimisations can still be applied. The optimisations applied to the CUDA code were limited to those found in the Programming Massively Parallel Processors book [15]. This was assumed to be a reasonable estimation of the ability of a non-specialist.

3. Convolution Case Study

3.1. Introduction

Convolution is the combination of two signals to form a third signal. In image processing a blur function is a weighted average of a pixel and its surrounding pixels. This is the convolution of two 2D signals – an input image and a blur filter which contains the weightings. In a continuous setting both signals are theoretically infinite; in a discrete setting such as image processing both signals are clipped.
Figure 3 shows a 1D convolution example. The filter (a) is applied to the current element "7" of the input (m) and its surrounding elements (highlighted). These are multiplied to give array ma, and all the elements are summed together to give the current element in the output array (n). This is shown generically in Equation (1), where m_t and n_t are the current input and output points respectively, N is the filter radius and a is the filter.
Figure 3. Convolution of a radius 1 1D filter and an 8 element input array with one operation highlighted.
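As a purely illustrative worked instance of this description (the filter and neighbourhood values below are invented for this example and are not taken from Figure 3), a radius 1 filter a = (0.25, 0.5, 0.25) applied around a current element 7 with neighbours 3 and 2 gives:

n_t = 0.25 × 3 + 0.5 × 7 + 0.25 × 2 = 0.75 + 3.5 + 0.5 = 4.75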
The filter (Equation (2)) used in this study is a discretised Gaussian curve, rounded to zero beyond the stated radius (generally very small values). The "radius" of the filter represents the number of non-zero values – a 1D filter with a radius of 5 will yield 11 total non-zero values, a (square) 2D filter of radius 5 will yield 121. As it is a constant for every element in the input signal, the filter values are pre-computed prior to running any timed code. This is a part of the previously discussed common test framework and ensures that all the implementations run off the same initial data.

n_t = ∑_{k=0}^{2N+1} a_k m_{t+k−N}        (1)

a_k = exp(−(k−N)² / 2σ²) / ∑_{i=0}^{2N+1} exp(−(i−N)² / 2σ²)        (2)
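As a minimal illustration of how the filter values described by Equation (2) might be pre-computed, the sketch below assumes a normalised discrete Gaussian; the function name and the treatment of sigma are ours, not part of the paper's test framework.

#include <cmath>
#include <vector>

// Hypothetical helper: build the 2N+1 non-zero taps of a normalised,
// discretised Gaussian filter of radius N, as described for Equation (2).
std::vector<float> MakeGaussianFilter (int N, float sigma)
{
    std::vector<float> filter (2 * N + 1);
    float sum = 0.0f;
    for (int k = 0; k <= 2 * N; ++k) {
        // Unnormalised Gaussian weight, centred on the middle tap (k == N).
        filter[k] = std::exp (-((k - N) * (k - N)) / (2.0f * sigma * sigma));
        sum += filter[k];
    }
    for (int k = 0; k <= 2 * N; ++k) {
        filter[k] /= sum;   // normalise so the weights sum to one
    }
    return filter;
}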
Figure 4. Convolution case study GPU results for a radius 5 filter on grids from 2048 × 64 to 2048 × 2048 elements.
A 2D convolver has a 2D input and a 2D filter (in this case a square of width 2N + 1). A separable convolver is a special case of 2D convolver in which the result is the same as applying a 1D convolution to every row of the original input and then a second 1D convolution to every column of those results. A radius five filter on a 2D input array implemented as a separable convolver would require only 11 + 11 = 22 calculations per output, compared to 11 × 11 = 121 for basic 2D convolution.
When calculating results close to the edge of the input data the radius of the filter may extend beyond the limits of known data; in this case the nearest known value is used. This zone is called the "apron", as is the zone of input data loaded but not processed by CUDA when splitting the data up into processing chunks [16]. Accelerator does not permit explicit array indexing, so convolution is implemented as whole array operations using array shifts, where every element in the input array has the effect of one element of the filter calculated in parallel.

3.2. Results

Figures 4 and 5 show the results for a 2D separable convolver using a radius five Gaussian filter (Equation (2)) on grids of size 2048 × 64 to 2048 × 2048. Figure 4 shows the results for the various GPU targets ("Acc" is "Accelerator"). Both Accelerator and CUDA ran two different implementations. Figure 5 shows the CPU target results, with "C++" using three different implementations of varying complexity.

3.3. Discussion

For the convolver the Accelerator GPU version is only marginally slower than the CUDA version. The CUDA code used here was based on public documentation [16] which included optimisations based on loop unrolling and usage of low latency shared memory. While the
Figure 5. Convolution case study CPU results for a radius 5 filter on grids from 2048 × 64 to 2048 × 2048 elements.
speed is slower, the development efforts here are significantly different. Examples of the code required to implement a convolver in Accelerator, C++ and CUDA can be found in Appendix A. Additional examples can be found in the Accelerator user guide [17] and the CUDA convolution white paper [16]. One point to note, clearly visible on the GPU graph, is the constant overhead from the Accelerator JIT.
The performance of Accelerator on the CPU was significantly better than the original C++ sequential code ("C++ 1") and slightly better than the more advanced versions ("C++ 2" and "C++ 3"). These versions performed the apron calculations separately from the main calculations, rather than using a single piece of generic code; generic code requires branches to clamp the requested input data to within the bounds of the available data. The Accelerator code here was between two and four times faster than the C++ versions, and with significantly less development effort than "C++ 2" and "C++ 3". Both Accelerator CPU implementations display a very interesting and consistent oscillating graph which requires further investigation.
All the alternate code versions ("Acc 2", "CUDA 2", "C++ 2" and "C++ 3") rely on the fact that the filter in use (a Gaussian curve) was symmetrical, and so performed multiple filter points using common code. The only place where the alternate code gives significant speed improvements over the original is in the "C++" implementations, and the number of other optimisations applied there implies that using symmetry made little difference.

4. Charge Map Case Study

4.1. Introduction

Electrostatic charge maps are used to approximate the field of charge generated by a set of atoms in space. The space in question is split up into a 3D grid and the charge at every point
Figure 6. Grid of points showing the full calculation for charge at one point.
in that grid is calculated by dividing the charge of every atom by its distance from the point and summing all the results. The finer the grid the better the approximation, as space is continuous in reality. The basic calculation is given in Equation (3), where N is the number of atoms, C_i is the charge of atom i and dist(i, xyz) is the distance between atom i and grid point xyz. The total (G_xyz) is the final charge at that point in space. The 3D world grid is divided up into slices, with each 2D layer calculated independently. For this test only one slice was calculated but all the atoms were used, regardless of their location in 3D space. This is demonstrated in Figure 6 for one point (circled). The large circles with numbers in them are atoms with their charges, the numbers by the lines are the distances between one atom and the currently calculated point, and the sum in the corner is the overall calculation.

G_xyz = ∑_{i=1}^{N} C_i / dist(i, xyz)        (3)
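As a purely illustrative sequential sketch of Equation (3) for a single 2D slice (the structure, names and grid-spacing parameter below are assumptions made for this example, not the paper's test code):

#include <cmath>
#include <vector>

// Hypothetical representation of an atom: position in space plus its charge.
struct Atom { float x, y, z, charge; };

// Compute one 2D slice of the charge map: every grid point sums
// charge / distance over all atoms, as in Equation (3).
void ChargeMapSlice (const std::vector<Atom>& atoms, float* grid,
                     int width, int height, float spacing, float sliceZ)
{
    const int numAtoms = (int) atoms.size ();
    for (int j = 0; j != height; ++j) {
        for (int i = 0; i != width; ++i) {
            float total = 0.0f;
            for (int a = 0; a != numAtoms; ++a) {
                const float dx = atoms[a].x - (i * spacing);
                const float dy = atoms[a].y - (j * spacing);
                const float dz = atoms[a].z - sliceZ;
                total += atoms[a].charge / std::sqrt ((dx * dx) + (dy * dy) + (dz * dz));
            }
            grid[(j * width) + i] = total;
        }
    }
}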
There are two obvious methods for parallelising this algorithm. The first is to loop through the atoms sequentially and calculate the current atom's effect on every point in the grid in parallel. The second is the reverse: loop through grid points and calculate every atom's effect on each point in parallel. This latter option would require a parallel addition, such as a sum-reduce algorithm, and would also generate very long code in Accelerator, due to loops being unrolled by the compiler. The number of atoms should be small compared to the number of grid points being calculated and may be small compared to the number of GPU processing elements available. The lack of parallel addition, shorter programs and greater resource usage make the former the only realistic option. A third option, calculating every atom's effect on every grid point simultaneously, is not possible, as at present Accelerator does not provide the 3D arrays required to store all the atom offsets from all the points in the 2D grid.
DirectX 9, upon which the Accelerator GPU target is currently based, has relatively low limits on shader lengths; however, the Accelerator JIT can split programs into multiple shaders to bypass this limit. New Accelerator GPU targets are alleviating this restriction.
The original algorithm was found in the Programming Massively Parallel Processors book [15] and the CUDA code is based on the most advanced version of the code presented there. Two Accelerator implementations were produced, the second pre-computing a number of constants to simplify the distance calculations, based on the fact that the distance between two atoms is
Figure 7. Electrostatic charge map case study GPU results for 4–200 atoms with a grid size of 1024 × 1024.
constant. This fact was used with a rearrangement of the standard Pythagorean equation to get the distance to one atom based on the distance to the last atom.

4.2. Results

Figures 7 and 8 show the results for the electrostatic charge map experiment. These are for a range of atom counts placed in a constant size grid. Both graphs were generated with the same set of randomly placed input atoms for consistency. The grid was a single 1024 × 1024 slice containing just over 1,000,000 points. Results were run for 4 to 200 atoms in 4 atom steps, with results re-run for the "Accelerator 1 GPU" target between 106 and 114 atoms in 1 atom intervals (not shown). The experiments were only run to 200 atoms because the CUDA target stopped running beyond that point.
"Acc 1" is a basic Accelerator implementation performing the full distance calculation on the GPU for every point in parallel. "Acc 2" is the alternate distance calculation; the timings here include the longer expression generation phase for pre-computing constants. Similarly "C++ 1", "C++ 2", "CUDA 1" etc. show the results for different implementations run on a given target.

4.3. Discussion

Figure 8 shows the results for Accelerator and C++ running on a CPU. Here the optimised C++ versions ("C++ 2" and "C++ 3") were the fastest. Although they were very slightly faster than the basic Accelerator CPU version, far more effort was used to write them. In terms of development effort "Accelerator 1" was on a par with "C++ 1", and the benefits there are clear to see.
The CPU results for "Accelerator 2" are also interesting. Extra effort was put into this version to attempt to make the calculations run on the GPU (or multi-core CPU) faster at the expense of running more calculations at code generation time (see the electrostatic charge
Figure 8. Electrostatic charge map case study CPU results for 4–200 atoms with a grid size of 1024 × 1024.
map introduction). This was not worth the effort, as the results there are significantly slower than the basic Accelerator version.
For the GPU results (Figure 7) CUDA is unparalleled in speed, with a curve almost parallel to the x axis, but it is important to note that far more development effort was used in that version compared to the Accelerator version. For Accelerator the results again show the JIT overhead seen in the convolution study, and for "Accelerator 1" also show a discontinuity between 108 and 112 atoms. The gradient of "Accelerator 1" before this discontinuity is around seven times greater than the gradient of "CUDA" and after it is around ten times greater, with "Accelerator 2" consistent throughout.
The discontinuity between 108 and 112 atoms, which more fine-grained testing revealed to be located between 111 and 112 atoms, is consistent and repeatable. Accelerator is limited by DirectX 9's shader length limit but has the ability to split long programs up into multiple shaders to bypass this limit. The length of the generated program in this case study depends on the number of atoms being processed; atoms are processed sequentially in an unrolled loop, as Accelerator does not generate loops. It is believed that 112 atoms is the point at which this splitting occurs, as the length of the generated code exceeds the maximum. The time jump in the discontinuity is very close in size to the JIT overhead displayed earlier (less than twice the height), most likely resulting from multiple compilation stages or program transfer stages. The increase in gradient can be explained by the need for multiple data transfers between the GPU and the host (the computer in which the GPU is located).

5. Development

An important consideration for any program is the ease of development. The code in Appendix A helps demonstrate the differences in development efforts between CUDA, C++ and Accelerator; metrics were unavailable as portions of the code were based on existing examples. Even when an algorithm implementation is relatively constant between the various lan-
guages, for example convolution, much more work is required in CUDA before the operation can begin, in terms of low-level data shifting between global and local memory. Due to its model the CUDA version does have more options for manual improvement – with Accelerator the user is entirely bound by the layout decisions of the JIT. This is not always a bad thing, however. It is always possible to write code at the assembly level, but compilers for high-level languages exist because they are seen as an acceptable trade-off: Accelerator is no different.
CUDA uses a separate language with a separate compiler. Accelerator is embedded in C++; it is usable from languages with C extensions and can use operator overloading, making it possible to interchange Accelerator and sequential code with very little effort. Listing 4 shows a function to add two values of type "float_t" together and defines two C input arrays and one C output array. Listing 5 shows C++ code which defines the "float_t" type and uses the generic code, wrapped in a loop, to add the two input arrays together sequentially. Similarly, Listing 6 shows Accelerator using the same generic function and input data, this time defining the type as an Accelerator array object and performing the calculation in parallel on the GPU.

float
    gInputArrayA [4] = {10, 20, 30, 40},
    gInputArrayB [4] = {9, 8, 7, 6},
    gOutputArray [4];
float_t DoCalculation (float_t a, float_t b)
{
    // More complex calculations can be used here with operators.
    return a + b;
}

Listing 4. Generic addition code
// Set the data to float.
typedef float float_t;

void main ()
{
    // Loop over the data.
    for (int i = 0; i != 4; ++i) {
        gOutputArray[i] = DoCalculation (gInputArrayA[i], gInputArrayB[i]);
    }
}

Listing 5. C++ use of generic addition code
// Set the data to FloatParallelArray.
typedef FloatParallelArray float_t;

void main ()
{
    // Set up the target.
    Target * target = CreateDX9Target ();

    // Convert the data.
    FloatParallelArray
        aParA (gInputArrayA, 4),
        aParB (gInputArrayB, 4),
        // Build expression tree.
        aParTot = DoCalculation (aParA, aParB);

    // Run expression and save.
    target -> ToArray (aParTot, gOutputArray, 4);

    // Clean up.
    target -> Delete ();
}

Listing 6. Accelerator use of generic addition code
6. Conclusions and Future Work

We compared the programming models used in CUDA, the Accelerator library and C++ code, and demonstrated that Accelerator provides an attractive trade-off between programmer productivity and performance. Accelerator allows us to implement a 2D convolver using high level data-parallel arrays and data-parallel array operations, without mentioning detailed information about threads, scheduling and data layout, yet it delivers almost the same performance as the hand written CUDA implementation.
Section 5 looked at the code for convolution using the Accelerator, CUDA and C++ systems. The CUDA code is the most complex, regardless of the advantages afforded by that additional complexity. The Accelerator code did introduce some restrictions (e.g. the use of whole array operations), but these restrictions allow the system to efficiently implement data-parallel operations on various targets like GPUs. Additionally, the model means the code is complete – all further optimisation work is left to the compiler. Once past the few overheads, the programming model is very similar to C++ code, providing operations on arrays in a manner similar to operations on individual elements. This is also shown in the development discussion by the example using common code for both systems (Listings 4, 5 and 6). The sequential C++ was the simplest to write, but offered no additional acceleration. The CUDA implementation was the fastest to run, though not always by much; so we believe that Accelerator gives a good balance between development and speed.
We also demonstrated that reasonable speed ups can be obtained using the Accelerator library. The graphs of CPU results (Figures 5 and 8) show how Accelerator performed compared to C++ code. The only time Accelerator was slower than the C++ code was within the electrostatic charge map implementation, and only when compared to heavily optimised and tweaked implementations. The GPU and CPU used for the tests were both around £200, which makes comparing their results directly justifiable from a performance/pound point of view. Given that the CPU and GPU Accelerator tests ran from the same implementations with different targets, the GPU results show how much of an advantage is available over the CPU for these case studies. When these tests were first run Accelerator was significantly behind CUDA on the GPU, but re-runs with the latest versions of both have brought the two sets of results much closer together, as the JIT in Accelerator improves to give better and better code outputs.
A range of algorithms were looked at for different classes of problems, showing speed-ups in some. While a more in-depth study looking at categories such as "The Seven Dwarfs" [18], or the updated and more comprehensive "Thirteen Dwarfs" [19], is required, this work shows that some areas are well suited to Accelerator and that a comprehensive re-
view of algorithm classes would be useful work. In the cases where Accelerator can be used, further work is required to give a more complete picture of the situations to which it is well suited. The results for the electrostatic charge map case study are vastly in CUDA's favour, but the convolution results are arguably in Accelerator's favour, as the performance gap is minimal and the development gap is huge. For this reason we believe that Accelerator is a useful system, but more work is required to determine the problem classes for which it is not suitable.
One major optimisation method for CUDA is the use of shared memory, of which Accelerator currently has no knowledge. One possible avenue for speed up investigation is an automated analysis of the algorithm to group calculations together in order to utilise said shared memory. The GPU results were also produced despite the limitations caused by DirectX 9 listed in Section 1.4. Several of the results have shown the overheads due to Accelerator's JIT. These tests were run without using Accelerator's "parameters" feature, which can pre-compile an expression using placeholders (called "parameters") for input data and store the result. Every run in the results was repeated ten times and summed; however, as it uses off-line compilation, the CUDA code was only ever built once.

Acknowledgements

This work was carried out during an internship at Microsoft Research in Cambridge. The studentship is jointly funded by Microsoft Research and the Engineering and Physical Sciences Research Council (EPSRC) through the Systems Engineering Doctorate Centre (SEDC).

References

[1] David Tarditi, Sidd Puri, and Jose Oglesby. Accelerator: Using Data Parallelism to Program GPUs for General-Purpose Uses. In ASPLOS-XII: Proceedings of the 12th International Conference on Architectural Support for Programming Languages and Operating Systems, pages 325–335, New York, NY, USA, 2006. ACM.
[2] NVIDIA. CUDA homepage. http://www.nvidia.com/object/cuda_home.html, 2010.
[3] NVIDIA. GeForce GTX 480 Specifications, 2010.
[4] ATI. ATI Radeon HD 5970 Specifications, 2011.
[5] John D. Owens, Mike Houston, David Luebke, Simon Green, John E. Stone, and James C. Phillips. GPU Computing. Proceedings of the IEEE, 96(5):879–899, 2008.
[6] John D. Owens, David Luebke, Naga Govindaraju, Mark Harris, Jens Krüger, Aaron E. Lefohn, and Timothy J. Purcell. A Survey of General-Purpose Computation on Graphics Hardware. Computer Graphics Forum, 26(1):80–113, 2007.
[7] J. N. England. A System for Interactive Modeling of Physical Curved Surface Objects. SIGGRAPH Comput. Graph., 12(3):336–340, 1978.
[8] Christian A. Bohn. Kohonen Feature Mapping through Graphics Hardware. In Proceedings of the Int. Conf. on Computational Intelligence and Neurosciences, pages 64–67, 1998.
[9] Jed Lengyel, Mark Reichert, Bruce R. Donald, and Donald P. Greenberg. Real-Time Robot Motion Planning Using Rasterizing Computer Graphics Hardware. SIGGRAPH Comput. Graph., 24(4):327–335, 1990.
[10] Kenneth E. Hoff, III, John Keyser, Ming Lin, Dinesh Manocha, and Tim Culver. Fast Computation of Generalized Voronoi Diagrams Using Graphics Hardware. In SIGGRAPH '99: Proceedings of the 26th Annual Conference on Computer Graphics and Interactive Techniques, pages 277–286, New York, NY, USA, 1999. ACM Press/Addison-Wesley Publishing Co.
[11] Michael McCool and Stefanus Du Toit. Metaprogramming GPUs with Sh. http://libsh.org/brochure.pdf, 2004.
[12] Ian Buck, Tim Foley, Daniel Horn, Jeremy Sugerman, Kayvon Fatahalian, Mike Houston, and Pat Hanrahan. Brook for GPUs: Stream Computing on Graphics Hardware. ACM Trans. Graph., 23(3):777–786, 2004.
[13] Patrick S. McCormick, Jeff Inman, James P. Ahrens, Charles Hansen, and Greg Roth. Scout: A Hardware-Accelerated System for Quantitatively Driven Visualization and Analysis. In VIS '04: Proceedings of the Conference on Visualization '04, pages 171–178, Washington, DC, USA, 2004. IEEE Computer Society.
[14] M. Lesniak. PASTHA – Parallelizing Stencil Calculations in Haskell. Declarative Aspects of Multicore Programming, Jan 2010.
[15] David B. Kirk and Wen-mei W. Hwu. Programming Massively Parallel Processors: A Hands-on Approach. Morgan Kaufmann, 1st edition, 2010.
[16] Victor Podlozhnyuk. Image Convolution with CUDA. http://developer.download.nvidia.com/compute/cuda/sdk/website/C/src/convolutionSeparable/doc/convolutionSeparable.pdf, 2007.
[17] Accelerator Team. Microsoft Accelerator v2 Programming Guide. Microsoft Research, 2010.
[18] Phillip Colella. Defining Software Requirements for Scientific Computing. Presentation, 2004.
[19] Krste Asanovic, Ras Bodik, Bryan C. Catanzaro, Joseph J. Gebis, Parry Husbands, Kurt Keutzer, David A. Patterson, William L. Plishker, John Shalf, Samuel W. Williams, and Katherine A. Yelick. The Landscape of Parallel Computing Research: A View from Berkeley. Technical report, Electrical Engineering and Computer Sciences, University of California at Berkeley, 2006.
A. Convolution Code

Presented here is the main "kernel" code used to perform a 1D convolution in CUDA, Accelerator and sequential C++. Code to set up and destroy data arrays and target interactions has been omitted. The code used for the case studies in Section 3 is based on multiple calls to the code presented here.

A.1. C++ code

Listing 7 is the reference C++ code for a 1D convolver. This code is presented first as it most clearly demonstrates the basic convolution algorithm. "arrayInput" is the input data as a C array, "arrayOutput" is the resulting C array, and "filter" is an array containing the full filter (a "filterRadius" value of five results in eleven filter values). The code loops through every element in the input array (of which there are "arrayWidth"), for each one calculating the result according to all filter values and the surrounding elements (clipped to the array size). The operation of "Clamp" (Listing 8) is the main basis of the two improved C++ implementations, which separate the loop into three loops to deal with start, middle and end of array values separately and do away with branching in "Clamp".

for (int j = 0; j != arrayWidth; ++j) {
    float sum = 0;
    for (int u = -filterRadius, p = 0; u <= filterRadius; ++u, ++p) {
        int J = Clamp (j + u, 0, arrayWidth - 1);
        sum += arrayInput[J] * filter[p];
    }
    arrayOutput[j] = sum;
}

Listing 7. C++ convolution code
int Clamp (int x, int min, int max)
{
    return (x < min) ? min : (x > max) ? max : x;
}

Listing 8. Additional C++ code
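Purely as an illustration of the three-loop idea described above (this sketch is ours, not the paper's "C++ 2"/"C++ 3" code), one arrangement that avoids clamping and branching in the middle loop, reusing the names from Listing 7 and assuming arrayWidth is at least twice filterRadius, is:

// Start region: lookups may fall before the array, clamp to element 0.
for (int j = 0; j != filterRadius; ++j) {
    float sum = 0;
    for (int u = -filterRadius, p = 0; u <= filterRadius; ++u, ++p) {
        int J = (j + u < 0) ? 0 : j + u;
        sum += arrayInput[J] * filter[p];
    }
    arrayOutput[j] = sum;
}
// Middle region: every lookup is in bounds, so no clamping is needed.
for (int j = filterRadius; j != arrayWidth - filterRadius; ++j) {
    float sum = 0;
    for (int u = -filterRadius, p = 0; u <= filterRadius; ++u, ++p) {
        sum += arrayInput[j + u] * filter[p];
    }
    arrayOutput[j] = sum;
}
// End region: lookups may fall past the array, clamp to the last element.
for (int j = arrayWidth - filterRadius; j != arrayWidth; ++j) {
    float sum = 0;
    for (int u = -filterRadius, p = 0; u <= filterRadius; ++u, ++p) {
        int J = (j + u > arrayWidth - 1) ? arrayWidth - 1 : j + u;
        sum += arrayInput[J] * filter[p];
    }
    arrayOutput[j] = sum;
}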
A.2. Accelerator code

Listing 9 is the code to perform a 1D convolution in Accelerator. The C++ sequential code loops over every input element in turn, calculating the effect of each surrounding element before moving on to the next. In contrast, this code calculates the effect of one offset on every element using Accelerator's "Shift" function, which behaves much like the clamped array lookup in the C++ code, but for every element in parallel. Because Accelerator is a JIT system, the main loop builds a large unrolled expression tree which is evaluated when "ToArray" is called for a specified target (here DirectX 9). This has the effect that all the filter values are known at compile time and become constants in the executed GPU code. In this code "arrayTemp" and "arrayInput" are Accelerator objects representing arrays on the GPU; the former is declared and initialised to 0 in the code given. "arrayOutput" is a C array to which the final result is saved after GPU execution.
size_t dims [] = { arrayWidth };
intptr_t shifts [] = {0};
FloatParallelArray arrayTemp (0.0f, dims, 1);

for (int u = -filterRadius, p = 0; u <= filterRadius; ++u, ++p) {
    shifts[0] = u;
    arrayTemp += Shift (arrayInput, shifts, 1) * filter[p];
}

ParallelArrays::Target * target = MicrosoftTargets::CreateDX9Target ();
target -> ToArray (arrayTemp, arrayOutput, height, width, width * sizeof (float));

Listing 9. Accelerator convolution code
A.3. CUDA code
The CUDA code equivalent to the previous two examples is shown in Listing 10. This code is by far the most complex and its development is well documented in the "Image Convolution with CUDA" white paper by Victor Podlozhnyuk [16]. The code within the function "DoOneRow" does a 1D convolution, or part of one according to the number of available processing units on the GPU; if the input is larger, the data is partitioned into blocks in "DoRows". "Part 1" (see in-code comments) is responsible for determining the limits of data for which the current block is responsible, including aprons which may be beyond the input data limits or may be beyond the processing limits of the current block. "Part 2" uses every thread in the current block to load one value from global memory to faster shared memory and waits for all other threads to complete. "Part 3" performs the actual convolution algorithm on one element, with many threads running at once. Unlike the C++ and Accelerator code listings, this version does not save its result to a C array, requiring explicit conversion after the main calculation.
__global__ void DoOneRow(float* const arrayOutput,
                         const float* const arrayInput,
                         const int width,
                         const int pitch,
                         const int radius) {
    // Part 1. Load all values for calculations into variables.
    __shared__ float fRowData[1280];
    const float* const filter = &gc_fFilter[radius];
    const int
        dataStart     = blockIdx.x * blockDim.x,
        apronStart    = dataStart - radius,
        apronClamp    = max(apronStart, 0),
        alignedStart  = apronStart & (-HALF_WARP),
        dataEnd       = dataStart + blockDim.x,
        apronEnd      = dataEnd + radius,
        dataEndClamp  = min(dataEnd, width),
        apronEndClamp = min(apronEnd, width),
        apronOffset   = apronStart & (HALF_WARP - 1),
        maxX          = width - 1;
    int
        load = threadIdx.x + alignedStart,
        pos  = threadIdx.x - apronOffset;
    // Part 2. Copy data from global to local memory, clamped.
    while (load < apronEnd) {
        if (load >= apronEndClamp)   { fRowData[pos] = arrayInput[maxX]; }
        else if (load >= apronClamp) { fRowData[pos] = arrayInput[load]; }
        else if (load >= apronStart) { fRowData[pos] = arrayInput[0];    }
        load += blockDim.x;
        pos  += blockDim.x;
    }
    __syncthreads();
    // Part 3. All data is loaded locally, do the calculation.
    const int pixel = dataStart + threadIdx.x;
    if (pixel < dataEndClamp) {
        float* const dd = fRowData + threadIdx.x + radius;
        float total = 0;
        for (int i = -radius; i <= radius; ++i) {
            total += filter[i] * dd[i];
        }
        arrayOutput[pixel] = total;
    }
}

// Non-truncating integer division.
#define CEILDIV(m, n) (((m) + (n) - 1) / (n))

extern "C" void DoRows(float* arrayOutput,
                       float* arrayInput,
                       int width,
                       int threads,
                       int pitch,
                       int radius) {
    // Call the GPU code from the host.
    dim3 dimBlocks(CEILDIV(width, threads)), dimThreads(threads);
    DoOneRow<<<dimBlocks, dimThreads>>>(arrayOutput, arrayInput, width, pitch, radius);
}
Listing 10. CUDA convolution code
Experiments in Multicore and Distributed Parallel Processing using JCSP
Jon KERRIDGE
School of Computing, Edinburgh Napier University, Edinburgh UK, EH10 5DT
[email protected]
Abstract. It is currently very difficult to purchase any form of computer system, be it notebook, laptop, desktop, server or high-performance computing system, that does not contain a multicore processor. Yet the designers of applications, in general, have very little experience and knowledge of how to exploit this capability. Recently, the Scottish Informatics and Computer Science Alliance (SICSA) issued a challenge to investigate the ability of developers to parallelise a simple Concordance algorithm. Ongoing work had also shown that the use of multicore processors for applications that have internal parallelism is not as straightforward as might be imagined. Two applications are considered: calculating π using Monte Carlo methods and the SICSA Concordance application. The ease with which parallelism can be extracted from a single application using both single multicore processors and distributed networks of such multicore processors is investigated. It is shown that naïve application of parallel programming techniques does not produce the desired results and that considerable care has to be taken if multicore systems are to result in improved performance. Meanwhile, the use of distributed systems tends to produce more predictable and reasonable benefits from the parallelisation of applications.
Keywords: multicore processors, distributed processing, parallel programming, Groovy, JCSP, Monte Carlo methods, concordance.
Introduction
The common availability of systems that use multicore processors is such that it is now nearly impossible to buy any form of end-user computer system that does not contain a multicore processor. However, the effective use of such multicore systems to solve a single large problem is sufficiently challenging that SICSA, the Scottish Informatics and Computer Science Alliance, recently posed a challenge to evaluate different approaches to parallelisation for a concordance problem. There will be other challenges to follow. The concordance problem is essentially input/output bound and thus poses particular problems for parallelisation. As a means of comparison, a simple compute-bound problem is also used as an experimental framework: namely, the calculation of π using a Monte Carlo method. The aim of the experiments reported in this paper is to investigate simple parallelisation approaches (using the JCSP packages [1, 2] for Java, running on a variety of Windows platforms) and see whether they provide any improvement in performance over a sequential solution. In other words, is parallelisation worth the effort? In Section 1, experiments using the Monte Carlo calculation of π are presented. Section 2 describes and discusses the experiments undertaken with the concordance example. Finally, some conclusions are drawn.
1. Calculating π Using Monte Carlo Methods
The calculation of π using Monte Carlo statistical methods provides an approximation based on the relation of the area of a square to an inscribed circle [3]. Given a circle of radius r inscribed in a square of side 2r, the areas of the circle and square are, respectively, πr² and 4r², so the ratio of these areas is π/4. Hence, if sufficient random points are selected within the square, approximately π/4 of the points should lie within the circle. The algorithm proceeds by selecting a large number of points at random (N = 10,240,000 in the listings that follow) and determining how many lie within the inscribed circle (M). Thus, if sufficient points are chosen, π can be approximated by (M/N)*4. The following sequential algorithm, Listing 1, written in Groovy [4], captures the method assuming a value of r = 1 (and using only the top-right quadrant of the circle). The algorithm is repeated 10 times and the results, including timings, are averaged.
01 def r = new Random()
02 def timer = new CSTimer()
03 def pi = 0
04 def int N = 10240000
05 def startTime = timer.read()
06 for ( run in 1..10) {
07   print "-"
08   def int M = 0
09   for ( i in 1..N){
10     def x = r.nextDouble()
11     def y = r.nextDouble()
12     if (( (x*x) + (y*y)) < 1.0 ) M = M + 1
13   }
14   pi = pi + ((double)M)/ ((double)N) * 4.0
15 }
16 def endTime = timer.read()
17 def elapsedTime = (endTime - startTime)/10
18 pi = pi / 10.0
19 println "\n$pi,$elapsedTime"
Listing 1. Sequential implementation of π estimation.
The ‘obvious’ way to parallelise this algorithm is to split the task over a number of workers (W), such that each worker undertakes N/W iterations. A manager process is needed to initiate each worker and collate the results when all the workers have completed their task. Listing 2 shows the definition of such a worker process using Groovy Parallel and JCSP.
20 class Worker implements CSProcess {
21   def ChannelInput inChannel
22   def ChannelOutput outChannel
23   void run(){
24     def r = new Random()
25     for ( run in 1..10){
26       def N = inChannel.read()
27       def int M = 0
28       for ( i in 1..N){
29         def x = r.nextDouble()
30         def y = r.nextDouble()
31         if (( (x*x) + (y*y)) < 1.0 )
32           M = M + 1
33       }
34       outChannel.write (M)
35     }
36   }
37 }
Listing 2. Worker process definition.
The corresponding manager process is shown in Listing 3. Each run of the calculation is initiated by a communication from the manager process to each worker {52}¹. The manager process then waits for the returned value of M from each worker {53}.
38 class Manager implements CSProcess {
39   def ChannelOutputList outChannels
40   def ChannelInputList inChannels
41   void run () {
42     def timer = new CSTimer()
43     def startTime = timer.read()
44     def workers = outChannels.size()
45     def pi = 0.0
46     def N = 10240000
47     def iterations = N / workers
48     for ( run in 1..10) {
49       print "."
50       def M = 0
51       for ( w in 0 ..< workers)
52         outChannels[w].write (iterations)
53       for ( w in 0 ..< workers) M = M + inChannels[w].read()
54       pi = pi + ( ( ((double)M)* 4.0) / ((double)N) )
55     }
56     def endTime = timer.read()
57     def elapsedTime = (endTime - startTime)/10
58     pi = pi / 10.0
59     println "\n$workers,$pi,$elapsedTime"
60   }
61 }
Listing 3. Manager process definition.
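For completeness, the way a Manager and its Workers can be wired together and run in a single JVM can be sketched in plain Java with core JCSP classes. This is an illustration only, not the author's code: the Worker and Manager classes below are compressed Java versions of the Groovy listings above, and the channel factory calls assume the JCSP 1.1 API.

import org.jcsp.lang.*;
import java.util.Random;

// Illustrative Java wiring only: a Manager and W Workers estimating pi,
// run as a single Parallel in one JVM (JCSP 1.1 API assumed).
public class MonteCarloNetwork {

    static class Worker implements CSProcess {
        private final ChannelInput in; private final ChannelOutput out;
        Worker(ChannelInput in, ChannelOutput out) { this.in = in; this.out = out; }
        public void run() {
            Random r = new Random();
            int n = (Integer) in.read();                 // iterations for this worker
            int m = 0;
            for (int i = 0; i < n; i++) {
                double x = r.nextDouble(), y = r.nextDouble();
                if (x * x + y * y < 1.0) m++;
            }
            out.write(m);                                // return the hit count
        }
    }

    static class Manager implements CSProcess {
        private final ChannelOutput[] toWorkers; private final ChannelInput[] fromWorkers;
        Manager(ChannelOutput[] toWorkers, ChannelInput[] fromWorkers) {
            this.toWorkers = toWorkers; this.fromWorkers = fromWorkers;
        }
        public void run() {
            final int N = 10240000;
            int w = toWorkers.length, total = 0;
            for (int i = 0; i < w; i++) toWorkers[i].write(N / w);   // start each worker
            for (int i = 0; i < w; i++) total += (Integer) fromWorkers[i].read();
            System.out.println("pi ~ " + 4.0 * total / N);
        }
    }

    public static void main(String[] args) {
        final int W = 4;
        One2OneChannel[] down = new One2OneChannel[W], up = new One2OneChannel[W];
        ChannelOutput[] outs = new ChannelOutput[W]; ChannelInput[] ins = new ChannelInput[W];
        CSProcess[] network = new CSProcess[W + 1];
        for (int i = 0; i < W; i++) {
            down[i] = Channel.one2one();                 // manager -> worker i
            up[i]   = Channel.one2one();                 // worker i -> manager
            outs[i] = down[i].out();  ins[i] = up[i].in();
            network[i] = new Worker(down[i].in(), up[i].out());
        }
        network[W] = new Manager(outs, ins);
        new Parallel(network).run();                     // everything runs in one JVM
    }
}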
This parallel formulation has the advantage that it can be executed as a single parallel within one Java Virtual Machine (JVM) or over several JVMs using net channels. Furthermore, the JVMs can be executed on one or more cores in a single machine or over several machines, simply by changing the manner of invocation. 1.1 Experimental Framework The experiments were undertaken on a number of different machines and also over a distributed system in which each node comprised a multicore processor. Table 1 shows the three different machine types that were used. Table 1. Specification of the experimental machines used in the experiments.
Name     CPU     Cores   Speed (GHz)   L2 Cache (MB)   RAM (GB)   Operating System   Word size (bits)
Office   E8400   2       3.0           6               2          Windows XP         32
Home     Q8400   4       2.66          4               8          Windows 7          64
Lab      E8400   2       3.0           6               2          Windows 7          32
The Lab and Office machines were essentially the same, except that the Lab machines were running under Windows 7 as opposed to XP. The Home machine was a quad-core 64-bit machine. The Lab machines were also part of a distributed system connected by 100 Mbit/s Ethernet; this network was also connected to the internet and thus liable to fluctuation depending on network traffic.
¹ The notation {n} and {n..m} refers to line numbers in one of the Listings. Each line is uniquely numbered.
1.2 Single Machine Performance The experiments on a single machine were undertaken as follows. The sequential algorithm was executed on each machine type to determine the ‘sequential’ performance of each machine. The average performance for the sequential version over 10 runs for each machine type is shown in Table 2. The effect of the 64-bit architecture on the Home machine is immediately apparent. Using the Windows Task Manager to observe CPU usage on each of the machines it was noted that the maximum CPU usage was never more than 50%. Table 2. Sequential performance of each machine.
              Office   Home    Lab
Time (secs)   4.378    2.448   4.508
The parallel version of the algorithm was then executed on each machine in a single JVM with various numbers of worker processes. The corresponding times and associated speedups are shown in Table 3. The performance in each case was monitored using the Task Manager and in each case the CPU usage was reported as 100%. However, the only version which showed any speedup of the parallel version over the sequential version was the Home machine with 2 workers. In all other cases the use of many parallel workers induced a slowdown even though the CPU was indicating a higher percentage use. The same behaviour was observed by Dickie [5] when undertaking the same Monte Carlo based calculation of π in a .NET environment. It was observed that as the number of threads increased CPU usage rose to 100% and overall completion time got worse. Further analysis using Microsoft's Concurrency Visualizer tool [6] showed this additional processor usage was taken up with threads being swapped. Table 3. Parallel performance with varying number of workers in a single JVM.
Workers   Office (secs)   Office Speedup   Home (secs)
2         4.621           0.947            2.429
4         4.677           0.936            8.171
8         4.591           0.954            7.827
16        4.735           0.925            7.702
32        4.841           0.904            7.601
64        4.936           0.887            7.635
128       5.063           0.865            7.541
The Office and Lab machines use the same processor (E8400) and both show a gradual slowdown as the number of workers is increased. The Home machine (Q8400), in contrast, initially shows a speedup, followed by a dramatic decrease in performance which then slowly gets worse. An explanation of this could be that the L2 cache on the Q8400 is 4MB whereas the E8400 has 6MB, and that this has crucially affected the overall performance. The parallel version of the algorithm was then reconfigured to run in a number of JVMs, assuming each JVM was connected by a TCP/IP based network utilising the net channel capability of JCSP. The intention in this part of the experiment was to run each JVM on a separate core.
Each JVM was initiated from the command line by a separate execution of the java environment. The experiments were conducted twice: once using the command-line java command directly, and once using the Windows start command so that the affinity of the JVM to a particular core could be defined (example invocations are shown after Table 4). This would, it was hoped, ensure that each JVM was associated with a distinct core, thereby increasing the parallelism. In the case of the Home and Lab machines this appeared to have no effect. In the case of the Office machine an effect was observed, and the execution using the start command had a similar performance to the Lab machine. Table 4 shows the performance from runs that did not use the start command. Table 4. Parallel performance with varying number of JVMs in a single machine.
JVMs   Office (secs)   Speedup   Home (secs)   Speedup   Lab (secs)   Speedup
2      4.517           0.969     2.195         1.115     4.369        1.032
4      4.534           0.966     1.299         1.885     4.323        1.043
8      4.501           0.973     1.362         1.797     4.326        1.042
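For reference, the two manners of invocation referred to above can be illustrated as follows. The class name here is only a placeholder, and the /affinity switch of the Windows start command takes a hexadecimal mask of the cores the process may use:

rem Plain invocation: the operating system chooses the core.
java -cp . MonteCarloNode

rem Pinned invocation: restrict this JVM to core 0 (hexadecimal mask 1).
start "worker" /affinity 1 java -cp . MonteCarloNode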
The Office machine, which uses Windows XP, showed a slowdown when run without the start command, whereas the other two machines both showed speedups relative to the sequential solution. These machines use Windows 7 and, as there was no difference in the performance when using start or not, it can be deduced that Windows 7 does try to allocate new JVMs to different cores. The Home machine has 4 cores and it can be seen that the best speedup is obtained when 4 JVMs are used. Similarly, the Lab machine has two cores and again the best speedup occurs when just two JVMs are utilised.
1.3 Distributed Performance
The multi-JVM version of the algorithm was now configured to run over a number of machines using a standard 100 Mbit/s Ethernet TCP/IP network. These experiments involved Lab machines only, which have two cores. One of the machines ran the TCPIPNode Server, the Manager process and one Worker in one core. The TCPIPNode Server is only used to set up the net channel connections at the outset of processing. The Manager is only used to initiate each Worker and then to receive the returned results, and thus does not impose a heavy load on the system. The performance using both two and four machines is shown in Table 5. Table 5. Performance using multiple JVMs on two and four machines.
         Two Machines              Four Machines
JVMs     Time (secs)   Speedup     Time (secs)   Speedup
2        4.371         1.031       -             -
4        2.206         2.044       2.162         2.085
8        -             -           1.229         3.668
16       -             -           1.415         3.186
The best performance is obtained when the number of JVMs used is the same as the number of available cores. Unfortunately, the best speedup relates to the number of machines and not the number of available cores.
1.4 Conclusions Resulting from the Monte Carlo Experiments
The Monte Carlo determination of π is essentially an application that is processor bound with very little opportunity for communication. Hence the normal behaviour of CSP-based parallelism, with many processes ready to execute but awaiting communication, does not happen. JCSP currently relies on the underlying JVM to allocate and schedule its threads (that implement JCSP processes) over multiple cores. In turn, the JVM relies on the underlying operating system (Windows, in our experiments). The disappointing observation is that this combination seems to have little ability to make effective use of multiple cores for this kind of application. Utilising parallel processes within a single JVM had little effect and the result was worse performance. Performance improvement was only achieved when multiple machines were used in a distributed system.
2. Concordance Related Experiments
The SICSA Concordance challenge [7] was specified as follows:
Given: a text file containing English text in ASCII encoding and an integer N.
Find: for all sequences, up to length N, of words occurring in the input file, the number of occurrences of this sequence in the text, together with a list of start indices. Optionally, sequences with only 1 occurrence should be omitted.
A set of appropriate text files of various sizes was also made available, with which participants could test their solutions. A workshop was held on 13th December 2010 where a number of solutions were presented. The common feature of many of the presented solutions was that, as the amount of parallelism was increased, the solutions got slower. Most of the solutions adopted some form of Map-Reduce style of architecture using some form of tree data structure. The approach presented here is somewhat different in that it uses a distributed solution and a different data structure. The use of a distributed solution using many machines was an obvious choice, given the work undertaken on Monte Carlo π. The data structures were chosen so they could be accessed in parallel, thereby enabling a single processor to progress the application using as many parallel processes as possible. However, the number of such parallel processes was kept small, as it had been previously observed that increased numbers of parallel processes tended to reduce performance. The Concordance problem is essentially input-output bound and thus a solution needs to be adopted that mitigates such effects. For example, one of the text files is that of the Bible, which is 4.681 MB in size and comprises 802,300 words. For N = 6 (the maximum sequence length) and ignoring strings that only occur once, this produces an output file size of 26.107 MB.
2.1 Solution Approach
It was decided to use N as the basis for parallelisation of the main algorithm. The value of N was likely to be small and thus would not require a large number of parallel processes on each machine. It was thus necessary to create data structures that could be read in parallel (with each value of N accessed by a separate process).
One approach to processing character strings is to convert each word to an integer value based on the sum of the ASCII values of each character in the word. This has the benefit that subsequent processing uses integer comparisons, which are much quicker than string comparisons. The approach used to parallelise the reading of the input file was to split it into equal-sized blocks, in terms of the number of words, and then send each block to a worker process. The input blocks were distributed in turn over the available worker processes. Once a worker process received a block it would do some initial processing, which should be completed before the next block was to be received. This initial processing removed any punctuation from the words and then calculated the integer value of each word in the block. Some initial experiments determined that a block size of 6k words was a good compromise between the overall time taken to read the file and the ability of a worker process to complete the initial processing before the next block needed to be received, so that the read process was not delayed. This appeared to be a good compromise for the number of workers being used, which were 4, 8 and 12. The worker process could now calculate the values for N = 2..6 (N = 6 was the maximum value chosen²). This was simply undertaken by summing the requisite number of integers in turn from the single-word sequence values previously calculated during the initial phase. This could be easily parallelised because each process would need to read the N = 1 values but would write to a separate data structure for N = 2..6. This was then undertaken for each block in the worker. The blocks were structured so that the last N-1 words were repeated at the start of the next block. This meant that there was no need to transfer any values between workers during processing. The second phase of the algorithm was to search each of the N sequences to find equal values, which were placed in a map comprising the value and the indices where the value was found. Only sequences with equal values could possibly be made from the same string of words. However, some values could be created from different sequences of words (simply because the sum of the characters making up the complete string was the same) and these need eliminating (see below). This phase was repeated for each block in the worker. The result was that for each block a map structure was created which recorded the start index where sequences of equal value were found in that block. Experiments were undertaken to apply some form of hash algorithm to the creation of the value of a sequence. It was discovered that the effect was negligible, in that the number of clashes remained more or less constant; the only aspect that changed was where the clashes occurred. Again, this processing could be parallelised because each set of sequence values could be read in parallel and the resulting maps could also be written in parallel, as they are separate for each N. Each of these maps was then processed to determine which sequence values corresponded to different word sequences. This resulted in another map which comprised each distinct word sequence as the key and the indices where that string was found in the block. Again, this processing was parallelisable in N.
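To make the first phase concrete, the following sketch (plain Java, illustrative only; the names are not taken from the author's code) computes the integer value of each bare word and then derives the N = 2..6 sequence values by summing runs of the single-word values. In the paper's design each value of n would be computed by a separate process reading the shared single-word values, which is what makes this phase parallelisable in N.

// Illustrative only: integer word values and N-word sequence values
// (sum of character codes, then sliding-window sums over the block).
static int wordValue(String bareWord) {
    int v = 0;
    for (int i = 0; i < bareWord.length(); i++) v += bareWord.charAt(i);
    return v;
}

// sequenceValues[n-1][j] is the value of the n-word sequence starting at word j.
static int[][] sequenceValues(String[] bareWords, int maxN) {
    int[] single = new int[bareWords.length];
    for (int j = 0; j < bareWords.length; j++) single[j] = wordValue(bareWords[j]);
    int[][] result = new int[maxN][];
    result[0] = single;
    for (int n = 2; n <= maxN; n++) {
        int count = Math.max(0, bareWords.length - n + 1);
        int[] sums = new int[count];
        for (int j = 0; j < count; j++) {
            // n-word sequence value = (n-1)-word sequence value + the next word's value
            sums[j] = result[n - 2][j] + single[j + n - 1];
        }
        result[n - 1] = sums;
    }
    return result;
}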
At the end of this phase, each block contained a partial concordance for the strings it contained, held in a map with the sequence value as key and a further map of the word strings and indices as the entry, in N distinct data structures. The penultimate phase merged the partial concordances contained in each block into a concordance for the worker process as a whole. This was also parallelisable in N. The final phase was to merge the worker concordances into a final complete concordance for each of the values of N. Initially, the sequence values in each data structure were sorted so that a merge operation could be undertaken with the workers sending entries in a known order to the process undertaking the merge. In the first instance the entries were sent to the initial process that read the input file, where the complete concordance was created in a single file by merging the concordance entries from each worker in a manner similar to a tape merge. In a second implementation, additional processes were run in each worker that just sent the entries for one value of N to a separate merge process. There were thus N such merge processes, each generating a single output file for the corresponding value of N. The effect of each of these parallelisations is considered in the following subsections.
² N = 6 was chosen because it was known that the string "God saw that it was good" occurs several times in Genesis.
2.2 The Effect of Phase Parallelisation
Each parallelisation did improve the performance of the application as a whole. Consider, for example, the second phase, in which each sequence for N = 1..6 is searched to find the indices of equal sequence values. The sequential version of the processing is shown in Listing 4.
62 def localEqualWordMapListN = [] // contains an element for each N value
63 for ( i in 1..N) localEqualWordMapListN[i] = [] // initialise to empty list
64 def maxLength = BL - N
65 for ( WordBlock wb in wordBlocks) {
66   // sequential version that iterates through the sequenceBlockList
67   for ( SequenceBlock sb in wb.sequenceBlockList){
68     // one sb for each value of N
69     def length = maxLength
70     def sequenceLength = sb.sequenceList.size()
71     if (sequenceLength < maxLength) length = sequenceLength // last block
72     def equalMap = defs.extractEqualValues ( length,
73         wb.startIndex,
74         sb.sequenceList)
75     def equalWordMap = defs.extractUniqueSequences ( equalMap,
76         sb.Nvalue,
77         wb.startIndex,
78         wb.bareWords)
79     localEqualWordMapListN[sb.Nvalue] << equalWordMap
80   }
81 }
Listing 4. Sequential version of equal map processing.
The data structure localEqualWordMapListN {62} is used to hold the map with the sequence value as key; each entry is itself a map with the word string as key and the indices where the word string starts as the value. Each of the map entries is initialised to an empty list {63}. The variables maxLength {64} and length {69..71} are used to determine how many of the sequence values are to be used, which varies with the block size and the value of N. The last block may only be partially full. The WordBlock structure {65} holds all the data structures associated with each block and these are held in a list called wordBlocks. The loop {65..81} iterates over each such WordBlock. Within each WordBlock there are N SequenceBlocks and the inner loop {67..80} iterates over each of these. Each iteration initially finds the location of each sequence value that has multiple instances in the block. This is achieved by the method extractEqualValues {72}, which stores the result in the map equalMap. The map equalMap is then passed to the method extractUniqueSequences {75}, which creates the required output map; this is then appended to localEqualWordMapListN {79}. The crucial aspect of this process is that there are N SequenceBlocks. Thus, the process can be parallelised in N (as shown in Listings 5 and 6). It can be seen that lines {82..85} are the same as {62..65}. Lines {86..93} create a list of process instances using the collect method of Groovy. The process ExtractEqualMaps {87} utilises the same parameters as the methods used in the sequential version.
The process list procNet {86} is then executed in a PAR {94}. This has the effect of determining the localEqualWordMapListN {82} for each value of N in parallel.
82 def localEqualWordMapListN = [] // contains an element for each N value
83 for ( i in 1..N) localEqualWordMapListN[i] = []
84 def maxLength = BL - N
85 for ( WordBlock wb in wordBlocks) {
86   def procNet = (1..N).collect { n ->
87     new ExtractEqualMaps( n: n,
88       maxLength: maxLength,
89       startIndex: wb.startIndex,
90       words: wb.bareWords,
91       sequenceList: wb.sequenceBlockList[n-1].sequenceList,
92       localMap: localEqualWordMapListN[n])
93   }
94   new PAR(procNet).run()
95 }
Listing 5. Parallel invocation of ExtractEqualMaps.
Listing 6 shows the definition of the process ExtractEqualMaps. By inspection it can be seen that the internal method calls of extractEqualValues {104} and extractUniqueSequences {106} are essentially the same as those in the sequential version except that they refer to the properties of the process rather than the actual variables. The definition is, however, unusual because it contains no channel properties. In this case the process will access memory locations that are shared between the parallel instances of the process. However, the data structures were designed so that multiple processes can read the structures but write to separate data structures, ensuring there are no memory synchronisation and contention problems.
93  class ExtractEqualMaps implements CSProcess {
94    def n
95    def maxLength
96    def startIndex
97    def sequenceList
98    def words
99    def localMap
100   void run(){
101     def length = maxLength
102     def sequenceLength = sequenceList.size()
103     if ( sequenceLength < maxLength) length = sequenceLength
104     def equalMap = defs.extractEqualValues ( length, startIndex,
105         sequenceList)
106     def equalWordMap = defs.extractUniqueSequences ( equalMap,
107         n, startIndex, words)
108     localMap << equalWordMap
109   }
110 }
Listing 6. Definition of the process ExtractEqualMaps.
2.3 Performance Improvements Resulting from Internal Parallelisation
Table 6 shows the performance data for the ExtractEqualMaps phase of the algorithm. Worker Style 1 is the sequential version of the algorithm and the parallel version is represented as Style 2. The speedup resulting from increasing the number of workers is linear and is very close to the reasonable limit represented by the increased number of workers. The speedup due to the change in algorithm is about 2.5 times, which, given that N = 3, is in fact far more encouraging than was achieved in the Monte Carlo experiments. In these experiments no attempt was made to run multiple JVMs on each machine.
Table 6. Analysis of parallelisation performance by workers and technique (N=3).
Worker Style   Workers   Time (secs)   Speedup by workers   Speedup by style
1              4         138.263       -                    -
1              8         69.584        1.99                 -
2              4         53.600        -                    2.58
2              8         27.559        1.94                 2.52
2              12        17.957        2.98                 -
2.4 Effect of Parallelising the Merge Phase Table 7 shows the total time for worker processing for a system employing 12 workers for increasing values of N. The time taken is determined by the last worker to finish its task. The workers are of Style 2 which has all internal phases parallelised. Only the merge phase is sequential. It can be seen that the ratio of the time taken is increasing more rapidly than the ratio of the file output size, thus real benefit can be achieved by overlapping the merge phase. Table 7. Performance of the sequential merge for 12 workers.
N   Total Time (secs)   Time Ratio   Output File Size (KB)   Size Ratio
3   44.034              -            17,798                  -
4   62.044              1.41         21,412                  1.20
5   82.003              1.86         23,926                  1.34
6   102.896             2.34         25,810                  1.45
The effect of overlapping the merge phase, undertaking N merges each of which writes to its own file rather than writing all the output to a single file, is shown in Table 8. Worker Style 2 has all phases of the internal algorithm fully parallelised but writes the final concordance to file using a sequence of merges for each value of N. Worker Style 3, in contrast, undertakes the merge for each value of N in parallel by having a separate merge process running on its own machine. The worker process has an internal set of N parallel processes that write the entries of each partial concordance to each of the N merge processes. By comparing the values in Tables 7 and 8 it can be seen that the increase is now more in line with the increase in the size of the output file. The improvement is unlikely to be linear as the size of each output file varies. The largest file is associated with N = 1 because that file contains all instances of places where any single word has been repeated. In the case of the Bible, N = 1 constitutes about 25% of the total output. The overall improvement for N = 6 in using the parallel merge represents a speedup of 1.61 and is thus worthwhile.
Table 8. Performance of the parallel merge phase for 12 workers.
Worker Style   N   Total Time (secs)   Time Ratio
2              3   44.034              -
2              6   102.896             -
3              3   32.319              1.36
3              6   63.866              1.61
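As an aside, the final merge described above simply combines already sorted partial concordances from each worker into one output, in the manner of a tape merge. A minimal illustration (plain Java, not the author's code) of combining sorted partial concordances keyed by sequence value is:

import java.util.*;

// Illustrative only: combine worker partial concordances (sequence value ->
// (word string -> start indices)) into one final map; TreeMaps keep the
// merged output sorted by sequence value, as the sequential merge requires.
static TreeMap<Integer, Map<String, List<Integer>>> mergeConcordances(
        List<SortedMap<Integer, Map<String, List<Integer>>>> partials) {
    TreeMap<Integer, Map<String, List<Integer>>> merged = new TreeMap<>();
    for (SortedMap<Integer, Map<String, List<Integer>>> partial : partials) {
        for (Map.Entry<Integer, Map<String, List<Integer>>> e : partial.entrySet()) {
            Map<String, List<Integer>> byWords =
                    merged.computeIfAbsent(e.getKey(), k -> new TreeMap<>());
            for (Map.Entry<String, List<Integer>> w : e.getValue().entrySet()) {
                byWords.computeIfAbsent(w.getKey(), k -> new ArrayList<>())
                       .addAll(w.getValue());      // append this worker's indices
            }
        }
    }
    return merged;
}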
2.5 Overall Processing Improvements
As a final experiment, the performance of the system was measured with two different input files, for N = 6 and 12 workers. This is shown in Table 9. The input file WaD contains the text of Elizabeth Gaskell's Wives and Daughters. The comparison is undertaken on the basis of the number of words in the input file, the size of the output file, the size of the file for N = 1 and the total processing time. As can be seen, the smallest ratio is that for the time, implying that as the size of the input file varies the overall time will increase at the smallest rate. Table 9. Ratio analysis for Bible and Wives and Daughters.
        Words     Total Output (KB)   Output for N = 1 (KB)   Time (secs)
Bible   802,300   26,107              6,297                   63.809
WaD     268,500   5,488               2,044                   27.302
Ratio   2.99      4.76                3.08                    2.34
3. Conclusions
These experiments have produced interesting and sometimes unexpected results. Perhaps the most disappointing result was that obtained in the calculation of π, where it seems that, for compute-bound processing, the ability of the associated JVM (supporting JCSP) and/or the operating system automatically to make effective use of multiple cores was limited. The concordance example, however, produces more promising results in two ways. First, the effect of introducing parallelism to the individual phases that make up the algorithm always improved the performance of the phase. Secondly, these improvements were achieved without taking any account of the fact that the machines were dual core, which is what the parallel system designer would wish. The fact that the improvements were achieved using a distributed system is also encouraging, given the number of high-performance clusters being built. The key aspect of the success of the concordance solution was that it was designed parallel from the outset, in terms of its internal data structures and the manner in which the processes were to communicate. A highly optimised sequential solution was not the starting point, as this would have been much harder to parallelise.
The solution has the benefit of being scalable in terms of the value of N and the number of workers. It also has the benefit of being capable of scaling to any size of input file. If the available memory in all the workers were insufficient to hold all the data structures then these could be written to file as required. Solutions that assume sufficient memory to hold the entire input file and the internal data structures are not scalable in the same manner. Finally, the time taken to process the Bible input file, for N = 6, sequentially was 210 seconds in comparison to the 64 seconds taken by the parallel version.
Acknowledgements
The author gratefully acknowledges the very helpful comments made by the anonymous referees and also the efforts of the editors in improving the coherence and presentation of this paper.
References
[1] JCSP Home Page, http://www.cs.kent.ac.uk/projects/ofa/jcsp/, accessed 28th April, 2011.
[2] P.H. Welch, N.C.C. Brown, J. Moores, K. Chalmers, and B. Sputh. "Integrating and Extending JCSP". In A.A. McEwan, S. Schneider, W. Ifill, and P.H. Welch, editors, Communicating Process Architectures 2007, volume 65 of Concurrent Systems Engineering Series, pp. 349–370, Amsterdam, The Netherlands, July 2007. IOS Press. ISBN: 978-1-58603-767-3.
[3] E. Andersson. "Calculation of Pi Using Monte Carlo Methods", http://www.eveandersson.com/pi/monte-carlo-circle, accessed 28th April, 2011.
[4] J. Kerridge, K. Barclay and J. Savage. "Groovy Parallel! A Return to the Spirit of occam?", in J.F. Broenink et al. (Eds.), Communicating Process Architectures 2005, pp. 13–28, IOS Press, Amsterdam, 2005.
[5] S. Dickie. "Can design patterns (and other software engineering techniques) be effectively used to overcome concurrency and parallelism problems that occur during the development stages of video games?", MSc Thesis, School of Computing, Edinburgh Napier University, 2010.
[6] Microsoft, Visual Studio 2010 Concurrency Visualizer, see http://msdn.microsoft.com/en-us/magazine/ee336027.aspx, accessed 28th April, 2011.
[7] SICSA Concordance Challenge, http://www.macs.hw.ac.uk/sicsawiki/index.php/MultiCoreChallenge, accessed 28th April, 2011.
Evaluating an Emergent Behaviour Algorithm in JCSP for Energy Conservation in Lighting Systems
Anna KOSEK a, Aly SYED b and Jon KERRIDGE c
a Risoe DTU National Laboratory of Sustainable Energy, Denmark; E-mail: [email protected]
b NXP Semiconductors, The Netherlands; E-mail: [email protected]
c Edinburgh Napier University, United Kingdom; E-mail: [email protected]
Abstract. Since the invention of the light bulb, artificial light has accompanied people all around the world every day and night. While the light bulb itself has evolved considerably over the years, light control systems are still switch-based and require users to make most of the decisions about their behaviour. This paper presents an algorithm for emergent behaviour in a lighting system that achieves a stable, user-defined light level while saving energy. The algorithm employs a parallel design and was tested using JCSP.
Keywords. emergent behaviour, lighting system, energy saving
Introduction
The main energy sources on Earth are not renewable. According to EIA¹ statistics from 2008, renewable solar, geothermal, wind, hydro-power and biomass energy make up only 7% of the world energy supply. Energy usage grows every year; according to the EIA, between 1997 and 2008 it grew by 604.5 Terawatt hours (TWh), that is 15.5% of total energy use [1]. There is a widely recognised need to reduce energy usage in any of the major sectors of the economy. Lighting takes on average 37.8% of the total energy use in buildings over 90 m² [2] and in houses it is 17% [3]. Lighting therefore has a big share of overall energy consumption and offers an opportunity to conserve shrinking energy sources. There are many ways to save energy on lighting; a very obvious one is to replace existing high-power incandescent bulbs with energy-saving Compact Fluorescent Lamps (CFL) and Light Emitting Diode (LED) lights. This can bring savings of 70% when switching from incandescent to CFL [4] or LED [5]. Further reductions in energy use can be achieved by better controlling lights, so that they consume energy sparingly. Statistics from the Dutch NEN² norm NEN2916 [6] show an estimate of potential energy savings for different smart light controls. The statistics in Table 1 indicate that daylight-dependent switching can save 60% of energy use in spaces with daylight, that is up to 22% of the total energy use for lighting in the building; if on/off switching is replaced with dimming capability the reduction is 80%, that is, overall, on average 30% energy conservation in a building. The number of devices controlled by home automation systems grows every day and many devices are already hidden from people's sight, becoming more and more pervasive.
¹ U.S. Energy Information Administration, http://www.eia.doe.gov
² Nederlandse Normalisatie-instituut (Dutch Standards Institute), http://www.nni.nl
Table 1. Energy reduction possibilities for lighting.
                                                                             Reduced number of light bulb burning hours (%)
Smart light control                                                          Spaces with daylight   Spaces with artificial light
Turning all lights off at a certain time                                     30                     30
Manual switching                                                             30                     10
Daylight-dependent switching                                                 60                     10
Daylight-dependent switching and turning all lights off at a certain time   80                     30
Daylight-dependent dimming                                                   80                     4
Ongoing device miniaturisation makes it possible to create a network of many dust-like devices that will eventually disappear from people's sight; locating and managing them will therefore become very difficult [7]. Building automation systems are usually centrally controlled; however, the new trend in building automation systems is a more distributed approach to control. Small systems are easy to manage with a central control, but for large systems problems with scalability appear and a central management unit becomes a system's bottleneck. The other reason to move toward distributed control systems is that central control systems are difficult to change in response to changing occupancy and layout of the building. When considering pervasive systems, self-organisation, self-reconfiguration and adaptation to environmental conditions are desired control mechanisms. In distributed systems, it is possible to program devices to behave in a predefined manner, but it is also possible to give devices some simple behaviour and let the total distributed system's behaviour emerge out of the actions of individual devices. This paper describes an emergent behaviour algorithm for energy conservation in a lighting system. The presented algorithm is inspired by the workings of human society. Section 1 presents an example of a lighting system and a proposed scenario for light source adaptation in a single space. Section 2 describes the Lazy and Enthusiastic Employee algorithm for emergent behaviour and Section 3 proposes a production regulation scheme used in this algorithm. In Section 4 of this paper we describe how the algorithm can be adjusted to achieve interesting behaviour. In Section 5 a proposed scenario implementation in JCSP [8,9] is discussed. Section 6 presents different experiments performed with the implemented simulation and the results are described in Section 7. Conclusions and further work are presented in Section 8.
1. Scenario Description
The scenario chosen to describe the lighting system is presented in Figure 1, which shows a model of a room that is used as an office. The space is equipped with two different light source types: 16 ceiling light bulbs that are capable of dimming; the other source of light in this room is a mirror that is placed outside the window. This mirror can be actuated such that it reflects sunlight to the ceiling in the room. A light sensor, also placed outside the window, detects and informs the mirror if there is enough sunlight to reflect into the room. It is possible to control the amount of sunlight reflected into the room by changing the angle of the mirror to the incident sunlight. The space is divided into 16 regions equipped with one light sensor each. Each region is associated with a lamp and with the part of the ceiling that illuminates it using sunlight reflected from the mirror. The system is implemented such that all devices, namely the 16 lamps, 16 light sensors, a mirror and the outside light detecting sensor, are autonomous devices that control their own actions without the influence of a central control device. This can be implemented, for example, using the architecture presented in [10].
Figure 1. Proposed scenario.
When the sunlight conditions outside change and less light is reflected into the room, the light sensors detect a change in light intensity in the room. The light sensors then inform the lights, which adjust their dimming to compensate, such that the amount of light in the room is maintained at the user-defined level. As all devices in the room are autonomous and no central control is present, devices need to react to a signal from the sensor individually. This paper describes an algorithm that is responsible for the emergent behaviour of the presented lighting system.
2. Lazy and Enthusiastic Employee Algorithm for Emergent Behaviour
Let's consider an example of a brick making company. The company can make bricks that have different cost prices. One is made from local materials and therefore has a lower production cost, but the number of bricks that can be made in a certain time is limited, due to limited resources. The second type of brick is more expensive to produce, because it is made of imported materials, but the number of bricks that can be made in a certain period of time is not limited. The company has a pool of employees, divided into lazy and enthusiastic employees: a lazy worker is not very inclined to work; an enthusiastic employee, on the other hand, is always happy to answer a manager's request. However, both lazy and enthusiastic employees are able to make bricks at the same rate once they start working at full capacity. A manager monitors the factory's overall production and periodically informs the employees about the factory performance. The manager has no control over choosing what type of bricks are made; he only issues feedback on the work that has been done by all employees. For optimal functioning of the factory the issues that need to be addressed are:
• Is it possible to make this brick making company reliable, as long as there are employees to work?
• How to minimise the production cost by using available employees and material?
Because the factory manager has no control over choosing the brick type that is being made, the algorithm to minimise production cost has to be deployed in the employees. Let's assign all lazy employees to the production of expensive bricks and the enthusiastic employees to cheap brick production. When a request to increase brick production is triggered, all available employees respond, but enthusiastic employees will start working immediately and try to increase brick production rapidly. Lazy employees, on the other hand, will try to avoid working, therefore they will wait for a while and then increase brick production by a small factor. If the production of cheap bricks is enough to fulfil a request,
expensive bricks are not produced or only a small number of those bricks is produced. If the inexpensive production is not enough, then the lazy workers will eventually boost production of expensive bricks and take over the remaining percentage of the delivery. When a request to decrease production is triggered, the lazy workers quickly recognise the opportunity to work less and are very happy to decrease production. Enthusiastic employees, on the other hand, are very happy to work, so are not that inclined to decrease their workload. Enthusiastic employees will wait for a while for the situation to stabilise and, if it is necessary, they will decrease production only slightly. The first reaction of both lazy and enthusiastic employees is significant for minimising the cost of production. Because the brick company has to be reliable, lazy workers will eventually have to work hard if the production of cheap bricks is not enough to fulfil the request. Similarly, the enthusiastic employees will have to work less if the production exceeds the required amount. As shown in Figure 2, the factory is designed to supply a stable flow of bricks and react to requests to increase and decrease production without any central control.
Figure 2. Factory scheme.
Another issue for emergent systems is the feedback loop problem. If, on a request, all employees are equally vigorous in changing their behaviour, the behaviour of the system alters very quickly, in proportion to the number of employees answering the request. Therefore, if employees are all happy to answer a request to increase production at the same time, this behaviour will be escalated and trigger another request to decrease production, which can lead to an infinite feedback loop. Introducing the Lazy and Enthusiastic Employee algorithm slows down the reaction of some parts of the factory, which reduces the rapid increase or decrease of production. The feedback loop can still appear, but it will be truncated: as the reaction to decreasing and increasing production is not the same, the employees will eventually adapt to the situation and the loop will be broken.
3. The Used Production Regulation Scheme
For the purpose of this paper we present a production regulation scheme that is used with the emergent behaviour algorithm. In the factory all employees are capable of reacting to a value propagated by the manager and adapting their production rate. The ideal production rate in the factory is provided by the manager and is remembered by all employees. The ideal value is provided with a margin that determines a range around this value within which the production rate may vary. The workers do not try to compensate while the production is within the margin. This is done to avoid the production rate changing continuously when the value received from the manager is very close to the ideal value. The production rate in an environment is divided into three groups: above the agreed range, inside the range and below it. If we let pi be the ideal production rate value in the factory, pi > 0, m be a margin and pa be the actual production of the factory, then the value of pa is in the agreed range if pa ∈ (pi − m, pi + m), provided m < pi and m > 0.
The scheme that regulates the production rate pa in the factory takes a value from the manager at a particular time; if the value is above the specified range (pa > pi + m) the production is decreased, and if the value is below the range (pa < pi − m) the production is increased. If the value pa is inside the range, no action is taken. Let d be the production rate of an employee of the factory, where d > 0, and let Δd be a factor of increase or decrease of d, where Δd > 0. Then if pa < pi − m the worker's production rate is increased, d = d + Δd; if pa > pi + m the worker's production rate is decreased, d = d − Δd; otherwise the request is dropped and the production rate d is kept constant until another request is issued. The behaviour can be altered by changing the value of the delta Δd depending on some conditions. Let c{n} be a sequence of conditions c1, c2, ..., cn, where c{n} = {c1, c2, ..., cn}, n ∈ N+, cn ∈ R, and let g : c{n} → Δd, Δd > 0, and h : c{n} → Δd, Δd > 0, be functions that generate the decrease/increase factor of brick production by each employee accordingly. Then:

    Δd = g(c{n})  if pa < pi − m;
    Δd = h(c{n})  if pa > pi + m.
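Before considering specific choices of g and h, the regulation step itself can be sketched as follows (Java, illustrative only; g and h are left abstract and the clamp at zero is an assumption added here for safety):

// Illustrative only: one regulation step for a single employee.
// pa = overall production reported by the manager, pi = ideal rate,
// m = margin, d = this employee's current production rate.
interface FactorFunction { double delta(double[] conditions); }   // plays the role of g or h

static double regulate(double d, double pa, double pi, double m,
                       FactorFunction g, FactorFunction h, double[] conditions) {
    if (pa < pi - m) {
        return d + g.delta(conditions);                  // production too low: increase by g(c)
    } else if (pa > pi + m) {
        return Math.max(0.0, d - h.delta(conditions));   // too high: decrease by h(c), clamped at 0
    }
    return d;                                            // inside (pi - m, pi + m): no change
}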
Functions g and h that generate the decrease or increase factor of brick production by each employee can be adjusted to regulate factory behaviour. Functions g and h can be either directly dependent on or independent of pa, the actual value of production in the factory. The first group of production decrease or increase factor functions (g′ and h′, where g′ ⊂ g and h′ ⊂ h) produce values independent of the value of the actual production in the factory, therefore c{n} is not dependent on pa. For example, let's assume the ideal production is pi = 100 bricks/min and the margin is m = 5 bricks/min; the accepted production rate is then between 95 bricks/min and 105 bricks/min. If the input from the manager informs that the actual production is pa = 50 bricks/min, then the production is increased. An employee can then decide to change its production by a factor of Δd = 1 brick/min or Δd = 10 bricks/min; the choice is independent of the value of pa. The production rate scheme with factor functions independent of the actual production only checks which of the three ranges the value belongs to and uses functions not based on the actual value of pa. Such a function can be dependent on some other conditions used to calculate the increase or decrease factor. Therefore the employee that adapts to the manager's request only needs to know whether the value is outside of the set (pi − m, pi + m) and react depending on the situation. The second group of production decrease or increase factor functions (g′′ and h′′, where g′′ ⊂ g and h′′ ⊂ h) produce values dependent on the value of the actual production in the factory, therefore pa, or a function of pa, can be one of the conditions in the sequence c{n}. For example, one of the conditions of functions g′′ and h′′ can be the distance between the actual and ideal production rate, therefore c1 = |pi − pa|, where c1 ∈ c{n}.
4. Algorithm Adjustments
There are several factors that can be adjusted to achieve emergent behaviour when using the Lazy and Enthusiastic Employee algorithm: in this paper we consider the production decrease or increase factor functions and the size of the margin m. The production decrease or increase factor functions help to vary the workers' behaviour. As mentioned in the algorithm description, any worker behaves differently when the overall production is too low and when it is over-producing, therefore the functions for the production decrease and increase factor are different. Let's consider a factory with 200 workers, where 100 of them are lazy employees and the remaining 100 are enthusiastic employees. The ideal production rate is set to be 10000
bricks/minute, all workers start from an individual production rate of 0 bricks/minute, and the default increase/decrease rate (Δd) is 10 bricks/minute. Let's assign all enthusiastic employees to producing cheap bricks (1$ per unit) and lazy employees to the production of expensive bricks (10$ per unit). By selecting appropriate production decrease or increase factor functions (g and h) we need to minimise the cost and maximise the factory reliability. At any time period the factory is reliable when the actual production rate is within the accepted range, therefore pa ∈ (pi − m, pi + m), provided m < pi and m > 0, where pi is the ideal overall production rate. Within the proposed example let's consider decrease or increase factor functions (g′ and h′) that produce values independent of the value of the actual production in the factory. The functions considered for enthusiastic employees, g′a, h′a, and lazy employees, g′b, h′b, are as follows:

    g′a1(Δd) = 3,   h′a1(Δd) = 1,
    g′b1(Δd) = 1,   h′b1(Δd) = 3.        (1)
The workers' production rate factor in the first example of g and h functions is constant, and the reaction to low production is faster than the reaction to over-producing for an enthusiastic employee (g′a1(Δd) > h′a1(Δd)), and the opposite for a lazy employee (g′b1(Δd) < h′b1(Δd)). This way the enthusiastic employee increases production faster than it decreases it, independently of the value of Δd. In this case the production increase/decrease factor functions are constant and independent of the value of pa. The second example functions are as follows:

    g′a2(step, Δd) = Δd · (step⁴/100)/100,               where step ∈ [0, 10],
    h′a2(step, Δd) = Δd · ((step − 10)⁴/100)/100,        where step ∈ [0, 10],
    g′b2(step, Δd) = Δd · ((step − 10)³/10 + 100)/100,   where step ∈ [0, 10],
    h′b2(step, Δd) = Δd · (100 − step³/10)/100,          where step ∈ [0, 10].        (2)
The workers' production rate factor in the second example of g and h functions is modelled using the curves presented in Figure 3. The curves in Figure 3, based on x⁴ and x³, are used to calculate the fraction of Δd being added or subtracted. The shape of the curve influences the worker's production rate and is one of the parameters of the algorithm. According to the curves in Figure 3, enthusiastic workers increase their production rapidly at first and then stabilise, while decreasing production slowly and then picking up when no other employees are willing to decrease the production. The behaviour changes depending on the step, where step ∈ [0, 10]. At first an employee continues on the chosen curve and, when the overall production reaches the other side of the required range (pi − m, pi + m), the curve is changed and the worker behaves differently.
Figure 3. Functions used to decrease and increase bricks production in the factory.
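A direct transcription of the example functions (2) shows how the step value selects a point on the x⁴ or x³ curves of Figure 3. The following is an illustrative Java sketch only; deltaD corresponds to the default increase/decrease rate (10 bricks/minute above):

// Illustrative transcription of functions (2); step runs from 0 to 10.
static double gA2(double step, double deltaD) {                 // enthusiastic: increase
    return deltaD * (Math.pow(step, 4) / 100.0) / 100.0;
}
static double hA2(double step, double deltaD) {                 // enthusiastic: decrease
    return deltaD * (Math.pow(step - 10, 4) / 100.0) / 100.0;
}
static double gB2(double step, double deltaD) {                 // lazy: increase
    return deltaD * (Math.pow(step - 10, 3) / 10.0 + 100.0) / 100.0;
}
static double hB2(double step, double deltaD) {                 // lazy: decrease
    return deltaD * (100.0 - Math.pow(step, 3) / 10.0) / 100.0;
}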
The workers' behaviour for both sets of g and h functions is compared to the behaviour without the Lazy and Enthusiastic algorithm and is presented in Figure 4. The g and h functions used for comparison are as follows:

    g′a3(Δd) = 2,   h′a3(Δd) = 2,
    g′b3(Δd) = 2,   h′b3(Δd) = 2.        (3)
As presented in Figure 4, the Lazy and Enthusiastic Employee algorithm's performance depends on the choice of g and h functions. Graphs 4.A1 and 4.A3 are very similar, therefore the overall production of the factory is stable in both of those cases. When we look closer at the individual behaviour of workers, graph 4.B3 shows that both groups work in the same way and try to sustain the production rate; in graph 4.B1 enthusiastic employees take over the production with 9.9 to 10.2 bricks/minute, and force the lazy workers to decrease their production to around 0-2 bricks/minute. In the third case (graph 4.B2), using the curves from Figure 3, enthusiastic workers take over the production completely, not letting lazy employees contribute to the overall production rate. The second group of production decrease or increase factor functions (g′′ and h′′) produce values dependent on the value of the actual production in the factory. Therefore the value of the increase/decrease of brick production depends on the actual value of the overall production pa. An example of functions g′′ and h′′ that depend on the distance between pa and pi, as introduced in the previous section, is as follows:

    g′′a4(pa, pi) = 1 − (|pi − pa|/pi),   pi, pa > 0,
    h′′a4(pa, pi) = |pi − pa|/pi,         pi, pa > 0,
    g′′b4(pa, pi) = |pi − pa|/pi,         pi, pa > 0,
    h′′b4(pa, pi) = 1 − (|pi − pa|/pi),   pi, pa > 0.        (4)
Functions g and h are designed to behave in one way when the distance between p_a and p_i is large and in another way when both values are in close proximity. Special attention was paid to the region close to the value of p_i, in order to enable enthusiastic employees to take over work done by lazy workers and stabilise the production. The behaviour of the factory is presented in Figure 5. As shown in Figures 4 and 5, the emergent behaviour of workers in a factory varies depending on the chosen production decrease or increase factor functions. Table 2 presents the combined results of the described behaviours, giving the cost of production and the factory reliability. The cost of production is calculated with the assumption that cheap bricks cost 1$ and expensive bricks 10$ per unit. Factory reliability is measured from 100 samples of behaviour (100 minutes); the factory is reliable if the actual production is within the accepted range, i.e. p_a ∈ (p_i − m, p_i + m), provided m < p_i and m > 0, where p_i is the ideal overall production rate. Note that the limit on the production of cheap bricks is not included in the results in Table 2, as the aim is to compare the speed of reaction of the workers depending on the increase and decrease factor functions. In the experiments described in Section 6, the light production from the mirror is limited by the outdoor lighting conditions.

Table 2. Algorithm evaluation towards production cost and factory reliability.

    Function used                       (1)        (2)       (3)        (4)
    cheap bricks production cost        798601     84390     435400     891502
    expensive bricks production cost    800520     0         4354000    936440
    overall production cost             1599121    84390     4789400    1827942
    factory reliability                 72%        84%       76%        94%
Figure 4. A1, A2, A3: Overall behaviour of workers with functions (1), (2) and (3) respectively; B1, B2, B3: Behaviour of groups of workers with functions (1), (2) and (3) respectively.
Figure 5. A4: Overall behaviour of workers with functions (4); B4: Behaviour of groups of workers with functions (4).
Based on the data presented in Table 2, functions (2) minimised the cost considerably while maintaining a factory reliability of 84%. The factory reliability obtained with functions (4) is impressive, but the cost is around 20 times higher than with functions (2). The overall behaviour of the algorithm shows that the functions regulating the production decrease or increase factor need to be selected carefully. When optimising for production cost, functions (2) or variations of them should be used.

5. Implementation

The Lazy and Enthusiastic Employee algorithm can be used to sustain a stable light level in a room with many autonomous light sources, as shown in Figure 1, while saving electricity. Light bulbs are treated as lazy employees, as they need electricity to work, which is "expensive". The mirror, which uses sunlight ("cheap"), is assigned to be an enthusiastic employee and is expected to give as much light as is needed and possible. This way it is possible to use the algorithm to save energy and ensure a stable light level in the space, as long as all light sources can function properly. This system can also exhibit emergent behaviour based on this simple model.

All devices needed for the lighting system described in this paper are autonomous and collaborate by forming ad-hoc networks. This means that many concurrent behaviours, representing real-world scenarios, need to be simulated. We have chosen, therefore, the JCSP (Communicating Sequential Processes (CSP) [11] for Java) library for the simulation of this system. Java was chosen as the programming language of the simulation because of its maturity and the ease of programming CSP-based networks of processes. JCSP implements concurrent programming with processes running and communicating simultaneously. A parallel solution was chosen for the simulation to represent many devices working autonomously, each performing a behaviour depending on the values received from sensors. Devices do not rely on any global synchronisation; they only synchronise on messages and, therefore, a CSP model is a natural representation of the discussed pervasive system.

A sensor reacts to changes of the light intensity and periodically sends a broadcast signal to all available devices. Components of the presented system decide how to react to the received signal. As a broadcast mechanism is not available in CSP, a repeater is used to ensure that all devices receive a single signal. The broadcast mechanism is fixed to the number of available devices, repeating the message to all of them. If any of the devices is not available to receive the request, the broadcast process is blocked. All devices from the scenario in Figure 1 are CSP processes and use channels to communicate. For the purpose of this simulation, we assume perfect and reliable communication links.

The behaviour of the presented system needs to be visualised. All devices send their state to a Graphical Interface (GI) in order to show the results of the implemented behaviour. The GI accepts state information from devices on an any-to-one channel, visualises the data and calculates values for the indoor sensors. The GI of the presented simulation was built with jcsp.awt [8] active components to enable direct communication between the GI and other processes. The architecture implies broadcast communication between devices, as sensors do not know which devices are interested in the passed light intensity value.
The connection between the broadcast process and a device is one-to-one, to make sure that the message is delivered. The connection between a device and the GI is a many-to-one channel, enabling all devices to write to the GI. As the control algorithm is implemented in the individual lamps and the mirror, the behaviour of the system can only be observed when all devices run simultaneously. CSP has already been successfully used to simulate systems presenting emergent behaviour [12,13,14,15], showing that a process-oriented approach is suitable for modelling a non-deterministic environment.
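The broadcast behaviour just described can be pictured as a repeater that copies each incoming sensor value to a fixed set of one-to-one connections, blocking whenever a device is not ready to receive. The sketch below is a minimal stand-in and not the paper's JCSP code: the Slot class, its put/take operations and the broadcast function are assumptions made only to keep the example self-contained.

    #include <condition_variable>
    #include <cstddef>
    #include <mutex>
    #include <optional>

    // A one-place blocking slot, standing in for a synchronous channel end.
    class Slot {
        std::mutex m;
        std::condition_variable cv;
        std::optional<int> value;
    public:
        void put(int v) {                              // blocks while the previous
            std::unique_lock<std::mutex> l(m);         // value has not been taken
            cv.wait(l, [this]{ return !value.has_value(); });
            value = v;
            cv.notify_all();
        }
        int take() {                                   // blocks until a value arrives
            std::unique_lock<std::mutex> l(m);
            cv.wait(l, [this]{ return value.has_value(); });
            int v = *value;
            value.reset();
            cv.notify_all();
            return v;
        }
    };

    // The repeater copies one sensor reading to every device connection in turn;
    // if any device is not taking values, the whole broadcast blocks, matching
    // the blocking behaviour described above.
    void broadcast(int sensorReading, Slot* deviceInputs, std::size_t deviceCount) {
        for (std::size_t i = 0; i < deviceCount; ++i) {
            deviceInputs[i].put(sensorReading);
        }
    }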
Figure 6. CSP process model diagram for the implemented lighting system.
All 16 lamps (in Figure 6: L1-L16), 17 sensors (in Figure 6: indoor sensors S1-S16, outdoor sensor I) and the mirror (in Figure 6: M) are CSP processes communicating over a broadcast process (in Figure 6: B) using one-to-one and one-to-many channels. Every message sent to the broadcast process is replicated and sent to all devices. The signal is then interpreted in every device individually, according to the directions of the Lazy and Enthusiastic Employee algorithm. All devices report their state to the graphical interface process (in Figure 6: GI).

The first factor of the Lazy and Enthusiastic Employee algorithm is the choice of the behaviour curve; according to the results presented in Section 4, the curves chosen for the lights and the mirror are those presented in Figure 3. The choice was based on the algorithm's high performance in overall production cost and user comfort, as presented in Table 2. The second factor that we consider in the simulation is the size and location of the margins that define the range of reaction of the system. In this implementation, we use variable values of the ideal light intensity in a space p_i, where p_i > 0, and a fixed margin m = 50. Therefore, a region R1 = [p_i, p_i + 50] is used for the mirror and a region R2 = [p_i − 50, p_i) for the lights. The regions are disjoint (R1 ∩ R2 = ∅), therefore the lights and the mirror stabilise on different values from the light sensors. When the light is stable, the mirror is trying to go up, and the other way around; when the mirror is stable, the lights want to dim down. Therefore, the lights' first reaction is always enthusiastic. This enables the behaviour of taking over the task of illuminating the space. A light can eventually become lazy when there is no need for a fast reaction or when the space illumination should be stabilised. For the purpose of the simulation, we use arbitrary light intensity units to express the values delivered by the light intensity sensors. The ideal intensity, defined by the user, is 500 units. We have also assumed that if a light is dimmed to x%, the light sensor senses 10 · x units.

6. Experiments

The main goal of the system is to sustain a user-defined light intensity in a space while maintaining low energy use, using as much natural light as possible. We have performed two experiments using different control algorithms for a space with 16 lamps and a mirror reflecting light onto the ceiling. The aim of these two experiments is to compare the energy use of the system with and without the dimming control algorithm deployed. The space is divided into 16 regions; both a single lamp and the mirror influence each region, and light from outside is simulated to contribute 25%, 12.5%, 10% or 0% of the environment's light intensity to the sensor value, depending on the distance of the region from the window.
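As a rough illustration of how a region's sensor value could be composed under the assumptions just stated (a lamp at x% contributes 10 · x units, the outdoor light contributes a distance-dependent fraction), the sketch below shows one possible calculation; the function and parameter names are illustrative and not taken from the paper.

    // Sensor value for one region, in the arbitrary intensity units used above.
    double sensorValue(double lampDimPercent, double mirrorUnits,
                       double outdoorUnits, double windowFraction) {
        double lampUnits = 10.0 * lampDimPercent;   // a lamp at x% gives 10*x units
        return lampUnits + mirrorUnits + windowFraction * outdoorUnits;
    }

For example, a lamp dimmed to 20% in a region next to the window, with 200 units outside and the mirror off, would give 10·20 + 0 + 0.25·200 = 250 units.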
Experiment 1. Lights and the mirror react to a sensor value and are designed to sustain a user-defined level of light. Both the mirror and the lights are willing to accept every request and adapt to it; Δd is constant and no algorithm is used to regulate the devices' behaviour.
Experiment 2. Lights and the mirror react to a sensor value and are designed to sustain a user-defined level of light using the Lazy and Enthusiastic Employee Algorithm (L&EE Algorithm). The algorithm was implemented as described in Section 4. The simulation was
Figure 7. Experiments’ input data for light intensity outside.
run for 60 seconds with identical input for all experiments. The light intensity of the environment was changed over time according to Figure 7. We have created a reference data set for the proposed experiments: we have measured the energy consumption of 16 lamps that use no dimming, all lights sustaining a constant light level of 100%.

7. Results and Analysis

The simulation has a graphical interface to present the results of the experiments (Figure 8). The first part of the simulation GUI shows a 2D model of the considered space. There are 16 lamps on the ceiling, with the dimming percentage associated with each lamp (Figure 8, D).
Figure 8. Simulation GUI.
The mirror's illumination is the same for the whole room and is represented in the GUI by a half-circle (Figure 8, C). There are 16 regions and 16 sensors, one sensor associated with each region. The value of the sensor in each region is the sum of the light from a lamp, the mirror and the light spread from the window. The intensity in a region, in the agreed units, is shown in Figure 8, B. For the purpose of this simulation we calculate the value of a sensor using only the intensity from one lamp and the mirror. The brightness of the environment due to sunlight
is shown in a strip outside the room (Figure 8, A). The colours and brightness of all components in the simulation are adjusted depending on the data received from the devices. The simulation's real-time data is output to graphs as shown in Figure 9. To simplify the GUI, only data from lamp 1 (top-left corner of the room), the mirror and sensor 1 (associated with region 1) are shown in the graphs. The x axis in the graphs is time measured in milliseconds.
Figure 9. Light, mirror and indoor sensor behaviour for Experiment 1 and 2.
Experiment 1. In this experiment the lights react to values from the sensor and try to fulfil the request to increase or decrease the light level in the space. All lights and the mirror react to sensor values in the same way, increasing or decreasing the light level with the same factor, therefore no algorithm regulating their behaviour is used. The graphs in Figure 9 are divided into the time phases that were described in Figure 7. In phase 1 the light and the mirror both try to dim up; both light sources stop as soon as they reach the defined range. In phase 2, as the light outside decreases, the mirror gives less light and the lamp has to compensate. Both light sources slightly decrease in phase 3. As the environmental light decreases to 20 units in phase 4, the lamp takes over lighting the space. In phase 5 both the lamp and the mirror are stable, as both of them have reached the range. In this experiment we can observe that the mirror is usually not used, as it has a limit on how far it can dim up. The lamp can dim up easily, therefore it usually takes over lighting the space.

Experiment 2. In this experiment we use the L&EE algorithm to control the light intensity in the space. The graphs in Figure 9 are divided into the same five time phases. At the start of the simulation, in phase 1, all the lamps are off; as the sensors start sending values to the lamps, the lamps notice that the light intensity is smaller than defined by the user, therefore they start slowly dimming up. In phase 1 the mirror is also willing to dim up, and as the light intensity outside is 200 units, the mirror quickly dims up to 20%. The lamps wait for a while and then slowly start dimming up to keep the desired light level. In phase 2 the environmental conditions change and the intensity decreases to 100 units; the mirror gives less light, therefore the lamps have to compensate. Soon lamp 1 becomes stable as the range of the ideal intensity is reached. In phase 3 the light from outside increases to 700 units and the mirror takes this opportunity to dim up; meanwhile the lamp notices the possibility to give less light, therefore it dims down. After a while the mirror takes over lighting the space and lamp 1 is off.

Table 3. Energy usage for all experiments.

    Algorithm                        Energy usage (J)
    No control, lamps 100% dim       86700
    Dimming control, no algorithm    31428
    L&EE Algorithm                   16019
In phase 4 the light from the outside rapidly decreases to 20 units. The mirror stops giving light, so the light bulb is forced to dim up slowly, until the space is illuminated within the agreed range. In phase 5 the light outside increases to 300 units, therefore the mirror goes up and lamp 1 dims down to 10%. The light from outside spreads unevenly in the room: regions closer to the window are brighter than regions further away from the window. At the end of this experiment lamp 1 is dimmed to only 10%, while lamp 4 (top-right corner of the room) needs to be dimmed to 20%.

Table 4. Energy savings when comparing energy usage from experiments 1 and 2 to the reference data.

    Algorithm used           Energy savings (%)
    No dimming algorithm     63.8
    L&EE Algorithm           81.5
The energy usage results from the two experiments are compared in Table 3. The energy usage data is calculated with the assumption that all lamps are 100 W. From Table 3 we can further calculate the percentage of energy saved, relative to the reference data, by the L&EE algorithm and by the experiment with no dimming control algorithm (for example, 1 − 16019/86700 ≈ 81.5%). The results are shown in Table 4. Using the reference data set and the two experiments, we can see that the Lazy and Enthusiastic Employee algorithm can reduce energy usage when there is light outside that can be used to illuminate the space.

8. Further Work and Conclusions

In this paper we have shown an emergent behaviour arising from autonomous lighting system devices. This behaviour is based on a simple model inspired by human society. A process-oriented approach was chosen for representing this non-deterministic environment. We have used CSP to model, and JCSP to implement, this system of many concurrently executing devices and their message exchanges. The Graphical Interface benefits from the use of an any-to-one input channel for receiving information about the devices' state in order to simulate the overall light intensity in the room. The algorithm helps to save energy in spaces with daylight and enables devices that use less energy to take over a task from devices that use more energy, without central control. The algorithm was tested with different parameters, and a simulation of a lighting system in an office space was developed in order to show the possible energy savings.

The algorithm presented in this paper is tested using simulation; we have chosen arbitrary units for light intensity as we did not use any lighting model. This algorithm can be tested in a real system with actual devices. In this simulation model we also assume that a light sensor value is the sum of the luminance from the light above its location, the light distributed by the mirror and the light that gets into the room through the window. In a real system, a sensor could in general be affected by more than one light source, but no light distribution model was used in this simulation. In a real system the value from the sensor is a sum of the actual lighting conditions in the room; in the simulation, this value is calculated from the lamps and the light outside, but not according to any lighting model, therefore the calculations can be inaccurate. In this paper we described an algorithm with only two options for employees, lazy and enthusiastic, but it is possible to define a whole range of workers and assign them different behaviours using different behaviour curves, as presented in Figure 3.

Acknowledgements

The work presented in this paper and part of the experiments have been performed at NXP Semiconductors, The Netherlands.
References
[1] U.S. Energy Information Administration. Retail sales and direct use of electricity to ultimate consumers by sector, by provider, 1997 through 2008, 2008.
[2] U.S. Energy Information Administration. Electricity consumption (kWh) by end use for all buildings, 2003, 2008.
[3] Kurt W. Ruth and Kurtis McKenney. Energy consumption by consumer electronics in U.S. residences, Final report to the Consumer Electronics Association (CEA) January 2007, 2007.
[4] Worldwatch Institute. Compact Fluorescent Lamps Could Nearly Halve Global Lighting Demand for Electricity, 2008.
[5] U.S. Department of Energy. Comparison of LEDs to Traditional Light Sources, 2009.
[6] NEN. NEN 2916:2004, Dutch Standards Institute Norm, Energy performance of residential buildings, 2004.
[7] Marco Mamei and Franco Zambonelli. Field-Based Coordination for Pervasive Multiagent Systems. Springer, 2006.
[8] P. H. Welch and P. D. Austin. The JCSP home page. http://www.cs.ukc.ac.uk/projects/ofa/jcsp/, 1999.
[9] Peter H. Welch, Neil C.C. Brown, J. Moores, K. Chalmers, and B. Sputh. Integrating and Extending JCSP. In Alistair A. McEwan, Steve Schneider, Wilson Ifill, and Peter Welch, editors, Communicating Process Architectures 2007, volume 65 of Concurrent Systems Engineering Series, pages 349–370, Amsterdam, The Netherlands, July 2007. IOS Press. ISBN: 978-1-58603-767-3.
[10] Aly A. Syed, Johan Lukkien, and Roxana Frunza. An ad hoc networking architecture for pervasive systems based on distributed knowledge. In Proceedings of Date2010, Dresden, 2010.
[11] C. A. R. Hoare. Communicating sequential processes. Commun. ACM, 21(8):666–677, 1978.
[12] C. G. Ritson and P. H. Welch. A process-oriented architecture for complex system modelling. Concurr. Comput.: Pract. Exper., 22:965–980, June 2010.
[13] Christopher A. Rouff, Michael G. Hinchey, Walter F. Truszkowski, and James L. Rash. Experiences applying formal approaches in the development of swarm-based space exploration systems. Int. J. Softw. Tools Technol. Transf., 8:587–603, October 2006.
[14] Peter H. Welch, Frederick R. M. Barnes, and Fiona A. C. Polack. Communicating complex systems. In Proceedings of the 11th IEEE International Conference on Engineering of Complex Computer Systems, pages 107–120, Washington, DC, USA, 2006. IEEE Computer Society.
[15] A.T. Sampson, P.H. Welch, and F.R.M. Barnes. Lazy Simulation of Cellular Automata with Communicating Processes. In J.F. Broenink, H.W. Roebbers, J.P.E. Sunter, P.H. Welch, and D.C. Wood, editors, Communicating Process Architectures 2005, volume 63 of Concurrent Systems Engineering Series, pages 165–175, Amsterdam, The Netherlands, September 2005. IOS Press. ISBN: 1-58603-561-4.
LUNA: Hard Real-Time, Multi-Threaded, CSP-Capable Execution Framework

M. M. BEZEMER, R. J. W. WILTERDINK and J. F. BROENINK
Control Engineering, Faculty EEMCS, University of Twente, P.O. Box 217, 7500 AE Enschede, The Netherlands
{M.M.Bezemer, J.F.Broenink}@utwente.nl

Abstract. Modern embedded systems have multiple cores available. The CTC++ library is not able to make use of these cores, so a new framework is required to control the robotic setups in our lab. This paper first looks into the available frameworks and compares them to the requirements for controlling the setups. It concludes that none of the available frameworks meet the requirements, so a new framework is developed, called LUNA. The LUNA architecture is component based, resulting in a modular structure. The core components take care of the platform related issues. For each supported platform, these components have a different implementation, effectively providing a platform abstraction layer. High-level components take care of platform-independent tasks, using the core components. Execution engine components implement the algorithms taking care of the execution flow, like a CSP implementation. The paper describes some interesting architectural challenges encountered during the LUNA development and their solutions. It concludes with a comparison between LUNA, C++CSP2 and CTC++. LUNA is shown to be more efficient than CTC++ and C++CSP2 with respect to switching between threads. Also, running a benchmark using CSP constructs shows that LUNA is more efficient compared to the other two. Furthermore, LUNA is also capable of controlling actual robotic setups with good timing properties.

Keywords. CSP, framework architecture, hard real-time, performance comparison, rendezvous communication, scheduling, threading.
Introduction

Context

Nowadays, many embedded systems have multiple cores at their disposal. In order to be able to run more challenging (control) algorithms, embedded control software should be able to make use of these extra cores. Developing complex concurrent software tends to become tedious and error-prone. CSP [1] can ease such a task. Especially in combination with a graphical modeling tool [2], designing such complex systems becomes easier and the tool can help in reusing earlier developed models. CTC++ [3] is a CSP based library, providing a hard real-time execution framework for CSP based applications.

When controlling robotic setups, real-time behaviour is an important property. There are two levels of real-time: hard real-time and soft real-time. According to Kopetz [4]: "If a result has utility even after the deadline has passed, the deadline is classified as soft (. . . ) If a catastrophe could result if a deadline is missed, the deadline is called hard". Figure 1 shows the layered design, used in our Control Engineering group, for embedded software applications connected to actual hardware.
Figure 1. Software architecture for embedded systems [5].
Each layer supports a type of real-time, varying from non real-time to hard real-time. The ‘Loop control’ is the part of the application responsible for controlling the physical system and it is realised in a hard real-time layer. The hard real-time layer has strict timing properties, guaranteeing that given deadlines are always met. If this for whatever reason fails, the system is considered unsafe and catastrophic accidents might happen with the physical system or its surroundings due to moving parts. The soft real-time layer tries to meet its deadlines, without giving any hard guarantees. If the design is correct, nothing serious should happen in case such a deadline is not met. This layer can be used for those parts of the application which are more complex and require more time to run their tasks, like algorithms which map the environment, plan future tasks of the physical system or communicate with other systems. The non real-time layer does not try to meet any deadlines, but provides means for long-running tasks or for a user interface. The left-over resources of the system are used for these tasks, without any guarantee of their availability.

Robotic and mechatronic setups like the ones in our lab require a hard real-time layer, since it is undesirable for the actual setups to go haywire. The use of Model Driven Development (MDD) tools makes developing for complex setups a less complex and more maintainable task [6]. For the multi-core or multi-CPU embedded platforms, we would like to make use of these extra resources. Unfortunately, the CTC++ library, as it is, is not suitable for these platforms, as it can only use one core or CPU. This paper evaluates possibilities to overcome this problem.

The requirements for a suitable framework that can be used for robotic and mechatronic setups are:
• Hard real-time. This incorporates that the resulting application needs to be deterministic, so it is possible to guarantee that deadlines are always met. The framework should provide a layered approach for such hard real-time systems (see Figure 1).
• Multi-platform. The setups have different kinds of hardware platforms to run on, like PowerPC, ARM or x86 processors. Different operating systems should also be supported by the framework.
• Thread support. In order to take advantage of multi-core or multi-CPU capable target systems.
• Scalability. All kinds of setups should be controlled: from the big robotic humanoids in our lab to small embedded platforms with limited computing resources.
• CSP execution engine. Although it should not force the use of CSP constructs when the developer does not want it, as this might result in not using the framework at all.
• Development time. The framework should decrease the development time for complex concurrent software.
• Debugging and tracing. Provide good debugging and tracing functionality, so applications developed using the framework can be debugged easily, and unexpected behaviour of the framework can be detected and corrected during development. Real-time logging functionality could preserve the debug output for later inspection.
The CTC++ library meets most requirements; however, as mentioned before, it does not have thread support for multi-core target systems. It also has a tight integration with the CSP execution engine, so it is not possible to use the library without being forced to use CSP as well. This is an obstacle to using the library from a generic robotics point of view and results in ignoring the CTC++ library altogether, as is experienced in our lab. A future framework should prevent this tight integration. By adding a good MDD tool to the toolchain, robotics-oriented people can gradually get used to CSP. It might seem logical to perform a major update of CTC++. Unfortunately, the architecture and structure of the library have become outdated over the years, making it virtually impossible to make such major changes to it. So other solutions need to be found to fulfil our needs.

Existing Solutions

This section describes other frameworks which could replace the CTC++ library. For each framework the list of requirements is discussed, to get an idea of the usability of the framework. A good candidate is the C++CSP2 library [7], as it already has a multi-threaded CSP engine available. Unfortunately it is not suitable for hard real-time applications controlling setups. It actively makes use of exceptions to influence the execution flow, which makes an application non-deterministic. Exceptions are checked at run-time, by the C++ run-time engine. Because the C++ run-time engine has no notion of custom context switches, exceptions are considered unsafe for usage in hard real-time setups. Also, exceptions cannot be implemented in a deterministic manner, so they might destroy the timing guarantees of the application. C++CSP2 also does not provide priorities which could be set for processes or groups of processes. This is essential to have hard, soft and non real-time layers in a design in order to meet the scheduled deadlines of control loops. Finally, it makes use of features which are not commonly available on embedded systems. On such systems it is common practice to use the microcontroller C library (uClibc) [8], in which only commonly used functionality of the regular C library is included. Most notably, one of the functionalities which is not commonly included in uClibc is Thread Local Storage, which is used by C++CSP2.

Since Java is not hard real-time, for example due to the garbage collector, we did not look into the Java based libraries, like JCSP [9]. There is, however, a new Java virtual machine, called JamaicaVM [10], which claims to be hard real-time and to support multi-core targets. Nonetheless, JCSP was designed without hard real-time constraints in mind and so may not be suitable for hard real-time.

Besides these specific CSP frameworks, there are non-CSP-based frameworks to which a CSP layer might be added. OROCOS [11] and ROS [12] are two of these frameworks and both claim to be real-time. But both will not be able to run hard real-time 1 kHz control loops on embedded targets which are low on resources. Their claim about being real-time is probably true when using dedicated hardware for the control loops, which is fed by the framework with ‘setpoints’. Basically, the framework is operating at a soft real-time level, since it does not matter if a setpoint arrives slightly late at the hardware control loop. In our group we like to design the control loops ourselves and do not use such hardware control loop solutions.
Furthermore, it is impossible to use formal methods to confirm that a complex application using one of these frameworks is deadlock or livelock free, because of the size and complexity of these frameworks [13]. Based on the research performed on these frameworks, we have decided to start over and implement a completely new framework. Available libraries, especially the CTC++ and C++CSP2 libraries, are helpful for certain constructs, ideas and solutions. The new framework can reuse these useful and sophisticated parts, to prevent redundant work and knowledge
being thrown away. After implementing the mentioned requirements, it should be able to keep up with our future expansion ideas.

Outline

The next section describes the general idea behind the new framework: threading, the CSP approach, channels and the alternative functionality. Section 2 compares the framework with the other related CSP frameworks mentioned earlier, using some timing tests and when actually controlling real setups. In the next section, the conclusions about the new framework are presented. The last section discusses future work and possibilities.

1. LUNA Architecture

The new framework is called LUNA, which stands for ‘LUNA is a Universal Networking Architecture’. A (new) graphical design and code generation tool, like gCSP [14], is also planned, tailored to be compatible with LUNA. This MDD tool will be called Twente Embedded Real-time Robotic Application (TERRA). It is going to take care of model optimisations and, as a result, generate more efficient code, in order to reduce the complexity and the need for optimisations in LUNA itself.
Figure 2. Overview of the LUNA architecture.
LUNA is a component based framework that supports multiple target platforms; currently planned are QNX, RTAI and Xenomai. To make development more straightforward, Linux and Windows will also be supported as additional platforms. Figure 2 shows an overview of the LUNA components and the levels they are on. The gray components are not implemented yet, but are planned for future releases.

The Core Components (1) level contains basic components, mostly consisting of platform supporting components, providing a generic interface to the platform specific features. OS abstraction components are available to support the target operating system (OS), providing threading, mutexes, timers and timing. The architecture abstraction components provide support for features specific to an architecture (or hardware platform), like the support for (digital) input and output (I/O). Other components can use these core components to access platform specific features without knowledge of the actually chosen platform. Another group of core components are the utility components, implementing features like debugging, generic interfaces and data containers.

The next level contains the High-level Components (2). These are platform independent, implementing functionality using the core components. An example is the Networking component, providing networking functionality and protocols. It typically uses a socket component as platform-dependent glue and builds (high-level) protocols upon these sockets.
The Execution Engine Components (3) implement (complex) execution engines, which are used to determine the flow of the application. For example, a CSP component provides constructs to have a CSP-based execution flow. The CSP component typically uses the core components for threading, mutexes and so on, and it uses high-level components like networking to implement networked rendezvous channels. Components can be enabled or disabled in the framework depending on the type of application one would like to develop, so unused features can be turned off in order to save resources. Since building LUNA is complex, due to the component based approach and the variety of supported platforms, a dedicated build system is provided. It is heavily based on the OpenWrt buildroot [15,16].

The initially supported platform is QNX [17], which is a real-time micro-kernel OS. QNX natively supports hard real-time and rendezvous communication. This seemed ideal to start with, relieving the development load for an initial version of LUNA. As QNX is POSIX compliant, a QNX implementation of LUNA would result in supporting other POSIX compliant operating systems as well, or at least those parts of the OS which are compatible, as not many operating systems are fully POSIX compliant.

1.1. Threading Implementation

LUNA supports OS threads (also called kernel threads) and User threads, to be able to make optimal use of multi-core environments. OS threads are resource-heavy, but are able to run on different cores; User threads are light on resources, but must run in an OS thread and thus run on the same core as that OS thread. A big advantage of OS threads is their preemptive capability: their execution can be forcefully paused at any point, for example due to a higher priority thread becoming ready. User threads can only be paused at specified moments; if such a moment is not reached, for example due to complex algorithm calculations, other User threads on the same OS thread will not get activated. Combining resource-heavy OS threads and non-preemptable User threads results in a hybrid solution. This allows for constructing groups of threads which can be preempted but are not too resource-heavy.

As the term already implies, the OS threads are provided and maintained by the OS. For example, the QNX implementation uses the POSIX thread implementation provided by QNX, and for Windows LUNA would use the Windows threads. Therefore, the behaviour of an OS thread might not be exactly the same for each platform. The User threads are implemented and managed by LUNA, using the same principles as [7,18], except that the LUNA User threads are not run-time portable to other OS threads: there is no need for it and it would break the hard real-time constraints.

Figure 3 shows the LUNA threading architecture. Two of the component levels of Figure 2 are visible, showing the separation of the threading implementation and the CSP implementation. UThreadContainer (UTC) and OSThread are two of the available thread types, both implementing the IThread interface. The IThread interface requires a Runnable, which acts as a container to hold the actual code which will be executed on the thread. The CSP functionality, described in more detail in the next section, makes use of the Runnable to provide the code for the actual CSP implementation. To make the earlier mentioned hybrid solution work, each OS thread needs its own scheduler to schedule the User threads.
This scheduling mechanism is divided into two objects:
1. the UTC, which handles the actual context switching in order to activate or stop a User thread;
2. the UScheduler, which contains the ready and blocked queues and decides which User thread is the next to become active.

Figure 3. UML diagram of threads and their related parts.

The UTC also contains a list of UThreads, which are the objects containing the ‘context’ of a User thread: the stack, its size and other related information. Besides this context-related data, a UThread also has a relation with the Runnable which should be executed on the User thread. For the CSP functionality, a ‘separation of concerns’ approach is taken for the CSP processes and the threads they run on. The CSP processes are indifferent to whether the underlying thread is an OS thread or a User thread, which is a major advantage when running on multi-core targets. This approach can be taken a step further in a distributed CSP environment where processes are activated on different nodes. It will also facilitate deployment, seen from a supervisory control node. Due to this separation, it is also possible to easily implement other execution models.

The figure shows that the Sequential, Parallel and Recursion processes do not inherit from CSProcess but from CSPConstruct. The CSPConstruct interface defines the activate, done and exit functions, and CSProcess defines the actual run functionality and the context blocking mechanisms. Letting these processes inherit from CSPConstruct is an optimisation: this way they do not require context-switches, because their functionality is placed in the activate and done functions, which are executed in the context of their parent respectively child threads. The Alternative implementation still is a CSProcess, because it might need to wait on one of its guards to become ready and therefore needs the context blocking functionality of CSProcess.

The UTC implements the Runnable interface so that it can be executed on an OS thread. When the UTC threading mechanism starts, it switches to the first User thread as a kickstart for the whole process. When a User thread finishes, yields or is explicitly blocked, the UTC code switches to the next User thread which is ready for execution. Due to this architectural decision, the scheduling mechanism does not run on a separate thread, but makes use of the original thread, in between the execution of two User threads. During tests, the number of threads was increased to 10,000 without any problems. All threads were created initially and they performed their task: increase a number and print it. After executing its task, each thread was properly shut down.
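A minimal sketch of how these relations could look in code is given below. The class and member names follow the description above, but the exact signatures are assumptions and not LUNA's actual API.

    #include <vector>

    // Code to be executed on a thread.
    class Runnable {
    public:
        virtual ~Runnable() {}
        virtual void run() = 0;
    };

    // Common interface of the thread types; each thread executes one Runnable.
    class IThread {
    public:
        virtual ~IThread() {}
        virtual void start(Runnable* body) = 0;
    };

    class OSThread : public IThread {            // kernel thread: preemptable,
    public:                                      // can run on another core
        void start(Runnable* body) override { /* create an OS thread for body */ }
    };

    // Context of a single User thread: its stack and the Runnable it executes.
    struct UThread {
        unsigned char* stack;
        unsigned long  stackSize;
        Runnable*      body;
    };

    // Decides which User thread runs next (ready and blocked queues).
    class UScheduler {
    public:
        std::vector<UThread*> ready;
        std::vector<UThread*> blocked;
        UThread* next() { return ready.empty() ? nullptr : ready.front(); }
    };

    // Runs a group of User threads inside one OS thread; it is itself a Runnable,
    // so it can be started on an OSThread. The scheduler runs in between two
    // User threads, on the original OS thread, as described above.
    class UThreadContainer : public IThread, public Runnable {
    public:
        std::vector<UThread*> threads;
        UScheduler scheduler;
        void start(Runnable* body) override { /* wrap body in a UThread */ }
        void run() override { /* switch to the first ready User thread */ }
    };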
1.2. LUNA CSP

Since LUNA is component based, it is possible to add another layer on top of the threading support. Such a layer is the support layer for a CSP-based execution engine. It is completely separated from the threading model, so it will run on any platform that has threading support within LUNA. Each CSP process is mapped onto a thread. Because of the separation of CSP and the threading model, the CSP processes are indifferent to whether the underlying thread is an OS thread or a User thread, which is a major advantage when running on multi-core targets. This will also facilitate code generation, since code generation needs to be able to decide how to map the CSP processes onto the available cores in an efficient way, without being limited by thread types. Figure 4 shows the execution flow of three CSProcess components, being part of this greater application:

    P = Q || R || S
    Q = T ; U

Process P is a parallel process and has some child processes, of which process Q is one. Process Q is a sequential process and also has some child processes. Process T is one of these child processes and it does not have any child processes of its own.
Figure 4. Flow diagram showing the conceptual execution flow of a CSProcess.
First, the pre run of all processes is executed; this can be used to initialise a process just before running the actual semantics of the CSProcess. Next, the processes wait in wait for next iteration until they are allowed to start their run body. After all processes have
executed their pre run, the application itself is really started, so the pre run does not have to be deterministic yet. The post run of each process is executed when the process is shut down, normally when the application itself is shut down. It gives the processes a chance to clean up the things they initialised in their pre run.

In this example, P will start when it is activated by its parent. Due to the parallel nature of the process, all children are activated at once and the process will then wait until all children are done before signalling its parent that it is finished. Process Q is only one of the processes that is activated by P. Q will activate only its first child process and wait for it until it is finished, because Q is a sequential process. If there are more children available, the next one is activated, and so on. T is just a simple code blob which needs to be executed. So at some point it is activated by Q, it executes its code and it sends a signal back to Q that it is finished. The same goes for Q: when all its child processes are finished, it sends a signal back to P, telling P that it is finished.

Due to this behaviour, the CSP constructs are implemented in a decentralised way by the CSProcesses themselves, instead of by a central scheduler. This results in a simple, generic scheduling mechanism without any knowledge of the CSP constructs, unlike CTC++, which has a scheduler that has knowledge of all CSP constructs in order to implement them and run the processes in the correct order.
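As an illustration only, the composition P = Q || R || S with Q = T ; U could be built along the following lines. The class names mirror the description above, but the constructors, the add() method and the sequential activation inside Parallel are assumptions made to keep the sketch self-contained and runnable; in LUNA the children of a Parallel would be activated on their own threads.

    #include <memory>
    #include <vector>

    class CSPConstruct {
    public:
        virtual ~CSPConstruct() {}
        virtual void activate() = 0;        // called by the parent
    };

    class CodeBlock : public CSPConstruct { // a leaf process such as T or U
        void (*body)();
    public:
        explicit CodeBlock(void (*f)()) : body(f) {}
        void activate() override { body(); }    // run the code, then signal done
    };

    class Sequential : public CSPConstruct {
        std::vector<std::unique_ptr<CSPConstruct>> children;
    public:
        void add(std::unique_ptr<CSPConstruct> c) { children.push_back(std::move(c)); }
        void activate() override {              // children run one after another
            for (auto& c : children) c->activate();
        }
    };

    class Parallel : public CSPConstruct {
        std::vector<std::unique_ptr<CSPConstruct>> children;
    public:
        void add(std::unique_ptr<CSPConstruct> c) { children.push_back(std::move(c)); }
        void activate() override {
            // In LUNA each child would run on its own (User or OS) thread and the
            // Parallel would wait for all 'done' signals; here the children are
            // simply activated in turn to keep the sketch self-contained.
            for (auto& c : children) c->activate();
        }
    };

    void t() { /* body of T */ }
    void u() { /* body of U */ }
    void r() { /* body of R */ }
    void s() { /* body of S */ }

    int main() {
        auto q = std::make_unique<Sequential>();    // Q = T ; U
        q->add(std::make_unique<CodeBlock>(t));
        q->add(std::make_unique<CodeBlock>(u));

        Parallel p;                                 // P = Q || R || S
        p.add(std::move(q));
        p.add(std::make_unique<CodeBlock>(r));
        p.add(std::make_unique<CodeBlock>(s));
        p.activate();
    }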
Figure 5. Steps from a model to a LUNA based mapping on OS threads.
Since the CSP processes are indifferent to the type of thread they run on and to how they are grouped on OS threads, LUNA needs to provide a mechanism to actually attach these processes to threads. When looking at a gCSP model (left-most part of the figure), a compositional hierarchy can be identified in the form of a tree (middle part of the figure). The MDD tool has to map the processes onto a mix of OS and User threads using this compositional information, and generate code. Because of the ‘separation of concerns’, code generation is straightforward, as the interoperation of OS and User threads is handled by LUNA. Figure 5 shows the required steps to map the model to OS threads. First, the model needs to be converted to a model tree (number 1 in the figure). This model tree contains the compositional relations between all processes. Second, the user (or the modeling tool) needs to group the processes (2) which are to be put on the same OS thread; criteria for grouping could be, for example, heavy communication between processes or balancing the execution load. Each process is mapped to a UThread object (3), except for the compositional processes mentioned in the previous section, which are mapped onto CSPConstructs. Next, each group of UThreads is put in a UThreadContainer (UTC) (4). Finally, each UTC is mapped to an OS thread (5), so the groups of processes can actually run in parallel and have preemption capabilities. It is clear that making good groups of processes will influence the efficiency of the application, so using an automated tool is recommended [19].
1.3. Channels

One of the initial reasons for supporting QNX was the availability of native rendezvous communication support between QNX threads. This indeed made it easy to implement channels for the OS threads, but unfortunately not for the User threads. The main problem is that two User threads which want to communicate may be placed on the same OS thread. If one User thread wants to communicate over a rendezvous channel and the other side is not ready, the QNX channel blocks the thread. But QNX does not know about the LUNA-implemented scheduler and its User threads, so it blocks the OS thread instead. The other User thread, which is required for the communication, now never becomes ready and a deadlock occurs. So unfortunately, for communication between User threads on the same OS thread, the QNX rendezvous channels are not usable, and the choice to initially support QNX became less strong.

Figure 6 shows the two possible channel types. Channel 1 is a channel between two OS threads; the QNX rendezvous mechanism can be used for this channel. Channels 2a and 2b are communication channels between two User threads; it does not matter whether the User threads are on the same OS thread or not. For this type of channel the QNX rendezvous mechanisms cannot be used, as explained earlier, as they could block the OS thread and therefore prevent execution of other User threads on that OS thread. An exception could be made for
Figure 6. Overview of the different channel situations.
OS threads with one User thread, but such situations are undesired, since it is more efficient to directly run code on the OS thread, without the User thread in between. Guarded channels are also not supported by QNX, so for this type of channel a custom implementation is also required.
Figure 7. Diagram showing the channel architecture.
Figure 7 shows the architecture of the channel implementation. A channel is constructed modularly: The buffer, Any2In and Out2Any types can be exchanged with other compatible types. The figure shows an unbuffered any-to-any channel, but a buffered any-to-one is also possible, along with all kinds of other combinations.
write() {
    ILockable.lock()
    if (isReaderReady()) {
        IReader reader = findReadyReaderOrBuffer()
        transfer(writer, reader)
        reader.unblockContext()
        ILockable.unlock()
    } else {
        setWriterReady(writer)
        ready_list.add(writer)
        writer.blockContext(ILockable)
    }
}
Listing 1. Pseudocode showing the channel behaviour for a write action.
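By symmetry with Listing 1, and following the explanation below that reading works exactly the other way around, the read action could be sketched in the same pseudocode style as follows. The names isWriterReady(), findReadyWriterOrBuffer() and setReaderReady() are assumed by symmetry and are not taken from the paper.

    read() {
        ILockable.lock()
        if (isWriterReady()) {
            IWriter writer = findReadyWriterOrBuffer()
            transfer(writer, reader)
            writer.unblockContext()
            ILockable.unlock()
        } else {
            setReaderReady(reader)
            ready_list.add(reader)
            reader.blockContext(ILockable)
        }
    }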
Listing 1 shows the pseudocode for writing on a channel. The ILockable interface is used to gain exclusive access to the channel, in order to make it ‘thread safe’. Basically, there are two options: either there is a reader (or buffer) ready to communicate, or not. If the reader is already waiting, the data transfer is performed and the reader is unblocked, so it can be scheduled again by its scheduler when possible. In the situation that the reader is not available, the writer needs to be added to the ready list of the channel, so the channel knows about the writers which are ready for communication. This list is ordered by process priority. The writer then needs to be blocked until a reader is present. The same goes for reading from a channel, but exactly the other way around.

The findReadyReaderOrBuffer() method checks if there is buffered data available, otherwise it calls a findReadyReader() method to search for a reader which is ready. The isReaderReady() and findReadyReader() methods are implemented by the Out2Any block, or by a similar block that is used. So, depending on the input type of the channel, the implementation is quite simple when only one reader is allowed on the channel, or more complex when multiple readers are allowed. The transfer() method is implemented by the (Un)bufferedChannel and is therefore able to read from a buffer or from an actual reader, depending on the channel type.

LUNA supports communication between two User threads on the same OS thread by a custom-developed rendezvous mechanism. When a thread tries to communicate over a channel and the other side is not ready, it gets blocked using the IThreadBlocker (see Figure 3). By using the IThreadBlocker interface, the thread type does not matter, since the implementation of this interface depends on the thread type. For User threads, the scheduler puts the current thread on the blocked queue and activates a context-switch to another User thread which is ready. This way the OS thread keeps running and the User thread is blocked until the channel becomes ready and the scheduler activates it. For OS threads, a semaphore is used to completely block the OS thread until the channel is ready.

As mentioned at the start of this section, there are different implementations of channels: the QNX implementation, used for communication between OS threads, and the LUNA implementation, for communication between User threads and/or OS threads. It would be cumbersome for a developer to have to remember to choose between these types, especially when the User threads are not yet mapped to their final OS threads. So a channel factory is implemented in LUNA. When all CSP processes are mapped on their threads, this factory can be used to determine what types of channels are required. Having the information on the types of threads the CSP processes are mapped on is sufficient to determine the required channel implementation. At run-time, before the threads are activated, the factory needs to be invoked to select a correct implementation for each channel. If a developer (or code generation) moves
a CSP process to another OS thread, the factory will adapt accordingly, using the correct channel implementation for the new situation.

1.4. Alternative

The Alternative architecture is shown in Figure 8. It is a CSProcess itself, but it also has a list of other CSProcesses which implement the IGuard interface. Alternative uses the list when it is activated and will try to find a process which meets its IGuard conditions. Currently, the only guarded processes that are available are the GuardedWriter and GuardedReader processes. But others might be added, as long as they implement the IGuard interface.
Figure 8. Diagram showing the relations for the Alternative architecture.
In the case of channel communication, it first checks whether a reader or writer is guaranteed to perform channel communication without blocking, and makes sure this guarantee stays intact. Next, it performs the communication itself. The Alternative implements a sophisticated protocol in order to make sure the communication is guaranteed, even though different threads are part of the communication or some of the processes on the channel might not be guarded. First, Figure 9 shows a situation where a guarded reader gains access to a channel, but blocks when it should actually read the contents, because another reader came in between. Some of the objects in Figure 8 are grouped by the dashed boxes; they are shown in Figure 9 as a single object to keep things simple.
Figure 9. Sequence diagram showing a situation were a guarded reader blocks.
Assume we have an any-to-any channel, which has a writer waiting to communicate (1 in the figure). The Alternative is activated (2) and checks if the GuardedReader is ready. The GuardedReader is only ready if there is a writer or buffer waiting to communicate, so it checks with the channel. When the GuardedReader indeed is ready, it gets activated so it can be scheduled by the scheduler to actually perform the communication.
Unfortunately, before the communication takes place, another Reader is activated and wants to communicate on the channel as well (3); since there is a writer present, the communication takes place. Later, the GuardedReader is activated (4), but the writer is not available anymore and the GuardedReader is blocked, even though it gained access to the channel through the Alternative. To prevent such behaviour a more sophisticated method is used, shown in Figure 10. This example describes a situation where both channel ends are guarded, to be able to describe the protocol completely. Whether this situation is used in real applications or not is out of scope. Again the writer registers at the channel, indicating that it is ready to write data (1). There is no reader available yet, so the writer is put in the ready list and gets false as a result. Next, Alternate2 continues to look for a process which can be activated, but this is not of interest for the current situation.
!"#
# $
$
$ #
$
$
Figure 10. Sequence diagram showing the correct situation.
When it becomes active, Alternate1 checks whether the GuardedReader is ready or not (2). Since the ready list has items on it, the channel is ready for communication. To prevent other readers from interfering with our protocol, the channel gets locked. If Reader wants to read from the channel, it gets blocked due to the lock. This is in contrast with the previous example, where the GuardedReader got wrongly blocked. When the isReady() request returns positive, Alternative1 checks whether another guard, for which isReady() was previously requested, has not already been reconfirmed. If this is not the case, it will lock() the Alternative for an exclusive reconfirm request, preventing other guards from taking over the current communication. Before the actual transfer, Alternative1 needs to check whether the GuardedWriter is still ready to write. It might be possible that Alternate2 found another process to activate and the GuardedWriter is not ready anymore. Using the confirm() method, Alternative1 asks the channel for this and the channel forwards the question to Alternate2 via the GuardedWriter, using the reconfirm() method. Assuming that the GuardedWriter is still ready, the channel directly performs the transfer of data. This is not strictly necessary, but it is more efficient, as the channel becomes available for other communications earlier. In the end, Alternative1 revokes the isReady() requests of its other guarded processes, since a process was chosen, and it activates the GuardedReader. For this example situation this is unnecessary, since the transfer is completed already, but for other (non reader/writer) processes it is required in order to run the guarded process code. Also, the GuardedReader might be used to activate a chain of other processes.
The described alternative sequence of Figure 10 has been tested for some basic use cases. Although it is not formally proven, it is believed that this implementation satisfies the CSP requirements of the alternative construct.

2. Results

This section shows some of the results of the tests performed on and with the LUNA framework. The tests compare LUNA with other CSP frameworks, to see how the LUNA implementation performs.
Figure 11. Overview of the used test setup.
All tests in this section are performed on an embedded PC/104 platform with a 600 MHz x86 CPU, as shown in Figure 11. It is equipped with an FPGA based digital I/O board to connect it to actual hardware when required for a test. While implementing and testing LUNA, QNX seemed to be slower than Linux. To keep the test results comparable, all presented tests are executed under QNX (version 6.4.1) and compiled with the corresponding qcc (version 4.3.3) with the same flags (optimisation flag: -O2) enabled.

2.1. Context-switch Speed

After the threading model was implemented, a context-switch speed test was performed to get an idea of the efficiency of the LUNA architecture and implementation. To measure this speed, an application was developed consisting of two threads switching 10,000 times. The execution times were measured and the average switching time was calculated to get a more precise context-switching time. Table 1 shows these times.

Table 1. Context-switch speeds for different platforms.
Platform: CTC++ ‘original’, C++CSP2, CTC++ QNX, LUNA QNX
OS thread (μs) 3.224 3.213 3.226
User thread(μs) 4.275 3.960 1.569
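To make the procedure concrete, the same kind of measurement can be sketched with standard C++ threads: two threads hand a token back and forth 10,000 times and the average hand-over time is reported. This is only an illustrative harness (it measures OS-thread switches via a condition variable); it is not the LUNA, CTC++ or C++CSP2 test code, and its absolute numbers will differ from Table 1.

  #include <chrono>
  #include <condition_variable>
  #include <iostream>
  #include <mutex>
  #include <thread>

  // Two threads alternate ownership of a token; each hand-over forces a
  // context switch, so (elapsed time) / (2 * N) approximates one switch.
  int main() {
      constexpr int N = 10000;
      std::mutex m;
      std::condition_variable cv;
      bool workers_turn = false;

      auto start = std::chrono::steady_clock::now();
      std::thread worker([&] {
          for (int i = 0; i < N; ++i) {
              std::unique_lock<std::mutex> lk(m);
              cv.wait(lk, [&] { return workers_turn; });
              workers_turn = false;
              cv.notify_one();
          }
      });
      for (int i = 0; i < N; ++i) {
          std::unique_lock<std::mutex> lk(m);
          workers_turn = true;
          cv.notify_one();
          cv.wait(lk, [&] { return !workers_turn; });
      }
      worker.join();
      auto elapsed = std::chrono::steady_clock::now() - start;
      double us = std::chrono::duration<double, std::micro>(elapsed).count();
      std::cout << "average per switch: " << us / (2.0 * N) << " us\n";
  }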
The CTC++ ‘original’ row shows the test results of the original CTC++ library compiled for QNX. It is not a complete QNX implementation; only the parts required for the test are made available. In order to be able to compile the CTC++ library for QNX, a few things needed to change:
• The setjmp/longjmp implementation used when switching to another User thread. The Stack Pointer (SP) was changed to use the correct field for QNX.
• Linux does not save the signal mask by default when executing setjmp and longjmp. QNX does, which slows down the context switches considerably. Therefore, the versions of setjmp and longjmp that do not save the signal mask are used for the QNX conversion.
• The compiler and its flags, in order to use the QNX variants.
• The inclusions of the default Linux headers are replaced with their QNX counterparts.
• Some platform-dependent code did not compile and is not required to be able to run the tests, so it was removed.
To use the C++CSP2 library with QNX, the same changes were made as for the CTC++ library, except the SP modification, as it was not required for C++CSP2. As mentioned, the signal-mask-skipping variants of setjmp/longjmp are used for the quick conversion to QNX, although the library already used such a longjmp, but not the corresponding setjmp. This might indicate that the author knew of this difference and intended different behaviour. The QNX implementation for the C++CSP2 library is also not complete: only the required parts are tested; all other parts are not tested for compatibility. For the test a custom application was created, as the provided C++CSP2 test suite did not contain a pure context-switching test.
CTC++ QNX [20] is an initial attempt to recreate the CTC++ library for QNX. It was not completely finished, but all parts needed for the commstime benchmark are available. LUNA QNX is the new LUNA framework compiled with the QNX platform support enabled. For other platforms the results will be different, but the same goes for the other libraries.
The OS thread column shows the time it takes to switch between two OS threads. The User thread column shows the time it takes to switch between two User threads placed on the same OS thread. For LUNA it is clear that the OS thread context-switches are slower than the User thread switches, which is expected and the reason for the availability of User threads. All three OS thread implementations almost directly invoke the OS scheduler and therefore have roughly the same context-switch times. A surprising result is found for C++CSP2: the OS thread context-switch time is similar to the User thread time. The User threads are switched by the custom scheduler, which seems to contain a lot of overhead, probably for the CSP implementation. Expected behaviour is found in the next test, when CSP constructs are executed. In that test the custom scheduler gets invoked for the OS threads as well, resulting in an increase of the OS context-switch time; in this situation the User threads become much faster than the OS threads.
The context-switch time for the LUNA User threads is much lower compared to the others. The LUNA scheduler has a simple design and implementation, as the actual CSP constructs are in the CSProcess objects themselves. This approach pays off when purely looking at context-switch speeds. The next section performs a test that actually runs CSP constructs, showing whether it also pays off for such a situation.
2.2. Commstime Benchmark
To get a better idea of the scheduling overhead, the commstime benchmark [21] is implemented, as shown in Figure 12. This test passes a token along a circular chain of processes. The Prefix process starts the sequence by passing the token to Delta, which again passes it on to the Prefix via the Successor process. The Delta process also signals the TimeAnalysis process, so it is able to measure the time it took to pass the token around. The difference between this benchmark and the context-switch speed test is that in this situation a scheduler is required to activate the correct CSP process depending on the position of the token. Table 2 shows the cycle times for each library for the commstime benchmark. The commstime tests are taken from the respective examples and assumed to be optimal for their CSP implementation. LUNA QNX has two values: the first is for the LUNA channel implementation and the second for the QNX channel implementation.
It is remarkable that the QNX channels are slower than the LUNA channels. This is probably due to the fact that the QNX channels are always any-to-any, while the LUNA channels used here are one-to-one. The number of context switches of OS threads is unknown, since the actual thread switching is handled by the OS scheduler, which has preemption capabilities, and there is no interface to retrieve this data.
Figure 12. Model of the commstime benchmark.
Table 2. Overhead of the schedulers implemented by the libraries for their supported thread types.
Platform CTC++ ‘original’ C++CSP2 CTC++ QNX LUNA QNX
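For readers unfamiliar with the benchmark, the ring can be sketched in plain C++ using a toy one-place buffered channel. The process names follow Figure 12; everything else (the channel class, the thread structure) is an illustrative assumption and not the LUNA, CTC++ or C++CSP2 implementation.

  #include <chrono>
  #include <condition_variable>
  #include <iostream>
  #include <mutex>
  #include <thread>

  // A toy one-place buffered channel (the CSP libraries use rendezvous channels).
  class Chan {
      std::mutex m; std::condition_variable cv;
      int value = 0; bool full = false;
  public:
      void write(int v) {
          std::unique_lock<std::mutex> lk(m);
          cv.wait(lk, [&] { return !full; });
          value = v; full = true; cv.notify_all();
      }
      int read() {
          std::unique_lock<std::mutex> lk(m);
          cv.wait(lk, [&] { return full; });
          full = false; cv.notify_all(); return value;
      }
  };

  int main() {
      constexpr int cycles = 10000;
      Chan a, b, c, d;  // Prefix->Delta, Delta->Successor, Successor->Prefix, Delta->TimeAnalysis

      std::thread prefix([&] { a.write(0);                        // inject the token
          for (int i = 0; i < cycles; ++i) a.write(c.read()); });
      std::thread delta([&] { for (int i = 0; i <= cycles; ++i) {
          int v = a.read(); d.write(v);                           // copy to the analysis side
          if (i < cycles) b.write(v); } });                       // and pass the token on
      std::thread successor([&] { for (int i = 0; i < cycles; ++i) c.write(b.read() + 1); });

      auto t0 = std::chrono::steady_clock::now();
      for (int i = 0; i <= cycles; ++i) d.read();                 // TimeAnalysis: drain the readings
      auto dt = std::chrono::steady_clock::now() - t0;
      std::cout << std::chrono::duration<double, std::micro>(dt).count() / cycles
                << " us per cycle\n";
      prefix.join(); delta.join(); successor.join();
  }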
Normally, the library is used with modeling tools in combination with code generation. This would result in a different implementation of the commstime benchmark. In general, the readers and the writers become separate processes, instead of being integrated within the Prefix, Delta, Successor and TimeAnalysis processes. For example, in this situation the Successor is implemented using a sequential process containing a reader, an increment and a writer process. Table 3 shows the results when gCSP in combination with code generation is used to design the commstime benchmark application. gCSP code generation is only available for CTC++, so for LUNA the CTC++ code is rewritten manually as if it had been generated.
Table 3. Commstime results when using MDD tools to create the test.
Platform CTC++ ‘original’ C++CSP2 CTC++ QNX LUNA QNX
The implementation of the C++CSP2 test was somewhat different compared to the other implementations. Since C++CSP2 threads are destroyed when one cycle is done, they need to be recreated for each cycle. The processes added to a sequential process, for example in the Successor, cannot contain a loop, since this would prevent the execution of the second and the third process, because those processes need to wait on preceding processes. Due to this limitation, the C++CSP2 implementation needs to recreate 15 threads each cycle, hence the +15 in the table. The construction and destruction of these threads generates a lot of overhead, resulting in the high cycle times around 12.5 ms. It was not possible to prevent this behaviour when using the ‘code generated’ code, due to differences in the design ideas behind the libraries.
The table also shows that the close result of the CTC++ ‘original’ and the CTC++ QNX libraries was accidental. Now the difference is bigger, which is expected since the CTC++ QNX library uses OS threads, which have much more overhead compared to the User threads. For the first results, the optimised channels of the QNX variant probably resulted in the small difference between the two.
The benchmark results of LUNA are much better compared to the CTC++ library. Furthermore, the LUNA results are better than the C++CSP2 results when looking at Table 2. This is due to the efficient context-switches, as described in the previous section. When compensating for the required context-switch times, the results for C++CSP2 and LUNA are similar.
When comparing both tables, it is clear that using MDD tools with code generation results in slower code. For simple applications it is advisable to manually create the code, especially for low-resource embedded systems. When creating a complex application to control a large setup, like a humanoid robot, it saves a lot of development time to make use of the MDD tools. For these ‘code generated’ results, the LUNA framework has good cycle times, which is encouraging given the planned TERRA, the new MDD tool, which will feature code generation for LUNA. It is advisable for such an MDD tool to invest effort into optimising code generation to get good performance on the target system.
2.3. Real Robotic Setup
Next, an implementation for a real robotic setup was developed with LUNA, to see whether it is usable in a practical way. To keep things easy for a first experiment, a simple pan-tilt setup is used, with two motors and two encoders. These two degrees of freedom can be controlled using a joystick. The control algorithm of this setup requires about 50 context switches to completely run one cycle. The CTC++ library already has an implementation for this setup available and a similar implementation was made for LUNA to keep the comparison fair. Real-time logging functionality was added in order to be able to measure timing information and to compare LUNA with the CTC++ library.
Table 4 shows the timing results of LUNA and the CTC++ implementation. The experiments have been performed with 100 Hz and 1 kHz sample frequencies, so each control loop cycle should be 10 ms and 1 ms long, respectively. As the measurements were performed for about 60 seconds, the 100 Hz measurements resulted in about 6,000 samples and the 1 kHz measurements in about 60,000 samples. The processing time is found by subtracting the idle time from the cycle time. The idle time is calculated by measuring the time between the point where the control code is finished and the point where the timer fires an event for the next cycle.
Table 4. Timing results of the robotic implementation.
Platform CTC++ ‘original’
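The cycle-time and idle-time bookkeeping just described can be illustrated with a small periodic loop in standard C++. This is a sketch of the measurement idea only, not the LUNA logging code, and a desktop clock stands in for the PC/104 timer.

  #include <chrono>
  #include <iostream>
  #include <thread>
  #include <vector>

  int main() {
      using clk = std::chrono::steady_clock;
      const auto period = std::chrono::milliseconds(10);   // 100 Hz example
      std::vector<double> cycle_us, idle_us;

      auto next = clk::now() + period;
      auto cycle_start = clk::now();
      for (int i = 0; i < 100; ++i) {
          // ... the control code would run here ...
          auto work_done = clk::now();
          std::this_thread::sleep_until(next);             // idle until the timer event
          auto tick = clk::now();
          cycle_us.push_back(std::chrono::duration<double, std::micro>(tick - cycle_start).count());
          idle_us.push_back(std::chrono::duration<double, std::micro>(tick - work_done).count());
          cycle_start = tick;
          next += period;
      }
      double cycle_sum = 0, idle_sum = 0;
      for (std::size_t i = 0; i < cycle_us.size(); ++i) { cycle_sum += cycle_us[i]; idle_sum += idle_us[i]; }
      // processing time = cycle time - idle time, averaged over all samples
      std::cout << "mean cycle: " << cycle_sum / cycle_us.size() << " us, "
                << "mean processing: " << (cycle_sum - idle_sum) / cycle_us.size() << " us\n";
  }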
The results show that LUNA performs well within hard real-time boundaries. The mean values are a good match for the used frequencies, and the low standard deviation values show that the number of missed deadlines is negligible. Due to periodically missed clock ticks, the maximum cycle time of the 1 kHz measurements is twice the sample time. This phenomenon can be explained by the mismatch between the requested timer interval and the PC/104’s hardware timer [22]. The timer cannot fire exactly every 1 ms; instead it fires every 0.999847 ms, and once in every 6535 instances the timer will not fire (the 0.153 μs shortfall per tick adds up to a full period after roughly 6500 ticks). In the 100 Hz case this will not be noticed, because the cycle time is large enough and these kinds of errors are relatively small.
When looking at the CTC++ ‘original’ implementation, it is seen that the 100 Hz results are good as well, although the mean cycle time shows that the obtained frequency is 90.9 Hz instead of 100 Hz. The same goes for the 1 kHz measurement, where a frequency of 847.5 Hz was obtained instead. From this it can be concluded that CTC++ has problems providing the requested frequencies closely. For a frequency of 1 kHz, the standard deviation becomes very large. A third frequency was also measured, 1000.15 Hz, which is an exact match with the available frequency of the setup. This solves the very large standard deviation and the incorrect mean cycle times for the CTC++ library. It should be noted that this frequency is setup dependent and therefore needs to be measured for each setup separately in order to obtain these good results.
The frequency of 1000.15 Hz indeed solves the maximum cycle times of LUNA being two periods long. For setups which need to be extremely accurate this is important, as it can make the difference between an industrial robot moving smoothly or scratching your car. The other values are not much different, showing that LUNA is more robust for all frequencies than the CTC++ library and that frequency tuning is not required to get reasonable hard real-time properties.
It is also noticeable that the processing times for the LUNA User threads are lower compared to the CTC++ processing times, suggesting that the overhead is much lower and that more resources are available for the controlling code. Even the LUNA OS thread processing times are comparable with the CTC++ User thread processing times.
3. Conclusions
Good results are obtained using LUNA: it has fast context switches and its commstime benchmark is faster than the C++CSP2 and CTC++ implementations. These benchmark results are good, but the main requirement, the real-time behaviour of the library, is much more important when controlling robotic setups. The simple robotic setup indeed performed as expected; it reacts smoothly to the joystick commands. The maximum and minimum cycle time values are close to the (requested) mean cycle time and the standard deviation values are low, showing that the hard real-time properties of LUNA are good.
The choice for QNX is not that obvious anymore, since the provided rendezvous channels are only usable between OS threads. Nonetheless, QNX provides a good platform to build a real-time framework on; there is enough support from the OS to keep the implementation tasks maintainable.
All requirements mentioned in the introduction are met. The first three of them are obvious: LUNA is a hard real-time, multi-platform, multi-threaded framework.
Scalability is also met: even though LUNA has not yet been tested with a big (robotic) setup, early scalability tests showed that having 10,000 processes poses no problem. The CSP execution engine is the only implemented execution engine at the moment, but the requirement not to be dependent on it is met, as it is possible to turn it off and use the User
and OS threads in a non-CSP-related way. Using the provided interface it is also possible to add other execution engines, like a state machine execution engine.
Developing applications using LUNA is straightforward: for example, one does not need to keep the type of threads and channels in mind while designing the control application. It is possible to just create the CSP processes, connect them with channels and let the LUNA factories decide on the actual implementation types.
Finally, debugging and tracing are also not a problem: it is possible to enable the debugging component if required. This component contains means for debugging and tracing the other components, as well as the application being developed. It is also possible to send the debug and trace information over a (local) network to a development PC, in order to perform run-time analysis or to store it for off-line analysis. The required logger does not influence the executing application noticeably, as it is a real-time logger. It has predefined buffers to store the debug information, and only when there is idle CPU time available does it send the buffered content over the network, freeing up the buffer for new data. Especially logging the activation of processes is interesting, as this could provide valuable timing information, like the cycle time of a control loop or the jitter during execution. So it is possible to influence the application with external events and directly see the results of such actions. It is also possible to follow the execution of the application by monitoring the states (running, ready, blocked, finished) of the processes. This information could also be fed back to the MDD tool, in order to show these states in the designed model of the application.
For future work, an implementation for Linux (and Windows) would be convenient. It is much faster to try out new implementations on the development PC than on a target. Of course this requires more work, hence the choice to support QNX first, but it certainly pays off by reducing development time. The flexibility to easily move processes between the groups of OS and User threads reduces development time even more, as the developer is not required to change his code when moving processes.
Building the simple robotic setup took some time. There are only about 51 processes to control this setup. Of course this could be fewer, but it takes too much time to develop controller applications by hand, so code generation for LUNA is required. In order to attract users to start using LUNA – and also for educational purposes – code generation is required. So, soon after LUNA evolves into an initial and stable version, TERRA needs to be built as well to gain these advantages and properly use LUNA.
When TERRA and code generation are available, algorithms to optimise the model for a specified target with known resources can be implemented. Before code generation, these algorithms [19] can schedule the processes automatically in an optimal manner for the available resources. These scheduling algorithms are also interesting for performing timing analysis of the model, in order to estimate whether the model will be able to run in real-time with the available resources.
To see whether LUNA is capable of controlling setups larger than the example setup, it is planned to control the Production Cell [23] with it. This is already partially implemented, but the work is not completely finished yet. Performing tests similar to those in Section 2.3 should really show the advantages of using LUNA.
Another planned test with the Production Cell is to control it with Arduinos [24]. The Production Cell has six separate production cell units (PCUs); each PCU is almost a separate part of the setup. Using one Arduino for each PCU seems like a nice experiment for distributed usage of LUNA. This would require support for a new platform within LUNA, one which does not use operating-system-related functionality.
References [1] C.A.R. Hoare. Communicating Sequential Processes. Prentice-Hall, London, 1985. [2] D.S. Jovanovi´c, B. Orlic, G.K. Liet, and J.F. Broenink. gCSP: a graphical tool for designing CSP systems. In I. East, Jeremy Martin, P.H. Welch, David Duce, and Mark Green, editors, Communicating Process Architectures 2004, volume 62, pages 233–252, Amsterdam, September 2004. IOS press. [3] B. Orlic and J.F. Broenink. Redesign of the C++ Communicating Threads library for embedded control systems. In F. Karelse, editor, 5th PROGRESS Symposium on Embedded Systems, pages 141–156, Nieuwegein, NL, 2004. STW. [4] H. Kopetz. Real-Time Systems - Design Principles for Distributed Embedded Applications. Kluwer Academic Publishers, 1997. [5] J.F. Broenink, Y. Ni, and M.A. Groothuis. On model-driven design of robot software using co-simulation. In E. Menegatti, editor, SIMPAR, Workshop on Simulation Technologies in the Robot Development Process, November 2010. [6] M.A. Groothuis, R.M.W. Frijns, J.P.M. Voeten, and J.F. Broenink. Concurrent design of embedded control software. In T. Margaria, J. Padberg, G. Taentzer, T. Levendovszky, L. Lengyel, G. Karsai, and C. Hardebolle, editors, Proceedings of the 3rd International Workshop on Multi-Paradigm Modeling (MPM2009), volume 21 of Electronic Communications of the EASST journal. EASST, ECEASST, October 2009. [7] N.C.C. Brown. C++CSP2: A Many-to-Many Threading Model for Multicore Architectures. In A.A. McEwan, W. Ifill, and P.H. Welch, editors, Communicating Process Architectures 2007, pages 183–205, July 2007. [8] uClibc website, 2011. http://www.uclibc.org/. [9] P.H. Welch, N.C.C. Brown, J. Moores, K. Chalmers, and B. Sputh. Integrating and Extending JCSP. In A.A. McEwan, W. Ifill, and P.H. Welch, editors, Communicating Process Architectures 2007, pages 349–369, July 2007. [10] JamaicaVM website, 2011. http://www.aicas.com/jamaica.html. [11] OROCOS website, 2011. http://www.orocos.org/. [12] ROS website, 2011. http://www.ros.org/. [13] E.W. Dijkstra. Notes on structured programming. In O.J. Dahl, E.W. Dijkstra, and C.A.R. Hoare, editors, Structured programming, chapter 1, pages 1–82. Academic Press Ltd., London, UK, 1972. [14] D.S. Jovanovi´c. Designing dependable process-oriented software, a CSP approach. PhD thesis, University of Twente, Enschede, The Netherlands, 2006. [15] OpenWRT website, 2011. http://www.openwrt.org/. [16] F. Fainelli. The OpenWrt embedded development framework. Free and Open source Software Developers’ European Meeting (FOSDEM), January 2008. [17] QNX website, 2011. http://www.qnx.com. [18] Mordor website, 2011. http://code.mozy.com/projects/mordor/. [19] M.M. Bezemer, M.A. Groothuis, and J.F. Broenink. Analysing gcsp models using runtime and model analysis algorithms. In P.H. Welch, H.W. Roebbers, J.F. Broenink, F.R.M. Barnes, C.G. Ritson, A.T. Sampson, D. Stiles, and B. Vinter, editors, Communicating Process Architectures 2009, volume 67, pages 67–88, November 2009. [20] B. Veldhuijzen. Redesign of the CSP execution engine. MSc thesis 036CE2008, Control Engineering, University of Twente, February 2009. [21] P.H. Welch and D.C. Wood. The Kent Retargetable occam Compiler. In Brian O’Neill, editor, Parallel Processing Developments, Proceedings of WoTUG 19, volume 47 of Concurrent Systems Engineering, pages 143–166, Amsterdam, The Netherlands, March 1996. World occam and Transputer User Group, IOS Press. ISBN: 90-5199-261-0. [22] M. Charest and B. Stecher. 
Tick-tock - Understanding the Neutrino micro kernel’s concept of time, Part II, April 2011. http://www.qnx.com/developers/articles/article_826_2.html. [23] M.A. Groothuis and J.F. Broenink. HW/SW Design Space Exploration on the Production Cell Setup. In P.H. Welch, H.W. Roebbers, J.F. Broenink, and F.R.M. Barnes, editors, Communicating Process Architectures 2009, Eindhoven, The Netherlands, volume 67 of Concurrent Systems Engineering Series, pages 387–402, Amsterdam, November 2009. IOS Press. [24] Arduino website, 2011. http://www.arduino.cc/.
Concurrent Event-driven Programming in occam-π for the Arduino
Christian L. JACOBSEN a, Matthew C. JADUD b, Omer KILIC c and Adam T. SAMPSON d
a Department of Computer Science, University of Copenhagen, Denmark
b Department of Computer Science, Allegheny College, PA, USA
c School of Engineering and Digital Arts, University of Kent, UK
d Institute of Arts, Media and Computer Games, University of Abertay Dundee, UK
{christian, matt, omer, adam}@concurrency.cc
Abstract. The success of the Arduino platform has made embedded programming widely accessible. The Arduino has seen many uses, for example in rapid prototyping, hobby projects, and in art installations. Arduino users are often not experienced embedded programmers, however, and writing correct software for embedded devices can be challenging. This is especially true if the software needs to use interrupts in order to interface with attached devices. Insight and careful discipline are required to avoid introducing race hazards when using interrupt routines. Instead of programming the Arduino in C or C++ as is the custom, we propose using occam-π, a language that can help the user manage the concurrency introduced when using interrupts and help in the creation of modular, well-designed programs. This paper will introduce the Arduino, the software that enables us to run occam-π on it, and a case study of an environmental sensor used in an Environmental Science course.
Keywords. Transterpreter, occam-pi, Arduino, embedded systems, interrupts, sensing.
Introduction
It is easy to run into basic issues regarding concurrency when programming embedded hardware, and often the use of interrupts is required to handle internal and external events. Interrupts introduce concurrency; concurrency, when coupled with shared state, easily introduces race hazards. When coupled with the non-deterministic nature of interrupt-driven systems, race hazards can be challenging for experienced programmers to diagnose, and rapidly become very difficult for novice programmers to fix.
Traditionally, the novice embedded systems developer would be a student of engineering or computing: an individual committed to learning the how and why of their problems. The Arduino is a low-cost (less than $30), open hardware platform used by more than 150,000 artists and makers for exploring interactive art, e-textiles, and robotics. Its release has radically changed the (traditional) demographics of the embedded systems world, and despite the (likely) non-technical background of users attracted to the Arduino, it presents them with the same challenge as the budding electrical engineer: how do you make a single processor do two or more things at the same time?
In the past two years, we have focused our development efforts on bringing occam-π to the Arduino platform. occam-π’s use of processes and channels (from Hoare’s CSP [1]) provides powerful abstractions for expressing parallel notions (“blink an LED while turning a motor”) as well as managing random events in the world (“wait for pin 7 to go high while doing something else”). In addition, processes are an encapsulated abstraction for hardware,
and channels provide well-defined interfaces that allow for the design of systems that (1) mirror the structure of the hardware they control and (2) allow for easy substitution when, for example, an EEPROM module is replaced with an SD card or some other form of data storage.
Our work demonstrates the feasibility of running a virtual machine on an embedded platform with as little as 32KB of space for code and 2KB of RAM, while performing sufficiently well to allow for the development of interesting software and hardware. While there have been other runtime and language efforts targeting devices this size (for example, TinyOS [2], Mantis [3], Contiki [4]), these projects typically target very small research communities. Our goal in porting to the Arduino is to support the diverse and growing community of makers exploring embedded systems by helping them use occam-π to manage the concurrency inherent in their hardware/software systems. Using our tools we have explored several interesting problems that required careful handling of interrupts and real time concerns, including an environmental sensor for monitoring energy and room usage on a college campus (as demonstrated in this paper) and the real time control of an unmanned aerial vehicle, as demonstrated in [5].
1. The Arduino
The Arduino is described as “an open-source electronics prototyping platform based on flexible, easy-to-use hardware and software. It’s intended for artists, designers, hobbyists, and anyone interested in creating interactive objects or environments.” [6] In this regard, the Arduino is not just a piece of hardware, but rather an ecosystem consisting of hardware, software, documentation, and above all, its community. The concurrency.cc board, which we discuss in Section 1.3, is our own derivative of the Arduino’s open hardware design. In total, it is estimated that at the time of writing, over 150,000 Arduino (or Arduino compatible) devices have been shipped to users worldwide.
The software officially supported for programming the Arduino is a custom integrated development environment (IDE) based on the Processing IDE [7], and a set of libraries based on Wiring [8]. Both of these projects are “sister projects” to the Arduino project. The minimalistic Arduino IDE provides support for editing, syntax highlighting, compilation, and uploading of code to a device. Programming is done in C++ and the Wiring libraries provide functions for interacting with the Arduino and a wide variety of sensors, motors, and an endless variety of storage and communication devices.
1.1. The Arduino Community
The Arduino’s single greatest asset at this point is not the choice of microcontroller but the community itself. Its large number of enthusiastic developers, users, and merchants makes it easy to get started with the Arduino. The Arduino project’s core development team is quite small, and while there is little contribution to the core from the global community, there is a large “external” community developing libraries and examples for the platform. While this code does not typically make it into the distribution, it finds its way onto many websites and into repositories all over the world. Both code and circuitry examples can be found to support users in exploring the control of LCD displays, data storage peripherals, and sensors and motors of all sorts. The user community helps perpetuate the popularity of the platform.
Enthusiastic users “tweet” about their creations (or make creations that tweet), write blog posts, or even generate videos of their creations for sharing on sites like YouTube and Vimeo. These enthusiasts—often artists, makers, and hobbyists with no formal background in computing or electronics—
are keen to help other newcomers to the community, recommending resources and solutions when a new explorer gets stuck.
1.2. occam-π and the concurrency.cc Community
Although occam is an old language, it has a tiny user community, which is why we have chosen to target a large and vibrant community of makers and learners in our most recent porting efforts. To grow in this community, we have taken a number of steps—but it will take time and persistence to see the value of these efforts.
First, we chose a URL for our project that we hoped would be representative and memorable: concurrency.cc. The .cc country code was chosen to match that used by the Arduino project (arduino.cc) as well as reflect a commitment to open hardware and software (a la the Creative Commons). Mailing lists, open repositories, and easy-to-use bug trackers are not adequate to attract new end-users: we needed to automate the building of packages containing an IDE and toolchain that could be easily installed on all major operating systems.
However, we know the occam-π project, and our efforts to grow our community of users on the Arduino, are hampered by numerous issues. Poor documentation remains our worst enemy: there are few resources for the occam-π programming language available online, and our own documentation efforts are slowed by a lack of contributors. Until it is easy for people to download tools, read (or watch) examples, and implement those examples successfully on their own—and have resources available to let them continue exploring—we expect that it will be difficult to significantly grow the occam-π user community. (We look at resources like the “How to tell if a FLOSS project is doomed to FAIL” [9] and Bacon’s “The Art of Community: Building the New Age of Participation” [10] as guides down the long and challenging road to attracting and retaining users in an open, participatory framework.)
1.3. Arduino Hardware and the concurrency.cc Board
The most popular Arduino boards, the Uno and Mega, are both based on the megaAVR series of processors by Atmel. The specifications for the Uno and Mega can be seen in Table 1 along with those for the LilyPad Arduino, an official 3rd party Arduino variant, and the concurrency.cc board, which is Arduino compatible. The megaAVR processors used on these boards are typical embedded microcontrollers with a modest amount of flash and RAM. They all have general purpose (digital) input-output capability, as well as 10-bit analog-to-digital (ADC) conversion hardware, pulse-width modulation hardware (PWM), and support a variety of common embedded protocols (UART, SPI, TWI).
Table 1. Common Arduino configurations
Board     MCU          Flash    SRAM   MHz   UART   ADC   PWM   GPIO
Uno       ATmega328    32 KB    2 KB   16    1      6     6     14
Mega      ATmega2560   256 KB   8 KB   16    4      16    14    54
LilyPad   ATmega328    32 KB    2 KB   8     1      6     6     14
c.cc      ATmega328    32 KB    2 KB   16    1      6     6     14
The standard Arduino board (the Arduino Uno) has three status LEDs (power, serial transmit and receive) as well as one LED that can be controlled via one of the processor’s pins. This LED can be used when initially working with the board in order to ensure that everything is working correctly: write a program to continuously blink the LED, compile
and upload it. Success is evident. When using the Arduino environment, blinking this LED is sufficient as a getting-started exercise, but when using a concurrent language we would like to be able to easily illustrate the concurrent nature of our programs. To do this, we often blink several LEDs in parallel, a feat that cannot be accomplished without attaching further LEDs to the standard Arduino board. For this reason, demonstrating concurrency on a standard Arduino in a classroom or workshop environment involves connecting several devices (e.g. LEDs) to the board using jumper wires that easily fall out during experimentation and use. To remedy this, we have developed an Arduino variant that we call the concurrency.cc board (Figure 1), or “c.cc board” for short. The c.cc board is an Arduino derivative developed by the third author that incorporates a number of features a standard Arduino does not.
Figure 1. The concurrency.cc board.
First, the c.cc board incorporates a boost converter circuit that allows it to run off low-voltage sources like single AA batteries. Second, it uses a JST connector (as opposed to a larger barrel jack), meaning high energy density lithium polymer batteries can be plugged directly into the board. Third, a mini-USB plug is used, which is now common on many small electronic devices. Finally, four LEDs are designed directly into the board, allowing basic demonstrations of concurrency without the need for an external circuit. These features permit the use of readily available power sources (AA batteries) in teaching environments with an integrated “display” of four LEDs that students can use to see multiple outputs in parallel without needing to construct a separate, error-prone circuit.
1.3.1. Blinking Four LEDs
A first project on any embedded platform is to blink an LED, which demonstrates that code has been uploaded to the processor and that one or more registers that affect the external state can be manipulated. When programming with a concurrent programming language, blinking four LEDs independently should be no harder than blinking one. To blink one LED, we might write the program in Listing 1.

  #INCLUDE "plumbing.module"

  PROC main ()
    blink (13, 500)
  :

Listing 1. Blinking the built-in LED on and off at a rate of 500ms.
To blink four LEDs at different rates, we would use a PAR and four instances of the blink() procedure with different output pins and toggle rates (Listing 2).

  #INCLUDE "plumbing.module"

  PROC main ()
    PAR
      blink (13, 500)
      blink (12, 400)
      blink (11, 300)
      blink (10, 200)
  :

Listing 2. Blinking four LEDs in parallel at different rates.
For comparison, we have included a C++ program that blinks four LEDs at different rates (Listing 3). It follows the pattern of all programs written in the Arduino environment, which involves implementing both a setup() and a loop() separately. The former is run once, the latter is run repeatedly until power is removed. 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22
boolean state[4] = {false, false, false, false}; unsigned long prev = 0; void setup () { for (int i = 0; i < 4; i++) pinMode(10+i, OUTPUT); } void toggle (int pin) { state[pin − 10] = !state[pin − 10]; digitalWrite(pin, state[pin − 10]); } void loop () { unsigned long time = millis(); if (time != prev) { if ((time % 500) == 0) { toggle(13); if ((time % 400) == 0) { toggle(12); if ((time % 300) == 0) { toggle(11); if ((time % 200) == 0) { toggle(10); prev = time; }
} } } }
Listing 3. Blinking four LEDs at different rates in C++.
Even though the problem is “simple,” the second author still made several errors in writing this code. The first mistake was made in implementing toggle(). To index into the array state (which holds the current state of the four Arduino pins), the value 13 was subtracted to obtain an index instead of 10; this code executed and produced very odd behavior, whereas an equivalent occam-π program would have crashed at runtime. To debug this, a print statement was added to toggle() that output the values in the state array. Once this was fixed, the serial printing was removed—at which point, the program ceased to function correctly. The reason was that the loop() procedure was running too quickly, and as a result multiple readings (and therefore multiple pin toggles) were taking place in sub-millisecond timeframes. As a fix, the if (time != prev) conditional was added.
While we believe there is much more work to be done to support occam-π on the Arduino, we also believe that blinking four LEDs concurrently should be easy. An apparently simple problem (“blink four LEDs at different rates”) is not a program that a novice programmer—an enthusiastic artist or maker—can tackle without running afoul of either the language (C++) or the complexities of managing both state and hardware timing.
2. Implementation
The execution of the occam-π language on the Arduino is made possible by the Transterpreter virtual machine [11]. This is the same virtual machine that is used on desktop-class hardware, and it has also been used in the past on the LEGO Mindstorms RCX [12] and the Surveyor SRV-1 [13] mobile robotics platform. The Transterpreter on the Arduino uses a megaAVR-specific wrapper (the wrapper contains the platform-specific portions of the virtual machine), supporting the ATmega328 and larger processors. The smaller processors in the megaAVR range do not have the flash (minimum 32KB) or RAM (minimum 2KB) required by the virtual machine. Program execution is facilitated by uploading the virtual machine to the Arduino’s flash memory alongside occam-π bytecode, which can be uploaded separately. This section will deal with specific aspects of the implementation that pertain to the megaAVR family of processors, particularly the implementation of interrupts as well as sleep modes. Details of other aspects of the virtual machine can be found in the papers referenced above.
2.1. General Architecture
Figure 2. The structure of the software. (From top to bottom: user programs, the Plumbing library (heartbeat, button.press, ...) and the runtime libraries (serial, PWM, TWI, ...), implemented in occam and linked as required; the virtual machine, containing the scheduler, interpreter and error handling, implemented in C; and the firmware with its interrupts.)
Figure 2 depicts the occam-π software stack for the Arduino. Underlying the runtime system is the virtual machine and the portable bytecode interpreter. The virtual machine provides a number of services besides just interpreting bytecode: it loads, parses and checks bytecode located at a specific address, schedules processes, provides a clock for use by both the virtual machine and user programs, and provides error handling (which catches errors and
attempts to print a useful error message on the serial port). The only service that is exposed directly to the user, however, is the wait.for.interrupt routine, which is used by both libraries and user code to wait for a particular interrupt to fire.
The runtime libraries, the Plumbing libraries4, and the user program are compiled into a monolithic bytecode file. Dead-code elimination ensures that unused portions of the libraries or user program are removed from the generated bytecode, and the symbol and debugging information is stripped before the code is uploaded to the device. These measures keep the bytecode files small enough to fit in the limited amount of flash memory available.
The runtime library provides functions for interacting with the serial port, changing the state of individual pins, or using other features of the chip, such as PWM or TWI. This functionality is implemented entirely in occam-π, which has direct access to interrupts and memory (required for manipulating control registers). The Plumbing library is a high-level library, and provides the interface we expect most programmers to use. Plumbing provides a process-oriented programming interface which lets the user “plumb” together processes such as heartbeat, blink, button.press, pin.toggle5. The user program can mix the services of the higher-level Plumbing library with other libraries as desired. Full access to memory and interrupts is provided to user code, and therefore the user can write code or libraries which interact directly with the hardware.
2.2. Interrupts
The megaAVR range used on the Arduino boards supports a wide range of both internal and external interrupt sources. Examples of internal interrupts are those generated by the UART module (serial communication) and the timer. External interrupts are generated in response to a change in the state of one of the processor’s pins: a pulse from a rotary encoder on a servo, an infrared sensor, or a mechanical switch are all examples of possible external interrupts.
While in simple cases it is possible to poll instead of using interrupts, this style of programming has drawbacks and limitations. For example, if a program needs to count very short pulses from a rotary encoder, it may be hard to ensure that the polling occurs “often enough.” Put another way, the polling must be frequent enough to guarantee that a pulse is not missed. As the complexity of a program increases, it becomes harder to ensure that polling can provide the desired resolution. Thus, the use of interrupts can ensure that a program can deal with short signals without complicating its timing logic with frequent polling. The use of interrupts may also provide lower latency between the signal occurring and the program becoming aware of the signal. This may be useful in situations where a signal must be acknowledged in some way, for example during communication with a device. Avoiding polling also allows the processor to enter one of several low power modes, conserving power while still being able to wake from external or internal interrupts, for example a signal generated by a switch or an internal source such as serial communication or a timeout.
2.2.1. Interrupts on the megaAVR Processors
The ATmega328p processor, which is the one most commonly used in Arduino branded and derivative hardware6, allows all I/O pins to act as interrupt sources.
4 Full source available for download from http://projects.cs.kent.ac.uk/projects/kroc/trac/browser/kroc/trunk/tvm/arduino/occam/include/plumbing.module?rev=7082.
5 The online (Creative Commons licensed) book Plumbing for the Arduino [14] introduces these components and the ways they can be connected to form complete programs.
6 Other ATmega processors are also used but they are generally similar, varying in amounts of flash, RAM, I/O pins, or internal devices.
However, only two pins have their own dedicated interrupt vector (the external interrupts) and the remaining pins are multiplexed over three further interrupt vectors (the pin change interrupts) with a maximum
of eight pins to an interrupt request. The external interrupts can be configured for different levels and edges, whereas the pin change interrupts are activated by any change in any of the interrupt requests’ corresponding pins (each of the multiplexed pins on a pin change interrupt can be individually enabled or disabled). An ATmega processor can also generate a number of internal interrupts from the UART (serial port), TWI (the Two Wire Interface bus), ADC (analog-to-digital converter), a number of timers, and other devices.
2.2.2. Interrupts in the Virtual Machine
The Transterpreter for the megaAVR supports all of the interrupts available on the processor: the external interrupts, the internal interrupts, as well as the pin change interrupts, which must use some occam-π support code to demultiplex the individual pins. The interrupt support relies on the virtual machine’s inner and outer loops. The inner loop is the run loop proper, part of the portable virtual machine, which fetches and dispatches bytecodes. The outer loop repeatedly executes the inner loop, while dealing with any platform-specific or exceptional tasks that may arise during execution of the inner loop. If a platform-specific event occurs, such as an interrupt being raised, the inner loop will terminate, returning control to the outer loop, which can then take the appropriate action.
In order to wire the interrupts into the virtual machine, the processor’s interrupt vectors are set up to point to a simple interrupt service routine (ISR). This ISR performs two actions: it sets a flag to indicate to the inner loop that an interrupt has occurred, and it also updates an internal structure to indicate which interrupt fired. When the interrupt service routine has finished executing, the inner loop will resume execution and will eventually inspect the status flag, seeing that an interrupt has fired. When the inner loop is at a safe rescheduling point, it will return control to the outer loop. The outer loop can then handle the interrupt condition.
Internally, the virtual machine keeps track of all the available interrupts at all times, as the VM does not know in advance which interrupts will be required by the user program. For each interrupt, the virtual machine tracks (1) whether it has fired, (2) when it fired, and (3) whether a process is waiting for that interrupt. This is implemented using two 16-bit words per interrupt: one holds the identifier of a waiting process (or a value indicating that no process is waiting) and the other holds the time the interrupt fired (or the lowest possible time value to indicate that it has not yet fired). This structure uses up a considerable amount of RAM on the smaller megaAVR parts. For example, on the ATmega328p 12 interrupts are monitored, resulting in a structure taking up 24 16-bit words (48 bytes). This table uses up close to 2.5% of the available 2KB RAM on the ATmega328p. It is for this reason that support for demultiplexing the pin change interrupts is not included in the virtual machine (which would provide better performance but at the cost of higher RAM usage). Applications which need to demultiplex pin change interrupts can include the relevant occam-π runtime support code.
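The bookkeeping just described can be pictured with a small C++ sketch: one slot of two 16-bit words per interrupt, plus the flag set by the ISR. The names and constants here are illustrative assumptions; this is not the Transterpreter source.

  #include <cstdint>

  constexpr std::uint16_t NO_PROCESS = 0xFFFF;  // no process is waiting on this interrupt
  constexpr std::uint16_t NOT_FIRED  = 0x0000;  // lowest time value: has not fired yet
  constexpr int NUM_INTERRUPTS = 12;            // monitored on the ATmega328p

  struct InterruptSlot {
      std::uint16_t waiting_process;  // identifier of the waiting process, if any
      std::uint16_t fired_at;         // time the interrupt fired
  };

  static InterruptSlot slots[NUM_INTERRUPTS];     // 24 16-bit words = 48 bytes of RAM
  static volatile bool interrupt_pending = false; // inspected by the inner loop

  // The two actions of the ISR: note which interrupt fired (and when), and
  // signal the inner loop so it drops out to the outer loop at a safe point.
  void isr(int vector, std::uint16_t now) {
      slots[vector].fired_at = now;
      interrupt_pending = true;
  }

  int main() {
      for (auto& s : slots) { s.waiting_process = NO_PROCESS; s.fired_at = NOT_FIRED; }
      isr(0, 1234);                               // e.g. external interrupt 0 fired at time 1234
      return interrupt_pending ? 0 : 1;
  }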
2.2.3. Interrupts in occam-π
Casual users of occam-π on the Arduino need not be aware that the underlying system makes extensive use of interrupts. In fact, users of the Plumbing library are unlikely to ever realise that interrupts exist! Other users might need to use the interrupt facilities provided by the virtual machine in order to write interfaces to external devices. However, working with interrupts should provide no great surprises to the user, as the underlying mechanisms of the interrupt system closely match the event-based semantics of occam-π. The semantics of the interrupt system provided by the virtual machine is like that of a channel communication. To demonstrate this, we will use a fictional channel type, CHAN INTR, which represents an interrupt channel carrying an integer value corresponding to the time the interrupt fired. An interrupt channel can be constructed using this (fictional) type: CHAN INTR int0:, which a process can now use to wait for an interrupt on interrupt
vector int0. Using this syntax, a process would simply perform a read from the channel (interrupt ? time.fired) in order to wait for an interrupt. Interrupts, like channel communications, block the reading process until the interrupt (or channel) ‘fires.’ When the interrupt has fired, the process can continue and will have received the time the interrupt fired into the variable time.fired. For reasons of implementation simplicity and performance, waiting on an interrupt has not been implemented as a channel communication. Instead, the interrupt mechanism is implemented using the procedure call interface:

  wait.for.interrupt (VAL INT interrupt.number, INT time.fired)
This procedure call has the exact same semantics as the channel communication shown above: the process calling wait.for.interrupt sleeps until the interrupt fires. When it has fired, the process is resumed and receives the time the interrupt fired in the pass-by-reference parameter time.fired. If the interrupt fired before a call to wait.for.interrupt, wait.for.interrupt will return immediately, supplying the time the interrupt fired.
2.3. Interrupts and Low Power
Traditionally, interrupt handlers are supposed to be very short, simple programs. When a hardware interrupt fires, state is saved so that the processor can return to its current point of execution, a pointer is looked up in the appropriate register, and the processor jumps to the interrupt handling routine. Embedded systems developers are taught to handle the interrupt as quickly as possible, perhaps by reading a value and storing it in a global variable. Control is then returned to the central control loop.
Lifting interrupts into the virtual machine has a cost. At the least, the firing of the interrupt must be acknowledged7. If a process is waiting on the interrupt, the workspace of the waiting process is updated, and then an interrupt is raised within the virtual machine. This allows the cooperative scheduler to then begin executing occam-π code after the call to wait.for.interrupt. Ideally, a “concurrency aware” runtime should put the processor into a low power state when all of the processes it is executing are waiting for an internal communication, a timer event, or an external interrupt. To sleep the ATmega family of processors, one simply issues a single assembly instruction: sleep. This functionality was tested within the runtime as part of a branch8.
2.3.1. Polling for Interrupts
Table 2. Average polling latencies for occam-π code (N=18) and standard deviation.

                     occam (σ)
Poll w/o powersave   0.2267 ms (0.0429 ms)
Poll w/ powersave    0.2212 ms (0.0421 ms)
First, to ascertain that the changes made to the virtual machine did not impact its execution of code in the general case, we wrote a program that polled continuously for interrupts without any of the rescheduling that is common in most occam-π programs. This meant that the scheduler would never have a chance to execute, and a firmware with or without powersaving enabled should perform the same. This is the case shown in Table 2: neither runtime performs significantly better when the userspace program is continuously polling.
7 http://projects.cs.kent.ac.uk/projects/kroc/trac/browser/kroc/trunk/tvm/arduino/interrupts.c?rev=7130
8 http://projects.cs.kent.ac.uk/projects/kroc/trac/browser/kroc/branches/avr-sleep
2.3.2. Interrupt Latency
Second, we compared the time it takes for an occam-π program to respond to interrupts as opposed to a program written in C. It is already known from prior work that sequential bytecode executes 100x to 1000x slower than native code. To measure interrupt latency, one Arduino was used to generate digital events that triggered the external interrupts of a second Arduino. Then, a logic analyzer9 measured the time that it takes for the second Arduino to wake and toggle pin 13 (the built-in LED). Table 3 shows that the handling of interrupts in occam-π is roughly 100x slower than a C program that does the same thing.
Table 3. Average interrupt handling latencies for occam-π and C code (N=18) and standard deviation.

                          occam (σ)                C (σ)
Interrupt w/o powersave   0.1425 ms (0.0036 ms)    0.0013 ms (0.0000 ms)
Interrupt w/ powersave    2.4485 ms (0.0210 ms)    2.3339 ms (0.0233 ms)
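For reference, the C baseline in Table 3 amounts to a responder along these lines; the exact code is not given in the paper, so this sketch is an assumption built on the standard Arduino API (attachInterrupt on external interrupt 0, which is pin 2 on an Uno).

  // Toggle the built-in LED (pin 13) whenever external interrupt 0 fires.
  volatile bool led_state = false;

  void onEdge() {                    // interrupt service routine
      led_state = !led_state;
      digitalWrite(13, led_state ? HIGH : LOW);
  }

  void setup() {
      pinMode(13, OUTPUT);
      pinMode(2, INPUT);
      attachInterrupt(digitalPinToInterrupt(2), onEdge, RISING);
  }

  void loop() {
      // nothing to do; all the work happens in the interrupt handler
  }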
Of interest is the second row of Table 3. This shows how long it takes to handle an interrupt when the processor is placed into a power-saving sleep mode. As can be seen, it does not matter whether the interrupt handling code is written in C or occam-π; it takes more than 2 ms to wake from sleep and begin executing code. The occam-π code is approximately 0.11 ms slower than the C code, which is (again) in keeping with our previous measurements and the Transterpreter’s known performance on interpreting sequential code.
The occam-π code does differ from the C code in one critical way: the virtual machine will not preempt a running process when an interrupt occurs, due to the cooperative scheduling used by occam-π. The virtual machine must instead wait until it reaches a safe point (a rescheduling point) in the code before it can deschedule the current process and reschedule another. This could, in theory, mean that there is no upper bound on the interrupt latency in an occam-π program. In practice this is not often an issue, and when it is, the compiler has an option for emitting more reschedule points, and it is possible to manually insert reschedule points in the code.
2.3.3. Interrupts: Practical Implications
The relatively poor performance of interrupt handling in the Transterpreter has a practical implication for programmers using our tools: we are limited as to how much information we can process using interrupts. For example, a serial communications handler written in occam-π will not be able to process characters at a baud rate of much more than 300 bps. (While higher rates might be possible, it is unlikely much additional work could be done in-between the receipt of individual characters.) That said, not all interrupt-driven applications are high performance or require microsecond response times. It is often the case that we need to respond to an interrupt in a sub-millisecond (but not sub-microsecond) timeframe. In these cases, where the interrupt represents a clock tick or a sensor crossing a threshold, we can respond in more than adequate time while simultaneously helping the programmer deal with the traditional complexity of interrupt-driven programming. In the next section we discuss an environmental sensor that falls exactly into this category.
3. Case Study: A Room Usage Monitor
Being able to handle interrupt-driven sources in a simple and reliable manner is motivated by real-world need. Many sensors and devices that might be used with an Arduino change state
in response to events in the world. For an Arduino to detect that change of state, one must either busy-wait or wait for an interrupt—which means other processes can execute while waiting for sensor input from the world. As an example of a recent, real-world use of interrupts on the Arduino, we share a case regarding the recent design and development of a sensor that was built and deployed by undergraduates enrolled in ES210: Research Methods at Allegheny College as part of their studies in Environmental Science (Figures 3, 4). The students wanted to determine what kind of energy waste was taking place in classrooms on campus, and the sensors would help them determine when (1) the lights were on and (2) there was no one in the room. We were given two weeks to research the components, prototype the sensor, and develop kits that the students could assemble as part of their laboratory sessions.
9 http://www.saleae.com/logic/
Figure 3. Sensor exterior with motion and light sensors noted.
Figure 4. Sensor interior with Arduino, microSD card, and RTC noted.
3.1. Sensor Design and Implementation
The room usage sensor had to be able to detect the light level in a room as well as detect when the room was occupied. Commercial off-the-shelf solutions in this space cost at least $100 to $150, and were all closed or “opaque” solutions, meaning that the students would have little say regarding how the device should function. For the sensors to be useful to the students, our design needed to satisfy a number of hardware and software requirements.
Measure Light. Record ambient light level, ideally able to distinguish between “daylight + lights” vs. “just daylight.”
Fixed-cycle Measurements. Record light levels on a fixed interval (e.g. every minute).
Movement-based Measurements. Record light levels when the room is occupied, throttling measurements (e.g. no more than one motion-based event every 2 minutes).
Accurate Timing. All measurements should be stamped with an accurate timestamp.
Easy Assembly. ES students can be assumed to have no prior electronics background of any sort, yet they must be able to assemble/solder the entire sensor themselves.
Minimize Budget. Build sensors for $50 each or less.
Maximize Reusability. Sensors need to be re-usable on a component-by-component basis for future use in classroom contexts.
Easily Analyzed Data. Students need to easily be able to extract and analyze data using commonly available tools (e.g. Open Office or Google Spreadsheets).
Our final bill of materials came to approximately $75 per sensor. Each node is capable of the accurate measurement of temperature and light (the latter on a logarithmic scale in the same range as the human eye), has an extremely accurate real-time clock (or RTC, ±2 seconds/year), can detect motion using a common passive infra-red motion sensing module, and uses a FAT-formatted microSD card for data storage, allowing students to easily extract data from their sensor nodes at the end of the experiment. All major components are modular: the sensing components, RTC, microSD, and microcontroller are all easily removed from the node and incorporated in other designs, meaning that the total “lost” or “sunk” cost per node is under $5.
3.2. Sensor Control
The sensor was developed using the Plumbing library for occam-π on the Arduino10. There are two interrupts from the outside world: the RTC (which triggers an interrupt once every minute) and the IR motion sensor (which can trigger an interrupt once every five seconds). There are two analog sensors (temperature and light intensity), and one device attached to the serial output line (the microSD logger). Whereas a solution written in C++ would likely need to leverage some kind of global state to pass sensor data from an interrupt routine (perhaps triggered by the motion sensor) into a control loop, our firmware has one process in the network that listens to each interrupt that we might receive from the outside. Both real.clock and motion use digital.input (defined in the Plumbing library) to wait on interrupts from the RTC and passive IR sensor. These processes then signal other processes that serve to throttle the rate at which either clock-based or motion-based events are logged (n.minute.ticker). When enough minutes have gone by to trigger a clock-based reading, or we have waited enough minutes to register another motion event, then a SIGNAL is generated to the get.type process.
10 Complete source code for the sensor can be found online on GitHub: https://github.com/jadudm/Paper-ES-Sensor/tree/57b3e14e6922a84d2d6f4a8d3c1034528ea5fcb5.
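The throttling just described is a small, reusable pattern in its own right. As a rough illustration only (a Go sketch of ours, not the occam-π code used on the sensor), a process that forwards one signal downstream for every n it receives might look like this:

    // nMinuteTicker mimics the role of n.minute.ticker: it counts incoming
    // signals and forwards a single signal downstream once every n of them.
    func nMinuteTicker(n int, in <-chan struct{}, out chan<- struct{}) {
        count := 0
        for range in {
            count++
            if count == n {
                count = 0
                out <- struct{}{}
            }
        }
    }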
Figure 5. A process network for monitoring room usage. (The network comprises the interrupt-driven processes clock and motion, the throttling processes n.minute.ticker and motion.throttle, and a core pipeline of init.packet, startup.reading, timed.reading, get.type, get.time, get.data and store, connected by SIGNAL and READING channels.)
The get.type process sits in our core pipeline, holding a largely uninitialized READING record. It is the only process in the core that contains an ALT, meaning we have isolated the non-determinism/potential randomness in our core data gathering pipeline into one location only. get.type watches three input channels, and depending on which fires, it populates the READING record with a flag indicating whether this was an initial reading taken at sensor startup (which can only happen once), a reading triggered by the clock, or a reading triggered by motion. Once we have tagged the record, it is populated with the current time (which we request from our RTC), the current temperature and light level (which is an instantaneous reading taken from the temperature and light sensors), and finally this data is serialized as plain text out to the microSD logging module.
3.3. Development Process
The sensor control code was developed incrementally over a period of six days, from January 1, 2011 through January 6, 2011. At the start of the process, we had no experience with the RTC, the microSD (which was not part of the initial design), or the sensor platform itself. At the time that software development commenced, the hardware had not yet been fully designed: the circuit existed only as a prototype on a breadboard. Our choice of occam-π as a language for embedded development made it possible to co-develop hardware and software simultaneously without concerns that the fundamental architecture of our control software might be unsound, or (for that matter) that we might introduce critical bugs along the way. The first commit11 explored only the real time clock:

    PROC main ()
      SEQ
        serial.setup (TX0, 57600)
        zero.clock (FALSE)
        CHAN SIGNAL s:
        CHAN [3]INT time:
        CHAN INT light:
        PAR
          current.time (s!, time!)
          adc (A0, VCC, s?, light!)
          display (time?, light?)
    :

Listing 4. The first exploration of the real-time clock: a three-process network reporting the current time and light level over the serial link.
This three-process network (current.time, adc, and display) only aspired to transmit the current time and light level back to the developer over a serial link. By the end of the first day of development, explorations were underway to store data to a 256KB EEPROM12. This low-cost integrated circuit was originally intended as the destination for the sensor’s data; as was discovered in testing, however, it would have made it difficult for the student researchers to extract their data (a fundamental step in their research), violating a design goal for the project. The utility of a process-oriented decomposition came when we replaced our data storage medium—originally an EEPROM, later a microSD card—and were able to avoid fundamental changes to the process network. A single process called store accepted
a READING structure and serialized it out to the EEPROM. When we switched to a microSD card (which makes it possible for students to easily read and process the data they have collected), we instead used a different physical protocol to access the storage medium, but our process network did not change. Instead, we inserted a new process that read from a channel of type READING and handled the low-level details correctly. This modular, type-safe approach to embedded programming allowed us to develop a useful system leveraging the Plumbing libraries quickly. Specifically, we found that the abstractions that have been developed with the intention of making complex tasks simple (like waiting patiently for an interrupt from the outside world) work incredibly well. After several weeks of deployment on multiple sensors, our efforts paid off: the students were able to determine that classrooms sat idle as much as 47% of the time (with lights on), which across campus amounts to a substantial energy loss. This information will be used to inform decisions about future renovations on campus (regarding building automation), and the project itself sets the stage for future collaborations at the intersection of computing and environmental science within the institution.
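In process-oriented terms, the storage change described above amounts to plugging a different consumer onto the same typed channel. A rough Go analogue of the idea (ours; the sensor firmware itself is occam-π, and the Reading fields and helper functions here are illustrative only):

    // Reading stands in for the occam-π READING record.
    type Reading struct {
        Kind        int   // startup, timed or motion-triggered
        Timestamp   int64 // from the RTC
        Temperature int
        Light       int
    }

    // Either storage process consumes the same channel of readings, so the
    // rest of the network does not change when one is swapped for the other.
    func storeToEEPROM(in <-chan Reading) {
        for r := range in {
            writeEEPROM(r) // low-level EEPROM protocol
        }
    }

    func storeToSD(in <-chan Reading) {
        for r := range in {
            writeSD(r) // serialise as text to the microSD logger
        }
    }

    func writeEEPROM(r Reading) { /* EEPROM protocol omitted */ }
    func writeSD(r Reading)     { /* microSD text logging omitted */ }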
4. Conclusion and Future Work
We see two important lines along which our future work should proceed: that which focuses on the community, and that which continues to explore fundamental issues regarding the implementation of concurrent and parallel programming languages in embedded contexts.
4.1. Growing and Supporting Community
Our porting of occam-π to the Arduino platform is an extension of previous work involving the successful use of the Transterpreter on a variety of small robotics platforms in educational contexts. What differentiates this work from previous efforts is our improved handling and abstraction over fundamentally complex aspects of hardware/software interaction and the large number of enthusiastic users in the Arduino community. In order to introduce this community to occam-π, we have begun work on a small, Creative Commons licensed book titled Plumbing for the Arduino. This book introduces the occam-π programming language and Plumbing libraries through a series of exercises grounded fully on the Arduino platform. Like all open projects, the book is a work in progress, but it has been successfully used by members of the community who (1) have little programming experience or (2) have no occam-π programming experience to become productive explorers and, in some cases, contributors to our ongoing efforts. We consistently use process diagrams in the book to introduce Plumbing and the architecture of the occam-π programs. These diagrams translate, in a straightforward fashion, into occam-π code. Given this correspondence between diagrams and code, there have been several attempts to create visual programming environments for occam-π. In [15], we describe a number of these efforts and our own ongoing efforts towards developing an interactive visual programming interface for a dataflow language. Currently, we are working on integrating the Plumbing library into our visual tool with the hope that we might introduce many of the concepts of dataflow programming without having to wrestle with the syntax of occam-π (or any other language for that matter). We will be investigating whether we can create a large and diverse enough set of components that occam-π can be easily used to generate sensors, like the one presented in this paper, for sensor prototyping applications. This would ultimately enable scientists to rapidly prototype low-cost sensors based on the Arduino platform, and program them visually using the occam-π language.
4.2. Implementing Concurrency
It is important to note that the Plumbing libraries do not guarantee the programmer protection from race hazards at a low level. For example, it is possible for two processes to claim that they are responsible for setting the hardware state of pin 13; one process might try to turn it off while another might try to turn it on—a classic race. Our runtime currently does not provide a mechanism for tracking these kinds of resources, but it could (at the expense of some of the already limited RAM resources on the Arduino). In the same spirit, we cannot currently protect the user from attempting to wait in multiple places on the same interrupt. On one hand, this might be useful: if a single pin goes high, we could want multiple (different) processes to wake up and begin executing. That said, there are interrupts for which this could be bad: if two processes were to attach to the serial receive interrupt, we would (again) have a race, where one process might get the first character, then the other process the second... or, one might starve the other. In the case of the former example, we could simply use our existing implementation of wait.for.interrupt and enable the use of BARRIERs on the Arduino—the process waiting for the interrupt could enroll on the BARRIER, and any processes wanting to synchronize on that event would also enroll on the same barrier. It is the second case for which we need protection, however: we do not generally want multiple processes to be able to respond to the same interrupt. Executing a bytecode interpreter on a processor executing at 16 MHz is, sometimes, a challenge. For example, we cannot write a pure-occam-π implementation of a serial receiver, as large programs with lots of parallel processes introduce long delays between opportunities for the serial handler to execute. Either we run occam-π on an Arduino with a faster processor (which does not exist), or we might look to other ways to speed up our runtime. There have been, in the past, explorations that seek either to translate occam-π to native C programs directly from the bytecode [16], to generate C from a new compiler [17], or to leverage existing frameworks like the Low Level Virtual Machine (LLVM) project [18,19]. The first would require a great deal more work—and would be unique to our toolchain. The second requires (at the least) updates to the virtual machine’s scheduler API. The third would require developing an entire backend for LLVM targeting the megaAVR series of processors. While improved performance would be nice, it has not yet become critical; we therefore acknowledge the potential need, but have not yet run into a situation where the Transterpreter on the Arduino has been completely inadequate.
5. Acknowledgements
This work was supported in part by the Department of Computer Science and the Department of Environmental Science at Allegheny College, as well as a grant from the Institute for Personal Robotics (http://roboteducation.org/). We wish to especially thank those students in the Spring 2011 offering of ES210: Research Methods at Allegheny College for their efforts and willingness to explore across disciplinary boundaries.
References
[1] C.A.R. Hoare. Communicating Sequential Processes. Prentice-Hall, London, 1985. ISBN: 0-13-153271-5.
[2] David Gay, Philip Levis, Robert von Behren, Matt Welsh, Eric Brewer, and David Culler. The nesC language: A holistic approach to networked embedded systems.
In Proceedings of the ACM SIGPLAN 2003 conference on Programming language design and implementation, PLDI ’03, pages 1–11, New York, NY, USA, 2003. ACM.
[3] Shah Bhatti, James Carlson, Hui Dai, Jing Deng, Jeff Rose, Anmol Sheth, Brian Shucker, Charles Gruenwald, Adam Torgerson, and Richard Han. MANTIS OS: an embedded multithreaded operating system for wireless micro sensor platforms. Mob. Netw. Appl., 10:563–579, August 2005. [4] Adam Dunkels, Bjorn Gronvall, and Thiemo Voigt. Contiki - a lightweight and flexible operating system for tiny networked sensors. In Proceedings of the 29th Annual IEEE International Conference on Local Computer Networks, LCN ’04, pages 455–462, Washington, DC, USA, 2004. IEEE Computer Society. [5] Ian Armstrong, Matthew Jadud, Michael Pirrone-Brusse, and Anthong Smith. The Flying Gator: Towards Aerial Robotics in occam-π. In Peter Welch, Adam Sampson, Fred Barnes, Jan Pedersen, Jan Broenink, and Jon Kerridge, editors, Communicating Process Architectures 2011, volume 68 of Concurrent Systems Engineering Series, pages 329–340, Amsterdam, June 2011. IOS Press. [6] Massimo Banzi, David Cuartielles, Tom Igoe, and Gianluca Martinoand David Mellis. The Arduino. http://www.arduino.cc/, February 2011. [7] Ben Fry and Casey Reas. Processing. http://processing.org/. [8] Hernando Barrag´an. Wiring. http://wiring.org.co/. [9] Tom ’spot’ Callaway. How to tell if a FLOSS project is doomed to FAIL. https://www. theopensourceway.org/wiki/How_to_tell_if_a_FLOSS_project_is_doomed_to_FAIL, 2009. [10] Jono Bacon. The Art of Community. Building the New Age of Participation. O’Reilly Media, 2009. [11] Christian L. Jacobsen and Matthew C. Jadud. The Transterpreter: A Transputer Interpreter. In Ian R. East, David Duce, Mark Green, Jeremy M. R. Martin, and Peter H. Welch, editors, Communicating Process Architectures 2004, volume 62 of Concurrent Systems Engineering Series, pages 99–106, Amsterdam, September 2004. IOS Press. [12] Jonathan Simpson, Christian Jacobsen, and Matthew C. Jadud. A Native Transterpreter for the LEGO Mindstorms RCX. In Alistair A. McEwan, Wilson Ifill, and Peter H. Welch, editors, Communicating Process Architectures 2007, volume 65 of Concurrent Systems Engineering, pages 339–348, Amsterdam, July 2007. IOS Press. [13] Matthew Jadud, Christian L. Jacobsen, Jon Simpson, and Carl G. Ritson. Safe parallelism for behavioral control. In 2008 IEEE Conference on Technologies for Practical Robot Applications, pages 137–142. IEEE, November 2008. [14] Matthew Jadud, Christian Jacobsen, and Adam Sampson. Plumbing for the arduino. http:// concurrency.cc/book/. [15] Jonathan Simpson and Christian L. Jacobsen. Visual process-oriented programming for robotics. In Communicating Process Architectures 2008, volume 66 of Concurrent Systems Engineering, pages 365– 380, Amsterdam, September 2008. IOS Press. [16] Christian L. Jacobsen, Damian J. Dimmich, and Matthew C. Jadud. Native Code Generation Using the Transterpreter. In P. Welch, J. Kerridge, and F. Barnes, editors, Communicating Process Architectures 2006, volume 64 of Concurrent Systems Engineering, pages 269–280, Amsterdam, September 2006. IOS Press. [17] Adam T. Sampson and Neil C. C. Brown. Tock: One year on, September 2008. Fringe presentation at Communicating Process Architectures 2008. [18] Carl G. Ritson. Translating etc to llvm assembly. In CPA, pages 145–158, 2009. [19] Chris Lattner and Vikram Adve. LLVM: A compilation framework for lifelong program analysis & transformation. In Proceedings of the International Symposium on Code Generation and Optimization, CGO ’04, pages 75–88, Washington, DC, USA, 2004. IEEE Computer Society.
Fast Distributed Process Creation with the XMOS XS1 Architecture James HANLON and Simon J. HOLLIS Department of Computer Science, University of Bristol, UK {hanlon , hollis} @cs.bris.ac.uk Abstract. The provision of mechanisms for processor allocation in current distributed parallel programming models is very limited. This makes difficult, or even prohibits, the expression of a large class of programs which require a run-time assessment of their required resources. This includes programs whose structure is irregular, composite or unbounded. Efficient allocation of processors requires a process creation mechanism able to initiate and terminate remote computations quickly. This paper presents the design, demonstration and analysis of an explicit mechanism to do this, implemented on the XMOS XS1 architecture, as a foundation for a more dynamic scheme. It shows that process creation can be made efficient so that it incurs only a fractional overhead of the total runtime and that it can be combined naturally with recursion to enable rapid distribution of computations over a system. Keywords. distributed process creation, distributed runtime, dynamic task placement, parallel recursion,
Introduction
An essential issue in the design of scalable, distributed parallel computers is the rate at which computations can be initiated, and results collected as they terminate [1]. This requires an efficient method of process creation capable of dispatching a program and data on which to operate to a remote processor. This paper presents the design, implementation, demonstration and evaluation of a process creation mechanism for the XMOS XS1 architecture [2]. Parallelism is being employed on an increasingly large scale to improve the performance of computer systems, particularly in high performance systems, but increasingly in other areas such as embedded computing [3]. As current programming models such as MPI (Message Passing Interface) provide limited support for automated management of processing resources, the burden of doing this mainly falls on the programmer. These issues are not relevant to the expression of a program as, in general, a programmer is concerned only with introducing parallelism (execution on multiple processors) to improve performance, and not how the computation is scheduled on the underlying system. When we consider that future high performance systems will run on the order of 10^9 threads [4], it is clear that the programming model must provide some means of dynamic processor allocation to remove this burden. This is the situation we have with memory in sequential systems, where allocation and deallocation is performed with varying degrees of automaticity. This observation is not new [5,6], but it is only as existing programming models and software struggle to meet the increasing scale of parallelism that the problem is again coming to light. For instance, capabilities for process creation and management were introduced in the MPI-2.0 specification, stating that: “Reasons for including process management in MPI are both technical and practical. Important classes of message-passing applications require this control. These include task farms, serial applications with parallel modules and prob-
lems that require a run-time assessment of the number and type of processes that should be started” [7]. Several MPI implementations support process creation and management functionality, but it is pitched as an ‘advanced’ feature that is difficult to use and problematic with many current job-scheduling systems. More encouragingly, language-level abstractions for dynamic process creation and placement have appeared recently in the Chapel [8] and X10 [9] languages, which are being developed by Cray and IBM respectively as part of DARPA’s High Productivity Computing Systems program. Both support these concepts as key ingredients in the design of parallel programs, but they are built on software communication libraries and statically-mapped program binaries. Consequently, they are subject to the same communication inefficiencies and inflexibility of single-program approaches. A run-time assessment of required processing resources concerns a large class of programs whose structure is irregular, such as unstructured-grid algorithms like the Spectral Element Method [10]; unbounded, such as recursively-structured algorithms like Branch-and-Bound search [11] and Adaptive Mesh Refinement [12]; or composite, where a program may be composed of different parallel subroutines that are themselves executed in parallel, possibly each with its own structure. These all require a means of dynamic processor allocation that is able to distribute computations over a set of processors, depending on requirements determined at runtime. The combination of parallelism and recursion is a powerful mechanism for growth which can be used to implement distribution efficiently. This must be supported with a mechanism for process creation with the ability to dispatch, initiate and terminate computations efficiently on remote processors. This paper presents the design and implementation of an explicit scheme for dynamic process creation in a distributed memory parallel computer. This work is intended to be a key building block for a more automatic scheme. The implementation is on the XMOS XS1 architecture, which has low-level provisions for concurrency, allowing a convincing proof-of-concept implementation. Based on this, the process creation mechanism is evaluated by combining it with controlled recursion in two simple algorithms to demonstrate the rate and granularity at which it is possible to create remote computations. Performance models are developed in each case to interpret the measured results and to make predictions for larger systems and workloads. This analysis highlights the efficiency, scalability and effectiveness of the concept and approach taken. The rest of this paper is structured as follows. Section 1 describes the XS1 architecture, the experimental platform and the notations and conventions used. Section 2 gives a brief overview of the design and implementation details. Section 3 presents the performance models and experimental and predicted results. Finally, Section 4 concludes and Section 5 discusses possible future extensions to the work.
1. Background
1.1. Platform
The XMOS XS1 processor architecture [2] is general-purpose, multi-threaded, scalable and has been designed from the ground up to support concurrency. It allows systems to be constructed from multiple XCore processors which communicate with each other through fast communication links. The key novel aspect of this architecture with respect to the work in this paper is the instruction set support for processes and communication.
Low-level threading and communication are key features, exposed with operations, for example, to provide synchronous and asynchronous fork-join thread-level parallelism and channel-based message passing communication. Provision of these features in hardware allows them to be performed in the same order of magnitude of time as memory references, branches and arithmetic. This allows efficient high-level notations for concurrency to be effectively built.
The system used to demonstrate and evaluate the proposed process creation mechanism is an experimental board called the XK-XMP-64 [13]. It connects together 64 XCore processors in 16 XS1-G4 devices which run at 400MHz. The G4 devices are interconnected in a 4-dimensional hypercube which equivalently can be viewed as a 2-dimensional torus. Mathematically, this is defined in the following way [14]:
Definition 1. A d-dimensional hypercube is a graph G = (N, E) where N is the set of 2^d nodes and E is the set of edges. Each node is labeled with a d-bit identifier. For any m, n ∈ N, an edge exists between m and n if and only if m ⊕ n = 2^k for 0 ≤ k < d, where ⊕ is the bitwise exclusive-or operator. Hence, each node has d = log |N| edges and |E| = d·2^(d−1).
Each core in the G4 package has a private 64kB memory and is interconnected via internal links to an integrated switch. It is convenient to view the whole system as a 6-dimensional hypercube. As each core can run 8 hardware threads, the system is capable of 512-way concurrency with an aggregate 25.6 GIPS performance.
1.2. Notation
For presentation of the algorithms in this paper, a simple imperative, block-structured notation is used. The following points describe the non-standard elements that appear in the examples.
1.2.1. Sequential and Parallel Composition
A set of instructions that are to be executed in sequence are composed with the ‘;’ separator. A sequence of instructions comprises a process. For example, the block { I1 ; I2 ; I3 } defines a simple process to perform three instructions, I1, I2 and I3 in sequence. Processes may be executed in parallel by composition within a block with the ‘|’ separator. Execution of a parallel block initiates the execution of the constituent processes simultaneously. The parallel block successfully terminates only when all processes have successfully terminated. This is referred to as synchronous fork-join parallelism. For example, the block declaration { P1 | P2 | P3 } denotes the parallel execution of three processes P1, P2 and P3.
1.2.2. Aliasing
The aliases statement is used to create new references to sub-sections of an array. For example, the statement A aliases B[i . . . j] sets A to refer to the sub-section of B in the index range i to j.
1.2.3. Process Creation
The on statement reveals explicitly to the programmer the process creation mechanism. The statement on p do P
is semantically equivalent to executing a call to P, except that process P is transmitted to processor p, which then executes P and communicates back any results using channels, leaving the original processor free to perform other tasks. By composing on in parallel, we can exploit multi-threaded parallelism to offload work while executing another process. For example, the statement { P1 | on p do P2 } causes P1 to be executed while P2 is offloaded and executed on processor p.
1.3. Measurements
All timing measurements presented were made with hardware timers, which are accessible through the ISA and have 10ns resolution. Constant values were extrapolated through the measurements taken by fitting performance models to the data.
1.4. Conventions
All logarithms are to the base 2. p is defined as the number of processors and is taken to be a positive power of two. A word is taken to be 4 bytes and is a unit of input in the performance models.
2. Implementation
The on statement causes the closure of a process P located at a guest processor to be sent to a remote host processor; the host then executes P and sends back any updated free variables of P stored at the guest. The execution of on is synchronous in this respect. The closure of a process P is a complete description of P allowing it to be executed independently and is defined in the following way:
Definition 2. The closure C of a process P consists of three elements: a set of arguments A, which represents the complete variable context of P as we don’t consider global variables, a set of procedure indices I and a set of procedures Q: C(P) = (A, I, Q) where |A| ≥ 0 and |I| = |Q| ≥ 1. Each argument a ∈ A is an ordered sequence of one or more integer values. Each process P ∈ Q is an ordered sequence of one or more instructions. IP is an integer value denoting the index of procedure P.
Each core maintains a fixed-size jump table denoted ‘jump’, which records the location of each procedure in memory. While the procedure addresses may not be consistent between cores, the indices are guaranteed to be. This allows relative branches to be expressed in terms of an index which is locally referenced at execution. Each node in the system is initialised with a minimal binary containing the process creation kernel. The complete program is loaded on node 0, from where parts of it can be copied onto other nodes to be executed.
2.1. Protocol
The process creation mechanism is implemented as a point-to-point protocol between a guest core and a host core. Any running thread is able to spawn the execution of a process on any other core. It consists of the following four phases.
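To fix ideas before walking through the phases, the closure of Definition 2 can be pictured roughly as the following record (a Go-flavoured sketch of ours; the real implementation is not written in Go and its field names are unknown to us):

    // Argument is an ordered sequence of one or more integer values.
    type Argument []int

    // Procedure is an ordered sequence of instructions, held here as raw words.
    type Procedure []uint32

    // Closure is everything a host needs to run a process independently:
    // its arguments, the indices I of the procedures it may call (used to
    // patch the host's jump table), and the procedure bodies Q themselves.
    type Closure struct {
        Args    []Argument
        Indices []int
        Procs   []Procedure
    }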
2.1.1. Connection Initialisation A guest initiates a connection by sending a single byte control token and a word identifying itself. It waits for an acknowledgment from the host indicating a host thread has been allocated and the connection is properly established. A core may host multiple guest computations, each on a different thread. 2.1.2. Transmission of Closure C(P) is transmitted in three parts. Firstly, a header is sent containing |A| and |Q|. Secondly, each a ∈ A is sent with a single word header denoting the type of the argument. For referenced arrays, this is followed by length(a) and the values contained. The host writes these directly into heap-allocated space and the argument value is set to this address. Single-value variables are treated similarly and constant values can be copied directly into the argument value. Lastly, each P ∈ Q is sent with a two word header denoting IP and length(P) in bytes. The host allocates space on the heap and receives the instructions of P from the guest, read from memory in word-chunks from jump[IP ] to jump[IP ] + length(P). On completion, the host sets jump[IP ] to the address of P on the heap. 2.1.3. Execution/Wait for Completion Once C has been successfully transmitted, the host initialises the thread’s registers and stack with the arguments of P and initiates execution. The connection is left open and the guest thread waits for the host to indicate P has halted. 2.1.4. Transmission of Results and Teardown Once P has halted, all referenced array and variable arguments contained in C (now the results) are transmitted back to the guest. The guest writes them back directly to their original locations. Once this has been completed, the connection is terminated. The guest continues execution and the host thread frees the memory allocated to the closure and yields. 2.2. Performance Model The runtime cost of this mechanism is captured in the following way: Definition 3. The runtime of process creation Tc is a function of the total size of the argument values n, procedure descriptions m and the results o and is given by Tc (n, m, o) = (Ci +Cw n +Cw m +Cw o) ·Cl where Ci and Cw are constants relating to initialisation and termination, and overhead per (word) value transmitted respectively. The value n is inclusive of the size of referenced arrays and hence o ≤ n. As all communication is synchronised, Cl is a constant factor overhead relating to the latency of the path between the guest and host processors. Normalising Cl = 1 to a single hop off-chip, the per-word overhead Cw was measured as 150ns. The initialisation overhead Ci is dependent on the size of the closure. 3. Demonstration and Evaluation The aim of this section is to demonstrate the use of process creation combined with parallel recursion to evaluate the performance of the design and its implementation in realising efficient growth. To do this, we develop performance models to combine with experimental results, allowing us to extrapolate to larger systems and inputs. We start with a simple algorithm to demonstrate the fast distribution of parallel computations and then show how this can be applied to a practical problem.
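As a quick, concrete reading of Definition 3 (our own sketch: only Cw = 150ns comes from the measurement above, and the value used for Ci is an arbitrary placeholder since the text notes it depends on the closure):

    package main

    import "fmt"

    // tc evaluates Definition 3: the cost of one process creation, given the
    // sizes (in words) of the arguments n, procedures m and results o, and
    // the path latency factor cl. All times are in nanoseconds.
    func tc(ci, cw, cl float64, n, m, o int) float64 {
        return (ci + cw*float64(n) + cw*float64(m) + cw*float64(o)) * cl
    }

    func main() {
        const cw = 150.0  // measured per-word overhead, single off-chip hop
        const ci = 5000.0 // placeholder initialisation overhead
        fmt.Printf("Tc = %.1f us\n", tc(ci, cw, 1.0, 64, 256, 64)/1000.0)
    }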
    proc distribute (t, n) is
      if n = 1
      then node (t)
      else { distribute (t, n/2) |
             on t + n/2 do distribute (t + n/2, n/2) }

Figure 1. A recursive process distribute to rapidly distribute another process node over a set of processors.
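For readers more at home in a mainstream language, the following Go sketch (ours) mirrors the recursive doubling of distribute, with goroutines standing in for remote creations and a WaitGroup providing the fork-join termination:

    package main

    import (
        "fmt"
        "sync"
    )

    // distribute places node() on every processor in [t, t+n) by repeatedly
    // halving the range: one half is handled locally, the other is "offloaded".
    func distribute(t, n int, node func(int), wg *sync.WaitGroup) {
        defer wg.Done()
        if n == 1 {
            node(t)
            return
        }
        wg.Add(2)
        go distribute(t+n/2, n/2, node, wg) // stands in for: on t+n/2 do distribute(...)
        distribute(t, n/2, node, wg)
    }

    func main() {
        var wg sync.WaitGroup
        wg.Add(1)
        distribute(0, 8, func(id int) { fmt.Println("node", id) }, &wg)
        wg.Wait()
    }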
3.1. Rapid Process Distribution
The algorithm distribute given in Figure 1 is inspired by [1] and works by spawning a new copy of itself on a remote processor each time it recurses. Each process then itself recurses, continuing this behaviour and hence, each level of the recursion subdivides the set of processors in half, resulting in a doubling of the capacity to initiate computations. This growth follows the structure of a binary tree. When each instance of distribute executes with n = 1, the node process is executed and the recursion halted. The parameter t indicates the node identifier and the algorithm is executed from node 0 with t = 0 and n = p.
3.1.1. Runtime
The hypercube interconnection topology of the XK-XMP-64 provides an optimal transport in terms of hop distance between remote creations; this is established by the following theorem.
Theorem 1. Every copy of distribute is always created on a neighbouring node when executed on a hypercube.
Proof. Let H = (N, E) be a d-dimensional hypercube. When distribute is executed with t = 0 and n = |N|, starting at node 0 on H, the recursion follows the structure of a binary tree of depth d = log |N|, where identifiers at level i are multiples of |N|/2^i. A node p at depth i with identifier k·|N|/2^i creates a new remote child node c with identifier k·|N|/2^i + |N|/2^(i+1). As |N| = 2^d, c = k·2^(d−i) + 2^(d−i−1) and hence, p ⊕ c = 2^(d−i−1).
Given that m and n are fixed, that o = 0 (there are no results) and that from Theorem 1 we can normalise Cl to 1, the runtime Tc(n, m, o) of the on statement in distribute is Θ(1), which we define as the initialisation overhead Cj. Using this, we can express the parallel runtime Td of distribute on p processors. In each step the number of active processes doubles, but we count the runtime at each level of recursion, which terminates when n/2^i = 1, or i = log n. Hence,

    Td(p) = Σ_{i=1}^{log p} (Tc + Co) = (Cj + Co)·log p    (1)
where Co is the sequential overhead at each level. Cj was measured as 18.4μs and Co was measured as 60ns.

Figure 2. Measured execution time of distribute over varying numbers of processors: (a) measured vs. predicted execution time (μs) against p; (b) execution time (μs) for each level of recursion of distribute, which clearly shows the inter- vs. intra-chip latencies.

3.1.2. Results
Figure 2a gives the predicted and measured execution time of distribute as a function of the number of processors. The prediction almost exactly matches the runtime given by Equation 1. Figure 2b shows the discrepancy between the measured and predicted results more clearly, by giving the measured execution time for each level in the recursion, that is, the difference between consecutive points in Figure 2a. It shows that the assumption made based on Theorem 1 does not hold and that the first two levels take fractionally less time than the
last four levels (3.85μs). This is due to the reduced on-chip communication costs. Overall though, each level of recursion completes on average in 18.9μs and it takes only 114.60μs to populate all 64 processors. Moreover, using the performance model given by Td , we can extrapolate to larger p than is possible to measure with the current platform. For example, when p = 1024, Td (1024) = 190μs. 3.1.3. Remarks By using the performance model to make predictions, we have assumed a hypercube topology and efficient support for concurrency. Although other architectures and larger systems cannot make such provisions, the model and results provide a reasonable lower bound on execution time with respect to the approach described. The hypercube has rich communication properties and supports exponential growth, but it does not scale well due to the number of connections at each node and length of wires in realistic packagings. Although distribute has optimal single-hop behaviour and we obtain peak performance, it is well known that efficient embeddings of binary trees into lower-degree networks such as meshes and tori exist [14], allowing reasonable dispersion. In this case, the granularity of process creation would have to be chosen to match the capabilities of the architecture. Provision of efficient ISA-level operations for processes and communications allows fine-grained performance, particularly in terms of short messages. Many current architectures do not support these operations at a such a low-level and cannot exploit the full potential of this approach, although again it generalises at a coarser granularity of message size to match the relative performance of these operations. 3.2. Mergesort Mergesort is a well known sorting algorithm [15] that works by recursively halving a list of unsorted numbers until unit sub-lists are obtained. These are then successively merged together such that each merging step produces a sorted sub-list, which can be performed in time Θ(n) for sub-lists of size n/2. Figure 3a gives the sequential mergesort algorithm seq-msort . Mergesort’s branching recursive structure matches that of distribute , allowing us to combine them to obtain a parallel version. Instead of sequentially evaluating the recursive calls, conditional on some threshold value Cth , a local recursive call is made in parallel with the
    proc seq-msort (A) is
      if |A| > 1
      then { a aliases A[0 .. |A|/2 − 1] ;
             b aliases A[|A|/2 .. |A|] ;
             seq-msort (a) ;
             seq-msort (b) ;
             merge (A, a, b) }
    (a)

    proc par-msort (t, n, A) is
      if |A| > 1
      then { a aliases A[0 .. |A|/2 − 1] ;
             b aliases A[|A|/2 .. |A|] ;
             if |A| > Cth
             then { par-msort (t, n/2, a) |
                    on t + n/2 do par-msort (t + n/2, n/2, b) }
             else { par-msort (t, n/2, a) ;
                    par-msort (t + n/2, n/2, b) } ;
             merge (A, a, b) }
    (b)

Figure 3. Sequential and parallel mergesort processes: (a) seq-msort; (b) par-msort.
second call, which is migrated to a remote core. This threshold is used to control the extent to which the computation is distributed. In each of the experiments, for an input of size 2^k and available processors p = 2^d, the threshold is set as 2^k/p. The approach taken in distribute is used to control the placements of each of the sub-computations. Initially, the problem is split in half; this will have the greatest benefit to the execution time. Depending on the problem size, further remote branchings of the problem may not be economical, and the remaining steps should be evaluated locally, in sequence. In this case, the algorithm simply reduces to seq-msort. This parallel formulation of mergesort is essentially just distribute with additional work and communication overhead, but it will allow us to more concretely quantify the relative costs of process creation. The parallel implementation of mergesort, par-msort, is given in Figure 3b. It uses the same sequential merge procedure, and the parameters t and n control the placement of processes in the same way as they were used with distribute. We can now analyse the performance and behaviour of par-msort and the process creation mechanism by looking at the parallel runtime.
3.2.1. Runtime
We first define the runtime of the sequential components of par-msort. This includes the sequential merging and sorting procedures. The runtime Tm of merge is linear and is defined as Tm(n) = Ca·n + Cb for constants Ca, Cb > 0, relating to the per-word and per-merge overheads respectively. These were measured as Ca = 90ns and Cb = 830ns. The runtime Ts(n, 1) of seq-msort is expressed as a recurrence:

    Ts(n, 1) = 2·Ts(n/2, 1) + Tm(n)    (2)

which has the solution

    Ts(n, 1) = n·(Cc·log n + Cd)    (3)

for constants Cc, Cd > 0. These were measured as Cc = 200ns and Cd = 1200ns. Based on this we can express the runtime of par-msort as the combination of the costs of creating new processes, moving data, merging and sorting sequentially. The key component of this is the cost Tc, relating to the on statement in the parallel formulation, which is defined as Tc(n) = Ci + 2·Cw·n.
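The same threshold-controlled structure is easy to mimic with goroutines. The sketch below (ours, with shared slices standing in for the transfer of array sections between cores) makes the same local/remote decision on the sub-array length:

    // parMsort sorts a in place, recursing in parallel while the slice is
    // larger than the threshold th, and sequentially below it.
    func parMsort(a []int, th int) {
        if len(a) <= 1 {
            return
        }
        left, right := a[:len(a)/2], a[len(a)/2:]
        if len(a) > th {
            done := make(chan struct{})
            go func() { parMsort(right, th); close(done) }() // "offloaded" half
            parMsort(left, th)                               // local half
            <-done
        } else {
            parMsort(left, th)
            parMsort(right, th)
        }
        merge(a, left, right)
    }

    // merge combines the two sorted halves back into a.
    func merge(a, left, right []int) {
        tmp := make([]int, 0, len(a))
        i, j := 0, 0
        for i < len(left) && j < len(right) {
            if left[i] <= right[j] {
                tmp = append(tmp, left[i])
                i++
            } else {
                tmp = append(tmp, right[j])
                j++
            }
        }
        tmp = append(tmp, left[i:]...)
        tmp = append(tmp, right[j:]...)
        copy(a, tmp)
    }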
This definition of Tc follows because we can normalise Cl to 1 (due to Theorem 1), the size of the procedures sent is constant, and the number of argument and result words are both n. The initialisation overhead Ci was measured as 28μs, larger than that for distribute as the closure contains the descriptions of merge and par-msort. For the parallel runtime, the base sequential case is given by Equation 2. With two processors, the work and execution time can be split in half at the cost of migrating the procedures and data:

    Ts(n, 2) = Tc(n/2) + Ts(n/2, 1) + Tm(n).

With four processors, the work is split in half at a cost of Tc(n/2) and then in quarters at a cost of Tc(n/4). After the data has been sequentially sorted in time Ts(n/4, 1) it must be merged at the two children of the master node in time Tm(n/2), and then again at the master in time Tm(n):

    Ts(n, 4) = Tc(n/2) + Tc(n/4) + Tm(n/2) + Tm(n) + Ts(n/4, 1).

Hence, in general we have:

    Ts(n, p) = Σ_{i=1}^{log p} [ Tc(n/2^i) + Tm(n/2^(i−1)) ] + Ts(n/p, 1)

for n ≥ p, as each leaf sub-process of the sorting computation must operate on at least one data item. We can then express this precisely by substituting our definitions for Ts, Tc and Tm and simplifying:

    Ts(n, p) = Cw·(2n/p)·(p − 1) + Ci·log p + Ca·(2n/p)·(p − 1) + Cb·log p + (n/p)·(Cc·log(n/p) + Cd)
             = (2n/p)·(p − 1)·(Cw + Ca) + (Ci + Cb)·log p + (n/p)·(Cc·log(n/p) + Cd)    (4)

For p = 1, this reduces to Equation 3. This definition allows us to express a lower bound and a minimum for the runtime.
3.2.2. Lower Bound
We can give a lower bound T^m_s on the parallel runtime Ts(n, p) such that ∀n, p: Ts(n, p) ≥ T^m_s. This is obtained by considering the parallel overhead, that is, the cost of distributing the problem over the system. In this case it relates to the cost of process creation, including moving processes and their data, the Tc component of Ts:

    T^m_s(n, p) = Σ_{k=1}^{log p} Tc(n/2^k) = Σ_{k=1}^{log p} ( Ci + 2·Cw·(n/2^k) ) = Ci·log p + Cw·(2n/p)·(p − 1).    (5)
Equation 5 is then the sum of the costs of process creation and movement of input data. When n = 0, T^m_s relates to Equation 1; this is the cost of transmitting and initiating just the computations over the system. For n > 0, it also includes the cost of moving the data.
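Equations 4 and 5 are simple enough to evaluate directly. The following Go sketch (ours) plugs in the constants quoted in the text; the absolute numbers it produces depend on exactly how those constants are interpreted, so it should be read as an illustration of the model's shape rather than a reproduction of the plotted figures:

    package main

    import (
        "fmt"
        "math"
    )

    // Constants quoted in the text, in nanoseconds.
    const (
        ca = 90.0    // merge, per word
        cb = 830.0   // merge, per call
        cc = 200.0   // sequential sort, log-factor per word
        cd = 1200.0  // sequential sort, constant per word
        cw = 150.0   // transfer, per word
        ci = 28000.0 // initialisation of `on` for par-msort
    )

    // ts evaluates Equation 4 for n input words on p processors (p a power
    // of two, n >= p); for p = 1 it reduces to Equation 3.
    func ts(n, p float64) float64 {
        if p == 1 {
            return n * (cc*math.Log2(n) + cd)
        }
        return (2*n/p)*(p-1)*(cw+ca) + (ci+cb)*math.Log2(p) + (n/p)*(cc*math.Log2(n/p)+cd)
    }

    // lowerBound evaluates Equation 5: process creation plus data movement only.
    func lowerBound(n, p float64) float64 {
        return ci*math.Log2(p) + cw*(2*n/p)*(p-1)
    }

    func main() {
        for _, p := range []float64{1, 64, 1024} {
            fmt.Printf("p=%4.0f  Ts=%8.2f ms  lower bound=%8.2f ms\n",
                p, ts(8192, p)/1e6, lowerBound(8192, p)/1e6)
        }
    }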
3.2.3. Minimum
Given an input of length m ≤ n for some sub-computation of par-msort, creation of a remote branch is beneficial only when the cost of this is less than the local sequential case:

    Tc(m/2) + Ts(m/2, 1) + Tm(n) < Ts(m, 1)
    Tc(m/2) + Ts(m/2, 1) + Tm(n) < 2·Ts(m/2, 1) + Tm(m)
    Tc(m/2) < Ts(m/2, 1)

Hence, initiation of a remote sorting process for an array of length n is beneficial only when Tc(n) < Ts(n, 1). That is, the cost of remotely initiating a process to perform half the work and receiving the results must be less than the cost of sequentially sorting m/2 elements. Therefore, at the inflection point we have

    Tc(n) = Ts(n, 1).    (6)

3.2.4. Results
Figure 4 shows the measured execution time of par-msort as a function of the number of processors used, for varying input sizes. Figure 4a shows just three small inputs. The smallest possible input is 256 bytes, as the minimum size for any sub-computation is 1 word. The minimum execution time for this size is at p = 4 processors, when the array is subdivided twice into 64 byte sections. This is the point given by Equation 6 and indicates directly the total cost incurred in offloading a computation. For p < 4, the cost of sorting sequentially dominates the runtime, and for p > 4, the cost of creating new processes and transferring the array sections dominates the runtime. With the next input of size 512 bytes, the minimum moves to p = 8, where the array is again divided into 64 byte sections. This holds for each input size and in general gives us the minimum size for which creating a new process will further reduce the runtime. The runtime lower bound T^m_s(0, p) given by Equation 5 is also plotted in Figure 4a. This shows the small and sub-linear cost with respect to p of the overheads incurred with the distribution and management of processes around the system. Relative to Ts(64, p) this constitutes most of the overall work performed, which is expected as the array is fully decomposed into unit sections. For larger sized inputs, as presented in Figure 4b, this cost becomes just a fraction of the total work performed. Figure 5 shows predicted execution times for par-msort for larger p and n. Each plot contains the execution time Ts as defined by Equation 4, and T^m_s with and without the transfer of data. Figure 5a gives results for the smallest input size possible to sort on 1024 cores (4kB) and includes the measurements for T^m_s(0, p) and Ts. It reiterates what was shown in Figure 4a and shows that beyond 64 cores, very little penalty is incurred to create up to 1024 sorting instances, with T^m_s accounting for around 23% of the total runtime for larger systems. This is due to the exponential growth of the distribution mechanism. Figure 5b gives results for the largest measured input of 32kB, showing the same trends, where T^m_s this time is around just 3% of the runtime between 64 and 1024 cores. Figure 5c and Figure 5d present predictions made by the performance model for more realistic workloads of 10MB and 1GB respectively. Figure 5c shows that 10MB could be sorted sequentially in around 7s and in parallel in at least 0.6s. Figure 5d shows that 1GB could be sorted in just under 15 minutes sequentially, or at least 1 minute in parallel. What these results
Figure 4. Measured execution time of par-msort as a function of the number of processors. (a) highlights the minimum execution time and the T^m_s lower bound.
make clear is that the distribution of the input data dominates and bounds the runtime, and that the distribution of data constituting the process descriptions is a negligible proportion of the overall runtime for reasonable workloads. The relatively small sequential workload O((n/p)·log(n/p)) of mergesort, which decays quickly as p increases, emphasises the cost of data distribution. For heavier workloads, such as O((n/p)^2), we would expect to see a much more dramatic reduction in execution time and the cost of data distribution still eventually to bound runtime, but then by a relatively fractional amount.
4. Conclusions
This paper presents the design, implementation, demonstration and evaluation of an efficient mechanism for dynamically creating computations in a distributed memory parallel computer. It has shown that a computation can be dispatched to a remote processor in just tens of microseconds, and when this mechanism is combined with recursion, it can be used to efficiently implement parallel growth. The distribute algorithm demonstrates how an empty array of processors can be populated with a computation exponentially quickly. For 64 cores, it takes just 114.60μs and for 1024 cores this will be of the order of 190μs. The par-msort algorithm extends this by performing additional computational work and communication of data, which allowed us to obtain a clearer picture of the cost of process creation with respect to varying problem sizes. As the cost of transferring and invoking remote computations is related primarily to the size of the closure, this cost grows slowly with system size and is independent of data. With a 10MB input, it represents around just 0.001% of the runtime. The sorting results also highlight two important issues: the granularity at which it is possible to create new processes, and the costs of data movement. They show that the computation can be subdivided to operate on just 64 byte chunks and for performance to still be improved. The cost of data movement is significant, relative to the small amount of work performed at each node; for more intensive tasks, these costs would diminish. However, these results assume a worst case, where all data originates from a single core. In other systems, this cost may be reduced by concurrent access through a parallel file system or from prior data distribution. The XS1 architecture provides efficient support for concurrency and communications and the XK-XMP-64 provides an optimal transport for the described algorithms, so we expect our lightweight scheme to be fast, relative to the performance of other distributed systems.
Figure 5. Predicted performance of par-msort for larger n and p ≤ 1024 (all plots log-log, time in ms against p): (a) n = 64 (256B), with measured results up to 64 cores; (b) n = 8192 (32kB), with measured results up to 64 cores; (c) n = 2621440 (10MB); (d) n = 268435456 (1GB). Each plot shows T^m_s(0, p), T^m_s(n, p) and Ts(n, p).
Hence, the results provide a convincing proof-of-concept implementation, demonstrating the kind of performance that is possible and, with respect to the topology, establish a reasonable lower bound on the performance of the approach presented. The results generalise to more dynamic schemes where placements are not perfect, and to other, larger architectures such as supercomputers, where interconnection topologies are less well connected and communication is less efficient. In these cases, the approach applies at a coarser granularity with larger problem sizes to match the relative performance.
5. Future Work
Having successfully designed and implemented a language and runtime allowing explicit process creation with the on statement, we will continue with our focus on the concept of growth in parallel programs and plan to extend the work in the following ways. Firstly, by looking at how placement of process closures can be determined automatically by the runtime, relieving the programmer of having to specify this. Secondly, by implementing the language and runtime with C and MPI to target a larger platform, which will provide a more scalable demonstration of the concepts and their generality. And lastly, by looking at generic optimisations that can be made to the process creation mechanism to improve overall performance and scalability. More details about the current implementation are available online at http://www.cs.bris.ac.uk/~hanlon/sire,
where news of future developments will also be published. Acknowledgments The authors would like to thank XMOS for their support, in particular from David May, Henk Muller and Richard Osborne. References [1] David May. The Transputer revisited. In Millennial Perspectives in Computer Science: Proceedings of the 1999 Oxford-Microsoft Symposium in Honour of Sir Tony Hoare, pages 215–246. Palgrave Macmillan, 1999. [2] David May. The XMOS XS1 Architecture. XMOS Ltd., October 2009. http://www.xmos.com/ support/documentation. [3] Asanovic, Bodik et al. The Landscape of Parallel Computing Research: A View from Berkeley. Technical Report UCB/EECS-2006-183, EECS Department, University of California, Berkeley, Dec 2006. http: //www.eecs.berkeley.edu/Pubs/TechRpts/2006/EECS-2006-183.html. [4] Dongarra, J., Beckman, P. et al. International Exascale Software Project Roadmap. Technical Report UTCS-10-654, University of Tennessee EECS Technical Report, May 2010. http://www.exascale.org/. [5] D. May. The Influence of VLSI Technology on Computer Architecture [and Discussion]. Philosophical Transactions of the Royal Society of London. Series A, Mathematical and Physical Sciences, 326(1591):pp. 377–393, 1988. [6] Per Brinch Hansen. The nature of parallel programming. Natural and Artifical Parallel Computation, pages 31–46, 1990. [7] MPI 2.0. Technical report, Message Passing Interface Forum, November 2003. http://www. mpi-forum.org/docs/. [8] B.L. Chamberlain, D. Callahan, and H.P. Zima. Parallel programmability and the Chapel language. International Journal of High Performance Computing Applications, 21(3):291–312, 2007. [9] Philippe Charles, Christian Grothoff, Vijay Saraswat, Christopher Donawa, Allan Kielstra, Kemal Ebcioglu, Christoph von Praun, and Vivek Sarkar. X10: an object-oriented approach to non-uniform cluster computing. In OOPSLA ’05: Proceedings of the 20th annual ACM SIGPLAN conference on Objectoriented programming, systems, languages, and applications, pages 519–538, New York, NY, USA, 2005. ACM. [10] A. Patera. A spectral element method for fluid dynamics: Laminar flow in a channel expansion. Journal of Computational Physics, 54(3):468–488, June 1984. [11] Bernard Gendron and Teodor Gabriel Crainic. Parallel branch-and-bound algorithms: Survey and synthesis. Operations Research, 42(6):1042–1066, 1994. [12] Marsha J Berger and Joseph Oliger. Adaptive mesh refinement for hyperbolic partial differential equations. Journal of Computational Physics, 53(3):484 – 512, 1984. [13] XMOS. XK-XMP-64 Hardware Manual. XMOS Ltd., Feburary 2010. http://www.xmos.com/ support/documentation. [14] F. Thomson Leighton. Introduction to parallel algorithms and architectures: array, trees, hypercubes. Morgan Kaufmann Publishers Inc., San Francisco, CA, USA, 1992. [15] D. E. Knuth. The Art of Computer Programming, volume 3, Sorting and Searching, chapter 5.2.4, Sorting by Merging, pages 158–168. Reading, MA: Addison-Wesley, 2nd ed. edition, 1998.
Serving Web Content with Dynamic Process Networks in Go James WHITEHEAD II Oxford University Computing Laboratory, Wolfson Building, Parks Road Oxford, OX1 3QD, United Kingdom [email protected] Abstract. This paper introduces webpipes, a compositional web server toolkit written using the Go programming language as part of an investigation of concurrent software architectures. This toolkit utilizes an architecture where multiple functional components respond to requests, rather than the traditional monolithic web server model. We provide a classification of web server components and a set of type definitions based on these insights that make it easier for programmers to create new purpose-built components for their systems. The abstractions provided by our toolkit allow servers to be deployed using several concurrency strategies. We examine the overhead of such a framework, and discuss possible enhancements that may help to reduce this overhead. Keywords. concurrency, web-server, software architecture, golang, Go programming language
Introduction
The construction of a web server is an interesting case study for concurrent software design. Clients connect to the server and request resources using the text-based HTTP [1] protocol. These resources may include various types of content, such as static documents, images, or dynamic content provided by some web application. In ideal conditions, the web server would handle these requests sequentially, ensuring that each is served as quickly as possible. Unfortunately, the server is often capable of producing content much faster than the client is capable of receiving it. When confronted with actual workloads in real-world conditions, web servers must be capable of responding to many clients concurrently. There are several approaches to providing this concurrent behaviour. The ubiquitous Apache ‘httpd’ web server uses a combination of process and/or thread pools in order to respond to requests [2]. Lighttpd [3] and Nginx [4] both utilize an event-driven asynchronous architecture [5] to ensure scalability. Yaws [6], written in Erlang, and occwserv [7], written in occam-π, both make use of lightweight threads. In addition to the different approaches for concurrency, each web server approaches the problem of serving web requests differently. This paper introduces webpipes [8], a compositional web server toolkit, written as part of an investigation of concurrent software architecture. This toolkit enables the programmer to construct multi-purpose web servers where the configuration and program code directly reflect the actual behaviour of the server. Built upon the premise that any web request can be fulfilled by a series of single-purpose components, webpipes allows even complicated web configurations to be expressed in a way that is clear and understandable. The webpipes toolkit is an example of a more general component-based architecture, where networks of components communicate with each other via message passing over explicit channels to process and fulfill requests. Although this particular implementation makes
use of specific Go language features, the architecture and techniques used should translate well to any language that supports process-oriented programming and message passing. The main contributions of this paper are a compositional architecture for constructing processing pipelines, and an implementation of this architecture for serving web requests. Combined, these are an example of using concurrency as a software design principle, rather than a feature addition to an otherwise sequential program. Additionally, we present our impressions of the Go programming language for concurrent programming. The rest of the paper is organised as follows: Section 1 provides a short introduction to the Go programming language and the features that are relevant to the implementation of the webpipes toolkit. The design and implementation of the toolkit is presented in Section 2, including both an architectural overview and details of individual server components. Section 4 addresses the performance of the toolkit, while we examine the initial conclusions and discuss future work in Section 5. 1. Go Programming Language Go is a statically-typed, systems programming language with a syntax reminiscent of C. Programs are compiled to native code and are linked with a small runtime environment that performs automatic memory management and scheduling for lightweight processes called ‘goroutines’. It also features a concurrency model heavily inspired by CSP [9] where goroutines communicate with each other via message passing over explicit channels. Pointers are a feature of the language, however the type system does not allow for pointer arithmetic. In this paper we focus on the features of the Go programming language that are used in the webpipes toolkit and may be unfamiliar to the reader. In particular we will not cover the syntax or basic semantics of the language, which can be found in the official language specification [10]. More comprehensive information on the language can be found on the language website [11]. 1.1. Concurrency The feature of Go that is most relevant to this work is the built-in support for concurrency. This includes a control structure called a goroutine, a cross between a lightweight thread and a coroutine. Spawning a new goroutine is a matter of prefixing a function call with the go keyword. The evaluation of the call will execute in a separate goroutine, while the calling goroutine continues execution with the next statement. The cost of creating a goroutine is mainly the allocation of the initial stack, plus the cost of the function call itself. The stack of a goroutine is segmented, starts small, and grows on demand. This allows larger number of goroutines to be spawned without the massive resource consumption associated with using operating system threads. Once a goroutine has been created, its subsequent activity is completely independent of its creator, except that they can share memory and may communicate with each other through channels. Shared memory among goroutines allows for efficient implementation of certain algorithms, but its use is generally discouraged. The documentation for the Go language states: “Don’t communicate by sharing memory; share memory by communicating.” and this pattern can be seen throughout the existing code base. A channel in Go is an explicitly typed, first-class value that provides synchronous manyto-many communication between goroutines. Channels may carry any first-class value, including functions or even other channels. 
Channels are dynamically allocated using the make function, which takes the type of the channel to be created and optionally the size of the channel’s buffer. By setting the buffer of a channel to a value greater than 0, sends are asynchronous as long as the buffer is not
full, and similarly with receives when the buffer is non-empty. Here are two example channel declarations:

    primes := make(chan uint, 10)   // buffered uint channel with 10 slots
    writers := make(chan string)    // unbuffered channel of string values
Sending a value v on a channel ch is written ch <- v. Receiving from a channel ch is the expression <-ch, which can be used anywhere an expression can be used, including control structures and assignments. Channels are bidirectional when created, but can be constrained to allow only sending or receiving using type conversion, as shown in the following:

    var primes_recv <-chan uint = primes
    primes_send := chan<- uint(primes)
The first assignment is an example of an implicit type conversion to the reading end of a channel of unsigned integers (<-chan uint). This conversion may happen implicitly during assignment and during the resolution of function arguments and results. The second assignment shows an explicit cast from a bidirectional channel to the sending end (chan<- uint). These examples also illustrate the difference between explicit variable declaration and the short form of declaration using the := operator; the type of a value declared this way is inferred automatically at compilation. Following the execution of this code there are three references to the same channel: primes, which is still bidirectional, and two references to the send and receive ends. Although the directionality of a channel can be reduced in this way, you cannot obtain a bidirectional channel reference from a unidirectional one.

1.1.1. Idiomatic Goroutines

Since the execution of a goroutine is independent of the calling code, it is a common Go idiom to pass a channel into the goroutine so it can communicate its status to the caller. This might look something like this:

    func doSomething(done chan bool) {
        // do some work here
        done <- true
    }

    done := make(chan bool)
    go doSomething(done)   // spawn a new goroutine
    isDone := <-done       // wait for it to finish
The type of the channel could be anything; in this case we chose the bool type, but it could just as well be a channel of numbers or strings. It is not necessary to assign the value received over the channel; it can simply be discarded.

In the event that a process needs to wait for more than one channel event, but cannot be sure which event will occur next, the select statement can be used. Similar to the ALT provided by occam, if multiple cases can proceed, the runtime system makes a pseudo-random fair choice to decide which communication event proceeds. If no case can proceed, the default case (if supplied) is taken. This allows for the implementation of non-blocking tests of send and receive events, where the communication only proceeds if it would not block.

1.2. Objects and Methods

Go does not have classes and does not support the kind of class-based object-oriented style with inheritance popularised by C++ and Java. However, Go does allow the developer to declare new named types, and then allows for the definition of methods that act on these
types. This style of programming is very general, in that methods can be defined for any sort of data. Take the following type definitions for binary and unary functions on integers:

    type UnaryIntFunc func(int) int
    type BinaryIntFunc func(int, int) int
Although it may seem a bit odd to readers who are more familiar with traditional object-oriented programming, we can define methods on these new types. For example, consider the Curry method for binary integer functions:

    func (fn BinaryIntFunc) Curry(x int) UnaryIntFunc {
        return func(y int) int {
            return fn(x, y)
        }
    }
This method takes a BinaryIntFunc as its receiver (the terminology used in Go to indicate the value on which the method is invoked) and takes a single int argument. It then returns a unary function in the form of an anonymous closure. This function represents the partially evaluated form of the binary function with the first argument already fixed. These definitions can be used to manually curry any binary function on integers, as in the following:

    var Add BinaryIntFunc = func(x, y int) int {
        return x + y
    }

    AddTwo := Add.Curry(2)
    five := AddTwo(3)
    seven := AddTwo(5)
We should point out a few subtleties in this example. Firstly, the declaration of the Add function uses the full form of variable declaration, where the type is explicitly specified. This is because any implicit form of declaration will assume the simplest type, in this case the underlying func(int, int) int type. Since there is no Curry method defined for that type, the program would fail to compile. By specifying the type explicitly, we give the compiler the information it needs to locate the Curry method and compile the program. Secondly, in our definition of the Add function, we have omitted the type specifier for the first argument. When a function has multiple consecutive arguments of the same type, only the last type specifier is required.

The most important thing to take away from this example is that the notion of an "object" in Go is quite different to that of other object-oriented programming languages. Although the same sort of construct is possible in Java through the use of class wrappers and anonymous classes, the Go program is succinct and clear. This technique ends up being a very powerful feature in the design and implementation of the webpipes toolkit.

1.3. Interface Types

Rather than providing an explicit type hierarchy for objects, Go provides a way to specify the behaviour of types using interfaces. An interface type specifies a set of methods that a type must provide in order to implement the interface. For example, the Writer interface is defined by the io package in the standard libraries:

    type Writer interface {
        Write(p []byte) (n int, err os.Error)
    }
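For illustration, a hypothetical type (not part of the toolkit or the standard library) that simply counts the bytes passed to it would satisfy this interface just by providing the method, with no explicit declaration of intent:

    // Illustrative sketch only: a writer that just counts bytes.
    // It satisfies io.Writer implicitly by providing a matching Write method.
    type CountingWriter struct {
        count int
    }

    func (w *CountingWriter) Write(p []byte) (n int, err os.Error) {
        w.count += len(p) // record how many bytes were "written"
        return len(p), nil
    }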
A type satisfies this interface if it implements a method called Write that takes a slice¹ of bytes and returns both an integer and an os.Error object (Go allows a function to return multiple results). The documentation for this interface notes that the return values should indicate the number of bytes that were actually written as a result of the call, and any error that occurred. Once the interface has been defined, it can be used as a type throughout a program: in function argument and result types, and even in further type definitions. Here are two further examples from the Go standard libraries:

    func WriteString(w Writer, s string) (n int, err os.Error)

    type WriteCloser interface {
        Writer
        Closer
    }
The WriteString function is a utility function that takes a string and converts it to a slice of bytes before writing it to the writer. By using the Writer interface, the WriteString function can be applied to any object that provides the Write method. This means that writing a string to a file, to a network socket or to a gzipped network socket are all the same as far as the WriteString function is concerned.

In the specification of an interface type we can list another interface, as shown in the above definition of WriteCloser. This declares WriteCloser to be a sub-type of both the Writer and Closer interfaces; any type that implements WriteCloser can be used wherever a Writer or a Closer is required.

2. Design and Implementation

The webpipes toolkit is a compositional web server framework centred around the observation that different classes of HTTP requests can be served by different single-purpose components. These components can be chained together for more complicated behaviour. This is in contrast to more conventional web server design, where requests of all classes are served by a single handler. Although there is clearly a potential advantage to the single-handler design from a performance standpoint, web servers that are written in this way can be incredibly difficult to understand and configure.

Our toolkit is a layer that runs on top of the basic web server functionality provided by the http standard library. It provides components that enable various response behaviours, such as interacting with external applications via the CGI [12] protocol or compressing server output. These components can be composed into processing pipelines that handle different classes of HTTP requests.

As an example, consider a web server that serves static files but compresses any of those files served from the js/ or css/ subdirectories. Figure 1 shows the process network for this server, which directly reflects its desired behaviour. In this figure the leftmost and rightmost entities represent the portions of the web server that are responsible for handling the details of the HTTP protocol. The front-end of the server accepts an incoming network connection, reads a request from the client, and then dispatches it to the appropriate handler. The back-end of the web server either closes the connection or attempts to re-use it to handle another request from the same client. Handlers are represented by the paths leading from the server, and represent distinct processing pipelines that are distinguished by URL. Each of these paths ends in an Output component, a toolkit-provided core component that performs the actual writes to the client.

¹ Although slices are a higher-level contiguous view of an array, they are distinct types. In this paper, as in Go, we refer to these simply as slices.
Figure 1. Process network for static file server with path-based compression.
The basic function of a web server is to implement the HTTP protocol, and we have used the http package from the standard Go libraries to do this. The focus of the webpipes toolkit is on providing the content. This involves both the headers and the body of the HTTP response.

2.1. Allowing for Collaborative Response Generation

In the simple web server model, where each request is fulfilled by a single handler, the logic is very simple: the handler first sets and outputs the headers, and then writes the body of the response directly to the client. This matches the output required by the protocol. Unfortunately, this breaks down as soon as we allow multiple components to be involved in the generation of a response: if the first component started writing the content, it would already be too late for any of the later components to set headers or examine the content being generated.

The webpipes package addresses this problem by introducing a set of new types that de-couple the setup of a response from content generation, removing direct access to the underlying network connection and providing an alternate means of transmitting a response. This is accomplished through the Conn type, a struct that represents an HTTP client connection. This type encapsulates the request itself and provides a way of generating a response by defining the following methods:

    // Response headers and status code methods
    func (c *Conn) SetHeader(string, string)
    func (c *Conn) GetHeader(string) string
    func (c *Conn) SetStatus(int)

    // General utility
    func (c *Conn) HTTPStatusResponse(int)

    // Content methods
    func (c *Conn) NewContentWriter() io.WriteCloser
    func (c *Conn) NewContentReader() io.ReadCloser
The first three methods allow a component to set and get the headers and the numeric status code for the response. The HTTPStatusResponse method is a utility that provides a canned HTTP status response, such as 404 (Not Found) or 500 (Internal Server Error). These responses normally include content that indicates to the user what sort of error has occurred; this is needed frequently when writing web components, so it is included as a utility method.

The final two methods are the ones that enable the package to break response generation into two steps, by allowing for the allocation of pipelines of content readers and writers. A content writer is an object that can write to the output stream of the response, while a content reader provides a way to read content that has been written by components earlier in the pipeline.
A component that produces content is called a source and uses a content writer, while filter components use both a content reader and a content writer to examine or transform the content. These readers and writers are created in pairs, starting with a content source, and thus a call to NewContentWriter. The Conn object creates both a writer and a corresponding reader, storing the reader so it can be returned from the next call to NewContentReader. Subsequent invocations create new pairs of writers/readers as needed. This means that an individual component only ever holds one half of a pair, and that all the pairs are linked together into a single content pipeline. Operations on these readers and writers are synchronous.

Figure 2 shows a version of the same server from Figure 1, augmented to display the content pipeline that is created for each of the different paths. Each "W" represents a content writer, and each "R" indicates a content reader.
Figure 2. Static file server process network with dynamic content pipeline.
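As a rough illustration of this pairing (a sketch only: the real Conn type carries more state, and, as noted in Section 4.1, the toolkit builds these pipes on io.PipeReader and io.PipeWriter), the hand-off between NewContentWriter and NewContentReader might look like this:

    // Sketch only: how a connection object might pair up writers and readers.
    // NewContentWriter creates an in-memory pipe and keeps the reading end
    // until the next downstream component asks for it.
    type connSketch struct {
        pending io.ReadCloser // reader half of the most recently created pipe
    }

    func (c *connSketch) NewContentWriter() io.WriteCloser {
        r, w := io.Pipe()  // synchronous pipe: writes block until read
        c.pending = r      // stash the reader for the next component
        return w
    }

    func (c *connSketch) NewContentReader() io.ReadCloser {
        r := c.pending // hand the stored reader to the downstream component
        c.pending = nil
        return r
    }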
2.2. Understanding Components

Up to this point, we have used the term component to mean a portion of the web server that is partially responsible for responding to a request. The webpipes package explicitly defines the Component interface type, with a single method that takes a Conn object along with the request and returns a boolean value:

    type Component interface {
        HandleHTTPRequest(*webpipes.Conn, *http.Request) bool
    }
This boolean value indicates whether or not the connection should be passed to the next component. If a component ever returns false, it should be prepared to manually perform output and close the connection; however, this is rare. Consider a component that responds to every request with the contents of a static string, as a plain text response²:

    type StrComp struct {
        content string
    }

    func (s StrComp) HandleHTTPRequest(c *Conn, r *Request) bool {
        writer := c.NewContentWriter()
        if writer == nil {
            c.HTTPStatusResponse(http.StatusInternalServerError)
            return true
        }
        c.SetHeader("Content-Type", "text/plain; charset=utf-8")
        c.SetStatus(http.StatusOK)
        generate := func() {
            io.WriteString(writer, s.content)
            writer.Close()
        }
        go generate()
        return true
    }

² Package names in this example have been omitted for formatting reasons. As indicated in the interface declaration, the Conn type is defined by the webpipes package and the Request type is defined by the http package. In a typical program, all type references are normally fully qualified.
Since this component is a source, the first thing we do is request a content writer from the connection. If the connection is unable to return one for some reason, there has been an internal server error, so we indicate this to the client using the HTTPStatusResponse method. This should never happen, as it indicates a semantic error in the component pipeline, such as a source component appearing somewhere other than the start of the pipeline. Once we've obtained the content writer, we set the header and the status on the response. Next we create a function that will actually perform the output of the content, and invoke it in a new goroutine. We then return true to indicate that the connection should be passed on to the next component.

The operations performed in the main goroutine, such as allocating the content writer and setting the status code and header, are all part of the setup phase of responding to a request. The work performed in the generate function, and thus in the "content goroutine", is part of the content generation phase of request fulfillment. When the HandleHTTPRequest method call returns, it indicates that the setup phase is complete and that the connection can be sent to the next component in the network.

In order to use this component, we would need to create a new instance of the StrComp struct type, set to contain the string of our content. For example:

    hello := StrComp{content: "Hello World!"}
2.3. Classifying Components

Although the Component interface type allows very general components, we have already identified two kinds of components that might exist in a processing pipeline: sources and filters. Each of these is a specialisation of a more general type, called a pipe. These components do not normally concern themselves with the content stream, although they can always request content readers and writers from the connection object. Pipes are often used to examine the request or response headers and take some action, such as re-routing the request to another portion of the network or logging a message to the console.

The webpipes package takes these classifications and provides a set of corresponding type declarations that assist in the creation of new components by automatically handling the boilerplate allocation of content readers and writers:

    type Pipe func(*Conn, *Request) bool
    type Source func(*Conn, *Request, io.WriteCloser) bool
    type Filter func(*Conn, *Request, io.ReadCloser, io.WriteCloser) bool
These type declarations allow the programmer to write components that are simply functions. This avoids the need to create a type wrapper and define a HandleHTTPRequest
method. Here is a TextStringSource that works quite similarly to the StrComp component we've just defined, but in a more compact and readable form:

    func TextStringSource(content string) Source {
        return func(conn *Conn, req *Request, writer io.WriteCloser) bool {
            conn.status = http.StatusOK
            conn.SetHeader("Content-type", "text/plain; charset=utf-8")
            generate := func() {
                io.WriteString(writer, content)
                writer.Close()
            }
            go generate()
            return true
        }
    }
This code defines a function that produces a text string component based on the content argument. Because this component is declared as a Source, it automatically inherits the HandleHTTPRequest method defined by the webpipes package.

2.4. Writing a Filter Component

Filter components allow the programmer to examine and transform the contents of a response before it is output to the client. This may be as simple as performing compression on the output stream, or something as complicated as performing semantic analysis of the output and altering it in some way. The body of a filter component looks quite similar to that of a source component, with the exception of the content goroutine. Rather than unconditionally writing content to the response, the filter first uses the content reader to obtain the data. Here is an example of a filter that performs rot13 encryption, a simple substitution cipher, on any alphabetic characters in a content stream:

    var Rot13Filter Filter = func(conn *Conn, req *http.Request,
            reader io.ReadCloser, writer io.WriteCloser) bool {
        filter := func() {
            // Wrap the reader so all reads come out rot13'd
            rot13 := NewRot13Reader(reader)
            io.Copy(writer, rot13)
            writer.Close()
            reader.Close()
        }
        go filter()
        return true
    }
This filter makes use of two functions that we haven't seen before. NewRot13Reader is a function provided by the toolkit. It takes an io.Reader and returns another whose reads pull data on demand from the inner io.Reader, encrypt it using the rot13 cipher, and return the result. The io.Copy function is a utility provided by the standard libraries to copy all of the data from a reader to a writer. If the programmer chooses, they could manually read blocks of data from the content reader and output them to the content writer; io.Copy is just a utility that automates this mechanical process when the transformation can be expressed using readers and writers.
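The definition of NewRot13Reader is not shown here; one plausible sketch of such a wrapper (illustrative only, following the pre-Go 1 os.Error convention used by the rest of the code in this paper) is:

    // Sketch only: a reader wrapper that rot13-transforms data on demand.
    type rot13Reader struct {
        inner io.Reader
    }

    func NewRot13Reader(r io.Reader) io.Reader {
        return &rot13Reader{inner: r}
    }

    func (r *rot13Reader) Read(p []byte) (n int, err os.Error) {
        n, err = r.inner.Read(p) // read raw bytes from the wrapped reader
        for i := 0; i < n; i++ {
            c := p[i]
            switch {
            case c >= 'a' && c <= 'z':
                p[i] = 'a' + (c-'a'+13)%26
            case c >= 'A' && c <= 'Z':
                p[i] = 'A' + (c-'A'+13)%26
            }
        }
        return n, err
    }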
Figure 3. File server with private area requiring authentication.
Other than spawning the content goroutine to perform the filtering, this component does not perform any other work: it does not need to change the status of the response or set any additional headers. In a more realistic example, such as changing the encoding or compressing the stream, the client would need to be notified via headers so that it knows how to decode the data.

Although the filter components presented in this work focus on textual transformations, it is also possible to alter binary content such as images. This could be used to add watermarks to images on-the-fly, replace images with lower-quality versions for older clients, or any number of other complicated behaviours.

2.5. Pipe Components and Branching Networks

The most general component type, Pipe, is only a thin wrapper around the base Component type, providing a HandleHTTPRequest method that just invokes the pipe function itself. By default, a pipe component does not need any content readers or writers, so there is no additional work for the method to perform. A good example of a simple pipe component is one that logs incoming requests to a file for later review. Such an access-logging component does not need to know anything about the content being written; it just takes information about the client, request, and response and logs an entry.

A more complicated pipe example is one that makes routing decisions based on details provided in the request, such as an authentication component. The HTTP simple authentication protocol requires clients to specify credentials using headers and allows the server to examine and validate these credentials before proceeding with the request. If the credentials are not valid, the client may be prompted by their web browser or may simply be denied access to the resource, depending on configuration. This sort of construct requires a branching network, an example of which is shown in Figure 3. Rather than a non-branching path from start to finish, there are two edges leading from the authentication component that correspond to these different behaviours. Although branching networks are slightly more difficult to construct, their components have the same type and semantics as any other component.

2.5.1. Conditional Components

It is possible that a source or filter component may not apply to every single request it encounters. This is a problem, since the HandleHTTPRequest method provided by the toolkit automatically allocates the content readers and writers, and the toolkit doesn't provide a way to de-allocate them. In this case, the programmer can either construct their own type that implements the Component interface, or they can utilize a pipe component that then invokes another component when the conditions are correct.
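One possible shape for such a conditional pipe is sketched below (illustrative only: the function name and the use of a request predicate are not taken from the toolkit):

    // Sketch only: a pipe that invokes another component when a predicate
    // on the request holds, and otherwise just passes the connection along.
    func ConditionalPipe(cond func(*http.Request) bool, comp Component) Pipe {
        return func(conn *Conn, req *http.Request) bool {
            if cond(req) {
                return comp.HandleHTTPRequest(conn, req)
            }
            return true // condition not met: make no alterations
        }
    }

A compression check of the kind described next could then be expressed by supplying a predicate that inspects the request's Accept-Encoding header.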
For example, not all clients are able to support compression of content. This information is only available from the headers sent by the client along with the request. General compression components should ensure that content is only compressed using a method the client supports, if any. This could be accomplished by taking the following steps:

1. Check the HTTP protocol version to see which headers need to be examined.
2. Check the headers to see if any of the compression methods supported by the server are also supported by the client.
3. If no compression method is supported by both sides, return true and make no alterations.
4. Otherwise, invoke the HandleHTTPRequest method of the appropriate compression filter, passing in the connection and request, in order to insert the compression filter into the content stream.

Accordingly, the webpipes toolkit provides unconditional components that perform gzip and flate compression, called GzipFilter and FlateFilter, and a conditional component that performs the appropriate header checks, called CompressionFilter. The programmer is free to include whichever component is most appropriate for the desired configuration.

3. Creating Component-based Web Servers

We have introduced the compositional web server architecture provided by the webpipes toolkit and the different classifications of server components. This section explains how we can use the very basic server provided by the http package, along with our toolkit, to construct functional web servers from these components.

3.1. Default Multiplexing Server

The http package provides a very simple multiplexing server. That is, the server utilises a single network connection, but allows multiple handlers to respond to incoming requests. In order to accomplish this, handlers must be explicitly registered with the server, along with a URL prefix string that is used to match requests to the appropriate handler. The webpipes toolkit allows you to take advantage of this server by ensuring that the created component chains conform to the http.Handler interface required by this server. The following is an example server with two handlers:

    func main() {
        http.Handle("/", webpipes.Chain(
            webpipes.FileServer("http-data", "/"),
            webpipes.OutputPipe,
        ))
        http.Handle("/hello", webpipes.Chain(
            webpipes.TextStringSource("Hello, world!\n"),
            webpipes.OutputPipe,
        ))

        server := http.Server{
            Addr: ":12345",
        }
        log.Printf("Starting test server on %s", server.Addr)
        err := server.ListenAndServe()
        if err != nil {
            log.Fatalf("Error: %s", err.String())
        }
    }
The first handler is a chain of components that serves files from the http-data subdirectory, using a component created with the FileServer function. This function takes a base directory and a prefix, and serves files from that directory, stripping the prefix from the start of the URL. If this were the only handler registered with the server, any well-formed request would be routed to it, since any such request begins with /.

The second handler is for any request that begins with the URL /hello. The server automatically finds the longest prefix match between the request and the registered handlers, so this handler is able to provide a more specific pattern than the file server without any issues. Each client that is routed through this handler is sent the text string "Hello, world!" as text/plain content, using the previously introduced TextStringSource. Both component chains include the OutputPipe component, which handles the transmission of the response to the client's network socket.

The last few lines of this program are boilerplate that create a new http.Server object with the Addr member set to the host and port on which the server should bind. We then invoke the ListenAndServe method on this server to begin listening for client connections. The server will continue until an error occurs while accepting new connections, and will report this error before exiting. There are a few additional attributes you can set on the http.Server object, including socket read and write timeouts, but for many servers the above code is sufficient.

3.2. Constructing Component Chains

In the preceding example, the webpipes.Chain function was used without explaining its purpose. This function takes any number of components as arguments and returns an object that implements the http.Handler interface, allowing it to be used directly as an argument to http.Handle. These components are stored in an array and, when a connection arrives, each component is applied to the connection in turn; a simple form of functional composition. For many web server configurations, this sequential execution of components is likely the most efficient: each connection is served as quickly as it can be, subject to the number of connections currently being processed by the server.

The Component interface defined by the webpipes package, however, only defines the behaviour of components and how they interact with each other; it doesn't specify anything about how the components are connected together or how they are actually executed when handling a client connection. The Chain function is just one example of how component networks might be connected and executed. In the case where a processing pipeline is not necessarily sequential, such as the previously mentioned SimpleAuth authentication component that requires branching, the webpipes toolkit provides additional methods of constructing handlers from components. The ProcNetwork function and its variants can be used to turn a list of components into a static process network, while the NetworkHandler function can take the input and output channels for a network of processes and allow it to be used as a handler in the default web server.

When a list of components is passed to the ProcNetwork function, a small goroutine is started for each of them that sits and accepts incoming connections on the channels connecting each component to its neighbours. When a connection is received, the component function itself is invoked and, when this function concludes, the connection is passed along the network.
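The per-component goroutine just described might look roughly like the following sketch (the worker function, its channel parameters and the assumption that a Conn carries its own Request field are illustrative, not taken from the toolkit):

    // Sketch only: a server-farm worker for one component in a ProcNetwork.
    // Connections arrive on in, are handled, then forwarded on out.
    // For simplicity this sketch ignores the component's boolean result.
    func componentWorker(comp Component, in <-chan *Conn, out chan<- *Conn) {
        for conn := range in {
            comp.HandleHTTPRequest(conn, conn.Request) // hypothetical Request field
            out <- conn
        }
    }

ProcNetwork would then spawn one such worker per component with the go keyword, wiring the out channel of one component to the in channel of the next.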
The ProcNetworkInOut function is the same, but allows you to specify the input and output channels for the overall network. This function is really only used when the programmer is constructing their own process networks and channels, but helps connect portions of a process network together. Either of the channel arguments may be nil and the function will
create a new channel for you. Both versions of the ProcNetwork function always return the input and output channels for the constructed network. For example, the following code allocates four channels (two for overall network input and output, and two to connect the components together) and spawns three goroutines that act as server farms for each of the components:

    in, out := webpipes.ProcNetwork(
        webpipes.FileServer("http-data", "/"),
        webpipes.CompressPipe,
        webpipes.OutputPipe,
    )
Once created, these channels can then be passed to NetworkHandler, enabling the process network to be used as a handler in the web server:

    http.Handle("/compressed", webpipes.NetworkHandler(in, out))
When a request is received, the web server invokes the ServeHTTP method that is defined by the webpipes package for network handlers. This injects the connection into the process network via the input channel and then waits for the connection to emerge on the output channel. Once it is received, the function returns, allowing the http server to close or re-use the client connection as appropriate.

3.2.1. Manual Process Networks

It is conceivable that a programmer would like to create their own network of processes and then use it as a handler in the default web server. The NetworkHandler function can also be used for this purpose, as long as the network has an input and an output channel that are exclusively open-ended; that is, no other process should attempt to write to the input channel or read from the output channel. In reality, anyone who is creating their own process networks and server farms should probably implement the http.Handler interface directly, since it will give them better control over how requests are handled.

4. Performance and Comparison

Web server performance is typically measured using some combination of throughput and latency. Throughput indicates the number of requests that can be served per second, while latency indicates the amount of time taken to serve a request. These micro-benchmarks are performed under varying loads, used to simulate multiple clients attempting to access the web resources simultaneously. We have measured the performance of both webpipes and the stock Go http package against the Apache 'httpd' web server, which represents the ideal case in these benchmarks. It is not entirely fair to compare a fledgling programming language with an unoptimised http library against a well-tested production web server. However, doing so enables us to show that there are no artificial bottlenecks in the testing environment, by measuring a server with ideal performance within our testing ranges.

The benchmark we performed involved requesting a static 4KB file multiple times across a multitude of concurrent connections, to illustrate the throughput of the server in terms of requests per second. These tests were performed in the following test environment:

• Server: 3.00GHz Intel Core 2 Duo with 4GB of RAM, running Ubuntu 10.04.1 and Linux kernel 2.6.32-24.
222
J. Whitehead II / Serving Web Content with Dynamic Process Networks in Go
• Clients: 1.66GHz Intel Centrino Duo with 1GB of RAM, running Ubuntu 10.10 and Linux kernel 2.6.35-22.
• Switch: HP Procurve 2724 Gigabit switch.
The 'httperf' [13] program was used to perform the benchmarks by generating a sustained load against the server. In addition, we created a tool called 'autohttperf' [14] that distributes the load generation across multiple client machines and can perform automated stress testing of web servers.

Figure 4. Web server throughput (requests/second against attempted requests/second).

Figure 5. Percentage of connection errors (against attempted requests/second).
Figure 4 shows the number of successful requests per second against the number of attempted requests per second, and Figure 5 shows the percentage of connection/request errors against the number of attempted requests per second. You can see that Apache maintains the ideal result, being able to serve all incoming requests within the client timeout window. On the other hand, the Go http web server with no framework overhead is able to sustain just under a third of this amount before it reaches its saturation point. This saturation point is artificial³: it is a hardcoded number of connections that are allowed to be active at any point in time. In our test environment, without this limit, the web server would reliably crash due to memory allocation errors. These measurements do, however, represent a lower bound on what the server is capable of.

The performance of the webpipes framework is shown using two different methods of connecting components together, Chain and ProcChain. Recall that the first uses function calls and method invocation to invoke components, while the second has the additional synchronization overhead of a channel connecting each process. The load that can be sustained by a server using our toolkit is reasonable for low-traffic situations; it remains ideal until somewhere between 1500 and 2000 requests per second, but performance above that is difficult to measure due to the related bugs in Go.

4.1. Identifying Overhead

For a compiled language with a relatively simple http package and web server, it is initially surprising that the Go-based web server is unable to achieve better performance under heavier loads.

³ There is currently a bug in Go that causes performance measurements of Go web servers to be unreliable above this point. At the time of publication, these bugs have been reported but are outstanding.
However, it turns out that this specific type of test, where a small static file is delivered to the network client as part of the response, is an optimised case in the Apache server implementation. Specifically, it makes use of the sendfile [15] system call, which can be used to copy data from a file to a socket. This system call is an optimisation over the standard read/write pattern in that it avoids kernel/user context switches and, in some cases, is capable of performing a zero-copy [16] transmission of the file data.

We believe that the performance of the Go http package could be improved through the use of this system call. Currently, the test we are performing hits a non-optimal path through the I/O code in Go's libraries:

1. The headers are written by the file handler to the response. These writes are performed using a buffered network writer with the default limit of 4KB of buffer space. Since the header data is smaller than this limit, the write is buffered until later.
2. The file is read in 32KB blocks using the read system call. Since the file in question is under this limit, it only requires a single call to read.
3. The data is written to the http.ResponseWriter object, which then performs the write on the underlying buffered network connection without copying.
4. The data is then written to the buffered network connection. Because the buffer is not empty, this either requires the headers to be sent in a different network packet from the payload, or data to be copied. The bufio package in this case will perform a copy of the file data to fill the non-full buffer and then flush that to the client; the remaining file data is then sent without requiring a copy.

The webpipes framework introduces even more overhead with its system of content readers and writers. Currently, these are implemented using the io.PipeReader and io.PipeWriter interfaces, which explicitly copy data on every paired read/write operation. For the static file test performed above, this means that in addition to the buffering issue identified above, the file data is copied an additional time before ever being written.

Interestingly, in our tests we do not see a significant difference between the Chain and ProcChain versions of constructing processing pipelines, despite the heavy use of short-lived goroutines in the latter. However, it is possible that any differences between the two may be masked by the outstanding issues with the Go web server. If these results hold, it will be an encouraging result for programs that make heavy use of goroutines.

5. Conclusions and Future Work

In this paper, we have presented a compositional architecture for constructing processing pipelines. This architecture was demonstrated through the webpipes toolkit, a framework for constructing component-based web servers using the Go programming language. This framework makes use of a series of type and semantic definitions that allow multiple components to participate in the generation of responses. This is accomplished by de-coupling the generation of response headers from the generation of the actual content of the response.

This toolkit encourages the use of simple components that are strung together to create more complicated behaviours. The behaviour of web servers constructed using the webpipes toolkit is easy to understand; often there is a direct correlation between the structure of the code used to create a server and the definition of that server's behaviour.
The same might be accomplished in mainstream web servers through the clever use of domain-specific configuration languages; however, we believe the component model used by the webpipes toolkit is inherently easier to understand.

This work is the product of an initial case study in concurrent software design. Throughout the design process we have restricted ourselves to a model of concurrency where components communicate with each other solely via message passing over explicit channels. In order to accomplish this, it was necessary to determine an appropriate level of abstraction for processes and to define the nature of the communication between these processes. Our design follows the premise that a web request can be fulfilled by a series of single-purpose components, and we chose to model these components as concurrent processes. The components communicate with each other by passing an abstract connection object. This object contains the details of the request as well as a way for components to construct the response. By thinking about how web server behaviour might be modelled as a collection of concurrent processes, this abstraction came quite naturally.
5.1. Impressions of Go
Go is a relatively young entry in the field of programming languages, released in November 2009. It offers a syntax that is familiar to C, C++ and Java programmers, while providing a novel interface-based type system and channel-based concurrency. We initially explored the Go programming language in order to understand how it relates to other concurrent programming languages, specifically Scala, Erlang, and occam-π. After working with the language for some time, we chose it as the target language for our case study. This choice was primarily motivated by the following:

1. The ability to define new concurrent processes without the need for additional wrappers or classes. This enabled rapid prototyping and experimentation without verbose programs.
2. The simple concurrency model built around synchronous communication via first-class channels.
3. The "safety" provided by static typing, the lack of pointer arithmetic, and the requirement for explicit casting.

Through our experience implementing webpipes, we have found the first point, combined with the ability to define methods on any data type, to be incredibly important. This allowed us to create an abstraction for components that makes them easy to use when constructing new web servers. In addition, the ability to provide methods for functions gave us a way to provide boilerplate code for components, making it easier for developers to add new functional components to the system with minimal code. Although the pervasive use of interface types can take some getting used to, their consistent use allows easy interoperability with the rich set of standard packages included with the language. It is a testament to these libraries that we can implement a fully functional compositional framework for constructing web servers in under 1500 lines of code!

On the other hand, there are opportunities for improvement in the Go programming language. In particular, the goroutine scheduler is immature and does not always scale well when multiplexing goroutines onto more than one operating system process. Additionally, there are currently two sets of compilers for the language. The gc compiler set is a very fast compiler that produces reasonable code, while the gccgo front-end is much slower but can produce more efficient code. The latter is currently not able to multiplex goroutines and instead uses operating system threads, although this feature is expected to be implemented in the future.

Despite the relatively young age of the programming language, we believe that Go helps to fill an interesting niche in the field of programming languages. The unique feature-set and aims of the language make it worth investigating for systems-level concurrent programming.
5.2. Future Work

Through the development and evaluation of the webpipes toolkit, we have identified many opportunities that require further investigation. The overhead introduced by the content reader/writer system negatively affects the performance of the toolkit; in addition, it is not a very idiomatic style of writing Go code. It would be worth investigating a design that removes the explicit use of content readers and writers by providing a higher level of abstraction for the registration of filter components. This abstraction could remove an additional level of copying and synchronisation from the toolkit and would be much closer to the way Go programs are normally written.

The use of pointers in a concurrent program introduces the possibility of race conditions and non-deterministic behaviour when components retain a reference and access it directly. This problem might be alleviated by introducing a concept of ownership to the type system, perhaps utilising some form of linear types [17] or uniqueness typing [18]. This would allow us to specify at the language level that only one reference to an object should exist at any time.

The Go language provides a form of distributed channels via the netchan package. It would be interesting to develop a web server that acts as a load balancer, using the component-based architecture presented in this paper and these network channels to distribute the incoming load to multiple back-end web servers. It is possible that this may expose other advantages or drawbacks of our architecture.

References

[1] T. Berners-Lee, R. Fielding, and H. Frystyk. RFC 1945: Hypertext Transfer Protocol – HTTP/1.0. RFC Editor, United States, 1996.
[2] Apache: HTTP Server - Multi-Processing Modules (MPMs). http://httpd.apache.org/docs/2.0/mpm.html, October 2010.
[3] Lighttpd Web Server. http://www.lighttpd.net/, August 2010.
[4] Nginx Web Server. http://nginx.org/en/, April 2011.
[5] Matt Welsh, David Culler, and Eric Brewer. SEDA: an architecture for well-conditioned, scalable internet services. In Proceedings of the Eighteenth ACM Symposium on Operating Systems Principles, SOSP '01, pages 230–243, New York, NY, USA, 2001. ACM. doi: 10.1145/502034.502057.
[6] Yaws - Yet Another Webserver. http://yaws.hyber.org/, September 2010.
[7] Fred Barnes. occwserv: An occam web-server. In J.F. Broenink and G.H. Hilderink, editors, Communicating Process Architectures 2003, volume 61 of Concurrent Systems Engineering Series, pages 251–268, Amsterdam, The Netherlands, September 2003. IOS Press.
[8] James Whitehead II. Webpipes: A Compositional Web Server Toolkit. http://github.com/jnwhiteh/webpipes.
[9] C.A.R. Hoare. Communicating Sequential Processes. Prentice-Hall, London, 1985. ISBN: 0-131-532715.
[10] The Go Programming Language Specification. http://golang.org/doc/go_spec.html, March 2011.
[11] The Go Programming Language. http://golang.org/, March 2011.
[12] D. Robinson and K. Coar. The Common Gateway Interface (CGI) Version 1.1. Technical report, RFC 3875, October 2004.
[13] David Mosberger and Tai Jin. httperf: a tool for measuring web server performance. SIGMETRICS Perform. Eval. Rev., 26:31–37, December 1998. doi: 10.1145/306225.306235.
[14] James Whitehead II. autohttperf: Automated and distributed benchmarking using httperf. https://github.com/jnwhiteh/autohttperf.
[15] Erich Nahum, Tsipora Barzilai, and Dilip D. Kandlur. Performance issues in WWW servers. IEEE/ACM Trans. Netw., 10:2–11, February 2002. doi: 10.1109/90.986497.
[16] Dragan Stancevic. Zero Copy I: User-Mode Perspective. Linux J., 2003:3–, January 2003.
[17] P. Wadler. Linear types can change the world. In M. Broy and C.B. Jones, editors, Programming Concepts and Methods: Proceedings of the IFIP Working Group 2.2/2.3 Working Conference on Programming Concepts and Methods, Sea of Galilee, Israel, 2–5 April 1990, pages 561–581. North-Holland, 1990. ISBN: 978-0-444-88545-6.
[18] Edsko de Vries, Rinus Plasmeijer, and David M. Abrahamson. Uniqueness Typing Simplified. In Implementation and Application of Functional Languages, pages 201–218. Springer-Verlag, Berlin, Heidelberg, 2008. ISBN: 978-3-540-85372-5.
Performance of the Distributed CPA Protocol and Architecture on Traditional Networks

Kevin CHALMERS
School of Computing, Edinburgh Napier University, Edinburgh, EH10 5DT, UK

Abstract. Performance of communication mechanisms is very important in distributed systems frameworks, especially when the aim is to provide a particular type of behaviour across a network. In this paper, performance measurements of the distributed Communicating Process Architectures networking protocol and stack are presented. The results show that for general communication, the distributed CPA architecture is close to the baseline network performance, although when dealing with parallel speedup for the Mandelbrot set, little performance is gained. A discussion of the future direction of the distributed CPA architecture and protocol in relation to occam-π and other runtimes is also presented.

Keywords. JCSP, CSP for .NET, networking, distributed systems.
Introduction

Communicating Sequential Processes for Java (JCSP) [1] is a Java library implementation of Hoare's CSP model of concurrency [2]. Recent work on JCSP has focused on refining and improving the library [3], and on improving its distributed systems capabilities [4]. In this paper, ongoing work allowing inter-framework communication (for example, .NET to Java distributed CPA behaviour) is presented, looking at general network performance benchmarks of the new protocol and architecture.

The rest of this paper is broken up as follows. In Section 1, the background to the ongoing work with the Distributed CPA Architecture is presented. In Section 2, the approach used to gather the experimental data is given, together with the reasoning behind it. Section 3 briefly discusses a new implementation of CSP for .NET [5], which has been used in conjunction with JCSP to provide the results presented. Section 4 presents the results of the experiments undertaken, and finally Section 5 presents conclusions and future work.

1. Background

The Distributed CPA Protocol and Architecture [4] was developed to overcome problems discovered in the original JCSP Networking implementation [6]. In particular, two areas were addressed:

• The use of serialised objects for general messaging, leading to incompatibilities with other frameworks.
• Resource usage being high, particularly the number of processes in operation.
228
K. Chalmers / Performance of the Distributed CPA Protocol
The protocol underpinning the new implementation of JCSP networking was developed to allow inter-framework communication, and the updated framework refined and reduced the processes in use. This work led to the JCSP networking functionality being very lightweight in comparison to the original version. To continue the work enabling inter-framework communication, a number of smaller projects have been initiated, looking at integrating other CSP-inspired libraries and runtimes by implementing the new network architecture and protocol in PyCSP [7], C++CSP [8] [9] and occam-π, leading to primitive communication between JCSP, PyCSP and occam-π. However, at this stage, fully realised implementations of the networking protocol and architecture exist only in JCSP and CSP for .NET.

1.1. Distributed CPA Frameworks

JCSP networking is not the only implementation of distributed CPA architectures. JCSP networking itself is based on the T9000 virtual channel model for distributed CPA communications [10]. occam-π has had two separate networking capabilities developed. The first was pony [11], which emulated the same T9000 virtual channel model to a certain degree. More recent work has led to the development of a PyCSP to occam-π communication mechanism [12]. C++CSP has also featured a networking implementation [13], although this has not seen work in recent years. However, the C++CSP library itself is of interest as a library in which to implement a native version of the distributed CPA protocol and architecture that other runtimes can hook into. This possibility will be discussed at the end of this paper.

2. Approach

In this section, the approach taken to analyse the general performance of the distributed CPA architecture is discussed. From a general performance point of view, we are most interested in network metrics, in particular latency and bandwidth.

2.1. Network Performance

Network latency provides a value for the general overhead of sending a single packet of data using the communication mechanism under test. It is essentially the time taken for a single packet of data to travel from the host machine to the destination. The standard method of gathering latency information is to perform a ping test and halve the time taken for the round-trip message.

Network bandwidth is the measure of the data throughput rate of the communication mechanism. Typically, this is measured in Mbit/second. To test bandwidth, packets of various sizes are sent across the network, and the time taken to send the data is used to determine the data throughput.

Ideally, any communication mechanism for distributed systems should come close to the general performance of the network. Therefore, baseline network performance is also gathered and then compared to the results gathered for the distributed system communication mechanism. There may be a slight overhead in latency due to extra message headers and other communication information, but bandwidth should be similar.

2.2. Mandelbrot

To allow analysis of how well the distributed CPA architecture copes with parallel
problems, generation of the Mandelbrot set at various levels of detail has also been performed. There are two approaches that can be taken here. Firstly, breaking a single generation of the Mandelbrot set into smaller pieces enables an analysis of the distributed CPA architecture when breaking down a single problem. Secondly, scaling the resolution of the Mandelbrot set enables an analysis of scaling the problem to the number of processors operating in the system.

2.3. JCSP and CSP for .NET 2.0

To provide a reasonable analysis of the performance of the distributed CPA architecture, tests were also performed using JCSP and CSP for .NET 2.0. These tests should ensure that it is the performance of the architecture and communication mechanism itself that is analysed. To avoid general overheads that are either Java or .NET dependent, object serialisation is kept to a minimum. The Mandelbrot experiments require very simple object serialisation, but nothing that should affect the comparison. As the experiments incorporate CSP for .NET 2.0, the changes made since the previous edition [5] are discussed in the following section.

3. CSP for .NET 2.0

In this section, the new features of CSP for .NET are introduced. The new version of CSP for .NET is an update of the existing library [5] with changes to channel creation, the addition of primitive and reference passing channels, and the inclusion of networking capabilities to allow communication with other distributed CPA frameworks.

3.1. Channel Creation

Channel creation in CSP for .NET 2.0 no longer utilises factory-based methods. For example, in Java using JCSP a channel is normally created using the Channel factory object:

    One2OneChannel chan = Channel.one2one();
CSP for .NET 2.0 uses standard new calls to create channel objects: One2OneChannel