Communications in Computer and Information Science 199
Tai-hoon Kim, Hojjat Adeli, Rosslin John Robles, Maricel Balitanas (Eds.)
Advanced Communication and Networking: Third International Conference, ACN 2011, Brno, Czech Republic, August 15-17, 2011, Proceedings
Volume Editors

Tai-hoon Kim
Hannam University
133 Ojeong-dong, Daeduk-gu, Daejeon 306-791, Korea
E-mail: [email protected]

Hojjat Adeli
The Ohio State University
470 Hitchcock Hall, 2070 Neil Avenue, Columbus, OH 43210-1275, USA
E-mail: [email protected]

Rosslin John Robles
Hannam University
133 Ojeong-dong, Daeduk-gu, Daejeon 306-791, Korea
E-mail: [email protected]

Maricel Balitanas
Hannam University
133 Ojeong-dong, Daeduk-gu, Daejeon 306-791, Korea
E-mail: [email protected]
ISSN 1865-0929; e-ISSN 1865-0937
ISBN 978-3-642-23311-1; e-ISBN 978-3-642-23312-8
DOI 10.1007/978-3-642-23312-8
Springer Heidelberg Dordrecht London New York
Library of Congress Control Number: Applied for
CR Subject Classification (1998): C.2, H.4, H.3, I.2, H.5, I.6
© Springer-Verlag Berlin Heidelberg 2011 This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, re-use of illustrations, recitation, broadcasting, reproduction on microfilms or in any other way, and storage in data banks. Duplication of this publication or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965, in its current version, and permission for use must always be obtained from Springer. Violations are liable to prosecution under the German Copyright Law. The use of general descriptive names, registered names, trademarks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. Typesetting: Camera-ready by author, data conversion by Scientific Publishing Services, Chennai, India Printed on acid-free paper Springer is part of Springer Science+Business Media (www.springer.com)
Foreword
Advanced communication and networking are areas that have attracted many academic and industry professionals in research and development. The goal of the ACN conference is to bring together researchers from academia and industry as well as practitioners to share ideas, problems and solutions relating to the multifaceted aspects of advanced communication and networking. We would like to express our gratitude to all of the authors of submitted papers and to all attendees for their contributions and participation. We believe in the need to continue this undertaking in the future. We acknowledge the great effort of all the Chairs and the members of the Advisory Boards and Program Committees of the above-listed event. Special thanks go to SERSC (Science & Engineering Research Support soCiety) for supporting this conference. We are grateful in particular to the speakers who kindly accepted our invitation and, in this way, helped to meet the objectives of the conference.

July 2011
Chairs of ACN 2011
Preface
We would like to welcome you to the proceedings of the 2011 International Conference on Advanced Communication and Networking (ACN 2011), which was held during August 15-17, 2011, at Brno University, Czech Republic. ACN 2011 focused on various aspects of advances in communication and networking together with computational sciences, mathematics and information technology. It provided a chance for academic and industry professionals to discuss recent progress in the related areas. We expect that the conference and its publications will be a trigger for further related research and technology improvements in this important subject. We would like to acknowledge the great effort of all the Chairs and members of the Program Committee. We would like to thank all of the authors of submitted papers and all the attendees for their contributions and participation. Once more, we would like to thank all the organizations and individuals who supported this event as a whole and, in particular, helped in the success of ACN 2011.

July 2011
Tai-hoon Kim
Hojjat Adeli
Rosslin John Robles
Maricel Balitanas
Organization
Honorary Chair
Hojjat Adeli, The Ohio State University, USA
General Co-chairs
Alan Chin-Chen Chang, National Chung Cheng University, Taiwan
Thanos Vasilakos, University of Western Macedonia, Greece
Martin Drahanský, Brno University, Czech Republic
Program Co-chairs
Tai-hoon Kim, Hannam University, Korea
Byeong-Ho Kang, University of Tasmania, Australia
Muhammad Khurram Khan, King Saud University, Saudi Arabia
Workshop Co-chairs
Seok-soo Kim, Hannam University, Korea
Filip Orság, Brno University, Czech Republic
International Advisory Board
Hsiao-Hwa Chen, National Cheng Kung University, Taiwan
Petr Hanáček, Brno University, Czech Republic
Kamaljit I. Lakhtaria, Atmiya Institute of Technology and Science, India
Publicity Co-chairs
Debnath Bhattacharyya, SERSC, India
Ching-Hsien Hsu, Chung Hua University, Taiwan
Deepak Laxmi Narasimha, University of Malaya, Malaysia
Prabhat K. Mahanti, University of New Brunswick, Canada
Václav Matyáš, Brno University, Czech Republic
Publication Chair
Rosslin John Robles, Hannam University, Korea
Program Committee
Aboul Ella Hassanien, Ai-Chun Pang, Andres Iglesias Prieto, Chao-Tung Yang, Chia-Chen Lin, Cho-Li Wang, Chu-Hsing Lin, Dimitrios Vergados, Don-Lin Yang, Radim Dvořák, Farrukh A. Khan, Gianluigi Ferrari, Gyoo-Seok Choi, Josef Hájek, Dana Hejtmánková, Hong Sun, Hui Chen, Janusz Szczepanski, Javier Garcia-Villalba, Jiann-Liang, Jieh-Shan George Yeh, Juha Jaakko Röning, Kazuto Ogawa, Ki-Young Lee, Kwok-Yan Lam, Kyung-Soo Jang, Li Weng, Marc Lacoste, Aleš Marvan, Matthias Reuter, Michel Deza, Mohammad Riaz Moghal, Štěpán Mráček, Myung-Jae Lim, N. Jaisankar, Tomáš Novotný, Rui L. Aguiar, Shijian Li, Shun-Ren Yang, Soon Ae Chun, Sun-Yuan Hsieh, Tae (Tom) Oh, Jan Váňa, Victor Leung, Viktor Yarmolenko, Vincenzo De Florio, Witold Pedrycz, Yoo-Sik Hong, Yong-Soon Im, Young-Dae Lee
Table of Contents
Clock Synchronization for One-Way Delay Measurement: A Survey . . . . . 1
  Minsu Shin, Mankyu Park, Deockgil Oh, Byungchul Kim, and Jaeyong Lee

API-Oriented Traffic Analysis in the IMS/Web 2.0 Era . . . . . 11
  Daizo Ikeda, Toshihiro Suzuki, and Akira Miura

Analysis of the Characteristics of EEG Caused by Dyspnea and the Influence of the Environmental Factors . . . . . 19
  Jeong-Hoon Shin

A Study on MAC Address Spoofing Attack Detection Structure in Wireless Sensor Network Environment . . . . . 31
  Sungmo Jung, Jong Hyun Kim, and Seoksoo Kim

Mapping Based on Three Cameras for 3D Face Modeling . . . . . 36
  Jae-gu Song and Seoksoo Kim

A Social Education Network Based on Location Sensing Information Using Smart-Phones . . . . . 42
  Jang-Mook Kang and Sook-Young Choi

User-Oriented Pseudo Biometric Image Based One-Time Password Mechanism on Smart Phone . . . . . 49
  Wonjun Jang, Sikwan Cho, and Hyung-Woo Lee

Prototype Implementation of the Direct3D-on-OpenGL Library . . . . . 59
  Joo-Young Do, Nakhoon Baek, and Kwan-Hee Yoo

Open API and System of Short Messaging, Payment, Account Management Based on RESTful Web Services . . . . . 66
  SunHwan Lim, JaeYong Lee, and ByungChul Kim

Privacy Reference Architecture for Personal Information Life Cycle . . . . . 76
  Yong-Nyuo Shin, Woo Bong Chun, Hong Soon Jung, and Myung Geun Chun

A Rate Adaptation Scheme to Support QoS for H.264/SVC Encoded Video Streams over MANETs . . . . . 86
  Chhagan Lal, Vijay Laxmi, and M.S. Gaur

Minimizing Scheduling Delay for Multimedia in Xen Hypervisor . . . . . 96
  Jeong Gun Lee, Kyung Woo Hur, and Young Woong Ko
Efficient Allocation of Transmission Power and Rate in Multicarrier Code-Division Multiple-Access Communications . . . . . 109
  Ye Hoon Lee

A Quality of Service Algorithm to Reduce Jitter in Mobile Networks . . . . . 117
  P. Calduwel Newton and L. Arockiam

Performance Analysis of HDD and DRAM-SSD Using TPC-H Benchmark on MYSQL . . . . . 125
  Hyun-Ju Song, Young-Hun Lee, and Seung-Kook Cheong

User Authentication Platform Using Provisioning in Cloud Computing Environment . . . . . 132
  Hyosik Ahn, Hyokyung Chang, Changbok Jang, and Euiin Choi

Profile for Effective Service Management on Mobile Cloud Computing . . . . . 139
  Changbok Jang, Hyokyung Chang, Hyosik Ahn, Yongho Kang, and Euiin Choi

Context Model Based on Ontology in Mobile Cloud Computing . . . . . 146
  Changbok Jang and Euiin Choi

SPECC - A New Technique for Direction of Arrival Estimation . . . . . 152
  In-Sik Choi

Trading Off Complexity for Expressiveness in Programming Languages for Embedded Devices: Visions and Experiences . . . . . 161
  Vincenzo De Florio and Chris Blondia

Electric Vehicle Telematics Framework for Smart Transportation . . . . . 176
  Junghoon Lee, Hye-Jin Kim, Gyung-Leen Park, Ho-Young Kwak, Young-cheol Kim, and JeongHoon Song

E-Contract Securing System Using Digital Signature Approach . . . . . 183
  Nashwa El-Bendary, Vaclav Snasel, Ghada Adam, Fatma Mansour, Neveen I. Ghali, Omar S. Soliman, and Aboul Ella Hassanien

Fault Tolerance Multi-Agents for MHAP Environment: FTMA . . . . . 190
  SoonGohn Kim and Eung Nam Ko

An Error Detection-Recovery Agent for Multimedia Distance System Based on Intelligent Context-Awareness: EDRA RCSM . . . . . 197
  SoonGohn Kim and Eung Nam Ko

An Error Sharing Agent Running on Situation-Aware Ubiquitous Computing . . . . . 203
  SoonGohn Kim and Eung Nam Ko
Integrated Retrieval System for Rehabilitation Medical Equipment in Distributed DB Environments . . . . . 209
  BokHee Jung, ChangKeun Lee, and SoonGohn Kim

Effective Method Tailoring in Construction of Medical Information System . . . . . 215
  WonYoung Choi and SoonGohn Kim

A Study on the Access Control Module of Linux Secure Operating System . . . . . 223
  JinSeok Park and SoonGohn Kim

An fMRI Study of Reading Different Word Form . . . . . 229
  Hyo Woon Yoon and Ji-Hyang Lim

Intelligent Feature Selection by Bacterial Foraging Algorithm and Information Theory . . . . . 238
  Jae Hoon Cho and Dong Hwa Kim

The Intelligent Video and Audio Recognition Black-Box System of the Elevator for the Disaster and Crime Prevention . . . . . 245
  Woon-Yong Kim, Seok-Gyu Park, and Moon-Cheol Lim

Real-Time Intelligent Home Network Control System . . . . . 253
  Yong-Soo Kim

LCN: Largest Common Neighbor Nodes Based Routing for Delay and Disruption Tolerant Mobile Networks . . . . . 261
  Doo-Ok Seo, Gwang-Hyun Kim, and Dong-Ho Lee

A Perspective of Domestic Appstores Compared with Global Appstores . . . . . 271
  Byungkook Jeon

A Design of Retrieval System for Presentation Documents Using Content-Based Image Retrieval . . . . . 278
  Hongro Lee, Kwangnam Choi, Ki-Seok Choi, and Jae-Soo Kim

Data Quality Management Based on Data Profiling in E-Government Environments . . . . . 286
  Youn-Gyou Kook, Joon Lee, Min-Woo Park, Ki-Seok Choi, Jae-Soo Kim, and Soung-Soo Shin

Design of Code Template for Automatic Code Generation of Heterogeneous Smartphone Application . . . . . 292
  Woo Yeol Kim, Hyun Seung Son, and Robert Young Chul Kim

A Study on Test Case Generation Based on State Diagram in Modeling and Simulation Environment . . . . . 298
  Woo Yeol Kim, Hyun Seung Son, and Robert Young Chul Kim
An Efficient Sleep Mode Procedure for IEEE 802.16e Femtocell Base Station . . . . . 306
  Sujin Kwon, Young-uk Chung, and Yong-Hoon Choi

Performance Analysis of Wireless LANs with a Backoff Freezing Mechanism . . . . . 312
  Ho Young Hwang, Seong Joon Kim, Byung-Soo Kim, Dan Keun Sung, and Suwon Park

Performance Analysis of Forward Link Transmit Power Control during Soft Handoff in Mobile Cellular Systems . . . . . 321
  Jin Kim, Suwon Park, Hyunseok Lee, and Hyuk-jun Oh

Performance Improvement Method for Wi-Fi Networks Sharing Spectrum . . . . . 328
  Jongwoo Kim, Suwon Park, Seung Hyong Rhee, Yong-Hoon Choi, HoYoung Hwang, and Young-uk Chung

Energy Saving Method for Wi-Fi Stations Based on Partial Virtual Bitmap . . . . . 335
  Sangmin Moon, Taehyu Shin, Suwon Park, Hyunseok Lee, Chae Bong Sohn, Young-uk Chung, and Ho Young Hwang

A Handover Scheme for Improving Throughput Using Vehicle's Moving Path Information . . . . . 341
  Sang Hyuck Han, Suwon Park, and Yong-Hoon Choi

Effects of the Location of APs on Positioning Error in RSS Value Based Scheme . . . . . 347
  Hyeonmu Jeon, Uk-Jo, Mingyu-Jo, Nammoon Kim, and Youngok Kim

Distributed Color Tracker for Remote Robot Applications and Simulation Environment . . . . . 353
  Yong-Ho Seo

Mobile Robot Control Using Smart Phone and Its Performance Evaluation . . . . . 362
  Yong-Ho Seo, Seong-Sin Kwak, and Tae-Kyu Yang

Study on Data Transmission Using MediaLB Communication in Vehicle Network . . . . . 370
  Chang-Young Kim and Jong-Wook Jang

Topology Configuration for Effective In-Ship Network Construction . . . . . 380
  Mi-Jin Kim, Jong-Wook Jang, and Yun-sik Yu

Performance Analysis of Inter-LMA Handoff Scheme Based on 2-Layer in Hierarchical PMIPv6 Networks . . . . . 393
  Jongpil Jeong, Dong Ryeol Shin, Seunghyun Lee, and Jaesang Cha
A Study on Dynamic Gateway System for MOST GATEWAY Scheduling Algorithm . . . . . 403
  Seong-Jin Jang and Jong-Wook Jang

Implementation Automation Vehicle State Recorder System with In-Vehicle Networks . . . . . 412
  Sung-Hyun Baek and Jong-Wook Jang

Adapting Model Transformation Approach for Android Smartphone Application . . . . . 421
  Woo Yeol Kim, Hyun Seung Son, Jae Seung Kim, and Robert Young Chul Kim

Implementation of a Volume Controller for Considering Hearing Loss in Bluetooth Headset . . . . . 430
  Hyuntae Kim, Daehyun Ryu, and Jangsik Park

An Extended Cloud Computing Architecture for Immediate Sharing of Avionic Contents . . . . . 439
  Doo-Hyun Kim, Seunghwa Song, Seung-Jung Shin, and Neungsoo Park

Implementation of Switching Driving Module with ATmega16 Processor Based on Visible LED Communication System . . . . . 447
  Geun-Bin Hong, Tae-Su Jang, and Yong K. Kim

A Performance Evaluation on the Broadcasting Scheme for Mobile Ad-Hoc Networks . . . . . 453
  Kwan-Woong Kim, Tae-Su Jang, Cheol-Soo Bae, and Yong K. Kim

Carbon Nanotube as a New Coating Material for Developing Two Dimensional Speaker Systems . . . . . 460
  Jeong-Jin Kang and Keehong Um

Author Index . . . . . 467
Clock Synchronization for One-Way Delay Measurement: A Survey

Minsu Shin¹, Mankyu Park¹, Deockgil Oh¹, Byungchul Kim², and Jaeyong Lee²

¹ Dept. of Satellite Wireless Convergence, Electronics and Telecommunications Research Institute, Daejeon, Korea
  {msshin,neomkpark,dgoh}@etri.re.kr
² Dept. of Information Communication Engineering, Chungnam National University, Daejeon, Korea
  {byckim,jyl}@cnu.ac.kr
Abstract. In this paper we present a comprehensive survey of clock synchronization algorithms, which must be considered for measurements of network delay. We categorize clock synchronization algorithms into two basic types according to how they acquire synchronization between clocks: external source based schemes and end-to-end measurement based schemes. While external source based schemes are synchronization methods that use a centralized time source such as NTP, GPS or IEEE 1588 to achieve global synchronization for all end hosts, end-to-end schemes obtain synchronization information through network measurements between end hosts. We briefly introduce some algorithms in both categories. However, we focus more on the end-to-end schemes, which can be classified again into online and offline schemes according to whether they can be applied in real time. We survey recent progress on these end-to-end algorithms, with special attention to the estimation of the true one-way delay without the effect of clock skew. The problems in deploying each end-to-end scheme are also described. Potential further research issues in online one-way delay estimation are discussed.

Keywords: clock synchronization, delay measurement, one-way delay, clock skew.
1 Introduction
T.-h. Kim et al. (Eds.): ACN 2011, CCIS 199, pp. 1-10, 2011. © Springer-Verlag Berlin Heidelberg 2011

The fast expansion of the Internet to deliver increasingly important and various services makes network monitoring and performance measurements essential for effective network management. Many applications may benefit from knowledge of end-to-end delay metrics. Network latency is an important indicator of the operating network status, which changes with variations in network traffic patterns and congestion. Many QoS sensitive applications require the delay constraints to be met. Therefore, knowledge of the end-to-end delay can be used for Service Level Agreement (SLA) validation between network service providers and customers. Through end-to-end delay measurements, researchers can learn more about the underlying properties or characteristics of current networks, for example, network topology, traffic patterns and protocol distributions. In addition, end-to-end delay metrics are widely utilized in algorithms for performance enhancement of protocols including TCP, since they are the foundation for many other measurement metrics such as bandwidth, jitter and packet loss. While the Round Trip Time (RTT) is the basic representation of end-to-end delay, the need for One-Way Delay (OWD) measurement has also been recognized. Measuring the one-way delay instead of the round-trip delay is motivated by several factors, such as asymmetries of path and queuing as well as each application's characteristics [3]. In real measurements, the delay can show a changing trend of about 100 msec over a duration of 70 min because of the clock difference [4]. The end-to-end one-way delay experienced by a packet is the time taken to travel from source to destination, and it can be measured from the difference between the arrival time, according to the destination clock, and the timestamp added by the source and conveyed by the packet. If the two clocks at both end hosts are perfectly synchronized, the one-way delay can be calculated by subtracting the sender timestamp from the receiver timestamp, and this measured delay will be the true delay between the two end hosts. However, two clocks are rarely perfectly synchronized in real systems. The clocks may have different values at a certain moment and they may run at different speeds. Since the clocks at both end hosts are involved in delay measurement, synchronization between the two clocks becomes an important issue.
Before proceeding to the discussion on one-way delay measurements, we would like to introduce some terminology for clock behavior that is generally accepted in the literature [4]. The relative difference of the time reported by two clocks is called the offset, the rate at which a clock progresses is called its frequency, and the relative difference of two clocks' frequencies is called the skew. In addition, it is sometimes convenient to compare the frequency ratio between two clocks instead of the skew; this is called the clock ratio. Due to the offset and skew between two clocks, end-to-end delay measurements become inaccurate, and the expected performance enhancement from the measurement results is not guaranteed. To solve the clock synchronization problems, many algorithms and methods have been introduced. There are two kinds of clock synchronization approaches according to how they synchronize clocks: external server based methods and end-to-end measurement based methods. The basic idea of external server based methods is to locate a global server providing time information to every host in the network. Every host has to recognize the server and operate under the time synchronization provided through the server. The Network Time Protocol (NTP) [6], the Global Positioning System (GPS) and IEEE 1588 [7] can be included within this category. While external server based methods focus on the synchronization of the clock itself, because that is the first uncertainty source, end-to-end measurement based methods mainly focus on the detection and removal of the clock skew existing between two clocks, so that time values generated by each of the end hosts can be considered synchronized. In this paper, we survey time synchronization issues of recent years, especially for one-way delay measurements. The rest of the paper is organized as follows. We define the basic one-way delay model and the notation used throughout the paper in Sec. 2, and give a brief review of external server based methods in Sec. 3. In Sec. 4, we focus on end-to-end measurement based methods, which are classified into offline and online schemes. In Sec. 5, we discuss issues to be considered for algorithms proposed for real-time operation to be incorporated with transport protocols. Lastly, we conclude the paper in Sec. 6.
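The terminology above can be made concrete with a small numeric sketch (all numbers below are hypothetical, chosen only to illustrate the definitions):

```python
# Hypothetical clocks: the receiver clock C_r starts 2.5 s ahead of the
# sender clock C_s and runs 50 ppm fast. Both are read at the same true instant.
ELAPSED = 1000.0        # true seconds elapsed since both clocks started
RATIO = 1.00005         # clock ratio: frequency of C_r relative to C_s
START_OFFSET = 2.5      # offset of C_r relative to C_s at ELAPSED = 0 (seconds)

t_s = ELAPSED                         # reading of the sender clock
t_r = START_OFFSET + RATIO * ELAPSED  # reading of the receiver clock

offset = t_r - t_s   # relative difference of the reported times
skew = RATIO - 1.0   # relative difference of the two frequencies (50 ppm)
print(f"offset = {offset:.2f} s, skew = {skew:.0e}")
```

Note how the offset itself grows with elapsed time whenever the skew is nonzero; this is exactly why a one-shot offset correction is not enough for delay measurement.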
2 Basic Terminology and One-Way Delay Model
To understand the effect of clock skew and the idea behind the methods that remove it, we need to define the relation between time instances and corresponding delays. In this section we introduce the terminology for clocks, timestamps, and delays used in measurements. To stay consistent with previous work, we mainly adopt the following nomenclature from [4][12] to characterize clocks.

Fig. 1. Relation between Timing Information
In Fig. 1, Cs and Cr denote the sender and receiver clocks. For all values, the superscripts s and r mean that the value is measured according to the sender clock Cs and the receiver clock Cr, respectively, and the superscripts a and l mean the arriving and leaving timestamps, respectively. Moreover, d̄i is the end-to-end delay consistent with the true clock. Fig. 1 shows the timing relation between Cs and Cr when Cs and Cr run at different frequencies. The end-to-end delay of the i-th packet consistent with Cr is t_i^{r.a} − t_i^{r.l}. However, t_i^{r.l} is not known at the receiver, so t_i^{r.a} − t_i^{s.l} is typically used as the one-way delay, which is consistent with neither Cs nor Cr. To make it consistent with either clock, we need to determine the skew between the two clocks and remove it from the observable one-way delay.
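To see how an unsynchronized receiver clock distorts this measurement, consider a minimal sketch. The numbers are hypothetical; the 24 ppm skew is chosen only so that the drift matches the roughly 100 msec over 70 min reported in Sec. 1:

```python
# Hypothetical clocks: receiver runs 24 ppm fast (ALPHA) and is offset by
# THETA at true time 0; the true one-way delay is constant.
ALPHA = 1.000024   # clock ratio of C_r relative to C_s
THETA = 0.2        # relative offset at true time 0 (seconds)
TRUE_DELAY = 0.05  # constant true one-way delay (seconds)

def measured_owd(t_send):
    """t_i^{r.a} - t_i^{s.l}: receiver arrival stamp minus sender leaving stamp."""
    t_sl = t_send                                 # sender stamps with its own clock
    t_ra = THETA + ALPHA * (t_send + TRUE_DELAY)  # receiver stamps the arrival
    return t_ra - t_sl

drift = measured_owd(70 * 60.0) - measured_owd(0.0)  # change over 70 minutes
print(f"measured delay drifts by {drift * 1000:.1f} ms although the true delay is constant")
```

Even though the true delay never changes, the measured value grows linearly at a rate of ALPHA − 1 seconds per second, which is exactly the trend the skew removal algorithms below try to detect and subtract.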
3 External Server Based Synchronization
Several external mechanisms have been introduced to physically synchronize the end hosts' clocks so that the time information from both end hosts has no offset and skew. The Network Time Protocol (NTP), the Global Positioning System (GPS) and IEEE 1588 [7] can be included within this category. NTP is broadly deployed in the Internet to synchronize distributed clocks to each other or to a time server having an accurate clock. It can provide accuracy typically within a millisecond on LANs and up to a few tens of milliseconds on WANs [8]. The NTP system consists of a hierarchy of primary and secondary time servers, clients and interconnecting transmission paths [9]. Under normal circumstances clock synchronization is determined using only the most accurate and reliable servers and transmission paths, so that the actual synchronization paths usually assume a hierarchical configuration with the primary reference source at the root and servers of decreasing accuracy at increasing layers toward the leaves. Although the clock offset between a synchronized host and the NTP server can often be maintained on the order of multiple milliseconds, the accuracy of NTP is affected in part by the path characteristics between NTP clients and servers, which makes NTP a poor choice for accurate network measurement. The GPS system can also be used for clock synchronization, with high accuracy on the order of microseconds and wide coverage for large scale networks. Many measurement architectures incorporating the GPS system have been proposed [8][10]. However, GPS requires additional hardware such as antennas and distribution equipment for every host or group of hosts, which makes its use impractical from the viewpoint of economy and convenience. The new IEEE standard Precision Time Protocol (PTP), IEEE 1588 [7], is now a comprehensive solution for very precise time synchronization in an Ethernet network.
The IEEE 1588 time synchronization protocol specifies how such synchronization can be achieved over wired and wireless networks. It is an external synchronization protocol in which all clocks in the network trace their time to a reference clock. This protocol is the first available standard that makes it possible to synchronize the clocks of different end devices over a network to an accuracy better than one microsecond. The protocol supports system-wide synchronization accuracy in the sub-microsecond range with minimal network and local clock computing resources. However, its accuracy can be affected by network fluctuations introduced by network elements such as switches and routers, which limits its use to small networks. The aforementioned external server based methods for time synchronization have their own advantages and potential for further consideration. However, they are, at least at the moment, not an appropriate solution due to their practical limitations in terms of cost and susceptibility to network characteristics.
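As an illustration of what these server-based protocols compute, the classic four-timestamp request/response exchange used by NTP (and, in a similar form, by PTP's sync/delay-request messages) yields both the client's clock offset and the round-trip delay. The timestamps below are hypothetical:

```python
def offset_and_delay(t1, t2, t3, t4):
    """NTP-style estimates from one request/response exchange.
    t1: client send, t2: server receive, t3: server send, t4: client receive
    (t1, t4 read from the client clock; t2, t3 from the server clock)."""
    offset = ((t2 - t1) + (t3 - t4)) / 2.0  # server clock minus client clock
    delay = (t4 - t1) - (t3 - t2)           # round-trip network delay
    return offset, delay

# Hypothetical exchange: server clock 0.100 s ahead of the client,
# 0.030 s one-way delay in each direction, 0.001 s server processing time.
offset, delay = offset_and_delay(10.000, 10.130, 10.131, 10.061)
print(f"offset = {offset:.3f} s, delay = {delay:.3f} s")
```

Note that the offset estimate is exact only when the two path directions are symmetric; path asymmetry is precisely what limits NTP's accuracy for the delay measurements discussed here.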
4 End-to-End Measurement Based Synchronization
Although time keeping is the ideal case for clock synchronization, it is hard to achieve without the help of hardware devices like GPS or a hardware-based NTP server, as described in Sec. 3. Another approach to time synchronization is to find the clock uncertainty existing between the sender and receiver clocks and remove it, so that the two clocks become almost perfectly synchronized. Most algorithms in this category estimate and remove the clock skew from network measurements, while external server based methods try not to have any uncertainty in clock dynamics by providing the same clock source to every end host in the network. However, the asymmetry of the network path, the amount of traffic flow and the bandwidth make it difficult to estimate the delay difference in the two directions, which is essential for calculating the clock offset between two hosts. Fortunately, in most cases frequency keeping is enough for this purpose. For example, in delay measurement the dynamic part, mainly queuing delay, attracts much more attention than the static part composed of propagation delay and transmission delay. Besides, many measurement methods, such as available bandwidth estimation, are independent of a constant offset. From these observations, many contributions are devoted to determining the clock skew in the measurement; by removing the effect of the skew, we can transform the delay measurements so that they are consistent with a single clock. In some cases, one-way delay metrics are more important than round-trip time measurements, for the reasons given in Sec. 1. Therefore, most algorithms have focused on the detection and estimation of the clock skew existing in a unidirectional path. End-to-end measurement based synchronization methods can be classified into two sub-groups according to their real-time applicability: offline synchronization methods and online synchronization methods. Offline synchronization methods carry out their estimation and removal of clock skew on a trace of network measurements collected over a certain period of time in advance. On the other hand, online synchronization methods calculate the clock skew immediately on receiving the packets involved in the calculation and derive the required metrics through clock skew removal. Proposed schemes in the literature are described in the following two sections.

4.1 Offline Synchronization Approaches
Offline synchronization methods calculate the clock skew existing between the sender and receiver clocks from network measurement data. To deal with clock synchronization problems such as relative offset and skew, Paxson proposed the median line fitting algorithm, which uses forward and reverse path delay measurements between a pair of hosts [11][12]. Moon et al. focused on filtering out the effects of clock skew using only unidirectional delay measurements to determine the variable portion of the delay. The basic idea of the algorithm is to fit a line that lies under all delay points while being as close to them as possible, and to use the slope of the line as the estimated skew [4]. The authors formulated this idea as a linear programming problem. Khlifi et al. proposed two offline clock skew estimation and removal algorithms [13]. They formalized the clock skew model in Eq. (1), which is identical to the model in [4]:

    d_i = d_i^r + (α − 1) t_i^s + θ ,    (1)

where d_i is the measured delay experienced by the i-th packet, d_i^r is the true one-way delay experienced by the i-th packet, α is the clock ratio between the two clocks (so that α − 1 is the skew), t_i^s is the difference between the generation times of the i-th and the first packet according to the sender clock, and θ is the relative offset between the two clocks. In the average technique, they adopted the notion of the phase plot, which shows the evolution of the difference between consecutive packet delays d_i − d_{i−1}, and thus provided a way of calculating an estimate of the skew at very low complexity using only directly obtainable values:

    α = [(d_k − d_{l−1}) / (k − l − 1)] / [(t_k^s − t_{l−1}^s) / (k − l − 1)] + 1 ,    (2)
, where k and l are the indexes of the minimun measured delay for the two same sized intervals selected from the begining and end of measurement trace and considered to avoid the possible extreamities in the boundaries of the trace. Another algorithm proposed by khlifi et al., which is called direct skew removal technique, is to estimate the true one-way delay directly under the assumption that the minimum system clock resolution is equal to 1 msec and thus the variation in measured delay due to the skew is increased or decreased by the multiple of 1 msec, in the form of steps, depending on the sign of the skew (e.g., if the sender clock progresses faster than receiver clock, then the skew becomes negative). They have also considered the effect in the presence of clock resets. A quite different approache called Piece-wise Reliable Clock Skew Estimation Algorithm (PRCSEA) from the other previous algorithms was presented by Bi et al. in that it provides reliablility test to estimation results so that evenfually it doesn’t care about the presumption for clock dynamics[5]. Most algorithms in the field do not handle clock drift, assuming that the clock skew ramains constant, because it is very hard to decide where the skew changes in reality. PRCSEA takes those clock dynamics into consideration and introduces verification to estimation results so that it can handle the clock drift by naturally eliminating the needs to identify the skew changing point in its recursive process. Instead of providing specific skew estimation algorithm, it focuses on verifying the results of skew estimation using any existing algorithm for that. The authors showed that it has low time compexity even when there exits clock adjustment and drift within the measurement by evaluation and it performs well across diverse clock dynamics by simulation.
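As an illustration, the average technique of Eq. (2) can be sketched on synthetic data generated from the delay model of Eq. (1). This is a simplified Python sketch, not the implementation from [13]: the timestamps, noise range, and window size are our assumptions, and since the two (k − l − 1) averaging denominators in Eq. (2) cancel, the ratio form is used directly.

```python
# Sketch of the "average" skew estimate (Eq. 2) on synthetic delays
# generated from the clock skew model (Eq. 1):
#   d_i = d_r_i + (alpha - 1) * t_s_i + theta
import random

random.seed(1)
alpha, theta = 1.0001, 0.5                     # assumed skew and offset
t_s = [i * 20.0 for i in range(1000)]          # sender send times (ms)
d = [random.uniform(10, 15) + (alpha - 1) * t + theta for t in t_s]

# Pick the minimum-delay sample in same-sized windows at both ends of
# the trace, which avoids extreme values at the trace boundaries.
w = 100
l = min(range(w), key=lambda i: d[i])                    # head-window minimum
k = min(range(len(d) - w, len(d)), key=lambda i: d[i])   # tail-window minimum

# Eq. (2): ratio of average delay growth to average sender-time growth, plus 1.
alpha_hat = ((d[k] - d[l]) / (k - l)) / ((t_s[k] - t_s[l]) / (k - l)) + 1
print(round(alpha_hat, 6))
```

With a skew of 1.0001 the estimate lands close to the true value because the minimum-delay samples largely cancel the variable delay component.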
Clock Synchronization for OWD Measurement: A Survey
4.2 Online Synchronization Approaches
Online end-to-end synchronization schemes aim at estimating and removing the clock skew in real time, upon receiving packets, and can thus be used to improve the performance of operating protocols; for example, they can be adopted in bandwidth estimation or loss differentiation schemes to enhance TCP congestion control performance. However, compared to the offline algorithms, not much work has been done on online skew estimation and removal so far. Tobe et al. presented a simple scheme, called Estimation of Skew with Reduced Samples (ESRS), that estimates the skew with a reduced number of samples, alleviating the need to collect many samples over a long period, which makes an algorithm unsuitable for online calculation of the skew [14]. While adopting a delay model similar to that in [4], they proposed some modifications. To reduce the number of calculations, a measurement whose inter-arrival time falls outside the expected range is not taken into consideration, based on the observation that variable inter-arrival times induced by network characteristics such as probe compression should be excluded from the skew calculation. It was shown that this reduction process can remove a significant portion of the samples depending on the network status; e.g., a reduction factor of 86% can be obtained when the network is unstable or fluctuates with other traffic. Moreover, the algorithm refines the estimate of the base delay incrementally on the arrival of packets, rather than calculating the skew after a certain period, so that the skew estimate converges to a certain value and the OTT value without the clock skew can be determined thereafter.
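The sample-reduction step of ESRS can be illustrated with a small sketch (the nominal probe interval, tolerance, and function names below are our assumptions, not values from [14]): samples whose preceding inter-arrival gap falls outside the expected range are simply discarded before any skew calculation.

```python
# Sketch of ESRS-style sample reduction: drop delay samples whose
# inter-arrival time falls outside the expected range, since variable
# inter-arrival times (e.g., probe compression) distort skew estimation.

def filter_samples(arrivals, delays, nominal, tol=0.2):
    """Keep sample i only if the inter-arrival gap preceding it is
    within +/- tol * nominal of the nominal probe interval."""
    kept = [(arrivals[0], delays[0])]
    for i in range(1, len(arrivals)):
        gap = arrivals[i] - arrivals[i - 1]
        if abs(gap - nominal) <= tol * nominal:
            kept.append((arrivals[i], delays[i]))
    return kept

# Probes sent every 20 ms; some gaps compressed/stretched by the network.
arrivals = [0, 20, 41, 95, 115, 116, 136]
delays   = [12, 11, 13, 30, 12, 28, 11]
kept = filter_samples(arrivals, delays, nominal=20)
print(len(kept), "of", len(arrivals), "samples kept")
```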
This skew estimation scheme was utilized in the loss differentiation algorithm of [15], in which a detected packet loss is classified as congestion loss in the TCP congestion control loop when the Relative One-Way Trip Time (ROTT) value calculated by ESRS has remained above a threshold. Khlifi et al. proposed a sliding window technique and a combined approach for online synchronization [13]. The basic idea of the sliding window technique is to continuously evaluate the variation of the minimum measured delay. For this, a length T for the evaluation interval, called the window size, is chosen, and the minimum measured delay is determined for every interval. If the minimum delay of the current interval is smaller than that of the previous interval, the algorithm decides that the skew is negative and decreases the skew value by 1; in the opposite case, it increases the skew value by 1. The true delay is then obtained by subtracting this skew value from every measured delay. This technique responds quickly to the skew effect with good accuracy, so it can be applied for online synchronization. However, the choice of the window size T has great influence on the performance of the algorithm: to guarantee its performance, the window size has to equal the duration the skew effect takes to reach 1 ms. Since the proper window size is unlikely to be known in reality and depends entirely on the clock resolution of the end systems, its wide use in online applications may be limited.
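A minimal sketch of the sliding-window idea follows, assuming a 1 ms clock resolution as in [13]; the window size, the sample values, and the use of the first window as the comparison baseline are our simplifications.

```python
# Sketch of the sliding-window online skew removal: compare the minimum
# measured delay of each window against a baseline and step a skew
# counter by 1 ms accordingly (clock resolution assumed to be 1 ms).

def sliding_window_skew(delays, window):
    skew = 0                # accumulated skew effect, in ms
    corrected = []
    prev_min = None
    for start in range(0, len(delays), window):
        block = delays[start:start + window]
        cur_min = min(block)
        if prev_min is None:
            prev_min = cur_min          # baseline from the first window
        elif cur_min - skew < prev_min:
            skew -= 1                   # minimum drifted down: negative skew
        elif cur_min - skew > prev_min:
            skew += 1                   # minimum drifted up: positive skew
        corrected.extend(d - skew for d in block)
    return corrected, skew

# Delays whose floor rises by ~1 ms per window of 5 samples (positive skew).
delays = [10, 11, 10, 12, 10, 11, 12, 11, 13, 11, 12, 13, 12, 14, 12]
corrected, skew = sliding_window_skew(delays, window=5)
print(skew)
```

After correction the delay floor is flat again, which is exactly the effect the technique aims for.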
In [16], the authors propose to use their convex hull based algorithm (originally designed for offline skew estimation) to remove the skew from online delay measurements. Their idea is to estimate the skew at fixed intervals and to use the last estimate to remove the skew effect from upcoming measures. In [18], Choi et al. proposed a one-way delay estimation algorithm that requires no clock synchronization between the sender and receiver clocks. They derived the forward and reverse delays analytically and separately in terms of the two RTT values measured by the sender and the receiver:

t^s_{n,OWD} = t^s_{0,OWD} − Σ_{i=1}^{n} [t^s_{i,RTT} − t^r_{i,RTT}]

t^r_{n,OWD} = −t^s_{0,OWD} + Σ_{i=1}^{n} [t^s_{i,RTT} − t^r_{i,RTT}] + t^s_{n+1,RTT} .   (3)
As seen in Eq. (3), they did not focus on detecting the clock skew; instead they exploited the fact that the time duration between two departure or arrival events does not depend on the presence of the skew. This idea makes it possible to estimate the one-way delay using only measures obtainable upon receiving each packet, such as the RTT values at the sender and receiver. With this characteristic, the algorithm can be incorporated into any end-to-end transport protocol, and the resulting performance enhancement for TCP was shown in [18]. However, some uncertainty can be introduced by the heuristics for determining the initial parameter, i.e., t^s_{0,OWD} in Eq. (3), and the overall accuracy of the algorithm can be affected by this ambiguity. As another approach using one-way delays on the forward and reverse paths, similar to [18], Kim et al. presented an end-to-end one-way delay estimation scheme using the one-way delay variation and round-trip time (RTT) [19]. This algorithm is based on the idea that the one-way delay variation, i.e., jitter, depends only on the difference of RTTs, because the effects of clock skew are naturally removed in the RTT calculation process. They showed mathematically that the jitter for each direction can be expressed using the RTTs measured at the sender and the receiver without a priori clock synchronization and, furthermore, that under a certain condition the ratio of the one-way delays equals the ratio of the one-way jitters expressed in terms of the measured RTTs. By making the RTT measurements and the estimations of one-way delays and offsets only with samples satisfying that condition, the algorithm eventually determines the unknown one-way delays from the obtainable RTT values with reduced calculations.
With these processes, the algorithm provides the following characteristics: without any assumption of time synchronization, it can track the variations of one-way delays in real time, and it works well under realistic network conditions because it takes only the samples satisfying the condition.
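The recursion of Eq. (3) is straightforward to sketch. The following Python fragment is illustrative only: the RTT values are synthetic, and the initial forward OWD (the heuristic parameter t^s_{0,OWD} discussed above) is simply assumed rather than derived.

```python
# Sketch of Eq. (3): forward/reverse one-way delays for packet n from
# the RTT sequences measured at the sender (t_s_rtt) and receiver
# (t_r_rtt). owd0 is the heuristic initial forward OWD.

def owd_estimates(owd0, t_s_rtt, t_r_rtt, n):
    """Return (forward OWD, reverse OWD) for packet n (1-based indexing)."""
    s = sum(t_s_rtt[i] - t_r_rtt[i] for i in range(1, n + 1))
    fwd = owd0 - s                        # t^s_{n,OWD}
    rev = -owd0 + s + t_s_rtt[n + 1]      # t^r_{n,OWD}
    return fwd, rev

# Synthetic RTTs (ms); index 0 is unused to keep the paper's 1-based notation.
t_s_rtt = [0, 100, 102, 101, 103, 100]
t_r_rtt = [0, 100, 101, 102, 101]
fwd, rev = owd_estimates(owd0=50, t_s_rtt=t_s_rtt, t_r_rtt=t_r_rtt, n=3)
print(fwd, rev)
```

Note that fwd + rev recovers an RTT-scale value, which is the consistency one expects from splitting a round trip into two one-way components.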
5 Open Issues in Online OWD Measurements
Although many researchers have devoted their efforts to clock synchronization algorithms for one-way delay measurement, and many valuable contributions have been presented in the literature, online delay calculation algorithms are far fewer than offline algorithms. Moreover, even the online calculation algorithms mentioned in Sec. 1 and Sec. 4.2 are rarely adopted in the operation of any transport protocol in reality, for example to enhance its performance. We want to discuss the reasons in this paper. First of all, TCP, a typical transport protocol in the Internet, is a sender-oriented protocol, meaning that every modification to the protocol is preferably confined to the sender side. This is required to guarantee that not all end hosts have to be changed. However, calculating and delivering the forward one-way delay measurement, which is more important for TCP congestion control than the reverse one-way delay, inevitably requires receiver-side modification. To address this issue, modifying the current TCP timestamp option can be considered, as in Sync-TCP [17].
6 Conclusion
In this paper we presented a comprehensive survey of recent clock synchronization algorithms, which must be considered in measurements of network delay. By defining the end-to-end delay model, we analyzed the important factors and processes of previous works in a unified way. We categorized the clock synchronization algorithms into two basic types according to how they acquire synchronization between clocks: external source based schemes and end-to-end measurement based schemes. While external source based schemes use a centralized time source such as NTP, GPS, or IEEE 1588 to achieve global synchronization for all end hosts, end-to-end schemes obtain synchronization information through network measurements between end hosts. We briefly introduced some algorithms in both categories, but focused more on the end-to-end schemes, which can be subdivided into online and offline schemes according to whether they can be applied in real-time operation. We surveyed recent progress on these end-to-end algorithms, with special attention to clock synchronization for one-way delay measurements. The problems in deploying each end-to-end scheme were also described. As network bandwidth increases dramatically and asymmetry becomes more likely, inaccurate measurements of network characteristics will cause potential network performance degradation. In that sense, these clock dynamics should be considered prudently, using the schemes summarized in this paper.

Acknowledgments. This work was supported by the IT R&D program of KCC [2009-S-039-02, Development of Satellite Return Link Access Core Technology for High Efficient Transmission].
References

1. Bolot, J.C.: Characterizing End-to-End Packet Delay and Loss in the Internet. Journal of High-Speed Networks 2(3), 305–323 (1993)
2. Mills, D.L.: Improved algorithms for synchronizing computer network clocks. IEEE/ACM Trans. Netw. 3(3), 245–254 (1995)
3. Almes, G., Kalidindi, S., Zekauskas, M.: A One-Way Delay Metric for IPPM. IETF RFC 2679 (September 1999)
4. Moon, S., Skelly, P., Towsley, D.: Estimation and removal of clock skew from network delay measurements. In: Proc. IEEE INFOCOM 1999, New York, NY (March 1999)
5. Bi, J., Wu, Q., Li, Z.: On estimating clock skew for one-way measurements. Computer Communications, 1213–1225 (2006)
6. Mills, D.L.: Network Time Protocol (Version 3) Specification, Implementation and Analysis. IETF RFC 1305 (1992)
7. IEEE Std. 1588-2008: IEEE Standard for a Precision Clock Synchronization Protocol for Networked Measurement and Control Systems. IEEE (July 2008)
8. Vito, L.D., Rapuano, S., Tomaciello, L.: One-Way Delay Measurement: State of the Art. IEEE Trans. Instrumentation and Measurement 57(12), 2742–2750 (2008)
9. Sethi, A.S., Gao, H., Mills, D.L.: Management of the Network Time Protocol (NTP) with SNMP. Technical Report No. 98-09 (November 1997)
10. Jeong, J., Lee, S., Kim, Y., Choi, Y.: Design and Implementation of One-Way IP Performance Measurement Tool, vol. 2343(2), pp. 673–686. Springer, London (2002)
11. Paxson, V.: On calibrating measurements of packet transit times. In: Proc. ACM SIGMETRICS 1998, Madison, WI, pp. 11–21 (June 1998)
12. Paxson, V.: Measurements and Analysis of End-to-End Internet Dynamics. Ph.D. dissertation. Lawrence Berkeley Nat. Lab., Univ. California, Berkeley (1997)
13. Khlifi, H., Grégoire, J.-C.: Low-complexity offline and online clock skew estimation and removal. Computer Networks 50(11), 1872–1884 (2006)
14. Tobe, Y., Aida, H., Tamura, Y., Tokuda, H.: Detection of change in one-way delay for analyzing the path status. In: Proc. of the Passive and Active Measurement Workshop (PAM 2000), pp. 61–68 (April 2000)
15. Tobe, Y., Tamura, Y., Molano, A., Ghosh, S., Tokuda, H.: Achieving moderate fairness for UDP flows by path-status classification. In: Proc. 25th Annu. IEEE Conf. Local Computer Networks (LCN 2000), Tampa, FL, pp. 252–261 (November 2000)
16. Zhang, L., Liu, Z., Xia, C.H.: Clock synchronization algorithms for network measurements. In: Proc. IEEE INFOCOM 2002, New York, NY, pp. 160–169 (June 2002)
17. Weigle, M.C., Jeffay, K., Smith, D.: Delay-based early congestion detection and adaptation in TCP: impact on web performance. Computer Communications, 837–850 (2005)
18. Choi, J.H., Yoo, C.: One-way delay estimation and its application. Computer Communications, 819–828 (2005)
19. Kim, D., Lee, J.: One-way delay estimation without clock synchronization. IEICE Electronics Express 4(23), 717–723 (2007)
20. Aoki, M., Oki, E., Rojas-Cessa, R.: Measurement Scheme for One-Way Delay Variation with Detection and Removal of Clock Skew. ETRI Journal 32(6), 854–862 (2010)
API-Oriented Traffic Analysis in the IMS/Web 2.0 Era

Daizo Ikeda¹, Toshihiro Suzuki¹, and Akira Miura²

¹ NTT DOCOMO, INC., 3-5 Hikarino-oka, Yokosuka-shi, Kanagawa 239-8536, Japan
{ikeda,suzukitoshi}@nttdocomo.co.jp
² Prefectural University of Kumamoto, 3-1-100, Tsukide, Kumamoto, 862-8502, Japan
[email protected]
Abstract. This paper presents an analysis method for dealing with API-oriented traffic, one of the major operational challenges in the IMS/Web 2.0 era which must be overcome in developing a highly stable and reliable communication system. Traffic evaluation methods for the commercial i-mode service in Japan are being extended to address this issue. In particular, we suggest that API traffic models are defined based on process sequences and reflected in the performance evaluation of nodes in a system. Network capacity planning should deal with the impact of estimated API traffic on a mobile network, especially gateway modules. Our proposal enables mobile operators to construct a highly stable and reliable system which supports service innovation by providing APIs to application developers. Keywords: mobile communications, traffic pattern, traffic analysis, performance evaluation, API, IMS, Web 2.0.
1 Introduction
In the next generation of mobile communications, a wide variety of services and applications will arise that make use of an application programming interface (API) offered by a mobile network. Open APIs allow application developers to make use of functions such as the remote control of devices and to acquire information, including location and presence, for service innovation. The introduction of these API requests makes it difficult to estimate the amount of traffic in the communication network, even though many studies have been conducted in the traffic management domain [1] [2]. We have therefore developed a technique that extends the traffic analysis method used for a legacy second-generation mobile network for the i-mode service to an API-equipped mobile network for the coming IMS/Web 2.0 era. The i-mode service was launched in February 1999 by NTT DOCOMO, INC., a leading mobile operator in Japan, and currently has approximately fifty million subscribers. The aim of this service is to be at the forefront of the mobile Internet by creating an environment which provides easy-to-use e-mail operation and Internet access. The i-mode service is provided over a large-scale network comprising a mobile packet communication system as the core network and i-mode servers that are connected to the Internet. This service acquired more subscribers than expected within a relatively short time period, and thus faced operational challenges due to overloads.

T.-h. Kim et al. (Eds.): ACN 2011, CCIS 199, pp. 11–18, 2011. © Springer-Verlag Berlin Heidelberg 2011
This paper presents a concept for developing a high quality communication system for IMS/Web 2.0 based on developing techniques for an i-mode core network.
2 Operational Challenges in the PDC-P
The i-mode service is provided over a core network of the second-generation mobile communication system called personal digital cellular-packet (PDC-P) [3] [4] [5]. As shown in Fig. 1, this system mainly comprises a large number of PDC-P nodes: packet processing modules (PPMs), packet gateway modules (PGWs), and mobile message packet gateway modules (M-PGWs) [6]. All of the network elements are Unix based and connected through LANs, some of which are connected over a WAN. Multiple M-PGWs, with interfaces between the PDC-P network and i-mode servers, are installed to balance the traffic load and improve reliability. To reduce cost and achieve an early launch, this system shares home location registers (HLRs) and paging procedures with the legacy PDC network via switching units.
Fig. 1. PDC-P network (PPM: packet processing module; PGW: packet gateway module; M-PGW: mobile message PGW; HLR: home location register; PPMs and PGWs connect via routers and an IP network to the Internet, with M-PGWs interfacing to the i-mode servers)
The highest priority was placed on an early launch of i-mode and for this reason scalability was expected to be achieved at a later stage. However, beyond market expectations, the number of mobile subscribers rapidly grew to one million, and then to ten million within one year after commencement. At the peak period, the number of newly-acquired subscribers reached more than fifty thousand per day, and each mobile station was receiving large numbers of unsolicited email messages from personal computers. As the amount of traffic increased to the point of overload, hardware and software bottlenecks emerged in the PDC-P network resulting in
occasional service interruptions due to software problems. Immediate steps were taken to not only bolster the hardware, but also enhance the software in terms of robustness and quality maintenance to overcome the overloads. Furthermore, a survey was conducted to estimate the number of network elements required, to determine the need for future equipment upgrades such as to servers, routers, and switching units in a timely and effective manner. The following sections describe practical strategies that we adopted to meet such needs.
3 Practical Approaches to Overcome Challenges

3.1 Traffic Characteristic Identification
There are two major characteristics of mobile data communications: users move around, and the throughput tends to fluctuate because of the existence of wireless sections. Another factor to consider is a characteristic of packet switching: data packets occur at random while a terminal is in communication mode. Based on measurements of the node traffic in the PDC-P network and of the traffic between the M-PGWs and i-mode servers [7], we found that the packet call distribution is similar to the voice call distribution. In general, the precision of a distribution fit is evaluated using a decision coefficient (coefficient of determination). This was more than 0.9 in the actual measurement, which strongly indicates that the i-mode traffic pattern follows an exponential distribution. Fig. 2 shows the measured number of requests to initiate i-mode communication by packet arrival.
Fig. 2. i-mode traffic pattern: the number of requests to initiate i-mode communication by packet arrival per unit time, plotted against the arrival interval (msec) and fitted by the exponential distribution f(t) = λ·exp(−λt)
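The exponential fit behind Fig. 2 can be reproduced on synthetic data: estimate the rate λ as the reciprocal of the mean inter-arrival time and compute a decision coefficient (R²) between the empirical histogram and the fitted curve. This is our own illustrative sketch; the bin width, sample size, and rate are assumptions, not the commercial measurement values.

```python
# Sketch: fit f(t) = lam * exp(-lam * t) to inter-arrival times and
# compute the decision coefficient (R^2) against the empirical
# histogram, mirroring the evaluation described for the i-mode fit.
import math
import random

random.seed(7)
lam_true = 0.01                                    # arrivals per msec (assumed)
samples = [random.expovariate(lam_true) for _ in range(20000)]

lam_hat = 1.0 / (sum(samples) / len(samples))      # MLE for the rate

# Empirical density via fixed-width bins, compared with the fitted curve.
width, nbins = 20.0, 25
hist = [0] * nbins
for t in samples:
    b = int(t // width)
    if b < nbins:
        hist[b] += 1
emp = [h / (len(samples) * width) for h in hist]               # empirical density
fit = [lam_hat * math.exp(-lam_hat * (i + 0.5) * width) for i in range(nbins)]

mean_emp = sum(emp) / nbins
ss_res = sum((e - f) ** 2 for e, f in zip(emp, fit))
ss_tot = sum((e - mean_emp) ** 2 for e in emp)
r2 = 1 - ss_res / ss_tot
print(round(lam_hat, 4), round(r2, 3))
```

With enough samples the decision coefficient comfortably exceeds the 0.9 threshold cited in the text.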
3.2 Bottleneck Identification
One of the major targets is to enhance capacity in order to eliminate the hardware and software bottlenecks that arise primarily from the sharply increasing amount of traffic. To avoid processing bottlenecks that may result from simultaneous attempts to evaluate performance and to develop software, a physically independent performance evaluation site and overload simulators were newly added to the existing debugging environment, as shown in Fig. 3. Furthermore, through fixed-point observation of traffic patterns and the gathering and analysis of processing logs on the commercial PDC-P, we identified a number of bottleneck points. This allowed us to perform recovery and effectiveness tests. As a result of the testing, we successfully reduced the number of software update failures of the commercial i-mode system, which might otherwise result from errors in signals or process sequences. Thus, by evaluating the effectiveness of software upgrades before introducing them into the commercial system, we successfully improved robustness against overloads.
Fig. 3. Development environment: a performance evaluation site (PGW, PPM, M-PGW, HLR, base station, and i-mode server simulators, a call simulator scenario generator, and a log gathering server) and a debugging site linked to the commercial network, supported by a problem database, source code analysis tools, and program fault diagnosis
3.3 Performance Evaluation
We recorded measurements under overload or rapidly-changing traffic conditions at the performance evaluation site, and conducted a regression analysis of the CPU use ratio for the servers. The performance evaluation equation is defined as

CPU occupancy = Σ_{i=1}^{n} a_i X_i   (1)
where a_i stands for the coefficient representing the weight of each process, such as user packet transfer, communication requests, and channel switching, and X_i stands for the parameter indicating the number of times the process is performed [8]. We selected the signals that affect the processing capacity, including call control signals and maintenance signals [9], and extracted the n parameters that indicate the number of times a certain process is executed based on the number of signals. The coefficients are calculated by single regression analysis, generating a traffic load specific to a particular coefficient at the performance evaluation site. For example, the coefficient for user packet transfer is calculated by generating a user packet transfer load exclusively; similarly, the coefficient for each other type of process is calculated by applying the load for that particular process. Thus, a number of measurements are conducted to cover all parameters. This technique clarified the performance limits and defined highly accurate criteria for increasing or decreasing the number of network elements [10]. One benefit was an improvement in overload robustness. We clearly detailed procedures for replacing PDC-P nodes and switching units with higher-capacity ones by optimizing the thresholds for the timing of the upgrades. The performance evaluation results were also applied to optimize congestion control.
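The per-process single regression can be sketched as follows. This is an illustrative Python fragment, not the authors' tooling: the cost coefficient, load levels, and noise model are assumptions, and a least-squares slope through the origin plays the role of the single regression for one process type.

```python
# Sketch of the per-process single regression behind Eq. (1): generate
# load for one process type at a time and fit its CPU cost coefficient
# a_i by least squares on (X_i, CPU) measurements.
import random

random.seed(3)

def fit_coefficient(loads, cpu):
    """Least-squares slope through the origin: a = sum(x*y) / sum(x*x)."""
    return sum(x * y for x, y in zip(loads, cpu)) / sum(x * x for x in loads)

# Measurements with only user-packet-transfer load applied (assumed
# true cost: 0.005% CPU per packet, plus measurement noise).
loads = [1000, 2000, 4000, 6000, 8000]
cpu = [0.005 * x + random.gauss(0, 0.2) for x in loads]
a_transfer = fit_coefficient(loads, cpu)

# Predicted occupancy for a mixed load would then sum the a_i * X_i terms.
print(round(a_transfer, 5))
```

Repeating this measurement once per process type yields the full coefficient set of Eq. (1).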
4 Proposal for API-Oriented Traffic Analysis
In the coming IMS/Web 2.0 era, service development is expected to form part of a collaborative innovation process in which a mobile operator provides service enablers to application developers through open APIs. Application developers may be allowed to create long-tail services using network resources and functions as service enablers. As a result of this service enhancement, a variety of traffic is expected to pour into the communication system, including requests for normal call initiation or location information acquisition from Web 2.0 applications using an API. In such an environment, the observation of traffic and the identification of its characteristics will become increasingly important. We undertook the following steps to estimate the impact of API-oriented traffic on a mobile network.

(1) API-oriented traffic models were defined based on process sequences.
(2) Under these models, the CPU use ratios were calculated by using a performance evaluation formula.
(3) The evaluation results were examined for network capacity planning.

Different services generate different traffic characteristics. One essential approach will be to investigate the traffic patterns of major service types and to evaluate the effects on the core network by distinguishing arriving packet calls from service requests from an API. Here, we applied the analysis techniques originally used for the i-mode traffic to the API-oriented case. When a mobile network receives an API request, some call control signals are expected to be transmitted in the network in order to initiate a function or acquire the requested information. Taking into account API specifications such as Parlay X and the Next Generation Service Interface (NGSI),
Fig. 4. Signals triggered by an API request (message sequence among application, gateway module, and mobile station; an API request triggers call control signals a, b, c, and b)
both of which were defined by the Open Mobile Alliance (OMA) [11] [12], we assumed the number of call control signals triggered by an API request, as shown in Fig. 4. Following this step, API traffic models were defined based on the traffic patterns of communication requests due to packet arrival. Two non-API models, which offer no open APIs, were used to evaluate the performance of two cases: user packet transmission alone, and both user packets and call control signals. The volumes of user packets and call control signals were determined based on commercial traffic models. In addition, three API traffic models were used to evaluate the performance under the estimated API requests, along with the two non-API models dealing with user packets and call control signals mentioned above, and these were defined based on the conventional commercial traffic of call requests. Medium API traffic means that the volume of API requests is equal to the number of communication requests due to packet arrival per second. The CPU use ratio was then calculated by applying the performance evaluation formula described in Section 3. The results, shown in Fig. 5, make it clear that the API-oriented traffic models lead to high CPU occupancy, reaching the threshold of 60% with a smaller number of transferred user data packets than is the case with a non-API traffic model. This result also implies that API-oriented traffic, along with the traffic for the initiation of communication, should be reflected in network capacity planning. One example is the planning of gateway modules, which process both the initiation of communications triggered by packet arrival and API requests. Based on assumptions about API request volumes, it is critical to estimate the user packet data that can be transferred while keeping CPU use stable. Such performance evaluations should also make it possible to clarify the need for gateway modules dedicated to processing API requests.
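The assumption of a fixed number of call-control signals per API request (Fig. 4) translates directly into a load estimate for a gateway module. In the sketch below every number (signals per request, the communication-request rate, and the low/medium/high scaling factors) is an assumption for illustration; "medium" follows the definition above, an API request rate equal to the rate of communication requests due to packet arrival.

```python
# Sketch: translate an assumed per-API-request signal count (Fig. 4
# suggests several call-control signals per request) into the extra
# signal load a gateway module must absorb. All numbers are assumptions.

SIGNALS_PER_API_REQUEST = 4   # e.g., signals a, b, c, b in Fig. 4

def extra_signal_rate(api_requests_per_sec, signals_per_request=SIGNALS_PER_API_REQUEST):
    """Additional call-control signals per second caused by API traffic."""
    return api_requests_per_sec * signals_per_request

# Assume 500 communication requests/s; scale the API request rate
# to mimic the low/medium/high API traffic models.
comm_requests_per_sec = 500
for label, factor in [("low", 0.5), ("medium", 1.0), ("high", 2.0)]:
    print(label, extra_signal_rate(comm_requests_per_sec * factor))
```

Feeding such signal rates into the X_i parameters of Eq. (1) is what distinguishes the API traffic models from the non-API ones in Fig. 5.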
Fig. 5. Performance evaluation results: CPU occupancy (%) versus transferred user packets (pps) for the high, medium, and low API traffic models and the two non-API models (user packets only; user packets and call control signals)
5 Conclusions
This paper has presented methods for evaluating the performance of a mobile network handling API-oriented traffic. Prior research on traffic analysis in a legacy system can be extended to address one of the major operational challenges in the IMS/Web 2.0 era. Traditional indicators, such as packets per second (PPS), are insufficient by themselves because new call control signals invoked by API requests can have a large impact on the CPU use. Network capacity planning should take account of the impact of estimated API traffic on a mobile network, especially on the gateway modules. Our proposal enables mobile operators to construct a highly stable and reliable system for service innovation by providing APIs to application developers. In the future, it will be important to develop a method for congestion control using API-oriented traffic models. One possible approach will include distinguishing arriving packet calls from service requests originating from an open API. Commercial API traffic analysis is expected to be a key factor in improving the performance evaluation accuracy and effect of congestion control.
References

1. El Barachi, M., Glitho, R., Dssouli, R.: Control-level call differentiation in IMS-based 3G core networks. IEEE Network Magazine 25(1), 20–28 (2011)
2. Pandey, S., Jain, V., Das, D., Planat, V., Periannan, R.: Performance study of IMS signaling plane. In: International Conference on IP Multimedia Subsystem Architecture and Applications, IMSAA 2007, pp. 1–5 (2007)
3. Oonuki, M., Kobayashi, K., Nakamura, K., Kimura, S.: Special Issue on Mobile Packet Data Communications System, Overview of PDC-P System. NTT DoCoMo Technical Journal 5(2), 14–19 (1997) (in Japanese)
4. Ikeda, D.: 2nd Generation Cellular Networks (PDC-P). In: Esaki, H., Sunahara, H., Murai, J. (eds.) Broadband Internet Deployment in Japan, Ohmsha, Tokyo, pp. 38–43 (2008)
5. Telecommunication Technology Committee (TTC): PDC Digital Mobile Communications Network Inter-Node Interface (DMNI) Signaling Method of Mobile Packet Communications System. JJ-70.20 (2001)
6. Hanaoka, M., Kaneshige, S., Hagiya, N., Ohkubo, K., Yakura, K., Kikuta, Y.: Special Issue on i-mode Service Network System. NTT DoCoMo Technical Journal 1(1), 14–19 (1999)
7. Yoshihara, K., Suzuki, T., Miura, A., Kawahara, R.: Evaluation of Congestion Control of the PDC Mobile Packet Data Communication System. In: IEEE Global Telecommunications Conference, GLOBECOM 2002, vol. 2, pp. 1965–1969 (2002)
8. Miura, A., Suzuki, T., Yoshihara, K., Sasada, K., Kikuta, Y.: Evaluation of the Performance of the Mobile Communications Network Providing Internet Access Service. IEICE Transactions on Communications E84-B(12), 3161–3172 (2001)
9. Ikeda, D., Miura, A.: Provision of Paging in a Mobile Packet Data Communication System. In: 4th Asia-Pacific Symposium on Information and Telecommunication Technologies (APSITT 2001), pp. 176–180 (2001)
10. Miura, A., Shinagawa, N., Ishihara, F., Suzuki, T., Mochida, H.: Network Design Based on Network and Traffic Characteristics. In: 19th International Teletraffic Congress (ITC19), pp. 819–828 (2005)
11. Open Mobile Alliance (OMA): Reference Release Definition for Parlay Service Access. Approved Version 1.0 (2010)
12. Open Mobile Alliance (OMA): Next Generation Service Interfaces Requirements. Candidate Version 1.0 (2010)
Analysis of the Characteristics of EEG Caused by Dyspnea and the Influence of the Environmental Factors

Jeong-Hoon Shin

Dept. of Computer & Information Communication Eng., Catholic University of Dae-Gu, Korea
[email protected]
Abstract. For breathing and respiratory diseases related to physical health, various kinds of medicinal treatment have been developed for certain physical changes (such as increased heartbeat and blood pressure caused by a fast rate of breathing). However, most research to date has addressed only medicinal treatment for such physical changes; combined research on non-medicinal treatment and environmental factors has not been carried out. Medicinal treatment can expose patients to risks such as side effects and drug poisoning, and its high cost burdens patients and hinders clinical utilization. To address these problems, this paper analyzes the characteristics of EEG, as one kind of non-medicinal treatment, under changes in the surrounding environment, and analyzes the tendency of environmental factors and physical conditions to induce changes in brainwaves. Based on the results of this analysis, we suggest stable neuro/bio feedback treatment and training techniques that are insensitive to environmental changes, as well as a possible method of applying brainwaves to healthcare equipment. Keywords: brainwave, dyspnea, temperature changes, environmental factors, influence.
1 Introduction
With scientific progress, various kinds of medicinal treatment for physical conditions have been developed. Medicinal treatment takes effect quickly and, since it requires no exercise or training effort, is widely used. Together with treatments for newly emerging viruses, such therapies are continuously developed and studied. Most research so far, however, has focused on medicinal treatment that brings a fast effect for specific physical changes. Fast-acting treatment carries the possibility of unexpected risks (side effects, drug intoxication, and recurrence of symptoms after the medicine is stopped). This problem hinders the application of such treatment in the clinical and medical equipment industries. T.-h. Kim et al. (Eds.): ACN 2011, CCIS 199, pp. 19–30, 2011. © Springer-Verlag Berlin Heidelberg 2011
This paper considers environmental changes as variables for the physical changes related to the brainwave state, and analyzes how environmental changes influence the measured brain signals. Based on an analysis of physical characteristics that takes environmental changes into account, it ultimately becomes possible to obtain stable brainwave analysis results that are independent of environmental changes. The characteristics of brainwave, which can serve as a basis for non-medicinal treatment, are analyzed together with the changes of the surrounding environment, and the tendency of environmental and physical changes to induce changes of the brainwave state is examined. Based on these results, stable neuro/bio feedback treatment and training, as well as applications in fields such as healthcare equipment, are suggested. The structure of this paper is as follows. Chapter 2 reviews trends in medical equipment and clinical diagnosis using brainwave. Chapter 3 describes the structure and procedure of the experimental environment for analyzing the characteristics of brainwave under emotional changes with consideration of environmental factors. Chapter 4 presents the analysis of the experimental results, and Chapter 5 gives the conclusion and directions for follow-up research.
2 Related Works
The major research themes currently pursued worldwide for utilizing bio signals in healthcare technology and neuro/bio feedback can be classified into four categories: measurement and sensor technology for bio signals, transmission and monitoring technology, analysis technology, and standardization technology. Among bio-signal studies, research on brainwave is actively carried out by measuring, with electrodes on the scalp, the potential differences of the weak signals produced by the physiological activity of the brain. According to prior research, the functional state of the central nervous system with respect to brain tumor, cerebrovascular accident and other head injuries can be determined by quantitatively analyzing the frequency content of the brainwave signals, and this line of research is being pursued in various applications. However, most brainwave research so far has been confined to the four categories mentioned above; no combined research that considers the surrounding environmental factors has been carried out. Although methods for utilizing bio information in healthcare technology and for treating mental diseases through neuro/bio feedback have been proposed in various forms, a combined study that considers the surrounding environmental factors is highly necessary for credible application and clinical use.
3 Formation of the Experimental Environment
3.1 Environmental Factors
As shown in Figure 1, the experimental environment established in this paper consists of three rooms, used to analyze the changes in the state of dyspnea caused by changes of the surrounding temperature. Brainwave is measured after analyzing the room temperature and the state of dyspnea. In order to exclude combined environmental changes and to increase the credibility of the analysis of the influence of temperature, the temperature and humidity of the three independent rooms were kept at 28 ℃ / 30 % humidity, 38 ℃ / 39 % humidity, and 50 ℃ / 35 % humidity, respectively, throughout the experimental period.
Fig. 1. Experimental Environment
3.2 Formation of the Subject Group
The subject group consists of 40 men and women in their twenties. To remove variability due to the time of measurement, all experiments were carried out in the same time period, between 3:00 pm and 8:00 pm, in which the subjects could feel comfortable.

3.3 Location of the Electrode and Brainwave Measurement Equipment
The analysis of dyspnea caused by temperature changes, and of the resulting changes in the characteristics of brainwave, was carried out while keeping constant all factors that could influence the experiment other than temperature. The measurement equipment used for the experiment is an 8-channel brainwave measurement device (Laxtha Co., Ltd., Korea). The electrode positions Fp1, Fp2, T3, T4, C3, C4, O1 and O2 were selected according to the International 10-20 System of Electrode Placement shown in Figure 2. The measured brainwave data of the subject group was digitized at a sampling rate of 256 Hz.
Regarding the selection of the electrode positions, Fp1 and Fp2 were chosen to analyze the changes in cerebral activation of the frontal lobe. T3, T4, C3 and C4 were chosen to analyze the changes in activation over the diencephalon, which is involved in physiological responses related to body temperature and blood pressure, and the medulla oblongata, which is involved in digestion, blood circulation, breathing adjustment and reflexes. O1 and O2 were chosen to analyze the changes in activation of the occipital lobe, which is responsible for visual information.
Fig. 2. International 10-20 System of Electrode Placement
3.4 Experimental Method
Step 1. After entering the measurement room in the specified order, the subjects are instructed to remain in a stable position, with electrodes on their scalps, for 10 minutes.
Step 2. After the stabilization time has passed, the pulse and blood pressure of the subjects are measured, and at the same time brainwave in the stabilized state is collected for three minutes.
Step 3. After the brainwave in the stabilized state has been measured, each subject is given one minute of free time. The subjects are instructed to keep a stable position in a natural state; no brainwave is measured during this time.
Step 4. The subjects are then instructed to hold their breath for thirty seconds in order to induce changes in their heartbeat and blood pressure, and their brainwave is collected during this period. After the collection, their pulse and blood pressure are measured again. In this experiment, holding the breath makes the subjects experience anxiety, a stifling sensation and dizziness through dyspnea.
Step 5. After completing Step 4, the subjects leave the measurement room and rest in a comfortable position for 10 minutes before entering the next room, where Steps 1 to 4 are executed again. The measurement process is repeated until it has been completed in each of the rooms at 28 ℃, 38 ℃ and 50 ℃.
4 Experiment and Analysis of Results
In this paper, as a linear quantitative criterion for analyzing the influence of temperature-induced changes of heartbeat and blood pressure on the state of brainwave activation, the correlation coefficients among the EEG signal channels (all-pair cross Pearson correlation) are analyzed prior to statistical generalization. In addition, to analyze the influence of the specific frequency bands contained in the brainwave signals, a band-pass filter is used to separate each EEG frequency band, and the all-pair cross Pearson correlation among the channels is computed for each band; in this way, the central band of the brainwave signals that plays an important role in the changes of brainwave state according to heartbeat and blood pressure is identified. Furthermore, to prevent transient noise from reducing the credibility of the analysis results, ensemble averaging over a moving window within the measurement period is applied. The brainwave signal analysis carried out in this paper is shown as a block diagram in Figure 3.
Fig. 3. Block diagram of analyzing process of the brainwave
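The first stage of this pipeline, the all-pair cross Pearson correlation among the eight EEG channels, can be sketched as follows. This is a minimal illustration in Python with synthetic data; the channel names and the 256 Hz sampling rate come from Section 3.3, everything else (function name, data shape) is an assumption for illustration.

```python
import numpy as np

# EEG channels used in the paper (Section 3.3)
CHANNELS = ["Fp1", "Fp2", "T3", "T4", "C3", "C4", "O1", "O2"]

def all_pair_cross_correlation(eeg):
    """All-pair Pearson correlation matrix of an
    (n_channels, n_samples) EEG array."""
    return np.corrcoef(eeg)

# Synthetic stand-in for 3 minutes of 8-channel data at 256 Hz
rng = np.random.default_rng(0)
data = rng.standard_normal((len(CHANNELS), 3 * 60 * 256))
corr = all_pair_cross_correlation(data)
```

The resulting 8x8 symmetric matrix, with one row and column per electrode, is what the cross-correlation maps in Figures 4 to 9 visualize.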
4.1 Analysis of the Characteristics of Brainwave According to Temperature Changes
For the analysis of the characteristics of brainwave and the correlation of cerebral activities in the stabilized state according to the temperature changes, a cross-correlation analysis of the brainwave data collected from the eight electrodes at Fp1, Fp2, T3, T4, C3, C4, O1 and O2 was carried out. The data averaged over the 40 subjects is shown in Figure 4.
(a) 28℃ Brain Map; (b) 38℃ Brain Map; (c) 50℃ Brain Map; (d) 28℃ Cross-Correlation Map; (e) 38℃ Cross-Correlation Map; (f) 50℃ Cross-Correlation Map
Fig. 4. Correlation among Different Channels in the Stabilized State according to the Temperature Changes
Analyzing the characteristics of the brain signals in the stabilized state according to the temperature changes, the heartbeat and blood pressure of the subjects changed as they showed dyspnea after accomplishing the simulation task (holding their breath), as shown in Figure 4. As the room temperature increases, brainwave is activated over the whole brain. Moreover, when heartbeat and blood pressure increase at a higher room temperature, the changes of brainwave state occur between the left hemisphere and the occipital lobe; as a result, the correlation coefficients among the channels T3, T4, C3, C4, O1 and O2 increase.
4.2 Analysis of the Characteristics of Brainwave According to Dyspnea Caused by Temperature Changes
For the analysis of the characteristics of brainwave and the correlation of cerebral activities when the breathing state changes according to the temperature changes, a cross-correlation analysis of the brainwave data collected from the eight electrodes at Fp1, Fp2, T3, T4, C3, C4, O1 and O2 was carried out. The data averaged over the 40 subjects is shown in Figure 5.
(a) 28℃ Brain Map; (b) 38℃ Brain Map; (c) 50℃ Brain Map; (d) 28℃ Cross-Correlation Map; (e) 38℃ Cross-Correlation Map; (f) 50℃ Cross-Correlation Map
Fig. 5. Correlation among Different Channels in the State of Dyspnea according to the Temperature Changes
Analyzing the characteristics of the brain signals in the state of dyspnea according to the temperature changes, the biggest change among 28 ℃, 38 ℃ and 50 ℃ is shown at 38 ℃. Active changes of the brainwave state occur over the whole brain at 38 ℃. The correlation between the parietal lobe and the occipital lobe (the correlation among T3, T4, C3, C4, O1 and O2) also increases the most at 38 ℃, where the changes of brainwave state occur most actively.
4.3 Analysis of the Brainwave Factors Causing Dyspnea for Each Frequency Band
According to the results in Sections 4.1 and 4.2, the room temperature changes influence the changes of the brainwave state as well as the changes of heartbeat and blood pressure. In order to analyze the overall changes of brainwave state and the heartbeat and blood pressure that drive them, the collected brainwave data was classified into frequency bands and the correlation among the channels was analyzed for each band; a detailed analysis of the characteristics was carried out on this basis.
4.3.1 Analysis of the Influence of the Delta-Wave Band (0.1 Hz–4 Hz) on Dyspnea
Regarding the changes of heartbeat and blood pressure in the delta-wave band, Figure 6 shows that the changes of the inter-channel correlation coefficients are similar to the changes of the overall correlation of the brainwave signals shown in Figure 5. The response in the delta-wave band shows the biggest change at 38 ℃ among 28 ℃, 38 ℃ and 50 ℃ as the heartbeat and blood pressure of the subjects change after the simulation task (holding the breath). Active changes of the brainwave state occur over the whole brain at 38 ℃, and the correlation between the parietal lobe and the occipital lobe (among T3, T4, C3, C4, O1 and O2) increases the most at 38 ℃, where the brainwave state changes most actively. This result is similar to the analysis of the inter-channel correlation according to temperature changes when the symptoms of dyspnea occur.
(a) 28℃ Brain Map; (b) 38℃ Brain Map; (c) 50℃ Brain Map; (d) 28℃ Cross-Correlation Map; (e) 38℃ Cross-Correlation Map; (f) 50℃ Cross-Correlation Map
Fig. 6. Correlation among the Channels in the Delta-Wave Band according to the Temperature Changes and Dyspnea
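The per-band decomposition used throughout Section 4.3 can be sketched as follows. The paper only states that a band-pass filter was used, so this illustration substitutes a simple zero-phase FFT masking filter; the band edges and the 256 Hz sampling rate come from the paper, while the filtering method itself is an assumption.

```python
import numpy as np

FS = 256  # sampling rate used in the paper (Hz)
# band edges from Sections 4.3.1-4.3.4
BANDS = {"delta": (0.1, 4.0), "theta": (4.0, 8.0),
         "alpha": (8.0, 13.0), "beta": (13.0, 30.0)}

def split_into_bands(signal, fs=FS):
    """Split one EEG channel into the four frequency bands by
    zero-phase FFT masking (a stand-in for a band-pass filter)."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    out = {}
    for name, (lo, hi) in BANDS.items():
        mask = (freqs >= lo) & (freqs < hi)
        out[name] = np.fft.irfft(spectrum * mask, n=len(signal))
    return out

# A 10 Hz test tone should land almost entirely in the alpha band
t = np.arange(FS * 10) / FS
bands = split_into_bands(np.sin(2 * np.pi * 10 * t))
```

The inter-channel correlation analysis is then repeated on each band's output, which is how the per-band maps of Figures 6 to 9 are obtained.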
4.3.2 Analysis of the Influence of the Theta-Wave Band (4 Hz–8 Hz) on Dyspnea
As shown in Figure 7, in the theta-wave band the changes of the inter-channel correlation coefficients, used to estimate the overall changes of brainwave state when heartbeat and blood pressure change with temperature, differ greatly from the changes of the correlation of the overall brainwave signals described in Figure 4 and Figure 5. Comparing panels (a), (b) and (c) of Figure 7 with panels (a), (b) and (c) of Figures 4, 5 and 6, it is clear that the changes of brainwave state due to temperature changes are small, while the brainwave activity between the frontal lobe and the temporal lobe remains active. Regarding the inter-channel correlation coefficients, the changes of brainwave state with the surrounding temperature are likewise small.
(a) 28℃ Brain Map; (b) 38℃ Brain Map; (c) 50℃ Brain Map; (d) 28℃ Cross-Correlation Map; (e) 38℃ Cross-Correlation Map; (f) 50℃ Cross-Correlation Map
Fig. 7. Correlation among Different Channels in the Theta-Wave Band according to the Temperature Changes and Dyspnea
4.3.3 Analysis of the Influence of the Alpha-Wave Band (8 Hz–13 Hz) on Dyspnea
Comparing panels (a), (b) and (c) of Figure 8 with panels (a), (b) and (c) of Figure 4, the overall inter-channel correlation coefficients are clearly smaller than in the stabilized state when temperature, heartbeat and blood pressure change. However, the comparison with Figure 4 also reveals common areas whose responses become more activated as the room temperature increases. Moreover, when heartbeat and blood pressure change more at a higher room temperature, the brainwave state changes between the left hemisphere and the occipital lobe; as a result, the inter-channel correlation coefficients at T3, T4, C3, C4, O1 and O2 increase.
(a) 28℃ Brain Map; (b) 38℃ Brain Map; (c) 50℃ Brain Map; (d) 28℃ Cross-Correlation Map; (e) 38℃ Cross-Correlation Map; (f) 50℃ Cross-Correlation Map
Fig. 8. Correlation among Different Channels in the Alpha-Wave Band according to the Temperature Changes and Dyspnea
4.3.4 Analysis of the Influence of the Beta-Wave Band (13 Hz–30 Hz) on Dyspnea
When heartbeat and blood pressure change, Figure 9 shows that in the beta-wave band the changes of the inter-channel correlation coefficients differ greatly from the changes of the correlation of the entire brainwave signals shown in Figure 4 and Figure 5.
(a) 28℃ Brain Map; (b) 38℃ Brain Map; (c) 50℃ Brain Map; (d) 28℃ Cross-Correlation Map; (e) 38℃ Cross-Correlation Map; (f) 50℃ Cross-Correlation Map
Fig. 9. Correlation among Different Channels in the Beta-Wave Band according to the Temperature Changes and Dyspnea
Comparing panels (a), (b) and (c) of Figure 9 with panels (a), (b) and (c) of Figures 4, 5 and 6, the changes of the overall brainwave state are small when temperature, heartbeat and blood pressure change. In Figure 5, the inter-channel correlation coefficients change rapidly even for a small temperature change (from 28 ℃ to 38 ℃), whereas in the beta-wave band the inter-channel correlation coefficients hardly change: even under rapid changes of the room temperature, the changes remain small. Furthermore, even when the overall brainwave state is expressed through the changes of the inter-channel correlation coefficients at T3, T4, C3, C4, O1 and O2, the beta-wave band shows only small changes under a rapid change of the surrounding temperature. Taking all of the above into account, it can be concluded that, when the inter-channel correlation coefficients are used to measure the changes of the brainwave state, the correlation coefficients in the beta-wave band are not greatly influenced by changes of the surrounding temperature and environment.
5 Conclusion and Direction for the Follow-Up Research
In this paper, the changes in the characteristics of brainwave according to changes of the surrounding environment, and the properties of the wave according to heartbeat and blood pressure, have been analyzed using the cross-correlation method, considering the correlation of the changes of brainwave state. With ensemble averaging over moving windows within the measurement period, it has been possible to reduce the analytic errors caused by the deviation of individual bio signals and by the introduction of noise in an environment identical for all subjects. It has also been found that the inter-channel correlation analysis can be carried out for practical application by means of a quantitative analysis of each frequency band of the brainwave signals; this method is widely used in the clinical and neurofeedback fields. The analytic results for each frequency band according to the temperature changes and dyspnea are summarized in Table 1.
Table 1. Analysis of the Characteristics of Brainwave according to the Temperature Changes and Dyspnea
Room Temperature (28 ℃)
- Stabilized state: a relatively constant state of activation of the inter-channel correlation coefficients is maintained.
- Dyspnea (holding breath): a relatively constant state of activation of the inter-channel correlation coefficients is maintained, with higher correlation coefficients than in the stabilized state.
Medium Temperature (38 ℃)
- Stabilized state: the state of activation among the channels T3, T4, C3, C4, O1 and O2 increases; the changes of the inter-channel correlation in the theta- and beta-wave bands are not influenced by the temperature changes.
- Dyspnea (holding breath): as the room temperature increases, cerebral activation and inter-channel brainwave activity increase for the same external stimulation; the changes of the inter-channel correlation in the beta- and theta-wave bands are not greatly influenced by the temperature changes.
High Temperature (50 ℃)
- Stabilized state: the state of activation among the channels T3, T4, C3, C4, O1 and O2 increases; the inter-channel correlation in the alpha- and delta-wave bands is influenced by the temperature changes to a certain degree, while that in the theta- and beta-wave bands is barely influenced.
- Dyspnea (holding breath): as the room temperature increases, cerebral activation and inter-channel brainwave activity increase for the same external stimulation; the inter-channel correlation in the alpha- and delta-wave bands is influenced by the temperature changes to a certain degree, while that in the theta- and beta-wave bands is barely influenced.
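The ensemble averaging over a moving window used to suppress noise in this analysis can be sketched as follows. The paper does not specify window or step lengths, so the 2-second window and 0.5-second step here are illustrative assumptions.

```python
import numpy as np

def windowed_mean_correlation(eeg, fs=256, win_s=2.0, step_s=0.5):
    """Ensemble average of inter-channel correlation matrices
    computed over a moving window; averaging many windows
    suppresses transient noise in any single window."""
    win, step = int(win_s * fs), int(step_s * fs)
    mats = [np.corrcoef(eeg[:, s:s + win])
            for s in range(0, eeg.shape[1] - win + 1, step)]
    return np.mean(mats, axis=0)

# Synthetic 8-channel, 10-second recording at 256 Hz
rng = np.random.default_rng(0)
avg_corr = windowed_mean_correlation(rng.standard_normal((8, 256 * 10)))
```

The averaged 8x8 matrix plays the same role as a single full-record correlation matrix, but individual noise bursts contribute to only a few of the averaged windows.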
This research has examined the characteristics of EEG caused by dyspnea and the influence of temperature changes. For practical application, an additional study will be necessary that combines all the variable environmental factors other than temperature. Acknowledgments. This work was supported by research grants from the Catholic University of Daegu in 2010.
References
1. Hsiu, H., Hsu, W.-C., Hsu, C.L., Huang, S.-M., Hsu, T.-L., Wang, Y.-Y.L.: Spectral analysis on the microcirculatory laser Doppler signal of the acupuncture effect. In: 30th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBS 2008), pp. 2916–2919 (August 2008)
2. Li, N., Wang, J., Deng, B., Dong, F.: An analysis of EEG when acupuncture with wavelet entropy. In: 30th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBS 2008), pp. 1108–1111 (August 2008)
3. He, W.-X., Yan, X.-G., Chen, X.-P., Liu, H.: Nonlinear Feature Extraction of Sleeping EEG Signals. In: 27th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBS 2005), pp. 4614–4617 (September 2005)
4. Murata, T., Akutagawa, M., Kaji, Y., Shichijou, F.: EEG Analysis Using Moving Average-type Neural Network. In: 30th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBS 2008), pp. 169–172 (August 2008)
5. Kaji, Y., Akutagawa, M., Shichijo, F., Nagashino, H., Kinouchi, Y., Nagahiro, S.: EEG analysis using neural networks to detect change of brain conditions during operations. In: IFMBE Proceedings, pp. 1079–1082 (April 2006)
6. Sun, Y., Ye, N., Xu, X.: EEG Analysis of Alcoholics and Controls Based on Feature Extraction. In: The 8th International Conference on Signal Processing (2006)
A Study on MAC Address Spoofing Attack Detection Structure in Wireless Sensor Network Environment
Sungmo Jung 1, Jong Hyun Kim 2, and Seoksoo Kim 1,*
1 Department of Multimedia, Hannam University, Daejeon-city, Korea
2 Electronics and Telecommunications Research Institute, Daejeon-city, Korea
[email protected], [email protected], [email protected]
Abstract. Wireless sensor networks apply authentication by registering and managing user IP and MAC addresses. However, the existing methods are vulnerable to MAC address spoofing, in which a malicious user changes a client's MAC address into his own, calling for a new detection method. Therefore, this paper provides a method of detecting MAC address spoofing attacks in real time by collecting wireless traffic data through an AirSensor and AP and by using a MAC Address Index table in the TMS.
Keywords: Wireless Sensor Network, MAC Address Spoofing, Spoofing Attack Detection.
1 Introduction
A wired network can be used only when a user receives an IP address and a physical port connection from a network administrator. The network administrator applies various authentication and security systems, using the NAC (Network Admission Control) system [1], in order to register and manage user IP and MAC (Media Access Control) addresses [2]. A wired network environment provides DHCP (Dynamic Host Configuration Protocol) [3] based IP assignment for user convenience and therefore employs MAC address registration/authentication systems to detect malicious users effectively. However, the existing wired network environment, using MAC address registration/authentication methods based on the NAC system, is vulnerable to MAC spoofing attacks [4]; in particular, a MAC address can easily be changed in most client systems. The number of people using wireless networks is increasing sharply, since they require no physical port connection. But such an environment allows malicious users to access the network easily, posing more serious threats than wired networks. Although much research has been done on detecting MAC address spoofing attacks in wireless networks, the existing methods, including PTD (Personal Trusted Device) based wireless network management [5] and wireless MAC address spoofing detection [6], are still not sufficient to discover such attacks in advance. Therefore, this paper provides a method of detecting MAC address spoofing attacks in real time by collecting wireless traffic data through an AirSensor and AP and by using a MAC Address Index table in the TMS (Threat Management System). *
Corresponding author.
T.-h. Kim et al. (Eds.): ACN 2011, CCIS 199, pp. 31–35, 2011. © Springer-Verlag Berlin Heidelberg 2011
2 Related Research
2.1 MAC Spoofing Attack Method
There is a unique address in the NIC (Network Interface Card), called the hardware or MAC address. A MAC address is composed of 48 bits; its vendor prefix is assigned to network card manufacturers by the IEEE [7] and cannot be duplicated. MAC address spoofing attacks avoid NIC-based authentication by changing MAC addresses. The following is the general scenario of a MAC address spoofing attack.
① A malicious user scans MAC addresses of surrounding clients using wireless networks ② He changes one of the scanned MAC addresses into his MAC address ③ He blocks a client’s wireless network connection through De-Auth[8] attacks ④ He attempts communication with AP using the fake MAC address ⑤ He uses the fake MAC address in order to receive internal network authentication ⑥ He produces SoftAP[9] using other wireless network cards ⑦ A client attempts communication through SoftAP and the malicious user attempts sniffing of the client’s data
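To illustrate the 48-bit MAC address structure mentioned above, the following hypothetical helper extracts the 24-bit vendor prefix (OUI) that the IEEE assigns to manufacturers; the function name and the colon-separated input format are illustrative assumptions.

```python
def oui(mac: str) -> str:
    """Return the 24-bit OUI (vendor prefix assigned by the IEEE)
    of a 48-bit MAC address given as 'aa:bb:cc:dd:ee:ff'."""
    parts = mac.lower().split(":")
    if len(parts) != 6 or any(len(p) != 2 for p in parts):
        raise ValueError("expected a colon-separated 48-bit MAC address")
    # the first three octets are the vendor prefix
    return ":".join(parts[:3])

print(oui("00:1A:2B:3C:4D:5E"))  # → 00:1a:2b
```

Because the remaining 24 bits are chosen by the vendor per card, a spoofer only has to rewrite this software-visible value to impersonate another NIC, which is exactly what the scenario above exploits.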
Network authentication based on MAC addresses is most often used in private or internal networks. However, the existing wireless network environment is vulnerable to MAC address spoofing attacks, calling for a new detection method.
2.2 WSLAN Vulnerability Diagnostic Tool
There are quite a large number of methods for attacking wireless networks, but not a sufficient number of detection methods. Wireless vulnerability diagnostic tools therefore use systems that launch virtual wireless network attacks in order to find out which part of the network is particularly vulnerable. This paper also uses such a system to diagnose vulnerability and provide solutions. Diagnostic tools can discover various vulnerabilities to EAP START DoS, EAP FAILURE DoS, EAP LogOFF DoS, Fake AP, and so on [11]. The following figure shows how the tools collect scan data for surrounding APs and carry out virtual attacks.
Fig. 1. Structure of WSLAN Vulnerability Diagnostic Tool
2.3 WSLAN MAC Spoofing Detection Method
Previous studies [12] have mainly focused on the detection of MAC address spoofing. They use MAC address spoofing attack tools and analyze MAC addresses or sequence numbers through packet dumps; the extracted attack patterns are then compared with rule-based attack detection modules in order to discover MAC address spoofing. In particular, when such a tool is used, the sequence number increases by 1, which indicates MAC address spoofing in the network.
Fig. 2. MAC Spoofing Detection Based on the Increase of the Sequence Number Value
In this case, however, real-time detection is impossible if the malicious user exploits a client's MAC address by sniffing traffic through his own wireless NIC. Therefore, this paper aims to complement the existing studies by detecting changes in MAC addresses in real time.
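A minimal sketch of the rule-based sequence-number check described in this section, in Python. The paper does not give the detection rule in code, so the gap threshold and the modulo-4096 wraparound handling of the 802.11 sequence counter are assumptions for illustration.

```python
# last 802.11 sequence number seen per source MAC
last_seq = {}

def check_frame(src_mac, seq, modulo=4096):
    """Return True if this frame's sequence number is anomalous for
    src_mac (duplicate or backward jump), which suggests a second
    station is transmitting with a spoofed copy of the MAC."""
    anomalous = False
    if src_mac in last_seq:
        gap = (seq - last_seq[src_mac]) % modulo
        # a well-behaved NIC increments by a small positive step;
        # gap 0 means a duplicate, a large gap means the counter
        # moved backwards (two interleaved senders)
        anomalous = gap == 0 or gap > 3
    last_seq[src_mac] = seq
    return anomalous

assert check_frame("00:11:22:33:44:55", 100) is False  # first sighting
assert check_frame("00:11:22:33:44:55", 101) is False  # normal +1 step
assert check_frame("00:11:22:33:44:55", 40) is True    # backward jump
```

As the section notes, this rule alone fails when the attacker sniffs the victim's current sequence number and continues it, which motivates the index-table approach of Section 3.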
3 MAC Address Spoofing Attack Detection Structure
We need the following technologies in order to detect and prevent MAC address spoofing.
① Collection of wireless network traffic data at the AP and transmission of the data to the TMS
② Continuous updates of MAC addresses through AirSensor-based wireless traffic capture and analysis modules, which can reduce detection errors
③ A TMS that receives the traffic and client data from the AirSensor and AP, detecting/preventing MAC address spoofing attacks after analyzing the MAC/IP addresses and time
S. Jung, J.H. Kim, and S. Kim
This paper provides a solution system composed of the AP, the AirSensor, and the TMS module. The TMS collects wireless traffic and client data from the AirSensor and the AP in order to detect/prevent MAC address spoofing, establishing a safe wireless network environment.

Table 1. Module Categories
AP: Packet Queue Save using IPTABLES; Packet Analysis using IPQ Library; Send Packet Information; Receive Protection Signal; Packet Drop or Accept
AirSensor: Packet Detection using Channel Hopping; Packet Categories Module; Packet Attack Detection Module; Packet Information Transmission Module
TMS: Receive Packet Information; Display Packet Information; Detect MAC Spoofing; Send Protection Signal
Fig. 3. Structure of Suggested System
AP transmits packet data of a wireless client to TMS, which in turn saves the data and creates a MAC Address Index. The table contains the client’s MAC address, IP address, AP address, and time data. Here, AirSensor also collects client data through channel hopping, which is sent back to TMS. TMS detects MAC address spoofing
using the table updated in real time and sends the result of the analysis to the AP. According to the analysis result, the AP executes a block module for the packets. Figure 3 above shows the structure of the system.
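A minimal sketch of the TMS-side check just described: the MAC Address Index can be kept as a dictionary keyed by MAC address. The conflict window and names are assumptions, not values from the paper.

```python
# Illustrative sketch of the TMS MAC Address Index: each client MAC maps
# to the (IP, AP, timestamp) last reported by the AP or AirSensor. A new
# report that reuses a known MAC with a different IP shortly after the
# previous report is flagged as possible spoofing.

CONFLICT_WINDOW = 10.0  # seconds (assumed threshold)

mac_index = {}  # MAC -> (ip, ap, timestamp)

def report(mac, ip, ap, ts):
    """Update the index; return True if the report conflicts with a recent entry."""
    entry = mac_index.get(mac)
    mac_index[mac] = (ip, ap, ts)
    if entry is None:
        return False
    old_ip, _old_ap, old_ts = entry
    return ip != old_ip and (ts - old_ts) < CONFLICT_WINDOW
```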
4 Conclusion
To establish a safe wireless network, this paper provides a system for detecting MAC address spoofing attacks by collecting wireless traffic data through the AirSensor/AP and sending the information to the TMS. This solution can detect/block MAC address spoofing in real time as well as protect private/internal networks. As more advanced wireless network technologies such as 802.11n or 802.11i are commercialized, this solution can be fully applied with follow-up research. Acknowledgement. This paper has been supported by the Software R&D program of KEIT [2010-10035257, Development of a Global Collaborative Integrated Security Control System].
References
1. Willens, S.M.: Network access control system and process. Google Patents (1999)
2. Ye, W., Heidemann, J., Estrin, D.: Medium access control with coordinated adaptive sleeping for wireless sensor networks. IEEE/ACM Transactions on Networking 12(3), 493–506 (2004)
3. Alexander, S., Droms, R., et al.: DHCP Options and BOOTP Vendor Extensions. Citeseer (1997)
4. Wright, J.: Detecting wireless LAN MAC address spoofing. White Paper (2003)
5. Virendra, M., Upadhyaya, S.: Securing information through trust management in wireless networks. In: The Workshop on Secure Knowledge Management, pp. 201–206 (2004)
6. Guo, F., Chiueh, T.: Sequence number-based MAC address spoof detection. In: Recent Advances in Intrusion Detection, pp. 309–329 (2006)
7. IEEE Standard for Local and Metropolitan Area Networks. IEEE Std 802 (2001)
8. Bellardo, J., Savage, S.: Disassociation and De-auth attack. In: USENIX Security Symposium (2003)
9. Shoobridge, R.A.: Wireless access point software system. Google Patents (2000)
10. Sinha, A., Darrow, N.J.: Systems and methods for wireless vulnerability analysis. Google Patents (2009)
11. Liu, C., Yu, J.T.: An analysis of DoS attacks on wireless LAN. In: Proc. 6th IASTED International Multi-Conference on Wireless and Optical Communications (2006)
12. Vigna, G., Gwalani, S., Srinivasan, K., Belding-Royer, E.M., Kemmerer, R.A.: An intrusion detection tool for AODV-based ad hoc wireless networks. IEEE Computer Society, Los Alamitos (2004)
Mapping Based on Three Cameras for 3D Face Modeling Jae-gu Song and Seoksoo Kim* Dept. of Multimedia, Hannam Univ., 133 Ojeong-dong, Daedeok-gu, Daejeon-city, Korea
[email protected],
[email protected]
Abstract. In this research, we use three cameras to produce a 3D face model based on 2D images. The suggested method extracts a facial region using color values and calculates the face structure by applying the AAM algorithm. The three processed images are combined to produce a final 3D image. Keywords: Mapping, 3D modeling, Face modeling.
1 Introduction
3D face modeling is a computer-vision topic widely applied to face recognition, games, VFX, and so on, because it is more elaborate and realistic than 2D modeling. And yet, it is still not possible to produce a highly realistic face model, which keeps the technology limited to VFX or animation. So far, 3D face models have been produced by measuring coordinates or from 2D images [1][2]. However, most studies require very expensive equipment and dozens of cameras, restricting practical application. In this research, we used only three cameras in order to photograph a face in a mobile environment and extracted features from the images to be applied to 3D modeling. In Chapter 2, we discuss related studies and, in Chapter 3, describe how to produce a 3D model using only three cameras. The physical and software environment applied to the system is also explained. Chapter 4 provides a conclusion and prospects.
2 Related Works

First of all, face modeling requires extraction of features. To that end, we consider the results of the major studies.

2.1 AAM (Active Appearance Models)
AAM is an algorithm that establishes a statistical model of the shapes/appearances of an object and matches the result with new images in order to find the object [3]. AAM has the following characteristics.
▪ It uses rules related to the locations/sizes of facial components.
▪ It is sensitive to changes in the gradient, angle, or expression of a face.
* Corresponding author.
T.-h. Kim et al. (Eds.): ACN 2011, CCIS 199, pp. 36–41, 2011. © Springer-Verlag Berlin Heidelberg 2011
▪ Its rules can be extended to handle changes, or an improvement can be made through normalization of conditions.
▪ Each component of a face is related to a specific GKS distance in terms of location.
The algorithm establishes a statistical model of an object class in advance and adjusts that model to minimize the difference between the appearance of the object class and that of an object in a new image, thereby searching for the target object in the new image. AAM is formulated as follows. Assuming that a shape s is described by n feature points s = [x1, y1, x2, y2, ..., xn, yn] in the image, a shape is represented in AAM as a mean shape s0 plus a linear combination of shape bases {si}:

s(p) = s0 + Σi pi si    (1)

where p = [p1, p2, ..., pn] are the shape parameters. Usually, the mean shape s0 and the shape bases {si} are learned by applying PCA to the training shapes. To consider global transformation of a shape, the shape basis set {si} is expanded to include four additional bases representing global translation, scaling, and rotation. Figure 1 shows some examples of the shape bases [4][5].
Fig. 1. The linear shape model of an independent AAM. The model consists of a triangulated base mesh s0 plus a linear combination of n shape vectors si. The base mesh is shown on the left, and to the right are the first three shape vectors s1, s2, and s3 overlaid on the base mesh[4].
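Numerically, Eq. (1) is just a weighted sum of basis vectors added to the mean shape. The toy numbers below are illustrative, not learned from training shapes:

```python
# Sketch of Eq. (1): a shape is the mean shape s0 plus a linear
# combination of shape basis vectors si weighted by parameters pi.

s0 = [0.0, 0.0, 1.0, 0.0, 0.5, 1.0]          # mean shape: 3 points, (x, y) interleaved
bases = [
    [0.1, 0.0, 0.1, 0.0, 0.1, 0.0],          # s1: uniform shift in x
    [0.0, 0.2, 0.0, 0.2, 0.0, 0.2],          # s2: uniform shift in y
]

def shape(p):
    """s(p) = s0 + sum_i p_i * s_i, computed element-wise."""
    out = list(s0)
    for p_i, s_i in zip(p, bases):
        out = [o + p_i * c for o, c in zip(out, s_i)]
    return out
```

With p = [0, 0] this returns the mean shape itself; in a real AAM, s0 and the si come from PCA on annotated training shapes.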
2.2 IC-LK (Inverse Compositional Lucas-Kanade)
The IC-LK algorithm is an image matching algorithm that improves the speed of the LK algorithm [6]. It has the same performance as the original LK algorithm and can be applied to the fitting step of the 2D AAM model. Instead of using an entire face, the LK algorithm carries out tracking through a square surrounding the features, providing an improved speed compared to AAM.
3 Mapping Based on 3 Cameras
In this research, we used a method of extracting 2D images, photographing the object from the front as well as at angles of 45 and 90 degrees. Figure 2 shows how a 2D image is converted into a 3D image (fitting).
Fig. 2. 3D Mapping Using Cameras
For the purpose of convenience, we used three HD-class web cameras. Figure 3 shows the physical structure of this study.
Fig. 3. Physical Structure of Camera-based 3D Mapping Research
① Input of 2D images using three cameras
Figure 4 shows the three images taken from the front of the object and at angles of 45 and 90 degrees using the three web cameras. The image below is a picture box for adaptation effects, and the gray effects show normal operation.
Fig. 4. Recognition of Images from Web Cameras
② Detection of a facial region (YCbCr color model)
Fig. 5. Application of YCbCr Color Model
Figure 5 shows how to separate a face region using the YCbCr color model. The YCbCr color model was most effective in separating face colors, for it can express a large number of colors with a small quantity of data. The following formulas were applied:

Cb = (B − Y) / 1.7772 + 0.5    (2)

Cr = (R − Y) / 1.402 + 0.5    (3)

The chrominance values in YCbCr are always in the range of 0 to 1.

③ Corner detection
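Assuming the reconstructed chrominance formulas Cb = (B − Y)/1.7772 + 0.5 and Cr = (R − Y)/1.402 + 0.5 with values normalized to [0, 1], the separation step can be sketched as follows; the skin-tone thresholds are commonly used approximations, not values from the paper.

```python
# Sketch of the YCbCr-based face-color separation (illustrative).

def rgb_to_ycbcr(r, g, b):
    y = 0.299 * r + 0.587 * g + 0.114 * b      # ITU-R BT.601 luma
    cb = (b - y) / 1.7772 + 0.5                # Eq. (2) as reconstructed
    cr = (r - y) / 1.402 + 0.5                 # Eq. (3) as reconstructed
    return y, cb, cr

def looks_like_skin(r, g, b, cb_range=(0.30, 0.50), cr_range=(0.52, 0.68)):
    """Classify a pixel by chrominance only (thresholds are assumptions)."""
    _, cb, cr = rgb_to_ycbcr(r, g, b)
    return cb_range[0] <= cb <= cb_range[1] and cr_range[0] <= cr <= cr_range[1]
```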
Fig. 6. Detection of corners using the Moravec Corners Detector
We used the Moravec Corner Detector in order to detect corners in all regions, including the facial region, and their distances. The points detected in the facial region can be expressed as the vector M:

M = [x1, y1, x2, y2, ..., xn, yn]T    (4)
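The Moravec detector scores a pixel by the minimum self-similarity of its neighborhood under small shifts; corners score high in every direction. A minimal sketch on a grayscale image stored as a 2D list (function and parameter names are ours):

```python
# Moravec corner response (illustrative): for each candidate pixel, take
# the sum of squared differences between a small window and the same
# window shifted in four directions, and keep the minimum. Flat regions
# and edges give a low minimum; corners give a high one.

def moravec_response(img, x, y, w=1):
    shifts = [(1, 0), (0, 1), (1, 1), (1, -1)]
    scores = []
    for dx, dy in shifts:
        ssd = 0.0
        for u in range(-w, w + 1):
            for v in range(-w, w + 1):
                a = img[y + v][x + u]
                b = img[y + v + dy][x + u + dx]
                ssd += (a - b) ** 2
        scores.append(ssd)
    return min(scores)  # high value => corner-like
```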
In Formula (4), (xi, yi) refers to the coordinates of the i-th detected point. Here, we could extract the value of a vector of a facial shape by applying PCA (Principal Component Analysis), which can extract local features [7]. In Formulas (5) and (6), we could obtain statistics on the degree of x and y distribution by applying PCA:

σx² = (1/n) Σi (xi − x̄)²    (5)
σy² = (1/n) Σi (yi − ȳ)²    (6)
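Under the reconstruction of Eqs. (5) and (6) as coordinate variances, the "degree of x and y distribution" of the detected points can be computed directly:

```python
# Variance of the x and y coordinates of detected corner points
# (sketch of Eqs. (5)-(6) as reconstructed above).

def coordinate_variances(points):
    """points: list of (x, y) corner locations; returns (var_x, var_y)."""
    n = len(points)
    mean_x = sum(x for x, _ in points) / n
    mean_y = sum(y for _, y in points) / n
    var_x = sum((x - mean_x) ** 2 for x, _ in points) / n
    var_y = sum((y - mean_y) ** 2 for _, y in points) / n
    return var_x, var_y
```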
④ Extraction of features using the AAM algorithm
Fig. 7. Extraction of features using the AAM algorithm
We extracted the final features by using the AAM algorithm in order to secure the eyes, nose, and mouth of the facial region.

⑤ Combination of the face region extracted from the 2D images
Fig. 8. Combination of the Face Region Extracted from the 2D Images
⑥ Production of a 3D Model
Fig. 9. Production of a 3D Model
4 Conclusion
In this research, we suggested a method of 3D facial modeling using 2D images obtained from three cameras. First, we extracted a facial region from three pictures taken from the front and the sides of the object using the YCbCr color model. Then, we analyzed the x, y coordinates distributed within the facial region through corner detection.
Based on the coordinates, we applied the AAM algorithm to extract features of the eyes, nose, and mouth. Finally, a 3D model was produced from the 2D images based on the location values. In this study, we used 2D images obtained from web cameras; however, compared to previous methods of 3D facial modeling, the suggested method cannot produce equally elaborate images. In order to solve this problem and produce more elaborate 3D face models, we need to apply the IC-LK algorithm, obtain face data, and extract mesh-type values for 3D objects, as in the case of AAM application. Acknowledgement. This paper has been supported by the 2011 Hannam University Research Fund.
References
1. Russ, T., Boehnen, C., Peters, T.: 3D Face Recognition Using 3D Alignment for PCA. In: Proc. of the 2006 IEEE Computer Society Conf. on Computer Vision and Pattern Recognition (CVPR 2006), vol. 2, pp. 1391–1398 (2006)
2. Ansari, A., Abdel-Mottaleb, M.: Automatic facial feature extraction and 3D face modeling using two orthogonal views with application to 3D face recognition. Pattern Recognition 38, 2549–2563 (2005)
3. Cootes, T.F., Edwards, G.J., Taylor, C.J.: Active Appearance Models. IEEE Trans. Pattern Anal. Mach. Intell. 23(6), 681–685 (2001)
4. Matthews, I., Baker, S.: Active Appearance Models Revisited. IJCV 60(2), 135–164 (2004)
5. Zhou, M., Liang, L., Sun, J., Wang, Y.: AAM based face tracking with temporal matching and face segmentation. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2010), pp. 701–708 (2010)
6. Baker, S., Matthews, I.: Lucas-Kanade 20 Years On: A Unifying Framework. IJCV (2004)
7. Russ, T., Boehnen, C., Peters, T.: 3D Face Recognition Using 3D Alignment for PCA. In: IEEE Conf. on Computer Vision and Pattern Recognition, vol. 2, pp. 1391–1398 (2006)
A Social Education Network Based on Location Sensing Information Using Smart-Phones*

Jang-Mook Kang1,** and Sook-Young Choi2

1 Electronic Commerce Research Institute of Dongguk University, 707 Seokjang-dong, Gyeongbuk, South Korea
[email protected]
2 Department of Computer Education, Woosuk University, 490 Samrae-eup, Jeonbuk, South Korea
[email protected]
Abstract. This paper aims to construct a social network among e-learning learners on the basis of the location information of the learner and the instructor obtained using GPS sensors, etc. For this, we suggest a system that supports the construction of a social education network service using the location information of the smart-phone in e-learning. This system provides a mechanism to form a social network service among students who take the same on-line course or have similar interests. Through this system, e-learners can create communities for learning and exchange help among them. The support of a location based social network service in our system would increase interactions among e-learners and improve satisfaction regarding their mobile learning environment. Keywords: Smart-Phone, Social Education, Collaborative Learning.
1 Introduction

The social dimension of learning is also central to the ideas of situated learning and communities of practice as used in the social context of the network era. Social education has always been of great significance to teachers, learners, and others. Learning is a function of the activity, context, and culture in which it occurs, and social interaction is critical to it. Tinto [4] stresses that academic satisfaction is not enough for some students, who suffer from isolation. The intensity and reciprocity of social interaction can, together with other factors, result in such drastic measures as students dropping out of a course. This problem (a low level of social interaction) is more serious in e-learning courses. Carr [1] points out that anecdotal evidence and studies by individual institutions suggest that online course completion is much lower than in F2F (face-to-face) courses. A number of studies have found the retention of e-learners to be lower
This work was supported by Woosuk University(2011). This work was supported by the National Research Foundation of Korea Grant funded by the Korean Government (NRF-2010-330-B00017). ** Corresponding author. T.-h. Kim et al. (Eds.): ACN 2011, CCIS 199, pp. 42–48, 2011. © Springer-Verlag Berlin Heidelberg 2011
than the retention of on-campus learners. Interaction with classmates and the professor is a significant contributor to perceived learning in online courses [2, 3]. Students who report a high level of interaction report a high level of perceived learning in a course. Accordingly, it is important to help e-learners construct a social network among themselves. Through the social network service, students can have F2F meetings as well as online meetings. This social interaction would increase the students' satisfaction with the course, increasing the probability that they will not drop it. Currently, there are increasing demands and interest in location-sensing based services with advancements in smart-phones (which have GPS capability), PDAs, Bluetooth, dedicated GPS equipment, and other devices (such as the iPad, navigation devices, digital cameras, and so on). The provisioning of services using location information is known as location-based services (LBS): with the use of mobile users' geographic information, relevant services are provided to them. There are a variety of LBS applications [6, 7]. Positioning and routing guidance are the most common uses, and location-sensitive services and location-based games are also at work [5]. Mobile social networking is the result of social network services coming to mobile devices, especially smart-phones. This paper proposes a system that supports the construction of a social education network service using the location information of the smart-phone in mobile learning. This system provides a mechanism to form a social network service among students who take the same online course or have similar interests. Through this system, e-learners can create communities for learning and exchange help among themselves. In addition, they can have F2F meetings for collaborative learning by using location-based sensing information.
This support of a location-based social network service in our system would increase interactions among e-learners and improve satisfaction with their mobile learning environment. Consequently, it would raise mobile-learning course completion rates.
2 Construction of Social Network Education Based on Location Information

With the development of the Internet, various forms of content are provided to users. In particular, educational content that was provided offline in the past is now provided online. Thus, learners can utilize various forms of content anywhere, any time by using their smart-phones. As of now, however, such e-learning content is pre-prepared for service to the users. It is therefore difficult to provide real-time interaction services; the existing e-learning methods only provide bulletin boards for sharing lectures or for the needs of study groups. They do not provide various methods for forming a community or a small group among the learners. In particular, this is inefficient compared with face-to-face offline meetings and discussions, and thus there are limitations on collaborative learning.
For this purpose, in this paper, we collect location-based information through the GPS sensor of a smart-phone and then combine it with educational content. That is, the system supports a method that can construct a social education network based on the collected location information of the learners. This method informs a mobile learner of the current location or location-logging information of other learners who attend the same online course, or of the instructor of the relevant course. Based on the location information provided to mobile learners, collaborative learning can be requested of other learners or the instructors. Furthermore, if other learners in the vicinity also attend the course, a learner may ask them to meet at a specific place for discussion and collaborative learning. Fig. 1 explains the basic concepts of this system.
Fig. 1. Construction of location-based social education network
In Fig. 1, Learner A executes the online course application on a smart-phone. Learner A requests educational content from the mobile-learning server. The mobile-learning server generates additional information from the location information of Learner A and provides it together with the educational content. Alternatively, it provides additional information based on the location information included in the educational content. For example, if the content of the course is about ‘2010 Korea-US FTA at the Blue House,’ the information on the locality of the Blue House is provided in addition to the educational information. Learner A provides his location to the mobile-learning server at the same time he requests the educational content from the mobile-learning server. At this time, the mobile-learning server searches for the enrollment and location information of Learner B who is near to Learner A. Alternatively, it searches for information such as the location information received from the smart-phone of the instructor of the educational content, i.e., the professor or the assistant of the course. By using the information of the course or Learner B, Learner A can search for the people who attend the same course or who may assist.
In particular, the location of Learner A is tracked by the GPS sensing capability of the smart-phone. The location information of Learner A is used to help him meet, at a specific place, other people who attend the same course if they are nearby. In this way, online e-learning can be extended to face-to-face meetings; in other words, close social relationships between students, and between students and instructors, can be formed by using smart-phones. Learner A sends a message requesting collaboration to the mobile-learning server for a person in the local area who can assist with the course. The mobile-learning server sends the request message it received from Learner A to Learner B or the instructor, and it realizes a real-time social network by receiving a response to the request message. To make the above services possible, the mobile-learning server performs the following functions. First, it receives the location information and GPS identification information from the learner and the instructor. Second, it saves the location information in the database. Third, it adds the educational information and the additional information to the saved location information. Fourth, when Learner A makes a request to Learner B or the instructor at a specific place, it processes the request. Fifth, the learners (A, B, C, ..., N) and the instructors (A, B, C, ..., N) connect in real time to the mobile-learning server to share location information and obtain social and additional information. Sixth, it helps to construct social education network services by utilizing the location information as meta-tags.
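The nearby-member lookup the server performs can be sketched as a great-circle distance filter over enrolled users; the record layout, function names, and radius below are assumptions for illustration.

```python
import math

# Illustrative server-side lookup: given the requesting learner's GPS fix
# and course, find enrolled learners/instructors within a radius.

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two GPS fixes."""
    r = 6371000.0  # mean Earth radius
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def nearby_members(requester, members, radius_m=100.0):
    """Return IDs of members of the same course within radius_m meters."""
    lat, lon = requester["location"]
    return [m["id"] for m in members
            if m["course"] == requester["course"]
            and haversine_m(lat, lon, *m["location"]) <= radius_m]
```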
Fig. 2. Location-based learning support system
Fig. 2 shows a location-based learning support system which provides the construction of a social education network service using the location information of the smart-phone in mobile-learning. As seen in Fig. 2, the location-based learning support system generally consists of the four modules (Data Sender/Receiver Module, Course Content Management Module, Location-based Information Management Module, and Learner/Instructor Management Module) and three DBs (Course Content DB, Location based Information Content DB, and Learner/Instructor Content DB).
The Data Sender/Receiver Module receives the request for content from the smart-phone of the learner or the instructor and sends the request to the Course Content Management Module. The Course Content Management Module generates the location-based educational content and saves it in the Course Content DB. The Learner/Instructor Management Module collects the user information relevant to each item of educational content; the collected information is saved in the Learner/Instructor Content DB and the Location-based Information Content DB. The Location-based Information Management Module receives the location information of the smart-phone registered in the mobile-learning server through the Data Sender/Receiver Module. The received location information is saved in the Location-based Information Content DB after matching this data with each smart-phone user. The module also collects the location information of the smart-phone when the educational content is provided. Fig. 3 shows the execution algorithm of the Location-based Information Management Module.
/* c_l_i DB : Course Learner/Instructor DB */
Get a Request_ID
If (Request_ID == a Terminal_ID in the c_l_i DB) then
  Get the Terminal_location and Course_ID of Request_ID
EndIf
For each record in the c_l_i DB where (record.Course_ID == Request_ID.Course_ID)
  Compute the Relative_location of the record's Terminal_ID
  If (Relative_location <= θ) then
    Collect the Terminal_ID into {Terminal_IDs}
  EndIf
EndFor
If ({Terminal_IDs} ≠ null) then
  For each Terminal_ID in {Terminal_IDs}
    Send a request for social network to the Terminal_ID
  EndFor
EndIf
If (∃ acceptance signals for social network from the Terminal_IDs) then
  Send signals with detailed location information to the Request_ID
EndIf
Fig. 3. Location-based Information Management Module’s execution algorithm
3 Service Scenario

Smart-phones can obtain more detailed context awareness in the mobile environment through sensors such as GPS, Bluetooth, an ambient light sensor, a proximity sensor, an accelerometer (gravity sensor, gyro sensor, CandleFrame, BobbleMonk, Punch Mach, Neon Board), a digital compass (magnetic field sensor), WLAN, etc. The location information obtained by sensing is used to recommend the learners or the instructors located near the current location of the learner, or to provide a social network service. Fig. 4 displays the learners and the instructors located nearby on Naver Map (http://map.naver.com/). That is, Fig. 4 is the display that searched for,
among the learners or the instructors of ‘N,’ other learners ‘n’ or instructors ‘n’ located within ‘100 m,’ ‘50 m,’ ‘1 km,’ etc. from the location of the current learner. The learner can suggest an educational collaboration to the nearby learners or the instructors displayed on the smart-phone. This makes it possible for the nearby learners and instructors to meet face-to-face for education.
Fig. 4. The search display of learners and instructors with whom education collaboration is possible
4 Conclusions

The distribution of smart-phones provides a mobile education environment. Mobile education is the realization of personalized education utilizing the GPS, ambient light sensors, proximity sensors, gravity sensors, gyro sensors, magnetic field sensors, etc., equipped in a smart-phone. At the same time, since social network services are now widely used, online education is being transformed into social mobile education. This paper proposes a method for combining the location-based information of smart-phones with education content to realize a social education network system. The proposed system helps the location-based information be transformed into information for a social education network. The support of a location-based social network service in our system would increase interactions among e-learners and improve satisfaction regarding their e-learning.
References
1. Carr, S.: As distance education comes of age, the challenge is keeping students. Chronicle of Higher Education (online archives) 46(23) (2000)
2. Fredericksen, E., Pickett, A., Shea, P., Pelz, W., Swan, K.: Student satisfaction and perceived learning with on-line courses: Principles and examples from the SUNY Learning Network. Journal of Asynchronous Learning Networks 4(2), 7–41 (2000)
3. Boyd, D.: The characteristics of successful on-line students. New Horizons in Adult Education 18(2), 31–39 (2004)
4. Tinto, V.: Learning better together: The impact of learning communities on student success in higher education. Journal of Institutional Research 9(1), 48–53 (2000)
5. Li, M., Du, Z.: Dynamic social networking supporting location-based services. In: 2009 International Conference on Intelligent Human-Machine Systems and Cybernetics (2009)
6. Tan, Q., Yu-Lin, K., Huang, Y.: A collaborative mobile virtual system based on location-based dynamic grouping. In: 10th IEEE International Conference on Advanced Learning Technologies (2010)
7. Chen, S.M., Tsai, Y.N.: Interactive location-based game for supporting effective English learning. In: 2009 International Conference on Environmental Science and Information Application Technology (2009)
User-Oriented Pseudo Biometric Image Based One-Time Password Mechanism on Smart Phone

Wonjun Jang, Sikwan Cho, and Hyung-Woo Lee

School of Computer Engineering, Hanshin University, Korea
{jangwjfly,whtlrdhks3355,hyungwoo8299}@hanmail.net
Abstract. User authentication procedures on smart phones should be strengthened, and a more secure system should be implemented to minimize the disclosure of the user's private information. Although an image-based authentication mechanism was introduced recently, replay attacks remain possible against existing one-time-password-based authentication systems. In this paper, we implement a pseudo-biometric-image-based OTP generation mechanism, which applies a transformation function to a biometric image captured by each user, to provide an enhanced secure authentication service on smart phones. Keywords: Pseudo Biometric Image, Authentication, One-time password, Smart phone.
1 Introduction
With a recent rapid increase in smart phone users, various applications are being used. It is possible to use an e-financial service or an Internet banking service through a smart phone, so a correct authentication process for smart phone users should be set up, together with a solution for security. When an Internet banking service is used through a smart phone, a more strongly reinforced user authentication process is needed than in the existing environment; otherwise, serious problems can arise. Therefore, it is necessary to use an approach with better security than the one-time password (OTP) approach applied to the existing financial services, to associate user-related information with the OTP-generating process, and to offer a multi-factor authentication feature. The existing OTP approaches [1,2,3] cannot verify the real owner of an OTP token, so they cannot offer identification/verification features for users when the token is lost. In addition, in the process of issuing an OTP token, a one-time password is created using a previously set identification number, but it does not include information on the real user, so there is no way to identify or detect use by a third party. Moreover, a man-in-the-middle (MITM) attack can be mounted against OTP information. Therefore, a method of offering it more safely should be suggested [4,5]. As a solution, this study intends to offer an authentication feature for the real owner or user of an OTP token. For this, it has implemented a system in which an OTP value is created from pseudo biometric information owned by an individual and used for authentication. Pseudo biometric information (PBI) suggested in this study means quasi-biometric information that a smart phone user himself/herself captures. For
T.-h. Kim et al. (Eds.): ACN 2011, CCIS 199, pp. 49–58, 2011. © Springer-Verlag Berlin Heidelberg 2011
example, it means an image including a part of a user's identity, like an image of the back of his/her hand. This system can solve the privacy issues that can be raised in the process of using a smart phone. Besides, it can offer an OTP-based multi-factor authentication feature using an information instrument like a smart phone. It also has the advantage of coping with replay attacks, since a conversion function is applied to the pseudo biometric information to generate the OTP. This study has analyzed vulnerabilities of the existing smart phone authentication technology and OTP technology, and presents the design and implementation of a B-OTP approach that applies pseudo biometric information to OTP technology. The system has been implemented as a smart phone application and its stability analyzed.
2 One-Time Password on Smart Phone

2.1 Existing One-Time Password Mechanisms
The OTP-generating method works as follows. Once a user enters his/her personal identification number (PIN) or password (PW) into the OTP-generating media, the server transmits a challenge value. A hash function, such as MD5 or SHA-1, is then applied to the challenge value to get a one-way digest value. This digest value is encoded, and the OTP value is generated from it [1,2]. By adding time-based, event-based, and challenge-response mechanisms to this algorithm, different OTPs are generated. The server and the OTP-generating media should always use the same algorithm to generate the OTP, which prevents authentication failures resulting from inconsistent OTPs. The OTP-generating method uses a hash function, which is one-way, so it is difficult to find out the original value (PIN + time, event, or challenge-response) from the encoded OTP; thus user information cannot easily be recovered when an OTP is exposed to a hacker, which is a strength of the method [2]. The OTP authentication process is as follows. Once a user makes the initial registration in order to use OTP, the OTP server delivers a PIN value (a random 4-digit value) to the user. After delivery, the server generates the OTP value using the PIN value, the time value when the user registered, and the current time value, and the OTP-generating media generates the OTP value in the same way. The user then requests authentication from the server using the OTP value produced by the OTP-generating media, and the server authenticates the user if its OTP value and the OTP value entered by the user are the same [4]. Currently in Korea, in order to reinforce financial safety, customers of Internet banking and telebanking are classified into three security grades as shown in the table below, and a different money transfer limit is set for each security grade. Of the three grades, the first security grade is obliged to use OTP for financial transactions.
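The hash-then-truncate scheme described above can be sketched as a time-based OTP. This is an illustration in the spirit of the text, not the exact algorithm of any deployed product; real systems follow standards such as RFC 6238 (TOTP).

```python
import hashlib
import hmac
import struct
import time

# Time-based OTP sketch: hash a shared secret (here, the PIN) together
# with the current 30-second time step, then truncate the digest to a
# short numeric code. Within one step, server and client derive the
# same value from the same inputs.

def totp_like(pin, t=None, step=30, digits=6):
    counter = int((time.time() if t is None else t) // step)
    msg = struct.pack(">Q", counter)
    digest = hmac.new(pin.encode(), msg, hashlib.sha1).digest()
    # dynamic truncation: pick 4 bytes at an offset given by the last nibble
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```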
The method of using an OTP generator and an authentication certificate at the same time corresponds to a high security grade. Two-channel authentication, based on a security card and an authentication certificate, is also available, but recently the OTP-generator method has come into extensive use. An OTP-based operation scenario is as follows: the user generates an OTP value and enters it into his/her PC, and the authentication server verifies it. The user generates
User-Oriented Pseudo Biometric Image Based OTP Mechanism on Smart Phone
OTP, using his/her own PIN and synchronization information, and then enters it into his/her PC. The entered information is delivered to the OTP authentication server of a financial institution, where it is compared and verified against the OTP value generated in the server.

2.2 Image-Based User Authentication
Recently, ConfidentTech has suggested an image-based authentication approach that works as follows: a user sets up image categories (titles) in advance, enters his/her ID, and, in response to a challenge composed of various combinations of images, transmits the access-code values of the images belonging to his/her chosen categories. For example, if a user chooses the three categories dog, car, and ship, 3-digit access codes composed of combinations of 5, X, and V are transmitted in response to the challenge images in the figure below. In the next round, the images and their arrangement are changed, so a combination of F, z, and N is entered as the new access code [6,7,8].
Fig. 1. Image-Based User Authentication
This method is similar to the OTP method. The image categories set by a user serve as that user's secret, and because the images are rearranged and different access codes are assigned each time, it can be regarded as a kind of OTP method. However, if a user chooses 3 image categories and 9 image samples are offered, only 6 of the 504 ordered codes (9*8*7) are valid access codes, a probability of about 1.19%. The method also has the weakness that the three image categories can be discovered by analyzing the similarities and common features of the 9 images presented when an attacker makes several access attempts. Nevertheless, in that a one-time access code is generated from image information, this method shows that the existing general OTP method can be connected with images or other multimedia information. Therefore, this study suggests a user authentication process in which each user generates a one-time password from a random image captured with his/her smart phone and performs user authentication with it.
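The 1.19% figure quoted above follows directly from the counts: 9 images taken 3 at a time in order give 9·8·7 = 504 ordered codes, of which the 3! = 6 orderings of the user's three categories are valid.

```python
from math import perm, factorial

total = perm(9, 3)       # ordered choices of 3 images out of 9 -> 504
valid = factorial(3)     # any ordering of the user's 3 categories -> 6
probability = valid / total
print(total, valid, round(100 * probability, 2))  # 504 6 1.19
```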
W. Jang, S. Cho, and H.-W. Lee

3 Pseudo Biometric One-Time Password Mechanism

3.1 Proposed Pseudo Biometric Image Based OTP Mechanism
In order to ensure safety in financial transactions such as Internet banking, a safer and more efficient approach should be used, and a security feature for privacy should be offered as well. If an existing facial recognition system were used for Internet banking, problems such as recognition errors and abuse of the collected biometric information could arise, so a new approach that solves these problems is needed. To overcome the technical limits of recognition systems based on biometric information (fingerprints or faces) and to cope with recognition errors, the following new approach is proposed. This study uses pseudo biometric data: image information that a user provides with the camera of his/her smart phone. That is, the user is asked to capture his/her face or other information in real time with his/her own device, and the captured information includes pseudo biometric information of that individual. For example, assume that the LENA image is transmitted through a smart phone: the user can obtain an image he/she wants, as shown below, and can apply several effects to it, including blurring. However, if the biometric information of an individual's face is transmitted as-is, a privacy issue can be raised and overall security suffers. A more effective way is therefore for the individual to transmit an image that merely includes his/her own biometric information, as in the image below. Pseudo Biometric Information: information related to a user's body. It is quasi-biometric information used to mitigate privacy concerns, for example image information that includes only a part of the user's body. When such partial biometric image information is used, privacy issues arise less than when the whole image of a face is transmitted.
In this case the identification capability of the biometric information in the image can be reduced, but because only partial biometric information, that is, pseudo biometric information, is used, privacy problems can be avoided. Moreover, even when a full facial image is used, the question of who the real owner is can still be raised. Therefore, this study proposes an approach that generates a B-OTP from pseudo biometric information captured by the user with the camera of his/her smart phone: it extracts pixel information at specific random positions of the image and generates the B-OTP information from it.

3.2 Using Pseudo Biometric Image on OTP Generation
Security technologies such as user authentication based on biometric information are currently attracting attention and are being actively researched, but they are not widely deployed because of their high cost. However, the dissemination and use of smart phones has increased rapidly since 2009, and this environment can solve the cost problem of using biometric information.
Fig. 2. OTP Application Based on User Images (or pseudo biometric information)
Fig. 3. A User-Image-Based OTP System Architecture on Smartphone
Using the camera, microphone, and sensors of a smart phone, it is possible to acquire a user's voice, images, and actions, and information can be exchanged smoothly over mobile communication networks and Wi-Fi, so using biometric information through a smart phone is very efficient in terms of cost and convenience. The existing OTP-generating method based on a dedicated terminal device has many restrictions and inconveniences, so if a software OTP-generating approach is implemented on a mobile phone, the use of OTP will become far more efficient and
widely used from a convenience standpoint. However, this approach has a security problem, so if biometric information is used in the process of generating the OTP, a high-security OTP can be used more efficiently and conveniently. The process of performing B-OTP with a smart phone can be designed as follows. First, the user transmits only his/her identity (ID) value, not the password (PW), to the server, and the server checks whether the ID has been registered; if not, no further progress is made. The server then creates a random challenge value and transmits it to the client. The client generates an OTP using the PW it holds and transmits it to the server. The server likewise compares the OTP value it generates with the OTP value received from the client and, if they match, completes authentication. The OTP authentication architecture developed in this study on a smart phone is shown below; a recent Android phone was used for the design. As shown in the figure below, the client transmits its ID to the server and then generates an OTP using the random challenge value received from the server. Safety of this architecture is enhanced by using biometric information. The detailed process is as follows:

Step 1: An ID is transmitted to the server. A smart phone user transmits his/her ID to the server and requests OTP generation, transmitting the request time T as well.

Step 2: The ID is confirmed in the server. The server searches for the received ID in its database and confirms it.

Step 3: A random challenge value is generated and transmitted. The server applies a conversion function to the ID and time information T received from the client, generates a random number (R) using seed and sequence values and a hash algorithm such as SHA-1 or MD5, and transmits it to the client.

Step 4: A user image is created and transmitted.
Using the camera module built into his/her smart phone, the client generates pseudo biometric image (PBI) information related to his/her biometric information. The PBI captured through the camera module is saved in an image format such as JPG, BMP, or GIF. A conversion function (f), agreed securely between the client and the server, is applied to the acquired pseudo biometric information to convert the image, which is then transmitted to the server and saved there as 'PBI'.

Step 5: An OTP is generated and transmitted in the client. Using the SHA-1 algorithm, the client computes the hash value H(PW | R | PBI | T) and transmits it to the server as the OTP value. The user's own password (PW), the random challenge value (R) received from the server, and the pseudo biometric image (PBI) are turned into a one-time OTP value through a one-way hash function; the time information T is included in order to prevent replay attacks.

Step 6: The OTP is verified in the server. The server computes a hash value using the random number (R) it generated and the pseudo biometric information received from the client, and compares it with the OTP value received from the client. If the two values match, it authenticates the user; otherwise, the authentication process fails.
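Steps 3-6 can be sketched end to end. The '|' concatenation bytes, the stand-in conversion function, and the fixed sample values are illustrative assumptions, since the paper does not fix exact wire formats.

```python
import hashlib
import time

def transform(pbi: bytes, key: bytes) -> bytes:
    """Illustrative stand-in for the conversion function f applied to
    the pseudo biometric image before transmission (see Sect. 3.3)."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(pbi))

def b_otp(pw: str, r: str, pbi: bytes, t: str) -> str:
    # Step 5: OTP = H(PW | R | PBI | T), with SHA-1 as in the text.
    data = pw.encode() + b"|" + r.encode() + b"|" + pbi + b"|" + t.encode()
    return hashlib.sha1(data).hexdigest()

# Client side (Steps 4-5): capture PBI, convert it, compute the OTP.
t = str(int(time.time()))                    # request time T (Step 1)
r = "random-challenge-from-server"           # challenge R (Step 3)
pbi = transform(b"captured-image-bytes", b"f-key")
client_otp = b_otp("user-pw", r, pbi, t)

# Server side (Step 6): recompute from its own copies and compare.
server_otp = b_otp("user-pw", r, pbi, t)
assert client_otp == server_otp              # authentication succeeds
```

Because T enters the hash, a captured OTP replayed later no longer matches the server's recomputation, which is the replay-prevention property claimed in Step 5.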
3.3 Pseudo Biometric OTP Generation on Client
The following is a detailed process of generating the OTP with pseudo biometric information in the client:

Step 1: Enter ID/PW-based private information into the ID/PW fields and press the Login button.

Step 2: Focus the camera on your face or another body part and press the shutter to extract biometric private information.

Step 3: Apply the conversion function (f) to the PW entered in Step 1, the challenge value received from the server, and the pseudo biometric information extracted in Step 2; calculate the biometric hash value and generate the B-OTP value using the time information.

The conversion (transformation) function (f) can be defined as follows. Referring to existing research on biometric salting and non-invertible transformation methods [11,12,13], a conversion is applied to the biometric information that the user enters from the smart phone. That is, a cancelable-biometrics technique [11,12,15] is applied to the pseudo biometric information: the PBI data captured by the user is divided into x × y blocks, and then the conversion function (f) is applied. The conversion function f = H(ID | PW | T) can be created and defined from the user ID/PW and the time information. This defined conversion function is then applied to the pseudo biometric information (PBI) obtained by the user.
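The block-wise conversion in Step 3 can be sketched as follows. The block size, the per-block XOR, and the way the hash is expanded into a keystream are illustrative assumptions; a real deployment would use a non-invertible cancelable-biometrics transform as in [11,12,13].

```python
import hashlib

def conversion_key(user_id: str, pw: str, t: str) -> bytes:
    # f = H(ID | PW | T), as defined in the text (SHA-1 -> 20 bytes).
    return hashlib.sha1(f"{user_id}|{pw}|{t}".encode()).digest()

def convert_pbi(pbi: bytes, key: bytes, block: int = 16) -> bytes:
    """Split the captured PBI data into blocks and transform each block
    with a keystream derived from f (illustrative salting only)."""
    out = bytearray()
    for i in range(0, len(pbi), block):
        # Derive a fresh per-block keystream from f and the block index.
        stream = hashlib.sha1(key + i.to_bytes(4, "big")).digest()
        chunk = pbi[i:i + block]
        out += bytes(b ^ stream[j] for j, b in enumerate(chunk))
    return bytes(out)

key = conversion_key("alice", "user-pw", "1313131313")
converted = convert_pbi(b"pseudo-biometric-image-bytes", key)
```

The server, which stores the same ID/PW and receives T, can derive the identical key f' and undo or repeat the conversion, which is what Sect. 3.4 relies on.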
3.4 Pseudo Biometric OTP Verification on Server
The biometric OTP verification architecture in the server is as follows:

Step 1: Determine whether the ID received from the client is in the ID list in the server's privacy database.

Step 2: If the ID is not valid, the server rejects the ID received from the client. If it is valid, the server generates a 5-byte random seed value and a 10-byte random challenge value as a sequence value.

Step 3: The server generates the B-OTP in the same way as the client. If the ID in Step 1 is valid, the conversion function is applied to the PW of the relevant ID in the privacy database, the 10-byte challenge value generated in Step 2, and the PBI value received from the client, and the B-OTP value is generated from them. The conversion function f generated in the client can be reproduced as an identical function f' in the server: using the stored user ID/PW and the time information T, the server calculates f' = H(ID | PW | T). With f', the server performs the conversion process on the PBI' received from the user and uses the result to generate and verify the OTP value.
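The server-side check can be sketched as follows. The random-value sizes follow the text (5-byte seed, 10-byte challenge), while the in-memory user database and the comparison details are illustrative assumptions.

```python
import hashlib
import hmac
import os

users = {"alice": "user-pw"}  # illustrative stand-in for the privacy database

def handle_request(user_id: str):
    # Steps 1-2: validate the ID, then issue seed and challenge values.
    if user_id not in users:
        return None                  # invalid ID: no further progress
    seed = os.urandom(5)             # 5-byte random seed value
    challenge = os.urandom(10)       # 10-byte random challenge value
    return seed, challenge

def verify(user_id: str, t: str, pbi: bytes, challenge: bytes,
           client_otp: str) -> bool:
    # Step 3: recompute the B-OTP from the server's own copies of PW,
    # the challenge, and the received PBI, then compare with the client's.
    pw = users[user_id]
    data = pw.encode() + b"|" + challenge + b"|" + pbi + b"|" + t.encode()
    server_otp = hashlib.sha1(data).hexdigest()
    return hmac.compare_digest(server_otp, client_otp)
```

Using a constant-time comparison (`hmac.compare_digest`) is a standard precaution when checking authentication values, though the paper itself does not discuss timing attacks.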
4 Implementation and Security Analysis

4.1 Implementation Results
This study used Android OS for the client system and an SQL server for the server system. The Android-based development environment consists of Java JDK 1.6.x,
the Android software development kit (SDK), and Eclipse Galileo SR2. The SQL server is MySQL-based. In the Android environment, a user first receives an account from the server in advance. This study implemented biometric OTP login in the Android environment.
Fig. 4. A Biometric OTP Implementation
4.2 Security Analysis
This study analyzed the safety of the implemented OTP system that uses pseudo biometric information (PBI). OTP tokens used for Internet banking can face man-in-the-middle (MITM) attacks, but with the technique used in this study, the client transmits different pseudo biometric image information to the server each time. The transmitted biometric information can be used for personal authentication without causing a privacy problem, because it is neither a fingerprint nor a whole face. Therefore, security is further reinforced. Multi-factor authentication feature: the proposed technique combines PBI with the OTP method, so it offers better safety than the existing method. The existing OTP method offers no authentication process for the real owner of the OTP token. When an OTP is issued, the PIN is used to generate
the PW, but this does not include information about the real user, so if the token is used by a third party there is no way to detect it. Because PBI is used, however, the identification capability is improved. Smartphone-based owner authentication/identification feature: when an OTP token is lost or stolen, neither the synchronous nor the asynchronous method provides a countermeasure; no identification process for the original owner is offered at all. That is, the existing OTP method merely creates a one-time password and offers no owner authentication/identification for the OTP device and modules. This study therefore applies a conversion function to the pseudo biometric image in order to authenticate the real owner of the OTP token, changes the image, and applies it to the OTP generation process. This approach solves the privacy problem that arises when biometric information is used, and it also supports PBI-based multi-factor authentication. Replay-attack prevention feature: the conversion function f is applied to the PBI in the OTP generation process, so it protects the privacy of the biometric information obtained by the user, and authentication is performed on the basis of a one-time password, so replay attacks are prevented. Because the time information (T) is included when the conversion function (f) is applied, a replay attack by an attacker can be detected: if an attacker generates an OTP using the ID, PW, and time information included in the conversion function (f), it will differ from the information generated by the server, so a one-time password forged by a replay attack can be detected. The proposed mechanism was found to have computation complexity similar to that of existing mechanisms. In this mechanism, a user's image information is used for user authentication and OTP generation on a smartphone.
For the biometric information available in the authentication process, the existing mechanisms [14,15,16] use face or fingerprint information, while the proposed mechanism allows a user's face information to be used for authentication. The new mechanism was designed with the convenience of OTP users and the smartphone environment in mind, and it is applicable to fingerprint information as well, like the existing ones.
5 Conclusions
This study designed and implemented an OTP system that uses pseudo biometric information, in order to strengthen user authentication in smart phone applications and to offer safe user authentication in e-financial systems such as Internet banking. Unlike the existing general OTP method, the implemented system has the user obtain biometric information (a part of his/her body) from the camera module of a smart phone, convert it with a conversion function that does not allow inverse conversion, and then apply it to the OTP generation process.
Acknowledgement. This research was supported by the Basic Science Research Program through the NRF of Korea funded by the MEST (No. 2010-0016882) and also partially supported by the MKE (Ministry of Knowledge Economy), Korea, under the ITRC (Information Technology Research Center) support program supervised by NIPA (National IT Industry Promotion Agency) (NIPA-2011-(C1090-1031-0005)).
References

1. Lamport, L.: Password authentication with insecure communication. Communications of the ACM 24, 770–772 (1981)
2. Haller, N.M.: A one-time password system. RFC 1938 (May 1996)
3. Haller, N.M., Metz, C., Nesser II, P.J., Straw, M.: A one-time password system. RFC 2289 (February 1998), http://www.ietf.org/rfc/rfc2289.txt
4. Jang, W.J., Lee, H.W.: Biometric one-time password generation mechanism and its application on SIP authentication. Journal of the Korea Convergence Society 1(1), 93–100 (2010)
5. Lin, M.H., Chang, C.C.: A secure one-time password authentication scheme with low-computation for mobile communications. ACM SIGOPS Operating Systems Review 38(2), 76–84 (2004)
6. http://confidenttechnologies.com/products/confidentimageshield
7. http://www.darkreading.com/authentication/security/client/showArticle.jhtml?articleID=228200140
8. http://www.marketwire.com/press-release/ConfidentTechnologies-Delivers-Image-Based-MultifactorAuthentication-Strengthen-Passwords-1342854.htm
9. Ang, R., Safavi-Naini, R., McAven, L.: Cancelable Key-Based Fingerprint Templates. In: Boyd, C., González Nieto, J.M. (eds.) ACISP 2005. LNCS, vol. 3574, pp. 242–252. Springer, Heidelberg (2005)
10. Hirata, S., Takahashi, K.: Cancelable Biometrics with Perfect Secrecy for Correlation-Based Matching. In: Tistarelli, M., Nixon, M.S. (eds.) ICB 2009. LNCS, vol. 5558, pp. 868–878. Springer, Heidelberg (2009)
11. Kong, B., et al.: An analysis of Biohashing and its variants. Pattern Recognition 39(7), 1359–1368 (2006)
12. Lee, Y.J., et al.: One-Time Templates for Face Authentication. In: International Conference on Convergence Information Technology, ICCIT 2007, pp. 1818–1823 (2007)
13. Savvides, M., Vijaya Kumar, B.V.K., Khosla, P.K.: Cancelable Biometric Filters for Face Recognition. In: Int. Conf. on Pattern Recognition, vol. 3, pp. 922–925 (2004)
14. Wang, D.-S., Li, J.-P.: A new fingerprint-based remote user authentication scheme using mobile devices. In: International Conference on Apperceiving Computing and Intelligence Analysis, ICACIA 2009, pp. 65–68 (2009)
15. Yoon, E.J., Yoo, K.Y.: A secure chaotic hash-based biometric remote user authentication scheme using mobile devices. In: APWeb/WAI 2007, pp. 612–623 (2007)
16. Khan, M.K., Zhang, J.S., Wang, X.M.: Chaotic hash-based fingerprint biometric remote user authentication scheme on mobile devices. Chaos, Solitons & Fractals 35, 519–524 (2008)
Prototype Implementation of the Direct3D-on-OpenGL Library

Joo-Young Do1, Nakhoon Baek1,*, and Kwan-Hee Yoo2

1 Kyungpook National University, Daegu 702-701, Republic of Korea
2 Chungbuk National University, Cheongju Chungbuk 361-763, Republic of Korea
[email protected]
Abstract. In this paper, we aim to provide Direct3D graphics features on Linux-based systems, which are actively used for various portable game platforms and mobile phone devices. Direct3D is one of the most important middleware layers for game and graphics applications developed on Microsoft Windows operating systems. However, this graphics library is not commonly available on other operating systems. We present a prototype library that provides Direct3D functionality on Linux-based systems on top of the OpenGL graphics library. In typical Linux-based systems, only the X window system and the OpenGL graphics library are available. There is significant demand for porting Direct3D-based applications to these systems, and our Direct3D-on-OpenGL library is a good starting point. Selecting a set of widely used Direct3D data structures and functions, we implemented the selected Direct3D functionality and obtained a prototype implementation. It currently covers 3D transformations, light and material processing, texture mapping, simple animation features, and more. We showed its feasibility by successfully executing a set of Direct3D demonstration programs on our implementation. Keywords: DirectX, OpenGL, Implementation, black-box testing.
1 Introduction
In this paper, we present a prototype implementation of Direct3D graphics functionality on Linux-based systems. Linux-based systems are now used for various portable game platforms and mobile phone devices[1,2,3]. Currently, Direct3D is one of the most important libraries for graphics output, mainly for applications developed on Microsoft Windows operating systems[4]. In contrast, this graphics library is not commonly available on other operating systems, so it is hard to use on them, at least at this time. As a first step toward an easy way of porting Direct3D-based game applications to other operating systems, we designed and implemented a graphics
* Corresponding author.
T.-h. Kim et al. (Eds.): ACN 2011, CCIS 199, pp. 59–65, 2011. © Springer-Verlag Berlin Heidelberg 2011
Fig. 1. The design concept of our system: a Direct3D-based application program issues Direct3D function calls either to the Direct3D graphics library (only for MS-Windows) or to our Direct3D-on-OpenGL wrapper library, which maps them onto the OpenGL graphics library running on the Linux operating system and X window system, so that the graphics output on X windows matches the graphics output on MS-Windows.
library which provides the Direct3D graphics API functions on Linux-based systems. Since this library can be used to port the graphics and game applications originally developed for PC desktops in a straightforward manner, we expect it to be a cost-effective way of porting these programs. As shown in Figure 1, our final goal is to obtain the same graphics output from both the desktop Direct3D application programs and the new implementation of our Direct3D-on-OpenGL architecture, which corresponds to the right side of the figure.
2 Related Works
Typical Linux-based systems, especially embedded Linux-based systems, usually provide OpenGL (or an equivalent) for 3D graphics output. OpenGL is one of the most widely used 3D graphics libraries and is continuously improved to reflect the current state of the art[5]. For example, OpenGL ES was released for handheld devices including mobile phones[6]; it is a good example of re-constructing a general-purpose desktop 3D graphics library for small embedded systems[7]. We take a similar approach, selecting a subset of the original Direct3D functions.
Microsoft Windows operating systems provide the DirectDraw and Direct3D graphics libraries[4]. These have been integrated into the DirectX system and are widely used on Microsoft Windows. As the development environment for Windows systems, Visual Studio consistently provides support for embedded systems, and Visual Studio for Embedded Systems is also available. However, all these facilities are available only for Microsoft Windows, so we need another way of providing Direct3D features on other operating systems such as Linux.
3 Design and Implementation
At the design stage of our Direct3D-on-OpenGL library, we needed to select the set of supported functions among the original Direct3D features. Considering its technical and marketing aspects, we selected Direct3D 9.0 as the major target. From the technical point of view, the vertex shaders and pixel shaders, which depend on hardware GPUs, are excluded from this first prototype implementation; they will be added in later implementations. We therefore selected FVF (flexible vertex format)-based graphics programs as our first target. Thus, our implementation naturally supports the following Direct3D classes:

- IDirect3D9 – the starting point of Direct3D programming; it generates IDirect3DDevice9 objects on demand.
- IDirect3DDevice9 – handles core graphics primitives in Direct3D.
- IDirect3DTexture9 – added to support texture mapping facilities.
- IDirect3DVertexBuffer9 – provides coordinates and related information for graphics primitives.
- D3DXMATRIX – added for matrix processing.
During the implementation of those classes, the following classes were additionally needed:

- _D3DCOLORVALUE – provides color information as (R, G, B, A) quadruples.
- _D3DVECTOR – provides 3D vectors as (x, y, z).
- _D3DRECT, _RGNDATA, _RGNDATAHEADER – specify regions as specific areas on the screen.
- _D3DLIGHT9 – defines light sources for the light-and-material processing.
- _D3DMATERIAL9 – defines material information for the light-and-material processing.
- _D3DPRESENT_PARAMETERS – provides features related to the overall screen updates.
To show the implementation details of these classes, the supported member functions of the IDirect3D9 class are listed as follows:

- IDirect3D9 (void)
- ~IDirect3D9 (void)
- ULONG Release (void)
- HRESULT CreateDevice (UINT adaptor, D3DDEVTYPE deviceType, HWND hFocusWindow, DWORD behaviorFlags, D3DPRESENT_PARAMETERS *pPresentationParameters, IDirect3DDevice9 **ppReturnedDeviceInterface)
Next, our IDirect3DDevice9 class has the following member functions:

- IDirect3DDevice9 (DWORD behaviorFlags, D3DPRESENT_PARAMETERS *pPresentationParameters)
- ~IDirect3DDevice9 (void)
- ULONG Release (void)
- HRESULT CreateVertexBuffer (UINT length, DWORD usage, DWORD fvf, D3DPOOL pool, IDirect3DVertexBuffer9 **ppVertexBuffer, HANDLE *pShareHandle)
- HRESULT BeginScene (void)
- HRESULT EndScene (void)
- HRESULT Clear (DWORD count, CONST D3DRECT *pRects, DWORD flags, D3DCOLOR color, float z, DWORD stencil)
- HRESULT SetStreamSource (UINT streamNumber, IDirect3DVertexBuffer9 *pStreamData, UINT offsetInBytes, UINT stride)
- HRESULT SetFVF (DWORD fvf)
- HRESULT DrawPrimitive (D3DPRIMITIVETYPE primitiveType, UINT startVertex, UINT primitiveCount)
- HRESULT Present (CONST RECT *pSourceRect, CONST RECT *pDestRect, HWND hDestWindowOverride, CONST RGNDATA *pDirtyRegion)
- HRESULT SetTransform (D3DTRANSFORMSTATETYPE state, CONST D3DMATRIX *pMatrix)
- HRESULT SetRenderState (D3DRENDERSTATETYPE state, DWORD value)
- HRESULT SetTexture (DWORD sampler, IDirect3DTexture9 *pTexture)
- HRESULT SetLight (DWORD index, CONST D3DLIGHT9 *pLight)
- HRESULT LightEnable (DWORD lightIndex, BOOL bEnable)
- HRESULT SetMaterial (CONST D3DMATERIAL9 *pMaterial)
After carefully selecting the member functions to be supported, we finally implemented all the selected classes and their selected member functions.
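The overall wrapper pattern, in which each selected Direct3D member function translates its arguments into equivalent OpenGL-side calls, can be illustrated schematically. The sketch below is in Python with a stub backend standing in for OpenGL; the paper's actual library is a C++ implementation, and the specific call mappings shown are illustrative assumptions, not the authors' exact translation table.

```python
class GLBackend:
    """Stub standing in for the OpenGL side; records translated calls."""
    def __init__(self):
        self.calls = []

    def emit(self, name, *args):
        self.calls.append((name, args))

class IDirect3DDevice9:
    """Schematic wrapper: Direct3D entry points forward to OpenGL calls."""
    D3D_OK = 0

    def __init__(self, backend):
        self.gl = backend

    def BeginScene(self):
        self.gl.emit("begin_scene_bookkeeping")   # illustrative mapping
        return self.D3D_OK

    def SetFVF(self, fvf):
        # A real wrapper would decode the FVF bits into vertex-array setup.
        self.gl.emit("glEnableClientState", fvf)
        return self.D3D_OK

    def DrawPrimitive(self, prim_type, start_vertex, primitive_count):
        # Translate the primitive call into an OpenGL-style draw call.
        self.gl.emit("glDrawArrays", prim_type, start_vertex, primitive_count)
        return self.D3D_OK

backend = GLBackend()
device = IDirect3DDevice9(backend)
device.BeginScene()
device.SetFVF(0x042)
device.DrawPrimitive("TRIANGLELIST", 0, 12)
```

The design point is that the application keeps calling the Direct3D API unchanged, while every call is re-expressed as OpenGL state changes and draw calls underneath, which is what makes straightforward porting possible.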
4 Tests and Results
To show the feasibility of our implementation, we ran a set of demonstration programs and compared their results with those from desktop Direct3D applications.
Fig. 2. A sample program for testing light and material functions: (a) from Direct3D; (b) from our implementation

Fig. 3. A sample program for rotating a color pyramid: (a) from Direct3D; (b) from our implementation

Fig. 4. Animating a texture mapped human character: (a) from Direct3D; (b) from our implementation
Our tests were essentially black-box tests, since they did not inspect the internal structures of our implementation. Our test cases are as follows:

- Figure 2 is a sample program used for testing light-and-material processing. As shown in Figure 2, there are no particular differences between the original Direct3D output (left side) and that of our Direct3D-on-OpenGL implementation (right side).
- Figure 3 is another sample program, testing both perspective projection and simple animation processing. Using an infinite loop, it rotates a color pyramid and likewise shows no particular differences.
- Since our implementation supports light-and-material processing, perspective transformation, animation loops, and texture mapping, we can integrate all these features into a single program. Figure 4 is an example of such a program and shows an animation sequence of a game character.

All these programs show that our implementation works well, at least with these sample programs.
We applied many more test cases, in a step-by-step manner: after implementing a set of functions, a set of test cases is applied to check the feasibility and correctness of the partial implementation. We repeated these steps to arrive at the current finalized prototype implementation.
5 Conclusion
In this paper, we aimed at implementing a Direct3D-on-OpenGL library that produces, on typical Linux-based systems with the OpenGL library, the same graphics output as the desktop Direct3D library. Based on our design strategies, we implemented a Direct3D-on-OpenGL emulation library and showed that a set of demonstration programs produces the same results as the original Direct3D-based outputs. One sample program even shows an animation sequence of a game character, demonstrating the feasibility of our implementation. All these sample programs were used to perform black-box testing of our implementation, carried out step by step according to the implementation schedule. In the near future, we plan to release a prototype system with improved functionality. We are also implementing a set of related libraries, including OpenGL ES and EGL from the Khronos Group, which are already commercially available. Our Direct3D-on-OpenGL implementation will also be available in the near future. Acknowledgement. This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science and Technology (Grant 2010-0021853). This research was also supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science and Technology (Grant 2010-0028106).
References

1. Pulli, K., Aarnio, T., Roimela, K., Vaarala, J.: Designing graphics programming interfaces for mobile devices. IEEE CG&A 25(6), 66–75 (2005)
2. http://code.google.com/android/what-is-android.html
3. http://developer.apple.com/iphone/
4. Mirza, Y.H., da Costa, H.: Introducing the New Managed Direct3D Graphics API in the .NET Framework. MSDN Magazine (July 2003)
5. Segal, M., Akeley, K.: The OpenGL Graphics System: A Specification, version 3.1 (2009)
6. Munshi, A., Leech, J.: OpenGL ES Common/Common-Lite Profile Specification, version 1.1.12 (Full Specification), Khronos Group (2008)
7. Pulli, K., Aarnio, T., Miettinen, V., Roimela, K., Vaarala, J.: Mobile 3D Graphics: with OpenGL ES and M3G. Morgan Kaufmann, San Francisco (2007)
Open API and System of Short Messaging, Payment, Account Management Based on RESTful Web Services

SunHwan Lim1, JaeYong Lee2, and ByungChul Kim2

1 Internet Service Research Department, ETRI, Daejeon, Korea
[email protected]
2 Dept. of Information Communications Engineering, ChungNam National University, Daejeon, Korea
{jyl,byckim}@cnu.ac.kr
Abstract. In this paper, we design a functional architecture for short messaging, payment, and account management RESTful web services that enables IT developers to create applications using telecommunications network elements. In particular, payment and account management APIs are crucial to support a business model that enables operators to offer integrated billing. In modeling short messaging, payment, and account management, we propose resource definitions and the HTTP verbs applicable to each of these resources, and we measure the TPS of an open service gateway that includes the RESTful web services. Using this model, an example service (fare payment) consisting of a short messaging service, a payment service, and an account management service was created. Through the short messaging, payment, and account management process, the feasibility of creating a new service using the proposed architecture and resources was confirmed.

Keywords: RESTful Open API, Short Messaging, Payment, Account Management.
1 Introduction
Telecommunications networks continually evolve toward an integrated or converged architecture. From the service viewpoint, integration between wire and wireless services is also a current issue. This type of integration implies that the end user is provided with seamless broadband multimedia services across wire and wireless networks using the same terminal. The current telecommunications service market, however, is saturated. Integration between wire and wireless services offers subscribers a new level of services by coupling the broadband capability of wired networks with the mobility of wireless ones, and a number of concrete examples of such integration have already been developed. An open API (Application Programming Interface) can easily be used to implement or provide integration between wire and wireless services.

T.-h. Kim et al. (Eds.): ACN 2011, CCIS 199, pp. 66–75, 2011. © Springer-Verlag Berlin Heidelberg 2011
An open API is a set of open, standardized interfaces between an application and a telecommunications network [12], [13], [14]. This technology can provide a range of services for the integration of wire and wireless systems, independently of network infrastructures, operating systems, or development languages. SOAP (Simple Object Access Protocol) based APIs are considered complex because of message encoding and decoding, the many related stacks (e.g., WS-Security), and so on. By contrast, RESTful APIs, which use the HTTP protocol, are lightweight; their use lowers the usage barrier for developers from the Internet domain and supports Web 2.0 consumers [6], [7], [8]. OMA (Open Mobile Alliance) defines open APIs (the OMA RESTful network APIs) based on REST (REpresentational State Transfer), and Parlay defines open APIs (the Parlay X APIs) based on SOAP, both of which enable third-party applications to make use of network functionalities [1], [2], [3]. However, the development of RESTful open APIs in OMA is still ongoing.

In this paper, a functional architecture for short messaging, payment, and account management RESTful web services was designed. The architecture was implemented with Eclipse Galileo and tested on Apache Geronimo version 2.2. In modeling the functional architecture, resource definitions and the HTTP verbs applicable to each of these resources were proposed, and the TPS (Transactions Per Second) of the open service gateway including the RESTful web services was measured. Also, using this model, the functional architecture for an example service (fare payment) was designed, implemented, and tested.

This paper is organized as follows: Section 2 briefly describes open APIs. Section 3 details the designed functional architecture for short messaging, payment, and account management RESTful web services, as well as the resources defined for each. Section 4 describes the designed functional architecture for an example service (fare payment). Section 5 describes the implementation of the prototype, and Section 6 concludes the paper.
2 Open API
OMA and Parlay are groups that develop open, technology-independent APIs enabling the development of applications capable of operating across converged networks. In these groups, the selection of web services should be driven by commercial utility. The goal is to define a set of powerful, simple, highly abstracted telecommunications capabilities that developers in the IT community can both quickly comprehend and use to generate new, innovative applications. In this section, the Parlay X APIs (SOAP-based) and the OMA RESTful network APIs are briefly described. More detailed descriptions are available in the literature [1], [2], [3], [4], [5], [9], [10], [11].
2.1 Parlay X API (SOAP-Based)
Parlay X APIs should be abstracted from the set of telecommunications capabilities exposed by the Parlay/OSA (Open Service Access) APIs (CORBA-based), but may also expose related capabilities that are not currently supported in the Parlay/OSA APIs. Parlay/OSA APIs are designed to enable the creation of both telephony applications and "telecom-enabled" IT applications.

2.2 OMA RESTful Network API
OMA RESTful network APIs define HTTP protocol bindings based on the corresponding Parlay X APIs, using the REST architectural style [6], [7], [8]. They provide resource definitions, the HTTP verbs applicable to each of these resources, and the element data structures, as well as supporting material including flow diagrams and examples using the various supported message body formats (e.g., XML, JSON) [4].
3 Designed Architecture and Resources for Short Messaging, Payment, and Account Management RESTful Web Services

3.1 High Level Functional Architecture
The high-level functional modules of the short messaging, payment, and account management RESTful web services are illustrated in Fig. 1. The architecture consists of a web service module, an SCF (Service Capability Feature) module, and an operation and management module. The main reason for separating the web service module from the SCF module is the effective support of services that carry state information, such as TPC (Third Party Call) and Presence. The web service module only publishes the API, while the SCF module implements the service logic for both stateful and stateless services. Alternatively, the web service module could be used alone; however, to process stateful services it would then have to keep all state information either in a database or in request message parameters, which lowers system performance. The main functions of each element are described below.

- The web service module receives a request from the AS (Application Server) and forwards the request to the SCF module. This module interacts with the AS using REST and with the SCF via RMI (Remote Method Invocation).
- The SCF module is the service logic that provides the functionalities for short messaging, payment, and account management. This module receives a request from the web service module and forwards it to an SMS-C (Short Message Service - Center) or a charging server.
- The operation and management module consists of the access control functionality and the log functionality. If an AS wants to use the short messaging, payment, or account management RESTful web service, it first sends a security token that includes the service user's name and password. The access control functionality validates the security token, and the log functionality creates a log of the operations related to short messaging, payment, or account management.
AS: Application Server; REST: REpresentational State Transfer; RMI: Remote Method Invocation; SCF: Service Capability Feature

Fig. 1. High level functional architecture of short messaging, payment, and account management RESTful web services
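To make the module separation above concrete, the request flow can be sketched in a few lines of Python. Everything here (class and method names, the `user:password` token format, the returned fields) is an illustrative assumption, not the authors' implementation, which was written in Java with REST and RMI between the modules.

```python
# Sketch of the gateway flow: the web service module validates the
# AS's security token via the operation & management module, then
# delegates to the SCF module (which would contact an SMS-C).

class OperationAndManagementModule:
    """Access control (token validation) and logging."""
    def __init__(self, credentials):
        self._credentials = credentials     # {username: password}
        self.log = []

    def validate(self, token):
        user, _, password = token.partition(":")
        return self._credentials.get(user) == password

    def record(self, entry):
        self.log.append(entry)


class ShortMessagingSCF:
    """Service logic; a real deployment forwards to an SMS-C."""
    def send_sms(self, recipient, text):
        return {"requestId": "req-1", "to": recipient, "status": "queued"}


class WebServiceModule:
    """Publishes the API; delegates to the SCF (RMI in the paper)."""
    def __init__(self, scf, oam):
        self.scf, self.oam = scf, oam

    def post_request(self, token, recipient, text):
        if not self.oam.validate(token):
            return {"error": "invalid security token"}
        self.oam.record(("sms", recipient))
        return self.scf.send_sms(recipient, text)


gateway = WebServiceModule(ShortMessagingSCF(),
                           OperationAndManagementModule({"alice": "pw"}))
result = gateway.post_request("alice:pw", "+821012345678", "hello")
```

Keeping token validation and logging in a separate module, as in Fig. 1, means the SCF logic never has to re-check credentials.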
3.2 Resources for Short Messaging RESTful Web Services
Currently, in order to send and receive SMS programmatically, it is necessary to write applications using specific protocols to access the SMS functions provided by network elements (e.g., an SMS-C). This approach requires a high degree of network expertise. Alternatively, it is possible to use an open API approach based on web services, invoking standard interfaces to gain access to SMS capabilities; for this, a lightweight RESTful web service is needed. To support short messaging, we first define the SMS message request resources ((1), (2) in the resource table below). Additionally, we define notification subscription resources for SMS message delivery status ((3), (4) in the table below). A notification subscription can inform the application of the delivery status when an SMS is delivered to a terminal, or when delivery was impossible. This enhances application quality and allows subscribers to confirm SMS delivery status in real time. The delivery result for a destination address can be one of the following values: successful delivery to terminal, successful delivery to network, unsuccessful delivery, delivery status unknown, message still queued for delivery, or unable to provide delivery status notification. The following table gives a detailed overview of the resources defined for the short messaging RESTful web service.
Table 1. Resources summary for the short messaging RESTful web services
Base URL: http://{server root}/{api version}/shortmessaging

| Resource | URL | GET | PUT | POST | DELETE |
| SMS message requests (1) | /requests | return all message requests | no | create new message request (requestId assigned) | no |
| Individual SMS message request (2) | /requests/{requestId} | return one message request | no | no | no |
| SMS message delivery status notification subscriptions (3) | /subscriptions | return all subscriptions | no | create new subscription (subscriptionId assigned) | no |
| Individual SMS message delivery status notification subscription (4) | /subscriptions/{subscriptionId} | return one subscription | update subscription | no | delete one subscription |
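The verb mapping for the message request resources can be sketched as a small in-memory server-side model. This is an illustrative sketch only, assuming the paper's resource semantics (POST on /requests assigns a requestId; GET returns one or all requests); the class and field names are not from the paper.

```python
# In-memory model of Table 1, resources (1) and (2):
#   POST /requests            -> create request, server assigns requestId
#   GET  /requests            -> return all message requests
#   GET  /requests/{requestId} -> return one message request
import itertools

class ShortMessagingResource:
    def __init__(self):
        self._requests = {}
        self._ids = itertools.count(1)

    def post_request(self, address, message):
        request_id = f"req-{next(self._ids)}"   # requestId assigned by server
        self._requests[request_id] = {
            "address": address,
            "message": message,
            "deliveryStatus": "MessageWaiting",  # assumed initial status
        }
        return request_id

    def get_request(self, request_id):
        return self._requests.get(request_id)

    def get_requests(self):
        return list(self._requests)


svc = ShortMessagingResource()
rid = svc.post_request("+821012345678", "hello")
```

Assigning identifiers on POST and addressing individual items by `/requests/{requestId}` is what lets the collection resource (1) and the item resource (2) carry different verb sets.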
3.3 Resources for Payment RESTful Web Services
A vast amount of content, both information and entertainment, will be made available to subscribers. To support a business model that enables operators to offer integrated billing, a payment API is crucial; open and interoperable payment APIs are the key to market growth and investment protection. The payment RESTful web service supports payments for any content in an open, web-like environment, and supports charging of currency amounts. To support such payments with a RESTful web service, we first define the amount charging resource ((1) in the resource table below) and the split charging resource ((2) in the table below). If the account of one user cannot cover a payment (i.e., its balance is too low) but the sum of multiple accounts of several users can, the split charging method is needed; split charging makes payments from multiple accounts simultaneously, which also enhances application quality. Additionally, we define the payment reservation resources ((3), (4) in the table below). These resources reserve an amount of an account and charge against the reservation, to ensure that the subscriber can fulfill his payment obligations in the case of a multimedia service (e.g., streaming a soccer match). The following table gives a detailed overview of the resources defined for the payment RESTful web service.

Table 2. Resources summary for the payment RESTful web services
Base URL: http://{server root}/{api version}/payment
| Resource | URL | GET | PUT | POST | DELETE |
| Amount charging (1) | /amount/charging | return transaction by amount | no | create charging or refunding by amount | no |
| Amount split charging (2) | /amount/charging/split | return transaction by amount split | no | create split charging by amount | no |
| Amount reservations (3) | /amount/reservations | return all reservations | no | create the reservation by amount (reservationId assigned) | no |
| Individual amount reservation (4) | /amount/reservations/{reservationId} | return one reservation | increase or decrease the reservation by amount | create charging of the reservation by amount | delete one reservation and return funds left in the reservation |
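The reservation lifecycle in resources (3) and (4) — reserve, charge against the reservation, then release the remainder — can be sketched as follows. All names are illustrative assumptions; the comments map each method to the verb it models.

```python
# Sketch of Table 2 reservation semantics (resources (3) and (4)).

class AmountReservations:
    def __init__(self):
        self._reservations = {}
        self._next_id = 1

    def create(self, amount):            # POST /amount/reservations
        rid = f"res-{self._next_id}"
        self._next_id += 1
        self._reservations[rid] = amount
        return rid                       # reservationId assigned by server

    def adjust(self, rid, delta):        # PUT .../{reservationId}
        self._reservations[rid] += delta # increase or decrease by amount

    def charge(self, rid, amount):       # POST .../{reservationId}
        if amount > self._reservations[rid]:
            raise ValueError("reservation too small")
        self._reservations[rid] -= amount

    def release(self, rid):              # DELETE .../{reservationId}
        return self._reservations.pop(rid)  # funds left in the reservation


r = AmountReservations()
rid = r.create(100)   # reserve 100 before the stream starts
r.charge(rid, 60)     # charge 60 against the reservation
left = r.release(rid) # stream ends; 40 is returned to the account
```

Reserving up front is what guarantees, for a long-running multimedia service, that the subscriber can still pay when charging actually occurs.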
3.4 Resources for Account Management RESTful Web Services
Subscribers have credit with their service providers, and the consumption of services reduces that credit; therefore, subscribers sometimes have to recharge their accounts. This occurs through an application that interfaces with the subscriber either directly or indirectly. To support account management with a RESTful web service, we first define the account balance and history resources ((1), (2) in the resource table below). Additionally, we define notification subscription resources for account balance changes ((3), (4) in the table below). A notification subscription can inform applications (e.g., SMS, MMS) about account changes made by other applications (e.g., a multimedia service, WAP/WEB pages) after a charge, a recharge, or an accountLow event (the account balance falling below the balance threshold). This enhances application quality and allows subscribers to confirm information about charges, recharges, and accountLow events in real time. The following table gives a detailed overview of the resources defined for the account management RESTful web service.
Table 3. Resources summary for the account management RESTful web services
Base URL: http://{server root}/{api version}/account

| Resource | URL | GET | PUT | POST | DELETE |
| Account balance (1) | /balance | return the account balance and the expiration date | update the account balance | no | no |
| Account history (2) | /history | return the transaction history of the account | no | no | no |
| Account balance change notification subscriptions (3) | /subscriptions | return all subscriptions | no | create new subscription (subscriptionId assigned) | no |
| Individual account balance change notification subscription (4) | /subscriptions/{subscriptionId} | return one subscription | update subscription | no | delete one subscription |
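The balance change notifications described above (charge, recharge, and accountLow) can be sketched with a simple observer model. The class, the callback shape, and the threshold field are illustrative assumptions; in the paper, subscriptions are created via POST /subscriptions and delivered to applications such as SMS or MMS.

```python
# Sketch of account balance change notifications: subscribers are
# notified after charge/recharge, and "accountLow" fires when the
# balance drops below the threshold.

class Account:
    def __init__(self, balance, low_threshold):
        self.balance = balance
        self.low_threshold = low_threshold
        self.subscribers = []   # callbacks, registered via POST /subscriptions

    def subscribe(self, callback):
        self.subscribers.append(callback)

    def _notify(self, event):
        for callback in self.subscribers:
            callback(event, self.balance)

    def charge(self, amount):
        self.balance -= amount
        self._notify("charge")
        if self.balance < self.low_threshold:
            self._notify("accountLow")

    def recharge(self, amount):
        self.balance += amount
        self._notify("recharge")


events = []
account = Account(balance=10.0, low_threshold=5.0)
account.subscribe(lambda event, balance: events.append((event, balance)))
account.charge(7.0)    # balance 3.0: "charge", then "accountLow"
account.recharge(5.0)  # balance 8.0: "recharge"
```

Pushing events to subscribed applications is what lets a subscriber learn about an accountLow condition in real time instead of polling /balance.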
4 Designed Architecture for Example Service (Fare Payment)

4.1 Functional Architecture for Fare Payment
The functional blocks of the open service application server and gateway for fare payment using the open API are illustrated in Fig. 2. The open service application server provides the service user with a payment transaction history retrieval function through an account management UI (User Interface), and it stores customer data for service user subscription management. When the service user requests the payment transaction history, the account management logic in the open service application server invokes the account management API of the open service gateway. The gateway's account management functionality performs the request and forwards the payment transaction history held in the charging server back to the account management logic, which presents it to the service user through the account management UI.

Likewise, the open service application server provides the service user with a payment UI. When the service user requests a fare payment, the payment logic in the open service application server invokes the payment API of the open service gateway. The gateway's payment functionality performs the request, and the result is applied to the charging server, which stores the payment transaction data and log.

Finally, the open service application server provides the service user with a short messaging UI. When the service user requests short messaging, the short messaging logic in the open service application server invokes the short messaging API of the open service gateway. The gateway's short messaging functionality performs the request and sends the short message to the mobile phone through the SMS center.
API: Application Programming Interface; SCF: Service Capability Feature; UI: User Interface

Fig. 2. Functional architecture for fare payment
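Combining the three APIs, a fare-payment application would issue a small sequence of REST calls against the gateway. The sketch below only builds the request descriptions; the host name, the JSON field names, and the function name are illustrative assumptions, while the resource paths follow Tables 1–3 above.

```python
# Hypothetical fare-payment flow: charge the fare, send an SMS
# receipt, then make the transaction history retrievable.

BASE = "http://gateway.example.com/v1"   # assumed gateway host

def fare_payment_requests(user, amount, phone):
    """Return the REST calls the fare-payment service would issue."""
    return [
        # 1. Payment API: charge the fare (Table 2, resource (1)).
        ("POST", f"{BASE}/payment/amount/charging",
         {"endUserId": user, "amount": amount}),
        # 2. Short messaging API: send a receipt (Table 1, resource (1)).
        ("POST", f"{BASE}/shortmessaging/requests",
         {"address": phone, "message": f"Fare of {amount} charged."}),
        # 3. Account management API: fetch history (Table 3, resource (2)).
        ("GET", f"{BASE}/account/history", {"endUserId": user}),
    ]

calls = fare_payment_requests("alice", 1250, "+821012345678")
```

Because each capability is a separate resource behind one gateway, the application composes them with ordinary HTTP verbs instead of telecom-specific protocols.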
5 Implementation of the Prototype Function

5.1 Environments and Testing
The short messaging, payment, and account management RESTful web services were implemented using Eclipse Galileo and tested on Apache Geronimo version 2.2. These RESTful web services consist of a web service module and an SCF module. The web service module interacts with the SCF module using RMI, which implies that a security policy between these modules is needed; in practice, however, such a policy is unnecessary, as the two modules are parts of the same system. These RESTful web services also interact with a MySQL DB to record transaction history information using JDBC (Java Database Connectivity), with an SMS-C for sending SMS messages to terminals, and with the charging server for charging and billing information. For the TPS measurement of the open service gateway including the RESTful web services, the above three and four additional web services were tested using IBM Rational Performance Tester version 8.2. The following table gives the TPS for the selected API in each web service of the open service gateway.

Table 4. TPS for open service gateway

| Service Component | API | TPS (Average: 249.62) |
| SMS | sendSMS | 252 |
| Payment | chargeAmount | 254.5 |
| Account Management | getBalance | 253 |
| Presence | getBuddyList | 256.2 |
| Directory | getContactList | 239 |
| Third Party Call | makeCall | 208 |
| Mail | sendMail | 284.7 |
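As a quick arithmetic check, the reported average follows directly from the seven per-API measurements in Table 4:

```python
# Recompute the average TPS from the Table 4 measurements.
tps = {
    "sendSMS": 252, "chargeAmount": 254.5, "getBalance": 253,
    "getBuddyList": 256.2, "getContactList": 239,
    "makeCall": 208, "sendMail": 284.7,
}
average = sum(tps.values()) / len(tps)   # 1747.4 / 7 ≈ 249.63
```

This agrees with the table's reported average of 249.62 up to rounding (the exact value is 249.6285...).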
An example service (fare payment) consisting of a short messaging service, a payment service, and an account management service was created. The created service was implemented and tested on Microsoft Visual Studio 2010 using the C# language.
6 Conclusion
The current telecommunications market is saturated. Regarding new market growth, a range of new intelligent services is on the horizon. Potential subscribers must be introduced to these services, but it is currently not feasible to bring third-party service providers and developers into the vertical architecture of current telecommunications networks. Thus, open, technology-independent APIs that enable the development of applications operating across converged networks are necessary. In this paper, a functional architecture for short messaging, payment, and account management RESTful web services was designed that enables IT developers to create applications using telecommunications network elements. The architecture was implemented with Eclipse Galileo and tested on Apache Geronimo version 2.2. In modeling the functional architecture, resource definitions and the HTTP verbs applicable to each of these resources were proposed, and the TPS of the open service gateway including the RESTful web services was measured. Also, using this model, the functional architecture for an example service (fare payment) was designed, implemented, and tested. Through the short messaging, payment, and account management process, the feasibility of creating a new service using the proposed architecture and resources was confirmed.
Acknowledgment. “This research was supported by the KCC (Korea Communications Commission), Korea, under the R&D program supervised by the KCA (Korea Communications Agency)” (KCA-2011- (09913-05001)).
References
1. 3GPP, Third Generation Partnership Project, http://www.3gpp.org/
2. OMA (Open Mobile Alliance), http://www.openmobilealliance.org/
3. ETSI, European Telecommunications Standards Institute, http://www.etsi.org/
4. W3C, World Wide Web Consortium, http://www.w3c.org/
5. IETF, Internet Engineering Task Force, http://www.ietf.org/
6. Fielding, R.: Architectural Styles and the Design of Network-based Software Architectures. Doctoral dissertation, University of California, Irvine (2000)
7. Richardson, L., Ruby, S.: RESTful Web Services. O'Reilly, Sebastopol (2007)
8. Pautasso, C.: REST vs. SOAP: Making the Right Architectural Decision. In: 1st International SOA Symposium (July 2008)
9. Parlay X Working Group: Parlay X Web Services White Paper v1.0 (2002)
10. Web Services Working Group: Parlay Web Services WSDL Style Guide (2002)
11. Parlay X Working Group: Parlay X Web Services Specification v1.0 (2003)
12. Moerdijk, A.-J., Klostermann, L.: Opening the Networks with Parlay/OSA: Standards and Aspects behind the APIs. IEEE Network (2003)
13. Wu, W., Zou, H., Yang, F.: Design OSA/Parlay Application Frameworks Using a Pattern Language. In: Proceedings of ICCT (2003)
14. Hellenthal, J.W., Panken, F.J.M., Wegdam, M.: Validation of the Parlay API through Prototyping. In: IEEE Intelligent Network Workshop (2001)
15. Luttge, K.: E-Charging API: Outsource Charging to a Payment Service Provider. In: IEEE Intelligent Network Workshop (2001)
Privacy Reference Architecture for Personal Information Life Cycle

Yong-Nyuo Shin1, Woo Bong Chun2, Hong Soon Jung3, and Myung Geun Chun4,*

1 Hanyang Cyber University, Dept. of Computer Engineering, 17 Haengdang-dong, Seongdong-gu, Seoul, Korea
[email protected]
2 Sungkyunkwan University, 300 Cheoncheon-dong, Jangan-gu, Suwon, Gyeonggi-do, Korea
[email protected]
3 Korea Information Security Agency, IT Venture Tower, Jungdaero 135, Songpa-gu, Seoul, Korea
[email protected]
4 Chungbuk National University, Dept. of Electrical & Computer Engineering, 410 Seongbong-ro, Heungdeok-gu, Cheongju, Chungbuk, Korea
[email protected]
Abstract. The increased commercial use (and value) of PII (Personally Identifiable Information), the sharing of PII across legal jurisdictions, and the growing complexity of ICT systems make it extremely difficult for an organization to ensure privacy and to achieve compliance with the various laws and regulations. Additionally, the open nature and characteristics of the Internet and its communication protocols can lead to a loss of information privacy when PII is used in a way that was not originally intended. Uncertainty or distrust can arise from how an organization or other entity handles information privacy matters, and from cases of PII misuse. This paper proposes a security model for the management of personal information at each lifecycle stage, so that information and communication service providers, which collect, store, manage, and use personal information, can manage their customers' personal information more securely and efficiently. However, as the policies and technologies designed to protect personal information vary in application depending on the environment of each organization and enterprise, this paper presents general criteria; the security requirements for each personal information lifecycle stage may therefore be applied selectively, as appropriate to the environment of each organization and enterprise.

Keywords: Privacy Framework, Privacy Reference Architecture, Personally identifiable information, PII principle, ubiquitous.
1 Introduction

As smartphones have become widely adopted, they have brought about changes in individual lifestyles, as well as significant changes in industry. As the mobile technology of smartphones has become associated with all areas of industry, it is not only accelerating innovation in sectors such as shopping, healthcare, education, and finance, but is also creating new markets and business opportunities [1]. With increasing convenience, however, comes increasing concern for privacy. The majority of mobile app users (those who use mobile devices and have used an app) are cautious about sharing their locations via mobile phone. According to Nielsen, 59 percent of female app users and 52 percent of male app users are concerned about location privacy on their mobile devices. Only 8 percent of female users and 12 percent of male users marked themselves as "not concerned" whatsoever, with the remainder being indifferent. The company also noted that different age groups tend to take different attitudes towards mobile app privacy. Users between the ages of 13 and 17 were a bit more concerned than their slightly older peers, at 55 percent (while the 18-24 group was at 52 percent and the 25-34 group at 50 percent). But once users cross the 35-year barrier, the numbers start going up again. Unsurprisingly, the oldest age group, those 55 and up, showed the highest level of concern for location privacy, at 63 percent [2]. Mobile app and location privacy has been a hot topic for well over a year now, thanks to the explosion in popularity of location-based apps on iOS and Android.

There are various security breach factors and methods involved in creating, collecting, storing, using, and disposing of privacy information. We present the general security requirements for each stage of the personal information lifecycle, based on these breach factors. A service provider that uses personal information should be able to take various protective measures in order to meet these security requirements.

* Corresponding author.

T.-h. Kim et al. (Eds.): ACN 2011, CCIS 199, pp. 76–85, 2011. © Springer-Verlag Berlin Heidelberg 2011
However, the security requirements can vary depending on the service and system environment, and additional security requirements beyond those described in this paper may be established. Following the introduction, Section 2 outlines existing studies related to our work. Section 3 introduces the actors and their interaction in the privacy reference architecture. Section 4 describes the three main factors that influence privacy requirements. Section 5 explains the privacy policy, using the GPKI as an example. Section 6 presents breach factors and methods for each personal information lifecycle stage. The last section provides a conclusion.
2 Related Works

2.1 Privacy Framework
The ISO 29100 [3] privacy principles were derived from existing sets of principles developed by international organizations such as the OECD [10] and APEC [11], but focus on implementation in information and communication technology and on the development of privacy management systems within the organization's ICT systems. These privacy principles should be used to guide the design, development, and implementation of privacy policies and privacy safeguarding controls. Additionally, they can assist in the monitoring and measurement of performance, benchmarking, and auditing aspects of privacy management programs in an organization. Despite the differences in social, cultural, legal, and economic factors that can limit the application of these principles in some contexts, the application of the privacy principles set out in this International Standard is recommended, and any exceptions should be limited to the maximum extent possible.

In PII processing, the data processing life cycle starts with the collection of PII from the PII principals, from third parties, or from PII that is already under the control of the PII controller for other purposes. It follows that when dealing with privacy issues in the context of information communication systems, there are generally two main actors involved: the PII principal and the PII controller. However, for conceptualizing the processing of PII in this framework, several roles of these actors can be differentiated. The PII processor has a specific role in these scenarios: it executes the collection, processing, use, and transfer of PII on behalf of and under the supervision of the PII controller. The PII processor is bound by legal contract to execute exactly those processing steps the PII controller has stipulated, to subject itself to the control of the PII controller and, possibly, relevant regulating agencies, to observe the stipulated privacy requirements, and to implement the corresponding privacy controls.

2.2 Privacy Reference Architecture
The ICT system presented to the PII principal is critical, as it deals with important concepts such as communicating the privacy policy, consent management, and data collection. The ICT system provided to the PII principal should communicate the privacy policy and its changes to the user. Furthermore, in addition to informing the PII principal about the privacy policy, the ICT system should also acquire consent for processing PII. Since the PII principal is the party that provides PII to the system, its ICT system is a suitable place for deploying techniques for securing PII. These techniques include, but are not limited to, pseudonymization, anonymization, encryption, and secret sharing. The architecture for the PII principal's ICT system is presented in Figure 1 [4].
Fig. 1. The PII principal ICT system architecture
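One of the PII-securing techniques named above, pseudonymization, can be sketched in a few lines. This is an illustrative example only and is not prescribed by the standard; the key handling, pseudonym length, and record fields are assumptions for the sketch.

```python
# Pseudonymization sketch: replace a direct identifier with a stable
# keyed pseudonym (HMAC-SHA256) before the PII principal's ICT system
# transmits a record, so no raw identifier leaves the system.
import hashlib
import hmac

def pseudonymize(identifier: str, secret_key: bytes) -> str:
    """Derive a stable pseudonym; without the key, reversal is infeasible."""
    digest = hmac.new(secret_key, identifier.encode("utf-8"), hashlib.sha256)
    return digest.hexdigest()[:16]   # shortened for readability

key = b"example-only-key"            # assumption: key managed elsewhere
record = {
    "user": pseudonymize("alice@example.com", key),  # pseudonym, not raw PII
    "purpose": "service-improvement",
}
```

Using a keyed hash rather than a plain hash matters: the same identifier always maps to the same pseudonym (so records can still be linked), but a party without the key cannot confirm a guessed identifier by hashing it.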
3 Principles

The architecture is based on the actors involved in PII processing described in ISO/IEC 29100. These actors are the PII principal, the PII controller, and the PII processor. Figure 2 illustrates the actors and the possible flow of PII between them, as described within ISO/IEC 29100.
Fig. 2. The actors and their interaction from ISO 29101
The architecture consists of three parts. Each part applies to the Information and Communication Technology (ICT) system deployed at the level of one of these actors. If one entity fills the roles of several actors, then the ICT system deployed for that entity should contain the components for all of those actors. For example, if a single entity takes the roles of both the controller and the processor, its ICT system should contain components from both architectures. ICT systems employ a wide range of communication and trust models; the architecture in this paper is generic and should be implemented while considering the specific properties of each ICT system. For example, in ICT systems employing peer-to-peer communications, every application may take the roles of all three listed actors. Similarly, in social networking applications, information may be processed by anyone with access to other people's profiles; therefore, significantly more users should be considered processors with regard to the architecture in this International Standard. As a necessary prerequisite to designing ICT systems, any organization processing PII should adopt a privacy policy. The terms of this privacy policy should be used to choose the architectural components that achieve the protection level described in that privacy policy.
4 Privacy Requirements
Privacy requirements are influenced by three main factors, as depicted in Figure 3 below: (1) legal and regulatory requirements for the safeguarding of the individual's privacy and the protection of his/her PII, (2) the particular business and use case requirements, and (3) the individual privacy preferences of the PII principal.
80
Y.-N. Shin et al.
Fig. 3. Factors influencing privacy requirements
For the safeguarding of the individual's privacy and the protection of his/her PII, legal and regulatory requirements play a prominent role in most jurisdictions. These include not only local or national data protection and privacy laws but also international regulations such as transborder data transfer rules, as well as laws and regulations on employee and works council organizations and consumer protection. Entities collecting, using, transferring, storing, archiving or disposing of PII can be required to log certain access, modification and disclosure activities around PII. In some jurisdictions, there may also be laws that require organizations collecting, using, transferring, storing, archiving or disposing of PII that have experienced a privacy breach to notify all individuals whose PII was compromised.

Privacy requirements are also influenced by a number of contractual requirements that all have to be met, such as industry regulations, professional standards, company policies (which might be derived from the company's individual risk scenario, or from brand and competitive requirements protecting the brand from negative publicity in the event of a privacy breach), internal control systems, and third-party contracts. Entities that process PII are well advised to make specific information about their contractual requirements and the resulting policies and practices relating to the management of PII readily available to the public.

Privacy education, training, and a proactive communication strategy, including regular updates of the organization's internal privacy policies and procedures, should be part of the privacy-sensitive attitude of an organization processing PII. This includes publishing easily accessible and simple-to-use complaint procedures and conducting regular audits and privacy assessments to ensure consistent safeguarding of PII.
All the privacy requirements that have been determined to be appropriate for the organization collecting, using, transferring, storing, archiving or disposing of PII should be documented and communicated in the organization’s internal privacy policy. In addition, any third parties that have access to PII should be made aware of their obligations in a formalized manner, for example by setting up third party agreements.
5 Privacy Policy Using the GPKI
Personal information disclosure and infringement can be fundamentally prevented by improving the access rights of a personal
information handler to the GPKI (Government Public Key Infrastructure), and by implementing measures that prohibit any discretionary transfer of rights [5]. In addition, information leakages should be blocked, and responsibility should be clarified, by mandating and managing the access logs of personal information handling staff and other related personnel, including details of personal information input, modification, deletion, and removal.

A system that checks technical stability needs to be developed in order to secure the stability of the system that processes personal information when performing personal information production tasks. Technical stability checks should be taken into consideration when the system is introduced. When interfacing the system, this should be discussed with the central administrative agency in accordance with the "Privacy Law." Technical safety measures include those pertaining to transmission devices that can securely transmit personal information on the network, covering factors such as computer virus infection prevention and encrypted communication; installation and operation of technical access control devices such as firewalls; the use of discrete encryption or locking devices for transmitting data; the separate management of personal information and general information in the system; and other technical security measures against internal/external disclosure, damage, alteration, intrusion, and detection.

As personal information is disclosed and violated through various channels with the development of information and communication technology, technical measures against these violations are required. In particular, the appearance of ubiquitous technology necessitates preparatory measures to manage the risks presented [6].
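The access-logging requirement above can be illustrated with a minimal audit-record sketch (Python; the field names and the action vocabulary are our assumptions, chosen to match the four operations named in the text):

```python
import time

# The four logged operations the text mandates for PII handlers.
ALLOWED_ACTIONS = {"input", "modification", "deletion", "removal"}

def log_pii_access(logbook: list, handler_id: str, action: str,
                   record_id: str) -> None:
    """Append one audit entry recording who performed which PII
    operation on which record, and when."""
    assert action in ALLOWED_ACTIONS, f"unknown action: {action}"
    logbook.append({
        "timestamp": time.time(),
        "handler": handler_id,
        "action": action,
        "record": record_id,
    })

audit_log: list = []
log_pii_access(audit_log, "staff-042", "modification", "pii-rec-17")
log_pii_access(audit_log, "staff-042", "deletion", "pii-rec-17")
assert [e["action"] for e in audit_log] == ["modification", "deletion"]
```

In a deployment such a log would itself be integrity-protected and retained per the applicable retention rules.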
When reviewing other breach factors in this area, common breach factors for each service characteristic and individual breach factors are found in each stage of the information lifecycle – collection, storage and use, provision, and disposal. Individuals can be identified in the ubiquitous computing environment by collecting RFID [8] tags to analyze consumers' preferences and purchase patterns. Cases of personal information leakage have also increased as the number of P2P services increases. Personal information can be collected more comprehensively as information collection methods become more advanced and diversified. These methods include video recording by CCTV [9], location information collection by mobile devices, video recording by small optical devices, information collection from the Internet, and information collection using biometric technology.

Analyzing each area, HSDPA and W-CDMA can disclose image and location information during the collection stage, whereas the Broadband Convergence Service and DMB/DTV can expose cookie and location information. Telematics, home networking, and RFID/USN suffer from several breach factors, including inappropriate information access, disclosure of the collection/personal belonging possession status, and location information disclosure. Therefore, protective measures need to be prepared in response to the development of new technologies such as RFID and biometric information. Also, technical measures need to be taken in order to secure the safety of personal information storage media, such as computing rooms, personal information processing terminals, and personal information storage databases.
6 Breach Factors and Methods by Personal Information Lifecycle Stage
The following section divides the lifecycle of personal information into four stages – personal information collection, storage and management, use and provision, and disposal. We define each stage and introduce the contents of the personal information processing performed at each stage.

6.1 Collecting Personal Information
In this stage, the information and communication service provider collects personal information from the personal information owner for use. For example, personal information is collected when an Internet service user provides his/her personal information to the service provider when subscribing to services. The personal information provided to the service provider is registered in a personal information database with the approval of the personal information administrator.

The personal information collected in this stage can be divided into static personal information and dynamic personal information. Static personal information is provided by a personal information owner upon the request of the information and communication service provider when subscribing to new services, and is retained until service withdrawal. Dynamic personal information refers to the location information provided by the user of specific services such as RFID and LBS, as well as the cookie information that describes Internet connection status. That is, personal information is collected when the user provides his/her personal information to the service provider when subscribing to services provided by the information and communication service provider.

6.2 Storing and Managing Personal Information
The information and communication service provider stores and manages the personal information of a personal information owner. For example, the service provider stores collected personal information in the database, and manages access privileges so that only authorized persons can access the personal information in question. In addition, at the request of the personal information owner, the service provider manages the stored personal information, including its modification, addition, or disposal, with the approval of the personal information administrator.

6.3 Using and Providing Personal Information
In this stage, the information and communication service provider uses the personal information of the personal information owner pursuant to various needs. For example, personal information collected and managed by the service provider is given to a third party, such as an outsourcing company or a business partner, with the approval of the personal information administrator. Generally, static personal information is used for basic services such as service user authentication or Internet shopping, as well as for additional services like events. If necessary, it can also be
provided to a third service provider besides the service provider, as specified in the privacy statement. Therefore, the personal information stored and managed by the information and communication service provider is used or provided during the use and provision stage, from the service subscription date until the withdrawal of the personal information owner.

6.4 Disposing of Personal Information
The information and communication service provider disposes of the personal information of the personal information owner when the retention period of the concerned information expires. For example, personal information is disposed of with the approval of the personal information administrator when the Internet service use is stopped (withdrawn), or when the requested service subscription is terminated. In this stage, static personal information is disposed of when the service subscription is terminated (that is, when the personal information retention period expires), whereas dynamic personal information such as location information or cookie information created during service use is disposed of immediately after the requested service is completed, rather than only when withdrawal from the entire service is effected.
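The four lifecycle stages defined above can be summarized as a small state machine (an illustrative sketch; the transition set is our reading of Sections 6.1-6.4, not a normative model):

```python
# Allowed transitions between the four lifecycle stages described above.
TRANSITIONS = {
    "collection": {"storage_and_management"},
    "storage_and_management": {"use_and_provision", "disposal"},
    "use_and_provision": {"storage_and_management", "disposal"},
    "disposal": set(),  # terminal: the record no longer exists
}

class PersonalInformationRecord:
    """Tracks which lifecycle stage a piece of personal information is in."""
    def __init__(self) -> None:
        self.stage = "collection"

    def advance(self, next_stage: str) -> None:
        if next_stage not in TRANSITIONS[self.stage]:
            raise ValueError(f"illegal transition {self.stage} -> {next_stage}")
        self.stage = next_stage

rec = PersonalInformationRecord()
rec.advance("storage_and_management")
rec.advance("use_and_provision")
rec.advance("disposal")
assert rec.stage == "disposal"
```

Encoding the stages this way makes the disposal obligations checkable: any record that never reaches the terminal stage within its retention period is a policy violation.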
Fig. 4. Privacy information management model in Korea
7 Conclusion
The steady increase of data availability and the rapid decline in the cost of storage have created a situation in which control over data – in this case specifically personally identifiable information – becomes difficult to achieve. Without control
over data, it will be difficult to fulfill the expectations that users have when it comes to the handling of their PII, and it will be impossible to comply with laws and regulations that require more control. Therefore, it is very important to understand privacy within each phase of the data processing life cycle. Personal information disclosure and infringement can be fundamentally prevented by improving the access rights of a personal information handler to the GPKI (Government Public Key Infrastructure), and by implementing measures that prohibit any discretionary transfer of rights. We divide the lifecycle of personal information into four stages – personal information collection, storage and management, use and provision, and disposal – and define each stage, introducing the contents of the personal information processing performed at each stage.

This paper proposes a security model for the management of personal information by each lifecycle stage, so that the information and communication service providers that collect, store, manage, and use personal information can manage their customers' personal information more securely and efficiently. However, as the policy and technology designed to protect personal information vary in application depending on the environment of each organization and enterprise, this paper presents general criteria. Therefore, the security requirements for each personal information lifecycle stage may be applied selectively, as appropriate to the environment of each organization and enterprise. In addition, this work could be utilized as an international standardization item through the international organization related to security (ISO SC27).

Acknowledgments. This research was supported by the ICT Standardization program (2011-PM10-19) of the MKE (Ministry of Knowledge Economy).
References
1. Chang, Y.F., Chen, C.S., Zhou, H.: Smart phone for mobile commerce. Computers & Security, Elsevier Advanced Technology (31), 740–747 (2009)
2. Mobile phone users wary about privacy. The Nielsen Company (April 2011)
3. ISO/IEC JTC1 SC27 WG5, CD Information technology – Security techniques – Privacy Framework (April 2011)
4. ISO/IEC JTC1 SC27 WG5, WD Information technology – Security techniques – Privacy Reference Architecture (April 2011)
5. Blaze, M., Feigenbaum, J., Keromytis, A.D.: KeyNote: Trust Management for Public-Key Infrastructures (Position Paper). In: Proceedings of the 6th International Workshop on Security Protocols, April 15-17, pp. 59–63 (1998)
6. Saponas, T.S.: Devices that tell on you: privacy trends in consumer ubiquitous computing (2007)
7. Bellare, M., Boldyreva, A., Desai, A., Pointcheval, D.: Key-Privacy in Public-Key Encryption. In: Proceedings of the 7th International Conference on the Theory and Application of Cryptology and Information Security: Advances in Cryptology, December 9-13, pp. 566–582 (2001)
8. Collins, J.: Lost and Found in Legoland. RFID Journal (2004), http://www.rfidjournal.com/article/articleview/921/1/1/
9. Senior, A.: Blinkering surveillance: enabling video privacy through computer vision. Surveillance & Society (2002)
10. OECD Guidelines on the Protection of Privacy and Transborder Flows of Personal Data, http://www.oecd.org/document/18/0,3746,en_2649_34255_1815186_1_1_1_1,00.html#introduction
11. APEC, Privacy Framework (2008), http://www.ema.gov.au/www/agd/rwpattach.nsf/VAP/03995EABC73F94816C2AF4AA2645824B~APEC+Privacy+Framework.pdf/$file/APEC+Privacy+Framework.pdf
A Rate Adaptation Scheme to Support QoS for H.264/SVC Encoded Video Streams over MANETs Chhagan Lal, Vijay Laxmi, and M.S. Gaur Department of Computer Engineering, Malaviya National Institute of Technology, Jaipur, India {chhagan,vlaxmi,gaurms}@mnit.ac.in
Abstract. In recent years, developments in wireless handheld devices and networking offer a technical platform for multimedia streaming over mobile ad-hoc networks (MANETs). However, providing QoS for multimedia streaming is quite difficult in MANETs due to their physical and organizational characteristics. Due to the high data rate and frame size of multimedia traffic, there are times when the offered load exceeds the available network capacity. This causes packet drops due to congestion and router input queue overflow. In this paper, we propose a cross-layer rate adaptation scheme that provides the required adaptation between the application's transmission rate and the network bandwidth. The proposed scheme uses the classical dual leaky bucket (DLB) algorithm to regulate the traffic flow with guaranteed QoS in terms of end-to-end delay. Furthermore, our scheme avoids congestion in the network by controlling the traffic flow of video sequences, which increases the overall network throughput. Our scheme uses the encoding information, together with the QoS requirements of the data session (i.e., the video stream) provided by the application layer, to regulate the flow according to the available network resources. We validate our scheme in scenarios where different network sizes and node mobility degrees are tested in order to show the benefits offered by our scheme. We have used video traces of the latest coding standard, H.264/SVC, to simulate video sources. The quality of the received video is measured in terms of network metrics such as end-to-end delay, jitter and packet loss ratio.

Keywords: ad hoc networks, Quality of Service, video streaming, H.264/SVC, dual leaky bucket, cross-layer design, video traces.
1 Introduction
Mobile ad hoc networks (MANETs) [3] are collections of mobile hosts that can self-configure, self-organize, and self-maintain. These mobile hosts communicate with each other through wireless channels with no centralized control. The inherently infrastructure-less, inexpensive and quick-to-deploy nature of MANETs holds promise for their use in diverse domains. For several years, multimedia streaming over the Internet has been well established and has numerous applications, including audio/video streaming, TV on demand, voice over IP and surveillance systems. Due to recent developments in handheld devices and wireless networks, it is possible to extend video streaming services over ad-hoc networks in order to increase the number of mobile end users. Wireless network characteristics and mobile devices cause additional challenges that must be addressed to provide sufficient quality of video streaming to end users over

T.-h. Kim et al. (Eds.): ACN 2011, CCIS 199, pp. 86–95, 2011.
© Springer-Verlag Berlin Heidelberg 2011
MANETs. These challenges are introduced by the resource-constrained network and mobile devices, high bit error rates and a dynamically changing topology. Despite the large number of routing solutions available for MANETs, their practical implementation and use in the real world is still limited. Multimedia and other delay- or error-sensitive applications that attract a mass of users towards MANETs have demonstrated that best-effort routing protocols are not adequate. Because of the dynamic topology and physical characteristics of MANETs, providing guaranteed QoS is not practical, so QoS-adaptation and soft QoS have been proposed instead [2]. Soft QoS means that failure to meet QoS is allowed in certain cases, such as when a route breaks or the network becomes partitioned [2]. If node mobility is too high and the topology changes very frequently, providing even soft QoS is not possible. For a routing protocol to function properly in a mobile wireless network, the rate of topology state information propagation must be higher than the rate of topology change; otherwise the topology information will always be stale and inefficient routing will take place (in some cases there may be no routing at all). This applies equally to QoS state and QoS route messages. A network that satisfies the above condition is said to be combinatorially stable [1].

Some routing protocols provide information about available network resources to applications so that these can adjust their execution accordingly to achieve their required level of QoS. Other routing approaches may not serve the application directly but try to increase the overall network performance in terms of QoS metrics. Multimedia streaming is of two types: real-time streaming (e.g., video conferencing) and non-real-time streaming (e.g., file transfer). In real-time streaming, the network should provide a sufficient amount of bandwidth and an upper bound on delay and jitter values throughout the session.
The overall challenge in real-time streaming is to provide a satisfactory perceptual quality, i.e., quality of experience (QoE), to the mobile end user throughout the multimedia session. Multimedia applications generate traffic with variable frame sizes and low inter-packet times, making their transmission difficult over MANETs. The quality of the audio/video used for transmission can be characterized using two parameters: frame size and inter-arrival time between consecutive frames. If the traffic contains high-quality data, frames will be large and variable, with very short inter-packet arrival times.

In this paper, we propose a cross-layer approach to efficiently use the available network resources and address the problem of congestion caused by high-data-rate traffic. This approach is a cross-layer solution between the DLB and the H.264/SVC codec. The application layer provides information about the coding parameters of the video stream that the application wants to send over the network. We assume that the DLB [4] algorithm is running at each source node that has video streams to send. Using the coding information provided by the application layer, the peak bit rate (PBR) and average bit rate (ABR) are calculated at the network layer. These parameters change their values according to changes in network conditions to provide the required end-to-end delay and throughput to QoS data sessions. Our scheme controls the rate of traffic flows at source nodes to avoid the collisions and packet drops caused by congestion, which increases network throughput and packet delivery ratio (PDR). The delay and throughput requirements are met using our scheme at the cost of a small increase in jitter; this increase can be rectified using a small buffer at the receiver side.
The rest of the paper is structured as follows: Section 2 provides an overview of related work on video streaming support in mobile ad hoc networks. Section 3 describes the functionality of the proposed rate adaptation scheme. In Section 4, simulation results with a performance analysis of the proposed scheme are presented. Finally, Section 5 concludes the paper with some guidelines for future work.
2 Related Work
A QoS framework is proposed in [5] that supports the delay, bandwidth and jitter requirements of the application using cooperation from different layers. The proposed framework is modular, which offers great flexibility by allowing different protocols on different layers to be plugged in. The protocol uses H.264/AVC video traces [6] to simulate the video on source nodes. The quality of the received video stream is measured in terms of signal-to-noise ratio, so that the efficiency of the proposed scheme can be expressed in terms of the quality of experience (QoE) of end users. In [7], the authors extend the work of [5] to scalable videos. The protocol shows fairness between video sessions in terms of resource consumption. It also avoids network congestion through efficient admission control and guarantees an acceptable video quality at the receiver. In [8], an application-centric routing framework for real-time video transmission is proposed. Unlike other QoS-aware routing solutions, this approach calculates an optimal routing path using expected video distortion as the routing metric. The main aim of [8] is to maximize the user-perceived video quality under a given delay constraint for video playback. SACP [9] is a bandwidth-constrained admission control scheme for multi-hop ad hoc networks, like the one proposed in [10], based on the concept of staggered admission. SACP tries to avoid congestion in the network, which helps to decrease the packet drop ratio due to collisions and intermediate router queue overflows. The QoS routing approach proposed in [11] is an extension of QoS routing with bandwidth and delay constraints. The approach presented in [11] is a simple model that ensures route stability by computing link stability along the route using the received signal strength of two consecutive packets received from a neighbour. In [12], the authors propose an efficient cluster-based routing protocol (ECBRP) to support QoS for real-time multimedia streaming in MANETs.
The basic goal of the protocol in [12] is threefold. First, to design an efficient cluster-head selection algorithm by taking into account node mobility and connectivity. Second, to propose a link-failure detection scheme that is able to identify whether a packet loss is due to congestion or mobility. Third, to enhance the routing protocol so that, in case of packet loss, it can adjust its transmission strategy dynamically as the network conditions change.

Our proposed scheme differs from the above proposals in that it can be used on top of any routing protocol, whether single-path or multipath, and it does not use resource reservation while satisfying the QoS guarantees (in terms of delay and bandwidth) required by applications under limited network load and high mobility. Additionally, our evaluation and simulation are done using the latest H.264/SVC video coding standard to obtain network-level performance indexes that are directly connected to user-level performance.
3 Proposed Methodology
Due to its high-quality content, the data rate and frame size of H.264/SVC are very large. Transmission of this kind of traffic requires some form of rate adaptation at the network layer, so that the end user can receive an acceptable level of video quality. Our cross-layer approach performs rate adaptation between the transmission data rate of the video stream and the available channel bandwidth. We use the classical dual leaky bucket (DLB) algorithm, which performs traffic flow control at the network layer based on the information provided by the application layer. The DLB algorithm takes three parameters as input: the bucket size (BS) or depth of the bucket, the average rate (AR) and the peak rate (PR).
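For illustration, the regulator's two-bucket conformance test can be sketched as follows (Python; token accounting in bytes and the sample rates are our assumptions, not the paper's implementation):

```python
class TokenBucket:
    """One bucket: fills at `rate` tokens per second, up to `depth` tokens."""
    def __init__(self, rate: float, depth: float) -> None:
        self.rate, self.depth = rate, depth
        self.tokens, self.last = depth, 0.0

    def refill(self, now: float) -> None:
        # Tokens accumulate proportionally to elapsed time, capped at depth.
        self.tokens = min(self.depth, self.tokens + (now - self.last) * self.rate)
        self.last = now

class DualLeakyBucket:
    """A frame is accepted only if both the average-rate (AR) and
    peak-rate (PR) buckets hold enough tokens; otherwise it would be
    delayed or dropped."""
    def __init__(self, ar: float, pr: float, bs: float) -> None:
        self.avg, self.peak = TokenBucket(ar, bs), TokenBucket(pr, bs)

    def accept(self, frame_bytes: int, now: float) -> bool:
        self.avg.refill(now)
        self.peak.refill(now)
        if self.avg.tokens >= frame_bytes and self.peak.tokens >= frame_bytes:
            self.avg.tokens -= frame_bytes
            self.peak.tokens -= frame_bytes
            return True
        return False

dlb = DualLeakyBucket(ar=50_000, pr=200_000, bs=40_000)  # bytes/s, bytes/s, bytes
assert dlb.accept(30_000, now=0.0)        # a full bucket accepts the frame
assert not dlb.accept(30_000, now=0.1)    # tokens not yet replenished
```

The AR bucket enforces the long-term average, while the PR bucket caps short bursts; a frame must conform to both, which is what distinguishes the dual regulator from a single leaky bucket.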
Fig. 1. The dual leaky bucket regulator
Fig. 2. Structure of proposed scheme, including cross-layer interactions
A DLB regulator, as shown in Fig. 1, is defined by three metrics (PR, AR, BS). It uses two buckets. The first is the average rate bucket, filled at rate AR, which constrains the long-term behaviour of the regulated flow. The second, known as the peak rate bucket, is filled at rate PR; this imposes an upper bound on the peak transmission rate of the data flow during its bursts of activity. When a frame reaches the DLB regulator, it can be accepted only when enough tokens are present in both buckets. Otherwise, the frame is dropped, or delayed in a data buffer until new tokens arrive in the buckets.

The application layer provides information about the video coding scheme used by the multimedia data session. The information provided by the application layer about a traffic session consists of the following parameters: the number of frames generated per second while encoding the video (FPS), the maximum (Fmax) and minimum (Fmin) frame sizes in the transmitted video stream, the total number of frames in the video stream (N), the average frame size of the video stream (Savg), and the transmission time between two consecutive
frames, i.e., the inter-frame time (IFT). The information about the data rate of the network is obtained directly from the underlying MAC layer. The above information is used to calculate the BS, AR and PR parameters of the DLB algorithm configured at the network layer.

The size of the bucket used in the DLB should be at least equal to the maximum frame size in the video stream, so BS can be calculated using equation (1):

BS = Fmax + x    (1)

where x is a constant greater than or equal to zero that depends on the raw bandwidth available at the MAC layer. The average frame size (Savg), together with the total network bandwidth (Tbw), is used to determine the average rate (AR) of the DLB using equation (2):

AR = Savg * y    (2)

where y is a constant greater than or equal to zero whose value depends on Tbw. If the network bandwidth is lower than the transmission rate of the video stream, then y is set to zero; the value of y increases with increasing network bandwidth. Finally, the peak rate (PR) is obtained from the peak frame bit rate (PFR) and the Fmax of the encoded video frames. PFR is measured using equation (3), and PR is then estimated using equation (4):

PFR = Fmax / IFT    (3)

PR = (PFR / FPS) + z    (4)

where z is a positive constant whose value depends on PFR and the total channel bandwidth of the network. PR has a great impact on the delay and jitter QoS metrics: if we increase the value of PR beyond some threshold, the end-to-end delay of the video stream increases; on the other hand, the delay variance or jitter decreases with increasing PR.

When a multimedia application wants a QoS connection, it sends the video coding information, together with the maximum tolerable delay value, to the network layer. At the network layer, the DLB parameters are calculated from the coding information with the help of equations (1)-(4). Fig. 2 shows the information flow between the different layers; the interdependences between layers are represented by double-headed arrows.

A route from source to destination is calculated using any traditional routing protocol (proactive or reactive). When a route towards the destination is available, the source sends three consecutive hello packets to the destination, and the destination has to acknowledge each of these three dummy packets. After receiving the acknowledgement packets from the destination, the end-to-end delay is calculated as the average of the end-to-end delays reported in these acknowledgement packets. If the calculated delay is less than the delay required by the application, the application session is admitted; otherwise, the session is rejected and a new route with an acceptable delay, if available, is searched for.

We use H.264/SVC-encoded real-time video traces [13] to simulate the video source nodes in MANETs. The Joint Video Team (JVT) of ISO/IEC MPEG and the ITU-T VCEG developed the scalable video coding (SVC) [14] extension for the H.264/AVC standard [6]. The SVC standard seeks to fulfil the need for flexible data rate adaptation through spatial, temporal and quality scalability layers. Scalable video streams provided
by SVC are composed of a base picture and one or more enhancement pictures. Video flows with different quality layers can be transmitted efficiently over both wire-line and wireless networks using the SVC scalability layers. In terms of rate adaptation, SVC cannot simply rely on the network layer to perform flow control using its available bandwidth and congestion information; this approach could have serious performance consequences, especially in environments such as MANETs. Therefore, we have to provide a QoS mechanism that performs fine tuning between the available channel bandwidth and the transmission rate of the sender.
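To make the parameter derivation concrete, equations (1)-(4) from this section can be sketched in a few lines (Python; the constants x, y and z and the average frame size are illustrative placeholders, since the paper ties them to the available MAC-layer bandwidth):

```python
def dlb_parameters(f_max: float, s_avg: float, ift: float, fps: float,
                   x: float = 0.0, y: float = 1.0, z: float = 0.0):
    """Derive the DLB parameters from the stream information that the
    application layer supplies (equations (1)-(4))."""
    bs = f_max + x          # (1) bucket size
    ar = s_avg * y          # (2) average rate; y = 0 when bandwidth is scarce
    pfr = f_max / ift       # (3) peak frame bit rate
    pr = pfr / fps + z      # (4) peak rate
    return bs, ar, pr

# Values loosely matching Table 1 (Fmax = 37738 bytes, IFT = 33 ms, 30 fps);
# Savg = 12000 bytes is an assumed average frame size for the trace.
bs, ar, pr = dlb_parameters(f_max=37738, s_avg=12000.0, ift=0.033, fps=30.0)
assert bs == 37738           # x defaults to 0
assert ar == 12000.0         # y defaults to 1
assert round(pr) == 38119    # (37738 / 0.033) / 30
```

In the scheme itself these values would be recomputed as network conditions change, which is what allows the regulator to track the available bandwidth.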
4 Simulation and Performance Evaluation
In this section, we analyse the effectiveness of the proposed approach using a simulated MANET environment. The simulator used is the scalable network simulator (QualNet). All measurements are taken over a period of 1000 seconds and averaged over ten simulation runs. The parameters used during simulation are listed in Table 1. The video sequence used for simulation is taken from [13]. The video sequence is encoded at the rate of 30 frames per second. The delay requirement of the video sequence is .033 seconds. In our simulation, we also included five CBR sessions as background traffic to make the MANET work under more realistic assumptions.

Table 1. Simulation Parameters

  Parameter                    Value
  Simulation time              1000 sec
  Scenario dimension           1000 x 1000 meter
  Number of nodes              20 to 100
  Transport protocol           TCP, UDP
  Routing protocol             AODV, D-AODV
  Video sequence               Sony Demo
  Resolution                   CIF (352 x 288)
  Number of frames in video    17664
  Max. and min. frame size     14 to 37738 bytes
  Inter-frame time             33 millisecond
  MAC specification            802.11
  Physical specification       802.11a/g
  Data rate                    6 Mbps, 12 Mbps
  Mobility model               Random way-point
4.1 Simulation Results
In this section, we analyse the performance of our proposed scheme using the results obtained from various simulations. AODV [15] is used at the network layer to perform routing with and without the DLB algorithm. Note that our main aim with the proposed scheme is to transmit a video sequence from source to destination while maintaining the
C. Lal, V. Laxmi, and M.S. Gaur

[Fig. 3. End-to-end delay with network scalability (x-axis: total number of nodes in the network, 20-100; y-axis: end-to-end delay in seconds; curves: AODV vs. D-AODV)]

[Fig. 4. PDR with network scalability (x-axis: total number of nodes in the network, 20-100; y-axis: packet delivery ratio in %; curves: AODV vs. D-AODV)]
given delay requirements of the application. The simulation results show that our scheme provides acceptable end-to-end delay and packet delivery ratio (PDR) under various network scenarios.
1. Effect of network scalability: Figure 3 and Figure 4 compare the end-to-end delay and PDR of AODV with those of D-AODV (i.e., AODV with DLB) in MANETs of 20 to 100 nodes. In each scenario, we use four different source-destination pairs, excluding the CBR connections used for background traffic, and the number of nodes is increased by 20 each time. In Figure 3, we can see that the end-to-end delay with D-AODV is not only lower than with AODV but also below the maximum delay required by the video sequence (0.033 seconds). Furthermore, the delay varies much more smoothly with D-AODV than with AODV. The PDR measurements in Figure 4 indicate that D-AODV provides better and more consistent results than AODV. We can also see in Figure 4 that the PDR is not affected by the increase in the number of nodes. This is because AODV is a reactive routing protocol, so a change in the number of nodes does not affect the control overhead required for routing as long as the number of routes in the network remains the same. The DLB algorithm used at the network layer helps to maintain the required delay and PDR by varying its parameters to reflect network changes. The results show that our rate adaptation scheme scales well in large MANETs. Figure 5 and Figure 6 show the results for delay and PDR as the number of video sources in the network changes. This scenario uses a 12 Mbps data rate and consists of 30 nodes in an area of 800 x 800 meters. The results show that our rate adaptation scheme also scales well in MANETs with a large number of source-destination pairs. We can see from the graphs in Figure 5 and Figure 6 that as the number of sources increases without DLB, the delay requirements of the video streams are violated and the PDR decreases significantly.
This occurs because an increase in the number of video sources increases the network traffic, which causes congestion. Collisions increase the packet drop rate, and the resulting retransmissions increase the end-to-end delay. With D-AODV we can admit up to eight video sessions, compared to AODV, which can only admit up to five sessions under the given delay constraints (see Figure 5).
A Rate Adaptation Scheme to Support QoS for H.264/SVC Encoded Video Streams

[Fig. 5. End-to-end delay with increasing number of video sources (x-axis: number of video sources, 0-10; y-axis: end-to-end delay in seconds; curves: AODV vs. D-AODV)]

[Fig. 6. Change in PDR with number of video sources (x-axis: number of video sources, 0-10; y-axis: packet delivery ratio in %; curves: AODV vs. D-AODV)]
[Fig. 7. End-to-end delay with increase in node mobility (x-axis: mobility speed, 20-100 m/s; y-axis: end-to-end delay in seconds; curves: AODV vs. D-AODV)]

[Fig. 8. Change in PDR with increase in node mobility (x-axis: mobility speed, 20-100 m/s; y-axis: packet delivery ratio in %; curves: AODV vs. D-AODV)]
2. Effect of node mobility: Figure 7 and Figure 8 show the results for average end-to-end delay and PDR as node mobility changes. We vary the mobility speed of the nodes in the range of 20 m/s to 100 m/s. The results in Figure 7 indicate that with D-AODV the effect of node mobility is almost negligible, while with AODV the change in mobility speed causes fluctuations in end-to-end delay. This is because the DLB parameters in D-AODV adjust the data transmission rate, so D-AODV can easily adapt to changes in network topology. In Figure 8, we can observe that with D-AODV the PDR changes very little with mobility speed.
5 Conclusions

In this paper, we propose a cross-layer rate adaptation scheme that uses a dual leaky bucket algorithm at source nodes to provide the required rate adaptation between the application's transmission rate and the network bandwidth. The proposed scheme provides delay guarantees to the admitted traffic flows while avoiding congestion in the network. We measure the effectiveness of the proposed scheme by simulating source nodes
with H.264/SVC encoded video traces. We have shown through simulations that our rate adaptation scheme performs well under high-mobility scenarios. We also provide results showing that the proposed scheme performs well as the number of video sources in the network increases, and that it scales well with network size. Also, by not using any resource reservation scheme, we reduce the constraints imposed on MANET nodes to a minimum. We offer a method that can be used in both homogeneous and heterogeneous environments. An admission control scheme can be used to keep the network from being overloaded by the number of admitted video sessions, discarding flows whose admission would affect the ongoing flows. The video quality received by the end user can be measured in terms of peak signal-to-noise ratio (PSNR), so that the effectiveness of the proposed methodology can be assessed in terms of end-user satisfaction. Furthermore, the effect of using multiple paths on the proposed scheme has to be analyzed. This constitutes our future work.
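As noted above, received video quality can be assessed per frame with PSNR. A minimal sketch of the standard computation for 8-bit samples; representing a frame as a flat list of pixel values is our simplification:

```python
import math

def psnr(reference, decoded, max_value=255):
    """Peak signal-to-noise ratio between two equally sized 8-bit frames,
    each given as a flat sequence of pixel values."""
    mse = sum((r - d) ** 2 for r, d in zip(reference, decoded)) / len(reference)
    if mse == 0:
        return float("inf")  # identical frames
    return 10 * math.log10(max_value ** 2 / mse)
```

Averaging this value over all decoded frames of a run would give the per-session quality figure the conclusion refers to.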
References
1. Chakrabarti, S., Mishra, A.: QoS issues in ad hoc wireless networks. IEEE Communications Magazine 39, 142–148 (2001)
2. Chen, S., Nahrstedt, K.: Distributed quality-of-service routing in ad hoc networks. IEEE Journal on Selected Areas in Communications 17, 1488–1505 (1999)
3. IETF, MANET Working Group Charter, http://www.ietf.org/html.charters/manet-charter.html
4. Heinanen, J., Guerin, R.: A Two Rate Three Color Marker. IETF RFC 2698 (September 1999)
5. Calafate, C.T., Malumbres, M.P., Oliver, J., Cano, J.C., Manzoni, P.: QoS Support in MANETs: a Modular Architecture Based on the IEEE 802.11e Technology. IEEE Transactions on Circuits and Systems for Video Technology 19, 678–692 (2009)
6. Van der Auwera, G., David, P.T., Reisslein, M.: Traffic and Quality Characterization of Single-Layer Video Streams Encoded with the H.264/MPEG-4 Advanced Video Coding Standard and Scalable Video Coding Extension. IEEE Transactions on Broadcasting 54, 698–718 (2008)
7. Chaparro, P.A., Alcober, J., Monteiro, J., Calafate, C.T., Cano, J.-C., Manzoni, P.: Supporting Scalable Video Transmission in MANETs through Distributed Admission Control Mechanisms. In: 18th Euromicro International Conference on Parallel, Distributed and Network-Based Processing (PDP), pp. 238–245 (February 2010)
8. Wu, D., Ci, S., Wang, H., Katsaggelos, A.K.: Application-centric Routing for Video Streaming Over Multihop Wireless Networks. IEEE Transactions on Circuits and Systems for Video Technology (December 2010)
9. Hanzo II, L., Tafazolli, R.: Throughput Assurances Through Admission Control for Multi-Hop MANETs. In: IEEE 18th International Symposium on Personal, Indoor and Mobile Radio Communications (PIMRC 2007), pp. 1–5 (September 2007)
10. Yang, Y., Kravets, R.: Contention-aware admission control for ad hoc networks. IEEE Transactions on Mobile Computing 4, 363–377 (2005)
11. Sarma, N., Nandi, S.: Route Stability Based QoS Routing in Mobile Ad Hoc Networks. Wireless Personal Communications 54, 203–224 (2009)
12. Tao, J., Bai, G., Shen, H., Cao, L.: ECBRP: An Efficient Cluster-Based Routing Protocol for Real-Time Multimedia Streaming in MANETs. Wireless Personal Communications, 1–20 (May 2010)
13. http://www.trace.eas.asu.edu
14. ITU-T and ISO/IEC JTC1, JVT-W201: Joint Draft 10 of SVC Amendment. Joint Video Team (JVT) of ISO/IEC MPEG (April 2007)
15. Perkins, C., Royer, E.: Ad hoc on-demand distance vector routing. In: Proc. 2nd IEEE Workshop on Mobile Computing Systems and Applications, pp. 90–100 (1999)
Minimizing Scheduling Delay for Multimedia in Xen Hypervisor

Jeong Gun Lee, Kyung Woo Hur, and Young Woong Ko

Department of Computer Engineering, Hallym University, Chuncheon, 200-702, South Korea
{jeonggun.lee,kwhur,yuko}@hallym.ac.kr
Abstract. In this paper, we propose the Real-time Xen (RTXen) scheduling framework, which supports time-sensitive workloads on the Xen hypervisor. Our primary goal is to provide real-time guarantees for real-time components running on the hypervisor. To accomplish this goal, we first modified the Xen credit scheduler to support real-time scheduling for real-time components. Second, we evaluated multi-core CPU capacity with and without hyper-threading, since guaranteeing timely execution of real-time tasks requires an exact analysis of the available CPU capacity. Our experiments demonstrate that the proposed system supports real-time workloads efficiently while still providing fair-share execution of non-real-time workloads.

Keywords: virtualization, real-time, credit scheduler, RTXen, multi-core.
1 Introduction

Virtualization is an emerging technology that provides savings on hardware and operating costs by consolidating the workloads of several under-utilized servers onto fewer machines. Many fields benefit from virtualization technology [1]. First, virtualization enables server consolidation: it is not unusual to consolidate multiple virtual machines onto one physical machine, which means that multiple server applications can run on a single machine as if each were executing on its own physical computer with its own operating system. Second, testing and development become much easier than with the traditional approach, because virtualization enables rapid deployment by isolating the application in a verified environment, eliminating unknown factors and errors. Third, dynamic load balancing and disaster recovery are supported effectively in a virtualization environment. The workload of a server varies over time, so the ability to manage resources through dynamic load balancing is essential: if a virtual machine over-utilizes its resources, the VM should be moved to an under-utilized server. Moreover, with disaster recovery, we can protect against system crashes that would otherwise lead to huge economic losses. To take advantage of this potential, there is much interest in exploiting virtualization technology in various fields. However, current virtualization has drawbacks in supporting real-time workloads such as streaming servers, game servers and telephony servers. In a virtualization environment, supporting real-time guarantees is not easy because virtualization systems experience long delays when the CPU is switched

T.-h. Kim et al. (Eds.): ACN 2011, CCIS 199, pp. 96–108, 2011. © Springer-Verlag Berlin Heidelberg 2011
between domains. Such delays arise while a domain does not have access to the CPU. Therefore, to guarantee real-time execution of a task, we have to minimize this delay. Recently, many research groups have been actively working on using virtualization to support real-time applications in the embedded hypervisor field. Although these approaches may satisfy real-time requirements using a fixed resource partitioning mechanism, it is difficult for them to support soft real-time guarantees for commodity applications, because such applications usually execute in the non-real-time partition. Furthermore, a fixed resource partitioning mechanism makes it difficult to support the dynamic workloads that are common in general-purpose operating systems. There have been extensive research results dealing with real-time workloads on general virtual machine monitors by providing real-time scheduling mechanisms at the virtual machine monitor level. However, these approaches generally lack a schedulability test or admission control mechanism. In this paper, we propose the Real-time Xen (RTXen) scheduling framework, which supports time-sensitive applications on the Xen hypervisor [2]. Our primary goal is to provide real-time guarantees for real-time components running on the hypervisor. To accomplish this goal, we first modified the Xen credit scheduler to support real-time scheduling for real-time components. The distinguishing characteristic of the credit scheduler is that it provides proportional sharing of CPU resources based on a parameter called the weight. Moreover, it also performs efficient load balancing between physical cores. The current version of the credit scheduler is very well organized and outperforms other scheduling algorithms on mixed workloads; we believe extending it is more efficient than starting from scratch. We categorize VCPU states into RT, Boost, Under, Over and Idle.
By adapting admission control for the real-time VCPU, we can guarantee timely execution of real-time tasks on the real-time VCPU. Second, we evaluated multi-core CPU capacity with and without hyper-threading: to guarantee timely execution of real-time tasks, we have to measure CPU resources for an exact CPU capacity analysis. Theoretically, there exists a virtual clustering concept based on the hierarchical real-time scheduling paradigm, which supports multiprocessor platforms. However, virtual clustering is more suitable for many-core systems than for multi-core systems, and it is expected to cause considerable inherent overhead through frequent CPU switching. Therefore, we choose the simple global scheduling mechanism of the credit scheduler. We believe that coupling our admission control scheme to credit global scheduling is sufficient for soft real-time platforms. Finally, in this work, we modified the Xen hypervisor to support a soft real-time scheduling framework because Xen supports various operating systems executed as domains. Domains can run fully virtualized or para-virtualized, including Linux [3], Minix, Plan 9, NetBSD, OpenBSD, Solaris, and Microsoft Windows. Therefore, we can use all of the software running on those operating systems without modification. The rest of this paper is organized as follows. The next section presents related work. In Section 3, we discuss the design and implementation of the proposed RTXen system. Section 4 explains the experimental environment and shows the experimental results. In Section 5, we conclude the paper and present future work.
2 Related Work

One approach to address such demands is an embedded hypervisor that supports hard real-time guarantees. Examples of this approach have been proposed by companies such as OK Labs (microkernel) [4], Real-Time Systems (RTS Real-Time Embedded Hypervisor) and LynuxWorks (small separation kernel). The characteristics of an embedded hypervisor are small size, a fast type-1 hypervisor with support for multiple VMs, and low-latency communication between system components. The typical real-time solution in these approaches uses a fixed hardware partitioning technique that divides the overall hardware resources, including processor, disk and memory, into a non-real-time part and a real-time part. The real-time part services an application's real-time needs, and the other provides the functionality for running a general-purpose operating system (GPOS) such as Linux or Microsoft Windows. With this dual deployment strategy, the real-time part processes real-time workloads while the GPOS part is responsible for data processing, display, and non-real-time workloads. In this approach, supporting the soft real-time applications widely used on a GPOS is difficult because the programs must be rewritten against the API of the real-time hypervisor. Even more unfortunate problems arise when we deal with mixed workloads such as latency-sensitive, batch, highly interactive and real-time tasks. There are other approaches that handle real-time workloads in a virtualization environment by providing real-time scheduling in the hypervisor. Kaiser [5] categorizes virtual machines into non-real-time VMs, event-driven VMs, and time-driven VMs, and uses a specific scheduler for each VM class. In this approach, several schedulers may compete for CPU resources. To resolve such conflicts, they provide precedence rules that execute the non-real-time scheduler as long as no real-time VM (event-driven or time-driven) is ready, but the decision between the real-time VM schedulers is not defined.
The weak point of this approach is the absence of a schedulability test for the workloads; therefore it can support only best-effort real-time scheduling and cannot meet the requirements of real-time tasks in overload situations. Lee [6] suggests a soft real-time scheduler for the Xen hypervisor that modifies the credit scheduler to calculate scheduling priority dynamically. They define a laxity value that provides an estimate of when a task needs to be scheduled next. When a VCPU of a real-time domain is ready, it is inserted where its deadline is expected to be met. This approach enables low-latency tasks to be executed in a timely manner. However, it also cannot guarantee real-time workloads because it does not provide an admission control mechanism: if the workloads increase, it cannot meet real-time tasks' deadlines. VSched [7], proposed by Lin and Dinda, is a user-level scheduling tool using a periodic real-time scheduling model. VSched is implemented for a type-II virtual machine monitor that does not run directly on the hardware but rather on top of a host operating system; the domains are therefore executed as processes inside the host operating system. VSched provides an EDF (Earliest Deadline First) [8] scheduler using the SCHED_FIFO scheduling class in Linux. Their approach is quite straightforward for describing real-time workloads because a domain is regarded as a process. However, to support real-time workloads accurately, the host operating system should itself support real-time characteristics such as a fine-grained preemption mechanism, priority-inversion avoidance, fast interrupt handling, etc.
3 System Architecture

In this section, we explain the design and implementation principles of the proposed system. First, we want to serve time-sensitive workloads without penalizing best-effort workloads by allocating CPU resources efficiently. To accomplish this, we modified the credit scheduler (the base scheduler in the Xen hypervisor). Second, we target time-sensitive workloads that are not mission-critical hard real-time workloads; this means we allow transient deadline misses to a certain degree. Finally, we use the global scheduling mechanism supported by the credit scheduler, which allows each task to be executed on any processor in the platform.
Fig. 1. Scheduling Layout in RTXen
Figure 1 shows the proposed scheduling concept. We classify the VCPU priorities of domains into five classes. All the priority classes are identical to those of the credit scheduler except the RT class. As mentioned before, the credit scheduler is well organized for various types of workloads; therefore, we focus only on supporting the real-time class. One notable point is that we classify the Dom0 priority into the RT class, because interrupt handling and I/O processing have to be handled within a short time. If Dom0 workloads were postponed behind the real-time classes, some real-time tasks requesting I/O could be blocked until Dom0 finishes its I/O work. Furthermore, a long interrupt-handling delay may cause the loss of important interrupts, which may affect normal operation of the system. In our work, scheduling information is delivered through a hypercall, which is a software trap from a domain to the hypervisor. Real-time domains carry real-time workloads only; there are no additional workloads that consume CPU resources except minor workloads incurred by the domain itself. This is important because if there were non-real-time tasks in a real-time domain, they could consume CPU capacity reserved for the real-time tasks. As a result, the Xen hypervisor could not supply the exact amount of CPU resource reserved for the real-time tasks, which could cause them to miss deadlines. The basic operations of real-time scheduling in our work are as follows.
- An RT VCPU is always executed before the other scheduling classes, and it goes to the IDLE class when it has consumed its reserved CPU resource. If a VCPU turns to the IDLE class without consuming its reserve, it returns to the RT class queue whenever it becomes ready again. The CPU resource is reserved based on worst-case execution time; therefore we assume there is no exceptional case requiring more CPU resource than the reserved amount. Moreover, as mentioned before, there is no non-real-time workload in real-time domains.
- The VCPU of Dom0 is executed in the RT class with a maximum of 30 milliseconds in every 100 milliseconds. We observed the behavior of the Dom0 host operating system under stress tests with various workloads, and concluded that an average of 30% of the CPU resource is enough to handle the Dom0 workloads in our experiments.
- Non-real-time VCPUs are managed by the original credit scheduling scheme. They consume the remaining CPU resource, so it is difficult to guarantee their quality of service.
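The RT-class budget accounting described in these rules can be sketched as follows; the class names mirror the paper's, while the field names and the millisecond figures for the Dom0 reserve are illustrative:

```python
RT, IDLE = "RT", "IDLE"

class RTVcpu:
    """Tracks an RT VCPU's reserved budget within each period
    (e.g. Dom0: 30 ms of every 100 ms)."""
    def __init__(self, budget_ms, period_ms):
        self.budget_ms, self.period_ms = budget_ms, period_ms
        self.remaining = budget_ms
        self.cls = RT

    def account(self, ran_ms):
        # demote to IDLE once the reserve for this period is consumed
        self.remaining = max(0, self.remaining - ran_ms)
        if self.remaining == 0:
            self.cls = IDLE

    def wake(self):
        # a VCPU that blocked with reserve left re-enters the RT queue
        if self.remaining > 0:
            self.cls = RT

    def new_period(self):
        # replenish the reserve at each period boundary
        self.remaining = self.budget_ms
        self.cls = RT
```

In the real scheduler this bookkeeping lives inside Xen's C scheduler code; the sketch only captures the class transitions the rules above describe.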
3.1 Guest Operating System Scheduling Mechanism

To meet the timing requirements of real-time tasks, not only the Xen hypervisor but also the guest operating system must provide a real-time scheduler. In this work, we target Linux as the guest operating system. It might be practical to use a commercial real-time operating system such as LynxOS, VxWorks or QNX, but that approach requires porting effort to adapt it to the Xen hypervisor. To avoid such effort, we used the fixed-priority real-time scheduling mechanism built into the Linux kernel, SCHED_FIFO. The Linux scheduler offers three different scheduling policies: SCHED_OTHER is the default time-sharing policy used for non-real-time tasks, while SCHED_FIFO and SCHED_RR are intended for real-time tasks. SCHED_FIFO is a simple scheduling algorithm without time slicing. When a SCHED_FIFO process becomes ready, it always preempts a normal process running as a non-real-time task with SCHED_OTHER priority. In our system, we mimic the Rate Monotonic algorithm by giving a higher SCHED_FIFO priority to the real-time task with the shorter period. For example, if there are two tasks in the domain, T1(100, 15) and T2(150, 32), we assign T1 and T2 priorities 100 and 99, respectively. Our approach requires no modification of the guest operating system; therefore fully virtualized as well as para-virtualized guest OSes can be used.
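This rate-monotonic priority assignment can be sketched as follows; the helper function is our own, and actually applying a SCHED_FIFO priority via os.sched_setscheduler requires root (CAP_SYS_NICE), so that call is shown only in a comment:

```python
import os

def rate_monotonic_priorities(tasks, top_priority=100):
    """Given (name, period_ms, exec_ms) tuples, assign SCHED_FIFO
    priorities so that shorter periods get higher priorities."""
    ordered = sorted(tasks, key=lambda t: t[1])  # shortest period first
    return {name: top_priority - i for i, (name, _, _) in enumerate(ordered)}

prios = rate_monotonic_priorities([("T1", 100, 15), ("T2", 150, 32)])
# prios == {"T1": 100, "T2": 99}, matching the example in the text

# Applying the priority to the current process (privileged, Linux only):
# os.sched_setscheduler(0, os.SCHED_FIFO, os.sched_param(prios["T1"]))
```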
3.2 CPU Capacity Measurement
To get accurate processor capacity measurements, we performed a CPU benchmark varying the number of CPU cores from 1 to 4 with hyper-threading on and off. With hyper-threading on, we can use a maximum of 8 logical CPUs in our experiments. Figure 2 shows the CPU throughput results: the x-axis gives the number of CPUs and the y-axis shows how much CPU work is processed in a given time period using maximum CPU resources. In this experiment, we used the SHA1 hash function as the benchmarking program and measured CPU performance by counting the number of SHA1 hash operations over 1 MByte data blocks. To eliminate I/O interference, we generated random data blocks in memory instead of reading them from disk.
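A minimal version of such a benchmark, with the block size and measurement window as illustrative parameters, might look like:

```python
import hashlib
import os
import time

def sha1_ops(block_size=1 << 20, seconds=1.0):
    """Count how many SHA1 hashes of one in-memory block complete in the
    given window; random in-memory data avoids disk I/O interference."""
    block = os.urandom(block_size)
    count, deadline = 0, time.monotonic() + seconds
    while time.monotonic() < deadline:
        hashlib.sha1(block).digest()
        count += 1
    return count
```

Running this pinned to 1, 2 or 4 cores (with hyper-threading toggled in firmware) and summing the per-core counts reproduces the kind of throughput curve shown in Figure 2.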
Fig. 2. Measuring CPU Capacity by varying the number of CPU cores
As Figure 2 shows, the CPU throughput of a dual core without hyper-threading is almost double that of a single core, and a quad core is almost 4 times faster than a single core. However, a single core with hyper-threading is only about 1.4 times faster than a single core without it. From these results we can define the multiprocessor CPU capacity and derive a generalized capacity calculation function:

    program CpuCapacity(NCORE, FHT, NCOMP)
    begin
        VHT := 40;                           { extra capacity (%) of a hyper-threaded core }
        VCO := 1;                            { per-VCPU context-switching overhead (%) }
        CCAP := NCORE * (100 + FHT * VHT);   { 100% per physical core, 40% per HT core }
        CCAP := CCAP - NCOMP * VCO;          { subtract VCPU switching overhead }
        return CCAP;
    end.

This algorithm computes the CPU capacity of a multiprocessor system. NCORE is the number of physical cores, FHT is a flag indicating whether hyper-threading is on, and NCOMP is the number of VCPU components. VHT is the additional CPU capacity of a hyper-threaded core, whose value is obtained from the experiment in Figure 2.
CCAP is calculated by allocating 100% for each physical core and 40% for each hyper-threading core. For an accurate measure of CPU capacity, we also have to account for VCPU context-switching overhead by subtracting the switching overhead between domains. VCO is the context-switching overhead per domain, whose value is obtained from the experiment shown in Figure 3; in our experiment, VCO is around 1%. We calculate the average CPU capacity by dividing the total CPU capacity by the number of logical cores. For example, suppose a system has 4 cores with hyper-threading and 40 VCPUs. The average CPU capacity of each logical CPU is then 65%:

    CCAP := (4 * (100 + 1 * 40) - 40 * 1) / 8;
    CCAP := 65;
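A directly runnable transcription of this calculation, with the 40% hyper-threading gain and 1% switching overhead taken from the measurements above as defaults:

```python
def cpu_capacity(ncore, ht_on, ncomp, vht=40, vco=1):
    """Total CPU capacity in percent: 100% per physical core, plus
    vht% per hyper-threaded core, minus vco% per VCPU component."""
    return ncore * (100 + (1 if ht_on else 0) * vht) - ncomp * vco

def avg_capacity_per_logical_cpu(ncore, ht_on, ncomp):
    # hyper-threading doubles the number of logical CPUs
    logical = ncore * (2 if ht_on else 1)
    return cpu_capacity(ncore, ht_on, ncomp) / logical
```

For the worked example (4 HT cores, 40 VCPUs) this yields a total capacity of 520% and an average of 65% per logical CPU, matching the text.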
Fig. 3. Measuring context switching overhead by increasing the number of guest operating systems
In Figure 3, the x-axis gives the number of guest operating systems, each with one VCPU, and the y-axis shows how much CPU work is processed in a given time period using maximum CPU resources. In this experiment, we find that the context-switching overhead between guest OSes is around 1%.
4 Experiment Results

In our experiments, we focus on the laxity value as the metric for real-time tasks. We conducted various experiments to explore all aspects of the proposed system's capabilities. Our experimental platform consists of the following components. As shown in Table 1, our hardware platform has a quad-core processor and can be extended to 8 logical cores using hyper-threading. The software platform is based on the CentOS Linux kernel, which is widely used with Xen virtualization. We installed 12 domains on the Xen hypervisor and allocated 400 MByte of memory to each domain.
Table 1. Hardware/Software Configuration
In this study, we created various workloads, including time-driven periodic tasks, event-driven periodic tasks and non-real-time tasks. The characteristics of each workload type are as follows.

- Time-driven periodic task: these workloads are executed periodically on a real-time domain. We built a periodic task around the SHA1 hash function that executes for E time units during each period P; we can control the execution time E by varying the hash block size.
- Event-driven periodic task: these workloads are executed when an event message arrives from an external workload generator. In this experiment, a Linux client machine (the event generator) sends event messages over a TCP/IP interface to the event-driven periodic task. A task blocks until it receives an event message; when a message arrives, the task executes the SHA1 hash function and returns to sleep mode by calling usleep(). For these workloads, we can again control the execution time E by varying the hash block size.
- Non-real-time task: these workloads are executed on non-real-time domains with credit C. We measure the performance of a non-real-time task by counting how many SHA1 hash operations are executed during a given time period.

In this work, we implemented a test server that controls the workloads running on the domains. The workloads must be synchronized to obtain accurate results, so the test server makes all workloads start at once by sending control messages. It also generates the event messages for the event-driven periodic tasks running on the domains. The experimental configuration is shown in Figure 4.
In this experimental configuration, the workload consists of several domains. Real-time domains have two types of workloads, event-driven tasks and time-driven tasks; non-real-time domains have best-effort workloads. The sync module sends a start message to the clients. When the start message arrives, a client triggers all the workloads inside its domain. The server periodically sends an event message to each event-driven task. While an experiment is ongoing, each task logs its laxity values to a memory array; after the experiment finishes, the log data is saved to disk and analyzed. In our experiments, we evaluate the laxity value, which is the difference between the deadline and the job finish time. For example, if the deadline is 100 milliseconds and the workload finishes at 85 milliseconds, the laxity value is 15 milliseconds. A laxity value below 0 means the deadline was missed.
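A sketch of the time-driven SHA1 workload with laxity logging as described above; the period, block size, job count and helper names are our illustrative choices:

```python
import hashlib
import os
import time

def laxity_ms(deadline_ms, finish_ms):
    """Laxity = deadline minus job finish time; negative means a miss."""
    return deadline_ms - finish_ms

def run_periodic(period_s=0.1, block_size=4096, jobs=5):
    """Execute one SHA1 job per period and log its laxity (ms),
    taking the end of each period as that job's deadline."""
    block = os.urandom(block_size)
    log = []
    start = time.monotonic()
    for i in range(jobs):
        release = start + i * period_s
        deadline = release + period_s
        hashlib.sha1(block).digest()            # the job's work (E)
        log.append(laxity_ms(deadline * 1000, time.monotonic() * 1000))
        # sleep until the next release point
        time.sleep(max(0.0, release + period_s - time.monotonic()))
    return log
```

In the real setup each task keeps such a log in memory and the array is flushed to disk after the run, as described above.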
Fig. 4. Experiment configuration
- RTXen: the real-time scheduler proposed in this paper.
- SEDF: the second scheduler supplied with Xen; it is used if SEDF is specified as the default scheduler at boot time. SEDF can execute a domain in a real-time manner, but it does not support global scheduling, which means it cannot distribute workloads between processors.
- Credit256: the default credit scheduler in the Xen hypervisor, which automatically allocates the same weight to all domains; the default weight value is 256. In the credit scheduler, the user can also adjust weights individually.
- CreditX: a configuration in which we allocated different weights to the domains: a high weight of 500 to real-time domains and a low weight of 100 to non-real-time domains.
To understand the characteristics of each scheduler, we performed an experiment under under-load conditions to see how each scheduler handles time-driven workloads. We ran three real-time domains (G1, G2, G3) and two best-effort domains (G4, G5). Each domain has one workload that executes every 100 milliseconds and consumes 20 milliseconds of CPU resource per period. In the RTXen scheduler, we specified the real-time component interface as I(100, 20), including Dom0; for SEDF, we used the scheduling parameters SEDF(100, 20) for the real-time components. Each non-real-time domain has the same workload as the real-time domains. In this experiment, the total real-time workload is kept under 80%, including Dom0. Figure 5 shows the experimental results comparing the laxity times under RTXen, SEDF, Credit256 and CreditX. In terms of real-time guarantees, RTXen and SEDF show good performance by supporting timely execution of real-time tasks, and for non-real-time workloads RTXen reduces laxity times compared to SEDF. CreditX also performs well for real-time tasks by allocating them more weight than non-real-time tasks; however, more than 10% of the workloads cannot meet their deadlines. The default credit scheduler shares the CPU resource evenly among all workloads, and consequently many workloads miss their deadlines.
Fig. 5. Scheduling result of time-driven real-time task on uniprocessor

Table 2. Workload Configuration
Fig. 6. Deadline miss result on each scheduling policy: uni-processor configuration
Fig. 7. Deadline miss result on each scheduling policy: Dual-HT processor configuration
To evaluate performance in a multiprocessor environment, we measured the performance while varying the number of cores, investigating the impact the proposed RTXen scheduler has on the real-time execution of workloads. Table 2 shows the workload configuration. We performed an experiment increasing the number of cores, including hyper-threading. In this experiment, T1 is an event-driven periodic task that executes whenever an event message arrives; we generate an event message every 100 milliseconds. To saturate the experimental system with CPU-intensive workloads, we run several domains with best-effort tasks. Figures 6 and 7 show the experimental results on the multiprocessor; we use histograms to analyze how the laxity values are distributed. We executed the workloads varying the number of processors from a uniprocessor to a quad core with hyper-threading, but due to page limits we only show the results for the uniprocessor (Figure 6) and the dual processor with hyper-threading (Figure 7). The results show that Credit256 and CreditX cannot support real-time workloads under overload conditions. SEDF shows fairly good performance on the uniprocessor but fails to guarantee the real-time requirements on the dual processor; we believe this is because SEDF lacks global scheduling and therefore cannot distribute real-time workloads among the processors. The overall performance of RTXen is acceptable for supporting soft real-time workloads: it shows a substantial improvement in timely execution of real-time workloads, and the maximum laxity value stays within 200 milliseconds.
5 Conclusion

Recently, there has been extensive research on supporting real-time tasks in virtualization environments. In the commercial market, embedded hypervisor products can meet the real-time requirements of time-critical applications. However, this approach cannot support widely used software targeting soft real-time. To address this problem, we built a soft real-time framework that supports general-purpose domains, so we can easily provide soft real-time characteristics for commodity applications without modification. This approach yields many advantages for software such as streaming servers, game servers and telephony servers. In this paper, we addressed several design issues, including a real-time scheduler, an exact schedulability test mechanism supporting multiprocessors, and practical techniques for calculating system resources and overheads. The key idea of our work is to serve time-sensitive workloads without penalizing best-effort workloads by allocating CPU resources efficiently. To accomplish this, we focus our scheduler only on real-time workloads and adopt the prevailing capabilities of the credit scheduler for non-real-time workloads. Experimental results show that the proposed system can efficiently manage real-time workloads while keeping non-real-time workloads from starvation. We found that the credit scheduler with differential weights can also support time-critical workloads under light load; however, when the system is busy with CPU resource contention, the credit scheduler cannot support time-sensitive workloads. The SEDF scheduler shows reasonable performance for real-time workloads, but it shows a broad distribution of laxity values for non-real-time workloads, which means SEDF is prone to starvation.
Acknowledgments. This research was financially supported by the Ministry of Education, Science and Technology (MEST) and the National Research Foundation of Korea (NRF) through the Human Resource Training Project for Regional Innovation (2010-0005442).
References
1. Virtualization Information, http://software.intel.com
2. Barham, P., Dragovic, B., Fraser, K., Hand, S., Harris, T., Ho, A., Neugebauer, R., Pratt, I., Warfield, A.: Xen and the art of virtualization. In: Proceedings of the Nineteenth ACM Symposium on Operating Systems Principles, New York, pp. 164–177 (2003)
3. Bovet, D.P., Cesati, M.: Understanding the Linux Kernel, 3rd edn. (2006)
4. Heiser, G.: The role of virtualization in embedded systems. In: Proceedings of the 1st Workshop on Isolation and Integration in Embedded Systems, New York, pp. 11–16 (2008)
5. Kaiser, R.: Alternatives for scheduling virtual machines in real-time embedded systems. In: Proceedings of the 1st Workshop on Isolation and Integration in Embedded Systems, New York, pp. 5–10 (2008)
6. Lee, M., Krishnakumar, A.S., Krishnan, P., Singh, N., Yajnik, S.: Supporting soft real-time tasks in the Xen hypervisor. In: Proceedings of the 6th ACM SIGPLAN/SIGOPS International Conference on Virtual Execution Environments, New York, pp. 97–108 (2010)
7. Lin, B., Dinda, P.A.: VSched: Mixing batch and interactive virtual machines using periodic real-time scheduling. In: Proceedings of the 2005 ACM/IEEE Conference on Supercomputing, Washington, DC, USA, p. 8 (2005)
8. Liu, C.L., Layland, J.W.: Scheduling algorithms for multiprogramming in a hard-real-time environment. J. ACM 20(1), 46–61 (1973)
Efficient Allocation of Transmission Power and Rate in Multicarrier Code-Division Multiple-Access Communications Ye Hoon Lee Department of Electronic and Information Engineering, Seoul National University of Science and Technology, Seoul 139-743, South Korea
[email protected]
Abstract. We propose an efficient frequency-time domain resource allocation scheme in multicarrier (MC) direct-sequence code-division multiple-access (DS/CDMA) communications. We consider, as a power allocation strategy in the frequency domain, transmitting each user’s DS waveforms over the user’s sub-band with the largest channel gain. We then consider rate adaptation in the time domain, where the data rate is adapted such that a desired transmission quality is maintained. We analyze the achievable average data rate of the proposed scheme with fixed average transmission power, and compare the performance to single carrier DS/CDMA systems with power and rate adaptations. Keywords: multicarrier, code division multiaccess, adaptive systems.
1 Introduction
Various multicarrier (MC) transmission schemes have been introduced into code-division multiple-access (CDMA) systems to obtain advantages such as higher-rate data transmission, bandwidth efficiency, frequency diversity, lower-speed parallel signal processing, and interference rejection capability [1]. These techniques can be categorized into two types: the combination of frequency domain spreading and MC modulation [2], and the combination of time domain spreading and MC modulation [3]. An MC based direct-sequence (DS) CDMA scheme, belonging to the second type, was proposed in [3] as an alternative to the conventional single carrier (SC) system to yield a frequency diversity improvement instead of a path diversity gain. It was shown in [3] that MC and SC DS/CDMA systems exhibit the same bit error rate (BER) performance in frequency-selective fading channels, but, in the presence of narrowband interference, the former provides much better performance than the latter.

When the transmitter and the receiver are provided with the channel characteristics, the transmission scheme can be adapted to them, enabling more efficient use of the channel. In CDMA cellular systems, power adaptation is employed to maintain the received power of each mobile at a desired level. Rate adaptation and combined rate and power adaptation for DS/CDMA systems were considered in [4] and [5], respectively, and extended to a generalized joint adaptation scheme in [6]. In [7] an MC DS/CDMA system utilizing hopping over favorite sub-bands was considered, and a frequency-hopping pattern generation method based on the water-filling algorithm was proposed. In [8] MC DS/CDMA with an adaptive subchannel allocation method was considered for forward links. However, dynamic power or rate allocation in the time domain was not addressed in [7] and [8].

In this paper, we investigate a combined power and rate adaptation scheme in the frequency-time domain for MC DS/CDMA communications. We assume that perfect channel state information (CSI) is available at both the transmitter and the receiver. As a frequency domain power allocation, we consider transmitting each user's DS waveforms over the user's sub-band with the largest channel gain, rather than transmitting identical DS waveforms in parallel over all sub-bands. We then consider rate adaptation in the time domain, where the data rate is adapted such that a desired transmission quality is maintained. We analyze the average achievable data rate of the proposed combined adaptation scheme in MC DS/CDMA systems, and compare the performance to that of power and rate adapted SC-RAKE systems. Our results show that combining the frequency domain power adaptation and the time domain rate adaptation in MC DS/CDMA systems yields a significant performance gain when the total system bandwidth is fixed.

The remainder of this paper is organized as follows. In Section 2 we describe the system model. In Section 3 we analyze the average achievable data rate of the proposed scheme in MC DS/CDMA systems. Numerical results and discussions are presented in Section 4. Conclusions are drawn in Section 5.

T.-h. Kim et al. (Eds.): ACN 2011, CCIS 199, pp. 109–116, 2011. © Springer-Verlag Berlin Heidelberg 2011
2 System Model
In MC-DS/CDMA communications, the entire bandwidth is divided into $M$ equi-width disjoint frequency bands, where $M$ is the number of sub-bands. A data sequence multiplied by a spreading sequence modulates $M$ sub-carriers and is sent over the $M$ sub-bands. $d_k(\cdot)$ and $p_k(\cdot)$ are the binary data sequence and the random binary spreading sequence for user $k$, respectively. The amount of power $\alpha_{k,m} S_T$ of user $k$ is transmitted over the $m$th sub-band, where $S_T$ is the average transmission power and $\alpha_{k,m}$ is the transmitter power gain of user $k$ on the $m$th sub-band, that is, $E\left[\sum_{m=1}^{M} \alpha_{k,m}\right] = 1$. The transmitted signal for user $k$ is therefore given by

$$x_k(t) = \sum_{m=1}^{M} \sqrt{2\alpha_{k,m} S_T}\, d_k(t)\, p_k(t) \cos(2\pi f_m t + \theta_{k,m}) \qquad (1)$$
where $f_m$ is the $m$th carrier frequency. We assume that the channel is frequency-selective Rayleigh fading, but that the sub-bands are frequency-nonselective with respect to the spreading bandwidth and independent of each other; this can be achieved by selecting $M$ properly, as in [3]. We also assume the channel variation due to multipath fading is slow relative to the bit duration. The received signal $y(t)$ at the base station (BS) can be represented by

$$y(t) = \sum_{k=1}^{K} \sum_{m=1}^{M} \sqrt{2\alpha_{k,m} G_{k,m} S_T}\, d_k(t-\tau_{k,m})\, p_k(t-\tau_{k,m}) \cos(2\pi f_m t + \phi_{k,m}) + n(t) \qquad (2)$$

where $K$ and $\tau_{k,m}$ are the number of users and the delay on the $m$th sub-band for user $k$, respectively, and $\phi_{k,m} = \theta_{k,m} - 2\pi f_m \tau_{k,m}$. We assume that the $\tau_{k,m}$'s and $\phi_{k,m}$'s are independent and uniformly distributed, the former over a bit interval and the latter over $[0, 2\pi]$. $G_{k,m}$ is the exponentially distributed random variable representing the channel power gain for user $k$ on the $m$th sub-band, and its probability density function (pdf) is given by

$$P_{G_{k,m}}(g) = \frac{1}{\Omega_0} e^{-g/\Omega_0} \qquad (3)$$

where

$$\Omega_0 = E[G_{k,m}]. \qquad (4)$$
In what follows, we will assume that $\Omega_0 = 1$. $n(t)$ represents zero-mean white Gaussian noise with two-sided power spectral density $N_0/2$. A coherent correlation receiver recovering the signal of user $i$ on the $m$th sub-band forms a decision statistic $Z_{i,m}$, given by

$$Z_{i,m} = \sqrt{\frac{2}{T_i}} \int_{\tau_{i,m}}^{T_i+\tau_{i,m}} y(t)\, p_i(t-\tau_{i,m}) \cos(2\pi f_m t + \phi_{i,m})\, dt = \sqrt{\alpha_{i,m} G_{i,m} S_T T_i}\, d_i + I_{MAI} + \eta \qquad (5)$$

where $T_i$ is the bit duration and $d_i$ is the data bit, taking values $+1$ and $-1$ with equal probability, all for user $i$. The first term in (5) is the desired signal term. The second term $I_{MAI}$ is the multiple-access interference induced by the other $K-1$ users, with zero mean and variance

$$E[I_{MAI}^2] = \sum_{\substack{k=1 \\ k \neq i}}^{K} \alpha_{k,m} G_{k,m} S_T T_c / 3 \qquad (6)$$

where $T_c$ is the chip time. $\eta$ is white Gaussian noise of mean zero and variance $N_0/2$. At the combining stage, the correlator output of user $i$ on the $m$th sub-band is weighted with $\beta_{i,m}$, the receiver power gain of user $i$ on the $m$th sub-band, and then all the correlator outputs are combined coherently. In this work we set $\beta_{i,m} = \sqrt{\alpha_{i,m} G_{i,m}}$, since this is the optimal form of diversity combining at the receiver [9]. Then the combiner for user $i$ forms a
decision statistic $Z_i = \sum_{m=1}^{M} \beta_{i,m} Z_{i,m}$, and the bit energy-to-equivalent noise spectral density $E_b/N_e$ at the combiner output is given by

$$E_b/N_e = \frac{S_T T_i \left( \sum_{m=1}^{M} \alpha_{i,m} G_{i,m} \right)^2}{\sum_{m=1}^{M} \alpha_{i,m} G_{i,m} \left( \sum_{\substack{k=1 \\ k \neq i}}^{K} 2\alpha_{k,m} G_{k,m} S_T T_c / 3 + N_0 \right)}. \qquad (7)$$
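As a numerical illustration of (7), the sketch below evaluates the combiner-output $E_b/N_e$ when every user concentrates its power on its own strongest sub-band, which is the allocation analyzed in the next section. All parameter values here are illustrative, not taken from the paper:

```python
# Sketch: Eb/Ne of eq. (7) under best-sub-band allocation, i.e. each
# user's alpha_{k,m} is 1 on its strongest sub-band and 0 elsewhere.

def best_band(gains):
    """Index of the sub-band with the largest channel power gain."""
    return max(range(len(gains)), key=lambda m: gains[m])

def eb_ne(gains_i, other_users, st_ti, st_tc, n0):
    """gains_i: user i's M sub-band power gains G_{i,m}.
    other_users: gain lists of the K-1 interfering users, each of which
    also transmits only on its own best sub-band."""
    m = best_band(gains_i)
    # multiple-access interference from users sharing sub-band m (eq. 7)
    mai = sum(2 * g[m] * st_tc / 3 for g in other_users if best_band(g) == m)
    return (st_ti * gains_i[m] ** 2) / (gains_i[m] * (mai + n0))

# With no interferers this reduces to S_T * T_i * G_best / N_0:
print(eb_ne([1.0, 2.0], [], st_ti=1.0, st_tc=0.25, n0=0.5))  # -> 4.0
```

Only interferers whose own best sub-band coincides with user $i$'s contribute interference, which is exactly what makes the collision probability $p$ of (13) appear in the analysis.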
3 Performance Analysis
First, we consider adaptively transmitting each user's DS waveforms over the user's sub-band with the largest channel gain, rather than transmitting identical DS waveforms in parallel over all sub-bands. The transmitter gain $\{\alpha_{i,m}\}$ in this case is given by

$$\alpha_{i,m} = \begin{cases} 1, & \text{if } G_{i,m} = G_i^{(1)} \\ 0, & \text{elsewhere} \end{cases} \qquad (8)$$

where

$$G_i^{(1)} = \max(G_{i,1}, G_{i,2}, \cdots, G_{i,M}). \qquad (9)$$

The pdf of $G_k^{(1)}$ is given by [10]

$$P_{G_k^{(1)}}(g) = M e^{-g} (1 - e^{-g})^{M-1}. \qquad (10)$$
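Equation (10) is the standard order-statistic pdf for the maximum of $M$ i.i.d. unit-mean exponential gains; its mean is the harmonic number $\sum_{m=1}^{M} 1/m$, which is the factor that later appears in (14). A quick Monte Carlo sketch (illustrative only, with an arbitrary seed and trial count) checks this:

```python
import random

# Mean of the best sub-band gain G^(1) = max of M i.i.d. Exp(1) gains.
# Analytically E[G^(1)] = H_M = sum_{m=1}^{M} 1/m (harmonic number).
def mean_best_gain(M, trials=100_000, seed=1):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        total += max(rng.expovariate(1.0) for _ in range(M))
    return total / trials

M = 4
h_m = sum(1.0 / m for m in range(1, M + 1))  # 1 + 1/2 + 1/3 + 1/4
print(round(h_m, 3), round(mean_best_gain(M), 3))
```

The growth of this mean with $M$ is precisely the frequency-diversity gain the best-sub-band allocation exploits.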
We now consider rate adaptation, where the data rate is varied so that the received $E_b/N_e$ is kept at a desired value while the transmission power is allocated as in (8); this is thus a combined power adaptation in the frequency domain and rate adaptation in the time domain. It follows from (7) and (8) that, in order to maintain $E_b/N_e$ at a desired value $(E_b/N_e)_o$, the data rate is given by

$$R_i = 1/T_i = \frac{1}{(E_b/N_e)_o} \cdot \frac{G_i^{(1)}}{\sum_{k=1}^{K_I} 2 G_k^{(1)} T_c / 3 + N_0 / S_T} \qquad (11)$$

where $K_I$ is the number of interfering users that transmit their DS waveforms over the same sub-band as user $i$. The probability of $K_I$ being $n$ is given by

$$\Pr(K_I = n) = \binom{K-1}{n} p^n (1-p)^{K-1-n}, \qquad n = 0, 1, \cdots, K-1 \qquad (12)$$

where $p$ is the probability that each user transmits its DS waveform over the same sub-band as user $i$. In asynchronous systems, the probability $p$ is given by

$$p = \Pr(\text{partially interfering}) + \Pr(\text{fully interfering}) = (1 - \Lambda(M)) \cdot \frac{2}{M} + \Lambda(M) \cdot \frac{1}{M} = \frac{1}{M}\,(2 - \Lambda(M)) \qquad (13)$$
where $\Lambda(M)$ is the probability of two consecutive data bits being transmitted over the same sub-band, derived as $\Lambda(M) = 1/M$ for the random memoryless case. The average data rate $\bar{R}_i$ with the rate adaptation is given by

$$\bar{R}_i = \frac{3 \sum_{m=1}^{M} 1/m}{2 T_c\, (E_b/N_e)_o} \cdot E\left[\frac{1}{I}\right] \qquad (14)$$

where

$$I = \sum_{k=1}^{K_I} G_k^{(1)} + \frac{3 N_0}{2 S_T T_c}. \qquad (15)$$
Since all the $G_k^{(1)}$'s are assumed to be independent, identically distributed (i.i.d.) random variables,

$$E\left[\frac{1}{I}\right]_{K_I = n} = \int_a^\infty \frac{1}{x}\, P_I(x)\, dx = \frac{1}{2\pi} \int_a^\infty \frac{1}{x} \int_{-\infty}^{\infty} \varphi^n(\omega)\, e^{-j\omega(x-a)}\, d\omega\, dx \qquad (16)$$

where

$$a = \frac{3 N_0}{2 S_T T_c} \qquad (17)$$

and $P_I(x)$ is the pdf of $I$. $\varphi(\omega)$ is the characteristic function of $G_k^{(1)}$, given by

$$\varphi(\omega) = \int_0^\infty P_{G_k^{(1)}}(x)\, e^{j\omega x}\, dx = M \sum_{k=0}^{M-1} (-1)^{k-M+1} \binom{M-1}{k} \frac{1}{M - k - j\omega}. \qquad (18)$$

It follows from (12), (14), and (16) that

$$\bar{R}_i = \frac{3 \sum_{m=1}^{M} 1/m}{4\pi T_c\, (E_b/N_e)_o} \sum_{K_I=0}^{K-1} \binom{K-1}{K_I} p^{K_I} (1-p)^{K-1-K_I} \times \int_a^\infty \frac{1}{x} \int_{-\infty}^{\infty} \varphi^{K_I}(\omega)\, e^{-j\omega(x-a)}\, d\omega\, dx. \qquad (19)$$
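The collision probability (13) and the interferer distribution (12) that enter (19) are straightforward to evaluate numerically; the sketch below does so for illustrative values of $K$ and $M$:

```python
from math import comb

# Numerical sketch of eqs. (12)-(13): the sub-band collision probability p
# and the binomial distribution of the number of interferers K_I.

def collision_prob(M):
    """p = (2 - Lambda(M)) / M with Lambda(M) = 1/M (random memoryless case)."""
    lam = 1.0 / M
    return (2.0 - lam) / M

def interferer_pmf(K, M):
    """Pr(K_I = n) for n = 0..K-1, eq. (12)."""
    p = collision_prob(M)
    return [comb(K - 1, n) * p ** n * (1 - p) ** (K - 1 - n) for n in range(K)]

K, M = 60, 4          # illustrative values
pmf = interferer_pmf(K, M)
mean_ki = sum(n * q for n, q in enumerate(pmf))
print(round(sum(pmf), 9), round(mean_ki, 4))  # pmf sums to 1; mean = (K-1)*p
```

Averaging (11) over this pmf (and over the fading in $I$) is exactly what the characteristic-function integral in (19) carries out in closed form.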
4 Numerical Results and Discussions
In order to compare the performance of the proposed combined adaptation scheme in the MC DS/CDMA system to that of power and rate adapted SC DS/CDMA systems with RAKE reception, the following are assumed:
Fig. 1. The average data rate $R_i$ [bits/sec] versus $S_T/N_0$ [dB]; $(E_b/N_e)_o = 7$ dB, $K = 60$, $M = L = 4$, $T_m = 1\,\mu$s, $f_d T_i = 10^{-2}$. (Curves: power adaptation in SC, rate adaptation in SC, proposed adaptation in MC.)
– The total bandwidth of the SC system is the same as that of the MC system; that is, $T_c^{sc} = T_c/M$, where $T_c^{sc}$ is the chip time for SC-RAKE systems.
– In the SC system, the number of resolvable paths $L$ is equal to the number of sub-bands $M$ in the MC system.
– In the SC system, the channel gains for user $k$ on the $l$th path, denoted by $G_{k,l}^{sc}$, are assumed to be i.i.d. exponential random variables; that is, a constant multipath intensity profile (MIP) is assumed in SC-RAKE systems.

The average data rates versus $S_T/N_0$ are shown in Fig. 1. We find that the proposed frequency-time domain adaptation for MC DS/CDMA provides higher average data rates than power or rate adaptation in SC-RAKE systems. This is because the proposed combined power and rate adaptation adapts its data rate, rather than the transmission power, relative to channel variations such as $G_i^{(1)}$ and $K_I$, in order to maintain a desired transmission quality. Moreover, the mean value of the combined channel gain has a significant influence on the performance of rate adaptation, since the achievable average data rate is directly related to this mean value. Therefore the proposed adaptation scheme in MC DS/CDMA, in which the mean value of the combined channel gain is much larger than that in SC-RAKE as $M$ increases, is a more effective transmission scheme than the SC-RAKE scheme under the same frequency bandwidth,
Fig. 2. The average spectral efficiency [bit/sec/Hz] versus $M\,(= L)$; $(E_b/N_e)_o = 7$ dB, $K = 60$, $T_m = 1\,\mu$s, $f_d T_i = 10^{-2}$. (Curves: power adaptation in SC, rate adaptation in SC, proposed adaptation in MC.)
when power and/or rate adaptation is employed. However, rate adaptation provides a variable data rate, which is more suitable for non delay-limited services such as file transfer and email. If we normalize the average data rate $\bar{R}_i$ by the total bandwidth, we obtain the average spectral efficiency in bit/sec/Hz. Fig. 2 presents the average spectral efficiencies versus the bandwidth expansion factor $M\,(= L)$. We find that there exists an optimal bandwidth which maximizes the average spectral efficiency when power adaptation is employed in the SC-RAKE system. In the SC-RAKE system, a decrease in $L$ reduces the average transmitted data rate, since the combining capability of the RAKE receiver and the effective spreading gain decrease. In the MC system, a decrease in $M$ also reduces the average data rate, since the probability of deep fading increases and the average received power decreases. However, the required bandwidth for both systems is also reduced. Thus, increasing (or decreasing) the bandwidth produces two counteracting effects on the average spectral efficiency of the system. We find that the average spectral efficiencies with rate adaptation are monotonic functions of the bandwidth. This indicates that when rate adaptation is employed, the effect of reducing the required bandwidth on the average spectral efficiency dominates over the whole range of system bandwidths.
5 Conclusion
We considered jointly adapting transmission power and data rate in the frequency-time domain in MC-DS/CDMA communications. As a frequency domain power adaptation, we allocated the transmission power only to the sub-band with the largest channel gain. We then considered rate adaptation, where the data rate is adapted such that a desired transmission quality is maintained. We compared the performance of the proposed combined power and rate adaptation scheme in MC DS/CDMA systems to that of power and rate adaptations in SC-RAKE systems. The proposed scheme was shown to outperform the other adaptation schemes in SC-RAKE systems.

Acknowledgments. This research was supported in part by the MKE (The Ministry of Knowledge Economy), Korea, under the ITRC (Information Technology Research Center) support program supervised by the NIPA (National IT Industry Promotion Agency) (NIPA-2011-C1090-1121-0007) and in part by the Korea Research Foundation (KRF) grant funded by the Korea government (MEST) (No. 2011-0003512).
References
1. Hara, S., Prasad, R.: Overview of multicarrier CDMA. IEEE Commun. Mag., 126–133 (December 1997)
2. Yee, N., Linnartz, J.P., Fettweis, G.: Multicarrier CDMA in indoor wireless radio networks. In: Proc. IEEE PIMRC, pp. D1.3.1–D1.3.5 (September 1993)
3. Kondo, S., Milstein, L.B.: Performance of multicarrier DS CDMA systems. IEEE Trans. Commun. 44, 238–246 (1996)
4. Kim, S.W.: Adaptive rate and power DS/CDMA communications in fading channels. IEEE Commun. Lett., 85–87 (April 1999)
5. Kim, S.W., Lee, Y.H.: Combined rate and power adaptation in DS/CDMA communications over Nakagami fading channels. IEEE Trans. Commun., 162–168 (January 2000)
6. Lee, Y.H., Kim, S.W.: Generalized joint power and rate adaptation in DS-CDMA communications over fading channels. IEEE Trans. Veh. Technol. 57, 603–607 (2008)
7. Chen, Q., Sousa, E.S., Pasupathy, S.: Multicarrier CDMA with adaptive frequency hopping for mobile radio systems. IEEE J. Select. Areas Commun. 14, 1852–1858 (1996)
8. Kim, Y.H., Song, I., Yoon, S., Park, S.R.: A multicarrier CDMA system with adaptive subchannel allocation for forward links. IEEE Trans. Veh. Technol. 48, 1428–1436 (1999)
9. Brennan, D.G.: Linear diversity combining techniques. Proc. IRE 47, 1075–1102 (1959)
10. David, H.A.: Order Statistics, 2nd edn. John Wiley and Sons, Chichester (1981)
11. Feller, W.: An Introduction to Probability Theory and Its Applications, 2nd edn., vol. II. John Wiley and Sons, Chichester (1971)
A Quality of Service Algorithm to Reduce Jitter in Mobile Networks
P. Calduwel Newton1 and L. Arockiam2
1 Computer Science, Bishop Heber College (Autonomous), Tiruchirappalli, Tamil Nadu, India
2 St. Joseph's College (Autonomous), Tiruchirappalli, Tamil Nadu, India
[email protected]
Abstract. Mobile networks are increasingly hard-pressed to ensure Quality of Service (QoS) due to the exponentially increasing number of users and devices, limited resources, etc. QoS is a set of service requirements to be met by the network. Jitter is the delay variation in transferring packets. Various factors increase jitter: misbehaving nodes, unresponsive flows, and issues in existing packet scheduling algorithms. In a mobile communication system, a typical cell phone tower spreads radio signals over a range of 1 to 35 km. The packets that come from a mobile node very near the tower may get the chance to be transferred immediately, but a mobile node far away from the tower may not get that chance. In other words, it is unfair to serve a packet that started later before a packet that started earlier; the packet that started earlier must be served first. This paper proposes a QoS algorithm to eliminate or reduce this issue and thereby reduce jitter in mobile networks. Ultimately, it helps to increase Quality of Service in mobile networks. Keywords: Packet Scheduling, Jitter, Quality of Service (QoS), Mobile Network, Time to Live (TTL), Time-stamp.
1 Introduction

QoS violation is a major issue in mobile and wireless networks. Because of the increased number of users and uses, QoS has become a big challenge. Admission control and traffic control are the two control mechanisms used to strengthen QoS in many research experiments in the networking world. Admission control mechanisms specify how, when and by whom network resources can be used. Traffic control mechanisms regulate the packets by scheduling, classifying, etc.

The traffic is affected by various factors like unresponsive flows, QoS violations, unfairness, high jitter, misbehaving nodes, etc. Responsive and unresponsive flows are classified by their response to network congestion. Responsive flows may suffer from a starvation effect while unresponsive flows benefit from their greedy nature [1]. Much research focuses on regulating unresponsive flows, which affect the traffic as well as cause congestion at bottlenecks. TCP is the most common example of a responsive protocol and UDP is an example of an unresponsive protocol. Queue management is another important factor affecting QoS: it must be capable of handling all responsive and unresponsive flows, and unresponsive flows must be penalized.

Misbehaving nodes are part of the QoS violation factors. The malicious behavior of misbehaving nodes affects the performance of the network and is also a security threat. The attacks are based on the malicious behavior of some network nodes, which disturbs the forwarding of control messages [2]. Misbehaving nodes may violate the network protocol, provide incorrect information or launch a DoS (Denial of Service) attack [3]. Congestion control and other packet and queue management systems or protocols must be sufficiently extensible to manage common attacks and misconfigurations in mobile wireless networks.

Many mobile wireless networks are not able to reduce jitter because of QoS violations, misbehaving nodes, unresponsive flows, etc. Misbehaving nodes and unresponsive flows affect the traffic and ignore the protocol rules. Jitter can be reduced by regulating and penalizing misbehaving nodes and unresponsive flows; because of their greedy nature, unresponsive flows otherwise gain full access and increase jitter.

This paper is organized as follows. Section 2 describes the related works and the motivation for this paper. Section 3 explains the proposed algorithm to reduce jitter in mobile networks. Section 4 reveals the research findings. Section 5 highlights important outcomes and concludes the paper. Finally, references are listed.

T.-h. Kim et al. (Eds.): ACN 2011, CCIS 199, pp. 117–124, 2011. © Springer-Verlag Berlin Heidelberg 2011
2 Related Works and Motivation

Many research works have been done related to this issue. Jangeun Jun et al. [4] explored various queuing schemes for multihop wireless networks and examined the fairness and throughput performance of each scheme; each scheme offers a different degree of fairness. It was shown that in order to achieve optimal bandwidth utilization, the medium access control (MAC) layer should be able to support different priorities. Jun theoretically investigated the pros and cons of different queuing schemes and verified the analytical results with detailed simulations.

Parris et al. [1] surveyed the problem of managing responsive and unresponsive traffic. Their mechanism, called class-based thresholds (CBT), allocates buffer capacity within a switch's FIFO queue to each type of flow in proportion to the desired bandwidth allocation between traffic types. They demonstrated empirically the starvation effect of unresponsive traffic on responsive traffic, the effect of unresponsive traffic on proposed active queue management mechanisms, and the ability of CBT to efficiently provide isolation between traffic classes, providing a better-than-best-effort service to necessarily unresponsive traffic.

Visvasuresh et al. [5] introduced a quality of service (QoS) scheme, using agents, that is scalable; it avoids and controls congestion, monitors network performance, forecasts and diagnoses congestion, and allocates and enforces resources, aiming to provide end-to-end communication latency and jitter guarantees for these flows in a scalable, proactive and reactive manner. Frank Kargl et al. [6] stated that a selfish node wants to preserve its resources while using the services of others and
consuming their resources. One way of preventing selfishness in a MANET is a detection and exclusion mechanism. Kargl focused on detection and presented different kinds of sensors that can find selfish nodes. Jian Chen [7] proposed adapting Early Regulation of Unresponsive Flows (ERUF) to third generation wireless networks employing link layer retransmissions; based on an analysis of the characteristics of the Radio Link Control (RLC) layer of the GPRS/UMTS network, he developed a new set of mechanisms based on active queue management to achieve congestion-free flow. Hongxun Liu et al. [8] presented a hardware based cache scheme to detect misbehaving nodes in mobile ad hoc networks. In this scheme, the hardware monitors the activities of the upper-layer software and reports misbehavior of the software to other mobile nodes in the network. The hardware cache stores the identity information of recently received packets, and the detection mechanism uses the cache to detect packet dropping and packet misrouting. Paul and Westhoff [9] claim their hash-chain based framework can doubtlessly accuse a misbehaving node, but the voting scheme can only guarantee the accusation with some probability; also, if the misbehaving node chooses to drop packets silently, there is no way the neighbors can tell whether it is a misbehaving node or whether it even exists. Much research is being carried out to enhance QoS in mobile networks [10], [11]. The literature survey indicates that many parameters, such as reliability [12], delay [13], hops [14] and bandwidth [15], are involved in enhancing QoS in mobile networks.
3 A Proposed Algorithm: QoS Approach

The proposed packet scheduling algorithm is explained as follows. It improves QoS by ensuring low jitter. Packet scheduling must be done in an efficient manner in order to avoid certain QoS violations and reduce jitter in mobile networks. Many research works are being carried out on packet scheduling algorithms. The packet starvation effect causes some packets to experience high delays at realistic offered loads as low as 40% and causes complete starvation of some packets at offered loads as low as 60% [16]. The packets received at each node must be handled very carefully to avoid high jitter and bad network performance. The following steps describe the processes involved in the proposed algorithm.

1. Store the packets in the queue
2. Analyze and classify the packets into MRT and LRT
3. Identify LRT packets which are BS
4. Identify MRT packets which are BS
5. Assign 1st priority to LRT-BS packets
6. Assign 2nd priority to MRT-BS packets
7. Assign 3rd priority to LRT-AS packets
8. Assign 4th (lowest) priority to MRT-AS packets
9. Transfer the packets according to the priority level
10. If two packets have the same priority then
    a. Analyze the content of the packet
    b. Based on content, assign priority (e.g., video → 1st, text → 2nd)
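The steps above can be sketched as a small classifier and priority sort. The packet field names, the median-based LRT/MRT split and the TTL threshold below are illustrative assumptions; the paper does not fix concrete values:

```python
# Sketch of the proposed scheduler: classify packets as LRT/MRT (by send
# timestamp) and BS/AS (by remaining TTL), then serve in priority order
# LRT-BS (1) > MRT-BS (2) > LRT-AS (3) > MRT-AS (4), breaking ties by
# content type (video before text). Thresholds are illustrative.

CONTENT_RANK = {"video": 0, "text": 1}

def priority(pkt, ts_median, ttl_threshold):
    lrt = pkt["timestamp"] < ts_median   # sent earlier than the median packet
    bs = pkt["ttl"] <= ttl_threshold     # short TTL: behind the schedule
    if lrt and bs:
        return 1
    if bs:
        return 2
    if lrt:
        return 3
    return 4

def schedule(packets, ttl_threshold=30):
    ts = sorted(p["timestamp"] for p in packets)
    ts_median = ts[len(ts) // 2]
    return sorted(packets, key=lambda p: (priority(p, ts_median, ttl_threshold),
                                          CONTENT_RANK.get(p["content"], 2)))

pkts = [
    {"id": "a", "timestamp": 10, "ttl": 20, "content": "text"},   # LRT-BS
    {"id": "b", "timestamp": 90, "ttl": 10, "content": "video"},  # MRT-BS
    {"id": "c", "timestamp": 5,  "ttl": 80, "content": "video"},  # LRT-AS
    {"id": "d", "timestamp": 95, "ttl": 70, "content": "text"},   # MRT-AS
]
print([p["id"] for p in schedule(pkts)])  # -> ['a', 'b', 'c', 'd']
```

In a router this classification would feed the four queues of Fig. 2, with Q-Out draining them in priority order.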
Based on arrival, packets are stored in a queue. When the packets reach the queue, they are classified and a priority is assigned. The packets are classified and prioritized as LRT-BS → 1, MRT-BS → 2, LRT-AS → 3, MRT-AS → 4, where LRT, MRT, BS and AS indicate Least Recently Time-stamped, Most Recently Time-stamped, Behind the Schedule and Ahead of Schedule, respectively. The time-stamp and TTL (Time to Live) parameters can be used to determine whether a packet is LRT / MRT and BS / AS, respectively. A packet is said to be LRT if its time-stamp is less than the other packet's, and MRT if its time-stamp is greater. A packet is said to be BS if its TTL is short and AS if its TTL is long. The LRT-BS packet is given 1st priority, the MRT-BS packet 2nd priority, the LRT-AS packet 3rd priority, and the MRT-AS packet 4th priority. If two packets have the same priority, they are further analyzed by their content (video, text, etc.) and prioritized accordingly: if the content is video then a high priority is assigned, and if the content is text then a low priority is assigned.

Fig. 1. Model network (nodes A–F; A is the Ingress router and F the Egress router, connecting two mobile networks through a backbone)
Fig. 1 shows two mobile networks and a backbone network. The proposed algorithm can be used at any router junction where packets from various nodes are received. Node 'A' is called the Ingress router, the entry point for packets, and node 'F' is called the Egress router, the exit point for packets. Through the Ingress router packets enter the network, and through the Egress router packets leave the network. When many packets arrive at one node and try to pass through it at the same time, there is a chance of high jitter.
4 Research Findings

In the proposed algorithm, the packets are classified first; according to the classification the priority is set, and then the packets are stored in queues. Consider the following example. Assume node 'A' has 'n' packets to be served. The packets are classified and stored in the appropriate queues. Let n = 100. In the existing packet scheduling algorithm, out of 100 packets, 50 may belong to Least Recently Received (LRR) and the remaining 50 to Most Recently Received (MRR). There is a difference between LRR / MRR and LRT / MRT packets: it is the difference between the time at which a packet is received in the queue and the time at which the packet started from the source node. In certain cases, the 50 MRR packets can include packets that started early from the source node, and the 50 LRR packets can include packets that started late. In other words, packets that started early from the source node may have reached the queue late, and packets that started late may have reached the queue early. Hence, it is unfair to send first the packets that started late from the source node and to send later the packets that started early. This problem is solved in the proposed algorithm. Figure 2 illustrates the classification and prioritization of the packets.
[Figure: a Packet Classifier feeds four queues, Q-LRT-BS, Q-MRT-BS, Q-LRT-AS, and Q-MRT-AS, which feed Q-Out. Legend: Q = Queue, MRT = Most Recently Time-stamped, LRT = Least Recently Time-stamped, AS = Ahead of Schedule, BS = Behind the Schedule.]

Fig. 2. Proposed QoS packet scheduling algorithm
The proposed algorithm considers the time at which a packet started from the source node, whereas the existing approach considers the time at which the packet is received in the queue. Out of the 100 packets in node 'A', 50 packets belong to LRT and the remaining 50 to MRT. The problem is that the 50 LRT packets may include packets that are ahead of schedule, and the 50 MRT packets may include packets that are behind the schedule. The proposed approach therefore further classifies LRT and MRT packets as LRT-BS, LRT-AS, MRT-BS, and MRT-AS. Assume the numbers of packets in the above cases are LRT-BS → 30, LRT-AS → 25, MRT-BS → 35, and MRT-AS → 10. The proposed algorithm will then send the packets in the following order: 30 LRT-BS packets, 35 MRT-BS packets, 25 LRT-AS packets, and 10 MRT-AS packets.
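A minimal sketch of the classification and priority order just described; the packet fields and the rule used to split LRT from MRT (the median source time-stamp) are illustrative assumptions, not details fixed by the paper:

```python
from dataclasses import dataclass

# Priority order from the paper: LRT-BS > MRT-BS > LRT-AS > MRT-AS.

@dataclass
class Packet:
    start_time: float     # time-stamp at the source node
    deadline: float       # scheduled delivery time

PRIORITY = {"LRT-BS": 0, "MRT-BS": 1, "LRT-AS": 2, "MRT-AS": 3}

def classify(pkt: Packet, median_start: float, now: float) -> str:
    """Classify by source time-stamp (LRT/MRT) and schedule (BS/AS)."""
    ts = "LRT" if pkt.start_time <= median_start else "MRT"
    sched = "BS" if now > pkt.deadline else "AS"   # behind / ahead of schedule
    return f"{ts}-{sched}"

def schedule(packets, median_start, now):
    """Send LRT-BS first, then MRT-BS, then LRT-AS, finally MRT-AS."""
    return sorted(packets,
                  key=lambda p: PRIORITY[classify(p, median_start, now)])
```

Because the sort key depends only on the four classes, packets within a class keep their arrival order (the sort is stable).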
122
P. Calduwel Newton and L. Arockiam
[Figure: bar chart, y-axis "No. of Packets" (0 to 60), comparing LRR packets and MRR packets; bars marked Unfair and Fair.]

Fig. 3. LRR packets vs. MRR packets
Fig. 3 illustrates that the existing algorithm is unfair to a certain extent, which increases jitter. The unfairness is due to transferring packets based on the time at which they are received in the queue.
[Figure: bar chart, y-axis "No. of Packets" (0 to 60), comparing LRT packets and MRT packets; bars marked Unfair and Fair.]

Fig. 4. LRT packets vs. MRT packets
Fig. 4 indicates that this part of the proposed algorithm is still unfair to a certain extent, but less so than the existing algorithm, which reduces jitter. The unfairness is due to transferring ahead-of-schedule packets earlier than the packets that are behind the schedule.
[Figure: bar chart, y-axis "No. of Packets" (0 to 40), comparing LRT-BS, MRT-BS, LRT-AS, and MRT-AS packets; bars marked Unfair and Fair.]
Fig. 5. Enhanced fairness by proposed algorithm
Fig. 5 reveals that the proposed algorithm has enhanced fairness, which greatly reduces jitter. The fairness is due to the classification and prioritization of packets into LRT-BS / MRT-BS / LRT-AS / MRT-AS packets.
5 Conclusion
QoS violations are inevitable in mobile networks, as they have limited resources compared to wired networks. Unresponsive flows, misbehaving nodes, and weaknesses in existing algorithms may cause unfairness during packet scheduling and transfer. The proposed packet scheduling algorithm never allows packets that started late from the source node to be sent early, and never allows packets that started early from the source node to be sent late. This greatly reduces jitter. The proposed algorithm classifies and prioritizes the packets as LRT-BS → 1st priority, MRT-BS → 2nd priority, LRT-AS → 3rd priority, and MRT-AS → 4th priority. Figures 3, 4, and 5 clearly show that the proposed algorithm enhances fairness, which in turn reduces jitter. Though this paper has ensured fairness and reduced jitter, certain limitations remain in the classification and prioritization of the packets.
Performance Analysis of HDD and DRAM-SSD Using TPC-H Benchmark on MYSQL Hyun-Ju Song1, Young-Hun Lee2,*, and Seung-Kook Cheong3 1
Dept. of Electronic Eng., Hannam University, Ojeong -dong, Daedeok-gu, Daejon 306-791, Korea Tel.: + 82-42-629-8001
[email protected] 2 Dept. of Electronic Eng., Hannam University, Ojeong -dong, Daedeok-gu, Daejon 306-791, Korea Tel.: +82-42-629-7565
[email protected] 3 Dept. Principal Member of Engineering Staff. ETRI, 161, Gajeong-dong, Yuseong-gu, Daejeon 305-700, Korea Tel.: +82-42-860-4845
[email protected]
Abstract. Recently, users have needed storage for processing high-capacity data. While the HDD has long been the primary storage medium, the gap between the development of HDD processing speed and CPU processing speed has created a serious data input/output bottleneck, and the SSD was created to solve this problem. In this paper, using the TPC-H benchmark on MYSQL, we compare DRAM-SSD to HDD and confirm its good performance in data processing. The performance analysis with TPC-H shows that as the DBMS database size increases, the difference in ad-hoc query processing ability between the two devices grows when the database capacities are compared. Based on these results, DRAM-SSD storage is judged to be effective for fields or applications that require large amounts of data I/O. Keywords: SSD, HDD, Mysql, TPC-H.
1 Introduction
Recently, users have needed storage for processing high-capacity data. While the HDD has long been the primary storage medium, the gap between the development of HDD processing speed and CPU processing speed has created a serious data input/output bottleneck, and the SSD was created to solve this problem. The SSD, which has a faster processing speed than the HDD, has been actively studied as its use as a storage device increases. Research comparing the performance of HDD and SSD storage devices has examined the difference in their data I/O
Corresponding author.
T.-h. Kim et al. (Eds.): ACN 2011, CCIS 199, pp. 125–131, 2011. © Springer-Verlag Berlin Heidelberg 2011
126
H.-J. Song, Y.-H. Lee, and S.-K. Cheong
processing performance [1-3]. On general storage media, the use of a DBMS, which provides an efficient and convenient way to manage data, has increased. So in this paper, using the TPC-H benchmark on MYSQL, we compare DRAM-SSD to HDD and confirm its good performance in data processing. The paper proceeds as follows. Chapter 2 introduces the technology used to evaluate the performance of the HDD and SSD; Chapter 3 introduces the analysis environment and conditions, treating each storage device as storage and describing the tools used; Chapter 4 analyzes the test results under the conditions described in Chapter 3; and Chapter 5 concludes.
2 Related Study
2.1 DRAM-SSD
A DRAM-based SSD uses volatile DRAM as the primary storage device; because it saves and accesses data directly from RAM chips, it is much faster than conventional magnetic devices. DRAM is inherently volatile, but the device integrates a nonvolatile backup system consisting of an internal battery and an HDD [4]. Currently, major released products mount DRAM modules on a commercial motherboard's PCI slot and use it as a high-speed interface to achieve fast read/write performance.
2.2 MYSQL
MYSQL is a relational database management system that uses SQL, the standard database query language, and is very fast, flexible, and easy to use. It supports multiple users and multiple threads, and offers application program interfaces (APIs) for C, C++, Eiffel, Java, Perl, PHP, and Python scripts. MYSQL offers a client/server environment: a server with MYSQL installed runs a daemon called mysqld, and client programs connect to this daemon via the network so that they can manipulate data [5].
Fig. 1. Performance structure of MYSQL
2.3 TPC-H
The TPC is a non-profit organization formed by multiple hardware and software companies. TPC usually refers to transaction processing performance evaluation, while the letter appended to TPC denotes the benchmark model. TPC has become the standard for evaluating the
processing performance of on-line transaction processing systems. The TPC defines transaction processing and database benchmarks that are used to measure the performance of the total system, including disk I/O and software. As a benchmarking tool that measures how fast a system can handle complex SQL, TPC-H defines 22 SQL statements, a DB schema, and a data set of about 1 GB. The TPC-H benchmark is a public performance test that uses ad-hoc queries and concurrent data modifications combined over large data. Fig. 2 shows the business environment of TPC-H: impromptu queries and modification transactions are performed on tables by multiple users, modeling the situation in which data enters the database of a decision support system from an OLTP system [6].
Fig. 2. TPC-H's Business environment
3 Performance Analysis Environment and Conditions
3.1 Performance Test Environment
This paper comparatively analyzed the performance of the HDD and the SSD, the storage devices mainly used today, each employed as storage. The test environment was formed as in Fig. 3: the test server ran Linux CentOS version 5.3, the performance measurement tool was the TPC-H benchmark, and MYSQL version 5.0.90 was installed as the database used by the tool. The environment was composed so that HDD storage and DRAM-SSD storage could each be applied as the data storage device for the performance test.
Fig. 3. Test environment configuration
3.2 Test Procedure and Condition
The tests using the TPC-H benchmark were comparatively analyzed through a three-step procedure. The first step is the Load Test, which builds the database and generates the data to store; its results are not included in the performance analysis. Next, the Power Test measures query-handling ability when a single active user runs the queries. Finally, the Throughput Test measures the ability when multiple active users run queries at the same time. The Power Test and Throughput Test results are combined to analyze the performance of each storage device. Data generation produces a total of eight tables to store the generated data in the DBMS. The data read from the created files is stored in the database by generated code, and the insert time is checked. For the total of 22 queries given in TPC-H, each query is run through the Query Browser and its run time is recorded for a comparative analysis of each storage device; for the multi-active-user test, the 22 queries are stored through the vim command in the TPC-H installation directory. After running all 22 queries, the results are checked and deleted by a script. The insert execution time, the execution times of the 22 queries, and the delete execution time are used to calculate the Power Test value by applying Equation 1 below.
Power@Size = (3600 × SF) / (∏_{i=1}^{22} QI(i,0) × ∏_{j=1}^{2} RI(j,0))^(1/24)    (1)

QI(i,0) is the execution time of the i-th query in seconds, RI(j,0) is the running time of the insert and delete functions in seconds, and SF is the scale factor determined by the size of the generated data, i.e., the database. The next test is the Throughput Test, which checks the performance when multiple active users run the queries at the same time. The number of simultaneous users differs with the amount of data, as shown in Table 1.

Table 1. Conditions for throughput test

Database capacity | User number
1GB               | 2
5GB               | 2
10GB              | 3

With the number of users determined as shown in Table 1 above, each of the 22 queries is performed in parallel, and the elapsed time is expressed in seconds from when the first user starts the first query to when the last user finishes the last query. Performance is evaluated by substituting the measured time into Equation 2.

Throughput@Size = (S × 22 × 3600 / Ts) × SF    (2)
In Equation 2 above, Ts is the elapsed time, S is the number of users, and SF is the amount of data. From the results of Equation 1 and Equation 2, Equation 3 produces the final result used to analyze the performance of each storage device.

QphH@Size = sqrt(Power@Size × Throughput@Size)    (3)

The value QphH@Size calculated from Equation 3 reflects the different characteristics of the query processing system: the query processing ability when queries are performed, including when queries are performed by multiple users simultaneously. Thus, QphH@Size shows the ad-hoc query capability per hour for each database capacity.
[Figure: block diagram of the measurement flow; individual labels are not recoverable.]

Fig. 4. Block diagram for applying TPC-H to obtain the results
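As a sketch, the composite metric of Equations (1)-(3) can be computed as follows; the function names are illustrative, and only the formulas described above are assumed:

```python
import math

# Sketch of the TPC-H composite metric from Equations (1)-(3).
# qi: execution times (sec) of the 22 queries in the power run;
# ri: execution times (sec) of the insert and delete refresh functions;
# sf: scale factor, i.e. the database size; s_users: number of streams.

def power_at_size(qi, ri, sf):
    # Equation (1): 3600*SF over the 24th root of the product of timings
    prod = math.prod(qi) * math.prod(ri)
    return 3600.0 * sf / prod ** (1.0 / 24.0)

def throughput_at_size(s_users, elapsed_sec, sf):
    # Equation (2): S streams of 22 queries each, scaled to queries per hour
    return (s_users * 22 * 3600.0 / elapsed_sec) * sf

def qphh_at_size(power, throughput):
    # Equation (3): geometric mean of the two component metrics
    return math.sqrt(power * throughput)
```

For example, with all 24 power-run timings equal to one second at SF = 1, Power@Size evaluates to 3600.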
4 Test Results
The test results using the TPC-H benchmark were comparatively analyzed, based on the methods described in Section 3.2, for the performance of each storage device under the same conditions in the local environment, combining the Power Test results and the Throughput Test results. To shorten test time, some of the total 22 queries were used, and performance was comparatively analyzed for database capacities of 1GB, 5GB, and 10GB.

Table 2. Local environment HDD performance measurements (HDD, by database capacity)

Test result      | 1GB        | 5GB        | 10GB
Power@size       | 9.5856E-12 | 8.8771E-29 | 1.5251E-34
Throughput@size  | 12.6619    | 21.7726    | 8.9001
QphH@size        | 1.0982E-05 | 4.3964E-15 | 3.6843E-17
Table 3. Local environment DRAM-SSD performance measurements (DRAM-SSD, by database capacity)

Test result      | 1GB        | 5GB        | 10GB
Power@size       | 9.7911E-09 | 4.6299E-20 | 8.8710E-22
Throughput@size  | 19.3928    | 34.7933    | 80.1214
QphH@size        | 4.3575E-04 | 1.2692E-09 | 2.6660E-10
As can be seen from Tables 2 and 3 above, the practical value for comparing the performance of DRAM-SSD storage and HDD storage is QphH@size; the comparison of this value for each storage device across database capacities is shown in the figure below.
Fig. 5. QphH@size value of each storage device by database capacity in the local environment
QphH@size indicates the capability to handle ad-hoc queries. The difference between the ad-hoc query handling capabilities of HDD storage and DRAM-SSD storage is small at small database capacities, but as the amount of data increases, the ad-hoc-queries-per-hour capability of DRAM-SSD storage can be seen to become much higher than that of HDD storage.
5 Conclusion
In this paper, the data processing performance of DRAM-SSD and HDD storage was analyzed using the TPC-H benchmark on MYSQL. The performance analysis with TPC-H showed that as the DBMS database size increased, the difference in ad-hoc query processing ability grew when compared
across database capacities. Based on these results, DRAM-SSD storage is judged to be effective for fields or applications that require large amounts of data I/O. Moreover, given the SSD benefits mentioned in the introduction, once SSD prices stabilize, SSDs are expected to be effective for industries requiring large amounts of data I/O, for media servers, and as storage devices for desktop and notebook computers. In future work, by applying a SAN, which is used to transfer large amounts of data quickly, we expect to study achieving better data processing capacity in any environment. Acknowledgments. This paper has been supported by the 2011 Hannam University Research Fund.
References [1] Park, K.-H., Choe, J.-k., Lee, Y.-H., Cheong, S.-K., Kim, Y.-S.: Performance Analysis for DRAM-based SSD system. Korean Information Technical Academic Society Dissertation 7(4), 41–47 (2009) [2] Kang, Y.-H., Yoo, J.-H., Cheong, S.-K.: Performance Evaluation of the SSD based on DRAM Storage System IOPS. Korean Information Technical Academic Society Dissertation 7(1), 265–272 (2009) [3] Cheong, S.-K., Jeong, Y.-W., Jeong, Y.-J., Jeong, J.-j.: Input-Output Performance Analysis of HDD and DDR-SSD Storage under the Streaming Workload. Korean Information Technical Academic Society Dissertation, 322–325 (2010) [4] Cheong, S.-K., Ko, D.-s.: Technology Prospect of Next Generation Storage. Korean Information Technical Academic Society Dissertation Summer Synthetic Scientific Announcement Thesis, 137 (2008) [5] Kim, H.: Learn to MYSQL Database Programming. Young-jin.com [6] http://www.tpc.org/tpch/spec/tpch2.13.0.pdf
User Authentication Platform Using Provisioning in Cloud Computing Environment Hyosik Ahn, Hyokyung Chang, Changbok Jang, and Euiin Choi∗ Dept. of Computer Engineering, Hannam University, Daejeon, Korea {hsahn,hkjang,chbjang}@dblab.hannam.ac.kr,
[email protected]
Abstract. Cloud computing is a user-centered computing environment, an on-demand outsourcing service of IT resources over the Internet, in which users can run programs or access documents stored on servers through application software such as a Web browser on any device with Internet access. In order to use such cloud computing services, user authentication is needed. This paper proposes a platform for user authentication using provisioning in the cloud computing environment. Keywords: Cloud Computing, User Authentication, Provisioning, Security.
1 Introduction
In October 2009, Gartner announced its top 10 strategic IT technologies for 2010. The strategic technologies Gartner mentioned are those that will significantly affect enterprises over the next three years and have a powerful effect on IT and business. They may affect enterprises' long-term plans, programs, and major projects, and can give enterprises strategic advantages if adopted with a head start. Cloud computing took the top rank (2nd in 2009) [1][2]. Cloud computing is a business performance model and also an infrastructure management methodology. It lets users use hardware, software, and network resources as much as needed so as to provide innovative services through the Web, and it enables servers to be provisioned according to logical needs by using automated advanced tools [3][4]. Cloud computing models also offer both users and IT managers a user interface that makes it easy to manage provisioned resources through the entire life cycle of a service request [5][6]. When the resources requested by a user arrive through the cloud, the user can track the order, which consists of a certain number of servers and software, and perform jobs such as checking the state of the provided resources, adding a server, changing the installed software, removing a server, increasing or decreasing allocated processing power, memory, or storage, and even starting, aborting, or restarting a server [7]. However, the diffusion of cloud computing raises users' desire for more improved, faster, and more secure service delivery. Hence, security issues in the cloud computing environment are constantly emerging, and authentication and access
Corresponding author.
T.-h. Kim et al. (Eds.): ACN 2011, CCIS 199, pp. 132–138, 2011. © Springer-Verlag Berlin Heidelberg 2011
control have been studied. A user in the cloud computing environment has to complete the personal authentication process required by the service provider whenever using a new cloud service. If, in this process, the characteristics and safety of the authentication method are compromised by an attack, there will be severe damage, because personal information stored in the database and business processing services, or information related to individuals or organizations, may be exposed. Therefore, this paper designs a platform for user authentication using provisioning in the cloud computing environment.
2 Related Works
2.1 Definition of Cloud Computing and the Appearance Background
Cloud computing is a kind of on-demand computing method that lets users use IT resources such as networks, servers, storage, services, and applications via the Internet when they need them, rather than owning them [5]. Cloud computing can be considered the sum of SaaS (Software as a Service) and utility computing; Figure 1 shows the roles of users and providers in cloud computing under this concept [9].
Fig. 1. Users and Providers of Cloud Computing
2.2 Provisioning
Provisioning is the procedure and behavior of preparing the required resources in advance and providing them on request in order to find the best fit. That is, it allocates, deploys, and distributes infrastructure resources to meet the needs of users or the business and helps the system make use of those resources [9].
- Server Resources Provisioning: prepares resources such as server CPU and memory by allocating and placing the resources appropriately
- OS Provisioning: makes the OS operative by installing it on the server and performing configuration tasks
- Software Provisioning: prepares software (WAS, DBMS, applications, etc.) for operation by installing/distributing it in the system and making the needed configuration settings
134
H. Ahn et al.
- Storage Provisioning: enables identifying wasted or unused storage and putting it into a common pool. When a storage request then arrives, the administrator takes capacity out of the pool and uses it, making it possible to build infrastructure that heightens storage efficiency
- Account Provisioning: the process in which the HR manager and IT manager go through the appropriate approval process and then generate the accounts needed for various applications such as e-mail, groupware, ERP, etc., or change access authorities when the category of resources a user accesses changes [10]
Figure 2 shows such a provisioning service.
Fig. 2. Provisioning Service
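The pooled storage provisioning described above can be sketched minimally as follows; the class and method names are illustrative assumptions, not taken from the paper:

```python
# Minimal sketch of pooled storage provisioning: unused capacity goes
# into a common pool and is handed out to users on request.

class StoragePool:
    def __init__(self, capacity_gb: int):
        self.free = capacity_gb
        self.allocations = {}          # user -> GB currently provisioned

    def provision(self, user: str, gb: int) -> bool:
        """Allocate from the pool; refuse requests that would overcommit."""
        if gb > self.free:
            return False
        self.free -= gb
        self.allocations[user] = self.allocations.get(user, 0) + gb
        return True

    def release(self, user: str) -> None:
        """Return a user's storage to the common pool."""
        self.free += self.allocations.pop(user, 0)
```

Keeping all unused capacity in one pool is what lets the administrator satisfy later requests without pre-assigning storage to each application.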
2.3 Security Technology in the Cloud Computing Environment
There are no security technologies specific to cloud computing; however, if we regard cloud computing as an extension of existing IT technologies, some of them can be divided by cloud computing component and applied accordingly [12]. Access control and user authentication are the representative security technologies used for platforms. Access control is the technology that keeps a process in the operating system from reaching the area of another process. The main models are DAC (Discretionary Access Control), MAC (Mandatory Access Control), and RBAC (Role-Based Access Control). DAC lets a user set the access authority to the resources he/she owns at will. MAC establishes and applies vertical/horizontal access rules in the system based on the security level and area of the resources. RBAC gives access authority to a user group based on the role the group plays in the organization, rather than to an individual user; RBAC is widely used because it fits commercial organizations well. Technologies used to authenticate a user include ID/password, Public Key Infrastructure, multi-factor authentication, SSO (Single Sign-On), MTM (Mobile Trusted Module), and i-PIN [11].
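The RBAC model described above can be sketched in a few lines; the roles, users, and permission names here are made-up examples:

```python
# RBAC sketch: permissions attach to roles, users are assigned roles,
# and a user holds a permission if any of their roles grants it.

ROLE_PERMS = {
    "admin":   {"read", "write", "provision"},
    "analyst": {"read"},
}
USER_ROLES = {
    "alice": {"admin"},
    "bob":   {"analyst"},
}

def has_permission(user: str, perm: str) -> bool:
    """Check the user's roles rather than the user directly."""
    return any(perm in ROLE_PERMS.get(role, set())
               for role in USER_ROLES.get(user, set()))
```

The indirection through roles is the point: changing what "analyst" may do updates every analyst at once, which is why the model suits commercial organizations.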
3 User Authentication in the Cloud Computing Environment
3.1 User Authentication Technology in the Cloud Computing Environment
Users in the cloud computing environment have to complete the user authentication process required by the service provider whenever they use a new cloud service. Generally, a user registers by offering personal information, and after registration the service provider provides the user's own ID (identification) and an authentication method for user authentication. The user then uses the ID and the authentication method to perform user authentication whenever accessing a cloud computing service [13]. Unfortunately, the characteristics and safety of the authentication method can be compromised by an attack during the authentication process, which could cause severe damage. Hence, user authentication for cloud computing requires not only security but also interoperability.
3.2 Weakness of User Authentication Technology in Cloud Computing
The representative user authentication security technologies described above have some weaknesses [11][12].
- ID/password: the representative user authentication method. It is simple and easy to use, but it requires a certain level of complexity and regular renewal to remain secure.
- PKI (Public Key Infrastructure): an authentication means using public-key cryptography. It enables authenticating the other party based on a certificate, without shared secret information. In the PKI structure, it is impossible to manage and inspect the client-side process.
- Multi-factor: a method that heightens security intensity by combining several means of authentication. ID, password, biometrics such as fingerprint and iris, certificates, OTP (One-Time Password), etc. are used. OTP information, like the ID/password, can still be disclosed to an attacker.
- SSO (Single Sign-On): a kind of passport; once authenticated at one site, the user can go through to other sites with an assertion and needs no further authentication process. The representative standard for assertions is SAML.
- MTM (Mobile Trusted Module): a hardware-based security module. It is a standard proposed by the TCG (Trusted Computing Group), in which Nokia, Samsung, France Telecom, Ericsson, etc. take part. It is mainly applied to authenticate terminals in telecommunications; however, with the generalization of smartphones, it is being considered as a cloud computing authentication method together with the SIM (Subscriber Identity Module) [15].
- i-PIN: a technique currently used in Korea to confirm a user's identity when using the Internet. It operates in such a way that the organization that performed the user's identification issues the assertion.
4 Platform Design for User Authentication Using Provisioning in Cloud Computing Environment
4.1 User Authentication Using Provisioning
The user authentication platform using provisioning first authenticates using the ID/password, PKI, SSO, etc. that the user inputs. Second, it authenticates with the Authentication Manager through the user profile and patterns, and stores changes of state via the Monitor. To relieve the inconvenience of repeated user authentication when using cloud computing services, the user's information is stored in the User Information store. Figure 3 shows the conceptual architecture.
Fig. 3. Conceptual Architecture
For user authentication, the Authentication Module holds the previous user profile and log data recorded according to the provisioning strategy.
4.2 Design of User Authentication Platform Using Provisioning
Figure 4 shows the composition of the user authentication platform using provisioning proposed in this paper. To authenticate a user, the Analyzer first analyzes information such as ID/password, time, position, and place, and analyzes the user pattern using the Rules for user authentication. To analyze the user pattern, it authenticates the user by comparing the current status against the Rules and the Profile DB. The Profile DB stores the user profile, such as previous login time, location, and position. The platform also monitors changes in the user's status and situation via the Status Monitor, records them in the database, and puts user information into the User Information store. In this way, the user authentication process that the service provider requests whenever users access cloud computing services can be streamlined.
Fig. 4. Authentication Platform
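The Analyzer step described above can be sketched as follows; the profile fields and the concrete rule (a usual login window plus known locations) are illustrative assumptions, since the paper does not fix them:

```python
from datetime import time

# Sketch of the Analyzer: after the credential check, the login attempt
# is compared against the stored profile (usual login hours, locations).

PROFILE_DB = {
    "user1": {"password": "secret",
              "hours": (time(8, 0), time(20, 0)),
              "locations": {"Daejeon", "Seoul"}},
}

def authenticate(user, password, login_time, location):
    profile = PROFILE_DB.get(user)
    if profile is None or profile["password"] != password:
        return False                       # first factor: credentials
    start, end = profile["hours"]
    in_hours = start <= login_time <= end  # rule: usual login window
    known_loc = location in profile["locations"]
    return in_hours and known_loc          # pattern must match the profile
```

A production Profile DB would of course store hashed credentials and be updated by the Status Monitor as new logins are observed.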
5 Conclusions
In this paper, a user authentication platform using provisioning in the cloud computing environment was proposed; its features are as follows. In the existing cloud computing environment, obtaining user authentication is troublesome for users, because they have to go through the user authentication process every single time they use a service, using the ID and authentication method the service provider issued. The user authentication platform using provisioning resolves this inconvenience and helps users use cloud computing services easily. The proposed platform architecture analyzes user information and authenticates a user through the user profile. It also stores user information gathered through user monitoring, with the advantage that the user authentication process required by the service provider can be omitted when using other cloud computing services. As further study, the protection of user information, namely the profile and log data in the proposed platform, should be investigated. Acknowledgments. This work was supported by the Security Engineering Research Center, granted by the Korea Ministry of Knowledge Economy.
References 1. Lee, T.: Features of Cloud Computing and Offered Service State by Individual Vender. Broadcasting and Communication Policy 22, 18–22 (2000) 2. http://www.gartner.com/technology/symposium/2009/sym19/ about.jsp 3. Lee, J., Choi, D.: New trend of IT led by Cloud Computing. LG Economic Research Institute (2010) 4. Lee, H., Chung, M.: Context-Aware Security for Cloud Computing Environment. IEEK 47, 561–568 (2010) 5. Kim, J., Kim, H.: Cloud Computing Industry Trend and Introduction Effect. IT Insight, National IT Industry Promotion Agency (2010)
6. http://en.wikipedia.org/wiki/Cloud_computing#History 7. Dikaiakos, M.D., et al.: Cloud Computing Distributed Internet Computing for IT and Scientific Research. In: IEEE Internet Computing, pp. 10–13 (September/October 2009) 8. Harauz, J., et al.: Data Security in the World of Cloud Computing. In: IEEE Security & Privacy, pp. 61–64 (2009) 9. Lee, J.: Cloud Compting, Changes IT Industry Paradigm. LG Business Insight, 40–46 (2009) 10. Kim, H., Park, C.: Cloud Computing and Personal Authentication Service. KIISC 20, 11– 19 (2010) 11. Armbust, M., et al.: Above the Clouds: A Berkeley View of Cloud Computing. Technical Report, http://www.eeec.berkeley.edu/Pubs/TechRpts/2009/ EEEC-2009-28.html (2009) 12. Un, S., et al.: Cloud Computing Security Technology. ETRI 24(4), 79–88 (2009)
Profile for Effective Service Management on Mobile Cloud Computing Changbok Jang, Hyokyung Chang, Hyosik Ahn, Yongho Kang, and Euiin Choi* Dept. of Computer Engineering, Hannam University, Daejeon, Korea {chbjang,hkjang,hsahn}@dblab.hannam.ac.kr,
[email protected],
[email protected]
Abstract. Mobile Cloud Computing has emerged as a new IT paradigm, driven by the growth of mobile devices such as smartphones and the appearance of the Cloud Computing environment. The mobile cloud environment provides various services and IT resources according to users' requests, so effective management of services and IT resources is required. Hence, this paper designs a profile on a mobile cloud service platform in order to provide distributed IT resources and services to users based on context-awareness information. Because the proposed profile uses context-aware information, it makes it possible to provide more accurate personalized services and to manage distributed IT resources.

Keywords: Cloud computing, Context-aware, Profile, Intelligence Service, Mobile cloud computing.
1   Introduction
The mobile market has recently been evolving rapidly, and cloud computing is spreading into mobile as well; this is why mobile cloud computing is becoming a new issue today. Cloud computing provides virtualized IT resources as a service by using Internet technology. In cloud computing, a user borrows IT resources (software, storage, servers, networks) as needed, uses them with a guarantee of real-time scalability according to the service load, and pays as he/she goes. Because the cloud computing environment distributes IT resources and allocates them according to users' requests, technology that manages these resources and handles them effectively needs to be studied [1]. Mobile cloud computing creates a new opportunity for the IT industry because it combines the superiority and economy of cloud computing with the mobility and convenience of mobile devices, producing a synergy for both. Mobile cloud computing refers to an infrastructure in which data storage and data processing are done outside the mobile device by using cloud computing, regardless of the kind of mobile device. Mobile devices used in the mobile environment hold personal information and provide an environment in which a variety of context-aware information can be collected, and users' demand for services suited to their individual situations has been increasing.
* Corresponding author.
T.-h. Kim et al. (Eds.): ACN 2011, CCIS 199, pp. 139–145, 2011. © Springer-Verlag Berlin Heidelberg 2011
Context-aware reasoning techniques have therefore been studied to provide suitable services by using the user's context and personal profile information in the mobile environment [2-9]. In a context-aware system, a formal context model has to be provided to store and manage context and to offer the information needed by applications. However, such context-aware models face technical constraints: they cannot be applied directly to mobile platforms because of limited device resources, so the study of intelligent mobile services on mobile platforms is still insufficient. Recent interest related to the mobile cloud centers on personal smartphones. Studies on physical support, such as connecting a smartphone to a personal virtual system on the cloud and using computing resources without limit, are quite active, but how to manage distributed IT resources effectively and provide intelligent mobile services through reasoning over collected information, with the mobile device acting as a medium for collecting context, has been neglected. Therefore, this paper designs a profile based on context-aware knowledge for a mobile cloud service platform, aiming at optimized mobile cloud services by recognizing the conditions of the user and the cloud server and by reasoning over the external context obtained from the mobile device, the user's personal information, and the resource and service-use information from the cloud server.
2   Related Works
A mobile platform mainly refers to mobile middleware that lets users run optimized contents or services on a mobile device; it provides a uniform interface to the UI and services by using an RTOS (real-time OS) and hardware functions. Windows Mobile, iPhone, Android, and Symbian are examples of such mobile platforms. The object of awareness in mobile cloud computing, which differs from previous computing environments, is not the human but the device [10]. This means that the information to be recognized must be personalized. The inference power of computers has not been sufficient to understand human actions and thoughts; yet since systems must actively provide various services and information by recognizing human behavior and thought, there is a need to define how human behavior and thought are expressed as context information. Moreover, because behavior and thought differ from person to person, understanding personal inclination is an important problem. To solve this problem in mobile cloud computing, context that captures human behavior is inferred from situation information collected from various sensors, and services and information are then provided to the user through the inferred context. In general, the situation arising around a person can be collected from sensors, but personal inclination and thought cannot. Therefore, stores of personal information such as profiles, histories, and diaries have been used to analyze inclination [11,12]. As mentioned above, a user of mobile cloud computing is provided with various services anywhere and anytime by mobile devices, without explicit human intervention, so context must be inferred to deliver those services to users correctly. Accordingly, context-inference techniques that use personal inclination and information have been studied. Since context inference requires personal inclination and information, and the user's profile is used to store them, research on profile techniques is as follows:
The UbiData project addressed data processing and synchronization, using an architecture with sophisticated hoarding, synchronization, and transcoding algorithms to enable continuous availability of data regardless of user mobility and disconnection, and regardless of the mobile device and its data viewing/processing applications [13,14]. Annika Hinze describes the TIP (Tourism Information Provider) system, which delivers various types of information to mobile devices based on location, time, the profiles of end users, and their "history", i.e., their accumulated knowledge. The system hinges on a hierarchical semantic geospatial model as well as on an Event Notification System (ENS) [15]. Annie Chen proposed a context-aware collaborative filtering system that can predict user preferences in different context situations based on past user experiences; it uses what other like-minded users have done in a similar context to predict a user's preference for an item in the current context [16]. Manuele Kirsch-Pinheiro proposed a context-based filtering process aimed at adapting the awareness information delivered to mobile users by collaborative web systems. This filtering process relies on a model of context that integrates both physical and organizational dimensions and allows representation of the user's current context as well as general profiles. These profiles are descriptions of potential user contexts and express awareness-information filtering rules to apply when the user's current context matches one of them; given a context, the rules reflect user preferences. The filtering process performs two steps: the first identifies the general profiles that apply, and the second selects the awareness information [17]. However, these profile techniques are not sufficient for mobile cloud computing.
Therefore, this paper proposes a profile that manages resources more effectively by using personal context information, and that models and reasons over context-aware information on the mobile platform.
3   System Architecture
In this paper, we suggest a context-aware-based intelligent mobile cloud service platform that manages resources efficiently by using context-aware information. As shown in Figure 1, the suggested system consists of an intelligence agent and intelligence middleware. The intelligence agent is responsible for understanding a variety of context-aware information and inferring from it; it consists of sub-modules such as a service module, a context-aware preprocessor, a personal profile, and a context-aware information modeling database. The intelligence middleware is responsible for providing services and efficiently managing IT resources according to users' requests in mobile cloud computing. The context-aware preprocessor in the intelligence agent collects context-aware information, models it, infers from it, and is responsible for understanding the user's situation. The service module is responsible for sending context-aware information to the intelligence middleware and for providing services suitable to the user. The personal profile is a repository that stores personal information, such as the services the user uses and the user's ID and password. The context-aware modeling database stores information modeled by using an ontology. The intelligence middleware consists of an interaction interface for communicating with the agent, a resource manager, a service manager,
a service catalog, and a provisioning rule database. The resource manager, responsible for effectively allocating and managing the service information required to process the user's requested service, consists of a monitoring module, a provisioning module, and a scheduler. The monitoring module crawls information on IT resource utilization. The provisioning module sets up a plan for providing the best service by analyzing the context-aware information transferred from the user and the utilization information of the IT resources. The scheduler schedules the use of services and resources according to the plan established by the provisioning module. The service catalog stores information on the services the user has used, and the provisioning rule database stores rules for producing the best provisioning process from context-aware information and resource utilization. The service module is also responsible for executing services and for using distributed IT resources to provide services to the user; it contains a synchronization sub-module, which is responsible for synchronizing the resources the user is using in cloud computing.
Fig. 1. Suggested platform architecture
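The agent/middleware interplay described above can be sketched as follows. This is only an illustrative outline, not the paper's implementation; every class, method, and rule name here is our own choice.

```python
# Illustrative sketch of the suggested platform: the intelligence agent
# infers a situation from context, and the middleware's resource manager
# turns that situation into a provisioning plan on the least-loaded resource.

class ContextAwarePreprocessor:
    """Agent side: collects raw context and infers the user's situation."""
    def infer(self, raw_context):
        # A real system would run ontology-based reasoning here; we simply
        # treat the highest-weighted observation as the inferred situation.
        return max(raw_context, key=raw_context.get)

class ResourceManager:
    """Middleware side: monitoring -> provisioning plan -> scheduling."""
    def __init__(self, rules):
        self.rules = rules          # stands in for the provisioning rule database
        self.utilization = {}      # filled by the monitoring module

    def monitor(self, resource, load):
        self.utilization[resource] = load

    def provision(self, situation):
        # Provisioning module: match the situation against the rule database
        # and pick the least-loaded resource reported by monitoring.
        plan = self.rules.get(situation, "default-plan")
        resource = min(self.utilization, key=self.utilization.get)
        return {"plan": plan, "resource": resource}

agent = ContextAwarePreprocessor()
mw = ResourceManager(rules={"commuting": "prefetch-to-edge"})
mw.monitor("server-a", load=0.9)
mw.monitor("server-b", load=0.2)

situation = agent.infer({"commuting": 0.8, "at-desk": 0.1})
print(mw.provision(situation))  # {'plan': 'prefetch-to-edge', 'resource': 'server-b'}
```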
4   Profile Structure

4.1   Definition of User Profiles
A user's profile specifies information of interest for an end user. In this paper, the profile consists of a user-information part and a service-information part. The user-information part stores the user's information, such as the user's name, inclinations, and hobbies, and the service-information part stores the services that have been used, such as the service name, the service provider, etc.
The structure of the user profile is as follows:
- User Information: user name, user ID, personal inclination, hobby, etc.
- Service Information: service name, service provider, service context, service frequency value, etc.
Because the profile records how much each service is used, it stores not only the used service but also when, how, and where the service was used, along with the context in which it was used. A DTD for the proposed profile was also defined.
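The paper's actual DTD is not reproduced here. As an illustration only, the two-part profile described above might be rendered as an XML document like this (all element names are our own assumptions):

```python
# Hypothetical XML rendering of the two-part profile: a userInformation
# part and a serviceInformation part, built with the standard library.
import xml.etree.ElementTree as ET

profile = ET.Element("profile")

user = ET.SubElement(profile, "userInformation")
ET.SubElement(user, "name").text = "Alice"
ET.SubElement(user, "userId").text = "alice01"
ET.SubElement(user, "inclination").text = "music"

svc = ET.SubElement(profile, "serviceInformation")
s = ET.SubElement(svc, "service")
ET.SubElement(s, "serviceName").text = "NewsFeed"
ET.SubElement(s, "provider").text = "ProviderX"
ET.SubElement(s, "context").text = "morning/home"
ET.SubElement(s, "frequency").text = "3"

xml_text = ET.tostring(profile, encoding="unicode")
print(xml_text)
```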
4.2   Profile Manipulation
We assume that if a service was demanded at a specific location and time, it will be used at that place and time again. We therefore use time, location, and frequency information to provide services to the user more accurately, and propose a profile that records the most recent access time, access time, frequency of access, location value, and weekday value. These values are stored in the profile.
- Most recent access time (t): the time the service was last used; it is used to find services that have not been used for a long time.
- Access time (a): a value from 0 to 23; if a service was used at 1 P.M., this value is 13.
- Frequency of access (f): how many times the user has used the service.
- Location value (l): a unique number for the place where the service was used. For example, if the user used service A both at home and at the office, the location value of service A used at home might be 1 and the other 10.
- Weekday value (e): a value from 1 to 7; if the service was used on a Monday, the weekday value is 1. People's life patterns generally repeat weekly, so we use this value to analyze the user's weekly service frequency.
When a service is used, we need the location information in order to infer the user's inclination and context efficiently. We therefore classify locations simply, giving each a unique value, as follows:
- Home: bathroom (1), bedroom (2)
- Office: lobby (3), elevator (4), floor (5), office room (6), conference room (7)
- Other: street (8), car (9), etc.
We represent the frequency of access in a 3-dimensional graph with three coordinate values (access time, location value, weekday value). For example, if the user demands service A at 7 A.M. on Monday in the bathroom, then the weekday value is 1 (Monday means 1 in the model), the access time is 7, and the location value is 1, so the frequency of service A is represented at coordinate (7, 1, 1) with value 1. If the user demands service A again at the same time and place, the frequency of service A at coordinate (7, 1, 1) increases by 1. We then find the location with the highest frequency at the corresponding coordinates and stage the service in the storage at the place where the user will use it.
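The (access time, location value, weekday value) bookkeeping above can be sketched as follows; a dictionary keyed by the 3-D coordinate stands in for the frequency graph, and the class and method names are our own.

```python
# Sketch of per-service frequency tracking on the 3-D coordinate
# (access_time, location_value, weekday_value) described in the text.
from collections import defaultdict

class ServiceProfile:
    def __init__(self):
        # (access_time, location_value, weekday_value) -> frequency of access
        self.freq = defaultdict(int)
        self.recent_access_time = None   # the t value in the profile

    def record_use(self, hour, location, weekday, timestamp=None):
        self.freq[(hour, location, weekday)] += 1
        self.recent_access_time = timestamp

    def best_location(self):
        # Location with the highest accumulated frequency: the place
        # where the service should be staged for the user in advance.
        coord, _ = max(self.freq.items(), key=lambda kv: kv[1])
        return coord[1]

p = ServiceProfile()
p.record_use(hour=7, location=1, weekday=1)   # 7 A.M., bathroom, Monday
p.record_use(hour=7, location=1, weekday=1)   # same slot the next week
p.record_use(hour=13, location=6, weekday=3)  # 1 P.M., office room, Wednesday
print(p.freq[(7, 1, 1)])   # 2
print(p.best_location())   # 1  (the bathroom, the most frequent location)
```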
5   Conclusion and Future Work
In this paper, to provide users with suitable services and to manage resources effectively by using context information in the mobile cloud environment, we proposed a profile that stores the location, time, and frequency information of frequently used services and stages each service at the location where it is expected to be used. The system uses a profile in the form of an XML document and classifies the information recorded when a context arises by elements such as location, time, day of the week, and frequency. When the user is at a specific place, our system provides services to the user based on the location, time, day-of-week, and frequency information stored in the user's profile. As further research, we need an algorithm for assessing the similarity between current and past contexts, techniques for extracting and modeling context information, a resource management technique that manages distributed IT resources effectively by using context information, and an evaluation that examines performance through tests after implementing the proposed platform.

Acknowledgments. This research was financially supported by the Ministry of Education, Science and Technology (MEST) and the National Research Foundation of Korea (NRF) through the Human Resource Training Project for Regional Innovation.
References
1. Goyal, A., Dadizadeh, S.: A Survey on Cloud Computing. University of British Columbia Technical Report for CS 508 (2009)
2. Hess, C.K., Campbell, R.H.: An application of a context-aware file system. Pervasive Ubiquitous Computing 7(6) (2003)
3. Khungar, S., Riekki, J.: A Context Based Storage for Ubiquitous Computing Applications. In: Proceedings of the 2nd European Union Symposium on Ambient Intelligence, pp. 55–58 (2004)
4. Mayrhofer, R.: An Architecture for Context Prediction. Trauner Verlag, Schriften der Johannes-Kepler-Universität Linz C45 (2005)
5. Byun, H.E., Cheverst, K.: Exploiting User Models and Context-Awareness to Support Personal Daily Activities. In: Bauer, M., Gmytrasiewicz, P.J., Vassileva, J. (eds.) UM 2001. LNCS (LNAI), vol. 2109. Springer, Heidelberg (2001)
6. Byun, H.E., Cheverst, K.: Utilising context history to support proactive adaptation. Journal of Applied Artificial Intelligence 18(6), 513–532 (2004)
7. Sur, G.M., Hammer, J.: Management of user profile information in UbiData. University of Florida Technical Report TR03-001 (2003)
8. Biegel, G., Cahill, V.: A Framework for Developing Mobile, Context-aware Applications. In: IEEE International Conference on Pervasive Computing and Communications, PerCom (2004)
9. Gu, T., Pung, H.K., Zhang, D.Q.: A Middleware for Building Context-Aware Mobile Services. In: Proceedings of the IEEE Vehicular Technology Conference, VTC (2004)
10. Weiser, M.: Hot topics: ubiquitous computing. IEEE Computer 26(10) (1993)
11. Barkhuus, L., Dey, A.: Is context-aware computing taking control away from the user? Three levels of interactivity examined. In: Dey, A.K., Schmidt, A., McCarthy, J.F. (eds.) UbiComp 2003. LNCS, vol. 2864, pp. 149–156. Springer, Heidelberg (2003)
12. Elfeky, M.G., Aref, W.G., Elmagarmid, A.K.: Using convolution to mine obscure periodic patterns in one pass. In: Hwang, J., Christodoulakis, S., Plexousakis, D., Christophides, V., Koubarakis, M., Böhm, K. (eds.) EDBT 2004. LNCS, vol. 2992, pp. 605–620. Springer, Heidelberg (2004)
13. Sur, G.M., Hammer, J.: Management of user profile information in UbiData. Technical Report TR03-001, Dept. of CISE, University of Florida, Gainesville (2003)
14. Zhang, J., Helal, A.S., Hammer, J.: UbiData: ubiquitous mobile file service. In: Eighteenth ACM Symposium on Applied Computing (2003)
15. Hinze, A., Voisard, A.: Location- and time-based information delivery in tourism. In: Advances in Spatial and Temporal Databases (2003)
16. Chen, A.: Context-aware collaborative filtering system: Predicting the user's preference in the ubiquitous computing environment. In: Strang, T., Linnhoff-Popien, C. (eds.) LoCA 2005. LNCS, vol. 3479, pp. 244–253. Springer, Heidelberg (2005)
17. Kirsch-Pinheiro, M., Villanova-Oliver, M., Gensel, J., Martin, H.: Context-Aware Filtering for Collaborative Web Systems: Adapting the Awareness Information to the User's Context. In: ACM Symposium on Applied Computing (2005)
Context Model Based on Ontology in Mobile Cloud Computing Changbok Jang and Euiin Choi* Dept. of Computer Engineering, Hannam University, Daejeon, Korea
[email protected],
[email protected]
Abstract. Mobile Cloud Computing has emerged as a new IT paradigm, driven by the growth of mobile devices such as smartphones and the appearance of the Cloud Computing environment. The mobile cloud environment provides various services and IT resources according to users' requests, so effective management of services and IT resources is required. Hence, this paper designs a context model based on ontology in mobile cloud computing in order to provide distributed IT resources and services to users based on context-awareness information. Because the proposed context model uses context-aware information, it makes it possible to provide more accurate personalized services and to manage distributed IT resources.

Keywords: Cloud computing, Context-aware, Context model, Ontology, Intelligence Service, Mobile cloud computing.
1   Introduction
The mobile market has recently been evolving rapidly, and cloud computing is spreading into mobile as well; this is why mobile cloud computing is becoming a new issue today. Cloud computing provides virtualized IT resources as a service by using Internet technology. In cloud computing, a user borrows IT resources (software, storage, servers, networks) as needed, uses them with a guarantee of real-time scalability according to the service load, and pays as he/she goes. Because the cloud computing environment distributes IT resources and allocates them according to users' requests, technology that manages these resources and handles them effectively needs to be studied [1]. Mobile cloud computing creates a new opportunity for the IT industry because it combines the superiority and economy of cloud computing with the mobility and convenience of mobile devices, producing a synergy for both. Mobile cloud computing refers to an infrastructure in which data storage and data processing are done outside the mobile device by using cloud computing, regardless of the kind of mobile device. Mobile devices used in the mobile environment hold personal information and provide an environment in which a variety of context-aware information can be collected, and users' demand for services suited to their individual situations has been increasing.
* Corresponding author.
T.-h. Kim et al. (Eds.): ACN 2011, CCIS 199, pp. 146–151, 2011. © Springer-Verlag Berlin Heidelberg 2011
Context-aware reasoning techniques have therefore been studied to provide suitable services by using the user's context and personal profile information in the mobile environment [2-9]. In a context-aware system, a formal context model has to be provided to store and manage context and to offer the information needed by applications. However, such context-aware models face technical constraints: they cannot be applied directly to mobile platforms because of limited device resources, so the study of intelligent mobile services on mobile platforms is still insufficient. Recent interest related to the mobile cloud centers on personal smartphones. Studies on physical support, such as connecting a smartphone to a personal virtual system on the cloud and using computing resources without limit, are quite active, but how to manage distributed IT resources effectively and provide intelligent mobile services through reasoning over collected information, with the mobile device acting as a medium for collecting context, has been neglected. Therefore, this paper designs a context model based on ontology in mobile cloud computing, aiming at optimized mobile cloud services by recognizing the conditions of the user and the cloud server and by reasoning over the external context obtained from the mobile device, the user's personal information, and the resource and service-use information from the cloud server.
2   Related Works
A mobile platform mainly refers to mobile middleware that lets users run optimized contents or services on a mobile device; it provides a uniform interface to the UI and services by using an RTOS (real-time OS) and hardware functions. Windows Mobile, iPhone, Android, and Symbian are examples of such mobile platforms. Context-aware information modeling techniques used in existing ubiquitous and Web environments include the key-value model, the markup-scheme model, the graphical model, the object-oriented model, and the ontology-based model. The ontology model, the context-aware model studied most in recent years, makes it easy to express concepts and their interactions. It has been studied actively in connection with Semantic Web research based on OWL (Web Ontology Language), and there is a movement to adopt ontology-based models in a variety of context-aware frameworks. One of the early methods of context modeling using ontology was proposed by Öztürk and Aamodt. In his study of ontology, Van Heijst divided ontologies into structure types and concept issues: structure types are classified as knowledge modeling ontology, information ontology, and terminological ontology, while concept issues are divided into domain ontology, application ontology, representation ontology, and generic ontology [10]. Guarino classified ontologies by their level of generality so as to represent contexts of different kinds [11]. Top-level ontologies describe general concepts like space, time, matter, object, event, and action. Domain ontologies and task ontologies describe the vocabulary of a generic domain or a generic task or activity by specializing the terms introduced in the top-level ontology. Application ontologies describe concepts that depend on both a particular domain and a task, and are often specializations of the related ontologies; these concepts correspond to roles played by domain entities while performing a certain activity. Context modeling in context-awareness
needs to acquire context initially; it is then necessary to model the acquired context so that it can be used. Many projects have used context models of particular types. The Context Toolkit [12] suggested middleware layers that acquire raw information, transform it into a form that applications can understand, and convey it to the application. Hydrogen was developed by Hofer [13]; this system is based on a hierarchical architecture, and its representation ability is admirable because it models with an object-oriented method, although its representation formality is incomplete. Karen's context information model is also based on an object-oriented method; this modeling concept provides a formal basis for representing and reasoning about some properties of context information, such as its persistence and other temporal characteristics, its quality, and its interdependencies [14]. He attempted to model using both the Entity-Relationship model and UML class diagrams. CASS (Context-Awareness Sub-Structure) [15] is a framework for context-aware mobile applications designed with a middleware approach; by separating the application from context inference, the middleware can infer context without recompiling. CONON (the Context Ontology) [16] is divided into an upper domain and specific sub-domains. Its context model is structured around a set of abstract entities, each describing a physical or conceptual object, including Person, Activity, Computational Entity, and Location, as well as a set of abstract sub-classes. The model supports extensibility for adding specific concepts in different application domains, and it supports logic reasoning to check the consistency of context information and to reason over low-level context, but it is difficult to represent diverse contexts when the upper context is restricted selectively. However, these context models are not sufficient for mobile cloud computing.
Therefore, this paper proposes a context model that manages resources more effectively by using personal context information, and that models and reasons over context-aware information on the mobile platform.
3   System Architecture
In this paper, we suggest a context-aware-based intelligent mobile cloud service platform that manages resources efficiently by using context-aware information. As shown in Figure 1, the suggested system consists of an intelligence agent and intelligence middleware. The intelligence agent is responsible for understanding a variety of context-aware information and inferring from it; it consists of sub-modules such as a service module, a context-aware preprocessor, a personal profile, and a context-aware information modeling database. The intelligence middleware is responsible for providing services and efficiently managing IT resources according to users' requests in mobile cloud computing. The context-aware preprocessor in the intelligence agent collects context-aware information, models it, infers from it, and is responsible for understanding the user's situation. The service module is responsible for sending context-aware information to the intelligence middleware and for providing services suitable to the user. The personal profile is a repository that stores personal information, such as the services the user uses and the user's ID and password. The context-aware modeling database stores information modeled by using an ontology.
Fig. 1. Suggested platform architecture
The intelligence middleware consists of an interaction interface for communicating with the agent, a resource manager, a service manager, a service catalog, and a provisioning rule database. The resource manager, responsible for effectively allocating and managing the service information required to process the user's requested service, consists of a monitoring module, a provisioning module, and a scheduler. The monitoring module crawls information on IT resource utilization. The provisioning module sets up a plan for providing the best service by analyzing the context-aware information transferred from the user and the utilization information of the IT resources. The scheduler schedules the use of services and resources according to the plan established by the provisioning module. The service catalog stores information on the services the user has used, and the provisioning rule database stores rules for producing the best provisioning process from context-aware information and resource utilization. The service module is also responsible for executing services and for using distributed IT resources to provide services to the user; it contains a synchronization sub-module, which is responsible for synchronizing the resources the user is using in cloud computing.
4   Classification of Context Model
In mobile cloud computing, the context-aware information that can be used includes the user's profile, the services the user has used, and the resources for providing services. We also need provisioning techniques in order to manage resources more effectively in mobile
cloud computing, as well as multimodal techniques for supporting a convenient user interface and for inferring the user's intention more accurately. We therefore include entities such as provision and activity. Figure 2 shows each entity and its relational properties.
Fig. 2. Context model on mobile cloud computing
In this paper, the generic ontology comprises user, service, resource, provision, and activity. These are connected with each other through relational properties (e.g., locatedIn between User and Location). Each generic ontology includes domain ontologies as detailed material and immaterial entities (e.g., User and Location). Consequently, the hierarchical ontology classification provides extensibility and a formal representation ability.
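The hierarchical classification above can be sketched in plain Python as follows. This is only an illustration of the generic-layer/domain-layer split and the locatedIn relational property; the paper models these in an ontology language (e.g., OWL), not in code, and every name below other than the five entities is our own.

```python
# Sketch of the hierarchical ontology: five generic (top-level) entities,
# a domain-level sub-entity, and a relational property linking instances.

class Entity:
    def __init__(self, name):
        self.name = name
        self.properties = {}   # relational properties, e.g. locatedIn

# Generic ontology layer, mirroring the entities named in the text.
class User(Entity): pass
class Service(Entity): pass
class Resource(Entity): pass
class Provision(Entity): pass
class Activity(Entity): pass

# Domain ontology layer: a concrete concept under the generic layer.
class Location(Entity): pass

alice = User("alice")
home = Location("home")
alice.properties["locatedIn"] = home   # the locatedIn relational property

# Extensibility: a new domain concept is simply another subclass.
class ConferenceRoom(Location): pass

print(alice.properties["locatedIn"].name)     # home
print(issubclass(ConferenceRoom, Location))   # True
```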
5   Conclusion and Future Work
Context modeling in context-awareness needs to acquire context initially; it is then necessary to model the acquired context so that it can be used. In this paper, we have proposed a context model that provides users with suitable services and manages resources effectively by using context information in the mobile cloud environment. We have also defined the context to be modeled by examining diverse context definitions, classified the ontology, and represented it hierarchically. The proposed context model is expected to support optimized personalized services and effective IT resource management in the mobile cloud environment. As further research, we will add functions for inference, study the interpretation and inference of high-level context, study a resource management technique that manages distributed IT resources effectively by using context information, and examine performance through tests after implementing the proposed platform.
Acknowledgments. This research was financially supported by the Ministry of Education, Science and Technology (MEST) and the National Research Foundation of Korea (NRF) through the Human Resource Training Project for Regional Innovation.
References
1. Goyal, A., Dadizadeh, S.: A Survey on Cloud Computing. University of British Columbia Technical Report for CS 508 (2009)
2. Hess, C.K., Campbell, R.H.: An application of a context-aware file system. Pervasive Ubiquitous Computing 7(6) (2003)
3. Khungar, S., Riekki, J.: A Context Based Storage for Ubiquitous Computing Applications. In: Proceedings of the 2nd European Union Symposium on Ambient Intelligence, pp. 55–58 (2004)
4. Mayrhofer, R.: An Architecture for Context Prediction. Trauner Verlag, Schriften der Johannes-Kepler-Universität Linz C45 (2005)
5. Byun, H.E., Cheverst, K.: Exploiting User Models and Context-Awareness to Support Personal Daily Activities. In: Bauer, M., Gmytrasiewicz, P.J., Vassileva, J. (eds.) UM 2001. LNCS (LNAI), vol. 2109. Springer, Heidelberg (2001)
6. Byun, H.E., Cheverst, K.: Utilising context history to support proactive adaptation. Journal of Applied Artificial Intelligence 18(6), 513–532 (2004)
7. Sur, G.M., Hammer, J.: Management of user profile information in UbiData. University of Florida Technical Report TR03-001 (2003)
8. Biegel, G., Cahill, V.: A Framework for Developing Mobile, Context-aware Applications. In: IEEE International Conference on Pervasive Computing and Communications, PerCom (2004)
9. Gu, T., Pung, H.K., Zhang, D.Q.: A Middleware for Building Context-Aware Mobile Services. In: Proceedings of the IEEE Vehicular Technology Conference, VTC (2004)
10. Guarino, N.: Formal Ontology in Information Systems. In: Proceedings of FOIS 1998, Trento, Italy, June 6-8 (1998)
11. Schilit, B.N., Adams, N., Want, R.: Context-Aware Computing Applications. In: IEEE Workshop on Mobile Computing Systems and Applications, December 8-9 (1994)
12. Wu, H., Siegel, M., Ablay, S.: Sensor Fusion for Context Understanding. In: IEEE Instrumentation and Measurement Technology Conference, AK, USA, May 21-23 (2002)
13. Hofer, T., Schwinger, W., Pichler, M., Leonhartsberger, G., Altmann, J.: Context-awareness on mobile devices – the hydrogen approach. In: Proceedings of the 36th Annual Hawaii International Conference on System Sciences, pp. 292–301 (2003)
14. Fahy, S., Clarke, S.: A middleware for mobile context-aware applications. In: Workshop on Context Awareness, MobiSys (2004)
15. Schilit, B., Theimer, M.: Disseminating Active Map Information to Mobile Hosts. IEEE Network 8(5), 22–32 (1994)
16. Wang, X., Zhang, D., et al.: Ontology-Based Context Modeling and Reasoning using OWL. In: Workshop on Context Modeling and Reasoning at IEEE International Conference on Pervasive Computing and Communication, PerCom 2004, Orlando, Florida, March 14 (2004)
SPECC - A New Technique for Direction of Arrival Estimation In-Sik Choi Department of Electronic Engineering, Hannam University 133 Ojung-dong, Daeduk-Gu, Daejeon 306-791, Republic of Korea
[email protected]
Abstract. In this paper, a novel direction of arrival (DOA) estimation scheme is proposed. The proposed algorithm, called signal parameter extraction via component cancellation (SPECC), is an evolutionary-optimization-based method that extracts the amplitudes and DOAs of the signal sources impinging on a sensor array in a step-by-step procedure, unlike algorithms such as MUSIC, root-MUSIC, and ESPRIT, which extract all parameters simultaneously. Our algorithm is robust to noise and offers high resolution in DOA estimation. In the simulation, comparisons with root-MUSIC, which is known as one of the best such algorithms, are presented to illustrate the superiority of the proposed SPECC algorithm. Keywords: direction of arrival, signal parameter extraction via component cancellation, sensor array.
1 Introduction
Extraction of signal parameters from the data received by an array antenna is a critical issue in radar, sonar, and communication systems such as smart antennas and real-time location systems (RTLS). In previous research, a variety of techniques for DOA estimation have been proposed. Well-known methods include the maximum likelihood (ML) technique [1], multiple signal classification (MUSIC) [2], root-MUSIC [3], estimation of signal parameters via rotational invariance techniques (ESPRIT) [4], and the genetic algorithm (GA)-based method [5]. Each algorithm has strengths and weaknesses relative to the others. In this paper, a new DOA estimation algorithm, called signal parameter extraction via component cancellation (SPECC), is proposed [6]. The previously developed algorithms mentioned above extract the parameters (amplitudes and DOAs) of all source signals at the same time. In our algorithm, however, the parameters of each of the multiple source signals impinging on a sensor array are extracted in a step-by-step procedure. For the optimization of the cost function, we use evolutionary programming (EP), since EP is a stochastic process and does not suffer from the local-minimum problem.
T.-h. Kim et al. (Eds.): ACN 2011, CCIS 199, pp. 152–160, 2011. © Springer-Verlag Berlin Heidelberg 2011
Our algorithm has the characteristics of high resolution, robustness, and accuracy. Of the many DOA estimation methods, the root-MUSIC algorithm is considered one of the best; therefore, we compare our algorithm with root-MUSIC. In the simulation, we conduct several tests to verify the high resolution, robustness to noise, and accuracy of the proposed SPECC algorithm.
2 Evolutionary Programming
Traditionally, gradient-based methods are used for complex function optimization. However, gradient-based methods suffer from the problem of local minima because of their local search behavior. Therefore, global optimization algorithms, such as genetic algorithms (GA), evolution strategies (ES), and evolutionary programming (EP), have emerged as efficient and robust search methods. In this paper, we use EP, since EP is suitable for real-valued, high-precision optimization problems. EP is performed using the following procedure.

STEP 1. (Initialization) Let q = 1, where q is the generation index, and generate an initial set of N_P trial solutions with uniform distribution within the given domain. The nth individual vector is defined as

    X_n = [x_n1, x_n2, ..., x_nJ],  n = 1, 2, ..., N_P    (2.1)

where x_nj (j = 1, 2, ..., J) is the jth component of the nth individual.

STEP 2. (Evaluation) Evaluate the cost C(X_n) of each parent solution.

STEP 3. (Mutation) For each of the N_P parents X_n, generate an offspring X'_n as follows:

    x'_nj = x_nj + σ_nj · N_j(0,1),
    σ'_nj = σ_nj · exp(τ' · N(0,1) + τ · N_j(0,1))    (2.2)

where σ_nj is the jth standard deviation of the nth individual, N_j(0,1) is a normally distributed random number N(0,1) sampled anew for each j, and τ and τ' are user-defined constants [7].

STEP 4. (Evaluation) Evaluate the cost of each offspring.

STEP 5. (Selection) Compare each of the 2N_P solutions with 10 randomly selected opponents and count the number of wins. Select the N_P solutions out of the 2N_P solutions having the most wins. Let q = q + 1.

STEP 6. (Termination check) Proceed to STEP 3 unless the available execution time is exhausted or an acceptable solution has been discovered.
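The six steps above can be condensed into a compact routine. The sketch below is a minimal EP loop in Python, assuming the common choices τ = 1/√(2√J) and τ' = 1/√(2J) for the user-defined constants and a bounded search domain; it illustrates the procedure only and is not the paper's implementation.

```python
import math
import random

def ep_minimize(cost, dim, bounds, n_pop=30, n_gen=200, n_opp=10, seed=1):
    """Minimal EP loop following STEPs 1-6; `cost` maps a list of
    `dim` floats to a scalar, `bounds` is the (lo, hi) search domain."""
    rng = random.Random(seed)
    lo, hi = bounds
    tau = 1.0 / math.sqrt(2.0 * math.sqrt(dim))    # assumed constants
    tau_p = 1.0 / math.sqrt(2.0 * dim)
    # STEP 1: uniform initial population; one sigma per component.
    pop = [([rng.uniform(lo, hi) for _ in range(dim)],
            [(hi - lo) / 10.0] * dim) for _ in range(n_pop)]
    for _ in range(n_gen):                          # STEP 6 loops back here
        offspring = []
        for x, s in pop:                            # STEP 3, eq. (2.2)
            g = rng.gauss(0.0, 1.0)                 # shared N(0,1) factor
            s2 = [sj * math.exp(tau_p * g + tau * rng.gauss(0.0, 1.0))
                  for sj in s]
            x2 = [min(hi, max(lo, xj + sj * rng.gauss(0.0, 1.0)))
                  for xj, sj in zip(x, s)]          # mutate x with old sigma
            offspring.append((x2, s2))
        union = pop + offspring                     # STEPs 2 and 4: evaluate
        costs = [cost(x) for x, _ in union]
        # STEP 5: each solution meets n_opp random opponents; most wins survive.
        wins = [sum(costs[i] <= costs[rng.randrange(len(union))]
                    for _ in range(n_opp)) for i in range(len(union))]
        order = sorted(range(len(union)), key=lambda i: (-wins[i], costs[i]))
        pop = [union[i] for i in order[:n_pop]]
    return min(pop, key=lambda ind: cost(ind[0]))[0]
```

Note that the best solution always wins all of its tournaments, so the loop is effectively elitist; ties are broken by cost.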
3 Proposed Algorithm
The proposed SPECC algorithm extracts the desired number of signal sources in order from the highest energy to the lowest. Thus, the dimension of each individual becomes very low compared with the GA-based method [5], which extracts the parameters of all signal sources simultaneously. In that method, each individual is composed of the DOAs, amplitudes, and relative phases of all signal sources, whereas in our algorithm each individual is composed of the parameters of only one signal source. As the dimension of the individual vector increases, premature convergence to a local minimum is more likely to occur and the convergence time becomes much longer [8]. In this paper, we consider a linear array with I sensor elements, as in Fig. 1, and M narrowband receiving signals.
Fig. 1. Geometry of linear array antenna
If the first array element is taken as the reference point, the complex signal received by the ith element can be expressed as

    y_i(k) = Σ_{m=1}^{M} A_m(k) · exp[j(2π/λ)(i−1)d cos θ_m] + n_i(k),  i = 1, 2, ..., I    (3.1)
where λ is the central wavelength of the signal, d is the distance between the array elements, θ_m is the DOA of the mth source signal, A_m(k) is the complex amplitude of the mth source signal at the kth time sample, and n_i(k) is the additive noise at the ith array element at the kth time sample. The detailed SPECC algorithm is as follows:

STEP 1. (Initialization) Set m = 1, where m is the index of the iteration extracting the mth signal source, and define the received signal at the ith array element as y_i^m(k) for i = 1, 2, ..., I, where I is the number of array elements.

STEP 2. (Parameter extraction) Obtain the complex coefficient A_m(k) and the angle θ_m that minimize the following cost function C_m(k) of the mth iteration using the EP subroutine:
    C_m(k) = Σ_{i=1}^{I} | y_i^m(k) − A_m(k) · exp[j(2π/λ)(i−1)d cos θ_m] |²    (3.2)
In the EP subroutine, the individual vector is composed of the real and imaginary parts of A_m(k) and of θ_m. The EP subroutine is terminated when the available execution time has elapsed, because the minimum value of C_m(k) is not known in advance.

STEP 3. (Component cancellation) Subtract the component of the signal source determined in STEP 2 from y_i^m(k) and obtain y_i^{m+1}(k) as follows:

    y_i^{m+1}(k) = y_i^m(k) − A_m(k) · exp[j(2π/λ)(i−1)d cos θ_m],  i = 1, ..., I    (3.3)
STEP 4. (Termination check) Let m = m + 1. Return to STEP 2 unless the desired M components have been extracted.

As explained above, the SPECC algorithm recursively estimates the DOA and amplitude of each source signal. During each iteration, the highest-energy source in the remaining received signal y_i^m(k) is determined, and its DOA and amplitude are taken as the parameters of that source signal. After determining one source signal, the SPECC algorithm subtracts the determined signal component from the remaining complex signal y_i^m(k) and obtains y_i^{m+1}(k), the remaining signal used in the next iteration. This procedure is repeated until the residual energy falls below a predefined threshold related to the noise level, or until the iteration index m reaches the predefined (or estimated) value M. In this paper, we assume that M is known or pre-estimated. Typically, AIC [9] or MDL [10] could be used to estimate the total number of signal sources, but they have a high computational cost and may fail in noisy environments. Unlike in MUSIC or root-MUSIC, a false estimate of the number of signal sources (M) does not affect the accuracy of the extracted parameter values in our algorithm. Therefore, our algorithm can even be run with a sufficiently large M while the magnitude of each extracted source signal is observed: if the signal magnitude becomes relatively small, we can regard this signal as noise and stop the SPECC algorithm.
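Under the signal model of eq. (3.1), STEPs 1–4 can be sketched as follows. For brevity, the EP subroutine of STEP 2 is replaced here by a coarse grid search over θ, with the least-squares amplitude computed in closed form per candidate angle; this substitution is our simplification, not the paper's method, and the 60°/150° two-source scenario is a hypothetical well-separated example.

```python
import cmath
import math

def steering(theta_deg, n_elem, d_over_lambda=0.5):
    """Array response exp[j*2*pi*(d/lambda)*(i-1)*cos(theta)] of eq. (3.1)."""
    c = math.cos(math.radians(theta_deg))
    return [cmath.exp(2j * math.pi * d_over_lambda * i * c)
            for i in range(n_elem)]

def specc(y, n_sources, n_elem, grid_step=0.1):
    """STEPs 1-4 of SPECC: extract sources one at a time, cancelling each."""
    resid = list(y)
    found = []
    for _ in range(n_sources):
        best = None
        for t in range(int(180.0 / grid_step) + 1):   # STEP 2: scan theta
            theta = t * grid_step
            s = steering(theta, n_elem)
            # Least-squares amplitude for this theta (closed form, |s_i| = 1).
            amp = sum(r * sv.conjugate() for r, sv in zip(resid, s)) / n_elem
            cost = sum(abs(r - amp * sv) ** 2 for r, sv in zip(resid, s))
            if best is None or cost < best[0]:
                best = (cost, theta, amp, s)
        _, theta, amp, s = best
        found.append((theta, abs(amp)))
        # STEP 3: component cancellation, eq. (3.3).
        resid = [r - amp * sv for r, sv in zip(resid, s)]
    return found

# Two well-separated sources: (60 deg, amp 1.0) and (150 deg, amp 0.5), I = 8.
y = [a + 0.5 * b for a, b in zip(steering(60, 8), steering(150, 8))]
estimates = specc(y, 2, 8)
```

As the text describes, the stronger source is extracted first, and its cancellation leaves a residual dominated by the weaker one.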
4 Simulation Results
To verify the performance of the SPECC algorithm, we consider several scenarios. First, we consider two closely spaced source signals to demonstrate the high resolution. The magnitudes and DOAs of the two signal sources are shown in Table 1. The number of array elements is I = 8 and the inter-element distance is d = λ/2.
Table 1. Magnitudes and DOAs of 2 close signals

    number  DOA [deg.]  Magnitude
    1       60          1.0
    2       70          1.0
Fig. 2. Standard beamformer, noise-free, M=2, K=1
Fig. 3. SPECC, noise-free, M=2, K=1
Fig. 2 shows the spectrum of the standard beamformer [11]. As is well known, the standard beamformer cannot resolve the two closely spaced signal sources at 60° and 70°. Fig. 3 shows the parameters extracted by the proposed algorithm in the noise-free condition. The number of time samples (K) is assumed to be 1. It can be seen that the two closely spaced sources at 60° and 70° are clearly resolved by the proposed algorithm. Furthermore, the estimated DOAs (59.9° and 70.1°) and their magnitudes (1.01 and 0.97) are very accurate.
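For reference, the standard-beamformer spectrum of Fig. 2 can be reproduced in a few lines. The sketch below assumes a single noise-free snapshot of the Table 1 scenario (unit sources at 60° and 70°, I = 8, d = λ/2) and shows numerically why the two sources merge into a single lobe.

```python
import cmath
import math

def steering(theta_deg, n_elem):
    """exp[j*pi*(i-1)*cos(theta)] for d = lambda/2, following eq. (3.1)."""
    c = math.cos(math.radians(theta_deg))
    return [cmath.exp(1j * math.pi * i * c) for i in range(n_elem)]

def beam_power(y, theta_deg):
    """Standard-beamformer output power |w(theta)^H y|^2 for one snapshot."""
    w = steering(theta_deg, len(y))
    return abs(sum(yi * wi.conjugate() for yi, wi in zip(y, w))) ** 2

# Noise-free scenario of Table 1: unit sources at 60 and 70 deg, I = 8.
y = [a + b for a, b in zip(steering(60, 8), steering(70, 8))]
p60, p65, p70 = (beam_power(y, t) for t in (60, 65, 70))
# There is no dip between the two DOAs: the midpoint response is at least
# as high as the responses at the true angles, so the sources merge.
```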
Fig. 4. root-MUSIC, SNR=15 dB, M=2, K=5
Fig. 5. SPECC, SNR=15 dB, M=2, K=1
In the next simulation, we added zero-mean white Gaussian noise to the received signals to test the robustness to noise. Fig. 4 and Fig. 5 show the parameters of the two source signals extracted by the root-MUSIC and SPECC algorithms at an SNR of 15 dB. In SPECC, only one time sample (K = 1) is used, whereas K = 5 time samples (k = 1, 2, ..., K) are used in root-MUSIC. The root-MUSIC result is inaccurate even though 5 time samples are used, while the SPECC algorithm accurately estimates the two source signals with only one time sample. Table 2 shows the parameters of the two source signals estimated by root-MUSIC and by our proposed algorithm as a function of SNR. From the results in Table 2, we can state that the SPECC algorithm is more robust to noise than root-MUSIC.

Table 2. Estimated DOAs and magnitudes (in parentheses) of 2 close signals via the root-MUSIC and SPECC algorithms as a function of SNR. M=2, K=5
    SNR [dB]  root-MUSIC                    SPECC
              60°(1.0)     70°(1.0)        60°(1.0)     70°(1.0)
    15        56°(0.38)    68°(1.82)       58.7°(0.86)  70.3°(0.93)
    10        55.2°(0.42)  69.1°(1.64)     60.7°(1.01)  70.5°(1.00)
    5         64.6°(1.64)  153.1°(0.02)    57.8°(0.69)  70.3°(1.05)
Finally, we tested our algorithm with data composed of five signal sources. In this simulation, two of the sources (nos. 2 and 3) are spaced only 5° apart. The magnitudes and DOAs of the five source signals are shown in Table 3. The number of array elements is I = 25 and the inter-element distance is d = λ/2. K = 5 time samples (k = 1, 2, ..., K) are used in both SPECC and root-MUSIC. We assumed that the number of signal sources (M) is accurately estimated by MDL or AIC; if this estimate is not correct, the performance of root-MUSIC degrades considerably.

Table 3. Magnitudes and DOAs of 5 source signals
    number  DOA [deg.]  Magnitude
    1       30          0.36
    2       60          0.71
    3       65          1.00
    4       90          0.82
    5       150         0.50
As shown in Fig. 6 and Fig. 7, the DOAs and corresponding magnitudes extracted by the SPECC algorithm are more accurate than those of root-MUSIC. In particular, root-MUSIC fails to extract the two signal sources at 60° and 65°: it estimates only one source signal between the two closely spaced sources located at 60° and 65°. This result shows that SPECC retains its high resolution and accuracy in a noisy environment.
Fig. 6. root-MUSIC, SNR=15 dB, M=5, K=5
Fig. 7. SPECC, SNR=15 dB, M=5, K=5
5 Conclusion
In this paper, we propose a novel high-resolution DOA estimation method called the SPECC algorithm. The SPECC algorithm is characterized by high resolution, robustness to noise, and high accuracy. Furthermore, our algorithm does not require knowledge of the number of source signals (M), which is essential for the MUSIC and root-MUSIC methods. In the simulation results, we verified these characteristics by comparison with the standard beamformer and root-MUSIC. The application of the SPECC algorithm to communication systems such as RTLS and smart antennas remains future work.

Acknowledgment. This work was supported by the 2011 Hannam University Research Fund.
References
1. Schweppe, F.C.: Sensor array data processing for multiple signal sources. IEEE Trans. on Inform. Theory 14, 294–305 (1968)
2. Schmidt, R.O.: Multiple Emitter Location and Signal Parameter Estimation. IEEE Trans. on Antennas and Propagation 34, 276–280 (1985)
3. Rao, B.D.: Performance analysis of root-MUSIC. IEEE Trans. on Acoustics, Speech, and Signal Processing 37, 1939–1949 (1989)
4. Roy, R., Paulraj, A., Kailath, T.: ESPRIT - Estimation of Signal Parameters via Rotational Invariance Techniques. IEEE Trans. on Acoustics, Speech, and Signal Processing 37, 984–995 (1989)
5. Karamalis, P., Marousis, A., Kanatas, A., Constantinou, P.: Direction of arrival estimation using genetic algorithm. In: 2001 IEEE Vehicular Technology Conference (VTC 2001), pp. 162–166 (2001)
6. Choi, I.-S., Rhee, I.-K., Lee, Y.-H.: Signal Parameter Extraction via Component Cancellation Using Evolutionary Programming. In: 2007 International Conference on Future Generation Communication and Networking (FGCN 2007), vol. 2, pp. 458–462 (2007)
7. Palaniswami, M., Attikiouzel, Y., Marks II, R.J., Fogel, D., Fukuda, T.: Computational Intelligence: A Dynamic System Perspective, pp. 152–163. IEEE Press, Los Alamitos (1995)
8. Ishibuchi, H., Nakashima, T., Murata, T.: Genetic-Algorithm-based approaches to the designing of fuzzy systems for multi-dimensional pattern classification problems. In: Proceedings of the 1996 IEEE International Conference on Evolutionary Computation, pp. 229–234 (1996)
9. Wax, M., Kailath, T.: Detection of signals by information theoretic criteria. IEEE Trans. on Acoustics, Speech, and Signal Processing 33, 387–392 (1985)
10. Wax, M., Ziskind, I.: Detection of the number of coherent signals by the MDL principle. IEEE Trans. on Acoustics, Speech, and Signal Processing 37, 1190–1196 (1989)
11. Unnikrishna Pillai, S., Burrus, C.S.: Array Signal Processing, pp. 17–20. Springer, Heidelberg (1989)
Trading Off Complexity for Expressiveness in Programming Languages for Embedded Devices: Visions and Experiences Vincenzo De Florio and Chris Blondia University of Antwerp Department of Mathematics and Computer Science Performance Analysis of Telecommunication Systems group Middelheimlaan 1, 2020 Antwerp, Belgium Interdisciplinary Institute for Broadband Technology (IBBT) Gaston Crommenlaan 8, 9050 Ghent-Ledeberg, Belgium
Abstract. When programming resource-scarce embedded smart devices, the designer requires both the low-level system programming features of a language such as C and the higher-level capabilities typical of a language like Java. The choice of a particular language often implies trade-offs between conflicting design goals such as performance, costs, time-to-market, and overheads. The large variety of languages, virtual machines, and translators provides the designer with a dense trade-off space, ranging from minimalistic to rich, full-fledged approaches, but once a choice is made it is often difficult for the designer to revise it. In this work we propose a different approach based on the principles of language-oriented programming. A system of lightweight and modular extensions is proposed as a method to adaptively reshape the target programming language as needed, adding only those application-layer features that match the current design goals. By doing so, complexity is made transparent, but not hidden: while the programmer can benefit from higher-level constructs, the designer and the deployer can deal with modular building blocks, each characterized by a certain algorithmic complexity and therefore each accountable for a given share of the overhead. As a result, the designer is provided with finer control over the amount of computing resources consumed by the run-time executive of the chosen programming language.
1 Introduction
The December 2010 TIOBE Programming Community index [32], which ranks programming languages according to their matching rate in several search engines, sets C as the second most popular programming language, barely 2% behind Java. C's object-oriented counterpart C++ is third, but considerably further away (9.014% vs. 16.076%). Quite remarkably, C scored the top position in April 2010 and was even TIOBE's "programming language of the year" in 2008, exhibiting the highest rise in ratings in that year, a notable feat achieved by Java in 2005.
T.-h. Kim et al. (Eds.): ACN 2011, CCIS 199, pp. 161–175, 2011. © Springer-Verlag Berlin Heidelberg 2011
Both quite successful and widespread, C and Java represent two extremes of a spectrum of programming paradigms ranging from system-level to service-level development. Interestingly, in C the complexity lies mostly in the application layer, as its run-time is often very small [18]; in Java, on the other hand, non-negligible complexity also comes with a typically rich execution environment (EE), comprising a virtual machine and advanced features such as autonomic garbage collection. The only way to trade off the EE complexity for specific services is then to adopt or design a new EE. Various EEs are available, developed by third parties to match specific classes of target platforms. Fine-tuning the EE is also possible, e.g., in Eclipse, and of course a custom implementation is also an option. In general, though, the amount and the nature of the EE complexity is hidden from the programmer and the designer: after all, it is the very nature of Java as a portable programming language that forbids exploiting such knowledge. Though transparent, such hidden complexity is known to have an impact on several aspects, including overhead, real-timeliness, deterministic behavior, and security [10]. In particular, when a computer system's non-functional behavior is well defined and part of that system's quality of service, as is the case, e.g., for real-time embedded systems, then any task with unknown algorithmic complexity or exhibiting non-deterministic behavior may simply be unacceptable. As an example, a run-time component autonomically recollecting unused memory, though very useful in itself, often results in asynchronous, unpredicted system activity affecting, e.g., the processors and the memory system, including caches. Taking such asynchronous tasks into account would negatively impact the analysis of worst-case execution times and consequently costs as well.
Moreover, the availability of different flavors of the Java EE is likely to bring about assumption failures, as explained in [5]. In what follows we propose an alternative, in a sense opposite, direction: instead of stripping functionality from Java to best match a given target platform, we choose to add functionality to C to compensate for its lack of expressiveness and linguistic support. More specifically, in our approach C, with its minimalistic run-time executive, becomes a foundation on top of which the designer can easily lay a system of modular linguistic extensions. By doing so, the above-mentioned partitioning of complexity is not statically defined and unchangeable, but rather revisable under the control of the designer. Depending on the desired linguistic features and on the overhead permitted by the target platform as well as by mission and cost constraints, our approach allows the programming language to be flexibly reshaped. This is because our approach employs well-defined "complexity containers", each granting a few specific functions and each characterized by well-defined complexity and overhead. Syntactic features and EE functions are weaved together under the control of the designer, resulting in bounded and known complexity. A dynamic trade-off between complexity and expressiveness can then be achieved and possibly revised in later development stages or when the code is reused on a different platform. In principle, such
combination of transparent functionality and translucent complexity should also reduce the hazards of unwary reuse of software modules [22]. The structure of this paper is as follows: a comparison with other related approaches is given in Sect. 2. In Sect. 3 we introduce a number of "basic components" implementing syntactic and semantic extensions for context awareness, for autonomic data integrity, and for event management. In Sect. 4 we discuss how we built such components and how they can be dynamically recombined so as to give rise to specific language extensions. Section 5 introduces a case study and its evaluation. Our conclusions are finally drawn in Sect. 6.
2 Related Approaches
Modular extensions to programming languages have been the subject of many investigations, both theoretical and empirical. The most significant and possibly closest genus here is the family of approaches that includes, e.g., Language Workbenches [17] and Intentional Programming [28], and is collectively known as language-oriented programming (LoP) [33,16]. In LoP it is argued that current paradigms "force the programmer to think like the computer rather than having the computer think more like the programmer" [16], which makes programming both time-consuming and error-prone. A considerable distance exists between the conceptual solution of a problem and its representation in an existing computer language. In LoP this distance is called the redundancy of the language; it is also known as syntactical adequacy [7]. The LoP vision is that the larger this distance, the more programming becomes difficult, time-consuming, unpleasant, and error-prone. To remedy this, LoP advocates that programming should not simply be the process of encoding our concepts in some conventional programming language; it should be the creation of a collection of domain-specific languages (DSLs), each of which specializes in the optimal representation of one of our concepts. Programming then becomes (1) solving a number of such sub-problems of optimal expression and (2) creating a "workflow" that hooks all the bits and pieces together into a coherent execution flow. In the words of one of its pioneers, LoP advocates the ability "to use different languages for each specialized part of the program, all working together coherently" [16]. Each language should ideally offer the least redundant expression of a concept, in a form that is as close as possible to the means a person would use to communicate that concept to another person.
The approach suggested in this paper goes in the very same direction as LoP: we propose to create collections of "little languages" (tiny DSLs providing minimal-redundancy expressions of domain-specific problems) and to use a simple application layer to bind everything together. The next sections show how natural-language concepts such as "tell me at any time what the current temperature is" or "cycle this operation continuously" can be embedded in the programming language as modular, reusable extensions. Another family of approaches related to ours is given by the quite popular "AspectC" projects, including among others AspectC [1], WeaveC [26], and AspectC++ [25]. The latter project is particularly relevant in that it was used
successfully to program small embedded systems such as the AVR ATmega microcontroller. In the cited paper the authors describe how to represent abstract sensors in AspectC++ in an elegant and cost-effective way. Indeed, we have witnessed ourselves how expressive and effective aspects can be by coding an adaptive strategy to tolerate performance failures in video-rendering applications [29]. In the reported experience we also used an Eclipse plug-in, as in [25], but with AspectJ as the programming language. Also in this case, aspects reify abstract sensors reporting the current value of relevant context figures such as available CPU and bandwidth. It is worth remarking that aspect languages exhibit minimal redundancy of the language only in specific domains: when confronted with domain-specific problems such as exception handling, aspects too call for dense and cumbersome translations of natural-language concepts [24,7]. In what follows we describe our method: a minimalistic implementation of LoP based on so-called "little languages" and simple run-time executives based on POSIX threads. The specific difference between mainstream programming-language approaches and ours is that, by using modular, reusable DSL extensions, we provide minimally language-redundant translations of natural-language concepts. Such translations constitute a composable high-level language rather than a collection of low-level methods.
3 Basic Components
This section introduces the three basic components of our approach: linguistic support for context awareness (Sect. 3.1), adaptive redundancy management (Sect. 3.2), and application-level management of cyclic events (Sect. 3.3). In all three cases the syntactical extension instruments the memory access operations on certain variables.

3.1
Context Awareness Component
Our first component, called Reflective Switchboards (ReS), provides high-level linguistic support for context awareness in a host language such as C or C++. Building on top of our reflective-variables system [6], ReS provides transparent access to sensors and actuators by means of a source-to-source translator and a number of run-time components. A description of the ReS features and components constitutes the rest of this section. The idea behind ReS is best described by considering Fig. 1 and Fig. 2. In the former picture, four components are displayed: the user application (UA), a callback interpreter (CI), a server thread (ST), and a sensor driver (SD). The CI is unique for each user application, while instances of the ST and SD are executed for specific families of physical sensors, e.g., operating-system-specific or network-layer-specific ones. In Fig. 1 the UA makes use of a single sensor called V. The UA executes a simple domain-specific task, which could be expressed in natural language as follows: "Keep me posted about the running values of
Fig. 1. Sequence diagram of the initialization phase of Reflective Switchboards
Fig. 2. Sequence diagram of the steady state of Reflective Switchboards
V, and execute this maintenance task whenever V reaches a certain value". A simple "little language" (the popular name for DSLs coded with the traditional tools Lex and YACC [23]) makes it possible to code expressions such as the above with very limited redundancy of the language. As an example, Fig. 3 shows a ReS program that (1) continuously updates the content of variable int cpu with an integer approximating the current percentage of CPU in use, and (2) automatically calls function largerThan10 each and every time the condition "(cpu>10);" becomes true. "Whenever condition do action" is realized by the methods RR VARS and rrparse(char *guard, int (*action)(void)). The former launches the CI as a POSIX thread and performs some initializations. Method rrparse then requests the CI to register a new guarded action, that is, an action whose execution is conditioned on the validity of an arithmetic expression. The first argument of rrparse is that expression: a string that is parsed by the CI and translated into a simple and
Fig. 3. Reflective Switchboards: A simple coding example
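The "whenever condition do action" pattern behind the Fig. 3 example can be mimicked outside C. The sketch below is a hypothetical single-threaded Python analogue of the callback interpreter: guards are ordinary Python expressions instead of the CI's parsed pseudo-code, and sensor updates are explicit calls rather than POSIX-thread notifications; both are simplifying assumptions.

```python
class Switchboard:
    """Toy single-threaded analogue of the ReS callback interpreter (CI)."""

    def __init__(self):
        self.guards = []    # stack of [compiled guard, action, last value]
        self.sensors = {}   # current values of the "sensor variables"

    def rrparse(self, guard, action):
        # Register a guarded action; the guard is compiled once, much as
        # the CI translates the guard string into compact pseudo-code.
        self.guards.append([compile(guard, "<guard>", "eval"), action, False])

    def update(self, name, value):
        # In ReS the server thread (ST) performs this write and then asks
        # the CI to re-evaluate every registered guard.
        self.sensors[name] = value
        for entry in self.guards:
            code, action, was_true = entry
            now = bool(eval(code, {}, dict(self.sensors)))
            if now and not was_true:   # fire only when the guard becomes true
                action()
            entry[2] = now

fired = []
sb = Switchboard()
sb.rrparse("cpu > 10", lambda: fired.append("largerThan10"))
sb.update("cpu", 5)    # guard false: nothing happens
sb.update("cpu", 42)   # guard becomes true: largerThan10 is called
```

The edge-triggered check mirrors "each and every time the condition becomes true": the action fires on false-to-true transitions, not on every update while the guard holds.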
compact pseudo-code. The second argument is the action associated with the guard. The pseudo-code for the guarded action is then pushed onto an internal stack, as represented at the top of Fig. 1. The "keep me posted about V" part is realized by method RR VAR V. This method initializes the run-time executive for reflective variable V, which executes a corresponding ST as a POSIX thread. As described in Fig. 1, the ST then activates and binds to its associated sensor driver SD. By doing so, the ST requests to be notified by the SD of each context change pertaining to sensor V. After the above initializations, ReS enters its "steady state", depicted in Fig. 2: while the UA proceeds undisturbed, actions are triggered on the ReS components whenever an SD reports a new value of its sensors. In this case the ST updates the memory cells of the corresponding "sensor variable" and then requests the CI to interpret the pseudo-code stored in its stack. For each guarded action g → a, guard g is evaluated and, when found true, the CI executes method a. ReS also uses variables as access points to actuators, by intercepting all assignments whose left value is a legal "actuator variable": after the assignment, the translator simply adds a method call to communicate the new value to the corresponding actuator. Table 1 briefly lists the currently available sensor and actuator variables.

Table 1. Currently available sensor and actuator variables and arrays

    name         type     class     short description
    cpu          int      sensor    CPU usage (%)
    bandwidth    int      sensor    bandwidth available between localhost and a TCP remote host (Mbps)
    mplayer      int      sensor    status of an instance of the mplayer video player
    mplayer      int      actuator  sets properties of an mplayer instance
    watchdog     int      sensor    status of an instance of a watchdog timer thread
    watchdog     int      actuator  controls an instance of a watchdog timer thread
    linkbeacons  lb_t[]   sensors   MAC beacons received from a network peer in a MANET
    linkrates    lr_t[]   sensors   estimated IP bandwidth between localhost and a MANET peer

It is worth discussing two special variables: linkbeacons and linkrates. In this case yet another "little language" was designed to allow the definition of dynamically growing arrays of C structures representing the properties of the nearest neighboring peers in a mobile ad-hoc network (MANET). The first variable, linkbeacons, is an array of sensor variables reporting Medium Access Control (MAC) layer properties of MANET peers. A new object comes to life dynamically each
Fig. 4. ReS code for MAC-IP cross-layer optimizations
time a new peer comes in proximity. When a peer node falls out of range, the corresponding object becomes "stale" until its node becomes reachable again. The above-mentioned "little language" provides syntactic sugar allowing the linkbeacons array to be indexed by strings representing the MAC addresses of peer nodes. Array linkbeacons reflects a number of properties, including the number of MAC beacons received from a peer node during the last "observation period" (defined in our experiments as sixty seconds) and the number of periods elapsed without receiving at least one beacon from a certain node. Similarly, array linkrates returns network-layer properties of the peers in proximity: specifically, it returns the estimated bandwidth between the current node and the one whose address is being used as an index. These arrays are currently being used in our research group to set up cross-layer optimizations such as MAC-aware IP routing in mobile ad-hoc networks. The program used to steer this cross-layer optimization is quite simple: every new observation cycle, the program retrieves the MAC addresses of the peers in proximity via a simple function call (anext) and then requests to adjust the routing metric using the above-mentioned arrays. The actual adjustments to the routing protocol are carried out through a Click [19] script. As can be seen from the above examples, despite C being a relatively simple programming language, the modular addition of linguistic features covering domain-specific sub-problems does lead to a new and more powerful language characterized by a lower redundancy of the language.

3.2
3.2 Adaptive Redundancy Component
Another important service that is typically missing in conventional programming languages such as C is transparent data replication. As embedded systems
168
V. De Florio and C. Blondia
Fig. 5. A simple example of the use of redundant variables: an "extended C" source fragment that accesses a redundant variable (left) and an excerpt from its translation into plain C (right).
are typically streamlined platforms in which resources are kept to a minimum in order to contain, e.g., costs and power consumption, hardware support for memory error detection is often missing. When such embedded systems are mission-critical and subjected to unbounded levels of electro-magnetic interference (EMI), it is not uncommon for them to suffer from transient failures. As an example, several Toyota models recently experienced unintended acceleration and brake problems. Despite Toyota's official communications stating otherwise, many researchers and consultants suggest this to be just another case of EMI-triggered failures [31,15,34]. More definitive evidence exists that EMI produced by personal electronic devices does affect electronic controls in modern aircraft [27], as is the case for control apparatuses operating in the proximity of electrical energy stations as well [11]. Whenever EMI causes unchecked memory corruption, a common strategy is to use redundant data structures [30]: mission-critical data structures are then "protected" through replication, voting, and redoing [11]. Our adaptive redundancy component is yet another "little language" that allows the user to tag certain C variables as "redundant". A run-time executive then transparently replicates those variables according to some policy (for instance, in separate "banks") and catches memory accesses to those variables. Write accesses are multiplexed and store their "rvalues" [18] in each replica, while read accesses are demultiplexed via a majority voting scheme. Figure 5 summarizes this via a simple example.
In some cases, for instance when the application is cyclic and constantly reexecuted as in [12], the behavior of the voting scheme can be monitored and provides an estimation of the probability of failure: as an example, if the errors induced by EMI affect a larger and larger number of replicas, then this can be interpreted as a risk that the voting scheme will fail in the near future due to the impossibility of achieving a majority. Detecting this and assessing the corresponding risk of voting failure allows the number of replicas to be transparently
Table 2. Example of usage of the TOM time-out management class. In 1. a time-out list pointer and two time-out objects are declared, together with two alarm functions. In 2. the time-out list and the time-outs are initialized. Insertion is carried out in 3. In 4., time-out t2 is disabled; its deadline is changed; t2 is restarted; and finally, time-out t1 is deleted.

1.
/* declarations */
TOM *tom;
timeout_t t1, t2;
int PeriodicMethod1(TOM*), PeriodicMethod2(TOM*);

2.
/* definitions */
tom = tom_init();
tom_declare(&t1, TOM_CYCLIC, TOM_SET_ENABLE, TIMEOUT1, SUBID1, DEADLINE1);
tom_set_action(&t1, PeriodicMethod1);
tom_declare(&t2, TOM_CYCLIC, TOM_SET_ENABLE, TIMEOUT2, SUBID2, DEADLINE2);
tom_set_action(&t2, PeriodicMethod2);

3.
/* insertion */
tom_insert(tom, &t1), tom_insert(tom, &t2);

4.
/* control */
tom_disable(tom, &t2);
tom_set_deadline(&t2, NEW_DEADLINE2);
tom_renew(tom, &t2);
tom_delete(tom, &t1);
and autonomically adjusted, e.g. as described in [4]. Such a run-time scheme could also be complemented with compile-time explorations and optimizations, as discussed e.g. in [20].

3.3 Cyclic Methods Component
As observed in [8], natural language expressions such as repeat periodically, at time t send heartbeat, at time t check whether message m has arrived, or upon receive are often used by researchers to describe e.g. distributed protocols. The lack of such constructs in a language like C led us in the past to implement another extension in the form of a library of alarm management methods. This library allows user-specified function calls to be postponed by a given amount of time. In [8] we showed how this makes it possible to implement the above natural language expressions by converting time-based events into message arrivals or signal invocations. In the cited paper we also proposed some preliminary "syntactic sugar" to ease the use of our library. Table 2 is a simple example of how our time-out methods could be used e.g. to define and control two "cyclic methods," i.e., functions that are executed by the run-time system every new user-defined cycle. In the experience reported in this paper we capitalized on our previous achievements and designed yet another "little language" to facilitate the definition of cyclic methods. Table 3 shows the syntax of our extension. In a nutshell, the extension allows the user to specify a dummy member, Cycle, for those methods that have been tagged as cyclic_t. Every Cycle milliseconds the extension executes a new instance of the corresponding method—irrespective of whether previous instances are still running.
Table 3. The new syntax for the example of Table 2. Two simple constructs are introduced—bold typeface is used to highlight their occurrences in this example.
1.
/* declarations */
cyclic_t int PeriodicMethod1(TOM*);
cyclic_t int PeriodicMethod2(TOM*);

2.
/* definitions: unnecessary */

3.
/* insertion */
PeriodicMethod1.Cycle = DEADLINE1;
PeriodicMethod2.Cycle = DEADLINE2;

4.
/* control */
PeriodicMethod2.Cycle = NEW_DEADLINE2;
PeriodicMethod1.Cycle = 0;
4 Putting Things Together
In the previous section we introduced several domain-specific languages, each of which augments plain C with extra features. In the rest of this section we briefly report on some general design principles as well as on our current approach to combining those domain-specific languages. The key principle of our approach is the use of a set of independent and interchangeable linguistic extensions, each addressing a specific problem domain. Extensions augment the same base language (in the case at hand, C) and, in the face of local syntax errors, assume that the current line being parsed will be treated by one of the following extensions. In other words, what would normally be regarded as a severe error is treated as a warning, and the line is flushed verbatim to the standard output stream. Obviously such a strategy is far from ideal, as it shifts all syntax checks down to C compile time. A better strategy would be to let the system guess which extensions to apply based on the syntactic "signature" of each input fragment. A simpler alternative would be to use start conditions, as suggested for lexical analysis in the now classical article [21]. Our extensions are coded in C with Lex and YACC [23] and make use of some simple Bash shell scripts. Some extensions were originally developed in a Windows/Cygwin environment, while more recent ones have been devised on Ubuntu Linux. All extensions run consistently in both environments. Each of our extensions is uniquely identified by an extension identifier—a string of the form "cmp://e/v", where e and v are two strings representing, respectively, the extension and its version number. Our current implementation makes use of a simplistic strategy to assemble components, requiring the user to manually insert or remove the translators corresponding to each extension. In particular, the user is responsible for choosing the order of application of the various extensions. Figure 6 shows the script that we use for this.
A Unix pipeline is used to represent the assembling process. In this case the components of the pipeline are redundancy, which manages the extension described in Sect. 3.2, followed by refractive, which adds operator overloading capabilities to context variables. The last stage of the pipeline is array, which produces the dynamic array extension described in Sect. 3.1.
Fig. 6. A Bash script is used to selectively augment C with our modular extensions
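Since the script of Fig. 6 is not reproduced here, the sketch below illustrates the idea under stated assumptions: each extension is one stage of a Unix pipeline, applied in an order chosen by the user. The real stages are the redundancy, refractive, and array translators; here a single stage is simulated by sed so that the sketch is self-contained and runnable:

```shell
#!/bin/sh
# Illustrative reconstruction of the assembling pipeline: each stage is one
# extension's source-to-source translator. The sed command below merely
# simulates the redundancy stage (expanding a tagged declaration); it is
# not the actual translator.
echo 'redundant_t int watchdog;' \
  | sed 's/redundant_t //'
```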
It is worth pointing out that each extension publishes its extension identifier by appending it to a context variable, a string called extensions_pipeline, e.g. "cmp://redundancy/1.1;cmp://refractive/0.5;cmp://array/0.5". By inspecting this variable the program is granted access to knowledge about the algorithmic complexity and the features of its current execution environment. As described in the previous section, extensions make use of POSIX threads defined in libraries and ancillary programs. Such ancillary code (and the ensuing complexity) is then selectively loaded on demand during the linking phase of the final compilation.
5 Evaluation
In order to analyze the performance of our method we focus on a particular case study: the design of a simple software watchdog timer (WDT). This choice was made for a number of reasons:
– First of all, the WDT is a well-known and widespread "dependable design pattern" that is often realized in either hardware or software in mission-critical embedded systems, as it provides a cost-effective method to detect performance failures [2].
– Secondly, a WDT is real-time software, which means that it requires context awareness of time. This makes it suitable for being developed with the extension described in Sect. 3.1.
– Moreover, a WDT is a cyclic application. Linguistic constructs such as the one described in Sect. 3.3 allow a concise and lean implementation of cyclic behaviors.
– Furthermore, a WDT is a mission-critical tool: a faulty or hacked WDT may cause a healthy watched component to be stopped, which in turn may severely impact availability. Protecting a WDT's control variables could help prevent faults or detect security leaks. The extension described in Sect. 3.2 may provide such protection to some extent.
– Finally, the choice of focusing on a WDT allows us to leverage our past research: in [9] we introduced a domain-specific language that permits the definition of WDTs in a few lines of code. This allows an easy comparison of the expressiveness of the two approaches.

As briefly mentioned in Table 1, a sensor/actuator variable called watchdog reflects the state of a WDT. States are reified as integers greater than −4. Negative values represent conditions, i.e. either of: WD_STARTED, meaning that a WDT task is running and waiting for an activation message; WD_ACTIVE, stating that the WDT has been activated and now expects periodical heartbeats from a watched task; WD_FIRED, that is, no heartbeat was received during the last cycle—the WDT "fired"; WD_END, meaning that the WDT task has ended. Positive values represent how many times the WDT reset its timer without "firing." That same variable, watchdog, is also an actuator, as it controls the operation of the WDT: writing a value into it restarts a fired WDT. Being so crucial to the performance of the WDT, we decided to protect watchdog by making it redundant. To do so we declared it as extern redundant_t int watchdog. Using the extern keyword was necessary in order to change the definition of watchdog into a declaration [18], as the context-aware component defines watchdog already. In other words, this is a practical example of two non-orthogonal extensions. Figure 7 describes our prototypic implementation. The code uses all three extensions reported in Sect. 3. It executes as follows: a WDT thread is transparently spawned.
This thread is monitored and controlled via the variable watchdog. Redundant copies of this variable are used to mitigate the effect of transient faults or security leaks affecting memory. The code then uses our cyclic-methods extension to call a management function periodically. This function in turn makes use of two of our extensions—for instance, the WDT is restarted simply by writing a certain value into watchdog. Our evaluation is based on a qualitative estimation of the redundancy of the resulting programming language (see Sect. 2). In other words, we are interested here in the expressiveness of our language—how adequate and concise the language proved to be with respect to other existing languages. A rough estimation of this syntactical adequacy [7] may be obtained by measuring the required number of lines of code (LoC).
Fig. 7. Excerpt from the code of the WDT
If we restrict ourselves to the WDT discussed above, we can observe that in this special case the programmer is required to produce a number of lines of code notably smaller than would normally be expected for a comparable C program. This number is slightly greater than in the case treated in [9], where a C implementation of a WDT is produced from the high-level domain-specific language Ariel [3,13]. It must be remarked, though, that the WDT produced by Ariel is much simpler than the one presented here—e.g. it is non-redundant and context-agnostic.
6 Conclusions
We have introduced an approach inspired by LoP that linearly augments the features of a programming language by injecting a set of lightweight extensions. Depending on the desired features and on the overheads and behaviors permitted by the target platform and cost constraints, our approach allows the programming language to be flexibly reshaped. This is because it employs well-defined "complexity containers," each of which grants limited domain-specific functions and is characterized by well-defined complexity and overheads. By doing so, complexity is made transparent but not hidden: while the programmer can benefit from high-level constructs, the designer and the deployer can deal with modular building blocks, each characterized by a certain algorithmic complexity and therefore each accountable for a certain overhead. A mechanism allows each building block to be identified, thus avoiding mismatches between expected and
provided features. At the same time, this provides the designer with finer control over the amount of resources required by the run-time executive of the resulting language, as well as over its resulting algorithmic complexity. We observe that our approach allows the designer to deal with a number of separate, limited problems instead of a single, larger problem. From the divide-and-conquer design principle we then conjecture a lower complexity for our approach. Moreover, in our case the designer is aware and in full control of the amount and the nature of the complexity he/she is adding to C. A full-fledged comparison between a library-based approach such as [14] and ours will be the subject of future research.
References

1. Coady, Y., et al.: Using AspectC to improve the modularity of path-specific customization in operating system code. In: Proc. of FSE-9, pp. 88–98 (2001)
2. Cristian, F.: Understanding fault-tolerant distributed systems. Communications of the ACM 34(2), 56–78 (1991)
3. De Florio, V.: A Fault-Tolerance Linguistic Structure for Distributed Applications. PhD thesis, Dept. of Elec. Eng., Univ. of Leuven, Belgium (October 2000)
4. De Florio, V.: Cost-effective software reliability through autonomic tuning of system resources. In: Proc. of the Applied Reliability Symposium, Europe (April 2010)
5. De Florio, V.: Software assumptions failure tolerance: Role, strategies, and visions. In: Casimiro, A., de Lemos, R., Gacek, C. (eds.) Architecting Dependable Systems VII. LNCS, vol. 6420, pp. 249–272. Springer, Heidelberg (2010), doi:10.1007/978-3-642-17245-8_11
6. De Florio, V., Blondia, C.: Reflective and refractive variables: A model for effective and maintainable adaptive-and-dependable software. In: Proc. of SEAA 2007, Lübeck, Germany (August 2007)
7. De Florio, V., Blondia, C.: A survey of linguistic structures for application-level fault-tolerance. ACM Computing Surveys 40(2) (April 2008)
8. De Florio, V., Blondia, C.: Design tool to express failure detection protocols. IET Software 4(2), 119–133 (2010)
9. De Florio, V., Donatelli, S., Dondossola, G.: Flexible development of dependability services: An experience derived from energy automation systems. In: Proc. of ECBS 2002, Lund, Sweden. IEEE Comp. Soc. Press, Los Alamitos (2002)
10. De Win, B., Goovaerts, T., Joosen, W., Philippaerts, P., Piessens, F., Younan, Y.: Security middleware for mobile applications. In: Middleware for Network Eccentric and Mobile Applications, pp. 265–284. Springer, Heidelberg (2009)
11. Deconinck, G., et al.: Stable memory in substation automation: a case study. In: Proc. of FTCS-28, Munich, Germany, pp. 452–457 (June 1998)
12. Deconinck, G., et al.: Integrating recovery strategies into a primary substation automation system. In: Proc. of DSN 2003 (2003)
13. Deconinck, G., et al.: A software library, a control backbone and user-specified recovery strategies to enhance the dependability of embedded systems. In: Proc. of Euromicro 1999, Milan, Italy, vol. 2, pp. 98–104 (September 1999)
14. Deconinck, G., et al.: Industrial embedded HPC applications. Supercomputer 13(3–4), 23–44 (1997)
15. Dividend, I.: Toyota's Electromagnetic Interference Troubles: Just the Tip of the Iceberg (February 2010), http://seekingalpha.com/article/187021-toyota-s-electromagnetic-interference-troubles-just-the-tip-of-the-iceberg
16. Dmitriev, S.: Language oriented programming: The next programming paradigm. OnBoard (November 2004)
17. Fowler, M.: Language workbenches: The killer-app for domain specific languages (2005), http://www.martinfowler.com/articles/languageWorkbench.html
18. Kernighan, B.W., Ritchie, D.M.: The C Programming Language, 2nd edn. Prentice-Hall, Englewood Cliffs (1988)
19. Kohler, E., Morris, R., Chen, B., Jannotti, J., Kaashoek, M.F.: The Click modular router. ACM Transactions on Computer Systems 18(3), 263–297 (2000)
20. Leeman, M., et al.: Automated dynamic memory data type implementation exploration and optimization. In: Proc. of ISVLSI 2003, Washington, DC (2003)
21. Lesk, M.E., Schmidt, E.: Lex – A Lexical Analyzer Generator. Technical report, Bell Laboratories, CS Technical Report No. 39 (1975)
22. Leveson, N.G.: Safeware: System Safety and Computers. Addison-Wesley, Reading (1995)
23. Levine, J., et al.: lex & yacc, 2nd edn. O'Reilly, Sebastopol (1992)
24. Lippert, M., Videira Lopes, C.: A study on exception detection and handling using aspect-oriented programming. In: Proc. of ICSE 2000, Limerick, Ireland (June 2000)
25. Lohmann, D., Spinczyk, O.: Developing embedded software product lines with AspectC++. In: OOPSLA 2006, New York, NY, pp. 740–742 (2006)
26. Nagy, I., et al.: An overview of Mirjam and WeaveC. In: Ideals: Evolvability of Software-Intensive High-Tech Systems, pp. 69–86. Embedded Systems Institute, Eindhoven (2007)
27. Perry, T.S., Geppert, L.: Do portable electronics endanger flight? The evidence mounts. IEEE Spectrum 33(9), 26–33 (1996)
28. Simonyi, C.: Is programming a form of encryption? (2005), http://blog.intentsoft.com/intentional_software/2005/04/dummy_post_1.html
29. Sun, H., De Florio, V., Gui, N., Blondia, C.: Adaptation strategies for performance failure avoidance. In: Proc. of SSIRI 2009, Shanghai (July 2009)
30. Taylor, D.J., et al.: Redundancy in data structures: Improving software fault tolerance. IEEE Trans. on Soft. Eng. 6(6), 585–594 (1980)
31. Perry, T.: Toyota's Troubles Put EMI Back Into The Spotlight (February 2010), http://spectrum.ieee.org/tech-talk/green-tech/advanced-cars/toyotas-troubles-put-emi-back-into-the-spotlight
32. TIOBE: TIOBE Programming Community Index (July 7, 2010), http://www.tiobe.com/index.php/content/paperinfo/tpci/index.html
33. Ward, M.P.: Language-oriented programming. Software—Concepts and Tools 15(4), 147–161 (1994)
34. Weiss, C.: Consultants Point to Electromagnetic Interference in Toyota Problems (March 2010), http://motorcrave.com/consultants-point-to-electromagnetic-interference-in-toyota-problems/5927
Electric Vehicle Telematics Framework for Smart Transportation

Junghoon Lee¹, Hye-Jin Kim¹, Gyung-Leen Park¹, Ho-Young Kwak², Young-cheol Kim³, and JeongHoon Song⁴

¹ Dept. of Computer Science and Statistics, Jeju National University
² Dept. of Computer Engineering, Jeju National University
³ Digital Convergence Center, Jeju Techno Park
⁴ CS Co., Ltd.
{jhlee,hjkim82,glpark,kwak}@jejunu.ac.kr, [email protected], [email protected]
Abstract. This paper presents the functional design of an efficient electric vehicle telematics framework for smart transportation, aiming at providing EV-related advertisements via digital multimedia broadcasting. Taking advantage of information technology and wireless communication, the telematics system can support electric vehicle tracking, vehicle sharing, charging station selection, and location data analysis. The electric vehicle charge service develops a reservation protocol between drivers and stations, station-side scheduling, and path adaptation according to a new charge plan. In addition, as a promising business model, electric vehicle sharing needs station placement and relocation schemes, for which a previous pick-up point analysis can provide a helpful guide. The telematics framework enriches the related applications with diverse basic service building blocks and thus accelerates the penetration of electric vehicles into our daily life.
1 Introduction
Empowered by information technology, the modern power network paradigm called the smart grid is innovating the legacy power system, especially in power system management and intelligent load control [1]. From the viewpoint of customers, the smart grid saves energy, reduces costs, and improves reliability based on a two-way interaction between consumers and suppliers. Smart system management allows the systematic integration of a variety of energy sources, for example, solar, wind, and other renewable energies. Load control can reshape power consumption to reduce the peak load, and the reduced peak makes it unnecessary to build new cable systems or power plants [2]. Many countries are significantly interested in the smart grid, trying to take the initiative in its research, technology, and business. Meanwhile, the Republic of Korea was designated as one of the smart grid initiative countries together with Italy during the expanded G8 Summit in
This research was supported by the MKE (The Ministry of Knowledge Economy), through the project of Region technical renovation, Republic of Korea.
T.-h. Kim et al. (Eds.): ACN 2011, CCIS 199, pp. 176–182, 2011. c Springer-Verlag Berlin Heidelberg 2011
2009 [3]. The Korean national government launched the Jeju smart grid test-bed, aiming at testing technologies and developing business models mainly in five areas: smart power grid, smart place, smart transportation, smart renewables, and smart electricity services. Among these, smart transportation aims to build a nationwide charging infrastructure that will allow electric vehicles, or EVs for short, to be charged anywhere. It also develops a V2G (Vehicle-to-Grid) system in which the batteries of electric vehicles are charged during off-peak times while the resale of surplus electricity takes place during peak times. For better deployment of EVs and charge services, telematics technology is essential, as it provides a useful information exchange between a car on the move and the service of interest via the wireless network. In addition to classic services such as real-time traffic information, path finding, and vicinity information download, a systematic telematics service framework is required to provide online advance booking of charge spots, remote diagnostics, and time display for the next charge [4]. In this regard, this paper builds a telematics framework capable of providing EV-related services such as location tracking, vehicle sharing, battery charging, and movement record data analysis. As can be seen in Figure 1, the in-vehicle telematics device has a connection to the DTG (Digital Tachometer Graph) and the ECU (Electric Control Unit). For more sophisticated monitoring and charging services, and especially for safety applications, the EV telematics system must consider the data in those units. The telematics service can be integrated into digital multimedia broadcasting contents to announce the currently available service status.
Fig. 1. Electric vehicle architecture
2 Telematics Service Framework

2.1 EV Tracking
As the most basic and essential application of telematics systems, the vehicle tracking service traces the current location of EVs on the digital map [5]. The location of each vehicle can be expressed either as a (longitude, latitude) coordinate or as the
178
J. Lee et al.
road segment the vehicle is moving on. This service can host applications such as tour guides, navigation, safety services, and so on. Jeju has a well-maintained road network which essentially follows the entire coast (200 km) and crisscrosses between the island's major points. In terms of a road network, there are about 18,000 intersections and 27,000 road segments. This means that the road graph can be built with 18,000 nodes and 27,000 links along with some additional data structures such as POI (Point of Interest), making it possible to store the whole graph in high-speed main memory rather than in low-speed file systems or databases on disk. Hence, almost every function can be carried out within main memory alone. The in-vehicle telematics device contains a GPS receiver as well as an air interface, which uses the CDMA (Code Division Multiple Access) protocol in Korea. Each EV basically reports its location records at a fixed time interval. A remote server is responsible for receiving and managing this information, while integrating spatial data such as a digital map and a road network. The digital map has the full sequence of points for every road segment to perform map matching. On the other hand, the road network has only nodes and links, which are the intersections and the two end points of each road segment, respectively. Figure 2 plots the road network, and we have implemented a road network generator from the ESRI shape file. The road network is exploited for advanced functions such as EV allocation in the vehicle sharing application. Basically, map matching finds the link that corresponds to a spatial stamp specified as a (latitude, longitude) pair. For a road segment, the area of each triangle formed by the two end points of each line segment and the report point is calculated.
Receiving a path-finding request with the specification of a source and a destination, the server runs the well-known A* algorithm, in which the Euclidean distance is taken as the future cost estimate while the network distance accounts for the accumulated cost [6]. In addition, the path-finding scheme provides another option which takes into account the current moving direction of a vehicle. After matching the angle between the road segment and the EV's direction, we can
Fig. 2. EV telematics architecture
find the node at which the EV will arrive. We also calculate the distance and estimate the travel time from the source to the destination. This path-finding scheme can easily be extended to integrate a battery charge plan for EVs: if an EV is to have its battery charged, the path schedule can be adjusted to include an available charging station.
2.2 EV Sharing
EVs are still relatively expensive, so EV sharing is a promising business model. Moreover, it can reduce the number of EVs in the community. In this model, people rent cars for short periods of time, picking up and dropping off a vehicle at different places. The booking system must increase the number of acceptable requests by improving the availability of vehicles at the very spot a customer wants. The EV sharing company essentially opens dedicated pick-up stations over the target service area and employs drivers who are responsible for moving cars between the pick-up stations. Hence, the booking system is required to consider the relocation cost as well. Efficient relocation is important for EV availability. It is a kind of multipoint routing problem, which is inherently NP-complete. Many heuristics have been designed, and useful implementations are open to the public. We adopt the Lin–Kernighan algorithm [5]. Each iteration step of this algorithm performs a sequence of exchange moves while considering whether or not a possible move could lead to a better tour. This heuristic can find an optimal route in most cases with extremely small overhead. In addition to reactive relocation, proactive schemes can be considered based on a car-sharing request forecast [7]. The pick-up point analysis can give a hint for efficient forecasting combined with a variety of forecasting models, including least squares, Kalman filters, and genetic algorithms. In addition, for public transport and car-sharing EVs to interoperate efficiently, an appropriate placement of pick-up stations is important [8]. The location of pick-up stations depends on the road network layout and the request pattern from the customer side. Our previous work has identified the intensive pick-up points based on the analysis of location history data created by the taxi tracking service. Based on this data, the telematics framework will select the locations of pick-up stations.
In this model, ease of installation must also be considered.
2.3 Charging Scheme
Even though many researchers and developers are working to improve driving range while decreasing the charging time, weight, and cost of batteries, EVs still need to be charged often, and it takes tens of minutes to charge an EV. Without an appropriate distribution of EVs over charging stations, not only can the waiting time be intolerable to drivers, but the power consumption may also exceed the permissible bound in a charging station, possibly resulting in extra cost. The availability of charging station information makes it possible to distribute and even assign EVs to stations. To this end, many telematics services will be available for
drivers, for example, on-demand information fuelling, remote vehicle diagnostics, interior pre-conditioning, and green report generation for monthly EV miles. The information necessary for charge services includes the estimated distance covered on the existing charge, availability and booking of charging stations, locations of charging stations, and state of charge. When an EV decides to have its battery charged, it runs Dijkstra's algorithm for multiple destinations. As the vicinity road network is not so large, it is possible to run this optimal algorithm. This version begins from the location of the EV, spanning the nodes within the range reachable with the remaining battery capacity until it finds any destination. Then, a reservation request is sent to the station with its battery and time constraints specified. The charging station attempts to schedule the request and checks if it can meet the requirement without violating the constraints of already admitted requests. In a charging station, each charge operation can be modeled as a task. For a task, the power consumption behavior can vary according to the charge stage, remaining amount, vehicle type, and the like. The load power profile is practical for characterizing the power consumption dynamics along the battery charge stage [9]. This profile is important in generating a charge schedule. In the profile, the power demand is aligned to fixed-size time slots, during which the power consumption is constant. The length of a time slot can be tuned according to the system requirements on schedule granularity and computing time. In a power schedule, the slot length can be a few minutes, for example, 5 minutes; this length coincides with the time unit generally used in the real-time price signal. After all, a task Ti can be modeled with the tuple <Ai, Di, Ui>. It is necessary to mention that a charge task is practically nonpreemptive in the charging station [10].
Even though a charge can be preempted in the single-user case, as in an individual home, in a charging station the charge process continues to the end once it has started. Ai is the activation time of Ti, Di is the deadline, and Ui denotes the operation length, which corresponds to the length of the consumption profile. Ai is the estimated arrival time of the vehicle. Each task can start anywhere from its activation time to its latest start time, which is calculated by subtracting Ui from Di. When a start time is selected, the profile entries are copied to the allocation table one by one, as the task cannot be suspended or resumed during its operation. The choice of start time is bounded by M, the number of time slots in the scheduling window; hence the time complexity of the search space traversal for a single nonpreemptive task is O(M), making the total complexity O(MN), where N is the number of tasks.

2.4
Location History Analysis
Along with the time and spatial stamps, EVs are highly likely to report DTG-related data. In case real-time collection is not feasible due to network breakage or bandwidth limitation, at least an off-line analysis will be conducted [11]. The examples of a DTG record include onboard recorder insertions and withdrawals of tachograph cards, detailed speed and over-speeding, driving
Electric Vehicle Telematics Framework for Smart Transportation
Fig. 3. History data analysis framework
distances and time, malfunctioning, vehicle unit identification data, motion sensor identification data, calibration data, company locks data and management, control activity data, download activity data, entry of specific conditions, and warnings. In addition, the ECU consists of a sensor part and an actuator part to perform a diagnostic function and a fail-safe function. The sensor part includes accelerator position sensors, accelerator pedal switches, revolution sensors, and motor temperature sensors, while the actuator part consists of system main relays, system main relay control, traction motor control, output reduction control, and a diagnosis system. Figure 3 depicts the spatial analyzer tool implemented with our own road network visualizer. By pointing at two discrete points, we can set the block or route of interest. As cars generally move along the shortest path, the analyzer performs the A* path-finding algorithm between the two points, creating the set of links that constitute the path. A query is issued to the history database to retrieve the records whose links are included in the set. The analyzer can also set a box area to investigate in more detail. Here, each time a car enters and leaves the block of interest, we can get the first record and the last record of its trajectory inside the block. From them, the analyzer can calculate the time difference as well as estimate the distance between the two points. This is not the Euclidean distance but the network distance, which corresponds to the actual path taken by the car. Such information can be submitted to an advanced processing step based mainly on artificial intelligence techniques such as the Kalman filter [12].
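The dwell-time and network-distance computation described above can be illustrated with a short sketch. The record layout (timestamp, link id) and the link-length table below are assumptions for illustration, since the paper does not specify the storage schema.

```python
# Sketch of the per-block trajectory analysis: given the time-ordered records
# of one car inside the block of interest, the dwell time is the difference
# between the first and last timestamps, and the network distance is the sum
# of traversed link lengths (not the Euclidean distance between the endpoints).

def analyze_block(records, link_length):
    """records: time-ordered (timestamp_s, link_id) pairs inside the block.
    link_length: dict mapping link_id -> length in metres."""
    dwell_time = records[-1][0] - records[0][0]     # seconds inside the block
    traversed = []                                   # links in traversal order,
    for _, lid in records:                           # deduplicating repeats on
        if not traversed or traversed[-1] != lid:    # the same link
            traversed.append(lid)
    network_distance = sum(link_length[lid] for lid in traversed)
    return dwell_time, network_distance
```

The resulting (dwell time, network distance) pair is what the analyzer would hand to the downstream processing step, e.g. a Kalman-filter-based estimator.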
3
Conclusions and Summary
EVs are important elements of the smart grid, and they are charged via the connection to the smart grid system. In this paper, we have functionally designed an integrative telematics service framework for moving objects, focusing on the essential telematics services such as object tracking, vehicle sharing, charge reservation, and movement data analysis. EV tracking implements coordinate conversion, map matching, and path finding, and it can generate a schedule integrating
the battery charge plan. Vehicle sharing is a promising business model for EVs and needs sharing station site selection and an efficient relocation algorithm. Battery charging is the core of the EV telematics system, including driver-station booking and station-side scheduling. It can distribute EVs over multiple stations to reduce the charging time and peak load. The location data analysis must handle a large amount of location data containing DTG and ECU records in addition to the classical temporal and spatial stamps. On top of the spatial database, a sophisticated analysis engine is available. In short, the proposed telematics framework can enrich EV-related applications and accelerate their deployment in our daily life.
References 1. Gellings, C.W.: The Smart Grid: Enabling Energy Efficiency and Demand Response. CRC Press, Boca Raton (2009) 2. Mohsenian-Rad, A., Wong, V., Jatkevich, J., Schober, R., Leon-Garcia, A.: Autonomous demand-side management based on game-theoretic energy consumption scheduling for the future smart grid. IEEE Transactions on Smart Grid 1, 320–331 (2010) 3. Korean Smart Grid Institute, http://www.smartgrid.or.kr/eng.htm 4. Frost & Sullivan: Strategic Market and Technology Assessment of Telematics Applications for Electric Vehicles. In: 10th Annual Conference of Detroit Telematics (2010) 5. Lee, J.-H., Park, G.-L., Kim, H., Yang, Y.-K., Kim, P.-K., Kim, S.-W.: A telematics service system based on the linux cluster. In: Shi, Y., van Albada, G.D., Dongarra, J., Sloot, P.M.A. (eds.) ICCS 2007. LNCS, vol. 4490, pp. 660–667. Springer, Heidelberg (2007) 6. Goldberg, A., Kaplan, H., Werneck, R.: Reach for A*: Efficient point-to-point shortest path algorithms. MSR-TR-2005-132. Microsoft (2005) 7. Xu, J., Lim, J.: A New Evolutionary Neural Network for Forecasting Net Flow of a Car Sharing System. In: IEEE Congress on Evolutionary Computation, pp. 1670–1676 (2007) 8. Ion, L., Cucu, T., Boussier, J., Teng, F., Breuil, D.: Site Selection for Electric Cars of a Car-Sharing Service. World Electric Vehicle Journal 3 (2009) 9. Derin, O., Ferrante, A.: Scheduling Energy Consumption with Local Renewable Micro-Generation and Dynamic Electricity Prices. In: First Workshop on Green and Smart Embedded System Technology: Infrastructures, Methods, and Tools (2010) 10. Lee, J., Park, G., Kang, M., Kwak, H., Lee, S.: Design of a Power Scheduler Based on the Heuristic for Preemptive Appliances. In: Nguyen, N.T., Kim, C.-G., Janiak, A. (eds.) ACIIDS 2011, Part II. LNCS (LNAI), vol. 6592, pp. 396–405. Springer, Heidelberg (2011) 11. Schweppe, H., Zimmermann, A., Grill, D.: Flexible In-vehicle Stream Processing with Distributed Automotive Control Units for Engineering and Diagnosis. 
In: IEEE 3rd International Symposium on Industrial Embedded Systems, pp. 74–81 (2008) 12. Chui, C., Chen, G.: Kalman Filtering with Real-Time Applications. Springer, Heidelberg (2008)
E-Contract Securing System Using Digital Signature Approach Nashwa El-Bendary1, Vaclav Snasel2, Ghada Adam3, Fatma Mansour3, Neveen I. Ghali4, Omar S. Soliman3, and Aboul Ella Hassanien3 1
Arab Academy for Science,Technology, and Maritime Transport P.O. Box 12311, Giza, Egypt nashwa
[email protected] 2 Faculty of Electrical Engineering and Computer Science, VSB-Technical University of Ostrava 17. listopadu 15, 708 33 Ostrava-Poruba, Czech Republic
[email protected] 3 Faculty of Computers and Information, Cairo University 5 Ahmed Zewal St., Orman, Giza, Egypt
[email protected] 4 Faculty of Science, Al-Azhar University, Cairo, Egypt nev
[email protected]
Abstract. This paper presents an e-contract securing system, using the digital signature approach, for various e-commerce applications. The proposed system is composed of three phases, namely, the e-contract hashing and digital signing phases, which are applied at the sender's side, in addition to the digital signature verification phase, which is applied at the corresponding receiver's side. The implementation of the proposed system shows accurate and effective results in terms of signing and verification. Keywords: E-contract; hashing; digital signature; verification.
1
Introduction
Electronic commerce (e-commerce) denotes business processes on the Internet, such as buying and selling goods [1]. Further applications may include requesting information and writing contracts. The abuse of consumer privacy is becoming a concern at the consumer, business, and government levels. There will be resistance to participating in certain types of e-commerce transactions if the assurance of privacy is low or non-existent [2]. Digital signatures are one of the most important cryptographic tools in wide use today. Applications for digital signatures range from digital certificates for secure e-commerce to legal signing of electronic contracts (e-contracts). As in traditional business relations, e-contracts are employed to provide a legally enforceable protection mechanism to parties [3]. A conventional signature on a paper document usually engages the responsibility of the signer. A digital signature aims at signing a document in its electronic form [4], [5] and can be transmitted electronically with the signed document. Due to the rapid growth of e-commerce, digital signatures, which are fundamental tools for securing contracts via the Internet, are legally of vital interest. Consequently, security against fraud and misuse must at least be equal to that of traditionally signed written papers [6]. In this paper, a digital signature system for securing e-contracts is presented. The rest of this paper is organized as follows. Section 2 presents e-contract features. Section 3 introduces in detail the proposed e-contract digital signature system and its phases. Section 4 illustrates experiments and results. Finally, Section 5 addresses conclusions and discusses future work.
T.-h. Kim et al. (Eds.): ACN 2011, CCIS 199, pp. 183–189, 2011. © Springer-Verlag Berlin Heidelberg 2011
2
E-Contract Features
A contract is an agreement between two or more parties interested in creating mutual relationships regarding business or legal obligations. With the development of electronic means for communication and collaboration between organizations, electronic contracts (e-contracts) have emerged as a digital alternative to physical documents [7]. The use of e-contracts is a promising way to improve the efficiency of contracting processes. New business scenarios brought about by e-commerce developments require new contracting paradigms in which the use of electronic contracts becomes an essential element, leading to an essential paradigm shift in contracting business relations. Paper is a trusted medium for holding legal and audit evidence. People are familiar with this medium, and centuries of experience have tested the application of evidence to paper documents [8]. Due to the physical nature of paper and ink, every modification made to any of the parts that make up a paper document (paper sheet, text or pictures, information about the issuer, written signature) leaves a mark. However, an electronic document is saved on magnetic media and can be deleted, modified, or rewritten without traces of evidence. To ensure identification, authenticity, declaration, and proof, the process of signing should be substituted by new electronic methods [6]. The fundamental difference between conventional and digital signatures resides in the verification of signature authenticity. A conventional signature is authenticated by comparing it with a certified one. This authentication method is obviously not very reliable, since it is easy to imitate the signature of someone else. An electronic signature, however, can be verified by any person who knows the verification algorithm [4].
3
E-Contract Digital Signature System
The proposed e-contract digital signature system aims at securing e-contracts in different e-commerce applications through applying the ElGamal digital signature algorithm [11]. The proposed system is generally composed of three phases, namely, (1) the hashing and (2) digital signing phases, which are applied at the sender's side, in addition to (3) the verification phase, which is applied at the corresponding
Fig. 1. E-contract digital signature system phases
receiver’s side. Figure 1 depicts the main phases of the proposed system at both sender’s and receiver’s sides. The following subsections describe the phases of the proposed system. 3.1
Hashing Function Phase
A digital signature (DS) is a seal affixed to digital data, which is generated by a signature private key (PrK) and establishes the owner of the signature key and the integrity of the data with the help of an associated public key (PuK) [8]. Digital signature algorithms are applied either directly to the message to be authenticated or to its hash value in order to generate a tag that is used for authenticity verification [4]. The most important tool for implementing digital signature algorithms is the hash function. A hash function H generally operates on a message m of arbitrary length to provide a fixed-size hash value h called the message digest, as shown in equation (1). The size of the message digest h is usually much smaller than the size of the original message m. Several algorithms have been proposed for hash calculation [9], [10]. In this paper, the MD5 [15] hash function has been used
Algorithm 1. ElGamal digital signature algorithm
For digitally signing a document, the sender implements the following steps:
Set p as a large prime number, q as a large prime factor of p − 1, and g as an element of order q over GF(p), with gcd(g, p) = 1
Select a random number x, where x ∈ (1, p−1)
Calculate Y using equation (2); the public key is (Y, g, p), and the secret key is x
Y = g^x mod p (2)
Digitally sign message m through the following steps:
Select an integer k randomly, where k ∈ (1, p−1) and gcd(k, p−1) = 1
Calculate r using equation (3)
r = g^k (mod p) (3)
Calculate s, which satisfies equation (4), according to equation (5)
m = xr + ks (mod (p − 1)) (4)
s = (m − xr) k^(−1) (mod (p − 1)) (5)
Attach the sender's digital signature (r, s) to the original document m
for calculating the digest of the message. For a hash function H, it is easy to calculate h = H(m) from m; however, finding a message m' such that H(m') = h is not feasible. That is to say, H is a one-way hash function. It is introduced mainly in consideration of digital signatures and message authentication.
digest = h = H(m) (1)
3.2
Digital Signing Phase
The ElGamal encryption scheme [11] is one of the classic asymmetric public key encryption schemes. The ElGamal digital signature scheme [11], [12], [13] is quite different from the encryption scheme of the same name, since the security of the digital signature scheme is based on the difficulty of the discrete logarithm problem. For the proposed system, the ElGamal digital signature algorithm has been used for both the sender's digital signing and the receiver's verification phases. Algorithm 1 summarizes the digital signing steps. 3.3
Signature Verification Phase
At the receiver's side, after receiving the signature (r, s) of m from the sender, the receiver verifies whether (r, s) satisfies equation (6) by applying the steps of Algorithm 2 [14]. g^m = Y^r r^s (mod p) (6)
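Equations (2)–(6) can be exercised with a toy sketch. The small parameters p, g, x, and k below are illustrative assumptions chosen for readability (a real system uses large primes), and the code is a textbook ElGamal sign/verify pair, not the authors' Java implementation.

```python
# Toy ElGamal signing and verification per equations (2)-(6).
# p, g, x, k are small illustrative values, NOT secure parameters.
from math import gcd

p, g = 467, 2          # public prime modulus and base
x = 127                # signer's secret key
y = pow(g, x, p)       # public key Y = g^x mod p, eq. (2)

def sign(m, k):
    """Sign message m with ephemeral k, gcd(k, p-1) = 1. Returns (r, s)."""
    assert 1 <= k < p - 1 and gcd(k, p - 1) == 1
    r = pow(g, k, p)                                  # eq. (3)
    s = ((m - x * r) * pow(k, -1, p - 1)) % (p - 1)   # eq. (5); needs Python 3.8+
    return r, s

def verify(m, r, s):
    """Accept iff g^m == Y^r * r^s (mod p), eq. (6)."""
    return pow(g, m, p) == (pow(y, r, p) * pow(r, s, p)) % p
```

A valid signature passes the check, and altering the message by even one unit makes equation (6) fail, which is the integrity property the verification phase relies on.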
Algorithm 2. Digital signature verification algorithm
For verifying the received document, the receiver calculates the modulo multiplicative inverse according to the following steps:
Set m as a positive integer; for any u ∈ {0, 1, 2, ..., m−1}, u^(−1) denotes the modulo multiplicative inverse element
if u = 0 then
Set u^(−1) = 0
else
Set initial values as n1 = m, n2 = u, b1 = 0 and b2 = 1
end if
Divide n1 by n2 according to equation (7)
n1 = q ∗ n2 + r (7)
Get the quotient q and the remainder r
if r ≠ 0 then
Update variables as n1 = n2, n2 = r, t = b2, b2 = b1 − q∗b2, b1 = t, then go back to step (3)
else if n2 ≠ 1 then
u^(−1) does not exist
end if
if b2 < 0 then
Update b2 as b2 = b2 + m
end if
Set u^(−1) = b2
if (r, s) satisfies equation (6) then
The signature is accepted
else
The signature is rejected
end if
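The inverse computation in Algorithm 2 is the extended Euclidean algorithm; a compact sketch keeping the algorithm's own variable names (n1, n2, b1, b2) might look like this. The `None` return for a non-existent inverse is a representational choice of the sketch.

```python
# Modulo multiplicative inverse via the extended Euclidean algorithm,
# following the variable names of Algorithm 2. Returns u^(-1) mod m,
# 0 for u = 0 (the algorithm's convention), or None when gcd(u, m) != 1.

def mod_inverse(u, m):
    if u == 0:
        return 0                      # convention used by Algorithm 2
    n1, n2, b1, b2 = m, u, 0, 1
    while True:
        q, r = divmod(n1, n2)         # n1 = q * n2 + r, eq. (7)
        if r == 0:
            break
        n1, n2, b1, b2 = n2, r, b2, b1 - q * b2
    if n2 != 1:
        return None                   # u and m are not coprime: no inverse
    return b2 % m                     # subsumes the "if b2 < 0 then b2 += m" step
```

For example, the inverse of 3 modulo 466 (= p − 1 for the toy prime p = 467) is 311, since 3 · 311 = 933 ≡ 1 (mod 466); this is exactly the k^(−1) needed in equation (5).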
4
Experimental Results
The e-contract securing system proposed in this paper deals with e-contracts to be secured in the form of image files. To digitally sign an e-contract to be sent, the system loads the e-contract document (a bmp image, or a jpg image that will be converted into a bmp image), then applies the hashing and signing phases in order to generate a digitally signed e-contract. On the receiver's side, the system verifies the received e-contract in order to ensure the integrity of received e-contracts and the identity of the sender. The proposed e-contract digital signature system was tested against an e-contract sample bmp image of size 564 x 436. All the results reported in this paper were obtained on a PC with Windows 7 Ultimate and an Intel Core i3 2.13 GHz processor, using the Java programming language (NetBeans IDE 6.7). The MD5 [15] hash function has been used for calculating the message digest. The resulting digest of the hash function is a hexadecimal string of length 28 digits of the format (ee6592cd76c886ed4f6709332e97), as shown in Figure 2. In addition,
(a) Original document
(b) Digitally signed document
Digest = (ee6592cd76c886ed4f6709332e97)
Fig. 2. Original, digitally signed document, and generated digest
Figure 2 shows the original and digitally signed document. After calculating the digital signature, the values of r and s are r = 1 and s = 1.
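The digest step above is reproducible with any standard MD5 implementation. In the sketch below the file name is hypothetical, and the RFC 1321 test vector is included only to show that the digest depends solely on the input bytes.

```python
# Computing the message digest h = H(m) of an e-contract image with MD5.
# The file path is a hypothetical example, not one from the paper.
import hashlib

def contract_digest(path):
    """Return the hexadecimal MD5 digest of the raw bytes of the file."""
    with open(path, "rb") as f:
        return hashlib.md5(f.read()).hexdigest()

# Known MD5 test vector from RFC 1321:
assert hashlib.md5(b"abc").hexdigest() == "900150983cd24fb0d6963f7d28e17f72"
```

Any change to the image bytes changes the digest, which is what lets the receiver detect tampering before checking the signature.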
5
Conclusions and Future Work
The rapid progress in standardizing computing and communication technologies has enabled electronic commerce (e-commerce) to become booming and promising. The floods of consumer transactions that take place online all require the execution of electronic contracts (e-contracts) with the aid of digital signatures. This paper presents a system for securing e-contracts using a digital signature approach that aims at achieving authenticity and integrity for e-contracts. The proposed system is generally composed of three phases: the hashing and digital signing phases, which are applied at the sender's side, in addition to the verification phase, which is applied at the corresponding receiver's side. In future work, we want to implement the proposed system with different symmetric key encryption algorithms in order to assure document confidentiality. Also, further implementations could be applied with different hash functions and different file formats representing the e-contract documents.
References 1. Schwiderski-Grosche, S., Knospe, H.: Secure Mobile Commerce. Special issue of the IEE Electronics and Communication Engineering Journal on Security for Mobility 14(5), 228–238 (2002) 2. Marchany, R.C., Tront, J.G.: E-Commerce Security Issues. In: The 35th Hawaii International Conference on System Sciences, pp. 2500–2508 (2002)
3. Angelov, S., Till, S., Grefen, P.: Dynamic and Secure B2B E-contract Update Management. In: EC 2005 Conference, Vancouver, British Columbia, Canada (2005) 4. Haouzia, A., Noumeir, R.: Methods for Image Authentication: A Survey. Multimedia Tools and Applications 39(1), 1–46 (2008) 5. Pfitzmann, B.: Digital Signature Schemes: General Framework and Fail-Stop Signatures. LNCS, vol. 1100. Springer, Berlin (1996) 6. Menzel, T., Schweighofer, E.: Securing Electronic Commerce with Digital Signatures - Do Digital Signatures Comply with the Legal Criteria for the Written Form and Supply Equal Proof? In: The 14th BILETA Conference (CYBERSPACE 1999) Crime, Criminal Justice and the Internet, College of Ripon & York St. John, York, England (1999), http://www.bileta.ac.uk/99papers/menzel.html 7. Fantinato, M.: A Feature-based Approach to Web Services E-contract Establishment. IBM Research Report. In: Di Nitto, E., Ripeanu, M. (eds.) ICSOC 2007. LNCS, vol. 4907, Springer, Heidelberg (2009) 8. Wright, B.: The Law of Electronic Commerce - EDI, E-Mail, and Internet: Technology, Proof, and Liability. Little Brown and Company, Boston (1996) 9. Matsuo, T., Kaoru, K.: On Parallel Hash Functions Based on Block-ciphers. The IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences, 67–74 (2004) 10. Skala, V., Kucha, M.: The Hash Function and The Principle of Duality. The Computer Graphics International 200, 167–174 (2001) 11. El Gamal, T.: A Public Key Cryptosystem and a Signature Scheme Based on Discrete Logarithms. In: Blakely, G.R., Chaum, D. (eds.) CRYPTO 1984. LNCS, vol. 196, pp. 10–18. Springer, Heidelberg (1985) 12. Paar, C., Pelzl, J., Preneel, B.: Understanding Cryptography: A Textbook for Students and Practitioners, 2nd edn. Digital Signatures, ch.10. Springer, Heidelberg (2010) 13. Chen, H., Shen, X., Lv, Y.: A New Digital Signature Algorithm Similar to ELGamal Type. Journal of Software - JSW 5(3), 320–327 (2010) 14. 
Li, X., Sun, F., Wu, E.: Modulo Multiplicative Inverse Circuit Design. University of Macau, Macao (2006) 15. Wang, Y., Wong, K.-W., Xiao, D.: Parallel Hash Function Construction Based on Coupled Map Lattices. Communications in Nonlinear Science and Numerical Simulation 16(7), 2810–2821 (2011)
Fault Tolerance Multi-Agents for MHAP Environment: FTMA SoonGohn Kim1 and Eung Nam Ko2 1 Division of Computer and Game Science, Joongbu University, 101 Daehakro, Chubu-Meon, GumsanGun, Chungnam, 312-702, Korea
[email protected] 2 Division of Information & Communication, Baekseok University, 115, Anseo-Dong, Cheonan, Chungnam, 330-704, Korea
[email protected]
Abstract. This paper describes the design and implementation of FTMA (Fault Tolerance Multi-Agents), which runs in the MHAP (MOM-based Home Automation Platform) environment. FTMA has been designed and implemented in the DOORAE environment for MHAP. The physical device and network layer consists of any network and physical device supporting any networking technology for MHAP. The infrastructure layer introduces infrastructure to provide service management and deployment functions for MHAP. DOORAE (Distributed Object Oriented collaboRAtion Environment) is a good example of the foundation technology for computer-based multimedia collaborative work, allowing the development of a required application by combining many agents composed of units of functional modules when a user wishes to develop a new application field. It is a multi-agent system implemented with object-oriented concepts for MHAP. Keywords: FTMA, MHAP, multimedia collaborative work, Fault-Tolerance Multi-agents, DOORAE.

1

Introduction
Since new education systems must be developed in a way that combines various fields of technology, including group communication and distributed multimedia processing, which are the basis of packet-based videoconferencing systems, integrated service functions such as middleware are required to support them [1,2,3,4]. A key requirement of distributed multimedia applications is sophisticated quality of service (QoS) management. In terms of distributed multimedia systems, the most important categories for quality of service are timeliness, volume, and reliability [5]. In this paper, we discuss a method for increasing reliability through FTMA (Fault Tolerance Multi-Agents), which runs in the MHAP (MOM-based Home Automation Platform) environment. FTMA is a fault-tolerance system running on a distributed multimedia object-oriented collaboration environment. The objective of this article is to propose a multi-agent model that is a fault tolerance system with detection, classification, and recovery agents to detect, classify, and recover from errors automatically. Section 2 describes the context: situation-aware middleware. Section 3 describes FTMA. Section 4 describes simulation results. Section 5 concludes the paper.
T.-h. Kim et al. (Eds.): ACN 2011, CCIS 199, pp. 190–196, 2011. © Springer-Verlag Berlin Heidelberg 2011
2
The Context: Situation-Aware Middleware
A conceptual architecture of situation-aware middleware based on Reconfigurable Context-Sensitive Middleware (RCSM) is proposed in [6]. Ubiquitous applications require the use of various contexts to adaptively communicate with each other across multiple network environments, such as mobile ad hoc networks, the Internet, and mobile phone networks. However, existing context-aware techniques often become inadequate in these applications, where combinations of multiple contexts and users' actions need to be analyzed over a period of time. Situation-awareness in application software is considered a desirable property to overcome this limitation. In addition to being context-sensitive, situation-aware applications can respond to both current and historical relationships of specific contexts and device-actions. However, RCSM did not include fault tolerance system support in its architecture.
3
FTMA
In this paper, we focus on describing a multi-agent model that is a fault tolerance system in situation-aware middleware for the MHAP environment. 3.1
Overview of the MHAP Model
As shown in Figure 1, MHAP has a four-layered architecture [7]. The physical device and network layer consists of any network and physical device supporting any networking technology. The infrastructure layer introduces infrastructure to provide service management and deployment functions for MHAP services. The MHAP layer consists of MHAP services and provides the functionality for constructing HA, which includes event notification, appliance control, HA rule configuration, and device management. It uses MOM to support event-driven HA in a heterogeneous environment. Facilitating home automation needs many different kinds of applications. There is a DOORAE agent layer between the application layer and the MHAP service layer. Nowadays, multi-agent systems constitute a major research subject in distributed artificial intelligence. In particular, multi-agent modeling makes it possible to cope with natural constraints, such as the limitation of the processing power of a single agent or the physical distribution of the data to be processed, and to profit from inherent properties of distributed systems such as robustness, fault tolerance, parallelism, and scalability [8]. 3.2
FTMA for DOORAE Agent Layer in MHAP Model
Our proposed FTMA model aims at supporting adaptive fault tolerance of events occurring at the application level, described by a set of objects, by reserving, allocating, and reallocating necessary resources given dynamically changing situations. A high-level FTMA conceptual architecture to support adaptive fault tolerance of events is shown in Figure 2.
Fig. 1. The organization of MHAP
The main components are the Situation-aware Manager (SM), the Resource Manager (RM), and the Fault Tolerance Multi-Agents (FTMA), shown in the Situation-Aware Middleware box in Figure 2. Applications request Situation-aware Middleware to execute a set of events with various adaptive fault tolerance requirements. The Situation-aware Manager analyzes and synthesizes context information captured by sensors over a period of time and derives a situation. The Resource Manager simultaneously analyzes resource availability by dividing the resources requested by events by the available resources. It is also responsible for monitoring, reserving, allocating, and deallocating each resource. Given the derived situations, the Fault Tolerance Multi-Agents (FTMA) control resources, when errors are met, through the Resource Manager to guarantee adaptive fault tolerance of events. If some resource errors occur due to low resource availability, FTMA performs resource error
detection-recovery. The RM resolves the errors by recovering resources to support high-priority error events. To effectively identify and resolve error event conflicts, we need to capture the relationships between error events, responses, their related fault tolerance requirements, and resources.
Fig. 2. FTMA Model based on MHAP
4
Simulating FTMA Based on MHAP
The FTMA based on MHAP simulation model has been implemented using Visual C++. To evaluate the performance of the proposed system, an error detection method was used to compare the proposed model against the conventional model by using the DEVS (Discrete Event System Specification) formalism. The DEVS formalism, introduced by Bernard P. Zeigler, provides a means of specifying a mathematical object called a system. Conventional simulation systems adequately support only a single level at which change occurs in the model, that of changes in the model's descriptive variables, viz. its behavior. The DEVS formalism is a
194
S. Kim and E.N. Ko
theoretical, well-grounded means of expressing hierarchical, modular discrete event models. In DEVS, a system has a time base, inputs, states, and outputs based on the current states and inputs. The structure of an atomic model is as follows [9,10,11]:
M = < X, S, Y, δint, δext, λ, ta >
where X: a set of input events, S: a set of sequential states, Y: a set of output events, δint: internal transition function, δext: external transition function, λ: output function, and ta: time advance function.
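The atomic-model tuple can be sketched as a small Python class. The passive/active phases and the 5-time-unit delay below are illustrative assumptions, not the authors' EF/RA/ED models.

```python
# Minimal sketch of a DEVS atomic model M = <X, S, Y, delta_int, delta_ext, lambda, ta>.
# The state S is (phase, sigma); sigma is the time remaining until the next
# internal transition, and ta(S) returns it.

class AtomicModel:
    def __init__(self):
        self.phase, self.sigma = "passive", float("inf")  # initial state S

    def delta_ext(self, e, x):
        """External transition: react to input event x in X after elapsed time e."""
        self.phase, self.sigma = "active", 5.0            # e.g., a polling event arrives

    def delta_int(self):
        """Internal transition: fires when ta(S) expires, return to passive."""
        self.phase, self.sigma = "passive", float("inf")

    def lam(self):
        """Output function lambda: S -> Y, emitted just before delta_int."""
        return "error_detected" if self.phase == "active" else None

    def ta(self):
        """Time advance function."""
        return self.sigma
```

Coupling several such atomic models (as in Simulations 1 and 2) yields a coupled model whose outputs can be observed through a transducer.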
Before the system analysis, the variables used in this system are as follows. Poll_int stands for "polling interval". App_cnt stands for "the number of application programs related to the FTE session". App_cnt2 stands for "the number of application programs not related to the FTE session". Sm_t_a stands for "the accumulated time to register information in SM".
(Simulation 1) The atomic models are EF, RA1, UA1, and ED1. The combination of atomic models makes a new coupled model. First, it receives an input event, i.e., the polling interval. The value is an input value to RA1 and UA1, respectively. An output value is determined by the time-related simulation processes RA1 and UA1, respectively. The output value becomes an input value to ED1, whose output value is determined by its time-related simulation process. We can observe the result value through a transducer.
(Simulation 2) The atomic models are EF, RA2, and ED2. The combination of atomic models makes a new coupled model. First, it receives an input event, i.e., the polling interval. The value is an input value to RA2. An output value is determined by the time-related simulation process RA2. The output value becomes an input value to ED2, whose output value is determined by its time-related simulation process. We can observe the result value through a transducer. The error detection time intervals are as follows.
Fig. 3. The relationship of application program and error detection time
Fault Tolerance Multi-Agents for MHAP Environment: FTMA
195
Conventional method: Poll_int * (App_cnt + App_cnt2)
Proposed method: Poll_int * App_cnt + Sm_t_a
Therefore, in the case of App_cnt2 > App_cnt,
Poll_int * (App_cnt + App_cnt2) > Poll_int * App_cnt + Sm_t_a
That is, the proposed method is more efficient than the conventional method in error detection in the case of App_cnt2 > App_cnt. We have compared the performance of the proposed method with the conventional method.
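The comparison above can be checked numerically. The parameter values below are illustrative assumptions, chosen so that App_cnt2 > App_cnt and the SM registration overhead Sm_t_a stays below Poll_int * App_cnt2, the regime in which the proposed method wins.

```python
# Error detection time of the two methods, using the paper's variable names.

def conventional(Poll_int, App_cnt, App_cnt2):
    """Conventional method: poll every application program."""
    return Poll_int * (App_cnt + App_cnt2)

def proposed(Poll_int, App_cnt, Sm_t_a):
    """Proposed method: poll only FTE-related programs, plus SM registration time."""
    return Poll_int * App_cnt + Sm_t_a

# Illustrative values (assumed): App_cnt2 > App_cnt and Sm_t_a < Poll_int * App_cnt2
Poll_int, App_cnt, App_cnt2, Sm_t_a = 2.0, 10, 30, 8.0
```

With these values the conventional method needs 80 time units while the proposed method needs 28, matching the ordering shown in Fig. 3.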
5
Conclusion
This paper proposes Adaptive Fault Tolerance Multi-Agents (FTMA) in a situation-aware middleware framework and presents its simulation model of FTMA-based agents. It is a system suitable for detecting and recovering from software errors in the Home Automation environment by using software techniques. The purpose of FTMA is to maintain and recover DOORAE sessions automatically. The physical device and network layer consists of any network and physical device supporting any networking technology for MHAP. The infrastructure layer introduces infrastructure to provide service management and deployment functions for MHAP. DOORAE (Distributed Object Oriented collaboRAtion Environment) is a good example of the foundation technology for computer-based multimedia collaborative work, allowing the development of a required application by combining many agents composed of units of functional modules when a user wishes to develop a new application field. It is a multi-agent system implemented with object-oriented concepts for MHAP. In future work, the fault-tolerance system will be generalized to be used in any environment, and we will study the domino effect for the distributed multimedia environment as an example of situation-aware applications.
S. Kim and E.N. Ko
An Error Detection-Recovery Agent for Multimedia Distance System Based on Intelligent Context-Awareness: EDRA_RCSM

SoonGohn Kim 1 and Eung Nam Ko 2

1 Division of Computer and Game Science, Joongbu University, 101 Daehakro, Chubu-Meon, GumsanGun, Chungnam, 312-702, Korea, [email protected]
2 Division of Information & Communication, Baekseok University, 115, Anseo-Dong, Cheonan, Chungnam, 330-704, Korea, [email protected]
Abstract. Interest in multimedia education systems has increased lately. In this paper, we explain an error detection-recovery agent for a multimedia distance education system based on RCSM (Reconfigurable Context-Sensitive Middleware). DOORAE is a good example for developing a multimedia distance education system based on RCSM between students and teachers during lectures. The development of multimedia computers and communication techniques has made it possible for a mind to be transmitted from a teacher to a student in a distance environment. The proposed method detects errors by periodically inspecting a process database based on RCSM. If an error is found, the system applies learning-based coordination of actions in multi-agent systems, based on RCSM, to classify the type of error. If an error is to be recovered, the system uses the same method as it uses to create a session. EDRA_RCSM is a system suitable for detecting and recovering software errors in a multimedia distance education system based on RCSM by using software techniques. Keywords: multimedia distance education system, RCSM, DOORAE, detecting and recovering software error, EDRA_RCSM.
1 Introduction
Context awareness (or context sensitivity) is an application software system's ability to sense and analyze context from various sources; it lets application software take different actions adaptively in different contexts [1]. In a ubiquitous computing environment (computing anytime, anywhere, on any device), the concept of situation-aware middleware has played a very important role in matching user needs with available computing resources in a transparent manner in dynamic environments [2, 3]. The implementation of interactive multimedia distance education systems can be recognized as a diversification of the videoconferencing systems that first appeared in the 1980s. Early implementations of videoconferencing systems were circuit-based
T.-h. Kim et al. (Eds.): ACN 2011, CCIS 199, pp. 197–202, 2011. © Springer-Verlag Berlin Heidelberg 2011
systems relying on dedicated video devices, telephone networks, or leased lines. After the early 1990s, the major basis of videoconferencing systems moved to packet-based systems operating on computer networks. However, since this new education system must be developed in a way that combines various fields of technology, including group communication and distributed multimedia processing, which are the basis of packet-based videoconferencing systems, integrated service functions such as middleware are required to support it [4,5,6,7]. In this paper, we propose a method for increasing reliability based on context awareness. The rest of this paper is organized as follows. Section 2 describes multi-agent systems. Section 3 presents EDRA_RCSM. Section 4 concludes the paper.
[Figure: software layers, from top to bottom: End user; End-user interface; Applications (content titles, solutions); Support tools (multimedia authoring tools, software engineering management tools, systems administration); Application enablers (programming language compilers, hypermedia and hypertext linking tools, agent tools, browsers, tools for capturing, creating, and editing graphics, image, and video); Data management facilities (object-oriented, relational, and hierarchical databases; file systems; hypermedia link support in the operating system; agent support); Base operating system and network operating system (batch, interactive, or real-time support; input-output device drivers); Hardware interface; Hardware, including end-user device, server, and delivery.]
Fig. 1. Software layers
2 Multi-agent System
In computer science and artificial intelligence, the concept of multi-agent systems has influenced the initial developments in areas like cognitive modeling [8, 9, 14], blackboard systems [10, 14], object-oriented programming languages [11, 14], and formal models of concurrency [14]. Nowadays multi-agent systems constitute a major research subject in distributed artificial intelligence [12, 13, 14]. The interest in multi-agent systems is largely founded on the insight that many real-world problems are best modeled using a set of agents instead of a single agent. In particular, multi-agent modeling makes it possible to cope with natural constraints, like the limited processing power of a single agent or the physical distribution of the data to be processed, and to profit from inherent properties of distributed systems, like robustness, fault tolerance, parallelism, and scalability. Generally, a multi-agent system is composed of a number of agents that are able to interact with each other and the environment, and that differ from each other in their skills and their knowledge about the environment. There is a great variety in the multi-agent systems studied in distributed artificial intelligence [14]. Developers and users have different views of what an application is. From a developer's point of view, an application is the next-to-highest software layer, as shown in Figure 1. An application's upper boundary is an end-user interface; its lower boundary is the application programming interface that lower layers provide. In general, an application runs partly in one or more end-user devices and partly in one or more servers [15].
3 EDRA_RCSM
3.1 Reconfigurable Context-Sensitive Middleware (RCSM)
Figure 2 shows how all of RCSM's components are layered inside a device. The Object Request Broker of RCSM (R-ORB) assumes the availability of reliable transport protocols; one R-ORB per device is sufficient. The number of ADaptive object Containers (ADCs) depends on the number of context-sensitive objects in the device. ADCs periodically collect the necessary "raw context data" through the R-ORB, which in turn collects the data from sensors and the operating system. Initially, each ADC registers with the R-ORB to express its needs for contexts and to publish the corresponding context-sensitive interface. RCSM is called reconfigurable because it allows addition or deletion of individual ADCs during runtime (to manage new or existing context-sensitive application objects) without affecting other runtime operations inside RCSM. Ubiquitous applications require various contexts to communicate adaptively with each other across multiple network environments, such as mobile ad hoc networks, the Internet, and mobile phone networks. However, existing context-aware techniques often become inadequate in applications where combinations of multiple contexts and users' actions need to be analyzed over a period of time. Situation-awareness in application software is considered a desirable property to overcome this limitation. In addition to being context-sensitive, situation-aware applications can respond to both current and historical relationships of specific contexts and device-actions [16].
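The registration and collection cycle described above can be sketched as follows. This is an illustrative model only; the class and method names (`RORB.register`, `ADC.poll`, and so on) are assumptions, not RCSM's actual API:

```python
class RORB:
    """One Object Request Broker per device; gathers raw context
    data from sensors and the OS on behalf of registered ADCs."""
    def __init__(self, sensors):
        self.sensors = sensors      # context name -> callable returning a reading
        self.registry = {}          # ADC -> list of needed context names

    def register(self, adc, needed_contexts):
        # Each ADC registers initially to express its context needs.
        self.registry[adc] = needed_contexts

    def unregister(self, adc):
        # "Reconfigurable": ADCs may be removed at runtime.
        self.registry.pop(adc, None)

    def collect(self, needed):
        return {name: self.sensors[name]() for name in needed}

class ADC:
    """One ADaptive object Container per context-sensitive object."""
    def __init__(self, name, needed_contexts):
        self.name, self.needed = name, needed_contexts
        self.raw = {}

    def poll(self, orb):
        # Called periodically to collect the necessary raw context data.
        self.raw = orb.collect(self.needed)

orb = RORB({"location": lambda: "room-2", "light": lambda: 420})
adc = ADC("lecture-object", ["location"])
orb.register(adc, adc.needed)
adc.poll(orb)
print(adc.raw)
```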
[Figure: RCSM components layered inside a device: Situation-Aware Application Objects; RCSM optional components (RCSM Ephemeral Group Communication Service, other services); core components (Adaptive Object Containers (ADCs), providing awareness of situation; RCSM Object Request Broker (R-ORB), providing transparency over ad hoc communication); Transport Layer Protocols for Ad Hoc Networks; sensors; OS.]
Fig. 2. Integrated Components of RCSM
[Figure: message flow among <Information: RCSM>, FTA, Daemon, the Session Manager, and the video, audio, whiteboard, and application sharing provider: registration, found error, existence/inform/creation, and inspection messages.]
Fig. 3. Relationship between FTA and Daemon based on RCSM
3.2 EDRA_RCSM Based on RCSM
However, RCSM did not include fault-tolerance agent support in its architecture. In this paper, we focus on how to represent a fault-tolerance agent in situation-aware middleware such as RCSM. EDRA_RCSM consists of FTA (Fault Tolerance Agent), UIA (User Interface Agent), and SMA (Session Management Agent), which are included among the other services of RCSM's optional components. UIA is an agent which serves as an interface between the user and FTA. UIA is a module in EDRA_RCSM with functions that receive the user's requirements and provide the
results for the user. SMA is an agent which connects UIA and FTA and manages the whole information. SMA monitors access to the session and controls the session. It has an object with various information for each session, and it also supports multitasking with this information. SMA consists of GSM (Global Session Manager), Daemon, LSM (Local Session Manager), PSM (Participant Session Manager), Session Monitor, and Traffic Monitor. GSM has the function of controlling the whole session when a number of sessions are open simultaneously. An LSM manages only its own session; for example, an LSM may manage a lecture class in a distributed multimedia environment. GSM can manage multiple LSMs. Daemon is an object with services to create sessions. This system consists of an FTA, GSM, LSM, PSM, and the application software on a LAN. One platform consists of GSM, Session Monitor, and Traffic Monitor; the other platform consists of Daemon, Local Session Manager, Participant Session Manager, and FTA.
[Figure: message flow among <Information: RCSM>, FTA, the Session Manager, and the video, audio, whiteboard, and application sharing provider instances: registration, found error, inform, existence/creation, and inspection messages.]
Fig. 4. Relationship between FTA and Session Manager based on RCSM
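The session-manager hierarchy just described, with one GSM controlling multiple LSMs and each LSM managing only its own session, can be sketched as follows. All class and method names are illustrative, not the DOORAE implementation:

```python
class LSM:
    """Local Session Manager: manages only its own session,
    e.g. one lecture class in a distributed multimedia environment."""
    def __init__(self, session_id):
        self.session_id = session_id
        self.participants = []      # one PSM per participant

    def join(self, participant):
        self.participants.append(participant)

class GSM:
    """Global Session Manager: controls the whole set of sessions
    when a number of sessions are open simultaneously."""
    def __init__(self):
        self.sessions = {}

    def open_session(self, session_id):
        # In DOORAE, a Daemon object would perform the actual creation.
        lsm = LSM(session_id)
        self.sessions[session_id] = lsm
        return lsm

gsm = GSM()
lecture = gsm.open_session("lecture-1")
lecture.join("psm-student-1")
gsm.open_session("lecture-2")
print(len(gsm.sessions), lecture.participants)
```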
4 Conclusion
In a multi-agent environment, intelligent agents interact with each other, either collaboratively or non-collaboratively, to achieve their goals. The main idea is to detect an error by polling methods based on context awareness, and to classify the type of error based on context awareness by using learning rules. The merit of this system is that it recovers an error using the same method as it uses to create a session based on context awareness. EDRA_RCSM is a system capable of detecting and recovering software errors for a multimedia distance education system based on context awareness. The weak point of this system is that it is limited to DOORAE based on context awareness. Our future work is to extend it to autonomous agents for detecting and recovering errors and to generalize it to fit any other system based on context awareness.
References 1. Yau, S., Karim, F., Wang, Y., Wang, B., Gupta, S.: Reconfigurable Context-Sensitive Middleware for Pervasive Computing. IEEE Pervasive Computing 1(3), 33–40 (2002) 2. Yau, S.S., Karim, F.: Adaptive Middleware for Ubiquitous Computing Environments. In: Design and Analysis of Distributed Embedded Systems, Proc. IFIP 17th WCC, August 2002, vol. 219, pp. 131–140 (2002)
3. Yau, S.S., Karim, F.: Context-Sensitive Middleware for Real-time Software in Ubiquitous Computing Environments. In: Proc. 4th IEEE Int'l Symp. on Object-Oriented Real-time Distributed Computing (ISORC 2001), May 2001, pp. 163–170 (2001) 4. Ahn, J.Y., Lee, G.m., Park, G.C., Hwang, D.J.: An implementation of Multimedia Distance Education System Based on Advanced Multi-point Communication Service Infrastructure: DOORAE. In: Proceedings of the IASTED International Conference Parallel and Distributed Computing and Systems, Chicago, Illinois, USA, October 16-19 (1996) 5. Fluckiger, F.: Understanding Networked Multimedia: Applications and Technology. Prentice Hall, Hertfordshire (1995) 6. Loftus, C.W., Sherratt, E.M., Gautier, R.J., Grandi, P.A.M., Price, D.E., Tedd, M.D.: Distributed Software Engineering. The Practitioner Series. Prentice Hall, Hertfordshire (1995) 7. ITU-T Recommendation T.122: Multipoint Communication Service for Audiographics and Audiovisual Conferencing Service Definition, ITU-T SG8 Interim Meeting, Martlesham (October 18, 1994) (issued March 14, 1995) 8. Selfridge, O.G.: Pandemonium: a paradigm for learning. In: Proceedings of the Symposium on Mechanisation of Thought Processes, pp. 511–529. Her Majesty's Stationery Office, London (1959) 9. Minsky, M.: The society theory of thinking. In: Artificial Intelligence: an MIT perspective, pp. 423–450. MIT Press (1979) 10. Erman, L.D., Lesser, V.E.: A multi-level organization for problem-solving using many, diverse, cooperating sources of knowledge. In: Proceedings of the 1975 International Joint Conference on Artificial Intelligence, pp. 483–490 (1975) 11. Hewitt, C.E.: Viewing control structures as patterns of passing messages. Artificial Intelligence 8(3), 323–364 (1977) 12. Bond, A.H., Gasser, L. (eds.): Readings in distributed artificial intelligence. Morgan Kaufmann, San Francisco (1988) 13. Huhns, M.N. (ed.): Distributed artificial intelligence. Pitman (1987) 14.
Weiß, G.: Learning to Coordinate Actions in Multi-Agent Systems, pp. 481–486. Morgan Kaufmann Publishers, San Francisco (1998) 15. Agnew, P.W., Kellerman, A.S.: Distributed Multimedia. ACM Press, New York (1996) 16. Yau, S., Karim, F., Wang, Y., Wang, B., Gupta, S.: Reconfigurable Context-Sensitive Middleware for Pervasive Computing. IEEE Pervasive Computing 1(3), 33–40 (2002)
An Error Sharing Agent Running on Situation-Aware Ubiquitous Computing

SoonGohn Kim 1 and Eung Nam Ko 2

1 Division of Computer and Game Science, Joongbu University, 101 Daehakro, Chubu-Meon, GumsanGun, Chungnam, 312-702, Korea, [email protected]
2 Division of Information & Communication, Baekseok University, 115, Anseo-Dong, Cheonan, Chungnam, 330-704, Korea, [email protected]
Abstract. This paper describes ESA (an Error Sharing Agent running on a Situation-Aware Ubiquitous Computing Environment). It is a multi-agent-based fault-tolerance system running on situation-aware ubiquitous computing, with functions for automatic error detection, classification, and recovery. It consists of EDA, ECA, and ERA. EDA provides error detection: it becomes aware of an error occurrence, which is then transmitted quickly through an error sharing method. ECA provides error classification, and ERA provides error recovery, all running on situation-aware ubiquitous computing. In this paper, we discuss a method for increasing reliability through an error sharing system running on situation-aware ubiquitous computing. Keywords: Error Sharing Agent, Situation-Aware Ubiquitous Computing Environment, fault-tolerance system.
1 Introduction
With the rapid development of multimedia and network technology, more and more digital media is generated [1, 2, 3]. Although situation-aware middleware provides powerful analysis of dynamically changing situations in the ubiquitous computing environment by synthesizing multiple contexts and users' actions, which need to be analyzed over a period of time, access control in using multimedia shared objects causes a problem of seams in the ubiquitous computing environment. Thus, there is a great need for an error sharing agent in situation-aware middleware to provide dependable services in ubiquitous computing. This paper proposes a new model of an error sharing agent running on situation-aware ubiquitous computing. Section 2 describes situation-aware middleware. Section 3 presents our proposed error sharing agent. Section 4 describes simulation results of our proposed algorithm. Section 5 presents conclusions.
T.-h. Kim et al. (Eds.): ACN 2011, CCIS 199, pp. 203–208, 2011. © Springer-Verlag Berlin Heidelberg 2011
2 Background: Situation-Aware Middleware
Ubiquitous applications require various contexts to communicate adaptively with each other across multiple network environments, such as mobile ad hoc networks, the Internet, and mobile phone networks. However, existing context-aware techniques often become inadequate in applications where combinations of multiple contexts and users' actions need to be analyzed over a period of time. Situation-awareness in application software is considered a desirable property to overcome this limitation. In addition to being context-sensitive, situation-aware applications can respond to both current and historical relationships of specific contexts and device-actions. An example of a situation-aware application is a multimedia distance education system. The development of multimedia computers and communication techniques has made it possible for a mind to be transmitted from a teacher to a student in a distance environment. However, situation-aware middleware did not include error sharing agent support in its architecture. In this paper, we propose a new error sharing agent in situation-aware middleware.
3 Error Sharing Agent: Our Proposed Approach
3.1 The Environment for Error Sharing Agent
A conceptual architecture of situation-aware middleware based on Reconfigurable Context-Sensitive Middleware (RCSM) is proposed in [2]. All of RCSM's components are layered inside a device. The Object Request Broker of RCSM (R-ORB) assumes the availability of reliable transport protocols; one R-ORB per device is sufficient. The number of ADaptive object Containers (ADCs) depends on the number of context-sensitive objects in the device. ADCs periodically collect the necessary "raw context data" through the R-ORB, which in turn collects the data from sensors and the operating system. Initially, each ADC registers with the R-ORB to express its needs for contexts and to publish the corresponding context-sensitive interface. RCSM is called reconfigurable because it allows addition or deletion of individual ADCs during runtime (to manage new or existing context-sensitive application objects) without affecting other runtime operations inside RCSM. The other services comprise many agents: AMA (Application Management Agent), MCA (Media Control Agent), ESA (Error Sharing Agent), SA-UIA (Situation-Aware User Interface Agent), SA-SMA (Situation-Aware Session Management Agent), and SA-ACCA (Situation-Aware Access and Concurrency Control Agent), as shown in Figure 1. AMA consists of various subclass modules, including creation/deletion of shared video windows and creation/deletion of shared windows. MCA supports convenient applications using situation-aware ubiquitous computing; the supplied services are the creation and deletion of service objects for media use and media sharing between remote users. This agent limits the services by hardware constraints. ESA is an agent that shares errors in a situation-aware ubiquitous environment. SA-UIA is a user interface agent that adapts user interfaces to situations. SA-SMA is an agent which connects SA-UIA and ESA and performs situation-aware management of the whole information. SA-ACCA
controls who can talk and who can change the information for access. Our proposed model aims at supporting a new error sharing agent in situation-aware middleware.
[Figure: layered architecture: Situation-Aware Application Objects; RCSM Ephemeral Group Communication Service and the other services (SA-SMA, ESA, SA-ACCA, SA-UIA, AMA, MCA); Adaptive Object Containers (ADCs), providing awareness of situation; RCSM Object Request Broker (R-ORB), providing transparency over ad hoc communication; Transport Layer Protocols for Ad Hoc Networks; sensors; OS.]
Fig. 1. Other Services in Situation-Aware Ubiquitous Computing
3.2 The Algorithm for ESA
As shown in Fig. 2, error and application program sharing windows communicate by passing messages between agents in situation-aware middleware. In the middle of this process, there are a couple of ways of intercepting messages by the error and application
sharing agent. The roles of ESA (the error and application program sharing agent in situation-aware middleware) are divided into two main parts: abstraction and sharing of view generation. Error and application program sharing must behave differently according to the number of replicated application programs and the event command. The proposed structure is a distributed architecture, but for error and application program sharing a centralized architecture is used. ESA informs SM of the results of detected errors. ESA also activates failed application software automatically and informs SM of the result again. That is, ESA becomes aware of an error occurrence after it receives a requirement from UIA and transmits it.
[Figure: the ESA hook table intercepts application views/events; an event distributor forwards them over the network through filter functions to replicated virtual applications, each with its own filter function.]
Fig. 2. Error Sharing Agents based on Situation-Aware Ubiquitous Computing
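The centralized sharing path of Fig. 2, where an event distributor forwards intercepted views/events through a filter function to every replicated virtual application, can be sketched as follows. All names are illustrative, not the authors' implementation:

```python
class VirtualApp:
    """A replicated application instance that replays shared events."""
    def __init__(self, name):
        self.name, self.log = name, []

    def apply(self, event):
        self.log.append(event)

def filter_event(event):
    """Filter function: pass only shareable events, dropping local ones."""
    return event.get("shared", False)

class EventDistributor:
    """Centralized component: receives intercepted events (hook table)
    and distributes them to all replicas over the network."""
    def __init__(self):
        self.replicas = []

    def attach(self, app):
        self.replicas.append(app)

    def dispatch(self, event):
        if filter_event(event):
            for app in self.replicas:
                app.apply(event)

dist = EventDistributor()
a, b = VirtualApp("site-A"), VirtualApp("site-B")
dist.attach(a)
dist.attach(b)
dist.dispatch({"type": "draw", "shared": True})
dist.dispatch({"type": "cursor-move", "shared": False})  # filtered out
print(a.log, b.log)
```

Centralizing the dispatch keeps every replica's event log identical, which is the point of using a centralized architecture for sharing within an otherwise distributed structure.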
4 Simulation Results
Table 1 shows the characteristic functions of each system for multimedia distance education. To evaluate the performance of the proposed system, the proposed model was compared against the conventional model using the DEVS formalism. In DEVS, a system has a time base, inputs, states, and outputs based on the current states and inputs. DEVS (Discrete Event System Specification) is a formalism developed by Bernard P. Zeigler. The structure of an atomic model is as follows [4-9].
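A minimal, illustrative encoding of such an atomic model (state set, external and internal transition functions, output function, and time advance) might look like this; it is a sketch, not the authors' implementation:

```python
class AtomicModel:
    """Minimal DEVS-style atomic model: state, external/internal
    transition functions, output function, and time advance ta."""
    def __init__(self, initial, delta_ext, delta_int, output_fn, ta):
        self.state = initial
        self.delta_ext, self.delta_int = delta_ext, delta_int
        self.output_fn, self.ta = output_fn, ta

    def inject(self, event):
        # External input event triggers the external transition.
        self.state = self.delta_ext(self.state, event)

    def step(self):
        # Internal transition: emit output, then change state.
        out = self.output_fn(self.state)
        self.state = self.delta_int(self.state)
        return out

# Toy error detector: goes "busy" on a polled input, then reports.
ed = AtomicModel(
    initial="idle",
    delta_ext=lambda s, e: "busy",
    delta_int=lambda s: "idle",
    output_fn=lambda s: "error-report" if s == "busy" else None,
    ta=lambda s: 1.0 if s == "busy" else float("inf"),
)
ed.inject("poll")
out = ed.step()
print(out, ed.state)
```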
Table 1. Comparison of multimedia collaboration software architectures in a situation-aware environment

Function               ShaStra            MERMAID         MMconf                     CECED
OS                     UNIX               UNIX            UNIX                       UNIX
Development location   Purdue Univ., USA  NEC, Japan      Cambridge, USA             SRI International
Development year       1994               1990            1990                       1993
Structure              Server/client      Server/client   Centralized or replicated  Replicated
Protocol               TCP/IP             TCP/IP          TCP/IP                     TCP/IP multicast
Error sharing          No                 No              No                         No

5 Conclusions
Interest in situation-aware ubiquitous computing has increased lately. An example of a situation-aware application is a multimedia education system. The development of multimedia computers and communication techniques has made it possible for a mind to be transmitted from a teacher to a student in a distance environment. This paper proposed a new model of an error sharing agent running on situation-aware ubiquitous computing. DOORAE is a framework supporting the development of multimedia applications for distributed multimedia environments running on situation-aware ubiquitous computing. ESA is a system capable of sharing software errors for distributed multimedia environments running on situation-aware ubiquitous computing. The purpose of this research is to increase reliability by sharing DOORAE session errors automatically. We have given a detailed discussion of ESA, a suite of error sharing mechanisms that ensures continuous operation of applications. In this paper, we have discussed a method for enhancing reliability through quick error sharing for distributed multimedia environments running on situation-aware ubiquitous computing.
References 1. Zhang, T., Kuo, C.-C.J.: Hierarchical Classification of Audio Data for Archiving and Retrieval. In: ICASSP 1999, Phoenix, vol. 6, pp. 3001–3004 (March 1999) 2. Wold, E., Blum, T., Keislar, D., Wheaton, J.: Content-Based Classification, Search, and Retrieval of Audio. IEEE Multimedia 3(3), 27–36 (Fall 1996)
3. Zhang, H., Kankanhalli, A., Smoliar, S.: Automatic Partitioning of Full-motion Video. In: A Guided Tour of Multimedia Systems and Applications. IEEE Computer Society Press, Los Alamitos (1995) 4. Zeigler, B.P., Cho, T.H., Rozenblit, J.W.: A Knowledge-Based Simulation Environment for Hierarchical Flexible Manufacturing. IEEE Transactions on Systems, Man, and Cybernetics, Part A: Systems and Humans 26(1), 81–90 (1996) 5. Cho, T.H., Zeigler, B.P.: Simulation of Intelligent Hierarchical Flexible Manufacturing: Batch Job Routing in Operation Overlapping. IEEE Transactions on Systems, Man, and Cybernetics, Part A: Systems and Humans 27(1), 116–126 (1997) 6. Zeigler, B.P.: Object-Oriented Simulation with Hierarchical, Modular Models. Academic Press, London (1990) 7. Zeigler, B.P.: Multifacetted Modeling and Discrete Event Simulation. Academic Press, Orlando (1984) 8. Zeigler, B.P.: Theory of Modeling and Simulation. John Wiley, New York (1976) (reissued by Krieger, Malabar, FL, USA, 1985) 9. Concepcion, A.I., Zeigler, B.P.: The DEVS formalism: Hierarchical model development. IEEE Trans. Software Eng. 14(2), 228–241 (1988)
Integrated Retrieval System for Rehabilitation Medical Equipment in Distributed DB Environments

BokHee Jung, ChangKeun Lee, and SoonGohn Kim

Department of Information Science, Joongbu University, 101 Daehakro, Chubu-meon, Gumsangun, Chungnam, 312-702, Korea
{jangel9977,leec90}@nate.com, [email protected]
Abstract. In today's society, the use of rehabilitation medical equipment is rapidly increasing due to the aging population and unexpected incidents. Accordingly, the demand for efficient sharing and use of rehabilitation medical equipment is growing, but its systematic management is not properly handled, so it is extremely difficult for users to find the medical clinics where particular rehabilitation medical equipment is available. Currently, each hospital is equipped with a medical information system and a rehabilitation medical equipment system, but not with one integrated system. In this paper, we establish one integrated database combining the rehabilitation medical equipment each hospital has, and design and implement an integrated search system for rehabilitation medical equipment so that users can easily search the results on the web. Keywords: Integrated Search System, Integrated Database, Distribution, Rehabilitation Medical Equipment, RME.
1 Introduction
Recently, the amount of data that can be used in several fields is rapidly increasing. Such data were established by disparate organizations, so they are stored on mutually different distributed platforms; moreover, they are heterogeneous in several aspects, such as the structure of the data, the database used, and the operating environment [1]. The integrated database is a database technology aiming at providing the transparency with which a user can use distributed homogeneous or heterogeneous databases while sharing data consistently. This technology can effectively manage the growing complexity of information processing, and it is a very good solution to the problems of redundant data operated in a distributed way across a multitude of systems and of their incompatibility [2]. As integration technologies that make it easier to establish a new service by integrating the countless information sources operated in diverse environments, data warehouse construction, multi-database systems, and mediator-based integration have been studied [3].
T.-h. Kim et al. (Eds.): ACN 2011, CCIS 199, pp. 209–214, 2011. © Springer-Verlag Berlin Heidelberg 2011
B. Jung, C. Lee, and S. Kim
The integrated database can provide the transparency with which users can use mutually distributed homogeneous or heterogeneous databases without any inconvenience [2]. The ultimate goal of such an integrated system is to provide a usable environment in which users can treat computers located at different places as one integrated computing facility, regardless of system-level location [4]. At present, every hospital is equipped with a medical information system and a rehabilitation medical equipment system, but these are not realized as a single integrated system; thus, equipment is not used effectively because information fails to reach users. Accordingly, by suggesting a method of effectively linking the heterogeneous rehabilitation medical equipment databases that each hospital possesses, it becomes possible to let users know what kind of rehabilitation medical equipment is available at which hospital. In this paper, we let users easily search on the web for the rehabilitation medical equipment that meets their demands by establishing a system that interlocks the database possessed by each hospital with the rehabilitation medical equipment database, on the basis of basic information on rehabilitation medical equipment.
2 Design and Implementation of Integrated Retrieval System for Rehabilitation Medical Equipment
In this paper, we analyzed RME (Rehabilitation Medical Equipment) on the basis of the rehabilitation departments of domestic hospitals and established the integrated RME database on the basis of each hospital's database. Fig. 1 shows the establishment of the RME database by collecting the data from each hospital. The design process is as follows: when a user searches the integrated RME, the web server transmits the query to the RME server, and the RME server helps users retrieve the query results on the web by transmitting the results back to the web server.
Fig. 1. Overall System Structure
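The query path just described, where the web server forwards the user's query to the RME server and relays the results back, can be sketched as follows. All class names, field names in the sample records, and sample values are hypothetical:

```python
class RMEServer:
    """Holds the integrated RME records and answers field queries."""
    def __init__(self, records):
        self.records = records      # list of dicts, one per equipment record

    def query(self, search_type, word):
        # Match the search word against the selected field, case-insensitively.
        return [r for r in self.records
                if word.lower() in str(r.get(search_type, "")).lower()]

class WebServer:
    """Transmits the user's query to the RME server and returns results."""
    def __init__(self, rme):
        self.rme = rme

    def search(self, search_type, word):
        return self.rme.query(search_type, word)

rme = RMEServer([
    {"nameno": "treadmill", "havehospital": "A-hospital", "havearea": "Chungnam"},
    {"nameno": "wheelchair", "havehospital": "B-hospital", "havearea": "Daejeon"},
])
web = WebServer(rme)
hits = web.search("havearea", "chungnam")
print(hits)
```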
As for the database of each hospital, we used a specialized RME reporter so that each hospital database can be linked to the RME server. Through the intermediation of this specialized RME reporter, the data possessed by each hospital are renewed periodically, and subsequently the data in the RME server are also renewed. In this paper, the loader saves only the data whose field names coincide with the field names in the RME server into the RME server's tables. This is because one integrated search system should be realized over the fields the databases have in common, as each hospital's database has its own characteristics. Accordingly, when a user selects the desired search type and inputs a search word in the box, the user can identify the information on the RME that matches the comparison of the search type with the search word.
2.1 RME Server
As basic data, we created databases and tables for the selected domestic hospitals. The database formats of the hospitals' RME systems all differ, so we drew up the fields centering on the basic data of rehabilitation equipment. To integrate each hospital's RME data, we organized the RME server around product name, model name, characteristic, use, size, manufacturing country, hospitals in possession of the RME, district of possession, number of units possessed, and number of units usable. Fig. 2 shows the system structure of the RME server. Only the fields identical to those of the RME server are interlocked, and the structure is refreshed periodically. Table 1 shows the proposed record set of the RME server.
Fig. 2. System Structure of RME Server
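As an illustration, the record set of Table 1 could be created with DDL along these lines. The paper's system uses MySQL; this sketch uses Python's built-in sqlite3 module as a stand-in, the field names follow Table 1, and the sample row is invented:

```python
import sqlite3

# In-memory stand-in for the MySQL database used in the paper.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE rme (
        nameno       TEXT NOT NULL,  -- product name (PK in Table 1)
        modelno      TEXT NOT NULL,  -- model name
        keynote      TEXT NOT NULL,  -- characteristic
        useing       TEXT NOT NULL,  -- use
        ssize        TEXT,           -- size
        countryname  TEXT,           -- manufacturing country
        havehospital TEXT,           -- hospital in possession
        havearea     TEXT,           -- district in possession
        havecount    INTEGER,        -- number of units possessed
        usecount     INTEGER,        -- number of units usable
        PRIMARY KEY (nameno)
    )
""")
conn.execute(
    "INSERT INTO rme VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?)",
    ("treadmill-01", "TM-100", "gait training", "rehab walking",
     "2x1m", "KR", "A Hospital", "Daejeon", 3, 2),
)
row = conn.execute(
    "SELECT havehospital, usecount FROM rme WHERE nameno = ?",
    ("treadmill-01",),
).fetchone()
print(row)  # ('A Hospital', 2)
```

The type choices (sqlite3 TEXT/INTEGER instead of char/varchar) follow sqlite3's type affinity; a MySQL deployment would use the types listed in Table 1 directly.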
B. Jung, C. Lee, and S. Kim

Table 1. RME Table: Proposed Record Set

Field name    Data type    Note
nameno        char(50)     Not null, PK
modelno       char(50)     Not null
keynote       text         Not null
useing        text         Not null
ssize         varchar(20)
countryname   varchar(10)  Not null
havehospital  varchar(10)  Not null
havearea      varchar(10)  Not null
havecount     int
usecount      int

2.2 RME Reporter
The RME reporter links each hospital's RME database to the RME server. In other words, it monitors each hospital's database periodically and transmits the results to the RME server, so the data in the RME server are refreshed automatically. The kinds of databases the hospitals use all differ, so we linked them through the RME reporter, a specialized converter. Fig. 3 shows the system structure of the RME reporter.
Fig. 3. Structure of RME Reporter
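The reporter's filtering step (forwarding only the fields that also exist in the RME server schema, and discarding each hospital's site-specific columns) might be sketched as follows. The hospital record layout and its extra column are invented for illustration:

```python
# Fields defined in the RME server schema (see Table 1).
RME_FIELDS = {"nameno", "modelno", "keynote", "useing", "ssize",
              "countryname", "havehospital", "havearea",
              "havecount", "usecount"}

def report(hospital_rows):
    """Keep only the columns that match the RME server schema,
    dropping each hospital's site-specific extras."""
    return [{key: value for key, value in row.items() if key in RME_FIELDS}
            for row in hospital_rows]

# A hospital record with one extra, site-specific column.
rows = [{"nameno": "TENS-7", "modelno": "T-7", "havecount": 2,
         "internal_asset_tag": "H-0042"}]
print(report(rows))
# [{'nameno': 'TENS-7', 'modelno': 'T-7', 'havecount': 2}]
```

A real reporter would additionally run this on a timer and push the filtered rows to the RME server, as the paper describes.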
2.3 User Interface
The RME server holds the databases of the hospitals in an integrated form. When a user selects an RME search type and inputs a search word related to that type in a web browser, the web server transmits the search type and query word to the RME server; when data matching the query word are found, the server returns the results to the user on a separate result screen. Thus, even users who know nothing about the composition of the RME server can obtain the RME information they want simply by searching with a web browser.
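The search flow described above (a search type plus a search word matched against the integrated data) can be sketched like this; the sample records and field values are assumptions, not data from the paper:

```python
# A toy stand-in for the integrated RME data on the server.
RME_DATA = [
    {"nameno": "treadmill-01", "useing": "gait training",
     "havehospital": "A Hospital", "havearea": "Daejeon"},
    {"nameno": "TENS-7", "useing": "pain relief",
     "havehospital": "B Hospital", "havearea": "Seoul"},
]

def search(search_type, word):
    """Return the rows whose chosen field contains the search word,
    mimicking the browser search described in the paper."""
    return [row for row in RME_DATA if word in row.get(search_type, "")]

results = search("havearea", "Seoul")
print([r["nameno"] for r in results])  # ['TENS-7']
```

The result screen would then render the product name, use, possessing hospital, and district fields of each matching row.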
Fig. 4. Structure of User Interface
Fig. 5. RME Search and Results
Fig. 4 shows the structure of the user interface. The result screen shown to the user in the web browser displays the product name, use, the hospital in possession of the RME, and the district of possession.
3 Experimental Environment and Results
In our experiments, the integrated RME search system was run on a Windows-based operating system, using Apache as the web server and MySQL, a commonly used database management system, as the information repository.
In the experiments, when a user searches with the web browser, the result appears in another browser window. Fig. 5 shows the result screen displayed when data match the search word: the user selects one of the search types, types a word relevant to that type in the box below, and runs the search, and the search type is then compared against the search word.
4 Conclusion
In this paper, we designed and implemented an integrated RME database on the basis of each hospital's RME database. So far, an RME system has been established in each hospital, but there has been no system uniting them. For this reason, users have found it extremely difficult to learn which hospital possesses the RME they want. The equipment required varies greatly with the purpose of rehabilitation, and it is almost impossible for one hospital to possess every sort of rehabilitation equipment, so users have had difficulty discovering which hospital has the equipment they need. If this information were shared efficiently, users could find out on the web which hospital to visit, without having to visit hospitals in person. In addition, a medical team could make appropriate use of the data by analyzing the rehabilitation resources of other hospitals for their patients, and sharing RME information among hospitals would become smooth. In further study, we plan to research how to provide more accurate information to users by designing and implementing the distributed information on the basis of real rather than virtual data; we also expect more active and efficient information sharing if the integrated system can be applied to a wider range of domains, not just RME.
References

1. Lee, S.: Design and Implementation of a Mediator for Integrating Distributed Heterogeneous Databases. Chungnam University (1999)
2. Kim, G.: A Schema Integration Method for Integrated Database. Chungbuk University (1998)
3. Lee, M., Kim, M., Lee, G.: DataBlender: Design of an Information Integration System Based on XML Schema. Korea Contents Association 2(2) (2002)
4. Kim, M., et al.: Software Technology for Distributed Processing Systems. Institute of Electronics Engineers of Korea, Electronic Calculations Study Group 8(1), 61–73
Effective Method Tailoring in Construction of Medical Information System

WonYoung Choi and SoonGohn Kim

Dept. of Information Science, Joongbu University, Korea
[email protected], [email protected]
Abstract. Construction of domestic medical information systems shows severe side effects in quality and productivity as the scope of construction expands and the trend toward short-term delivery deepens. This paper presents appropriate tailoring rules for constructing medical information systems, offering a practical guideline for sites where indiscriminate tailoring is performed.

Keywords: method, tailoring, contingency, medical information system, practical guideline.
1 Introduction
The main characteristics of recent medical information system construction are the expanding scope of the systems to be built and the trend toward short delivery periods. Because of this, development institutions practically give up existing software construction methods in order to develop larger systems within a short period. This, however, has not improved software development productivity and has produced serious side effects in quality. Therefore, to improve quality and productivity simultaneously, existing software construction methods need to be tailored to the medical information system construction environment. There is a long-standing acknowledgment that software methods need to be tailored for use, the essence of which is captured well by De Marco: "I find myself more and more exasperated with the great inflexible sets of rules that many companies pour into concrete and sanctify as methodologies. Use the prevailing methodology only as a starting point for tailoring" [4]. Sommerville and Ransom likewise asserted that "it is a truism that any method has to be adapted for the particular circumstances of use" [5]. Despite these contentions, however, little research has been carried out into method tailoring to date [1].
2 Related Works
As with many themes and issues in the software development literature, tailoring of software engineering methods has been referred to by many different terms, including context-based method use, method adaptation, method assembly, method configuration, and scenario use. However, these can be classified into one of two overarching approaches, namely contingency-based method selection and method engineering. Contingency-based method selection is based on the premise that, rather than accepting one software development method as universally applicable, the team should choose a method from a broad portfolio of development methods to suit each different project context. Method engineering, on the other hand, is a metamethod process in which, instead of selecting a method from an available library, a new one is constructed or "engineered" from the ground up using existing "method fragments" [2]. In this paper, we review the methods of domestic medical information companies to present a standard tailoring result that fits the domestic medical information system construction environment. The result can then be tailored further according to the input of the medical institution, the process management capability of the development institution, or the need for an information system audit. While existing research [3] has focused on tailoring a specific method, this paper differs in tailoring a method for a specific type of business.

T.-h. Kim et al. (Eds.): ACN 2011, CCIS 199, pp. 215–222, 2011. © Springer-Verlag Berlin Heidelberg 2011
3 Research Method
This research was undertaken in six phases, as presented in Fig. 1. First, a qualitative analysis was performed on cases applied at actual sites and on the method data of domestic medical information companies. On that basis, a standard tailoring result judged appropriate to the medical information system construction situation was presented, along with a tailoring algorithm for applying it in diverse situations. A simple prototype was implemented for applying the tailoring method, and several medical information system field experts were interviewed to evaluate the structured tailoring result and tailoring method. Lastly, the tailoring algorithm was applied and evaluated at an actual site.
Fig. 1. Research method
3.1 Data Collection
To analyze the software construction methods of domestic medical field SI (System Integration) companies, we collected the method data owned by the companies: data were gathered from three of the four major SI companies in the medical field, excluding one where data collection was difficult. In addition, to analyze how each company applies its method at actual sites, the most recently applied cases were collected with the cooperation of medical institutions.

3.2 Data Analysis
Looking at the software construction method of each company: Company A's method is structured around a repeated development approach and simplified to suit small-scale projects with short delivery periods. Company B's method is structured as a repetitive, gradually verified development approach based on the processes and products of information engineering, object-oriented, and CBD methods; it also defines rules on core products (53 essential products) and submitted products (the minimal products for maintenance and repair), and specifies the precedence relationships between products. Company C's methodology is structured as a gradual development approach based on the MARMI methodology (CBD-methodology-oriented) and has features similar to RUP.

3.3 Definition of Standard Procedure
As a result of analyzing the methods of the three SI companies in the domestic medical field, a standard procedure for tailoring was obtained. In the development life cycle, the existing analysis phase has been separated into a requirement definition phase and an analysis phase. Considering that recent software engineering tends to value requirement definition, this is considered appropriate. In addition, the design and development phases have been combined into integrated detailed-design-and-development phases, which in most cases are iterated three times or more. This is positive because the product goes through a continuous review process with the users, whereas the traditional prototype model goes through one review process in the initial design phase. Looking at the key activities of each development phase, even the companies nominally following the object-oriented method could be seen to follow the frame of their existing methods, which suggests that fully realizing the object-oriented method is not easy in business development such as medical information systems. A change in the recognition of the importance of testing could also be read. In the past, it was customary to run tests two or three times (without distinguishing integration test from rehearsal) during one or two months of system operation, but recently sufficient periods are set aside for integration testing and rehearsal. In addition, the quality assurance activity, formerly a formality, now goes through internal and external review at each major milestone. This is attributable to the growth in size of medical information systems, their heightened importance within medical institutions, and the corresponding change in users' recognition.
4 Research Result

4.1 Formulating the Tailoring Rule
Under the standard procedure of the method, tailoring rules were formulated to enable appropriate tailoring for the environment of each project.

Tailoring rule 1. Construction of medical information systems is still performed at the SI development level, but recently many companies were found to approach it as package customizing. This shows that although the scale has grown, delivery times have shortened, and companies reflect self-help measures. For reference, in neighboring Japan the medical information market has already converted to packages with a customizing concept. Therefore, as the first tailoring rule, the tailoring result differs according to whether package customizing is used.

Tailoring rule 2. In Korea, information system audits were introduced in 1987, but mostly for public institutions. Since mandatory information system audits came into force in 2007, medical institutions have begun to implement them, with public medical institutions leading the way. As an information system audit is generally evaluated through interviews or documents, several additional products are needed to respond to it. Therefore, as the second tailoring rule, the tailoring result differs according to the need to respond to an information system audit.

Tailoring rule 3. Medical institutions are organized around broadly similar work, but the level of user demand is known to differ greatly with the size and workforce of the institution.
In other words, as the size and workforce of a medical institution grow, user requirements become tougher; the responsive plan is to carry out user review and internal quality review before proceeding further with the development phase. Otherwise, the volume of modification and supplementation at the last phase of development increases, which may cause the delivery date to be missed and the users' trust to be lost; so the review process should be passed before proceeding with the development phase whenever possible. Therefore, as the third tailoring rule, the tailoring result differs according to the need to strengthen review at each phase.

Tailoring rules 4 and 6. Domestically, many companies adopted methods based on structured and information engineering approaches. When the object-oriented method was introduced several years ago, most revised their methods to be object-oriented, but due to the difficulties in implementation,
many have returned to the structured and information engineering methods. However, some development companies are still based on the object-oriented method, so tailoring must distinguish the two. Therefore, as the fourth and sixth tailoring rules, the tailoring result differs according to whether object-oriented analysis and object-oriented design are applied.

Tailoring rule 5. Most domestic companies developing medical information systems repeatedly iterate the detailed design and development phases. In this case the screens and various reports under development change continuously, so if the corresponding products are not made during this phase, they are frequently made after development is completed. Depending on the project, however, the work is sometimes undertaken according to the procedural method, in which case the products are made at the applicable phase and then revised and supplemented. Therefore, as the fifth tailoring rule, the tailoring result differs according to whether the procedural method is applied (strictly speaking, the tailoring result is the same but the time to submit the products differs).

Considering the above, the formulated tailoring rules are shown in Table 1.

Table 1. Tailoring rule

Rule  Condition                                       Result  Output
No.1  Package customizing?                            Yes     Gap analysis
                                                      No      -
No.2  Requiring responsive information system audit?  Yes     Requirement trace table
                                                      No      -
No.3  Strengthening of review for each phase?         Yes     Result of review and corrective action
                                                      No      -
No.4  Object-oriented analysis?                       Yes     Use case diagram, Use case specification, Class diagram, Sequence diagram
                                                      No      Logical ERD, Process flow chart
No.5  Procedural method?                              Yes     Screen definition, Report definition
                                                      No      Prepare document after replacing it with UI
No.6  Object-oriented design?                         Yes     Use case diagram, Use case specification, Class diagram, Sequence diagram (detailed)
                                                      No      DB trigger specification, Procedure specification

4.2 On-Site Application (S Hospital, Korea)
Domestic development institutions relatively lack the concept of software quality. With the exception of major development institutions, there were many cases of not having an organization and method for
quality control. In addition, product preparation was very passive. The development institutions generally adopted an iterative development method, but when modifying a specification there were many cases of reflecting the change directly in the source code rather than in the product. This brings substantial difficulties in maintenance and repair and, as a result, shortens the lifetime of the constructed system. Table 2 shows the result of applying the method tailoring rules proposed in this paper at an actual site.

Table 2. Tailoring rule application

Rule  Condition                                       Result  Output
No.1  Package customizing?                            No      -
No.2  Requiring responsive information system audit?  No      -
No.3  Strengthening of review for each phase?         Yes     Result of review and corrective action
No.4  Object-oriented analysis?                       No      Logical ERD, Process flow chart
No.5  Procedural method?                              Yes     Screen definition, Report definition
No.6  Object-oriented design?                         No      DB trigger specification, Procedure specification
Analysis of difference with tailoring rule No. 1. The system at the applicable site was constructed after analyzing user requirements under contract, so rule No. 1 did not apply and there was no need to prepare a gap analysis. In practice, however, a gap analysis was prepared by the current users on a form, and the result was reflected in the system construction. This shows that medical information system developers reuse a substantial number of the systems they own and, realistically, develop in a package-customizing style regardless of contract conditions. In similar cases, interviews with officials confirmed that the systems of K1 University Hospital, K Cancer Center, and K2 University Hospital were constructed in the package-customizing style.

Analysis of difference with tailoring rule No. 2. The applicable site was a private university hospital, not an institution subject to information system audit (a public institution with a development project of 500 million won or more is obliged to undergo one), so rule No. 2 did not apply and there was no need to prepare a requirement trace table. In practice, however, one was prepared as an unofficial document. Since full requirement tracing was not easy, it was reduced to tracing only the relationships among requirements, screens, and unit tests. This appears to be the strategy of the applicable company, with the later inspection process in mind: when inspecting after system construction, what matters most is whether the current users' requirements are reflected in the applicable programs, and since the requirements are realized on screens (programs), it is relatively easy for the inspection to confirm the users' test results for the applicable screens.

Analysis of difference with tailoring rule No. 3. The applicable site recognized the importance and need of review at each software construction phase, so rule No. 3 was applied and review and corrective-action results were prepared. However, the review at the requirement definition phase was handled by updating the data, the review at the analysis phase was replaced with preparation of decision-making requests, and the detailed design and development phases were replaced with review by the current users. This is attributed to the absence of a quality-assurance organization and method in the development institution. The development institution's approach helps prevent omission of user requirements, but it seems limited as an objective quality review.

Analysis of difference with tailoring rule No. 4. The applicable site did not adopt the object-oriented analysis method; it possessed its own method based on information engineering. Therefore, from the data point of view it placed relative importance on preparing the ERD (Entity Relationship Diagram), and from the process point of view on the process flow chart. For the ERD, the principle is to prepare both logical and physical ERDs. Since the applicable company had its own medical information system, it produced the physical ERD by reverse engineering after revising the tables at the end of the analysis phase (facilitated by CASE (Computer Aided Software Engineering) tools).

Analysis of difference with tailoring rule No. 5. The applicable site adopted the procedural method, so rule No. 5 was applied and the screen definition and report definition were to be prepared.
However, the report definition was replaced with the screen definition, which we consider not much of a difference, and the screen definition was prepared to include the functional details omitted at the analysis phase.

Analysis of difference with tailoring rule No. 6. The applicable site did not adopt the object-oriented design method but possessed its own method based on information engineering, so rule No. 6 was applied and the DB trigger specification and procedure/function specification were to be prepared. In practice their preparation was omitted, which we consider a common characteristic of companies adopting an iterative development method in the detailed design and development phases: in the process of repeating user review and development, continuous revision and supplementation occur, so documenting at that phase is regarded as work overhead, and in many cases the documents are prepared after program development and review are completed. One further feature is the preparation of batch program specifications, which the rule does not require. These were applied to major batch functions such as outpatient and hospitalization claim aggregation, wage calculation, fixed assets, and depreciation disposition, and can be regarded as replaceable by the screen definition or procedure specification.
Analysis of other differences. The modification request, unit test result, and data conversion result are defined in the standard application category, but they had yet to be adopted by the applicable institution, which we consider related to the process management capability of the development institution. Conversely, the integrated test result and rehearsal result are usually arranged by the ordering institution and thus not defined as a standard application category, but the applicable institution adopted them; this we consider related to the management capability of the ordering institution.
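The six tailoring rules in Table 1 amount to a decision table: answer each condition Yes or No and collect the required outputs. A minimal sketch, with the condition keys invented but the output strings taken from Table 1 ("-" entries mean no required output):

```python
# Decision table transcribed from Table 1:
# condition -> (required outputs if Yes, required outputs if No).
TAILORING_RULES = {
    "package_customizing": (
        ["Gap analysis"], []),
    "information_system_audit": (
        ["Requirement trace table"], []),
    "review_each_phase": (
        ["Result of review and corrective action"], []),
    "object_oriented_analysis": (
        ["Use case diagram", "Use case specification",
         "Class diagram", "Sequence diagram"],
        ["Logical ERD", "Process flow chart"]),
    "procedural_method": (
        ["Screen definition", "Report definition"],
        ["Prepare document after replacing it with UI"]),
    "object_oriented_design": (
        ["Use case diagram", "Use case specification",
         "Class diagram", "Sequence diagram (detailed)"],
        ["DB trigger specification", "Procedure specification"]),
}

def tailor(answers):
    """Collect the products required for a project, given a Yes/No
    answer for each of the six tailoring-rule conditions."""
    outputs = []
    for rule, (yes_outputs, no_outputs) in TAILORING_RULES.items():
        outputs.extend(yes_outputs if answers[rule] else no_outputs)
    return outputs

# The S Hospital case of Table 2: only rules 3 and 5 answered Yes.
site = {
    "package_customizing": False,
    "information_system_audit": False,
    "review_each_phase": True,
    "object_oriented_analysis": False,
    "procedural_method": True,
    "object_oriented_design": False,
}
for product in tailor(site):
    print(product)
```

Running it on the S Hospital answers reproduces the output column of Table 2.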
5 Conclusion
In constructing domestic medical information systems, this work attempted to contribute to the improvement of quality and productivity by formulating tailoring rules and applying them at a site. In overseas cases, some differences from Korean methods were detected, but the work was undertaken according to each organization's own tailoring principles; in domestic cases, developers tailored as they pleased, without principle. This paper is expected to provide a practical guideline for sites where indiscriminate tailoring occurs when constructing domestic medical information systems.
References

1. Aydin, M., Harmsen, F., et al.: An Agile Information Systems Development Method in Use. Turk. J. Electron. Eng. 12(2), 127–138 (2004)
2. Brinkkemper, S.: Method Engineering: Engineering of Information Systems Development Methods and Tools. Inform. Softw. Tech. 38(4), 275–280 (1996)
3. Conboy, K., Fitzgerald, B.: Method and Developer Characteristics for Effective Agile Method Tailoring: A Study of XP Expert Opinion. ACM Transactions on Software Engineering and Methodology 20(1), Article 2 (2010)
4. De Marco, T.: Controlling Software Projects: Management, Measurement and Estimation, p. 13. Prentice-Hall, Englewood Cliffs (1982)
5. Sommerville, I., Ransom, J.B.: An Industrial Experiment in Requirements Engineering Process Assessment and Improvement. ACM Transactions on Software Engineering and Methodology 14(1) (2005)
A Study on the Access Control Module of Linux Secure Operating System

JinSeok Park and SoonGohn Kim

Department of Computer & Game Science, Joongbu University,
101 Daehakro, Chubu-meon, Gumsangun, Chungnam, 312-702, Korea
[email protected], [email protected]
Abstract. A secure OS is a system that defends against and blocks prospective hacking by adding various security functions to the Linux kernel, such as access control, user authentication, audit trail, and anti-hacking. To protect important information from threatening elements, it is essential to establish a security system that satisfies a systematic security policy and the security requirements. In this paper, we analyze existing studies on current secure operating systems, security modules, and SELinux, and suggest a Linux access control module that uses user-discriminating authentication, security-authority inheritance of subjects and objects, a reference monitor, MAC class processing, and real-time audit trailing using a DB. In addition, by restricting root authority, access to security-level files without access authority is blocked, and malicious hacking can be thoroughly blocked thanks to real-time audit trailing using a DB.

Keywords: SELinux, LSM, MAC, Linux Secure Operating System.
1 Introduction
As secure operating systems improve, the attacks on software are also getting stronger. Unfortunately, Linux cannot cope well with these threats, and this weakness makes it a main target of hackers' attacks. The best way to resolve the security vulnerability is to use effective access control. Although DAC (Discretionary Access Control) is appropriate for keeping users' privacy, it is not enough to protect them from hackers' attacks [1]. Studies on non-discretionary access control have therefore been carried out for more than 30 years. However, there is no practical agreement on a single access control module, and due to this absence of agreement, various Linux kernel patch files have been created; none of them has become the standard Linux kernel patch. In this paper, we analyze the existing Linux secure OS work, security modules, and SELinux, and suggest an advanced access control module for a Linux secure OS.
2 Related Works
There are many requirements for the security kernel, because it should be perfect. The main requirements are as follows. First, the security kernel should be separate from the process carrying out the Reference Monitor. Second, the Reference Monitor should perform access control for every activity between subjects and objects. Third, it should be attested in thorough and comprehensive ways and be small enough to be tested. The TCB (Trusted Computing Base) is the security mechanism of a computer system. It is responsible for applying the security policy in harmony with other programs; it creates the basic security environment and provides additional user services for a reliable computer system [2]. When a malicious subject without permission tries to access the system, the Reference Monitor mediates all accesses to protect the object. Fig. 1 shows the composition of the Reference Monitor.

T.-h. Kim et al. (Eds.): ACN 2011, CCIS 199, pp. 223–228, 2011. © Springer-Verlag Berlin Heidelberg 2011
Fig. 1. The composition of Reference Monitor
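The second requirement, that the Reference Monitor mediate every access between subjects and objects against the policy, can be illustrated with a toy sketch; the policy entries below are invented examples, not SELinux rules:

```python
# Toy access-control matrix: (subject, object) -> set of allowed operations.
POLICY = {
    ("alice", "/var/log/audit"): {"read"},
    ("syslogd", "/var/log/audit"): {"read", "write"},
}

def reference_monitor(subject, obj, operation):
    """Mediate every access: permit only what the policy grants,
    and deny by default when no entry exists."""
    return operation in POLICY.get((subject, obj), set())

print(reference_monitor("syslogd", "/var/log/audit", "write"))  # True
print(reference_monitor("alice", "/var/log/audit", "write"))    # False
```

The default-deny lookup mirrors the mediation role shown in Fig. 1: an access not explicitly granted by the policy never reaches the object.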
3 SELinux Security Policy and Application

3.1 SELinux Security Policy
SELinux security policy can be broken down into two classifications: the targeted policy and the strict policy. Under the targeted policy, basic restriction is handled by the standard Linux security mechanisms, and the SELinux policy restricts access for only some processes. The strict policy applies strict restriction to all processes except the basic installation [3]. SELinux security policy describes users, programs, processes, and their objects, including devices, files, and the whole system, as in Fig. 2, which describes the access permissions of all subjects to objects. In RHEL 4, the daemons to which the targeted policy applies are listed under /etc/selinux/targeted/src/policy/domains/program. Currently, daemons such as dhcpd, httpd, named, ntpd, portmap, snmpd, squid, and syslogd are under the application of the targeted policy [4].
Fig. 2. The security policy of SELinux
3.2 Application of SELinux
SELinux can be enabled during installation. After installing Linux, it can also be set up by running the system-config-securitylevel program. In a shell, we can use the setenforce command, or edit the file /etc/selinux/config directly with the vi editor. The application can also be set up through a kernel parameter in /boot/grub/grub.conf, the configuration file of the boot manager [5].
3.3 System-Config-Securitylevel
This utility is available only when the X Window System is installed. Using it, we can easily set up the detailed policy for each daemon service as well as the basic policy. After running it, check [activate] under [SELinux] and set the policy type to targeted. Each policy can be adjusted under the SELinux policy change options. After rebooting the system, if there is no trouble, set it to [enforcing] [6].
3.4 Setenforce Command
In the shell, the basic policy mode can be set with the setenforce command. To set enforcing mode, run 'setenforce enforcing' or 'setenforce 1'; to set permissive mode, run 'setenforce permissive' or 'setenforce 0'. The setenforce command applies the SELinux mode immediately, but the setting does not persist after a reboot. The current mode can be confirmed with the getenforce command [7].
3.5 /etc/sysconfig/selinux File
The basic policy of SELinux is recorded in this file. The original file is /etc/selinux/config, and /etc/sysconfig/selinux is a symbolic link to it.
The basic configuration after a Linux installation is as follows. The file contains the SELINUX variable, which can be set to enforcing, permissive or disabled. Enforcing mode applies the SELinux policy in the kernel; permissive mode checks the policy and prints warning messages without enforcing it; disabled mode does not apply the SELinux policy to the kernel at all. The configuration file also contains the SELINUXTYPE variable, which can be set to targeted or strict. The targeted policy restricts only the daemons listed in the directory /etc/selinux/targeted/src/policy/domains/program, whereas the strict policy restricts all daemons.
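As a sketch of the file format just described (the exact contents vary between distributions, and the sample text below is hypothetical), the two variables can be parsed out of an /etc/sysconfig/selinux-style file like this:

```python
# Sketch: parse SELINUX and SELINUXTYPE from an /etc/sysconfig/selinux-style
# file. The sample contents are hypothetical but follow the format described
# in the text (enforcing/permissive/disabled, targeted/strict).
SAMPLE = """\
# This file controls the state of SELinux on the system.
SELINUX=enforcing
SELINUXTYPE=targeted
"""

def parse_selinux_config(text):
    config = {}
    for line in text.splitlines():
        line = line.strip()
        if line and not line.startswith("#") and "=" in line:
            key, _, value = line.partition("=")
            config[key.strip()] = value.strip()
    return config

cfg = parse_selinux_config(SAMPLE)
print(cfg["SELINUX"], cfg["SELINUXTYPE"])  # enforcing targeted
```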
4 The Design of Linux Access Control Module
4.1 User Discriminating Authentication
In this paper, we suggest a design with three stages of user discriminating authentication to block every illegal intrusion, as in Fig. 3. In this system, the user keeps a separate directory and file to store the security information, and the authentication level is set so that the security manager, who is superior to root, can access the system [8].
Fig. 3. The three stages of user identification authentication
This design adds a security level and a protection category to the current user authentication information. It also establishes a virtual Linux account, the security manager, which is superior to root and does not exist in /etc/passwd. Therefore, the security manager should first obtain root authentication, and then obtain the security
manager authentication. In this design, users who fail to log in with the required security level and protection category cannot access the security system directory. Also, only the security manager can set a general user's minimum, basic or maximum security level and protection category. When a user without a security level accesses the server, the security level and protection category are set to (0, 0), which keeps the user from accessing folders or files that have been given a security level.
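The (security level, protection category) check described above can be sketched as follows. This is our own illustration of the rule, not the paper's implementation; the labels are hypothetical, and we assume that category 0 on an object means "uncategorized":

```python
# Sketch of the MAC-style check in Section 4.1: a subject may access an
# object only if its security level is at least the object's level and it
# matches the object's protection category. Labels are hypothetical.
DEFAULT_LABEL = (0, 0)  # (security level, protection category) for unlabeled users

def can_access(subject_label, object_label):
    subj_level, subj_category = subject_label
    obj_level, obj_category = object_label
    # Assumption: category 0 on the object means "uncategorized",
    # so any subject may match it.
    category_ok = obj_category == 0 or subj_category == obj_category
    return subj_level >= obj_level and category_ok

print(can_access((2, 1), (1, 1)))          # True: sufficient level, same category
print(can_access(DEFAULT_LABEL, (1, 0)))   # False: level 0 cannot reach level 1
```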
4.2 Real-Time Audit Trailing Using DB
In this paper, we broke the current real-time audit trailing system down into three areas and suggest the following. First, design a program in the kernel that can link log information to the database. Second, provide an audit function that can read log information from the memory log buffer in the kernel area by using a system call. Third, using the kernel and the system call, build getlogd, which can substitute for the current klogd or syslogd in the user area. This audit trailing system detects all activities in the operating system, records them in the database, and provides the security manager with the related security audit data in real time. Also, as MAC is carried out in the kernel between the security attributes given to the users and the resources which the users want to access, every activity in the system is audited inside the kernel. It is not a one-time audit but a continuous one, and it continually provides the audit trail in case of hacking. The system manager can refer to the trail record later and use it to decide a new security policy. Fig. 4 shows the system of real-time audit trailing using a DB suggested in this paper.
Fig. 4. Real-time audit trailing system using DB
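A user-space stand-in for the proposed getlogd can be sketched with SQLite. The schema and log records below are hypothetical; a real implementation would read log information from the kernel via the system call described above rather than from an in-memory list:

```python
import sqlite3

# Sketch: a user-space audit recorder in the spirit of the proposed getlogd.
# A real version would pull records from the kernel log via a system call;
# here, hypothetical log tuples stand in for that source.
def record_events(conn, events):
    conn.execute(
        "CREATE TABLE IF NOT EXISTS audit_trail "
        "(ts TEXT, subject TEXT, object TEXT, action TEXT, allowed INTEGER)"
    )
    conn.executemany("INSERT INTO audit_trail VALUES (?, ?, ?, ?, ?)", events)
    conn.commit()

conn = sqlite3.connect(":memory:")
record_events(conn, [
    ("2011-08-15T10:00:00", "alice", "/etc/shadow", "read", 0),
    ("2011-08-15T10:00:01", "httpd", "/var/www/index.html", "read", 1),
])
denied = conn.execute(
    "SELECT COUNT(*) FROM audit_trail WHERE allowed = 0"
).fetchone()[0]
print(denied)  # 1
```

Storing the trail in a database is what lets the security manager query denied accesses in real time, as the section describes.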
5 Conclusion
In this paper, we reviewed the current studies on secure operating systems, security modules and SELinux, and proposed a Linux access control module that uses user discriminating authentication, security authority inheritance between subjects and objects, a Reference Monitor with MAC class processing, and real-time audit trailing using a DB. First, during the user authentication process, it distinguishes the access-permitted IPs and separates the super user (root)'s authority from that of the security manager by making the users input the security level and the protection category. Second, when subjects access objects through security authority inheritance, the suggested system carries out access control by comparing the security information of the subjects with that of the objects. Third, the system implements a Reference Monitor audit of every current event happening in the kernel. As it decides the access permission after checking the current MAC security attributes, it can block any malicious intrusion in advance. Fourth, through the real-time audit trailing system, it detects all activities in the operating system, records them in the database, and sends the current status data with the related security audit data to the security manager in real time. Consequently, it can cope immediately with problems in case of trouble or an emergency situation.
References
1. Pfleeger, C.P.: Security in Computing. Prentice Hall PTR (1997)
2. Gollmann, D.: Computer Security. John Wiley & Sons, West Sussex (1999)
3. Walsh, D.: Elevating Security Best Practices: SELinux (November 2003)
4. Loscocco, P., Smalley, S.: Integrating flexible support for security policies into the Linux operating system. In: Proceedings of the FREENIX Track of the 2001 USENIX Annual Technical Conference (2001)
5. IEEE Std 1003.2c: Draft Standard for Information Technology - Portable Operating System Interface (POSIX) Part 2: Shell and Utilities: Protection and Control Interfaces
6. Smalley, S.: Configuring the SELinux policy. NAI Labs Report #02-007, http://www.nsa.gov/selinux (June 2002)
7. Smalley, S., Fraser, T.: A Security Policy Configuration for the Security-Enhanced Linux. NAI Labs Technical Report (February 2001)
8. Jaeger, T., Edwards, A., Zhang, X.: Managing access control policies using access control spaces. In: Proceedings of the ACM Symposium on Access Control Models and Technologies (June 2002)
An fMRI Study of Reading Different Word Form Hyo Woon Yoon and Ji-Hyang Lim* Department of Art Therapy, Daegu Cyber University, Daegu, Republic of Korea Tel.: +82-53-850-4081, Fax : +82-53-850-4019 [email protected]
Abstract. Chinese characters appear in the currently used Korean language, and the writing system of Korean consists of a mixture of the Korean alphabet and Chinese characters. However, the usage of Chinese characters in Korean differs from that in Chinese or Japanese. In the present study, the neural mechanisms involved in reading single-character Chinese words and naming pictures by Korean native speakers were investigated using functional magnetic resonance imaging. The findings show a right-hemispheric dominance within the occipito-temporal area and activation of the left middle/medial frontal area for both the reading and the naming of Chinese characters and pictures. This should reflect the specific visual processing involved in reading Chinese characters. Additional activations in the inferior frontal and cingulate gyri are also observed. The activations of the inferior parietal region and thalamus are of interest, since we assume that they are strongly related to the phonological status of single-character Chinese words, rather than two-character words, for Korean native speakers. Keywords: different word form, hierarchical coding of words, fMRI.
1 Introduction
It is generally known that perceiving or reading visually presented words encompasses many processes that collectively require several specialized neural systems to work in concert. Functional imaging techniques such as Positron Emission Tomography (PET) and functional Magnetic Resonance Imaging (fMRI) have provided meaningful insights into the neural systems that underlie word recognition and reading in the human brain. Models of written word perception (1,2) propose that a large-scale distributed cortical network, including the left frontal, temporal, and occipital cortices, mediates the processing of the visuo-orthographic, phonologic, semantic, and syntactic constituents of alphabetic words. For example, the posterior fusiform gyri are relevant to visual processing, whereas the inferior frontal lobe plays a role in semantic processing (3,4). Regarding the various written languages and writing systems, the question of how the surface form of words influences the neural mechanisms of the brain during word recognition is of interest.
* Corresponding author.
T.-h. Kim et al. (Eds.): ACN 2011, CCIS 199, pp. 229–237, 2011. © Springer-Verlag Berlin Heidelberg 2011
230
H.W. Yoon and J.-H. Lim
One of the writing systems most different from alphabetic scripts is the Chinese character. Alphabetic systems are based on the association of phonemes with graphemic symbols and a linear structure, whereas Chinese characters are based on the association of meaningful morphemes with graphic units, whose configuration is square and nonlinear. Previous studies using visual hemifield paradigms demonstrated that the right cerebral hemisphere is more effective in processing Chinese characters than the left cerebral hemisphere (5). This led to a Chinese character-word dissociation hypothesis for the lateralisation pattern, since word perception is regarded as left-lateralised. This conclusion has been challenged, because more recent fMRI results suggest that the reading of Chinese characters is bilateral. In particular, findings on the left inferior frontal cortex (BA 9/45/46) emphasize the importance of semantic generation and processing for Chinese characters (6,7). Chinese characters are used not only in the writing system of the Chinese language, but also widely in the Japanese and Korean languages. The Korean writing system consists of a mixture of pure Korean words and Chinese characters. Korean words are characterized by phonemic components similar to the alphabetic words used in English or German. However, the shape of Korean words is nonlinear. The composition of the symbols is shaped into a square-like block, in which the symbols are arranged left to right and top to bottom. This overall shape makes Korean more similar to Chinese than other alphabetic orthographies (8). Furthermore, unlike alphabetic words, these phonemic symbols are not arranged in serial order, but are combined into a single form to represent a syllable. These syllabic units are spatially separated from each other.
Each Korean syllable is constructed of two to four symbols that in various combinations represent each of 24 phonemes. Thus, in a sense, Korean words, Hangul, can also be regarded as syllabograms (Figure 1). In addition, the Korean vocabulary consists of pure Korean words (24.4%), Chinese-derivative words (69.32%), and other foreign words (6.28%). Chinese-derivative words can be written either in the form of the Chinese ideogram or as the corresponding Korean word (9). In the current Korean writing system, e.g., in daily newspapers or popular magazines in South Korea, the use of Chinese characters is relatively sparse. According to statistics from 1994 (10), the proportion of Chinese characters in the body of daily newspapers is about 10%, and it has continuously diminished since then. Furthermore, it is important to note that the number of years that students are exposed to Chinese characters in Korean schools is shorter than that for Korean words. It can therefore be argued that the familiarity and expertise of young Korean students with Chinese characters and with Korean words, Hangul, are not identical. To compensate for this difference in difficulty and expertise, in this study we used the simplest words written in Chinese characters as stimuli. Words that are taught in the first year of Chinese character education were chosen for our experiment. In addition, these words appear relatively frequently in daily newspapers and popular magazines. Two-syllable Chinese-derivative words were used as stimuli. These can be written, as mentioned above, either in the form of Korean words or as Chinese characters with the same meaning and pronunciation.
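The combination of phonemic symbols (jamo) into a syllable block is mirrored directly in Unicode, which encodes each precomposed Hangul syllable arithmetically. As a side illustration of this structure (ours, not part of the study; the example word is arbitrary), a syllable can be decomposed back into its constituent symbols:

```python
# Sketch: Hangul syllable blocks combine 2-4 phonemic symbols (jamo).
# Unicode encodes each precomposed syllable arithmetically as
# 0xAC00 + (lead * 21 + vowel) * 28 + tail, so a block can be decomposed.
LEADS  = "ㄱㄲㄴㄷㄸㄹㅁㅂㅃㅅㅆㅇㅈㅉㅊㅋㅌㅍㅎ"
VOWELS = "ㅏㅐㅑㅒㅓㅔㅕㅖㅗㅘㅙㅚㅛㅜㅝㅞㅟㅠㅡㅢㅣ"
TAILS  = [""] + list("ㄱㄲㄳㄴㄵㄶㄷㄹㄺㄻㄼㄽㄾㄿㅀㅁㅂㅄㅅㅆㅇㅈㅊㅋㅌㅍㅎ")

def decompose(syllable):
    index = ord(syllable) - 0xAC00          # offset into the Hangul block
    lead, rest = divmod(index, 21 * 28)
    vowel, tail = divmod(rest, 28)
    return LEADS[lead] + VOWELS[vowel] + TAILS[tail]

print(decompose("한"))  # ㅎㅏㄴ
```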
Surprisingly, despite these unique and interesting characteristics, the neural mechanisms involved in reading Korean words have rarely been studied, at least with modern functional imaging techniques. Using functional magnetic resonance imaging, we investigated the neural mechanisms involved in reading these two different writing systems by Korean native speakers. In doing so, we hope to identify the specific neural mechanisms that are involved in reading Korean words (phonemes) and Chinese characters (morphemes). Given the different role of Chinese characters in the Korean language compared to Chinese, it is also of interest to investigate the brain activation pattern associated with reading Chinese characters by Korean subjects.
2 Materials and Methods
Seven male and five female right-handed subjects (mean age: 22 years, S.D. 1.5 years) participated in the study. All were native Korean speakers who had been educated in Chinese characters for more than six years in school. None had any medical, neurological or psychiatric illness, past or present, and none were taking medication. All subjects consented to the protocol approved by the Institutional Ethics and Radiation Safety Committee. As stimuli, two-character Chinese words and Korean words with equivalent phonetic as well as semantic components were chosen. There were 60 words for each category. All words were nouns; half had abstract meanings and the other half concrete meanings. Stimuli were presented using custom-made software on a PC and projected via an LCD projector onto a screen at the feet of the subjects. The subjects viewed the screen via a homemade reflection mirror attached to the head RF coil. Each stimulus was presented for 1.5 seconds, followed by a blank screen for 500 ms. Ten items of this stimulus pattern were presented per block, preceded by a blank screen for one second before the first stimulus. These stimulus blocks were alternated with the baseline task, during which a fixation point was projected in the middle of the screen for 21 seconds. The two kinds of stimulus blocks (Korean words and Chinese characters) and the baseline blocks each lasted 21 seconds. A total of six blocks of Korean words and six blocks of Chinese characters were presented, intermixed at random. During the experiment, the subjects were instructed to press the right button for nouns with an abstract meaning and the left button for those with a concrete meaning. Simultaneously, they were to respond covertly to the stimuli presented. Images were acquired using a 3 Tesla MRI scanner (ISOL Technology, Korea) with a quadrature head coil.
Following a T1-weighted scout image, high-resolution anatomic images were acquired using an MPRAGE (Magnetization-Prepared RApid Gradient Echo) sequence with TE = 3.7 ms, TR = 8.1 ms, flip angle = 8°, and an image size of 256 x 256. T2*-weighted functional data were acquired using echo planar imaging (EPI) with TE = 37 ms, flip angle = 80°, TR = 3000 ms, and an image size of 64 x 64. We obtained 30-slice EPI images with a slice thickness of 5 mm and no gaps between slices for the whole brain. A total of 172 volumes was acquired per
experimental run. For each participant, the first four volumes in each scan series, collected before magnetization reached the equilibrium state, were discarded. Image data were analyzed using SPM99 (Wellcome Department of Cognitive Neurology, London). The images of each subject were corrected for motion and realigned using the first scan of the block as a reference. T1 anatomical images were coregistered with the mean of the functional scans and then aligned to the SPM T1 template in the atlas space of Talairach and Tournoux (11). Finally, the images were smoothed by applying a Gaussian filter of 7 mm full-width at half-maximum (FWHM). In order to calculate contrasts, the SOA (stimulus onset asynchrony) from the protocol was defined as events and convolved with the hemodynamic response function (HRF) to specify the appropriate design matrix. The general linear model was used to analyze the smoothed signal at each voxel in the brain. Significant changes in the hemodynamic response for each subject and condition were assessed using t-statistics. For the group analysis, the contrast images of the single subjects were analyzed using a random effects model. Activations were reported if they exceeded a threshold of P < 0.05, corrected at the cluster level (P < 0.0001 uncorrected at the single voxel level), for the tasks of Korean words and Chinese characters vs. baseline. Significance at the cluster level was calculated in consideration of peak activation and the extent of the cluster. A threshold of P < 0.0005 (uncorrected) at the single voxel level was chosen for the direct comparison of Chinese characters and Korean words. Activations are based on an extent of ten voxels.
3 Results
The mean reaction time for subjects during Korean word reading was 1.01 sec (S.D.: 325 ms), whereas for Chinese character reading it was 1.24 sec (S.D.: 367 ms). A paired t-test confirmed that the difference between these two reaction times was significant (p < 0.00001). Significant signal changes for Korean word reading vs. baseline were detected bilaterally in the fusiform gyrus (BA 19/37) and in the left middle frontal area (BA 46/6). In addition, right-hemispheric activation was observed in the medial frontal gyrus (BA 8). For Chinese characters vs. baseline, the activation patterns appeared slightly different. In the region responsible for the visual stimuli per se, we observed bi-hemispheric activation for the Chinese character reading vs. baseline task. In the frontal (superior, BA 8, and inferior, BA 9) and parietal (superior, BA 7) cortices, only left-hemispheric activation was significant in contrast with the baseline task. In the contrast of Korean words minus Chinese characters, significant positive signal changes were observed in the right superior gyrus of the frontal lobe (BA 8), the left superior temporal lobe (BA 41), the right midtemporal lobe (BA 21), the precentral gyrus (BA 6) and the insula (BA 13); for the contrast of Chinese characters minus Korean words, activation was observed in the bi-hemispheric visual area (BA 19).
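For illustration only, a paired t-test like the one above can be computed from the per-subject differences as t = mean(d) / (sd(d)/sqrt(n)); the reaction times below are hypothetical and are not the data reported in this study:

```python
import math

# Paired t-test sketch: t = mean(d) / (sd(d) / sqrt(n)), where d holds the
# per-subject differences between the two conditions. The reaction times
# below are hypothetical, not the values reported in this study.
korean_rt  = [0.95, 1.02, 0.99, 1.05, 1.00, 1.03]   # seconds, per subject
chinese_rt = [1.20, 1.28, 1.19, 1.30, 1.22, 1.27]

def paired_t(xs, ys):
    d = [y - x for x, y in zip(xs, ys)]
    n = len(d)
    mean = sum(d) / n
    var = sum((di - mean) ** 2 for di in d) / (n - 1)
    return mean / math.sqrt(var / n)  # compare against t(n-1) for the p-value

t = paired_t(korean_rt, chinese_rt)
print(round(t, 2))
```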
Fig. 1. Activation map “Korean word reading” minus “Chinese character reading” in 12 subjects (threshold at p < 0.0005, uncorrected at a single voxel level)
4 Discussion
In terms of the behavioral data, significantly longer reaction times were observed for Chinese character reading compared to Korean word reading. Since very simple characters were used as stimuli, it would appear that the reaction time advantage for reading Korean words is not derived from a familiarity effect. Rather, it might rely on differences in the characteristics of phonological processing between these two writing systems. The phonological processing in Chinese character recognition operates at the syllable-morpheme phonology level. This is the fundamental difference regarding the role of phonology between Chinese and alphabetic writing systems, and the concept of pre-lexical phonology is misleading for Chinese character reading (8). In processing Korean words, however, pre-lexical phonology is activated rapidly and automatically, and reading Korean words for meaning involves pre-lexical information processing (12). In the functional imaging data, the activated areas for the condition of Chinese character reading vs. baseline were found in the left-hemispheric inferior and superior gyri of the frontal lobe (BA 6/9). This demonstrates the left-lateralized pattern of the frontal cortex during Chinese character reading. This activation can be
attributed to the unique square configuration of Chinese characters (13,14). Chinese characters consist of a number of strokes that are packed into a square shape according to stroke assembly rules, and this requires a fine-grained analysis of the visual-spatial locations of the strokes and subcharacter components (15). In addition, the left middle frontal cortex (BA 6/9) is known to be an area of spatial and verbal working memory, by which the subject maintains a limited amount of spatial and verbal information in an active state for a brief period of time (16,17). More precisely, this area may act as a central executive system for working memory, responsible for the coordination of cognitive resources (18). In our experiment, even though a working memory process was not involved in the subjects' decision, they did need to coordinate the semantic (or phonological) processing of the Chinese characters. These two processes, coordination of cognitive resources and semantic processing, were explicitly required by the experimental task, along with the intensive visuospatial processing of the Chinese characters. It seems that the activation of the left middle frontal gyrus is involved in these two cognitive processes. This left frontal activation pattern is consistent with other studies in which functional imaging techniques were applied to Chinese character reading by native Chinese speakers, especially regarding the activation of BA 9 (7,19,20). Left-hemispheric middle frontal activations (BA 46/6) were also observed for the condition of Korean words vs. baseline, and this appears to be correlated with mechanisms similar to those associated with the reading of other alphabetic words (21). Since our subjects were asked to respond after seeing and covertly speaking the Korean words (forced-choice option), which is connected with semantic processing, the activation of the middle frontal area seems to underlie this cognitive process.
We propose that this might be the reason why the left frontal area is strongly activated during this experimental task. Occipital lobe activation was observed for Chinese character reading in contrast with baseline as well as in the direct comparison with Korean words (Table 1). The activated occipital areas, such as the fusiform gyrus, are thought to be relevant to the visual processing of Chinese characters. Interestingly, we observed right-hemispheric dominant occipital activation, even though two-character Chinese words were presented as stimuli. There have been some indications that the reading of two-character Chinese words is left-lateralized (19,20), but our results did not support the dissociation hypothesis of single- and two-character Chinese word perception. Bilateral activation of the occipito-temporal area was also observed for Korean word reading. It is generally thought that this area is relevant to the processing of the visual properties of Korean words. The activation pattern is bilateral, but the left-hemispheric activity was relatively weaker (Table 1). This is not in agreement with previous studies of alphabetic words (22). The activation of the right-hemispheric superior frontal lobe (BA 8) was observed in the condition of Korean word reading minus Chinese character reading as well as in the contrast of Korean words vs. baseline (Table 2). This area belongs to the dorsolateral prefrontal cortex (BA 8 is its posterior portion). The activation of the right dorsolateral prefrontal cortex (or right-hemispheric dominance at least) was
reported in some working memory studies in which functional imaging techniques such as PET or fMRI were used (23,24). However, in our study, the role of this area relates more to the visual systems per se. This area is close to the frontal eye field, which is linked anatomically and physiologically with the visual and oculomotor systems (25,26). In the monkey, lesions in BA 8 cause severe impairment in the performance of conditional tasks in which the appropriate visual stimuli must be chosen depending on the particular visual cues presented (27), which suggests that BA 8 might play an important role in the selection of specific visual stimuli. Failure on visual conditional tasks following lesions in BA 8 in the monkey may reflect the loss of this higher-order visual control (28). In a comparable study, the activation of BA 8 was reported to be related to language proficiency; this seems to rely on the encoding and retrieval of visual nonverbal forms, similar to the processing of shape judgment, and this aspect of visual processing may have placed greater demands on BA 8 (15). These findings suggest that the activation of the right-hemispheric BA 8 of our subjects during Korean word reading relies on nonverbal higher-order visual control or on the visuospatial analysis of Korean words. This activation seems to be associated with the analysis of the specific surface form of Korean words as well. Superior midtemporal activation was observed in the condition of Korean words minus Chinese characters. According to previous studies of alphabetic words, the left-lateralized activation of the superior midtemporal area is known to be related to phonological processing (29), and is particularly responsible for fine-grained phonemic analysis (30). Our results are in general agreement with these previous studies, since we believe that Korean words are phonemic like alphabetic words despite their square-like shape.
The activation of the insula is better appreciated in the context of the concomitant activation of the precentral gyrus (BA 6). The insula and precentral gyrus work in concert in formulating articulatory plans and coordinating the speech articulation of alphabetic words (1). Overall, our activation patterns during Chinese character reading seem to involve mechanisms similar to those of native Chinese speakers, e.g., strong activation of the left middle frontal area. The activation pattern for Korean words in our results appears to corroborate that of alphabetic words at a general level, but some differences in detail were noted, e.g., the right-hemispheric dominance of the occipital lobe. The right-hemispheric activation of the posterior part of the dorsolateral prefrontal cortex would be related to specific higher-order visual control or visuospatial analysis of Korean words. In summary, we investigated the neural mechanisms of reading Korean words and Chinese characters by Korean native speakers using functional magnetic resonance imaging. Our results indicate mechanisms for reading Chinese characters by our subjects similar to those in previous studies with Chinese subjects. The neural mechanisms for reading Korean words appear similar to those of alphabetic words at a general level, with the addition of activation of the right superior frontal area. We suggest that this activation is correlated with the specific visuospatial analysis involved in reading Korean words. Further studies will be needed to clarify the neural mechanisms involved in reading Korean words.
References
1. Price, C.J.: The anatomy of language: contributions from functional neuroimaging. J. Anat. 3, 335–359 (2000)
2. Demonet, J.E., Chollet, F., Ramsay, S., Cardebat, D., Nespoulus, J.N., Wise, R., Rascol, A., Frackowiak, R.: The anatomy of phonological and semantic processing in normal subjects. Brain 115, 1753–1768 (1992)
3. Bookheimer, S.: Functional MRI of language: New approaches to understanding the cortical organization of semantic processing. Ann. Rev. Neurosci. 25, 151–188 (2002)
4. de Zubicaray, G.I., Wilson, S.J., McMahon, K.L., Muthiah, S.: The semantic interference effect in the picture-word paradigm: An event-related fMRI study employing overt responses. Human Brain Mapping 14, 218–227 (2001)
5. Tzeng, O.J.L., Hung, D.L., Cotton, B., Wang, W.S.-Y.: Visual lateralisation effect in reading Chinese characters. Nature 282, 499–501 (1979)
6. Ding, G., Perry, C., Peng, D., Ma, L., Li, D., Xu, S., Luo, Q., Xu, D., Yang, J.: Neural mechanisms underlying semantic and orthographic processing in Chinese-English bilinguals. NeuroReport 14, 1557–1562 (2003)
7. Tan, L.H., Spinks, J.A., Gao, J.-H., Liu, H.-L., Perfetti, C.A., Xiong, J., Stofer, K.A., Pu, Y., Liu, Y., Fox, P.T.: Brain activation in the processing of Chinese characters and words: a functional MRI study. Human Brain Mapping 10, 16–27 (2000)
8. Wang, M., Koda, K., Perfetti, C.A.: Alphabetic and nonalphabetic L1 effects in English word identification: a comparison of Korean and Chinese English L2 learners. Cognition 87, 129–149 (2003)
9. Kim, H., Na, D.: Dissociation of pure Korean words and Chinese-derivative words in phonological dysgraphia. Brain and Language 74, 134–137 (2000)
10. Talairach, J., Tournoux, P.: Co-Planar Stereotaxic Atlas of the Human Brain. Thieme, New York (1988)
11. Gusnard, D.A., Raichle, M.E.: Searching for a baseline: functional imaging and the resting human brain. Nat. Rev. Neurosci. 2, 685–694 (2001)
12. Kuo, W.-J., Yeh, T.-C., Duann, J.-R., Wu, Y.-T., Ho, L.-W., Hung, D., Tzeng, O.J.L., Hsieh, J.-C.: A left-lateralized network for reading Chinese words: a 3 T fMRI study. NeuroReport 12, 3997–4001 (2001)
13. Tan, L.H., Liu, H.-L., Perfetti, C.A., Spinks, J.A., Fox, P.T., Gao, J.-H.: The neural system underlying Chinese logograph reading. NeuroImage 13, 836–846 (2001)
14. Chee, M., Tan, E., Thiel, T.: Mandarin and English single word processing studies with functional magnetic resonance imaging. J. Neurosci. 19, 3050–3056 (1999)
15. Chee, M.W.L., Weekes, B., Lee, K.M., Soon, C.S., Schreiber, A., Hoon, J.J., Chee, M.: Overlap and dissociation of semantic processing of Chinese characters, English words, and pictures: evidence from fMRI. NeuroImage 12, 392–403 (2000)
16. Zhang, W., Feng, L.: Interhemispheric interaction affected by identification of Chinese characters. Brain and Cognition 39, 93–99 (1999)
17. Mathews, P.M., Adcock, J., Chen, Y., Fu, S., Devlin, J.T., Rushworth, M.F.S., Smith, S., Beckmann, C., Iversen, S.: Towards understanding language organization in the brain using fMRI. Human Brain Mapping 18, 239–247 (2003)
18. Courtney, S.M., Petit, L., Maisog, J.M., Ungerleider, L.G., Haxby, J.V.: An area specialized for spatial working memory in human frontal cortex. Science 279, 1347–1351 (1998)
19. Owen, A.M., Doyon, J., Petrides, M., Evans, A.C.: Planning and spatial-working memory: A positron emission tomography study in humans. Eur. J. Neurosci. 8, 353–364 (1996)
An fMRI Study of Reading Different Word Form
237
20. D’Esposito, M., Detre, J.A., Alsop, D.C., Shin, R.K., Atlas, S., Grossman, M.: The neural basis of the central executive systems of working memory. Nature 378, 279–281 (1995) 21. Sevestianov, A., Horwitz, B., Nechaev, V., Williams, R., Fromm, S., Braun, A.R.: fMRI study comparing names versus pictures of objects. Human Brain Mapping 16, 168–175 (2002) 22. Moore, C.J., Price, C.J.: Three distinct ventral occipitotemporal regions for reading and object naming. NeuroImage 10, 181–192 (1998) 23. Chen, Y., Fu, S., Iversen, S.D., Smith, S.M., Mathews, P.M.: Testing or dual brain processing routes in reading: a direct contrast of Chinese characters and pinyin reading using fMRI. J. Cog. Neurosci. 14, 1088–1098 (2002) 24. Fu, S., Cheng, Y., Smith, S., Iversen, S., Mathews, P.M.: Effects of word form on brain processing of written Chinese. NeuroImage 17, 1538–1548 (2002) 25. Kim, J.J., Kim, M.S., Lee, J.S., Lee, D.S., Lee, M.C., Kwon, J.S.: Dissociation of working memory processing associated with native and second languages: PET investigation. NeuroImage 15, 879–891 (2002) 26. Petrides, M., Alivisatos, B., Evans, A.C.: Functional activation of the human ventrolateral frontal cortex during mnemonic retrieval of verbal information. Proc. Nat. Acad. Sci. 92, 5803–5807 (1995) 27. Fox, P.T., Fox, J.M., Raichle, M.E., Burde, R.M.: The role of cerebral cortex in the generation of voluntary saccades: a positron emission tomography study. J. Neurophysiol. 54, 348–369 (1985) 28. Shanton, G.B., Deng, S.Y., Goldberg, M.E., McMullen, N.T.: Cytoarchitectural characteristic of the frontal eye fields in macaque monkeys. J. Comp. Neurol. 282, 415– 427 (1989) 29. Halsband, U., Passingham, R.: The role of premotor and parietal cortex in the direction of action. Brain Res. 240, 368–372 (1982) 30. Petrides, M., Alivisatos, B., Evans, A.C., Meyer, E.: Dissociation of human middorsolateral frontal cortex in memory processing. Proc. Nat. Acad. Sci. 90, 873–877 (1993)
Intelligent Feature Selection by Bacterial Foraging Algorithm and Information Theory
Jae Hoon Cho1 and Dong Hwa Kim2
1 Chungbuk National University, Cheongju, Chungbuk, Korea
2 Hanbat National University, Daejeon, Korea
[email protected], [email protected]
Abstract. In this paper, an intelligent feature selection method based on the bacterial foraging algorithm and mutual information is proposed. Feature selection is an important issue in pattern classification. In particular, when classifying with a large number of features or variables, the accuracy and computational time of the classifier can be improved by using a relevant feature subset that removes irrelevant, redundant, or noisy data. The proposed method consists of two parts, a wrapper part based on bacterial foraging optimization and a filter part based on mutual information, which together select the feature subset that achieves the best classifier performance. Experimental results show that this method achieves better performance on pattern recognition problems than other conventional methods. Keywords: Feature selection, Bacterial foraging optimization, Mutual information.
1 Introduction
Recently, feature selection has been attracting the interest of many researchers as a way to improve the performance of classifiers in various pattern recognition problems. Feature selection is the process of finding the features that yield the best classifier performance for a given pattern recognition problem; it does so by removing irrelevant, redundant, or noisy data from the original data set. Feature selection algorithms can be categorized by their generation procedure and evaluation function, or by their dependence on the inductive algorithm. Dash and Liu [1] grouped feature selection methods into two categories based on generation procedure and evaluation function. Langley [2] broadly grouped feature selection methods into two groups based on their dependence on, or independence of, the inductive algorithm: filter methods and wrapper methods. Filter methods are independent of the inductive algorithm and evaluate a feature subset using intrinsic characteristics of the data. In filter methods, the optimal feature subset is selected in one pass by evaluating some predefined criterion, so filter methods can quickly process very high-dimensional datasets; however, they tend to have worse classification performance because they ignore the effect of the selected feature subset on the performance of the inductive algorithm. The wrapper methods utilize the error rate of the inductive algorithm as the evaluation function
T.-h. Kim et al. (Eds.): ACN 2011, CCIS 199, pp. 238–244, 2011. © Springer-Verlag Berlin Heidelberg 2011
and search for the best subset of features in the space of all available feature subsets. It is generally known that wrapper methods achieve higher performance than filter methods. Meanwhile, information theory has been applied to feature selection problems in recent years. Battiti [3] proposed a feature selection method called MIFS (Mutual Information Feature Selection). Kwak and Choi [4] investigated a limitation of MIFS using a simple example and proposed an algorithm that overcomes the limitation and improves performance. The main advantages of mutual information are its robustness to noise and to data transformation. In this paper, we propose a feature selection method that uses both information theory and bacterial foraging optimization. The proposed method consists of two parts, a filter part and a wrapper part. In the filter part, we evaluate the significance of each feature using mutual information and then remove features with low significance. In the wrapper part, we use bacterial foraging optimization to select feature subsets of smaller size and higher classification performance. To estimate the performance of the proposed method, we apply it to UCI Machine Learning data sets [5]. Experimental results show that our method is effective and efficient in finding small subsets of significant features for reliable classification.
2 Mutual Information Based Feature Selection
Basically, mutual information is a special case of a more general quantity called relative entropy, which is a measure of the distance between two probability distributions. Entropy is a measure of the uncertainty of a random variable. More specifically, if a discrete random variable X takes values in an alphabet λ with probability density function p(x) = Pr{X = x}, x ∈ λ, then the entropy of X is defined as [3]

H(X) = −∑_{x∈λ} p(x) log p(x)    (1)
For the case of two discrete random variables X and Y, the joint entropy of X and Y is defined as follows:

H(X,Y) = −∑_{x∈λ} ∑_{y∈δ} p(x,y) log p(x,y)    (2)
where p(x,y) denotes the joint probability density function of X and Y. When some variables are known and the others are not, the remaining uncertainty can be described by the conditional entropy, which is defined as

H(Y|X) = −∑_{x∈λ} ∑_{y∈δ} p(x,y) log p(y|x)    (3)
The common information of two random variables X and Y is defined as the mutual information between them:

I(X;Y) = ∑_{x∈λ} ∑_{y∈δ} p(x,y) log [ p(x,y) / (p(x)·p(y)) ]    (4)
Obviously, a large amount of mutual information between two random variables means that the two variables are closely related; if the mutual information is zero, the two variables are totally unrelated, i.e., independent of each other. The relation between mutual information and entropy is described by (5), which is also illustrated in Fig. 1:

I(X;Y) = H(X) − H(X|Y) = H(Y) − H(Y|X) = H(X) + H(Y) − H(X,Y),
I(X;Y) = I(Y;X),
I(X;X) = H(X)    (5)
Fig. 1. Relation between entropy and mutual information
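The identities in (5) can be checked numerically. The sketch below is not part of the original paper; it is a plain-Python illustration that estimates entropies from empirical frequencies of discrete samples and verifies I(X;X) = H(X):

```python
from collections import Counter
from math import log2

def entropy(xs):
    """H(X) = -sum p(x) log p(x), Eq. (1), with p estimated by counting."""
    n = len(xs)
    return -sum((c / n) * log2(c / n) for c in Counter(xs).values())

def mutual_information(xs, ys):
    """I(X;Y) = H(X) + H(Y) - H(X,Y), the third identity in Eq. (5)."""
    joint = entropy(list(zip(xs, ys)))  # H(X,Y) from joint samples
    return entropy(xs) + entropy(ys) - joint

X = [0, 0, 1, 1, 0, 1, 0, 1]
Y = [0, 0, 1, 1, 1, 1, 0, 0]        # partially related to X
print(mutual_information(X, Y))     # positive: X and Y share information
print(mutual_information(X, X))     # equals H(X), per Eq. (5)
```

With larger samples, these empirical estimates converge to the true information quantities.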
In feature selection problems, given two variables, a feature F and the class C, their mutual information is defined in terms of their probability density functions p(f), p(c), and p(f,c):

I(F;C) = ∑_{f∈λ} ∑_{c∈δ} p(f,c) log [ p(f,c) / (p(f)·p(c)) ]    (6)
If the mutual information I(F;C) between a feature F and the class C is large, feature F contains much information about class C; if I(F;C) is small, feature F has little effect on the output class C. So, in feature selection problems, the optimal feature subset can be determined by selecting the features with higher mutual information. In this paper, the following quantity is used as the fitness function for the GA and the proposed method:

I(C; f_i) − β ∑_{f_s∈S} [ I(C; f_s) / H(f_s) ] · I(f_i; f_s)    (7)

where S is the set of already selected features and β is the redundancy parameter.
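As an illustration of how criterion (7) can drive a greedy selection, here is a small sketch. The toy data, the helper names, and the choice β = 1.0 are assumptions for demonstration (the paper does not fix β); the score of each candidate feature is its mutual information with the class minus the redundancy penalty over already selected features:

```python
from collections import Counter
from math import log2

def H(xs):
    n = len(xs)
    return -sum((c / n) * log2(c / n) for c in Counter(xs).values())

def I(xs, ys):
    # mutual information via I(X;Y) = H(X) + H(Y) - H(X,Y)
    return H(xs) + H(ys) - H(list(zip(xs, ys)))

def greedy_select(features, classes, k, beta=1.0):
    """Greedily add the feature maximizing Eq. (7):
    I(C;f) - beta * sum_{s in S} I(C;f_s)/H(f_s) * I(f;f_s)."""
    remaining = dict(features)          # name -> column of discrete values
    selected = []
    while remaining and len(selected) < k:
        def score(name):
            f = remaining[name]
            penalty = sum(I(classes, features[s]) / H(features[s]) * I(f, features[s])
                          for s in selected if H(features[s]) > 0)
            return I(f, classes) - beta * penalty
        best = max(remaining, key=score)
        selected.append(best)
        del remaining[best]
    return selected

# toy data: f1 is informative, f2 duplicates f1 (redundant), f3 adds new information
C  = [0, 0, 0, 0, 1, 1, 1, 1]
fs = {"f1": [0, 0, 0, 1, 1, 1, 1, 1],
      "f2": [0, 0, 0, 1, 1, 1, 1, 1],
      "f3": [0, 1, 0, 0, 1, 0, 1, 1]}
print(greedy_select(fs, C, k=2))  # the redundancy term demotes the duplicate f2
```

After f1 is picked, the penalty cancels f2's entire score (it carries no new information), so the weakly informative but non-redundant f3 is selected second.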
3 Bacterial Foraging Optimization
The search and optimal foraging behavior of animals can be used to solve engineering problems. To perform social foraging, an animal needs communication capabilities; it gains advantages by exploiting the sensing capabilities of the group, so that the group can gang up on larger prey and individuals can obtain protection from predators while in a group [6]. To apply bacterial foraging to an optimization problem, conventional BFO can be described as follows [7]. In a minimization problem, the main goal of a BFO-based algorithm is to find the minimum of J(θ), θ ∈ ℝ^p, without using the gradient ∇J(θ). Here, θ is the position of a bacterium and J(θ) denotes an attractant-repellant profile, i.e., where nutrients and noxious substances are located: J(θ) < 0, J(θ) = 0, and J(θ) > 0 represent the presence of nutrients, a neutral medium, and noxious substances, respectively. The population of bacteria can be defined by

P(j,k,l) = {θ^i(j,k,l) | i = 1, 2, …, S}    (8)
where θ^i(j,k,l) represents the position of each member of the population of S bacteria at the jth chemotactic step, kth reproduction step, and lth elimination-dispersal event. Let J(i,j,k,l) denote the cost at the location of the ith bacterium θ^i(j,k,l) ∈ ℝ^p; the bacterial position after the next chemotactic step can be represented by

θ^i(j+1,k,l) = θ^i(j,k,l) + C(i)φ(j)    (9)
where C(i) > 0 is the size of the step taken in the random direction specified by the tumble. If the cost J(i,j+1,k,l) at θ^i(j+1,k,l) is better than at θ^i(j,k,l), another chemotactic step of size C(i) in the same direction is taken, and this is repeated, as long as the cost continues to decrease, up to a maximum number of steps N_s, which is the length of the lifetime of the bacterium. During chemotaxis, a bacterium that has found a good position provides an attractant or repellant signal for the swarming behavior of the group. The function J_cc^i, modeling this cell-to-cell attractant and repellant effect, is represented by

J_cc(θ, P(j,k,l)) = ∑_{i=1}^{S} J_cc^i(θ, θ^i(j,k,l))
  = ∑_{i=1}^{S} [ −d_attract exp( −ω_attract ∑_{m=1}^{p} (θ_m − θ_m^i)² ) ]
    + ∑_{i=1}^{S} [ h_repellant exp( −ω_repellant ∑_{m=1}^{p} (θ_m − θ_m^i)² ) ]    (10)
where θ = [θ_1, …, θ_p]^T is a point in the optimization domain, θ_m^i is the mth component of the ith bacterium's position, d_attract is the depth of the attractant released by the cell, ω_attract is a measure of the width of the attractant signal, h_repellant = d_attract is the height of the repellant effect magnitude, and ω_repellant is a measure of the width of the repellant. Therefore, the final cost at the location of the ith bacterium θ^i(j,k,l) ∈ ℝ^p, reflecting the effect of attractants and repellants, can be defined by

J(i,j,k,l) + J_cc(θ, P)    (11)

After the chemotactic steps, a reproduction step is taken. The bacteria with the highest accumulated cost J (the poorest health) die, and the bacteria with lower accumulated cost (the best health) each split into two bacteria placed at the same location. Elimination-dispersal events are then carried out, in which each bacterium in the population is subjected to elimination-dispersal with the same predefined probability.
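A minimal sketch of this loop, chemotaxis with tumble-and-swim, reproduction of the healthiest half, and elimination-dispersal, might look as follows. All parameter values are illustrative rather than those of the paper, and the swarming term J_cc of (10) is omitted for brevity:

```python
import math
import random

def bfo_minimize(J, p, S=20, Nc=50, Ns=4, Nre=4, Ned=2, Ped=0.25, C=0.1):
    """Sketch of BFO: chemotaxis (tumble/swim, Eq. 9), reproduction,
    and elimination-dispersal; the swarming term of Eq. (10) is omitted."""
    rand_pos = lambda: [random.uniform(-5.0, 5.0) for _ in range(p)]
    pop = [rand_pos() for _ in range(S)]
    best, best_cost = None, float("inf")
    for _ in range(Ned):                      # elimination-dispersal events
        for _ in range(Nre):                  # reproduction steps
            health = [0.0] * S
            for _ in range(Nc):               # chemotactic steps
                for i in range(S):
                    cost = J(pop[i])
                    d = [random.gauss(0.0, 1.0) for _ in range(p)]
                    norm = math.sqrt(sum(x * x for x in d)) or 1.0
                    phi = [x / norm for x in d]      # tumble direction phi(j)
                    for _ in range(Ns):              # swim while improving
                        trial = [pop[i][m] + C * phi[m] for m in range(p)]
                        t_cost = J(trial)
                        if t_cost < cost:
                            pop[i], cost = trial, t_cost
                        else:
                            break
                    health[i] += cost
                    if cost < best_cost:
                        best, best_cost = pop[i][:], cost
            # healthiest half (lowest accumulated cost) splits, the rest die
            order = sorted(range(S), key=lambda i: health[i])
            pop = [pop[order[i % (S // 2)]][:] for i in range(S)]
        # each bacterium is dispersed to a random position with probability Ped
        pop = [rand_pos() if random.random() < Ped else b for b in pop]
    return best, best_cost

random.seed(7)
best, cost = bfo_minimize(lambda th: sum(x * x for x in th), p=2)
```

On the sphere function above, the population drifts toward the origin; the step size C bounds the final precision, which is why small steps are typically used.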
4 Simulation and Discussion
In our experiments, in order to evaluate the performance of the selected feature subsets, we use UCI data sets, including four with high dimension. All experiments were run on a PC with a Pentium 4 CPU and 3 GB of RAM. The initial parameters of the genetic algorithm and of bacterial foraging optimization are shown in Table 1. A ten-fold cross-validation procedure is commonly used to evaluate the performance of such methods: the data are partitioned into ten subsets, the cross-validation process is repeated ten times, and the ten results from the folds are averaged to produce a single estimate. Table 2 shows the UCI datasets used in our experiments; the first two rows are artificial datasets and the rest are real-world datasets. The ratio of training to holdout data is 2:1 [8]. In this experiment, we use the fitness values evaluated by (7) and the three genetic operators roulette-wheel selection, uniform crossover, and simple mutation. Table 3 reports the feature selection performance of the proposed method. The first column pair shows the results of the ANNIGMA-wrapper method proposed by Hsu et al. [8], the second the results of the genetic algorithm, and the final one the results of the proposed method. For each error rate and number of selected features, we report the average and the standard deviation. As shown in Table 3, the proposed method achieves better performance than the other methods, with fewer features, for most datasets, and it also makes it possible to decrease the error.
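The ten-fold cross-validation procedure described above can be sketched as follows; the `majority_baseline` classifier is a toy stand-in for illustration, not the classifier used in the paper:

```python
import random

def kfold_error(n_samples, train_and_test, k=10, seed=0):
    """Average error rate over k folds.
    train_and_test(train_idx, test_idx) must return an error rate in [0, 1]."""
    idx = list(range(n_samples))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]          # k disjoint folds
    errors = []
    for i in range(k):
        test = folds[i]                            # fold i held out for testing
        train = [j for f in folds[:i] + folds[i + 1:] for j in f]
        errors.append(train_and_test(train, test))
    return sum(errors) / k                         # single averaged estimate

# toy run: a majority-class predictor on labels that are 70% zeros
labels = [0] * 70 + [1] * 30

def majority_baseline(train, test):
    zeros = sum(1 for j in train if labels[j] == 0)
    pred = 0 if zeros >= len(train) - zeros else 1
    return sum(1 for j in test if labels[j] != pred) / len(test)

print(kfold_error(len(labels), majority_baseline))
```

Because every sample lands in exactly one test fold, the averaged error of the baseline equals the minority-class fraction, 0.30, which makes the harness easy to sanity-check before plugging in a real classifier.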
Table 1. Parameters for the genetic algorithm and the proposed method

  Parameter                                        Value
  Population size                                  20
  Probability of crossover                         0.7
  Probability of mutation                          0.1
  Generation                                       50
  Weight of fitness (λ)                            0.5
  Initial population rate (α)                      0.5
  Initial bacteria population size for BFO (S)     300
  Chemotactic steps (Nc)                           100
  Number of reproduction steps (Nre)               1
  Number of elimination-dispersal events (Ned)     1
  Elimination-dispersal probability                0.5
Table 2. UCI data used in this experiment

  Data set     No. of features   Training data   Test data
  Monk3a       6                 122             432
  Monk3b       15                122             432
  Cancer       9                 399             300
  Credit       9                 490             200
  Ionosphere   34                200             151
Table 3. Comparison results between the proposed method and other methods

  Dataset      ANNIGMA-Wrapper [8]         GA                          Proposed method
               Feature      Error (%)      Feature      Error (%)      Feature      Error (%)
  Monk3a       2.9 ± 0.8    2.4 ± 1.1      2.3 ± 0.7    2.2 ± 0.4      2.0 ± 0.3    1.8 ± 0.2
  Monk3b       2.8 ± 0.0    3.5 ± 0.8      5.7 ± 1.6    3.2 ± 1.0      2.2 ± 1.0    2.0 ± 0.1
  Cancer       5.8 ± 1.3    3.5 ± 1.2      5.2 ± 0.7    2.8 ± 0.7      2.1 ± 0.4    1.5 ± 0.3
  Credit       6.7 ± 2.5    12.0 ± 0.8     7.3 ± 1.6    15.4 ± 1.5     3.1 ± 0.5    9.0 ± 0.8
  Ionosphere   9.0 ± 2.5    9.8 ± 1.3      12.4 ± 2.7   8.2 ± 3.1      8.2 ± 2.3    4.0 ± 0.5
References 1. Dash, M., Liu, H.: Feature selection for classification. Intell. Data Anal. 1, 131–156 (1997) 2. Ross, D.T., et al.: Systematic Variation in Gene Expression Patterns in Human Cancer Cell Lines. Nature Genetics 24, 227–234 (2000)
3. Battiti, R.: Using mutual information for selecting features in supervised neural net learning. IEEE Trans. Neural Networks. 5, 537–550 (1994) 4. Kwak, N., Choi, C.H.: Input feature selection for classification problems. IEEE Trans. Neural Netw. 13, 143–159 (2002) 5. Tan, F., Fu, X., Zhang, Y., Bourgeois, A.G.: Improving Feature Subset Selection Using a Genetic Algorithm for Microarray Gene Expression Data. In: IEEE Congress on Evolutionary Computation, pp. 2529–2534 (2006) 6. Taga, M., Bassler, B.: Chemical Communication among Bacteria. Proc. the National Academy of Sciences of the United States of America, 14549–14554 (2003) 7. DeLisa, M., Wu, C., Wang, L., Valdes, J., Bentley, W.: DNA microarray-based identification of genes controlled by autoinducer 2-stimulated quorum sensing in Escherichia coli. J. Bacteriology. 183, 5239–5247 (2001) 8. Hsu, C.-N., Huang, H.-J., Schuschel, D.: The ANNIGMA-Wrapper Approach to Fast Feature Selection for Neural Nets. IEEE Trans. on Syst., Man, Cybern. B 32, 207–212 (2002)
The Intelligent Video and Audio Recognition Black-Box System of the Elevator for the Disaster and Crime Prevention
Woon-Yong Kim1, Seok-Gyu Park1, and Moon-Cheol Lim2
1 Dept. of Computer and Internet Technique, Gangwon Provincial College, Korea
2 IVTEK Co., Ltd., Korea
{wykim,skpark}@gw.ac.kr, [email protected]
Abstract. With the recent increase in social crime and the growing need for social safety networks, stronger security is required in elevators, where accidents and crimes occur. However, current elevator systems lack security solutions, relying only on conventional CCTV cameras to deter crime. In this paper, we propose an intelligent sensing system combining video and audio analysis, with which we build an integrated proactive response system that detects accidents using sensor, video, and audio data and checks the elevator system remotely. This specialized elevator security system makes it possible to respond to accidents quickly. Keywords: Elevator, Black-box, Video, Audio, CCTV, Security, Intelligent.
1 Introduction
Increasing violent crime and accidents in elevators have created various needs to prevent such accidents and to solve the related problems. According to the Korea Elevator Safety Institute, frequently occurring elevator accidents are attributed to user and operator fault, manufacturing defects, and poor maintenance [1]; accidents caused by user fault account for more than half. In addition, there are intentional damage and crime. To address crime, CCTV installation is now mandatory for elevators, but because the cameras are used only as a means of video storage, they support only post-incident response. To solve these problems of crime and accidents and to improve maintenance, we propose an intelligent elevator system based on video and audio recognition and its effective utilization. It offers an aggressive solution by recording the elevator inside and outside using video, audio, and sensor data, analyzing abnormal situations such as screaming, excessive movement, temperature rises, and fire in the elevator, notifying the central monitoring system of a possible accident, and providing a control service via smart phone. Recently, with the advancement of the elevator industry, attempts to connect it with the IT industry have produced a variety of services; typical examples include EDS, which serves news, weather, and financial information, DSP (Destination Pre-assignment System), floor scheduling, fingerprint recognition, video surveillance, and access control for the elevator [2]. However, elevator systems have not been constructed with an integrated information-technology structure. We therefore propose an integrated elevator model with intelligent video and audio analysis of the elevator environment, so that it can respond actively to accidents and security issues. The rest of this paper is organized as follows. In Section 2, we briefly review related work on elevator systems. In Section 3, we describe the framework of intelligent audio and video recognition for the elevator system. In Section 4, we discuss the features and technical elements of the system. Finally, we conclude in Section 5.
T.-h. Kim et al. (Eds.): ACN 2011, CCIS 199, pp. 245–252, 2011. © Springer-Verlag Berlin Heidelberg 2011
2 Related Work
2.1 An Elevator System Research
The elevator industry is rapidly being transformed for purposes such as high-speed operation, cost reduction, simplified technology and management, and convenience of use. In addition, the need for vertical transportation is growing due to access rights for persons with disabilities and an aging society, and public facilities and public transport are also increasing demand for elevator installation. Elevators offering speed, comfort, safety, affordability, convenience, design, and performance are required because of trends and changes among various users. Research and development has mainly focused on space-saving elevators and enhanced intelligent elevators that provide advanced information services [3][4]. Space-Saving Elevator Model. This approach aims at small, lightweight, high-efficiency actuators and control devices. Representative approaches include Machine Room Less (MRL), Multi-Deck, and Twin elevators. The machine-room-less elevator minimizes space and optimizes operating efficiency by using only the hoistway; representative MRL elevators are Finland Kone's MonoSpace, Otis' GeN2 and LIM, and Mitsubishi's ELENESSA. Double-deck elevators are designed with two elevator cars attached one on top of the other, which allows passengers on two consecutive floors to use the elevator simultaneously, significantly increasing the passenger capacity of an elevator shaft. Such a scheme can prove efficient in buildings where the volume of traffic would normally have a single elevator stopping at every floor. The TWIN system features two elevator cars arranged one above the other in a single shaft using the same guide rails; the cars are not connected and can move independently to different levels, saving space and enhancing lift capacity [2]. Intelligent Enhanced Elevator Model.
This type of system serves access control and visitor control by adding a card reader, touch screen, and so on; examples include Schindler ID and Miconic 10. There are also security-enhanced systems such as fingerprint-recognizing elevators, elevators with Thin Film Transistor LCD screens that provide information to passengers from content providers, the Pay Lift elevator, which charges for actual operation, and the
rope-less elevator. There is also research on elevators that divide routes into parts operated vertically in parallel, and on a space elevator based on electromagnetic conversion [5][6].
2.2 The Elevator Technology Related to IT Convergence
Recently, a variety of IT convergence services have been associated with the elevator, such as EDS for news and weather, DSP, fingerprint recognition, video surveillance, and access control. Representative examples follow [7][8]. Mitsubishi Elevator. The Mic Center collects various operational data from the elevator (excluding video data) and, using a mobile system, notifies the engineer in charge of possible problems before failures occur. The Safe Elevator detects passengers with sudden bursts of activity in the elevator using video recognition; when this happens, the system rings an emergency alarm and moves the car to the nearest floor. ThyssenKrupp Elevator. Destination Selection Control is a highly intelligent control system in which passengers select their destination floor at a keypad terminal before entering the elevator. In a conventional elevator, journeys are often slowed down and interrupted by passengers getting in and out as new calls are received; Destination Selection Control reduces this effect by directing each passenger to the fastest lift for each journey. Hyundai Elevator. IP-DVS (Internet Protocol based Digital Video Security) is a model that integrates CCTV, DVR, the elevator control system, and a high-speed network. It transfers digital images and other information over a spare telephone line in the building to display the elevator's operating information on the video surveillance system.
2.3 The Imaging Technology for an Elevator
Current imaging security equipment for elevators consists of general video cameras, a digital video recorder (DVR) for storing video, a CMS (Central Management System) for watching the situation remotely on multiple monitors, and so on [5]. Camera and DVR technology has been evolving rapidly toward high-resolution, high-quality, and intelligent products, but there is still a lack of research on video security systems and automated inspection solutions. In elevators especially, general CCTV cameras with only video recording and playback functions are installed and running, so advanced IT convergence technology is not used efficiently. Moreover, error notification and inspection of elevators is still performed manually for every elevator, so frequent elevator failures cause many complaints; here too, IT convergence technology is underused. Consequently, alongside the development of the mechanical performance and technology of the elevator itself, various studies are needed on the CCTV systems whose demand continues to increase in elevators and on IT convergence technologies for automated inspection and monitoring.
3 Intelligent Video and Audio Recognition Black-Box System for an Elevator Only
3.1 The System Architecture for the Black-Box of an Elevator
The intelligent video and audio recognition elevator system for accident and crime prevention consists of integrated units: input units of video, audio, and sensors that obtain information in the elevator; the elevator black-box system that intelligently analyzes the input data; the CMS (Central Monitoring System) that handles events when serious situations occur; and a remote management part that receives data on a smart phone. Fig. 1 shows the architecture of this system.
Fig. 1. The system architecture for the proposed system
The proposed system has cameras installed inside and outside the car to capture video and audio information on the elevator status and transmit it to the black-box system in real time. The camera inside the car recognizes the situation in the car, and the camera outside the car is used to inspect the elevator system remotely. A touch panel in the car is used to operate the elevator, display its status, and make a video call to the management office in an emergency. Sensors such as shock and temperature sensors are located in the elevator and utilized by the black-box. The Control Panel (CP) controls the elevator and transmits the elevator status to the black-box. The black-box receives information such as excessive movement, screaming, left-behind objects, people, and elevator status from the CP, cameras, and sensors, and then analyzes the situation. When an emergency exists, the black-box automatically generates an emergency event to the CMS, which deals with the situation; the CMS also sends the information to a smart phone so the system can be controlled remotely.
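The event path from the black-box inputs to a CMS notification might be sketched as follows; all thresholds, event names, and the `notify_cms` stub are illustrative assumptions, not values from the paper:

```python
from dataclasses import dataclass
import time

# Illustrative thresholds; the real system's values are not given in the paper.
SHOCK_LIMIT_G = 2.5
TEMP_LIMIT_C = 60.0
SCREAM_DB = 90.0

@dataclass
class SensorFrame:
    shock_g: float           # G-sensor reading
    temp_c: float            # temperature sensor
    audio_level_db: float    # from the audio analyzer
    excessive_motion: bool   # from the video analyzer

def classify(frame: SensorFrame) -> list:
    """Map one frame of black-box inputs to emergency events for the CMS."""
    events = []
    if frame.shock_g > SHOCK_LIMIT_G:
        events.append("SHOCK")
    if frame.temp_c > TEMP_LIMIT_C:
        events.append("FIRE_RISK")
    if frame.audio_level_db > SCREAM_DB:
        events.append("SCREAM")
    if frame.excessive_motion:
        events.append("EXCESSIVE_MOVEMENT")
    return events

def notify_cms(event: str) -> None:
    # placeholder for the network push to the Central Monitoring System
    print(f"{time.strftime('%H:%M:%S')} -> CMS: {event}")

for ev in classify(SensorFrame(shock_g=3.1, temp_c=24.0,
                               audio_level_db=95.0, excessive_motion=False)):
    notify_cms(ev)
```

In the proposed system, this classification would run continuously in the black-box, with the CMS deciding how to escalate each received event.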
3.2 The Proposed System Framework
The proposed framework consists of the input devices (sensors, cameras, and touch screen), the black-box module, the Central Management System module, and the Control Panel module. Fig. 2 shows the proposed system framework.
Fig. 2. The system framework for the elevator black-box service
The input devices consist of fire and G-sensors, three cameras, and a touch screen to display and operate the system. The black-box module has an intelligent video and audio analyzer, event creator, video call, media information transmitter, sensor data collector, storage service, emergency power system, and remote control service. The CMS has a real-time multiple-video monitoring module, two-way audio and video communication, event-based emergency video monitoring, a storage service, and a mobile service. The control panel has system, status, and error controllers. Each element of the framework is described in Section 4.
4 The Elements of the Intelligent Video/Audio Recognition System
4.1 The Input Device in the Car
The input devices consist of sensors, cameras 1 and 2 to recognize the internal status of the elevator, camera 3, and a touch-screen controller that provides status information, floor calls, door control, and emergency video calls. The sensors and video cameras transfer their data to the black-box, and the black-box transfers status information to the touch screen in real time.
4.2 The Elevator Black-Box Module
The elevator black-box module includes the video and audio data analyzer, the communication module, the sensor data collector, the media data store module, the emergency power manager module, and the remote control module. The black-box provides H.264 video and audio storage and transmission capabilities, gathering and transfer of elevator meta-data, and SSD-based memory management features. The structure of the black-box is presented in Fig. 3.
Fig. 3. The block diagram for the black-box system for the elevator
Fig. 4. The block diagram for power management
The blocks of the black-box comprise input modules for video, audio, and events, an image conversion module, video compression and decompression modules, and modules for storage, transmission, and display of the data and for processing events. The
analog data is converted to digital with a TW2866 A/D converter, compressed by the Hi3515 H.264 codec, stored on disk, and sent over the network. The power system supplies power to the cameras and protects the video and metadata of the system when the power is suddenly cut. Fig. 4 shows the block diagram of the power management.
4.3 Central Management System and Control Panel for the Elevator
The Central Management System consists of event processing, video and audio communication, mobile control, storage management, and real-time multi-monitoring. The system includes a decoder for H.264 audio/video data, a metadata analyzer, an audio/video synchronization module, a display module, data transfer to smart phones, and checking of elevator errors and status, providing a convenient monitoring environment for users. The Control Panel has the mechanical device control modules of the elevator system, such as inverter control, driving control, and group control. When status changes or errors occur, the Control Panel notifies the black-box system, and it provides an environment for checking operating conditions in real time. By building a prevention system that goes beyond simple image data storage and post-incident response, the intelligent video/audio surveillance black-box system provides an integrated environment of elevator and information technology for accident and crime prevention with systematic, integrated management. Because the convergence of IT and the elevator has so far taken place in a fragmented environment, advanced IT technology is underutilized. The proposed system provides an IT convergence structure for a specialized intelligent elevator, including sensor information and video/audio data; in this way, costs can be managed and accidents prevented.
5 The Conclusion
In this paper, we proposed an intelligent black-box system that integrates video/audio, sensor data, and the status of the elevator. Using this information, we analyze possible accidents, crime, and abnormal behavior in the elevator and recognize problem situations, as well as recording internal and external information on the elevator. The system can prevent crime and accidents by giving notice of a possible accident. To address the lack of security in elevator systems that rely only on existing CCTV cameras, we proposed an intelligent sensing system combining video and audio analysis and built an integrated proactive response system that detects accidents, establishing a specialized security system for the elevator to solve and manage these problems.
252
W.-Y. Kim, S.-G. Park, and M.-C. Lim
References
1. Korea Elevator Safety Technology Institute: Elevator Structure and Principles, http://www.kest.or.kr/html/05_information/elevator_organization.php
2. Jang, J.-m.: Elevators of the Latest Technology Trends. The Korean Society for Elevator Engineering (December 2004)
3. Roschier, N.-R.: Up-Peak Handling Capacity by Destination Control. KONE Internal Report (2003)
4. Kim, J.-c.: Implementation of an Elevator Control System Using CAN Communication. In: Proceedings of The Korean Society for Elevator Engineering (2006)
5. Liu, K.: Research on Internet-based Monitoring and Diagnosis for Hoist Equipment. Master's thesis, Taiyuan University of Technology (2003)
6. Hongfang, Y.: Research on Internet-based Telemonitoring and Diagnosis for FMS. Automated Manufacturing 22(5), 23–26 (2000)
7. Hyundai E.: Remote Monitoring System Development Report (2004)
8. Shin, S.-S., Yu, B.-S.: Realization of Elevator Display System with Operating Schedule Information (EDOSI). Journal of Korea Information Processing Society: Part A 12-A(1) (2005)
Real-Time Intelligent Home Network Control System
Yong-Soo Kim
Dept. of Computer & Information System, GeoChang Provincial College, Korea
[email protected]
Abstract. The real-time intelligent home network control system is a system that can control and monitor an intelligent home network anytime and anywhere using mobile devices. In this study, to realize a real-time control system for intelligent home networks, I designed a sub-module that can control various USN sensors using ZigBee, and organized the GUI environment into a client module that users operate from mobile devices. Keywords: Home-Network, ZigBee, USN, Embedded-System.
1 Introduction
Combined with a variety of devices used for electronic equipment, humans, environments, media, and so forth, information technology has become part of our lives. To make life more convenient, it has developed into ubiquitous systems in which products and users are linked together into a network through which they can communicate with each other: device(s) to device(s) and device(s) to users[1]. Among the many sectors of the information technology industry, home networking has the biggest potential. Because it requires no pre-installation tasks and offers high mobility, the sector is gradually adopting wireless technology (both products and services), forming a sub-market. Further, coupling wireless technology with USN (Ubiquitous Sensor Network) will make it possible for all devices and equipment to be automatically sensed, tracked, monitored, and controlled regardless of time and place. This will accelerate the development of the ubiquitous systems on which the intelligent home network is constructed. There are various modes of WPAN (Wireless Personal Area Network), such as ZigBee and UWB. WPAN is a key part of the integrated wireless network system on which USN-based home networking builds. In terms of effectiveness, ZigBee seems the preferred choice because of its low power consumption and low data rate. At present, two controlling systems are used for home networking: the telephone network and the wireless Internet; both are text-based. Because the telephone network control system uses voice mode (voice messages and keys), it requires supplementary modules for voice conversion and analog transmission of control signals. In addition, the line system incurs call charges according to the phone plan a user subscribes to. When the wireless Internet is used to control home devices and equipment, they are linked to a home control server from which the user selects HTML-based menus.
As for mobile devices, no additional programs are necessary, and the server controls everything by sending and receiving control signals. In this process, however, there could be an issue of overhead time.
T.-h. Kim et al. (Eds.): ACN 2011, CCIS 199, pp. 253–260, 2011. © Springer-Verlag Berlin Heidelberg 2011
254
Y.-S. Kim
Furthermore, as more images and multimedia elements are introduced into the design, the overhead time will rise exponentially. To minimize the overhead time, therefore, a mobile device needs an additional program that sends and receives only the original control signals, in binary streaming format, between the device and the server. When webpage information is delivered through a Smartphone, unnecessary information (e.g., webpage layout and header information) is transmitted over the network along with the status information and control signals. This is a barrier to high-speed data transmission in a wireless environment, not to mention a source of high transmission cost. The solution is a Smartphone-specific program that connects directly to the network. This method avoids unwanted information, reduces the amount of data traffic, and in turn increases the efficiency of data transmission. This study suggests an intelligent home network control system to replace the existing methods, which require high cost to build. We suggest that the system use a ZigBee controller, which is most suitable for home networking, low in cost, and versatile in function. The system also uses Windows Embedded CE, on which the Smartphone is based, to connect directly to the network. Consequently, ZigBee and Windows Embedded CE realize an intelligent home network control system at an affordable price.
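The direct-connection idea can be sketched as a minimal phone-side client that opens a TCP socket to the home server and sends a fixed-size binary frame instead of requesting a web page. The host, port, and 3-byte frame layout below are assumptions made for illustration; the paper does not specify its actual wire protocol.

```python
import socket
import struct

def send_control(host: str, port: int, device_id: int, command: int) -> bytes:
    """Send one binary control frame and return the server's raw reply."""
    # Hypothetical 3-byte frame: protocol version, device id, command code.
    frame = struct.pack("!BBB", 0x01, device_id, command)
    with socket.create_connection((host, port), timeout=5) as sock:
        sock.sendall(frame)          # 3 bytes on the wire, not a whole page
        reply = sock.recv(16)        # server answers with current status
    return reply
```

Compared with fetching an HTML page, the payload shrinks from hundreds of bytes to a handful, which is the efficiency gain the paragraph above argues for.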
2 Technology Platform
2.1 Embedded System
An embedded system is an electronic control (computer) system embedded with hardware and software to perform certain tasks. That is, it contains not only simple circuits but also a microprocessor that processes and controls the designated tasks[2]. As technological advances in IC design and manufacturing spread rapidly through the PC industry, everyone came to have a personal computer; following the same path, the technology is now sinking into general consumer appliances and being embedded in information appliances. The applicable areas of embedded systems include cameras, camcorders, microwaves, engine controllers, robots, missiles, facsimiles, routers, PDAs, game players, set-top boxes, and so on.
2.2 Device Driver
USN (Ubiquitous Sensor Network): USN is an infrastructure for an advanced intelligent social community in which every member can use customized information services at any time and place. Within this structure, USN forms and provides context-aware information and knowledge content by sensing, storing, processing, and integrating information about things and their environments through tags and sensor nodes installed everywhere[3][4]. More importantly, USN is one of the most promising growth engines for the next generation, one that will determine national competitiveness and bring a wave of innovation across society. Because of its particular characteristics and limitless growth potential, USN is expected to affect non-information-technology industries as well as both public and private information technology sectors[5]. The applicable areas of USN include national defense, manufacturing, construction, transportation, medical services, environment, education, logistics, distribution, agriculture, farming, and many others. Accordingly, interest is surging both domestically and internationally; some advanced countries have already developed and accumulated a considerable level of USN technology and entered the competitive market with USN applications. USN is optimized for each type of application, and provides a communication environment with application-specific data traffic and protocols. It is a central part of the infrastructure for u-IT839: it satisfies constraints such as limited electric power, size, price, bandwidth, and RF output; supports seamless, convergent, mobile, ubiquitous networking that is invisible, connected, calm, and real; and enables context control with real-time, intelligent processing. USN is therefore the very area we need to develop intensively in the future.
ZigBee: ZigBee is the popular name for the IEEE 802.15.4 standard for wireless personal area networks (WPANs) with low power consumption and low price. Because of its small power consumption, low chip price, and highly stable communication, it is recognized as one of the fastest-growing technology areas. ZigBee is based on the physical layer and MAC-based media access control of the IEEE 802.15.4 standard, on top of which it builds its network, application, and security layers. ZigBee's physical structure is simple and its MAC layer is designed to minimize power consumption[6][7][8]. ZigBee is well suited for remote control and management. It supports functions such as home automation, building control, security automation, and energy control for the purpose of realizing home comfort and safety.
In addition, it can be applied to building automation systems (electric power, heating, air conditioning, illumination, and elevators) and to factory and industrial automation systems through computers and various types of measuring equipment, with even wider applications expected in the future. Some of the most distinctive features of ZigBee are a battery life that lasts for years, a simple system structure that can be operated by an 8-bit microcontroller, and easy installation and maintenance that do not require expensive wired backbones or infrastructure. Furthermore, it is used in many wireless sensing and control fields because of its excellent characteristics: long-range wireless communication, an international standard base, wide frequency coverage, support for large numbers of nodes, and compatibility with many types of networks. In particular, it is regarded as a new technology that plays a key role in building home automation and Ubiquitous Sensor Networks.
3 Preliminary Studies
3.1 Mobile-Based XML Text Transmission System
The mobile-based XML text transmission system was designed and developed on the Java 2 Platform, Micro Edition (J2ME)[9]. To develop the system, we employed the kXML parser, which is distributed as open source and operates on that platform.
The transmission system has the advantage that documents and pages are easy to design using various tags. Extensible Markup Language (XML) is a composite language that can be separated into data, schema, style sheets, and so on, while Hypertext Markup Language (HTML) is not; XML therefore has a better data-processing capability than HTML[49]. When XML is applied to wireless applications, it can save labor and cost while enhancing data sharability and program flexibility. However, it also has a negative side: transmitting, storing, and parsing XML can aggravate the bandwidth load of a wireless system. A wireless Internet connection sends and receives digital control signals on the basis of the TCP/IP protocol. Transmission is divided into two modes: text-based and bit streaming. Text-based transmission currently uses two web-language texts (XML and HTML) to send and receive text-based control signals; such functions are built into a mobile device (a web browser or separate client programs), while the server or set-top box consists of its web server program and control program. XML was created to exchange and share data on the web and to exchange information among application modules, and thus sets its own standard. As wireless network technology rapidly grows and mobile devices such as PDAs, cellular phones, and Smartphones become widely distributed, data transmission is shifting from wire-based to wireless Internet, and studies on transmitting, storing, and parsing information with XML are actively under way. This study observes that excessive data traffic can overload the server, because mobile devices have many limitations and a set-top box (not a PC) is used as the server. Therefore, an additional process is needed to extract the original control signals; however, this parsing is expected to impose another load on the mobile device.
3.2 Designing Ubiquitous Home Automation System for 35m2 Apartment
We designed a 3rd-generation ubiquitous home automation system that satisfies its five essential elements: intelligent environments, world-class model, location-based services, private certification, and automatic behavior. We applied it to the standard apartment size (35 m2) for the Korean middle class to suggest a new-generation model of the ubiquitous home automation system[10]. This home automation design predicts the user's movement lines and behaviors and satisfies those predictions before the user is aware of them. ZigBee-based USN facilitates network formation. However, limited or false predictions may entail unexpected problems, so a high level of image processing algorithm should be applied to make more specific and delicate predictions of the user's behavior. For example, when the user approaches a bed, many algorithmic predictions should be made about his or her further actions, such as sleeping or reading, beyond general predictions of movement direction and behavior. To do so, supplementary devices such as voice recognition and controllers are needed. Unfortunately, voice recognition technology is still imperfect and hard to apply despite its considerable development. Using a controller such as a portable cellular phone, however, can solve this problem thanks to the mature state of mobile programming.
4 System Implementation
4.1 System Configuration
In this study, a Smartphone controls the home system from outside. Figure 1 shows the Home Network Control System. Eight components are controlled, including rooms 1 and 2, the refrigerator, the TV, and the ventilator. The remote control system comprises mobile devices such as Smartphones, a server that plays a relaying role, and a ZigBee controller that controls the home network control system through Radio-Frequency Identification (RFID) tags. The mobile application is a client that runs on a mobile device (e.g., a Smartphone). The application connects to the server through the Internet to send and receive control information in binary streaming format. The server plays a relaying role: it is connected to the ZigBee controller over a serial line to send and receive instructions in binary streaming format through the user interface. Binary streaming here means relaying data over the network as a steady, continuous stream; each packet (a message or message fragment) includes the device and location IDs, status information, intensity of illumination, temperature, and humidity, the last four of which are needed for control.
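A binary status packet of the kind just described could be laid out with Python's struct module as follows. The field widths, ordering, and units are assumptions made for illustration; the paper does not specify the actual wire format.

```python
import struct

# Hypothetical 8-byte packet, network byte order:
# device id (1B), location id (1B), on/off status (1B),
# illumination in lux (2B), temperature in 0.1 degC units (2B, signed),
# relative humidity in % (1B).
STATUS_FMT = "!BBBHhB"

def pack_status(device, location, on, lux, temp_tenths_c, humidity):
    return struct.pack(STATUS_FMT, device, location, on, lux, temp_tenths_c, humidity)

def unpack_status(packet):
    return struct.unpack(STATUS_FMT, packet)
```

A fixed, compact layout like this is what makes binary streaming cheaper than text: the receiver knows every field's offset in advance and needs no parsing.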
Fig. 1. Home Network Control System
4.2 System Implementation
The basic structure of the program is that a mobile device, as the client, accesses the server through the wireless Internet and sends and receives control signals in binary streaming format. The server plays a relaying role and is connected to the ZigBee controller over a serial line to send and receive instructions in binary streaming format through the user interface. Mobile Client Implementation: The client accesses the server and remotely controls the home appliances linked to the UbiHouse network. The program is implemented on Pocket PC 2003, a Windows Embedded CE platform. When not connected to the server, it produces no environment-related output. To connect to the server, the user touches the connection setting shown on the upper part of the screen, enters the IP address, and pushes the "confirmation" button. The screen then displays the environment information for the location of the latest log-in.
Fig. 2. The Screen of Server Connection Setting
When the user touches "setting" on the lower part of the screen, the client checks the current status and can turn devices on or off. As the button is touched, the instruction cycles from "confirmation of status information" to "turning on" to "turning off", and the converted instruction information is transmitted.
Fig. 3. The Screen of Status Information
The Screen of Server Implementation. The server is an integral part of the home network control system. First, the server analyzes the data sent from a client and forwards them to the ZigBee controller, and it saves the data transmitted from ZigBee. In the latter role, the server acts as a database for the ZigBee data: when a client requests the status information of the home appliances, the server delivers the stored data.
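The server's two roles (relaying frames to the ZigBee controller and storing the latest status for later queries) can be sketched as a single frame handler. The 3-byte frame, the command codes, and the in-memory status store are illustrative assumptions, not the paper's actual protocol; the serial link to the ZigBee controller is stubbed out.

```python
import struct

status_db = {}  # device id -> last commanded status (stands in for the database)

def handle_frame(frame: bytes, send_to_zigbee=lambda b: None) -> bytes:
    """Process one client frame: answer status queries, relay commands."""
    version, device_id, command = struct.unpack("!BBB", frame)
    if command == 0:  # status query: answer from the store, no relaying needed
        return struct.pack("!BBB", version, device_id, status_db.get(device_id, 0))
    send_to_zigbee(frame)          # on/off command: relay to the controller
    status_db[device_id] = command # remember the last state that was set
    return frame                   # echo the frame as an acknowledgement
```

In the real system the stub would write to the serial port and the store would be fed by frames arriving from ZigBee rather than by the commands themselves.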
Fig. 4. The Turn-On (Left)/Turn-Off (Right) Status Information Received from All the Control Devices
The Screen of the Intelligent Home Network Control System. To test the control program suggested in this study, we ran the mobile program on a PC through the Microsoft Pocket PC 2003 emulator. We used a server installed with standard Windows application programs and connected it to the PC. We used TCP/IP-based sockets as the interface between the mobile application programs and the server application programs. We also used an RF reader to connect the server to UbiHouse. Figure 5 shows the screen of the system test implementation, and Figure 6 shows how the rooms, ventilator, air conditioner, refrigerator, and so on are controlled by USN.
Fig. 5. The screen of the system test implementation
Fig. 6. All controlled System
5 Conclusion
This paper designed and implemented an interface to control an intelligent home network at any time and place. Among USN-based wireless home networking technologies we selected ZigBee and connected it to the Smartphone, a platform expected to keep growing. First, we created an instruction set to analyze data passing through the ZigBee network and to transmit control signals. At the same time, we defined and processed the instruction set and the status-information format used to exchange control signals and status information between the client module built into the mobile device and the server. We also implemented the server modules that control various USN sensors through the ZigBee network. Second, we implemented a client module built into a mobile device and operated by the user. We created the client module to run on the Pocket PC 2003 mobile operating system, based on the Windows CE environment. The module sends control signals to and receives them from the server, and displays and controls them under a Graphical User Interface (GUI). We composed the GUI environment to make it easy for users to control the home network system. In addition, we used a Smartphone to control and monitor the home network system, producing a home network interface that a Smartphone can operate easily. We also controlled and monitored the control messages transmitted between the platform and the home network system, connecting the mobile device to the home network through socket communication. We predict that the technology to control a home network system through a mobile device will be a key element in the growth of the ubiquitous computing environment. One area on which a further study may focus is the development of home network systems customized to more diversified mobile devices and richer content. In this study, we tried to display everything on the small screen of the mobile phone, which limited us; as more kinds of controlled devices emerge, how to show them all will remain future work, suggesting an additional display control design based on a box-type list.
References
1. Mo, J.J.: Effective Promotion of u-City by u-Service Model. Graduate School of Information and Communication, Dankuk University (2008)
2. Gupta, R.K.: Introduction to Embedded Systems (2003)
3. Sung, K.D.: Politics Direction to Promote the Establishment of u-Sensor Network. Chuenpa 116 (2004)
4. Ho, J.B., Sung, K.Y., Kyo II, J., Heon, Y.D.: Consideration for Information Protection in the Environment of RFID/USN. Electronics and Telecommunications Research Institute, ETRI (2003)
5. Hyuk, C.W.: Demonstration Project of Home Network and Future Plan. Korea Ministry of Information and Communication (2004)
6. ZigBee Alliance, http://www.zigbee.org
7. Chang, S.J., Dong, K.I.: ZigBee. Andong University (2006)
8. In, J.H.: IEEE 802.15.4 WPAN Technology. Electronics and Engineering Association (2005)
9. Sung Il, Y., Kyung, S.K.: Designing Transmission System for Mobile-Based XML Text and Its Implementation. Konhju Media and Information University (2002)
10. Sun, O.Y., Chul, S.K.: Designing Ubiquitous Home Automation System for 35m2 Apartment Adopting USN Environment. Game & Entertainment Paper (2006)
11. Ministry of Information and Communication, IT Information Statistics, http://www.mic.go.kr
12. Korean Wireless Network, http://www.korwin.co.kr
LCN: Largest Common Neighbor Nodes Based Routing for Delay and Disruption Tolerant Mobile Networks
Doo-Ok Seo1,*, Gwang-Hyun Kim2,**, and Dong-Ho Lee1,***
1 Dept. of Computer Science, Kwangwoon University, Seoul, Korea
2 Dept. of Information Communications, Gwangju University, Gwangju, Korea
{clickseo,dhlee}@kw.ac.kr, [email protected]
Abstract. DTNs (Delay Tolerant Networks) refer to networks that can support data transmission in extreme networking situations such as continuous delay and no end-to-end connectivity. DTMNs (Delay Tolerant Mobile Networks) are a specific class of DTNs; the chief considerations of their routing protocols during message delivery are transmission delay, improvement of reliability, and reduction of network load. This article proposes a new LCN (Largest Common Neighbor nodes) routing algorithm that improves the Spray and Wait routing protocol, which prevents the generation of unnecessary packets in the network by letting mobile nodes limit the total number of copies of each message to L. Since a higher L is given to nodes that have directivity toward the destination node and the maximum number of common neighbor nodes, based on the directivity information of each node and its common-neighbor count, more efficient transmission can be achieved. To verify the proposed algorithm, a DTN simulation was built using the ONE simulator. According to the simulation results, the suggested algorithm reduces average delay and unnecessary message generation. Keywords: Delay Tolerant Networks, DTMNs, Mobility, Routing Protocol, ONE simulator.
1 Introduction
Due to the trend shift toward ubiquitous computing environments, the introduction of not only the wired Internet core network but also various networks such as wireless LAN, ad-hoc networks, and sensor networks has accelerated[1]. Users' dependency on mobile devices is also increasing, and such devices are required to be connected whenever and wherever needed. Various wireless network technologies have been introduced to support ubiquity effectively. However, extreme network environments, such as satellite networks, communication between the Earth and another planet, military operations, remote villages, and disasters, add further requirements such as network
* Ph.D. Student of Computer Science, Kwang-Woon University.
** Professor, Dept. of Information Communications, Gwangju University.
*** Professor, Dept. of Computer Science, Kwang-Woon University.
T.-h. Kim et al. (Eds.): ACN 2011, CCIS 199, pp. 261–270, 2011. © Springer-Verlag Berlin Heidelberg 2011
262
D.-O. Seo, G.-H. Kim, and D.-H. Lee
splits, large delays, reconnection after connection loss, high link error rates, and heterogeneous network environments. Therefore new hypotheses are required: large delay, intermittent connection and, most importantly, the absence of a route between the source and the destination. User needs remain unchanged under such extreme network environments. To support heterogeneous network integration, DTNs (Delay Tolerant Networks) were proposed[2-4]. DTNs employ the SCF (Store-Carry-Forward) method and draw attention as a network technology that may be adopted in networks where the existing TCP/IP protocol cannot be applied because of frequent link disconnections and loss of end-to-end connectivity. However, the existing routing protocols used in mobile ad-hoc networks and wireless sensor networks do not reflect the particular properties of DTNs[2]. Although the Epidemic routing protocol and the Spray and Wait routing protocol have been proposed, they have weaknesses: they must distribute a large number of data copies into the network in order to transmit data, and data transmission still takes a long time[2]. They are also based on unrealistic assumptions such as the Random Waypoint mobility model, and are not effective when applied to real-world networks. In this article, each mobile node forms a list of its neighbor nodes, and we propose the LCN routing algorithm, which distributes message copies based on the neighbor node list so that more copies go to the mobile node that has the maximum number of neighbor nodes. This article begins with this brief introduction and explains dynamic routing protocols in the second chapter. The proposed LCN routing algorithm is described in the third chapter, and the conclusion and further research are handled in the fourth chapter.
2 Dynamic Routing Protocol of DTMNs
DTMNs (Delay Tolerant Mobile Networks) are a specific category of DTNs. Every node is mobile, and an end-to-end path does not exist between any given two nodes. Every node in DTMNs is blind and autonomous[5]. Dynamic routing protocols are designed to deliver packets through the network under such circumstances. Typical examples of dynamic routing protocols are the Epidemic routing protocol and the Spray and Wait routing protocol.
2.1 Epidemic Routing Protocol
The Epidemic routing protocol is an algorithm in which nodes exchange messages by trading the index of the messages in their possession. Thus, even if there is no path from the sender node to the destination node, packets can be delivered via surrounding carriers. Figure 1 shows how packets are delivered from sender node S to destination node D. Figure 1 (a) shows node S trying to send to destination node D when there is no path between S and D. At time t1, node S sends its messages when nodes C1 and C2 come within transmission range. Every node moves; at time t2, C2 delivers the message when C3 enters its range. As destination node D lies within the transmission range of C3, the message is finally delivered to the destination[6].
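The index-trading step behind this behavior can be sketched in a few lines: when two nodes meet, each learns which message IDs the other is missing and copies only those across (anti-entropy). The dictionaries standing in for each node's message store are illustrative, not part of the protocol specification.

```python
def epidemic_exchange(store_a: dict, store_b: dict) -> None:
    """On an encounter, copy the messages each node is missing from the other."""
    missing_in_b = set(store_a) - set(store_b)  # IDs a has that b lacks
    missing_in_a = set(store_b) - set(store_a)  # IDs b has that a lacks
    for mid in missing_in_b:
        store_b[mid] = store_a[mid]
    for mid in missing_in_a:
        store_a[mid] = store_b[mid]
```

After enough encounters every message floods to every node, which is exactly why epidemic routing delivers reliably but wastes bandwidth and buffer space, the problem Spray and Wait addresses next.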
LCN Based Routing for Delay and Disruption Tolerant Mobile Networks
263
Fig. 1. Packet transmission over time via neighbor nodes (carriers)[5]
2.2 Spray and Wait Routing Protocol
In the Epidemic routing protocol, a mobile node delivers a copy of every message in its possession, which eventually generates unnecessary packets within the network. As a result, network bandwidth consumption increases and each node faces buffer management problems. The Spray and Wait routing protocol was proposed to improve on this. The total number of copies of a packet is limited to L, and a spray phase and a wait phase are defined for packet delivery. The spray phase is the process of transmitting packets within the limit of L, in which a mobile node delivers the packets in its possession to nodes within transmission range. The wait phase is a stand-by process once L copies of a given packet exist: a node waits until it meets the final destination node, even if new mobile nodes come within transmission range. By limiting packet transmission, unnecessary packets are prevented in the network[7]. Spray and Wait routing consists of the following two phases:
• Spray phase: for every message originating at a source node, L message copies are initially spread (forwarded by the source and possibly other nodes receiving a copy) to L distinct "relays".
• Wait phase: if the destination is not found in the spray phase, each of the L nodes carrying a message copy performs direct transmission (i.e., it will forward the message only to its destination).
Spray and Wait combines the speed of epidemic routing with the simplicity and thriftiness of direct transmission[7].
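One common spraying method, binary spraying, can be sketched as a per-encounter decision rule: a node holding more than one copy hands half of its allowance to a newly met relay, while a node down to its last copy waits for the destination itself. The paper does not commit to a particular spraying variant, so the halving rule below is an illustrative choice.

```python
def on_encounter(copies_held: int, other_is_destination: bool):
    """Return (copies kept, copies handed over) for one encounter."""
    if other_is_destination:
        return 0, copies_held           # deliver: hand everything to D
    if copies_held > 1:                 # spray phase: split the allowance
        handed = copies_held // 2
        return copies_held - handed, handed
    return copies_held, 0               # wait phase: keep the single copy
```

Starting from L copies at the source, repeated halving places a copy on roughly L distinct relays in about log2(L) encounter "generations", which is where the protocol's speed comes from.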
3 LCN Routing Algorithm
The algorithm proposed in this article assigns a higher L, calculated from the mobility information of each node and its maximum number of common neighbor nodes, to mobile nodes that have stronger directivity toward the destination node and more neighbor nodes.
3.1 Message Delivery Based on Directivity Information
Mobile nodes periodically learn their own location, the locations of their neighbors, and their direction information, compare them with the location of the destination node, and distribute a higher L to nodes moving in a similar direction. When a mobile node encounters another node, it performs the following tasks.
• Exchange message types and direction information with each other.
• When there are messages to exchange, compare the destination and direction, and share L.
The algorithm described above can be defined as follows.

FDirection(L, angle) = L/2  (-45° ≤ angle ≤ 45°)
FDirection(L, angle) = L/5  (45° < angle ≤ 90°, -90° ≤ angle < -45°)
FDirection(L, angle) = 0    (90° < angle ≤ 180°, -180° ≤ angle < -90°)    (1)
L is the maximum number of copies a mobile node can deliver on the network, and angle is the difference between the mobile node's heading and the direction to the destination node. Thus, a higher L is distributed to nodes with more similar directivity[1].
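Eq. (1) translates directly into code. Integer division is used here so the result is a whole number of copies; that rounding choice is an implementation detail not fixed by the paper.

```python
def f_direction(L: int, angle: float) -> int:
    """Copies granted by Eq. (1), given the heading difference in degrees."""
    a = abs(angle)          # Eq. (1) is symmetric in the sign of the angle
    if a <= 45:
        return L // 2       # heading roughly at the destination
    if a <= 90:
        return L // 5       # heading sideways: a small share
    return 0                # heading away: no copies at all
```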
Fig. 2. L distribution based on directivity information
Figure 2 explains the process of distributing L based on directivity information. Assume that arbitrary nodes C1, C2, and C3 exist within the transmission range of mobile node S. Since the direction of node C3 is similar to that of node S, S distributes a higher L value to node C3.
3.2 Message Delivery Based on the Maximum Common Neighbor Nodes
Mobile nodes periodically compose a neighbor list of the nodes within their own transmission range. The more neighbor nodes a node has, the more nodes it can deliver its message to. Therefore, a higher L is distributed to nodes that have the maximum number of common neighbor nodes.
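Selecting the relay that shares the most neighbors reduces to plain set intersection over the exchanged neighbor lists. The sketch below uses the node lists from the Figure 3 example; the data structures are illustrative.

```python
def best_relay(my_neighbors: set, candidate_lists: dict) -> str:
    """Return the candidate sharing the most neighbors with this node."""
    return max(candidate_lists,
               key=lambda c: len(my_neighbors & candidate_lists[c]))

# Node S sees C1..C5; each candidate reports its own neighbor list.
S_neighbors = {"C1", "C2", "C3", "C4", "C5"}
lists = {"C1": {"S", "C9"},
         "C2": {"S", "C7", "C8"},
         "C3": {"S", "C4", "C5", "C6", "C7"}}
# C3 shares {C4, C5} with S, so it is the node granted the higher L.
```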
Fig. 3. L distribution based on the maximum common neighbor node
Figure 3 describes the process of distributing L based on the maximum common neighbor nodes. Assume that arbitrary nodes C1, C2, C3, C4, and C5 exist within the range of mobile node S, and that each node composes a list of its neighbor nodes. The neighbor list of node C1 is {S, C9}, that of node C2 is {S, C7, C8}, and node C3 generates the neighbor list {S, C4, C5, C6, C7}. Node S distributes a higher L to node C3, which has the maximum number of neighbor nodes, in order to increase the chance of message delivery. When a mobile node encounters another node, it performs the following tasks.
• Exchange message types and neighbor lists with each other.
• When there are messages to exchange, compare the maximum common neighbor nodes and share L.
3.3 LCN Routing Protocol
The LCN (Largest Common Neighbor nodes) protocol is designed to work with both the directivity information and the neighbor node lists described in sections 3.1 and 3.2. When two arbitrary nodes encounter each other, they perform the following tasks.
• Exchange message types, direction information, and neighbor lists with each other.
• When there are messages to exchange, re-distribute L based on the result of FLCN( ).
The algorithm described above can be defined as follows.

Global
    var beaconInterval
    var C = L / 2

Procedure getDirection
    do
        x1 = x2;
        y1 = y2;
        x2 = preWaypoint.getX( );
        y2 = preWaypoint.getY( );
        angle = getAngle(x1, y1, x2, y2);
    od;
end getDirection

Procedure getNeighborNumber
    do
        num = getCount(beaconInterval);
        appendNum(num);
    od;
end getNeighborNumber

Procedure LCN
    do
        FD = getDirection(x1, y1);
        FN = getNeighborNumber();
        FLCN = C * FD * FN;
    od;
end LCN

Mobile nodes perform the following tasks.
• Each node computes the following values from its own location and its neighbors' locations: its direction, and the neighbor nodes within its transmission range.
• When a node encounters an arbitrary node, they exchange: the type of message, the direction of the node, and the number of common neighbor nodes.
• When there are messages to exchange, L is re-distributed based on the result of FLCN( ), which is computed from the directivity information and the number of neighbor nodes.
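A literal Python rendering of the pseudocode's FLCN might look as follows. The paper defines FLCN = C * FD * FN with C = L/2 but does not fix the scale of FD and FN, so both are treated here as fractions in [0, 1] (FD taken from Eq. (1) relative to L/2, FN as the common-neighbor count normalized by the largest count seen); that normalization is an assumption made for the sketch.

```python
def f_lcn(L: int, angle: float, common_neighbors: int, max_neighbors: int) -> int:
    """Copies granted to an encountered node: C * FD * FN, rounded down."""
    C = L / 2
    if abs(angle) <= 45:
        fd = 1.0            # Eq. (1)'s L/2 share, relative to C
    elif abs(angle) <= 90:
        fd = 0.4            # Eq. (1)'s L/5 share, relative to C
    else:
        fd = 0.0            # heading away from the destination
    fn = common_neighbors / max_neighbors if max_neighbors else 0.0
    return int(C * fd * fn)
```

A node heading straight at the destination that shares all known neighbors thus receives the full half-allowance, and either a poor heading or an empty common-neighbor set drives its share toward zero.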
LCN Based Routing for Delay and Disruption Tolerant Mobile Networks
267
Fig. 4. The process of message delivery of LCN algorithm
Figure 4 explains the process of message delivery in the LCN algorithm, based on the directivity and the maximum number of common neighbor nodes, when sending packets from sender node S to destination node D. Among C1, C2, and C3, sender node S allocates a higher L to node C3, which has the most similar directivity and the maximum number of neighbor nodes. Subsequently, C3 distributes a higher L value to C6 based on the directivity and neighboring list. This reduces the number of unnecessary messages generated in the process of transmitting a message to the destination node D. 3.4
Performance Evaluation
In order to evaluate the performance of the LCN routing protocol proposed in this article, the DTN simulator The Opportunistic Network Environment simulator (The ONE), developed at Helsinki University of Technology in Finland, is used [8, 9]. In this test, two results are compared: the first is the average delay of message arrival, and the second is the overhead rate, computed from the number of unnecessary transmissions required to deliver a message to its destination in the network.

Table 1. Performance evaluation environments

Variable          Value
Number of Nodes   300
Area Size         10 km * 10 km
Simulation Time   1 hr, 3 hr, 6 hr
Buffer Size       5 M
Message TTL       60 minutes
Movement Model    Shortest Path Map Based Movement
Table 1 shows the environment values used in the performance evaluation. The mobility model used in this test is Shortest Path Map Based Movement, which is supported by the simulator.
Fig. 5. Transmission success rate of each protocol
Figure 5 compares the message transmission success rates of the Spray and Wait routing protocol and the LCN routing protocol. There is very little difference between LCN and the Spray and Wait routing protocol; however, LCN shows a better chance of message delivery than the Epidemic routing protocol.
Fig. 6. Latency comparison of each protocol
Figure 6 compares the latency of the three protocols. The Epidemic routing protocol shows a smaller latency than the others. This is because the Epidemic routing protocol has a higher probability of encountering the fastest node toward the destination, as it delivers the message to every node it encounters. In particular, the LCN protocol performs limited flooding, which reduces the probability of a message reaching its destination within a short period of time. There is also additional load in the LCN protocol from maintaining the directivity information and the maximum common neighbor node list.
Fig. 7. Comparison of Spray and Wait, LCN algorithm
Figure 7 compares the expected delay of the Spray and Wait and LCN algorithms as a function of the number of copies L used. The LCN routing protocol proposed in this article employs directivity information and the number of common neighbor nodes to transmit messages. It is shown that the success rate of message transmission is increased and unnecessary traffic is decreased. However, further research is required to reduce the overhead within the LCN protocol implementation.
4
Conclusion and Further Research
Due to the trend shift toward ubiquitous computing environments, the introduction of various networks has accelerated, and various wireless network technologies have been introduced to support ubiquitous environments effectively. In order to support heterogeneous network integration, DTN (Delay Tolerant Networking) has been proposed. The Epidemic routing protocol and the Spray and Wait routing protocol were proposed for effective routing on DTNs, but they suffer from a buffer management problem, as they generate a large number of message copies in the network. They also have the weakness of taking a long time to complete data transmission. The LCN routing protocol proposed in this article distributes a higher L value based on the directivity information of each node and
the maximum common neighbor node at the wait phase of the Spray and Wait routing protocol. It maintains a remarkably low overhead rate compared to the Epidemic routing protocol and a lower overhead rate compared to the Spray and Wait routing protocol. It is also shown that a lower value of L results in a lower transmission delay time. By decreasing the number of unnecessary packets generated by the existing routing protocols, it is confirmed that the buffer management problem of each node can be improved. In the future, in order to evaluate the performance of the LCN routing protocol proposed in this article, various performance evaluations can be performed using the Levy Walk model, which is similar to the movement of real mobile nodes. Further research can also be performed on effective DTN routing algorithms for vehicular networks, which can be used in real DTN environments. Acknowledgement. The present research has been conducted by the Research Grant of Kwangwoon University in 2011.
References
1. Chang, D., Shim, Y., Kim, G., Choi, N., Ryu, J., Kwon, T., Choi, Y.: Mobility Information Based Routing for Delay and Disruption Tolerant Networks. Journal of Information Science, Telecommunications 36(2) (April 2009)
2. Park, Y., Lee, C., Lee, M., Chang, D., Cho, K., Kwon, T., Choi, Y.: BBR: Bearing Based Routing for Delay and Disruption Tolerant Networks. In: Korea Institute of Communication Proceedings (November 2008)
3. So, S.H., Park, M.K., Park, S.C., Lee, J.Y., Kim, B.C., Kim, D.Y., Shin, M.S., Chang, D.I., Lee, H.J.: Delay Tolerant Network Routing Algorithm based on the Mobility Pattern of Mobile Nodes. Journal of Electronic Engineering 46, TC No. 4 (April 2009)
4. DTNRG: The Delay-Tolerant Networking Research Group, http://www.dtnrg.org/
5. Harras, K., Almeroth, K.: Transport Layer Issues in Delay Tolerant Mobile Networks. In: IFIP Networking Conference, Coimbra, Portugal (May 2006)
6. Vahdat, A., Becker, D.: Epidemic Routing for Partially-Connected Ad Hoc Networks. Technical Report CS-2000-06, Duke University (July 2000)
7. Spyropoulos, T., Psounis, K., Raghavendra, C.S.: Spray and Wait: An Efficient Routing Scheme for Intermittently Connected Mobile Networks. In: SIGCOMM 2005 (2005)
8. The ONE, The Opportunistic Network Environment simulator, http://www.netlab.tkk.fi/tutkimus/dtn/theone
9. Keränen, A., Ott, J., Kärkkäinen, T.: The ONE Simulator for DTN Protocol Evaluation. In: 2nd International Conference on Simulation Tools and Techniques (SIMUTools 2009), Rome (March 2009)
A Perspective of Domestic Appstores Compared with Global Appstores Byungkook Jeon Dept. of Information Technology Eng., Gangnung-Wonju Nat’l University, Namwonro 901, Wonju-City, Kangwon Prov., Korea [email protected]
Abstract. In the open markets, the categories, proportions, and download counts of the apps in an appstore are important measures that can determine the success or failure of the appstore. In this paper, we investigate and analyze domestic appstores compared with global appstores in order to vitalize the Korean open markets. To this end, the categories, ratios, and physical distribution of the apps in the representative domestic and global appstores are investigated. As a result, we propose that the ecological prospects of domestic appstores would be improved by approaching fine-grained category models as well as by investing more in app development to go along with the flow of changes in the global open markets. Keywords: App, Appstore, Mobile content, Ecosystem, Category, Smartphone.
1 Introduction The smartphone boom has developed various industries such as H/W, S/W, and mobile contents, and has changed the consumption patterns of mobile telecommunications consumers. Especially in the open markets, the usability of the so-called 'app' as a mobile content has emerged as a hot issue in the smartphone and mobile Internet industries. An appstore is an open market where anyone can buy or sell mobile contents. Therefore, the apps in an appstore are important in terms of both their physical counts and their qualitative aspects. Also, the categories and ratios of the apps, and their download counts according to customers' preferences, are important measures that can determine the success or failure of an appstore. For these reasons, the open markets should keep the smartphone-based mobile ecosystem healthy. In this paper, we investigate and analyze domestic appstores compared with global appstores in order to vitalize the Korean open markets. To this end, the categories, ratios, and physical distribution of the apps in the representative domestic and global appstores are investigated. By analyzing the detailed investigation, we put domestic apps into the mobile ecosystem’s perspective. This paper is organized as follows. In Section 2, we investigate the status of domestic and global appstores. A contrastive analysis of domestic and global appstores is presented in Section 3. The ecological outlook for domestic appstores is described in Section 4. Conclusions and a summary are given in Section 5. T.-h. Kim et al. (Eds.): ACN 2011, CCIS 199, pp. 271–277, 2011. © Springer-Verlag Berlin Heidelberg 2011
272
B. Jeon
2 A Status of Domestic and Global Appstores To get a status of domestic and global appstores, it is necessary to examine the excess and deficiency of apps through an analysis of each category in the appstores. Then, we investigate the categories and the number of apps in each appstore. For validity, the investigation targeted the leading domestic and global appstores. The investigation method was to count apps directly after connecting to each downloadable page of the domestic appstores as a smartphone customer. 1) A status of domestic appstores. Korea's representative appstores are SK Telecom's T-store, KT's olleh-Market, and LG U+'s OZ-store [1, 2, 3]. All of them are run by mobile telecommunication companies, which provide mobile services, mobile marketing, and app services. In addition, T-store and OZ-store mostly service Android-smartphone-oriented apps, while olleh-Market also services apps for the Apple iPhone. According to an investigation conducted in early April 2011, the apps are classified into about 17 categories, but due to the insufficient quantity of apps within its categories, OZ-store is classified into only 11 categories. In order of the number of apps held, the appstores are T-store (49,467 apps), olleh-Market (36,868 apps), and OZ-store (9,466 apps), so the total number of available apps is 95,801. Within olleh-Market, the e-Book apps form an overwhelming majority compared with the other categories. Within T-store as well, the e-Book and Education categories are relatively heavy. Unusually, although olleh-Market also services 330,011 MP3 files, for fairness to the others we exclude them from the app counts, because MP3 files do not require extra development time and their number can be increased at any time simply by copyright agreement.
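The per-store figures quoted above can be checked with a couple of lines (counts taken directly from the text):

```python
# Per-store app counts reported for early April 2011.
counts = {"T-store": 49_467, "olleh-Market": 36_868, "OZ-store": 9_466}

total = sum(counts.values())
print(total)  # 95801, matching the total stated in the text
```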
Fig. 1. The share of each category in domestic appstores
The combined proportions of the apps per category in the domestic appstores are shown in Fig. 1. As seen in the figure, the e-Book category, with a 39% share, is excessive. It is immediately followed by the Education (15%), Entertainment (10%), and Music (9%) categories. In particular, e-Reference, Photography, News/Weather, Travel, Banking/Finance, Sports, and Productivity all have the lowest shares, less than 1% each. Ring Music is a category name unique to the domestic appstores, because these companies can provide phone ringtone services.
2) A status of global appstores. The representative global appstores are Apple's App Store and Google's Android Market [4, 5]. Although there are other appstores, such as RIM's App World, Microsoft's Windows Marketplace, and Nokia's Ovi Store, we exclude them from this investigation because they have relatively few apps. By April 22, 2011, a total of 462,072 apps were aggregated in Apple's App Store [6]. Among them, 373,960 apps are downloadable and 88,112 are inactive, so the inactive apps were excluded from this investigation. Android Market was also found to have 187,876 apps available for download [5]. Android Market divides its apps into the larger Applications and Game categories, and the Game category is characterized by a very fine-grained game genre classification. Although Android Market is the latecomer compared to the App Store, its number of apps has been growing very rapidly since 2010. In the App Store, the e-Book (14%) and Game (15%) categories hold many more apps than the others. In Android Market, the Entertainment (28%) and Game (17%) categories are relatively plentiful. When the two stores are contrasted, most categories in the App Store hold more apps than those of Android Market. However, the Entertainment and Social Networking categories are characterized by more apps in Android Market than in the App Store.
Fig. 2. The share of each category in global appstores
The summed proportions of the apps of each category in the two stores are shown in Fig. 2. The highest proportion is the Entertainment category (17%), followed by Game (16%) and e-Book (13%), respectively. The other categories are distributed almost evenly between 1% and 7%.
3 Contrastive Analysis of Domestic and Global Appstores Most domestic and global appstores differ in their launch times as well as in the nationalities and tendencies of their users. It is therefore difficult to analyze all the appstores using the same measures. Instead, by benchmarking the successful global appstores, domestic appstores can identify development plans, growth factors, and so on.
1) Quantitative analysis of the apps in the appstores. According to our investigation, while there were 31,721 apps in the domestic appstores surveyed in October 2010, the total number of apps has since increased to 95,801 [7]. This shows that the total number of apps has grown by almost three times over just the past six months. Among the global appstores, the total numbers of apps in the App Store and Android Market surveyed in October 2010 were 278,694 and 105,355, respectively. Now they have increased by 34% (373,960 apps) and 78% (187,876 apps) [7]. Fig. 3 compares the ratio of the total number of apps in the domestic appstores with those of the App Store and Android Market. As mentioned above, quite a lot of domestic apps have been developed, but compared to Android Market and the App Store, the total number of apps is still lower by two to three times.
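The growth figures above are internally consistent, as a quick check shows (all counts from the text):

```python
# October 2010 vs. April 2011 app counts as reported in the text.
def growth_pct(old, new):
    """Percentage increase from old to new, rounded to a whole percent."""
    return round((new - old) / old * 100)

app_store = growth_pct(278_694, 373_960)       # ~34%
android_market = growth_pct(105_355, 187_876)  # ~78%
domestic_factor = 95_801 / 31_721              # almost 3x in six months
```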
Fig. 3. The ratio of the aggregated number of the apps
2) The category analysis of the appstores. The distribution of categories for the total number of apps in the domestic and global appstores is shown in Fig. 4. Each bar in the figure shows the aggregated number of apps per category. The apps of the global appstores are arranged in order of higher ratio: Entertainment (17%), Game (16%), and e-Book (13%), respectively. The remaining categories decrease at a slight rate from the Lifestyle (7%) category.
Fig. 4. A distribution diagram of the aggregated apps per category
On the other hand, the total number of apps in the e-Book (39%) category in the domestic appstores is greatly overrated; it is followed by the Music (16%) and Education (15%) categories in order. The apps belonging to these categories do not require extra
development time, and their numbers can easily be increased at any time just by copyright agreement. An interesting feature is the significantly lower rate of apps in the remaining categories. In addition, the ratios of apps in the domestic Entertainment and Game categories are only 10% and 4%, respectively, compared with those overseas, despite Entertainment and Game being potentially the most profitable areas [8, 9]. This directly suggests which categories are appropriate as better revenue models. Therefore, from the viewpoint of CPs (Content Providers) developing apps in the near future, Fig. 4 will be a good indicator of the share of each category. In particular, it indicates which kinds of categories need more research, development, and investment from the CPs.
4 The Ecological Outlook for Domestic Appstores 1) The ecological analysis of the categories. From now on, the domestic CPs need to look for app development areas and check their direction. So we forecast the ecosystem by investigating the distribution of categories presented in the previous Section. The number of categories in the domestic appstores is up to 17, but overseas the categories are subdivided into more than 20. Some categories exist not in the domestic appstores but in the global appstores, such as the Shopping, Business, Healthcare/Fitness, and Medical categories. Even where the Travel, Productivity, and e-Reference categories exist, the apps in the domestic appstores are rarely associated with them. Paradoxically, this means that if CPs develop new apps as a means of entering the appstore markets, these categories can offer many opportunities. For the Game category, it is worth referring to Android Market, which has separated games a little more granularly by genre. Because games are profitable, many game apps will be developed in the future and will occupy a large share of download counts. Therefore, if we evaluate the overall status based on the types, kinds, and numbers of domestic and global categories, the expected category ecosystem is better approached through fine-grained category models as the apps increase exponentially. 2) The ecological analysis of the apps. As described above, the total number of apps in the domestic appstores has increased to 95,801. This shows that the total number of apps has grown by almost three times over just the past six months. Looking at the domestic appstores individually, T-store has had a two-fold increase, olleh-Market five times, and OZ-store three times. These results are related to the fact that the number of domestic smartphone subscribers reached 10 million people in March 2011.
A report of the Korea Communications Commission expects the number of smartphone subscribers to reach about 20 million people by December 2011 [10]. Due to these subscription trends, the appstore ecosystem will expand, accelerating the development of more apps. Over the past six months, the number of apps in the App Store increased most in Business by 58%, Banking/Finance by 56%, and Lifestyle by 53%. Also,
the number of apps in Android Market increased approximately two times in the Travel/Navigation, Game, e-Book, Healthcare/Fitness, Banking/Finance, Lifestyle, and Productivity categories. Among these, the first concern is the rapid increase of apps for business purposes in the Business, Banking/Finance, and Productivity categories. This indicates that the smartphone is being used for the business purposes of companies, beyond the simple functions of a mobile phone. Second, the apps related to LBS (Location Based Services) have increased significantly, invoked by the activation of SNS (Social Networking Services) and the Travel/Navigation category. Third, the app growth in the Lifestyle category proves that the smartphone can be used as a tool to help human life. In addition to these categories, as more apps appear in the Healthcare/Fitness category, the smartphone seems to be used as a means of healthcare or medical support equipment for personal health. Finally, the rapid increase of apps in these categories is expanding the application areas in consideration of the functionality, usability, and scalability of the smartphone. Therefore, in accordance with the global trends of app development, the CPs and the leading domestic appstores should continuously enhance the insufficient category levels as well as develop and invest in apps to go along with the flow of changes in the global open markets.
5 Conclusions Some of the world's largest telecommunications companies have teamed up to create the so-called WAC (Wholesale Applications Community) store, an appstore of sorts that they say will rival Apple's and those of other smartphone makers. The store is designed to encourage developers to create mobile and online applications for all smartphones and operating systems, and it is launching in the near future. In this paper, we investigated and analyzed domestic appstores compared with global appstores in order to vitalize the domestic open markets. Although the total number of apps in the domestic appstores has increased by almost three times over just the past six months, this number is still lower by two to three times compared with the global appstores. In particular, an interesting feature is the significantly lower rate of apps in the categories other than e-Book in the domestic appstores. By analyzing the detailed investigation, we put domestic apps into the mobile ecosystem’s perspective. As a result, we propose that the ecological prospects of domestic appstores would be improved by approaching fine-grained category models as well as by investing more in app development to go along with the flow of changes in the global open markets.
References
[1] http://www.tstore.co.kr
[2] http://market.olleh.com/appMain
[3] http://ozstore.uplus.co.kr
[4] http://www.apple.com/iphone/apps-for-iphone
[5] http://www.androlib.com
[6] http://148apps.biz
[7] Jeon, B., Choi, J.H.: A study on the method to vitalize mobile contents market in the open platform circumstance. Report of Gangnung-Wonju Nat’l Univ. 12 (2010)
[8] OVUM report: Mobile Content and Applications in Emerging Markets: the Future (2010)
[9] PWC: Global Entertainment and Media Market Outlook 2010-2014 (2010)
[10] http://www.kcc.go.kr
A Design of Retrieval System for Presentation Documents Using Content-Based Image Retrieval Hongro Lee, Kwangnam Choi, Ki-Seok Choi, and Jae-Soo Kim NTIS div., Korea Institute of Science & Technology Information(KISTI) {hongro,knchoi,choi,jaesoo}@kisti.re.kr
Abstract. In recent years, a number of text-based slide search engines for finding PowerPoint files have opened on the web. However, general users want to search for information by an intuitive method which detects things based on their knowledge. In this paper, we propose an intuitive retrieval method for searching presentation slides using content-based image retrieval. The proposed method uses the features of a digital image transformed from each slide to calculate the similarity between a stored slide and a query slide, instead of the texts in the presentation slide or the file name. Local and global features are extracted from the digital image transformed from the slide, and we rapidly calculate the similarity between the features. We show that similar slides can be searched efficiently and intuitively by using the content-based image retrieval technique. Keywords: slide search engine, PowerPoint, presentations, information retrieval, content-based image retrieval, feature extraction.
1
Introduction
Information retrieval (IR) is the field of computer science most involved with research and development for search, and searching is a daily activity for many researchers throughout the world. In the case of PowerPoint contents, some estimates put the number of presentations on the web at close to 30 to 50 million. In recent years, many slide-search applications have appeared on the web for finding these specific presentation files. Many existing search engines use a text-based retrieval method which retrieves the texts included in the presentation or the file name. However, with these methods it is not easy to remember the texts in a presentation file, or the contents of a presentation consisting of images and illustrations. In addition, we sometimes utilize PowerPoint for producing figures embedded in word processor documents, and there are times when we check hundreds of presentation files on our own computers in order to modify them. In other words, general users trying to retrieve PowerPoint slides tend to intuitively rely on the objects, persons, and scenes they are cognizant of. In particular, we use figures generated with a presentation tool such as MS PowerPoint to include diagrams or figures in word processor documents, and we would probably have to open every presentation file one by one to modify and reuse existing presentation files. In order to overcome these problems, we propose a retrieval system for presentation slides using Content-Based Image Retrieval (CBIR). We use the features of the image transformed from each slide instead of the texts in the presentation T.-h. Kim et al. (Eds.): ACN 2011, CCIS 199, pp. 278–285, 2011. © Springer-Verlag Berlin Heidelberg 2011
slide. We use the combination of local and global features extracted from the image transformed from the presentation slide to rapidly calculate the similarity of slides. This paper is organized as follows. Chapter 2 covers related works, and Chapter 3 describes the steps of our algorithm. Experimental results are then given in Chapter 4. At the end, we present conclusions and comments in Chapter 5.
2
Related Works
In this chapter, we review previous works and applications in the traditional search engine field. There are three trends: text-based search engines, text-based image retrieval technologies, and content-based image retrieval technologies. 2.1
Text-Based Search Engines
Text-based slide search engines are the traditionally used approach to find a particular slide or classify similar slides. In these methods, slides are tagged in advance with metadata such as keywords and the file name of the slide. One advantage of a text-based slide search engine is the ability to represent both general and specific instantiations of an object at different complexity levels. Most text-based slide search engines for PowerPoint are not standalone; invariably they rely on Google Custom Search. SlideFinder [8] is a well-known slide search engine that indexes texts in slides from many universities all over the world. The SlideWorld [9] PowerPoint search engine searches for a text query across nearly 7 million presentations; this site uses Google Custom Search to do the job. It also hosts presentations uploaded by users, arranged in categories like computers, science, entertainment, culture, education, and many more. These presentations can be searched or narrowed to the local machine, and can also be shared on social media and embedded in blogs or websites. It also has a sister site mostly geared towards medical presentations: patients, for example, can search for presentations on ailments under Health Conditions, and medical practitioners can delve into various medical topics for a specific slide. SlideBoxx [10] is a centralized slide search engine that enables collaboration by connecting the people who need slides with those who created them, increases productivity by reducing the time spent looking for slides and the possible duplication of effort in creating the same slide multiple times, and captures intellectual capital by cataloging and making accessible the work of current and past contributors. Finally, the Presentation 2go [11] search engine is powered by Google and Yahoo at the backend.
The slides can be downloaded as PPT files, viewed in the browser as HTML, or viewed in a Flash viewer. 2.2
Text-Based Image Retrieval
Text-based image retrieval (TBIR) started in the early 1970s, and it has recently become clear that the combination of visual and textual retrieval has the biggest potential. It applies traditional text retrieval techniques to image annotations or descriptions. However, text-based image retrieval also faces many challenges. Through text descriptions, images can be organized by topical or semantic hierarchies to facilitate
280
H. Lee et al.
easy navigation and browsing based on standard Boolean queries. However, since automatically generating descriptive texts for a wide spectrum of images is not feasible, most text-based image retrieval systems require manual annotation of images. Obviously, annotating images manually is a cumbersome and expensive task for large image databases, and is often subjective, context-sensitive, and incomplete [1]. As a result, it is difficult for the traditional text-based methods to support a variety of task-dependent queries. Chen Zhang [2] notes that one major problem is that the task of describing image content is highly subjective: the perspective of the textual descriptions given by an annotator could be different from the perspective of a user. A picture can mean different things to different people; it can also mean different things to the same person at different times. Furthermore, even with the same view, the words used to describe the content can vary from one person to another [3]. He investigates the role of user term feedback in interactive text-based image retrieval, where term feedback refers to feedback from a user on specific terms regarding their relevance to a target image. 2.3
Contents-Based Image Retrieval
In Content-Based Image Retrieval (CBIR), the search analyzes the actual contents of the image rather than metadata such as keyword tags and descriptions associated with the image. Content-based image retrieval, a technique which uses visual contents to search images from large-scale image databases according to users' interests, has been an active and fast-advancing research area since the 1990s. A representative local feature extraction method is the Scale Invariant Feature Transform (SIFT) [4], an approach for detecting and extracting local feature descriptors that are reasonably invariant to changes in illumination, image noise, rotation, scaling, and small changes in viewpoint. The detection stages for SIFT features are scale-space extrema detection, keypoint localization, orientation assignment, and generation of keypoint descriptors. The general visual features most widely used in content-based image retrieval are color, texture, shape, and spatial information. Color is usually represented by the color histogram, color correlogram, color coherence vector, and color moments under a certain color space. Texture can be represented by the Tamura features, Wold decomposition, the SAR model, and Gabor and wavelet transformations. Shape can be represented by moment invariants, turning angles, Fourier descriptors, circularity, eccentricity, major axis orientation, and the Radon transform. The spatial relationship between regions or objects is usually represented by a 2D string. In addition, the general visual features of each pixel can be used to segment each image into homogeneous regions or objects, and local features of these regions or objects can be extracted to facilitate region-based image retrieval [3].
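As a concrete illustration of the color histogram feature mentioned above, the following sketch bins pixel intensities into a normalized histogram. It is pure Python over a toy grayscale "image"; the bin count and pixel values are illustrative assumptions, not part of any system described here:

```python
# Minimal normalized intensity histogram: count pixel values per bin,
# then divide by the pixel count so images of different sizes compare.

def histogram(pixels, bins=4, max_val=256):
    hist = [0] * bins
    for p in pixels:
        hist[p * bins // max_val] += 1
    n = len(pixels)
    return [c / n for c in hist]

image = [10, 20, 30, 250, 130, 70, 70, 190]  # toy 8-pixel "image"
h = histogram(image)
print(h)  # [0.375, 0.25, 0.25, 0.125]
```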
3
Proposed Method
In this paper, we propose a method for detecting similar presentation slides using a content-based image retrieval technique. The process begins by dividing an input presentation file into individual slides. Then, the image generator module
performs conversion of each slide in the presentation file into a digital image. The feature extraction module extracts global and local features from the slides converted into images. The first step of the process, global feature extraction, calculates the color and edge histograms for each image using the proposed descriptor. The second step, local feature extraction, generates the feature vector of each slide image using a local descriptor. The gradient and orientation of each pixel in the image are generated using sub-blocks selected around feature points, and the orientations are arranged into a histogram. In order to measure the similarity between a feature stored in the database and a feature generated from the query image, we use the histogram intersection method proposed by Swain and Ballard [5]. In the field of image classification using local features, the histogram intersection method has recently been used to overcome long processing times. The block diagram of the proposed method is shown in Fig. 1.
Fig. 1. Block diagram of proposed slide retrieval method
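As a concrete illustration, the histogram intersection measure of Swain and Ballard [5] can be sketched in Python (an illustrative sketch, not the authors' implementation; function and variable names are ours):

```python
def histogram_intersection(h1, h2):
    # Swain-Ballard match value: sum of per-bin minima,
    # normalized by the mass of the model histogram h2.
    inter = sum(min(a, b) for a, b in zip(h1, h2))
    total = float(sum(h2))
    return inter / total if total else 0.0

# Identical histograms give similarity 1.0; disjoint ones give 0.0.
query = [4, 2, 0, 1]
model = [4, 2, 0, 1]
sim = histogram_intersection(query, model)
```

Unlike the Euclidean distance, this measure never counts bins that are empty in the query, which is one reason it is fast enough for large-scale matching.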
3.1
Feature Extraction
The first step is to calculate the color histogram for each image. Color histogram analysis has been widely used in image indexing and retrieval systems. A color histogram is invariant to translation and to rotation about the viewing axis, and changes only slowly with the angle of view, scale, and occlusion [2]. Although several similar image groups are placed in the same cluster even when they are not identical images, identical images of the same group are not always included in the same cluster after the first fuzzy c-means clustering: identical images may show lower similarity under various transformation attacks even when they belong to the same group. We therefore perform a secondary clustering to gather identical images into the same cluster, accepting a partial overlap of data that is eliminated after fine matching. To retrieve images with similar semantic meaning, the secondary clustering uses the edge histogram, a distinctive feature of an image.
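A minimal Python sketch of such a quantized color histogram (assuming, as specified later in the experiments of Sect. 4, 4 bins per RGB channel for a 64-dimensional vector; the function name and pixel format are ours):

```python
def rgb_histogram(pixels, bins_per_channel=4):
    """Quantize each 8-bit RGB channel into 4 bins: 4*4*4 = 64 bins total."""
    hist = [0] * (bins_per_channel ** 3)
    step = 256 // bins_per_channel  # 64 intensity levels per bin
    for r, g, b in pixels:
        idx = ((r // step) * bins_per_channel ** 2
               + (g // step) * bins_per_channel
               + (b // step))
        hist[idx] += 1
    return hist
```

Because only bin counts are kept, the histogram is unchanged when the image is translated or rotated about the viewing axis, which is the invariance property exploited above.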
H. Lee et al.
Edges in images constitute an important feature for representing their content. An edge histogram represents the frequency and directionality of the brightness changes in an image. It is a distinctive feature of an image that cannot be captured by a color histogram or a homogeneous texture attribute, and it can be used to retrieve images with similar semantic meaning. The implemented edge histogram descriptor follows the MPEG-7 specification. For the local feature, we first extract feature points using the Harris corner detection algorithm, then generate 64 x 64 blocks around the feature points and divide each block into 4 x 4 sub-blocks for calculating the gradient and orientation of each pixel. The magnitude m(x, y) and orientation θ(x, y) of the gradient are computed as:

m(x, y) = sqrt( (L(x+1, y) - L(x-1, y))^2 + (L(x, y+1) - L(x, y-1))^2 )        (1)

θ(x, y) = tan^{-1}( (L(x, y+1) - L(x, y-1)) / (L(x+1, y) - L(x-1, y)) )        (2)

where L(x, y) is the pixel value at coordinate (x, y) in the sub-block. The orientation of the gradient is quantized into 8 orientations in steps of 45 degrees. Next, each sub-block, consisting of 16 x 16 pixels in total, is described by a histogram over the quantized gradient orientations. Finally, the local histogram is generated by concatenating the histograms of all sub-blocks into a vector.
3.2
Measuring Similarity
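Before turning to similarity, the per-pixel gradient of Eqs. (1) and (2) and the 8-orientation quantization of Sect. 3.1 can be sketched in Python (illustrative only, not the authors' code; `math.atan2` serves as a quadrant-aware tan^-1):

```python
import math

def gradient(L, x, y):
    # Central differences per Eqs. (1)-(2); L is a 2-D list of pixel values.
    dx = L[y][x + 1] - L[y][x - 1]
    dy = L[y + 1][x] - L[y - 1][x]
    m = math.sqrt(dx * dx + dy * dy)
    theta = math.atan2(dy, dx)  # quadrant-aware tan^-1(dy/dx)
    return m, theta

def orientation_bin(theta, bins=8):
    # Quantize the orientation into 8 bins of 45 degrees each.
    t = theta % (2 * math.pi)
    return int(t // (2 * math.pi / bins)) % bins
```

Accumulating `orientation_bin` counts over all pixels of a 16 x 16 sub-block yields that sub-block's 8-bin orientation histogram.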
The performance of the proposed method is measured by the fine-matching frequency and compared with the SIFT-only method. Although the total matching time consists of feature extraction time, clustering time, and fine matching time, we consider only the fine-matching frequency because fine matching is the main process in terms of computation time. The SIFT-only method must compare all pairs of dataset images, so its matching frequency is N(N-1)/2 for a dataset of size N, which consumes a very large amount of computation time. If the number of clusters is k and the number of images in cluster C_i is n(C_i), the total matching frequency is given below; the total data number may be larger than the dataset size N because of data overlap between clusters:

N ≤ Σ_{i=1}^{k} n(C_i)        (3)

The matching frequency of the proposed method and the resulting decrement rate are computed as:

matching frequency = Σ_{i=1}^{k} n(C_i)(n(C_i) - 1)/2        (4)

Decrement(%) = (1 - matching frequency of proposed method / matching frequency of SIFT-only method) x 100        (5)
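Eqs. (3)-(5) can be checked with a small Python sketch (illustrative; the cluster sizes are hypothetical):

```python
def matching_frequency(cluster_sizes):
    # Eq. (4): pairwise fine-matching comparisons within each cluster.
    return sum(n * (n - 1) // 2 for n in cluster_sizes)

def decrement_percent(cluster_sizes, dataset_size):
    # Eq. (5): reduction relative to SIFT-only all-pairs matching, N(N-1)/2.
    sift_only = dataset_size * (dataset_size - 1) // 2
    return (1 - matching_frequency(cluster_sizes) / sift_only) * 100

# Two clusters of 3 images from a 6-image dataset: 6 comparisons
# instead of 15 for the all-pairs SIFT-only method.
dec = decrement_percent([3, 3], 6)
```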
4
Experimental Results
The proposed method is implemented on a PC with a quad-core AMD Opteron 2218 2.6 GHz CPU and 4 GB RAM, using Visual C++ 6.0. The test dataset contains general presentation files collected by a crawler on a website and the National R&D presentation files generated at the Korea Institute of Science and Technology Information (KISTI), about 1000 documents in total. Each image extracted from the presentation files is normalized to the same resolution of 200 x 200 so that all images are comparable. We choose the RGB color space, the most common color space, in which each pixel is represented by a linear combination of three components: red, green, and blue. We choose 4 x 4 x 4 color bins for the R, G, and B channels, yielding a 64-dimensional vector. We compute the edge histogram only from the Y component in the YCbCr color space, segment an image into 4 x 4 sub-images, and generate a histogram of the edge distribution for each sub-image, with edges categorized into five types. Since there are 16 sub-images per image, a total of 5 x 16 = 80 histogram bins are required. We learned the edge-histogram distance between multiple identical image pairs (i.e., a training set) using the Euclidean distance, and these distances are used as a threshold for selecting nearest-neighbor images in the secondary clustering; the histograms of identical images are very similar but not exactly the same because of small differences between them. Every image in a cluster is used as a seed to select the images that fall within the threshold. The accuracy averaged 95%, as shown in Table 1. The experimental results show that the algorithm is robust to geometric distortions such as rotation, scaling, and translation, and achieves a rapid similarity measurement.

Table 1. Experimental results: accuracy and decrement

Presentation file dataset   Number of slides   Matching frequency   Decrement (%)   Accuracy (%)
1                           2021               1692892              62.5            94.2
2                           2101               1320776              59.7            95.1
3                           1932               1082929              58.6            94.7
4                           2611               4835082              74.1            95.6
5                           1637               2127046              61.2            96.4
6                           766                130462               55.5            95.1
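The threshold-based neighbor selection used in the secondary clustering can be sketched in Python (an illustrative sketch; the threshold value and all names are our assumptions, not the authors' code):

```python
import math

def euclidean(h1, h2):
    # Euclidean distance between two equal-length histograms.
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(h1, h2)))

def neighbors_within(seed_hist, candidates, threshold):
    """Select images whose edge histogram lies within the learned threshold
    of the seed image's histogram; candidates are (name, histogram) pairs."""
    return [name for name, h in candidates
            if euclidean(seed_hist, h) <= threshold]
```

Every image already in a cluster acts as a seed, so near-identical images whose histograms differ slightly (e.g., after rescaling) are still pulled into the same cluster.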
Finally, we implemented an application based on our slide retrieval method to test its performance, as shown in Figure 2. Since flexible formulation and modification of queries can only be obtained by involving the user in the retrieval procedure, the user interface was a crucial part of the design. The user interface of our system consists of a query formulation part and a result presentation part.
Fig. 2. Application using the proposed method
5
Conclusion
We proposed a method for searching presentation slides using content-based image retrieval technology. Many existing search engines use text-based retrieval methods that match the text contained in a presentation or its file name. With these methods, however, it is not easy to find a presentation whose title or content one does not remember, especially when it consists of images or illustrations. In addition, we sometimes use presentation software to produce figures embedded in word-processor documents, and there are times when we must check hundreds of presentation files on our own computers to modify one of them. Therefore, in this paper, we used images instead of text for the retrieval of presentation files. We extracted robust and accurate local and global features from the images generated by converting each presentation slide. We combine a local feature, chosen for the speed of the similarity measurement, with a global feature based on color and edge properties for comparing objects in images in detail. The test dataset contains freely available presentation files from a website, 100 presentation files in total with 30 pages on average. The experimental results show that the proposed method is robust across various presentation files and achieves a rapid similarity measurement. The proposed approach can foster the development of applications that intuitively search presentation files on a website or on one's own hard drive. In the
future, we will research algorithms that can search for similar presentation files using only part of an image, together with supporting segmentation and object extraction algorithms.
References
1. Long, F.H., Zhang, H.J., Feng, D.D.: Fundamentals of Content-based Image Retrieval
2. Zhang, C., Chai, J.Y., Jin, R.: User term feedback in interactive text-based image retrieval. In: Proceedings of the 28th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, August 15-19 (2005)
3. Keister, L.H.: User types and queries: impact on image access systems. In: Fidel, R., et al. (eds.) Challenges in Indexing Electronic Text and Images, ASIS, pp. 7–22 (1994)
4. Lowe, D.G.: Object recognition from local scale-invariant features. In: Proceedings of the International Conference on Computer Vision, vol. 2, pp. 1150–1157 (1999)
5. Swain, M.J., Ballard, D.H.: Color indexing. International Journal of Computer Vision 7(1), 11–32 (1991)
6. Czajkowski, K., Fitzgerald, S., Foster, I., Kesselman, C.: Grid Information Services for Distributed Resource Sharing. In: 10th IEEE International Symposium on High Performance Distributed Computing, pp. 181–184. IEEE Press, New York (2001)
7. Mukhopadhyay, S., Smith, B.: Passive capture and structuring of lectures. In: Proceedings of ACM International Conference on Multimedia, pp. 477–487 (1999)
8. SlideFinder, http://www.slidefinder.net
9. Slideworld, http://www.slideworld.com
10. SlideBoxx, http://www.slideboxx.com
11. Presentation 2go, http://www.presentations2go.eu
Data Quality Management Based on Data Profiling in E-Government Environments Youn-Gyou Kook1,*, Joon Lee2,*, Min-Woo Park, Ki-Seok Choi, Jae-Soo Kim, and Soung-Soo Shin3,* 1,2
Dept. of NTIS (National Science & Technology Information Service) KISTI (Korea Institute of Science and Technology Information) Daejeon, Korea 3 Dept. of Quality & Standardization KDB (Korea Database Agency) Seoul, Korea {Ykkook,rjlee98}@kisti.re.kr, [email protected]
Abstract. It is important to manage the quality of e-government data in order to guarantee the reliability of e-government to the public and to keep that data consistent when it is interoperated among legacy information systems in e-government environments. Data quality management removes invalid data and corrects incomplete or revisable data in running databases, so it is necessary to discover, analyze, and review the data that causes distrust in e-government. Data profiling is the process of analyzing the location, structure, and value of data, together with the data profile included in the metadata, in order to manage valid data among those systems. In this paper, we present an approach to data profiling among legacy information systems in e-government environments to manage data quality and to guarantee the reliability of e-government to the public, and we enumerate some invalid, incomplete, and revisable data that must be handled properly. The proposed approach leads to improved e-government data quality and greater reliability to the public. Keywords: Data Quality Management, Data Profiling, Interoperability, Information System, E-Government.
1
Introduction
The information systems of e-government process the business of the administrative organs and the public institutions, and provide services to the public for efficiency, rapidity, convenience, and so on. Recent governments build various information systems to serve ever more of their business, and establish information strategy plans, that is, plans for constructing an information infrastructure that provides, shares, and exchanges various information over the Internet. These systems, although operated independently, have the proper
Corresponding authors.
T.-h. Kim et al. (Eds.): ACN 2011, CCIS 199, pp. 286–291, 2011. © Springer-Verlag Berlin Heidelberg 2011
missions of serving their own business to insiders or to the public, and they share e-government information so that related data can interoperate in distributed environments. The public can thus be served by the information systems of e-government over the Internet: they can request information from e-government, read their own information, receive announcements of e-government, obtain copies of certifications, and so on. All of this is made possible by information technology, through advances in software, hardware, and network infrastructure. Most business has shifted to digitalized, electronic processes through information technology, which brings convenience to human life. We call electronic government (e-government) a government that processes its business efficiently through information systems holding administrative information, public information, and electronic business. Many information systems operated by e-government produce an incalculable amount of information whose very existence is often unknown. We therefore need to manage this mass of information and operate those systems efficiently: managing the same item across various information systems, reducing systems to avoid duplicated business, preventing the use of garbage information, interoperating the information and systems that need to cooperate, making use of the business and services of other systems, and so on. In particular, it is very important to interoperate information among the information systems of e-government and to manage its quality to prevent misuse [1, 3, 4]. In this paper, we describe an approach to data quality management based on data profiling of the information systems in e-government environments, which improves the value of data interoperated among them.
Data profiling is the process of checking the location, structure, and value of data and analyzing the data profile in order to use and manage valid data among those systems. This process helps to find erroneous and abnormal data that would otherwise be difficult to interoperate. We therefore present data profiling to improve the data quality of e-government and enumerate some invalid data to be controlled properly. The proposed approach leads to improved data quality in e-government.
2
Data Profiling of Information Systems in E-Government
It is important to serve administrative information to the public and to process digitalized business in e-government environments. E-government holds various data: notifications, statistics, laws, foreign exchange, identifications, certifications, and other administrative data. If a notification, which is public data of e-government, is misused, the public will be confused once the error becomes known and will not trust the government. So e-government has to guarantee reliability by using only valid data. For interoperating data in e-government, data profiling can support the reliability and integrity of the data sources, both input data and connected data. Data profiling is the process of analyzing database metadata, that is, the current state of a data source such as its columns and records, and checking for reasonable data locations, data structures, and data values. After that process, we can find error-prone statuses and distinguish abnormalities from the checked metadata and data source. Figure 1 shows the steps of data quality management including data profiling.
Fig. 1. The steps of data quality management in e-government
- Gathering and Analyzing Metadata: We gather the design documents produced when building the information systems of e-government and the physical metadata of their databases. The physical metadata must include the table and column names, data types, domain information, constraints, entity relations, and code definitions of the database. We then analyze the physical metadata against the design documents, comparing the lists of tables and columns, the names of tables and columns, and the data types. If we discover an invalid point, we need to verify it.
- Selection of Data Profiling Source and Type: We select the business sources, tables, and analysis types for data profiling of the e-government data source. The business sources and tables define the scope to analyze and check, that is, which tables, columns, keys (primary, foreign, unique), normalization, and so on to select.
- Data Profiling Metadata and Data Source: We analyze the selected sources, that is, the tables, columns, and data. From the analysis of the data status, we can discover omitted values, invalid values, non-unique values, and violations of data and structural integrity. If we discover erroneous or invalid data, we need to verify it.
- Review and Reporting Data Profiling: We synthesize the profiling results and review them with the e-government business managers. The managers confirm the error statuses and invalid data found by profiling, and through discussion with them we draw up the business rules for revising and modifying that data.
- Data Extract: We extract the invalid data in error status in order to transform or cleanse it. There are two types: one needs transformation of the data type, code definition, data format, etc., and the other needs cleansing of table names, column names, invalid data, null values, key data, and so on.
- Data Transform: We transform the data according to the business rules: 1) the e-government business managers transform it by hand, or 2) the data quality management programs modify it according to the business rules.
- Data Cleansing: The e-government business managers have to cleanse the data by hand. The invalid data must be modified to guarantee the reliability of e-government.
- Data Load: The transformed and cleansed data are loaded into the database for migration, sharing, exchange, or synchronization among the legacy information systems of e-government.
3
Data Quality Management in E-Government Environments
For interoperating the data of information systems in e-government, we need to carry out data quality management based on data profiling. We discover error statuses and invalid data through the analysis of metadata and profiling, and draw up business rules to modify them; in this step, the business managers and we transform and cleanse the data according to those rules. To improve data quality, we must discover erroneous and invalid data through data profiling for interoperability among the legacy information systems of e-government. This section presents the violated items that can be found, case by case, based on data profiling, and describes how to improve their quality.

- Metadata Analysis: First of all, we select the source database whose data quality is to be managed and acquire all of its descriptions: the database design description, entity-relationship diagram, domain layout, table layout, column layout, data dictionary, etc. Second, to compare the descriptions with the metadata of the database, we aggregate the metadata from the running source database. Invalid statuses and data are discovered by analyzing this metadata. Finally, we improve the quality of that data, such as invalid table and column names, invalidly set keys (primary, foreign, unique), invalid constraints, and so on.
- Column Analysis: After the analysis of the metadata, we gather the number, names, and attributes of the columns of the source tables. In particular, the attributes of a column are its data type, valid values, unique values, value size, and null values:
  - Data type attribute: compare the data type of the column with the description and review whether the type is proper.
  - Valid data attribute: discover invalid and incomplete data that falls outside the valid scope even when the data type is valid. For example, if the data type of a column is integer and only positive integers are accurate, then negative integers and zero are inaccurate data; if the data type is character and the valid scope is the alphabet, then numbers or special characters are invalid. The column must also be checked against its defined formal pattern, such as a postcode, timestamp, social identification number (SID), International Standard Book Number (ISBN), or International Standard Serial Number (ISSN).
  - Unique data attribute: because this is the unique value identifying a key column in a table, it must not be duplicated or omitted in the column.
  - Data size attribute: find incomplete data that does not fit within the defined size of the column. If a data value exceeds the maximum size, or detailed data must be held below the minimum size, the data size attribute of the column has to be revised.
  - Null attribute: by analyzing the data values in a column, the column property can be revised. If the data value is a significant item, the property of the column can be set to NOT NULL; and if all data values in a column are null, the column may need to be removed.
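The column checks above can be sketched as a small Python profiling routine (illustrative only; the report fields and the ISSN-like pattern are our assumptions, not a standard profiling API):

```python
import re

def profile_column(values, pattern=None):
    """Per-column profiling: null ratio, uniqueness, and an optional
    format check against a defined formal pattern (e.g., ISSN)."""
    n = len(values)
    nulls = sum(1 for v in values if v in (None, ""))
    non_null = [v for v in values if v not in (None, "")]
    report = {
        "rows": n,
        "null_ratio": nulls / n if n else 0.0,
        "unique": len(set(non_null)) == len(non_null),
    }
    if pattern is not None:
        report["invalid_format"] = [v for v in non_null
                                    if not re.fullmatch(pattern, v)]
    return report

# An ISSN-like column with pattern NNNN-NNNN: one valid value,
# one format violation, one null.
report = profile_column(["1865-0929", "18650937", None],
                        pattern=r"\d{4}-\d{4}")
```

Such a report is the raw material for the review step: the flagged values are shown to the business managers, who decide the transform or cleansing rule.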
As mentioned above, we carry out the data profiling process wherever the quality of e-government data needs to be managed. This guarantees the reliability of e-government data to the public and improves the quality of e-government business data among the legacy information systems. In the current information systems of e-government, the business managers try to improve the quality of e-government data and to acquire certification of their data quality management from a trusted certificate authority. To interoperate e-government data among legacy information systems, more detailed and extended data profiling has to be considered. There are heterogeneity problems among those systems when exchanging and sharing data, namely data heterogeneity and relation heterogeneity [6]. Data heterogeneity consists of structural, semantic, and presentational heterogeneity, while relation heterogeneity consists of horizontal and vertical relation heterogeneity; these problems and their solutions are discussed in [5, 6]. In particular, we have to consider extended data profiling covering the unification of column names, data types, data formats, units, dates, constraints, codes, nulls, foreign exchange, deadlines, and so on.
4
Conclusion
It is important to manage the quality of e-government data in order to guarantee its reliability to the public and to improve invalid and incomplete data while sharing and exchanging it among the legacy information systems of e-government environments. Our approach to data quality management analyzes and reviews invalid, incomplete, or revisable data based on data profiling. This paper described a data profiling method that discovers, analyzes, and reviews invalid, incomplete, or revisable data within e-government environments. In analyzing such data, it is necessary to define the scope of data profiling down to the detailed attributes. To improve the quality of e-government data, we have to make a thorough investigation and analyze whether each item is invalid, incomplete,
Data Quality Management Based on Data Profiling in E-Government Environments
291
revisable, or valid. The proposed approach leads to improved e-government data quality and guarantees the reliability of e-government data to the public. In future work, we will define the business rules not mentioned above for managing the quality of e-government data and for interoperating the distributed data among the information systems of e-government. We will also apply the proposed approach in a tool, designing an application for interoperating the distributed e-government data and developing that tool for use in e-government environments.
References
1. Korea Database Agency: Data Quality Assessment Procedure Manual. Data Quality Management Guide-4 (January 18, 2011), ISBN 978-89-88474-09-9
2. IBM: IBM WebSphere Information Analyzer User Guide (August 7, 2007)
3. National Information Society Agency: A guide of technology evaluation standards for guarantee of interoperability (December 2006)
4. Act of E-Government in Korea (May 17, 2010)
5. Kook, Y.-G., Lee, J., Kim, J.-S.: e-Government Grid System for Information Interoperability. The Korea Academia-Industrial Cooperation Society, 3660–3667 (December 2009)
6. Kook, Y.-G., Lee, J., Choi, K.-S., Kim, J.-S.: Dynamic Relation Management of Hierarchical Data for Interoperability in Distributed Environments. Communications in Computer and Information Science 74 (June 2010)
7. ISO/IEC IS 11179: Information technology, specification and standardization of data elements (2003)
Design of Code Template for Automatic Code Generation of Heterogeneous Smartphone Application* Woo Yeol Kim, Hyun Seung Son, and Robert Young Chul Kim Dept. of CIC(Computer and Information Communication), Hongik University, Jochiwon, 339-701, Korea {john,son,bob}@selab.hongik.ac.kr
Abstract. The development of heterogeneous applications has recently become an issue owing to the diversification of smartphone platforms. Heterogeneous smartphone applications can be developed using e-MDD (Embedded Model Driven Development). While e-MDD is divided into Model-to-Model and Model-to-Text transformations, our previous work converted the target-independent model into target-dependent models using the Model-to-Model transformation. This paper concerns the generation of code for those target-dependent models using the Model-to-Text transformation. A template-based approach is used for code generation. As an application example, code templates were written for Windows Mobile and Android, and the code was generated using Acceleo. Keywords: e-MDD (Embedded Model Driven Development), Code Template, Heterogeneous Development, Smartphone, Automatic Code Generation.
1 Introduction
Smartphone platforms used to be quite limited in variety, but they have diversified as interest in smartphones has grown. Typical examples are Apple's iPhone SDK [1], Google's Android SDK [2], and Microsoft's Windows Phone 7 SDK [3], and there are various other smartphone platforms [4]. As the platform types diversify in this way, the reuse of software becomes a bigger issue, because the same smartphone application programs must otherwise be newly developed for each additional platform. Heterogeneous smartphone applications can be developed using e-MDD (Embedded Model Driven Development) [5]. e-MDD is divided into a Model-to-Model transformation and a Model-to-Text transformation. For developing heterogeneous smartphone applications, our previous work used the Model-to-Model transformation to convert the target-independent model into target-dependent models [6,7,8]. This paper is
This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science and Technology (2011-0004203), and by the Ministry of Education, Science and Technology (MEST) and the National Research Foundation of Korea (NRF) through the Human Resource Training Project for Regional Innovation.
T.-h. Kim et al. (Eds.): ACN 2011, CCIS 199, pp. 292–297, 2011. © Springer-Verlag Berlin Heidelberg 2011
regarding the code generation for the previously composed target-dependent models using the Model-to-Text transformation. The OMG (Object Management Group) provides a standard Model-to-Text method for the transformation from model to text (code). Model-to-Text provides a transformation language that converts models built on MOF (Meta Object Facility) [9] into text, and a template-based approach is used as the transformation language. It is used mainly for code generation, which is left entirely in the hands of the user, and it requires composing templates for the code to be generated. As an application example, code templates were composed for Windows Mobile and Android respectively, and the code was generated with Acceleo; as a result, 30% and 35% of the code could be generated for the respective models. This paper is organized as follows. Section 2 explains e-MDD as related work. Section 3 explains the templates for code generation. Section 4 shows the case study. Finally, Section 5 presents the conclusion and future research.
2 Related Work
The Model-to-Text approach is divided into visitor-based and template-based approaches [10]. The most basic visitor-based code generation method is a visitor mechanism that traverses the model's internal representation and writes text to a text stream. An example is Jamda, an object-oriented framework providing a set of classes that represent UML models [11]. Jamda generates code through an API for manipulating the model and a visitor mechanism (CodeWriters). Jamda does not support the MOF standard for defining new metamodels; however, new model element types can be introduced by subclassing the existing Java classes. Template-based model-to-text generation is supported by most MDD tools nowadays. A template usually consists of target text containing spliced meta-code that selects code by accessing information in the source and expands it repeatedly. The LHS holds the execution logic that accesses the source, and the RHS is the executable logic for code selection and repeated expansion, combined with string patterns; there is a definite separation of grammar between the LHS and RHS. An example of this approach is Acceleo, provided by Eclipse [12].
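The template-based idea, static target text plus spliced meta-code that pulls names out of the model, can be illustrated with a toy Python sketch (this is not Acceleo syntax, and the class-dictionary "model" is a stand-in for a real MOF/UML model):

```python
# Static target text (RHS) with placeholders filled from the model (LHS).
CLASS_TEMPLATE = """public class {name} {{
{fields}
}}"""

def generate_java(cls):
    # Repeated expansion: one field declaration per model attribute.
    fields = "\n".join(f"    private {t} {n};"
                       for n, t in cls["attributes"])
    return CLASS_TEMPLATE.format(name=cls["name"], fields=fields)

# Hypothetical model element standing in for a UML class.
model = {"name": "LinkView", "attributes": [("title", "String")]}
code = generate_java(model)
```

Real tools such as Acceleo do the same thing at scale, with the model access expressed in a dedicated transformation language rather than string formatting.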
3 Design of Code Template
A code template describes the code shape for each language, pulls data in from the UML metamodel, and generates the source code. In order to define the code templates, it is necessary to compare and analyze the programming languages. The code templates in this paper are composed based on the class diagram; each template is therefore divided into the class name, attributes, and methods, and the relations between classes were analyzed. The comparison and analysis of the code shows many similarities between Java and C#. To compose and execute the code templates, this paper uses Acceleo [12], based on Eclipse, which complies with the OMG Model-to-Text transformation standard.
(a) Code Template of Windows Mobile (C#)

    [module generateCS('http://www.eclipse.org/uml2/2.1.0/UML')/]
    [template public generate(c : Class)]
    [file (c.name.concat('.cs'), false)]
    public [if (c.isAbstract)] abstract[/if] class [c.name.toUpperFirst()/]
    [for (superC : Class | c.superClass) before(' : ') separator(',')][superC.name/][/for]
    [for (interf : Interface | c.getImplementedInterfaces()) before(':') separator(',')][interf.name/][/for] {
      //association
      private :
      [for (p : Property | c.getAssociations().memberEnd)]
      [p.type.name/] [p.name/]; [/for]
      public :
      [for (p : Property | c.getAssociations().memberEnd)]
      void set[p.name.toUpperFirst()/]([p.type.name/] in); [/for]
      //attribute
      private :
      [for (p : Property | c.attribute)]
      [p.type.name/] [p.name/]; [/for]
      public :
      [for (p : Property | c.attribute)]
      [p.type.name/] get[p.name.toUpperFirst()/]() {
        return this.[p.name/];
      } [/for]
      [for (o : Operation | c.ownedOperation)]
      [o.type.name/] [o.name/]() {
        // TODO should be implemented
      } [/for]
    }
    [/file]
    [/template]

(b) Code Template of Android (Java)

    [module generate('http://www.eclipse.org/uml2/2.1.0/UML')/]
    [template public generate(c : Class)]
    [file (c.name.concat('.java'), false)]
    public [if (c.isAbstract)] abstract[/if] class [c.name.toUpperFirst()/]
    [for (superC : Class | c.superClass) before(' extends ') separator(',')][superC.name/][/for]
    [for (interf : Interface | c.getImplementedInterfaces()) before(' implements ') separator(',')][interf.name/][/for] {
      //association
      [for (p : Property | c.getAssociations().memberEnd)]
      private [p.type.name/] [p.name/];
      [/for]
      [for (p : Property | c.getAssociations().memberEnd)]
      public void set[p.name.toUpperFirst()/]([p.type.name/] in); [/for]
      //attribute
      [for (p : Property | c.attribute)]
      private [p.type.name/] [p.name/]; [/for]
      [for (p : Property | c.attribute)]
      public [p.type.name/] get[p.name.toUpperFirst()/]() {
        return this.[p.name/];
      } [/for]
      [for (o : Operation | c.ownedOperation)]
      public [o.type.name/] [o.name/]() {
        // TODO should be implemented
      } [/for]
    }
    [/file]
    [/template]

Fig. 1. Proposed Code Templates
Design of Code Template for Automatic Code Generation
295
The code template for Windows Mobile, Fig. 1(a), consists of generated code interleaved with placeholders for additional input data. The template is divided into special commands enclosed in "[ ]" and parts that are emitted verbatim. For example, "[if (c.isAbstract)] abstract[/if]" emits the keyword abstract only when the condition c.isAbstract is true, as shown in the figure. Likewise, "[for (p : Property | c.attribute)] [p.type.name/] [p.name/]; [/for]" iterates over every property p in the class's attributes and outputs the property's type name and name. The code template for Android, Fig. 1(b), has the same structure of generated code and additional input data. As shown in the figure, "class [c.name.toUpperFirst()/]" changes the first character of the class name to uppercase and outputs the class text; if the class name is cName, the output will be CName. The differences between the Windows Mobile and Android code templates reflect the differences between C# and Java, the languages used on each platform. The code templates composed in this way finally manipulate the data of the UML metamodel and generate the code.
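As an illustration of what the template constructs above expand to, the following sketch mimics the "[if (c.isAbstract)]" guard and the "[for (p : Property | c.attribute)]" getter loop with plain Java string building. This is ours, not the Acceleo engine; the simple type/name pairs stand in for the Eclipse UML2 metamodel that the real templates query.

```java
import java.util.List;

// Illustrative sketch only: mimics the expansion of the paper's template
// commands "[if (c.isAbstract)] abstract[/if]" and
// "[for (p : Property | c.attribute)] ... [/for]".
// The class/attribute model here is hypothetical, not the UML2 metamodel.
public class TemplateSketch {

    // Emulates the [if (c.isAbstract)] guard and c.name.toUpperFirst().
    static String classHeader(String name, boolean isAbstract) {
        String prefix = isAbstract ? "public abstract class " : "public class ";
        return prefix + Character.toUpperCase(name.charAt(0)) + name.substring(1);
    }

    // Emulates the [for] loop over attributes producing Java getters;
    // each entry in attrs is a {type, name} pair.
    static String getters(List<String[]> attrs) {
        StringBuilder sb = new StringBuilder();
        for (String[] a : attrs) {
            String up = Character.toUpperCase(a[1].charAt(0)) + a[1].substring(1);
            sb.append("public ").append(a[0]).append(" get").append(up)
              .append("() { return this.").append(a[1]).append("; }\n");
        }
        return sb.toString();
    }
}
```

For instance, classHeader("cName", false) yields the "public class CName" header described in the text.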
4 Code Generation of Heterogeneous Smartphone Applications Using the Code Templates

The final code generated with the code templates is shown in Figure 2. Because the input class diagram has two classes, the LinkViewController and LinkView classes are created for the Windows Mobile application, as shown in Figure 2(a). Since Windows Mobile attaches event handlers inside functions, the button-click handlers are added as internal functions of LinkViewController. Development is completed by filling in the bodies of the corresponding functions in the automatically generated code. In the Android application of Figure 2(b), the classes LinkViewController, Btn1_onClick, Btn2_onClick, and LinkView are created because the input class diagram has four classes. The Android application, too, can be completed just by adding the function bodies to the automatically generated code. The generated LinkViewController class contains the Java form of the UML model's onCreate. Android attaches event handlers using anonymous classes, so the two buttons are separated into their own classes. onClick of the Btn1_onClick class moves to the assigned page; onClick of the Btn2_onClick class draws a circle and a rectangle alternately on the screen. The code that actually draws the circle and rectangle is placed in the LinkView class. The code is not generated completely from the UML design; at present, about 30% of the code for the Windows Mobile application and about 35% of the code for the Android application can be generated. The design and implementation procedure can be carried out more swiftly as the code generation rate increases.
296
W.Y. Kim, H.S. Son, and R.Y.C. Kim
(a) Model of Windows Mobile (C#):

class LinkViewController : Form {
  Linkview linkView;
  public LinkViewController() {
  }
  private void Btn1_onClick(object sender, EventArgs e) {
  }
  private void Btn2_onClick(object sender, EventArgs e) {
  }
}
class LinkView : Panel {
  private void OnPaint(object sender, PaintEventArgs e) {
  }
  public void DrawRect(PaintEventArgs e) {
  }
  public void DrawCircle(PaintEventArgs e) {
  }
}

(b) Model of Android (Java):

public class LinkViewController extends Activity {
  public void onCreate(Bundle savedInstanceState) {
  }
}
class Btn1_onClick extends Button.OnClickListener {
  public void onClick(View v) {
  }
}
class Btn2_onClick extends Button.OnClickListener {
  public void onClick(View v) {
  }
}
public class LinkView extends View {
  public LinkView(Context context, AttributeSet attrs) {
  }
  protected void onDraw(Canvas canvas) {
  }
  public void DrawRect(Canvas canvas) {
  }
  public void DrawCircle(Canvas canvas) {
  }
}

Fig. 2. Model-to-Text Transformation
5 Conclusion

This paper generated code from a previously composed target-dependent model using Model-to-Text transformation. A template-based approach was used for code generation. Although this approach requires composing templates, the code templates were designed by analyzing existing code for Windows Mobile and Android, and, for executing the model transformation, the templates were written in the grammar of the Acceleo tool. As an application example, code was generated by applying the code templates to the Windows Mobile and Android platforms. As a result, 30% and 35% of the total code could be generated for Windows Mobile and Android, respectively. The proposed code templates can generate skeleton code only, because they are based on the class diagram. In the future, the quantity and quality of the generated code will be enhanced by composing code templates that also use the sequence and state diagrams in addition to the class diagram. Adding such diagrams will make it possible to develop heterogeneous smartphone applications more swiftly and effectively.
References 1. Gang, D.: Touching the iPhone SDK 3.0. In: Insight (2009) 2. Seokhoon, K.: A Trend of Android Platform. The Korea Contents Association Semiannual 8(2), 45–49 (2010) 3. Wigley, A., Moth, D., Foot, P.: Microsoft Mobile Development Handbook. Microsoft Press, Redmond (2007) 4. Jegal, B.: Smartphone market and Mobile OS Trends. In: Semiconductor Insight (2010) 5. Son, H.S., Kim, W.Y., Kim, R.Y.C.: Semi-Automatic Software Development based on MDD for Heterogeneous Multi-Joint Robots. In: CA 2008 (IEEE), vol. 2, pp. 93–98 (2008) 6. Kim, W.Y., Son, H.S., Kim, J.S., Kim, R.Y.C.: Development of Windows Mobile Application using Model Transformation Technique. Korea Computer Congress 16(11), 1091–1095 (2010) 7. Son, H.S., Kim, W.Y., Jang, W.S., Kim, R.Y.C.: Development of Android Application using Model Transformation. In: KIISE & KIPS Joint Workshop on Software Engineering Technology 2010, vol. 8(1), pp. 64–67 (2010) 8. Kim, W.Y., Son, H.S., Yoo, J., Park, Y., Kim, R.Y.C.: A Study on Target Model Generation for Smartphone Applications using Model Transformation Technique. In: ICONI 2010, vol. 2, pp. 821–823 (2010) 9. OMG: MOF Model to Text Transformation Language, v1.0, OMG Available Specification (2008) 10. Czarnecki, K., Helsen, S.: Feature-Based Survey of Model Transformation Approaches. IBM Systems Journal 45(3), 621–664 (2006) 11. Jamda: The Java Model Driven Architecture 0.2 (2003), http://sourceforge.net/projects/jamda/ 12. Obeo, Acceleo User Guide, http://www.acceleo.org/
A Study on Test Case Generation Based on State Diagram in Modeling and Simulation Environment*

Woo Yeol Kim, Hyun Seung Son, and Robert Young Chul Kim
Dept. of CIC (Computer and Information Communication), Hongik University, Jochiwon, 339-701, Korea
{john,son,bob}@selab.hongik.ac.kr
Abstract. In conventional testing, test cases are generated in the design stage, but the actual tests can be executed only after implementation. Because of this large time gap between designing a test and executing it, errors in the test design and in the software design are detected late. This paper proposes a test case generation method that enables automatic testing in a virtual simulation environment. The proposed method generates test cases automatically from the state diagram and executes them in a virtual simulator. It reduces the time gap between test design and test execution and, accordingly, reveals errors in the test cases and problems in the design promptly. As a result, errors can be identified in the early stage of software development, saving the time and expense needed for development. Keywords: Articulated Robot, Modeling & Simulation, Test, Virtual Environment, Model based Test.
1 Introduction

In the software development life cycle, the cost of an error differs significantly according to the stage in which it is found. If finding and fixing an error in the requirements phase costs 1, finding and fixing it in the production phase costs 30 to 100 times as much [1]. Software errors are found when tests are carried out; therefore, if software tests can be executed earlier, errors can be found sooner and the development period and expense can be reduced. In conventional testing, however, tests can be executed only after software development is completed. This paper suggests a method for automatically generating test cases that can be processed in a virtual simulation environment [2,3,4,5]. The proposed method creates the test cases from the state diagram, and the generated test cases can be executed directly in the virtual simulation environment. The test case generation method converts the state diagram into a state table and generates the testing transition tree based *
This research was supported by the MKE (The Ministry of Knowledge Economy), Korea, under the ITRC (Information Technology Research Center) support program supervised by the NIPA (National IT Industry Promotion Agency) (NIPA-2011-(C1090-1131-0008)) and by the Ministry of Education, Science and Technology (MEST) and the National Research Foundation of Korea (NRF) through the Human Resource Training Project for Regional Innovation.
T.-h. Kim et al. (Eds.): ACN 2011, CCIS 199, pp. 298–305, 2011. © Springer-Verlag Berlin Heidelberg 2011
on it. Test case IDs are identified and the test cases are generated from the created transition tree. For processing in the simulator, each generated test case is converted into an event list by separating its events and actions. When the event list is processed in the virtual simulation environment, the robot is activated and Pass/Fail is determined by observing the robot's behavior. In conventional testing, test cases are generated in the design stage but can be executed only after the software is implemented [6], creating a corresponding time gap between test design and test execution. The proposed method, in contrast, does not wait until the implementation stage: the test cases are executed directly in the virtual simulation environment, so the tests are processed in the design stage and design errors can be found early. This paper is organized as follows. Chapter 2 describes model based testing as related work. Chapter 3 explains the proposed test case generation. Chapter 4 shows a case study of the proposed method. Finally, Chapter 5 presents the conclusion and future research.
2 Related Work

The following is an overview of model based testing. Andras Toth et al. [7] proposed a framework for model-level testing of UML models. A planner algorithm is used for automatic test case generation. The input of the framework is the UML state diagram exported to a tool-independent XML format assigned by the UML tool. A conversion program automatically generates a text file by producing a planner model that can be manipulated like a formal-language form of the UML state diagram. The planner metamodel provides a high-level representation so that the methodology stays open to other planner tools. As a result of this project, the UML design can be tested and design flaws can be found in the modeling stage of the primary development process, before the realization activity, saving a significant amount of effort and expense. The theoretical background for extracting model based test cases in formal conformance testing of UMLSC (UML State-Charts) was given by Stefania Gnesi et al. [8]. Bertolino et al. [9] suggested an aggregation method that integrates the sequence diagram with the state diagram to extract a complete and rational reference model that can be used in an industrial context; this model is used to extract the test cases automatically. The authors extracted test cases based on the UML specification from component- and object-based contexts. The objective of their work is to process a complete model from the state and sequence diagrams and to build UIT (Use Interaction Test) cases from the model. The method guarantees that all allowable sequences are included; however, it cannot provide any coverage measure on the implemented system. It satisfies the important requirements posed by industry.
The advantage of this method is that it creates accurate test cases and induces less testing effort by providing as much nonconformity information as possible in the model.
3 Test Case Generation in M&S

The purpose of test case generation based on the state diagram is to verify the relations between events, behaviors, actions, states, and state transitions. Using this technique, it can be determined whether the state based behavior of the system satisfies the system specifications. There are three reasons for faults in a state based system. The first is that the state diagram does not translate the system's functional specification accurately. The second is that the syntax of the state diagram is wrong or inconsistent. The third is the conversion from the state diagram to code: this is not a problem when an automation tool is used, but it may cause trouble when the conversion is done manually.
Fig. 1. Flowchart of the test case generation based on state diagram
Figure 1 shows the test case generation procedure based on the state diagram. Initially, the state diagram model is converted into a state table. The state table is organized by states and events, so it can express the state of every situation. Then the state transition tree is created from the state table. The state transition tree recursively arrays the states reachable from each state; here, the test case level varies according to the recursion depth. Figure 2 shows how to build the state table. Each state is listed across the top of the table and each event down the left side; the state that can be reached when the event occurs in a given state is then indicated at the intersection.
Fig. 2. State table conversion in the state diagram
Figure 3 shows how to create the transition tree. All states are arrayed in sequence at the top; then, beneath each state, all the states reachable from it in the next step are indicated. When states that can be reached again are arrayed recursively, the number of generated test cases varies according to the depth of the array. Arraying all of these cases in sequence yields the test cases.
Fig. 3. Creation of state transition tree in the state table
The final test cases are generated using the state transition tree; the generated test cases are shown in Table 1. Each generated test case describes the scenario executed along the states of the state transition tree.

Table 1. Test case

TCID | Initial State | Event | Action | Next State | Event | Action | End
TC1  | S_A           | E1    | Do     | S_B        | E2    | Do     | S_A
TC2  | S_B           | E2    | Do     | S_A        | E1    | Do     | S_B
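The path from state table to test sequence can be sketched in code as follows. This is an illustrative Java sketch of ours, not the paper's tool: the state table maps (state, event) pairs to next states, and each alternating state/event path of a chosen depth through the transition tree yields one test case such as TC1 in Table 1.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Hypothetical sketch of the paper's procedure: a state table maps
// (state, event) -> next state; the transition tree is built by recursively
// expanding reachable states up to a given depth, and each root-to-leaf
// path becomes one test sequence (test case).
public class TransitionTreeSketch {

    // State table in the style of Fig. 2: row = current state, column = event.
    static final Map<String, Map<String, String>> TABLE = Map.of(
        "S_A", Map.of("E1", "S_B"),
        "S_B", Map.of("E2", "S_A"));

    // Expand every alternating state/event path of the given depth.
    static List<List<String>> testSequences(String start, int depth) {
        List<List<String>> out = new ArrayList<>();
        expand(start, depth, new ArrayList<>(List.of(start)), out);
        return out;
    }

    private static void expand(String state, int depth,
                               List<String> path, List<List<String>> out) {
        if (depth == 0) { out.add(new ArrayList<>(path)); return; }
        for (Map.Entry<String, String> t
                : TABLE.getOrDefault(state, Map.of()).entrySet()) {
            path.add(t.getKey());    // event
            path.add(t.getValue());  // next state
            expand(t.getValue(), depth - 1, path, out);
            path.remove(path.size() - 1);
            path.remove(path.size() - 1);
        }
    }
}
```

For example, testSequences("S_A", 2) produces the single path S_A, E1, S_B, E2, S_A, which corresponds to TC1 in Table 1.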
The generated test cases cannot be processed directly in the simulator, so each test case is converted to an event list, as shown in Table 2, for execution. Table 2 is the result of converting the test cases of Table 1 to event lists. TCID is the ID of the test case. In the Type column, s means state and e means event. The State/Event column holds either a state or an event. P/F means Pass/Fail. The states and events in the event list are processed alternately, and the robot executes the corresponding behavior whenever the event list is processed in the virtual simulator environment.

Table 2. Event list

Test Case ID | Type | State/Event | P/F
TC1          | s    | S_A         |
TC1          | e    | E1          |
TC1          | s    | S_B         |
TC1          | e    | E2          |
TC1          | s    | S_A         |
TC2          | s    | S_B         |
TC2          | e    | E2          |
TC2          | s    | S_A         |
TC2          | e    | E1          |
TC2          | s    | S_B         |
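The conversion from a test case to an event list can be sketched as follows (an illustrative sketch of ours, not the paper's tool): a test sequence of alternating states and events is flattened into Table 2-style rows, tagging even positions as states (s) and odd positions as events (e).

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch: flatten a test-case sequence of alternating states
// and events into event-list rows of the form "TCID type value", where
// type is s (state) or e (event), as in Table 2.
public class EventListSketch {
    static List<String> toEventList(String tcid, List<String> sequence) {
        List<String> rows = new ArrayList<>();
        for (int i = 0; i < sequence.size(); i++) {
            String type = (i % 2 == 0) ? "s" : "e";  // states and events alternate
            rows.add(tcid + " " + type + " " + sequence.get(i));
        }
        return rows;
    }
}
```

Applied to TC1's sequence S_A, E1, S_B, E2, S_A, this reproduces the five TC1 rows of Table 2.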
4 Case Study

This application case shows a multi-jointed robot [10,11] moving to a target position using the four directions forward, backward, left, and right. The articulated robot is modeled with the state diagram shown in Figure 4. 'Idle' is the initial state of the robot: the state when the robot is first activated or when all work has been finished. 'Initialized' is the resetting state of the robot; the target of the robot is assigned
and the coordinates are set up in this state. 'goRobot' is the state in which the robot is moving to the nominated location; in this state the robot moves forward or backward while checking its current position. 'goLeft' is the state in which the robot is turning left, and 'goRight' the state in which it is turning right. 'Stopped' is the state entered when the user suspends the robot. 'Stopped by Intrusion' is the state entered when the robot arrives at the destination. The robot is in the 'Idle' state when activated, and it returns to the 'Idle' state or stops completely when the user commands it to stop.
Fig. 4. State diagram of articulated robot
The test cases are generated automatically for test execution. The state diagram is composed with the design tool as shown in Figure 5, and the state table is created by selecting the '1▶' button in the red block.
Fig. 5. Creation of state table
The state table shows which state is called when a specific event occurs in the corresponding state. With the state table, it is easy to check which state is called when a certain event occurs in the current state. Figure 6 shows the created state table.
Fig. 6. State table
The state transition tree is generated from the current state table by selecting the '2▶' button on the state table screen. The state transition tree draws, in tree form, all the states that can be called from the current state, for visual checking. The tree has more branches as the depth increases, and the depth can be assigned by selecting Switch. Figure 7 shows the created state transition tree.
Fig. 7. State transition tree
The test cases are finally generated by selecting the '3▶' button in the state transition tree. The test cases are generated as shown in Figure 8; through them, it can be checked in table form which event and which action can occur in the current (start) state and which state will be next.
Fig. 8. Generated test case
A test case is inserted into the event list, as shown in Figure 9, by double-clicking it among the generated test cases. When the Run button is clicked, the list is processed automatically from No. 1 to the end. Time is the event generation frequency, and repeat is the number of repetitions.
Fig. 9. Event list
5 Conclusion

Executing tests in a virtual simulation environment requires automatic test case generation and a way to execute the generated test cases. This paper developed a tool supporting this procedure, applied it to a multi-jointed robot, and generated and executed the test cases. As a result, test cases generated in the design stage can be executed directly in the virtual simulation environment. The proposed method overcomes the conventional gap between test design and test execution and finds design errors by running the tests without waiting for the implementation stage. It shortens the time between designing and executing a test and enables prompt countermeasures against problems in the design by making errors in the test cases easy to find. A remaining problem of the proposed method is that the tester must visually check and confirm, in person, the test cases executed in the virtual simulation environment. To overcome this weak point, an automatic checking method against the expected test results is under research as future work.
References 1. Boehm, B.W.: Software Engineering Economics. Prentice-Hall, Englewood Cliffs (1981) 2. Son, H.S., Kim, W.Y., Kim, R.Y.C.: Implementation of Technique for Movement Control of Multi-Joint Robot. In: The 30th KIPS Fall Conference 2008, November 14, vol. 15(2), pp. 593–596 (2008) 3. Kim, W.Y., Son, H.S., Kim, R.Y.C., Carlson, C.R.: MDD based CASE Tool for Modeling Heterogeneous Multi-Jointed Robots. In: CSIE 2009, vol. 7, pp. 775–779. IEEE Computer Society, Los Angeles/Anaheim (2009) 4. Kim, J.S., Son, H.S., Kim, W.-Y., Kim, R.Y.C.: A Study on Education Software for Controlling of Multi-Joint Robot. Journal of The Korean Association of Information Education 12(4), 469–476 (2008) 5. Kim, J.S., Son, H.S., Kim, W.-Y., Kim, R.Y.C.: A Study on M&S Environment for Designing The Autonomous Reconnaissance Ground Robot. Journal of the Korea Institute of Military Science and Technology 11(6), 127–134 (2008) 6. Burnstein, I.: Practical Software Testing. Springer, Heidelberg (2003) 7. Toth, A., Varro, D., Pataricca, A.: Model Level Automatic Test Generation for UML State-Charts. In: Sixth IEEE Workshop on Design and Diagnostics of Electronic Circuits and System, DDECS 2003 (2003) 8. Gnesi, S., Latella, D., Massink, M.: Formal Test Case Generation for UML State-Charts. In: Ninth IEEE International Conference on Engineering Complex Computer Systems: Navigating Complexity in the e-Engineering Age (2004) 9. Bertolino, A., Marchetti, E.: Introducing a reasonably complete and coherent approach for model based testing. In: Jensen, K., Podelski, A. (eds.) TACAS 2004. LNCS, vol. 2988. Springer, Heidelberg (2004) 10. McGhee, R.B., Frank, A.A.: On the Stability Properties of Quadruped Creeping Gaits. Mathematical Biosciences 2(1/2) (1968) 11. Raibert, M.H.: Legged Robots. ACM 29(6), 499–514 (1986)
An Efficient Sleep Mode Procedure for IEEE 802.16e Femtocell Base Station

Sujin Kwon 1, Young-uk Chung 2, and Yong-Hoon Choi 3
1 Namyang R&D Center, Hyundai Motor Company, Korea
2 Department of Electronic Engineering, Kwangwoon University, Korea
3 School of Electronics and Information Engineering, Kwangwoon University, Korea
{yuchung,yhchoi}@kw.ac.kr
Abstract. Reducing energy consumption has become an important issue in mobile communication systems. In particular, the base station (BS) is considered a major component of the energy consumption budget. To address this problem, femtocell systems with a BS sleep mode can be adopted. In this paper, we propose two BS sleep mode procedures to improve energy efficiency. Simulation results show that the proposed procedures can reduce energy consumption, especially in a femtocell environment. Keywords: BS sleep mode, power saving, energy consumption.
1 Introduction

Energy efficiency is an emerging issue in mobile communication systems. It was reported that mobile communication systems consumed nearly 80 TWh of electricity in 2008 [1], and this is expected to grow into a major share of the world's energy consumption in the near future [2]. Many communication system operators have therefore become interested in energy saving approaches. In particular, power consumption in the base station (BS) is one of the major sources. For example, in UMTS, one Node-B consumes around 1500 W on average and contributes about 60-80% of the whole network's energy consumption [3],[4]. Due to their energy inefficiency and large numbers, BS sites are believed responsible for 80% of the energy consumption in mobile communication systems [5]. Most efforts have addressed energy efficiency in mobile communication systems through the improvement of the energy efficiency of BS equipment, the optimization of network deployment to reduce the number of required BSs, and the introduction of small cells. [6] affirms that reducing the cell size can increase system capacity while leaving the overall system's energy consumption unchanged. Moreover, by adopting the BS sleep mode in a small-cell deployment architecture such as a femtocell architecture, the overall system's energy consumption can also be reduced: BSs without active users are powered off and enter the BS sleep mode.
This work was supported in part by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science and Technology (2011-0005683).
T.-h. Kim et al. (Eds.): ACN 2011, CCIS 199, pp. 306–311, 2011. c Springer-Verlag Berlin Heidelberg 2011
There have been some studies dealing with the BS sleep mode [7]-[9]. However, most of these studies focus on the effect of the BS sleep mode; none of the previous works considers the detailed procedure and performance of the BS sleep mode. In this paper, a new BS sleep mode procedure that improves the energy efficiency of femtocell based mobile communication systems is investigated, using the IEEE 802.16e standard as a basis.
2 Proposed BS Sleep Mode Procedure

The BS sleep mode reduces the energy consumption of a BS by powering it off when there is no active user in the cell. In a femtocell environment, the traffic load of a cell varies considerably according to time, location, season, and users' movement patterns; such variation is more drastic for a femtocell than for a macrocell. In a femtocell environment, a small number of users are in a cell and their waiting time is relatively long compared with a macrocell environment. However, these drastic variations in traffic are not matched by variations in energy consumption, because BSs still consume a large amount of energy even when there is little traffic. If the BS sleep mode is adopted, the energy consumption of the BS is severely reduced during hours of low traffic.
Fig. 1. Basic BS Sleep mode Procedure
Most efforts have addressed energy efficiency in mobile communication systems through sleep mode at the Mobile Station (MS) side. In the IEEE 802.16 series, MSs can reduce their energy consumption with sleep mode when lightly loaded traffic is served. Two types of Power Saving Classes (PSCs) are specified according to traffic type: PSC I is designed for Best Effort (BE) and Non-Real Time (NRT) traffic, while PSC II is designed for delay sensitive traffic such as Real Time (RT) traffic. In PSC I, the duration of a sleep window is doubled until it reaches its maximum value. In PSC II, on the other hand, the sleep and listening windows are fixed to support real-time properties; the interval of the sleep window is decided according to the characteristics of the target services. We
308
S. Kwon, Y. Chung, and Y.-H. Choi
can adopt and extend the sleep mode procedure of the MS to the BS. The basic sleep mode procedure at the BS side is shown in Fig. 1. A sleep cycle consists of sleep windows interleaved with listening windows. A listening window is a time duration in which traffic can be exchanged between an MS and a BS, while a sleep window is a time duration in which the transceiver equipment is powered down for power saving. In our proposed procedure, an MS and a BS have their own sleep cycles. During the listening window of an MS, the serving BS transmits traffic until its transmission buffer becomes empty. After the BS finishes its transmissions, the MS checks whether it has no traffic to transmit until the Sleep Trigger Time (STm) expires. If so, the MS requests to enter the sleep mode by transmitting a MOB-SLP-REQ message. Using the MOB-SLP-REQ and MOB-SLP-RSP messages, the MS negotiates sleep mode parameters such as the minimum sleep interval, maximum sleep interval, listening interval, and so on with the BS. After all MSs in the cell have entered the sleep mode, the BS checks whether any newly arriving traffic comes from the network or the MSs before the Sleep Trigger Time (STb) expires. If none arrives, the BS enters the BS sleep mode. The sleep duration of the BS should be equal to or smaller than the sleep duration of the MSs, because a BS must transmit traffic during the listening windows of the MSs. In the IEEE 802.16m standard, the sleep window of the MS and the listening window of the BS are adjustable, so the sleep cycle of the BS should be adjusted according to the sleep cycles of the MSs. After the BS wakes up, it checks whether there is traffic to exchange with MSs. If yes, the BS exchanges traffic with the target MS. If no, the BS waits until the Sleep Trigger Time (STb) expires and goes to sleep, setting its sleep window to the smaller of double its previous size and the maximum sleep interval.
Fig. 2. Enhanced BS Sleep mode Procedure
Let the initial sleep interval be Tmin, the maximum sleep interval Tmax, and the listening interval TL. The k-th sleep window size is then derived as follows:

    Tk = 2^(k-1) Tmin,  if 2^(k-1) Tmin < Tmax
         Tmax,          otherwise                    (1)
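Eq. (1) simply caps a doubling window at Tmax, which can be written compactly as a minimum. The following sketch is ours (values in frames, as in the paper's simulation); the method name is an assumption for illustration.

```java
// Sketch of Eq. (1): the k-th sleep window doubles from Tmin until it is
// capped at Tmax. Intervals are expressed in frames.
public class SleepWindowSketch {
    static long windowSize(int k, long tMin, long tMax) {
        long w = tMin << (k - 1);   // 2^(k-1) * Tmin (k assumed small enough
                                    // that the shift does not overflow)
        return Math.min(w, tMax);
    }
}
```

With Tmin = 2 and Tmax = 512 frames, the window grows 2, 4, 8, ... and stays at 512 once the doubling exceeds the cap.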
Fig. 2 shows an enhanced version of the BS sleep mode procedure. In the basic BS sleep mode procedure shown in Fig. 1, the BS must remain in listening mode until the target MS wakes up from the sleep mode whenever it receives newly arriving traffic from the network during a listening period. In the enhanced version, the BS instead enters sleep mode after receiving the traffic: because the BS knows when the target MS will wake up, it can align its own wake-up time with that of the target MS. In this way, the enhanced procedure further reduces energy consumption.
3 Simulation Results

We evaluated the performance of the proposed procedures using simulation. To verify their performance, we compared energy consumption according to the traffic load and the values of the minimum and maximum sleep intervals. In this simulation, the offered traffic is modeled as a Poisson arrival process. The minimum sleep interval is assumed to be 2 frames, and 512 and 1024 frames are considered as the maximum sleep interval. The power consumption of a BS per frame time is assumed to be 1 W in the listening mode and 0.1 W in the BS sleep mode. Fig. 3 shows the energy consumption ratio with varying traffic arrival rate. In this figure, "Type 1" indicates the basic BS sleep mode procedure and "Type 2" the enhanced BS sleep mode procedure; "User N" indicates that there are N active users in the cell. A 100% energy consumption ratio means that no sleep mode is adopted. From these results, we can see that more energy is consumed as more traffic arrives. The reason is that more traffic requires more frequent wake-ups, which keeps the sleep windows small. We can also see that the
Fig. 3. Energy consumption ratio vs. traffic arrival rate
Fig. 4. Energy consumption ratio vs. maximum sleep interval
enhanced procedure reduces energy consumption by about 20% compared with the basic procedure, because the enhanced procedure provides more sleep interval. Fig. 4 shows the energy consumption ratio when Tmax is 512 and 1024. From these results, we can see that more energy is saved by adopting a longer maximum sleep interval, because a longer maximum sleep interval provides longer sleep windows. As the traffic load increases, however, the chance to use the long sleep windows decreases.
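The shape of these ratios follows directly from the assumed per-frame powers: with 1 W while listening and 0.1 W while sleeping, the energy consumption ratio relative to a never-sleeping BS depends only on the fraction of frame time spent asleep. The sketch below is ours, under those assumed numbers; it is not the paper's simulator.

```java
// Illustrative sketch (assumed numbers from the simulation setup): energy
// consumption ratio of a BS relative to never sleeping, as a function of
// the fraction of frame time spent in the sleep mode.
public class EnergyRatioSketch {
    static final double P_LISTEN = 1.0;  // W per frame, listening mode
    static final double P_SLEEP  = 0.1;  // W per frame, BS sleep mode

    // Returns the ratio in percent; 100% means no sleep mode is adopted.
    static double consumptionRatio(double sleepFraction) {
        double avg = P_SLEEP * sleepFraction + P_LISTEN * (1.0 - sleepFraction);
        return 100.0 * avg / P_LISTEN;
    }
}
```

This makes the limits visible: a BS that never sleeps stays at 100%, while one asleep for all frames approaches the 10% floor set by the sleep-mode power.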
4 Conclusions

In this paper we proposed and evaluated two BS sleep mode procedures, considering an IEEE 802.16e based femtocell environment. The BS sleep mode is an effective way to reduce energy consumption in mobile communication systems. The simulation results showed that the BS sleep mode is more effective when the traffic arrival rate is low and the number of active users is small. We can therefore say that the BS sleep mode is not suitable for macrocell BSs, but in a femtocell environment it would be useful.
References 1. Green IT/Broadband and Cyber-Infrastructure, http://green-broadband.blogspot.com 2. Global Action Plan, An inefficient truth, http://www.globalactionplan.org.uk/, Global Action Plan Rep. (2007)
An Efficient Sleep Mode Procedure for IEEE 802.16e
311
3. Node B datasheets, http://www.motorola.com/ (2008)
4. Louhi, J.T.: Energy efficiency of modern cellular base stations. In: INTELEC, Rome, Italy (2007)
5. Richter, F., Fehske, A.J., Fettweis, G.P.: Energy Efficient Aspects of Base Station Deployment Strategies for Cellular Networks. In: IEEE Vehicular Technology Conference, VTC 2009, Anchorage, USA (2009)
6. Badic, B., O'Farrell, T., Loskot, P., He, J.: Energy Efficient Radio Access Architectures for Green Radio: Large versus Small Cell Size Deployment. In: IEEE Vehicular Technology Conference, VTC 2009, Anchorage, USA (2009)
7. Chen, T.-C., Chen, J.-C., Chen, Y.-Y.: Maximizing unavailability interval for energy saving in IEEE 802.16e wireless MANs. IEEE Trans. Mobile Computing (April 2009)
8. Jin, S., Choi, M., Choi, S.: Performance analysis of IEEE 802.16m sleep mode for heterogeneous traffic. IEEE Communications Letters 14(5), 405–407 (2010)
9. Kwon, S.W., Cho, D.H.: Enhanced power saving through increasing unavailability interval in the IEEE 802.16e systems. IEEE Communications Letters 14(1), 24–26 (2010)
Performance Analysis of Wireless LANs with a Backoff Freezing Mechanism

Ho Young Hwang1, Seong Joon Kim3, Byung-Soo Kim4, Dan Keun Sung5, and Suwon Park2

1 Dept. of Computer Engineering, Kwangwoon University, Korea
2 Dept. of Electronics and Communications Engineering, Kwangwoon University, Korea {hyhwang,spark}@kw.ac.kr
3 Samsung Electronics, Korea
4 Dept. of EECS, University of Michigan, U.S.A.
5 Dept. of Electrical Engineering, KAIST, Korea
Abstract. In this paper, we propose an accurate analytical model of IEEE 802.11 DCF WLANs with a backoff freezing mechanism. To analyze the DCF with the backoff freezing mechanism, we model the backoff process of a station with a Markov chain. The proposed Markov chain captures three types of backoff operations: choosing a new backoff counter, freezing the backoff counter, and decrementing the backoff counter. The proposed analytical model also considers three types of previous channel status: idle, busy due to a collision, and busy due to a successful transmission. Accounting for the backoff freezing mechanism in the proposed model affects performance measures such as throughput. The analytical and simulation results show that the proposed analytical model is very accurate with the backoff freezing mechanism. Keywords: IEEE 802.11, wireless local area network (WLAN), distributed coordination function (DCF).
1
Introduction
The distributed coordination function (DCF) of IEEE 802.11 is known as carrier sense multiple access with collision avoidance (CSMA/CA) adopting a binary exponential backoff (BEB) mechanism [1]-[3]. The DCF may use an advanced mechanism exchanging two additional short control frames: a request-to-send (RTS) frame and a clear-to-send (CTS) frame. In the DCF, when at least one of
This work was supported in part by the Korea Research Foundation Grant funded by the Korean Government [KRF-2008-357-D00181] and in part by the Research Grant of Kwangwoon University in 2011. Corresponding author.
T.-h. Kim et al. (Eds.): ACN 2011, CCIS 199, pp. 312–320, 2011. c Springer-Verlag Berlin Heidelberg 2011
stations transmits a frame, the other stations that listen to the transmission freeze their backoff counters. To evaluate the throughput of the DCF, Bianchi [4] introduced an analytical model under a saturated condition. Wu et al. [5] refined this model by considering the frame retry limit specified in the standard. Chatzimisios et al. [6] presented an analytical model to compute the delay of the DCF by considering retransmission delays with and without a frame retry limit. The above models [4]-[6] assumed that a listening station immediately decrements its non-zero backoff counter after a distributed interframe space (DIFS) or an extended interframe space (EIFS). However, in the IEEE 802.11 standard, the listening station decrements its backoff counter only when the channel remains idle for a slot time after the DIFS or the EIFS. To model the backoff freezing mechanism of the DCF, an analytical model was presented by Ziouva and Antonakopoulos [7]. The model freezes the backoff counter with a non-zero value when the channel is sensed busy. However, they assumed that the transmission probability and the collision probability are independent of the channel status. To model the dependence of the backoff process on the channel status, Foh and Tantra [8] presented an analytical model with freezing of backoff counters. They classified each backoff state according to the previous channel status. However, the previous channel status in the model was limited to two types: idle and busy. In addition, the model did not consider a frame retry limit or delay performance. Choi et al. [9] presented an analytical model for the performance evaluation of the DCF using a closed queueing network, where each backoff stage is modeled by a queueing system. Bianchi and Tinnirello [10] presented an approach which relies on elementary conditional probability arguments, and together with Xiao they also presented a bidimensional Markov chain [11].
Their models considered the backoff freezing operation specified in the DCF standard, but the backoff freezing operation considered in the models is limited to the case that the control frames are transmitted at the lowest mandatory PHY rate. In addition to the unicast service in IEEE 802.11, Ma and Chen [12] studied the saturation performance of broadcast service analytically and by simulation. They considered the backoff freezing operation in the broadcast service. In our previous study [13], we analyzed the saturation performance of IEEE 802.11 WLAN under the assumption of no consecutive transmissions by considering a frame retry limit, freezing of the backoff timer, and the dependence of the backoff procedure on the previous channel status. The analysis is simple due to the assumption of no consecutive transmissions, and the performance results are conservative within normal network operation regions. In [14], we presented an analytical model of IEEE 802.11e enhanced distributed channel access (EDCA) WLANs with a virtual collision handler (VCH). In [15], we proposed a MIMO-based uplink collision mitigation scheme in uplink WLANs in order to mitigate the collision problem which degrades system performance. In [16], we analyzed goodput in a single cell WLAN environment with hidden nodes under a non-saturated condition.
In this paper, we propose an accurate analytical model for the performance analysis of the IEEE 802.11 DCF WLANs with the backoff freezing mechanism. In order to analyze the DCF with the backoff freezing mechanism, we model the backoff process of a station with a Markov chain. The proposed Markov chain models the three different types of the backoff operations: choosing a new backoff counter, freezing the backoff counter, and decrementing the backoff counter. Due to the backoff freezing mechanism, the backoff process is dependent on the previous channel status. The proposed analytical model considers the three different types of the previous channel status: idle, busy due to a collision, and busy due to a successful transmission. Considering the backoff freezing mechanism in the proposed model affects the performance measures such as throughput. The analytical and simulation results show that the proposed model is very accurate with the backoff freezing mechanism.
2
A Markov Chain Model for WLANs with a Backoff Freezing Mechanism
To model the backoff freezing mechanism of the DCF WLANs, we propose an analytical model of the DCF WLANs considering three types of the previous channel status: idle, busy due to a collision, and busy due to a successful transmission. We model the backoff process of stations with a discrete-time Markov chain (DTMC) model. Fig. 1 shows a new DTMC model for a station without a retry limit. The proposed DTMC model captures the three types of backoff operations: choosing a new backoff counter, freezing the backoff counter, and decrementing the backoff counter. The state {(i, j, k) | i ∈ [0, 2]; j ∈ [0, r]; k ∈ [0, W_j − 1]} represents the state of a tagged station which has backoff operation i and the j-th backoff stage with a backoff counter value of k. The contention window (CW) size of backoff stage j is expressed as

W_j = 2^j · CW_min,  0 ≤ j ≤ m,
W_j = 2^m · CW_min,  m < j ≤ r,

where m is the backoff stage whose CW size equals the maximum CW size, and r is the last backoff stage. The term i indicates the following backoff operation: 1) a tagged station chooses a new backoff counter value (i = 2); 2) it freezes the backoff counter with a non-zero value (i = 1); and 3) it decrements the backoff counter value (i = 0). The term i also represents the following channel status: the channel is busy due to a transmission of the tagged station (i = 2); it is busy due to transmissions of the other stations (i = 1); and it is idle (i = 0). Let b_{i,j,k} be the stationary distribution of the Markov chain. The stationary state probabilities can be derived in terms of b_{2,0,0} as follows:
Fig. 1. A Markov chain model for a station with a backoff freezing mechanism
b_{2,j,k} = ψ_j b_{2,0,0},  if j ∈ [0, r], k ∈ [0, W_j − 1],

b_{1,0,k} = ((W_0 − 1 − k) q_i / (1 − q_b)) ψ_0 b_{2,0,0},  if j = 0, k ∈ [1, W_0 − 2],

b_{1,j,k} = (((W_j − 1 − k) q_i + q_bc) / (1 − q_b)) ψ_j b_{2,0,0},  if j ∈ [1, r], k ∈ [1, W_j − 1],

b_{0,j,k} = (W_j − 1 − k) ψ_j b_{2,0,0},  if j ∈ [0, r], k ∈ [0, W_j − 2],   (1)

where q_i, q_b, and q_bc denote the probabilities that the tagged station freezes its backoff counter because at least one of the other stations (except the tagged station) transmitted a frame after an idle period, a busy period, and a busy period due to a collision, respectively, and ψ_j is given by

ψ_j = 1,  if j = 0,

ψ_j = ((W_0 − 1)/W_1) p_i,  if j = 1,

ψ_j = ((W_0 − 1)/W_1) p_i ∏_{s=2}^{j} [((W_{s−1} − 1) p_i + p_bc)/W_s],  if j ∈ [2, r − 1],

ψ_r = ((W_0 − 1)/W_1) p_i ∏_{s=2}^{r} [((W_{s−1} − 1) p_i + p_bc)/W_s] · [1 − ((W_r − 1) p_i + p_bc)/W_r]^{−1},  if j = r,
where pi and pbc denote the probabilities that a frame encounters a collision when it is transmitted from the tagged station after an idle period and after a busy period due to a collision, respectively. The value of b2,0,0 is obtained by imposing the normalization condition as follows:
1 = Σ_{k=0}^{W_0 − 1} b_{2,0,k} + Σ_{k=1}^{W_0 − 2} b_{1,0,k} + Σ_{k=0}^{W_0 − 2} b_{0,0,k} + Σ_{j=1}^{r} [ Σ_{k=0}^{W_j − 1} b_{2,j,k} + Σ_{k=1}^{W_j − 1} b_{1,j,k} + Σ_{k=0}^{W_j − 2} b_{0,j,k} ]   (2)

3
Analysis of Interactions among Stations and Throughput
Let τi , τbc and τbs denote the probabilities that a station accesses the channel after an idle period, a busy period due to a collision, and a busy period due to a successful transmission, respectively. Each of the channel access probabilities
τ_i, τ_bc and τ_bs can be expressed in terms of the stationary probabilities τ'_i, τ'_bc and τ'_bs, respectively:

τ_i = τ'_i / (1 − P_b),  where τ'_i = Σ_{j=0}^{r} b_{0,j,0},

τ_bc = τ'_bc / (P_b (1 − P_s)),  where τ'_bc = Σ_{j=1}^{r} b_{2,j,0},

τ_bs = τ'_bs / (P_b P_s),  where τ'_bs = b_{2,0,0},   (3)
where P_b is the probability that the channel is busy, and P_s is the probability that a frame transmission is successful. The channel busy probability P_b and the successful transmission probability P_s are obtained as follows:

P_b = (1 − P_b) · [1 − (1 − τ_i)^n] + P_b (1 − P_s) · [1 − (1 − τ_bc)^n] + P_b P_s · n τ_bs   (4)

P_b · P_s = (1 − P_b) · n τ_i (1 − τ_i)^{n−1} + P_b (1 − P_s) · n τ_bc (1 − τ_bc)^{n−1} + P_b P_s · n τ_bs   (5)
The collision probabilities p_i and p_bc after an idle period and a busy period due to a collision, respectively, are given by

p_i = 1 − (1 − τ_i)^{n−1} = 1 − [1 − τ'_i / (1 − P_b)]^{n−1},   (6)

p_bc = 1 − (1 − τ_bc)^{n−1} = 1 − [1 − τ'_bc / (P_b (1 − P_s))]^{n−1}.   (7)

The probabilities q_i and q_bc that the tagged station freezes its backoff counter because at least one of the other stations (except the tagged station) transmitted a frame after an idle period and after a busy period due to a collision, respectively, are given by q_i = p_i and q_bc = p_bc. The probability q_b that the tagged station freezes its backoff counter after a busy period is given by

q_b = (1 − P*_s) · [1 − (1 − τ_bc)^{n−1}] + P*_s · n τ_bs,   (8)

where P*_s is the probability that a frame transmission is successful when the frame is transmitted from one of the other stations except the tagged station. The probability P*_s is derived from

P*_b · P*_s = P_b · P_s − [τ'_i (1 − p_i) + τ'_bc (1 − p_bc) + τ'_bs],   (9)

where P*_b is the probability that the channel is busy when a frame is transmitted from at least one of the other stations except the tagged station. The probability P*_b is derived as

P*_b = P_b − [(1 − P_b) τ_i + P_b (1 − P_s) τ_bc + P_b P_s τ_bs].   (10)
To evaluate the throughput of the DCF WLANs, we use the definition of the normalized system throughput introduced in [4]. The normalized throughput S is defined as the fraction of time used to successfully transmit payload bits of stations:

S = P_b P_s E[P] / [ (1 − P_b) σ + P_b P_s T_s + P_b (1 − P_s) T_c ],   (11)

where E[P] is the expected time used for a successful payload transmission, σ is the slot time, T_s is the average time used for a successful transmission, and T_c is the average time used for a collision.
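Eq. (11) is a direct ratio once P_b and P_s are available; a transcription follows. The numeric arguments in the example call are illustrative placeholders, not values from the paper.

```python
def normalized_throughput(pb, ps, e_payload, slot, t_s, t_c):
    """Normalized saturation throughput S of Eq. (11): the fraction of time
    the channel carries successfully delivered payload bits."""
    num = pb * ps * e_payload
    den = (1.0 - pb) * slot + pb * ps * t_s + pb * (1.0 - ps) * t_c
    return num / den

# Example with made-up values (times in microseconds):
s = normalized_throughput(pb=0.4, ps=0.8, e_payload=455.0,
                          slot=9.0, t_s=550.0, t_c=550.0)
```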
4
Numerical Results
To evaluate the throughput performance of the IEEE 802.11 DCF WLANs with the backoff freezing mechanism, we use the system parameters shown in Table 1 for both analytical and simulation results.

Table 1. Example of the System Parameters

Payload               8192 (bits)
MAC Header            224 (bits)
PLCP Preamble Length  16 (μs)
PLCP SIGNAL Length    4 (μs)
Data Rate (RD)        18 (Mbps)
Control Rate (RC)     12 (Mbps)
DATA Length           (8192+224+22)/(4RD) · 4 + 20 (μs)
RTS Length            (160+22)/(4RC) · 4 + 20 (μs)
CTS Length            (112+22)/(4RC) · 4 + 20 (μs)
ACK Length            (112+22)/(4RC) · 4 + 20 (μs)
Slot Time             9 (μs)
SIFS                  16 (μs)
DIFS                  34 (μs)
EIFS                  94 (μs)
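The DATA/RTS/CTS/ACK entries in Table 1 encode OFDM framing: the bit count (frame bits plus the 22 service/tail bits) is divided by the data bits carried per 4 μs OFDM symbol (4·R bits at rate R Mbps), and 20 μs of PLCP preamble plus SIGNAL overhead is added. A small helper makes this explicit; rounding the quotient up to a whole number of OFDM symbols is our reading of the formula, not stated explicitly in the table.

```python
import math

def ofdm_duration_us(bits, rate_mbps):
    """Frame airtime: whole 4-us OFDM symbols (4*R data bits per symbol),
    plus 16 us PLCP preamble and 4 us SIGNAL field (the '+20' term)."""
    symbols = math.ceil(bits / (4.0 * rate_mbps))
    return symbols * 4 + 20

data_us = ofdm_duration_us(8192 + 224 + 22, 18)  # payload + MAC header + 22
rts_us = ofdm_duration_us(160 + 22, 12)
ack_us = ofdm_duration_us(112 + 22, 12)          # CTS has the same length
```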
Fig. 2 shows the normalized saturation throughput of the DCF WLANs without a frame retry limit. The results show the throughput performance for the basic access and RTS/CTS exchange methods with the system parameters. Fig. 2 illustrates a comparison of the results of our proposed model (lines), Bianchi's model [4] (dashed lines) and simulation (marks). The result of our proposed model agrees with the simulation result very well, while the result of Bianchi's model shows a difference in comparison with the simulation result. This is because Bianchi's model does not capture the backoff freezing mechanism of the DCF standard, while our proposed model captures the backoff freezing mechanism and the dependence of the backoff process on the previous channel status.
[Figure: normalized saturation throughput S (y-axis, 0.3 to 0.7) versus number of stations n (x-axis, 5 to 50); simulation marks for (RTS, W=16, m=r=4), (RTS, W=8, m=r=4), (Basic, W=16, m=r=4), and (Basic, W=8, m=r=4); solid lines for the proposed model and dashed lines for the Bianchi model]
Fig. 2. Normalized saturation throughput of DCF for different minimum CW sizes (W = 16 and W = 8).
5
Conclusion
In this paper, we proposed an accurate analytical model for the performance analysis of the IEEE 802.11 DCF WLANs with the backoff freezing mechanism. In order to analyze the DCF with the backoff freezing mechanism, we modeled the backoff process of a station with a Markov chain. The proposed Markov chain models three different types of the backoff operations: choosing a new backoff counter, freezing the backoff counter, and decrementing the backoff counter. The proposed analytical model considers three different types of the previous channel status: idle, busy due to a collision, and busy due to a successful transmission. Considering the backoff freezing mechanism in the proposed model affects the performance measures such as throughput. The analytical and simulation results show that the proposed analytical model is very accurate with the backoff freezing mechanism.
References

1. Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY) Specification. ANSI/IEEE Std 802.11, 1999 Edition (R2003) (June 2003)
2. IEEE 802.11a, Part 11: Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY) Specifications: High-Speed Physical Layer in the 5 GHz Band. IEEE Std 802.11a-1999 (December 1999)
3. IEEE 802.11g, Part 11: Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY) Specifications: Further Higher Data Rate Extension in the 2.4 GHz Band. IEEE Std 802.11g-2003 (June 2003)
4. Bianchi, G.: Performance analysis of the IEEE 802.11 distributed coordination function. IEEE J. Sel. Areas Commun. 18(3), 535–547 (2000)
5. Wu, H., Peng, Y., Long, K., Cheng, S., Ma, J.: Performance of reliable transport protocol over IEEE 802.11 wireless LAN: analysis and enhancement. In: Proc. IEEE INFOCOM, pp. 599–607 (June 2002)
6. Chatzimisios, P., Boucouvalas, A.C., Vitsas, V.: IEEE 802.11 packet delay - a finite retry limit analysis. In: Proc. IEEE Globecom, pp. 950–954 (December 2003)
7. Ziouva, E., Antonakopoulos, T.: CSMA/CA performance under high traffic conditions: throughput and delay analysis. Computer Communications 25(3), 313–321 (2002)
8. Foh, C.H., Tantra, J.W.: Comments on IEEE 802.11 saturation throughput analysis with freezing of backoff counters. IEEE Commun. Lett. 9(2), 130–132 (2005)
9. Choi, J., Yoo, J., Kim, C.: A novel performance analysis model for an IEEE 802.11 wireless LAN. IEEE Commun. Lett. 10(5), 335–337 (2006)
10. Bianchi, G., Tinnirello, I.: Remarks on IEEE 802.11 DCF performance analysis. IEEE Commun. Lett. 9(8), 765–767 (2005)
11. Tinnirello, I., Bianchi, G., Xiao, Y.: Refinements on IEEE 802.11 distributed coordination function modeling approaches. IEEE Transactions on Vehicular Technology 59(3), 1055–1067 (2010)
12. Ma, X., Chen, X.: Saturation performance of IEEE 802.11 broadcast networks. IEEE Commun. Lett. 11(8), 686–688 (2007)
13. Kim, S.J., Hwang, H.Y., Kwon, J.K., Sung, D.K.: Saturation Performance Analysis of IEEE 802.11 WLAN under the Assumption of No Consecutive Transmissions. IEICE Transactions on Communications E90-B(3), 700–703 (2007)
14. Hwang, H.Y., Kim, S.J., Sung, D.K., Song, N.-O.: Performance Analysis of IEEE 802.11e EDCA with a Virtual Collision Handler. IEEE Transactions on Vehicular Technology 57(2), 1293–1297 (2008)
15. Jin, H., Jung, B.C., Hwang, H.Y., Sung, D.K.: MIMO-Based Collision Mitigation Scheme in Uplink WLANs. IEEE Communications Letters 12(6), 417–419 (2008)
16. Yang, J.W., Kwon, J.K., Hwang, H.Y., Sung, D.K.: Goodput Analysis of a WLAN with Hidden Nodes under a Non-Saturated Condition. IEEE Transactions on Wireless Communications 8(5), 2259–2264 (2009)
Performance Analysis of Forward Link Transmit Power Control during Soft Handoff in Mobile Cellular Systems

Jin Kim, Suwon Park, Hyunseok Lee, and Hyuk-jun Oh

Dept. of Electronics and Communications Eng., Kwangwoon University, 26 Kwangwoon-gil, Nowon-gu, 139-701, Seoul, Republic of Korea
[email protected], {spark,hyunseok,hj_oh}@kw.ac.kr
Abstract. Forward link performance during a soft handoff is evaluated for two conditions. We show quantitatively that the performance of the combined signal from two base stations is better than that of the signal from either of the two base stations, owing to diversity combining. We also show that the use of diversity combining at the mobile allows the call to be maintained at a sufficiently low Eb/No value. Keywords: Transmit Power Control, Closed Loop, Outer Loop, Soft Handoff.
1
Background
In wireless communication systems, there are two link adaptation schemes: transmit power control and transmit rate control. The former adaptively controls the transmission power according to the fading of the channel between a transmitter and its corresponding receiver. The latter adaptively controls the transmission data rate by changing the modulation and coding scheme (MCS). In this paper, we focus on transmit power control. Transmit power control has been one of the important issues in mobile communication systems, especially for code division multiple access (CDMA) based systems. Transmit power control mechanisms are categorized as open loop transmit power control (OLTPC) and closed loop transmit power control (CLTPC). In OLTPC, a station autonomously adjusts its transmit power level based on the received signal strength of the signal from its corresponding station. In CLTPC, the corresponding station sends a command to the station for adjusting the transmit power of the station. The command is usually based on the received signal to interference and noise ratio (SINR). The CLTPC is comprised of two parts: the inner (or fast) loop transmit power control and the outer (or slow) loop transmit power control. In the inner loop TPC, the corresponding station generates commands based on a given target energy-per-bit to interference density ratio (Eb/Io)τ proportional to the received SINR, and sends the commands to the station as shown in Fig. 1. The (Eb/Io)τ is set based on the required quality metrics such as bit error rate (BER), frame error rate (FER), block error rate (BLER), or the like. The station controls its transmit power based on the commands from the corresponding station. If the command is power-up, the station increases its transmit power, and if the command is power-down, the

T.-h. Kim et al. (Eds.): ACN 2011, CCIS 199, pp. 321–327, 2011. © Springer-Verlag Berlin Heidelberg 2011
station decreases its transmit power. The inner loop TPC strives to meet the target (Eb/Io)τ. On the other hand, the outer loop TPC is responsible for setting the target (Eb/Io)τ based on the quality indicator measured in real time. [2]
Fig. 1. Forward link power control
Handoff in mobile cellular communication systems is a process whereby a mobile station (MS) communicating with one base station (BS), called the current BS, is switched to another BS, called the target BS, for better communication. If it occurs during a call, it is called an active handoff; if it occurs during an idle state, it is called an idle handoff. Handoff is also categorized as hard handoff or soft handoff. A hard handoff breaks the connection to the current BS before it makes a connection to the target BS. A soft handoff, on the other hand, breaks the connection to the current BS only after it makes a connection to the target BS, as shown in Fig. 2.
Fig. 2. Forward link soft handoff signal transmission
In this paper, forward link performance during a soft handoff is analyzed for several conditions. The performance metrics used are basically the bit error rate (BER) and the frame error rate (FER) at the mobile station, and they are evaluated for various conditions. During the soft handoff, an MS receives forward link signals carrying the same data from all base stations in the active set, which is the set containing all base stations participating in the soft handoff. In this paper, we assume that the size of the active set is two. A rake receiver in the MS maximal-ratio combines the two forward link signals before channel decoding. [5]
This paper is organized as follows. Sections 2 and 3 describe the conventional outer loop transmit power control scheme used for the performance analysis and the system model of soft handoff used for simulation. Section 4 presents performance analysis results obtained through simulations. Finally, we give a conclusion and discuss future work in Section 5.
2
Sampath’s Outer Loop Transmit Power Control
The main purpose of the CLTPC algorithm is to maintain the FER of a forward link channel at a desired level. A mobile station generates a forward link transmit power control bit (PCBf) by comparing a measured (Eb/I0) with the target (Eb/I0)τ. If the measured one is larger than or equal to (Eb/I0)τ, the MS generates a power-down command corresponding to PCBf = 1. Otherwise, it generates a power-up command of PCBf = 0. The PCBf is multiplexed and transferred to the corresponding BS with the data bit stream. After receiving the PCBf, the BS adjusts its transmit power according to the PCBf. For PCBf = 1, the BS decreases its transmit power by a predefined level compared to the previous transmit power level. For PCBf = 0, the BS increases its transmit power by the predefined level. Typically, the predefined level is 1 dB. [1]

In order for the MS to achieve the desired FER in real time, the (Eb/I0)τ is adjusted by the outer loop transmit power control. This is not standardized; that is, it is one of the implementation issues. For performance evaluation, Sampath algorithm based outer loop transmit power control is used in this paper. [2] Because it is simple and robust, it is known as a popular outer loop transmit power control algorithm. The Sampath algorithm is briefly described as follows. By using the cyclic redundancy check (CRC) bits contained within the received data frame, an MS checks whether the frame is in error or not. If it is considered in error, the (Eb/I0)τ is increased based on Eq. (1) in order to generate power-up commands more frequently. Otherwise, the (Eb/I0)τ is decreased based on Eq. (2) in order to generate power-up commands less frequently.

(Eb/I0)τ[j + 1] = (Eb/I0)τ[j] + kΔ   (1)

(Eb/I0)τ[j + 1] = (Eb/I0)τ[j] − Δ   (2)

where j is the index of the current frame, and Δ is the step size in dB. The Sampath algorithm aims to keep the FER always less than or equal to 1/(k + 1). Hence, if the desired FER is FERreq, k should be chosen to be (1/FERreq − 1). For example, for FERreq = 0.01 = 1%, k should be 99.
3
Simulation Conditions for Performance Evaluation
In this paper, we assume that an MS and two BSs are in soft handoff, and that the MS is located at an equidistant position from the two BSs, which are the current BS and the target BS. The MS, receiving forward link signals from both the current BS and the target BS, generates PCBf by comparing the (Eb/I0) measured after maximal ratio combining with the target (Eb/I0)τ. If outer loop power control is not used, the target (Eb/I0)τ is fixed; if outer loop power control is used, it can change. Performance evaluations based on computer simulation are done for both the fixed target (Eb/I0)τ case and the variable target (Eb/I0)τ case. During soft handoff, both base stations adjust their transmit powers according to the PCBf received from the MS. In a real system environment, the received PCBf can be in error due to the characteristics of wireless communications. Thus, we evaluated the performance for two cases, ideal and practical, i.e., without and with bit errors of PCBf, respectively.

Table 1. Cases for Performance Evaluation

Target (Eb/I0)τ   Without PCBf error   With PCBf errors
Fixed             Case A               Case C
Variable          Case B               Case D

Table 2. Simulation conditions

Description           Value
Carrier frequency     2 GHz
Data rate             9.6 kbps
Frame length          20 ms
# Information bits    172
CRC bits              12
Tail bits             8
Channel coding        Convolutional code
Constraint length     9
Code rate             1/4
Channel decoder       Viterbi decoder
# Coded symbols       768
Data modulation       QPSK
# Modulated symbols   384
Variance of AWGN      6 dB
In this paper, performance is evaluated for the four cases in Table 1. FER and BER for each forward link signal, before and after maximal ratio combining, are presented. Computer simulations are done under the simulation conditions shown in Table 2. The fading channel is implemented based on Jakes' model, and the transmission and application delay of PCBf are not considered.
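As a rough illustration of the closed-loop behaviour evaluated here, the sketch below runs an inner loop TPC in which the MS compares the maximal-ratio-combined level from the two equal-distance BSs with the target and feeds back a 1 dB up/down command. The slowly drifting log-normal channel model and all numeric choices are our own simplifications (the paper's simulations use Jakes' fading model and the full link chain of Table 2).

```python
import math
import random

def inner_loop_tpc(slots=5000, step_db=1.0, target_db=10.0, seed=7):
    """Inner-loop TPC with maximal ratio combining of two BS links.
    Returns the mean tracking error (dB) between target and measured level."""
    rng = random.Random(seed)
    tx_db = 0.0
    g1_db, g2_db = -3.0, -3.0    # slowly drifting per-link channel gains
    err_sum = 0.0
    for _ in range(slots):
        g1_db += rng.gauss(0.0, 0.3)
        g2_db += rng.gauss(0.0, 0.3)
        # Maximal ratio combining adds the two received powers (linear scale).
        rx_lin = 10 ** ((tx_db + g1_db) / 10) + 10 ** ((tx_db + g2_db) / 10)
        rx_db = 10 * math.log10(rx_lin)
        err_sum += target_db - rx_db
        # PCBf: power-down when the measured level already meets the target,
        # power-up otherwise (1 dB step, as in the conventional scheme).
        tx_db += -step_db if rx_db >= target_db else step_db
    return err_sum / slots

mean_err = inner_loop_tpc()
```

With a 1 dB step and a slowly varying channel, the measured combined level oscillates closely around the target, so the mean tracking error stays small.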
4
Simulation Results
Figs. 3 and 4 show simulation results for the cases in Table 1. Fig. 3 depicts simulation results for BER vs. (Eb/I0)τ for cases A and C, and Fig. 4 shows results for FER vs.
(Eb/I0)τ for cases B and D, respectively.

[Figure: BER (log scale, 10^0 down to 10^-5) versus (Eb/I0)τ from 0.5 to 0.8; curves: uncoded BER of Link 1, uncoded BER of Link 2, and BER of the combining link, each with and without command error]

Fig. 3. BER vs. (Eb/I0)τ
Fig. 3 shows results without outer loop TPC, i.e., with a fixed target (Eb/I0)τ, which is varied from 0.5 dB to 0.8 dB in steps of 0.03 dB. The combined signal from the two base stations shows significantly improved BER performance thanks to diversity combining. The two sets of combined-signal curves show the effect of power control command errors: the BER of the combined signal with PCBf errors is worse than that without PCBf errors. Fig. 4 shows results with the outer loop TPC using a variable target (Eb/I0)τ. The step size Δ for increasing or decreasing the target (Eb/I0)τ is varied from 0.01 to 0.1. The FER of the combined signal from the two base stations meets the target FER (1% = 0.01). Figs. 4(a) and 4(b) show that the effect of PCBf errors is insignificant thanks to the outer loop TPC.
[Figure: FER (log scale, 10^0 down to 10^-3) versus step size Δ from 0.01 to 0.1; curves: Link 1, Link 2, and combining link]

Fig. 4. FER vs. Δ: (a) without PCBf error; (b) with PCBf error
5

Conclusions

This paper presented a forward link performance analysis during soft handoff for two conditions. The performance of the combined signal from the two base stations is better than that of the signal from either base station alone, thanks to diversity combining. The use of diversity combining at the mobile allows the call to be maintained at a sufficiently low Eb/No value. As one of the two conditions, the algorithm for updating the target (Eb/I0)τ based on FER (the outer loop of closed loop power control) was also evaluated. The outer loop adjusts and sets the target (Eb/I0)τ so that all users get the same performance in terms of frame error rate. The other condition concerns bit errors of PCBf. Bit errors of PCBf would diminish overall system capacity; on the other hand, with the outer loop in use, bit errors of PCBf do not significantly affect system performance.

Acknowledgments. This work was partly supported by the National Research Foundation of Korea Grant funded by the Korean Government (NRF-2010-0025509), and the present research has been partly conducted by the Research Grant of Kwangwoon University in 2011.
References

1. Chulajata, T., Kwon, H.M.: Combinations of Power Controls for cdma2000 Wireless Communications System. In: IEEE 52nd Vehicular Technology Conference, VTS-Fall VTC 2000 (2000)
2. Sampath, A., Sarath Kumar, P., Holtzman, J.M.: On Setting Reverse Link Target SIR in a CDMA System. In: IEEE 47th Vehicular Technology Conference (1997)
3. Lee, C.-C., Steele, R.: Closed-loop power control in CDMA systems. IEE Proc. Communications 143(4) (August 1996)
4. Giovanardi, A., Mazzini, G., Tralli, V., Zorzi, M.: Some results on power control in wideband CDMA cellular networks. In: IEEE Wireless Communications and Networking Conference, WCNC 2000 (2000)
5. Worley, B., Takawira, F.: Power reduction and Threshold Adjustment for Soft Handoff in CDMA Cellular Systems. In: IEEE AFRICON (1999)
Performance Improvement Method for Wi-Fi Networks Sharing Spectrum

Jongwoo Kim1, Suwon Park1, Seung Hyong Rhee2, Yong-Hoon Choi3, Ho Young Hwang4, and Young-uk Chung5

1 Dept. of Electronics and Communications Engineering
2 Dept. of Electronics Convergence Engineering
3 Dept. of Information Control Engineering
4 Dept. of Computer Engineering
5 Dept. of Electronic Engineering
Kwangwoon University, Seoul, Korea
{jongwoo_kim,spark,rhee,yhchoi,hyhwang,yuchung}@kw.ac.kr
Abstract. We propose a solution that improves the performance of spectrum-sharing Wi-Fi networks deployed in the same or an adjacent area. The proposed solution uses the PS-Request protocol to sense the other Wi-Fi networks accurately, and manages the overlapped subcarriers in order to avoid interference among the Wi-Fi networks sharing spectrum.

Keywords: channel sensing, PS-Request, guard band expansion.
1 Introduction
People want to use various wireless Internet services anywhere and anytime, at low cost or ideally for free, but with high data rates. They also want portable devices such as smartphones or smart pads that can access these services and that are as cheap as possible. A Wi-Fi system using the unlicensed ISM (Industrial, Scientific and Medical) band is one of the best candidates for wireless Internet service that fulfills these requirements. The more smart-terminal subscribers there are, the more wireless Internet service zones must be deployed; for Wi-Fi systems, these zones are called hot spots. The wireless Internet data per user is also increasing, so the capacity per unit area must grow to support wireless Internet services with less blocking. Because the capacity per channel is limited, more channels must be deployed in the same or an adjacent region to increase the capacity per unit area. Usually, Wi-Fi networks are deployed by different wireless service operators that want to support their own subscribers. Deployment is done at each operator's own discretion, without any cooperation with the other operators deploying their own Wi-Fi networks in the same or an adjacent region. In some cases, wireline Internet subscribers deploy private Wi-Fi networks at home or in the office; of course, these do not cooperate with the others either.

T.-h. Kim et al. (Eds.): ACN 2011, CCIS 199, pp. 328–334, 2011. © Springer-Verlag Berlin Heidelberg 2011
2 Problem Statements
In the 2.4 GHz ISM band there are eleven (or thirteen, or fourteen) Wi-Fi channels, as shown in Fig. 1; eleven channels are typical. There are only three non-overlapping channels, shown as solid lines in Fig. 1. If four or more Wi-Fi networks are installed in the same or an adjacent area, they inevitably overlap; that is, they have to share spectrum. Wi-Fi STAs that are served by different Wi-Fi APs and share spectrum partly or completely may be unable to send data frames because of the CSMA/CA operation of the Wi-Fi system. For example, Wi-Fi networks using channel #1 can sense Wi-Fi networks using channel #2 because they share spectrum, as shown in Fig. 2. As spectrum-sharing Wi-Fi networks are deployed more densely, the performance of each network becomes worse.
Fig. 1. Graphical representation of Wi-Fi channels in 2.4 GHz band
Fig. 2. Spectrum overlapping of two Wi-Fi networks using channel #1 and channel #2
3 Proposed Solution
IEEE 802.11a/g/n-based Wi-Fi systems use orthogonal frequency division multiplexing (OFDM). Fig. 3 shows the subcarrier allocation of the Wi-Fi system.
Fig. 3. Subcarrier frequency allocation
To solve the problem, we propose to manage the number of subcarriers used by the Wi-Fi systems; that is, the proposed solution changes the number of subcarriers that actually carry data. If the subcarriers located within the overlapped band are used by only one Wi-Fi network, the networks will not interfere with each other. In other words, the proposed solution is a kind of guard band expansion. For example, let two Wi-Fi networks be deployed in the same or an adjacent region, using the overlapping channels #1 and #2, respectively, as shown in Fig. 2. Because the signal from the other Wi-Fi network may be sensed, Wi-Fi STAs served by different networks cannot send their data frames simultaneously. However, if the two networks adjust their numbers of subcarriers as shown in Fig. 4, the overlapped frequency band can be eliminated, allowing the two networks to communicate simultaneously without interference. Because some data subcarriers are absorbed into the expanded guard band and no longer carry data, the maximum data rate per Wi-Fi STA may decrease, but the average data rate per Wi-Fi STA can increase.
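As a rough check on this guard-band arithmetic, the channel spacing can be related to the subcarrier spacing. The following sketch is ours, not the paper's implementation; it assumes the 312.5 kHz subcarrier spacing from Table 2, the 5 MHz center-frequency step between 2.4 GHz Wi-Fi channels, and a nominal 20 MHz occupied bandwidth.

```python
# Sketch: estimate how many subcarrier positions two overlapping 2.4 GHz
# Wi-Fi channels share. Constants and function name are illustrative.

SUBCARRIER_SPACING_MHZ = 0.3125   # OFDM subcarrier spacing (Table 2)
CHANNEL_STEP_MHZ = 5.0            # center-frequency step between channels
OCCUPIED_BW_MHZ = 20.0            # nominal occupied bandwidth per channel

def overlapped_subcarriers(ch_a: int, ch_b: int) -> int:
    """Number of subcarrier positions shared by channels ch_a and ch_b."""
    offset_mhz = abs(ch_a - ch_b) * CHANNEL_STEP_MHZ
    overlap_mhz = max(0.0, OCCUPIED_BW_MHZ - offset_mhz)
    return round(overlap_mhz / SUBCARRIER_SPACING_MHZ)

# Each channel step removes 5 MHz / 312.5 kHz = 16 subcarrier positions,
# which matches the 16-subcarrier granularity assumed later in the paper.
print(overlapped_subcarriers(1, 2))  # 48
print(overlapped_subcarriers(1, 5))  # 0
```

Each one-channel step of separation thus corresponds to exactly sixteen subcarrier positions, which motivates the sixteen-subcarrier adjustment granularity used below.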
Fig. 4. An example of the proposed solution
Accurate channel sensing by the Wi-Fi system is a precondition for the proposed solution. One of the two main purposes of carrier sensing in a Wi-Fi system is to check the network allocation vector (NAV); the other is the clear channel assessment (CCA). The NAV is the virtual carrier sense used by Wi-Fi STAs to reserve the channel for necessary frames following the current frame. The CCA measures the received signal energy on the assigned channel and checks whether the channel is busy for the current frame. The CCA is composed of two related functions: carrier sense (CS) and energy detection (ED). CS means that a receiver detects an incoming Wi-Fi signal preamble. ED means that the receiver measures the non-Wi-Fi energy level of the channel over the assigned frequency range, which includes the noise floor, ambient energy, interference sources, and unidentifiable Wi-Fi signals that may have been corrupted and can no longer be decoded. Unlike CS, which detects how long the channel will be occupied by the current frame, ED measures the channel intermittently and reports its energy level when it exceeds a predefined threshold. The threshold determines whether the channel is busy or idle, and is usually referred to as the ED threshold level or the CCA sensitivity level. While an AP senses the channel, all Wi-Fi STAs served by that AP should be in receiving mode or power-saving mode so that the other Wi-Fi networks can be sensed accurately. However, it is hard to make all Wi-Fi STAs enter receiving mode or power-saving mode, because the Wi-Fi AP cannot control the transmission of its Wi-Fi STAs. We therefore use the PS-Request frame previously proposed by the authors [1]. The PS-Request frame lets a Wi-Fi AP control the transmission of its STAs by using the vestigial power management bit within the Wi-Fi frame structure sent from the AP. The PS-Request frame can guarantee that all Wi-Fi STAs are in power-saving mode; at that time, the Wi-Fi AP can detect the other Wi-Fi networks without disturbance from its own STAs. For the proposed solution, the Wi-Fi AP should inform all of its Wi-Fi STAs of the number of used subcarriers. We exploit a beacon frame of the management frame type; that is, we propose a beacon frame body composed of two bits. Table 1 gives the meaning of the two bits included in the proposed beacon frame body: the first bit indicates the direction in which to adjust the number of subcarriers, and the second bit corresponds to the number of subcarriers to be adjusted. In this paper, we assume that the granularity for managing the number of subcarriers is sixteen subcarriers.

Table 1. Information of the new beacon frame body
Two bits                                 00   01   10   11
# data subcarriers                       38   24   38   24
# pilot subcarriers                       3    2    3    2
# subcarriers in the upper guard band    17   31    6    6
# subcarriers in the lower guard band     5    6   16   31
# DC subcarriers                          1    1    1    1
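Assuming the mapping given in Table 1, the proposed two-bit beacon field could be decoded as in the following sketch; the dictionary layout and function name are ours, not part of the proposal.

```python
# Sketch: decode the proposed two-bit beacon field into a subcarrier
# allocation, following Table 1. Field names are illustrative.

SUBCARRIER_CONFIG = {
    # bits: (data, pilot, upper guard, lower guard, DC)
    "00": (38, 3, 17, 5, 1),
    "01": (24, 2, 31, 6, 1),
    "10": (38, 3, 6, 16, 1),
    "11": (24, 2, 6, 31, 1),
}

def decode_beacon_bits(bits: str) -> dict:
    data, pilot, upper, lower, dc = SUBCARRIER_CONFIG[bits]
    return {"data": data, "pilot": pilot,
            "upper_guard": upper, "lower_guard": lower, "dc": dc}

cfg = decode_beacon_bits("01")
assert sum(cfg.values()) == 64  # every configuration fills the 64-point FFT
print(cfg["data"])              # 24 data subcarriers remain
```

Note that every row of Table 1 sums to 64, the FFT size from Table 2, so each configuration is a complete partition of the OFDM subcarriers.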
4 Simulation Results
For the simulation, subcarriers are allocated as shown in Fig. 3. For simplicity, it is assumed that two Wi-Fi networks, #1 and #2, are deployed in the same region.

4.1 Simulation Conditions
Table 2 shows the conditions for the computer simulation. Table 3 shows the four cases used in the performance evaluation. With Wi-Fi channel #1 as the interferee, they correspond to the cases in which the interferer is Wi-Fi channel #1, #2, #3, and #4, respectively, as shown in Fig. 1.

Table 2. Simulation conditions
Parameter                 Wi-Fi network #1                       Wi-Fi network #2
Bandwidth (MHz)           20                                     20
FFT Size                  64                                     64
Pulse Shaping Filter      Raised Cosine (Roll-off factor=0.25)   Raised Cosine (Roll-off factor=0.25)
Channel Coding            Convolutional coding (R=1/2, K=7)      Convolutional coding (R=1/2, K=7)
Modulation                QPSK                                   QPSK
Subcarrier Space (kHz)    312.5                                  312.5
Channel                   AWGN                                   AWGN
Table 3. Four cases for performance evaluation

Parameters (Wi-Fi network #1 / Wi-Fi network #2)   Case 1 (Ch.1/Ch.1)  Case 2 (Ch.1/Ch.2)  Case 3 (Ch.1/Ch.3)  Case 4 (Ch.1/Ch.4)
# data subcarriers                                 24/24               38/24               38/38               38/48
# pilot subcarriers                                2/2                 3/2                 3/3                 3/4
# subcarriers in the upper guard band              31/6                17/6                17/6                17/6
# subcarriers in the lower guard band              6/31                5/31                5/16                5/5
# DC subcarriers                                   1/1                 1/1                 1/1                 1/1

4.2 Simulation Results
Fig. 5 shows the throughput vs. the number of Wi-Fi STAs. As the number of Wi-Fi STAs increases, the throughput of a Wi-Fi network decreases slightly. However, if another Wi-Fi network sharing the spectrum is deployed in the same area, the throughput of each Wi-Fi network decreases further. The dashed line shows how the throughput of 10 Wi-Fi STAs served by Wi-Fi network #1 on channel #1 decreases with the number of Wi-Fi STAs served by Wi-Fi network #2 on channel #1.
[Figure: throughput (Mbps) vs. number of STAs; curves: Wi-Fi network #1 alone, and Wi-Fi network #1 with Wi-Fi network #2]
Fig. 5. Throughput of Wi-Fi network
Fig. 6 shows the performance of the proposed solution as throughput vs. the number of Wi-Fi STAs. Since the proposed solution manages the subcarriers overlapped by the two Wi-Fi networks, there is no interference between them; that is, even though the frequency bands of the two Wi-Fi networks overlap, the Wi-Fi STAs served by the two networks can send frames simultaneously. Because the number of subcarriers carrying data is decreased, the maximum throughput of each Wi-Fi network is decreased, but the total throughput of the two Wi-Fi networks is increased, as shown in Fig. 6 compared to Fig. 5.
[Figure: total throughput (Mbps) vs. number of STAs for Case 1 (Ch.1/Ch.1) through Case 4 (Ch.1/Ch.4)]
Fig. 6. Total throughput vs. the number of Wi-Fi STA's
5 Conclusions
We proposed a solution that improves the performance of spectrum-sharing Wi-Fi networks deployed in the same or an adjacent area. The proposed solution uses the PS-Request protocol to sense the other Wi-Fi networks accurately, and manages the overlapped subcarriers to avoid interference among the spectrum-sharing Wi-Fi networks. The proposed solution avoids the interference caused by the overlapped spectrum of Wi-Fi networks deployed in the same or an adjacent region, and it outperforms the conventional scheme.

Acknowledgments. This work was partly supported by the National Research Foundation of Korea Grant funded by the Korean Government (NRF-2010-0025509), and the present research has been partly conducted by the Research Grant of Kwangwoon University in 2011.
References 1. Kim, J., Park, S., Rhee, S.H., Choi, Y.-H., Hwang, H.: Energy Efficient Coexistence of WiFi and WiMAX Systems Sharing Frequency Band. In: Kim, T.-h., Lee, Y.-h., Kang, B.H., Ślęzak, D. (eds.) FGIT 2010. LNCS, vol. 6485, pp. 164–170. Springer, Heidelberg (2010) 2. Chiasserini, C.F., Rao, R.R.: Coexistence Mechanisms for Interference Mitigation between IEEE 802.11 WLANs and Bluetooth. In: Proceedings of INFOCOM 2002, pp. 590–598 (2002) 3. Yuan, W., Wang, X., Linnartz, J.-P.M.G.: A Coexistence Model of IEEE 802.15.4 and IEEE 802.11b/g. In: Philips research (2007)
4. Berlemann, L., Hoymann, C., Hiertz, G.R., Mangold, S.: Coexistence and Interworking of IEEE 802.16 and IEEE 802.11(e). In: Vehicular Technology Conference, VTC 2006-Spring, IEEE 63rd, vol. 1, pp. 27–31 (2006) 5. Berlemann, L., Hoymann, C., Hiertz, G.R., Walke, B.: Unlicensed Operation of IEEE 802.16: Coexistence With 802.11(a) in Shared Frequency Bands. In: IEEE 17th International Symposium on Personal Indoor and Mobile Radio Communications, pp. 1–5 (2006) 6. Jing, X., Raychaudhuri, D.: Spectrum Co-existence of IEEE 802.11b and 802.16a Networks Using Reactive and Proactive Etiquette Policies. In: First IEEE International Symposium on New Frontiers in Dynamic Spectrum Access Networks, DySPAN 2005, pp. 243–250 (2005)
Energy Saving Method for Wi-Fi Stations Based on Partial Virtual Bitmap

Sangmin Moon1, Taehyu Shin2, Suwon Park2, Hyunseok Lee2, Chae Bong Sohn2, Young-uk Chung3, and Ho Young Hwang4

1 Advanced Infotainment Research Team, Hyundai Motor Company
2 Dept. of Electronics and Communications Engineering
3 Dept. of Electronic Engineering
4 Dept. of Computer Engineering
Kwangwoon University, Seoul, Korea
[email protected], {taehyu.shin,spark,hyunseok,cbsohn,yuchung,hyhwang}@kw.ac.kr
Abstract. Mobile devices are powered by batteries with limited capacity, so energy-efficient operation is necessary to use them for a long time. In this paper, we propose an energy saving method for Wi-Fi stations that support IEEE 802.11. It modifies the existing partial virtual bitmap of IEEE 802.11 based on the IEEE 802.11a physical layer transmission time.

Keywords: IEEE 802.11, energy saving, beacon interval, traffic indication map.
1 Introduction
The proliferation of mobile devices has raised great concern about energy saving. The limited battery capacity of mobile devices restricts some wireless applications, because they need much energy for their operation. There have been many attempts to reduce the energy consumption of wireless devices in order to prolong their operating time per charge; energy saving has long been a popular research topic for mobile devices. Several methods exist to reduce the energy consumption of wireless LAN (WLAN) terminals. [1] proposed a scheme in which a Wi-Fi station (STA) controls its transmit power using a power-combination table. [2] proposed a scheme that optimizes energy consumption by using an adaptive ATIM window size in a Wi-Fi system. In fact, a basic power management scheme for the Wi-Fi system is contained in the IEEE 802.11 specification [3]. The scheme targets the infrastructure network, because maximum energy saving can be achieved there. Using the scheme, battery-powered Wi-Fi STAs can save energy: a Wi-Fi STA can sleep, or go into power-saving (PS) mode, during a designated time in which it does not communicate with its Wi-Fi access point (AP). Even though the Wi-Fi AP cannot transmit frames to the Wi-Fi STA during PS mode, it buffers frames for the STA. After the designated time has elapsed, the Wi-Fi STA wakes up and checks whether data is buffered for it at the Wi-Fi AP. To inform the Wi-Fi STA of this, the Wi-Fi AP periodically assembles and transmits a traffic indication map (TIM) in a beacon frame.

T.-h. Kim et al. (Eds.): ACN 2011, CCIS 199, pp. 335–340, 2011. © Springer-Verlag Berlin Heidelberg 2011
Fig. 1. Setting of the partial virtual bitmap on the IEEE 802.11
As shown in Fig. 1, the TIM is a virtual bitmap composed of 2008 bits; offsets are used so that the Wi-Fi AP needs to transmit only a small portion of the virtual bitmap. Each bit in the TIM corresponds to a particular association identifier (AID); a set bit indicates that the Wi-Fi AP has buffered unicast frames for the Wi-Fi STA whose AID corresponds to that bit position. Wi-Fi STAs in PS mode periodically wake up to listen to the traffic announcement and check whether the Wi-Fi AP has buffered frames for them. When a Wi-Fi STA learns from the TIM field of the beacon frame that its frames are buffered at the Wi-Fi AP, it sends a PS-Poll frame to the AP in order to retrieve them. If more than one Wi-Fi STA has buffered frames at the Wi-Fi AP, the STAs must compete for permission to issue their PS-Poll frames. After successful reception of a buffered frame by the permitted Wi-Fi STA, the other Wi-Fi STAs may continue to compete for the next permission.
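The trimming of the 2008-bit virtual bitmap into a partial virtual bitmap can be sketched as follows. This is a simplified illustration in our own code, not the exact IEEE 802.11 encoding: the real TIM element also carries a bitmap-control octet with the DTIM and multicast indication, which is omitted here.

```python
# Sketch: build a partial virtual bitmap from the set of AIDs that have
# buffered frames, keeping only the span of non-zero octets (simplified
# from IEEE 802.11; the offset counts pairs of octets and must be even).

def partial_virtual_bitmap(buffered_aids):
    """Return (offset, bitmap_bytes) covering only the non-zero octets."""
    full = bytearray(251)                 # 2008 bits = 251 octets
    for aid in buffered_aids:
        full[aid // 8] |= 1 << (aid % 8)  # one bit per AID
    nonzero = [i for i, b in enumerate(full) if b]
    if not nonzero:
        return 0, bytes(1)                # nothing buffered: single zero octet
    first, last = nonzero[0] & ~1, nonzero[-1]  # start octet rounded to even
    return first // 2, bytes(full[first:last + 1])

# The AIDs from the Fig. 2 example (17..31) fall in octets 2-3, so only
# two octets of the 251-octet virtual bitmap need to be transmitted.
offset, bitmap = partial_virtual_bitmap({17, 19, 21, 23, 25, 27, 29, 31})
print(offset, bitmap.hex())
```

This illustrates why the partial virtual bitmap keeps beacon frames short even though the full bitmap spans 2008 AIDs.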
2 Problem Statements

Fig. 2 shows an example of the buffering and frame retrieval procedure during PS mode of Wi-Fi STAs as specified in IEEE 802.11; it also shows the problem that motivates this paper. Wi-Fi STAs indicated in the TIM field of the beacon frame may retrieve the frames buffered at the Wi-Fi AP. During the fourth beacon interval, frames are buffered for many STAs: #17, #19, #21, #23, #25, #27, #29, and #31. Assume that, after competition, STA #17 acquires the right to send a PS-Poll; it receives a buffered frame in response and then sleeps. Subsequently, STAs #23, #31, #29, #19, and #25 receive their own frames before the end of the beacon interval by winning the competition in turn. However, STAs #21 and #27 do not acquire permission to send their PS-Poll frames within the beacon interval. Although they cannot retrieve the frames buffered at the Wi-Fi AP, they stay awake until the end of the beacon interval, consuming energy that could be saved if they were not indicated in the TIM field of the beacon frame.
Fig. 2. An example of conventional energy saving methods for Wi-Fi STAs
3 Proposed Partial Virtual Bitmap Setting

The partial virtual bitmap in the conventional scheme is set as follows: the Wi-Fi AP sets the bitmap based only on whether buffered frames exist for each STA, regardless of the number of Wi-Fi STAs that can actually be served during a beacon interval. To reduce the energy consumption of Wi-Fi STAs in PS mode, we propose to modify the conventional bitmap setting to take this number into account. That is, to avoid energy being wasted by Wi-Fi STAs that have buffered data at the Wi-Fi AP but cannot be served during the beacon interval, we propose a modified partial virtual bitmap (MPVB) setting that considers the number of STAs that can be served within the beacon interval. The MPVB is set based on the possible data transmission time within a beacon interval and on the transmission and reception time implied by the buffered frame size of each Wi-Fi STA. The possible transmission time during a beacon interval, T_total, is defined as follows.
T_total = T_BI − T_CF-end   (T_CF-end > 0)
T_total = T_BI − T_B        (T_CF-end = 0)                                  (1.1)
where T_BI is the beacon interval, T_CF-end is the contention-free period within the beacon interval, and T_B is the time for the transmission of the beacon frames within the beacon interval.
The required time for retrieving the buffered frame is as follows.
T_Type1 = T_Backoff + T_DIFS + 2·T_SIFS + T_PS-Poll + T_Data + T_ACK        (1.2)

T_Type2 = 2·T_SIFS + T_Data + T_ACK                                         (1.3)
Equation (1.2) is the time necessary for retrieving the first frame of the chosen Wi-Fi STA. Equation (1.3) is the time necessary for retrieving each subsequent frame when multiple frames are buffered. The Wi-Fi AP knows the listen interval of each Wi-Fi STA through the listen interval field of the association request frame, and these values do not change before re-association. In addition, the offset of the awake period of a Wi-Fi STA is known from the transmission of its first packet. Before the MPVB is set, therefore, the Wi-Fi AP selects, among the Wi-Fi STAs receiving the beacon frame, the STA whose packet arrived first at the buffer. The AP then selects the following Wi-Fi STAs to receive their buffered frames by the following procedure: the time necessary for each STA's frame transmission is subtracted from T_total, and this is repeated until the remaining time is insufficient or no frames remain in the buffer. If a selected Wi-Fi STA has multiple buffered frames, a maximum value on the number of frames that may be transmitted prevents that STA from monopolizing the channel. If the Wi-Fi AP sets the TIM based on the MPVB scheme, only the Wi-Fi STAs that can actually retrieve their buffered frames remain awake during the beacon interval. Therefore, we resolve the unnecessary power consumption of the Wi-Fi STAs that cannot be served by the Wi-Fi AP.
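The selection procedure above can be sketched in a few lines. This is our own illustration: the timing constants are placeholders, not IEEE 802.11a values, and the frame cap is an assumed parameter.

```python
# Sketch of the MPVB selection: starting from the STA whose frame arrived
# first, keep adding STAs while their retrieval time fits within T_total,
# capping frames per STA so no single STA monopolizes the channel.
# Timing constants below are illustrative placeholders.

T_TYPE1_MS = 2.0        # eq. (1.2): backoff + DIFS + 2*SIFS + PS-Poll + data + ACK
T_TYPE2_MS = 1.2        # eq. (1.3): 2*SIFS + data + ACK, per additional frame
MAX_FRAMES_PER_STA = 4  # assumed cap on frames served per STA per interval

def select_stas(buffered, t_total_ms):
    """buffered: list of (aid, n_frames), ordered by first frame arrival.
    Returns the AIDs whose bits should be set in the MPVB."""
    selected, remaining = [], t_total_ms
    for aid, n_frames in buffered:
        n = min(n_frames, MAX_FRAMES_PER_STA)
        cost = T_TYPE1_MS + (n - 1) * T_TYPE2_MS
        if cost > remaining:
            break  # this STA cannot be served; leave its bit (and later ones) unset
        remaining -= cost
        selected.append(aid)
    return selected

print(select_stas([(17, 1), (19, 2), (21, 5)], t_total_ms=8.0))  # [17, 19]
```

STA #21 is excluded because its (capped) retrieval time no longer fits, so its TIM bit stays unset and it can sleep through the interval.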
4 Simulation Results

Table 1 shows the simulation environment in which we verify the proposed algorithm.

Table 1. Simulation conditions for performance evaluation

System                                      Wi-Fi
Physical layer                              IEEE 802.11a
System model                                MK/G/1/N buffer
Traffic model                               Poisson process
Average arrival rate                        1.1, 1.4, 1.7, 2.0
Time unit related to the arrival rate (ms)  100
Packet size (bytes)                         2312
Data rate (Mbps)                            6
Beacon interval (ms)                        100
Awake interval of the stations (ms)         3 x 100
Initial awake time of the stations (ms)     0, 100, 200
Simulation time (ms)                        60 x 60 x 1000 (= 1 hour)
Iterations of the simulation                30
Fig. 3. Power consumption reduction according to the traffic variation
Fig. 3 shows the power consumption of the proposed algorithm relative to the existing scheme as the average packet arrival rate at the buffer and the number of STAs increase. The number of served STAs rises as the average arrival rate increases, and the number of STAs that unnecessarily stay awake because they cannot send their PS-Poll frames also increases. In this situation, applying the MPVB scheme lets twelve STAs at a data rate of 6 Mbps secure about 8% more sleeping time when the traffic is light (λ=1.1). As the amount of traffic increases, about 28%, 50%, and 70% more sleeping time is secured. In other words, an energy saving effect is obtained.
5 Conclusion

In this paper, the existing partial virtual bitmap setting method is modified in order to reduce the inefficient power consumption of STAs in a Wi-Fi system. The frame transmission time for each data rate supported by IEEE 802.11a was calculated, the amount of data that can be transmitted during the beacon interval was derived, and the partial virtual bitmap was set accordingly. In the simulation, the power saving and packet delay were obtained, based on the Poisson traffic model and the MK/G/1/N buffer model, and compared with the existing scheme. The results show that when the amount of traffic is small, the proposed scheme has no practical effect; however, as the traffic increased, up to about 70% more sleeping time than with the conventional scheme could be secured. Moreover, the listen interval of a STA affects the power saving: as the listen interval increases, more power can be saved, but the packet delay grows. Therefore, the power of the STA has to be managed by considering the listen interval and the packet delay together. The proposed scheme is compatible with IEEE 802.11e, which supports QoS; analyzing its performance and power-saving enhancement under IEEE 802.11e remains future work. In addition, if the performance analysis is carried out with realistic traffic models such as FTP, HTTP, and VOD, the proposed algorithm will be even more useful.

Acknowledgments. This work was supported by the National Research Foundation of Korea Grant funded by the Korean Government (NRF-2010-0025509).
References

1. Woesner, H., Ebert, J.-P., Schläger, M., Wolisz, A.: Power-saving mechanisms in emerging standards for wireless LANs: The MAC level perspective. IEEE Personal Communications 5(3), 40–48 (1998) 2. Qiao, D., Choi, S., Jain, A., Shin, K.G.: Adaptive transmit power control in IEEE 802.11a wireless LANs. In: The 57th IEEE Semiannual Vehicular Technology Conference, VTC 2003-Spring, vol. 1, pp. 433–437 (April 2003) 3. IEEE Std 802.11 Part 11: Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY) Specifications (2007) 4. IEEE Std 802.11a Part 11: Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY) Specifications: High-Speed Physical Layer Extension in the 5 GHz Band (1999) 5. IEEE Std 802.11b Part 11: Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY) Specifications: Higher-Speed Physical Layer Extension in the 2.4 GHz Band (1999) 6. Jung, E.-S., Vaidya, N.H.: An Energy Efficient MAC Protocol for Wireless LANs. In: IEEE INFOCOM, vol. 3, pp. 1756–1764 (2002) 7. Perez-Costa, X., Camps-Mur, D.: A Protocol Enhancement for IEEE 802.11 Distributed Power Saving Mechanisms: No Data Acknowledgement. In: 16th IST Mobile and Wireless Communications Summit, pp. 1–7 (2007) 8. Ibe, O.C.: Fundamentals of Applied Probability and Random Processes. Academic Press, New York (2005) 9. Kendall, D.G.: Some Problems in the Theory of Queues. J. Royal Statistical Society 13, 151–185 (1951) 10. Costa, X.P., Mur, D.C., Sashihara, T.: Analysis of the Integration of IEEE 802.11e Capabilities in Battery Limited Mobile Devices. IEEE Wireless Communications 12, 26–32 (2005) 11. Xiao, Y., Rosdahl, J.: Throughput and Delay Limits of IEEE 802.11. IEEE Communications Letters, 355–357 (2002) 12. Gast, M.S.: 802.11 Wireless Networks. O'Reilly, Sebastopol (2005)
A Handover Scheme for Improving Throughput Using Vehicle's Moving Path Information*

Sang Hyuck Han1, Suwon Park2, and Yong-Hoon Choi1

1 Dept. of Information and Control Eng., Kwangwoon University
2 Dept. of Electronics and Communications Eng., Kwangwoon University
447-1, Wolgye-dong, Nowon-gu, Seoul 139-701, Korea
{allons-y,spark,yhchoi}@kw.ac.kr
Abstract. With the successful proliferation of smart phones, the demand for seamless and faster service in vehicular environments is increasing. Typical handover mechanisms in practice today may lead to unnecessary or badly timed handovers when they are applied directly to vehicular environments. In this paper, we propose a novel handover mechanism that uses a vehicle's moving path information to reduce the number of handovers and to increase throughput. During the scanning process, a mobile node (MN) estimates the average distances to its neighbor base stations (BSs) over a certain time interval using the vehicle's moving path information. The BS with the shortest distance from the MN is selected as the target BS for a possible handover, so that the number of handovers along the path can be reduced. We carried out a performance evaluation with four different moving paths and six different topologies. Through extensive simulations, we show that the proposed scheme reduces the average number of handovers and increases the network throughput.

Keywords: Handover, moving path, throughput.
1 Introduction
As smart phones have recently become widespread, it is possible to receive wireless Internet service in all types of everyday environments: at home, in the workplace, in a car, on a bus, or on a train. The handover scheme applied in current wireless networks (e.g., 3G or Mobile WiMAX) manages the list of neighbor BSs based on signal strength: handover is performed to the BS whose signal strength is the strongest when the signal strength of the serving BS drops below a given threshold. However, in environments where a vehicle moves, the additional factors below should be taken into consideration.
• It is not necessary to perform handover to a cell with a short dwell time. For example, assume that handover takes place to cell 1, cell 2, and cell 3
* This research was supported by the KCC (Korea Communications Commission), Korea, under the R&D program supervised by the KCA (Korea Communications Agency) (KCA-201109913-04002).
T.-h. Kim et al. (Eds.): ACN 2011, CCIS 199, pp. 341–346, 2011. © Springer-Verlag Berlin Heidelberg 2011
consecutively as the vehicle moves. If service of sufficient quality can be provided even when handover takes place from cell 1 directly to cell 3, it is unnecessary to perform handover to the intermediate cell 2.
• Since a vehicle is allowed to run only on roads, it is advisable to avoid handover to a cell that mainly serves an area inaccessible to the vehicle. The signal strength from such an inaccessible cell tends to degrade, even if it is above the threshold during the scanning period.
Handover causes service-quality degradation and signaling overhead, which is a weakness when providing real-time multimedia services; it is therefore important to reduce the frequency of handovers. To this end, many studies have applied location estimation schemes, previously used for location management, to handover [1][2]. This approach can reduce the scanning overhead for neighbor BSs. However, vehicle passengers are not assumed, and the location estimation algorithms are complex, so it is difficult to apply such methods in practice. In this paper, we propose a handover mechanism that uses a vehicle's moving path information to reduce the number of handovers and to increase throughput. During the scanning process, a MN estimates the average distances to its neighbor BSs over a certain time interval using the vehicle's moving path information. The BS with the shortest distance from the MN is selected as the target BS for a possible handover, so that the number of handovers along the path can be reduced. The rest of this paper is organized as follows. Section 2 briefly describes the proposed handover scheme and the simulation conditions. Section 3 verifies the proposed scheme by experiment. Finally, Section 4 presents conclusions.
2 Proposed Handover Scheme and Simulation Conditions
We assume that the whole moving path of a vehicle is known. First, we observe the average speed of the vehicle at a regular interval (100 ms in this paper). Second, based on that speed, we predict the average distance between the vehicle and each neighbor BS at every time interval. When the vehicle needs to perform handover to one of the neighbor BSs, it tries to hand over to the predicted nearest BS for the given time period of 100 ms; therefore, the best throughput can be guaranteed for that period. If the vehicle cannot find any candidate BS by the proposed scheme, handover is performed to the BS whose signal strength is the strongest, as in the original handover scheme. The experimental space is 5 km x 5 km, with two-lane roads placed at a distance of 100 m in a grid shape. Six types of BS arrangement were considered, with BSs placed at distances of 500 m, 600 m, 700 m, 800 m, 900 m, and 1,000 m in the experimental space. In this experiment, the four moving paths shown in Fig. 1 were used for the performance analysis. Each moving path was derived from the Levy Walk model [3], whose moving pattern is similar to that of a real MN. In all four moving paths, vehicles move at a constant speed of 72 km/h.
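The target-BS selection described above can be sketched as follows. The geometry helpers, coordinates, and function names are ours, for illustration only: the path over the next interval is sampled at the observed speed, and the BS with the smallest average distance is chosen.

```python
# Sketch: pick the neighbor BS with the smallest average distance to the
# vehicle's known path over the next prediction interval. Coordinates and
# helper names are hypothetical.

import math

def avg_distance(path_points, bs):
    """Mean Euclidean distance from the sampled path points to a BS."""
    return sum(math.dist(p, bs) for p in path_points) / len(path_points)

def select_target_bs(path_points, neighbor_bss):
    """Return the neighbor BS nearest on average over the interval."""
    return min(neighbor_bss, key=lambda bs: avg_distance(path_points, bs))

# Vehicle heading east at 72 km/h = 2 m per 100 ms; sample ten 100 ms steps.
path = [(x * 2.0, 0.0) for x in range(10)]
bss = [(-500.0, 0.0), (250.0, 100.0)]    # hypothetical BS coordinates
print(select_target_bs(path, bss))       # (250.0, 100.0) is closer on average
```

Averaging over the whole interval, rather than using the instantaneous position, is what lets the scheme skip a cell the vehicle would only pass through briefly.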
< Moving Path 1 >   < Moving Path 2 >   < Moving Path 3 >   < Moving Path 4 >
Fig. 1. Four different moving paths used in our simulation. Each moving path was derived from Levy Walk Model.

Table 1. Summary of experiment conditions

Parameters             Value
BS height              32 m
MN height              1.5 m
Propagation model      Cost 231 Hata Propagation Model
BS maximum PA power    43 dBm
Penetration loss       10 dB
Table 2. Illustration of AMC sets and the corresponding data rates

Modulation  FEC rate  RSSI (dB)  Data rate (Kbps)
64QAM       3/4       > -66.86   16,848
64QAM       2/3       > -68.86   14,976
16QAM       3/4       > -72.86   11,232
16QAM       1/2       > -74.86   7,488
QPSK        3/4       > -79.86   5,616
QPSK        1/2       > -81.86   3,744
QPSK        1/4       Else       1,872
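The RSSI-to-rate mapping of Table 2 amounts to a threshold lookup. A minimal sketch (the function name is ours; the final fallback corresponds to the QPSK 1/4 "Else" row):

```python
# (RSSI threshold in dB, data rate in Kbps) pairs from Table 2, strongest MCS first
AMC_TABLE = [
    (-66.86, 16848),  # 64QAM 3/4
    (-68.86, 14976),  # 64QAM 2/3
    (-72.86, 11232),  # 16QAM 3/4
    (-74.86, 7488),   # 16QAM 1/2
    (-79.86, 5616),   # QPSK 3/4
    (-81.86, 3744),   # QPSK 1/2
]

def data_rate_kbps(rssi_db):
    """Return the data rate of the first MCS level whose RSSI
    threshold the measured value exceeds; otherwise QPSK 1/4."""
    for threshold, rate in AMC_TABLE:
        if rssi_db > threshold:
            return rate
    return 1872  # QPSK 1/4 ("Else" row)
```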
344
S.H. Han, S. Park, and Y.-H. Choi
To measure the received signal strength indicator (RSSI) between an MN and a BS, we used the COST 231 Hata propagation model [4]. The simulation conditions used in the experiments are summarized in Table 1. The average throughput of an MN is derived from Table 2, which lists the modulation and coding scheme (MCS) set and the corresponding data rates for the Mobile WiMAX (10 MHz)/WiBro (8.75 MHz) system. The data rates are the values obtained by filling all the subchannels with a particular MCS level, assuming a single-antenna link in the partial usage subchannel (PUSC) zone. The TDD downlink (DL) to uplink (UL) symbol ratio is set to 29:18 for WiMAX and 27:15 for WiBro.
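For reference, the standard COST 231 Hata median path loss could be computed as in the sketch below, using the BS/MN heights and losses of Table 1. The carrier frequency (2,300 MHz, the WiBro band, slightly above the model's nominal 2,000 MHz validity limit) and the use of the 43 dBm PA power as the transmit power are our assumptions; the paper does not give its exact parameterization.

```python
import math

def cost231_hata_loss(d_km, f_mhz=2300.0, h_bs=32.0, h_mn=1.5, c_db=0.0):
    """COST 231 Hata median path loss in dB (medium-city mobile-antenna
    correction, c_db = 0). Heights follow Table 1; f_mhz is assumed."""
    a_hm = ((1.1 * math.log10(f_mhz) - 0.7) * h_mn
            - (1.56 * math.log10(f_mhz) - 0.8))
    return (46.3 + 33.9 * math.log10(f_mhz)
            - 13.82 * math.log10(h_bs) - a_hm
            + (44.9 - 6.55 * math.log10(h_bs)) * math.log10(d_km)
            + c_db)

def rssi_dbm(tx_power_dbm, d_km, penetration_db=10.0):
    """RSSI as transmit power minus path loss and penetration loss."""
    return tx_power_dbm - cost231_hata_loss(d_km) - penetration_db
```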
3
Simulation Results
We monitored the average number of handovers, the average RSSI, and the average throughput, and compared these three performance metrics of the proposed scheme with those of the existing handover scheme. Fig. 2 compares the average number of handovers for moving paths 1-4 under the six BS arrangements. When the BS-to-BS distance was 500 m, the average number of handovers decreased by three; at 600 m, by 4.5; and at 700 m, by 1.25. Otherwise, the number of handovers in the proposed scheme was the same as in the original handover scheme.
Fig. 2. Number of handovers
Fig. 3 compares the average RSSI values of the proposed handover scheme with those of the existing handover scheme. As shown in Fig. 3, a higher RSSI gain was obtained when the BS-to-BS distance was small (e.g., 500 m, 600 m, and 700 m). For a BS-to-BS distance of 500 m, the value increased by 1.86 dB on average over the four moving paths; for 700 m, it increased by 0.2 dB on average.
Fig. 3. Average RSSI values
Fig. 4 shows the average throughput gains. For BS-to-BS distances of 500 m to 700 m, the throughput increased, whereas for 800 m to 1,000 m it was unchanged or improved only slightly across the four moving paths. For example, when the BS-to-BS distance was 500 m, the throughput increased by around 552 Kbps on average; when it was 900 m, it increased by only around 20 Kbps on average.
Fig. 4. Average throughput
4
Conclusions
In this paper, we proposed a novel handover mechanism that uses a vehicle's moving path information to reduce the number of handovers and to increase throughput. During the scanning process, an MN estimates its average distance from each neighbor BS over a certain time interval using the vehicle's moving path information. The BS with the shortest distance from the MN is selected as the target BS for a possible handover, so that the number of handovers along the path can be reduced. We carried out a performance evaluation with four different moving paths and six different topologies. Through extensive simulations, we showed that the proposed scheme reduces the average number of handovers and increases the network throughput.
References
1. Aljadhai, A., Znati, T.F.: Predictive Mobility Support for QoS Provisioning in Mobile Wireless Environments. IEEE J. Select. Areas Commun. 19, 1915–1931 (2001)
2. Liu, T., et al.: Mobility Modeling, Location Tracking, and Trajectory Prediction in Wireless ATM Networks. IEEE J. Select. Areas Commun. 16, 922–936 (1998)
3. Rhee, I., Shin, M., Chong, S.: On the Levy-Walk Nature of Human Mobility: Do Humans Walk Like Monkeys? In: IEEE INFOCOM 2008 (2008)
4. Hata, M.: Empirical Formula for Propagation Loss in Land Mobile Radio Services. IEEE Trans. Veh. Tech. 29, 317–325 (1980)
Effects of the Location of APs on Positioning Error in RSS Value Based Scheme Hyeonmu Jeon, Uk-Jo, Mingyu-Jo, Nammoon Kim, and Youngok Kim Department of Electronics Engineering, Kwangwoon University, Korea [email protected]
Abstract. In this paper, the effects of the locations of APs on positioning error are discussed for a received signal strength (RSS) value based scheme. The RSS values are measured to characterize an AP's RSS as the distance between the AP and the receiver increases. The positioning errors are evaluated with three APs: a triangle is formed by the three APs, and the errors are classified as inside or outside errors according to whether the receiver is located inside the triangle or not. The results indicate that the inside errors of the triangle formed by the three APs are lower than the outside errors. Keywords: Received signal strength, Localization, Log model, Positioning, WLAN.
1
Introduction
Global positioning system (GPS), one of the typical outdoor positioning technologies, is used in daily life to meet the diverse needs of customers. However, it is not suitable for location tracking in indoor environments. To meet customers' needs indoors as well, non-GPS indoor positioning technologies have been introduced, employing systems such as wireless local area network (WLAN), WiBro, and Zigbee. In particular, WLAN-based positioning is known to be a practical scheme because wireless networks are already in routine use [1]. There are several ways to use WLAN for positioning, such as the log model and fingerprinting. The fingerprint method has the cumbersome requirement that the received signal strength (RSS) at each point be collected in advance, which takes a long time; the log model method does not need this process. Because of its simplicity and practicality, the log model method is considered in this paper. In this paper, the effects of the locations of WLAN APs on positioning error are discussed for an RSS value based scheme. The RSS does not have a constant value for a given distance between the AP and the receiver: whenever RSS values are measured at the same position, they vary slightly around the mean RSS value. Nevertheless, the distance between the AP and the receiver can be estimated from the RSS value. In addition, it is known that the signal interference from
T.-h. Kim et al. (Eds.): ACN 2011, CCIS 199, pp. 347–352, 2011. © Springer-Verlag Berlin Heidelberg 2011
different APs does not appear clearly, although the signal used for data transmission is affected by the interference [2]. The signal interference is ignored in our experiments because we only consider the signal strength. In the measurement of RSS values, the signal is received at a height of 60 cm from the floor in a LOS environment, and the RSS values at each point are measured 10 times and averaged to mitigate Gaussian noise and unstable RSS readings. An LG XnoteR50 laptop computer, whose antennas surround the monitor, is used as the receiver, and a ZIO-AP1550N, which uses the 2.4 GHz band and 802.11g, is used as the AP.
2
RSS Characteristics of AP
The propagation can be affected by signal attenuation, shadowing, multipath, scattering, and diffraction [3]. Because of these effects, it is useful to know the RSS characteristics of an AP in order to estimate a location in an indoor environment. In a common office building, the RSS values are measured at intervals of 0.5 m to characterize the AP's RSS as the distance between the AP and the receiver increases. The RSS can be modeled by a constant component and a variable component. Generally, the constant component is modeled by a path loss propagation model, and the variable component is modeled by complex propagation effects, namely signal attenuation, shadowing, multipath, scattering, and diffraction. In this paper, the log normal shadowing model is used to handle these RSS problems [4]. The log normal shadowing model is expressed as follows:

RSS(d) = RSS(d0) - 10 N log10(d / d0),     (1)

where the reference distance d0 is 1 m, RSS(d0) is the RSS value at the reference distance, and the loss coefficient is N = 1.6, which is appropriate for an in-building LOS environment [5]. When the RSS value is measured at the receiver, the measured RSS values can be transformed into the distance between the receiver and the AP by using equation (1).
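Equation (1) can be inverted to turn a measured RSS into a distance estimate. A minimal sketch, assuming a hypothetical 1 m reference RSS of -40 dBm (the paper does not state its reference value):

```python
import math

def rss_at(d_m, rss_d0=-40.0, n=1.6):
    """Log normal shadowing model (1): RSS(d) = RSS(d0) - 10*N*log10(d/d0),
    with reference distance d0 = 1 m. rss_d0 is an assumed reference RSS."""
    return rss_d0 - 10.0 * n * math.log10(d_m / 1.0)

def distance_from_rss(rss, rss_d0=-40.0, n=1.6):
    """Invert (1) to estimate the AP-receiver distance in metres."""
    return 10.0 ** ((rss_d0 - rss) / (10.0 * n))
```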
Fig. 1. The measured RSS values and the estimated distance by measured RSS
Figure 1 compares the log normal shadowing model with the RSS values measured at each distance. It also presents the distance between the receiver and the AP estimated from the measured RSS value by using
the log normal shadowing model. There are location errors between the shadowing model and the measured RSS values, specifically around 4-5 m and 6-7 m. Since the RSS becomes weaker and more unstable as the distance from the AP increases, the errors between the estimated distance and the real distance grow. As shown in the figure, the measured RSS values follow the log normal shadowing model closely up to 4 m. In other words, ranging with RSS values is reliable when the APs are installed at intervals of about 4 m.
3
Position Estimation with RSS Value
3.1
Localization Using Three APs
To evaluate the position errors of the RSS value based scheme, the APs are set up as shown in Figure 2; the grid interval in the figure is 1 m. Although it is reliable to place APs at intervals of 4 m, the APs are installed as shown below to reflect a typical indoor environment such as an office building. AP1, AP2, and AP3 are selected to form a triangle.
Fig. 2. Placement of APs
The RSS values at each point are recorded, and the distances estimated from them are calculated using the log normal shadowing model. Then, the estimated coordinates are obtained from the lateration matrix using the estimated distances between the APs and the receiver. Lateration is a technique that estimates a position by using the distances between the APs and the receiver, and the lateration matrix is expressed as follows [3]:
H X = B     (2)

where

H = [ x1 - x3   y1 - y3
      x2 - x3   y2 - y3 ]     (3)

B = (1/2) [ x1^2 - x3^2 + y1^2 - y3^2 + d3^2 - d1^2
            x2^2 - x3^2 + y2^2 - y3^2 + d3^2 - d2^2 ]     (4)

(xi, yi) are the coordinates of APi and di is the distance between APi and the receiver. Therefore, the least-squares solution of (2) is given by

X = (H^T H)^(-1) H^T B     (5)

and the matrix X, which is the coordinates of the receiver, can be estimated.

3.2
Experimental Results
The positioning errors are evaluated with three APs. A triangle is formed by the three APs, and the errors are classified as inside or outside errors according to whether the receiver is located inside the triangle or not. Figure 3 shows the placement of the three APs and the nineteen arbitrary points. As shown in the figure, a triangle is formed by AP1, AP2, and AP3. The RSS values are measured at the nineteen points to compare the position error at the inside points (1~5) and the outside points (6~19) of the triangle.
Fig. 3. Placement of three APs and the arbitrary nineteen points
Fig. 4. Error histogram for each point

Table 1. Mean errors and standard deviation for the triangle method

Triangle         Mean error (m)  Standard deviation (m)
Inside (1~5)     2.1611          2.7324
Outside (6~19)   7.3866          4.3636
Total (1~19)     6.1299          4.4809
The RSS values at each point are transformed into coordinates by the log normal shadowing model and the lateration matrix, and the positioning errors between each estimated coordinate and the real coordinate are then calculated. Figure 4 shows the positioning error of each point; the inside and outside points of the triangle are marked in red and blue, respectively. The experimental results indicate that the inside errors of the triangle formed by the three APs are lower than the outside errors. Table 1 presents the mean and standard deviation of the errors for the inside and outside of the triangle. The mean position error of the inside points is about 29% of the outside error, while the standard deviation is about 62% of the outside value. Therefore, the inside area of the triangle provides more reliable positioning performance, and the positioning errors increase outside the triangle. That is, a target whose location is estimated from RSS values should be inside the triangle formed by the APs rather than outside it.
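The transformation from three measured distances to coordinates via equations (2)-(5) can be sketched in a few lines, solving the 2x2 normal equations directly. This is an illustrative implementation; the AP layout and function name are our own, not the paper's code.

```python
def laterate(aps, dists):
    """Least-squares lateration per (2)-(5): each row of H is
    (xi - xn, yi - yn) relative to the last AP, B_i is
    0.5*(xi^2 - xn^2 + yi^2 - yn^2 + dn^2 - di^2), and
    X = (H^T H)^(-1) H^T B is solved via the 2x2 normal equations."""
    (xn, yn), dn = aps[-1], dists[-1]
    H, B = [], []
    for (xi, yi), di in zip(aps[:-1], dists[:-1]):
        H.append((xi - xn, yi - yn))
        B.append(0.5 * (xi**2 - xn**2 + yi**2 - yn**2 + dn**2 - di**2))
    # normal equations: (H^T H) X = H^T B, a 2x2 linear system
    a = sum(h[0] * h[0] for h in H)
    b = sum(h[0] * h[1] for h in H)
    d = sum(h[1] * h[1] for h in H)
    p = sum(h[0] * bi for h, bi in zip(H, B))
    q = sum(h[1] * bi for h, bi in zip(H, B))
    det = a * d - b * b
    return ((d * p - b * q) / det, (a * q - b * p) / det)
```

With exact distances the estimate recovers the receiver position; with noisy RSS-derived distances it returns the least-squares fit.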
4
Conclusion
In this paper, the effects of the locations of WLAN APs on positioning error were discussed for an RSS value based scheme. We used the log normal shadowing model and the lateration matrix to estimate the position. The positioning errors were evaluated with three APs forming a triangle, and the errors were classified as inside or outside errors according to whether the receiver was located inside the triangle or not. The difference in mean positioning error between the inside and the outside of the triangle is more than 5 m. This means that a target estimating its location from RSS values should be inside the triangle formed by the APs.
Acknowledgment. This research was supported by Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science and Technology (No.2011-0004197).
References
1. Liu, H., Darabi, H., Banerjee, P., Liu, J.: Survey of Wireless Indoor Positioning Techniques and Systems. IEEE Transactions on Systems, Man, and Cybernetics 37(6) (November 2007)
2. Jeon, H., Kim, N., Jo, U., Jo, M., Kim, Y.: Performance Evaluation of RSSI based Positioning Scheme over Different Receiving Positions. KICS 44 (February 2011)
3. Figueiras, J., Frattasi, S.: Mobile Positioning and Tracking. Wiley, Chichester (2010)
4. Jianwu, Z., Lu, Z.: Research on Distance Measurement Based on RSSI of Zigbee. In: ISECS International Colloquium on Computing, Communication, Control, and Management, pp. 210–212 (2009)
5. Rappaport, T.S.: Wireless Communications: Principles and Practice. Prentice Hall, Upper Saddle River (2002)
Distributed Color Tracker for Remote Robot Applications and Simulation Environment Yong-Ho Seo Department of Intelligent Robot Engineering, Mokwon University, Mokwon Gil 21, Seo-gu, Daejeon, Republic of Korea [email protected]
Abstract. This paper proposes a distributed color tracker, for a robot and for simulated car navigation, that can run in a network node. To perform reliable color tracking both on a real robot and in a simulated environment, a color tracker needs reliable color filtering and a region segmentation technique. The proposed tracker has been implemented as a network service, so it can be deployed either on any computer node in a network environment or on the robot itself. The tracker service is also used by a coordination service that lets a robot follow a user wearing a colored shirt and lets a simulated car navigate a given track autonomously. We successfully demonstrated human following by a robot and autonomous car navigation using the tracker. Keywords: Color Tracker, Distributed Robot Software, Network Service Programming, Vision Based Tracking.
1
Introduction
Image processing techniques that detect color objects, faces, and hand gestures are an interesting issue in research areas including human-computer interaction and human-robot interaction. Vision based tracking, in which a robot equipped with a camera follows a user wearing a colored shirt, is also an interesting topic in robotics research nowadays [1]. Service oriented programming is a new programming paradigm spreading throughout the IT industry. The concept of Service-Oriented Programming (SOP) is being defined across the industry, including Sun's Jini, Openwings, Microsoft's .NET, and HP's CoolTown [2]. This technology is driven by the exploitation of networking technology and the need to create more powerful capabilities based on services [3]. We therefore used distributed network programming for a robot performing an autonomous color object tracking task, to show the usability and the key characteristics and patterns of service-oriented network programming to developers and researchers in the robotics area. In this paper, we propose a color tracker, built with network service programming, that enables a robot to perform reliable tasks such as human following and autonomous car navigation in a simulated environment.
T.-h. Kim et al. (Eds.): ACN 2011, CCIS 199, pp. 353–361, 2011. © Springer-Verlag Berlin Heidelberg 2011
2
Developing Distributed Network Services
To implement a distributed color tracker for remote robot applications and a simulation environment, we used MSRDS (Microsoft Robotics Developer Studio), a software toolkit that lets robotics developers create applications for a variety of hardware platforms on top of Microsoft's .NET framework [4],[5]. This network based service programming tool for robotics includes a lightweight, REST-style, service-oriented runtime for creating applications that let a user monitor or control a robot remotely from a Web browser, send commands to robots, and control them to survey a remote location. Many application services are already included, and services provided by third parties are shown in Fig. 1.
Fig. 1. Structure of Application Services of MSRDS
Fig. 2. Development Patterns and Features of CCR
The major features of Microsoft Robotics Developer Studio for programming network services in robotic applications are CCR and DSS. CCR stands for Concurrency and Coordination Runtime; it makes it simple to write programs that handle asynchronous input from multiple robotic sensors and output to motors and actuators. Fig. 2 shows the development patterns and features of CCR. Code takes a fairly sequential form with no thread blocking and no callback handlers, in contrast to existing parallel programming patterns, and CCR also removes performance bottlenecks, avoiding time delays and decreasing the complexity of service modules. With CCR we can develop scalable concurrent applications, such as a workload decomposed into many heterogeneous work items, ample latent concurrency mapped onto computational resources, and a data-flow scheduler. DSS stands for Decentralized Software Services; it provides a lightweight, state-oriented service model that combines the notion of representational state transfer with a system-level approach for building high-performance, scalable applications. In DSS, services are exposed as resources that are accessible both programmatically and for UI manipulation. The DSS runtime provides a hosting environment with built-in support for service composition, publish/subscribe, lifetime management, security, monitoring, logging, and much more, both within a single node and across the network. Fig. 3 shows an example of deploying distributed services into a network environment using DSS, and the internal structure of a DSS service implementation.
Fig. 3. Distributed Services using DSS and Internal Structure of DSS service
3
Color Tracker Service
In this research, the color tracker was developed as a service named Simple Vision, which implements image processing functions using a conventional USB webcam or a simulated camera in a simulation environment. When the robot follows a specific person using this color tracker, the color object is taken to be the user's colored shirt. The same assumption is used when detecting faces and hand gestures. To detect a specified color object, the service uses normalized RGB colors and a similarity measure between the current pixel vector and a registered color vector, followed by segmentation.
Fig. 4. Color Tracker Service and its Deployment in a network node
The similar color detection approach of using normalized RGB has also been used in face detection research [6]. Fig. 4 shows the appearance of the developed tracker service and its deployment on a network node listed on the control panel of the DSS host runtime. Normalized RGB is a representation easily obtained from the RGB values by the following simple normalization equations:

r = R / (R + G + B)     (1)
g = G / (R + G + B)     (2)
b = B / (R + G + B)     (3)

As the sum of the three normalized components is known (r + g + b = 1), the third component holds no significant information and can be omitted, reducing the space dimensionality. The remaining components are also called "pure colors", since the normalization diminishes the dependence of r and g on the brightness of the source RGB color. A remarkable property of this representation is that, for matte surfaces and ignoring ambient light, normalized RGB is invariant to changes of surface orientation relative to the light source [7]. This advantage, together with the simplicity of the transformation, helped this color space gain popularity [8], [9]. We can consider the normalized RGB values of each pixel as a 3D vector, and we can measure the correlation between the current pixel vector and a registered color vector by the dot product of vectors expressed in an orthonormal basis, which is related to their lengths and angle in Euclidean geometry. The following formula determines the angle between two nonzero vectors as a similarity measure of two pixels:

cos(θ) = (a · b) / (|a| |b|)     (4)

where |a| and |b| denote the lengths (magnitudes) of vectors a and b.
In a real implementation, we simply use cos(θ) as the similarity measure between two color vectors. Values of cos(θ) range from 1 to -1. If the two input vectors point in the same direction, the return value is 1. If the two
input vectors point in opposite directions, the return value is -1. If the two input vectors are at right angles, the return value is 0. The value therefore indicates how similar the two vectors are. To detect blob candidates, the proposed image processing service performs segmentation over the filtered image. To find a final region among the N largest segmented candidate regions, the service uses a shape validation based on simple geometric constraints. Fig. 5 shows the image processing diagram of the proposed color tracker service.
Fig. 5. Image Processing Diagram of Color Tracker
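The filtering step built on equations (1)-(4) can be sketched as follows. The similarity threshold of 0.99 and the function names are assumed tuning choices for illustration, not values from the paper.

```python
import math

def normalized_rgb(pixel):
    """Eqs (1)-(3): r = R/(R+G+B), g = G/(R+G+B), b = B/(R+G+B)."""
    s = sum(pixel) or 1  # guard against a fully black pixel
    return tuple(c / s for c in pixel)

def cos_similarity(a, b):
    """Eq (4): cos(theta) = a.b / (|a||b|), in [-1, 1]; 1 means the
    two colour vectors point in the same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def matches(pixel, registered, threshold=0.99):
    """Keep a pixel when its pure-colour vector is close enough to the
    registered colour vector (threshold is an assumed tuning value)."""
    return cos_similarity(normalized_rgb(pixel),
                          normalized_rgb(registered)) >= threshold
```

Applying `matches` over every pixel yields the filtered image on which the segmentation step then finds blob candidates.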
This service performs color object detection, and other services can get the detection results by subscribing to the color tracker. The color tracker can be deployed either on any computer node in a network environment or on the robot itself. While the color tracker is running, the service provides several types of notifications, which send the tracker's detection results to other services. In addition, this service performs simplified face and hand gesture detection. To detect a face, it uses a skin image filtered by a predefined skin color model. After obtaining the skin image, the service performs segmentation over it. To find the face region among the segmented face candidate regions, the service uses an ellipse validation and two simple geometric constraints: first, the face lies above the color object; second, the face should be in the upper image plane. To detect hand gestures, the service uses geometric information over the skin image and a foreground image, together with the results of the detected color object and face. The foreground image is calculated by subtracting the current image from the background image; the background image is grabbed when no motion is detected in the difference image between the current and previous camera frames for a specified time. These simplified face and hand gesture detections are useful in developing HRI applications.
4
Following Service
In this research, an object following service has also been developed as an orchestration service, that is, a coordination service for a robot that can follow a user wearing a colored shirt or follow a given road track autonomously. The follower service was originally designed for a mobile robot with a two-wheel differential drive, front and rear contact sensors, a forward-facing laser range finder (LRF), and a webcam. Later, we modified the follower service so that it can also be applied to a legged robot and to a car with a steering wheel. The service can be used either with a real robot or with a simulated robot; this is achieved through robot configuration files, called manifests, prepared for choosing one robot among the different robot platforms supported by MSRDS itself or by third parties. The follower service orchestrates several partner services: the drive service used for robot movements, the contact sensor service implementing the robot bumper, the LRF service for obstacle avoidance, and the Simple Vision service, which performs color object tracking.
5
Experimental Results
In order to verify the feasibility and performance of the developed color tracker service in a network environment, we conducted the following experiments with a mobile robot, a 4-legged robot, and a simulated car navigation environment in the MSRDS simulator. In the experiments, the proposed color tracker service detected color objects under varying lighting conditions and complex backgrounds, from live frames captured by a webcam on a mobile robot, at a high detection rate of over 90%. The face detection rate using a webcam on a real robot was 80%. This rate is lower than that of the well-known real-time face detection method [10]; however, the computation time of this vision service is much shorter than that of other existing methods. In the first experiment, the developed color tracker and following services were successfully demonstrated using a mobile robot called "X-Bot", a conventional cleaning robot ("iClebo" from Yujin Robot) modified to carry a laptop computer on top [11]. We also applied the color tracker to a 4-legged robot called "Genibo" developed by DASA robot [12]. The developed services successfully follow a human and a color ball, as shown in Fig. 6 and Fig. 7.
Fig. 6. Experiments with a mobile robot, X-Bot
Fig. 7. Experiments with a 4-legged robot, Genibo
Fig. 8. Microsoft RoboChamps (left) and its Simulated urban environment (right)
Fig. 9. Screen Shot of Simulated Car Navigation using Color Tracker in Urban Challenge
In addition, we applied the developed color tracker and following services to simulated car navigation, using the Urban Challenge environment of Microsoft RoboChamps as the simulation environment. This challenge was sponsored by KIA Motors and held in both Korea and the US in 2008. In the challenge, a developer chooses one of five simulated robot cars, which resemble real-world KIA car models, and programs the car to navigate a route in a rich 3D urban environment. A number of sensors are attached to the car to complete the challenge, including contact sensors (bumpers), two webcams, a GPS, and a laser range finder. The route requires navigating to all of the KIA building checkpoints, and specific rules are checked by a referee dashboard program, as shown in Fig. 8. Using the proposed color tracker and following services, we successfully completed the challenge: the simulated car passed the given route using only a vision-guided simple navigation logic, as shown in Fig. 9.
6
Conclusion
This paper proposed a new approach using network service programming for a robot to perform vision based tracking both in a real and in a simulated environment. The color tracker, a network service for a real robot and for simulated car navigation, can run in a remote network node and uses simple, effective image processing, namely normalized RGB correlation and segmentation, for object and face detection. The proposed color tracker successfully performs color object, face, and hand gesture detection in a network environment, so that a real robot or a simulated car can smoothly follow a user wearing a colored shirt or a given route in a simulated urban environment. Finally, the feasibility and effectiveness of the proposed tracker were successfully demonstrated. The developed Simple Vision and follower services are also included as technology samples with full source code in MSRDS. All services were written in C# using MSRDS and the .NET framework. Acknowledgments. This work was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science and Technology (MEST) (2011-0013776).
References
1. Schlegel, C., Illmann, J., Jaberg, H., Schuster, M.: Vision Based Person Tracking with a Mobile Robot. In: Ninth British Machine Vision Conference, BMVC 1998, Southampton, pp. 418–427 (1998)
2. Bieber, G., Carpenter, J.: Introduction to Service-Oriented Programming. Motorola ISD, http://www.openwings.org
3. Jones, S.: Enterprise SOA Adoption Strategies. InfoQ (May 17, 2006) ISBN 978-1-84728398-6
4. Almeida, O., Helander, J., Nielsen, H., Khantal, N.: Connecting Sensors and Robots through the Internet by Integrating Microsoft Robotics Studio and Embedded Web Services. In: Proceedings of the IADIS International Conference (2007)
5. Microsoft Robotics Developer Center, http://msdn.microsoft.com/robotics, and .NET Framework Developer Center, http://msdn.microsoft.com/netframework
6. Soetedjo, A., Yamada, K.: Skin Color Segmentation Using Coarse-to-Fine Region on Normalized RGB Chromaticity Diagram for Face Detection. IEICE Transactions on Information and Systems E91-D(10), 2493–2502 (2008)
7. Skarbek, W., Koschan, A.: Colour Image Segmentation – A Survey. Tech. rep., Institute for Technical Informatics, Technical University of Berlin (October 1994)
8. Brown, D., Craw, I., Lewthwaite, J.: A SOM Based Approach to Skin Detection with Application in Real Time Systems. In: Proc. of the British Machine Vision Conference (2001)
9. Soriano, M., Huovinen, S., Martinkauppi, B., Laaksonen, M.: Skin Detection in Video under Changing Illumination Conditions. In: Proc. of 15th International Conference on Pattern Recognition, vol. 1, pp. 839–842 (2000)
10. Viola, P., Jones, M.J.: Robust Real-Time Face Detection. International Journal of Computer Vision 54(2), 137–154 (2004)
11. Yujinrobot iClebo, http://iclebo.com/english
12. Dasarobot Genibo, http://www.genibo.com
Mobile Robot Control Using Smart Phone and Its Performance Evaluation Yong-Ho Seo, Seong-Sin Kwak, and Tae-Kyu Yang Department of Intelligent Robot Engineering, Mokwon University, Mokwon Gil 21, Seo-gu, Daejon, Republic of Korea {yhseo,skwak,tkyang}@mokwon.ac.kr
Abstract. Smart phones that support mobile computing environments have recently become popular. New robot applications using smart phones are also an interesting and feasible research topic, in step with the fast growth of the mobile internet environment. In this study, we propose a remote control method for a mobile robot using a smart phone running Windows Mobile. We developed the proposed method using Microsoft's .NET Compact Framework and Bluetooth data communication, to support various OS platforms from Windows to CE kernel based embedded OSes. In experiments, we evaluated the feasibility and effectiveness of the proposed method through remote control of a mobile robot using the G-sensor of a smart phone, a line tracing robot application, and its performance evaluation. Keywords: Smart Phone, Mobile Robot, Remote Control, Bluetooth Data Communication.
1 Introduction
Smart phone based commercial markets have formed widely and are becoming more popular these days, due to recent advances in related technology and drastically broadened mobile phone markets [1]. These smart phone markets include newly advanced, miniaturized, and stabilized mobile CPUs, flash memory, displays, embedded OSs, and other related markets; the smart phone market is therefore called an integration of new-generation mobile phone markets [2]. A smartphone is a mobile phone that offers PC-like rich functionality on a smart phone OS, which is the major feature distinguishing a smart phone from previous phones. Among the many kinds of smart phone OS, Microsoft's Windows Mobile, Apple's iOS, and Google's Android are the most widely used. Using these OSs, users can easily handle a smart phone's various functions despite its complex development environment. Smart phones also go beyond previous mobile phones: they are equipped with auxiliary hardware such as a digital camera, gravity sensor, Bluetooth, wireless networking, and a touch screen. Thanks to this hardware support, many new kinds of content that were not available on previous mobile phones are now available, so many commercial companies are trying to get involved in this new content market. These hardware technologies are still evolving fast even at this moment [3]. T.-h. Kim et al. (Eds.): ACN 2011, CCIS 199, pp. 362–369, 2011. © Springer-Verlag Berlin Heidelberg 2011
In this paper, adopting these advanced mobile computing technologies and widely used smart phones, we propose a new smart phone based robot application that succeeds previous PC-based robot applications. Among possible robot applications, a mobile robot control system based on smart phone wireless control is developed and its effectiveness demonstrated. For remote control of the mobile robot, a Windows Mobile 6.1 based smart phone and our own mobile robot, equipped with a one-chip microcontroller, multiple infrared sensors, and Bluetooth, are used. To retain portability across varied Windows based smart phone OS environments, the .Net framework and Bluetooth are used in the software implementation. To develop the smart phone user interface and handle the gravity sensor efficiently, the Windows Mobile SDK and the corresponding smart phone's SDK are used. In the experiments, the effectiveness of the proposed system is verified through performance evaluation of the smart phone's gravity-sensor-based remote control and remote line tracing control of the mobile robot. The differences between wired and Bluetooth based remote control, and between PC based and smart phone based control performance, are also analyzed, and further research is considered.
2 Hardware Configuration for Smart Phone Based Mobile Robot Control System
The mobile robot is configured with a microcontroller, infrared (IR) sensors, Bluetooth, an LCD, DC motors, and motor drive circuits. Six IR sensors detect obstacles on the front, left, right, and bottom sides, as shown in Fig. 1; the bottom-facing IR sensors also detect the existence of the floor to prevent the robot from falling. Measurements from the IR sensors are transmitted over Bluetooth to the smart phone, which controls the robot based on the received IR measurements.
Fig. 1. Hardware Configuration of Smart Phone based Mobile Robot
On the mobile robot, the microcontroller continually acquires IR sensor measurements from an analog-to-digital converter (ADC) and sends them over a serial port to the Bluetooth module. The firmware on the robot's microcontroller is also programmed to receive motor drive commands from the smart phone. Table 1 shows the hardware specifications of the mobile robot and the smart phone used in this paper.
Table 1. Hardware Specification of Mobile Robot and Smart Phone

H/W Item          Feature
Microcontroller   ATMEL ATMega128
Motor             DC Geared Motor x 2EA
Sensor            IR Floor Sensor x 3EA
Bluetooth         SPP / BT2.0
Smart Phone       Samsung SCH-M490 (T-Omnia)
Smart Phone OS    Windows Mobile 6.1
Dev. Tool         .NET Compact Framework 3.5 with C# Language
Comm. Speed       19200 bps / 57600 bps

3 Software Configuration for Smart Phone Based Mobile Robot Control System

3.1 .Net Compact Framework
.Net Compact Framework is a foundation library for Windows Mobile and Windows Embedded CE devices. As a subset of the .Net Framework, .Net CF consists of about 30% of the overall .Net Framework class libraries and an optimized Common Language Runtime (CLR), and it plays roles similar to Windows Communication Foundation and Windows Forms [4]. .Net Compact Framework provides functions and classes for mobile development, along with its own exclusive classes. Developers accustomed to the Windows development environment can therefore move to mobile development without additional training in mobile or embedded system programming, or any special knowledge to set up the development environment. Fig. 2 shows the latest .Net Compact Framework based smart phone development environment, named "WP7", which includes various phone application features with sensors, Silverlight, and XNA.
Fig. 2. .Net Compact Framework based Smart Phone Development Environment, WP7
3.2 Software Configuration
The software configuration of the proposed system is shown in Fig. 3. The software of the mobile robot consists of AVR C compiler based firmware, which
transmits the robot's sensor measurements to the smart phone and receives motor commands from it. The software on the smart phone consists of a C# based mobile application built on the .Net Compact Framework, which drives the robot based on the sensor information received from the robot.
Fig. 3. Software Configuration of Smart Phone based Mobile Robot
Fig. 4. Data Flow between Smart Phone and Mobile Robot
The developed software provides a user interface with various control mode options, allowing a user to select a specific mode for the mobile robot, such as line tracer mode or gravity steering mode.
While the mobile robot moves according to the mobile mode selected on the smart phone, it collects information about the surrounding environment through its IR sensors and transmits that information to the smart phone via Bluetooth. The smart phone then analyzes the data from the robot, generates a new motor control command, and transmits the command back to the mobile robot. Fig. 4 shows the data flow between the smart phone and the mobile robot software in detail. The mobile robot also moves smoothly by adopting velocity control of the DC motors using PWM pulses. The smart phone application is developed to control the robot according to the selected mobile mode after connecting to the mobile robot through Bluetooth.
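The loop described above (receive IR readings, compute a motor command, send it back) can be sketched as follows. This is a hypothetical illustration in Python, not the paper's C# implementation; the packet format, sensor ordering, threshold, and control policy are all assumptions.

```python
# Hypothetical sketch of the smart-phone-side control loop.
# Packet layout (one byte per IR sensor) and the policy below are
# assumptions, not taken from the paper.
def parse_ir_packet(packet: bytes):
    """Decode a 6-sensor IR measurement packet (one byte per sensor)."""
    if len(packet) != 6:
        raise ValueError("expected 6 IR readings")
    return list(packet)

def motor_command(ir):
    """Toy obstacle-avoidance policy: stop when the front reading is
    large, otherwise slow the wheel on the side nearer an obstacle."""
    front, left, right = ir[0], ir[1], ir[2]
    if front > 200:          # obstacle close in front -> stop
        return (0, 0)
    base = 100               # assumed nominal wheel speed
    return (base - left // 4, base - right // 4)

readings = parse_ir_packet(bytes([50, 40, 80, 10, 10, 10]))
cmd = motor_command(readings)
```

In the real system this command would be serialized and written back over the Bluetooth serial connection each cycle.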
4 Experimental Results
The user interface of the smart phone application for mobile robot control is shown in Fig. 5. This UI was developed using C# and .Net Compact Framework 3.5. Various buttons are placed to manually control the robot and to make it easy to connect over Bluetooth, select which mobile mode to run, and check the current IR sensor measurements. The developed robot software runs on various smart phones and Windows CE based embedded devices.
Fig. 5. User Interface of Smart Phone Application
4.1 Gravity Sensor Steering
In gravity steering mode, the user can control the mobile robot as if driving a car with a steering wheel. In this mode, the smart phone simulates a physical steering wheel by estimating its current pose using the 3-axis gravity sensor. The user can also feel the speed of the mobile robot through vibration of the smart phone, driven by its embedded vibration motor [5]. Fig. 6 shows the axes of the gravity sensor and a plot of its data when rotational movement about the x-axis occurs.
Fig. 6. Axis of Gravity Sensor and its Data Plot
To acquire the gravity sensor measurements and drive the vibration motor, the publicly available Samsung Windows Mobile SDK 1.2.1 is used. Using the Samsung SDK, one can easily develop software that uses the various functions of the smart phone [6]. (1) Equation 1 is used to control the velocity of the mobile robot's motors. Because the output of the gravity sensor from the Samsung Mobile SDK ranges from -67 to 67, we normalize the input by the value 134. The constant k is the maximum motor velocity. To express the angular velocity of the mobile robot, the 0 to 90 degree range of the sine function is used. The value x is the inclination measured along the sensor's x-axis. The values L and R correspond to the left and right motor velocities, respectively. To move the robot at a constant velocity, navigation functions such as forward acceleration and immediate left and right turns are implemented. Fig. 7 shows an experiment with the gravity steering mode using the above equation and navigation functions.
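Since Equation 1 itself is not reproduced in the text, the following Python sketch shows one plausible reading of the description (input normalized by 134, the 0 to 90 degree range of the sine function, k as maximum motor velocity). The exact mapping in the paper may differ; the value of K here is an assumption.

```python
import math

K = 255  # assumed maximum motor velocity; not given in the paper

def steer(x):
    """Map the x-axis inclination (SDK output range -67..67) to
    (L, R) motor velocities, following the verbal description of
    Equation 1: normalize by 134, then use 0..90 degrees of sine."""
    t = (x + 67) / 134.0                       # -67..67 -> 0..1
    left = K * math.sin(math.radians(90 * t))
    right = K * math.sin(math.radians(90 * (1 - t)))
    return left, right
```

Tilting fully to one side (x = 67) drives the left motor at K and stops the right motor, producing a turn; a level phone (x = 0) drives both motors equally.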
Fig. 7. Experiment of Gravity Steering Mode
4.2 Remote Line Tracer
In line tracer mode, the line tracing algorithm runs on the smart phone using the measurements of the three floor-facing IR sensors received from the mobile robot. The navigation functions for the line tracing mode are forward, left turn, right turn, acceleration, immediate left and right turns, and setting the initial direction for line tracing. Fig. 8 shows an experiment with the remote line tracer algorithm, which successfully tracked the given circular line.
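A minimal decision rule for such a three-sensor line tracer might look like the following Python sketch. The threshold and the sensor semantics (higher reading = line detected) are assumptions, since the paper does not specify them.

```python
def line_tracer_step(left, center, right, threshold=128):
    """Decide a drive action from three floor-facing IR readings.
    Assumption: a reading above the threshold means the line is
    under that sensor."""
    on = [v > threshold for v in (left, center, right)]
    if on == [False, True, False]:
        return "forward"       # line centered: go straight
    if on[0]:
        return "turn_left"     # line drifted left: steer left
    if on[2]:
        return "turn_right"    # line drifted right: steer right
    return "search"            # line lost: rotate until reacquired
```

On the smart phone, such a step function would be evaluated on every received sensor packet and the resulting action translated into a motor command.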
Fig. 8. Experiment of Remote Line Tracer Mode
4.3 Performance Evaluation of Remote Control
We evaluated the performance of remote control with a smart phone by measuring the wireless data transmission latency. Unlike traditional PC based robot control, to evaluate the performance of the wireless smart phone based robot control system, the average wired/wireless data transmission latency was measured at each communication speed. In the experiment, a 10 byte data packet was sent and received 20 times in a loopback manner: the time elapsed while the smart phone sends data to the robot and the robot resends the received data back to the smart phone was measured at 5 meter and 10 meter distances [7]. The results are described in Table 2 and Fig. 9.
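The loopback measurement procedure can be sketched as below. The send/recv callables stand in for the Bluetooth serial port API, which is not shown in the paper; the 10-byte payload and 20 trials follow the experiment description.

```python
import time

def measure_loopback(send, recv, payload=b"0123456789", trials=20):
    """Average round-trip latency in ms over `trials` echoes of a
    10-byte packet: send it, wait for the robot to echo it back,
    and time the full round trip."""
    total = 0.0
    for _ in range(trials):
        t0 = time.perf_counter()
        send(payload)
        echoed = recv(len(payload))
        if echoed != payload:
            raise IOError("loopback mismatch")
        total += time.perf_counter() - t0
    return 1000.0 * total / trials
```

With a real Bluetooth serial port, send/recv would be the port's write and blocking read calls.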
Fig. 9. Data Latency according to Bluetooth Comm. Speeds and Distances (panels: 19,200 bps and 57,600 bps)
Figure 9 shows the latency plot for two different communication speeds and three different distances over the Bluetooth connection. Table 2 shows the average data transmission latency according to the Bluetooth communication speed.

Table 2. Data Latency according to Bluetooth Communication Speeds

Comm. Speed of Bluetooth (bps)   Average data transmission latency (mSec)
                                 Near    5M      10M
19,200                           63.2    67.9    70.9
57,600                           33.6    36.8    58.9
5 Conclusion
In this paper, we described the remote control of a mobile robot over Bluetooth using smart phone based wireless robot control software. We also conducted several remote robot experiments and a performance evaluation to verify the proposed control environment. In the experiments, the gravity steering and line tracing modes were tested. The control algorithms for the mobile modes were implemented on the smart phone side, and the smart phone controls the mobile robot by sending motor control signals over the wireless connection. To validate this wireless control system, the data transmission latency was measured at each communication speed and at each distance between the robot and the smart phone. The significance of the proposed smart phone based remote robot control system is that it is a new type of robot application, successfully combining the traditional PC based mobile control method with the smart phone's distinct features, especially mobility. Given the rapid advance of smart phones and their various embedded sensors, smart phone control systems may soon replace traditional PC based controls. In the near future, it is also expected that many intelligent service robots will widen their service variability by connecting to smart phones, so that they can use the smart phones' various features and content. Acknowledgments. This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science and Technology (KRF 2011-0013776).
References 1. Slawsby, A., Giusto, R., Burden, K., Linsalata, D.: Worldwide Smart Handheld Device 2004-2008 Forecast and Analysis. IDC (2004) 2. Deans, C.: Global Trends And Issues For Mobile/Wireless Commerce. In: Proc. of the Americas Conference on Information Systems (AMCIS), Paper 327 (2002) 3. Lin, F., Ye, W.: Operating System Battle in the Ecosystem of Smartphone Industry. In: Proc. of the International Symposium on Information Engineering and Electronic Commerce, IEEC 2009, pp. 617–621. IEEE Computer Society, Washington (2009) 4. Microsoft .NET Framework Developer Center, http://msdn.microsoft.com/netframework 5. de Souza, M., Carvalho, D.D.B., Barth, P., Ramos, J.V., Comunello, E., von Wangenheim, A.: Using Acceleration Data from Smartphones to Interact with 3D Medical Data. In: Proc. of the SIBGRAPI Conference on Graphics, Patterns and Images, SIBGRAPI 2010, Gramado, pp. 339–345 (2010) 6. Samsung Mobile Innovator, http://innovator.samsungmobile.com 7. Golmie, N., Van Dyck, R.E., Soltanian, A.: Interference of bluetooth and IEEE 802.11: simulation modeling and performance evaluation. In: Proc. of the 4th ACM International Workshop on Modeling, Analysis and Simulation of Wireless and Mobile Systems (MSWIM 2001), pp. 11–18. ACM, New York (2001)
Study on Data Transmission Using MediaLB Communication in Vehicle Network Chang-Young Kim1 and Jong-Wook Jang2 1
Convergence of IT Devices Institute Busan(CIDI) 2 DONG-EUI University 995 Eomgwangno, Busanjin-gu, Busan, 614-714, Korea {hapgang,jwjang}@deu.ac.kr
Abstract. As demand for infotainment systems grows, the multimedia networking technology for automobiles called MOST (Media Oriented Systems Transport) has been actively applied in the automobile industry. Existing MOST devices have relied on the I2C and I2S communication modes for internal bus communication, but the widening bandwidth of the MOST network and additional installations in a single device require a different bus communication: MediaLB (Media Local Bus). The majority of existing MOST devices use the I2S or I2C bus communication mode to process synchronous data (stream data) and asynchronous data (packet data). Hence, this study examines the control channel and the transmission mechanisms of synchronous and asynchronous data using the MediaLB mode, an advance over the I2S/I2C bus communication modes currently in use in MOST networks, suggests an efficient way to transmit asynchronous data, and analyzes its performance. Keywords: Asynchronous Data Transmission, MOST (Media Oriented Systems Transport), MediaLB (Media Local Bus).
1 Introduction
The MOST network is an optimized communication technology capable of controlling a single transmission medium, which transmits high-quality audio, video, and packet data all at the same time, on a real-time basis [1]. At present, the standardized 25 Mbps and 50 Mbps versions are already in service in vehicles across Europe, Korea, and Japan, and 150 Mbps will enter service around 2012, once it is standardized. MOST25 is known for outstanding reliability and relatively easy realization, driven by transmission of audio data and compressed video data. It composes multimedia devices such as GPS (Global Positioning System), navigation, and video displays into a single ring topology network. It supports connection of up to 64 devices, plug-and-play, and plastic optical fiber, which has high transmission speed and excellent EMI (Electro Magnetic Interference) characteristics and is used as the transmission medium in the MOST physical layer. T.-h. Kim et al. (Eds.): ACN 2011, CCIS 199, pp. 370–379, 2011. © Springer-Verlag Berlin Heidelberg 2011 This study uses the MediaLB communication method, which is relatively less dependent on hardware among the diverse methods used to transmit data through the
MOST network to suggest an efficient way of transmitting asynchronous data and analyze its performance.
2 MOST Network Frame Structure
A MOST25 frame is 512 bits (64 bytes) long, and the MOST25 network frame consists of a preamble, boundary descriptor, synchronous data area (stream data), asynchronous data area (packet data), control channel, frame control, and parity bit. Details of each component are described as follows [2] (Fig. 1).
Fig. 1. MOST25 Network Frame Structure
2.1 Synchronous Data Area
The synchronous data area is used for real-time transmission and reception of audio and video data in the MOST network. The allocated channels have to be connected through the connection master prior to data transmission, and control channels are used for connection control. A node transmitting synchronous messages, i.e., data that require regular transmission such as audio, sends data regularly, and the receiving node reads the data regularly. The bandwidth can be adjusted in the range of 28 bytes to 60 bytes according to the value of the boundary descriptor in the MOST frame's header field.

2.2 Asynchronous Data Area
The asynchronous data area transmits data that are generated irregularly, such as Internet and video data; its boundary-descriptor-dependent bandwidth ranges between 0 and 36 bytes. The length of the data field supported by protocol is either the 48 byte data link layer protocol generally used for control via the I2C bus, or the 1014 byte data link layer protocol used with MediaLB.

2.3 Control Channel
The control channel, which transmits command, status, and diagnosis information to manage the MOST network, is the domain for control of the MOST bus and is used for data
transmission that controls the hardware connected to the MOST bus nodes. These transmissions include command messages associated with play and stop of devices connected to specific nodes. The control channel is distributed across 16 frames (one block) to ensure that the control channel does not take up too much bandwidth per frame; each frame carries 2 bytes of the control channel.
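The bandwidth arithmetic of the MOST25 data field described in Sections 2.1-2.3 can be summarized in a small sketch. The 60 byte total, the 0-36 byte asynchronous range, and the 2 bytes of control channel per frame are taken from the text (the text gives both 24 and 28 bytes as the synchronous minimum in different places; 24 is used here); the boundary descriptor encoding itself is not modeled.

```python
def most25_async_bytes(sync_bytes):
    """Bytes left for the asynchronous area of the 60-byte MOST25
    data field, given the synchronous area size selected via the
    boundary descriptor."""
    if not 24 <= sync_bytes <= 60:
        raise ValueError("synchronous area must be 24..60 bytes")
    return 60 - sync_bytes

# Control channel: 2 bytes per frame, spread over one 16-frame block.
CONTROL_BYTES_PER_BLOCK = 2 * 16  # 32 bytes of control data per block
```

So a minimal synchronous allocation of 24 bytes leaves the full 36 bytes for packet data, while a maximal allocation of 60 bytes leaves none.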
3 MediaLB and NetServices

3.1 Media Local Bus
MediaLB (Media Local Bus) differs from the serial interfaces generally dedicated to transmitting most multimedia data and other data needed for application control. It is a synchronous medium that carries data to MOST, is used on printed circuit boards, and saves space and pin count by connecting a number of integrated parts. Fig. 2 shows an example of the MediaLB 3 pin communication mode [3].
Fig. 2. Example of MediaLB 3 Pin Communication Mode
MediaLB is composed of either 3 pins or 5 pins; the 5 pin version is for parts that can only load unidirectional pin circuits. Its need for more space and circuitry and its lower maximum speed discourage its use. Table 1 shows the MediaLB 3 pin signals, which are the most widely used. Table 1. MediaLB 3 Pin Signals
Study on Data Transmission Using MediaLB Communication in Vehicle Network
373
MediaLB is designed to be compatible with 256×Fs (12.288 MHz at a 48 kHz frame rate), 512×Fs (24.576 MHz), and 1024×Fs (49.152 MHz), and has a controller on the MediaLB bus that handles clock generation. MediaLB devices operate the MediaLB interface in accordance with the designated clocks. The INIC produces and processes MLBCLK when it is not synchronized with the MOST network. MediaLB has transmission capacities that depend on the clock speed, as shown in Table 2 [4][6]. Table 2. The number of channels affected by clock speed
3.2 NetServices API
NetServices, the standard protocol stack for MOST, is based on the INC (Network Interface Controller) or INIC (Intelligent Network Interface Controller). It provides a programming interface for applications, including function blocks, and serves as a module to transmit packet data in the asynchronous domain and to control the network via the control channel. It includes the mechanisms and routines to operate and manage networks and guarantees the network's dynamic behavior. NetServices is realized in the EHC (External Host Controller); while synchronous channel control is part of the network service, asynchronous data transmission is not. Fig. 3 describes the basic structure of MOST NetServices [5].
Fig. 3. Basic Structure of MOST NetServices
The NetServices API provides the basic transmission mechanism between the MOST network and applications, saves time and cost in MOST device development, and delivers strong compatibility between devices. Layer 1 of the NetServices API provides the transmission mechanism to access the MOST network, while Layer 2, which is a core combination of MOST devices, is responsible for communication with the sub-units.
4 MOST Test Environment and Performance Test
This study designed and built a MOST board using MediaLB as the bus communication between the MOST network interface controller and the EHC. It then set up a test environment for the MOST network to measure device performance. A Fujitsu MB8601 board based on Windows CE 5.0 was used as a reference to design and build the board.

4.1 MediaLB Communication Test Environment
Of the two MOST boards made to test MediaLB communication, one was set as Tx (master node) and the other as Rx (slave node); both were then connected to a MOST OptoLyzer (slave node) to form three nodes in total. The Tx and Rx devices were set to transmit random packet data via the MOST network's asynchronous channel so that the transmission speed could be measured at each node. The MOST network composed of the two MOST boards and one OptoLyzer is shown in Fig. 4: 00 is Tx (master node), 01 is Rx (slave node), and 02 is the OptoLyzer (slave node).
Fig. 4. MOST Communication Network Test Configuration
Fig. 5 is a block diagram showing how MediaLB communication, the device's internal bus communication mode in the MOST network, is carried out. MediaLB communicates between the EHC (External Host Controller) and the INIC (Intelligent Network Interface Controller). The LLD driver and MOST NetServices V2.x were ported to the EHC to run the application program, while Tx and Rx were set up to report the packet data transmission speed, the number of valid packets processed at Rx, and information on buffer requests. The test environment also enabled monitoring of the actual data flow, the transmission speed at the Rx node, and other information via the OptoLyzer.
Fig. 5. MediaLB communication Block Diagram
4.2 Test Result
A number of environmental variables were set and modified to test the transmission speed of MediaLB communication. The first variable is the MediaLB clock, which is defined as 256Fs, 512Fs, and 1024Fs in the MOST network; the test applied only 256Fs and 512Fs, since only these two are supported in MOST25. The second variable is the asynchronous socket bandwidth, which ranges from 4 to 36 bytes in increments of 4 bytes. The maximum value usable for the asynchronous data area is 36 bytes out of the 60 byte MOST data field, after subtracting 24 bytes, the minimum size of the synchronous data area. This study set 32 bytes, excluding 4 bytes of the asynchronous data socket for Rx, as the maximum value. The third variable is a delay of 0 to 5 ms introduced when calling the transmission function. This lowers the risk of buffer request failure by giving spare time to process the previous buffer before the next packet transmission; as confirmed in the test, it affects the time spent on buffer processing, which is also related to buffer size and processing speed. The fourth variable is the handling of retries when a buffer request fails. The NetServices API, which provides this function, reports when a new buffer request can be made after a failure, preventing packet loss caused by transmission failure. The fifth variable has to do with buffer processing: values ranging from 4 to 128 bytes were used for MBM_MEM_UNIT_SIZE, and MBM_MEM_UNIT_NUM was tested up to 65535. Both variables were tested on the basis of the values defined in MBM (Message Buffer Management) in MOST NetServices Layer I.
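The third and fourth variables (inter-call delay and retrying failed buffer requests) amount to the retry loop sketched below in Python. Here try_send stands in for the NetServices buffer-request call, whose real signature is not shown in the paper; it is assumed to return True on success.

```python
import time

def send_with_retry(try_send, packet, delay_ms=5, max_retries=100):
    """Send one packet; when the buffer request fails, wait delay_ms
    and try again, mirroring the delay/retry test variables above."""
    for _ in range(max_retries):
        if try_send(packet):
            return True
        time.sleep(delay_ms / 1000.0)  # give the previous buffer time
    return False                       # gave up: caller handles loss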
4.3 MediaLB Clock 256Fs
The MediaLB clock value was set to 256Fs and the MBM (Message Buffer Management) buffer processing unit to 4 bytes to measure the packet data transmission speed at the Tx (master node). The delay value was set to 0 when calling the transmission function. The buffer request failure rate when not retrying failed buffer requests was extremely high
and the actual number of valid packets processed was very low, to the extent that no significant interpretation could be made. These data values hold no significant meaning for data transmission, because new data continue to pile up before any buffer is processed. Table 3. Transmission Time in Tx Node
When a delay in the range of 1 ms to 5 ms was set during the transmission function call, under the same conditions for retried buffer requests, zero data loss and the fastest transmission time were observed with a 5 ms delay. Transmission time could also be shortened by keeping the number of buffer requests to a minimum, thereby reducing the delay caused by processing new buffer requests. Table 3 shows the transmission speed of the Tx node under the conditions stated above, and Fig. 6 presents the same results as a graph.

Fig. 6. Graph of Transmission Time in Tx node (data transmitted, in Mbyte, versus time in ms)
Test results at the Rx node were obtained through the OptoLyzer (slave node), which can show hourly packet processing in a graph and check real-time overload status, data types, and new events. Values from 1 Mbyte to 10 Mbyte were set in sequence for comparison, with the test environment set equal to that of Tx. In Fig. 7, graph (a) shows 1 Mbyte of packet data, and graph (b) shows the events that occurred when transmitting 10 Mbyte of packet data and the transmission time. Both indicate robust transmission without any data loss.
It takes around 11 seconds for a 1 Mbyte transmission, which indicates a slight delay compared to the Tx node, around 2000 ms longer overall than Tx.
Fig. 7. Graph on Transmission at Rx node: (a) 1 Mbyte, (b) 10 Mbyte
4.4 MediaLB Clock 512Fs
The MediaLB clock was set to 512Fs (maximum) and the MBM (Message Buffer Management) buffer processing unit varied from 4 bytes to 128 bytes (maximum) in order to test the effect of the MediaLB clock on transmission speed. The MBM_MEM_UNIT_NUM value was varied from 512 to 65535 (maximum). In the first test, the MediaLB clock was set to 512Fs, while MBM_MEM_UNIT_SIZE and MBM_MEM_UNIT_NUM were set to 4 and 512, respectively, with retries on failed buffer requests. In this case, the Rx valid packet counts (measured at the OptoLyzer) showed accurate transmission, but the transmission time hardly differed from that at a MediaLB clock of 256Fs. Transmission time changed as MBM_MEM_UNIT_SIZE and MBM_MEM_UNIT_NUM increased, but there were also cases in which no change was observed under particular conditions that had reached critical values. In the second test, the MediaLB clock was set to 512Fs, while MBM_MEM_UNIT_SIZE and MBM_MEM_UNIT_NUM were set to 128 bytes and 512, respectively. This showed clearly improved performance compared with Fig. 6, as shown in Fig. 8. The study changed the conditions of the variables specified earlier to derive new values, but further increases in the values did not affect the end result, since most of them reached critical values at certain points. Finally, the MediaLB clock was set to 512Fs, with the maximum MBM_MEM_UNIT_SIZE of 128 bytes and the maximum MBM_MEM_UNIT_NUM of 65535. The asynchronous socket bandwidth was additionally set to 20 bytes while retrying failed buffer requests. The result obtained under these conditions was by far the best among all the tests performed with the different sets of variables and conditions (Fig. 9).
Fig. 8. Graph of Transmission Time in Tx node
Fig. 9. Graph of Transmission Time in Tx node(Socket-20)
5 Conclusion
This study implemented MediaLB communication and measured the transmission speed of asynchronous data via the MOST board. With the MediaLB clock set to 256Fs and the MBM (Message Buffer Management) buffer processing unit to 4 bytes in the first test environment, the transmission time was around 9199 ms when transmitting 1 Mbyte. Among all the tests conducted under different conditions and variables, a MediaLB clock of 512Fs, an MBM_MEM_UNIT_SIZE of 128 bytes, an MBM_MEM_UNIT_NUM of 65535, and an asynchronous socket bandwidth additionally set to 20 bytes, with retries on failed buffer requests, yielded the best result. The optimal result in the current test environment is a transmission time of 3842 ms when transmitting 1 Mbyte of packet data. As the test results indicate, transmission time is most affected by the asynchronous socket bandwidth. Furthermore, critical values appear under certain conditions, beyond which transmission speed does not change even when the values are increased.
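For context, the reported 1 Mbyte transmission times imply the following effective throughputs (assuming 1 Mbyte = 2^20 bytes, which the paper does not state explicitly).

```python
def throughput_mbps(payload_bytes, millis):
    """Payload bits per second, expressed in Mbit/s."""
    return payload_bytes * 8 / (millis / 1000.0) / 1e6

first = throughput_mbps(2**20, 9199)  # first test environment
best = throughput_mbps(2**20, 3842)   # best conditions found
```

The best configuration thus carries payload more than twice as fast as the first one, at roughly 2.2 Mbit/s versus 0.9 Mbit/s of effective packet-data throughput.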
In short, changing just one condition cannot raise the transmission speed to its optimum; all the conditions must be optimized together for efficient data transmission. The factors preventing further performance improvement in the test environment of this study are deemed to be buffer processing time, processing time in NetServices, bus speed, OS processing, and other complex issues. Hence, further in-depth studies of these areas are needed for higher transmission speed and efficiency of MediaLB communication in the MOST network, as well as tests of ways to use MHP (MOST High Protocol). Acknowledgment. This research was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MEST) (No. 2010CB012).
References 1. MOST Cooperation, MOST Specification 2.5, http://www.mostcooperation.com 2. Grzemba, A.: MOST - The Automotive Multimedia Network, p. 85. Franzis, Deggendorf (2008) 3. SMSC Media Local Bus Specification, Version 4.1 4. Grzemba, A.: MOST - The Automotive Multimedia Network, from MOST25 to MOST150, p. 210 (2011) 5. SMSC Multimedia and Control Networking Technology, MOST NetServices Layer I Wrapper for INIC, V3.0x 6. SMSC MediaLB Device Interface Macro OS62400 Advanced Product Data Sheet
Topology Configuration for Effective In-Ship Network Construction Mi-Jin Kim1, Jong-Wook Jang1, and Yun-sik Yu2 1
Department of Computer Engineering, Dong-Eui University 614714 Busan, Republic of Korea 2 Convergence of IT Devices Institute Busan(CIDI) Busanjin-gu, Busan 614-714, Korea [email protected], {jwjang,ysyu}@deu.ac.kr
Abstract. Recently, all areas of IT have been developing rapidly on land, and location-based services have become very important in everyday life. In ships, however, communication equipment and services are limited, leaving ships far removed from the information age. Currently, a ship is connected with the outside world via satellite while on a voyage, but because of high billing, only low-speed, low-bandwidth data transfer is possible. Because a shipwide LAN is not available, only some of the ship's crew can use the Internet, in limited spaces. Ships therefore require network construction. In this paper, to compose a basic IT environment meeting the information needs of crew and customers on ships up to cruise ships, we construct a LAN in a ship and propose a network design for the ship's network. Keywords: Ship, In-Ship Network, Cruise Ship, Cruise Industry, CAN, MOST, Bus Topology, Ring Topology, Star Topology.
1
Introduction
In Korea, high-value-added shipbuilding started in the 1990s with the investment of capital and the introduction of technology-intensive shipbuilding methods. Since 2000, Korea has ranked first in the shipbuilding sector, accounting for 35% or more of the worldwide ship market, and its ship exports and production continue to increase. Information technology (IT) has not been actively introduced in the shipbuilding industry, however, and IT convergence there remains at a very low level despite the high level of Korea's shipbuilding technology. The localization rate of the core embedded software is especially low. Although Korea is strong in the IT sector, its communications technology for ships is not yet sufficient for building a modern communications network for high-value-added ships, including cruise ships.

T.-h. Kim et al. (Eds.): ACN 2011, CCIS 199, pp. 380–392, 2011. © Springer-Verlag Berlin Heidelberg 2011

As convergence technology is currently the focus, the convergence of IT and
the shipbuilding industry is rapidly becoming an issue in the shipbuilding sector and its relevant equipment industry [1]. To promote IT convergence in shipbuilding, an in-ship network is increasingly being required for the evolution of the ship concept into a digital ship, the introduction of extra-large-scale ships such as SuperSeaCat, and the promotion of the cruise industry. For efficient and safe voyages, the devices in ships must be interconnected and integrated via a network to allow exchange, collection, and management of device data. As in the information service on land, the appropriate environment for meeting the information needs of sailors and passengers at sea must be created. In this study on IT convergence, a LAN was constructed in a ship and connected to a server via the CAN network, and the MOST network was constructed by topology to allow communication among cabins. Multimedia service was transferred to each cabin to analyze the network efficiency by topology. The network construction was based on the cabins of the cruise ship that was designated as the 21st century’s best sightseeing product by the WTO, to allow its application to large-scale ships.
2
Relevant Studies
2.1
CAN
The controller area network (CAN) is a serial network communication method that was first devised for the automotive industry; recently it has been applied to a wide range of other industries as well. CAN is a bus-type network realized with a two-strand twisted-pair wire. It forms a network by connecting embedded systems, providing resistance to external factors (noise) and high reliability with a very low communication error rate. A maximum of 110 ECUs can be connected to one network. It also has error detection and correction features for severely noisy environments. CAN has four frame types: the data, remote, error, and overload frames. Table 1 shows the function of each frame [2].

Table 1. CAN frame types

  Type             Function
  Data frame       Used to transfer data from the transmission node to the reception node
  Remote frame     Used to request data transfer from other nodes
  Error frame      Used to transfer error detection data from the reception node
  Overload frame   Generated when the data or remote frame must be delayed, or when an error occurs due to internal overload
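Of these, the data frame carries the payload; its standard 11-bit field layout, described in detail below, can be sketched in Python. This is an illustrative sketch only: the CRC here is a zero placeholder rather than the real 15-bit CAN polynomial, and bit stuffing is omitted.

```python
# Minimal sketch of the standard (11-bit identifier) CAN data frame layout.
# Field widths follow the description in the text; CRC is a placeholder.

def build_can_frame_bits(can_id: int, data: bytes) -> str:
    """Return the frame as a bit-string (bit stuffing omitted for clarity)."""
    assert can_id < 2**11 and len(data) <= 8
    sof = "0"                                    # start-of-frame: 1 dominant bit
    arbitration = f"{can_id:011b}" + "0"         # 11-bit identifier + RTR=0 (data frame)
    control = "00" + f"{len(data):04b}"          # 2 reserved bits + 4-bit DLC
    payload = "".join(f"{b:08b}" for b in data)  # up to 8 data bytes
    crc = "0" * 15 + "1"                         # 15-bit CRC (placeholder) + 1-bit delimiter
    ack = "11"                                   # 1-bit ACK slot + 1-bit ACK delimiter
    eof = "1" * 7                                # 7 recessive end-of-frame bits
    return sof + arbitration + control + payload + crc + ack + eof

frame = build_can_frame_bits(0x123, b"\x01\x02")
# 1 + 12 + 6 + 16 + 16 + 2 + 7 = 60 bits for a 2-byte payload
print(len(frame))
```

For a 2-byte payload this yields 60 bits before stuffing, with the 8-byte maximum giving 108 bits.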
The data frame of a standard CAN message consists of seven fields (Table 2). A maximum of 8 bytes of data can be transferred. The start-of-frame (SOF) bit indicates the start of the message frame. The arbitration field has an 11-bit identifier and a remote transmission request (RTR) bit: an RTR bit of 0 denotes a data frame, and an RTR bit of 1 denotes a remote request. The control field has six bits, consisting of two reserved bits and four data length code (DLC) bits. The data field carries up to 8 bytes of data to be transferred. The cyclic redundancy check (CRC) field has a 15-bit CRC code and a 1-bit delimiter, and is used to check for message errors. The acknowledge (ACK) field has two bits: a 1-bit ACK slot and a 1-bit ACK delimiter. The end-of-frame (EOF) field has 7 bits; a value of 1 for each bit indicates the end of the message [3].

Table 2. CAN data frame

  SOF | Arbitration Field | Control Field | Data Field (8 bytes) | CRC Field | ACK Field | EOF

2.2
MOST
Media-oriented systems transport (MOST) is a multimedia networking technology optimized for vehicles and other applications. High-quality audio, video, and packet data for vehicle multimedia services can be transferred at the same time, and a single transmission medium can be controlled in real time. This method can be widely applied to any sector that requires large capacity and high quality. A flexible plastic optical fiber (POF) is used for MOST. The MOST network has a ring form to ensure stable data transfer from one device to others. A maximum of 64 MOST devices can compose one network [4].
Fig. 1. Basic MOST network structure
In the MOST system, data move from one point to multiple points, and all devices share a system clock pulse. The MOST network basically has an integrated timing
master. If the timing master device transfers the data signal that contains the clock information, all devices that are connected to the MOST network prepare communications that are synchronized with the clock. The basic data transfer cycle is 44.1 kHz or 48 kHz, and other devices must send and receive data based on the same specifications. In the MOST network, 16 frames, which are divided into three areas (the synchronous frame, asynchronous frame, and control data areas), are combined in a block and transferred. The total size of the synchronous channel, which is suitable for the synchronous transfer of the video-audio stream, and of the asynchronous channel, which is not used for cyclic data transfer, as in TCP/IP, is 60 bytes. In the control channel, which is used to control MOST devices and networks, 19 bytes are transferred in the CSMA method, and a maximum of 2,756 messages can be transferred per second [5].
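The control-channel figure quoted above can be checked with simple arithmetic, under the simplifying assumption (ours, not stated in the specification excerpt) that one control message is carried per block of 16 frames at the 44.1 kHz frame rate:

```python
# Back-of-the-envelope check of the control message rate quoted above.
# Assumption: one control message per block of 16 frames at 44.1 kHz.

frame_rate_hz = 44_100          # basic data transfer cycle (44.1 kHz case)
frames_per_block = 16           # frames combined into one block

blocks_per_second = frame_rate_hz / frames_per_block
print(int(blocks_per_second))   # 2756, matching the ~2,756 messages/s figure
```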
Fig. 2. MOST transfer type
2.3
Cruise Industry
With continuous economic growth and income increase, more and more people are interested in cruise travels. Marc Mancini [6] defined a cruise as ‘travel using a ship,’ and Maria B. Lekakou [7], as ‘enjoying leisure on a ship that accommodates 100 or more people for one or more days.’ Park et al. [8] defined cruise travel in their paper as ‘safe ship travel for travelers seeking amusement, in which they can visit tourist attractions, including historical cities, harbors, resorts, and natural beauties, taking advantage of high-quality services and facilities for lodging, eating and drinking, and amusement.’ Cruise travel can be classified according to the location, travel range, and navigation type. In terms of the navigating location, cruise travel is divided into inland cruise
travel, in which the ship cruises through lakes and rivers, and marine cruise travel, in which the ship cruises along the coasts of tourist attractions. In terms of the travel range, cruise travels are divided into domestic cruise travels, in which the ship cruises within domestic territorial waters, and international cruise travels, in which the ship cruises overseas. In terms of the navigation type, cruise travel is divided into six types, including harbor cruise travel. In the sightseeing sector, the cruise industry has shown the greatest growth in the world over the last 30 years, alongside rapid economic development. Cruise demand exceeded 10 million passengers during that period, and the cruise industry is recognized as the future industry with the highest added value in the marine transportation sector. Accordingly, the cruise business is being promoted especially in countries that have a strong marine transportation capacity. With the continuous increase in the number of passengers, shipping companies' sales strategies, demand creation with diversified cruise products, and the rapid economic growth in Europe and Asia, the yearly average growth rate of the cruise industry has reached up to 8% since 1985, showing sufficient growth potential in terms of demand. Cruise demand can be divided into that of North America, that of Europe, and that of other regions: North America accounts for 68% of the total demand; Europe, 19%; and Asia, 7% [9]. OSC, a UK marine transportation consulting firm, has estimated the cruise demand by region until 2020, as shown in Fig. 3 [10].
Fig. 3. Cruise demand estimate by region (North America, Europe, Asia and others; unit: million)
3
System Configuration and Design
For the LAN configuration in the ship, the communication structure for one deck of the cruise ship and the communications room was first constructed. As shown in Fig. 4, the model was based on a 160,000-ton cruise ship that accommodates over 3,500 passengers. Of the 15 decks, 100 cabins were placed on 10
decks, 50 for each side. Network lines were installed via the server, router, and switch in the communications room, and were configured into the bus type in cabins. For multimedia service, the MOST network was applied to cabins, using optical cables.
Fig. 4. Network configuration on a deck
All decks were organized in three topologies, as shown in Fig. 5, to find the most efficient topology in terms of the multimedia service transfer rate.
Fig. 5. The three topology configurations (bus, ring, and star)
In each cabin, diverse devices were connected to the MOST network so that passengers can enjoy multimedia content from the server, including movies and games. With this network, the Internet can be used in each cabin, which had previously been possible only in the communications room or other special locations. Accordingly, the sailors' and passengers' information needs can be satisfied; this is a particular benefit for sailors, who must stay aboard the ship at all times.
4
System Implementation
The system was implemented using NS-2 and was configured based on a cruise ship model, but due to the limitations of the NS-2 program, the number of cabins was reduced. A 3D implementation was challenging, so a 2D system was configured, and the cabins were expressed as nodes. The system was configured with bus, ring, and star topologies. Under the planned scenario, node nos. 3 and 7 requested data from the server node at 0.3 and 0.5 s, respectively. Node no. 3 received packets until 1.5 s, and node no. 7 received packets until 1.8 s, to measure which topology would receive the most packets within the same time. It was then determined which topology would make MOST150 most efficient when its characteristics were implemented for each topology. The packets that flowed to node no. 3 are shown in black, and those that flowed to node no. 7 in blue. The experiments by topology are outlined below. 4.1
Bus Topology
The important considerations for LAN configuration are its shared-medium and contention characteristics, which differ from those of a point-to-point link. Because a network built only from point-to-point links cannot fully reflect the characteristics of a LAN, a dedicated LAN node type is needed. In NS-2, a LAN is created and configured through the make-lan method of the top-level OTcl Simulator class. As shown in Fig. 6, the total number of nodes is 12. Ten nodes, ranging from node no. 0 to node no. 9, are cabins; node no. 10, which is not shown, is used as the LAN node; and node no. 11 serves as the communications room's server. Fig. 7 shows the total packet amount that node nos. 3 and 7 received during the same time. 4.2
Ring Topology
As shown in Fig. 8, the total number of nodes is 10. Nine nodes, ranging from node no. 1 to node no. 9, are cabins, and node no. 0 serves as the communications room's server. Fig. 9 shows the total packet amount that node nos. 3 and 7 received within the prescribed time. 4.3
Star Topology
As shown in Fig. 10, the total number of nodes is 10. Nine nodes, ranging from node no. 1 to node no. 9, are cabins, and node no. 0 serves as the communications room's server. Fig. 11 shows the total packet amount that node nos. 3 and 7 received within the prescribed time.
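The scenario above can be paraphrased outside NS-2 as a simple constant-rate delivery model; the per-topology rates below are illustrative assumptions of ours, not the simulator's measured values:

```python
# Constant-rate stand-in for the NS-2 scenario: the server streams packets
# to two cabin nodes over their request windows. Delivery rates are
# hypothetical, chosen only to illustrate the measurement procedure.

def packets_received(rate_pps: float, t_start: float, t_end: float) -> int:
    """Packets collected between the request time and the cut-off time."""
    return int(rate_pps * (t_end - t_start))

# Hypothetical effective delivery rates (packets per second) per topology.
rates = {"bus": 2500, "ring": 1900, "star": 10_000}

for topo, rate in rates.items():
    n3 = packets_received(rate, 0.3, 1.5)   # node 3: request at 0.3 s, stop at 1.5 s
    n7 = packets_received(rate, 0.5, 1.8)   # node 7: request at 0.5 s, stop at 1.8 s
    print(f"{topo}: node3={n3}, node7={n7}")
```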
Fig. 6. Implementation of Bus Topology
Fig. 7. Packet amount of Bus Topology
Fig. 8. Implementation of Ring Topology
Fig. 9. Packet amount of Ring Topology
Fig. 10. Implementation of Star Topology
Fig. 11. Packet amount of Star Topology
Fig. 12. Packet amount received over time by node no. 3, by topology type
Fig. 13. Packet amount received over time by node no. 7, by topology type
For each topology, node nos. 3 (Fig. 12) and 7 (Fig. 13) were analyzed, and it was revealed that, as shown in Tables 3 and 4, the packet amount differs over time by topology. As the figures and tables show, during the same time, the ring topology received the fewest packets and the star topology received the most; the packet amount received with the star topology was distinctively large.

Table 3. Packet amount received over time by node no. 3, by topology type

  Topology        0.6 sec    0.9 sec    1.2 sec    1.5 sec
  Bus topology    716600     1481000    2245400    3009800
  Ring topology   551240     1154440    1736840    2340040
  Star topology   2955720    5985240    9016840    12053640
Table 4. Packet amount received over time by node no. 7, by topology type

  Topology        0.8 sec    1.1 sec    1.4 sec    1.8 sec
  Bus topology    704120     1468520    2232920    3262520
  Ring topology   530440     1122200    1716040    2527240
  Star topology   2955720    5985240    9016840    13064520
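Using the final-column counts from Tables 3 and 4 (node 3 at 1.5 s, node 7 at 1.8 s), a short script makes the topology comparison explicit:

```python
# Final packet counts copied from Tables 3 and 4 above.
node3_final = {"bus": 3009800, "ring": 2340040, "star": 12053640}
node7_final = {"bus": 3262520, "ring": 2527240, "star": 13064520}

for label, counts in (("node 3", node3_final), ("node 7", node7_final)):
    best = max(counts, key=counts.get)      # topology with the most packets
    worst = min(counts, key=counts.get)     # topology with the fewest packets
    ratio = counts[best] / counts[worst]
    print(f"{label}: most={best}, fewest={worst}, ratio={ratio:.1f}x")
```

In both cases the star topology delivers roughly five times the packets of the ring topology, matching the observation above.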
5
Conclusions
Domestically, research is being conducted to apply the ship area network (SAN) concept to connecting the wired and wireless networks within a ship, with the aim of providing next-generation value-added services. If all the devices within a ship, such as the sensors and terminals, are connected with one another using the SAN concept, this will enable the effective development of information/system/service management and communication middleware in next-generation ships. Thus, in this paper, the SAN concept was used, and the configuration of the wired networks within a ship was designed based on the model of a cruise ship, which will produce great economic ripple effects in tourism and the shipbuilding industry. In the experiments by topology, during the same time, the ring topology received the fewest packets and the star topology received the most, and the packet amount received by the star topology was distinctively large. Thus, if SAN LANs are configured with the star topology, networks can be implemented effectively, thereby further improving the communication services for customers and sailors. For further research, it is necessary to design and configure not only wired networks but also wireless sensor networks, based on continued research on SAN. The greatest
constraint in this matter is that a ship is made of steel or alloys, which makes it very difficult to propagate radio waves, and such internal circumstances cannot arbitrarily be changed. This makes it difficult to design networks aboard a ship. Thus, it is necessary to craft designs based on a combined wired/wireless-network concept. Acknowledgments. This research was supported by a National Research Foundation of Korea (NRF) grant funded by the Korea government (MEST) (No. 2011CB019).
References
1. Kim, J.: IT-based Total Ship Solution Technology Development Status and Future Direction. IT Magazine SOC, with Focus on Advanced Shipbuilding (Shipbuilding + IT)
2. CAN System Engineering. Springer
3. CAN Specification 2.0, Parts A and B. Robert Bosch GmbH (September 1991)
4. MOST BOOK, Automotive
5. MOST Cooperation: MOST Specification Rev. 3.0 (May 2008)
6. Mancini, M.: Cruising: A Guide to the Cruise Line Industry, 2nd edn., p. 3. Thomson (2004)
7. Lekakou, M.B., Pallis, A.A., Papadopoulou, M.N.: Plain Cruising? The State of the Cruise Industry in Greece and EU Policy Developments. In: International Association of Maritime Economists (IAME) Conference, p. 313 (2005)
8. Park, et al.: A Study on Developing Cruise Tourism Linking Korea, China, and Japan. Korea Tourism Research Institute, 6 (1999)
9. Ocean Korea: High Growth Potential of Northeast Asia Cruises. Ocean Korea, Special Examination, pp. 74–75 (2007)
10. Ocean Shipping Consultants: The World Cruise Shipping Industry to 2020 (2005)
Mi-Jin Kim. Feb. 2004: Obtained Bachelor's Degree in Computer Engineering at Dongeui University. Aug. 2008: Obtained Master's Degree in Computer Education at the Education School of Pukyong National University. Mar. 2009 – Present: Doctoral Course in Computer Applications Engineering at Dongeui University. Research interests: IPTV, Routing Protocol, IPv6, IP Network, Vehicle Network
Jong-wook Jang. Feb. 1995: Obtained Doctoral Degree in Computer Engineering at Pusan National University. 1987 – 1995: Worked for ETRI. Feb. 2000: Postdoctoral Researcher at UMKC. 1995 – Present: Professor of Computer Engineering, Dongeui University. Research interests: Wired and Wireless Communication Systems, Automobile Networks
Performance Analysis of Inter-LMA Handoff Scheme Based on 2-Layer in Hierarchical PMIPv6 Networks Jongpil Jeong1,*, Dong Ryeol Shin1, Seunghyun Lee1, and Jaesang Cha2 1
School of Information and Communication Engineering, Sungkyunkwan University, Suwon, Korea 440-746 2 Department of Media Engineering, Seoul National University of Science & Technology, Seoul, Korea 139-743 {jpjeong,drshin,lshyun0}@ece.skku.ac.kr, [email protected]
Abstract. Many schemes to reduce the inter-LMA handoff delay in hierarchical Proxy Mobile IPv6 have been proposed, but the previous schemes waste relatively large amounts of network resources to decrease the path rerouting delay. In this paper, we propose a 2-layered LMA concept in which seamless inter-LMA handoff can be supported regardless of the path rerouting time. As a result, the waste of wired resources and the rate of inter-LMA handoffs can both be reduced. The performance analysis shows that the proposed scheme gives the shortest handoff delay, since the MN registers only with its own LMA, and a more efficient handoff process than PMIPv6. These advantages neither increase the total handoff rate nor require additional LMAs. Keywords: Inter-LMA Handoff, Proxy MIPv6, 2-Layer, Ping-pong Effect.
1
Introduction
Proxy Mobile IPv6 (PMIPv6) [1], a network-based local mobility solution developed by the Internet Engineering Task Force (IETF), is being actively discussed. Unlike host-based Mobile IP, it does not require the MN to modify its protocol stack or to take part in the exchange of mobility-related signaling messages. By using proxy mobility management agents to perform the mobility-related signaling on behalf of the MN, it achieves mobility management without the MN's involvement. It is therefore expected to accelerate the deployment of IP mobility management and to be the solution for mobility management in future all-IP wireless networks [2]. The path rerouting between a mobile node (MN) and its new mobile access gateway (MAG) is one of the important issues that increase the handoff delay [1-6]. If the serving MAG and the new MAG are connected to the same local mobility anchor (LMA), there is no need to re-establish a new path for the migration of the MN, since the path between the LMA and the target MN remains the same. In consequence, the impact of path re-establishment on the handoff time is negligible when the handoff occurs between MAGs that are connected to the same LMA. The group of MAGs *
Corresponding author.
T.-h. Kim et al. (Eds.): ACN 2011, CCIS 199, pp. 393–402, 2011. © Springer-Verlag Berlin Heidelberg 2011
connected to an LMA is called a cluster domain. A handoff between MAGs in the same LMA cluster domain is called an intra-LMA handoff. On the contrary, a handoff between MAGs that are connected to different LMA domains is called an inter-LMA handoff. When inter-LMA handoffs occur serially between two neighboring MAGs that are connected to different LMAs, i.e., when an MN migrates from cell a in LMA A to cell b in LMA B and then continuously turns back to cell a, the second handoff from cell b to cell a is called an inter-LMA ping-pong handoff [7]. In these cases, the handoff time delay is not negligible. The path re-establishment of an inter-LMA handoff is similar to the case of new session establishment in mobile networks, and it needs new path routing and hop-by-hop connection setup along the new path. Therefore, the re-establishment of the path may require more time than the allowed maximum delay limit for real-time applications. Most of the existing research on this problem [8-11] has aimed to reduce the path re-establishment time, either by finding a common crossover switch between the original and new routes or by making a direct connection between the serving and new LMAs. However, it is still difficult to execute an inter-LMA handoff within the required maximum delay. Besides, network resources are wasted by the newly established routes, which may not be optimized. In this paper, to overcome the problems in the previous research, the concept of 2-layered LMA domains is proposed. In the proposed scheme, one MAG is connected to two mobility-enhanced LMAs, making it possible to transmit the traffic to a new LMA after full-path re-establishment in the case of an inter-LMA handoff. The quality of service (QoS) negotiation and additional route tracking are also practicable within the maximum delay.
Moreover, the proposed scheme can effectively reduce both the inter-LMA handoff rate and the inter-LMA ping-pong handoff rate. Besides, the number of required LMAs is the same as that in the conventional single-layered network. Following this introduction, we describe the background and the proposed scheme in Sections 2 and 3, respectively. In Section 4, we analyze the performance of the proposed scheme based on the inter-LMA handoff and inter-LMA ping-pong handoff rates. Finally, we draw conclusions in Section 5.
2
Backgrounds
2.1
Proxy Mobile IPv6 Overview
From the deployment perspective, PMIPv6 does not require any software modification in the MN for mobility management [12]. From the performance perspective, it tries to reduce the mobility-related signaling cost and handoff delay by excluding the MN from the handoff procedure [12]. Since MIPv6 is a mature protocol with several implementations, PMIPv6 reuses the Home Agent (HA) functionality and message format of MIPv6 [1]. However, in order to provide network-based mobility management in a domain, it introduces two functional entities, the LMA and the MAG. The LMA is similar to the HA in MIPv6. It is responsible for maintaining the reachability of the MN's IP address while the MN moves among MAGs in the PMIPv6
domain, by updating the binding cache and maintaining the tunnel to the MAG for packet delivery. On the other hand, the MAG is responsible for detecting the MN's movement and initiating binding registration on behalf of the MN.
Fig. 1. Network Structure and Signaling Flow of Proxy MIPv6
As shown in Fig. 1(a), once the MN enters a PMIPv6 domain and first attaches to an access link, the MN attachment procedure is performed. After the MAG receives a Router Solicitation (RS) message from the MN, it performs access authentication using the MN's identity by sending a Query message to the AAA server, which is assumed to be the policy store deployed in the cellular core network. After successful authentication, the MAG obtains the MN's profile and then sends a Proxy Binding Update (PBU) message containing the MN's identity to the LMA. Once the LMA receives the PBU, it also performs access authentication to be sure that the PBU is authorized. If the MAG and MN are trusted, the LMA accepts the PBU message, stores a binding cache entry for the MN, and sends a Proxy Binding Acknowledgment (PBA) including the MN's home network prefix. The MAG sends Router Advertisement (RA) messages to the MN on the access link, advertising the MN's home network prefix obtained from the LMA. The MN is then able to configure an IP address, and it uses the tunnel between the MAG and the LMA to send and receive packets. After the attachment procedure is completed, the MN's IP address remains the same while it moves within the PMIPv6 domain. Fig. 1(b) shows the signaling call flow for the MN's handoff from the previously attached MAG (MAG1) to the newly attached MAG (MAG2). When MAG1 detects the detachment of the MN, it sends a de-registration (DeReg PBU) message to the LMA to end the packet delivery tunnel. Once the MN attaches to MAG2, MAG2 performs the binding update procedure and sets up a tunnel between MAG2 and the LMA to deliver the MN's packets. When the MN receives the RA message containing its home network prefix, it does not detect any change of IP-layer attachment. 2.2
2-Layered LMA Concept
The 2-layered clustering is the cell grouping methodology for a seamless inter-LMA handoff in mobile networks. The proposed concept allots two LMAs for one cell by
having a MAG be connected to two mobility-enhanced LMAs. The two LMAs that contain the same cells belong to different layers, and the LMAs in each layer are distributed in contact with each other like existing LMA structures. An example cellular configuration with the proposed concept is shown in Fig. 2. Overlapping LMA domains can reduce the signaling traffic, as shown in Fig. 2. Observe that the two LMA domains do not overlap in Fig. 2(a), while they overlap in Fig. 2(b) and (c). Here the value of w indicates the degree of overlapping, which is the number of rows of overlapped cells. Without overlapping, as in Fig. 2(a), whenever an MN crosses the LMA domain boundary, its location needs to be updated. If the adjacent LMA domains overlap, as in Fig. 2(b), only the MNs crossing the overlapping region cause a binding update; in other words, an MN needs to fully cross a cell to cause a binding update. When the overlapping area is larger, the MNs need to cross several cells (here, 2), as in Fig. 2(c), to cause a binding update. This scheme thus significantly reduces the signaling traffic due to binding updates compared to the non-overlapping scheme. However, a shortcoming of the overlapping approach is that the cells in an LMA domain are not overlapped uniformly; as a result, managing the binding updates is more complex. We present a proposed scheme that can effectively reduce the binding update rate without such overhead.
Fig. 2. The example of 2-layered LMAs
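The effect of the overlap width w in Fig. 2 can be illustrated with a toy one-dimensional random-walk model of MN movement; the walk model, step count, and seed are our own illustrative assumptions, not part of the paper's analysis:

```python
# Toy 1-D illustration of the overlap idea: an MN walks across cell rows,
# and a binding update fires only once it has moved more than w rows away
# from its registered position (i.e., fully crossed the overlap region).

import random

def binding_updates(steps: int, w: int, seed: int = 7) -> int:
    """Count binding updates for a random walk with overlap width w."""
    rng = random.Random(seed)
    pos, anchor, updates = 0, 0, 0
    for _ in range(steps):
        pos += rng.choice((-1, 1))      # move one cell row left or right
        if abs(pos - anchor) > w:       # fully crossed the overlap region
            updates += 1
            anchor = pos                # re-register at the new domain
    return updates

for w in (0, 1, 2):
    print(f"w={w}: {binding_updates(10_000, w)} binding updates")
```

With w = 0 every boundary crossing triggers an update, while larger w values cut the update count sharply, mirroring the reduction argued for above.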
In the proposed 2-layered clustering environment, the number of cells in one LMA is twice that of the single-layered case, if the capacity of the mobility-enhanced LMAs is the same as in the single-layered case. To prove this, let the total wired capacity of all mobility-enhanced LMAs be T, the capacity of one LMA be C, and the number of cells in the whole network be N_cell. The required capacity of each layer is then T/2, which is half of the total capacity, and the number of LMAs per layer is T/2C. The number of cells per LMA is then 2C·N_cell/T, which is the total number of cells divided by the number of LMAs per layer. Hence, the number of cells per LMA in the network with the proposed concept is twice the C·N_cell/T cells per LMA of a 1-layered network.
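A quick numeric check of this capacity argument, with illustrative values chosen for T, C, and N_cell:

```python
# Numeric check: with total wired capacity T, per-LMA capacity C, and
# N_cell cells, each of the two layers gets T/2, so a layer has T/(2C)
# LMAs and each LMA covers twice as many cells as in the 1-layered case.

T, C, n_cell = 1200, 100, 600          # illustrative values

lmas_single_layer = T / C              # 1-layered network: T/C LMAs
cells_per_lma_1 = n_cell / lmas_single_layer

lmas_per_layer = (T / 2) / C           # each layer gets half the capacity
cells_per_lma_2 = n_cell / lmas_per_layer

print(cells_per_lma_1, cells_per_lma_2)   # the 2-layered value is twice as large
```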
3
Proposed Scheme
When a new session arrives at a cell (MAG), the corresponding MAG selects one of two LMAs connected to it. The MAG tries to make a connection to the LMA in a way
that the center of the corresponding LMA is closer to the cell where the new session arrives. In Fig. 3, to easily select the closer LMA, we assume that information about the arrangement of LMAs is stored in each MAG, for example as a preference value in the PBU message. If the resources in the first-try LMA are not enough to make a new connection, the MAG attempts to make a connection to its second LMA. The new session is blocked when no channel is available in this LMA either.
Fig. 3. Operation of the proposed scheme
When radio hints such as router solicitation (RS) messages are received by neighboring MAGs, the MAG of the current cell (MAG_Serving) checks the location of the MAG of the newly migrating cell (MAG_New). If MAG_New and MAG_Serving are connected to the same mobility-enhanced LMA, a new connection between MAG_New and the connected LMA is set up first, and the old connection between MAG_Serving and the LMA is released after the new connection setup is made. While the MN communicates with the fixed network through the old path before the new connection setup, it communicates through the new path after the connection to MAG_New. Therefore, the time required for path re-establishment is approximately only a single-hop connection setup time. The inter-LMA handoff needs more complex operations than the intra-LMA handoff. For real-time traffic, the path re-establishment delay is critical for guaranteeing the required maximum delay limit and avoiding the forced termination of a handoff session. To accomplish the path re-establishment and the end-to-end QoS negotiation within a limited time, the radio handoff and the path re-establishment should be executed independently. When an MN with a real-time traffic session enters a border cell of the serving LMA, the new path related to the MN's other
LMA is established. In other words, the serving MAG (MAG_Serving) of the MN is switched from the serving LMA to the overlaid one. If the traffic has a non-real-time attribute, the inter-LMA handoff scheme is the same as in the real-time case, except that the new path is re-established only upon a radio hint. If the radio hint is also used in the real-time case to determine when to reroute a new path, the threshold value of the radio hint power for the non-real-time case is set much higher than that for the real-time case, to decrease the number of inter-LMA handoffs. Consequently, the new path is re-established simultaneously with the handoff, and the number of inter-LMA handoffs for non-real-time traffic is much smaller than that for the real-time case. The shortcoming of the proposed scheme in the non-real-time case is the relatively long path re-establishment time, but this is not so critical as to violate the QoS requirements for non-real-time traffic.
4
Performance Analysis
In this section, the mathematical model of the cellular environment for mobile networks is described. We assume that a MAG is assigned to each cell and that each cell has an equilateral hexagonal shape, as in Fig. 4. Although the proposed system model adopts square-like shapes for the LMAs, we assume hexagonal LMA shapes in the analysis model for convenience of analysis. A ring is defined as a single cell or a group of boundary cells, and an LMA is made up of several rings. Inter-edges are defined as the boundary edges of the outermost ring of an LMA, and all the other remaining edges are termed intra-edges.
Fig. 4. Network Architecture for the proposed scheme
Let $I_E(N)$ and $I_A(N)$ be the numbers of inter- and intra-edges in an LMA with $N$ rings ($N = 1, 2, 3, \ldots$), respectively. Let $C(i)$ be the number of cells in ring $i$ ($i = 0, 1, 2, \ldots, N-1$), and let $TC(N)$ be the total number of cells in an LMA with $N$ rings. Then the following relationships can be derived [13]:

$$I_E(N) = 12N - 6 \qquad (1)$$

$$C(i) = \begin{cases} 1, & i = 0 \\ \tfrac{1}{2}\{6 + I_E(i)\}, & 1 \le i \le N-1 \end{cases} \qquad (2)$$
Performance Analysis of Inter-LMA Handoff Scheme
$$TC(N) = C(0) + \sum_{i=1}^{N-1} C(i) = 3N^2 - 3N + 1 \qquad (3)$$

$$I_A(N) = \frac{1}{2}\{6\,TC(N) - I_E(N)\} = 9N^2 - 15N + 6 \qquad (4)$$
4.1
Inter-LMA Pingpong Handoff Rate
We denote by $R_{E,k}(N)$ the inter-LMA handoff rate of the $k$-layered clustered case with $N$ rings per LMA. From Eqs. (1) and (3), $R_{E,1}(N) = I_E(N)/6TC(N)$ is derived as

$$R_{E,1}(N) = \frac{2N-1}{3N^2-3N+1} \qquad (5)$$
To analyze the inter-LMA pingpong handoff rate, the probability that the MN migrates to a neighboring cell is needed, because an inter-LMA pingpong handoff can occur only when the MN experiences two or more handoffs. We assume that the originating-session blocking probability and the handoff blocking probability are zero, and that the session holding time and cell sojourn time are exponentially distributed with rates $\mu_M$ and $\mu_R$, respectively. Then $P_h$, the probability that an MN handoffs to a neighboring cell within a session holding time, is given by [14]

$$P_h = \frac{1}{1+\mu_M/\mu_R} \qquad (6)$$
Let the inter-LMA pingpong handoff rate in a $k$-layered clustered system with $N$ rings be $R_{P,k}(N)$. Then $R_{P,1}(N)$ is derived as the product of $R_{E,1}(N)$, $P_h$, and the probability that an MN in the outer neighboring ring of the original LMA handoffs back into the original LMA. This probability is the number of inter-edges of the LMA over the number of edges in the outer neighboring ring, $I_E(N)/6C(N)$. Hence, we have

$$R_{P,1}(N) = \frac{2N-1}{3N^2-3N+1}\cdot\frac{2N-1}{6N}\cdot\frac{1}{1+\mu_M/\mu_R} \qquad (7)$$
In the real-time traffic case of a 2-layered clustered system, it is assumed that an MN performing an inter-LMA handoff enters the $(N-1)$th ring of the overlaid LMA (a few exceptional cases exist, but these are ignored for convenience of analysis). $R_{P,2,RT}(N)$ is therefore derived as the product of $R_{E,2,RT}(N)$, $P_h$, and the probability that the MN in the $(N-1)$th ring handoffs into the border ring:

$$R_{P,2,RT}(N) = R_{E,2,RT}(N)\cdot P_h\cdot\frac{I_E(N-1)}{6C(N-1)} = \frac{2N-3}{3N^2-9N+7}\cdot\frac{2N-3}{6N-12}\cdot\frac{1}{1+\mu_M/\mu_R} \qquad (8)$$
Similar to the real-time case, in the non-real-time case the inter-LMA handoff MN enters the $(N-1)$th ring of the overlaid LMA. For an inter-LMA pingpong handoff to occur after the inter-LMA handoff, the MN in the $(N-1)$th ring must enter the $N$th ring and then move across an inter-edge of the LMA. Hence, we have
J. Jeong et al.

$$R_{P,2,NRT}(N) = R_{E,2,NRT}(N)\cdot P_h\cdot\frac{I_E(N-1)}{6C(N-1)}\cdot\frac{I_E(N)}{6C(N-1)} = \frac{2N-3}{3N^2-9N+7}\cdot\frac{(2N-1)^2}{(6N-6)^2}\cdot\frac{2N-3}{6N-12}\cdot\frac{1}{1+\mu_M/\mu_R} \qquad (9)$$
When $N$ is large enough that all but the highest-order terms can be ignored, $R_{P,1}(N)$ and $R_{P,2,RT}(N)$ converge to the same rate, $2/(9N)$, while $R_{P,2,NRT}(N)$ converges to $2/(81N)$.
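The rates derived above are easy to evaluate numerically. The following sketch (function and variable names are ours, not the paper's) implements Eqs. (1)-(7) for the 1-layered case:

```python
# Sketch of the handoff-rate formulas in Eqs. (1)-(7).
# Function and variable names are ours, not the paper's.

def inter_edges(n):
    """Eq. (1): number of inter-edges of an LMA with n rings."""
    return 12 * n - 6

def cells_in_ring(i):
    """Eq. (2): number of cells in ring i (ring 0 is the center cell)."""
    return 1 if i == 0 else (6 + inter_edges(i)) // 2   # = 6*i for i >= 1

def total_cells(n):
    """Eq. (3): total number of cells in an LMA with n rings."""
    return sum(cells_in_ring(i) for i in range(n))      # = 3n^2 - 3n + 1

def p_handoff(mu_m, mu_r):
    """Eq. (6): probability of a handoff within a session holding time."""
    return 1.0 / (1.0 + mu_m / mu_r)

def pingpong_rate_1layer(n, mu_m=1.0, mu_r=1.0):
    """Eq. (7): inter-LMA pingpong handoff rate, 1-layered case."""
    r_e1 = inter_edges(n) / (6 * total_cells(n))        # Eq. (5)
    p_back = inter_edges(n) / (6 * cells_in_ring(n))    # re-entry probability
    return r_e1 * p_back * p_handoff(mu_m, mu_r)

# Closed form of Eq. (3) agrees with the ring-by-ring count:
assert total_cells(4) == 3 * 4**2 - 3 * 4 + 1
```

For large $N$, `pingpong_rate_1layer` falls roughly as $2/(9N)$, in line with the asymptotic remark above.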
Fig. 5. Inter-LMA handoff rate (left) and inter-LMA pingpong handoff rate (right) vs. the number of rings N, for the 1-layered, 2-layered (RT), and 2-layered (NRT) cases
For the inter-LMA pingpong handoff with $\mu_M/\mu_R = 1$, we notice from Fig. 5 (right) that the handoff rate of real-time traffic in the 2-layered case is slightly larger than that in the 1-layered case when the number of cells per LMA is relatively small, but the difference decreases as the LMA size increases. If the number of cells in one LMA is larger than 50, the performances of the two schemes are almost the same. The handoff rate for non-real-time traffic in the 2-layered system is about 1/9 of that of the other two cases, as described in the previous section.

4.2
Impact of the Delay: $t_{mc}$ (MN-CN) and $t_h$ (MN-HA)
In Fig. 6 (left), we can notice that the disruption times of PMIPv6 (intra-LMA) and the proposed scheme are independent of the delay between the MN and CN. For PMIPv6 (inter-LMA), however, the disruption time grows (up to 900 ms) as this delay increases (up to 150 ms). These results corroborate the intent of the protocol specifications: the MN in a foreign network registers only with its LMA (PMIPv6 intra-LMA and the proposed scheme). For PMIPv6 (inter-LMA), on the other hand, the MN also registers with its CN, which involves extra messages to perform the return-routability and registration procedures. PMIPv6 (intra-LMA) and the proposed scheme therefore give the shortest handoff delay, because the MN registers only with its LMA, which is in the same foreign network. This handoff delay is also the lowest and independent of the delay between the MN and the home network, because the MN moves within a single LMA domain. On the other
hand, the HA is involved in the registration process for the other protocols analyzed, resulting in higher handoff delays as the delay between the MN and its home network increases. PMIPv6 for inter-LMA movements results in a handoff delay that is unacceptable for VoIP sessions, even if the delay between the MN and its home network is low.
Fig. 6. Disruption time vs. delay ($t_{mc}$, $t_h$) for PMIPv6 (intra- and inter-LMA) and the proposed scheme (left), and disruption time vs. FER (right)
4.3
Disruption Time as a Function of the FER
The number of transactions and the size of the messages exchanged affect the handoff delay. For the evaluation, the size of each message and the values of the fixed backoff timers are obtained from [1-2]. We consider a 128-kb/s channel. The delay D and the inter-frame time τ are set to 10 ms, as in [15], and to 1 ms, respectively. The maximum number of transmissions allowed ($N_m$) is 6. The handoff delay is evaluated at frame error rates (FER) between 0% and 10%. As shown in Fig. 6 (right), the proposed scheme yields a more efficient handoff process than PMIPv6 because of the lower number of messages exchanged over the wired and wireless links. The handoff latencies of both the proposed scheme and PMIPv6 increase with the FER, as retransmissions raise the signaling cost. For FER < 3% the two schemes differ only slightly; however, as the probability of packet retransmission increases, the difference between the two schemes also increases. Additionally, since frequent movement of the mobile node within the LMA domain is handled by local handoff management, the difference in total handoff latency increases further.
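The dependence on the FER can be illustrated with a simple expected-latency model. This is our own sketch, not the exact evaluation model of [15]: each (re)transmission of a signaling message is assumed to fail independently with probability q, with at most $N_m$ transmissions allowed, using the channel and timer values quoted above.

```python
# Illustrative retransmission model for the FER dependence: each attempt of a
# signaling message fails independently with probability q, and at most n_max
# transmissions are allowed. This is our own sketch, not the paper's exact
# evaluation model; it only shows why latency grows with the FER.

def expected_transmissions(q, n_max=6):
    """Expected number of attempts, giving up after n_max tries."""
    e = sum(k * (1 - q) * q ** (k - 1) for k in range(1, n_max + 1))
    return e + n_max * q ** n_max

def message_delay(size_bits, q, rate_bps=128_000, d=0.010, tau=0.001, n_max=6):
    """Per-attempt time = transmission time + delay D + inter-frame time tau."""
    per_attempt = size_bits / rate_bps + d + tau
    return expected_transmissions(q, n_max) * per_attempt

for fer in (0.0, 0.03, 0.10):
    print(f"FER={fer:.2f}: {message_delay(800, fer) * 1000:.2f} ms")
```

The message size of 800 bits is a placeholder; the qualitative trend (slight difference at low FER, growing gap as retransmissions become likely) matches the discussion above.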
5
Conclusions
One of the most important goals of inter-LMA handoff schemes for mobile networks is to minimize the impact of communication-path rerouting, which may force the termination of delay-sensitive connections because of the long handoff delay. In this paper, a scheme with a layered arrangement of LMAs is proposed to solve this problem. In the proposed layered LMA concept, the MAGs in the mobile network are connected directly to two LMAs, and this configuration of the mobile network enables path re-establishment before
a radio handoff when an inter-LMA handoff occurs. The proposed handoff scheme using the 2-layer LMA concept for mobile networks can effectively reduce the time delay of an inter-LMA handoff, the overall number of inter-LMA handoffs, and the inter-LMA ping-pong effects. Acknowledgments. This research was supported by the Basic Science Research Program (2010-0024695) and the Future-based Technology Development Program (No. 2010-0020727) through the National Research Foundation of Korea (NRF), funded by the MEST.
References 1. Gundavelli, S., et al.: Proxy Mobile IPv6. RFC 5213, IETF (August 2008) 2. Kong, K., Lee, W., Han, Y., Shin, M., You, H.: Mobility management for all-IP mobile networks: mobile IPv6 vs. proxy mobile IPv6. IEEE Wireless Communications 15(2), 36– 45 (2008) 3. Fathi, H., Prasad, R., Chakraborty, S.: Mobility management for VoIP in 3G systems: evaluation of low-latency handoff schemes. IEEE Wireless Communications 12(2), 96– 104 (2005) 4. Johnson, D., et al.: Mobility Support in IPv6. RFC 3775, IETF (June 2004) 5. Koodli, R., et al.: Fast Handovers for Mobile IPv6. RFC 4068, IETF (July 2005) 6. Soliman, H., Castelluccia, C., Malki, K.E.: Hierarchical Mobile IPv6 Mobility Management (HMIPv6). RFC 4140, IETF (August 2005) 7. Akyildiz, I., McNair, J., Ho, J., Uzunalioglu, H., Wang, W.: Mobility Management in Next-Generation Wireless Systems. Proceedings of the IEEE 87(8), 1347–1385 (1999) 8. Xie, J., Akyildiz, I.F.: A Distributed Dynamic Regional Location Management Scheme for Mobile IP. IEEE Transactions on Mobile Computing 1(3) (July 2002) 9. Pack, S., Choi, Y.: A Study on Performance of Hierarchical Mobile IPv6 in IP-based Cellular Networks. IEICE Transactions on Communications E87-B(3), 462–469 (2004) 10. Wu, Z.D.: An Efficient Method for Benefiting the New Feature of Mobile IPv6. In: Proceeding of IASTED, pp. 65-70 (October 2002) 11. Akyildiz, I.F., Wang, W.: A Dynamic Location Management Scheme for Next-Generation Multitier PCS Systems. IEEE Transaction on Wireless Communications 1(1) (January 2002) 12. Kempf, J., et al.: Goals for Network-Based Localized Mobility Management (NETLMM). RFC 4831, IETF (April 2007) 13. Agrawal, P., Zeng, Q.: Introduction Wireless and Mobile System, 2nd edn. Thomson Publishing (2006) 14. Lin, Y.-B., Mohan, S., Noerpel, A.: Queueing Priority Channel Assignment Strategies for PCS Hand-Off and Initial Access. IEEE Trans. on Veh. Tech. 43(3), 704–712 (1994) 15. 
Fathi, H., Chakraborty, S., Prasad, R.: Optimization of Mobile IPv6-Based Handoffs to Support VoIP Services in Wireless Heterogeneous Networks. IEEE Trans. on Vehicular Technology 56(1) (January 2007)
A Study on Dynamic Gateway System for MOST GATEWAY Scheduling Algorithm Seong-Jin Jang and Jong-Wook Jang Department of Computer Engineering, Dong-Eui University 614714 Busan, Republic of Korea [email protected], [email protected]
Abstract. To ensure infotainment and telematics services, much attention is being paid to MOST. The MOST network has different protocols, such as MOST25, MOST50, and MOST150, which require gateways to send and receive information. In the case of the isochronous and Ethernet channel data that are supported only by MOST150, when data is transmitted from MOST150 to MOST25 it cannot be handled by MOST25, leading to data loss and transmission delays. Thus, this study proposes a MOST gateway system that connects the MOST25 and MOST150 networks to form a single network. It uses the simulation tool NS-2 to analyze the performance of the CBQ scheduling algorithm between the MOST25 and MOST150 networks, and consequently proposes scheduling-algorithm improvements suitable for cars. Keywords: MOST, MOST150, MOST Gateway, Gateway Scheduling Algorithm.
1
Introduction
With the recent rise in the use of automotive electronic-control systems and recent technological advances, intelligent vehicles are increasingly being highlighted. To ensure higher reliability and safety, and to support drivers' convenience through functions such as auto cruise and collision avoidance, more efforts are being made to adopt new information technologies, such as advanced information and communications technology, electronics, and control technologies [1, 2]. Further, with the increasing integration of multimedia devices in vehicles, studies on MOST have been actively conducted for the last several years. With MOST, one optical cable can replace the dozens or hundreds of wires used to connect the various electronic components in a vehicle. The design and production processes for the development of new cars can thereby be improved, and the competitiveness of vehicles can be boosted through quality enhancement, multimedia convenience, and reductions in vehicle weight and fuel consumption [1]. There are now 100 vehicle models that use MOST infotainment backbones, and foreign companies such as Daimler Chrysler, Audi, and BMW, as well as South Korean companies such as Hyundai and Kia, have applied MOST technology to their vehicles [3,4]. Vehicles equipped with MOST25 need to replace all MOST150-related devices to receive enhanced services, such as higher bandwidths and Ethernet, incurring high network-building costs and consequently weakening their competitiveness.

T.-h. Kim et al. (Eds.): ACN 2011, CCIS 199, pp. 403–411, 2011. © Springer-Verlag Berlin Heidelberg 2011

Prompt
technological development is needed to address these high costs before it is too late. Because MOST25, MOST50, and MOST150, each of which supports a different bandwidth, form heterogeneous networks, gateway development is essential to enable these systems to exchange information between networks with different protocols [4,5,6]. MOST150 has a frame structure similar to that of MOST25, but it has additional isochronous channels and an Ethernet channel. If a gateway is configured between the MOST25 and MOST150 networks, then when data is sent from MOST150 to MOST25, the isochronous and Ethernet channel data cannot be handled, causing data loss and transmission delays. Thus, in this study, a MOST gateway system is proposed to enable the synchronized connection of the MOST25 and MOST150 networks into a single network. The widely used CBQ scheduling algorithm is applied to ensure service quality, enhance resource utilization, and bound the transmission delay of the affected isochronous and Ethernet channels in MOST25. The performance was analyzed with NS-2 according to priority and bandwidth, and measures for boosting efficiency were proposed. Chapter 2 of this paper discusses the existing MOST network problems and the need for a MOST gateway scheduling algorithm, and Chapter 3 proposes measures to configure the MOST gateway and the CBQ scheduling algorithm. Chapter 4 analyzes the performance of the MOST gateway scheduling algorithm, and Chapter 5 presents this paper's conclusion.
2
Definition of the Related Researches and Problems
This chapter discusses the problems surrounding MOST and the existing scheduling algorithms to explain the background of the mechanism implemented in this study. MOST25 products that support up to 25 Mbps, MOST50 products that support up to 50 Mbps, and MOST150 products that support large-capacity multimedia services of up to 150 Mbps are now being marketed. As MOST25, MOST50, and MOST150 form heterogeneous networks, they need gateways to send and receive data, and few study results on this are currently available. When data is transmitted from a high-bandwidth network to a low-bandwidth one via a gateway, the relative data volume increases, which leads to data loss and transmission delay. In particular, when data is transmitted from MOST150 to MOST25, the isochronous and Ethernet channel data that are supported only in MOST150 cannot be handled by MOST25, causing further data loss and transmission delays. To solve these problems, that is, to enable the handling of MOST150's isochronous and Ethernet channel data in MOST25, isochronous data is converted into stream data, and Ethernet data is converted into packet data. Priority and bandwidth contention then occurs between the converted MOST150 packet and stream data and the existing MOST25 packet and stream data. In extreme cases, certain data occupy the bandwidth, causing data with low transmission priority to lose transmission opportunities, preventing timely data transmission and consequently creating resource starvation. To solve this problem, diverse existing queue schedulers can be applied.
scheduling algorithms that are applied to IP-based packet data can be applied. For differentiated services beyond simple first-in-first-out (FIFO) queuing, the priority queuing (PQ) policy was proposed, and to prevent PQ starvation and to ensure fair service per flow, the fair queuing (FQ) policy was proposed. Research was then conducted on the weighted fair queuing (WFQ) and weighted round robin (WRR) methods, which overcame the shortcomings of FQ and improved performance. Currently, the most widely used discipline is class-based queuing (CBQ), which combines the features of PQ and FQ [7,8].
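As a rough illustration of the weighted-round-robin idea that underlies WRR and CBQ, the sketch below (our own simplified code, not the MOST gateway implementation) visits each class in turn and lets it dequeue up to its weight per cycle:

```python
from collections import deque

# Simplified weighted round robin: each class may dequeue up to `weight`
# packets per cycle. Illustrative sketch only; real WRR/CBQ also accounts
# for packet sizes and link-sharing rules.

def wrr_schedule(queues, weights, budget):
    """queues: list of deques of packets; weights: dequeues per cycle."""
    out = []
    while len(out) < budget and any(queues):
        for q, w in zip(queues, weights):
            for _ in range(w):
                if q and len(out) < budget:
                    out.append(q.popleft())
    return out

q1 = deque(["a1", "a2", "a3", "a4"])
q2 = deque(["b1", "b2"])
print(wrr_schedule([q1, q2], weights=[2, 1], budget=5))
# -> ['a1', 'a2', 'b1', 'a3', 'a4']
```

With weights 2:1, the first class receives roughly twice the service of the second, which is the behavior CBQ's WRR component generalizes with byte-accurate accounting.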
3
Configuration of the MOST Gateway

3.1
MOST Gateway Design Methods
The MOST25-MOST150 gateway structure proposed in this study is shown in Fig. 1. The data mapping table stores and manages the information of the numerous frames included in the MOST25 and MOST150 frames, and the MOST25 frames are filtered on the basis of the data mapping table.
Fig. 1. Most Gateway Structure
Each input frame is classified via the data classifier according to the filter information recorded in the input packet headers. The classified frames are transmitted as packet data queues, stream data queues, and control data queues. Then each queue is given a weight according to the characteristics of each queue and the schedulers are configured according to the allotted bandwidth values.
Fig. 2. MOST 25 frame structure
Fig. 2 shows the MOST25 data frame structure. MOST25 supports 25 Mbps transmission; of its 512 bits (64 bytes), the first byte is used for frame management, the next 60 bytes are used to transmit stream and packet data, and the last two bytes are used for control data.
Fig. 3. Comparison of the MOST25 and MOST150 frame structures [4]
Fig. 3 compares the MOST25 and MOST150 frame structures. MOST150 supports a 150 Mbps bandwidth (wider than that of MOST25) as well as a wider range of video applications, and provides an isochronous transfer mechanism and an Ethernet channel for efficient, homogeneous IP-based data transmission [4]. Schedulers are included in the design to manage the delays and the bandwidth mismatch between the MOST25 and MOST150 networks.
Fig. 4. CBQ Packet-handling Method
3.2
CBQ Scheduling Algorithms
CBQ, which is the most widely used, can be applied to the MOST gateway. CBQ, a policy based on WRR queuing schedulers, is designed to ensure accurate bandwidth allocation, overcoming the WRR shortcomings even when packets of diverse sizes coexist. As shown in Fig. 4, CBQ internally consists of two schedulers (a WRR queuing scheduler and a link-sharing scheduler) and basically operates with WRR; but when
each class uses more or less output bandwidth than allotted, as measured by the estimator, the link-sharing scheduler operates [7,9]. When data is transmitted from MOST150 to MOST25, the MOST gateway CBQ scheduler distributes the isochronous and Ethernet channel data to the respective queues, and the data is then handled according to bandwidth utilization and priority. To control the CBQ link-sharing scheduler output, the following variables are used [10].

Fig. 5. Calculation of the idle variable (ideal vs. actual interdeparture time, IDT)

• idle: the difference between the ideal interdeparture time and the actual one, used to check whether the allotted bandwidth is met. The idle value is calculated with Equation (1), where $p$ is the size of the transmitted packet, $r$ the output link bandwidth, and $f$ the fraction of service allotted to the relevant class (a value greater than 0 means less service is being provided than the allotted volume):

$$idle = \Delta t_{actual} - \frac{p}{f \cdot r} \qquad (1)$$
• avgidle: the most important variable with which CBQ controls the traffic; it tracks the actual service ratio of the traffic. This value is computed as an exponentially weighted moving average (EWMA) of idle, as in Equation (2), where the weight $w$ is set to 16:

$$avgidle \leftarrow \left(1 - \frac{1}{w}\right)avgidle + \frac{1}{w}\,idle \qquad (2)$$

4
Evaluation of Performance
In this chapter, the performance of the proposed MOST gateway is evaluated and analyzed when CBQ schedulers are applied to differentiate bandwidth utilization and priorities. To compare performance, the NS-2 simulator was used, and a topology was configured as in Fig. 6. When data was transmitted from MOST150 to MOST25, the existing stream data was converted into CBR data, and the isochronous channel data was converted into real-audio data. After the bandwidth utilization and priorities were set as in Tables 1 and 2, the performances were compared. Figure 7 shows the CBQ algorithm's traffic-handling amount for Table 2, and Figure 8 shows the corresponding dropped-packet amount. The CBR data of queue 1, which has the first priority and a 50% bandwidth, can be transmitted without
requiring bandwidth borrowing, as shown by the absence of dropped packets in Figure 8. Queues 2 and 3 have the same priority, but owing to the bandwidth difference, queue 2 has fewer dropped packets than queue 3 because queue 2 borrowed more extra bandwidth than queue 3.
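This borrowing behavior is governed by the idle/avgidle estimator described in Section 3.2; a minimal numerical sketch of that bookkeeping (our own illustrative code, following the notation of Eqs. (1)-(2)):

```python
# EWMA estimator used by CBQ link sharing (illustrative sketch; the real
# estimator in [10] tracks more state). idle > 0 means the class received
# less than its allotted share of the link.

W = 16  # EWMA weight w from Eq. (2)

def idle(delta_t, p, r, f):
    """Eq. (1): actual interdeparture time minus the ideal one, p / (f * r)."""
    return delta_t - p / (f * r)

def update_avgidle(avgidle, idle_value, w=W):
    """Eq. (2): avgidle <- (1 - 1/w) * avgidle + (1/w) * idle."""
    return (1 - 1 / w) * avgidle + (1 / w) * idle_value

# A class allotted f = 0.5 of a 1 Mbit/s link, sending 1000-bit packets:
# the ideal interdeparture time is 1000 / (0.5 * 1e6) = 2 ms.
print(idle(0.003, 1000, 1e6, 0.5))   # positive: the class is under-served
```

A class whose avgidle stays positive is entitled to borrow unused bandwidth from sibling classes, which is why queue 2 above, with the larger allotment, drops fewer packets than queue 3.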
Fig. 6. MOST Ring Topology

Table 1. Bandwidth Utilization and Priority No. 1

             CBR (Queue 1)   RealAudio1 (Queue 2)   RealAudio2 (Queue 3)
bandwidth    50%             30%                    20%
priority     6               1                      1

Table 2. Bandwidth Utilization and Priority No. 2

             CBR (Queue 1)   RealAudio1 (Queue 2)   RealAudio2 (Queue 3)
bandwidth    50%             30%                    20%
priority     1               6                      6
Fig. 7. Throughput of packets from Table 2
Fig. 8. Amount of drop packets from Table 2
Fig. 9. Throughput of packets from Table 1
Fig. 10. Amount of drop packets from Table 1
Fig. 9 shows the CBQ algorithm's traffic-handling amount for Table 1, and Fig. 10 shows the corresponding dropped-packet amount. These show that, compared with queue 3, queue 2, which has the highest priority and a larger bandwidth, delivers excellent traffic-handling performance. Queue 1, which has the lowest priority, had its extra bandwidth consumed by queues 2 and 3; it could not send data, and all of its data was dropped, as shown in Figure 10. Figure 11 shows the traffic-handling amounts of queue 2, which has the best traffic-handling capability, for Tables 1 and 2. Queue 2 in Table 1 showed the best traffic-handling performance. In the case of Table 2, however, queue 2 did not send CBR data for 40 seconds and instead dropped all the data, failing to ensure a fair bandwidth. Transmitting MOST150 isochronous channel data to MOST25 has been shown to be efficient for ensuring real-time data performance at the MOST gateway, but the real-time data priority should be lowered while extra bandwidth is allocated to it.
Fig. 11. Throughput of Queue2
5
Conclusion
This study proposed measures to configure the MOST gateway so that existing MOST25 equipment can be used in MOST networks while the MOST150 performance is optimized, and accordingly explained the data-handling procedures. To send MOST150 isochronous channel data to MOST25, the data was converted into stream data, and CBQ scheduling algorithms were applied to it. The performances were compared according to bandwidth utilization and priority using NS-2, and the environment suitable for cars was verified. It was confirmed that if the results of this study are applied to the MOST gateway, excellent performance can be ensured, thereby enabling efficient communication. Acknowledgements. This research was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MEST) (No. 2011CB019).
References 1. Park, S.-H.: The latest trends in technology of MOST, EIC, KERI (2008) 2. Bishop, R.: Intelligent vehicle applications worldwide. IEEE Intelligent Systems and Their Applications 15(1), 78–81 (2000) 3. Grzemba, A.: MOST Book 2 (2011) 4. MOST cooperation, MOST specification Rev 3.0 (2008) 5. Bosch, CAN specification version 2.0. Published by Robert Bosch GmbH (September 1991) 6. Han, J.-S.: A Study On a dynamic Gateway system for CAN(Controller Area Network). Yonsei University (2007) 7. Semeria, C.: Supporting Differentiated Service Classes:Queuing Scheduling Disciplines, Juniper Networks, White paper (2000) 8. Ito, Y., Tasaka, S., Ishibashi, Y.: Variably weighted round robin queuing for core IP routers, Performance, Computing, and Communications Conference 21st IEEE International April 3-5 (2002) 9. Jang, D.-H.: Scheduling Algorithm For Supporting Differentiated Service Classes. Graduate School of Chung-Ang University (2003) 10. Floyd, S.: Notes on Class Based Queuing: Setting Parameters, Informal notes (September 1995)
Seong-Jin Jang. Aug. 2003: Obtained Master's Degree in Computer Education at the Education School of Dong-Eui University. May 2005 - Present: Doctoral course in Computer Applications Engineering at Dong-Eui University.
※ Interest subjects: MOST Network, Routing Protocol, Vehicle Network

Jong-Wook Jang. Feb. 1995: Obtained Doctor's Degree in Computer Engineering at Pusan National University. 1987 - 1995: Worked for ETRI. Feb. 2000: Postdoctoral researcher at UMKC. 1995 - Present: Professor of Computer Engineering, Dong-Eui University.
※ Interest subjects: Wired and Wireless Communication Systems, Automobile Networks
Implementation Automation Vehicle State Recorder System with In-Vehicle Networks Sung-Hyun Baek and Jong-Wook Jang Department of Computer Engineering, Dong-Eui University, Busan, Republic of Korea [email protected], [email protected]
Abstract. Recently, because of the loss of life and property due to car accidents, black boxes like those used in airplanes have been installed in vehicles. Existing vehicle black boxes store only external video and images, so in the event of an accident the vehicle's running condition is not known. To know the status of the vehicle, various sensors for measurement and control are mounted in it; these sensors are controlled by ECUs (Electronic Control Units) and communicate through the in-vehicle network, which comprises a control network and a multimedia network. The control network has used the OBD-II (On-Board Diagnostics) protocol on all vehicles since 2006, and the multimedia network uses the MOST (Media Oriented Systems Transport) protocol for communication between ECUs. Using the OBD-II and MOST protocols, the black box can record the extensive driving data generated by the vehicle's many ECUs, so the driver can obtain more definite information than from an existing black box. In this paper, to complement existing black boxes that store only external video and images, a black box system is implemented that uses the MOST and OBD-II networks and thus provides the current status and information of the vehicle. The black box system consists of a camera for image acquisition, GPS for time and position information, OBD-II for receiving vehicle running data, and a MOST interface card for receiving multimedia data. Using Visual C++, the black box operates each sensor and module to receive its data. Keywords: blackbox, in-vehicle network, OBD-II, MOST, CAN, vehicle blackbox.
1
Introduction
Black boxes were first used in aircraft to store flight records and to determine the causes of accidents. This technology was recently applied to vehicles and is being used to identify the causes of vehicle accidents. The purpose of the car black box is to store the driving video, vehicle location, and time information related to an accident as objective data, and to identify their causal relationships with the accident. The car black box can also record hit-and-run cases during driving or parking to protect the driver's property. With the increase in its importance, the car black box was made obligatory for all vehicles in Europe in 2010 and for all 4.5-ton-or-less vehicles in the U.S. in 2011. Between 2010 and 2013 in Korea (per the Dec. 29, 2009 announcement of the Ministry of Land, Transport and Maritime Affairs), business vehicles started to be required to be equipped with a digital driving recorder.

T.-h. Kim et al. (Eds.): ACN 2011, CCIS 199, pp. 412–420, 2011. © Springer-Verlag Berlin Heidelberg 2011

In November 2007
the Korean Agency for Technology and Standards established the national standard (KS) for the car accident recorder (KS R 5076) to provide regulatory support for its technical development and for its use in relevant industries. According to the KS black-box standards, video data are important, but driving data (vehicle speed, brake condition, seatbelt fastening status, GPS, ABS, tire pressure, airbag condition, etc.) are also necessary in actual car accidents. Most car black boxes on the market, however, meet the KS video data standards but do not comply with the requirement for vehicle condition data. In actual car accidents, video data alone are not enough to accurately identify the accident cause [1][5]. In this paper, video data were stored; the vehicle conditions, including the RPM and speed, were stored using the OBD-II protocol, which is widely used for in-vehicle control networks; and the vehicle multimedia information was stored using the Media Oriented Systems Transport (MOST) protocol, which is widely used for vehicle multimedia networks, to realize an integrated black box that provides accurate information for actual accident analysis.
2
In-Vehicle Networks

2.1
OBD-II Network
Vehicles are equipped with sensors for measurement and control, and these devices are controlled by the electronic control unit (ECU). The original purpose of developing the ECU was to control the core functions of the engine, including ignition timing, fuel injection, variable valve timing, idling, and limit setting; but with the development of vehicle technologies and computer performance, the ECU now controls almost all parts of the vehicle, including the automatic transmission, drive system, brake system, and steering system. Due to the continuous development of the electronic diagnosis system, it has been established as the standard diagnosis system, called "on-board diagnostics version II (OBD-II)." In the OBD network, data on the main systems or failures that are transferred from the sensors to the ECU can be read from a vehicle console or an external device using serial communications [3][4]. All vehicles that use the OBD-II network adopt the standard diagnostic trouble code (DTC) and connection interface (ISO J1962), but there are five different electrical signaling variants: SAE-J1850 (VPW, PWM); ISO 15765 and SAE-J2234 (CAN communication); and ISO 9141-2 and ISO 14230-4 (ISO method). According to the regulations, however, all vehicles sold in the U.S., the world's largest car market, have had to use CAN (ISO 15765-4) since 2008. Accordingly, it is expected that the European and Asian markets will also adopt CAN (ISO 15765-4) and that the interfaces will be unified into one type. When a fault occurs in a vehicle, OBD-II describes the fault information using a five-character diagnostic trouble code. The trouble types and codes are also standardized. General car maintenance agencies use the trouble codes based on the OBD-II standard to detect abnormalities in vehicles and to repair them. CAN (ISO 15765 and SAE-J2234) was designed for stable operation in a noisy environment.
Because the CAN bus is basically the broadcasting type, all its nodes
can detect all messages but process only those addressed to them. The bus supports a transfer speed of 125 kbit/s at a distance of 500 m. CAN is more robust against noise and has a faster transfer speed than the other standard protocols. Each message frame carries priority information, and the frame with the higher priority is transferred first. The message payload is very short (8 bytes), and CRC-15 is used for error detection and correction (Fig. 1).
Fig. 1. CAN message frame
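The priority mechanism can be illustrated in miniature: during CAN arbitration, dominant bits win the bus, so among competing frames the one with the numerically lowest identifier is transferred first. The following toy sketch (not part of the paper's implementation; the class name is illustrative) captures that rule:

```java
// Toy illustration of CAN bus arbitration: the pending frame with the
// numerically lowest identifier has the highest priority and wins the bus.
public class CanArbitration {
    public static int winner(int[] pendingIds) {
        int best = pendingIds[0];
        for (int id : pendingIds) {
            if (id < best) best = id;   // lower identifier = higher priority
        }
        return best;
    }
}
```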
2.2 MOST Network
MOST is a high-speed integrated multimedia communication protocol that transfers high-capacity multimedia data, including digital video, voice, and control data, at speeds of up to 150 Mbps over a fiber-optic cable arranged in a ring network. MOST is a multimedia networking technology optimized for vehicles and similar applications. High-quality audio and video packet data for vehicle multimedia services can be transferred simultaneously, and a single transmission medium can be controlled in real time. The physical layer can be a plastic optical fiber or an electrically unshielded or shielded stranded wire, both of which meet vehicle environment requirements. At present there are MOST25 (25 Mbps bandwidth), MOST50 (50 Mbps bandwidth), and MOST150 (150 Mbps bandwidth); these are expected to be unified into MOST150, which has the largest bandwidth. The frame defined in MOST150 is 3,072 bits (384 bytes) long in total: 12 bytes are used for management purposes, and the remaining 372 bytes carry user data. The user data area is largely divided into a synchronous area (streaming) and an asynchronous area (packets), and the boundary between them can be changed freely in multiples of four bytes.
Fig. 2. MOST150 frame
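The MOST150 frame bookkeeping above is simple arithmetic, sketched below with the synchronous/asynchronous boundary expressed in 4-byte units. The class name and the example boundary value are illustrative, not from the paper:

```java
// MOST150 frame bookkeeping sketch: 384 bytes per frame, 12 bytes of
// management data, and a user-data split that moves in 4-byte steps.
public class Most150Frame {
    public static final int FRAME_BYTES = 384;   // 3,072 bits
    public static final int ADMIN_BYTES = 12;
    public static final int USER_BYTES  = FRAME_BYTES - ADMIN_BYTES; // 372

    // Bytes left for asynchronous (packet) data, given how many
    // 4-byte units are reserved for the synchronous (streaming) area.
    public static int asyncBytes(int syncUnitsOfFour) {
        return USER_BYTES - syncUnitsOfFour * 4;
    }
}
```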
3 Configuration of the Integrated Blackbox System
The integrated blackbox in this study consists of the transmitter and receiver for communication with the MOST network; the transmitter and receiver for communication with the OBD-II network; external memory for data storage (USB and SD memory); a GPS receiver for collecting time and vehicle-location information; and a notebook computer for operating the integrated blackbox. As shown in Fig. 3, the video data recorded while driving, the multimedia data from the MOST network, the driving-condition data from the OBD-II network, the GPS reception time, and the vehicle-location data are stored in the notebook computer's main memory and then written to the external memory every five minutes [2].

Implementation Automation Vehicle State Recorder System with In-Vehicle Networks
415
Fig. 3. Configuration of the integrated blackbox
4 Design of the Integrated Blackbox System
The core function of this blackbox system is timing synchronization between the networks while reading the data of the OBD-II and MOST networks. To ensure this core function, an algorithm was designed to synchronize the communication among the OBD-II network, the MOST network, the GPS, the CAM, and the PC. Fig. 4 shows the flowchart of the integrated blackbox. The blackbox creates four threads and conducts four tasks simultaneously; the GPS processor, which retrieves the global time used to write time information on each data item, serves as the main processor. The time and location data are received from the GPS. After the time information is retrieved, three further processors operate. First, the blackbox requests the driving data (vehicle speed, RPM, travel distance, tire pressure, etc.) according to the PID of the appropriate sensor (Table 1) by connecting to the OBD-II network via an OBD connector (OBD-Link). When each PID is requested from the vehicle, the appropriate sensor connected to the OBD-II network responds [6]. In the second processor, driving data such as sharp curves, abrupt decelerations, and school zones are received from the virtual navigation connected to the MOST network via the MOST PCI interface. The third processor stores the CAM frames, which are processed with OpenCV, a powerful image-processing library originally developed by Intel. The OpenCV library is used to store the images in main memory at a resolution of 320x240 and a frame rate of 30 fps using the MPEG-4 codec. A resolution of up to 640x480 is available, but 320x240 was chosen in consideration of the external memory capacity. The time information from the GPS is added to the image data before they are stored. The blackbox system checks the external memory before storing the data, removes the oldest image data when the memory is full, and stores the data every 10 seconds.
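The time-tagging idea that ties the four threads together can be sketched as follows: the GPS thread publishes the latest fix time, and each collector thread stamps its records with it. The class and method names are illustrative, not from the actual implementation:

```java
import java.util.concurrent.atomic.AtomicReference;

// Sketch of cross-thread time tagging: the GPS thread publishes the
// latest UTC fix, and the OBD-II/MOST/CAM threads prepend it to records.
public class TimeTagger {
    private final AtomicReference<String> gpsTime = new AtomicReference<>("000000");

    // Called by the GPS (main) thread whenever a fix arrives.
    public void onGpsFix(String utcTime) {
        gpsTime.set(utcTime);
    }

    // Called by any collector thread to stamp a record before storage.
    public String tag(String record) {
        return gpsTime.get() + "," + record;
    }
}
```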
Fig. 4. Flowchart of the integrated blackbox

Table 1. PID for OBD-II

Mode  PID  Returned Bytes  Description
01    00   4               PIDs supported
01    01   4               Monitor status since DTCs cleared
01    03   2               Fuel system status
01    05   1               Engine coolant temperature
01    0C   2               Engine RPM
01    0D   1               Vehicle speed
01    0F   1               Intake air temperature
01    51   1               Fuel type
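The one- or two-byte values returned for these PIDs must be scaled into physical units. The sketch below applies the standard OBD-II mode 01 conversion formulas for three of the PIDs in Table 1; the class name is illustrative:

```java
// Illustrative decoder for a few standard OBD-II mode 01 PIDs,
// where A is the first response byte and B the second.
public class ObdDecoder {
    // PID 0C: engine RPM = ((A * 256) + B) / 4
    public static double engineRpm(int a, int b) {
        return ((a * 256) + b) / 4.0;
    }

    // PID 0D: vehicle speed in km/h is the single byte A
    public static int vehicleSpeed(int a) {
        return a;
    }

    // PID 05: coolant temperature in degrees Celsius = A - 40
    public static int coolantTemp(int a) {
        return a - 40;
    }

    public static void main(String[] args) {
        // Response bytes 0x1A 0xF8 -> ((26 * 256) + 248) / 4 = 1726 RPM
        System.out.println(engineRpm(0x1A, 0xF8)); // prints 1726.0
    }
}
```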
The fourth processor retrieves the GPS data. The basic GPS protocol is the National Marine Electronics Association (NMEA) data format. Among the many NMEA sentence types, $GPRMC, the one most commonly used in GPS receivers, was used in this study. Table 2 shows the structure of $GPRMC.

Table 2. $GPRMC structure

Field  Name                Example
1      Sentence ID         $GPRMC
2      UTC time            92204.999
3      Status              A
4      Latitude            4250.5589
5      N/S indicator       S
6      Longitude           14718.5084
7      E/W indicator       E
8      Speed over ground   0
9      Course over ground  0
10     UTC date            211200
11     Magnetic variation  (empty)
12     Checksum            *25
13     Terminator          CR/LF
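Extracting the fields of a $GPRMC sentence is a matter of stripping the checksum suffix and splitting on commas, following the field positions in Table 2. A minimal sketch (class and method names are illustrative):

```java
// Minimal $GPRMC field extraction; indices follow Table 2 (0-based after split).
public class GprmcParser {
    public static String[] fields(String sentence) {
        // Drop the "*25"-style checksum suffix before splitting on commas
        int star = sentence.indexOf('*');
        String body = (star >= 0) ? sentence.substring(0, star) : sentence;
        return body.split(",", -1);
    }

    public static String utcTime(String[] f)  { return f[1]; }
    public static String status(String[] f)   { return f[2]; } // "A" = valid fix
    public static String latitude(String[] f) { return f[3] + f[4]; }
    public static String utcDate(String[] f)  { return f[9]; }

    public static void main(String[] args) {
        String[] f = fields("$GPRMC,92204.999,A,4250.5589,S,14718.5084,E,0,0,211200,,*25");
        System.out.println(utcTime(f) + " " + latitude(f)); // prints 92204.999 4250.5589S
    }
}
```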
After the GPS satellite data are received, the time information is added to them for thread synchronization before they are stored. The blackbox receives and stores the non-image data (the driving-condition, multimedia, and GPS data) at a rate of 1 fps. When the system starts operating, the GPS time information is additionally stored to synchronize the data across the three threads. The data stored in main memory are in turn written to the external memory every 10 seconds.
5 Implementation of the Integrated Blackbox System
The system development environment was Windows XP on a desktop computer, and a USB drive was used as the external memory device. The main window of the blackbox system program has three areas: the OBD-II, MOST, and GPS data reception area; the blackbox control area; and the CAM image display area (Fig. 5). Fig. 6 shows the window in which the OBD-II, MOST, and GPS data are output for each event. Fig. 7 shows the window in which the OBD-II, MOST, and CAM data are stored in the external memory. When the blackbox starts operating, the external images are output and stored in five-minute sections. When the external memory is full, the oldest image data are removed. If an accident happens or the Store button is pressed, the external (CAM) images are stored in a separate space (USB memory), with the time information used as the image file name. The stored data are analyzed via simulation so that users can easily understand them (Fig. 8).
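The oldest-first eviction can be sketched simply, assuming (as described above) that stored files are named by their GPS time, so that lexicographic order matches recording order. The class name is illustrative:

```java
import java.util.Arrays;

// Oldest-first eviction sketch: files are named by GPS timestamp,
// so the lexicographically smallest name is the oldest recording.
public class Evictor {
    public static String oldest(String[] fileNames) {
        String[] sorted = fileNames.clone();
        Arrays.sort(sorted);      // timestamp names sort chronologically
        return sorted[0];
    }
}
```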
Fig. 5. Car blackbox system implementation
Fig. 6. GPS, MOST, and OBD-II data outputted

Fig. 7. GPS, MOST, OBD-II, and CAM data stored in the external memory
Fig. 8. Simulation window
6 Conclusion
The car blackbox is mounted on vehicles to track the circumstances and cause of a car accident. It enables accurate determination of the causes of traffic accidents by providing critical clues to such causes, and legally protects drivers against unfair disadvantages. With the increasing demand for car blackboxes and the development of car electronics, blackbox technology is expected to continue to develop and to become part of vehicle standards within a few years. Current blackboxes store only image and voice data, so a driver can be disadvantaged if an accident occurs due to an internal fault in the ECU. In this study, to realize an advanced car blackbox and to reduce this potential disadvantage, a CAM camera was used to collect driving images, and a GPS module was used to collect the time and vehicle location. In addition, an OBD-II interface was used to check the current driving conditions, and the MOST network interface was used to monitor the multimedia device conditions, so that the cause of a car accident can be found accurately. Further studies will be conducted to transfer the data to an external server via an external network (3G or Wi-Fi) in case the blackbox is damaged in a car accident, and a better blackbox system will be realized as an actual embedded installation. Based on these studies, the proposed system will also be applied to incorrect accident data, to verify fault between victim and perpetrator against the existing blackbox and to prove the reliability of the proposed blackbox.

Acknowledgment. This research was supported by the Converging Research Center Program through the National Research Foundation of Korea (NRF), funded by the Ministry of Education, Human Resource Training Project for Regional Innovation (No. 2010CB012).
References

1. BIR: IT-combined Industry Trend and Development Strategy - Car
2. MOST, http://www.smsc.com
3. On-board diagnostics, http://ko.wikipedia.org/wiki
4. Lee, B.: OBD-II Exhaust Gas. Gyeongyeongsa (2005)
5. Ha, G., Lee, J., Heo, Y., Choi, S., Sin, M.: OBD DTC Generation Simulator Development. In: Korean Institute of Electrical Engineers 38th Summer Conference, pp. 1157–1158 (2007)
6. Opensource project that used OBD-II
Adapting Model Transformation Approach for Android Smartphone Application*

Woo Yeol Kim, Hyun Seung Son, Jae Seung Kim, and Robert Young Chul Kim

Dept. of CIC (Computer and Information Communication), Hongik University, Jochiwon, 339-701, Korea
{john,son,jskim,bob}@selab.hongik.ac.kr
Abstract. Current smart phone applications are developed subordinately to their platforms. In other words, applications are developed on each vendor's own platform, such as Cocoa for Apple, Android for Google, and Windows Mobile for Microsoft. This paper suggests adapting the whole process of MDD (Model Driven Development) to the development of heterogeneous smart phone applications. Our proposed MDD is composed of Model-to-Model Transformation, which separates the platform-independent from the platform-dependent parts and transforms the independent model into the dependent model through model transformation rules, and Model-to-Text Transformation, which generates code from each dependent model. A model, metamodel, and model transformation language are required for Model-to-Model Transformation, while a model, metamodel, and code template are needed for Model-to-Text Transformation. The UML model, UML metamodel, ATL model transformation language, and Acceleo code generation language are adapted to MDD in the smart phone development environment. This paper also presents a case of applying MDD to the SnakePlus game.

Keywords: Model Transformation, Android, Smartphone, ATL (ATLAS Transformation Language), Metamodel.
1 Introduction
A reuse system that maximizes the use of previously created resources must be established to achieve rapid software development. However, software reuse is difficult to achieve because mobile-embedded software is system-dependent and is developed centered on the source code [1]. Furthermore, as different cell phone manufacturers and suppliers provide various platforms, a method for solving this problem is required. *
This research was supported by the MKE(The Ministry of Knowledge Economy), Korea, under the ITRC(Information Technology Research Center) support program supervised by the NIPA(National IT Industry Promotion Agency)(NIPA-2011-(C1090-1131-0008)) and the Ministry of Education, Science Technology (MEST) and the IT R&D Program of MKE/KEIT [10035708, “The Development of CPS(Cyber-Physical Systems) Core Technologies for High Confidential Autonomic Control Software”].
T.-h. Kim et al. (Eds.): ACN 2011, CCIS 199, pp. 421–429, 2011. © Springer-Verlag Berlin Heidelberg 2011
422
W. Yeol Kim et al.
MDD (Model Driven Development) is a mechanism that designs a platform-independent metamodel and then transforms it into the necessary technical models, from which code is formed automatically. When the application program is required in another environment, the model can be selected to regenerate the code; no revision of the application program itself is required. This method increases code productivity through model reuse [2-4]. Thus, one model can be automatically transformed into heterogeneous models by applying the model transformation technique in the smart phone development environment. This paper applies MDD to the Android platform to develop heterogeneous smart phone applications. To apply it to the Android platform, the application structure is analyzed and classified into platform-independent and platform-dependent characteristics. MDD is classified into Model-to-Model Transformation and Model-to-Text Transformation [5]. Model-to-Model Transformation uses ATL (ATLAS Transformation Language) [6], while Acceleo [7] is used for Model-to-Text Transformation. The entire process, from classification of independent/dependent models to code generation, is executed automatically using tools. As an example, MDD is applied to the development of the game SnakePlus. The applicability of the model transformation technique in the smart phone environment is verified through this case, which also suggests the possibility of applying the technique to various other platforms, such as iPhone and Windows Mobile. This paper is organized as follows. Chapter 2 explains related studies, including the basic concept of MDD, model transformation methods, and ATL. Chapter 3 presents the MDD-based development method for Android applications. Chapter 4 describes the Android application in the development process. Lastly, Chapter 5 presents the conclusion and future work.
2 Related Work
MDD is classified into Model-to-Model Transformation and Model-to-Text Transformation. Model-to-Model Transformation transforms an input source model into a target model. Model transformation can be performed from model to model, model to code, and code to model, and it can be applied to various kinds of models, such as UML, control flow diagrams, and data flow diagrams. Although creating a metamodel is not required for UML, since the UML metamodel is already provided, MOF (Meta Object Facility) [8] is used to design metamodels for models that lack one.
Fig. 1. Basic principle of model transformation [9]
The basic principle of model transformation is presented in Figure 1. To execute a model transformation, the source model, source metamodel, target metamodel, and transformation must be defined; the model transformation engine uses these to produce the target model. The language that defines the transformation rules is an important factor in model transformation. UMT (UML Model Transformation) [10], MTL (Model Transformation Language) [11], QVT (Query/View/Transformation) [12], and ATL (ATLAS Transformation Language) [13,14] are such languages. UMT uses XML, XMI [15], and XSLT: XMI is the input to the model transformation, and since XMI is fundamentally XML, the transformation is performed by XSLT, the transformation language of XML. MTL can transform source models written in a DSL (Domain Specific Language) and, like a compiler, uses the concept of a Java virtual machine to transform metamodels written in a metamodel language. As the model transformation language selected as the OMG standard, QVT has the advantage that it can precisely express specific transformation points, such as with When and Where clauses, and can also define and reuse patterns. Like MTL, ATL executes model transformation on the MDA framework using a transformation definition language and a transformation engine. Transformation rules are defined more precisely and are easier to understand in ATL than in MTL, and ATL can also define complex transformation rules. Furthermore, ATL provides reuse and composition of transformation models. Among the diverse transformation languages, this paper selected ATL for these advantages. Model-to-Text Transformation is the method of generating code from a model. It is classified into two types: visitor-based and template-based [9]. In the visitor-based code generation method, a visitor mechanism traverses the internal representation of the model and writes text to a text stream.
An example of this method is Jamda, an object-oriented framework that provides a set of classes representing UML models [16]. The template-based method is closer in form to the code to be generated than the visitor mechanism, which makes it well suited to the iterative development of templates. A template consists of the target text together with embedded code pieces that access the model, even fragments that would be incorrect in syntax or meaning on their own. Acceleo is an example of this method, and template-based Acceleo is used for code generation in this paper.
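Template-based generation can be illustrated in miniature: names taken from the model are substituted into a text template, which is what Acceleo does at scale. The toy sketch below is not Acceleo syntax, and the class name and placeholder markers are illustrative:

```java
// Toy illustration of template-based code generation: model element
// names are spliced into a text template containing placeholders.
public class TemplateGen {
    static final String TEMPLATE =
        "public class %NAME% extends %PARENT% {\n}\n";

    public static String generate(String className, String parentClass) {
        return TEMPLATE
            .replace("%NAME%", className)
            .replace("%PARENT%", parentClass);
    }
}
```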
3 Model Transformation for Smartphone Application
MDD can be classified into Model-to-Model and Model-to-Text transformation. Although the final goal of the model transformation method is to generate code from a model, the intermediate model can be used to achieve greater modularity and maintainability, optimization and tuning, or defect elimination. Furthermore, Model-to-Model transformation is helpful in computing and unifying different views of a system module, while Model-to-Text is focused on transforming the relevant model into code. Figure 2 presents the outline diagram of the MDD-based development method used in this paper. The source and target metamodels, the TIM (Target Independent Model), and the transformation spec must be supplied to execute Model-to-Model transformation, and the ATL tool is required for execution. Through this input process, the TSM (Target Specific Model) is automatically produced from the TIM. The TIM is the model that is independent of the development target, while the TSM refers to the model that depends on the target. A model, metamodel, and model transformation language are required for Model-to-Model Transformation; the UML model, UML metamodel, and ATL model transformation language were selected. The TSM and a transformation spec must be supplied to execute Model-to-Text transformation, and the Acceleo tool is required for execution. The TSM code is generated through this input process. A model, metamodel, and code template are required for Model-to-Text Transformation. As in Model-to-Model Transformation, UML was used for the model and metamodel, while Acceleo was used for the code template, to apply MDD in the smart phone development environment.
Fig. 2. Outline of model transformation method
4 Case Study
SnakePlus is a game application created based on the existing Snake [17] game. In the existing Snake game, the player moves the snake north, east, west, and south to eat apples. In SnakePlus, the snake can move in a total of 8 directions, and various items have been added to make the previously simple game more entertaining. This paper explains the model transformation process based on SnakePlus.

4.1 TIM (Target Independent Model)
In this example, one target-independent model (TIM) is used to generate the TSM and code for Android. SnakePlus is composed of a total of 7 packages: Main, StageFactory, StageComponent, Background, Item, Wall, and Snake. Main initializes SnakePlus and draws the image on the screen; at the start of SnakePlus, the stage is formed and the image information of the objects is loaded into memory. StageFactory manages the stage and objects of SnakePlus: objects are loaded into memory and their location information is managed. StageComponent manages the common information of the objects within SnakePlus; the functions for reading and writing object coordinates and object image information are connected to ComponentManager. BackGround manages the background information within the stage and handles input and output of background images. Item manages the coordinates and image information of the SnakePlus items, which include the obstacle, apple, poison, and mouse, and handles input and output of item coordinates and image information. Wall manages the wall information within the stage and handles input and output of wall images and coordinates. Snake manages the information of the snake within the stage and handles input and output of the snake's location and direction as well as its image information. The Main package is composed of the StartClass, SnakePlusView, SnakePlusMain, ComponentManager, and RefreshHandler classes, and all information is initialized when SnakePlus starts. The Main package is as presented in Figure 3.
Fig. 3. Class diagram of TIM
4.2 Model-to-Model Transformation (TSM)
The Android version of SnakePlus is composed of the same 7 packages: Main, StageFactory, StageComponent, BackGround, Item, Wall, and Snake. As in the TIM, the Main package is the starting point of the Android TSM. However, the class composition within the package changes due to characteristics of the Android platform. Android applications are written in the XML and Java languages, and their structure can be largely divided into layouts and activities. A layout is an XML file in which the buttons, text boxes, and other widgets displayed directly on the screen are written in XML code; it is thus the file describing the controls that appear on the smart phone screen.
An activity is a class file in which code such as the event listeners for a layout is written. The activity is the starting point of an Android application; after the activity starts, it immediately loads the layout. Because of this, the StartClass of the TIM is transformed into the SnakePlusActivity class, which holds the activity properties, and the class composition method also changes. The composition of the classes other than the SnakePlusActivity class is identical to the TIM. The Main package is composed of the SnakePlusActivity, SnakePlusView, SnakePlusMain, ComponentManager, and RefreshHandler classes. Compared with the structure of the TIM, the differences are that the StartClass class is changed to the SnakePlusActivity class, the image-saving form of the ComponentManager class changes, and the methods of the SnakePlusMain class are changed to internal Android APIs. Figure 4 presents a detailed diagram of these differences.
Fig. 4. Class diagram of TSM for Android
4.3 Model-to-Text Transformation (TDC)
A code generation template is required to generate the Android code. This code template uses the Java code generation template defined in the previous example. As observed in the target-dependent model, a total of 19 classes exist in the generated code. Among these classes, the StartClass of the target-independent model becomes SnakePlusActivity, while the draw method of the SnakePlusMain class is changed to onDraw and touchEvent is changed to onTouchEvent. Figure 5 shows the generated SnakePlusMain.
public class SnakePlusMain extends SnakePlusView {
    private RefreshHandler mRedrawHandler = new RefreshHandler(this);
    private StageFactoryPlant stage;
    private TextView mStatusText;
    private int gameSpeed;
    private long mLastMove;
    private int mMode;

    public final static int NONE = 0;
    public final static int READY = 1;
    public final static int RUNNING = 2;
    public final static int LOSE = 3;
    public final static int SPEED_DOWN = 4;
    public final static int SPEED_UP = 5;

    ComponentImageManager mImageManager;

    public SnakePlusMain(Context context, AttributeSet attrs) { }
    public SnakePlusMain(Context context, AttributeSet attrs, int defStyle) { }

    private void initSnakeMain() { }
    public boolean onTouchEvent(MotionEvent event) { }
    public void setTextView(TextView newView) { }
    public int getGameSpeed() { }
    public void setGameSpeed(int num) { }

    public void update() {
        invalidate();
    }

    public void onDraw(Canvas canvas) {
        super.onDraw(canvas);
    }
}

Fig. 5. Class code of formed SnakePlusMain
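The TIM-to-TSM renamings described above can be summarized as a simple lookup table. This is only an illustration of the mapping; in the paper, the actual rules are written in ATL:

```java
import java.util.Map;

// Miniature of the TIM-to-TSM renaming rules for the Android target,
// expressed as a lookup table (illustrative; the real rules are ATL).
public class RenameRules {
    static final Map<String, String> ANDROID = Map.of(
        "StartClass", "SnakePlusActivity",
        "draw", "onDraw",
        "touchEvent", "onTouchEvent");

    // Names without a rule pass through unchanged.
    public static String map(String timName) {
        return ANDROID.getOrDefault(timName, timName);
    }
}
```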
4.4 Results of Execution
To complete the program, additional code for each function must be written after generating the code of the target-dependent model. The results presented in Figure 6 can be checked when this program is compiled and executed.
The development of the final application could be achieved through the target-independent model, the target-dependent model, and the code generation automation process. Although perfect code generation cannot be achieved currently, approximately 45% of the Windows Mobile application code could be generated from UML. Faster execution of the design and realization process can be achieved by increasing the code generation rate.
Fig. 6. Results of execution
5 Conclusion
This paper applied MDD to Android platform development to achieve heterogeneous smart phone development. For the MDD application, a distinction was made between Model-to-Model Transformation and Model-to-Text Transformation, and automation tools were used throughout development. The application structure of the Android platform was analyzed to execute Model-to-Model Transformation. Using the results of this analysis, independent and dependent characteristics were classified, and model transformation rules were created based on the independent/dependent models. ATL was used to describe the model transformation rules, and the model transformation was executed in the Eclipse environment. As a result, it was verified that the class diagram was generated according to the model transformation rules. The Java used on Android was analyzed to execute Model-to-Text Transformation; a code template was written in Acceleo grammar based on the results of this analysis, and the transformation was executed in the Eclipse environment. As a result, it was verified that code was generated according to the transformation rules. The possibility of transforming the target-independent model to other heterogeneous platforms was demonstrated by the application to SnakePlus in the Android environment. Transformation to heterogeneous platforms such as iPhone and Windows Mobile can be achieved by using the platform-independent model of this paper and redefining the model transformation rules. In future research, the iPhone and Windows Mobile platforms will be analyzed to apply the proposed method to each platform.
References

1. Jantsch, A.: Modeling Embedded Systems and SoCs. Morgan Kaufmann, San Francisco (2004)
2. Kim, W.Y., Son, H.S., Park, Y.B., Park, B.H., Carlson, C.R., Kim, R.Y.C.: The Automatic MDA (Model Driven Architecture) Transformations for Heterogeneous Embedded Systems. In: Proceedings of the 2008 International Conference on Software Engineering Research and Practice, vol. 2, pp. 409–414 (July 2008)
3. Kim, W.Y., Kim, R.Y.C.: A Study on Modeling Heterogeneous Embedded S/W Components based on Model Driven Architecture with Extended xUML. The KIPS Transactions 14-D(1) (February 2007)
4. Kim, W.Y., Son, H.S., Kim, R.Y.C., Carlson, C.R.: MDD based CASE Tool for Modeling Heterogeneous Multi-Jointed Robots. In: Proceedings of the 2009 WRI World Congress on Computer Science and Information Engineering, vol. 7, pp. 775–779 (April 2009)
5. Czarnecki, K., Helsen, S.: Classification of Model Transformation Approaches. In: OOPSLA 2003 Workshop on Generative Techniques in the Context of Model-Driven Architecture (2003)
6. Wikipedia: ATLAS Transformation Language, http://en.wikipedia.org/wiki/ATLAS_Transformation_Language
7. Obeo: Acceleo User Guide, http://www.acceleo.org/
8. OMG: Meta Object Facility Specification. In: OMG Unified Modeling Language Specification, Version 2.0 (January 2006)
9. Czarnecki, K., Helsen, S.: Feature-Based Survey of Model Transformation Approaches. IBM Systems Journal 45(3), 621–664 (2006)
10. Grønmo, R., Oldevik, J.: An Empirical Study of the UML Model Transformation Tool (UMT). In: The First International Conference on Interoperability of Enterprise Software and Applications (INTEROP-ESA), Geneva, Switzerland (February 2005)
11. Vojtisek, D., Jézéquel, J.-M.: MTL and Umlaut NG: Engine and Framework for Model Transformation, http://www.ercim.org/publication/Ercim_News/enw58/vojtisek.html
12. OMG: Documents associated with Meta Object Facility (MOF) 2.0 Query/View/Transformation, Version 1.0 (April 2008)
13. Bézivin, J., Dupé, G., Jouault, F., Pitette, G., Rougui, J.E.: First Experiments with the ATL Model Transformation Language: Transforming XSLT into XQuery. In: Proceedings of the Workshop on Generative Techniques in the Context of Model Driven Architecture, Anaheim, CA (2003)
14. Jouault, F., Kurtev, I.: Transforming Models with ATL. In: Proceedings of the Model Transformations in Practice Workshop (MTIP), MoDELS Conference, Montego Bay, Jamaica (2005)
15. OMG: XML Metadata Interchange Specification. In: OMG Unified Modeling Language Specification, Version 2.1.1 (December 2007)
16. Jamda: The Java Model Driven Architecture 0.2 (2003), http://sourceforge.net/projects/jamda/
17. Snake, Technical Resources, http://developer.android.com/resources/samples/Snake/
Implementation of a Volume Controller for Considering Hearing Loss in Bluetooth Headset

Hyuntae Kim1, Daehyun Ryu2, and Jangsik Park3

1 Department of Multimedia Engineering, Dongeui University, Gaya-dong, San 24, Busanjin-ku, Busan, 614-714, Korea
[email protected]
2 Faculty of Information Technology, Hansei University, Dangjung-dong, 604-5, Kunpo city, Kyunggi Province, 435-742, Korea
[email protected]
3 Department of Electronics Engineering, Kyungsung University, Daeyeon3-dong, 110-1, Nam-gu, Busan, 608-736, Korea
[email protected]
Abstract. Today, many young people suffer from noise-induced hearing loss caused by wearable hearing devices such as Bluetooth headsets. This paper presents a hearing-loss-reducing, more natural volume control system for Bluetooth headsets that considers individual hearing characteristics. Experimental results using the CSR Bluetooth headset example design board (DEV-PC-1645, which includes the Kalimba DSP) show that individuals can perceive sound without inconvenience at lower sound intensities in their more sensitive frequency bands. As a result, hearing loss may be prevented by reducing excessive sound energy in each frequency band.

Keywords: Personal Intensity Control, Minimum Threshold of Hearing, Bluetooth Headset, Kalimba-DSP.
1 Introduction
Noise-induced hearing loss has increased with the spread of wearable hearing devices. Personal audio devices produce a maximum Sound Pressure Level (SPL) of 78 to 136 dB. Listening to music on earphones at a volume level of 100 dB is loud enough to cause permanent damage after just 15 minutes per day. Recently, many techniques have been developed to prevent noise-induced hearing loss [1]. Among them, KLISTEN Inc.'s hearing damage protection technique has drawn attention [2]. However, because that technique varies the volume at each frequency linearly up and down without considering the threshold of pain, its effectiveness against noise-induced hearing loss is degraded. The proposed hearing-loss-reducing, more natural volume control system controls the sound volume considering individual hearing characteristics and thresholds of feeling, thereby reducing hearing loss from the audio device. This paper presents the corresponding volume control algorithms for Bluetooth headsets.
T.-h. Kim et al. (Eds.): ACN 2011, CCIS 199, pp. 430–438, 2011. © Springer-Verlag Berlin Heidelberg 2011
2 The Need for Hearing Loss Protection
Fig. 1 shows the permitted noise exposure time and magnitude at each frequency, taking 120 dB SPL as the threshold of pain. If pure tones were supplied at the same sound pressure level at every frequency, the exposure times allowed to avoid hearing loss would differ. For example, a 250 Hz pure tone at 120 dB is allowed for 30 minutes, whereas at 3 kHz only 90 dB is permitted for the same 30 minutes. The equal-loudness curve in Fig. 1 shows a frequency-characteristic shape similar to the threshold-in-quiet level of the standard equal-loudness curves [3].
Fig. 1. Relation between allowed noise exposure time and loudness level in each frequency band
Fig. 2. Threshold in quiet for several ages and genders
432
H. Kim, D. Ryu, and J. Park
Accordingly, the proposed algorithm first measures each person's threshold in quiet, so that sound appropriate to individually different hearing characteristics can be supplied. It then calculates the distance between the threshold of pain and the threshold in quiet, from which the one-step increment or decrement of the sound level is computed. Thresholds in quiet for different genders and ages are shown in Fig. 2, and the proposed volume control concept is shown in Fig. 3.
Fig. 3. The proposed volume control concept
3  Implementation of the Proposed System

3.1  Schematic Diagram of the Hearing Loss Prevention Algorithm
For the implementation, the BlueCore5-Multimedia chip on the CSR Bluetooth headset example design board (DEV-PC-1645) was used [4-7]. BlueCore5-Multimedia includes a Kalimba DSP core, an open-platform DSP that can perform signal processing functions on over-air or audio CODEC data to enhance audio applications such as echo and noise cancellation, MP3 encoding and decoding, and voice recognition. The schematic diagram of the proposed algorithm implemented on the Kalimba DSP is shown in Fig. 4. In Fig. 5, mode 0 is a loop-back mode, in which the audio input signal is equalizer-filtered in the frequency domain on the Kalimba DSP; the volume is controlled by reconstructing the equalizer in the frequency domain using button 2 (volume up) and button 3 (volume down). Mode 1 is a tone-generation mode, which measures the user's threshold in quiet using pure tones with sequentially increasing sound pressure levels. The user selects the threshold in quiet at each selected frequency using button 2. After this measurement is completed, a new equalizer is set up.

3.2  Measurement of the Threshold in Quiet
The threshold in quiet is measured in mode 1 (tone-generation mode). The controller generates pure tones with sequentially increasing sound pressure levels at the selected frequencies (250 Hz, 500 Hz, 1000 Hz, 2000 Hz, 4000 Hz). The user listens to the pure tones at each frequency and selects the threshold in quiet with button 2 for self-tuning. After self-tuning is completed, the personal threshold-in-quiet data are saved to the memory unit and used to set up the new equalizer.
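To make the self-tuning procedure concrete, here is a sketch of the ascending-level measurement in Python. The `user_hears` callback stands in for the button-2 press, and the start level and 2 dB step are illustrative assumptions — the paper specifies only the probe frequencies and the button-based selection.

```python
import numpy as np

FREQS_HZ = [250, 500, 1000, 2000, 4000]   # probe frequencies from the paper
FS = 44100                                 # tone-generation mode sample rate

def pure_tone(freq_hz, level_db, duration_s=0.5):
    """Generate a pure tone whose amplitude encodes a relative level in dB."""
    t = np.arange(int(FS * duration_s)) / FS
    amplitude = 10.0 ** (level_db / 20.0)  # relative scale, not calibrated SPL
    return amplitude * np.sin(2 * np.pi * freq_hz * t)

def measure_thresholds(user_hears, start_db=-80.0, step_db=2.0, max_db=0.0):
    """Ascending-level sweep: raise each tone until the listener responds."""
    thresholds = {}
    for f in FREQS_HZ:
        level = start_db
        while level <= max_db and not user_hears(f, level):
            level += step_db
        thresholds[f] = level          # first audible level = threshold in quiet
    return thresholds

# Example: a toy listener model that hears anything above -60 dB (relative)
thr = measure_thresholds(lambda f, lvl: lvl >= -60.0)
print(thr)
```

After the sweep, the per-frequency thresholds would be stored and fed into the equalizer setup described below.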
Fig. 4. Schematic diagram for the algorithm
The threshold of pain is the maximum loudness level for a person; it is used as the upper limit when controlling the sound volume.

3.3  The Process of the Equalizer
The equalizer process consists of three stages: frequency-sampling filter design first, equalizer filtering next, and volume control last. The proposed system can apply relatively different control at each selected frequency according to personal characteristics, whereas volume control in conventional audio devices acts linearly and equally over the whole frequency range. As a result, the proposed system may protect the user's hearing.

Let $h[n]$ be the impulse response of the equalizer to be designed, $H(e^{j\omega})$ its Fourier transform, and $H[k]$ the discrete Fourier transform of $h[n]$, i.e., the frequency samples of $H(e^{j\omega})$. The frequency-sampling filter design starts by setting $H_1[k]$ as in equation (1), using the EQ value $\Delta_{f_n}$ at each selected frequency for volume control. Index $k$ in the range 0~64 corresponds to digital frequency $0 \sim \pi$, and 65~127 to $\pi \sim 2\pi$. A setting example of $H_1[k]$ is shown in Fig. 5(a). The $H_1[k]$ values in $0 \sim \pi$ are filled in by linear interpolation between the samples, and the values in $\pi \sim 2\pi$ are obtained by folding and copying the left-half values. This completes $H_2[k]$, shown in Fig. 5(b).

$$
H_1[k] =
\begin{cases}
\Delta_{f_1}, & k = 3 \;(f = 250\,\mathrm{Hz})\\
\Delta_{f_2}, & k = 7 \;(f = 500\,\mathrm{Hz})\\
\Delta_{f_3}, & k = 15 \;(f = 1\,\mathrm{kHz})\\
\Delta_{f_4}, & k = 31 \;(f = 2\,\mathrm{kHz})\\
\Delta_{f_5}, & k = 64 \;(f = 4\,\mathrm{kHz})\\
0, & \text{otherwise}
\end{cases}
\quad 0 \le k \le 127 \qquad (1)
$$
Fig. 5. Equalizer design using the frequency-sampling method. (a) Volume-controlled values $\Delta_{f_n}$. (b) Linearly interpolated equalizer. (c) Inverse discrete Fourier transform of $H_2[k]$. (d) Circularly shifted and truncated impulse response of the equalizer.
The inverse discrete Fourier transform of $H_2[k]$ is $h_1[n]$, shown in Fig. 5(c). Since it has no delay element, it is desirable to assign linear phase. The proposed system uses a circular shift as in equation (2), instead of multiplying by an exponential, to obtain linear phase:

$$
h_2[n] =
\begin{cases}
h_1[((n-m))_N], & 0 \le n \le N-1\\
0, & \text{otherwise}
\end{cases}
\qquad (2)
$$

where $m = 64$ and $N = 128$. The values of $h_2[n]$ for $n < 32$ or $n > 96$ are discarded because they are negligible. The impulse response of the final equalizer therefore becomes equation (3), shown in Fig. 5(d):

$$
h[n] = h_2[n+32], \quad 0 \le n \le 64 \qquad (3)
$$
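Assuming the interpretation above (control bins 3, 7, 15, 31, 64; $N = 128$; shift $m = 64$; taps 32-96 kept), the frequency-sampling design of equations (1)-(3) can be sketched with NumPy. Function and variable names here are ours, not from the Kalimba implementation:

```python
import numpy as np

N, M = 128, 64                      # DFT length and circular-shift amount (paper values)
BIN_OF = {250: 3, 500: 7, 1000: 15, 2000: 31, 4000: 64}  # eq. (1)

def design_equalizer(eq_gains):
    """eq_gains maps each control frequency (Hz) to its linear gain Delta_f."""
    # H1[k]: gains placed at the five control bins, zero elsewhere (eq. (1))
    h1 = np.zeros(N)
    for f, g in eq_gains.items():
        h1[BIN_OF[f]] = g
    # H2[k]: linear interpolation over bins 0..64, then fold for symmetry
    k_set = sorted(BIN_OF.values())
    half = np.interp(np.arange(65), [0] + k_set, [0.0] + [h1[k] for k in k_set])
    h2_spec = np.concatenate([half, half[-2:0:-1]])   # bins 65..127 mirror 63..1
    # h1[n] = IDFT, then circular shift by m = 64 for (approximately) linear phase
    h_time = np.real(np.fft.ifft(h2_spec))
    h_shift = np.roll(h_time, M)                       # eq. (2)
    return h_shift[32:97]                              # keep the 65 taps of eq. (3)

h = design_equalizer({250: 1.0, 500: 1.0, 1000: 1.0, 2000: 1.0, 4000: 1.0})
print(len(h))  # 65
```

The symmetric fold keeps the spectrum real-valued, so the IDFT produces a real impulse response, matching the figure.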
Block convolution is used for real-time speech processing in digital systems. The total audio input is expressed as a sum of finite-length-$L$ blocks $x_r[n]$, as in equation (4):

$$
x[n] = \sum_{r=0}^{\infty} x_r[n - rL] \qquad (4)
$$

where

$$
x_r[n] =
\begin{cases}
x[n + rL], & 0 \le n \le L-1\\
0, & \text{otherwise}
\end{cases}
$$
The overlap-add method is used for real-time processing of the system output $y[n]$, as in equation (5):

$$
y[n] = x[n] * h[n] = \sum_{r=0}^{\infty} y_r[n - rL] \qquad (5)
$$

where $y_r[n] = x_r[n] * h[n]$. The discrete Fourier transform of $y_r[n]$ in equation (5) is equal to equation (6):

$$
Y_r[k] = X_r[k]\,H[k], \quad 0 \le k \le N-1 \qquad (6)
$$
Accounting for the phase as designed previously, equation (6) becomes equation (7):

$$
Y_r[k] \cong X_r[k]\,H_2[k]\,e^{-j(2\pi k/N)m} = Y_s[k]\,e^{-j(2\pi k/N)m}, \quad 0 \le k \le N-1 \qquad (7)
$$
$Y_s[k]$ is the output of the equalizer filtering in the proposed scheme; its inverse discrete Fourier transform in the time domain is given by equation (8):

$$
y_r[n] \cong
\begin{cases}
y_s[((n-m))_N], & 0 \le n \le N-1\\
0, & \text{otherwise}
\end{cases}
\qquad (8)
$$
If the threshold in quiet $Min_f$ is set at each frequency, the sound-intensity-level increment at that frequency is given by equation (9), and the level after each step by equation (10):

$$
\Delta_f = \frac{Max_f - Min_f}{N} \qquad (9)
$$

$$
L_f(n) = L_f(n-1) + \Delta_f \qquad (10)
$$

Here the threshold of pain $Max_f$ is set from the standard equal-loudness curve, and $L_f(n)$ is the $n$-th sound intensity level at frequency $f$. $N$ defaults to 10, dividing the range into a 10-step volume control.
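A small numeric illustration of equations (9)-(10). The $Max_f$ and $Min_f$ values below are made-up examples, not measured data from the paper:

```python
# Per-band volume steps from eqs. (9)-(10): the span between the measured
# threshold in quiet (Min_f) and the threshold in pain (Max_f) is split into
# N = 10 equal increments, so each band moves by its own Delta_f per press.
N_STEPS = 10
max_spl = {250: 120, 500: 118, 1000: 115, 2000: 112, 4000: 110}  # illustrative Max_f
min_spl = {250: 25, 500: 18, 1000: 10, 2000: 12, 4000: 15}       # illustrative Min_f

delta = {f: (max_spl[f] - min_spl[f]) / N_STEPS for f in max_spl}  # eq. (9)

def level_after(f, presses):
    """L_f(n) after n volume-up presses, clamped to the threshold in pain."""
    return min(min_spl[f] + presses * delta[f], max_spl[f])        # eq. (10)

print(delta[250], level_after(250, 3))  # 9.5 53.5
```

Because each band divides its own span, a sensitive band (small $Max_f - Min_f$) takes smaller steps than an insensitive one, which is the core of the proposed control.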
4  Experimental Results
BlueCore5-Multimedia embedded on CSR Bluetooth Headset example design board (DEV-PC-1645) was used for implementation of hearing loss prevention algorithms. It is shown in Fig. 6.
The first test is equalizer filtering in the frequency domain after FFT processing. The spectral values of the equalizer can be changed with the buttons at the selected frequencies. The resulting change in the spectral distribution of a test input signal containing the five representative frequencies appears at the output of the equalizer filtering, as shown in Fig. 7.
Fig. 6. CSR Bluetooth Headset example design board (DEV-PC-1645)
The second test sets up individual hearing characteristics on the test board. It uses the loop-back mode and the tone-generation mode, switched by a button. The sampling frequency is set to 8 kHz in loop-back mode and 44.1 kHz in tone-generation mode. Pure tones at 250 Hz, 500 Hz, 1000 Hz, 2000 Hz, and 4000 Hz are generated exactly in tone-generation mode. After self-tuning is completed, the personal threshold-in-quiet data are saved to the memory unit and used to set up the new equalizer. When the volume up or down button is pressed, the equalizer values change accordingly, as shown in Fig. 8. The final test is a listening test comparing the proposed volume variation with traditional linear volume variation. To evaluate the performance of the proposed system, a Mean Opinion Score (MOS) test was performed with 10 listeners; a score above 4 means the listeners could not distinguish between the two methods. The listener group consisted of one professor, two graduate students, and seven undergraduate students, all of whose major or field of interest is audio signal processing. As the results in Table 1 show, the proposed method protects against hearing loss without degrading sound quality.

Table 1. Listening test results

Test signal                               Reference energy level   Test result
Sinusoidal signal with five frequencies   - 3 dB                   4.5
                                          - 6 dB                   4.1
                                          + 3 dB                   4.6
                                          + 6 dB                   4.1
A music signal                            - 3 dB                   4.7
                                          - 6 dB                   4.3
                                          + 3 dB                   4.8
                                          + 6 dB                   4.4
Fig. 7. Results of the equalizer filtering test. (a) Spectrum of the test input signal with five representative frequencies. (b) Spectrum of the output signal from equalizer filtering.
Fig. 8. An example of equalizer volume variation for a person
5  Conclusions
Experimental results using the CSR Bluetooth headset example design board (DEV-PC-1645) show that individuals can perceive sound without inconvenience at lower intensity in their more sensitive frequency bands. As a result, hearing loss may be prevented by reducing excessive sound energy in each frequency band.
References

1. Oppenheim, A.V., Schafer, R.W., Buck, J.R.: Discrete-Time Signal Processing, 2nd edn. Prentice Hall, Englewood Cliffs (1999)
2. Talbot-Smith, M.: Audio Engineer's Reference Book. Elsevier Science Ltd., Amsterdam (2001)
3. ISO 226:2003: Acoustics – Normal Equal-Loudness-Level Contours. International Organization for Standardization (2003)
4. Moore, B.C.J.: An Introduction to the Psychology of Hearing, 5th edn. Emerald Group Pub. Ltd. (2003)
5. Kalimba DSP Assembler User Guide, CSR (2006)
6. Mono Headset SDK User Guide, CSR (2007)
7. BlueCore5-Multimedia Kalimba DSP User Guide, CSR (2007)
An Extended Cloud Computing Architecture for Immediate Sharing of Avionic Contents

Doo-Hyun Kim1, Seunghwa Song1, Seung-Jung Shin3, and Neungsoo Park2,*

1 Department of Internet & Multimedia Engineering, Konkuk University, Seoul, Korea
2 Department of Computer Science & Engineering, Konkuk University, Seoul, Korea
3 Division of Information Technology, Hansei University, Korea
{Doohyun,neungsoo}@konkuk.ac.kr, [email protected], [email protected]
Abstract. The unmanned aircraft system, intended to give users efficient mobility, is considered one of the most valuable autonomous control systems. It is mainly used in defense, fire fighting, surveillance, geographical information collection, and avionic image services. While most unmanned aircraft are fixed-wing, we developed a video surveillance system based on a rotary-wing aircraft that can continuously approach and trace targets. To efficiently utilize and share the avionic video contents, we also developed a distributed computing environment focused on improving reliability and real-time synchronization. However, there has been a limit on the number of nodes that can share the same contents. Cloud computing is therefore considered one of the most efficient paradigms for contents sharing. In this paper, we suggest an extended cloud computing architecture and detail the platform structure for open and immediate sharing of avionic contents. Keywords: Avionic Video Surveillance, Social Computing, Cloud Computing, Immediate Contents Sharing.
1  Introduction
Avionic video data from unmanned aircraft are used in various fields such as defense, fire fighting, GIS (Geographical Information System), and avionic image services. Most avionic video services provide image contents captured by fixed-wing aircraft, which can cover huge areas at high altitude. Such services are efficient for providing geographical information about buildings, mountains, and rivers. However, the need for continuous tracing or surveillance of targets in a local area has become important. Because a rotary-wing aircraft can hover and change direction quickly, it is suitable for these requirements. Typical avionic video systems have been designed for a limited number of users, with the main research focus on guaranteeing reliability and real-time performance. However, to maximize the usability of avionic contents, a service-providing system for immediate sharing of contents has become important.

* Corresponding author.
T.-h. Kim et al. (Eds.): ACN 2011, CCIS 199, pp. 439–446, 2011. © Springer-Verlag Berlin Heidelberg 2011
440
D.-H. Kim et al.
To this end, we have researched ways of utilizing social media cloud computing, which has attracted attention as a new technology paradigm. In this paper, we explain the need for integrated avionic contents consisting of video data and additional avionic data, and we suggest a platform architecture to effectively distribute the avionic contents over a social media cloud computing environment. Previous research on avionic video surveillance systems and their characteristics is introduced in chapter 2. In chapter 3, we present the concept of avionic contents and suggest the architecture of a service provider for cloud computing.
Fig. 1. Avionic video surveillance system

Fig. 2. DualEye system for a rotary-wing unmanned aircraft

Fig. 3. Various modes of DualEye: (a) Target-Around mode, (b) Target-Watch mode, (c) Target-Tracking mode

2  Avionic Video Surveillance System Using an Unmanned Rotary-wing Aircraft

2.1  Architecture of the Avionic Video System
The unmanned aircraft system is an autonomous flight control system that rotates and translates about three axes. The rotary-wing aircraft, one of the modern aircraft types, flies
An Extended Cloud Computing Architecture for Immediate Sharing of Avionic Contents
441
using the force from its rotating wings. While the motion of a fixed-wing aircraft is mainly governed by inertia, a rotary-wing aircraft can freely change direction and hover. For example, by controlling the tilt of the rotor, the aircraft's position and rotation can be adjusted immediately. Although this mechanism requires a lot of energy and results in relatively slow movement, it allows exquisite control within a small area and efficient continuous surveillance and tracing of targets. One of the most important features of a surveillance and scouting aircraft is video processing. Our previous research on an unmanned avionic video surveillance system using a helicopter developed features for closely tracing targets, monitoring them, and recording video. The aircraft contains an integrated video processing system, named DualEye, which consists of two cameras and is optimized for surveillance with an unmanned rotary-wing aircraft. The autonomous aircraft has a high-performance flight control computer with various sensor devices, such as an AHRS (Attitude Heading Reference System) and GPS. Collected data such as video, attitude, location, and altitude from DualEye and the sensors are immediately transferred to the GCS (Ground Control System) over a wireless network. The GCS consists of three modules: HeliCommander (for monitoring the status and control of the aircraft), HeliNavi (for navigation and localization), and HeliVision (for video recording and sharing). Figure 1 illustrates the integrated avionic video surveillance system.

2.2  DualEye and the Video Sharing System
The DualEye is an integrated video recording system designed to be mounted on a rotary-wing aircraft. It features video encoding/decoding and a tilting camera, and is a highly optimized solution for video recording and target tracing by image processing. Figure 2 shows the actual DualEye system attached to the aerial vehicle for system testing. Using DualEye on a rotary-wing aircraft, several special video recording modes are available; they are explained below and illustrated in Figure 3.

• Target-Around mode: The target object may be steady while the operator wants to shoot it from different sides. The camera alone cannot see all sides of a steady target, but by interacting with the FCC (flight control computer) the camera can view the target object from all sides.

• Target-Watch mode: The target may move from one location to another. To fully track the target, the camera servos interact with the FCC. As shown in the left part of Figure 3(b), the camera cannot track the target when it moves out of view; DualEye and the flight control system then interact to move the aircraft so that the camera can keep taking pictures of the target.

• Target-Tracking mode: Both the helicopter and the target can move, but the camera must keep the target in the visible area at all times. As shown in Figure 3(c), when the target moves out of the camera's view, DualEye detects its direction and interacts with the flight control computer; the aircraft or the tilt angle of the camera is then controlled to trace the target.

Fig. 4. DualEye and HeliVision system
Avionic video data transferred to the GCS are shared across distributed nodes by the video sharing system, named HeliVision. Guest nodes form a group, and the video source is distributed by the master node. For high-quality video synchronization without delay, support from a distributed real-time platform is essential. This research on an integrated surveillance system with a video sharing platform has contributed to improving reliability and real-time performance and has maximized the usability of avionic video contents. Figure 4 illustrates the interaction between the DualEye and HeliVision systems.
3  Avionic Contents Sharing through a Cloud Computing Environment
As mentioned in the previous chapter, avionic video data collected for special purposes are highly valuable in various fields. Typically, avionic video data have been used as stored data and shared by a limited number of users. Although distributed computing improves the reliability and synchronization quality of shared video, the contents are accessible only to guest nodes. This limits the expansion of the information-sharing area and the usability of the contents.
Fig. 5. Concept of Social Media Cloud
For contents to be used at a high rate, they must spread wider and faster. One way to distribute media information effectively is social media cloud computing, one of the technologies receiving the most attention.

3.1  Concept of Cloud Computing
The basic social media cloud computing structure consists of several layers: application, SaaS (Software as a Service), PaaS (Platform as a Service), and IaaS (Infrastructure as a Service). To effectively distribute avionic video contents through a cloud computing environment, a social media application service platform for service compatibility is required. Figure 5 illustrates the structure of the social media cloud. Each layer is explained below.
• Application layer: This layer is a model that makes developing application services easy. It establishes an interactive relationship between service users and developers. Interaction in social media requires real-time interaction through various terminal devices; for example, some video media services support only limited interaction and control the network for QoS (Quality of Service). Such an application can guarantee high-quality video service.

• SaaS (Software as a Service) layer: This layer contains social media APIs, an application SDK, and a web-based cloud service development environment for cloud-platform-based social media. It is an open platform structure bridging the application layer and PaaS. The cloud service provider offers services that satisfy various user requirements on the open solution structure of SaaS, which is why an open library and service development environment supporting various social media functions should be provided. The service provider also supports interfaces such as social media APIs, web-based social development tool libraries, and an application development SDK that lets users access the service resources of other cloud systems.

• PaaS (Platform as a Service) layer: This layer is the core technique of the social media service platform. It manages the IaaS resources of cloud computing and forms a standardized platform. The main research topics are virtualization models for expandability and redistribution of services, and data analysis.

• IaaS (Infrastructure as a Service) layer: IaaS is a service layer supporting infrastructure resources; it virtualizes server, storage, and network resources. It lets the upper layers be tested on virtual infrastructure as if actual services were running; after testing on this test bed, verified services can be released.

3.2  Avionic Contents
The unmanned aircraft system is a highly advanced vehicle that completes various missions remotely. To control it, we need to watch the detailed and accurate status of the vehicle using high-performance sensor devices. Using devices such as an AHRS or GPS, we can be informed of the vehicle's attitude, altitude, and location. This avionic information can be used effectively in analyzing the target or target area. The avionic contents are described below.
• Location: Location is critical information because most missions are operated far from the user, who should check the vehicle's location frequently to ensure mission success. A GPS sensor is a helpful and widely used device for localization and navigation; it provides latitude and longitude in an international standard format.

• Video: Video is the most important and useful avionic content because it is the most typical and intuitive information for understanding the target area. The user can quickly grasp the status of the target from video data and take the next step toward mission success.

• Attitude and direction: Checking the attitude of the vehicle is important not only for stable control of the aircraft but also for noticing the environment of the target area. An aerial vehicle can easily lose control under external forces such as wind; instability can affect other contents such as video and even cause an air accident. The user may want to know the reason for damaged contents or a system fault in real time, and attitude information helps notice these problems. Furthermore, direction is essential for estimating a moving target's heading or destination. Attitude information, including heading, can be obtained from a gyro sensor, AHRS, or IMU.

• Altitude: Altitude is one of the most important pieces of information for an aerial vehicle. It tells how high the vehicle is above the ground, from which the distance between target and vehicle can be estimated. It can be obtained from devices such as GPS, an air pressure sensor, or a laser sensor.

• Velocity: Sometimes it is necessary to trace a moving target. Knowing the aircraft's own velocity helps estimate the speed of the moving target. This information is supplied by an accelerometer or GPS.
Beyond the above, other information such as brightness, air humidity, air density, camera tilt angle, and a real-time clock may be needed to know the exact status of the target area. The value of the avionic contents can be maximized with this metadata: more detailed analysis becomes possible and the quality of the contents improves, making the avionic contents highly valuable and useful. Table 1 lists the avionic contents and related devices.

Table 1. List of avionic contents and related devices

Contents               Device
Video                  Camera
Location               GPS, Altimeter
Air environment        Thermometer, hygrometer
Time                   Clock
Brightness             Illuminometer
View angle, direction  Gyro sensor, Magnetometer
Velocity               Accelerometer, IMU

3.3  Immediate Avionic Contents Sharing Service Architecture
To serve as an avionic contents service provider in a cloud computing environment, we suggest a layered software model of social media cloud computing, named IACSS (Immediate Avionic Contents Sharing Service). The IACSS is divided into two layers: the application layer and the SaaS layer. The application layer defines service software for receiving, processing, and retransmitting contents, including handling data and processing metadata. The main modules of the application layer are described below.

• Data Receive: This module receives source data from the avionic system. It must guarantee the quality and reliability of the contents without loss or delay of data.
• Local video decoder and player: This module supports local application development as a library. For example, the GCS system can use it to monitor video contents directly.

• Meta data process: This module classifies and processes metadata, making the additional metadata more efficient and easier to use. Various data expression formats can be supported.
Fig. 6. Architecture of IACSS for social media cloud computing environment
• Contents storing/share: Saving and distributing contents requires access to storage resources. This module provides ways to access the data storage through the Service Provider component in the SaaS layer. The data storage service can be a private solution or another type of cloud computing service.
The SaaS layer supports the social media APIs and the service provider (or distributor) modules for contents distribution. It contains open APIs and service components for efficient sharing of contents. The IACSS components in the SaaS layer are described below.

• Social Media APIs: These define the open APIs of the libraries in the IACSS provider platform so that contents can be distributed easily in the cloud computing environment. Other users or social media services can access avionic contents through them.

• Service Provider (Distributor): This is the core component of the IACSS provider. Other services in the cloud access it through the Social Media APIs. The Service Provider serves contents saved in the data storage, or immediately streams video contents from the application.
Figure 6 illustrates the architecture of the IACSS provider platform. Once the Data Receive component accepts source data from the aerial vehicle, the received data are processed by the Meta data process component. An application developer can use the local video component to consume avionic contents. Refined contents are stored in the data storage or shared through the Service Provider component in the SaaS layer, and other services or users in the cloud computing environment can access the avionic contents via the Social Media APIs.
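The data path just described can be sketched as a toy Python pipeline. All class, method, and field names here are illustrative stand-ins for the paper's modules, which are described architecturally rather than at code level:

```python
# Sketch of the IACSS data path: Data Receive -> Meta data process ->
# storing/share via the Service Provider. Names are ours, not the paper's API.
from dataclasses import dataclass, field

@dataclass
class AvionicContent:
    video_frame: bytes
    meta: dict          # location, attitude, altitude, velocity, ...

@dataclass
class IACSSProvider:
    storage: list = field(default_factory=list)

    def receive(self, frame: bytes, raw_meta: dict) -> AvionicContent:
        # Data Receive: accept source data from the aerial vehicle
        return AvionicContent(frame, self.process_meta(raw_meta))

    def process_meta(self, raw: dict) -> dict:
        # Meta data process: classify/normalize the sensor values
        return {k: v for k, v in raw.items() if v is not None}

    def store_and_share(self, content: AvionicContent) -> int:
        # Contents storing/share: persist, then expose via the Social Media APIs
        self.storage.append(content)
        return len(self.storage) - 1    # handle other services fetch by

    def fetch(self, handle: int) -> AvionicContent:
        # Service Provider: entry point reached through the Social Media APIs
        return self.storage[handle]

provider = IACSSProvider()
h = provider.store_and_share(
    provider.receive(b"frame-0", {"lat": 37.54, "alt": 120.0, "wind": None}))
print(provider.fetch(h).meta)  # {'lat': 37.54, 'alt': 120.0}
```

The point of the sketch is the separation of concerns: ingestion and metadata refinement live in the application layer, while sharing is delegated to a SaaS-layer component that other services reach through open APIs.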
4  Conclusions
We introduced a video surveillance system based on a rotary-wing aircraft that can record various types of video thanks to the aircraft's characteristics. For efficient application of these high-quality avionic contents, video sharing systems based on distributed computing have been researched; however, the number of users able to share the contents is limited. To overcome this, a social media cloud computing environment that can maximize the utilization of contents has attracted attention. In this paper, we discussed the need for integrated avionic contents consisting of video and additional avionic data, and suggested a layered platform architecture, named the IACSS provider, for distributing avionic contents over cloud computing. The IACSS provider is divided into two layers, the application layer and the SaaS layer. The application layer provides convenient components that accept and process data so that users can use them easily; its main components are Data Receive, Meta data process, the local decoder, and contents sharing. The SaaS layer consists of the Social Media APIs and Service Provider components, which share contents efficiently. The IACSS provider platform is expected to improve the expandability and sharing effectiveness of avionic contents. In the future, by implementing services based on IACSS, various avionic application services are expected to be released and to share valuable contents immediately.

Acknowledgement. This research was supported by the MKE (The Ministry of Knowledge Economy), Korea, under the ITRC (Information Technology Research Center) support program supervised by the NIPA (National IT Industry Promotion Agency) (NIPA-2011-C1090-1131-0003 and NIPA-2011-C1090-1101-0008).
Implementation of Switching Driving Module with ATmega16 Processor Based on Visible LED Communication System

Geun-Bin Hong, Tae-Su Jang, and Yong K. Kim

Wonkwang University, School of Electrical Information Engineering, Iksan 570-749, Korea
{ghdrmsqls,ts-1stepjang}@nate.com, [email protected]
http://ei.wonkwang.ac.kr/
Abstract. This work implemented a visible light communication system for LED lighting, building a light transmitter/receiver with an infrared communication module, and measured voltage changes over distances of 0.50 m to 2.50 m. The experiment showed that the farther the distance, the lower the light intensity, causing voltage and data loss, and that communication errors occurred depending on the difference in ambient light between day and night. Based on these results, the identified problems must be solved, and further research and improvements on data loss should be conducted. Keywords: Visible light communication, Illumination, dimming, LED, Optical Signal Processing.
1  Introduction
Fluorescent lights, currently the most commonly used, contain environmentally harmful materials, and as lighting fixtures they raise a number of environmental and energy-consumption issues. LED, by contrast, is more environment-friendly, saves up to 90% of power, and lasts longer than fluorescent lighting, so it has attracted attention as a field of the green industry. LED technology, starting from lighting fixtures, is expanding into LED TVs, LED notebooks, and mobile phones [1, 2, 3]. Technology that makes wireless communication available wherever there is LED lighting has recently been developed. In particular, wireless visible light communication is drawing attention, the typical example being LED-based Visible Light Communication (VLC). VLC is communication that uses the visible light range seen by the human eye; it is a new information and communication technology in which lightings visible to the human eye, such as incandescent and fluorescent lamps, are replaced by Light Emitting Diodes (LEDs), information is sent to each object, and the transmitted information is reused. In other words, VLC is a technology that transmits and exchanges information using light in the visible range (VIS: 380~780 nm) or near-infrared range (NIR: 700~2500 nm) emitted from

T.-h. Kim et al. (Eds.): ACN 2011, CCIS 199, pp. 447–452, 2011. © Springer-Verlag Berlin Heidelberg 2011
448
G.-B. Hong, T.-S. Jang, and Y.K. Kim
home lighting, outdoor advertising signs, traffic signals, the displays of various devices, and so on; it is a new optical wireless technology, distinct from conventional communications over optical fiber. Visible-band communication using LEDs has received renewed interest as LED performance has improved rapidly; in particular, it began to attract attention again as a visible light wireless communication technology combined with LED lighting, as LED-based lighting technology evolves and its deployment expands. [4, 5] This advantage of LEDs makes it possible to offer a variety of wireless multimedia communication services by adding communication functions to lighting fixtures. This paper therefore aims to determine the feasibility of this technology for next-generation home networks, based on a system that conveys information signals to each room through visible light communication (for example, LED lighting in the ceiling), using an infrared remote controller, a communication medium conveniently used in everyday life.
2
Main Body
The overall configuration of the visible light communication lighting control using infrared communication in this paper is shown in [Fig. 1]. The system operates as follows: an infrared remote controller transmits a data signal to the VLC module on which an infrared sensor is mounted, and the data signal received through the infrared sensor is sent on to the LEDs. The data signal transmitted by the LEDs is in turn sent to the visible light sensor connected to a PC, i.e., the VLC reception area. The transmission area used a KSM-603 IR sensor, a dedicated ATmega16 MCU, and 6 modules of 21 LEDs each. A TSL250RD visible light receiving sensor was used in the visible light reception area.
Fig. 1. VLC system configuration using LED lighting fixture
2.1
Configuration
The distance from the transmission area of the visible light communication to the reception area is at least 50 cm, and the 6 modules of 21 LEDs each send data signals to
Implementation of Switching Driving Module with ATmega16 Processor
449
the luminescent area; an infrared remote controller that can adjust the brightness, and an optical sensor that can receive the data signal, were used in the receiving area. The size of the board was about 11.5 cm × 8.5 cm, and it was divided into two parts: transmission and reception of the visible light communication.
2.2
Visible Light Communication Transmission
The block diagram of the visible light communication transmission area in this study is shown in [Fig. 2]. It consists of an IR sensor, an MCU, 6 high-intensity LED modules, etc. When a data signal is sent from the infrared remote control through the IR sensor, the data signal is forwarded to the LEDs toward the reception area of the VLC.
Fig. 2. VLC transmission access block diagram
2.3
Visible Light Communication Reception
The receiver of the visible light communication, as shown in [Fig. 3], consists of a visible light sensor and an MCU to receive data signals from the LEDs. The data signals received by the visible light sensor are transmitted to the PC through the MCU and an OP-AMP.
Fig. 3. VLC reception access block diagram
2.4
Transmission and Reception Area
[Fig. 4] shows the final system after the transmission and reception areas of the visible light communication for lighting were designed. The left one is the VLC transmitter on which the infrared sensor is mounted, and the right one is the VLC receiver.
Fig. 4. Manufactured VLC transmitter/receiver
3
Results and Implications
The experimental configuration for analyzing the performance of the visible light communication system for lighting is shown in [Fig. 5]. Voltage values were measured through the visible light receiving sensor using an oscilloscope.
Fig. 5. VLC experimental configuration
[Fig. 6] shows the voltage values measured through the visible light sensor as a function of distance, with the LEDs at maximum brightness. The voltage was almost constant from 50 cm to 2 m, but the
Fig. 6. Value of voltage change depending on communication distance
Fig. 7. Value of distance depending on the number of LED modules
voltage fell rapidly after 2.25 m. There was a loss of voltage: 11.56 V at 50 cm versus 11.10 V at 2.50 m, a difference of 0.46 V. In addition, beyond 2.25 m it was difficult to receive accurate data because of insufficient light intensity. [Fig. 7] shows the measured communication distance during day and night as a function of the number of LED modules. There was less data loss at night than during the daytime, because less natural light interferes, so the communication distance at night was considerably longer. The difference in distance between day and night was about 20 cm to 50 cm, depending on the number of modules. To improve the communication distance, the optical power emitted per unit area must be
increased, by narrowing the gap between the LEDs and increasing their number. LEDs actually used for lighting radiate more power than those in this experiment, so the transmission distance would increase even further.
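The qualitative trend in [Fig. 6], near-constant voltage at short range followed by a sharp drop, is consistent with simple free-space attenuation of the optical power. The following is a hypothetical sketch (all constants are assumed, not the authors' values):

```python
import math

# Hypothetical sketch (not the authors' code): a free-space inverse-square model
# of received optical power versus distance, illustrating why the sensor voltage
# in Fig. 6 stays nearly constant at short range and drops beyond about 2.25 m,
# once the received power falls under the receiver's usable threshold.

P_TX = 1.0     # normalized optical power per LED module (assumed)
N_MODULES = 6  # six modules of 21 LEDs each, as in the experiment

def received_power(distance_m, n_modules=N_MODULES, p_tx=P_TX):
    """Relative optical power at the sensor, inverse-square in distance."""
    return n_modules * p_tx / (4 * math.pi * distance_m ** 2)

for d in [0.5, 1.0, 1.5, 2.0, 2.25, 2.5]:
    print(f"{d:4.2f} m -> relative power {received_power(d):.3f}")
```

Doubling the number of modules doubles the received power at every distance in this model, which matches the observed increase of communication range with module count in [Fig. 7].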
4
Conclusions
This paper produced a visible light communication system for lighting and analyzed its performance through an experiment. Over distances of 0.50 m to 2.50 m we found a voltage loss of 0.46 V, and the longer the distance, the more difficult it was to receive accurate data because of insufficient light intensity. In addition, using 6 modules with remote-controlled light dimming, error-free transmission was possible up to 1.76 m during the daytime and up to 2.29 m at night. This study demonstrated an information delivery system based on visible light communication that exploits the wide coverage of LED lighting and performs illumination and communication at the same time, and it indicates the feasibility of this technology in the field of next-generation home networks. VLC still has problems, for example communication errors depending on conditions at both the light-emitting area and the visible light receiving sensor, but if research continues and communication is further improved, it is expected to spread dramatically in the future thanks to its natural fusion with lighting equipment. Acknowledgements. This paper is funded by the 2010 Small Business Administration's Industry-Academia Joint Technology Development Projects.
References 1. Lee, I.E., Sim, M.L., Kung, F.W.L.: Performance enhancement of outdoor visible-light communication system using selective combining receiver. IET Optoelectronics 3(1), 30–39 (2009) 2. Liu, X., Makino, H., Maeda, Y.: Basic Study on Indoor Location Estimation using Visible Light Communication Platform. In: IEEE EMBS Conference, pp. 2377–2380 (2008) 3. Zeng, L., O’Brien, D.C., Le-Minh, H., Lee, K., Jung, D., Oh, Y.: Improvement of Data Rate by Using Equalisation in an Indoor Visible Light Communication System. In: IEEE International Conference on Circuits and Systems for Communications 2008 (IEEE ICCSC 2008), pp. 170–173 (2008) (accepted for publication) 4. Little, T.D.C., Dib, P., Shah, K., Barraford, N., Gallagher, B.: Using LED Lighting for Ubiquitous Indoor Wireless Networking. In: IEEE International Conference, WIMOB 2008, pp. 373–378 (2008) 5. Hara, T., Iwasaki, S., Yendo, T., Fujii, T., Tanimoto, M.: A new receiving system of visible light communication for its. In: IEEE Intelligent Vehicles Symposium, pp. 474–479 (June 2007)
A Performance Evaluation on the Broadcasting Scheme for Mobile Ad-Hoc Networks Kwan-Woong Kim, Tae-Su Jang, Cheol-Soo Bae, and Yong K. Kim Wonkwang University, School of Electrical Information Engineering Iksan 570-749, Korea {watchbear,ts-1stepjang}@nate.com, [email protected], [email protected] http://ei.wonkwang.ac.kr/
Abstract. Broadcasting in multi-hop sensor networks is a basic operation that supports many applications, such as route discovery and address assignment. Broadcasting by flooding causes the so-called broadcast storm problem: redundancy, contention, and collision. A variety of broadcasting techniques for wireless sensor networks have been proposed to achieve better performance than simple flooding. The proposed broadcasting algorithms for wireless networks can be classified into six subcategories: flooding, probabilistic, counter-based, distance-based, location-based, and neighbor knowledge-based techniques. This study analyzes the simple flooding, probabilistic, counter-based, distance-based, and neighbor knowledge-based techniques, and compares the performance and efficiency of each through network simulation. Keywords: MANET, Broadcasting, Routing Protocol, AODV.
1
Introduction
A Wireless Sensor Network (WSN) is a multi-hop network that organizes itself and changes dynamically. [1, 2] In addition, all sensor nodes in a WSN can communicate with each other without going through an existing communications infrastructure (base stations, APs) or a central control unit. It can therefore be applied to temporarily configured networks, such as military battlefield networks and disaster recovery or rescue operations after earthquakes, hurricanes, or terrorist attacks. The simplest broadcasting technique is flooding. [3] In this technique, each wireless sensor node rebroadcasts every newly received packet to its neighbors, while already-received duplicates are discarded. Since every node rebroadcasts the first copy of a packet it receives, the number of rebroadcast packets is N-1, where N is the number of sensor nodes in the network. Because of these characteristics of flooding, if rebroadcasting at each node is not restricted, redundancy, contention, and collision may occur; this phenomenon is called the broadcast storm. [3] Redundancy occurs when one node receives identical packets from two or more surrounding nodes. Contention is a problem that
T.-h. Kim et al. (Eds.): ACN 2011, CCIS 199, pp. 453–459, 2011. © Springer-Verlag Berlin Heidelberg 2011
454
K.-W. Kim et al.
occurs when multiple nodes re-broadcast received packets at the same time, increasing the probability of packet collisions on the channel. To solve these problems, each node receiving a packet must decide whether to re-broadcast it. The key goal of broadcasting techniques for WSNs is to maximize reachability while minimizing redundancy. This paper describes broadcasting techniques for sensor networks proposed to improve on simple flooding.
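The flooding behavior described above can be sketched as a breadth-first traversal. This is an illustrative sketch (the topology and node labels are ours, not from the paper):

```python
from collections import deque

# Illustrative sketch (topology assumed): simple flooding as a breadth-first
# traversal. Every node rebroadcasts the first copy of a packet it receives,
# so on a connected network of N nodes the source transmits once and every
# other node rebroadcasts, giving N - 1 rebroadcast packets.

def flood(adjacency, source):
    """Count rebroadcast transmissions under simple flooding from `source`."""
    seen = {source}
    queue = deque([source])
    rebroadcasts = 0
    while queue:
        node = queue.popleft()
        if node != source:
            rebroadcasts += 1  # this node relays the first copy it received
        for neighbor in adjacency[node]:
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append(neighbor)
    return rebroadcasts

# Connected 5-node topology: N - 1 = 4 rebroadcasts.
adj = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2, 4], 4: [3]}
print(flood(adj, 0))  # -> 4
```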
2
Relevant Studies
Research into efficient broadcasting techniques for wireless networks continues actively. The most important goals of a broadcasting technique are to reduce packet transmission delay and to minimize the number of rebroadcast packets while increasing reachability. Transmitting a large number of rebroadcast packets may ensure a higher reachability, but since IEEE 802.11 broadcast transmissions lack reliability mechanisms, it reduces network throughput and causes collisions and transmission delays [4]. On the other hand, reducing the number of rebroadcast packets lowers the required bandwidth, increases network throughput, and reduces packet transmission delay. The main broadcasting techniques are the flooding, probabilistic, counter-based, distance-based, location-based, and neighbor knowledge-based techniques. [3, 5] Flooding, the simplest broadcasting scheme, is a method in which each node resends every new broadcast packet it receives, without exception. In the operation of flooding, each wireless node rebroadcasts the first copy of a broadcast packet it receives and stores it for future reference. As every wireless node rebroadcasts each packet exactly once, if N is the number of nodes on the network and the network is not partitioned, the total number of rebroadcast packets is N-1. In the probabilistic technique, each node rebroadcasts with a fixed probability; the optimal probability depends on the number of nodes on the network and their density. The counter-based scheme counts the number of identical packets received during the random assessment delay (RAD) of a wireless node and then decides whether to rebroadcast: if a node receives more identical packets than the threshold Cth, it does not rebroadcast the packet. As shown in [6, 7], with a threshold Cth of 3 or 4, reachability increases over simple flooding while the number of redundant packets is minimized.
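The counter-based decision just described can be sketched as follows (the duplicate counts in the example are assumed, not measured values):

```python
# Hedged sketch of the counter-based scheme described above: during its random
# assessment delay (RAD), a node counts duplicate copies of a packet overheard
# from neighbors, and rebroadcasts only if the count stays below the threshold
# Cth (3 or 4 according to [6, 7]).

C_TH = 3

def counter_based_decision(duplicates_heard_during_rad, c_th=C_TH):
    """Rebroadcast only if fewer than c_th duplicates were overheard."""
    return duplicates_heard_during_rad < c_th

# An edge node overhearing one duplicate relays the packet; a node in a dense
# cluster overhearing four duplicates stays silent, saving a redundant send.
print(counter_based_decision(1))  # True
print(counter_based_decision(4))  # False
```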
The distance-based scheme rebroadcasts packets only at receiving nodes located more than a certain distance from the transmitting node. The distance information can be obtained by measuring signal strength or by using auxiliary devices such as GPS (Global Positioning System) [8,9]. The neighbor knowledge-based technique uses accurate neighbor information to decide whether to rebroadcast packets. [9,10] It periodically exchanges HELLO messages among neighbors to collect neighbor information. One such technique uses HELLO messages to build a 1-hop neighbor list at each node. The current node's neighbor list is attached to every broadcast packet. When a packet is delivered to a neighbor of the current node, each neighbor node compares its own neighbor list with the list attached to the packet, and if all of its
neighbors are included in the list attached to the packet, it does not rebroadcast. [10] proposes the Scalable Broadcast Algorithm (SBA), which builds a 2-hop neighbor list at each node using HELLO messages. The neighbor knowledge-based technique can bring the number of rebroadcasts close to the optimum, but the HELLO messages themselves occupy channel bandwidth and are transmitted by broadcast, increasing the probability of collision, so overall performance may suffer.
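The 1-hop neighbor-list rule described above can be sketched as a set comparison (function and node names are ours, for illustration only):

```python
# Sketch of the 1-hop neighbor-knowledge rule: the sender attaches its neighbor
# list to the packet, and a receiver rebroadcasts only if it has at least one
# neighbor that the sender's broadcast did not already cover.

def should_rebroadcast(my_neighbors, sender, sender_neighbors):
    """True if this node can reach someone the sender's broadcast did not."""
    covered = set(sender_neighbors) | {sender}
    return not set(my_neighbors) <= covered

# All of B's neighbors were covered by sender A's broadcast -> suppress.
print(should_rebroadcast({"A", "C"}, "A", {"B", "C"}))       # False
# D can reach E, which A's broadcast could not -> rebroadcast.
print(should_rebroadcast({"A", "E"}, "A", {"B", "C", "D"}))  # True
```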
3
Performance Evaluation
The distance-based scheme can mitigate the broadcast storm, but packets may fail to reach some nodes due to the dynamic topology of the network. Its biggest disadvantage appears at low node density (sparse networks), where nodes near the edge of the transmission range may be unable to receive broadcast packets. The principle for optimizing rebroadcasting is as follows. [5,7] Fig. 1 shows an example of a simple wireless network. Node S is the original source of the broadcast packet, and the neighbor nodes N1-N5 receive the broadcast packet from node S simultaneously. As nodes N1, N2, and N3 within the gray circle are adjacent to each other, most of the coverage of the three nodes is redundant. If N1 rebroadcasts the packet while the two nodes N2 and N3 suppress their rebroadcasts, collisions can be reduced. On the other hand, as nodes N4 and N5 are far from N1, N4 and N5 must rebroadcast the packet. Therefore the optimal rebroadcasting is for the three nodes N1, N4, and N5 to rebroadcast the packet while the other nodes suppress rebroadcasting.
Fig. 1. A simple ad hoc network topology, for example
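The distance-based suppression rule evaluated in this section can be sketched as follows; the threshold value is an assumption for illustration, not a parameter reported in the paper:

```python
# Sketch of the distance-based rule: a receiver rebroadcasts only if it lies
# farther than Dth from the transmitter, since nearby receivers (like N1-N3 in
# Fig. 1) add mostly redundant coverage while far receivers (N4, N5) extend it.

D_TH = 0.4  # assumed threshold, as a fraction of the radio range

def distance_based_decision(distance_to_sender, d_th=D_TH):
    """Rebroadcast only if the receiver is farther than d_th from the sender."""
    return distance_to_sender > d_th

print(distance_based_decision(0.1))  # False: coverage mostly redundant
print(distance_based_decision(0.9))  # True: far receiver extends coverage
```

This also makes the sparse-network weakness visible: when all receivers happen to lie within Dth of the sender, no one rebroadcasts and the packet dies.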
3.1
Simulation Configuration
The Flooding, Probability-Based (PB), Counter-Based (CB), Distance-Based (DB), and Scalable Broadcast Algorithm (SBA) techniques are compared through simulation. For the simulation, the NS2
simulator was used, and the network model consisted of 40-120 wireless nodes in an area of 1.0 km × 1.0 km, to examine the effect of node density. The initial node locations were set randomly, and randomly chosen pairs of nodes generated CBR/UDP (Constant Bit Rate/User Datagram Protocol) traffic. The channel bandwidth was set to 2 Mbps. Each node used the SMAC protocol, and the wireless channel/wireless physical transmission model was used as the radio transmission model.
4
Simulation Result
Fig. 2 shows the reachability of the five techniques as the number of nodes varies. With 40 nodes, DB shows the lowest arrival rate, because nodes within the distance threshold suppress packet rebroadcasting; in such low-density networks it is difficult to deliver broadcast packets to all nodes. With 60 or 80 nodes, all techniques showed the best reachability. In networks with more than 100 nodes, reachability was reduced due to the high rate of collision.
Fig. 2. Comparison of arrival rate graph
As many nodes attempt to relay the same packet, the collision rate rises and, as shown in Fig. 4, many packets are lost. As a result, SBA showed a higher reachability than the other techniques under most experimental conditions. The reason for this performance improvement is that, in terms of
deciding whether to rebroadcast, SBA uses the information of neighboring nodes and rebroadcasts more efficiently than the other techniques. Fig. 3 shows the saved rebroadcast rate, and Fig. 4 illustrates packet loss due to collision. In the flooding technique, every node that receives a broadcast packet also rebroadcasts it, so, as shown in Fig. 3, the saved rebroadcast rate is 0% under all experimental conditions. In addition, since all nodes that received packets rebroadcast them, many collisions occur; thus, the more nodes there are, the more packets are lost compared to the other techniques, as shown in Fig. 4.
Fig. 3. Rebroadcast savings comparison graphs
Looking at the SBA curve in Fig. 3, it maintains a certain level of saved rebroadcasts (SRB) regardless of node density, because each node determines the optimal rebroadcasting based on the information and density of its surrounding neighbor nodes. Comparing with Fig. 2, we find that SBA achieves a high reachability even at the lowest density. Comparing Fig. 2 and Fig. 3, we find that SBA regulates the number of rebroadcast packets according to node density so as to maintain a certain level of reachability. Looking at the number of packets lost to collision in Fig. 4, the flooding technique shows the highest loss because it has the most retransmissions, while SBA improves over the other techniques as node density grows. Fig. 5 shows the packet delay of each algorithm. For all algorithms except SBA, the delay increases with the number of nodes. SBA, however, shows a nearly constant delay because messages are rebroadcast via near-shortest paths.
Fig. 4. Comparison of packet loss due to collision
Fig. 5. Comparison of packet delay graph
Table 1. SBA's processing and memory requirements
Node Density | Avg. HELLO size in SBA (bytes) | Required memory for neighbour info in SBA (bytes) | CPU operations (SBA) | CPU operations (CB)
40  | 59.90  | 350  | 195  | 6
60  | 76.26  | 727  | 528  | 6
80  | 92.63  | 1238 | 1083 | 6
100 | 108.99 | 1883 | 1930 | 6
120 | 125.35 | 2661 | 3228 | 6
As a neighbor information-based algorithm, SBA decides whether to rebroadcast based on accurate information, which yields nearly optimal performance. The other algorithms do not use neighbor information: they need no HELLO-like message to decide whether to rebroadcast and require no memory to store neighbor information. As Table 1 shows, to achieve its higher performance SBA consumes more processing power, memory, and channel bandwidth than the other algorithms.
5
Conclusions
This paper surveyed broadcasting techniques that can reduce the broadcast storm in wireless sensor networks and analyzed the advantages and disadvantages of each technique. It then compared, through computer simulation, the performance of the flooding, counter-based, distance-based, probability-based, and neighbor information-based (SBA) techniques. The simulation results show that SBA offers superior performance in terms of reachability and retransmission efficiency. However, SBA requires a higher processing load and more memory than the other algorithms, as well as additional message broadcasts to obtain neighbor information. These findings can serve as basic material for further study of new broadcasting techniques that require less processing load and memory.
References
1. Akyildiz, I.F., Su, W., Sankarasubramaniam, Y., Cayirci, E.: IEEE Wireless Communications (2002)
2. IETF MANET Working Group, http://www.ietf.org
3. Ni, S.Y., Tseng, Y.C., Chen, Y.S., Sheu, J.P.: ACM/IEEE International Conference on Mobile Computing and Networking (1999)
4. Haas, Z., Halpern, J., Li, L.: Proceedings of the IEEE INFOCOM (2002)
5. Williams, B., Camp, T.: Proceedings of the ACM International Symposium on Mobile Ad-Hoc Networking and Computing (2002)
6. LAN MAN Standards Committee of the IEEE CS, Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY) Specifications. IEEE Std 802.11-1997
7. Gandhi, R., Parthasarathy, S., Mishra, A.: Proceedings of the Fourth ACM International Symposium on Mobile Ad Hoc Networking and Computing (2003)
8. Tseng, Y., Ni, S., Shih, E.: IEEE Trans. Comput. (2003)
9. Peng, W., Lu, X.: Proceedings of the First ACM International Symposium on Mobile Ad-hoc Networking and Computing (2003)
10. Lim, H., Kim, C.: Proceedings of the ACM International Workshop on Modeling, Analysis and Simulation of Wireless and Mobile Systems (2000)
11. Anderson, J.B., Rappaport, T.S., Yoshida, S.: IEEE Communications Magazine (1995)
Carbon Nanotube as a New Coating Material for Developing Two Dimensional Speaker Systems Jeong-Jin Kang1 and Keehong Um2 1
Department of Information & Communication, DongSeoul University, Seongnam, Gyunggi, Korea 2 Department of Information Technology, Hansei University, Gunpo-city, Kyunggi-do, Korea [email protected], [email protected]
Abstract. A speaker is a two-port audio system that produces sound through electromechanical operation, transforming electric voltage signals into audible sound signals. Almost all speakers available are three-dimensional. These days, many mobile electronic devices such as mobile phones have become smaller and thinner; a problem with this miniaturization, however, is that the volume of the speakers has also decreased. In contrast to conventional three-dimensional speakers, there have been several attempts to develop two-dimensional speakers using piezoelectric materials, whose main parts have been piezoelectric films, conductive macromolecular coating materials, and electrodes. An improved design replaced the conductive macromolecular coating materials with indium tin oxide. We have invented a new type of two-dimensional flexible speaker by applying carbon nanotubes to replace indium tin oxide. The speaker system we have designed shows outstanding advantages, such as flexibility, a wide range of operating frequencies, low impedance values, and stability. Keywords: film speaker, audible sound signals, piezoelectric materials, reverse piezoelectric effect.
1 Introduction
The piezoelectric effect is the internal generation of an electrical charge resulting from an applied mechanical force; it is understood as the electromechanical interaction between the mechanical and the electrically charged states in crystalline materials. The piezoelectric effect is reversible: such materials also exhibit the reverse piezoelectric effect, the internal generation of a mechanical force resulting from an applied electric field. We have invented a new type of two-dimensional flexible speaker by utilizing the reverse piezoelectric effect of certain piezoelectric materials, with carbon nanotubes (CNT) as a new coating material for a two-dimensional film speaker system. We have substituted CNT for CMC and ITO to improve the operating performance of the system.
2 System Prototype The prototype of a two-dimensional film speaker we have designed is composed of the following three parts: T.-h. Kim et al. (Eds.): ACN 2011, CCIS 199, pp. 460–465, 2011. © Springer-Verlag Berlin Heidelberg 2011
(1) a piece of piezoelectric material, called piezo film,
(2) two pieces of coating material (CNT),
(3) and two electrodes.
Fig. 1. Five pieces assembled
The film we developed is shown in Fig.1. It is a set of five pieces put together. The side view of the prototype is shown in Fig.2.
Fig. 2. Side view of the film speaker
If we separate the prototype in Fig.1 or Fig.2, we have the five pieces disassembled as shown in Fig.3: from top to bottom, an electrode, a CNT layer, the piezo film (PVDF), a second CNT layer, and a second electrode, with input terminals on the two electrodes.

Fig. 3. Five pieces disassembled
3 Components of Film Speakers
3.1 Piezoelectric Material
Some piezoelectric materials exhibit the reverse piezoelectric effect, the internal generation of a mechanical force resulting from an applied electric field. The previous prototype speakers we designed were mainly composed of piezoelectric material (called piezo film) and coating material. The piezo film is made from polyvinylidene fluoride (PVDF), a highly non-reactive and pure thermoplastic fluoropolymer [1,2]. PVDF is a specialized plastic material in the fluoropolymer family.
3.2 Coating Materials
The conductive macromolecular compound (CMC) and indium tin oxide (ITO) are the coating materials developed previously. We will consider some limitations of CMC and ITO, and then replace them with CNT.
(1) Conductive macromolecular compound (CMC)
The first coating material available was the conductive macromolecular compound. The frequency characteristics of this film speaker are shown in Fig.4 for the frequency range of 250-1000 Hz, plotted as sound pressure level (SPL) vs. frequency. SPL, or sound level, is a measure of the effective sound pressure of a sound relative to a reference value, expressed in decibels (dB) above a standard reference level. The commonly used "zero" reference sound pressure in air is 20 µPa RMS, usually considered the threshold of human hearing (at 1 kHz). For CMC, curves for the surface impedances of 50 and 1000 Ω/□ are shown.
Fig. 4. Frequency characteristics of a film speaker using CMC for coating material in the frequency range of 250-1000 Hz
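The SPL definition used throughout these measurements can be made concrete with a small worked sketch; the pressure values below are illustrative examples, not measurements from the paper:

```python
import math

# Worked sketch of the SPL definition: SPL(dB) = 20 * log10(p_rms / p_ref),
# with the standard reference pressure p_ref = 20 uPa RMS.

P_REF = 20e-6  # Pa, approximate threshold of human hearing at 1 kHz

def spl_db(p_rms_pa, p_ref=P_REF):
    """Sound pressure level in dB relative to 20 uPa RMS."""
    return 20 * math.log10(p_rms_pa / p_ref)

print(round(spl_db(20e-6)))  # 0 dB at the reference pressure
print(round(spl_db(2e-2)))   # 60 dB for 0.02 Pa RMS
```

Each factor of 10 in sound pressure adds 20 dB, which is why the SPL curves in Fig. 4 compress large pressure differences into a modest vertical range.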
The CMC is faulty in several ways:
1. The conductive characteristics of the film speaker show some limitations in the audio frequency range (20 Hz-20 kHz).
2. The performance is not good enough in parts of the audio frequency range (20 Hz-20 kHz).
3. It is difficult to achieve a uniform thickness when it is coated on both surfaces of the piezo film.
4. The sound pressure level (SPL) of the film speaker is not high enough to generate high-quality sound.
5. The frequency response shows that the SPL characteristics are not useful below 400 Hz, as shown in Fig.4.
(2) Indium tin oxide (ITO)
The second coating material used in film speakers is indium tin oxide (ITO, or tin-doped indium oxide). ITO is one of the most widely used transparent conducting oxides thanks to its good properties, including electrical conductivity and the ease with which it can be deposited as a thin film. It is a solid solution of two chemicals, indium(III) oxide (In2O3) and tin(IV) oxide (SnO2); the typical component ratio is 90% In2O3 and 10% SnO2 by weight. But ITO has several main disadvantages: because of the high cost and limited supply of indium, the fragility and lack of flexibility of ITO layers, and the costly layer deposition requiring a vacuum, alternatives are being sought. It was also found to be fragile under the mechanical stress caused by small vibrations of the film. There have been several attempts to use other materials to replace ITO; carbon nanotube conductive coatings are a prospective replacement [3,4].
3.3 Carbon Nanotube (CNT) as a New Coating Material for Film Speakers
To overcome these technical disadvantages of previous coating materials, carbon nanotube (CNT) conductive coatings have been considered a prospective replacement [3,4]. We have developed a CNT coating for the surfaces of film speakers and shown that it improves the SPL characteristics and yields a higher quality of audible sound. The coating material is made from a mixture of a CNT solution and a CNT dispersant [5]. The CNT solution includes ingredients such as methyl alcohol, ethyl alcohol, isopropyl alcohol, toluene, and ethyl acetate, combined in a controlled mixing process. The CNT dispersant includes materials such as sodium dodecyl sulfate (SDS), Triton X, and lithium dodecyl sulfate, also mixed in a controlled process. We formed the CNT coating on polyvinylidene fluoride (PVDF).
By changing parameters (such as the thickness of the CNT layer, the density of the CNT solution, and the density of the CNT dispersant), the impedance of the coated CNT can be chosen appropriately. Fig.5 shows the frequency characteristics of the film speaker using CNT and CMC as coatings in the frequency range of 200-1000 Hz; CMC and CNT curves are shown for surface resistances of 50, 500, and 1000 Ω/□. It is clear that the SPL (in dB) of CNT is better than that of CMC.
Fig. 5. Frequency characteristics of a film speaker using CNT and CMC for coating material in the frequency range of 200-1000 Hz
Shown in Fig.6, is the frequency characteristics of the film speaker using CNT and CMC in the frequency range of 1000-1800 Hz. It is clear that the SPL (in dB) of CNT is better than that of CMC.
Fig. 6. Frequency characteristics of a film speaker using CNT and CMC for coatings in the frequency range of 1000-1800 Hz
The SPL of CNT increases as the surface resistance decreases. This means that by choosing a proper value of surface resistance, the desired film speaker output can be obtained. For example, an SPL of 72 dB is usually considered good-quality sound; in this case a surface resistance of 50 Ω/□ would be a good choice for the CNT.
3.4 Measured Results
We have shown that the proper surface impedance values of CNT were 50-2000 ohms per square. The axial sensitivity (in dB SPL, 1 watt at 1 m) of CNT with 50 ohms per square and 1000 ohms per square was shown to be 90 and 85, respectively, over the range of 1 kHz to 18 kHz, which is within the audio frequency range.
4 Conclusion
The speaker system we designed showed outstanding advantages, such as flexibility, a wide range of operating frequencies, low resistance values, and stability. We have shown that the proper surface impedance values of CNT were 50-2000 ohms per square. The axial sensitivity (in dB SPL, 1 watt at 1 m) of CNT with 50 ohms per square and 1000 ohms per square was shown to be 90 and 85, respectively, in the range of 1 kHz to 18 kHz, which is part of the audio frequency range. This work was supported by Hansei University.
Acknowledgements. The authors wish to express their thanks to Dong-Soo Lee and In Suk Park at FILS Co., Ltd. for their assistance in the experiments and important suggestions. Without their help, this work would have been impossible to complete.
Remarks: This work is the extended version of the paper presented at the IWIT 2011 Spring conference held at Eulji University.
Author Index
Adam, Ghada 183
Ahn, Hyosik 132, 139
Arockiam, L. 117
Bae, Cheol-Soo 453
Baek, Nakhoon 59
Baek, Sung-Hyun 412
Blondia, Chris 161
Calduwel Newton, P. 117
Cha, Jaesang 393
Chang, Hyokyung 132, 139
Cheong, Seung-Kook 125
Cho, Jae Hoon 238
Cho, Sikwan 49
Choi, Euiin 132, 139, 146
Choi, In-Sik 152
Choi, Ki-Seok 278, 286
Choi, Kwangnam 278
Choi, Sook-Young 42
Choi, WonYoung 215
Choi, Yong-Hoon 306, 328, 341
Chun, Myung Geun 76
Chun, Woo Bong 76
Chung, Young-uk 306, 328, 335
De Florio, Vincenzo 161
Do, Joo-Young 59
El-Bendary, Nashwa 183
Gaur, M.S. 86
Ghali, Neveen I. 183
Han, Sang Hyuck 341
Hassanien, Aboul Ella 183
Hong, Geun-Bin 447
Hur, Kyung Woo 96
Hwang, Ho Young 312, 328, 335
Ikeda, Daizo 11
Jang, Changbok 132, 139, 146
Jang, Jong-Wook 370, 380, 403, 412
Jang, Seong-Jin 403
Jang, Tae-Su 447, 453
Jang, Wonjun 49
Jeon, Byungkook 271
Jeon, Hyeonmu 347
Jeong, Jongpil 393
Jung, BokHee 209
Jung, Hong Soon 76
Jung, Sungmo 31
Kang, Jang-Mook 42
Kang, Jeong-Jin 460
Kang, Yongho 139
Kim, ByungChul 1, 66
Kim, Byung-Soo 312
Kim, Chang-Young 370
Kim, Dong Hwa 238
Kim, Doo-Hyun 439
Kim, Gwang-Hyun 261
Kim, Hye-Jin 176
Kim, Hyuntae 430
Kim, Jae Seung 421
Kim, Jae-Soo 278, 286
Kim, Jin 321
Kim, Jong Hyun 31
Kim, Jongwoo 328
Kim, Kwan-Woong 453
Kim, Mi-Jin 380
Kim, Nammoon 347
Kim, Robert Young Chul 292, 298, 421
Kim, Seoksoo 31, 36
Kim, Seong Joon 312
Kim, SoonGohn 190, 197, 203, 209, 215, 223
Kim, Woon-Yong 245
Kim, Woo Yeol 292, 298, 421
Kim, Yong K. 447, 453
Kim, Yong-Soo 253
Kim, Young-cheol 176
Kim, Youngok 347
Ko, Eung Nam 190, 197, 203
Ko, Young Woong 96
Kook, Youn-Gyou 286
Kwak, Ho-Young 176
Kwak, Seong-Sin 362
Kwon, Sujin 306
Lal, Chhagan 86
Laxmi, Vijay 86
Lee, ChangKeun 209
Lee, Dong-Ho 261
Lee, Hongro 278
Lee, Hyung-Woo 49
Lee, Hyunseok 321, 335
Lee, JaeYong 1, 66
Lee, Jeong Gun 96
Lee, Joon 286
Lee, Junghoon 176
Lee, Seunghyun 393
Lee, Ye Hoon 109
Lee, Young-Hun 125
Lim, Ji-Hyang 229
Lim, Moon-Cheol 245
Lim, SunHwan 66
Mansour, Fatma 183
Mingyu-Jo 347
Miura, Akira 11
Moon, Sangmin 335
Oh, Deockgil 1
Oh, Hyuk-jun 321
Park, Gyung-Leen 176
Park, Jangsik 430
Park, JinSeok 223
Park, Mankyu 1
Park, Min-Woo 286
Park, Neungsoo 439
Park, Seok-Gyu 245
Park, Suwon 312, 321, 328, 335, 341
Rhee, Seung Hyong 328
Ryu, Daehyun 430
Seo, Doo-Ok 261
Seo, Yong-Ho 353, 362
Shin, Dong Ryeol 393
Shin, Jeong-Hoon 19
Shin, Minsu 1
Shin, Seung-Jung 439
Shin, Soung-Soo 286
Shin, Taehyu 335
Shin, Yong-Nyuo 76
Snasel, Vaclav 183
Sohn, Chae Bong 335
Soliman, Omar S. 183
Son, Hyun Seung 292, 298, 421
Song, Hyun-Ju 125
Song, Jae-gu 36
Song, JeongHoon 176
Song, Seunghwa 439
Sung, Dan Keun 312
Suzuki, Toshihiro 11
Uk-Jo 347
Um, Keehong 460
Yang, Tae-Kyu 362
Yoo, Kwan-Hee 59
Yoon, Hyo Woon 229
Yu, Yun-sik 380