Communications in Computer and Information Science
236
Min Zhu (Ed.)
Information and Management Engineering
International Conference, ICCIC 2011
Wuhan, China, September 17-18, 2011
Proceedings, Part VI
Volume Editor
Min Zhu
Nanchang University
235 Nanjing Donglu, Nanchang 330047, China
E-mail:
[email protected]
ISSN 1865-0929 e-ISSN 1865-0937 ISBN 978-3-642-24096-6 e-ISBN 978-3-642-24097-3 DOI 10.1007/978-3-642-24097-3 Springer Heidelberg Dordrecht London New York Library of Congress Control Number: Applied for CR Subject Classification (1998): C.2, H.4, I.2, H.3, D.2, J.1, H.5
© Springer-Verlag Berlin Heidelberg 2011 This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, re-use of illustrations, recitation, broadcasting, reproduction on microfilms or in any other way, and storage in data banks. Duplication of this publication or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965, in its current version, and permission for use must always be obtained from Springer. Violations are liable to prosecution under the German Copyright Law. The use of general descriptive names, registered names, trademarks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. Typesetting: Camera-ready by author, data conversion by Scientific Publishing Services, Chennai, India Printed on acid-free paper Springer is part of Springer Science+Business Media (www.springer.com)
Preface
The present book includes extended and revised versions of a set of selected papers from the 2011 International Conference on Computing, Information and Control (ICCIC 2011), held in Wuhan, China, September 17–18, 2011. ICCIC is a comprehensive conference focused on the various aspects of advances in computing, information and control, providing a chance for academic and industry professionals to discuss recent progress in the area. The goal of this conference is to bring together researchers from academia and industry as well as practitioners to share ideas, problems and solutions relating to the multifaceted aspects of computing, information and control. Being crucial for the development of this subject area, the conference encompasses a large number of related research topics and applications. In order to ensure a high-quality international conference, the reviewing process was carried out by experts from home and abroad, with all low-quality papers being rejected. All accepted papers are included in the Springer CCIS proceedings. Wuhan, the capital of Hubei province, is a modern metropolis with unlimited possibilities, situated in the heart of China. Wuhan is an energetic city, a commercial center of finance, industry, trade and science, with many international companies located there. With scientific, technological and educational institutions such as Laser City and Wuhan University, the city is also an intellectual center. Nothing would have been achieved without the help of the Program Chairs, the organization staff, and the members of the Program Committees. Thank you. We are confident that the proceedings provide detailed insight into the new trends in this area. August 2011
Yanwen Wu
Organization
Honorary Chair Weitao Zheng
Wuhan Institute of Physical Education, Key Laboratory of Sports Engineering of General Administration of Sport of China
General Chair Yanwen Wu
Huazhong Normal University, China
Program Chair Qihai Zhou
Southwestern University of Finance and Economics, China
Program Committee Sinon Pietro Romano
Azerbaijan State Oil Academy, Azerbaijan
International Program Committee
Ming-Jyi Jang, Far-East University, Taiwan
Tzuu-Hseng S. Li, National Cheng Kung University, Taiwan
Yanwen Wu, Huazhong Normal University, China
Teh-Lu Liao, National Cheng Kung University, Taiwan
Yi-Pin Kuo, Far-East University, Taiwan
Qingtang Liu, Huazhong Normal University, China
Wei-Chang Du, I-Shou University, Taiwan
Jiuming Yang, Huazhong Normal University, China
Hui Jiang, WuHan Golden Bridge e-Network Security Technology Co., Ltd., China
Zhonghua Wang, Huazhong Normal University, China
Jun-Juh Yan, Shu-Te University, Taiwan
Dong Huang, Huazhong University of Science and Technology, China
JunQi Wu, Huazhong Normal University, China
Table of Contents – Part VI
Output Feedback Stabilization for Networked Control Systems with Packet Dropouts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Hao Dong, Huaping Zhang, and Hongda Fan
1
Study of the Fuzzy Nerve Network Control for Smart Home Controller . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . GaoHua Liao and JunMei Xi
7
The Study on RF Front-End Circuit Design Based on Low-Noise Amplifier Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Zhao San-ping
13
Balanced Ridge Estimator of Coefficient in Linear Model under a Balanced Loss Function (I) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Wenke xu and Fengri Li
20
SEDE: A Schema Explorer and Data Extractor for HTML Web Pages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Xubin Deng
26
Application of Artificial Neural Network (ANN) for Prediction of Maritime Safety . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Xu Jian-Hao
34
Embedded VxWorks System of Touch Screen Interrupt Handling Mechanism Design Based on the ARM9 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Han Gai-ning and Li Yong-feng
39
A New Architectural Design Method Based on Web3D Virtual Reality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Zhang Jun
45
Design and Implement of a Modularized NC Program Interpreter . . . . . . Chen Long, Yu Dong, Hong Haitao, Guo Chong, and Han Jianqi
50
Parallel Computing Strategy Design Based on COC . . . . . . . . . . . . . . . . . . Jing-Jing Zhou
58
Preliminary Exploration of Volterra Filter Algorithm in Aircraft Main Wing Vibration Reduction and De-noising Control . . . . . . . . . . . . . . . . . . . Chen Yu, Shi Kun, and Wen Xinling
66
Development Strategy for Demand of ICTs in Business-Teaching of New and Old Regional Comprehensive Higher Education Institutes . . . . . Hong Liu
74
A Novel Storage Management in Embedded Environment . . . . . . . . . . . . . Lin Wei and Zhang Yan-yuan
79
Development Strategy for Demand of ICT in Small-Sized Enterprises . . . Yanhui Chen
84
Development Strategy for Demand of ICT in Medium-Sized Enterprises of PRC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Yanhui Chen
89
Diagnosing Large-Scale Wireless Sensor Network Behavior Using Grey Relational Difference Information Space . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Hongmei Xiang and Weisong He
94
Mining Wireless Sensor Network Data Based on Vector Space Model . . . Hongmei Xiang and Weisong He
100
Influencing Factors of Communication in Buyer-Supplier Partnership . . . Xudong Pei
105
An Expanding Clustering Algorithm Based on Density Searching . . . . . . . Liguo Tan, Yang Liu, and Xinglin Chen
110
A Ship GPS/DR Navigation Technique Using Neural Network . . . . . . . . . Yuanliang Zhang
117
Research of Obviating Operation Modeling Based on UML . . . . . . . . . . . . Lu Bangjun, Geng Kewen, Zhang Qiyi, and Dai Xiliang
124
The Study of Distributed Entity Negotiation Language in the Computational Grid Environment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Honge Ren, Yi Shi, and Jian Zhang
131
Study and Application of the Smart Car Control Algorithm . . . . . . . . . . . Zhanglong Nie
138
A Basis Space for Assignment Problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Shen Maoxing, Li Jun, and Xue Xifeng
148
The Analysis on the Application of DSRC in the Vehicular Networks . . . Yan Chen, Zhiyuan Zeng, and Xi Zhu
152
Disaggregate Logit Model of Public Transportation Share Ratio Prediction in Urban City . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Dou Hui Li and Wang Guo Hua
157
Design of Calibration System for Vehicle Speed Monitoring Device . . . . . Junli Gao, Haitao Song, Qiang Fang, and Xiaoqing Cai
166
Dynamic Analysis and Numerical Simulation on the Road Turning with Ultra-High . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Liang Yujuan
173
Solving the Aircraft Assigning Problem by the Ant Colony Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Tao Zhang, Jing Lin, Biao Qiu, and Yizhe Fu
179
Generalization Bounds of Ranking via Query-Level Stability I . . . . . . . . . Xiangguang He, Wei Gao, and Zhiyang Jia
188
Generalization Bounds for Ranking Algorithm via Query-Level Stabilities Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Zhiyang Jia, Wei Gao, and Xiangguang He
197
On Harmonious Labelings of the Balanced Quintuple Shells . . . . . . . . . . . Xi Yue
204
The Study of Vehicle Roll Stability Based on Fuzzy Control . . . . . . . . . . . Zhu Maotao, Chen Yang, Qin Shaojun, and Xu Xing
210
Fast Taboo Search Algorithm for Solving Min-Max Vehicle Routing Problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Chunyu Ren
218
Research on the Handover of the Compound Guidance for the Anti-ship Missile beyond Visual Range . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Zhao Yong-tao, Hu Yun-an, and Lin Jia-xin
224
Intelligent Traffic Control System Design Based on Single Chip Microcomputer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Xu Lei, Ye Sheng, Lu Guilin, and Zhang Zhen
232
Calculation and Measurement on Deformation of the Piezoelectric Pump Actuator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Xing Wang, Linhua Piao, and Quangang Yu
239
FEM Analysis of the Jet Flow Characteristic in a Turning Cavity . . . . . . Xing Wang, Linhua Piao, and Quangang Yu
246
Software Compensation of the Piezoelectric Fluidic Angular Rate Sensor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Xing Wang, Linhua Piao, and Quangang Yu
253
Finite Element Analysis for Airflow Angular Rate Sensor Temperature Field and Pressure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Xing Wang, Linhua Piao, and Quangang Yu
261
Control System of Electric Vehicle Stereo-Garage . . . . . . . . . . . . . . . . . . . . Wang Lixia, Yang Qiuhe, and Yang Yuxiang
267
Research the Ameliorative Method of Wavelet Ridgeline Based Direct Wavelet Transform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Yan Zhe and Li Ping
273
Study on the Transportation Route Decision-Making of Hazardous Material Based on N-Shortest Path Algorithm and Entropy Model . . . . . Ma Changxi, Guo Yixin, and Qi Bo
282
Encumbrance Analysis of Trip Decision Choosing for Urban Traffic Participants . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Li Zhen-fu, He Jian-tong, and Zhao Chang-ping
290
Study on Indicators Forecasting Model of Regional Economic Development Based on Neural Network . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Yang Jun-qi, Gao -xia, and Chen Li-jia
297
An Adaptive Vehicle Rear-End Collision Warning Algorithm Based on Neural Network . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Zhou Wei, Song Xiang, Dong Xuan, and Li Xu
305
A kind of Performance Improvement of Hamming Code . . . . . . . . . . . . . . . Hongli Wang
315
Intelligent Home System Based on WIFI . . . . . . . . . . . . . . . . . . . . . . . . . . . . Zhang Yu-han and Wang Jin-hai
319
A Channel Optimized Vector Quantizer Based on Equidistortion Principal and Wavelet Transform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Wang Yue
328
ESPI Field Strength Data Processing Based on Circle Queue Model . . . . Hongzhi Liu and Shaokun Li
335
The Research on Model of Security Surveillance in Software Engineering Based on Ant Colony Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Hongzhi Liu and Xiaoyun Deng
343
Realization on Decimal Frequency Divider Based on FPGA and Quartus II . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Hu XiaoPing and Lin YunFeng
350
Design of Quality Control System for Information Engineering Surveillance Based on Multi-agent . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Hongzhi Liu, Li Gao, and GuiLin Xing
357
A Study about Incentive Contract of Insurance Agent . . . . . . . . . . . . . . . . Hu Yuxia
364
Scientific Research Management/Evaluation/Decision Platform for CEPB . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Zhang Shen, Liu Zhongjing, and Wang Hui-li
370
Necessary and Sufficient Condition of Optimal Control to Stochastic Population System with FBM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . RenJie and Qimin Zhang
376
The Research on Newly Improved Bound Semi-supervised Support Vector Machine Learning Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Xue Deqian
383
The Application of Wireless Communications and Multi-agent System in Intelligent Transportation Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Wei Xiaowei
391
Study on Actuator and Generator Application of Electroactive Polymers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Jia Ji, Jianbo Cao, Jia Jiang, Wanlu Xu, Shiju E., Jie Yu, and Ruoyang Wang
398
Research on Chinese Mobile e-Business Development Based on 3G . . . . . Li Chuang
404
The Statistical Static Timing Analysis of Gate-Level Circuit Design Margin in VLSI Design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Zhao San-ping
410
Forensic Analysis Using Migration in Cloud Computing Environment . . . Gang Zhou, Qiang Cao, and Yonghao Mai
417
Research on Constitution and Application of Digital Learning Resources of Wu Culture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Minli Dai, Caiyan Wu, Hongli Li, Min Wang, and Caidong Gu
424
Research on Digital Guide Training Platform Designing . . . . . . . . . . . . . . . Minli Dai, Caidong Gu, Jinxiang Li, Fengqiu Tian, Defu Zhou, and Ligang Fang
430
A Hypothesis Testing Using the Total Time on Test from Censored Data as Test Statistic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Shih-Chuan Cheng
436
Collaborative Mechanism Based on Trust Network . . . . . . . . . . . . . . . . . . . Wei Hantian and Wang Furong
445
Design for PDA in Portable Testing System of UAV’s Engine Based on Wince . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . YongHong Hu, Peng Wu, Wei Wan, and Lu Guo
452
Adaptive Particle Swarm Optimizers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Li Li and Qin Yang
458
Based on Difference Signal Movement Examination Shadow Suppression Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Hu ChangJie
461
Application of Clustering Algorithm in Intelligent Transportation Data Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Long Qiong, Yu Jie, and Zhang Jinfang
467
Exploration and Research of Volterra Adaptive Filter Algorithm in Non-linear System Identification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Wen Xinling, Ru Yi, and Chen Yu
474
Application of Improved Genetic Algorithms in Structural Optimization Design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Shengli Ai and Yude Wang
480
Research on Intelligent Schedule of Public Traffic Vehicles Based on Heuristic Genetic Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Liangguo Yu
488
The Integration Framework of Train Scheduling and Control Based on Model Predictive Control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Chao Mi and Yonghua Zhou
492
A Design of Anonymous Identity Generation Mechanism with Traceability for VANETs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . An-Ta Liu, Henry Ker-Chang Chang, and Herbert Hsuan Heng Lai
500
A Improvement of Mobile Database Replication Model . . . . . . . . . . . . . . . . Yang Chang Chun, Ye Zhi Min, and Shen Xiao Ling
511
Software Design and Realization of Altimeter Synthetically Detector . . . . Shi Yanli, Tan Zhongji, and Shi Yanbin
517
Emulational Research of Spread Spectrum Communication in the More-Pathway Awane Channel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Shi Yanli, Shi Yanbin, and Yu Haixia
522
A Pilot Study on Virtual Pathology Laboratory . . . . . . . . . . . . . . . . . . . . . . Fan Pengcheng, Zhou Mingquan, and Xu Xiaoyan
528
Research and Practice on Applicative “Return to Engineering” Educational Mode for College Students of Electro-mechanical Major . . . . Jianshu Cao
536
Engineering Test of Biological Aerated Filter to Treat Wastewater . . . . . Weiliang Wang
544
The Design of Propeller LED Based on AT89S52 . . . . . . . . . . . . . . . . . . . . . Xu zelong, Zhang Hongbing, Hong Hao, and Jiang Lianbo
551
Author Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
559
Output Feedback Stabilization for Networked Control Systems with Packet Dropouts*

Hao Dong¹, Huaping Zhang¹, and Hongda Fan²

¹ Network Center, Yantai University, Yantai 264005, China
² Information Engineering, Naval Aeronautical and Astronautical University, Yantai 264001, China
[email protected]
Abstract. The problems of stability and stabilization for networked control systems (NCS) with stochastic packet dropouts are investigated. When packet dropouts occur between the sensor and the controller, the networked control system is modeled as a Markov jump linear system with two operation modes. Based on this model, a sufficient condition for the stability of the system is presented; a static output feedback controller is then obtained in terms of an LMI condition. A numerical example illustrates the effectiveness of the proposed method. Keywords: Networked control system, packet dropout, stochastically stable, linear matrix inequality (LMI).
1 Introduction

Networked control systems (NCSs) are control loops closed through a shared communication network [1-3]. That is, in networked control systems, communication networks are employed to exchange information and control signals (reference input, plant output, control input, etc.) between control system components (sensors, controllers, actuators, etc.). The main advantages of networked control systems are low cost, reduced weight, simple installation and maintenance, and high reliability. As a result, networked control systems have been widely applied to many complicated control systems, such as manufacturing plants, vehicles, and spacecraft. However, the insertion of a communication network in the feedback control loop complicates the analysis and design of an NCS, because many ideal assumptions made in traditional control theory cannot be applied to NCSs directly.

The packet dropout is one of the most important issues in NCSs. Data packet dropout can degrade performance and destabilize the system. In recent years, NCSs with packet dropout have been a hot research topic and have attracted increasing attention. Some work on the effect of dropout on NCSs has been published [4-5]. The augmented state-space method, an important method for dealing with the problem of data packet dropout, is provided in [4]. Reference [3] models NCSs with data packet dropout as asynchronous dynamic systems, but the stability condition derived in [3] is in bilinear matrix
This work was supported by Educational Commission of Shandong Province, China (J08LJ19-02).
M. Zhu (Ed.): ICCIC 2011, Part VI, CCIS 236, pp. 1–6, 2011. © Springer-Verlag Berlin Heidelberg 2011
inequalities, which are difficult to solve. The issue of data packet dropout is modeled as a Markov process in [6], but no rigorous analysis is carried out. In this paper, we consider the stabilization problem of networked control systems with a discrete-time plant and a time-driven controller. The packet dropout occurs between the sensor and the controller, and the networked control system is modeled by a Markov jump linear system (MJLS) with two modes. We can then apply MJLS theory to analyze the stability and stabilization problems of the NCS.
2 Problem Formulation

The framework of the NCS with data packet dropouts is depicted in Fig. 1, where the plant is described by the following discrete-time linear time-invariant system model:

$$x_{k+1} = A x_k + B u_k, \qquad y_k = C x_k, \qquad (1)$$

where $k \in \mathbb{Z}$, $x_k \in \mathbb{R}^n$ is the system state, $u_k \in \mathbb{R}^p$ is the control input, and $y_k \in \mathbb{R}^m$ is the measurement output. When data dropouts occur between the sensor and the controller, the dynamics of the switch S can be described as follows: when S is closed, the sensor output $y_k$ is successfully transmitted to the controller and the switch output $\bar{y}_k$ is set to $y_k$; when it is open, the switch output is held at the previous value $\bar{y}_{k-1}$ and the packet is lost.
Fig. 1. Networked control system with packet dropouts (actuator, plant, sensor; the sensor output $y_k$ passes through the lossy network and switch S to the controller as $\bar{y}_k$)
Thus the dynamics of the switch S can be modeled as

$$\bar{y}_k = \begin{cases} y_k, & \text{S is closed}, \\ \bar{y}_{k-1}, & \text{S is open}. \end{cases} \qquad (2)$$

Here, we consider the following static output feedback controller with packet dropouts:

$$u_k = K \bar{y}_k. \qquad (3)$$
Let $z_k = [\,x_k^T \;\; \bar{y}_{k-1}^T\,]^T$ be the augmented state vector. Then, by the description of the network channel and the models (1)-(3), the closed-loop networked control system with packet dropouts can be represented by the following two subsystems.

(a) No packet dropout occurs between the sensor and the controller:

$$z_{k+1} = A_1 z_k, \qquad A_1 = \begin{bmatrix} A + BKC & 0 \\ C & 0 \end{bmatrix}. \qquad (4)$$

(b) A packet dropout occurs between the sensor and the controller:

$$z_{k+1} = A_2 z_k, \qquad A_2 = \begin{bmatrix} A & BK \\ 0 & I \end{bmatrix}. \qquad (5)$$

Taking both subsystems into consideration, they can be lumped into a general framework described by the following discrete-time Markov jump linear system:

$$z_{k+1} = A_{r_k} z_k, \qquad (6)$$

where $\{r_k, k \in \mathbb{Z}\}$ is a Markov chain taking values in the finite set $\mathcal{N} = \{1, 2\}$, with transition probability from mode $i$ at time $k$ to mode $j$ at time $k+1$ given by $p_{ij} = \Pr\{r_{k+1} = j \mid r_k = i\}$, where $p_{ij} \ge 0$, $i, j \in \mathcal{N}$, and $\sum_{j=1}^{2} p_{ij} = 1$.
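The mode process $\{r_k\}$ is straightforward to simulate. The sketch below (ours, not from the paper) samples the two-mode chain using the transition probabilities of the numerical example in Section 3; the seed and horizon are arbitrary choices.

```python
import random

# Illustrative sketch: sample the two-mode Markov chain governing (6).
# Mode 1 = packet delivered, mode 2 = packet dropped.
P = {1: {1: 0.3, 2: 0.7},   # p11, p12
     2: {1: 0.6, 2: 0.4}}   # p21, p22 (values from the paper's example)

def sample_chain(n, r0=1, seed=0):
    rng = random.Random(seed)
    modes, r = [], r0
    for _ in range(n):
        # jump to mode 1 with probability p_{r,1}, otherwise to mode 2
        r = 1 if rng.random() < P[r][1] else 2
        modes.append(r)
    return modes

modes = sample_chain(20000)
freq1 = modes.count(1) / len(modes)
# long-run fraction of mode 1 approaches p21/(p12+p21) = 0.6/1.3 ≈ 0.46
print(round(freq1, 2))
```

This chain drives which of $A_1$, $A_2$ is applied at each step, which is exactly the switching signal plotted later in Fig. 3.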
Lemma 1 [7]. The system $x_{k+1} = A(r_k) x_k$ is stochastically stable if and only if for each mode $i \in \mathcal{N}$ there exists a matrix $X_i > 0$ such that

$$A^T(i) \sum_{j=1}^{N} p_{ij} X_j A(i) - X_i < 0.$$

Lemma 2 [8]. Given matrices $X$, $Y$, $Z$ of appropriate dimensions with $Y > 0$, then

$$-X^T Z - Z^T X \le X^T Y X + Z^T Y^{-1} Z.$$
3 Controller Design

In this section, stability analysis and static output feedback controller design are considered for the NCS with packet dropouts. A sufficient condition is established via the theory of discrete-time Markov jump linear systems, and the corresponding controller design technique is provided.

Theorem 1. For the given controller (3), system (6) is stochastically stable if for each mode $i \in \mathcal{N}$ there exist matrices $X_i > 0$ and $S_i$ satisfying the following coupled LMIs:

$$\Phi_i = \begin{bmatrix} -X_i & A_i^T S_i^T \\ S_i A_i & S_i + S_i^T + \sum_{j=1}^{2} p_{ij} X_j \end{bmatrix} < 0. \qquad (7)$$
Proof. First, (7) yields

$$S_i + S_i^T + \sum_{j=1}^{2} p_{ij} X_j < 0;$$

since $X_i > 0$, it follows that $S_i + S_i^T < 0$, so $S_i$ is nonsingular for each mode $i \in \mathcal{N}$. Based on Lemma 1, system (6) is stochastically stable if and only if for each mode $i \in \mathcal{N}$ there exists a matrix $X_i > 0$ such that

$$A_i^T \sum_{j=1}^{2} p_{ij} X_j A_i - X_i < 0. \qquad (8)$$

In the following, we prove that if (7) holds, then (8) holds. Since $S_i$ is nonsingular, pre- and post-multiply (7) by $\mathrm{diag}\{I, S_i^{-1}\}$ and $\mathrm{diag}\{I, S_i^{-T}\}$, respectively, and let $L_i = S_i^{-T}$; inequality (7) is then equivalent to

$$\begin{bmatrix} -X_i & A_i^T \\ A_i & L_i + L_i^T + L_i^T \sum_{j=1}^{2} p_{ij} X_j L_i \end{bmatrix} < 0. \qquad (9)$$

From Lemma 2, the following inequality holds:

$$-\Big(\sum_{j=1}^{2} p_{ij} X_j\Big)^{-1} \le L_i + L_i^T + L_i^T \sum_{j=1}^{2} p_{ij} X_j L_i, \qquad (10)$$

so from (10), if (9) holds, then

$$\begin{bmatrix} -X_i & A_i^T \\ A_i & -\big(\sum_{j=1}^{2} p_{ij} X_j\big)^{-1} \end{bmatrix} < 0, \qquad (11)$$

which is equivalent to (8) by the Schur complement. This indicates that system (6) is stochastically stable. The proof is completed.

To obtain an explicit expression for K, $\Phi_i$ can also be rewritten as

$$\Phi_i = \begin{bmatrix} -X_i & (A_{0i} + M_i K N_i)^T S_i^T \\ S_i (A_{0i} + M_i K N_i) & S_i + S_i^T + \sum_{j=1}^{2} p_{ij} X_j \end{bmatrix} < 0, \qquad (12)$$

where $A_1 = A_{01} + M_1 K N_1$, $A_2 = A_{02} + M_2 K N_2$, and

$$A_{01} = \begin{bmatrix} A & 0 \\ C & 0 \end{bmatrix}, \quad M_1 = \begin{bmatrix} B \\ 0 \end{bmatrix}, \quad N_1 = [\,C \;\; 0\,], \quad A_{02} = \begin{bmatrix} A & 0 \\ 0 & I \end{bmatrix}, \quad M_2 = \begin{bmatrix} B \\ 0 \end{bmatrix}, \quad N_2 = [\,0 \;\; I\,], \quad i \in \mathcal{N}.$$

Notice that (12) is not an LMI when the controller parameter matrix K is unknown, so K cannot be obtained from (12) directly. In the following, we give a method to solve (12) and obtain K. In fact, letting $S_i = \varepsilon_i I$ and $K_i = \varepsilon_i K$, we have:
Theorem 2. If for each mode $i \in \mathcal{N}$ there exist matrices $X_i > 0$, $K_i$, and scalars $\varepsilon_i$ satisfying the following coupled LMIs:

$$\Phi_i = \begin{bmatrix} -X_i & \varepsilon_i A_{0i}^T + N_i^T K_i^T M_i^T \\ * & 2\varepsilon_i I + \sum_{j=1}^{2} p_{ij} X_j \end{bmatrix} < 0, \qquad (13)$$

then there exists a controller (3) such that the closed-loop system (6) is stochastically stable, and the controller parameter matrix is given by $K = \varepsilon_i^{-1} K_i$.
Example. Consider the controlled plant

$$A = \begin{bmatrix} 1 & 4 \\ 0.4 & 1 \end{bmatrix}, \quad B = \begin{bmatrix} 1 & 2 \\ 5 & 3 \end{bmatrix}, \quad C = \begin{bmatrix} 0.2 & 1 \\ 1 & 2 \end{bmatrix}, \quad p_{11} = 0.3, \; p_{12} = 0.7, \; p_{21} = 0.6, \; p_{22} = 0.4.$$

Solving LMI (13), the LMI is feasible with the result

$$K = \begin{bmatrix} 1.3969 & 0.0116 \\ -2.6514 & -0.0341 \end{bmatrix},$$

so the closed-loop system is stochastically stable. Fig. 2 gives the simulation result of the closed-loop system, and the switching mode is shown in Fig. 3. From Fig. 2, the closed-loop system is stable.
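Independently of Figs. 2 and 3, the contraction of the no-dropout loop can be checked numerically. The sketch below (ours, not the paper's simulation) forms $A + BKC$ from the example data and iterates it from an arbitrary initial state; it exercises only mode 1, whereas Theorem 2 guarantees mean-square stability of the full jump system.

```python
# Illustrative check: the no-dropout closed-loop matrix A + BKC built from
# the example data is a contraction, so repeated application shrinks x.
def matmul(M, N):
    return [[sum(M[i][k] * N[k][j] for k in range(len(N)))
             for j in range(len(N[0]))] for i in range(len(M))]

A = [[1.0, 4.0], [0.4, 1.0]]
B = [[1.0, 2.0], [5.0, 3.0]]
C = [[0.2, 1.0], [1.0, 2.0]]
K = [[1.3969, 0.0116], [-2.6514, -0.0341]]

BKC = matmul(matmul(B, K), C)
Acl = [[A[i][j] + BKC[i][j] for j in range(2)] for i in range(2)]

x = [[1.0], [1.0]]
for _ in range(30):        # iterate x_{k+1} = (A + BKC) x_k
    x = matmul(Acl, x)
norm = abs(x[0][0]) + abs(x[1][0])
print(norm < 1e-6)         # → True: the state has contracted toward zero
```

Here $A + BKC \approx [[0.162, -0.019], [0.162, -0.058]]$, whose spectral radius is well below one, consistent with the stable trajectories of Fig. 2.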
Fig. 2. The state trajectories of the closed-loop system
Fig. 3. The mode $r_k$
4 Conclusions

In this paper, the packet dropout problem in the channel between the sensor and the controller is considered. First, the NCS is modeled as a Markov jump linear system; then a sufficient condition for the mean-square stability of the system is presented, and the static output feedback stabilization problem is solved in terms of LMIs.
References

1. Hespanha, T.J.P., Payam, N., Xu, Y.G.: A survey of recent results in networked control systems. Proc. of the IEEE 95(1), 138–162 (2007)
2. Walsh, G.C., Ye, H.: Scheduling of networked control systems. IEEE Control Systems Magazine 21(1), 57–65 (2001)
3. Zhang, W., Branicky, M.S., Phillips, S.M.: Stability of networked control systems. IEEE Control Systems Magazine 21(1), 84–99 (2001)
4. Zhang, W.: Stability Analysis of Networked Control Systems. Case Western Reserve University, Ohio (2001)
5. Ling, Q., Lemmon, M.: Optimal dropout compensation in networked control systems. In: The 42nd IEEE International Conference on Decision and Control, pp. 670–675. IEEE Press, Piscataway (2003)
6. Nilsson, J.: Real-time control systems with delays. PhD thesis, Lund Institute of Technology (1998)
7. Costa, O.L.V., Fragoso, M.D., Marques, R.P.: Discrete-time Markovian Jump Linear Systems. Springer, London (2005)
8. Petersen, I.R.: A stabilization algorithm for a class of uncertain linear systems. Systems & Control Letters 8, 351–357 (1987)
Study of the Fuzzy Nerve Network Control for Smart Home Controller

GaoHua Liao and JunMei Xi

Nanchang Institute of Technology, Nanchang, China
[email protected]
Abstract. The control problem of a smart home system involves nonlinearity and uncertainty. The whole system is decomposed into family subsystems, whose combination forms the intelligent home, and the control actions of the individual subsystems are implemented by nodes. A method is put forward to build the smart home controller using fuzzy neural network technology. It solves the problem of how the home controller obtains self-learning, self-adaptive, and intelligent analysis and judgment capabilities, satisfies the demands of intelligent home control, and offers good economy and practicality. Keywords: Smart home, Fuzzy neural network, Intelligent controller, Intelligent building.
1 Introduction

With the continual enhancement of society and economic levels, various appliances enter the ordinary family, which is constituted of all forms of modernized rooms with appliances as the basic hardware [1]. The control problem of a smart home system exhibits nonlinear and uncertain characteristics. Home owners' lives are nonlinear and uncertain: different people have different habits, and even the same person may change. The working rules of household appliances and equipment differ with the family, the climate, and the place, and are thus also uncertain [2]. Because of these properties of home control systems, a neural network is necessary and is introduced into the control system. Fuzzy logic easily expresses human knowledge, and neural networks have the advantages of distributed information storage and learning ability. The control actions of the individual subsystems are implemented by nodes; we study the communication means and the principles, structures, implementation, and control rules of the intelligent nodes. This provides an effective solution for the nonlinearity and uncertainty of the home control system.
2 Control Structure Design

An intelligent controller structure with neural-fuzzy control is put forward to solve the self-learning and adaptive control problem of the home system. The smart home
controller has data processing, rule extraction, self-learning, adaptive, and other features. Through off-line training and an online learning algorithm, fuzzy logic control and automatic rule adjustment realize the self-adaptive function of the home controller, and thus provide good control quality for the system [3]. The principle is shown in Fig. 1.
Fig. 1. Intelligent control principle
Neural-fuzzy control contains five parts: a neural network for fuzzifying the sensed input variables; a neural network for extracting the fuzzy control rules of the home controller; a neural network for the defuzzification algorithm of the controller; the fuzzy rule database; and the input and output of data or rules.
3 Fuzzification of the Home Monitoring System Data

The fuzzification process maps the exact values of sensor data, or of the owner's manual input, to fuzzy values. The mapping converts a precise value into the attached linguistic-variable values of a limited number of classes and the corresponding fuzzy-variable membership-function values. It is shown in Fig. 2.
Fig. 2. Fuzzy of precise value
Fig. 2 shows that each state variable corresponds to only two fuzzy sets, so two fuzzy values and two membership function values are obtained for each state value. The role of the neural network is to realize the mapping from accurate values to fuzzy values. Triangular or Gaussian functions can be used to achieve the mapping from a precise value to the fuzzy variable
Study of the Fuzzy Nerve Network Control for Smart Home Controller
membership functions. A multilayer feed-forward (BP) neural network is used to obtain these values. Each state variable corresponds to one input neuron and four output neurons; the structure is shown in Fig. 3. The number of neurons changes with the universe of discourse: increasing the quantization level requires increasing the number of hidden-layer neurons. This requirement does not make the problem more complicated; it only needs more off-line training. I is the fuzzy input variable and W is the membership function value of the input variable.
Fig. 3. Fuzzification neural network structure of precise value
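As a hedged illustration of the value-to-membership mapping described above (the paper does not specify its membership functions; the shapes and breakpoints below are assumptions for illustration):

```python
import math

def tri_mf(x, a, b, c):
    """Triangular membership: rises from a to a peak at b, falls to c."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def gauss_mf(x, center, sigma):
    """Gaussian membership centered at `center` with width `sigma`."""
    return math.exp(-((x - center) ** 2) / (2 * sigma ** 2))

# Map a precise sensor value (e.g. a room temperature) onto two
# adjacent fuzzy sets, as in Fig. 2: each value activates at most
# two sets, with complementary membership degrees.
temp = 22.0
mu_cool = tri_mf(temp, 15, 20, 25)   # membership in "cool"
mu_warm = tri_mf(temp, 20, 25, 30)   # membership in "warm"
```

With overlapping triangular sets laid out this way, the two activated memberships sum to one, which matches the two-fuzzy-sets-per-value picture of Fig. 2.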
Fuzzy Reasoning Rulebase of the Controller. A fuzzy control rule describes the relationship between input fuzzy variables and an output fuzzy variable. The typical format of the fuzzy rule database is a set of rules in If-Then form. The number of rules depends on the number of fuzzy subsets of the input and output variables. The rulebase stores many control rules of the home system; they are the core of the home system's intelligence. To achieve effective control of the nodes at home, a sub-library area is set up for each intelligent node in the fuzzy rulebase, with unified addressing. The initial rules of the rulebase are set by manual input. Later, the rulebase is updated: the neural network collects data from the intelligent nodes and optimizes the fuzzy rules by self-learning, or manually entered expert-knowledge rules cut out redundant or unnecessary rules; finally, the rules are separately transmitted to the intelligent nodes by the output module.

Fuzzy Rule Extraction of the Controller. The fuzzy rules are the core of the smart home fuzzy-neural controller. The smart home controller is able to learn fuzzy logic rules from the data collected from sensors or from the intelligent input of home owners [2]. The weights and nodes of the fuzzy-neural network can be interpreted, which is why fuzzy rules can be extracted. The fuzzy neural network is composed of the input, fuzzification, rule, logic output and output layers. Using the neural network, the home controller converts a set of input-output samples into membership functions and a control rule base. The network adopts a multilayer BP neural network; its storage structure is indicated in Table 1. The fuzzy neural network usually mines fuzzy logic rules from the collected data through two channels:

• When the membership functions of the input data vectors and output vectors of the system are determined, the relationship between input and output is expressed by "If-Then" fuzzy logic rules. The fuzzy logic rules obtained are then reviewed by domain experts and checked by experiments.
Table 1. Storage structure of the fuzzy control rule base

  Smart node name        Rulebase address   Prerequisite    Conclusion       Rule confidence
  Information appliance  001                If statement    Then statement   %
  Security protection    002                If statement    Then statement   %
  Climate control        003                If statement    Then statement   %
  Remote manipulation    004                If statement    Then statement   %
  Illumination control   005                If statement    Then statement   %
  Curtain control        006                If statement    Then statement   %
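The storage structure of Table 1 can be mirrored in code; the sketch below is one possible in-memory layout, assumed for illustration rather than taken from the authors' implementation:

```python
# Each intelligent node owns a sub-library of If-Then rules with a
# confidence value (the "%" column of Table 1). Addresses follow the
# unified addressing scheme described in the text.
rulebase = {
    "001": {"node": "information appliance", "rules": []},
    "002": {"node": "security protection",   "rules": []},
    "003": {"node": "climate control",       "rules": []},
}

def add_rule(rulebase, address, premise, conclusion, confidence):
    """Append an If-Then rule to the sub-library of one node."""
    rulebase[address]["rules"].append(
        {"if": premise, "then": conclusion, "confidence": confidence}
    )

add_rule(rulebase, "003", "temperature is high", "increase cooling", 0.92)
```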
• When the membership functions of the input data vectors and output vectors of the system are not determined, the input and output data can be fuzzified (the input and output data are sectionally placed in common membership functions). The relationship between input and output is achieved by the implication coding of the fuzzy neural network; no fuzzy logic rule is explicitly expressed.

Fuzzy Rule Confidence. If the measurement Z of a non-learning sample obeys the normal distribution N(α, σ²), the parameter α can be used to represent the degree of approach to the real system. The unbiased estimator of α is

Z̄ = (1/n) Σ_{i=1}^{n} Z_i    (1)

and the unbiased estimator of σ² is

S² = (1/(n−1)) Σ_{i=1}^{n} (Z_i − Z̄)²    (2)
Defuzzification of the Smart Home Controller. Defuzzification maps a deduced fuzzy subset to an ordinary (crisp) set; this mapping is called judgment, and precise control quantities are obtained from it [5]. Common methods for translating a fuzzy quantity into an accurate one include the maximum membership degree, the weighted average, and median selection. Here the center-of-gravity method is used to achieve defuzzification:

U^j = Σ_{i=1}^{M} μ(U_i^j) U_i^j / Σ_{i=1}^{M} μ(U_i^j)    (3)

where U_i^j and μ(U_i^j) are the values of the fuzzy output variable in the fuzzy sets and their membership function values, respectively, and U^j is the jth output of the controller. It can be implemented by a BP network with a hidden layer.
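The center-of-gravity rule of equation (3) reduces to a weighted average; a minimal sketch:

```python
def centroid_defuzzify(values, memberships):
    """Center-of-gravity defuzzification, equation (3):
    U = sum(mu_i * u_i) / sum(mu_i)."""
    num = sum(m * u for m, u in zip(memberships, values))
    den = sum(memberships)
    if den == 0:
        raise ValueError("all membership values are zero")
    return num / den

# Two fuzzy output values with their membership degrees
u = centroid_defuzzify([10.0, 20.0], [0.25, 0.75])
```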
System Software Design. The data processing functions that implement intelligent self-learning and self-adaptation require specifically designed software. The software design is based on conventional computer hardware resources and a mature network operating system. The software system achieves four functions: data send-receive, data processing, data view-modify, and remote data calls.

• Data send-receive module. The data send module mainly sends the control rules, extracted by the data processing module through fuzzy neural network learning, to the intelligent control nodes of the smart home subsystems. The rule database of each node can be revised, deleted from and added to, thus enhancing the intelligent control level of the intelligent nodes for household equipment. The data receive module accepts the sensor parameters collected by each subsystem's intelligent nodes and the running-state parameters of each piece of equipment, and puts these data into the database of the intelligent controller, so as to provide the data processing module with material for rule extraction.

• Data processing module. It mainly learns from and analyzes the data in the database using fuzzy technology to extract useful rules.

• Data view-modify module. Its task is to accept user calls to view data in the database and to call, view and modify rules in the rule database. Remote data viewing and modification largely means accepting the user's service request through the family gateway. Its functions, work processes and services are much the same as the local service [6]; the difference is that the service request signal comes from the family gateway, and the provided data and rules are first sent to the family gateway and then forwarded to the remote computer over the Internet.
4 Summary

This paper studies the smart home controller. Fuzzy neural network technology has been used to study the principles and characteristics of the smart home fuzzy-neural controller. On the basis of these theories, the structure of the smart home controller has been analyzed and designed, giving rise to the principle diagram and the storage structure table of the rulebase. The overall aspects of the network system, communication media and gateway are designed, and the principles and methods of the software and hardware are given. The self-learning and self-adaptation problems of the smart home controller are solved, an implementation scheme for fuzzy control rule extraction is given, and the intelligence level of the home controller is essentially improved. A decentralized system, in line with the features and applications of the smart home, has good stability and flexibility, so that the home system consistently reaches the goals of a healthy and safe home life. For some special functions it can greatly increase energy efficiency and overall system performance.
References 1. Peng, X.j., Li, R.: Research on Embedded Smart Home Control System Based on ARM. Low Voltage Apparatus (18), 42–45 (2009) 2. Wang, K.: The Study of Smart Home System. Xi'an University of Science and Technology, Xi'an (2005)
3. Zhang, Z.-q.: Research and Realization of Smart Home System in Double-Deck Houses. Xi'an University of Science and Technology, Xi'an (2006) 4. Gao, Q., Qin, W., Ni, Z.F., Huang, W.X., Chen, S.M., Yao, Q.: Design and Realization of Electrical Appliances and Home Monitor System Based on GSM. Journal of Zhejiang Sci-Tech University 26, 391–394 (2009) 5. Douligeris, C.: Intelligent Home Systems. IEEE Communications Magazine (October 1993) 6. Cao, W.: Design and Implementation of a Street Lamp Monitoring and Management System with SMS/GPRS/USSD and GIS. Acta Scientiarum Naturalium Universitatis Neimongol 40, 722–727 (2009)
The Study on RF Front-End Circuit Design Based on Low-Noise Amplifier Architecture Zhao San-ping Hebi Vocational and Technical College
[email protected]
Abstract. In this paper, a low power, high linearity and high gain front-end circuit with novel LNA in 0.18μm CMOS technology for 5.2 GHz wireless applications is proposed. By employing current-bleeding and current-enhanced techniques, the conversion gain and the linearity can be increased and the power consumption is reduced. The complementary common-gate LNA is adopted to reduce noise and to improve linearity. With the LO power of 0 dBm, the proposed front-end circuit has conversion gain of 18.4 dB, input 1-dB compression point of -16 dBm and IIP3 of -6 dBm, while it consumes only 9.4mW. The chip size including pads is 0.767mm × 0.96mm. Keywords: RF, Front-end Circuit, Low-noise Amplifier.
1 Introduction

In recent years, there has been an increasing demand for low-power, low-noise and high-linearity integrated circuit (IC) designs for portable devices such as mobile phones, notebook computers and personal digital assistants (PDAs). In portable electronic products, a low-power, low-noise and high-linearity receiver can maximize battery life and reduce signal distortion. Recently, CMOS technology has become a popular choice for RFIC implementations due to its convenient integration of low power, low noise and high linearity. Many design techniques for LNAs and mixers have been proposed. An LC-filter combination LNA can achieve a power consumption of 9 mW [1]. A feedback structure can attain a noise figure of only 1.9 dB in LNA design [2]. To reduce power consumption, a current-reused structure was applied to LNA design [3]. Based on an unbalanced architecture, the designed mixer can achieve high conversion gain [4]. To boost the conversion gain, a folded-switching mixer employs the current-enhanced technique [5-6]. To reduce power dissipation and supply voltage, the Gilbert cell and folded structure were applied to the mixer circuit [7]. An active impedance was applied to the front-end mixer to achieve variable conversion gain and better linearity [8]. One mixer uses the current-bleeding technique and an extra inductor to improve the conversion gain and the noise figure [9]. Recently, improving linearity, conversion gain and power consumption has been the research tendency of radio-frequency integrated circuit (RFIC) design. In addition, isolation and noise figure are also important performance measures of front-end circuits.

M. Zhu (Ed.): ICCIC 2011, Part VI, CCIS 236, pp. 13–19, 2011. © Springer-Verlag Berlin Heidelberg 2011
Fig. 1. The schematic circuit of the proposed front-end circuit
In this paper, an RF front-end circuit is presented for wireless applications in the 5.2 GHz frequency band. By employing the proposed circuit topology and design techniques, circuit performances such as linearity, noise and conversion gain can be improved. This paper is organized as follows. Sections 2 and 3 introduce the front-end architecture and the circuit design. The measured results of the front-end circuit are described in Section 4. Finally, concluding remarks are provided in Section 5.
2 Circuit Design and Analysis

2.1 Operation Principle of the Proposed Front-End Circuit

The proposed front-end circuit shown in Fig. 1 comprises an LNA, an RF transconductance stage, and a pair of local-oscillator (LO) switches with output loads. The complementary common-gate LNA, consisting of transistors M1 and M2 with inductors L1, and the current-enhanced transconductance stage, consisting of transistors M3 and M4, provide enough gain and better linearity. The down-conversion mixer, composed of the LO switches M5 and M6, performs frequency translation by multiplying the input RF signal with an LO switching signal, thereby obtaining an intermediate-frequency (IF) signal at the output terminal. Besides, the resistors RL are used as passive loads at the output.
Fig. 2. Small-signal equivalent circuit of the input matching network
The Study on RF Front-End Circuit Design Based on Low-Noise Amplifier Architecture
15
Fig. 3. Measured and simulated conversion gains of different types of circuits versus the RF input power
If the input RF voltage signal is v_RF(t) = V̂_RF cos(ω_RF t), and the transistors M1 and M2 of the LNA and M3 and M4 of the RF transconductance stage all operate in the saturation region, the RF current flowing into the mixer can be written as

i_RF = i_D4 − i_D3 = I_RF + (1/2)(g_m1 + g_m2)(g_m3 + g_m4) ω_RF L1 V̂_RF sin(ω_RF t)    (1)

where I_RF = I_D4 − I_D3 is the difference of the bias currents of M4 and M3. When the LO signal is applied and the LO switches are assumed to be ideal, the output current is i_O = i_RF during the first half period of the LO signal and i_O = −i_RF during the other half period. Thus, the output current can be written as

i_O = i_D5 − i_D6 = −[I_RF + (1/2)(g_m1 + g_m2)(g_m3 + g_m4) ω_RF L1 V̂_RF sin(ω_RF t)] S(ω_LO t)    (2)

where S(ω_LO t) is the Fourier series expansion of the switching function. This alternation in the sign of the output signal provides the desired mixing effect, which results in an IF output current of the form

I_o,IF = −(1/π)(g_m1 + g_m2)(g_m3 + g_m4) ω_RF L1 V̂_RF sin(ω_IF t)    (3)

Thus, the conversion gain of the front-end circuit is represented as

CG = V_IF,rms / V_RF,rms = (1/π)(g_m1 + g_m2)(g_m3 + g_m4) ω_RF L1 R_L    (4)

where g_m1 + g_m2 = K1(V_GS1 − V_tn) + K2(V_SG2 − V_tp) and g_m3 + g_m4 = K3(V_GS3 − V_tn) + K4(V_SG4 − V_tp). If the transconductance parameters of the transistors M1, M2, M3 and M4 are chosen to satisfy K1 = K2 and K3 = K4, then

g_m1 + g_m2 = K_{1,2}(V_BIAS1 − V_tp − V_tn)    (5)

g_m3 + g_m4 = K_{3,4}(V_BIAS1 − V_tp − V_tn)    (6)
From equations (5) and (6), the conversion gain in equation (4) is independent of the time-varying input signal. Hence, the proposed front-end circuit can achieve high linearity.
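As a hedged numerical illustration of equation (4) (the device values below are assumptions for the sketch, not the measured parameters of the fabricated chip):

```python
import math

def conversion_gain_db(gm12, gm34, f_rf, L1, RL):
    """Conversion gain of equation (4), returned in dB.
    gm12 = g_m1 + g_m2 and gm34 = g_m3 + g_m4, in siemens."""
    w_rf = 2 * math.pi * f_rf
    cg = (1.0 / math.pi) * gm12 * gm34 * w_rf * L1 * RL
    return 20 * math.log10(cg)

# Illustrative values: 20 mS transconductance sums, a 2 nH inductor
# and a 500-ohm load at the paper's 5.2 GHz RF frequency.
cg_db = conversion_gain_db(0.02, 0.02, 5.2e9, 2e-9, 500)
```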
3 The Current-Bleeding Technique

The mixer linearity tends to degrade due to a larger voltage drop across the load resistors R_L. In the proposed front-end circuit in Fig. 1, an active current source M4 is inserted at the source terminals of the LO switches to decrease the current through R_L; a higher conversion gain can then be obtained via the low overdrive voltages of the LO switches M5 and M6 while keeping good linearity. To keep transistor M3 in the saturation region, preserving conversion gain and linearity, the current-bleeding ratio α = I_D4 / I_D1 and the sizes of M3, M5 and M6 must satisfy the following inequality:

(W/L)_3 (1 − α) / [2 (W/L)_{5,6}] ≤ [2 V_BIAS3 − V_BIAS1 + 2(V_TH3 − V_TH5,6)] / (V_BIAS1 − 2 V_TH3)    (7)

where 0 ≤ α < 1. If the bleeding current is injected into the source terminals of the LO switches in the proposed circuit, the conversion gain can be improved by 2 dB to 3 dB in simulation. Fig. 3 shows the measured and simulated conversion gains versus the RF input power for different types of circuits.
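Inequality (7) can be checked for a candidate design point; all voltages and device sizes below are placeholders, not the paper's actual design values:

```python
def bleeding_ok(alpha, wl3, wl56, vbias1, vbias3, vth3, vth56):
    """Check the M3 saturation condition of inequality (7).
    alpha is the current-bleeding ratio I_D4 / I_D1, 0 <= alpha < 1."""
    lhs = wl3 * (1.0 - alpha) / (2.0 * wl56)
    rhs = (2 * vbias3 - vbias1 + 2 * (vth3 - vth56)) / (vbias1 - 2 * vth3)
    return lhs <= rhs

# Placeholder design point: equal device sizes, 0.5 V thresholds.
ok = bleeding_ok(alpha=0.5, wl3=10, wl56=10, vbias1=1.4,
                 vbias3=1.2, vth3=0.5, vth56=0.5)
```

Increasing the bleeding ratio α relaxes the left-hand side, which is the qualitative point of the technique: more bled current lets M3 stay in saturation with smaller LO-switch overdrive.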
Fig. 4. Measured results of fundamental output power and third-harmonic output power versus the RF input power of the front-end circuit
Fig. 5. Measured Noise figure of the proposed front-end circuit versus the IF frequency from 50MHz to 150MHz
3.1 Input Matching of the Proposed Front-End Circuit
Since the coupling capacitances (C_in and C_1) and the bias resistance (R_BIAS) are very large, they can be neglected, and the input equivalent circuit shown in Fig. 2 can be used to analyze the input matching of the proposed front-end circuit. The input impedance of the proposed circuit can be written as

Z_in = [(1 + s²C_gs″L′) r_o′ + sL′] / [(1 + G_m r_o′ + sC_gs′ r_o′)(1 + s²C_gs″L′) + s²C_gs′L′]    (8)

where r_o′ = r_o1 ∥ r_o2, C_gs′ = C_gs1 + C_gs2, C_gs″ = C_gs3 + C_gs4, L′ = L1/2 and G_m = g_m1 + g_m2. Then

Re[Z_in] = α1[α2 − ω²L′(C_gs″ − G_m r_o′ C_gs″)] r_o′ / ({α2 − ω²L′[C_gs″(1 − G_m r_o′) + C_gs′]}² + [ωC_gs′ r_o′ α1]²)    (9)

Im[Z_in] = (ωL′{α2 − ω²L′[C_gs″(1 − G_m r_o′) + C_gs′]} − ωC_gs′ α1² r_o′²) / ({α2 − ω²L′[C_gs″(1 − G_m r_o′) + C_gs′]}² + [ωC_gs′ r_o′ α1]²)    (10)

where α1 = 1 − ω²C_gs″L′ and α2 = 1 + G_m r_o′. From equations (9) and (10), the real part of Z_in can be designed to be 50 Ω by appropriately choosing the sizes of M1 and M2, and the imaginary part of Z_in can be canceled by appropriately choosing L1 and the sizes of M3 and M4.

Fig. 6. Measured results of input return loss versus RF frequency from 1 GHz to 10 GHz
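Writing equation (8) as a complex function of s = jω gives a quick numerical check of the input match; all small-signal values below are illustrative assumptions, not extracted device parameters:

```python
import math

def z_in(f, ro1, ro2, cgs1, cgs2, cgs3, cgs4, L1, gm):
    """Input impedance of equation (8) evaluated at frequency f (Hz)."""
    s = 1j * 2 * math.pi * f
    ro = ro1 * ro2 / (ro1 + ro2)        # ro'  = ro1 || ro2
    cgs_p = cgs1 + cgs2                 # Cgs' = Cgs1 + Cgs2
    cgs_pp = cgs3 + cgs4                # Cgs'' = Cgs3 + Cgs4
    Lp = L1 / 2.0                       # L'   = L1 / 2
    num = (1 + s**2 * cgs_pp * Lp) * ro + s * Lp
    den = ((1 + gm * ro + s * cgs_p * ro) * (1 + s**2 * cgs_pp * Lp)
           + s**2 * cgs_p * Lp)
    return num / den

# Illustrative small-signal values at the 5.2 GHz band
z = z_in(5.2e9, 2e3, 2e3, 60e-15, 60e-15, 40e-15, 40e-15, 2e-9, 0.02)
```

Sweeping the device sizes (which set the g_m and C_gs terms) and L1 in such a sketch is how one would steer Re[Z_in] toward 50 Ω and null Im[Z_in], as the text describes.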
Fig. 7. Chip micro-photograph of the proposed front-end circuit with a chip size is 0.767mm × 0.96mm
4 Performances of the Proposed Front-End Circuit

The proposed low-power and high-linearity front-end circuit with a novel LNA architecture is implemented in a standard 0.18 μm RF CMOS technology. The output IF is fixed at 100 MHz, with an RF input signal of 5.2 GHz and an LO input signal of 5.1 GHz. The input RF transconductance stage is biased at a dc current of 6.5 mA and a supply voltage V_BIAS1 of 1.4 V. The proposed front-end circuit has been measured under a supply voltage V_BIAS2 of 1.8 V and an LO input power of 0 dBm. The measured results are described in Fig. 3 to Fig. 7. Fig. 3 shows the measured and simulated conversion gains versus the RF input power. The measured conversion gain and input 1-dB compression point (P-1dB) of the proposed circuit achieve 18.4 dB and -16 dBm, respectively. Fig. 4 illustrates the measured fundamental output power and third-harmonic output power relative to the RF input power. As depicted in the figure, the input third-order intercept point is -6 dBm. Fig. 5 shows the measured noise figure versus the output IF frequency varying from 50 MHz to 150 MHz. The noise figure of the proposed circuit is 8.4 dB at a 100 MHz IF frequency. The measured input return loss is -9 dB, as shown in Fig. 6. The specifications of the proposed circuit are summarized and compared with other papers in Table 1. Fig. 7 shows the micro-photograph of the proposed front-end circuit using TSMC 0.18 μm CMOS technology. The chip size including pads is 0.767 mm × 0.96 mm.
Table 1. Specification for the designed front-end circuit

  Ref.                                      This work   [10]      [11]      [12]
  RF Frequency (GHz)                        5.2         5         5.8       5.1
  LO Frequency (GHz)                        5.1         4.6       5.6       5.3
  IF Frequency (MHz)                        100         40        200       200
  Supply Voltage (V)                        1.8         2.5       1.5       1.8
  Power Consumption (mW)                    9.4         31.5      17.2      38.9
  Conversion Gain (dB)                      18.4        20        15.7      19.8
  Input Third-Order Intercept Point (dBm)   -6          -6.5      -20.56    -6
  Noise Figure (dB)                         8.4         13.5      7.8       4.5
  Input Return Loss (dB)                    -9          -20       -8        —
  LO-to-RF Isolation (dB)                   >50         —         —         >60
  Technology                                0.18 μm     0.13 μm   0.18 μm   0.18 μm
5 Conclusions The authors would like to thank the National Chip Implementation Center (CIC) Steering Committee for their valuable comments and suggestions in design and fabrication of this chip. The proposed front-end circuit is fabricated by Taiwan Semiconductor Manufacturing Company (TSMC) in Taiwan.
References 1. Bevilacqua, A., Niknejad, A.M.: An Ultra-Wideband CMOS LNA for 3.1 to 10.6 GHz Wireless Receivers. In: IEEE ISSCC, vol. 1, pp. 382–533 (2004) 2. Bruccoleri, F., Klumperink, E.A.M., Nauta, B.: Noise Canceling in Wideband CMOS LNAs. In: IEEE ISSCC, vol. 1, pp. 406–407 (2002) 3. Ben Amor, M., Fakhfakh, A., Mnif, H., Loulou, M.: Dual Band CMOS LNA Design With Current Reuse Topology. In: IEEE DTIS, pp. 57–61 (2006) 4. Myoung, N.G., Kang, H.S., et al.: Low-Voltage, Low-Power and High-Gain Mixer Based on Unbalanced Mixer Cell. In: IEEE EMICC, pp. 395–398 (2006) 5. Vidojkovic, V., van der Tang, J., Leeuwenburgh, A., van Roermund, A.H.M.: A Low-Voltage Folded-Switching Mixer in 0.18-μm CMOS. IEEE JSSC 40, 1259–1264 (June 2005) 6. Vidojkovic, V., van der Tang, J., Leeuwenburgh, A., van Roermund, A.: A High Gain, Low Voltage Folded-Switching Mixer with Current-Reuse in 0.18-μm CMOS. In: IEEE RFIC Symp., pp. 31–34 (2004) 7. Reja, M., Moez, K., Filanovsky, I.: A Novel 0.6V CMOS Folded Gilbert-Cell Mixer for UWB Applications. In: IEEE SOCC, pp. 169–172 (2008) 8. Cusmai, G., Brandolini, M., Rossi, P., Svelto, F.: A 0.18-μm CMOS Selective Receiver Front-End for UWB Applications. IEEE JSSC 41(8) (2006) 9. Phan, T.-A., Kim, C.-W., et al.: Low Noise and High Gain CMOS Down Conversion Mixer. In: IEEE ICCCAS, vol. 2, pp. 1191–1194 (2004) 10. Ong, H.K.F., Choi, Y.B., Yeoh, W.G.: A Variable Gain CMOS RF Front-End for 5 GHz Applications. In: IEEE RFIT, pp. 314–317 (2007) 11. Wang, X., Weber, R.: A Novel Low Power Low Voltage LNA and Mixer for WLAN IEEE 802.11a Standard. In: IEEE SMIC, pp. 231–234 (2004)
Balanced Ridge Estimator of Coefficient in Linear Model under a Balanced Loss Function (Ⅰ)* Wenke Xu and Fengri Li** College of Science, Northeast Forestry University, P.R. China, 150040
[email protected]
Abstract. Based on the concept of Zellner's balanced loss, considering both the goodness of fit and the accuracy of the ridge estimator, this paper constructs a balanced loss function and puts forward the Balanced Ridge Estimator of the coefficient in the linear model. It also establishes the property that the Balanced Ridge Estimator is superior to the Least Squares Estimator, and the admissibility of the Balanced Ridge Estimator. Keywords: Balanced loss function, Balanced ridge estimator, Ridge estimator, Admissibility.
1 Introduction

Consider the linear model

y = Xβ + ε,  E(ε) = 0,  Cov(ε) = σ²I_n    (1)

where y is an n × 1 observation vector, X is an n × p design matrix with full column rank, β ∈ R^p and σ² > 0 are unknown parameters, and I_n is the n × n identity matrix (I for short). The Least Squares Estimator of β is βˆ = (X′X)^{-1}X′y. The basic idea of least squares is that the squared length ‖y − Xβ‖² of the deviation vector ε = y − Xβ reaches its minimum, which fully considers the goodness of fit of the linear model. To take into account both the accuracy of estimation and the goodness of fit, Zellner put forward the concept of the balanced loss function [1]:

ω(y − Xd)′(y − Xd) + (1 − ω)(d − β)′S(d − β)

with 0 ≤ ω ≤ 1 and S > 0 known. The d that minimizes the balanced loss function is the estimator we want. Considering both the accuracy of estimation and the goodness of fit, there are many studies on estimation and risk under balanced loss [2–7].
Supported by Scientific Research Funds for The National Forestry Public Welfare Project (201004026). ** Corresponding author. M. Zhu (Ed.): ICCIC 2011, Part VI, CCIS 236, pp. 20–25, 2011. © Springer-Verlag Berlin Heidelberg 2011
The Least Squares Estimator has many advantages, but when the design matrix is not of full rank or suffers from multicollinearity, it becomes ineffective. With this in mind, A.E. Hoerl and R.W. Kennard put forward the theory of ridge estimation in 1970 [8]. In the linear model (1), the ridge estimator of β is defined as βˆ_k = (X′X + kI_p)^{-1}X′y, where the parameter k > 0 can be chosen and is called the ridge parameter. Different values of k give different estimators, so the ridge estimator βˆ_k is a class of estimators. When k = 0, βˆ_k = (X′X)^{-1}X′y is the Least Squares Estimator, so in a strict sense the Least Squares Estimator is one of the ridge estimators; in the ordinary course of events, however, the ridge estimator does not include the Least Squares Estimator [9]. Based on the concept of Zellner's balanced loss, for the case where the design matrix exhibits multicollinearity, this paper constructs the balanced loss function

Q(β) = ω‖y − Xβ‖² + (1 − ω)‖(X′X + kI_p)^{-1}X′y − β‖²

Definition 1: For the linear model (1) with k > 0 and 0 < ω < 1, if βˆ_ω ∈ R^p satisfies Q(βˆ_ω) = min_{β∈R^p} Q(β), then βˆ_ω is called the Balanced Ridge Estimator of β.
Under Definition 1, this paper presents the Balanced Ridge Estimator of the parameters in the linear model and establishes properties under which it is superior to the Least Squares Estimator. The following symbols are used in this paper: A′, A^{-1}, tr(A), rank(A), R(A) and diag(λ1, λ2, …, λn) denote, respectively, the transpose of a matrix A, the inverse of A, the trace of A, the rank of A, the vector space spanned by the columns of A, and the diagonal matrix with principal-diagonal elements λ1, λ2, …, λn; ‖θ‖ denotes the length of a vector θ; MSE(θˆ) = E(θˆ − θ)′(θˆ − θ) is the mean square error of an estimator θˆ; and θˆ ~ θ denotes that θˆ is an admissible estimator of θ (under MSE).
2 Balanced Ridge Estimator of the Coefficients in the Linear Model

Theorem 1: For the linear model (1), the Balanced Ridge Estimator under the balanced loss function Q(β) is

βˆ_ω = (ωX′X + (1 − ω)I_p)^{-1}(ωX′y + (1 − ω)βˆ_k)

where βˆ_k = (X′X + kI_p)^{-1}X′y is the ridge estimator, k > 0, 0 < ω < 1.

Proof: By Definition 1, setting

∂Q(β)/∂β = −2ωX′y + 2ωX′Xβ + 2(1 − ω)β − 2(1 − ω)(X′X + kI)^{-1}X′y = 0

gives

(ωX′X + (1 − ω)I)βˆ_ω = ωX′y + (1 − ω)(X′X + kI)^{-1}X′y    (2)
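Theorem 1's estimator is straightforward to compute; the sketch below (with synthetic data, not taken from the paper) also exercises the limiting cases ω = 1 (least squares) and ω = 0 (ridge):

```python
import numpy as np

def balanced_ridge(X, y, k, w):
    """Balanced Ridge Estimator of Theorem 1:
    beta_w = (w X'X + (1-w) I)^{-1} (w X'y + (1-w) beta_k)."""
    p = X.shape[1]
    XtX, Xty = X.T @ X, X.T @ y
    beta_k = np.linalg.solve(XtX + k * np.eye(p), Xty)   # ridge estimator
    return np.linalg.solve(w * XtX + (1 - w) * np.eye(p),
                           w * Xty + (1 - w) * beta_k)

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(size=50)
beta_ls = np.linalg.solve(X.T @ X, X.T @ y)              # least squares
beta_w = balanced_ridge(X, y, k=2.0, w=0.9)
```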
Since

ωX′y + (1 − ω)(X′X + kI)^{-1}X′y = X′[ωy + (1 − ω)X(X′X)^{-1}(X′X + kI)^{-1}X′y] ∈ R(X′) = R(X′X) ⊂ R(ωX′X + (1 − ω)I)

equation (2) is a consistent equation, and its unique solution is

βˆ_ω = (ωX′X + (1 − ω)I)^{-1}(ωX′y + (1 − ω)βˆ_k)

For an arbitrary vector b,

Q(b) = ω‖y − Xβˆ_ω‖² + (1 − ω)‖(X′X + kI_p)^{-1}X′y − βˆ_ω‖² + ω‖Xβˆ_ω − Xb‖² + (1 − ω)‖βˆ_ω − b‖² + 2[−(ωX′X + (1 − ω)I)βˆ_ω + ωX′y + (1 − ω)(X′X + kI_p)^{-1}X′y]′(βˆ_ω − b)

By (2), the last term in the above equation is equal to zero, so

Q(b) ≥ ω‖y − Xβˆ_ω‖² + (1 − ω)‖(X′X + kI_p)^{-1}X′y − βˆ_ω‖² = Q(βˆ_ω)

Hence βˆ_ω is the Balanced Ridge Estimator of β.

The relations between the Balanced Ridge Estimator, the ridge estimator and the Least Squares Estimator are as follows:

Property 1: For the linear model (1), (a) when ω = 1, βˆ_ω = βˆ; (b) when ω = 0, βˆ_ω = βˆ_k; (c) when k = 0, βˆ_ω = βˆ_k = βˆ.

Theorem 2: For the linear model (1) and any scalars k > 0, 0 ≤ ω ≤ 1,

MSE(βˆ_ω) ≤ MSE(βˆ)    (3)
Proof: There exists an orthogonal matrix φ such that X′X = φΛφ′, Λ = diag(λ1, λ2, …, λp). Let Z = Xφ and α = φ′β; then the linear model (1) becomes

y = Zα + ε,  E(ε) = 0,  Cov(ε) = σ²I_n

The Balanced Ridge Estimator, ridge estimator and Least Squares Estimator of α are, respectively,

αˆ = (Z′Z)^{-1}Z′y = Λ^{-1}Z′y

αˆ_k = (Z′Z + kI_p)^{-1}Z′y = (Λ + kI_p)^{-1}Z′y

αˆ_ω = (ωZ′Z + (1 − ω)I_p)^{-1}(ωZ′y + (1 − ω)αˆ_k) = (ωΛ + (1 − ω)I_p)^{-1}(ωI_p + (1 − ω)(Λ + kI_p)^{-1})Z′y

and the following relations hold:

αˆ = φ′βˆ;  αˆ_k = φ′βˆ_k;  αˆ_ω = φ′βˆ_ω;
MSE(αˆ) = MSE(βˆ);  MSE(αˆ_k) = MSE(βˆ_k);  MSE(αˆ_ω) = MSE(βˆ_ω)    (4)
To prove expression (3), by (4) it suffices to prove MSE(αˆ_ω) ≤ MSE(αˆ). Since

MSE(αˆ_ω) = tr Cov(αˆ_ω) + (E(αˆ_ω) − α)′(E(αˆ_ω) − α)

we have

tr Cov(αˆ_ω) = σ² tr{[ωΛ + (1 − ω)I]^{-1}[ωI + (1 − ω)(Λ + kI)^{-1}] Λ [ωI + (1 − ω)(Λ + kI)^{-1}][ωΛ + (1 − ω)I]^{-1}}
= σ² Σ_{i=1}^{p} λ_i(ωλ_i + 1 − ω + ωk)² / [(λ_i + k)²(ωλ_i + 1 − ω)²] = f1(ω)

(E(αˆ_ω) − α)′(E(αˆ_ω) − α) = α′{[ωΛ + (1 − ω)I]^{-1}[ωΛ + (1 − ω)(Λ + kI)^{-1}Λ] − I}²α
= (1 − ω)²k² Σ_{i=1}^{p} α_i² / [(λ_i + k)²(ωλ_i + 1 − ω)²] = f2(ω)

with α = (α1, α2, …, αp)′. So

MSE(αˆ_ω) = σ² Σ_{i=1}^{p} λ_i(ωλ_i + 1 − ω + ωk)² / [(λ_i + k)²(ωλ_i + 1 − ω)²] + (1 − ω)²k² Σ_{i=1}^{p} α_i² / [(λ_i + k)²(ωλ_i + 1 − ω)²] = f1(ω) + f2(ω)

Define f(ω) = f1(ω) + f2(ω). Then

df1(ω)/dω = 2kσ² Σ_{i=1}^{p} λ_i(ωλ_i + 1 − ω + ωk) / [(λ_i + k)²(ωλ_i + 1 − ω)³]

df2(ω)/dω = −2(1 − ω)k² Σ_{i=1}^{p} α_i²λ_i / [(λ_i + k)²(ωλ_i + 1 − ω)³]

At ω = 1,

df1(ω)/dω |_{ω=1} = 2kσ² Σ_{i=1}^{p} 1/[(λ_i + k)λ_i²] > 0;  df2(ω)/dω |_{ω=1} = 0

so that

df(ω)/dω |_{ω=1} = df1(ω)/dω |_{ω=1} + df2(ω)/dω |_{ω=1} > 0

For ω ≤ 1, f(ω) and df(ω)/dω are both continuous functions. So when ω < 1 is sufficiently close to 1, df(ω)/dω > 0; that is, for ω close to 1, f(ω) = MSE(αˆ_ω) is a monotone increasing function of ω. Hence there is an ω* < 1 such that when ω ∈ (ω*, 1),

MSE(αˆ_ω) = f(ω) < f(1) = MSE(αˆ)
And when ω = 1, MSE(αˆ_ω) = MSE(αˆ).

Theorem 3: For the linear model (1) and any scalars k > 0, 0 ≤ ω ≤ 1,

‖βˆ_ω‖² ≤ ‖βˆ‖²    (5)

Proof: By (4), ‖βˆ_ω‖² = ‖αˆ_ω‖² and ‖βˆ‖² = ‖αˆ‖², so to prove (5) it suffices to prove ‖αˆ_ω‖² ≤ ‖αˆ‖². Because k − ωk = k(1 − ω) ≥ 0, we have λ_i(ωλ_i + 1 − ω + ωk) ≤ (λ_i + k)(ωλ_i + 1 − ω), and hence

λ_i²(ωλ_i + 1 − ω + ωk)² / [(λ_i + k)²(ωλ_i + 1 − ω)²] ≤ 1,  i = 1, 2, …, p

Since

‖αˆ_ω‖² = αˆ′ Λ[ωI + (1 − ω)(Λ + kI)^{-1}][ωΛ + (1 − ω)I]^{-1} [ωΛ + (1 − ω)I]^{-1}[ωI + (1 − ω)(Λ + kI)^{-1}]Λ αˆ
= αˆ′ diag( λ1²(ωλ1 + 1 − ω + ωk)² / [(λ1 + k)²(ωλ1 + 1 − ω)²], …, λp²(ωλp + 1 − ω + ωk)² / [(λp + k)²(ωλp + 1 − ω)²] ) αˆ
≤ αˆ′αˆ = ‖αˆ‖²

the result follows.
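The closed-form risk f(ω) = f1(ω) + f2(ω) from the proof of Theorem 2, and the canonical shrinkage factors behind Theorem 3, can be checked numerically; the eigenvalues and coefficients below are arbitrary illustrative choices:

```python
import numpy as np

def f_risk(w, lam, alpha, k, sigma2):
    """f(w) = f1(w) + f2(w): MSE of the canonical balanced ridge
    estimator, per the proof of Theorem 2."""
    b = w * lam + 1 - w
    a = w * lam + 1 - w + w * k
    f1 = sigma2 * np.sum(lam * a**2 / ((lam + k) ** 2 * b**2))
    f2 = (1 - w) ** 2 * k**2 * np.sum(alpha**2 / ((lam + k) ** 2 * b**2))
    return f1 + f2

lam = np.array([0.1, 1.0, 5.0])      # eigenvalues of X'X (illustrative)
alpha = np.array([1.0, -1.0, 2.0])   # canonical coefficients
k, sigma2 = 1.0, 1.0

mse_ls = f_risk(1.0, lam, alpha, k, sigma2)    # f(1) = MSE of least squares
mse_bw = f_risk(0.99, lam, alpha, k, sigma2)   # w near 1, inside (w*, 1)

# Theorem 3: the canonical shrinkage factors are all <= 1
w = 0.7
shrink = lam * (w * lam + 1 - w + w * k) / ((lam + k) * (w * lam + 1 - w))
```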
Lemma 1 [10]: For the linear model (1), Aβˆ ~ Cβ if and only if A(X′X)^{-1}A′ ≤ A(X′X)^{-1}C′.
Theorem 4: For the linear model (1) and any scalars k > 0, 0 ≤ ω ≤ 1, within the class of linear estimators βˆ_ω is an admissible estimator of β, that is, βˆ_ω ~ β.

Proof: By Theorem 1,

βˆ_ω = φ(ωΛ + (1 − ω)I_p)^{-1}(ωI_p + (1 − ω)(Λ + kI_p)^{-1})Λφ′βˆ

Define D = (ωΛ + (1 − ω)I_p)^{-1}(ωI_p + (1 − ω)(Λ + kI_p)^{-1})Λ. Then

φDφ′(X′X)^{-1}φDφ′ = φDΛ^{-1}Dφ′ = φ diag( λ1(ωλ1 + 1 − ω + ωk)² / [(λ1 + k)²(ωλ1 + 1 − ω)²], …, λp(ωλp + 1 − ω + ωk)² / [(λp + k)²(ωλp + 1 − ω)²] ) φ′

and

φDφ′(X′X)^{-1} = φDΛ^{-1}φ′ = φ diag( (ωλ1 + 1 − ω + ωk) / [(λ1 + k)(ωλ1 + 1 − ω)], …, (ωλp + 1 − ω + ωk) / [(λp + k)(ωλp + 1 − ω)] ) φ′

Because k − ωk = k(1 − ω) ≥ 0, we have λ_i(ωλ_i + 1 − ω + ωk) ≤ (λ_i + k)(ωλ_i + 1 − ω), and therefore

λ_i(ωλ_i + 1 − ω + ωk)² / [(λ_i + k)²(ωλ_i + 1 − ω)²] ≤ (ωλ_i + 1 − ω + ωk) / [(λ_i + k)(ωλ_i + 1 − ω)],  i = 1, 2, …, p

Therefore φDφ′(X′X)^{-1}φDφ′ ≤ φDφ′(X′X)^{-1}, and by Lemma 1, βˆ_ω is an admissible estimator of β.

Theorem 1 gives the expression of the Balanced Ridge Estimator. Theorem 2 shows that the Balanced Ridge Estimator is superior to the Least Squares Estimator under the mean square error criterion. Theorem 3 shows that the length of the Balanced Ridge Estimator is smaller than that of the Least Squares Estimator, so the Balanced Ridge Estimator is a compression of the Least Squares Estimator toward the origin, i.e., a shrinkage estimator. Theorem 4 shows that the Balanced Ridge Estimator is an admissible estimator.
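The diagonal inequality used with Lemma 1 in the proof of Theorem 4 can be spot-checked numerically (the eigenvalues, k and ω below are arbitrary illustrative choices):

```python
import numpy as np

# For each eigenvalue, the diagonal entry of phi D (X'X)^{-1} D phi'
# must not exceed the corresponding entry of phi D (X'X)^{-1} phi',
# which is the condition of Lemma 1.
lam = np.array([0.05, 0.5, 2.0, 10.0])
k, w = 1.5, 0.6
a = w * lam + 1 - w + w * k
b = w * lam + 1 - w
d_quad = lam * a**2 / ((lam + k) ** 2 * b**2)   # diag of phi D (X'X)^{-1} D phi'
d_lin = a / ((lam + k) * b)                     # diag of phi D (X'X)^{-1} phi'
```

The ratio d_quad / d_lin equals λ_i a_i / [(λ_i + k) b_i], which is at most 1 because k(1 − ω) ≥ 0; this is exactly the shrinkage factor from Theorem 3 appearing again.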
References 1. Zellner, A.: Bayesian and non-Bayesian estimation using balanced loss functions. In: Gupta, S.S., Berger, J.O. (eds.) Statistical Decision Theory and Related Topics V, pp. 377–390. Springer, New York (1994) 2. Wan, A.T.K.: Risk comparison of inequality constrained least squares and other related estimators under balanced loss. Economics Letters 46, 203–210 (1994) 3. Rodrigues, J., Zellner, A.: Weighted balanced loss function and estimation of the mean time to failure. Communications in Statistics - Theory and Methods 23, 3609–3616 (1994) 4. Giles, J.A., Giles, D.E.A., Ohtani, K.: The exact risk of some pretest and Stein-type regression estimators under balanced loss. Communications in Statistics - Theory and Methods 25, 2901–2919 (1996) 5. Xu, X., Wu, Q.: Linear Admissible Estimators of Regression Coefficient Under Balanced Loss. Acta Mathematica Scientia 20(4), 468–473 (2000) 6. Luo, H., Bai, C.: The Balanced LS Estimation of the Regressive Coefficient in a Linear Model. Journal of Hunan University (Natural Sciences) 33(2), 122–124 (2006) 7. Qiu, H., Luo, J.: Balanced Generalized LS Estimation of the Regressive Coefficient. Journal of East China Normal University (Natural Science) (5), 66–71 (2008) 8. Hoerl, A.E., Kennard, R.W.: Ridge Regression: Biased Estimation for Nonorthogonal Problems. Technometrics 12(1), 55–68 (1970) 9. Wang, S., Shi, J., Yin, S., et al.: Introduction to Linear Models, 3rd edn. Science Press, Beijing (2004) 10. Wang, S.: Linear Model Theory and Its Applications. Anhui Education Press (1987)
SEDE: A Schema Explorer and Data Extractor for HTML Web Pages Xubin Deng School of Information, Zhejiang University of Finance & Economics, Hangzhou, 310018, China
[email protected]
Abstract. We present an approach for automatically exploring relation schemas and extracting data from HTML pages. By abstracting a DOM-tree constructed from an HTML page into a set of generalized lists, this approach automatically generates a relation schema for storing the data extracted from the page. Based on this approach, we have developed a software system named SEDE (Schema Explorer and Data Extractor for HTML pages), which reduces the workload of extracting and storing data objects within HTML pages. This paper mainly introduces SEDE. Keywords: DOM-tree abstraction, HTML page, relational database, relation schema.
1 Introduction
As HTML pages contain useful data objects, how to extract them from ill-structured HTML pages is now a hot research topic. There are three classes of approaches to this goal. The first class uses a set of predefined extraction rules to search for data objects [1,2]. The second class finds semantic data blocks based on page structure and appearance [3,4,5]. The third class finds frequent subtrees in HTML parse trees [6,7]. These approaches still have limitations, such as the need for manual effort, the neglect of relationships between data objects, and the lack of attention to how data objects are organized, stored and queried. To partly overcome these limitations, this paper presents a new approach that automatically transforms HTML pages into a relational database (RDB), which includes the following steps. 1) Transformation: transform an HTML page into a set of correlated relation tables, which serves as the first RDB schema and the data source for Web-based applications. 2) Schema integration: integrate the new schema with the current RDB schema when the page changes. 3) View generation: extract Web data via views of the RDB when necessary. Based on this approach, we have developed a software system named SEDE (Schema Explorer and Data Extractor for HTML pages), which reduces the workload of extracting, storing and querying data objects within HTML pages. In this paper, we shall mainly introduce SEDE.
M. Zhu (Ed.): ICCIC 2011, Part VI, CCIS 236, pp. 26–33, 2011. © Springer-Verlag Berlin Heidelberg 2011
Related Work. Closest to this work is the Web data extraction algorithm given in [8], which employs an HTML parse tree to search for contiguously repeating structures and
extract data records into tables using a partial tree alignment approach. The differences are the following. 1) [8] needs to filter out much information that may be useful for some applications, while this work can losslessly transform an HTML page into a set of correlated relation tables. 2) [8] cannot obtain a whole schema for the HTML page, while this work can.
2 Foundation of SEDE
Transformation Algorithm. This algorithm includes three steps. 1) DOM-tree creation: this step first uses a WebBrowser control of Microsoft Visual Basic to obtain an HTML parse tree of an HTML page, and then transforms the HTML parse tree into a DOM-tree. 2) DOM-tree abstraction: this step finds continually repeating structures in the HTML DOM-tree and represents them using a set of generalized lists. 3) DOM-tree transformation and data extraction: this step constructs a schema tree for the abstracted HTML DOM-tree and fills the relation tables in the schema tree with data extracted from the abstracted HTML DOM-tree. We omit the detailed discussion of this algorithm; readers can refer to [9].
Schema Integration. As HTML pages are changeful and the schema obtained from the latest version of a page may differ from that obtained from the previous version, whenever a new schema is obtained we integrate it with the current RDB schema using an algorithm similar to the tree edit distance algorithm given in [10]. This algorithm computes an optimal sequence of operations (i.e., the edit script) that turns ordered forest F1 into ordered forest F2. The following algorithm realizes this notion.

Algorithm 1. Schema Integration
INPUT: S1: old schema tree; S2: new schema tree
OUTPUT: Turn S1 into S2, adjust relevant views, and return true; or trigger a modification alert to the user and return false.

BOOL Integrate(TNode S1, TNode S2){
  Script SC = ();              // stores a sequence of operations on S1
  Forest F1 = (S1), F2 = (S2);
  ED(NULL, F1, NULL, F2, SC);
  IF (exists op ∈ SC that deletes high-score information){
    Trigger alert; RETURN(FALSE);
  } ELSE {
    Execute SC; adjust views as S1 turns into S2.
    RETURN(TRUE);
  }
} // Integrate

float ED(TNode P1, Forest F1, TNode P2, Forest F2, Script &SC){
  // P1 (P2) is the parent of F1 (F2).
  // Compute a script turning F1 into F2 and return its edit distance.
  TNode v = the rightmost tree root in F1;
  TNode w = the rightmost tree root in F2;
  IF (F1 is empty and F2 is empty) dist = 0;
  ELSE IF (F2 is empty){
    SC = SC + (delete v);
    dist = ED(P1, F1-v, NULL, empty, SC) + Cost(delete v);
  } ELSE IF (F1 is empty){
    SC = SC + (insert w);
    dist = ED(NULL, empty, P2, F2-w, SC) + Cost(insert w);
  } ELSE {
    Script SC1 = (), SC2 = (), SC3 = ();
    dist1 = ED(P1, F1-v, P2, F2, SC1) + Cost(delete v);  SC1 = SC1 + (delete v);
    dist2 = ED(P1, F1, P2, F2-w, SC2) + Cost(insert w);  SC2 = SC2 + (insert w);
    dist3 = ED(v, CF(v), w, CF(w), SC3) + ED(P1, F1-T(v), P2, F2-T(w), SC3) + Cost(modify v to w);
    SC3 = SC3 + (modify v to w);
    IF (dist3 <= dist1 AND dist3 <= dist2){ SC = SC + SC3; dist = dist3; }
    ELSE IF (dist1 <= dist2){ SC = SC + SC1; dist = dist1; }
    ELSE { SC = SC + SC2; dist = dist2; }
  }
  RETURN(dist);
} // ED

In Algorithm 1, CF(v) denotes v's children forest, T(v) denotes the subtree rooted at v, F − v denotes the forest obtained by deleting v from F, and F − T(v) denotes the forest obtained by deleting T(v).
View Generation. After transforming the DOM-tree into the RDB, an evaluation algorithm is employed to score each relation table and attribute field, aiming at filtering out relation tables and attribute fields with poor semantic meaning (the evaluation algorithm will be introduced in another paper). By traversing the schema tree and based on the scores of relation tables and attribute fields, the following algorithm generates relation views for all relation tables with scores greater than a given threshold.

Algorithm 2. View Generation
INPUT: r: the root of a schema tree.
OUTPUT: Creates relation views for all relation tables with scores greater than a given threshold.

void GenerateView(TNode r){       // TNode: schema tree node
  String cmd = "";                // stores a set of CREATE VIEW commands
  GenerateViewScript(r, cmd);
  IF (cmd != "") RunSql(cmd);     // execute the commands in cmd
} // GenerateView

void GenerateViewScript(TNode x, String &cmd){
  IF (x == NULL) RETURN;
  IF (x is a table AND x.Score > threshold){
    cmd += create view generated by x's high-score fields;
  }
  IF (x is a table node){         // recursively check x's child tables
    FOR (each x's child p) GenerateViewScript(p, cmd);
  }
} // GenerateViewScript
User Interface of SEDE System
The main menu of SEDE is shown in Fig. 1, which offers the following functions: 1) edit data sources; 2) HTML extraction; 3) browse and edit; 4) view generation, as introduced below.

Fig. 1. Main menu of SEDE (menu items: Edit Data Sources, HTML Extraction, Browse and Edit, View Generation, Quit; the underlying workflow is (1) transformation of an HTML page into relational tables, (2) schema integration with schema alerts, and (3) view creation for applications)
Edit Data Source. This function is used to insert, modify or delete the HTML data sources to be extracted. Fig. 2 shows the user interface of this function, through which users can edit information such as which URL the HTML page comes from, which database the page will be transformed into, which server the database will be created on, and how high a score a table or an attribute must have to be meaningful to an application (i.e., the threshold).
Fig. 2. User interface of the Edit Data Source function (fields: URL, server name, database name, filter threshold; operations: Insert, Modify, Delete, Exit; a list shows each registered source with its server, database, update time and filter threshold)
HTML Extraction. This function loads a selected HTML data source into a WebBrowser control of VB and transforms it into the corresponding database, with the scores of each table and attribute evaluated automatically. It also integrates the current database schema with the old one and triggers an alert to the user if necessary. To do this, we first select an HTML data source and click "OK" (Fig. 3); once the HTML page is shown completely, we click "Extract this Page", and the selected HTML data source is automatically extracted into the given database.

Fig. 3. Select an HTML data source
Browse and Edit. This function is used to browse the extraction result and to modify the alias and score of each table and attribute in order to create proper views of the HTML page. Although the score of each table and attribute can be evaluated automatically, we still provide this interface for special uses. View Generation. This function generates views, according to the alias and score of each table and attribute, for applications. To do this, we select an HTML data source and click "OK".
3 Experiments
Application Example. Suppose we are given the Amazon bookstore page shown in Fig. 4, where the contents in the ellipse are the list of the top 100 books. The transformation result is shown in Fig. 5, where table VTB78 is automatically generated from the content in the ellipse (aliases and scores of attributes have been modified). The generated view of VTB78 (alias: TopBook) is also shown in Fig. 5.
Scalability of the Algorithm. Given an HTML page θ of size S, the cost of the algorithm (denoted Ω) can be regarded as a function of the size and distribution of θ. During processing, substrings in θ are scanned exactly once, but not all substrings are output into a corresponding relation table; only those that are attributes or contents of θ are. If we denote the sum of the sizes of the finally output substrings as E and the output ratio as
Fig. 4. Amazon bookstore page

Fig. 5. Extracting the Amazon bookstore page (the See and Edit Extraction Result window shows database AmazonBookStore on server (local); table VTB78, alias TopBook, holds the extracted book records with columns such as Book Number, Book Name, Author, Original Price and New Price, and further tables VTB79–VTB82 are generated; the alias and score of each table and column can be edited, and any table or column with a score of 3 or less is filtered out)
η = E/S, then Ω can be calculated as follows, where CS and CO denote the scan and output cost per byte, respectively:

Ω = S·CS + E·CO = (CS + η·CO)·S = (CS/η + CO)·E.    (1)
Thus we can draw two conclusions: 1) if η is a constant, then the cost is proportional to S and to E; and 2) if CO is far greater than CS, and η is not so small that CS/η becomes comparable with CO, then the linear relation between Ω and E is independent of η. These conclusions are verified by the experiment shown in Fig. 6, where the output ratios for samples 1 and 2 are 0.43 and 0.68, respectively. Fig. 6(a) shows two separate Ω-S lines for the two samples, but in Fig. 6(b) the two lines merge into one Ω-E line.

Fig. 6. Scalability experiment of the algorithm: (a) cost Ω (s) versus sample size S (MB); (b) cost Ω (s) versus output size E (MB)
4 Conclusions
Extracting, querying, and analyzing data objects "lurking" in HTML pages for Web-based applications is presently a hot research topic, and there are three categories of approaches to this goal. Their limitations include the following. 1) Some require users to construct obscure or non-standard descriptions, and are thus time-consuming and difficult for users to learn. 2) Some stipulate that the data objects to be extracted must be of a definite format, which limits their applicability. 3) Some do not solve the problems of how to extract, organize, store and query the data objects within the semantic data blocks found. 4) Some depend on heuristic rules or pre-defined parameters and may lose data that makes sense to some applications. To overcome these limitations, this paper presents an algorithm that automatically and losslessly transforms HTML pages into relation tables and provides users with a convenient and flexible data extraction method based on the view mechanism of the RDB. Based on this algorithm, a software system named SEDE (Schema Explorer and Data Extractor for HTML Pages) has been developed. Experiments show that this approach is practical for finding, extracting and maintaining useful data objects from HTML pages.
Acknowledgement. This work was supported by the Key Project of Zhejiang University of Finance & Economics (No. 2009YJZ10).
References
1. Padmadas, V., Gadge, J.: Web Data Extraction Using Visual Features. In: Proc. of Int'l Conf. and Workshop on Emerging Trends in Technology (ICWET 2010), pp. 218–221 (2010)
2. Liu, W., Meng, X., Meng, W.: ViDE: A Vision-Based Approach for Deep Web Data Extraction. IEEE Transactions on Knowledge and Data Engineering 22(3), 447–460 (2010)
3. Cai, D., Yu, S.P., Wen, J.R., Ma, W.Y.: VIPS: A Vision-based Page Segmentation Algorithm. Microsoft Technical Report, MSR-TR-2003-79
4. Hiremath, P.S., Benchalli, S.S., Algur, S.P., Udapudi, R.V.: Mining Data Regions from Web Pages. In: Proc. of Int'l Conf. on Management of Data (COMAD 2005) (2005)
5. Burget, R.: Layout Based Information Extraction from HTML Documents. In: Proc. of the 9th International Conference on Document Analysis and Recognition (ICDAR 2007), p. 5 (2007)
6. Zaki, M.J.: Efficiently Mining Frequent Trees in a Forest. In: Proc. of the 8th ACM SIGKDD Int'l Conf. on Knowledge Discovery and Data Mining (SIGKDD 2002), pp. 71–80 (2002)
7. Xiao, Y., et al.: Efficient Data Mining for Maximal Frequent Subtrees. In: Proc. of the 3rd IEEE Int'l Conf. on Data Mining (ICDM 2003), pp. 379–386 (2003)
8. Zhai, Y.H., Liu, B.: Web Data Extraction Based on Partial Tree Alignment. In: Proc. of the 14th Int'l Conf. on World Wide Web (WWW 2005), pp. 76–85 (2005)
9. Deng, X.B.: Automatic Transformation of HTML Pages into Relational Database. Journal of Information and Computational Science 7(2), 349–355 (2010)
10. Bille, P.: A Survey on Tree Edit Distance and Related Problems. Theoretical Computer Science 337(1-3), 217–239 (2005)
Application of Artificial Neural Network (ANN) for Prediction of Maritime Safety Xu Jian-Hao Navigation Department of Qingdao Ocean Shipping Mariners College Qingdao, Shandong [email protected]
Abstract. In the maritime field, prediction of marine safety is very important. All endangered large whale species are vulnerable to collisions with large ships, and "ship strikes" are the greatest known threat to one of the world's rarest whales, the North Atlantic right whale. The magnitude of this threat is likely to increase as maritime commerce expands. Factors influencing the incidence and severity of ship strikes are not well understood, although vessel speed appears to be a strong contributor. The purpose of this study was to characterize the hydrodynamic effects near a moving hull that may cause a whale to be drawn to or repelled from the hull, and to assess the accelerations exerted on a whale at the time of impact. Using scale models of a container ship and a right whale in experimental flow tanks, we assessed hydrodynamic effects and measured the accelerations experienced by the whale model in the presence of a moving vessel. Using an artificial neural network (ANN) to predict marine safety, the results are compared with experimental values and deviations are determined. Minimal deviations between experimental and predicted values are obtained, and it can be concluded that an ANN can be used for prediction of marine safety. Keywords: Maritime safety, Ship collision, Human element, Artificial neural network (ANN).
1 Introduction
Collisions with vessels (or "ship strikes") can result in injury and death in a number of marine vertebrate taxa, including large whales (Da shun, 1999; Jensen and Silber, 2003; Glass et al., 2009), sirenians (i.e., manatees and dugongs) (U.S. Fish and Wildlife Service, 2001; Greenland and Limpus, 2006; Lightsey et al., 2006), and turtles (Hazel and Gyuris, 2006; Hazel et al., 2007). All endangered large whale species are vulnerable to collisions with ships. Several reports have provided summations of records of ship strikes involving large whales worldwide, accounting for nearly 300 incidents through 2002 (Laist et al., 2001; Jensen and Silber, 2003) and over 750 incidents through 2007 (Van Waerebeek and Leaper, 2008). These numbers are certainly minima, as many other strikes likely go undetected or unreported, some collisions do not leave external evidence, and in the case of some recovered carcasses the cause of death
M. Zhu (Ed.): ICCIC 2011, Part VI, CCIS 236, pp. 34–38, 2011. © Springer-Verlag Berlin Heidelberg 2011
could not be determined (Glass et al., 2009) due, for example, to advanced decomposition. Observed injuries resulting from whale/ship collisions can include broken bones, hemorrhaging, other evidence of blunt trauma, and severe propeller cuts (Knowlton and Kraus, 2001; Moore et al., 2005; Campbell-Malone, 2007); on occasion a vessel may arrive in port with a whale carcass pinned to its bow or riding atop the bulbous bow. One critically endangered species, the North Atlantic right whale (Eubalaena glacialis), appears to be more prone, on a per-capita basis, to vessel collisions than other large whale species (Vanderlaan and Taggart, 2007), and ship strikes are considered a significant threat to recovery of the species (National Marine Fisheries Service, 2005). Given the above, knowledge of marine safety is important, so this paper investigates the application of an artificial neural network (ANN) for predicting marine safety. Such studies are a means by which the Chief Inspector of Marine Accidents may publish reports highlighting specific safety issues, safety trends, or any other issue which he feels should be brought to the attention of the marine community and the public.
2 Artificial Neural Network
An artificial neural network (ANN), usually called a neural network (NN), is a mathematical or computational model that tries to simulate the structure and/or functional aspects of biological neural networks. It consists of an interconnected group of artificial neurons and processes information using a connectionist approach to computation. In most cases an ANN is an adaptive system that changes its structure based on external or internal information that flows through the network during the learning phase. Neural networks are non-linear statistical data modeling tools. They can be used to model complex relationships between inputs and outputs or to find patterns in data. Although computing these days is truly advanced, there are certain tasks that a program made for a common microprocessor is unable to perform; a software implementation of a neural network can nevertheless be made, with its own advantages and disadvantages.
Advantages:
• A neural network can perform tasks that a linear program cannot.
• When an element of the neural network fails, it can continue without any problem thanks to its parallel nature.
• A neural network learns and does not need to be reprogrammed.
Disadvantages:
• The neural network needs training to operate.
• The architecture of a neural network is different from the architecture of microprocessors and therefore needs to be emulated.
• Large neural networks require high processing time.
Another aspect of artificial neural networks is that there are different architectures, which consequently require different types of algorithms; but despite being an apparently complex system, a neural network is relatively simple.
In engineering, neural networks serve two important functions: as pattern classifiers and as nonlinear adaptive filters. An artificial neural network is an adaptive, most often nonlinear system that learns to perform a function (an input/output map) from data. Adaptive means that the system parameters are changed during operation, normally called the training phase. After the training phase the artificial neural network parameters are fixed and the system is deployed to solve the problem at hand (the testing phase). The artificial neural network is built with a systematic step-by-step procedure to optimize a performance criterion or to follow some implicit internal constraint, which is commonly referred to as the learning rule. The input/output training data are fundamental in neural network technology, because they convey the necessary information to "discover" the optimal operating point. The nonlinear nature of the neural network processing elements provides the system with great flexibility to achieve practically any desired input/output map.
3 Meaning of Accident
For the purpose of these regulations and of the relevant section of the Act, "accident" means any occurrence on board a ship or involving a ship whereby:
(a) there is loss of life or major injury to any person on board, or any person is lost or falls overboard from the ship or one of its ship's boats;
(b) a ship causes any loss of life, major injury or material damage; is lost or is presumed to be lost; is abandoned; is materially damaged by fire, explosion, weather or other cause; grounds; is in collision; is disabled; or causes significant harm to the environment; or
(c) any of the following occur: a collapse or bursting of any pressure vessel, pipeline or valve; a collapse or failure of any lifting equipment, access equipment, hatch-cover, staging or boatswain's chair or any associated load-bearing parts; a collapse of cargo, unintended movement of cargo or ballast sufficient to cause a list, or loss of cargo overboard; a snagging of fishing gear which results in the vessel heeling to a dangerous angle; a contact by a person with loose asbestos fiber except when full protective clothing is worn; or an escape of any harmful substance or agent, if the occurrence might have caused serious injury or damage to the health of any person.
In this regulation "disabled" means not under command for a period of more than 12 hours, or for any lesser period if, as a result, the vessel needs assistance to reach port; and "grounds" means making involuntary contact with the ground, except touching briefly so that no damage is caused.
4 Probability of Grounding and Collision Events
The most cost-effective way to reduce the risk caused by collision and grounding is to reduce the probability of these events. It is a general principle that the most effective
and least costly safety provisions are those taken as far back in the chain of events as possible. The limited number of consistent, research-based analyses of preventive measures for reducing collision and grounding probabilities generally confirms that risk control options in this area are very cost-effective compared with most other risk-reducing measures introduced by maritime authorities. In recent years, there has been rapid development of new navigational systems. A growing number of Vessel Traffic Systems (VTS) are established around the world. The Automatic Identification System (AIS) has been introduced, and systems have been developed for access to AIS information through the Automatic Radar Plotting Aid (ARPA). Moreover, the Electronic Chart Display and Information System (ECDIS), with and without track control, has been installed on new vessels (see Vanem et al.). IMO has introduced requirements for new ships to fulfill particular maneuverability criteria, and safe manning levels are constantly discussed. It is generally agreed that all these activities considerably influence the probability of ship accidents in the form of collisions, contacts and groundings. However, so far very few rational analysis tools have been available to quantify the effect of these changes. Against this background, researchers work on the development of rational, mathematically based models for determining the probability of ship-ship collisions, ship collisions with floating objects such as lost containers, and grounding accidents.
Probability of ship-ship collisions. The main principle behind the most commonly used risk models is first to determine the number of possible ship accidents Na, i.e. the number of collisions if no aversive manoeuvres are made. This number Na of possible accidents, assuming blind navigation, is then multiplied by a causation probability Pc in order to find the actual accident frequency.
5 Conclusion
This paper is a step toward connecting scientific work and the economy, which is nowadays fundamental for managing research and development in both science and industry. With the application of this investigation we can contribute to the development of models for predicting different aspects of maritime safety. With the proposed model it is possible to decrease the number of experimental measurements, which are often unavailable to the maritime sector because the described measuring system is very expensive and its development cannot be afforded by the relevant branch of industry. Further work will continue with the investigation of other safety parameters that are important in studies of problems arising during navigation processes, and with the investigation of the influence of safety.
Embedded VxWorks System of Touch Screen Interrupt Handling Mechanism Design Based on the ARM9 Han Gai-ning1 and Li Yong-feng2
1 Information Engineering Department, Xianyang Normal University, China
2 Northwest Institute of Mechanical & Electrical Engineering, China [email protected]
Abstract. This paper studies the S3C2410 embedded processor, based on the ARM9, together with VxWorks as a real-time embedded operating system that must meet the system's real-time interrupt requirements. Interrupt processing is therefore essential for VxWorks system development. This design writes and debugs the touch screen interrupt on the ARM920T processor in order to discuss the interrupt handling procedures of the VxWorks operating system. Keywords: ARM9, VxWorks, Touch screen, Interrupt.
1 Introduction
The interrupt mechanism in a microprocessor includes two types of interrupt sources. One is the external interrupt source: an interrupt request sent to the CPU by an external device interface, i.e., a hardware interrupt. The other is the internal interrupt, generated by the CPU through an interrupt instruction or by an exception encountered while executing a program, known as a soft interrupt. In embedded systems, hardware interrupt handling is the most important and most critical part of real-time system design; it is the most effective way for the CPU and external devices to communicate, as it avoids wasting CPU time on repeatedly checking the status of external devices and thus improves CPU efficiency [1]. This test system is designed around the S3C2410 with its ARM9 core, the embedded real-time operating system VxWorks, an LCD display and touch screen technology.
2 Main Composition of the System
The ultimate goal of the system is to provide a vehicle-mounted embedded application for graphics and information services, with system reliability, modularity, and functionality as the main research focus. The basic component structure of the ARM-based VxWorks system is shown in the following diagram:
M. Zhu (Ed.): ICCIC 2011, Part VI, CCIS 236, pp. 39–44, 2011. © Springer-Verlag Berlin Heidelberg 2011
Fig. 1. The basic composition of the system
The design of the embedded real-time operating system VxWorks and software development on this operating system were studied.
3 Embedded System Architecture In general, the architecture of embedded systems can be divided into four parts: embedded processors, peripherals, embedded operating systems and embedded applications, as shown in Fig. 2.
Fig. 2. Embedded System Architecture
4 VxWorks Interrupt Handling Mechanism
Design of Real-Time Interrupt Handling. An interrupt usually corresponds to an external event; through interrupts the system interacts with external events. In order to obtain the shortest possible interrupt response latency, VxWorks handles interrupts on a stack different from the one used by ordinary tasks. An interrupt service routine runs in a context different from any task and has no task control block. Therefore, when an interrupt is generated, only some key registers are saved and no task context switch occurs; interrupt handling involves no task context switch at all, thereby reducing interrupt latency and giving relatively high real-time performance.
Embedded VxWorks System of Touch Screen Interrupt Handling Mechanism Design
Usually the interrupt communicates with tasks through semaphores: when an interrupt occurs, the interrupt service routine calls semGive to release a semaphore, and the task communicating through that semaphore proceeds once it has acquired it. Under VxWorks, an interrupt service routine (ISR) performs only the minimum work of noting that the interrupt has occurred; the remaining non-real-time processing is, as far as possible, handed over through inter-task communication mechanisms to be completed in a task context. This resembles the bottom-half processing mode in Linux: it avoids the restrictions on writing interrupt service routines and further reduces interrupt latency [2]. The semaphores commonly used in the VxWorks Wind kernel are highly optimized and avoid interrupt-related mutual-exclusion overhead, which also helps reduce interrupt latency.
Interrupt Programming Interface Design. To let users control interrupts through interrupt service routines written in C, VxWorks provides system interface functions in the architecture-independent library intLib and the architecture-dependent library intArchLib. An application programmer generally only needs to know the intLib library, while a BSP (Board Support Package) programmer also needs to know the intArchLib library. The most commonly used interface function provided by intLib is the interrupt connection function intConnect().
The function prototype is [3]:

STATUS intConnect (
    VOIDFUNCPTR *vector,  /* external interrupt vector to attach to */
    VOIDFUNCPTR routine,  /* interrupt service routine called when the interrupt occurs */
    int parameter         /* parameter passed to the interrupt service routine */
)

This function associates the specified interrupt service routine with the specified external interrupt vector. When the corresponding external interrupt event occurs, the interrupt service routine is called with the argument given by parameter, completing the interrupt service. VxWorks interrupt latency is short and meets real-time requirements, and users can write interrupt service routines in C without mastering assembly: calling intConnect() is enough to attach their own interrupt service function to a specific interrupt. In fact, VxWorks does not write the address of the user's function directly into the specified interrupt vector; instead it installs its own interrupt service function there, which in turn calls the user's function.
Touch Screen Interrupt Control Design. A resistive touch screen is used: when a finger presses down, the resistance changes, and the resulting voltage changes across the resistive layers can be captured through
the A/D converter, thereby locating the touched position. The voltage detectors designed for the touch screen are connected to AIN5 and AIN7, on which the X and Y coordinate values are detected respectively [4].
Touch Screen Interrupt Design Procedure
(1) Initialization. The initialization procedure sets the touch screen controller to automatic conversion mode. The value of the ADCDLY register is selected according to specific needs. The touch screen interrupt service routine TouchIRQ determines whether the touch screen is pressed.
(2) Obtaining and converting the touch screen coordinates. After the initialization procedure has started all the hardware interfaces, the VxWorks function intConnect() binds the interrupt service routine TouchIRQ directly to the interrupt vector INT_LVL_ADC, so the hardware interrupt is handled directly by the real-time system. Because a VxWorks interrupt is the system's notification of an external event, and in order to respond to interrupts quickly, the interrupt service routine TouchIRQ runs in a particular space unlike any task, so no task context switch is involved.
Touch Screen Interrupt Program Design Process
Fig. 3. Touch screen control flow
Initialization and Interrupt Implementation for the Touch Screen
(1) Touch screen initialization function, which completes the interrupt entry setup:
void TouchInit(void)
{
    Uart_Printf("touch screen test program\n");
    Uart_Printf("Please tap the touch screen and watch the serial output\n");
    rADCDLY = 50000;                     /* ADC delay register: 13.56 ms */
    rADCCON = (1 << 14) | (ADCPRS << 6) | (0 << 3) | (0 << 2) | (0 << 1) | (0);
    rADCTSC = (0 << 8) | (1 << 7) | (1 << 6) | (0 << 5) | (1 << 4) | (0 << 3) | (0 << 2) | (3);
    intConnect(31, Adc_or_TsAuto, 0);    /* attach the interrupt service routine */
    rINTMSK &= ~(BIT_ADC);
    rINTSUBMSK &= ~(BIT_SUB_TC);
}
(2) Interrupt service routine:
void Adc_or_TsAuto(void)
{
    rINTSUBMSK |= (BIT_SUB_ADC | BIT_SUB_TC);   /* mask sub-interrupts (ADC and TC) */
    if (rADCTSC & 0x100)
        rADCTSC &= 0xff;                        /* set stylus-down interrupt */
    else {
        rADCTSC = (0<<8)|(1<<7)|(1<<6)|(0<<5)|(1<<4)|(1<<3)|(1<<2)|(0);
        rADCCON |= 0x1;                         /* start auto conversion */
        while (rADCCON & 0x1);                  /* wait until ENABLE_START goes low */
        while (!(rADCCON & 0x8000));            /* wait for ECFLG (end of conversion) */
        Uart_Printf("\r\n Touch X Y Position\n");
        Uart_Printf("X-Position[AIN5] is %04d\n", (0x3ff & rADCDAT0));
        Uart_Printf("Y-Position[AIN7] is %04d\n", (0x3ff & rADCDAT1));
        rADCTSC = (1<<8)|(1<<7)|(1<<6)|(0<<5)|(1<<4)|(0<<3)|(0<<2)|(3);
    }
    rSUBSRCPND |= BIT_SUB_TC;
    rINTSUBMSK &= ~(BIT_SUB_TC);                /* unmask TC sub-interrupt */
    ClearPending(BIT_ADC);
}
The most optimized part of this design is the use of the VxWorks taskSpawn() function to create (spawn and activate) a new task that controls the touch screen interrupt. The system uses the function as follows:
/* create a task */
taskSpawn("INTERRUPEXAM", 90, 0x100, 2000, (FUNCPTR)inttask, 0);
void inttask(void)    /* task function */
{
    TouchInit();      /* initialize the external interrupt */
    intEnable(31);    /* enable the IRQ interrupt (clear the I bit in CPSR) */
}
5 Touch Screen Interrupt Test
The PC application was tested jointly through the network interface and the serial port: the application is downloaded to the target machine, and the feedback from the target machine's operation is viewed through the serial port.
Fig. 4. Touch screen interrupt test
Through joint debugging of hardware and software, the touch screen interrupt under embedded VxWorks operates normally.
6 Summary
This paper has presented the touch screen interrupt handling mechanism and its implementation under VxWorks. It makes the development and application of human-machine interfaces more convenient, and touch screen interrupt handling is of great importance in embedded applications.
References
1. Gang, J., Yang, F., Hu, G.-r.: Embedded operating systems. Computer Applications 11(4), 7 (2000)
2. Caratozzolo, P., Serra, M., Ocampo, C., Riera, J.: A proposal for the propulsion system of a series hybrid electric vehicle. In: IEEE 34th Annual Power Electronics Specialist Conference, PESC 2003, vol. 2, p. 19 (2003)
3. Yangcui, E., Wang, L., Wang, J.: Device driver code under the VxWorks real-time system. EIC 10(4) (2003)
4. Barr, M.: Programming Embedded Systems in C and C++ - Quantum Programming for Embedded Systems. CMP Books, 43, 50 (2002)
A New Architectural Design Method Based on Web3D Virtual Reality Zhang Jun Harbin Institute of Technology Northeast Forestry University [email protected]
Abstract. The incorporation of virtual reality technology into architectural design is one of the directions in which future architectural design is developing, and it will inevitably bring profound effects to architectural design. This paper focuses on the limitations and deficiencies of expression and display in architectural design. Virtual reality technology, especially Web3D-based technology, can improve solutions and achieve certain accomplishments. Keywords: Web3D virtual reality, architectural design expressions, interactive type.
1 Introduction
In the past twenty years, "virtual" has become one of the most popular words worldwide and has been incorporated into our lives. For example, we can learn in a virtual university, work in a virtual office, visit a virtual museum, watch TV programs performed by virtual performers, and go to a virtual doctor when we are sick. All of this is due to the fact that virtual reality technology has been continuously developed and advanced and widely applied in e-business, distance education, architectural design, computer-aided design and other fields, leading to profound effects. The term "virtual reality", first proposed in the 1980s by Jaron Lanier, founder of the US company VPL, denotes an integrated information technology mainly featuring immersion, interaction and imagination [1]. It comprehensively exploits computer graphics systems and other display and control interface units, providing a computer 3D technology for immersive and interactive operation. Furthermore, it is a high-fidelity man-machine interface simulating how human beings see, listen and act in the natural environment.
M. Zhu (Ed.): ICCIC 2011, Part VI, CCIS 236, pp. 45–49, 2011. © Springer-Verlag Berlin Heidelberg 2011
2 Analysis of the Necessity of Incorporating Virtual Reality Technology into Architectural Design
Experience Promotes Virtual Reality Technology in Architectural Design. With the continuous development of the times, architectural design is not only
simply expressed in two and three dimensions, and simple computer-aided design (CAD) can no longer cater to customers' requirements on architectural design. Nowadays the relationship between customers and designers has changed into an equal cooperation, and architectural design has expanded in scope; customers focus more and more on the experience offered by architectural design. By virtue of virtual reality technology, simulation effects are available to customers through visual, acoustic and tactile senses, helping them better participate in the design and communicate with the designers. Architectural design is a series of innovations covering planning, design, construction, maintenance and so on. Because of its huge cost and irreversible execution procedures, excessive errors are not allowable. Virtual reality is a computer system for creating and expressing a virtual world. Full utilization of virtual reality can reduce the working strain on designers, shorten the design period, improve design quality and save investment; the development of such technology is therefore inevitable.
Continuous Development of Digital Technology Lays the Foundation for the Application of Virtual Reality Technology. Traditional architectural design put more emphasis on engineering design and was mainly supported by CAD. However, with the rapid development of network technology and computer hardware, improvements in network bandwidth and 3D processing capability have brought Web3D virtual reality technology into architectural design expression. Virtual reality technology combining images, animations, videos and so on can create a convincing virtual world for customers [2]. One aim of Web3D is to create a new carrier combining networking, multimedia, three-dimensional graphics and virtual reality, so as to continuously expand the connotations and extensions of architectural design.
The term "Post Architectural Design" has emerged as the times require. The related digital technology has been continuously improved, and the combination of digital technology with virtual reality technology marks that technological development has climbed to a new stage.
3 Effects of Virtual Reality Technology on Architectural Design Expressions
Traditional architectural design was generally expressed in plan layouts based on perspective views. In the scheme stage, designers express their ideas by hand. After the scheme has been established, two-dimensional or three-dimensional software is generally used to produce renderings and refine the design scheme; two-dimensional software is then used for layout and working drawings. During this period, presentations of the layout of architectural functions are made through animations set in the three-dimensional rendering documents. After all the above has been finished, designers make architectural models with wood, gypsum, plastic and other materials, and complete the alterations of the architectural design scheme. Although these expressions accord with the thought process of designers, their deficiencies are obvious. Firstly, a rather large deviation arises when a two-dimensional plan view is used to express three-dimensional architecture. Also the
expressiveness is not sufficient for people to understand the overall architectural design scheme and its details. Although three-dimensional architectural animations can display the shapes and functions of buildings, they can only display them in a predetermined way due to their weak interactivity; customers cannot observe and understand the architecture in their own way. Furthermore, making architectural models is expensive, owing to the complex process and long manufacturing period. To overcome these deficiencies, we propose the concept of "Interactive Architectural Display Drawings" incorporating Web3D virtual reality technology, which not only retains the advantages of traditional architectural design expressions, but also adds the interactive features of virtual reality technology. People can learn about and understand architecture in a more objective manner through this form of expression, featuring both reality and interaction.
4 Interactive Architectural Display Drawings
Basic Concepts. Interactive architectural display drawings, by virtue of virtual reality technology and virtual architectural digital models, make three-dimensional browsing and interactive operation possible, with a high feeling of reality and interaction, so as to reflect the shapes and functions of architecture in a more comprehensive and realistic manner.
Basic Composition. Interactive architectural display drawings essentially mean that architectural digital models available for interactive operation are inserted into static background graphs. They are mainly composed of two parts: static background graphs and architectural digital models.
Basic Characteristics. Interactive architectural display drawings are mainly characterized by: a) interaction; b) fluency; c) feeling of reality; and d) real-time comparison. Detailed illustrations are given below.
Interaction. Interaction is the most remarkable characteristic distinguishing them from traditional architectural expressions, which are centered on architectural presentation: design concepts and functions illustrated through static pictures and words. Because of this single function, we consider the traditional expression a static architectural display interface, a static interface design. Interactive architectural drawings, by contrast, not only give presentations but also allow customers to participate and experience interactively. Object synthesis through computer-generated three-dimensional images, displayed through visual, acoustic and tactile senses and animations, is available for customers to experience. This changes traditional CAD, which delivered information statically. Therefore we can consider interactive architectural drawings an interactive design.
Interactive architectural drawings can establish an organic relationship between virtual architectural models and customers through interactions, so as to provide simulations of the design results. By building a Web-based virtual architectural environment, communication between designers and customers becomes possible over the Internet. The basic interactions mainly include:
• Zoom In/Out. Customers can observe the characteristics of the architecture from a long or short distance, to grasp both the details and the general scheme.
• Rotate. Customers can freely rotate the virtual architecture and observe its shape from all directions.
• Move. Users can move the virtual architecture within the window, or move a certain part in parallel for individual observation.
• Camera. Several cameras are set in the interactive architectural display drawings; through this operation, customers can freely switch among them for convenient observation of the architecture's characteristics.
• Animation. Different from traditional commercial animation, interactive architectural display drawings mainly aim at delivering architectural information and take reality, rationality and logic as their purpose; compared with the forms used by commercial animation, they are plainer and more visual. The animations set in interactive architectural drawings fall into two kinds: expression and display of the design scheme, and display of architectural structures and principles.
Fluency. Fluency means that while customers perform interactive operations on the virtual digital models, the models respond in real time without delay. The optimum state is that the models respond immediately during interaction; if a virtual digital model's response is delayed, the customer's experience is affected. The factor that most affects the response of virtual digital models is file size: with a relatively small file, the response is rather quick, otherwise the display speed is delayed. Therefore file size must be well controlled when producing the original files of the virtual digital models.
For example, we can reduce mesh counts when modeling in 3D Studio Max, build models from basic primitives, and apply texture maps to reduce file size.
Feeling of Reality. The feeling of reality means that the virtual digital models can objectively reflect architectural characteristics approaching reality, which is the precondition and basis for interaction with customers [3]. Two-dimensional and three-dimensional expression at the current stage can deliver only some attributes of buildings; with virtual reality technology, a complete virtual building can be displayed to create a real feeling for customers. Design is not only the responsibility of designers, but also involves customers and management departments. Effects approaching reality depend to a large extent on the model building and rendering quality of the three-dimensional models. If models
are made delicately and well rendered, the virtual digital models carry a strong feeling of reality.
Real-Time Comparison. During architectural design, different design schemes and various presumptions are generally proposed for the architecture to be designed. In a virtual three-dimensional space, different schemes can be switched and observed in real time; from the same observation point or sequence, different appearances can be watched. In this way, the characteristics and deficiencies of different design schemes can be compared for further decisions. In fact, the application of virtual reality technology can not only compare different schemes, but also modify a certain part and compare it in real time with the former schemes.
5 Design Ideas of Interactive Architectural Drawings
Using Web3D virtual reality technology, interactive architectural drawings can be made according to the following ideas. The whole process is divided into three parts: establishment of static background pictures, generation of virtual architectural digital models, and combination of the two. The establishment of static background pictures is to draw plan pictures with two-dimensional software according to the design principles. The generation of virtual architectural digital models is to obtain three-dimensional digital models in the architectural design and development stage, process and transfer the patterns, and finally make the virtual digital models with software. The combination of the two is to insert the virtual digital models into the static background pictures covering the architectural information, and to link the related operations through program settings.
6 Conclusion
According to the participation and immersion degree of customers, virtual reality technology is divided into four categories: Web3D virtual reality, immersive virtual reality, distributed virtual reality and augmented virtual reality, among which Web3D virtual reality technology is currently the most widely applied.
References
1. Pan, Z., Ma, X., Shi, J.: Overview of Multi-level Model Automatic Generation Technology in Virtual Reality. China Journal of Images and Graphics (4), 754–759 (1998)
2. Zhang, J.: Architectural Decoration Design and Management System based on Web and Virtual Reality. In: Computer-aided Engineering and Construction System Information - An Essay of the 5th National Multi-media Aided Engineering Academic Meeting, Xinjiang (July 2002)
3. Shi, J.: Virtual Reality Basis and Calculations, pp. 216–246. Beijing Science and Technology Press (2002)
4. Zeng, F., Su, Y.: 3D Technology under the Web Environment. Journal of Jiangsu University of Science and Technology 22(6) (2001)
Design and Implement of a Modularized NC Program Interpreter Chen Long1, Yu Dong 2, Hong Haitao1, Guo Chong2, and Han Jianqi2 1
School of Computer Science and Technology, University of Science and Technology of China (USTC), Hefei 230022, China 2 National Engineering Research Centre for High-end CNC, Shenyang Institute of Computing Technology of Chinese Academy of Sciences(SICT), Shenyang 110168, China [email protected]
Abstract. In order to improve the universality and expansibility of existing CNC (Computer Numerical Control) interpretation techniques, a new model of NC program interpreter is proposed based on the format and characteristics of NC programs. The model uses a modularized structure: because of the independence of module functions and the consistency of the interfaces, each module of the interpreter can be designed and extended independently without affecting the other modules. The grammatical rules are improved using EBNF (Extended Backus-Naur Form), which effectively eliminates the uncertainty in grammatical analysis caused by grammatical ambiguity. The error handling module introduced can not only detect all kinds of interpretation errors automatically, but also point out the location of the error directly. A modularized interpreter prototype system was developed to verify the validity of the proposed method. The experimental results show that the interpretation time is much less than the program machining time, so the proposed modularized NC program interpreter can improve interpretation efficiency significantly. Keywords: CNC, interpreter, EBNF, ambiguity, error handling.
1 Introduction
M. Zhu (Ed.): ICCIC 2011, Part VI, CCIS 236, pp. 50–57, 2011. © Springer-Verlag Berlin Heidelberg 2011
In modern CNC systems, the computer cannot deal directly with the NC program; it must first be translated into the corresponding machine instructions. The main task of this process is to extract the data through the NC program interpreter and convert them into PLC (Programmable Logic Controller) commands and motion commands, which are then sent to the PLC and motion control modules. The interpreter is therefore one of the most important modules of a CNC system, and its capability directly affects system performance. However, with the development of numerical control technology, existing CNC systems are restricted by interpreters lacking expansibility, modularity and openness [1, 2]. In past decades, researchers have proposed different approaches to meet increasingly complex processing needs. A study by Kong Zhenyu [3] analyses the construction of numerical control software and the internal data stream, and
gets the construction diagram of the interpretation system. Similarly, Wu Kangni [4] proposed a method to design the lexical analyzer and parser of the interpreter. Nevertheless, the implementation of these methods was limited to theoretical discussion of the interpretation process. Zhang Qin [5] investigates the design and implementation of an NC code interpreter for an open-architecture NC system, proposing a modularized structure consisting of a lexical analysis sub-module, a syntactic and semantic analysis sub-module and a code generation sub-module. This structure achieves only the basic functions of the interpreter, since details of the process, such as syntax analysis and the error handling mechanism, are not investigated in depth. The research by Liu Yaodong [6] describes the NC specification dictionary structure using an EBNF representation, further implemented in Tool Command Language, an embedded scripting tool. However, it does not take the ambiguity of the grammatical rules into account, which brings uncertainty to the NC code analysis. The present work proposes a new interpreter model consisting of a lexical analysis module, a syntax analysis module, a machining information storage module, a machining command converting module and an error handling module. The modularized structure allows each module of the NC program interpreter to be designed and extended independently without affecting the other modules. In the syntax analysis module, the standard NC program rules are described in EBNF (Extended Backus-Naur Form), which eliminates the ambiguity of the grammatical rules. An error handling mechanism is introduced: if errors occur during interpretation, the error handling module finds them and generates a report for the user.
Finally, a prototype system was developed to verify the feasibility of the interpreter. The experimental results show that the interpretation time is much less than the machining time, which means the proposed modularized interpreter can improve interpretation efficiency significantly.
2 Description of NC Program
The International Organization for Standardization has defined the word address format in the ISO 6983 standard as follows [7]: N...G...X...Y...Z...I...J...K...M...S...T...F...
As mentioned above, a line of NC program consists of a line number at the beginning followed by several "words". A line number is the letter N followed by an integer between 0 and 99999. A word may either give a command or provide an argument to a command; it consists of an address character followed by a real value or expression. The same address character may have different meanings. X, Y, Z, I, J, K represent movements of the machine axes, F is the feed rate, S is the spindle speed and T is the tool number. We call such a letter with special significance an address character. A value is some collection of characters that can be processed to come up with a number, such as an expression, a real number or an integer.
3 Conceptual Model of Proposed Interpreter
Figure 1 describes the conceptual model of the new interpreter, which is divided into five modules: the lexical analysis module, syntax analysis module, machining information storage module, machining command converting module and error handling module.
Fig. 1. The new model of the proposed interpreter
First, the lexical analysis module parses the NC program into several words, and the data these words represent are extracted and stored in the machining information storage module. Then the syntax analysis module checks these words against the grammatical rules. Finally, the machining command converting module translates the machining data into motion and PLC commands, which are sent on to the lower-level modules for further processing. Note that if an error occurs during interpretation, the error handling module deals with it.
4 The Design of Each Module of the Interpreter
Lexical Analysis Module. The major function of the lexical analysis module is to scan the source program from left to right and identify "word" symbols. The basic unit of an NC program is the "word": an address character followed by a value. Meanwhile, lexical analysis performs some other tasks; for example, the source code can be stripped of comments and white space (spaces, tabs, line breaks, etc.) for the follow-up treatment of the NC program. The output of lexical analysis is fed into the syntax analysis module and the machining information storage module. An example is shown in Fig. 2:
Design and Implement of a Modularized NC Program Interpreter
53
Fig. 2. Example of Lexical analysis and Syntax analysis
Syntax Analysis Module. Syntax analysis divides the sequence of NC program words into syntactic units and, building on the lexical analysis, makes sure the logical relations within each block of NC program are correct according to the rules of syntax. If a syntax error occurs, it calls the error handling module for the corresponding error handling. Fig. 2 shows an example of syntax analysis. EBNF (Extended Backus-Naur Form) is a notation for representing CFGs (context-free grammars); it extends the BNF notation developed by John Backus and Peter Naur around 1960, and has become an important tool for representing the grammatical rules of a language. The NC language can be considered a simple computer programming language, so it is reasonable to use EBNF to represent the NC program grammatical rules. The meaning of the main EBNF symbols: "=" means "defined as", "+" means "followed by", "|" means "or", "[]" means "the items within the brackets are optional", "()" means "the items within the parentheses are not optional", and "{}" means "the items within the braces can be repeated from 0 to many times".
The following is an EBNF-based grammatical rule definition of what combinations of symbols form a valid line of NC program:

line = line_number { segment } end_of_line;
line_number = letter_n {digit};
segment = word | comment;
word = address_character real_value;
comment = "**" comment_character;
address_character = letter_a | letter_b | letter_c | letter_d | letter_f | letter_g | letter_h | letter_i | letter_j | letter_k | letter_l | letter_m | letter_n | letter_p | letter_q | letter_r | letter_s | letter_t | letter_x | letter_y | letter_z;
real_value = real_number | expression_value | unary_combo;
real_number = ["-"] ((digit + {digit} + ["."] + {digit}) | (["."] + digit + {digit}));
expression_value = "(" real_value { binary_operation real_value } ")";
digit = "0" | "1" | "2" | "3" | "4" | "5" | "6" | "7" | "8" | "9";
binary_operation = "+" | "-" | "*" | "/" | "or" | "xor" | "mod";
unary_combo = ordinary_unary_operation + expression;
ordinary_unary_operation = "abs" | "sin" | "cos" | "tan" | "asin" | "acos" | "atan" | "exp" | "ln" | "sqrt";
end_of_line = ";";
letter_a = "a" | "A";
letter_b = "b" | "B";

If a sentence of the grammar corresponds to two different syntax trees, or has several distinct leftmost or rightmost derivations,
the grammar is called ambiguous. In designing grammatical rules we must avoid ambiguous grammars, because an ambiguous grammar permits two or more different interpretations when parsing, and the uncertainty of syntax analysis inevitably leads to uncertainty in the generated code. For example, the parse of "X13.68" is uncertain, because both "X=13.68" and "X1=3.68" are possible. The solution is to modify the grammar rules as follows:

address_character = ...... letter_x | letter_y | letter_z | letter_x1 | letter_y1 | letter_z1;
word = address_character "=" real_value;

The ambiguity of the grammatical rule can be effectively eliminated by adding "=". Therefore, when constructing or extending NC program grammatical rules according to specific needs, we must eliminate the parse uncertainty brought by ambiguity.
Machining Information Storage Module. The main task of the machining information storage module is to provide machining information data to the other modules; in fact, it is a data storage unit. The machining information obtained by the lexical analysis and syntax analysis modules is extracted and stored in the machining information storage module according to the division of NC instructions. The interpreter contains a great deal of internal state information, stored in this module, which consists of a line information storage structure and a global information storage structure. The line information storage structure stores all the information of one line of NC program. The global information storage structure stores all the parameters and global state of the interpreter and enables data interaction between the various lines.
Machining Command Converting Module. Each canonical machining function produces a single tool motion or logical action. These are atomic commands, intended to tell the controller what to do at the control level.
Their main task is to convert words into command formats which the controller can handle. The names of the canonical machining functions indicate roughly what they do; some of them are described in Table 1.

Table 1. Canonical machining functions

Word   Function                       Description
G01    STRAIGHT_FEED()                Linear interpolation
G17    SELECT_PLANE()                 Plane selection
G20    USE_LENGTH_UNITS()             Units selection
M00    PROGRAM_STOP()                 Program stopped
M02    PROGRAM_END()                  Program finished
M03    START_SPINDLE_CLOCKWISE()      Spindle clockwise
……     ……                             ……
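The word-to-function mapping of Table 1 can be sketched as a simple dispatch table. This is an illustrative sketch only: the function names follow Table 1, and the bodies are string stubs, as in the prototype described later where the function name is printed in place of a real controller call.

```python
# Illustrative dispatch from NC words to canonical machining functions.
# The names follow Table 1; instead of driving a controller, convert()
# returns the canonical function name as a string.

CANONICAL = {
    "G01": "STRAIGHT_FEED",
    "G17": "SELECT_PLANE",
    "G20": "USE_LENGTH_UNITS",
    "M00": "PROGRAM_STOP",
    "M02": "PROGRAM_END",
    "M03": "START_SPINDLE_CLOCKWISE",
}

def convert(word):
    """Map one NC word to its canonical machining function name."""
    try:
        return CANONICAL[word] + "()"
    except KeyError:
        # unknown words are a function-call error for the error module
        raise ValueError("no canonical function for word %r" % word)

print(convert("M03"))  # -> START_SPINDLE_CLOCKWISE()
```

In a full interpreter the table values would be callables into the motion and logic layers rather than strings; the string form keeps the sketch self-contained.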
When the NC program has been parsed by the syntax analysis module, the output, which represents machining commands and parameters, is also stored in the machining information storage structure. The main task of the machining command converting module is to fetch processing information and interpreter status information from the
Design and Implement of a Modularized NC Program Interpreter
machining information storage structure and translate them into canonical machining functions. The results are sent to the bottom module of the CNC system for the next step of the machining operation. The machining command converting module mainly deals with two types of machining control commands: non-motion machining control commands, such as the spindle speed setting (S word), the tool setting (T word), and the feed rate setting (F word); and motion machining control commands, such as point positioning (G00), linear interpolation (G01), and circular interpolation (G02).

Error Handling Module. For a practical CNC interpreter, good error-handling capability is an important evaluation criterion. If errors occur during interpretation, the error handling module finds them and finally generates a report for the user. The error handling module mainly deals with three actual types of errors: lexical errors, such as a missing character (N10 G M06 T01) or an illegal address character (U00 E01); syntax errors, such as modal compatibility errors (N70 G01 G02 X44), X and U appearing at the same time, or a missing F after G01; and function call errors, i.e., any function called incorrectly. To handle these errors, an error handling mechanism is discussed in detail. Table 1 shows the modal groups of G codes and M codes. A flag can be set for each modal group, such as G_GROUP_1_FLAG and M_GROUP_1_FLAG; similarly, there is an F_WORD_FLAG. The values of all the flags are set to zero at first; when a word belonging to one group is read, the corresponding flag is increased by one. Based on these settings, the following pseudo-code can be used to detect such errors in the NC program:
{
    if (G_GROUP_1 == G01) {
        if (!F_WORD_FLAG) return error;    // F has to be given with G01
    }
    if (G_GROUP_1_FLAG > 1) return error;  // words of the same modal group cannot appear together
}

The error handling module can easily be constructed on the basis of this error detection mechanism.
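A runnable sketch of this flag-based check, in Python for illustration. The modal-group membership shown is an assumption for the example; a complete interpreter would carry the full ISO 6983 modal-group tables.

```python
# Sketch of the modal-group error check described above. Only one
# G group (the motion group) is modeled; membership is assumed here.

MODAL_GROUPS = {
    "G_GROUP_1": {"G00", "G01", "G02", "G03"},  # motion group (assumed)
}

def check_line(words):
    """Return a list of error strings found in one parsed NC line."""
    errors = []
    flags = {group: 0 for group in MODAL_GROUPS}
    f_word_flag = 0
    for w in words:
        if w.startswith("F"):
            f_word_flag += 1
        for group, members in MODAL_GROUPS.items():
            if w in members:
                flags[group] += 1
    # F has to be given with G01 (feed motion needs a feed rate)
    if "G01" in words and not f_word_flag:
        errors.append("F word is missing for G01")
    # Words of the same modal group cannot appear together
    for group, count in flags.items():
        if count > 1:
            errors.append("conflicting words in %s" % group)
    return errors

# The faulty line N70 G01 G02 X44 triggers both checks.
print(check_line(["N70", "G01", "G02", "X44"]))
```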
5 Implementation Verification
So far, a prototype system has been implemented according to the ISO 6983 standard to verify the feasibility of the proposed NC program interpreter. Without the support of the PLC and motion control modules, the machining command converting module cannot execute the corresponding canonical machining functions, so we simply print the names of these canonical machining functions instead. For a given NC program input, the interpretation results are shown in Figure 3. For example, the word M03 in line N60 corresponds to clockwise spindle rotation, so the machining function START_SPINDLE_CLOCKWISE() is printed, while the words G01 and G02 in line N70 belong to the same modal group, so an error is displayed.
Fig. 3. NC program input and interpretation results
Furthermore, to test the interpretation rate, the prototype system was run under Red Hat Linux on a 200 MHz machine: interpreting a wire-cutting program of 50,000 lines took 9372 ms. In general, the machining time of one line is more than 0.1 s, while the interpretation of a single line takes less than 0.01 s (here about 0.19 ms on average), so the interpreter does not significantly affect the execution speed of the system.
6 Conclusion
A modularized NC program interpreter has been proposed for the CNC system of a machine tool. In the syntax analysis module, the standard NC program rules, with ambiguity effectively eliminated, are described in EBNF. In the error handling module, an error detection mechanism is put forward. A prototype NC program interpreter has been built to verify the feasibility of the proposed design, and the interpretation results indicate that system development efficiency can be significantly improved. In the future, more research will be devoted to extending the NC program language through extension of the grammar.

Acknowledgment. This work was supported by the Major National S&T Program (High-grade CNC Machine Tools and Basic Manufacturing Equipment: The Innovation Platform Construction for Supporting Technology of Open Numerical Control (ONC) System, No. 2011ZX04016-071). The foundation's support is greatly appreciated.
References

1. Xiao, T.Y., Han, X.L., Wang, X.L.: General NC code translation techniques. Journal of System Simulation 10, 1–7 (1998)
2. Zhao, D.L., Fang, K., Qian, W.: Design and realization of NC code explaining. Manufacturing Automation 28, 43–45 (2006)
3. Kong, Z.Y., Ma, J.: CNC wire cutting of ISO code interpreter. Electrical Discharge Machining 1, 21–23 (1997)
4. Wu, K.N., Li, B., Chen, J.H.: Implementation of NC code interpreter of open architecture NC system platform. China Mechanical Engineering 17, 168–171 (2006)
5. Zhang, Q., Yao, X.F.: Design and Implement of a NC Code Interpreter for Open Architecture CNC System. Modular Machine Tool & Automatic Manufacturing Technique 2, 59–61 (2010)
6. Liu, Y.D., Guo, X.G.: An intelligent NC program processor for CNC system of machine tool. Robotics and Computer-Integrated Manufacturing 23, 160–169 (2007)
7. ISO 6983. Numerical control of machines – Program format and definition of address words – Part 1: Data format for positioning, line motion and contouring control systems. International Organization for Standardization (1982)
Parallel Computing Strategy Design Based on COC

Jing-Jing Zhou

College of Information & Electronic Engineering, Zhejiang Gongshang University
[email protected]
Abstract. Sparse and unstructured computations are widely involved in engineering and scientific applications: data arrays may be indexed indirectly through the values of other arrays or through non-affine subscripts, so the data access pattern is not known until runtime. So far, all parallel computing strategies for this kind of irregular problem are oriented to a single network topology, which cannot fully exploit the advantages of modern hierarchical computing architectures such as the grid. We propose a hybrid parallel computing strategy, RP (short for "Replicated and Partially-shared"), to improve the performance of irregular applications in the COC (Cluster of Clusters) environment. Keywords: Irregular Applications, Replicated and Partially-shared, Cluster of Clusters.
1 Introduction

Irregular data access is widely involved in many scientific computing and engineering applications, in the form of indirect data indexing or non-affine subscripts. The data access pattern can only be determined at runtime. This irregularity becomes extremely important when we want to parallelize such applications. So far, there are mainly three models for parallelizing this kind of irregular application. The first is the HPF-like model. High Performance Fortran (HPF) [1] is a major contribution to the programming model for distributed memory machines. It allows programmers to specify the parallelism of a program either explicitly, with an independent directive, or implicitly, by specifying the data distribution; but deducing an optimal data distribution can be a tedious task for programmers [2]. The HPF-like model is an extension of HPF that employs the inspector/executor scheme [3] to handle the irregular problem. However, the step of specifying the data partition still comes first. Generally, each datum has a single owner, and computations that change a datum's value are performed only on its owning node [4]. Input operands to the computations are received via messages from their owners; therefore, operations on large numbers of remote data can incur significant overhead. Some work has focused on optimizing this model in recent years: P. Brezany [5] gave a runtime library which can, to some extent, handle the irregular issue; a minimized computation partition scheme was proposed in [6], as well as a cache-hit oriented computation partition scheme in [7]. The HPF-like model has been applied to many applications in practice, and has mostly been implemented with a low-

M. Zhu (Ed.): ICCIC 2011, Part VI, CCIS 236, pp. 58–65, 2011.
© Springer-Verlag Berlin Heidelberg 2011
Parallel Computing Strategy Design Based on COC
level runtime-support library. However, as the data partition is always the first step and only static analysis can be performed during compilation, the uncertainty of the data access pattern remains; therefore, this model cannot guarantee load balance among the compute nodes. The second is the Software Distributed Shared Memory (SDSM) idea proposed in [8]. By using software mechanisms to trace remote data accesses and perform communication, SDSM provides programmers with a unified view of global memory on a distributed memory system, but after every data update an implicit global synchronization is needed. Since every node taking part in the computation keeps a complete and valid copy of the entire data set, this model definitely eases the burden on programmers and compilers; however, unnecessary global synchronization can introduce huge overhead. Experiments show that applications employing this SDSM model perform rather poorly on clusters, especially when irregular computation is involved [9]. Continuous efforts have been made to solve this problem [10], but the original SDSM model gets complicated when new rules are employed. The well-known replicating computation model takes the idea of sharing data further: its essence is to eliminate all network communication cost by redundantly computing over a fully shared data set. However, as the program size grows, the lack of available memory space can become an issue and an out-of-core problem may arise. Therefore, good scalability cannot be guaranteed by either the SDSM or the replicating computation model. The third is the producer-consumer based partially shared model, "P-C PS" for short in this paper, proposed by Ayon Basumallik [11]. In his work, the specific dependency between data and computation is studied to determine which part of the entire data set should be shared among computing nodes. As only part of the data is shared, it does not occupy too much memory space.
Better load balancing can also be achieved, since the computation partition is considered prior to the data partition. However, there are still operations on remote data, which bring about communication and synchronization cost.

Table 1. Simple comparison among the three traditional models

Model      Simple   Scale   Load Balanced
HPF-like            √
SDSM       √
P-C PS              √       √
COC (Cluster of Clusters), composed of different-sized clusters, can provide enormous storage capacity and a unified, huge computing resource. These clusters may employ different network protocols or topologies and be loosely organized; the intra- and inter-cluster network topologies can be quite different. How to take advantage of such a system for irregular computations then becomes an issue. Therefore, we are dealing with the problem of how to run irregular applications efficiently in a heterogeneous-network-based computing environment.
J.-J. Zhou
Our contribution is an irregular problem solving scheme adapted to the COC computing environment: the "Replicated and Partially-shared" hybrid computing strategy, RP for short in this paper. We try to incorporate the advantages of the traditional models and eliminate the negative side effects they may bring. The essence of our method is to compute redundantly between clusters while adopting the partially shared model within clusters. The following benefits can be obtained:

- irregular applications fit the heterogeneous network well;
- shorter execution time owing to balanced workload;
- good scalability of irregular applications.
The rest of this paper is organized as follows: the RP model based on COC, together with the necessary background on irregular computing, is described in Section 2; experimental results are presented in Section 3; and Section 4 gives the conclusions.
2 RP Model Based on COC

As a COC system is composed of clusters with different storage and computing capacities, the groundwork of our model is to divide the workload among these clusters with the help of hypergraph partitioning; the cost of this stage can be amortized. The hypergraph is defined in [12]: a hypergraph H = (V, N) consists of a set of vertices V and a set of hyperedges N connecting those vertices, where each n_j ∈ N is a set of vertices from V. Notice that each datum in an irregular problem can be accessed by a set of computations; therefore, an irregular computing problem can be mapped onto a hypergraph, where V stands for the data and N stands for the data-related computing tasks. Weights w_j can be associated with the edges n_j ∈ N, representing the cost of doing task n_j on a certain machine. As shown in Figure 1, the data elements are presented as squares and the tasks as sets of edges connecting dots, where edges of the same color represent the same task. Assume this hypergraph is partitioned into three parts according to the number of clusters in a COC environment, as indicated by the three enclosing rectangles. Our strategy is to minimize the number of cut-edges as long as load balancing among the clusters is satisfied. Let N_A be the set of nets in domain A and CN_A be the set of cut-edges involved in domain A; the workload of cluster A can then be defined as

WL_A = \sum_{n_j \in N_A} w_j(A) + \sum_{n_j \in CN_A} w_j(A)

Assuming the number of clusters is n, the average workload of these clusters is

WL_{avg} = \frac{1}{n} \sum_{i=0}^{n-1} WL_i

We say the workload is balanced as long as the following condition is satisfied:

WL_i \le WL_{avg}\,(1 + \varepsilon) \quad \text{for } 0 \le i < n,
where ε is a predetermined ratio representing the maximum imbalance allowed in the task division. After this hypergraph partition, the application can be completely mapped onto the COC system as different sets of tasks, as shown in Figure 2.
Fig. 1. Sketch-map for the partition of hypergraph abstracted from irregular problems
Fig. 2. Map the irregular problem to a COC system with hypergraph partition
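The workload definition and balance condition above can be sketched as follows. The encoding of the hypergraph (a per-cluster weight table `w[j][i]`, plus lists of owned tasks and cut-edges per cluster) and all the example numbers are illustrative assumptions, not data from the paper.

```python
# Sketch of WL_A and the balance test from the formulas above.
# w[j][i] is the (assumed) cost of task j on cluster i.

def workload(cluster, tasks, cut_edges, w):
    """WL_A: weights of tasks owned by cluster A plus cut-edges touching A."""
    return (sum(w[j][cluster] for j in tasks[cluster])
            + sum(w[j][cluster] for j in cut_edges[cluster]))

def is_balanced(loads, eps):
    """Balanced iff every WL_i <= WL_avg * (1 + eps)."""
    avg = sum(loads) / len(loads)
    return all(wl <= avg * (1 + eps) for wl in loads)

# Example: 3 clusters, 4 tasks.
w = {0: [4, 5, 6], 1: [3, 3, 3], 2: [2, 2, 2], 3: [5, 4, 4]}
tasks = {0: [0], 1: [1], 2: [3]}      # tasks owned per cluster
cut_edges = {0: [2], 1: [2], 2: []}   # task 2 is a cut-edge of clusters 0 and 1

loads = [workload(i, tasks, cut_edges, w) for i in range(3)]
print(loads, is_balanced(loads, eps=0.5))
```

A real implementation would obtain `tasks` and `cut_edges` from a hypergraph partitioner; here they are fixed by hand to keep the sketch self-contained.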
Considering the heterogeneity of the inter- and intra-cluster network topologies, a hybrid strategy should be employed to accommodate irregular problems in this computing environment. Enormous computing capacity can be obtained by adopting different parallel computing models at the different levels of a COC system.

2.1 Inter-cluster Strategy
It is quite clear that any task that turns out to be a cut-edge in the hypergraph introduces communication and synchronization overhead. Suppose all the clusters in a COC system are connected over the Internet and geographically distributed; the communication overhead can then become the bottleneck for improving the program's execution efficiency. As a matter of fact, many clusters are loosely organized to form a grid-like unified computing resource, and the cost of large-scale communication among clusters is unacceptable. Both the partially and the fully shared models bring about more
or less communication overhead, but the replicating computation model can eliminate all possible network overhead by doing redundant computation, and the cost of replicating computation can be controlled since the number of cut-edges is minimized in the hypergraph partition stage. As long as the communication overhead among clusters outweighs the cost of redundant computation, the replicating computation model is the better choice. According to our experimental results, in most cases replicating computation among clusters greatly decreases the total execution time of applications.

2.2 Intra-cluster Strategy
As all the compute nodes within a cluster are connected with a high-performance network, such as InfiniBand, the communication and synchronization cost among these nodes is relatively acceptable. Execution efficiency at this level is mainly constrained by memory size, cache miss rate, etc. With all these factors taken into consideration, the producer-consumer oriented partially shared framework is the best choice.
[Figure: each node's data flows through Task Partition (perm_i, data_i), Inspector (CommTable_i), Production, Gather, Computation (target_i), and Reduction to the final result.]
Fig. 3. Proceeding flow of producer-consumer oriented partially shared model
Here, producer and consumer together define a logical relationship of data production and consumption: for example, in the code of Figure 1, data set p is produced in loop L1 and consumed in loop L2. Basumallik [11] studied the producer-consumer graph and gave some reasonable analysis strategies. By studying this logical relationship, we can specify the data that needs to be shared among compute nodes; it is therefore unnecessary to keep a complete and valid copy of the entire data set on each node, which saves quite a lot of memory space and promises more efficient execution. The processing flow of the partially shared model is given in Figure 3.
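The inspector stage of this flow can be sketched as follows. Given the indirect index array of each node's consumer loop and a data ownership map, it computes which remote elements each node must gather before computing. The block-wise ownership and the index arrays are assumptions for illustration.

```python
# Sketch of an inspector building per-node communication tables for
# irregular (indirectly indexed) accesses. Ownership is block-wise.

def owner(i, block):
    return i // block                      # block-wise data ownership

def build_comm_tables(index_arrays, block, n_nodes):
    """comm[node] = sorted remote element indices node must fetch."""
    comm = {n: set() for n in range(n_nodes)}
    for node, indices in enumerate(index_arrays):
        for i in indices:
            if owner(i, block) != node:    # remote access -> communication
                comm[node].add(i)
    return {n: sorted(s) for n, s in comm.items()}

# Node 0 owns elements 0..3, node 1 owns 4..7 (block = 4).
# Each node touches data through its own indirect index array.
tables = build_comm_tables([[0, 2, 5], [4, 1, 6, 3]], block=4, n_nodes=2)
print(tables)
```

The executor phase would then gather exactly these elements once per loop instance, rather than synchronizing the whole data set as in SDSM.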
In a word, by combining the two traditional models, the RP hybrid computing strategy fits irregular computing problems well into the heterogeneous network of a COC environment. Not only does it avoid the unacceptable data transfer delay at the inter-cluster level, but it also reduces the storage cost and ensures balanced workload and high execution efficiency at the intra-cluster level.
3 Experimental Results

Our benchmarks are based on the CG application and are implemented with MPICH, version 1.2.7. We did the experiments on a COC system composed of two clusters, which are geographically distributed and connected through the Internet. The configuration is as follows:

Item               Cluster A             Cluster B
CPU                Intel Xeon 3.0 GHz    Intel Xeon 2.8 GHz
L1 Data Cache      16 KB                 8 KB
L2 Cache           1024 KB               512 KB
Memory             2 GB                  2 GB
Disk Size          60 GB                 40 GB
Operating System   Fedora 3              Red Hat 9
Network            Gigabit switch        InfiniBand

Fig. 4. Configurations of Cluster A and B
According to our tests, the intra-cluster network is generally over 10 times faster than the inter-cluster connection. Obviously, a much shorter total execution time is achieved by employing the RP strategy, as shown in Figure 5.
[Chart: execution time in seconds versus data set size (×1000, from 100 to 2000) for the partially-shared, replicating, and R-P strategies on CG.]
Fig. 5. Experiment result on CG
4 Conclusions and Future Work

A COC oriented "Replicated and Partially-shared" hybrid computing strategy is proposed to fit irregular applications into a hierarchical, heterogeneous-network-based computation environment. A class of irregular problems employing our strategy can obtain better performance and scalability than with the traditional models. A new hybrid parallel computing model is under construction, and we hope it can be more general-purpose and self-adaptive to the characteristics of irregular applications and execution environments.

Acknowledgement. This work was supported by the National Natural Science Foundation of China (No. 60903214, 60970126), the National High Technology Development 863 Program of China (No. 2008AA01A323), and the Scientific Research Fund of Zhejiang Provincial Education Department (Project Y200908196).
References 1. High Performance Fortran Forum, High Performance Fortran language specification, version 1.0, Technical Report CRPC-TR92225, Houston, Tex (1993) 2. Frumkin, M., Jin, H., Yan, J.: Implementation of NAS Parallel Benchmarks in High Performance Fortran, Technical Report NAS-98-009 3. Das, R., Uysal, M., Saltz, J., Hwang, Y.-S.S.: Communication Optimizations for Irregular Scientific Computations on Distributed Memory Architectures 4. Koelbel, C.H., Loveman, D.B., Schreiber, R.S., Steel Jr., G.L., Zosel, M.E.: The High Performance Fortran Handbook. MIT Press, Cambridge (1994) 5. Brezany, P., Bubak, M., Malawski, M., Zajaac, K.: Large-Scale Scientific Irregular Computing on Clusters and Grids. In: Proceedings of the International Conference on Computational Science (2002) 6. Minyi, G.: Automatic Parallelization and Optimization for Irregular Scientific Applications. In: Proceedings of the 18th International Parallel and Distributed Processing Symposium (2004) 7. Han, H., Tseng, C.-W.: Exploiting Locality for Irregular Scientific Codes. IEEE Transactions on Parallel and Distributed Systems (2006)
8. Hu, Y.C., Lu, H., Cox, A.L., Zwaenepoel, W.: OpenMP for Networks of SMPs. Journal of Parallel and Distributed Computing 60(12), 1512–1530 (2000)
9. El-Ghazawi, T., Carlson, W., Draper, J.: UPC Language Specifications V1.0 (February 2001)
10. Min, S.-J., Basumallik, A., Eigenmann, R.: Optimizing OpenMP programs on Software Distributed Shared Memory Systems. International Journal of Parallel Programming 31(3), 225–249 (2003)
11. Basumallik, A., Eigenmann, R.: Optimizing Irregular Shared-Memory Applications for Distributed-Memory Systems. In: Proceedings of the 11th ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming (March 2006)
12. Feng, X.-B., Chen, L., Wang, Y.-R., An, X.-M., Ma, L., Sang, C.-L., Zhang, Z.-Q.: Integrating Parallelizing Compilation Technologies for SMP Clusters. Journal of Computer Science and Technology 20(1), 125–133 (2005)
13. Catalyurek, U.V., Aykanat, C.: A Fine-Grain Hypergraph Model for 2D Decomposition of Sparse Matrices. In: 15th International Parallel and Distributed Processing Symposium (IPDPS 2001) Workshops, vol. 3, p. 30118b (2001)
Preliminary Exploration of Volterra Filter Algorithm in Aircraft Main Wing Vibration Reduction and De-noising Control

Chen Yu¹, Shi Kun², and Wen Xinling¹

¹ Zhengzhou Institute of Aeronautical Industry Management, China
² Zhengzhou Thermal Power Corporation, China
[email protected]
Abstract. In order to reduce vibration and noise of the aircraft main wing through active control, research on non-linear adaptive filter algorithms is very important. This paper studies a variable step-size decorrelation algorithm based on the traditional Volterra LMS algorithm. In simulations with weakly and moderately correlated input signals, the improved algorithm achieves good results and its convergence speed is greatly improved: after 500 iterations, the weight-coefficient mean-square error norm (NSWE) reaches -60 dB, whereas the traditional Volterra LMS algorithm reaches only -6 dB, which demonstrates the rapidity and precision of its convergence. Keywords: Volterra series, LMS, non-linear adaptive filter, convergence speed, convergence precision.
1 Introduction

The aircraft's main wing produces unstable, separated vortex loads when the aircraft flies at a large or small angle of attack. These loads can cause strong vibration on the surface of the main wing and, in serious cases, even fatigue damage to it. Because the vortex load on the main wing is very complex and usually exhibits non-linear characteristics, research on non-linear adaptive filter algorithms is very important for main wing vibration reduction and de-noising. For non-linear problems, many non-linear adaptive filtering methods have been established, such as neural networks, Kalman filtering, particle filtering, and Volterra filtering. Most non-linear systems can be described with high precision by a Volterra series; therefore, research on Volterra adaptive filter algorithms has attracted researchers' attention at home and abroad. The LMS algorithm has many advantages, such as a simple structure and good stability; it is one of the classic, effective adaptive filter algorithms and is widely used in adaptive control, radar, system identification, signal processing, etc. However, the fixed step-length LMS adaptive algorithm faces a trade-off among convergence rate, tracking speed, maladjustment noise, etc. To overcome this inherent contradiction, various improved variable step-length LMS adaptive filter

M. Zhu (Ed.): ICCIC 2011, Part VI, CCIS 236, pp. 66–73, 2011.
© Springer-Verlag Berlin Heidelberg 2011
Preliminary Exploration of Volterra Filter Algorithm
algorithms have been developed [1]. This paper introduces a new improved algorithm based on an analysis of the traditional Volterra LMS adaptive filter algorithm.
2 Volterra LMS Adaptive Filter Algorithm

The relationship between the input x(n) and the output y(n) of a discrete non-linear Volterra system can be expressed as the Volterra series of formula (1) [2]:

y(n) = h_0 + \sum_{m_1=0}^{N-1} h_1(m_1)\, x(n-m_1)
     + \sum_{m_1=0}^{N-1} \sum_{m_2=m_1}^{N-1} h_2(m_1, m_2)\, x(n-m_1)\, x(n-m_2)
     + \sum_{m_1=0}^{N-1} \sum_{m_2=m_1}^{N-1} \sum_{m_3=m_2}^{N-1} h_3(m_1, m_2, m_3)\, x(n-m_1)\, x(n-m_2)\, x(n-m_3) + \cdots
     + \sum_{m_1=0}^{N-1} \sum_{m_2=m_1}^{N-1} \cdots \sum_{m_d=m_{d-1}}^{N-1} h_d(m_1, m_2, \ldots, m_d)\, x(n-m_1)\, x(n-m_2) \cdots x(n-m_d) + \cdots    (1)
From formula (1) we can see that the Volterra series accounts for the dynamic behavior of the system; whenever the expansion exists, it can be regarded as a Taylor series with memory, so it can describe non-linear dynamic systems. In the formula, h_d(m_1, m_2, …, m_d) is called the d-th order Volterra kernel and N is the memory length. Formula (1) implies that a non-linear system has infinitely many Volterra kernels, so in practical applications truncation must be performed, in both the order d and the memory length N, according to the actual system type and the required precision. Usually, a second-order Volterra series truncation model is adopted. Assuming h_0 = 0, formula (1) simplifies to formula (2):

y(n) = \sum_{m_1=0}^{N-1} h_1(m_1)\, x(n-m_1) + \sum_{m_1=0}^{N-1} \sum_{m_2=m_1}^{N-1} h_2(m_1, m_2)\, x(n-m_1)\, x(n-m_2)    (2)
From formulas (1) and (2), when the order d and the memory length N are large, the computational load of the filter is large. The identification problem of a non-linear system based on the Volterra series is: for an unknown non-linear system, using observations of the input and output signals and a chosen identification rule, identify the system's kernels of each order online and recursively, determine the Volterra kernel coefficients, and thereby define the non-linear system. The principle structure of non-linear system adaptive identification based on the Volterra series model is shown in Figure 1 [3].
Fig. 1. Structure diagram of Volterra adaptive filter algorithm
C. Yu, S. Kun, and W. Xinling
Through the application of the Volterra adaptive filter algorithm, we can identify an unknown non-linear system by making the error signal e(n) minimal in some sense, i.e., by driving some cost function J(n) of e(n) to a minimum. In Figure 1, W(n) is the coefficient vector of the Volterra filter; when the cost function J(n) reaches its minimum, we can consider the kernel vector H(n) ≈ W(n). Different choices of the cost function J(n) yield different adaptive algorithms. Because the Volterra input vector is defined differently from the linear filter input vector, the convergence conditions differ. The desired signal of the Volterra series adaptive filter (i.e., the signal to be estimated) is a(n). The cost function J(n) is defined as formula (3):

J(n) = e^2(n) = [y(n) - W^T(n) X(n)]^2    (3)
Consistent with the traditional least mean-square (LMS) algorithm, the Volterra filter LMS adaptive algorithm is given by formula (4):

e(n) = y(n) - W^T(n) X(n)
W(n+1) = W(n) + \mu X(n) e(n)    (4)

The LMS algorithm has the advantage of a small amount of calculation, but the non-linearity of the system enlarges the eigenvalue spread of the input signal correlation matrix, so its convergence becomes slow. In addition, the interference noise inevitably present in the LMS input produces parameter maladjustment, and the maladjustment is proportional to the iteration step; therefore, a variable step-length LMS scheme can be used to improve the performance of the algorithm. In the fixed step-length adaptive filter algorithm, the requirements of convergence speed, tracking speed for time-varying systems, and convergence accuracy are contradictory, so the convergence of the Volterra LMS filter algorithm is slow. To speed up convergence, different step lengths can be adopted for the linear and non-linear parts [4]. The step-length adjustment of a variable step-length adaptive filter should satisfy the following: the step length is large during the initial convergence stage or when the unknown system parameters change, giving a fast convergence rate and good tracking of time-varying systems; after the algorithm has converged, even if the interference signal v(n) at the main input is large, the algorithm should maintain a small step length to keep the steady-state maladjustment noise small.
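The truncated series (2) and the update (4) can be sketched as follows for memory length N = 3. The "unknown" kernel vector h below reuses the coefficients of the simulation system of Section 4 (noise-free here); the step size and sample count are illustrative choices, not values from the paper.

```python
import numpy as np

# Sketch of a second-order Volterra LMS filter: input vector X(n)
# stacks the linear terms x(n-m1) and the quadratic terms
# x(n-m1)x(n-m2), m2 >= m1, matching the truncated series (2).

N = 3
rng = np.random.default_rng(0)

def volterra_input(xbuf):
    """Build X(n) from [x(n), x(n-1), x(n-2)]."""
    lin = list(xbuf)
    quad = [xbuf[i] * xbuf[j] for i in range(N) for j in range(i, N)]
    return np.array(lin + quad)

# Kernel order: x(n), x(n-1), x(n-2), x(n)^2, x(n)x(n-1), x(n)x(n-2),
#               x(n-1)^2, x(n-1)x(n-2), x(n-2)^2
h = np.array([-0.75, 0.42, -0.34, 0.5, -0.54, -0.9, 0.23, 1.74, -1.51])

x = rng.standard_normal(5000)
w = np.zeros_like(h)
mu = 0.01
for n in range(N - 1, len(x)):
    X = volterra_input(x[n - N + 1:n + 1][::-1])  # [x(n), x(n-1), x(n-2)]
    e = h @ X - w @ X        # e(n) = y(n) - W^T(n) X(n), y(n) = H^T X(n)
    w = w + mu * e * X       # W(n+1) = W(n) + mu X(n) e(n)

print(float(np.max(np.abs(w - h))))  # kernel estimate approaches h
```

With white Gaussian input and no measurement noise, the estimate converges close to the true kernel; the variable step-length scheme of the next section addresses the speed/accuracy trade-off that appears once noise and input correlation are present.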
3 LMS Variable Step-Length Decorrelation Algorithm

In the LMS algorithm, we can define a correlation coefficient a(n) of X(n) and X(n−1), similar to a projection coefficient, as formula (5):

a(n) = \frac{X^T(n)\, X(n-1)}{X^T(n-1)\, X(n-1)}    (5)
a(n) represents the degree of association between X(n) and X(n−1): the larger |a(n)| is, the stronger the connection between them. Therefore, the improved update direction vector can be written as formula (6):
b(n) = X(n) - a(n)\, X(n-1)    (6)
Clearly, a(n)X(n−1) is the part of X(n) correlated with X(n−1); subtracting this part from X(n) is equivalent to a decorrelation operation. For example, if the input signal x(n) is generated by the model x(n) = a x(n−1) + v(n), where v(n) has mean 0 and a Gaussian distribution with variance 1, then the correlation coefficient of x(n) is given by formula (7):

c(n) = \frac{E[x(n)\, x(n-1)]}{E[x^2(n-1)]} = \frac{E[(a\, x(n-1) + v(n))\, x(n-1)]}{E[x^2(n-1)]} = a + \frac{E[v(n)\, x(n-1)]}{E[x^2(n-1)]}    (7)
Evidently, the larger the absolute value of a, the bigger c(n) and the stronger the correlation of x(n). Using c(n) to adjust the weight coefficients allows a more accurate value to be reached more quickly. Therefore, formula (4) can be amended as formula (8):

W(n+1) = W(n) + \mu\, e(n)\, b(n)    (8)
To resolve the contradiction between steady-state error and convergence speed, the constant step factor μ of formula (8) is replaced by a piecewise variable step-length factor, given by formulas (9) and (10):

p(n) = \chi\, p(n-1) + (1-\chi)\, e(n)\, e(n-1)    (9)

\mu(n+1) = \delta\mu(n) + \varepsilon p^2(n),  if n = kL and \mu_{min} \le \delta\mu(n) + \varepsilon p^2(n) \le \mu_{max}
\mu(n+1) = \mu_{max},  if n = kL and \delta\mu(n) + \varepsilon p^2(n) > \mu_{max}
\mu(n+1) = \mu_{min},  if n = kL and \delta\mu(n) + \varepsilon p^2(n) < \mu_{min}
\mu(n+1) = \mu(n),  if n \ne kL    (10)

In formula (10), L is the segment length; 0 < χ < 1 and 0 < δ < 1, with δ close to 1; ε ≥ 0 is a very small positive constant; and μ_min and μ_max are the lower and upper limits of the step length, respectively. Substituting the μ(n+1) of formula (10) into formula (8) and normalizing, we obtain the weight-vector update formula of the variable step-length decorrelation LMS algorithm, formula (11):

W(n+1) = W(n) + \frac{\mu(n+1)\, e(n)\, b(n)}{\zeta + \|b(n)\|^2}    (11)
In formula (11), ζ is a very small positive constant. Following [5], the weight-coefficient mean-square error (NSWE) is given by formula (12):

NSWE = 10 log10( Σ_{i=0}^{N−1} |h_i(n) − h_i*|² / Σ_{i=0}^{N−1} |h_i*|² )  (12)
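Formula (12) translates directly into a few lines of code. The sketch below is illustrative (the function name `nswe_db` is our own); it assumes real-valued kernel coefficients, as in the simulations of Section 4:

```python
import numpy as np

def nswe_db(h_est, h_true):
    """Weight-coefficient mean-square error (NSWE) of formula (12), in dB:
    the squared distance between the estimated and true kernel coefficients,
    normalized by the energy of the true coefficients."""
    h_est = np.asarray(h_est, dtype=float)
    h_true = np.asarray(h_true, dtype=float)
    return 10.0 * np.log10(np.sum((h_est - h_true) ** 2) / np.sum(h_true ** 2))
```

A perfect estimate drives the ratio to zero (NSWE → −∞ dB); the −60 dB level reported in Section 4 corresponds to a normalized squared error of 10⁻⁶.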
We plan to apply this adaptive filter algorithm to vibration reduction and de-noising of an aircraft main wing, and to seek a filter algorithm with more favorable convergence [6].
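One iteration of the variable step-length decorrelation LMS update, formulas (5), (6), and (9)–(11), can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name is ours, the default parameter values are taken from the simulation settings of Section 4, and a small ζ is also added to the denominator of formula (5) to guard against division by zero.

```python
import numpy as np

def vslms_decorrelation_step(W, X, X_prev, e, e_prev, p, mu, n,
                             chi=0.95, delta=0.95, eps=1e-6, zeta=1e-6,
                             mu_min=0.001, mu_max=0.5, L=12):
    """One iteration of the variable step-length decorrelation LMS update."""
    # Formula (5): projection coefficient of X(n) onto X(n-1)
    a = (X @ X_prev) / (X_prev @ X_prev + zeta)
    # Formula (6): decorrelated update direction vector
    b = X - a * X_prev
    # Formula (9): smoothed estimate of the error autocorrelation
    p = chi * p + (1.0 - chi) * e * e_prev
    # Formula (10): step length updated once per segment of length L,
    # clipped to [mu_min, mu_max]; unchanged otherwise
    if n % L == 0:
        mu = float(np.clip(delta * mu + eps * p ** 2, mu_min, mu_max))
    # Formula (11): normalized weight-vector update
    W = W + mu / (zeta + b @ b) * e * b
    return W, p, mu
```

The caller keeps X(n − 1), e(n − 1), p(n), and μ(n) across iterations and feeds them back in.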
C. Yu, S. Kun, and W. Xinling
4 Algorithm Simulation and Performance Analysis

Suppose the expected output signal a(n) of the non-linear system whose kernel coefficients of each order are to be identified is:

a(n) = −0.75x(n) + 0.42x(n − 1) − 0.34x(n − 2) + 0.5x²(n) + 0.23x²(n − 1) − 1.51x²(n − 2) − 0.54x(n)x(n − 1) + 1.74x(n − 1)x(n − 2) − 0.9x(n)x(n − 2) + v2(n).

The input signal is x(n) = ax(n − 1) + v1(n), where v1 and v2 are mutually independent Gaussian white noise signals with zero mean and unit variance. The adaptive Volterra filter is of order 2 with memory length N = 3. We select μ_max = 0.5, μ_min = 0.001, L = 12, δ = 0.95, χ = 0.95, ε = 0.000001, and ζ = 0.000001; every simulation curve is the average of 20 independent simulation runs. When the input signal is weakly correlated, i.e., x(n) = 0.3x(n − 1) + v1(n), the convergence curves of the Volterra kernel coefficients under the improved LMS algorithm are shown in Figure 2.
Fig. 2. Convergence of the Volterra kernel coefficients for a weakly correlated input signal ("―" improved Volterra LMS algorithm; "--" traditional Volterra LMS filter algorithm)
Fig. 2. (continued)
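The identification setup of Section 4 can be made concrete with a short sketch. The ordering of the second-order kernel terms and all names below (`volterra_regressor`, `H_TRUE`, `desired_output`) are our own choices for illustration:

```python
import numpy as np

def volterra_regressor(x, n):
    """Second-order Volterra input vector with memory length N = 3:
    the three linear taps followed by the six unique quadratic products."""
    lin = [x[n], x[n - 1], x[n - 2]]
    quad = [lin[i] * lin[j] for i in range(3) for j in range(i, 3)]
    return np.array(lin + quad)

# Kernel coefficients of the unknown system in Section 4, ordered to match
# the regressor above: x(n), x(n-1), x(n-2), x(n)^2, x(n)x(n-1), x(n)x(n-2),
# x(n-1)^2, x(n-1)x(n-2), x(n-2)^2.
H_TRUE = np.array([-0.75, 0.42, -0.34, 0.5, -0.54, -0.9, 0.23, 1.74, -1.51])

def desired_output(x, n, v2):
    """Expected output a(n): the system response plus observation noise v2(n)."""
    return float(H_TRUE @ volterra_regressor(x, n)) + v2
```

An order-2 filter with memory 3 thus estimates 9 coefficients; the adaptive filter's weight vector W has the same layout as `H_TRUE`.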
From the simulation results in Figure 2 it can be seen that the algorithm is superior to the traditional Volterra LMS algorithm both in convergence rate and in steady-state misadjustment. Within 500 iterations the improved algorithm essentially reaches convergence with good effect and high convergence precision, while the convergence speed of the traditional Volterra LMS algorithm lags far behind: only after 3,000 iterations do its Volterra weight coefficients approach convergence to an optimal value. A comparison of the weight-coefficient mean-square error (NSWE) is shown in Figure 3.
Fig. 3. Comparison of the weight-coefficient mean-square error (NSWE) ("―" improved Volterra LMS algorithm; "--" traditional Volterra LMS filter algorithm)
As Figure 3 shows, after 500 iterations the weight-coefficient mean-square error of this paper's algorithm reaches −60 dB, whereas the traditional Volterra LMS algorithm reaches only −6 dB; the proposed algorithm is therefore far superior in performance to the traditional Volterra LMS algorithm. When the input signal is moderately correlated, i.e., x(n) = 0.6x(n − 1) + v1(n), the improved algorithm is also superior to the traditional Volterra LMS algorithm. However, as the correlation strength increases (from 0.3 and 0.6 to 0.9), neither the traditional LMS algorithm nor the algorithm of this paper achieves rapid convergence. Consequently, under conditions of complex non-linear vibration such as an aircraft main wing, the algorithm cannot achieve the goal of vibration reduction and de-noising; this is the direction and focus of future research.
5 Conclusions

This paper studied the Volterra LMS algorithm and proposed a variable step-length decorrelation improvement of it, which largely improves the performance of the algorithm, especially when the input signal is weakly or moderately correlated. When the input signal is strongly correlated, however, the performance is greatly affected and the algorithm may even fail to converge. In the future we will therefore apply correlation theory and update the weight vector of the adaptive filter with the orthogonal components of the input signal, carrying out decorrelation of the signal so as to meet the demands of active vibration-reduction and de-noising control of a complex aircraft main wing.

Acknowledgments. This work is supported by the Aeronautical Science Foundation of China (No. 2009ZD55001 and No. 2010ZD55006).
References
1. Jin, J., Li, D., Xu, Y.: New uncorrelated variable step LMS algorithm and its application. Computer Engineering and Applications, 57–58 (2008)
2. Liu, L., Hu, P., Han, J.: A Modified LMS Algorithm for Second-order Volterra Filter. Journal of China Institute of Communications, 122–123 (2002)
3. Long, J., Wang, Z., Xia, S., Duan, Z.: An Uncorrelated Variable Step-Size Normalized LMS Adaptive Algorithm. Computer Engineering & Science, 60–61 (2006)
4. Aboulnasr, T., Mayyas, K.: A robust variable step-size LMS-type algorithm: Analysis and simulations. IEEE Trans. on Signal Processing, 631–639 (1997)
5. Lee, J., Mathews, V.J.: A fast recursive least squares adaptive second-order Volterra filter and its performance analysis. IEEE Trans. on Signal Processing, 1087–1102 (1993)
6. Li, F., Zhang, H.: A new variable step size LMS adaptive filtering algorithm and its simulations. Journal of Chongqing University of Posts and Telecommunications (Natural Science Edition), 593 (2009)
Development Strategy for Demand of ICTs in Business Teaching of New and Old Regional Comprehensive Higher Education Institutes
Hong Liu
School of Business, Linyi University, Linyi 276005, P.R. China
[email protected]
Abstract. In China, many regional higher education institutes (mainly regional universities), believed to be catalysts of economic progress (especially of commerce), have been founded since 1998. From 2006 onward, most of these regional higher education institutes were promoted in succession to new comprehensive regional higher education institutes (CRHEIs). With more and more students now studying at these CRHEIs, the quality of teaching must be improved. In this paper we analyze ICT (information and communication technology)-based teaching and learning issues at the level of CRHEIs and of old regional higher education institutes (ORHEIs, mainly regional universities). Using the normative Delphi method, we discuss issues of ICT integration in CRHEIs and ORHEIs and conclude that CRHEIs need to be aligned with a proper strategy in order to obtain the true benefits of ICT. Keywords: ZPD incidence development strategy, ICT, comprehensive regional higher education.
1 Introduction
Comprehensive regional higher business education institutes need ICT systems to facilitate the exchange of ideas and information [1]. It is recognized by many researchers that the use of ICT and its applications is an important business skill for any worker. ICT usage at an institute helps students to continue their learning beyond the classroom [2]. ICT-skilled teachers in comprehensive regional higher education institutes should adopt the right pedagogical tools and practices in their teaching and enable their students to embrace these new technologies. Over time, ICT has changed the whole process of teaching and learning. As ICT usage in institutes becomes a standard, teachers will become more informed, more interactive, and more confident in the use of various kinds of hardware and software to encourage and challenge students. But some teachers lag far behind the others in adopting ICT innovation [3]. Change in the professional domain, in ICT innovation, and in teaching methods is nevertheless ineluctable; keeping pace with the times, teachers ought to change their pedagogical tools to adapt to such change. In China, many regional higher education institutes (mainly regional colleges) have been founded since 1999. From 2006, most of these regional higher
M. Zhu (Ed.): ICCIC 2011, Part VI, CCIS 236, pp. 74–78, 2011. © Springer-Verlag Berlin Heidelberg 2011
education institutes were promoted in succession to comprehensive regional higher education institutes (CRHEIs). The number of students at these CRHEIs keeps growing, and the CRHEIs should improve their quality of teaching. In this study, teaching and learning with and without the help of ICT at CRHEIs and old regional higher education institutes (ORHEIs) is explored. Their ability gaps in tackling and solving problems are recorded, so that a proper strategy or mechanism can be figured out to reduce these ability gaps to a minimum. In the following we take Linyi University as the example for our research on CRHEIs in P.R. China. In this research, the personnel of the School of Business of Linyi University were asked to devise development strategies with the help of the normative Delphi technique. The purpose of these development strategies is to enrich teachers' and learners' experience through the use of ICT in CRHEIs, and thereby to improve the output of CRHEIs. The School of Business of Linyi University now has 130 faculty members, of whom 11 are Professors and 47 are Vice Professors; 32 hold doctoral degrees, 10 are in the process of completing doctoral degree programs, and 73 hold master's degrees. The school now has 6,030 full-time undergraduates and non-degree students. We take Qingdao Science University as the example for our research on ORHEIs in P.R. China. The School of Business of Qingdao Science University was established about 60 years ago and now has 101 faculty members, of whom 30 are Professors, 38 are Vice Professors, and 11 are lecturers; 88% of the Professors and Vice Professors hold a PhD degree. The school now has 3 degree programs and 2,081 full-time undergraduates. The organization of this paper is as follows. In Section I we give an introduction to ICT in CRHEIs. In Section II we discuss the data analysis of the Schools of Business of Linyi University and Qingdao Science University. In Section III we present the development strategies for the requirements of ICT proposed by the panelists.
2 Data Analysis
In this paper we obtained the following data on the Schools of Business of Linyi University and Qingdao Science University in the manner of [4]. Following the normative Delphi technique, a questionnaire was prepared and hand-delivered to the 230 members of staff of the School of Business of Linyi University; 180 of the panelists answered the questionnaire. Of the 126 members of staff of the School of Business of Qingdao Science University, 78 panelists answered the questionnaire. The data showing the ZPD gaps obtained through the questionnaires are given in Table 1 and Table 2. The concept of the zone of proximal development (ZPD) was coined by [5]; the ZPD gap is the difference between the future/maximum and the current state of any development or use of ICT. In this research, teaching and learning with and without the help of ICT at the CRHEI and ORHEI levels are explored and their ZPD gaps recorded, so that a strategy that reduces these gaps to a minimum can be devised. Please refer to Table 1 and Table 2 for the issues.
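Operationally, the ZPD gap of each issue reduces to simple arithmetic over the questionnaire responses: the mean rating of the desired (future/maximum) state of ICT use minus the mean rating of the current state. A minimal sketch, assuming numeric (e.g., Likert-style) ratings; the function name is ours, not the paper's instrument:

```python
def zpd_gap(current_ratings, desired_ratings):
    """ZPD gap for one issue: mean desired (future/maximum) level of ICT use
    minus mean current level, each averaged over the responding panelists."""
    return (sum(desired_ratings) / len(desired_ratings)
            - sum(current_ratings) / len(current_ratings))
```

A gap of 3.08, for instance, means the panelists rate the attainable level of ICT use for that task roughly three scale points above the current level.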
Table 1. ZPD gaps of Linyi University (new regional comprehensive higher education institute) and Qingdao Science University (old regional comprehensive higher education institute)

  Issue                                 ZPD gap of        ZPD gap of Qingdao
                                        Linyi University  Science University
1 Preparing/Developing Class Lecture    3.08              2.87
2 Presenting/Sharing Material           1.98              2.06
3 Assessing Student's Learning          1.51              0.96
4 Managing Student Conduct              1.74              1.78
5 Administrative Support                2.13              1.75
6 Academic Research                     2.30              1.97
7 Social Networks                       1.99              1.00
1. A teacher prepares/develops a class lecture by reading online and searching for information with ICT before the lecture. The ZPD gap of 3.08 for CRHEIs and of 2.87 for ORHEIs shows the level at which teachers use ICT tools and applications for these tasks.
2. For developing course material, sharing educational content, and communication between teachers and the outside using ICT and applications, a ZPD gap of 1.98 for CRHEIs and of 2.06 for ORHEIs is obtained.
3. Checking exam papers, recording grades, and announcing results take a lot of teachers' time. The ZPD gaps of teachers of CRHEIs and ORHEIs are 1.51 and 0.96, respectively.
4. For managing student conduct with the help of ICT, a ZPD gap of 1.74 for CRHEIs and of 1.78 for ORHEIs is obtained.
5. Teachers spend a lot of time on administrative tasks such as keeping student records, issuing books, and supporting students in their studies with ICT and applications. The ZPD gaps of teachers of CRHEIs and ORHEIs regarding these tasks are 2.13 and 1.75, respectively.
6. For finding research information, communicating with researchers, and sharing ideas with other teachers, a ZPD gap of 2.30 for CRHEIs and of 1.97 for ORHEIs is obtained.
7. Teachers quest for knowledge using social networks and learner forums. The ZPD gaps of teachers of CRHEIs and ORHEIs regarding these tasks are 1.99 and 1.00, respectively.
Now we discuss the above data in the manner of [6]. In comprehensive regional higher education institutes, teachers usually perform a number of tasks. It takes a teacher a lot of time to prepare a class lecture for the day-to-day teaching task. A teacher must develop course material and share educational content using ICT and applications. If a teacher is effective at checking exam papers, recording grades, and announcing results, the record-keeping tasks become much easier. Administrative tasks require a teacher to spend time keeping student records, issuing books, and supporting students.
There are many ICT tools and applications that a teacher can use to find research information and share ideas with other teachers. In almost all
aspects of these teaching and learning issues at CRHEIs in the PRC, big ability gaps are measured. Such significant gaps show the different levels of staff in using ICT and applications. However, only small ability gaps are measured at ORHEIs in the PRC. The main causes of this difference lie in the lack of funding, the unavailability of resources, and the lack of attitude or vision at CRHEIs compared with ORHEIs. The spread of ICT and applications is considered necessary at the CRHEIs of developing countries so that they can take on the pedagogical challenges arising from the latest developments. However, few strategies have been devised to solve these issues in developing countries (especially the PRC). Accordingly, in this study we try to devise a strategy comprising some important measures for ICT enhancement.
3 Conclusions
Comparing the ZPD gaps of Linyi University and Qingdao Science University, some development strategies for the requirements of ICT are listed below, most of them proposed by the panelists of the School of Business of Linyi University.
1. ICT for teaching. RHE teachers need to be proficient in the use of ICT and applications to work effectively. Recommendations suggested by our panelists in this regard are: (1) design a persistent training program for faculty/staff in the use of ICT; (2) teaching/support staff should be supported and encouraged to use innovative methods of teaching in their routine work.
2. ICT for proper attainment of students. Actions suggested by our panelists in this dimension are: (1) enable high-speed ICT access for management, faculty, and administrative staff; (2) develop and consummate local ICT; (3) measure students' progress between key stages through a management information system; (4) make available software/hardware convenient to use while delivering a lecture or performing an administrative task.
3. ICT for teachers' development. Recommendations suggested by our panelists in this dimension are: (1) RHE teaching/support staff should get opportunities to study and gain higher qualifications through scholarships/fellowships and by attending research-oriented events; (2) establish an information management strategy to be shared with other related stakeholders; (3) design a mechanism that provides access to appropriate information to other related stakeholders using portal technology.
References
1. Information and Communication Technology Strategic Plan, 2005-06 to 2009-10, University of Oxford, http://www.ict.ox.ac.uk/strategy/plan/ICT_Strategic_Plan_March2007pdf (retrieved August 2009)
2. Argyll and Bute Community Services: ICT Strategy for Education, http://www.argyllbute.gov.uk/pdffilesstore/infotechnologystrategy (retrieved August 2009)
3. Deshpande, P.: Connected – where next?: A strategy for the next phase in the development of education ICT in Bournemouth, http://www.bournemouth.gov.uk/Library/PDF/Education/Education_ICT_Strategy_2004_to_2009.pdf (retrieved August 2009)
4. Shaikh, Z.A.: Usage, acceptance, adoption, and diffusion of information & communication technologies in higher education: a measurement of critical factors. Journal of Information Technology Impact 9(2), 63–80 (2009)
A Novel Storage Management in Embedded Environment Lin Wei and Zhang Yan-yuan Dept. Computer Science and Engineering, Northwest Polytechnic Univ, Xi'an, China [email protected]
Abstract. Flash memory has been widely used in various embedded devices, such as digital cameras and smart cell phones, because of its fast access speed, high availability, and low power consumption. However, replacing hard drives with flash memory in current systems often either requires major file system changes or causes performance degradation, due to the limitations of the block-based interface and the out-of-place updates required by flash. We introduce a management framework for an object-based storage system that can optimize performance for the underlying implementation. Based on this model, we propose a data allocation method that draws on the richer information of an object-based interface. Using simulation, we show that cleaning overhead can be reduced by up to 11% by separating data and metadata; segregating hot and cold data can further reduce the cleaning overhead by up to 19%. Keywords: Flash memory, Object-based flash file system, Data allocation.
1 Introduction

The demand for storage capacity has been increasing exponentially due to the recent proliferation of multimedia content. In the meantime, NAND flash memory has become one of the most popular storage media for portable embedded systems such as MP3 players, cellular phones, PDAs (personal digital assistants), PMPs (portable media players), and in-car navigation systems. However, replacing hard drives with flash memory in current systems often either requires major file system changes or causes performance degradation, due to the limitations of the block-based interface and the out-of-place updates required by flash. Flash memory requires intelligent algorithms to handle its unique characteristics, such as out-of-place update and wear-levelling. Thus, the use of flash memory on current systems falls into two categories: Flash Translation Layer (FTL [9,10])-based systems and flash-aware file systems. An FTL is usually employed between the operating system and the flash memory. The main role of the FTL is to emulate the functionality of a block device on flash memory, hiding the erase-before-write characteristic as much as possible. Once an FTL is available on top of NAND flash memory, any disk-based file system can be used. However, since the FTL operates at the block device level, it has no access to file-system-level information; this may limit file system performance and waste computing resources. On the other hand, several flash-aware file systems, such as JFFS2 [1], YAFFS2 [2], ELF [3], and TFFS [4], have been developed to simplify the file system design
M. Zhu (Ed.): ICCIC 2011, Part VI, CCIS 236, pp. 79–83, 2011. © Springer-Verlag Berlin Heidelberg 2011
without needing an FTL and to extract maximum performance from flash memory. Currently, JFFS2 and YAFFS2 serve as the most widely used general-purpose flash file systems in embedded environments. However, flash-aware file systems are designed to be generic and are not tuned for specific hardware; they are thus relatively inflexible and cannot easily optimize performance for a range of underlying hardware. To solve these problems, we propose an object-based model for flash. In this model, files are maintained in terms of objects with variable sizes. The object-based storage model offloads the storage management layer from the file system to the device firmware without sacrificing efficiency. Object-based storage devices can therefore have intelligent data management mechanisms and can be optimized for dedicated hardware such as SSDs. We simulate an object-based flash memory and propose two data placement policies based on a typical log-structured policy. Our first approach separates data and metadata, assuming that metadata changes more frequently than data. The second approach segregates hot metadata and cold metadata to avoid additional cleaning. We compare the cleaning overhead of these approaches to identify the optimal placement policies for an object-based flash memory.
2 Background

2.1 Current Flash Memory File Systems

JFFS2 is a log-structured file system designed for flash memories. The basic unit of JFFS2 is a node, in which variable-sized data and metadata of the file system are stored. Each node in JFFS2 maintains metadata for a given file, such as the physical address, its length, and pointers to the next nodes that belong to the same file. Using these metadata, JFFS2 constructs in-memory data structures that link the whole directory tree of the file system. This design was tolerable as long as JFFS2 targeted the small flash memories it was originally designed for. However, as the capacity of flash memory increases, the large memory footprint of JFFS2, mainly caused by keeping the whole directory structure in memory, becomes a severe problem. The memory footprint is usually proportional to the number of nodes; thus, the more data the file system has, the more memory is required. YAFFS2 is another variant of log-structured file system [2]. The structure of YAFFS2 is similar to that of the original JFFS2. The main differences are that node header information is moved to the NAND spare area and that every data unit, called a chunk, has the same size as a NAND page, to utilize NAND flash memory efficiently. Similar to JFFS2, YAFFS2 keeps data structures in memory for each chunk to identify the physical location of the chunk on flash memory. It also maintains the full directory structure in main memory, since the chunk representing a directory entry has no information about its children. In order to build these in-memory data structures, YAFFS2 scans all the spare areas across the whole NAND flash memory. Therefore, YAFFS2 faces the same problems as JFFS2.

2.2 Object-Based Flash Translation Layer

In a system built on object-based storage devices (OSDs) [5, 6], the file system offloads the storage management layer to the OSDs, giving the storage device more flexibility in
data allocation and space management. Recently, Rajimwale et al. proposed the use of an object-based model for SSDs [7]. The richer object-based interface has great potential to improve performance not only for SSDs but also for other new technologies. Our object-based model on flash can be divided into two main components: an object-based file system and one or more OSDs. The object-based file system maintains a mapping table between the file name and the unique object identifier for name resolution. A flash-based OSD consists of an object-based FTL and the flash hardware. The object-based FTL in turn contains two parts: a data placement engine that stores data into available flash segments, and an index structure that maintains the hierarchy of physical data locations. A cleaning mechanism is embedded to reclaim obsolete space and manage wear levelling. The status of each object is maintained in a data structure called an onode (object inode), which is managed internally in the OSD.
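The division of labor described above — name resolution in the object-based file system, per-object onodes and page mapping inside the OSD — can be sketched with illustrative data structures (all names here are our own choices, not the paper's implementation):

```python
from dataclasses import dataclass, field

@dataclass
class Onode:
    """Per-object metadata (the onode) managed internally by the OSD."""
    object_id: int
    size: int = 0
    # index structure: object-relative page number -> physical flash page
    page_map: dict = field(default_factory=dict)

class ObjectBasedFS:
    """Name-resolution layer: maps file names to unique object identifiers.
    The OSD below it sees only object ids and onodes, never file names."""
    def __init__(self):
        self.name_to_oid = {}
        self.onodes = {}
        self.next_oid = 0

    def create(self, name):
        oid = self.next_oid
        self.next_oid += 1
        self.name_to_oid[name] = oid
        self.onodes[oid] = Onode(object_id=oid)
        return oid

    def lookup(self, name):
        return self.onodes[self.name_to_oid[name]]
```

Because the OSD owns the onode and the page map, the placement engine is free to choose where each object's pages land on flash, which is what the policies of Section 3 exploit.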
3 Data Allocation Method

One optimization enabled by the object-based model is the exploration of intelligent data placement policies to reduce cleaning overhead. In a typical log-structured policy, data and metadata are written sequentially to a segment to avoid erase-before-write; we term this a combined policy. The problem is that different data types are stored together: since metadata is usually updated more frequently than user data, this approach forces the cleaner to move a large amount of live user data out before erasing a victim segment.

3.1 Centralized Policy

Our first approach, the centralized policy, separates metadata and data into different segments, as was done in systems like DualFS [7] and hFS [8]. Figure 1 shows the data locations under the centralized policy. The SB (Super Block) occupies the first segment of the flash memory, and the SIB (Segment Information Block) records the status of each segment. The metadata of each object is centralized into the onode segment. Unlike those systems, which do not manage file metadata internally, this can easily be accomplished in OSDs given sufficient information from the file system.
Fig. 1. Data location with centralized policy
3.2 Cold-Hot Model

When write requests are performed, the system must allocate free pages to store the data. The data can be classified as hot or cold. If the system stores hot and cold data in the same segment, the cleaning activity has to copy valid objects to other free flash memory space in order to reclaim segments, which causes a lot of extra overhead. To address this problem, hot and cold data are
stored separately in different segments. When the system writes new data to flash memory, the data is written to the cold segment. If the data is later updated, it is considered hot and is written to the upper (hot) region. A segment containing obsolete data is then moved to the appropriate dirty list, according to its number of invalid pages, once it has no free pages left.
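The allocation rule of the cold-hot model — a first write goes to the cold segment, while a rewrite marks the data hot and invalidates the old page — can be sketched as follows (segments are simplified to append-only lists, and all names are illustrative):

```python
class HotColdAllocator:
    """Places first-written data in the cold segment and rewritten (hot)
    data in the hot segment, counting pages invalidated by updates."""
    def __init__(self):
        self.cold = []          # pages written once, assumed cold
        self.hot = []           # pages rewritten at least once, assumed hot
        self.written = set()    # (object_id, page_no) pairs seen before
        self.obsolete = 0       # pages invalidated by updates

    def write(self, object_id, page_no, data):
        key = (object_id, page_no)
        if key in self.written:
            # update: the old copy becomes obsolete, the new copy is hot
            self.obsolete += 1
            self.hot.append((key, data))
        else:
            # first write: data goes to the cold segment
            self.written.add(key)
            self.cold.append((key, data))
```

The payoff is that obsolete pages cluster in hot segments, which the cleaner can then reclaim with little live data to copy.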
4 Experiments

We have implemented a simulator consisting of 512 MB of NAND-type flash memory in the MTD [11] module of Linux. Table 1 lists our experimental environment and settings. We also implemented three different data placement policies: the combined policy, the centralized policy, and the centralized and hot-cold policy. The workload generator converts file-system-call-level traces to object-based requests and passes them to the OSDs. The FTL contains an index structure, the data placement policies, and a cleaner. The evaluation mainly focuses on the cleaning overhead, in terms of the number of segments cleaned and the number of bytes copied during cleaning, under the three data placement policies.

Table 1. Experimental environment and settings

Experimental Environment        NAND Flash
CPU: Pentium 4 3.2 GHz          Block size: 16 KB
Memory: 512 MB                  Page size: (512 + 16) B
Flash memory: 512 MB            Page read time: 35.9 us
OS: Linux 2.6.11                Page write time: 226 us
MTD module: blkmtd.o            Block erase time: 2 ms
For each policy in Figure 2, the left bar indicates the total number of segments cleaned and the right bar the number of bytes copied during garbage collection; each bar is normalized to the combined policy and broken down into data cleaned/moved and metadata cleaned/moved. The centralized policy can reduce cleaning overhead by up to 11%, and the centralized and hot-cold policy can further reduce the overhead by up to 19%.

Fig. 2. Cleaning overhead of the three placement policies (combined, centralized, centralized and hot-cold)
The amount of live data copied under the centralized policy in the read-heavy workload is reduced because dirty metadata segments contain less live data than data segments, so fewer pages are copied out of victim segments. By further segregating frequently updated metadata such as access times, the hot-cold policy avoids repeated metadata rewrites and reduces the cleaning overhead significantly further.
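This mechanism can be illustrated with the cleaner's cost model: erasing a victim segment requires copying out each still-valid page first, so segments dominated by obsolete (frequently rewritten) metadata are cheap to reclaim, and a greedy cleaner picks them first. A sketch under these assumptions (function names are ours):

```python
def cleaning_cost(segments):
    """Live pages that must be copied out before each segment can be erased.
    `segments` maps segment id -> list of page-validity flags (True = live)."""
    return {seg: sum(pages) for seg, pages in segments.items()}

def pick_victim(segments):
    """Greedy victim selection: clean the segment with the fewest live pages."""
    cost = cleaning_cost(segments)
    return min(cost, key=cost.get)
```

Separating data types therefore pays off exactly when it concentrates invalid pages into a few segments, lowering the minimum cost the greedy cleaner can find.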
5 Conclusions

The performance of flash memory is limited by the standard block-based interface. To address this problem, we have proposed the use of an object-based storage model for flash memory. We have explored a data allocation method for the object-based file system that separates frequently updated metadata from data. The experiments show that the centralized and hot-cold data placement policies reduce cleaning overhead compared with the typical log-structured scheme.
References
1. Woodhouse, D.: JFFS: The Journaling Flash File System. In: Proc. Ottawa Linux Symposium (2001)
2. Aleph One Ltd.: YAFFS: Yet another flash file system, http://www.yaffs.net
3. Dai, H., Neufeld, M., Han, R.: ELF: an efficient log-structured flash file system for micro sensor nodes. In: ACM Conference on Embedded Networked Sensor Systems (SenSys), pp. 176–187 (2004)
4. Douglis, F., Cáceres, R., Kaashoek, M., Li, K., Marsh, B., Tauber, J.: Storage Alternatives for Mobile Computers. In: Symposium on Operating Systems Design and Implementation (OSDI), pp. 25–37 (1994)
5. Rajimwale, A., Prabhakaran, V., Davis, J.D.: Block management in solid-state devices. In: USENIX Annual Technical Conference (June 2009)
6. Woodhouse, D.: The journaling flash file system. In: Ottawa Linux Symposium, Ottawa, ON, Canada (July 2001)
7. Piernas, J., Cortes, T., García, J.M.: DualFS: a new journaling file system without meta-data duplication. In: Proceedings of the 16th International Conference on Supercomputing, pp. 84–95 (2002)
8. Zhang, Z., Ghose, K.: hFS: A hybrid file system prototype for improving small file and metadata performance. In: Proceedings of EuroSys 2007 (March 2007)
9. Intel Corporation: Understanding the Flash Translation Layer (FTL) Specification (1998), http://developers.intel.com
10. M-Systems: Flash-Memory Translation Layer for NAND Flash (NFTL)
11. Woodhouse, D.: Memory Technology Device (MTD) subsystem for Linux, http://www.linux-mtd.infradead.org/
Development Strategy for Demand of ICT in Small-Sized Enterprises Yanhui Chen School of Engineering, Linyi University, Linyi 276005, Shandong, P.R. China [email protected]
Abstract. Over time, information and communication technology (ICT) has penetrated all aspects of industrial production and changed the whole process of industry. In the past years, small-sized enterprises have played an important role in the economy, but little attention has been paid to the usage of ICT in small-sized enterprises in China. In this paper we analyze ICT-based issues raised by technicians of Linyi Xinyuan Friction Material Limited Company. The methodology for carrying out the tasks mainly relies on questionnaires following the normative Delphi technique. The paper ends with recommendations that small-sized enterprise authorities should adopt in order to integrate ICT into their production. Keywords: information and communication technology, ZPD gaps, small-sized enterprises.
1 Introduction
During the past 20 years, a wave of new information and communication technology (ICT) was introduced with great impact on almost all aspects of industrial production worldwide [1]. ICT offers a new way in which ideas can be generated, communicated, and assessed [2]. Advances in the field of ICT, including e-mail, Internet bulletin boards, Internet-based courseware delivery strategies, and video conferencing, have together changed the whole process of industry. In the past few years, small and medium-sized enterprises have played an important role in the economy and in easing employment pressure [3, 4]. Small and medium-sized enterprises comprise more than 99% of all enterprises and more than 73% of the entire workforce in China (similarly in other countries); they are vital for China as well as for other countries worldwide. Small-sized industrial enterprises in China are identified as follows: they employ fewer than 300 employees, and their annual income is not more than 30 million yuan or the balance value of their assets is not more than 40 million yuan. As the importance of small-sized enterprises has increased, so has the amount of research attention paid to them. Small-sized enterprises now face enormous pressure as China integrates further into the world economy. How small-sized enterprises can develop in an increasingly competitive market has become one key issue [5]. How to stay competitive is a question that bothers most
M. Zhu (Ed.): ICCIC 2011, Part VI, CCIS 236, pp. 84–88, 2011. © Springer-Verlag Berlin Heidelberg 2011
Development Strategy for Demand of ICT in Small-Sized Enterprises
85
the enterprises, small-sized enterprises included, as they cannot compete on mass production [6]. One of the possible answers is innovation through lifelong and informal learning by the use of ICT. ICT is needed to facilitate the exchange of ideas and information about industrial production [7]. Over time, ICT has changed the whole process of industry. As ICT usage in industry becomes a standard, workers will become more informed, more interactive, and more confident in the usage of various kinds of hardware and software. Many researchers recognize that the use of ICT tools and applications is an important life skill for any worker, yet some workers lag far behind others in adopting ICT. A lot of research is dedicated to the usage of ICT in large enterprises, since large enterprises are able to invest more in ICT. Little attention has been paid to the usage of ICT in small-sized enterprises in China, and the significance of ICT usage in small-sized enterprises has not been highlighted. In this study, we take Linyi Xinyuan Friction Material Limited Company as a case study in our research on Chinese small-sized industrial enterprises. Following the approach of [5], we obtained the following data on Linyi Xinyuan Friction Material Limited Company. The company covers an area of more than 260 acres and has assets of more than 36 million yuan. There are 280 employees, including 59 high- and middle-level technicians. The main product of the company is automobile disc brake shoes, of which more than four hundred models are produced. The annual production of the company has reached 1 million sets of automobile disc brake shoes, applicable to more than 500 different kinds of vehicles. Working and learning with and without the help of ICT in Chinese small-sized enterprises are explored. The technicians' ZPD gaps in tackling and solving problems are recorded, so that a proper strategy or mechanism can be figured out to reduce these ZPD gaps to a minimum.
According to the normative Delphi technique, a questionnaire was prepared and hand-delivered to the 59 high- and middle-level technicians, and 54 of the staff answered the questionnaire [8]. Over the next two months, these 54 members completed further questionnaires over three rounds. The same group was asked to devise the development strategy that small-sized enterprise authorities should adopt in order to integrate ICT in their companies. The purpose of these development strategies is to increase technicians' experience in the use of ICT and thereby improve the output of small-sized enterprises. The concept of the zone of proximal development (ZPD) was coined by Vygotsky [10]. A ZPD gap is the difference between the future/maximum and current state of any development or use of information technology. In this research, working and learning with and without the help of ICT are explored and the corresponding ZPD gaps are recorded, so that a strategy that can reduce these gaps to a minimum can be devised. The ZPD gaps obtained through the questionnaires are shown in Table 1. The organization of this paper is as follows. In Section 1, we introduce ICT in small-sized enterprises. In Section 2, we discuss the data analysis of Linyi Xinyuan Friction Material Limited Company. In Section 3, we present the development strategy for the ICT requirements proposed by the technicians.
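As an illustration of the ZPD-gap measure described above, the following sketch assumes each gap is computed as the mean desired (future/maximum) rating minus the mean current rating for an issue; the function name `zpd_gap` and the sample ratings are hypothetical, not taken from the questionnaire data.

```python
# Illustrative sketch (not from the paper): a ZPD gap per issue as the
# difference between the mean desired (future/maximum) rating and the
# mean current rating collected from questionnaire respondents.

def zpd_gap(current_ratings, desired_ratings):
    """ZPD gap = mean desired state minus mean current state for one issue."""
    mean_current = sum(current_ratings) / len(current_ratings)
    mean_desired = sum(desired_ratings) / len(desired_ratings)
    return mean_desired - mean_current

# Hypothetical responses from a handful of technicians on a 1-5 scale:
current = [2, 3, 2, 2, 3]
desired = [5, 5, 4, 5, 4]
print(round(zpd_gap(current, desired), 2))
```

A larger gap for an issue would then flag that issue (such as "Innovative Research" or "ICT Supply") as the one where support should be concentrated.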
86 Y. Chen

2 Data Analysis
Table 1 shows the ZPD gaps obtained through the questionnaires from the 54 technicians of Linyi Xinyuan Friction Material Limited Company.

Table 1. ZPD gaps
No.  Issue                                                     ZPD gap
1    Learning Professional Knowledge                           2.16
2    Finishing Routine Tasks / Sharing Material                1.45
3    Communication with Leaders and Colleagues                 1.03
4    Innovative Research                                       2.85
5    Use of Common ICT Tools                                   0.05
6    Reliance on ICT Tools in Small-Sized Enterprises          0.97
7    Use of ICT Tools in Small-Sized Enterprises               0.86
8    Help Obtained from ICT Tools in Small-Sized Enterprises   0.90
9    ICT Demand in Small-Sized Enterprises                     1.36
10   ICT Supply in Small-Sized Enterprises                     2.90
1. A technician learns professional knowledge by reading online and searching for information on the Internet. A high ZPD gap (2.16) is recorded, which shows the level of technicians of small-sized enterprises in using ICT tools and applications for these tasks.
2. For finishing routine tasks and sharing material using ICT tools and applications, a ZPD gap of 1.45 is obtained.
3. For communication with colleagues, leaders and outside parties using ICT tools and applications, the ZPD gap is 1.03.
4. For finding professional information, communicating with researchers, and the quest for knowledge using learner forums, a large ZPD gap (2.85) is obtained.
5. A very small ZPD gap of 0.05 is recorded for use of common ICT tools such as MS Office, web browsers, e-mail, search engines, etc.
6. Regarding how much technicians of small-sized enterprises should rely on ICT tools and applications, a low ZPD gap (0.97) is recorded.
7. Regarding how much technicians of small-sized enterprises should use ICT tools and applications, a low ZPD gap (0.86) is recorded.
8. Regarding how much help technicians of small-sized enterprises get while using ICT tools and applications, a low ZPD gap (0.90) is recorded.
9. A ZPD gap of 1.36 is measured for the demand for ICT in small-sized enterprises of the PRC.
10. A ZPD gap of 2.90 is recorded for ICT supply in response to this demand in small-sized enterprises.

Technicians of small-sized enterprises usually perform a number of tasks. It takes a technician a lot of time to finish his routine tasks using ICT tools and applications. There are many ICT tools that a technician can use while finding research information and
sharing ideas with other technicians. If a technician is effective at communicating with his colleagues, his tasks will become much easier. For several of the issues concerning technicians in small-sized enterprises, large ZPD gaps are measured. Such significant gaps show the different levels of staff in using ICT tools and applications. In this study, the use of MS Office, web browsers, search engines, and e-mail is popular among most staff. Beyond these skills, staff are less practiced in using other tools that are essential to the development of their career profiles. The main causes of such significant gaps are lack of funding, unavailability of resources, and lack of attitude or vision. The spread of ICT tools and applications is considered necessary in small-sized enterprises of developing countries. However, few strategies have been devised to address these issues in developing countries (especially the PRC). Accordingly, in this study we try to devise a strategy including some important measures for ICT enhancement.
3 Conclusions

A development strategy for the ICT requirements of small-sized enterprises is proposed by the technicians of Linyi Xinyuan Friction Material Limited Company. It is necessary for the government and small-sized enterprises to realize the full potential of ICT. All members of the small-sized enterprises, that is, management, technicians and common laborers, should be involved in readiness efforts. Opinion leaders could be used as effective promotional vehicles in the implementation of ICT among small-sized enterprises. Small-sized enterprises currently implementing ICT should be identified and supported, and those successfully implementing ICT should be showcased as success models. While some small-sized enterprises want to adopt ICT to facilitate improved performance and subsequent growth, they may be constrained by finances. Small-sized enterprises should therefore be given the necessary attention in policy, financial and general business support, and it is necessary for them to pursue more funding from the government. The local Internet infrastructure needs to be developed and improved. A technician-to-computer ratio of 3:2 should be reached, and high-speed Internet access for management, technicians and common laborers should be enabled. Technicians can be supported to use the Internet in their routine tasks. Technicians also need a lot of attention in adopting an effective and efficient e-learning environment, and must be keen to collaborate with other colleagues in and outside the factory. Teams comprising management, technicians and common laborers, which develop task-based education content, need to be formed. ICT training centers that fulfill the training needs of technicians need to be established, and a persistent training program in the use of ICT should be designed for technicians.
References 1. Eyitayo, O.T., Ogwu, F.J., Ogwu, E.N.: Information Communication Technology (ICT) Education in Botswana: A Case Study of Students’ Readiness for University Education. Journal of Technology Integration in the Classroom 2(2), 117–130 (2010)
2. Bader, M.B., Roy, S.: Using Technology to Enhance Relationship in Interactive Television Classroom. Journal of Education for Business 74(6), 357–364 (1999)
3. Anderson, A.R., Li, J., Harrison, R.T., Robson: The Increasing Role of Small Business in the Chinese Economy. Journal of Small Business Management 41(3), 310–316 (2003)
4. Zhang, W.: Zhongguo Zhongxiao Qiye Fazhan Xianzhuang [The Development Status of Chinese Small and Medium-Sized Enterprises] (2005), http://www.ccw.com.cn
5. Cunningham, L.X., Rowley, C.: Small and Medium-Sized Enterprises in China: A Literature Review, Human Resource Management and Suggestions for Further Research. Asia Pacific Business Review 16(3), 319–337 (2010)
6. Raimonda, A., Pundziene, A.: Increasing the Level of Enterprise Innovation through Informal Learning: The Experience of Lithuanian SMEs. International Journal of Learning 16(11), 83–102 (2009)
7. Information and Communication Technology Strategic Plan, 2005-06 to 2009-10, University of Oxford, http://www.ict.ox.ac.uk/strategy/plan/ICT_Strategic_Plan_March2007.pdf (retrieved August 2009)
8. Shaikh, Z.A.: Usage, acceptance, adoption, and diffusion of information & communication technologies in higher education: a measurement of critical factors. Journal of Information Technology Impact 9(2), 63–80 (2009)
9. Cyphert, F.R., Gant, W.L.: The Delphi technique: A case study. Phi Delta Kappan 52, 272–273 (1971)
10. Vygotsky, L.S.: Mind in Society: The Development of Higher Psychological Processes. Harvard University Press, Cambridge (1978)
Development Strategy for Demand of ICT in Medium-Sized Enterprises of PRC Yanhui Chen School of Engineering, Linyi University, Linyi 276005, Shandong, P.R. China [email protected]
Abstract. Over time, information and communication technology (ICT) has penetrated into all aspects of industrial production worldwide. ICT offers a new way in which ideas can be generated, communicated, and assessed. During the past years, medium-sized enterprises have played an important role in the economy. However, little attention has been paid to the usage of ICT in medium-sized enterprises in China. In this paper, we analyze ICT-based issues raised by technicians in Shandong Linyi Lingong automobile drive axle Limited Company. The methodology mainly consists of questionnaires administered according to the normative Delphi technique. Some recommendations are proposed for medium-sized enterprise authorities to follow in order to properly integrate ICT into their production. Keywords: information and communication technology, Delphi, ZPD gaps, medium-sized enterprises.
M. Zhu (Ed.): ICCIC 2011, Part VI, CCIS 236, pp. 89–93, 2011. © Springer-Verlag Berlin Heidelberg 2011

1 Introduction

In the past 20 years, a wave of new information and communication technology (ICT) was introduced in almost all aspects of industrial production. Advances in the field of ICT, including e-mail, Internet bulletin boards, Internet-based courseware delivery strategies, and video conferencing, have together changed the whole process of industry. ICT offers a new way in which ideas can be generated, communicated, and assessed [1]. During the past few years, small and medium-sized enterprises have played an important role in the economy and in easing employment pressure [2]. Small and medium-sized enterprises comprise more than 99% of all enterprises and employ more than 73% of the entire workforce in China. They are vital for China as well as for other countries worldwide. Medium-sized enterprises in China are identified as follows: medium-sized industrial enterprises employ from 300 to 2,000 employees, and their annual income is from 30 million to 300 million yuan or the balance value of their assets is from 40 million to 400 million yuan. As the importance of medium-sized enterprises has increased, it has been accompanied by an increase in the amount of attention paid to them. Medium-sized enterprises now face enormous pressures as China gradually integrates into the world economy. The way that medium-sized enterprises develop in an increasingly competitive market has become one main problem [3]. How to stay
competitive is a question that bothers most of these enterprises because they cannot compete on mass production [4]. One of the possible answers is innovation through lifelong and informal learning by the use of ICT. ICT is needed to facilitate the exchange of ideas and information about industrial production [5]. ICT has changed the whole process of industry over time. As ICT usage in industry becomes a standard, workers will become more informed, more interactive, and more confident in the usage of various kinds of hardware and software. Many researchers recognize that the use of ICT tools and applications is an important life skill for any worker, yet some workers lag far behind others in adopting ICT. A lot of research is dedicated to the usage of ICT in large enterprises, since large enterprises are able to invest more in ICT. Little attention has been paid to the usage of ICT in medium-sized enterprises in China, and the importance of ICT usage in medium-sized enterprises has not been highlighted.
2 Methodology

In this study, Shandong Linyi Lingong automobile drive axle Limited Company is taken as a case study in our research on Chinese medium-sized enterprises. The company covers an area of more than 46 acres and has assets of more than 400 million yuan. There are more than 1,200 employees, including 265 high- and middle-level technicians. The main products of the company are light automobile transmissions, agricultural equipment transaxle cases, and engineering machinery component assemblies. The annual production of the company has reached 600 thousand sets of automobile drive axles for various well-known automobile factories. According to the normative Delphi technique, a questionnaire was prepared and hand-delivered to the 265 high- and middle-level technicians, and 198 of the staff answered the questionnaire [6]. Over the next two months, these 198 members completed further questionnaires over three rounds. The same group was asked to devise the development strategy that medium-sized enterprise authorities should adopt in order to integrate ICT in their companies. The purpose of these development strategies is to increase technicians' experience in the use of ICT and thereby improve the output of medium-sized enterprises. The concept of the zone of proximal development (ZPD) was coined by Vygotsky [7]. A ZPD gap is the difference between the future/maximum and current state of any development or use of information technology. In this research, working and learning with and without the help of ICT in Chinese medium-sized enterprises are explored. The ZPD gaps in solving problems are recorded, so that a proper mechanism can be figured out to reduce these ZPD gaps to a minimum. The ZPD gaps obtained through the questionnaires are shown in Table 1.
3 Data Analysis

Table 1 shows the ZPD gaps obtained through the questionnaires from the 198 technicians of Shandong Linyi Lingong automobile drive axle Limited Company.
Table 1. ZPD gaps
No.  Issue                                                      ZPD gap
1    Learning Professional Knowledge                            2.01
2    Finishing Routine Tasks / Sharing Material                 1.23
3    Communication with Leaders and Colleagues                  0.96
4    Innovative Research                                        2.65
5    Use of Common ICT Tools                                    0.04
6    Reliance on ICT Tools in Medium-Sized Enterprises          0.86
7    Use of ICT Tools in Medium-Sized Enterprises               0.83
8    Help Obtained from ICT Tools in Medium-Sized Enterprises   0.82
9    ICT Demand in Medium-Sized Enterprises                     1.22
10   ICT Supply in Medium-Sized Enterprises                     2.42
1. A technician learns professional knowledge by reading online and searching for information on the Internet. A high ZPD gap (2.01) is recorded, which shows the level of technicians of medium-sized enterprises in using ICT tools and applications for these tasks.
2. For finishing routine tasks and sharing material using ICT tools and applications, a ZPD gap of 1.23 is obtained.
3. For communication with colleagues, leaders and outside parties using ICT tools and applications, the ZPD gap is 0.96.
4. For finding professional information, communicating with researchers, and the quest for knowledge using learner forums, a large ZPD gap (2.65) is obtained.
5. A very small ZPD gap of 0.04 is recorded for use of common ICT tools such as MS Office, web browsers, e-mail, search engines, etc.
6. Regarding how much technicians of medium-sized enterprises should rely on ICT tools and applications, a low ZPD gap (0.86) is recorded.
7. Regarding how much technicians of medium-sized enterprises should use ICT tools and applications, a low ZPD gap (0.83) is recorded.
8. Regarding how much help technicians of medium-sized enterprises get while using ICT tools and applications, a low ZPD gap (0.82) is recorded.
9. A ZPD gap of 1.22 is measured for the demand for ICT in medium-sized enterprises of the PRC.
10. A ZPD gap of 2.42 is recorded for ICT supply in response to this demand in medium-sized enterprises.
4 Discussion

Technicians of medium-sized enterprises usually perform a number of tasks. It takes a technician a lot of time to finish his routine tasks. There are many ICT tools that a technician can use while finding research information and sharing ideas with other technicians. If a technician is effective at communicating with his colleagues, his tasks will become much easier.
For several of the issues concerning technicians in medium-sized enterprises, large ZPD gaps are measured. Such significant gaps show the different levels of staff in using ICT tools and applications. In this study, the use of MS Office, web browsers, search engines, and e-mail is popular among most staff. Beyond these skills, staff are less practiced in using other tools essential for their development. The main causes of such significant gaps are lack of funding, unavailability of resources, and lack of attitude or vision. The spread of ICT tools and applications is considered necessary in medium-sized enterprises of developing countries. However, few strategies have been devised to address these issues in developing countries. Accordingly, in this study we try to devise a strategy including some important measures for ICT enhancement.
5 Conclusions

A development strategy for the ICT requirements of medium-sized enterprises is proposed by the technicians of Shandong Linyi Lingong automobile drive axle Limited Company. It is necessary for the government and medium-sized enterprises to realize the full potential of ICT. All members of the medium-sized enterprises, that is, management, technicians and common laborers, should be involved in readiness efforts. Opinion leaders could be used as effective promotional vehicles in the implementation of ICT among medium-sized enterprises. Medium-sized enterprises currently implementing ICT should be identified and supported, and those successfully implementing ICT should be showcased as success models. While some medium-sized enterprises want to adopt ICT to facilitate improved performance and subsequent growth, they may be constrained by finances. Medium-sized enterprises should therefore be given the necessary attention in policy, financial and general business support, and it is necessary for them to invest more funds in this field. The local Internet infrastructure needs to be developed and improved. A technician-to-computer ratio of 1:1 should be reached, and high-speed Internet access for management, technicians and common laborers should be enabled. Technicians also need a lot of attention in adopting an effective and efficient e-learning environment, and a persistent training program in the use of ICT should be designed for them. ICT training centers that fulfill the training needs of technicians need to be established. Technicians must be keen to collaborate with other colleagues in and outside the factory. Teams comprising management, technicians and common laborers, which develop task-based education content, need to be formed.
References 1. Bader, M.B., Roy, S.: Using Technology to Enhance Relationship in Interactive Television Classroom. Journal of Education for Business 74(6), 357–364 (1999) 2. Anderson, A.R., Li, J., Harrison, R.T., Robson: The Increasing Role of Small Business in the Chinese Economy. Journal of Small Business Management 41(3), 310–316 (2003)
3. Cunningham, L.X., Rowley, C.: Small and Medium-Sized Enterprises in China: A Literature Review, Human Resource Management and Suggestions for Further Research. Asia Pacific Business Review 16(3), 319–337 (2010)
4. Raimonda, A., Pundziene, A.: Increasing the Level of Enterprise Innovation through Informal Learning: The Experience of Lithuanian SMEs. International Journal of Learning 16(11), 83–102 (2009)
5. Information and Communication Technology Strategic Plan, 2005-06 to 2009-10. University of Oxford (retrieved August 2009)
6. Cyphert, F.R., Gant, W.L.: The Delphi technique: A case study. Phi Delta Kappan 52, 272–273 (1971)
7. Vygotsky, L.S.: Mind in Society: The Development of Higher Psychological Processes. Harvard University Press, Cambridge (1978)
Diagnosing Large-Scale Wireless Sensor Network Behavior Using Grey Relational Difference Information Space Hongmei Xiang and Weisong He Chongqing College of Electronic Engineering, Chongqing, P.R. China [email protected]
Abstract. Grey relational difference information space (GRDIS) theory is an effective tool for wireless sensor network diagnosis and situation prediction in poor-information network systems. This paper discusses the application of GRDIS to poor-information systems in which the sampled wireless sensor network data are few or incomplete, the sample data update quickly, or the overall sample data are very complex although in some spatial or temporal region they obey regularity. Some experiments are presented. Keywords: GRDIS, Wireless Sensor Network, Behavior Diagnosis.
1 Introduction

A sensor is commonly viewed as a programmable, low-cost, low-power, functional tiny mobile or stationary device which usually has a much shorter working life span [1]. The deployment of sensor nodes is now nearly ubiquitous: everything from small objects such as insects to very large objects and systems such as state highway infrastructure carries large numbers of embedded sensor nodes. In fact, almost any kind of automation relies heavily on a set of programmed sensor nodes. For example, programmed sets of sensor nodes drive fire detectors and remote health care, and immersive sensors monitor crop growth. There are very few units which do not use sensor nodes to perform their functions. The wide applications of wireless sensor networks and the challenges in designing such networks have attracted many researchers to develop protocols and algorithms for sensor networks [2][3][4][5][6][7]. However, network administrators often face situations in which the running mechanism of the network is incompletely known and its inherent behavior is hidden. Such situations present in the following ways: few and incomplete valid sampled network data; sampled data that update frequently and contradict each other; and overall data that are complex even though some temporal or spatial subsets obey regularity. In this paper, GRDIS is proposed to provide a simple scheme for analyzing large-scale wireless sensor network behavior under the condition that the given information is scarce.

M. Zhu (Ed.): ICCIC 2011, Part VI, CCIS 236, pp. 94–99, 2011. © Springer-Verlag Berlin Heidelberg 2011
The roadmap of the paper is as follows. To begin with, we motivate the need for diagnosing wireless sensor network behavior. Second, we illustrate grey relational difference information space theory. Furthermore, we provide the experimental design. In the end, we conclude.

1.1 Grey Relational Difference Information Space

Grey theory [8] was first developed by Deng Julong in 1982 in China and is used for prediction from small-scale, uncertain data. The research target of grey theory is the indeterminable, poor-information system, in which the system information is partly known and partly unknown. The grey relational difference information space is the basis of grey relational analysis.
Definition 2.1. Assume @GRF is a grey relational factor set and ΔGR is the grey relational difference information space it induces, @GRF ⇒ ΔGR, with

ΔGR = {Δ, ζ, Δ0i(max), Δ0i(min)},  Δ0i(k) = |x0(k) − xi(k)|,  k ∈ K = {1, 2, ..., n},

where x0 is the reference column and xi is a compare column. Let γ(x0(k), xi(k)) be the compare metric at the k-th point of ΔGR, and let γ(x0, xi) be the average value of γ(x0(k), xi(k)) over k ∈ K. If they satisfy:

1) Norm: 0 < γ(x0, xi) ≤ 1; γ(x0, xi) = 1 ⇔ x0 = xi, or xi and x0 are isomorphic; γ(x0, xi) = 0 ⇔ x0, xi ∈ φ;

2) Symmetry: γ(x, y) = γ(y, x) iff X = {x, y};

3) Wholeness: γ(xi, xj) ≠ γ(xj, xi) for xi, xj ∈ X, X = {xi | i ∈ I, POT.I ≥ 3};

4) Closeness: the smaller the difference information Δ0i(k) is, the bigger γ(x0(k), xi(k)) is, denoted Δ0i(k) ↓ ⇒ γ(x0(k), xi(k)) ↑;

then we refer to γ(x0(k), xi(k)) as the grey relational coefficient of xi to x0 at point k, and to

γ(x0, xi) = (1/n) Σ_{k=1..n} γ(x0(k), xi(k))

as the grey relational grade of xi to x0. These four conditions are called the four axioms of grey relation.

Theorem 2.1. Let (ΔGR, Γ) be the state of the grey relational difference information space, with ΔGR = {Δ, ζ, Δ0i(max), Δ0i(min)}, Δ = {Δ0i(k) | i ∈ I, k ∈ K = {1, 2, ..., n}} (or Δ = {Δ0i | i ∈ I}), Δ0i(k) = |x0(k) − xi(k)|, and let Γ satisfy the four axioms of grey relation. Then under (ΔGR, Γ) the grey relational coefficient γ(x0(k), xi(k)) is

γ(x0(k), xi(k)) = (min_i min_k Δ0i(k) + ζ · max_i max_k Δ0i(k)) / (Δ0i(k) + ζ · max_i max_k Δ0i(k)),

where ζ ∈ [0, 1], and the grey relational grade γ(x0, xi) is

γ(x0, xi) = (1/n) Σ_{k=1..n} γ(x0(k), xi(k)).
The basic tasks of grey relational analysis are the micro- or macro-level approximation of behavior, and analyzing and determining the impact degree of each factor and the contribution of each factor to the primary behavior.
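The coefficient and grade of Theorem 2.1 translate directly into code. The following is an illustrative sketch (the function name `grey_relational_grade` is ours, and ζ = 0.5 is a common but assumed choice, not one fixed by the paper):

```python
# Sketch of the grey relational coefficient and grade from Theorem 2.1,
# with distinguishing coefficient zeta in [0, 1] (commonly 0.5).

def grey_relational_grade(x0, series, zeta=0.5):
    """Return the grey relational grade of each compare column in `series`
    against the reference column `x0` (all sequences of equal length n)."""
    # Difference information: Delta_0i(k) = |x0(k) - xi(k)|
    deltas = [[abs(a - b) for a, b in zip(x0, xi)] for xi in series]
    d_min = min(min(row) for row in deltas)   # min_i min_k Delta_0i(k)
    d_max = max(max(row) for row in deltas)   # max_i max_k Delta_0i(k)
    grades = []
    for row in deltas:
        # Grey relational coefficient at each point k, then its average (grade)
        coeffs = [(d_min + zeta * d_max) / (d + zeta * d_max) for d in row]
        grades.append(sum(coeffs) / len(coeffs))
    return grades

# Example: x1 tracks the reference more closely than x2, so its grade is higher.
x0 = [1.0, 2.0, 3.0, 4.0]
g = grey_relational_grade(x0, [[1.1, 2.1, 2.9, 4.2], [3.0, 0.5, 5.0, 1.0]])
print(g[0] > g[1])
```

Ranking the compare columns by their grades then gives the impact ordering of the factors on the reference behavior, as in the experiment below.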
2 Experimental Result

In this paper, we collected data from eight sensor nodes. The wireless sensor network is shown in Fig. 1.
Fig. 1. Eight Nodes
The data from these eight sensor nodes, collected over each time bin, make up eight time series. The time series are put into a data matrix, where sensor nodes vary across columns and time varies across rows. The result is shown in Fig. 2.
Fig. 2. Eight Time Series
We apply grey relational difference information space theory. The result is shown as follows. Take four time points for example:

S1 = (3.94175, 4.10419, 4.02862, 4.10267);
S2 = (4.59713, 4.50804, 4.55134, 4.65386);
S3 = (6.12177, 6.08516, 6.23075, 6.16251);
S4 = (7.30286, 6.9077, 6.9079, 7.28666);
S5 = (5.9208, 5.70464, 5.69311, 5.62988);
S6 = (1.14474, 1.15791, 1.12062, 1.17267);
S7 = (250526, 150575, 161565, 155327);
S8 = (920, 934, 564, 621).

After computing, we obtain the relational grades γ(x0, xi):

γ(xs5, xs1) = 0.7795, γ(xs5, xs2) = 0.8811, γ(xs5, xs3) = 0.8459, γ(xs5, xs4) = 0.9066, γ(xs5, xs6) = 0.8536, γ(xs5, xs7) = 0.5128, γ(xs5, xs8) = 0.6291.

γ(xs5, xs4) is the biggest of all the relational grades, so s4 impacts s5 most, with s2 second, s6 third, s3 fourth, s1 fifth, s8 sixth, and s7 seventh.
3 Summary

In this paper, we apply the grey relational difference information space method to multivariate time series to obtain the relation between sensor nodes. From the experimental result, we can conclude that the grey relational difference information space method is a good method for poor-information network systems, especially when the sampled wireless sensor network data are few or incomplete, the sample data update quickly, or the overall sample data are very complex but obey regularity in some spatial or temporal region.

Acknowledgement. The authors would like to thank the reviewers for their helpful comments. This research is supported by Chongqing Education Committee Research Foundation under Grant KJ092503.
References 1. Kumar, V.: Sensor: The Atomic Computing Particle. ACM Sigmod Record (December 2003) 2. Akyildiz, I.F., Su, W., Sankarasubramaniam, Y., Cayirci, E.: Wireless sensor networks: A survey. Computer Networks 38(4), 393–422 (2002) 3. Karlof, C., Wagner, D.: Secure routing in wireless sensor networks: Attacks and countermeasures. In: Proceedings of 1st IEEE International Workshop on Sensor Network Protocols and Applications (May 2003) 4. Newsome, J., Song, D.: GEM: graph embedding for routing and data-centric storage in sensor networks without geographic information. In: Proceedings of the First ACM Conference on Embedded Networked Sensor Systems (SenSys 2003), pp. 76–88 (November 2003)
5. Perrig, A., Szewczyk, R., Wen, V., Culler, D., Tygar, J.D.: SPINS: Security protocols for sensor networks. In: Proceedings of the Seventh Annual International Conference on Mobile Computing and Networks (July 2001)
6. Rajasegarar, S., Leckie, C., Palaniswami, M.: Anomaly detection in wireless sensor networks. IEEE Wireless Communications 15(4), 34–40 (2008)
7. Yu, D.: DiF: A Diagnosis Framework for Wireless Sensor Networks. In: Proceedings of IEEE INFOCOM, pp. 1–5 (2010)
8. Deng, J.-l.: Grey Forecast and Grey Decision. Huazhong University of Science and Technology Press, Wuhan
Mining Wireless Sensor Network Data Based on Vector Space Model Hongmei Xiang and Weisong He Chongqing College of Electronic Engineering, Chongqing, P.R. China [email protected]
Abstract. In this letter, a vector space model is applied in order to explore wireless sensor network data. With this method, the similarity between query features and wireless sensor network features is calculated, which facilitates detecting anomalies. Experiments are conducted on SIGCOMM 2008 trace data and show some positive results. Keywords: Vector space model, Wireless sensor network, Feature Analysis.
1 Introduction

The advent of cheap, compact sensor nodes with an on-board central processing unit (CPU), memory, and wireless radio has enabled the development of wireless sensor networks that support in-network processing [1]. For the sake of remote monitoring of a heterogeneous environment or control of actuators in a homogeneous one, sensor networks are often deployed in an unattended area of interest. Examples of applications of wireless sensor networks include home automation, vehicle tracking, target detection, and environmental monitoring [1]. This matters in applications where robust and reliable monitoring is necessary and unusual activities must be detected in an accurate and timely manner. In practice, however, sensor nodes have limited power, bandwidth, memory, and computational capabilities [2]. These inherent limitations can make the network more vulnerable to faults and malicious attacks [3], [4]. Identifying any anomalies or misbehavior in the network is important for developing reliable and secure functioning of the network. An anomaly or outlier in a set of data is defined as an observation that appears to be inconsistent with the remainder of the data set [5]. By analyzing either sensor data measurements or traffic-related attributes in the network, we can identify abnormal behaviors. Identifying anomalies with acceptable accuracy while minimizing energy consumption in resource-constrained wireless sensor networks is a challenge. The majority of energy in sensor networks is consumed in radio communication. For example, in Sensoria sensors and Berkeley motes, the ratio between communication and computation energy consumption ranges from 10^3 to 10^4 [6]. So, how to exploit distributed in-network processing in order to minimize the communication requirements of the network is a key research challenge for anomaly detection in this context.
The development of efficient distributed algorithms for anomaly detection is therefore required. By contrast, centralized approaches to anomaly detection require large numbers of raw measurements to be communicated to a selected central node for processing, which depletes the energy of the sensor network and reduces its lifetime. The remainder of the paper is organized as follows. First, we introduce our approach; second, we describe our experiment; finally, we conclude.
M. Zhu (Ed.): ICCIC 2011, Part VI, CCIS 236, pp. 100–104, 2011. © Springer-Verlag Berlin Heidelberg 2011
Mining Wireless Sensor Network Data Based on Vector Space Model
2 Our Approach
A. Vector Space Model
The vector space model (VSM) [7] is an algebraic model for representing text documents (and objects in general) as vectors of identifiers such as index terms. It is used in information filtering, information retrieval, indexing, and relevancy ranking. Its first use was in the SMART Information Retrieval System. A typical three-dimensional index space is shown in Figure 1, where each item is identified by up to three distinct terms. The three-dimensional example extends to t dimensions when t different index terms are present. In that case, each document D_i is represented by a t-dimensional vector D_i = (d_i1, d_i2, ..., d_it), where d_ij is the weight of the jth term. Given the index vectors for two documents, it is possible to compute a similarity coefficient sim(D_i, D_j) between them, which reflects the degree of similarity in the corresponding terms and term weights. Let D_i = (w_i1, w_i2, ..., w_it) and D_j = (w_j1, w_j2, ..., w_jt); then

sim(D_i, D_j) = ∑_{k=1}^{t} w_ik · w_jk .   (1)
Such a similarity measure might be the inner product of the two vectors, or alternatively an inverse function of the angle between the corresponding vector pairs; when the term assignment for two vectors is identical, the angle will be zero, producing a maximum similarity measure.
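As an illustration (not from the paper), the two similarity variants described above can be sketched in a few lines of Python; the weight vectors below are made-up examples:

```python
import math

def inner_product(d_i, d_j):
    """Similarity coefficient as the inner product of two term-weight vectors (Eq. 1)."""
    return sum(w_ik * w_jk for w_ik, w_jk in zip(d_i, d_j))

def cosine(d_i, d_j):
    """Angle-based variant: equals 1 when the term assignments are identical."""
    norm_i = math.sqrt(sum(w * w for w in d_i))
    norm_j = math.sqrt(sum(w * w for w in d_j))
    return inner_product(d_i, d_j) / (norm_i * norm_j)

d1 = [0.2, 0.5, 0.0]  # hypothetical term weights
d2 = [0.2, 0.5, 0.0]
print(inner_product(d1, d2))  # inner-product similarity
print(cosine(d1, d2))         # ~1.0: identical vectors, zero angle
```

Note that the inner product grows with vector magnitude, while the cosine form depends only on the angle, reaching its maximum for identical term assignments as described above.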
Fig. 1. Vector Representation of Document Space
H. Xiang and W. He
B. Mining Wireless Sensor Network Data with VSM
The common method for calculating feature weights is Term Frequency-Inverse Document Frequency (TF-IDF). The inverse document frequency of feature k is

(IDF)_k = ⌊log₂ n⌋ − ⌊log₂ d_k⌋ + 1 ,   (2)

where n is the total number of documents and d_k is the number of documents that contain feature k. Let f_i^k denote the number of occurrences of feature k in document i; the weight of feature k in document i is then f_i^k · (IDF)_k. The formula shows that the more often a feature occurs in a document, the more it contributes to that document, whereas the more widely the feature appears across the whole document space, the smaller its contribution.
We construct the mathematical model as follows. Let every data item be denoted by an n-dimensional vector D_i = (d_i1, d_i2, ..., d_in), i = 1, 2, ..., m. The query row vector is Q = (q_1, q_2, ..., q_n), where q_j = 1 if the jth feature occurs in the query document Q, and q_j = 0 otherwise. The document feature weight matrix A is constructed as

A = ⎡ D_1 ⎤
    ⎢ D_2 ⎥
    ⎢  ⋮  ⎥
    ⎣ D_m ⎦ ,   (3)

where d_ij, the jth feature weight of document D_i, is the number of occurrences of the jth feature in document D_i. Let N denote the total number of documents and n_j the number of documents that contain the jth feature. Then the feature weight is modified as w_ij = d_ij · log(N / n_j), and, with W denoting the feature weight matrix,

W = A · log(N / n_j) .   (4)
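A minimal sketch of constructing W from a raw count matrix A follows. This is our own illustration: the function name is ours, and we assume log means log₁₀, since the magnitudes in the worked example below match base-10 logarithms.

```python
import math

def weight_matrix(A):
    """Compute w_ij = d_ij * log10(N / n_j), as in Eq. (4).

    A   : m x n matrix of raw counts, d_ij = occurrences of feature j in document i
    N   : total number of documents (rows of A)
    n_j : number of documents containing feature j
    """
    N = len(A)
    n_features = len(A[0])
    n_j = [sum(1 for row in A if row[j] > 0) for j in range(n_features)]
    return [[row[j] * math.log10(N / n_j[j]) if n_j[j] else 0.0
             for j in range(n_features)]
            for row in A]
```

With N = 3 documents, a feature occurring in all three gets weight log₁₀(1) = 0, a feature occurring in two gets log₁₀(1.5) ≈ 0.176 per occurrence, and a unique feature gets log₁₀(3) ≈ 0.477, matching the magnitudes in the worked example below.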
3 Experimental Result
The SIGCOMM 2008 traces of wireless traffic belonging to the traced network were gathered at several monitoring nodes distributed across the conference floor from 10:54 to 15:40 on Aug. 21, 2008. In addition, traces were gathered on the wired switch to which the wireless access points connect. Here is a description of the traces gathered and the anonymization that was performed. Our description focuses on tracing on the wireless LAN; a subset of this (viz., everything above the PHY layer) also applies to the tracing on the wired LAN. Each monitor captures all of the 802.11 frames it sees, including data frames, management frames (e.g., association, authentication), and control frames (e.g., RTS, CTS, ACK). For each wireless frame captured at a monitor, we record up to 250 bytes of the following information: per-frame PHY information (channel frequency, RSSI, and modulation rate); the entire MAC header, with the source and destination MAC addresses anonymized; the entire IPv4 and TCP/UDP header, with the source and destination IPv4 addresses anonymized; the entire DHCP payload; and the DNS request/response payload. T-Fi plot visualizations provide a quick understanding of the completeness of an 802.11 packet trace. A T-Fi plot is a heat map. First, the position on the y-axis shows completeness: the fraction of transmitted packets caught by the monitor. Second, the width of the shaded region on the x-axis shows the range of load. Finally, the intensity of the shaded region shows the frequency of load. We define load (x-axis) as the number of packets sent by the AP and all associated clients between two beacon packets received from the AP. Over the same interval we define score (y-axis) as an approximation of the completeness of that interval. A score of 1 indicates the interval is complete; a score of 0 indicates that none of the packets were captured from that interval. The T-Fi plot is shown in Fig. 2.
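The load/score computation behind a T-Fi plot can be sketched as follows. This is our own illustration with hypothetical per-interval packet counts, not code from the trace tooling:

```python
def tfi_points(intervals):
    """Map each beacon interval to a (load, score) point for a T-Fi plot.

    intervals: list of (sent, captured) pairs, where `sent` is the number of
    packets sent by the AP and all associated clients between two beacons,
    and `captured` is how many of them the monitor recorded.
    """
    points = []
    for sent, captured in intervals:
        load = sent
        # score approximates completeness: 1 = complete, 0 = nothing captured
        score = captured / sent if sent else 1.0
        points.append((load, score))
    return points

# three hypothetical intervals: complete, half-captured, idle
print(tfi_points([(40, 40), (40, 20), (0, 0)]))
```

Binning these (load, score) points into a two-dimensional histogram, with intensity giving the frequency of each load, yields the heat map described above.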
Fig. 2. Eight Nodes
D1: <ACK, 190.35.226.121, ARP, IPV6, ACK, ARP, ACK>
D2: <ACK, TCP, ARP, NBNS, TCP>
D3: <ACK, IEEE802.11, IPV6, TCP>
Q: <ACK, TCP, 190.35.226.121>
Then,

W = ⎡ 0  0.477  0.352  0      0      0      0     ⎤
    ⎢ 0  0      0.176  0      0.352  0.477  0     ⎥
    ⎣ 0  0      0      0.176  0.176  0      0.477 ⎦ ,

Q^W = (0, 1, 0, 0, 0.125, 0, 0)^T ,

P = W · Q^W = (0.477, 0.044, 0.022)^T .

Therefore, the ranking of the documents is D1, D2, D3.
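The arithmetic of the worked example can be checked with a short NumPy sketch (our own verification, not part of the paper):

```python
import numpy as np

# Feature-weight matrix W and weighted query vector from the worked example
W = np.array([
    [0, 0.477, 0.352, 0,     0,     0,     0    ],
    [0, 0,     0.176, 0,     0.352, 0.477, 0    ],
    [0, 0,     0,     0.176, 0.176, 0,     0.477],
])
q = np.array([0, 1, 0, 0, 0.125, 0, 0])

P = W @ q  # one similarity score per document
print(P.round(3))  # [0.477 0.044 0.022] -> D1 ranks first, then D2, then D3
```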
4 Summary
In this paper, a VSM-based approach is proposed to detect anomalies in wireless sensor networks. The experimental results suggest that the VSM-based approach is a promising method for discovering such anomalies.
Acknowledgement. The authors would like to thank the reviewers for their helpful comments. This research is supported by the Chongqing Education Committee Research Foundation under Grant KJ092503.
References
1. Akyildiz, I.F., et al.: Wireless Sensor Networks: A Survey. Computer Networks 38(4), 393–422 (2002)
2. da Silva, A., et al.: Decentralized Intrusion Detection in Wireless Sensor Networks. In: Proc. 1st ACM Int'l Wksp. on QoS and Security in Wireless and Mobile Networks, pp. 16–23 (2005)
3. Djenouri, D., Khelladi, L., Badache, A.: A Survey of Security Issues in Mobile Ad Hoc and Sensor Networks. IEEE Commun. Surveys and Tutorials 7(4), 2–28 (2005)
4. Shi, E., Perrig, A.: Designing Secure Sensor Networks. IEEE Wireless Communications, 38–43 (2004)
5. Hodge, V., Austin, J.: A Survey of Outlier Detection Methodologies. Artificial Intelligence Review, 85–126 (2004)
6. Zhao, F., et al.: Collaborative Signal and Information Processing: An Information-Directed Approach. Proc. IEEE 91(8), 1199–1209 (2003)
7. Salton, G., Wong, A., Yang, C.S.: A Vector Space Model for Automatic Indexing. Communications of the ACM 18(11), 613–620 (1975)
Influencing Factors of Communication in Buyer-Supplier Partnership
Xudong Pei
School of Economics and Management, Xi'an Shiyou University, Xi'an, 710065, China
[email protected]
Abstract. Inter-organizational communication has been documented as a critical factor in promoting collaboration among firms. However, the influencing factors of communication remain unclear. Based on social exchange theory, this paper explores the influencing factors of communication in the context of buyer-supplier partnership. The results show that trust, commitment and dependence are positively associated with communication in buyer-supplier partnership. Keywords: trust, commitment, dependence, communication.
1 Introduction
Over the last several years, there has been growing interest in inter-organizational relations in both research and practice. Firms have recognized the need to manage the supply chain as part of broader business strategies, and in particular to build and exploit collaborative relationships with supply chain partners. Managers and researchers, in the interest of determining how to develop more effective inter-firm relationships, have enlarged their focus from formal contracts to more behavioral and relational approaches, believing that the latter can create more flexible, responsive partnerships. Consequently, inter-organizational partnership has received considerable research attention. However, unless two-way communication between buyer and supplier exists, partnerships cannot adequately provide for overall long-term competitiveness. That communication is the essence of organizational life has been well documented by communication and management scholars and practitioners [1]. Similarly, the relationship marketing literature has recognized that collaborative communication is critical to fostering and maintaining value-enhancing inter-organizational relationships [2][3]. Reflecting its centrality to business performance, one business executive asserted that communication is as fundamental to business as carbon is to physical life [1]. Operations management researchers have also documented how inter-organizational communication enhances buyer-supplier performance [4-6]. In empirical studies, researchers have typically considered communication as a facet of a broader construct, such as supply management [7], or examined the extent to which the use of select communication strategies by buyer firms enhances supplier firm operational performance [5]. Although the importance of communication in buyer-supplier partnership has been recognized by researchers and managers, the influencing factors of communication remain unclear. Based on social exchange theory, this paper addresses these important gaps and investigates the influencing factors of communication in the context of buyer-supplier partnership. We propose that trust, commitment and dependence are positively associated with communication in buyer-supplier partnership.
M. Zhu (Ed.): ICCIC 2011, Part VI, CCIS 236, pp. 105–109, 2011. © Springer-Verlag Berlin Heidelberg 2011
2 Theory and Hypotheses
1. Dependence and communication. Dependence can be conceptualized as the economic power one firm has over another, which in turn may result in significant levels of adaptation [8]. As firms join forces to achieve mutually beneficial goals, they acknowledge that each party is dependent on the other. Dependence results from a relationship in which both firms perceive mutual benefits from interacting and in which any loss of autonomy will be equitably compensated through the expected gains [9]. Both parties recognize that the advantages of interdependence provide benefits greater than either could attain singly. Interdependency between partners increases when the size and importance of the exchange are high, when participants consider their partner the best alternative, and when there are few alternatives or potential sources of exchange.
Communication can be defined broadly as the formal as well as informal information shared with the partners in a timely manner [10]. This definition centers on the efficacy of information exchange, meaning that manufacturers require formal information such as cost, quality, and quantity information regarding their products, as well as overall organizational information, from suppliers at any time and for any reason. Gulati and Sytch noted that a high level of interdependence generates conditions that develop and maintain trust and commitment owing to opportunistic behavioral costs, differential power (dependence), and asymmetric control [11]. The dependent partner may be willing to honor a request made by its partner, and the superior partner may make requests of the dependent partner that solely benefit the superior partner. In a relationship where a supplier firm depends on a larger buyer for a substantial part of its output, the buyer may have a degree of coercive power over the supplier.
Communication as a result of dependence is more likely to take place when there is buyer concentration and on account of the buyer's importance to the supplier. Therefore, we propose:
Hypothesis 1. Dependence is positively associated with communication in buyer-supplier partnership.
2. Commitment and communication. Partner commitment refers to an exchange partner believing that a valued relationship with another is sufficiently important to warrant maximum effort at maintaining it; that is, the committed party believes the relationship is worth maintaining to ensure it endures indefinitely [12]. We clarify the definition of commitment to some degree, holding it as "the belief of an exchange partner in an ongoing relationship" and that committed behavior ensures "maximum efforts at maintaining" the relationship [12]. Additionally, we conjecture herein that commitment performs a vital function in the partner's exchange
relationship. The nature of commitment in all relationships, including interorganizational, intraorganizational, and interpersonal relationships, stands for stability and sacrifice. In sum, commitment represents the partners' efforts and reflects the belief that a partner is ready to take potentially high-risk actions to further the relationship, and will not elect to engage in opportunistic options in alternative situations. In this regard, commitment can be explained as affection, which refers to a sense of belonging and a closeness of attachment to the organization. Commitment between supply chain partners integrates the supply chain business process. For this reason, commitment can be positioned as a key mediating variable between critical antecedents and outcomes [12]. Inter-organizational communication may lead to increased behavioral transparency and reduced information asymmetry, thereby lowering transaction costs and enhancing relationship value. When buyers and suppliers make special efforts to design a relationship with good information exchange between trading partners, they benefit from higher levels of relationship performance [4]. Communication plays an important role in activating and translating relational norms into value-enhancing relational assets. Thus, a long-term commitment between buyer and supplier provides the strategic context necessary for fostering collaborative communication. Therefore, we propose:
Hypothesis 2. Commitment is positively associated with communication in buyer-supplier partnership.
3. Trust and communication. Zaheer et al. define inter-organizational trust as "the extent to which organizational members have a collectively held trust orientation toward the partner firm" [8]. Trust is considered to exist when one party has confidence in an exchange partner's reliability and integrity [12], together with a willingness to rely on an exchange partner in whom one has confidence.
Trust may promote collaborative communication and enable supply chain partners to build stronger relational bonds [13]. With relational trust, supply chain partners are able to focus on knowledge development and exchange and increase investment in relational competencies. Insofar as these relational competencies are "socially created," resulting from ongoing collaborative communication among exchange partners, and not easily tradable in strategic factor markets, they may confer durable strategic advantages on the supply chain partners [14][15]. Thus, trust in buyer-supplier partnerships provides the strategic context necessary for fostering collaborative communication. Such relational trust also enables the exchange parties to cultivate relational norms that promote cooperation for mutual gains [12]. When supply chain partners develop relational trust, they tend to rely on understandings and conventions involving fair play and good faith, such that any agreements between them are enforceable largely through internal processes rather than through external arbitration or the courts [15]. Thus, relational trust enables the communication and exchange of information and knowledge, lowers transaction costs, and enhances transaction value through strategic collaboration. In contrast, an adversarial buyer-supplier relationship lacking relational trust and focused on economizing transaction costs can inhibit the development of relational competencies, frustrate
collaborative communication, and heighten opportunism, which ultimately dissipates relational rents. Inadequate or insufficient two-way communication limits a firm's ability to leverage otherwise supportive relationships. Moreover, rapid advances in technology and the global information infrastructure mean that buyers and suppliers must possess appropriate, competitive two-way communication systems if they are to maintain the ability to respond quickly and effectively to changing customer needs and expectations. Thus, mutual trust between buyer and supplier fosters collaborative communication. Therefore, we propose:
Hypothesis 3. Trust is positively associated with communication in buyer-supplier partnership.
2.1 The Research Framework of This Study
We develop a framework (see Figure 1) to examine the relationships among trust, commitment, dependence and communication in buyer-supplier partnership.
Fig. 1. The conceptual model: dependence, commitment, and trust each linked to communication in buyer-supplier partnership
3 Conclusions
In the new economy, as firms become more dependent on outside partners to meet sophisticated customer needs, managing inter-organizational relationships effectively becomes important to gaining a competitive advantage. Consequently, inter-organizational partnership receives considerable research attention. Unless two-way communication between buyer and supplier exists, partnerships cannot adequately provide for overall long-term competitiveness. Based on social exchange theory, this paper investigates the influencing factors of communication in the context of buyer-supplier partnership. We propose that trust, commitment and dependence are positively associated with communication in buyer-supplier partnership.
Acknowledgment. This work was supported by the Soft Science Project of the Science and Technology Department of Shaanxi Province under Grant 2010KRM38(2) and by the Base Research Project of the Education Department of Shaanxi Province under Grant 2010JZ20.
References
1. Reinsch, N.L.: Business performance: communication is a compound, not a mixture. Vital Speeches of the Day 67, 172–174 (2001)
2. Mohr, J., Fisher, R.J., Nevin, J.R.: Collaborative communication in interfirm relationships: moderating effects of integration and control. Journal of Marketing 60, 103–115 (1996)
3. Schultz, R.J., Evans, K.R.: Strategic collaborative communication by key account representatives. Journal of Personal Selling and Sales Management 22, 23–31 (2002)
4. Claycomb, C., Frankwick, G.L.: A contingency perspective of communication, conflict resolution and buyer search effort in buyer–supplier relationships. Journal of Supply Chain Management 40, 18–34 (2004)
5. Prahinski, C., Benton, W.C.: Supplier evaluations: communication strategies to improve supplier performance. Journal of Operations Management 22, 39–62 (2004)
6. Cousins, P.D., Menguc, B.: The implications of socialization and integration in supply chain management. Journal of Operations Management 24, 604–620 (2006)
7. Chen, I.J., Paulraj, A.: Towards a theory of supply chain management: the constructs and measurement. Journal of Operations Management 22, 119–150 (2004)
8. Zaheer, A., Venkatraman, N.: Relational governance as an interorganizational strategy: an empirical test of the role of trust in economic exchange. Strategic Management Journal 16, 373–392 (1995)
9. Mohr, J., Spekman, R.: Characteristics of partnership success: partnership attributes, communication behavior, and conflict resolution techniques. Strategic Management Journal 15, 135–152 (1994)
10. Anderson, J.C., Narus, J.: A model of distributor firm and manufacturer firm working partnerships. Journal of Marketing 54, 42–58 (1990)
11. Gulati, R., Sytch, M.: Dependence asymmetry and joint dependence in inter-organizational relationships: effects of embeddedness on a manufacturer's performance in procurement relationships. Administrative Science Quarterly 52, 32–69 (2007)
12. Morgan, R.M., Hunt, S.D.: The commitment–trust theory of relationship marketing. Journal of Marketing 58, 20–38 (1994)
13. De Toni, A., Nassimbeni, G.: Buyer–supplier operational practices, sourcing policies and plant performance: results of an empirical research. International Journal of Production Research 37, 597–619 (1999)
14. Kale, P., Singh, H., Perlmutter, H.: Learning and protection of proprietary assets in strategic alliances: building relational capital. Strategic Management Journal 21, 217–237 (2000)
15. Dyer, J.H., Singh, H.: The relational view: cooperative strategy and sources of interorganizational competitive advantage. Academy of Management Review 23, 660–679 (1998)
An Expanding Clustering Algorithm Based on Density Searching
Liguo Tan, Yang Liu, and Xinglin Chen
School of Automation Science and Engineering, Harbin Institute of Technology, Harbin, 150001, China
[email protected]
Abstract. Most clustering algorithms need preset initial parameters, which strongly affect clustering performance. To solve this problem, a new method is proposed that determines the clustering center points by density searching, exploiting the universality of the Gaussian distribution. After a center is obtained, the cluster expands based on the correlation coefficient between clusters and the membership of the samples until the terminating condition is met. The experimental results show that this method can accurately classify samples drawn from Gaussian distributions with different degrees of overlap. Compared with the fuzzy c-means algorithm, the proposed method is more accurate and time-saving when applied to the Iris and Fossil data. Keywords: clustering, density searching, clustering center, algorithm.
1 Introduction
Clustering belongs to the scope of unsupervised learning. It is widely applied in computer science, life and medical sciences, social science, and economics [1, 8], especially in image processing, data mining, and video analysis. Data clustering is one of the main tasks of data mining [2]. It is used to find unknown object classes in a database and to identify meaningful modes or distributions. Many clustering algorithms have already appeared. They can be roughly divided into partitioning methods, such as K-means [3] and K-medoids [4]; hierarchical methods, such as the BIRCH [5] and CURE algorithms; and density- and grid-based methods, such as the DBSCAN and OPTICS algorithms. In addition, there are some special cluster analysis methods, such as clustering fusion algorithms, high-dimensional clustering algorithms [7], and dynamic data clustering algorithms [6]. Although these clustering algorithms have obtained good results in practical applications, they have inherent disadvantages. Most depend excessively on initial values and require users to input parameters in advance, and obtaining high precision often requires a complex algorithm with a large amount of calculation whose result is influenced by the evaluation function. Aiming at the above problems, this article proposes a clustering algorithm based on density searching. The method does not need any parameters to be given in advance, and its calculation cost is far less than that of existing density-based clustering methods.
M. Zhu (Ed.): ICCIC 2011, Part VI, CCIS 236, pp. 110–116, 2011. © Springer-Verlag Berlin Heidelberg 2011
2 Expanding Clustering Algorithm Based on Density Searching
Statistically, most stochastic processes in practice follow a normal distribution, and any distribution function can be composed of a linear combination of normal distribution functions. The data sets that are the object of clustering also accord with this characteristic, so it is reasonable to assume that the input data set of the clustering algorithm follows a normal distribution. Furthermore, each clustering center lies in the densest area of the input data set. On this basis, the algorithm divides the clustering process into two steps. In the first step, the clustering centers are found by density searching, and the data set is partitioned according to the definition of the correlation coefficient. In the second step, the data set is classified using an incremental circle expansion strategy and the definition of membership.
3 Algorithm Description
Consider the two-dimensional case as an example. Pseudocode for the algorithm can be formulated as follows.
Step 1. Search for the data-set centers.
While (clustering density l_i > l_0) (l_0 is the clustering density threshold):
Pick a point x_i at random and compute the clustering level l_i between x_i and its n closest points:

l_i = ∑_{j=1}^{n} (x_{i+j} − x_i) .   (1)

Let l_{i+k} = min(l_i, l_{i+1}, ..., l_{i+n}), k = 0, 1, ..., n. Save the n points around x_{i+k} and calculate the clustering levels centered on each of these n points in turn by the same procedure, then find the minimum according to the definition above. If l_{i+1} > l_i, we have found the minimum l_i and mark the n points around x_i. The center of these n points is denoted as

x_{z1} = (1/n) ∑_{j=1}^{n} x_j .

Repeat this process recursively until the while condition is broken, then remove all the sample points on the searching orbit.
End while.
Now m clustering centers are obtained: x_{z1}, x_{z2}, ..., x_{zm}.
Step 2. Connect any two clustering center points x_{zi}, x_{zj} and use the points on the connecting line to approximate the normal distribution, obtaining the variance for each mean.
Step 3. Use the distance r between two arbitrarily chosen clustering centers x_{zi}, x_{zj} and the σ_i, σ_j resulting from Step 2 to calculate the correlation coefficient ρ_ij:

ρ_ij = exp( r² − r·[β_0 + β_1²(σ_i + σ_j)²] / [β_1(σ_i + σ_j)] )   if r ≤ β_1(σ_i + σ_j),
ρ_ij = 0   if r > β_1(σ_i + σ_j).   (2)

If (ρ_ij < ρ_0):
Expand a circle around the clustering center obtained in Step 1, with radius increment Δr_i = ησ_i, η ∈ (0, 1).
If (the number of newly absorbed samples is less than 2% of the original one || the two circles are tangent to each other):
Stop circling; the resulting clustering center is

x_z = ∑_{i=1}^{n} μ_i x_{zi} / ∑_{i=1}^{n} μ_i .   (3)

Else:
Continue circling with the incremental radius.
Step 4. When the samples overlap slightly, the membership function is defined as

p(x) = exp( γ(r_i − x) / r_i )   if x > r_i,
p(x) = 1   if x ≤ r_i,   (4)

where r_i is the radius of the ith subclass and γ is a regulator that adjusts the membership between every subclass and the samples outside its circle. When p(x) < p_0, the samples are identified as outliers.
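As a sketch of Step 4 (our own rendering of Eq. 4, with illustrative parameter values), the membership function and the outlier test can be written as:

```python
import math

def membership(x, r_i, gamma=1.0):
    """Membership of a sample at distance x from the center of the i-th subclass
    of radius r_i (Eq. 4): 1 inside the circle, exponential decay outside."""
    if x <= r_i:
        return 1.0
    return math.exp(gamma * (r_i - x) / r_i)

def is_outlier(x, r_i, gamma=1.0, p0=0.1):
    """A sample is an outlier when its membership falls below the threshold p0."""
    return membership(x, r_i, gamma) < p0

print(membership(0.5, 1.0))  # 1.0: inside the circle
print(is_outlier(4.0, 1.0))  # True: exp(-3) is about 0.05, below p0
print(is_outlier(1.5, 1.0))  # False: exp(-0.5) is about 0.61, above p0
```

Larger γ makes the decay steeper, so the regulator trades off how far outside its circle a subclass still claims samples.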
4 Algorithm Demonstration
In this section, the clustering algorithm via density searching is demonstrated and analyzed step by step in order to present the algorithm clearly and verify its reliability. First, a simulation is designed as shown in Fig. 1, where five groups of Gaussian-distributed data sets are given, of which two are aliased seriously, two are aliased a little, and the last one is isolated. Then the center points of the five groups of data sets are searched, with n in the algorithm set to 15. We mark the actual center point of each subclass with a red triangle and the center point of each pseudo-subclass found by the proposed method with a small red circle.
By analyzing the algorithm for fast searching of subclass center points proposed in this article, two potential problems in the searching process can be identified. First, the same center point may be found in several different steps of the recursive process. Second, the initial sample point, chosen arbitrarily, may not have the minimum clustering density in the subclass. However, the center point finally converges to the optimal solution, even if finitely many repeats occur. That is because the algorithm removes all the sample points on the searching orbit, keeping only some points of high clustering density. After choosing a clustering center point, some sample points of low clustering density caused by the second problem may still remain in the original sample set, but these points do not seriously influence the calculation of clustering center points for isolated distributions. As shown in Fig. 1, the center points obtained by the proposed algorithm are very close to the actual center points of the sample sets.
Fig. 1. Comparison between the center points of the pseudo-subclasses found by the proposed method and the actual center points of the subclasses with different correlation coefficients
Fig. a. The clustering of the pseudo-subclasses with little correlation coefficient Fig. 2. The clustering among the pseudo-subclasses of different correlation coefficient
Fig. b. The clustering of the pseudo-subclasses with low correlation coefficient
Fig. c. The clustering of the pseudo-subclasses with high correlation coefficient Fig. 2. (continued)
After the centers are obtained, the clusters are classified based on the correlation coefficient between clusters and the membership of the samples. Figure 2 shows three simulation groups. In Figure 2(a), there are four independent pseudo-subclasses of normal distribution; a variable-step circle is used to enclose the sample points of each pseudo-subclass. In Figure 2(b), five data sets mix with each other, but not seriously; they are divided into two groups, the first with two intersecting pseudo-subclasses and the second with three, and the scope of each data set is delineated so as to determine the points outside the circles and mark their attribution. In Figure 2(c), the five data sets mix with each other and the aliasing is very serious, divided into two cases: in one group two pseudo-subclasses mix very seriously, and in the other three pseudo-subclasses mix very seriously.
Figure 2(a) shows that the clustering of sub-classes with low correlation is very easy to complete; the points outside the circles are outliers. Figure 2(b) shows that the clustering of sub-classes with high correlation is relatively more complicated: each small circle represents the corresponding sub-class cluster center, and each great circle represents the boundary of the corresponding subclass, found by the limited-step method proposed in this paper. Points outside a circle bearing the same sign belong to the same sub-class, as determined by the weighting-function method, and unmarked points outside the circles are judged to be outliers. Figure 2(c) shows that the clustering of sub-classes with still higher correlation is relatively simple; the points outside the circles are outliers. The above simulation results show that the method can eliminate outliers effectively and complete the clustering of the samples very well.
5 Typical Experimental Data
To verify the practicality of this algorithm, two typical data sets were selected for testing. In Experiment 1, the Iris data set is selected as the test data. This data set is divided into three categories, each of which contains 50 data points. In Experiment 2, the actual Fossil data set is selected as the test sample. This data set consists of 87 six-dimensional samples, also divided into three categories: Category 1 contains 40 samples (serial numbers 1-40), Category 2 contains 34 samples (serial numbers 41-74), and Category 3 contains 13 samples (serial numbers 75-87). The algorithm proposed in this article and the classical fuzzy C-means algorithm (FCM) were used to cluster the Iris and Fossil data sets respectively. Each method was run 20 times and the average results were taken as the final results for comparison; the results are shown in Table 1, Table 2 and Table 3.

Table 1. The comparison between the actual clustering centers and those found by the two algorithms

Flower kinds   Actual center        Center of ECM           Center of FCM
Setosa         (5.1,3.5,1.4,0.2)    (5.07,3.39,1.48,0.2)    (5.00,3.40,1.48,0.25)
Versicolour    (6.5,3.0,5.5,1.8)    (6.61,3.02,5.63,1.94)   (6.77,3.05,5.64,2.05)
Virginica      (6.0,2.7,5.1,1.6)    (5.90,2.77,4.85,1.48)   (5.88,2.76,4.36,1.39)
Table 2. The clustering accuracy resulting from the two algorithms

Flower kinds    ECM        FCM
Setosa          98.00 %    94.17 %
Versicolour     96.30 %    95.67 %
Virginica       89.47 %    85.33 %
L. Tan, Y. Liu, and X. Chen

Table 3. The average clustering time derived from the two algorithms

Test data    ECM       FCM
Iris         0.1267    0.0938
Fossil       0.1534    0.1312
From the data given in Table 1 we can conclude that the clustering centers obtained by the algorithm proposed in this article are much closer to the true data clustering centers than those of FCM; in other words, the clustering centers found by our algorithm are more accurate. As we can see from Table 2, the test results on the Iris data set show that the clustering accuracy of the algorithm in this article is consistently higher than that of FCM. According to Table 3, the algorithm in this article needs less average clustering time than FCM in clustering the Iris and Fossil data sets. Furthermore, the consistency between the test results for real data and for artificial data proves once again that the algorithm proposed in this article is superior to FCM in both classification accuracy and average clustering time.
6 Conclusion

In accordance with the principle that an actual process can be infinitely approximated by a normal distribution or a combination of normal distributions, this article proposes a clustering algorithm based on density searching. The algorithm can automatically and exactly locate the centers of a data set and classify data sets with a high degree of aliasing. Moreover, it effectively avoids the traditional clustering problem of having to supply parameters in advance, and reduces the amount of calculation. Tests on the standard Iris data set and the Fossil data set show that the algorithm is not only more accurate in classification but also uses less time than FCM.
A Ship GPS/DR Navigation Technique Using Neural Network Yuanliang Zhang School of Mechanical Engineering, Huaihai Institute of Technology, Lianyungang, Jiangsu, China [email protected]
Abstract. A stable and accurate navigation system is very important for ships sailing in the ocean. The dead reckoning (DR) system is a frequently used navigation system for such ships: it can provide precise short-term navigation data, but its error accumulates over time without limit. GPS can be used for navigation outdoors, but the error of civilian GPS is still large. In this paper a cheap single GPS receiver is used for ship navigation. The paper proposes a new Kalman-filter-based GPS/DR data fusion method using a neural network, designed around the characteristics of the GPS receiver. With this data fusion method the cheap single GPS receiver can cooperate with the DR system to provide precise navigation information. Simulation is conducted to validate the proposed data fusion method. Keywords: Kalman filter, GPS, dead reckoning, data fusion method, BP neural network.
1 Introduction
With the rapid development of ocean exploration, the demand for good ship navigation systems grows quickly. Since electromagnetic waves can be used to communicate information, most navigation systems used on land can also be used in the ocean [1]. DR is a frequently used technique for ship navigation. Its main idea is to use the velocity and acceleration information of the ship to calculate the ship's position. The DR system can provide short-term precise navigation data, but since its errors accumulate over time without limit it cannot be used to navigate a ship alone, without any correction; it needs an external aid to provide compensation information to improve its long-term navigation precision. GPS is the most widely used global positioning system, with many applications including positioning, locating, navigating, surveying and time determination. A GPS receiver relies on signals, including the system time and ephemeris information, received from several non-geostationary satellites. Knowing the absolute satellite positions from the transmitted messages, the antenna position and the GPS system time can be calculated if four or more pseudoranges are available.

M. Zhu (Ed.): ICCIC 2011, Part VI, CCIS 236, pp. 117–123, 2011. © Springer-Verlag Berlin Heidelberg 2011
GPS can provide positioning information with a bounded error. Since GPS and the DR system have a synergistic relationship, it is better to combine these two kinds of navigation system to provide navigation information. The Kalman filter is a frequently used method for fusing multi-sensor data into more accurate navigation information. Much work has been done on using GPS and GPS/DR navigation systems to provide navigation information [2,3,4,5]. In this paper a new Kalman-filter-based GPS/DR data fusion method, designed around the characteristics of the GPS receiver and a BP neural network, is proposed to fuse the data coming from GPS and the DR system. By modifying the beliefs assigned to the GPS and DR systems, the proposed data fusion method can provide an accurate navigation result. Simulation using real GPS data is conducted to validate the proposed data fusion method.
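For reference, the Kalman-filter baseline that the proposed method is later compared against can be sketched as a scalar measurement update (a minimal illustration, not the paper's exact filter; all names are ours):

```python
def kalman_update(x_prior, P_prior, z, R):
    """One scalar Kalman measurement update: blend the prediction x_prior and
    the measurement z in inverse proportion to their variances P_prior and R."""
    K = P_prior / (P_prior + R)           # Kalman gain
    x_post = x_prior + K * (z - x_prior)  # corrected estimate
    P_post = (1.0 - K) * P_prior          # reduced uncertainty
    return x_post, P_post
```

With equal variances the update simply averages the two sources; as the measurement variance R grows, the measurement is trusted less.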
2 DR and GPS System for Ship Navigation
DR System. The DR system can provide short-term precise navigation data, but its error can accumulate without limitation. The calculation formula of the DR system for ship navigation is:

ϕ_n = ϕ_l + (1/R) ∫_{T_l}^{T_n} V_D cos K dt
λ_n = λ_l + (1/R) ∫_{T_l}^{T_n} V_D sin K sec ϕ dt        (1)
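Eq. (1) can be discretized with a simple Euler step; the sketch below assumes a spherical Earth of mean radius R and radian inputs (the function and constant names are illustrative, not from the paper):

```python
import math

R_EARTH = 6_371_000.0  # assumed mean Earth radius in metres

def dr_step(phi, lam, v_d, K, dt):
    """One Euler integration step of Eq. (1).

    phi, lam : latitude and longitude in radians
    v_d      : ship speed V_D in m/s
    K        : course angle in radians (0 = north)
    dt       : time step in seconds
    """
    phi_new = phi + (v_d * math.cos(K) / R_EARTH) * dt
    # sec(phi) = 1 / cos(phi): a degree of longitude shrinks with latitude
    lam_new = lam + (v_d * math.sin(K) / (R_EARTH * math.cos(phi))) * dt
    return phi_new, lam_new
```

Integrating one step per second mirrors the 1 s sampling interval used later in the data fusion method.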
Here VD is the velocity of the ship, R is the radius of earth, Tl is the initial time, Tn is the calculation time, K is the direction of the ship, λl and ϕl are the longitude and latitude of the initial position, and λn and ϕn are the longitude and latitude of the present position. GPS System. In GPS applications, the main goal is to get the position of a receiver as accurately as possible. Since GPS works in outside environment many sources of disturbances can affect the precision of GPS. These disturbance sources include clock offset, atmospheric and ionospheric effect, multipath effect and receiver noise. DGPS can provide very precise positioning service but the cost of DGPS is very high. In this paper we used a cheap single GPS receiver to provide the navigation information. The absolute error of this GPS receiver is about 10-15 meters. We proposed a new Kalman filter based data fusion method which was designed based on the characteristic of the GPS receiver and BP neural network to fuse the data coming from GPS and DR system. Here we are interested in relative positioning accuracy. Fig. 1 shows the single GPS data collected in the fixed point. These data are the latitude and longitude values after subtracting their mean values. Fig. 2 shows the values of k + 1 time step latitude (longitude) subtracting k time step latitude (longitude). From Fig. 1 and Fig. 2 it can be seen that the error range of the single GPS receiver is big but the data between adjacent sampling time drift small, no more than three times of the GPS resolution. Fig. 2 shows the way the single GPS receiver error changes in time. Successive errors are tightly related and therefore the error is strongly colored.
Fig. 1. Data of the single GPS receiver after subtracting the mean value (latitude and longitude in metres vs. t in seconds)

Fig. 2. Difference of GPS output between the k + 1 time step and the k time step (latitude and longitude in metres vs. t in seconds)
Neural Network. Neural networks have the ability to "learn" system characteristics through nonlinear mapping, and their fault tolerance gives them a strong degree of robustness. By means of both off-line and on-line weight adaptation, neural networks can improve adaptability. From Fig. 2 it can be seen that the current-time
GPS output is related to the previous neighboring GPS data. In this paper a neural network is trained to predict the GPS output from previous GPS outputs; a BP neural network is adopted for this prediction job. At a fixed point, 1860 measurements were collected, one per second. The first 998 measurements are used as the training data of the neural network and the other 862 as its test data. The test results are shown in Fig. 3, which shows that the neural network can predict the single GPS receiver output effectively.

Fig. 3. Prediction results of the test data (GPS data vs. neural network output; latitude and longitude in metres vs. t in seconds)
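The paper does not give the network architecture beyond "BP neural network", so the following is only an illustrative sketch: a tiny two-input, one-output backpropagation network trained to predict a series value from its two predecessors (hidden size, learning rate and epoch count are all assumptions):

```python
import math
import random

def train_bp_predictor(series, hidden=4, epochs=2000, lr=0.05, seed=0):
    """Train a 2-input, `hidden`-unit, 1-output BP network to predict
    series[t] from (series[t-2], series[t-1]). Returns a predict() closure."""
    rnd = random.Random(seed)
    w1 = [[rnd.uniform(-0.5, 0.5) for _ in range(2)] for _ in range(hidden)]
    b1 = [0.0] * hidden
    w2 = [rnd.uniform(-0.5, 0.5) for _ in range(hidden)]
    b2 = 0.0
    samples = [((series[t - 2], series[t - 1]), series[t])
               for t in range(2, len(series))]
    for _ in range(epochs):
        for (x1, x2), y in samples:
            # forward pass: tanh hidden layer, linear output
            h = [math.tanh(w1[j][0] * x1 + w1[j][1] * x2 + b1[j])
                 for j in range(hidden)]
            out = sum(wj * hj for wj, hj in zip(w2, h)) + b2
            err = out - y
            # backward pass: stochastic gradient descent on squared error
            for j in range(hidden):
                grad_a = err * w2[j] * (1.0 - h[j] ** 2)  # d(loss)/d(pre-activation)
                w2[j] -= lr * err * h[j]
                w1[j][0] -= lr * grad_a * x1
                w1[j][1] -= lr * grad_a * x2
                b1[j] -= lr * grad_a
            b2 -= lr * err

    def predict(x1, x2):
        h = [math.tanh(w1[j][0] * x1 + w1[j][1] * x2 + b1[j])
             for j in range(hidden)]
        return sum(wj * hj for wj, hj in zip(w2, h)) + b2

    return predict
```

In the paper's setup the same idea is applied to the GPS latitude and longitude series separately, with the first 998 samples for training and the rest for testing.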
3 Data Fusion Method
The neural network result can be used to design the GPS/DR data fusion method. The DR system provides precise short-term navigation data, but its error can accumulate without limit; the error of the single GPS receiver, on the other hand, is large but bounded. A data fusion method using the neural network is therefore proposed in which the DR system provides accurate short-term navigation information and the single GPS receiver corrects the long-term navigation information. The characteristic of the single GPS receiver is that its error drifts little (no more than three times the receiver resolution) between two adjacent sampling times; the resolution is about 0.1856 m for latitude and about 0.1504 m for longitude. A BP neural network is trained to predict the current sampling-time GPS output from the GPS data of the previous two sampling times. We assume that the covariance of the DR system errors in the latitude and longitude directions is Q and the covariance of the single GPS receiver errors is R.
The details of the GPS/DR data fusion process are as follows (x indicates latitude and y indicates longitude). At the starting point an imaginary single GPS receiver is assumed to stay fixed while the real single GPS receiver moves with the ship. A BP neural network is used to predict the current-time GPS output; the output of the neural network is the prediction of the imaginary GPS receiver's output.

1) At the starting point, before the ship runs, collect the GPS output twice to get GPS_xp1, GPS_xp2, GPS_yp1 and GPS_yp2.

2) Run the ship. At time T (written '1' for simplicity) the DR system gives the current-time DR-based coordinates x_DR(1|0) and y_DR(1|0), whose covariances are Q_x and Q_y.

3) Use the GPS data of step 1 and the neural network to predict the current-time imaginary GPS output: GPS_x and GPS_y.

4) At the same time read the real GPS output: GPS_x(1) and GPS_y(1).

5) Predict the current-time GPS-based coordinates from the difference between the real and imaginary GPS outputs: x_GPS(1) = GPS_x(1) − GPS_x and y_GPS(1) = GPS_y(1) − GPS_y.

6) Since the DR system provides precise short-term navigation information, the DR-based coordinate is precise within one sampling interval (1 second) and can be used to grade the covariance of the GPS-based coordinate. If |x_GPS(1) − x_DR(1)| ≤ 0.1854 m, the error covariance of x_GPS(1) is 0.1854² m²; else if |x_GPS(1) − x_DR(1)| ≤ 2 × 0.1854 m, the error covariance of x_GPS(1) is (2 × 0.1854)² m²; otherwise the covariance of x_GPS(1) is set to a large value. Thus we obtain the covariance R_x of x_GPS(1), and in the same way the covariance R_y of y_GPS(1).

7) The beliefs of the DR-based coordinates are R_x/(Q_x + R_x) and R_y/(Q_y + R_y), and the beliefs of the GPS-based coordinates are Q_x/(Q_x + R_x) and Q_y/(Q_y + R_y).

8) Calculate the fused result:
x(1|1) = (R_x/(Q_x + R_x)) x_DR(1|0) + (Q_x/(Q_x + R_x)) x_GPS(1),
y(1|1) = (R_y/(Q_y + R_y)) y_DR(1|0) + (Q_y/(Q_y + R_y)) y_GPS(1).
The covariance of x(1|1) is P_x(1|1) = Q_x R_x/(Q_x + R_x) and the covariance of y(1|1) is P_y(1|1) = Q_y R_y/(Q_y + R_y).

9) Form the new input data of the neural network: GPS_xp1 = GPS_xp2, GPS_yp1 = GPS_yp2, GPS_xp2 = GPS_x(1) − x(1|1) and GPS_yp2 = GPS_y(1) − y(1|1). These data are passed through the neural network to predict the imaginary GPS output at the next sampling time. Their covariance is a fraction of P_x(1|1) and P_y(1|1) (written P_GPS), depending on the weights of the neural network.

10) Repeat from step 2 for the following time steps.
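Steps 6-8 can be sketched as follows; note the "large value" for an out-of-bound covariance is an illustrative assumption, since the paper only says it is "a big one", and the function names are ours:

```python
RES = 0.1854  # receiver resolution in metres, as used in step 6

def gps_covariance(x_gps, x_dr, res=RES, big=100.0):
    """Step 6: grade the GPS-based coordinate covariance R against the
    short-term-accurate DR coordinate. `big` is an assumed placeholder."""
    d = abs(x_gps - x_dr)
    if d <= res:
        return res ** 2
    if d <= 2 * res:
        return (2 * res) ** 2
    return big

def fuse(x_dr, x_gps, Q, R):
    """Steps 7-8: belief-weighted combination of the DR and GPS coordinates.
    Returns the fused estimate x(1|1) and its covariance Q*R/(Q+R)."""
    w_dr = R / (Q + R)    # belief in the DR-based coordinate
    w_gps = Q / (Q + R)   # belief in the GPS-based coordinate
    return w_dr * x_dr + w_gps * x_gps, Q * R / (Q + R)
```

Setting the GPS belief near zero (a "large" R from step 6) makes the fused output follow the DR coordinate, which is exactly the bad-GPS behaviour described in Section 4.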
4 Simulation
In this paper MATLAB is employed to simulate the data fusion method, using real GPS data. The BP neural network presented in Section 3 is trained to predict the current sampling-time output of the single GPS receiver, and the GPS data are fused with DR system data carrying stochastic accumulating errors. Fig. 4 shows the fusion results of the proposed data fusion method and of the Kalman filter respectively. From the simulation results we can see that for this single GPS receiver the proposed data fusion method performs better than the Kalman filter. In this simulation the GPS always works in a good status. Sometimes, however, the GPS receiver may work in a bad status, for example when it cannot detect enough satellites for positioning or when the satellites in view change. In that case the GPS error is very large and the Kalman filter cannot provide good performance, while the proposed data fusion method can still obtain better performance by modifying the belief of the GPS system: when GPS works in a bad status its belief can be set very small, so that the DR system mainly provides the navigation information, and when the GPS recovers its belief is enlarged again.

Fig. 4. Data fusion results of the proposed data fusion method and the Kalman filter (latitude and longitude errors in metres vs. t in seconds)
5 Conclusion
A precise and reliable navigation system is very important for ships to complete their missions. The DR system can provide short-term precise navigation information, but its errors accumulate over time without limit, so it cannot provide navigation information alone and needs a complement. GPS has a synergistic relationship with the DR system and can provide positioning information with a bounded error; the absolute error of the cheap single GPS receiver used in this paper is about 10-15 meters. DGPS can provide very precise positioning information, but its cost is very high. This paper proposed a GPS/DR data fusion method using a BP neural network. The proposed method fuses the navigation information coming from the cheap single GPS receiver and the DR system and provides precise navigation results for ships. Good simulation results verify the effectiveness of the proposed data fusion method.

Acknowledgment. This paper is supported by the Jiangsu Province Marine Resources Development Research Institute Science and Technology Open Fund Project (JSIMR10A05).
References

1. Yang, F., Kang, Z., Du, Z., Zhao, J., Wu, Z.: The application and prediction of the ocean localization and navigation technique. Hydrographic Surveying and Charting 26(1), 71–74 (2006)
2. Wu, F., Kubo, N., Yasuda, A.: Fast Ambiguity Resolution for Marine Navigation. Journal of Japan Institute of Navigation 108, 173–180 (2003)
3. Ding, T.: The Ship Navigation and Supervision System Using DGPS, AIS and GPRS. World Shipping 29(3), 48–49 (2006)
4. Hu, L., Chen, Y., Wang, L.: The Design of the Embedded Ship Navigation System Basing on GPS and Electronic Ocean Chart. Electronic Technique Application 6, 7–9 (2005)
5. Jia, Y., Jia, C., Wei, H., Zhang, B.: Design and Implementation of a Ship Navigation System Based on GPS and Electronic Chart. Computer Engineering 29(1), 194–195 (2003)
Research of Obviating Operation Modeling Based on UML

Lu Bangjun, Geng Kewen, Zhang Qiyi, and Dai Xiliang

Dept. of Transportation Command, Automobile Management Institute, Bengbu, China [email protected]

Abstract. A military conceptual model is an abstraction of the "obviating behavior space". The purpose of this model is to provide a module for integral joint operation simulation. Based on use-case descriptions, it offers a formal representation of elements such as battlefield barriers and obviating unit entities through the establishment of an obviating system package. A dynamic modeling mechanism based on UML for building the cooperating operational model and the obviating operational model is formed to realize the operational use cases of each battle behavior entity. The operation activities during the obviating course, and while suffering a sneak attack, are also analyzed through corresponding obviating activity diagrams. The model implements interaction relations and message delivery, and reflects the inner logic of the obviating operation in its mission, entities, functions, interactions and activities. Keywords: operation modeling, obviating units, barriers, unified modeling language, solid model.
1 Introduction

Modern wars show that building and setting barriers, which delay, restrict and destroy the enemy's actions and maneuvering, are effective measures to enhance an effective and stable defense, while "obviating barriers, opening up access" keeps the initiative and seizes a favorable trend of mobility [1]. "Obviating barriers, opening up access" is a typical tactical operation process, of which the operations model is an abstract representation and analogy of the obviating operation process [2]; it models how, based on requirements (orders, etc.) and mutual information, one or more battle entities perform one or more actions to achieve the purpose of obviating barriers [3]. The Unified Modeling Language (hereafter referred to as UML) is an important tool for military conceptual modeling [4]; it possesses high accuracy and rationality in the military expression of behavior space and modeling, which has been validated in practice [5-6]. UML can properly describe the internal logic of the "obviating barriers" operational nodes, operational tasks, operational functions and operational activities, and the abstract military conceptual model for the "obviating behavior space" built with it is an important development in operation modeling methods. It can meet the new demands of operation modeling and simulation required by changing technologies and tactics.

M. Zhu (Ed.): ICCIC 2011, Part VI, CCIS 236, pp. 124–130, 2011. © Springer-Verlag Berlin Heidelberg 2011
2 Descriptions of Battle Behavior Entities Carrying Out Tasks

Obviating units are mainly responsible for opening up and expanding passageways when penetrating the enemy's cutting-edge location barriers during offensive battles [1]. The task description must reflect the duties and functions of the operational behavior entities in the operations, and must reflect the unique organization and implementation of operational procedures in obviating barriers. The participants required for obviating operations include the headquarters, commanders, etc. Using the use case diagram described in Figure 1, UML gives a clear definition of the obviating battle system context, system requirements and scope, with the user mode of each participant expressed in the system description when an obviating unit carries out its tasks.
Fig. 1. Use case description of the operational behavior entities
3 Entity Descriptions of Barriers and the Obviating Group

Entities refer to all the individual subjects and objects that can be identified in the operational simulation system, including simulation entity descriptions and system capture [7]. Abstracting the barriers on the battlefield and the structure of the obviating operational system by capturing the general things in the system yields a base class (Barrier); taking this base class as the basis and then capturing the other classes completes the design of the class view that describes the relevant entity objects.

Entity Description of the Barrier System. The entity description of the barrier system is an abstract feature extraction of the battlefield barriers (BattleBarriers) described by their attributes; the barriers are the object of the operations. Although the types of barriers differ and their properties vary, they share some common features, such as tag identity, location, area size and harm grade. The abstract properties of the barriers and their operations construct the barrier base class (Barrier), which is extended into explosion barrier, mine barrier and fortification barrier classes; the class view of the battlefield barrier design [1-2] is shown in Figure 2.
Fig. 2. Model of barriers
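As a rough illustration of the class structure in Figure 2, the base class and its specializations might look like the following sketch (attribute and class names are assumptions based on the features listed above, not the paper's actual model):

```python
class Barrier:
    """Base class for battlefield barriers, capturing the shared features
    named in the text: tag identity, location, area size and harm grade."""
    def __init__(self, tag, location, area_size, harm_grade):
        self.tag = tag
        self.location = location
        self.area_size = area_size
        self.harm_grade = harm_grade

# Specializations derived from the base class, as in Figure 2.
class ExplosionBarrier(Barrier): pass
class MineBarrier(Barrier): pass
class FortificationBarrier(Barrier): pass

class BattleBarriers:
    """Aggregation of all barriers in the obviating theatre of operations."""
    def __init__(self, barriers):
        self.barriers = list(barriers)

    def by_harm_grade(self, grade):
        """Select barriers of a given harm grade from the aggregation."""
        return [b for b in self.barriers if b.harm_grade == grade]
```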
The construction of BattleBarriers, as shown in Figure 2, achieves the purpose of system description of barriers within the obviating theatres of operations on the battlefield area, it is the polymerization of all the barriers, which exhibits the static model and the internal logic structure within them. And it is interrelated with natural barriers and the enemy fire attacks. Entity Description of Obviating Unit. Battle group is a common property for an obviating unit battle formation. Manipulating and capturing the such properties description construct battle groups (Group), they have the common features which the battle groups share, such groups as personnel configuration, communications equipment, battle equipment, location configuration, battle mission, friendly neighbors coordination, battle support and other battle groups as shown in Figure 3. And from these groups derives sub-groups such as reconnaissance group, obviating group, warning and covering group, survival sweeping and marking group, obviating preparedness group.
Fig. 3. Model of obviating unit
The obviating unit entity is the principal part that completes the operations. Figure 3 shows how the obviating unit is built and describes the whole-part relationships between the obviating unit and each battle group, which are interrelated with the headquarters and the commander.
4 Construction of the Obviating Operation Model

The Obviating Operation System Package Diagram. In UML, the package is a common mechanism used to group modeling elements [4]. Browse
all the modeling elements of the obviating operation system structure, then put the elements that are close to each other in concept and semantics into packages, including the battle system package, obviating program package, operating force calculation package, modeling kits, etc., as shown in Figure 4. Among them, the battle system package includes such entity elements as the obviating unit, headquarters and commanders; the barrier system package includes natural barriers, explosive barriers and barrier barriers.
Fig. 4. Obviating operation system package diagrams
Model of Obviating Operation Entity Collaboration. UML class diagrams, use case diagrams and package diagrams describe the obviating operation system from a static point of view, but obviating operation is a dynamic behavior; only by creating a dynamic model of the system can the situation of obviating operations be fully reflected. The UML collaboration diagram models control flow by organization; it depicts the coordination mechanism of the organized structure and is an important method for building dynamic models. Figure 5 shows the constructed model of the main operations of obviating operation entity collaboration.
Fig. 5. Model of battle behavior entity collaboration
As shown in Figure 5, "object, chain and information" are important elements and graphic features of collaboration diagrams. In the model of battle behavior entity
collaboration, the "object" describes the principal part of each operation; the "chain" reflects the relevance and the internal interactive mechanism; and the "information" reflects the use case description and function of each object. Taking the obviating preparedness sub-group as an example: when the "obviating support order" information is received from the commander, the implementation of "obviating support" begins, the sub-group's role shifts to that of the "obviating group" object, and the implementation of the obviating operation starts.

Model of Obviating Operations. An obviating operation focuses on the barriers of the set objectives, and the main objects of each operational behavior interact according to the battle process in a particular battle space. The model needs to reflect not only the mission, the use cases, the timing of operation and the life cycle of all the operational subjects, but also the object interactions, internal collaboration and attack-defense behavior; besides, it needs to reflect the operational process control. The obviating operations model, built on the established UML dynamic modeling mechanism as shown in Figure 6, is also known as the obviating operations sequence diagram; it visualizes the trajectory of the operation flow over time. Operational objects and operation timing are the two major factors in building the model. In Figure 6, the objects of the operational process are arranged from left to right (the X-axis direction), time runs top-down along vertical broken lines (the Y-axis direction), and each broken line describes the life cycle of its object in the operational process.
The model describes the obviating control flow of operations in time sequence; it describes the "realization" of each operational behavior entity's use case, and it shows the interactions, relationships and information developed over time among the grouped objects of the headquarters, the commander and the battle groups.
Fig. 6. Model of obviating operations
Obviating Operation Process Analysis and Modeling. When obviating units receive the commander's "advancing command", they march forward along the planned route in organized units, using a flexible approach to ensure that they are not found and attacked by the enemy; when they enter the operating area, they quickly occupy the operation starting positions and make obviating preparations, as shown in Figure 7.
Fig. 7. Obviating operation activity diagram
Fig. 8. Operation activity diagrams when attacked by the enemy
The obviating battle process model is constructed based on the UML activity diagram and is also known as the obviating battle activity diagram, shown in Figure 7. During the obviating activities, methods such as secret search and removal together with forced obviating operation can be used [1].

Operation Description When Attacked by the Enemy. An obviating operation, as an important operational background, is often launched in hostile areas of great concern and is therefore vulnerable to enemy fire attack. In view of this particular dynamic situation, Figure 8 shows the entity objects with swim lanes substituted for the combat space. Figure 8 is the dynamic model of obviating operations when attacked by the enemy. When they encounter an enemy fire attack, the covering unit can take tactical covering measures against the enemy such as fire suppression or laying a smokescreen, while the obviating personnel conceal themselves on the spot and wait for a favorable opportunity to continue their operation, and the commander judges the enemy situation. When the enemy fire is too strong, they ask a higher or friendly adjacent unit for fire support, rescue the wounded in time, and continue implementing the obviating operation [1].
5 Conclusion

"Setting barriers" and "obviating barriers" are two interwoven contradictions on the modern battlefield. All kinds of barriers are widely used in battle, so the tasks of obviating barriers and opening up access become very difficult; in turn, improvements in obviating operation capability force further, constant improvement and enhancement of barriers to meet the requirements of barrier-setting operations. Applying UML to obviating operations modeling is conducive to realizing the integration of the conceptual model and the computer in one construction idea, and it has broad applicability and excellent prospects. Such a model not only provides an interface for computer generated forces (CGF), it also provides a reference interface for modeling other types of tactical-level operations.
References

1. Xu, X.: Engineer Tactics. The PLA Press, Beijing (1994) (in Chinese)
2. Cao, Z., Ma, Y.: Research on Composability of Operation Modeling. Journal of System Simulation 19(4), 1421–1424 (2007)
3. Hu, Y.: Military Conceptual Model of Operations by Small Ground Units in a City. Journal of Naval University of Engineering 18(5), 28–32 (2006)
4. Grady, B., James, R., Ivar, J.: UML User Guide. Translated by Shao, W., Zhang, W. Machinery Industry Publishing House, Beijing (2005) (in Chinese)
5. Shen, R., Zhang, Y.: The Description of Requirement of Weapon and Equipment Systems Based on UML. Systems Engineering and Electronics 27(2), 270–274 (2005)
6. Fan, Y., Li, W.: Research on the Formal Description Language for Military Conceptual Modeling. Fire Control & Command Control 31(6), 19–22 (2006)
7. Qi, Z., Wang, Z., Zhang, W., Wu, Q.: Design and Implication of Attack and Defense Simulation System for Ballistic Missile with UML. Journal of System Simulation 18(3), 602–606 (2006)
8. Guo, Q., Yang, L., Yang, R.: Computer Generated Forces. National Defense Industry Press, Beijing (2006) (in Chinese)
9. Duan, C., Yu, B., Yang, X., Liu, J.: Analyzing and Modeling of Operational Activities. Fire Control & Command Control 31(11), 34–38 (2006)
The Study of Distributed Entity Negotiation Language in the Computational Grid Environment Honge Ren, Yi Shi, and Jian Zhang Information and Computer Engineering College, Northeast Forestry University, Harbin, China [email protected]
Abstract. In the distributed computational grid environment, existing trust negotiation languages (TNLs for short) cannot meet the needs of efficient negotiation, defense against malicious attack, and negative expression. We therefore propose a Distributed Entity Negotiation TNL, which satisfies most of the requirements of a negotiation language, protects the safety of entities, and supports distributed authorization and proof as well as negative expression. To improve the efficiency of negotiation, we extend it with a feedback policy. Keywords: trust negotiation, distributed authorization and proof, release policy, feedback policy.
1 Introduction

With the popularity of the Internet, the network has become an important carrier of communication, and the interaction among entities has gradually become a focus of study. Because of the increasingly complicated network environment, people have begun to pay attention to the safety of messages exchanged over the Internet, especially information concerning persons, organizations and countries. To address these issues, Winsborough [1] proposed automated trust negotiation (ATN), a process of establishing trust among strangers by exposing digital credentials and access control strategies step by step. As ATN runs in an open distributed environment, many languages cannot meet the need for crossing heterogeneous platforms and operating systems. Furthermore, not every negotiation language can serve ATN, which imposes its own requirements on languages and running systems. As a result, we need a TNL that not only satisfies most of the requirements of negotiation languages, but also protects sensitive information, supports distributed authorization and proof, supports negative expression, and improves the efficiency of negotiation. Since the appearance of ATN, many research results have arisen (such as PSPL, TPL, X-Sec, RT, KeyNote, DL, X-TNL, TrustBuilder, XACML, etc.), but none of them meets all the demands of trust negotiation; each satisfies only some aspect of the requirements. For example, [2] defines an algorithm for distributed proof, but it supports neither complex resource access control nor credential protection. Li puts forward a resource access control policy language RT (role-based trust-management) M. Zhu (Ed.): ICCIC 2011, Part VI, CCIS 236, pp. 131–137, 2011. © Springer-Verlag Berlin Heidelberg 2011
in [3], which supports parameterized roles and linked roles, but RT does not support sensitive information protection. Li [4] achieves sensitive information protection through encrypted credentials based on RT; however, neither [3] nor [4] supports distributed proof. Winslett [5] proposes the PeerAccess framework, which is well extensible and supports distributed authorization and proof as well as proof hints, but the release predicate in that framework is limited: it supports neither negative expression nor a specified exposure policy. In this paper, we propose the Distributed Entity Negotiation (DEN) language knowledge framework. We introduce the usage and semantics of DEN in detail, then analyze the application of DEN, and lastly discuss the shortcomings of our work and look forward to new research opportunities.
2 The Framework and Semantics of DEN

In this paper, we develop DEN by improving and extending PeerAccess and RT, which gives DEN the new features of sensitive information protection, negative expression and information feedback during the proof, beyond the existing features of good extensibility, distributed authorization and proof, and proof hints. In DEN, we call a party interacting in the network an entity; all entities make up the entity set N, and each entity can request resources and send and receive messages. Each entity has a local knowledge base (KB), and all the local KBs form the global KB, which manages and realizes the behavior of all entities. Fig. 1 shows the structure of a local KB, which is composed of the advanced behavior management module (ABMM), the release policy (RP), the facts/rules set, the messages send/receive sets and the blacklist.
Fig. 1. The KB structure of DEN (ABMM with proof hints, release/exposure policy, messages send/receive set, blacklist, facts/rules set)
Advanced Behavior Management Module (ABMM). The ABMM manages the advanced behavior of an entity and includes two parts. The first is the proof hints, which supply an entity with helpful instructions during a proof. For example, when Alice cannot prove that she may use the No.1 project document
(pro-docum1) of AMC Company, she asks her workmate Bob for help; however, Bob cannot prove it either. We assume Bob can respond to Alice in two ways:
a. Bob sends Alice the proof hint "Bob lsigns find(AMC signs auth(pro-docum1, Alice), Alice, Carla)", telling her that she should ask Carla for help. Alice can then use the proof hint from Bob and turn to Carla. Readers may refer to [5] for the usage of "find".
b. Being busy with his own work, Bob is unwilling to give Alice any hint. But we stipulate that an entity cannot ignore the request of another entity, so Bob has to send Alice the feedback information "Bob lsigns feedback(busy, Bob, Alice)", telling her that he cannot help because of his work. On receiving this information, Alice will not ask Bob for help for a period of time and will turn to other entities instead.
In the rules above, "feedback" is a feedback predicate whose semantic form is: A signs/lsigns feedback(resn, A, B), which states that A feeds back to B a reason for being unable to help. Here "resn" is a piece of state information (e.g., busy) giving the reason; to prevent information leakage, "resn" cannot be a fact or rule. During a proof, we stipulate that B may ask for help from the entity C named in a proof hint provided by A only when no entity has provided a proof hint to B, but this situation is rare. PeerAccess stipulates that a peer (the same as an entity here) may ignore the requests of others, which causes requesters to wait in vain; this wastes both the time spent asking others for help and the negotiation efficiency of the requester, so we put forward the feedback policy.
Release Policy. The release policy contains rules on release predicates, which stipulate which information may be released and to whom. The release policy of PeerAccess defines the release predicate "srelease", based on a sticky policy: the signer of a piece of information retains control over its future dissemination to other peers. The srelease semantics stipulate that Alice may only send out a formula signed by Bob if she sends it to herself or to Bob, or if she can prove that Bob considers it acceptable for her to send the message out; further, Alice may only send out facts and rules that she believes to be true.
We do not deny that srelease is desirable under some conditions, but it cannot express all the release conditions that are needed. We therefore define a new predicate, "disclose", which overcomes the shortcomings of "srelease" in several ways:
a. By default it supports multi-level release, meaning that B may release information sent by A to others, who may release it further. In the grid there are usually more than two entities taking part in a negotiation, so we consider information to be multi-level releasable. The "srelease" defined in PeerAccess cannot meet this need unless extra conditions are added to it, which is more complicated and harder to express than with "disclose".
b. Entities may only release information they believe to be true, the same as with "srelease".
c. When the released information concerns an entity's privacy ("auth" authorization information, "own" property information, "member" identity information, etc.),
and if the entity doesn’t detail his/her release conditions on releasing to whom or not, the information is also multi-level releasable; if the entity defines conditions, others should multi-level release it according to the condition. We often put the condition defined by the information related entity into “disclose”, called inner condition. For example, we assume ϕ is the information to be released, which is related to C, and C is unwilling to release it to Nancy, then the rule is expressed through DEN: C lsigns disclose( ϕ ,X,Y,Y ≠ Nancy). d. During the multi-level releasing, the entity can increase limited conditions (called outer conditions) in the case where that information has passed through his/her hands, but he/she can’t delete the conditions added by others. Especially, the added outer condition couldn’t contrary to those added by others. For example: A stipulate that ϕ ( ϕ is unrelated to A) can’t be released to David, when
releasing an information ϕ to B (A lsigns disclose( ϕ ,X,Y,none) → Y ≠ David, where “none” means that A hasn’t increased inner condition ), then B should release ϕ to others under the inner condition. In addition, B can also increase outer conditions, eg: B requires the releasability of ϕ also depended on the agreement of B(B lsigns
disclose( ϕ ,X,Y,none) → Y ≠ David ∧ B condDisclose( ϕ ,X,Y,none)). But we didn’t allow B to add the following condition: B lsigns disclose( ϕ ,X,Y,none) → Y ≠ David ∧
¬ (Y ≠ David).
In particular, an inner condition may run counter to the outer conditions: the entity the released information concerns has greater power than any other entity, which cannot be realized by other release predicates (such as srelease). Through this feature, entities can better defend against information leakage. Next, we introduce the semantics of "disclose": A signs/lsigns disclose(ϕ, B, C, inner-cond) → outer-cond, where A, B and C are arbitrary entity names, ϕ is a fact or rule, inner-cond is the inner condition, and outer-cond is the outer condition. The inner-cond may run counter to the outer-cond. Each condition has the form f1 ∧ … ∧ fn or ¬f1 ∧ … ∧ ¬fn (n > 0), a conjunction of facts (or negated facts). A allows B to release information ϕ to C under both the inner condition and the outer condition. Following [5], we also stipulate that ϕ is true and releasable at A. In addition, if the inner-cond is empty, we write "none".
Blacklist. The blacklist maintains and manages information about entities whose credibility, as evaluated by a reputation evaluation institution, is lower than 0.5 (we assume the range of credibility is [0, 1]; entities can set the threshold themselves, and in this paper we use 0.5). It plays an important role during negotiation. The working principle of the blacklist: when two strangers A and B begin to negotiate (A starting first), B searches the blacklist for an entry matching A, with two possible results: the negotiation terminates if there is a match, and continues if there is not (i.e., A is temporarily trusted).
Suppose that A then sends malicious information, or a large amount of duplicated information (to consume B's resources), to B. B immediately stops and reports to the REC (Reputation Evaluation Centre), which evaluates A and feeds the resulting credibility (lower than 0.5) back to B. Meanwhile, B updates the blacklist by adding A, so that B can prevent further attacks from A. For reasons of space, we do not detail the working principle of the REC.
The remaining parts of the KB are as follows. Facts/rules set: it retains the facts and rules owned by an entity; the concepts of fact and rule, referring to standard Datalog, are the same as in [5], and to improve the expressiveness and richness of DEN we add negative expression to rules. Messages send/receive sets: they retain the facts and rules sent out and those received, respectively.
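As a minimal illustration (not part of the DEN specification), the "disclose" check described above can be modeled by treating each condition as a set of forbidden recipients; the function name and this representation are our own simplification:

```python
def may_release(recipient, inner_forbidden, outer_forbidden,
                inner_allowed=frozenset()):
    """Toy model of the 'disclose' release check.

    Inner conditions (set by the entity the information concerns) bind
    absolutely; outer conditions (added as the information passes through
    other entities) apply unless the inner policy explicitly allows the
    recipient -- the subject of the information has the final say.
    """
    if recipient in inner_forbidden:         # e.g. C's inner condition Y != Nancy
        return False
    if recipient in outer_forbidden and recipient not in inner_allowed:
        return False                         # e.g. A's outer condition Y != David
    return True
```

For instance, an inner prohibition against Nancy blocks release to her regardless of outer conditions, while an explicit inner allowance overrides an outer prohibition, mirroring the "bigger power" of the information's subject.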
3 The Application of DEN

We continue to use several predicates defined in PeerAccess, such as signs and lsigns; the usage of these predicates is given in [5]. To improve readability and understandability, we analyze the process of trust negotiation by applying DEN.
Assume Alice is a manager of the library Lib, who can lend books to the members of Lib. Bob is a member of Lib, and he wants to borrow book1 from Alice. Both membership and management credentials are authorized by the curator or by the curator's client (David). When a member wants to borrow a book but has lost or forgotten to bring his credential, the member should show the manager of Lib an identity certificate signed by the curator or the client. To cover this case, all certificates related to members are signed by the curator or the client in advance. The process of borrowing a book is then as follows:
Step 1. Bob finds a book he needs (book1) and wants to borrow it; Alice asks him whether he is a member of Lib:
Bob lsigns disclose (? Alice signs lend (book1, Bob), Bob, Alice),
Alice lsigns disclose (? curator signs member(Y, Lib) → David signs member(Y, Lib), Alice, Bob, none).
Step 2. Bob doubts the identity of Alice and wants to see proof of it:
Bob lsigns disclose (? curator signs manager (Z, Lib) → David signs manager (Z, Lib), Bob, Alice, none).
Step 3. Alice also doubts the intention of Bob, so she searches for Bob in her blacklist; finding that Bob is temporarily trusted, she confidently shows him her credential:
Alice lsigns disclose (curator signs manager (Alice, Lib), Alice, Bob, curator signs member(Y, Lib) → David signs member(Y, Lib)).
Step 4. Bob is now certain that Alice is a manager of Lib, and he in turn shows his credential to Alice.
There are two cases:
1. Bob shows Alice his credential immediately:
Bob lsigns disclose (David signs member (Bob, Lib), Bob, Alice, curator signs manager (Z, Lib) → David signs manager (Z, Lib)),
then go to Step 7.
2. Bob has forgotten to bring his membership credential and has no time to fetch it. He wants Alice to certify his identity, but Alice has no power to do so, and she tells Bob that he cannot borrow the book unless he has a certificate signed by the curator or the client:
Bob lsigns disclose (Bob signs find( David signs member (Bob, Lib), Bob, Alice), Bob, Alice, none),
Alice lsigns disclose(Alice signs find (curator signs certificate (Y, Lib) → David signs certificate (Y, Lib), Bob, curator), Alice, Bob, none) or
Alice lsigns disclose(Alice signs find (curator signs certificate (Y, Lib) → David signs certificate (Y, Lib), Bob, David), Alice, Bob, none).
Step 5. Because Bob has no way to contact the curator or the client, Alice asks the curator for help on his behalf:
Alice lsigns disclose (Alice signs find (curator signs certificate (Bob, Lib), Alice, curator), Alice, curator, none).
There are again two cases:
1. The curator is busy, so Alice turns to the client (David) for help:
curator lsigns feedback (busy, curator, Alice),
Alice lsigns disclose (Alice signs find (David signs certificate(Bob, Lib), Alice, David), Alice, David, none),
then go to Step 6.
2. The curator grants the request:
curator lsigns disclose (curator signs certificate (Bob, Lib), curator, Alice, none),
then go to Step 7.
Step 6. When Alice asks David for help, there are two cases:
a. David is resting and hopes not to be interrupted:
David lsigns feedback (rest, David, Alice);
Bob cannot prove his identity, and he fails to borrow book1.
b. David is willing to help Alice:
David lsigns disclose (David signs certificate (Bob, Lib), David, Alice, none).
Step 7. Bob obtains his identity certificate, and Alice lends book1 to him:
Alice lsigns lend (book1, Alice, Bob).
The research focus here is the release policy of DEN, so the example above does not reflect negative expression or the function of the blacklist; interested readers can write such cases in DEN themselves.
4 Conclusion

In this paper, building on prior work, we propose the DEN language for the distributed computational grid. It supports distributed authorization and proof through "lsigns", supports negative expression, and protects the sensitive information of entities through the newly defined "disclose" predicate and the blacklist added to the KB. In addition, we propose the feedback policy to improve the efficiency of negotiation. We introduce the KB structure and semantics of
DEN, and then analyze its usage through an example, which shows that DEN achieves the stated goals. Limitations of space prevent the example from showing that DEN supports parameterized roles and linked roles; users can exercise these themselves. Meanwhile, we have not detailed the exposure policy or the blacklist, and we have not proved how the feedback policy improves efficiency. We will continue to study these issues.
References
1. Winsborough, W.H., Seamons, K.E., Jones, V.E.: Automated trust negotiation. In: DARPA Information Survivability Conf. and Exposition, pp. 88–102. IEEE Press, New York (2000)
2. Bauer, L., Garriss, S., Reiter, M.K.: Distributed proving in access-control systems. In: Paxson, V., Waidner, M. (eds.) Proc. of the IEEE Symp. on Security and Privacy, pp. 81–95. IEEE Press, Washington (2005)
3. Li, N.H., Winsborough, W.H., Mitchell, J.C.: Distributed credential chain discovery in trust management. In: Herbert, A.S. (ed.) Proc. of the 8th ACM Conf. on Computer and Communications Security, pp. 156–165. ACM Press, New York (2001)
4. Li, J., Li, N., Winsborough, W.H.: Automated trust negotiation using cryptographic credentials. In: Atluri, V., Meadows, C., Juels, A. (eds.) Proc. of the ACM Conf. on Computer and Communications Security, pp. 46–57. ACM Press, New York (2005)
5. Winslett, M., Zhang, C., Bonatti, P.A.: PeerAccess: A logic for distributed authorization. In: Atluri, V., Meadows, C., Juels, A. (eds.) Proc. of the ACM Conf. on Computer and Communications Security, pp. 168–179. ACM Press, New York (2005)
Study and Application of the Smart Car Control Algorithm Zhanglong Nie Changzhou College of Information Technology, Changzhou, Jiangsu, China [email protected]
Abstract. The speed and direction control of our self-designed intelligent car is the core of the intelligent car control system. In order to run fast, stably and reliably on different tracks, the system collects discrete path information with an array of photoelectric sensors, and a digital PID algorithm and an indirect PID algorithm are applied to the drive motor and the steering gear. The system thus avoids step changes in the car's direction and speed, eliminates over-control and oscillation, and approximately achieves continuous control. The study shows that the control algorithm has a good self-tracing effect on black-and-white (or large color-difference) tracks. Keywords: Smart Car, Tracing Algorithm, Photoelectric Sensor, PID Algorithm.
1 Introduction

The Freescale smart car competition is a national science and technology competition organized jointly by the Freescale company and Tsinghua University. The organizing committee provides a standard car model, a DC motor and rechargeable batteries; each team must design a smart car that independently identifies the specified track, and the winner is the car that completes the whole track fastest with the better technical report. The teams need to learn and use the CodeWarrior IDE and online development methods, and design the track-identification scheme, the motor drive circuit, the speed sensing circuit, the steering servo drive model and the software for the MC68S912DG128 microcontroller. The expertise involved covers the fields of control, pattern recognition, sensor technology, electronics, computing, mechanics and so on. The students are trained in practical hands-on ability and knowledge integration, so the competition not only helps students improve their capacity for independent innovation, but also promotes the academic level of the related subjects.
2 The Key Control Algorithm Design

The smart car control system is a typical closed-loop system, which mainly controls the front-wheel direction and the rear-wheel speed. Therefore, the MCU needs to receive the signal from the track-identification circuit and the speed sensor signal, and M. Zhu (Ed.): ICCIC 2011, Part VI, CCIS 236, pp. 138–147, 2011. © Springer-Verlag Berlin Heidelberg 2011
trace using a path-search algorithm, and then control the steering servo and the DC drive motor. The control strategy of the smart car can be divided into speed priority and stability priority: the control target of the former is to be as fast as possible, while the latter focuses on the stability of the car. In short, different design goals lead to different control methods, and control designs vary widely [1].
The tracing algorithm is a key part of the smart car design, and the main design work is carried out around it. The tracing method: 7 sensors arranged in a linear array with 2 cm spacing are used to identify the track; the reflectance reading of a sensor over white is taken as the maximum and over black as the minimum, four index ranges (0, 1, 2, 3) are divided between the maximum and minimum values, and the location of the smart car is basically determined by the combination of the sensors' indices.
(1) The speed control algorithm design
1) The speed collection design
The key to closed-loop control is obtaining the current value of the controlled quantity, so a way is needed to measure the actual speed of the rear wheels. Two options were considered: an optical encoder and a Hall sensor. High precision is the advantage of the optical encoder, but its installation is inconvenient and demanding on the environment. The advantages of the Hall sensor are simple installation and good test precision; the disadvantage is a delay between the measured speed and the actual speed. This design uses the latter. The circumference of the rear wheel is 17 cm, and six magnets are fixed to it, so it produces six pulses per revolution. An input-capture interrupt is thus generated about every 2.83 cm of travel, and the speed of the car is calculated from the time interval between two rising edges [2].
However, the implementation of capturing differs from the principle described above. Because the bus frequency is 32 MHz and the timer prescale factor is up to 128, the maximum timer overflow time is:

t = 65535 / ((32 × 10^6) / 128) ≈ 262 ms     (1)
The input-capture intervals fall into three cases. First, the two input captures occur in the same timer period, i.e., no overflow interrupt happens between them; the tick count is then simply the difference between the second capture value and the first. Second, one timer overflow interrupt occurs between the two captures, and the second value is less than the first; the tick count is obtained by adding the difference between the two values to the remainder of the previous cycle. Third, more than one timer overflow interrupt occurs between the two captures; this situation is harder because confusion is possible. In practice, a global variable is incremented by 1 whenever the overflow interrupt occurs, so the number of overflows between the two input captures is known and the actual speed can be recovered. The speed calculation formula is:

s = 2.83 / ((262 / 65535) × T) = 2.83 / (0.004 × T) = 707.5 / T     (2)
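The overflow-aware tick count and the speed formula above can be sketched as follows, assuming a 16-bit counter, a 32 MHz bus and a prescale factor of 128 (so one tick is 4 µs); the names are ours, not the original firmware's:

```python
TICK_S = 128 / 32e6          # one timer tick: 4 microseconds
ARC_CM = 2.83                # wheel travel per capture edge (17 cm / 6 magnets)

def capture_ticks(first, second, overflows, counter_bits=16):
    """Ticks between two input-capture timestamps, adding one full
    counter period for each timer-overflow interrupt seen in between."""
    return second - first + overflows * (1 << counter_bits)

def speed_m_per_s(ticks):
    """Rear-wheel speed from the tick count between two rising edges."""
    return (ARC_CM / 100.0) / (ticks * TICK_S)
```

With a full counter period (65535 ticks, about 262 ms) between edges, this gives roughly the 0.108 m/s floor mentioned in the text.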
Here T is the number of timer ticks between the two input captures. In practice we can calculate: 2.83 cm / 262 ms ≈ 0.108 m/s. Obviously, if the interval is greater
than 262 ms, the speed is less than 0.108 m/s. Such a rate is so low that it is meaningless in practice, so only the case "count == 1" is handled in this paper; when the count is greater than 1, the speed is below 0.108 m/s and is treated as about 0, with the speed value assigned 0xFFFF. Some precomputed values are placed in a speed lookup table, and the current speed range is determined after a rough comparison with the T value.
Measuring the actual speed is only the first step; controlling the speed matters more, keeping the actual speed within small fluctuations around the target speed. The PID algorithm is used for speed control, and its design is described below.
2) PID algorithm design
In a PID controller, P is the proportional factor, I the integral factor and D the derivative factor; the control quantity is formed as a linear combination of the three. Because the PID controller is simple, its parameters are easy to tune and it needs no exact mathematical model, it is applied in all areas of industry. It first appeared in analog control systems, where the function of the traditional analog PID controller was realized in hardware. With the emergence of the computer it moved into computer control systems, with software taking over the original function; it is then called a digital PID controller, and the resulting set of algorithms a PID algorithm [3].
In analog control systems, the usual control law is PID control. To illustrate how the controller works, consider the example shown in Figure 1.
Fig. 1. Small power DC motor speed control system
Here n0(t) is the given speed and n(t) the actual speed; their difference is e(t) = n0(t) − n(t). After adjustment by the PID controller, e(t) becomes the voltage control signal u(t), which, amplified by the power stage, drives the DC motor to change the car's speed.
Fig. 2. Analog PID control system schematic (r(t) → e(t) → proportional/integral/derivative → u(t) → controlled object → y(t))
The principle of the general analog PID control system is shown in Figure 2. The system consists of the analog PID controller and the controlled object. In the figure, r(t) is the given value, y(t) is the actual output of the system, and e(t) is their difference:

e(t) = r(t) − y(t)     (3)
e(t) is the input of the PID controller and u(t) is its output, which is also the input of the controlled object. The control law of the analog PID controller is therefore:

u(t) = Kp [ e(t) + (1/TI) ∫₀ᵗ e(t)dt + TD · de(t)/dt ] + u0     (4)
Here Kp is the proportional constant, TI the integral constant, TD the differential constant and u0 a control constant. Computer control is sampled control: it can only calculate the control quantity from the deviation values collected at the sampling instants, and cannot output a continuously varying control quantity as analog control does. Because of this, the integral and derivative terms in formula (4) cannot be used directly and must be discretized. The discretization method is: take T as the sampling period and k as the sampling number, so the discrete sampling instants are t = kT; replace the integral by a sum and the differential by an increment, which yields the following formula:
u_k = Kp [ e_k + (T/TI) ∑_{j=0}^{k} e_j + (TD/T) (e_k − e_{k−1}) ] + u0     (5)
Here k is the sampling number, k = 0, 1, 2, …; u_k is the computer's output at the k-th sampling; e_k is the input deviation at the k-th sampling; and e_{k−1} is the input deviation at the (k−1)-th sampling. From formula (5), the incremental digital PID algorithm is:
u_k = u_{k−1} + Δu_k     (6)
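For completeness, the increment Δu_k in (6) follows from subtracting formula (5) written for k−1 from the one written for k:

```latex
\Delta u_k = u_k - u_{k-1}
  = K_p\left[(e_k - e_{k-1}) + \frac{T}{T_I}\,e_k
    + \frac{T_D}{T}\,(e_k - 2e_{k-1} + e_{k-2})\right]
```

Only the last three deviations are needed, which is why the incremental form suits an MCU with limited memory.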
The system uses the incremental digital PID algorithm to achieve speed control [4]: according to the collected speed and the target speed, the system adjusts the output so that the speed continuously approaches the target. The speed control procedure of this design uses the following formula:

get = ((es1 − es0) + (int)(es1 − aim_speed)) * K     (7)

Here es1 is the current speed, es0 the previous speed, aim_speed the target speed, and K a constant coefficient. The speed control output is divided into 60 levels: the first 30 are forward gears and the last 30 reverse gears, with a higher gear meaning a faster speed. Actual tests show that this method controls the speed well; the measured speed stays around the target value, and the fluctuation range can generally be kept within 500 ticks.
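An illustrative incremental PID loop in this spirit (a sketch only; the class name is ours, and the gains, sampling period and any mapping to the 60 gear levels would have to be tuned on the real car):

```python
class IncrementalPID:
    """Incremental digital PID: u_k = u_{k-1} + delta_u_k, as in formula (6)."""

    def __init__(self, kp, ti, td, ts):
        self.kp, self.ti, self.td, self.ts = kp, ti, td, ts
        self.e1 = self.e2 = 0.0   # previous deviations e_{k-1}, e_{k-2}
        self.u = 0.0              # last output u_{k-1}

    def update(self, error):
        # increment: formula (5) at step k minus formula (5) at step k-1
        du = self.kp * ((error - self.e1)
                        + (self.ts / self.ti) * error
                        + (self.td / self.ts) * (error - 2 * self.e1 + self.e2))
        self.e2, self.e1 = self.e1, error
        self.u += du
        return self.u
```

Each call takes the current deviation (target speed minus measured speed) and returns the accumulated control quantity, which would then be quantized into a gear level.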
(2) The tracing algorithm design
1) Tracing algorithm strategy and control process
The track is marked by a black ribbon on a white background, so the car can automatically follow the black ribbon forward provided the black-and-white gray-level information is identified effectively. To distinguish black from white well, the system uses grayscale sensors that are sensitive to this difference. The output voltage of a grayscale sensor varies with the gray level, and the A/D conversion module of the MCU converts it into a digital value. Two points deserve attention when using the A/D module: first, the A/D sampling time should comply with the real-time requirements of the system; second, the sampling accuracy for black and white should be high enough that the system can provide accurate data to the software. Using a particular algorithm, the software can determine whether the car is to the left of the track, to the right of it, or off the track; the process control algorithm is given here. The black/white readings of the 7 grayscale sensors at the front of the car are converted into digital values by the 10-bit, 7-channel A/D module; from these values the control software judges which sensor the black ribbon lies under, and the system decides whether to speed up or slow down and whether to turn left or right. The corresponding control flow is shown in Figure 3. The idea of the control algorithm: first it determines whether the middle sensor is on the black ribbon; if so, the car goes forward fast; otherwise it judges whether the car is off to the left, off to the right, or has missed the track, and processes further. If the car is off to the left, the process is shown in Figure 4.
First, it determines which sensor among the left three is on the black ribbon. If the
Fig. 3. Control algorithm flow diagram
Fig. 4. Left side process flow diagram
left 1 sensor is on it, the car turns left 8 degrees and moves forward at gear 15; if the left 2 sensor is on it, the car turns left 25 degrees and moves at gear 12; if the left 3 sensor is on it, the car turns left 40 degrees and moves forward at gear 10, which completes the left-side case. If the car is off to the right, the processing is quite similar to the left side. If all sensors currently read white, the process is shown in Figure 5. When all sensors read white, the car has run completely off the black ribbon, so the next action is determined by the previous state: if the previous state was left, the car turns left 42 degrees and moves at gear 10; if it was right, the car turns right 42 degrees and moves at gear 10.
2) Improved tracing algorithm strategy
The design of an intelligent model car meets difficulties in path detection and tracing strategy design; based on the car's production process and its own characteristics, the following research aims at improving speed and increasing stability. The 10 infrared sensors installed at the front of the car form a linear photoelectric array with 20 mm spacing; they detect the track information vertically, yielding gray values of the track in the range 0-1023, where a greater value means more light. In actual testing, because the infrared sensors are hand-soldered, the sampled values of the same black or white differ from sensor to sensor. This is shown in Figure 6; the values there are obtained by a median filter followed by a 5-point average filter.
In the figure, pure white, pure black and the intermediate color are marked by the corresponding red, yellow and blue lines. It is not difficult to see that the sampling values from different sensors differ for the same black or white surface; however, for the same sensor, the sampling values for black and white differ markedly, and the gray sampling values of different sensors are correlated.
Fig. 5. Missed track flow diagram
Z. Nie
Fig. 6. Data comparison 1 of the 10 sensors (pure white, intermediate color, pure black)
Fig. 7. Data comparison 2 of the 10 sensors (pure white−intermediate and intermediate−pure black differences)
The difference between the pure white and the intermediate color sampling values, and the difference between the intermediate color and the pure black sampling values, can both be marked on one picture, shown in Figure 7: the red line is the difference between pure white and the intermediate color, and the yellow line is the difference between the intermediate color and pure black. It is clear that the two differences are linear. Therefore, we can map the 10 sensor sampling values into the same vector space through a linear mapping, with mapped values ranging from 0 to 1000. Obviously, the 10 sensor readings become comparable after mapping, and many mathematical methods then become available to find the deviation between the center of the sensor line array and the track. Here, the system first selects the 5 sensors nearest the black ribbon by a screening method, then performs a quadratic curve fit and determines the offset of the car from the abscissa of the minimum point. Before fitting, the normalized real-time sampling data needs brightness compensation: the brightness deviation is mainly due to uneven ambient light, for example a strong side light. To simplify the processing, we assume this effect is linear, so the system applies a linear brightness compensation before screening and fitting the data.
The following is the derivation of the mathematical formulas for obtaining a reliable deviation [5]. The real-time sampling data of all the sensors are x1, x2, x3, x4, x5, x6, x7, x8, x9, x10. A parameter calibration process is needed before tracing; these parameters are obtained through statistics and calculations on large amounts of data. The cluster analysis method can be used here: clustering a large number of sample data by hand, we found that the two largest clusters are the sensor sampling values of pure white and pure black. Therefore, we can extract the field pure-white and pure-black reference values by real-time detection before running. The two reference values are h_i and l_i, where i is the sensor number. From the above, a linear mapping can be used for data standardization into the range 0-1000, that is:
(x'_i − 0) / (1000 − 0) = (x_i − l_i) / (h_i − l_i)

that is, after mapping the data is x'_i = 1000 × (x_i − l_i) / (h_i − l_i). First set x'_i = y_i;
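As a concrete illustration of this per-sensor mapping, the sketch below (function and reference names are ours, not the paper's) normalizes each raw reading with that sensor's own black/white references:

```python
# Illustrative per-sensor linear calibration: map each raw reading x_i (0-1023)
# into the common 0-1000 space using that sensor's pure-black reference l_i and
# pure-white reference h_i (reference values below are made up).

def normalize(raw, lo, hi):
    vals = []
    for x, l, h in zip(raw, lo, hi):
        y = 1000.0 * (x - l) / (h - l)
        vals.append(min(1000.0, max(0.0, y)))  # clamp readings outside the calibrated range
    return vals

lo = [120, 130, 110]   # per-sensor pure-black samples (hypothetical)
hi = [900, 950, 870]   # per-sensor pure-white samples (hypothetical)
norm = normalize([510, 540, 490], lo, hi)   # each maps to 500.0
```

After this step, readings from differently soldered sensors become directly comparable.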
the linearly standardized data still needs brightness compensation. Let the linear brightness deviation function be y* = ax + b. Using the least-squares principle, the variance sum is:

M = Σ_{i=1}^{10} (y*_i − y_i)² = Σ_{i=1}^{10} (a x_i + b − y_i)²

Setting the derivatives with respect to a and b to zero:

∂M/∂a = 2 Σ_{i=1}^{10} (a x_i + b − y_i) x_i = 0
∂M/∂b = 2 Σ_{i=1}^{10} (a x_i + b − y_i) = 0

and simplifying (N = 10):

a Σ_{i=1}^{10} x_i² + b Σ_{i=1}^{10} x_i = Σ_{i=1}^{10} x_i y_i
a Σ_{i=1}^{10} x_i + N b = Σ_{i=1}^{10} y_i
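A minimal sketch of this least-squares step (names are illustrative; the paper's own implementation is not shown): solve the two normal equations for a and b, then subtract the linear trend a·x_i from each reading.

```python
# Fit y = a*x + b to the 10 normalized gray values by least squares, then
# remove the linear brightness trend a*x_i from each sensor reading.

def brightness_compensate(xs, ys):
    """xs: sensor positions, ys: normalized gray values (0-1000)."""
    n = len(xs)
    sx = sum(xs)
    sy = sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    # Solve the normal equations:
    #   a*sum(x^2) + b*sum(x) = sum(x*y)
    #   a*sum(x)   + b*n      = sum(y)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    # Only `a` matters here: subtract the linear brightness trend.
    return [y - a * x for x, y in zip(xs, ys)], a, b

xs = list(range(10))
# A flat track reading with an artificial left-to-right light gradient of 5 per sensor.
ys = [500 + 5 * x for x in xs]
corrected, a, b = brightness_compensate(xs, ys)   # corrected readings are all 500
```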
It is not difficult to solve this second-order linear system for a and b. Here we are interested only in the value of a, because it directly reflects the brightness deviation. Therefore each sensor must remove this error term: y'_i = y_i − a x_i, where y'_i is the gray value of each sensor after brightness compensation. Then we select the sensor k whose gray value is minimal and extend two sensors to its left and right, obtaining the five data points most closely related to the black guide line: z_i (i = k−2, k−1, k, k+1, k+2), which are then fitted with a parabola. The fitting function
is z* = a x² + b x + c, and the variance sum is:

N = Σ_{i=k−2}^{k+2} (z*_i − z_i)² = Σ_{i=k−2}^{k+2} (a x_i² + b x_i + c − z_i)²

Setting the derivatives with respect to a, b and c to zero:
∂N/∂a = 2 Σ_{i=k−2}^{k+2} (a x_i² + b x_i + c − z_i) x_i² = 0
∂N/∂b = 2 Σ_{i=k−2}^{k+2} (a x_i² + b x_i + c − z_i) x_i = 0
∂N/∂c = 2 Σ_{i=k−2}^{k+2} (a x_i² + b x_i + c − z_i) = 0

Simplifying (all sums over i = k−2, …, k+2, with N = 5 points):

a Σ x_i⁴ + b Σ x_i³ + c Σ x_i² = Σ x_i² z_i
a Σ x_i³ + b Σ x_i² + c Σ x_i = Σ x_i z_i
a Σ x_i² + b Σ x_i + c N = Σ z_i
Thus the abscissa of the minimum point of the curve is x_p = −b / (2a).
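The screening-and-fitting step can be sketched as follows (a hypothetical illustration; `numpy.polyfit` solves the same normal equations derived above):

```python
import numpy as np

# Fit z = a*x^2 + b*x + c to the five points around the darkest sensor and
# take the abscissa of the parabola's minimum, x_p = -b / (2a), as the offset.

def track_offset(xs, zs):
    a, b, c = np.polyfit(xs, zs, 2)   # least-squares parabola fit
    return -b / (2.0 * a)             # abscissa of the minimum point

# Five sensors centered at position 4; the true minimum lies between sensors,
# at x = 4.5 (sample values are made up).
xs = [2, 3, 4, 5, 6]
zs = [(x - 4.5) ** 2 * 100 + 50 for x in xs]
offset = track_offset(xs, zs)
```

Because the vertex need not coincide with a sensor position, this gives sub-sensor resolution of the deviation.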
Setting ΔL = x_p, ΔL is the deviation between the center of the sensor line array and the track. From Figure 4, ΔL is the error signal e(t) of the PID controller and the basis of the car's PID tracing; the weights of the PID are then determined through experiments to find parameters under which the car runs well.
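A minimal positional PID sketch with ΔL as the error input; the gains below are illustrative placeholders, not the paper's experimentally tuned weights:

```python
# Positional PID controller: output = kp*e + ki*∫e dt + kd*de/dt,
# with e(t) = ΔL from the parabola fit. Gains are made-up placeholders.

class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_err = 0.0

    def update(self, err, dt=1.0):
        self.integral += err * dt
        deriv = (err - self.prev_err) / dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

pid = PID(kp=0.8, ki=0.05, kd=0.2)
steer = pid.update(2.0)   # first sample with ΔL = 2.0
```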
3 Summary The speed control and tracing algorithm is an effective method of smart car control, the algorithm not only heightens the dynamic performance and reaction speed of the smart car system, but also improves the system adaptability and robustness, and the smart car could run with a faster speed. During the experiment, the first version of smart car is slow in the S-curve trace, and easy to miss trace,these mainly due to the front short distance perceived by the 7 sensors.and affecting the prediction of smart car. In order to predict the situation in front of the track, the second version of smart car install the 10 tilt sensors. In addition, the environment light largely impact on the smart car running, so improving the anti-jamming ablity by desgning the filter circuit and optimizing the control algorithm. The experiment shows that this system can achieve a good tracing effect.
References

1. Kaisheng, H., et al.: Analysis of the Technology Program of the South Korean Smartcar Model. Electronic Engineering & Product World (March 2003)
2. Wang, Y., Liu, X.: Embedded Application Technology Tutorial. Tsinghua University Press, Beijing (2005)
3. Yang, X.-l.: Application of PID Arithmetic in Aptitude Vehicle. Experiment Science and Technology (August 2010)
4. Jia, X.: Feedforward-Improved PID Algorithm Applied in the Smartcar. Computer & Information Technology (December 2008)
5. Cai, G., Hong, N.: A Study of Navigation Based on the SmartCar. Electronic Science and Technology (6) (2009)
A Basis Space for Assignment Problem

Shen Maoxing1, Li Jun1, and Xue Xifeng2

1 Department of Management Science, Xijing University, Xi'an Shaanxi, P.R. China, 710123
2 Department of Mathematics, Northwest University, Xi'an Shaanxi, P.R. China, 710069
[email protected]
Abstract. An algebraic structure of capability probability vectors is presented through analysis of the model of the assignment problem, and the concept of a basis space of the mission assignment problem is introduced. Furthermore, some properties of this space are obtained, pointing out a new research direction for the mission assignment problem. Keywords: Mission assignment, Capability probability vector, Space structure, Mission unit, Task.
1 Introduction
The mission assignment problem is an important problem in operations research, widely applied in various management and control fields such as the design of intelligent transportation systems, intelligent computing and communication systems, and industrial control systems. Facing the modern informationization situation, with high technology and command-and-control automation developing rapidly, research on mission assignment is increasingly necessary and inevitable. The actual mission assignment problem becomes more complex and diversified due to various differences in deployment and tactics. Therefore, we aim to establish a structural description of mission assignment for the case of independent work, without regard to the optimization of task choice. Based on this thinking and framework, further deeper research will evolve together with other work on the scheme space of mission assignment. This work can be thought of as an attempt in the field of basic theory.
2 Problem Description
Usually, the mission assignment problem is considered as follows: there are m groups (mission units) deployed in a certain area, and they are asked to handle a task stream containing n tasks that comes into this defense area. Generally speaking, the purpose of mission assignment is to assign the m working units to the n tasks so as to finish them and obtain certain effects or purposes. We denote the m groups as U_1, …, U_m, i.e., U_i (i = 1, 2, …, m), and the n tasks as T_1, …, T_n, i.e., T_j (j = 1, 2, …, n).

M. Zhu (Ed.): ICCIC 2011, Part VI, CCIS 236, pp. 148–151, 2011. © Springer-Verlag Berlin Heidelberg 2011
Suppose we know the capability probability p_ij of U_i to T_j (i = 1, 2, …, m; j = 1, 2, …, n), and note these as vectors:

p_1 = (p_11, p_12, …, p_1n) ≙ (p_1j), …, p_m = (p_m1, p_m2, …, p_mn) ≙ (p_mj)   (1)

We call these capability probability vectors, and note the capability probability matrix as P = (p_ij)_{m×n}. Generally, people hope to accomplish as many of the tasks as possible through one round of mission assignment and work. The problem is how to construct an optimal mission assignment scheme; that is, given the m mission units and the n tasks, to determine which mission unit does which task so as to maximize the mathematical expectation of accomplished tasks. We can construct an LP (linear programming) model for this problem as follows:

max Q = Σ_{i=1}^{m} Σ_{j=1}^{n} x_ij p_ij
s.t. x_ij = 0 if U_i does not work on T_j; x_ij = 1 if U_i works on T_j;
0 ≤ p_ij ≤ 1, i = 1, 2, …, m; j = 1, 2, …, n   (2)
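A small illustration of maximizing Q (our sketch, not the paper's; it assumes each unit takes exactly one task and m = n, and enumerates all one-to-one assignments by brute force):

```python
from itertools import permutations

# Brute-force maximization of Q = sum of p[i][perm[i]] over all one-to-one
# assignments of units to tasks. The matrix P below is made up.

def best_assignment(P):
    n = len(P)
    best_q, best_perm = -1.0, None
    for perm in permutations(range(n)):     # perm[i] = task given to unit i
        q = sum(P[i][perm[i]] for i in range(n))
        if q > best_q:
            best_q, best_perm = q, perm
    return best_q, best_perm

P = [[0.9, 0.2, 0.1],
     [0.3, 0.8, 0.4],
     [0.5, 0.6, 0.7]]
q, perm = best_assignment(P)    # optimal here: unit i -> task i, Q = 2.4
```

Brute force is only feasible for tiny n; for larger instances the Hungarian algorithm (e.g. `scipy.optimize.linear_sum_assignment` with `maximize=True`) solves the same problem in polynomial time.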
If we append the condition that each mission unit takes only one task and each task is done by only one mission unit, this means that

Σ_{j=1}^{n} x_ij = 1,  Σ_{i=1}^{m} x_ij = 1  (i = 1, 2, …, m; j = 1, 2, …, n)   (3)

Hence, this model becomes an extended assignment problem model. Here we focus on adopting an algebraic structure to describe the mission assignment rather than on the solving method.

2.2 Space Structure Constructing
We note a number set Λ = (0, +∞), λ ∈ Λ, and a vector set PP = [0, 1]^n, p ∈ PP, where [0, 1]^n is [0, 1] × [0, 1] × ⋯ × [0, 1]. Hence p_i ∈ PP, i = 1, 2, …, m. Define two operations:

P_r ⊕ P_s = (p_rj + p_sj − p_rj p_sj)_{1×n},   λ ∘ P_s = (p_sj^{1/λ})_{1×n}   (4)
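A componentwise sketch of the two operations (assuming, as used in the proofs of Section 2.3, that the λ-operation raises each component to the power 1/λ; all vectors below are made up):

```python
# Sketch of the two operations in (4) on capability probability vectors.

def oplus(pr, ps):
    # p_rj + p_sj - p_rj*p_sj: probability that at least one of the two
    # units accomplishes task j
    return [a + b - a * b for a, b in zip(pr, ps)]

def scale(lam, ps):
    # λ ∘ p_s = (p_sj ** (1/λ)): λ > 1 intensifies, 0 < λ < 1 weakens
    return [p ** (1.0 / lam) for p in ps]

pr = [0.5, 0.9]
ps = [0.5, 0.1]
combined = oplus(pr, ps)             # [0.75, 0.91]
boosted = scale(4.0, [0.0625, 0.81]) # first component becomes 0.5
```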
such that, for λ ∈ Λ and p_r, p_s ∈ PP, we have P_r ⊕ P_s ∈ PP and λ ∘ P_s ∈ PP. This means the two operations defined above are closed in PP. We call the four-element entity (PP, Λ; ⊕, ∘) a basis space of the mission assignment problem. It is necessary to state that (PP, Λ; ⊕, ∘) is not a vector space in the sense of pure mathematics, but it is at least an algebraic structure; we may as well call it a basis space of the (mission) assignment problem. The meaning of the operations can be explained as follows: P_r ⊕ P_s is the accomplishment vector when the mission units U_r and U_s both work and each task is accomplished by at least one of them; λ ∘ P_s is the ebb-and-flow vector of the capability probability vector of mission unit U_s, and we call λ the intensifying coefficient of the accomplishment power (or work effectiveness).

2.3 Some Simple Properties
Some simple properties can be obtained easily for the two operations defined on the basis space of the mission assignment problem.

(1) P_r ⊕ P_s ≥ P_r and P_r ⊕ P_s ≥ P_s. This hints at "a good cooperative result of missions";
(2) λ ∘ P_s ≥ P_s for λ ≥ 1, and λ ∘ P_s < P_s for 0 < λ < 1. This hints at "a good potential-exploring result of a mission";
(3) The smaller λ is, the smaller λ ∘ P_s is, and lim_{λ→0+} λ ∘ p_s = (0)_{1×n};
(4) The bigger λ is, the bigger λ ∘ P_s is, and lim_{λ→+∞} λ ∘ p_s = (1)_{1×n}.

Proof. (1) Since p_rj + p_sj − p_rj p_sj = p_rj + p_sj(1 − p_rj) ≥ p_rj and p_rj + p_sj − p_rj p_sj = p_rj(1 − p_sj) + p_sj ≥ p_sj (j = 1, 2, …, n).
(2) Since p_sj^{1/λ} ≥ p_sj when 1/λ ≤ 1, i.e., λ ≥ 1, and p_sj^{1/λ} < p_sj when 1/λ > 1, i.e., 0 < λ < 1 (j = 1, 2, …, n), we have λ ∘ P_s ≥ P_s for λ ≥ 1 and λ ∘ P_s < P_s for 0 < λ < 1.
(3) Since 0 ≤ p_sj ≤ 1, the smaller λ is, the bigger 1/λ is, and the smaller p_sj^{1/λ} is; and since lim_{λ→0+} p_sj^{1/λ} = 0 (j = 1, 2, …, n), we get lim_{λ→0+} λ ∘ p_s = (0)_{1×n}.
(4) Since 0 ≤ p_sj ≤ 1, the bigger λ is, the smaller 1/λ is, and the bigger p_sj^{1/λ} is; and since lim_{λ→+∞} p_sj^{1/λ} = 1 (j = 1, 2, …, n), we get lim_{λ→+∞} λ ∘ p_s = (1)_{1×n}.
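The four properties can also be checked numerically; the sketch below (our illustration, with the λ-operation taken componentwise as p ** (1/λ)) asserts each on a sample pair of capability probability vectors:

```python
# Numeric spot-check of properties (1)-(4) on made-up vectors.

def oplus(pr, ps):
    return [a + b - a * b for a, b in zip(pr, ps)]

def scale(lam, ps):
    return [p ** (1.0 / lam) for p in ps]

pr, ps = [0.3, 0.6], [0.5, 0.2]
combined = oplus(pr, ps)

# (1) the combined vector dominates both operands
assert all(c >= a for c, a in zip(combined, pr))
assert all(c >= b for c, b in zip(combined, ps))
# (2) λ >= 1 intensifies, 0 < λ < 1 weakens
assert all(x >= p for x, p in zip(scale(2.0, ps), ps))
assert all(x < p for x, p in zip(scale(0.5, ps), ps))
# (3)/(4) limits: p**(1/λ) -> 0 as λ -> 0+, and -> 1 as λ -> +∞ (for 0 < p < 1)
assert all(x < 1e-6 for x in scale(1e-3, ps))
assert all(x > 1 - 1e-3 for x in scale(1e4, ps))
```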
3 Summary and a Tag
The research in this paper aims to "cast a brick to attract jade": it attempts to work out a systematic, structural and practical approach for research on the mission assignment problem. This work also benefits the application of modern optimization algorithms (such as GA, ANN, etc.) and results from modern applied mathematics. It is only a simple beginning on this basic matter; the related next step is to found a scheme space of mission assignment, which is still basic research. We hope to attract more researchers to pay attention to the further work and to offer criticism or improvements.
The Analysis on the Application of DSRC in the Vehicular Networks Yan Chen, Zhiyuan Zeng, and Xi Zhu School of Hydropower and Information Engineering, Huazhong University of Science and Technology, Wuhan, P.R. China [email protected]
Abstract. The vehicular network, relying on the intelligent transportation system, is the product of the modern vehicle entering the information era, and it has important applications in reducing traffic congestion and traffic accidents. As the key technology for realizing vehicle-road and vehicle-vehicle communication, Dedicated Short Range Communication (DSRC) is widely used in vehicular networks. This article analyzes the problems in the practical application of DSRC technology, proposes technical improvements, and confirms that DSRC has broad prospects in vehicular network applications. Finally, it analyzes the prospects of vehicular networks and the matters that deserve attention in their development. Keywords: Vehicular networks, Intelligent transportation system, Dedicated Short Range Communication, Multi-lane free flow electronic toll collection system.
1 Introduction
At World Expo 2010 Shanghai, China, people foresaw a highly effective, properly ordered vehicular network based on the Intelligent Transportation System (ITS) from the science fiction movie "2030": vehicles running on the road just like fish swimming in the deep sea, which is called the "fish-school effect" [1]. Through this effect, vehicles communicate with other vehicles freely and build multi-directional relationships with each other. Even if there is danger at the next turn or further ahead, the driver can realize it early. In this way traffic safety is improved and the probability of traffic accidents is greatly reduced. Through the interaction between vehicles, intelligent control, accident prevention and other functions are achieved. Vehicular networks (also known as VANETs) are the product of the modern vehicle entering the information era. They use the On Board Unit (OBU), Radio Frequency Identification (RFID) and other wireless technologies to realize the extraction and effective use of vehicles' attribute information, static information and dynamic information on an information network platform [2]. Meanwhile, all the

M. Zhu (Ed.): ICCIC 2011, Part VI, CCIS 236, pp. 152–156, 2011. © Springer-Verlag Berlin Heidelberg 2011
running status of vehicles is effectively supervised, and comprehensive services are provided according to different functional demands. Generally, it is divided into the following three layers [3]:
Service / Vehicles / ITS

Fig. 1. The three layers of VANETs
The bottom layer is the ITS: a real-time, accurate and efficient integrated traffic management and control system, built by organically applying advanced sensor technology, communication technology, digital processing technology, network technology, automatic control technology, information dissemination technology and so on throughout the traffic management system. All the infrastructure that the entire vehicular network needs is provided by the ITS. The intermediate layer is the vehicles with intelligent interconnection functions, which are the core of the whole vehicular network. The top layer is the vehicular network services, such as vehicle self-piloting, intelligent parking, emergency charging and so on. In order to realize VANETs, we need to complete the following three connections: first, establish the connection between people and vehicles through smart phones and other mobile equipment; second, build the connections between vehicle and vehicle, as well as between vehicles and roadside devices such as traffic lights, chargers, etc.; third, establish the connection between vehicles and the wireless network, which is the most crucial. This paper briefly introduces the Dedicated Short Range Communication (DSRC) technology widely used in VANET connections, then discusses the deficiencies of DSRC shown in practical applications and the corresponding technical innovations, hoping to enhance the practical application of VANETs.
2 Problems Exposed in the DSRC
Basic Analysis of the DSRC. DSRC is composed of three parts: the On Board Unit (OBU) carried in the vehicle, the Road Side Unit (RSU), and the protocol for dedicated short range communication. At present, DSRC is divided into three main camps, European, American and Japanese, whose cores are CEN TC278, ASTM/IEEE and ARIB T75 respectively; they represent the research and development directions of the key international ITS technologies.

Problems of the DSRC Exposed in Practical Application. The most typical and successful case of DSRC application in VANETs is the Electronic Toll Collection (ETC) system. In 2007, China developed its own DSRC protocol, GB/T 20851, and since then more than 20 cities have rolled out ETC applications using DSRC technology. However, some potential risks and problems are exposed in practical application, which we analyze below. In the OBU aspect, the OBU-ESAM security module is an embedded safety control module that uses a dedicated smart chip to achieve functions such as data encryption and decryption, bidirectional status authentication, access authority control, and data file storage. But in the practical ETC application its functions are rather limited and its extensibility is poor; moreover, each COS instruction takes comparatively long to execute. These defects show that the existing OBU-ESAM is unable to meet the application demands of ETC. In the RSU aspect, the RSU is composed of a high-gain, direction-detection-controlled read-write antenna and a radio frequency controller. Usually the RSU works in the on-line pattern, that is, under the control of the traffic lane controller, and all RSU-OBU interaction instructions must be transmitted to the lane controller. Moreover, instructions of different functions communicate through the same TCP port, so this working efficiency is quite low. In the multi-lane free flow system there are also peculiar circumstances in reality: vehicles may cross from one lane to another, or run side by side while one passes another. The RSU then needs to deal with the OBUs of many lanes at the same time, which requires very strong RSU capability, but at present RSU products are unable to satisfy the ETC system.
3 Measures and Solutions to Improve the Practical Application of the DSRC
Improvement of the OBU-ESAM Security Module. After fully considering the characteristics of the ETC system, we established an improved technical program for the OBU-ESAM. (1) We designed special COS instructions, such as READ DYNAMICINF, GET TAC, and SET KEYINDEX, to speed up card processing enormously and enhance operating performance.
(2) We redesigned the card file organization so that it supports not only the ETC application but also enclosed-path electronic toll collection, traffic control and other applications. (3) The new OBU-ESAM supports multi-key storage, which greatly strengthens its security.

Improvement of the RSU. In order to optimize the RSU's ability to handle simultaneous OBUs, we added a dedicated link establishment mechanism: a private window request and a private window allocation (PrWA). At the same time, the MAC control field adds an allocation time window mechanism and allocates the dedicated uplink window in the downlink frame sequence control.

Multiple-Track Unlimited Stream Control System. Unlike the traditional single-lane limited system, multiple-track unlimited ETC has many advantages: it needs neither interception facilities nor an artificial auxiliary lane. We have proposed a new multiple-track unlimited stream control system to enhance the application of the DSRC protocol in VANETs. It suits a multi-lane multiprocessing construction, has a high communication speed, can collect fees reliably and accurately, and guarantees the security, uniformity and integrity of the charge data. The overall structure of this control system is shown in Table 1.

Table 1. The structure of the multiple-track unlimited stream control system

Vehicle examination: coil examination; high-definition flow examination; traffic statistics
Communication with RSU: accept the primitive passing record; monitor the RSU working status; send the vehicle information table; set the RSU operational factors; clock synchronization
Image snapshot: license plate snapshot
Traffic lane device control: traffic lane vision device control; traffic lane monitoring device control
Communication with station-level system: receive vehicle information table; receive control command; receive RSU control command; receive synchronized clock; upload the electronic trade record; upload license plate image; upload running status of the traffic lane control system; upload running status of the RSU
The multiple-track unlimited stream control system has the following technical advantages and characteristics: (1) it can control multiple lanes, multi-platoon antennas and RSUs, and receive multi-RSU transaction records in real time; (2) it has a complete redundant transaction record processing mechanism; (3) it can monitor the condition of the multiple-track control devices in real time and upload it to the toll station system; (4) when the computers of the toll station fail or the network breaks down, the collection lane can work independently, with working parameters and data records stored locally. When a vehicular lane has worked for a long time, configuration parameters can be downloaded or charge data uploaded manually. Whether through automatic or manual transmission, the authenticity, reliability, integrity and uniformity of the transaction data are fully guaranteed.
4 The Development of the Vehicular Networks
VANETs, as a new application within ITS, have good technical support and broad market prospects, and are of great significance for alleviating transportation pressure, reducing traffic accidents and realizing intelligent guidance. On the technical side, domestic and foreign researchers have done some work on vehicular networks and made preliminary progress on DSRC and related technologies. On the application side, there are already many successful practical applications at home and abroad; the most typical is the ETC system, which has made very good progress and improved people's travel quality. At the same time, there are issues that deserve attention. Technically, we should continue research on information security in VANETs: all vehicle-related information is transmitted through the network, and if it is disturbed or intercepted, information leakage and property damage will result, affecting the application and promotion of the entire vehicular network. Furthermore, countries should further exchange ideas to formulate a unified VANETs standard structure, so that regional systems and equipment can be mutually compatible.
References

1. Ping, Y.: Welcome the Vehicular Networks Era. Traffic and the Transportation, 56–57 (June 2010)
2. Fallah, Y.P., Huang, C.-L., Sengupta, R., Krishnan, H.: Analysis of Information Dissemination in Vehicular Ad-Hoc Networks with Application to Cooperative Vehicle Safety Systems, vol. 60(1) (January 2010)
3. Yanxia, G.: The Design and Implementation of National DSRC Test Suits. East China Normal University (2009)
Disaggregate Logit Model of Public Transportation Share Ratio Prediction in Urban City

Dou Hui Li1 and Wang Guo Hua2

1 Zhejiang Institute of Communications, Zhejiang Hangzhou, China, 311112
2 Department of Traffic Engineering, Zhejiang Provincial Institute of Communications Planning, Design and Research, Zhejiang Hangzhou, China, 310006
[email protected]
Abstract. In order to analyze the distribution of urban passenger flow scientifically and correctly, a disaggregate logit model is presented to predict the public transportation share ratio in the city. The model is built through an analysis of the outer and inner factors that affect the choice of transportation mode and is based on random utility theory. Firstly, the factors with a major contribution to mode choice are selected according to the likelihood ratio statistic. Then the parameters are estimated and the model is constructed. Finally, using the proposed algorithm, a public transportation share ratio forecast test is carried out on field survey data. The results of an independent sample test indicate that the model has fine precision and stability. Keywords: Public transportation, Share ratio prediction, Disaggregate, Logit model.
1 Introduction

Public transportation share ratio is the percentage of trips made by public transit out of total trips, and is a major index for evaluating the progress of transportation and the rationality of the urban traffic structure [1]. Based on an analysis of the features of citizens' activities, predicting the public transportation ratio rationally, objectively and scientifically is one of the fundamental tasks of transportation planning [2]. It has important theoretical significance and practical application value for the government and industry departments in learning the service condition of public transportation, making public-transportation-priority policies, adjusting and optimizing the urban traffic structure, and providing macro-guidance for public transit development. Traffic mode share ratio prediction originates from the prediction of traffic mode split, whose methods can be divided into two categories [3]: the aggregate method based on statistics and the disaggregate method based on probability theory. The aggregate method takes the traffic zone as the research unit and statistically processes the survey data of individuals or families, such as averaging the survey data, calculating the

M. Zhu (Ed.): ICCIC 2011, Part VI, CCIS 236, pp. 157–165, 2011. © Springer-Verlag Berlin Heidelberg 2011
proportion and so on, and then calibrates the parameters of the model using the statistical data. During this process, the original information of the individual or family is handled in aggregate, so a considerable number of samples is necessary to ensure accuracy, according to the law of large numbers. The disaggregate method is a relatively complete model based on maximum random utility theory, which aims at an objective interpretation of traffic mode choice behavior and is currently the most widely used model for transportation mode split prediction. The disaggregate approach takes individual behavior as the research target and uses the original survey data directly to construct the model, so it makes full use of the survey data and does not need much sample data, which has earned it wide attention. Common disaggregate models are the probit model, logit model, dogit model, Box-Cox dogit model and so on. The logit model supposes that the random utility terms obey the Gumbel distribution, which is consistent with trip distribution features in large-scale transport. On the other hand, the form of the logit model is simple, its physical meaning is definite and it is easy to calculate, so it is widely used in practice [4]. Based on the above analysis, this paper studies traffic mode share ratio prediction using the logit model. Firstly, the factors with important influence on traffic mode choice are selected. Then the parameters of the model are calibrated using field survey data. Finally, we provide numerical examples on the field data to testify to the fine precision of the model.
2 Micro-economic Analysis of Traffic Mode Choice

The traffic share ratio at the aggregate level is the overall effect of the traffic mode choices of all travelers, so it is necessary to analyze the traveler's mode choice behavior. Generally speaking, the traffic modes available to travelers between any O-D pair have several types, such as private car, public transport, bicycle, walking and so on, which are called "selection branches". The satisfaction level of a selection branch is its utility. Based on the usual mentality of travelers when making choices, we suppose that: (1) Every traveler always chooses the selection branch with the maximum utility when choosing a traffic mode. (2) The utility of every selection branch perceived by a traveler is determined by both the traveler's features and the branch's features. The features of the traveler include vehicle ownership, age, income and so on. The features of a selection branch include travel cost, travel time, comfort, reliability, safety and so on. The utility V_i of the i-th selection branch can be expressed as follows:
Vi = β0 + β1 xi1 + β2 xi2 + β3 xi3 + … = β0 + Σ_{k=1}^{K} βk xik    (1)
Disaggregate Logit Model of Public Transportation Share Ratio Prediction
where β0, β1, β2, β3, … denote undetermined parameters and xi1, xi2, xi3, … denote the traveler's personal features and the features of the ith selection branch [5]. In the practical traffic environment we cannot measure all the factors affecting the utility. Moreover, for various reasons, such as the limitations of traffic information and the individual differences between travelers, there is usually a difference between a traveler's estimate of the utility of the ith selection branch and its real utility, which can be expressed as follows:
Ui = Vi + εi    (2)
The basis on which travelers choose a traffic mode is their subjective estimate of the utility of every mode, not the real utility. If Ui = max Uj, the traveler will choose the ith traffic mode. Because Ui is uncertain, the traveler chooses the ith traffic mode only with probability Pi, which can be calculated by the following formula:
Pi = P(Ui > Uj) = P(εj < Vi − Vj + εi),  ∀j ∈ C, j ≠ i    (3)
where C is the set of alternative traffic modes. According to the Bernoulli weak law of large numbers [6], the probability Pi can be regarded as the utilization ratio of the ith traffic mode, i.e., the proportion of travelers choosing the ith traffic mode, so the traffic demand shared by each traffic mode can be calculated by formula (3).
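The logic of formulas (2)-(3) can be illustrated numerically (this sketch is not part of the paper; the utility values are invented): with i.i.d. Gumbel disturbances, simulated choice frequencies approach the closed-form logit probabilities derived in the next section.

```python
import math
import random

# Monte-Carlo illustration of formula (3) under i.i.d. Gumbel disturbances.
# The systematic utilities V below are hypothetical values for the sketch.
random.seed(0)
V = [1.0, 0.5, 0.0]
N = 20000
wins = [0, 0, 0]
for _ in range(N):
    # Gumbel draw via inverse CDF: x = -ln(-ln(u)), u ~ U(0, 1)
    U = [v - math.log(-math.log(random.random())) for v in V]
    wins[U.index(max(U))] += 1          # traveler picks the max-utility branch
freq = [w / N for w in wins]
logit = [math.exp(v) / sum(math.exp(u) for u in V) for v in V]
```

With 20000 draws the empirical frequencies agree with the logit probabilities to within about one percentage point.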
3 Model Construction of Public Transportation Share Ratio Prediction

Logit Model. Suppose that εi in formula (3) obeys a Gumbel distribution; then we obtain the logit model of traffic modal split. The multinomial logit model can be expressed explicitly and its solution method is simple, so it has received much attention; it gives a good explanation of the macro response of the public transportation share ratio to the mode choice behavior of travelers. Therefore this paper forecasts the share ratio of public transportation using the logit model. Assume the passenger transport modes have J categories, let j = 1, 2, …, J denote the response variable category, and then the multinomial logit model can be written as follows:

ln[P(y=j|x) / P(y=J|x)] = αj + Σ_{k=1}^{K} βjk xk    (4)
D.H. Li and W.G. Hua
That is to say, if there are J categories of traffic mode, the choice probability of the jth traffic mode can be calculated by the following expression:

P(y=j|x) = e^{αj + Σ_{k=1}^{K} βjk xk} / (1 + Σ_{j=1}^{J−1} e^{αj + Σ_{k=1}^{K} βjk xk})    (5)

Once we have the observed values of the independent variables x1, x2, …, xK for the events of interest, the share ratio of each traffic mode can be calculated [7].

Choice of Independent Variables. Among the independent variables involved in the model, not all make an important contribution to the probability forecast. To reduce the work of data acquisition and calculation, we should first reject the variables with little contribution and keep those with important significance. This can be done by testing the significance of each independent variable in the logistic model by means of the likelihood ratio statistic, whose calculation formula is as follows:
Gs(xk) = −2[ln Ls(xk) − ln Lf]    (6)
where ln Ls(xk) is the natural logarithm of the maximum likelihood function of the model excluding the independent variable xk, and ln Lf is the natural logarithm of the maximum likelihood function of the model including all the independent variables. It has been proved in statistics that the likelihood ratio statistic Gs obeys a χ² distribution whose degrees of freedom equal the number of factors under test. If Gs satisfies Gs(xk) > χ²_{0.05} = 3.841, then xk is retained; otherwise it is discarded [7].
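Expression (5) above is straightforward to implement directly. A minimal sketch (the coefficient values below are hypothetical, not the calibrated ones from this paper):

```python
import math

def mode_probabilities(alphas, betas, x):
    """Choice probabilities of expression (5); mode J is the reference
    category whose utility is normalized to zero."""
    exps = [math.exp(a + sum(bk * xk for bk, xk in zip(b, x)))
            for a, b in zip(alphas, betas)]
    denom = 1.0 + sum(exps)
    return [e / denom for e in exps] + [1.0 / denom]

# Hypothetical coefficients: 3 modes versus one reference mode, 2 attributes.
p = mode_probabilities(
    alphas=[0.2, 0.1, -0.3],
    betas=[[-1.5, 0.4], [-1.2, 0.1], [-0.8, 0.6]],
    x=[0.5, 1.0],
)
```

By construction the J probabilities are positive and sum to one, which is what makes them usable directly as predicted share ratios.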
Parameter Estimation. Assuming that the type and number of independent variables have been chosen and the structure of the model has been determined, the parameters involved in the model are estimated next. Because of the nonlinear property of logistic regression, we adopt maximum likelihood estimation to calibrate the parameters. First construct the likelihood function:

L* = Π_{n=1}^{N} Π_{j∈An} Pnj^{cnj}    (7)

The logarithmic likelihood function can then be written as follows:
L = ln(L*) = Σ_{n=1}^{N} Σ_{j∈An} cnj ln(Pnj)
  = Σ_{n=1}^{N} Σ_{j∈An} cnj [(αj + Σ_{k=1}^{K} βjk xk) − ln(1 + Σ_{j=1}^{J−1} e^{αj + Σ_{k=1}^{K} βjk xk})]    (8)

Taking the partial derivatives of expression (8) with respect to αj and βjk, we get

∂L/∂αj = Σ_{n=1}^{N} Σ_{j∈An} cnj (1 − Σ_{j=1}^{J−1} e^{αj + Σ_{k=1}^{K} βjk xk} / (1 + Σ_{j=1}^{J−1} e^{αj + Σ_{k=1}^{K} βjk xk}))    (9)

∂L/∂βjk = Σ_{n=1}^{N} Σ_{j∈An} cnj (1 − Σ_{j=1}^{J−1} e^{αj + Σ_{k=1}^{K} βjk xk} / (1 + Σ_{j=1}^{J−1} e^{αj + Σ_{k=1}^{K} βjk xk})) xk    (10)
The above two expressions are nonlinear functions of αj and βjk, which can be solved with suitable software [8].

Goodness-of-Fit Test. After parameter estimation, we should investigate the quality of the model; in other words, evaluate whether the constructed model is suitable and gives adequate forecast accuracy. If the model fits the observed data well, it can be adopted for prediction; otherwise the model should be specified anew. The Pearson χ² statistic is usually used to evaluate the goodness of fit of a logistic regression model, the formula of which is
χ² = Σ_{j=1}^{J} (Oj − Ej)² / Ej    (11)

in which j = 1, 2, …, J, J is the number of covariate patterns, and Oj and Ej are the observed frequency and the predicted frequency of the jth covariate pattern, respectively. A smaller
value of the χ² statistic means the model fits the observed data better; otherwise we should find out the reason and modify the model [8].
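Formula (11) can be sketched as follows; the observed and predicted frequencies below are invented for illustration and are not from this study.

```python
def pearson_chi2(observed, expected):
    """Pearson chi-square goodness-of-fit statistic, as in formula (11)."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Invented observed vs. model-predicted frequencies for J = 4 covariate patterns.
chi2 = pearson_chi2([20, 35, 30, 15], [22.0, 33.0, 28.0, 17.0])
```

A perfectly fitting model gives chi2 = 0; the statistic grows as observed and predicted frequencies diverge.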
4 Experiment of Public Transportation Share Ratio Prediction
Data Preparation. In this paper we take Hangzhou as an example to carry out experiments on public transportation share ratio forecasting; there the passenger transport modes mainly include public transportation, private car, motorcycle (bicycle) and walking. In order to obtain data on residents' travel choice features, used for analyzing the influence of each factor on traffic mode choice, we carried out a questionnaire survey in Hangzhou. To guarantee the accuracy of the parameter estimation, 500 samples were extracted in total. The survey questions cover the features of the traffic modes and the personal and travel features of the traveller [9]. As an example, three samples are shown in Table 1:

Table 1. Sample Data of Trip Survey
No.  Select result  Xj1  Xj2  Xj3  Xj4  Xj5  Xj6   Xj7  Xj8  Xj9  Xj10  Xj11  Xj12  Xj13
1    1              1    0    0    2    35   0.70  0    43   1    4000  0     10    1
2    3              0    0    1    1    20   0.90  0    36   2    2500  0     6     3
3    2              0    1    0    3.2  30   0.85  1    23   1    8000  1     20    1

(Xj1-Xj3: inherent dummies of the traffic mode; Xj4-Xj6: fee, time, punctuality; Xj7-Xj11: personal features of the traveller; Xj12-Xj13: travel features.)
Here, select result 1 represents public transportation, 2 private car, 3 motorcycle (bicycle) and 4 walking. Xj1, Xj2 and Xj3 are inherent dummies, which capture influence factors of the jth traffic mode not covered by expression (1). Xj4, Xj5 and Xj6 respectively represent the travel fee (yuan), travel time (minutes) and punctuality rate of the jth traffic mode. Xj7-Xj11 represent the personal features of the surveyed traveller, respectively gender (1: male; 0: female), age, job (1: institution; 2: enterprise; 3: self-employed; 4: teacher; 5: student; 6: unemployed; 7: other), income, and private car ownership (1: has a private car; 0: does not). Xj12 and Xj13 respectively represent the travel distance (kilometers) and the travel purpose (1: going to work; 2: going to school; 3: shopping and entertainment; 4: visiting relatives and friends; 5: other).
Model Calibration. Among all the influencing factors in Table 1, not all are important, so we carry out a likelihood ratio test to choose the significant factors by means of the likelihood ratio statistic; the results calculated according to expression (6) are shown in Table 2.
Table 2. Calculation results of the likelihood ratio statistic (ln L10 = −163.081512)

Factor   ln L9(xjk)     Gs(xk) = −2[ln L9(xjk) − ln L10]
Xj4      −182.026124    37.889224
Xj5      −184.740219    43.317414
Xj6      −198.859318    71.555612
Xj7      −164.101811    2.040598
Xj8      −169.790223    13.417422
Xj9      −164.610131    3.057238
Xj10     −181.783833    37.404642
Xj11     −177.431286    28.699548
Xj12     −179.358576    32.554128
Xj13     −172.197559    18.232094
Here ln L9(xjk) is the natural logarithm of the maximum likelihood function of the model excluding the independent variable xjk, and ln L10 is the natural logarithm of the maximum likelihood function of the model including all the independent variables. From Table 2 we can see that the likelihood ratio statistics of Xj7 and Xj9 are both less than 3.841, so they are discarded and the other factors are retained. Among the retained factors, punctuality rate, travel time, travel fee, income and travel distance are the most important and the others less so, which is consistent with the actual situation [10]. Using the chosen factors and the survey data, we carry out logit regression and calibrate the parameters by means of the SPSS software, obtaining the following model of the public transportation share ratio:
ln(P1/P4) = 0.166 − 1.863 Xj4 − 1.185 Xj5 + 0.262 Xj6 − 0.573 Xj8 − 2.194 Xj10 + 0.851 Xj11 + 0.462 Xj12 + 1.269 Xj13    (12)

ln(P2/P4) = 0.174 − 1.483 Xj4 − 1.712 Xj5 + 0.121 Xj6 + 0.931 Xj8 + 0.897 Xj10 − 1.208 Xj11 − 1.781 Xj12 + 1.019 Xj13    (13)

ln(P3/P4) = 0.886 − 1.315 Xj4 − 0.989 Xj5 + 0.143 Xj6 − 1.145 Xj8 + 1.359 Xj10 − 0.734 Xj11 + 1.513 Xj12 − 0.832 Xj13    (14)
where P1, P2, P3 and P4 are the share ratios of public transportation, private car, motorcycle (bicycle) and walking, respectively.

Share Ratio Forecast and Result Analysis. Before carrying out the share ratio forecast tests, a likelihood ratio test is first conducted to check whether the constructed model is statistically significant. The result is shown in Table 3.

Table 3. Likelihood Ratio Test of the Model

              Chi-Square  d.f.  Significance
Model         156.018     11    0.001
Improvement   156.018     11    0.001
From Table 3 we can see that the likelihood ratio χ² statistic of the model is 156.018, which means the model is statistically significant, so the proposed model is appropriate for predicting the share ratio of public transportation. Using the presented model and the questionnaire survey data, the share ratio of each traffic mode can be calculated. The comparison of the forecast results and the real values is shown in Table 4:
Table 4. Share Ratios of Competing Transportation Modes

Transportation mode     Actual value (%)  Forecast value (%)  Relative error (%)
Public transport        20.97             22.36               6.6
Private car             41.03             38.09               7.2
Motorcycle (bicycle)    27.76             29.86               7.6
Walking                 10.24             9.69                5.4
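Converting fitted log-odds such as (12)-(14) back into share ratios only requires exponentiation and normalization against the base category (walking). A minimal sketch with made-up log-odds values (not computed from the survey data):

```python
import math

# Hypothetical fitted log-odds ln(Pj/P4) for one traveller, in the spirit of
# expressions (12)-(14); the numbers are illustrative, not survey results.
g = [0.40, -0.25, 0.10]  # public transport, private car, motorcycle vs. walking

denom = 1.0 + sum(math.exp(v) for v in g)
shares = [math.exp(v) / denom for v in g] + [1.0 / denom]  # P1, P2, P3, P4
```

The four shares necessarily sum to one, so they can be read directly as the predicted modal split.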
As shown in Table 4, the relative error of the forecast is less than 8%, which means the model has high accuracy and is suitable for predicting the share ratios of competing transportation modes. According to the forecast result, the share ratio of public transportation is 22.36%. Although this is 10% higher than the national average level, there is still a large gap compared with the 40%-60% public transportation share ratios in Europe, Japan and South America. In expressions (12), (13) and (14) of the model, the coefficients of Xj4 (travel cost) and Xj5 (travel time) are both negative, which means travel cost and travel time have a negative effect on traffic mode choice. The coefficient of Xj6 (punctuality rate) is positive, which means punctuality has a positive effect on travelers' choice of a traffic mode. Therefore, we can reduce the travel cost and travel time of public transportation and improve its punctuality rate to enhance its attraction. Meanwhile, we can carry out scientific traffic demand management of private cars and motorcycles, i.e., restrict their ownership and frequency of use, so as to efficiently improve the share ratio of public transportation and finally reach the goal of optimizing the traffic structure.
5 Summary
In this paper we have studied public transportation share ratio prediction. Based on a microeconomic analysis of traffic mode choice and the maximum random utility theory, a logit regression method for traffic mode share ratio forecasting is put forward. The method first chooses the factors with an important influence on traffic mode choice, and then estimates the parameters to construct the share ratio forecast function. Experiments forecasting the share ratios of competing transportation modes using the survey data show that the proposed model has good accuracy and robustness, and thus high practical value for policy making on the priority development of public transportation.
References

1. Niu, X., Wang, W., Yin, Z.: Research on method of urban passenger traffic mode split forecast. Journal of Highway and Transportation Research and Development 21(3), 75-78 (2004)
2. Wang, Z., Liu, A., Zheng, P.: Generalized logit method for traffic modal splitting. Journal of Tongji University 27(3), 314-318 (1999)
3. Liu, Z., Deng, W., Guo, T.: Application of disaggregate model based on RP/SP survey to transportation planning. Journal of Transportation Engineering and Information 6(3), 59-64 (2008)
4. Ghareib, A.H.: Evaluation of logit and probit models in mode-choice situation. Journal of Transportation Engineering 122(4), 282-290 (1996)
5. Liu, C.: Advanced Traffic Planning. China Communications Press, Beijing (2001)
6. Mathematics Department of Fudan University: Probability and Mathematical Statistics. People's Education Press, Beijing (1979)
7. Wang, J., Guo, Z.: Logistic Regression Models: Method and Application. Higher Education Press, Beijing (2001)
8. Yu, X., Ren, X.: Multivariable Statistical Analysis. China Statistics Press, Beijing (1999)
9. Hu, H., Teng, J., Gao, Y., et al.: Research on travel mode choice behavior under integrated multi-modal transit information service. China Journal of Highway and Transport 22(2), 87-92 (2009)
10. Dou, H., Wu, Z., Liu, H., et al.: Algorithm of traffic state probability forecasting based on K nearest neighbor nonparametric regression. Journal of Highway and Transportation Research and Development 27(8), 76-80 (2010)
Design of Calibration System for Vehicle Speed Monitoring Device

Junli Gao 1, Haitao Song 2,*, Qiang Fang 3, and Xiaoqing Cai 4

1 School of Automation, Guangdong Univ. of Tech., Guangzhou, China
2 School of Business Administration, South China Univ. of Tech., Guangzhou, China
3 Guangdong Institute of Metrology, Guangzhou, China
4 School of Civil Engineering & Transportation, South China Univ. of Tech., Guangzhou, China
[email protected]
Abstract. This paper presents the design of a calibration system for vehicle speed monitoring devices based on ground loop sensors, using direct digital frequency synthesis (DDS) technology. The calibration system generates sinusoidal signals, adjustable in frequency and time interval, which are applied to excitation loop sensors. The sinusoidal signal simulates a fast-moving vehicle and couples to the maximum degree with the signal from the ground loop sensor to excite the vehicle speed monitoring device, whose performance can then be verified by the calibration system.

Keywords: Direct digital frequency synthesis, loop sensor, vehicle, calibration system.
1 Introduction

Along with the substantial increase of highway mileage and vehicle ownership in China, accurate detection of vehicle information is the key to achieving traffic information statistics and intelligent traffic control. The automatic over-speed monitoring system has become an important device for guaranteeing road traffic security; such systems mainly include vehicle speed monitoring devices based on the principle of electromagnetic induction, radar velocimeters using the Doppler principle, and laser velocimeters [1,2]. Among them, due to its low cost, reliability and maintainability, the automatic over-speed monitoring system based on ground loop sensors is the most widely used. It acquires traffic flow information through ground loop sensors and adjusts the timing for releasing vehicles at road junctions to achieve intelligent control of traffic signals, which plays a key role in alleviating the traffic pressure on large and medium-sized cities. Currently, a large number of vehicle speed monitoring devices based on ground loop sensors are deployed on state highways and, in particular, at crucial junctions in city areas. Existing tests and annual maintenance show that the measurement results drift and may even be misreported due to the influence of the environment, construction quality and other factors [1,3]. According to the stipulations of the Metrology Law of the P.R. China, velocimeters used in vehicle speed monitoring instruments are subject to compulsory calibration [2]. Therefore, it is very important to calibrate velocimeters accurately, quickly and conveniently.
* Corresponding author.
M. Zhu (Ed.): ICCIC 2011, Part VI, CCIS 236, pp. 166–172, 2011. © Springer-Verlag Berlin Heidelberg 2011
2 System Scheme Design

The calibration methods for vehicle velocimeters mainly include actual-vehicle test methods and analog signal test methods [4,5]. In the former, the tested vehicle passes over the ground loop sensors at a known constant speed; the vehicle velocimeter responds and displays the measured speed, and is calibrated by comparing the displayed value with the known constant speed. This method is simple and easy to implement, and is commonly used by velocimeter manufacturers in the initial test of the equipment, but it has drawbacks such as larger error, lower precision, heavier workload, poorer reproducibility, operational risk and a limited speed measuring range. The analog signal calibration method, in contrast, is the one recommended in the National Metrological Verification Regulation JJG 527-2007, implemented in February 2008. It uses external signals applied to excitation loop sensors to excite the ground loop sensors, in a way consistent with actual vehicles passing over them. The calibration system issues the two excitation signals in succession and precisely measures the interval (ΔT). The distance (S) between adjacent excitation loop sensors is known and equal to the distance between adjacent ground loop sensors, so S/ΔT is the standard speed value given by the calibration system. The vehicle velocimeter is calibrated by comparing its reading with this standard speed value. This method features high detection precision, simple operation, a wide measuring range and good repeatability.
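The standard speed computation S/ΔT described above is straightforward; a small sketch with an assumed loop spacing and interval (the numbers are illustrative, not from the paper):

```python
def standard_speed_kmh(distance_m, interval_s):
    """Standard speed S / dT in km/h, with S in metres and dT in seconds."""
    return distance_m / interval_s * 3.6

# Assumed example: loops 5 m apart, excitation signals 0.3 s apart.
v_std = standard_speed_kmh(5.0, 0.3)   # about 60 km/h
```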
Fig. 1. The calibration system scheme
The calibration system scheme for vehicle velocimeters based on the analog signal calibration method is shown in Fig. 1. It takes the CPU Atmega64 as the core and controls the DDS chip AD9850 to generate 1-4 time-variable and frequency-variable sinusoidal signals. After voltage amplification and power amplification, the signals are applied successively to the 1-4 excitation loop sensors, which match the corresponding ground loop sensors in spacing. The vehicle velocimeter is calibrated by comparing the velocimeter value, excited via the electromagnetic mutual inductance between the two types of loop sensors, with the standard speed value preset in the calibration system. Based on the principle of electromagnetic induction, the excitation loop sensors detect the sinusoidal signals that the vehicle velocimeter actively drives in the ground loop sensors. These signals pass through the frequency detection module, which acquires their frequency value. According to this value, the CPU Atmega64 configures the AD9850 automatically to generate excitation signals consistent with the signals in the ground loop sensors, so that the two types of signals generate the maximum electromagnetic mutual inductance and the sensitivity of the calibration system is enhanced. The system uses the 1-4 excitation loop sensors to excite the vehicle velocimeter and averages many real-time speed values to improve the accuracy of the calibration. The keyboard and LCD module are used for parameter setting and for displaying calibration results, respectively.
3 Design of the Hardware Circuit

The system circuit mainly includes the minimum system of the CPU Atmega64, the sinusoidal signal generator based on the DDS chip AD9850, the corresponding signal conditioning circuit, and the frequency detection circuit that detects the frequency of the signal driven in the ground loop sensors by the vehicle velocimeter and realizes adaptive control of the excitation loop sensors.
Fig. 2. Signal generator based on AD9850
Signal Generator Based on AD9850. The sinusoidal signal generator based on the DDS chip AD9850 is shown in Fig. 2. Here, Y401 is the precision clock source providing the reference clock signal for the AD9850. The CPU Atmega64 directly writes the frequency, phase and other control data serially to the AD9850 through the data port LOAD 01, the clock port WCLK 01 and the frequency-update clock port FQ_UD 01, realizing direct digital frequency synthesis. A high-fidelity sinusoidal signal is obtained by the subsequent low-pass filter. The AD9850 has a 32-bit frequency control word, so the output frequency resolution can be as fine as 0.0291 Hz with a 125 MHz clock signal.

Signal Conditioning Circuit. The amplitude of the sinusoidal wave from the AD9850 is a millivolt-grade signal, which can be applied to the excitation loop sensors only after the necessary signal conditioning. The conditioning circuit includes a non-inverting voltage proportion amplifier and a power amplifier, as shown in Fig. 3. The voltage amplifier circuit is superimposed with a 12 V DC bias voltage, which provides the static working point for the follow-up power amplification circuit. Through the high gain-bandwidth power amplifier circuit, the sinusoidal signal can be amplified up to 2 A to meet the requirements for exciting the ground loop sensors.

Fig. 3. Signal conditioning circuit
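As a numeric cross-check of the AD9850 resolution quoted above (the 50 kHz example frequency is an assumption, not a value from the paper): the DDS output frequency is word × f_clk / 2³², so the control word and resolution follow directly.

```python
F_CLK = 125_000_000  # AD9850 reference clock in Hz (125 MHz, as in the text)

def tuning_word(f_out_hz):
    """32-bit DDS frequency control word: f_out = word * F_CLK / 2**32."""
    return round(f_out_hz * 2**32 / F_CLK)

resolution_hz = F_CLK / 2**32     # ~0.0291 Hz, matching the value in the text
word = tuning_word(50_000)        # hypothetical 50 kHz excitation signal
```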
Fig. 4. Frequency detection circuit
4 Design of the Application Program

The application program mainly includes the DDS signal control program, the frequency detection program for the ground loop sensors, and the human-machine interface. The DDS signal control program consists of the AD9850 reset program, the initialization program and the load program for the frequency/phase control words. The frequency detection program uses the capture function of the internal timer T/C1 of the CPU Atmega64 to capture the output pulse signals of Fig. 4 in unit time, and then calculates the frequency of the signal driven in the ground loop sensors by the vehicle velocimeter.

Fig. 5. System program flowchart: a) parameters setting; b) frequency detection; c) simulation excitation
The concrete program code is omitted here. The human-machine interface is divided into system parameter setting, frequency detection of the ground loop sensors, and simulation excitation of the vehicle velocimeter. The flowchart for system parameter setting is shown in Fig. 5a. It should be executed first, and specifically includes the simulated vehicle speed, the driving direction, the channel separation distances corresponding to the 1-4 excitation loop sensors, and the output signal frequency. The frequency detection flowchart for the ground loop sensors is shown in Fig. 5b. The frequency of the signal driven in the ground loop sensors by the vehicle velocimeter is measured according to Fig. 5b and saved automatically as reference data for adjusting the excitation signals of the calibration system in time. The excitation process of the vehicle velocimeter is shown in Fig. 5c. When the "OK" button is pushed, the calibration system begins a 3-second countdown and then excites the vehicle velocimeter according to the preset parameters. If the excitation is ineffective, the system can return to the parameter setting interface and re-excite after the parameters are set again. After excitation, the speed value detected by the vehicle velocimeter is compared with the standard value preset by the calibration system to determine whether the precision of the vehicle velocimeter meets the standard specification.
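The final pass/fail comparison can be sketched as below; the 1% tolerance is an assumed figure for illustration, not a value taken from JJG 527-2007.

```python
def within_tolerance(measured_kmh, standard_kmh, max_rel_error=0.01):
    """Pass/fail check of the velocimeter reading against the standard value;
    the 1% default tolerance is an assumed figure."""
    return abs(measured_kmh - standard_kmh) / standard_kmh <= max_rel_error

ok = within_tolerance(99.6, 100.0)   # 0.4% deviation passes a 1% limit
```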
5 Application and Results

First, the accuracy of the calibration system developed by our team must be verified. We employed a time measurement instrument with an accuracy better than 0.01%, provided by the Guangdong Institute of Metrology, to capture the interval at the inlets of excitation loop sensors with known spacing. The interval is used to calculate the speed value actually simulated by the calibration system, and comparison with the preset speed value shows a simulation speed accuracy of 0.1-0.2%. Then, based on the calibration system, actual testing was carried out on the Enping Shahuka road section with the assistance of the Guangdong Province Enping Traffic Police Detachment. For example, the calibration accuracy is about 0.3-0.5% for the vehicle "Guangdong J 41839", which basically meets the design requirements. The accuracy of the calibration system can be further improved by replacing the high-speed relays and optimizing the control program.
Fig. 6. Calibration system testing site
6 Summary

Integrating DDS and single-chip processor technology, we developed a calibration system for vehicle velocimeters based on ground loop sensors. The analog signal calibration method is adopted to compare the value of the vehicle velocimeter with the standard value of the calibration system. The system has strong applicability, ease of use, a wide testing range and good repeatability, and it does not affect normal traffic during testing. Moreover, this system, of good value for engineering applications, can be used not only as a measurement tool by technology and quality monitoring institutions, but also as a measuring instrument by vehicle velocimeter manufacturers.

Acknowledgments. The authors would like to thank the support of the Guangdong Province "211 Project" - Guangdong Province Development & Reform Commission under grant [431] and the Special Foundation of the Chinese Ministry of Science and Technology under grant 2007GYJ003.
References

1. Gao, F., Fang, Q.: The research on method of performance testing and verification for traffic loop-based speed meter. Shanghai Measurement and Testing (204), 27-28 (2008)
2. Lin, Z.: The calibration scheme on vehicle speed automatic monitoring system. China Metrology (6), 107-108 (2008)
3. Hao, X., Liu, G.: Design and implementation of calibration system for vehicle loop-based speed-measuring meter. Science Technology and Engineering 9(13), 3912-3915 (2009)
4. Nie, G.: Principle analysis on identification and calibration for loop sensor velocity measurement system. China Metrology (4), 95-96 (2010)
5. Nie, G.: The research and implementation of identification and calibration equipment for loop sensor velocity measurement. China University of Petroleum (East China), Dongying (2008)
Dynamic Analysis and Numerical Simulation on the Road Turning with Ultra-High

Liang Yujuan

Department of Physics and Electronic Engineering, Hechi University, Yizhou, Guangxi 546300, China
[email protected]
Abstract. By analyzing the dynamic characteristics of vehicles on a road turning, the range of velocities within which vehicles turn safely is obtained. A single-lane cellular automaton model containing a road turning with ultra-high (superelevation) is proposed to simulate the effect of such a turning on traffic behavior. The results show that, within a certain range, the greater the ultra-high is, the greater the average velocity and the average flow of the system are. Therefore, reasonably setting up ultra-high on road turnings can improve the traffic capacity of the road.

Keywords: ultra-high, road turning, centripetal force, centrifugal force, cellular automaton model.
1 Introduction

With the rapid development of the vehicle industry, traffic problems such as congestion, accidents and energy shortage have become common issues all over the world. Building unimpeded and well-developed traffic transportation networks has become the committed aim of many countries, and traffic problems have been a hot subject of research in recent years [1-15]. Scholars in different fields have put forward all kinds of models [9-15] to describe the characteristics of traffic flow; among them, the cellular automaton model is easy to operate on a computer and its rules can be revised nimbly to suit all kinds of actual traffic conditions, so it has been widely applied and developed [1-9] in traffic flow research. The most famous cellular automaton model is the NaSch model [9], put forward by Nagel and Schreckenberg. This paper builds on the NaSch model, adopts a periodic boundary condition, and sets up a single-lane cellular automaton model containing a road turning with ultra-high to study the effect of ultra-high on the traffic flow.
M. Zhu (Ed.): ICCIC 2011, Part VI, CCIS 236, pp. 173-178, 2011. © Springer-Verlag Berlin Heidelberg 2011

2 Dynamic Analysis of Vehicles on the Road Turning

Vehicles require a centripetal force when they turn. According to Newton's laws of motion, on a road turning without ultra-high the centripetal force is provided by the normal static friction μmg [1]:

m v²/r = μ m g    (1)

v = √(r μ g)    (2)
Here m is the mass of the vehicle, g the acceleration of gravity, r the curvature radius of the road turning, μ the static friction coefficient between tire and road surface, and v the velocity of the vehicle. The left side of Equation (1), mv²/r, is the centripetal force on the vehicle. In Equation (2), v is determined by the three factors r, μ and g (g is constant): the greater r and μ are, the greater v is, and √(rμg) is the critical velocity vc for turning safely. If and only if v ≤ vc can the normal static friction provide the required centripetal force, so that the vehicle is able to turn safely. However, the friction coefficient μ gradually decreases over the road's service life. When μmg < mv²/r, the normal static friction of the vehicle is not enough to provide the centripetal force; the centrifugal force is equal in magnitude to the centripetal force but opposite in direction, and under its effect the vehicle will slip towards the outside of the road turning.
Fig. 1. Force diagram of the vehicle
For an ultra-high road turning, the forces on the vehicle are shown in Fig. 1, where G, N and Fμ stand for gravity, the supporting (normal) force and the transverse friction force, respectively. G is decomposed into two components, G∥ = mg sinθ and G⊥ = mg cosθ. From the force balance, G⊥ = N, and G∥ provides the centripetal force F; the centrifugal force F* is equal in magnitude and opposite in direction to F. The transverse friction force Fμ = Fμ1 + Fμ2, whose magnitude and direction are decided by the road condition and the vehicle velocity. When G∥ is sufficient to provide the centripetal force, Fμ = 0; when G∥ is too small to provide the centripetal force, Fμ reinforces it and points downwards along the slope; when the ultra-high angle θ is quite large, the vehicle tends to slip towards the inside of the turning and Fμ points upwards along the slope. Assuming now that the direction of Fμ is downwards along the slope, the equation of centripetal motion is:
mv²/r = mg sinθ + Fμ = mg sinθ + μmg cosθ,  (3)
v = √(rg(sinθ + μ cosθ)).  (4)
When θ is quite small, sinθ ≈ θ and cosθ ≈ 1, therefore
v = √(rg(θ + μ)).  (5)
Thus the critical velocity below which a vehicle does not slip towards the outside of an ultra-high road turning is vc = √(rg(θ + μ)), and vc is decided by r, μ, g and θ. The greater r, μ and θ are, the greater vc is. Comparing (2) and (5), under the same r and μ the safe passing speed on an ultra-high road turning is higher than on a turning without ultra-high. For example, with r = 100 m, g = 10 m/s², μ = 0.68 and θ = 6°, the turning without ultra-high gives vc ≈ 26.08 m/s ≈ 93.88 km/h, while the ultra-high turning gives vc ≈ 28.01 m/s ≈ 100.85 km/h.
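The numbers in this comparison can be checked directly. Below is a minimal Python sketch of Equations (2) and (5); the only assumption beyond the text is that θ is converted from degrees to radians, consistent with the small-angle form used above.

```python
import math

def v_c_flat(r, mu, g=10.0):
    """Critical turning speed on a curve without ultra-high, Eq. (2)."""
    return math.sqrt(r * mu * g)

def v_c_super(r, mu, theta_deg, g=10.0):
    """Critical speed on an ultra-high curve, small-angle form of Eq. (5)."""
    theta = math.radians(theta_deg)
    return math.sqrt(r * g * (theta + mu))

r, mu, theta_deg = 100.0, 0.68, 6.0
print(round(v_c_flat(r, mu), 2))              # -> 26.08 m/s
print(round(v_c_super(r, mu, theta_deg), 2))  # -> 28.01 m/s
```

Both values match the paper's figures of about 93.88 km/h and 100.85 km/h.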
3 Model and Rule To simplify the problem, it is assumed that the road is divided into L = 1000 cells, each of which is either empty or occupied by one car with velocity v = 0, 1, …, vmax; there is only one type of car on the road and only one road turning. The turning is placed in the middle of the road, its outside is higher than its inside by h, and a deceleration section of length l is placed in front of the turning. From the critical velocity vc of the ultra-high turning obtained by the dynamic analysis above, vehicles must decelerate to v ≤ vc when passing the turning. Therefore the maximum velocity is divided into two cases: on the turning section, vmax = vmax2 (slow cars); on the other sections, vmax = vmax1 (fast cars). Vehicles move from left to right under a periodic boundary condition. Considering the effects of the road turning and the delay probability p on vehicle velocity, the evolution rules of the NaSch model are revised as follows:
(1) define the maximum velocity vmax: if the vehicle is on the turning section, take vmax = vmax2; otherwise take vmax = vmax1;
(2) define the delay probability p: if the vehicle is on the deceleration section l before the turning and its speed v > vmax2, take p = p1; otherwise take p = p2;
(3) acceleration: vn(t) → min(vn(t)+1, vmax);
(4) deterministic deceleration to avoid accidents: vn(t) → min(vn(t), gapn(t));
(5) randomization with probability p: vn(t) → max(vn(t)−1, 0);
(6) position update: xn(t) → xn(t)+vn(t).
Here vmax is the maximum velocity of the vehicle, with vmax1 > vmax2; xn(t) and vn(t) are the position and velocity of vehicle n; gapn(t) = xn+1(t) − xn(t) − 1 denotes the number of empty cells in front of vehicle n, and xn+1(t) is the position of the vehicle ahead.
p1 and p2 denote the larger and the smaller delay probability, respectively. Compared with the NaSch model, steps (1) and (2) are added to the rules of this model.
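The six-step update above can be sketched as a short simulation. This is an illustrative Python implementation, not the authors' code: the position and width of the turning section, the car count, the step count and the random seed are arbitrary demonstration choices; the stated parameters (L = 1000, vmax1 = 5, l = 8, p1 = 0.8, p2 = 0.25) follow Section 4.

```python
import random

def simulate(L=1000, n_cars=150, vmax1=5, vmax2=3, turn=(496, 504),
             decel_len=8, p1=0.8, p2=0.25, steps=200, seed=1):
    """One run of the modified NaSch model with a speed-limited turning section."""
    random.seed(seed)
    cells = sorted(random.sample(range(L), n_cars))  # car positions on the ring
    vels = [0] * n_cars
    turn_lo, turn_hi = turn
    for _ in range(steps):
        new_pos = []
        for n, x in enumerate(cells):
            # step (1): section-dependent speed limit
            vmax = vmax2 if turn_lo <= x <= turn_hi else vmax1
            # step (2): section-dependent delay probability
            on_decel = turn_lo - decel_len <= x < turn_lo
            p = p1 if on_decel and vels[n] > vmax2 else p2
            # step (3): acceleration
            v = min(vels[n] + 1, vmax)
            # step (4): deterministic deceleration (gap to car ahead, periodic boundary)
            gap = (cells[(n + 1) % n_cars] - x - 1) % L
            v = min(v, gap)
            # step (5): randomization
            if random.random() < p:
                v = max(v - 1, 0)
            vels[n] = v
            # step (6): position update
            new_pos.append((x + v) % L)
        cells = new_pos
    mean_v = sum(vels) / n_cars
    return mean_v, (n_cars / L) * mean_v  # average speed, average flow J = rho * v

v_bar, flow = simulate()
print(v_bar, flow)
```

Sweeping `n_cars` (i.e., the density ρ = N/L) and `vmax2` reproduces curves of the kind shown in Fig. 2.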
4 Numerical Results and Analysis The model parameters are set as follows: one time step corresponds to 1 s and each cell to 7.5 m, so the road length is L = 7.5 km; the deceleration section before the turning is l = 8 cells = 60 m; the delay probabilities are p1 = 0.8 and p2 = 0.25; r = 100 m, g = 10 m/s², μ = 0.68. We take vmax1 = 5, which corresponds to an actual velocity of 135 km/h, while the critical velocity is vc = √(rg(θ + μ)) ≈ 100.85 km/h. If we took vmax2 = 4 (108 km/h), then vmax2 > vc: the component mg sinθ of gravity together with the maximum static friction could not supply enough centripetal force, and under the centrifugal effect vehicles would slip towards the outside of the turning, possibly causing serious traffic accidents. Therefore only vmax2 = 3, 2, 1 can be taken, corresponding to actual velocities of 81 km/h, 54 km/h and 27 km/h. Suppose that the dip angle θ of the ultra-high turning is proportional to the maximum velocity, namely θ = k·vmax2, where k is the proportionality coefficient. Let k = 1, so θ = vmax2 and θ = 3 is the greatest dip angle. If N is the total number of vehicles on the road of length L, the density of vehicles, average speed and average flow are defined as follows:
density of vehicles: ρ = N/L;
average speed: v̄(t) = (1/N) Σ_{n=1}^{N} vn(t) (time-step average), v̄(T) = (1/T) Σ_{t=t0}^{t0+T−1} v̄(t) (time average), v̄ = (1/S) Σ_{i=1}^{S} v̄(T) (sample average);
average flow: J = ρ·v̄.
In the simulation, the first t0 = 2×10⁴ time steps are discarded to remove transient effects, and data are then recorded over the following T = 2×10⁴ time steps. v̄(t) for each time step is the average of vn(t); v̄(T) for each run is the average of v̄(t) over the last T = 2×10⁴ time steps; v̄ and J are obtained by averaging over 10 simulation runs. Fig. 2 describes the dependence of the average speed and the average flow on the density for different values of θ. When the road turning does not exist, the average speed and average flow of the system in the low and medium density ranges are the greatest. Fig. 2(a) shows that in the low density range, the average speed and the corresponding critical density of the free-moving state increase obviously with the inclination of the ultra-high; beyond the critical density, the average speed drops rapidly, and only at very high density are the average speeds equal in all situations. Fig. 2(b) shows that at low density, the flow of the free-flow phase increases linearly with density; in the medium density range, the maximum flow increases obviously with the inclination of the ultra-high; in the high density range, the flow is proportional to the speed and both decrease linearly. Fig. 3 is the time-space pattern of 300 lattices around the road turning at vehicle density ρ = 0.15. The x-axis represents the position of vehicles and the t-axis represents evolutionary time; white dots are empty cells and black dots are cars. The grey areas denote smooth traffic, while the black areas mean that vehicles are jammed and the congestion spreads backwards. The jam areas shrink as the ultra-high increases, and the discontinuous jam areas indicate stop-and-go traffic. The time-space patterns also show the trends of Fig. 2, which confirms that the ultra-high of a road turning is one of the important factors affecting traffic flow; properly enlarging the ultra-high can reduce the speed-limiting bottleneck effect of the turning.
Fig. 2. The dependence of the average speed and the average flow on the density for different values of θ
Fig. 3. The space-time diagram of the road turning with 300 lattices back and forth: (a) θ = 1; (b) θ = 2; (c) θ = 3
5 Summary A road turning is a common traffic bottleneck where accidents often happen, and it is one of the important factors influencing traffic. The dynamic analysis of vehicles on a road turning shows that the speed must be smaller than the critical speed vc, which is decided by the four factors r, μ, g and θ, and that the critical speed of an ultra-high road turning is greater than that of a turning without ultra-high. Based on the NaSch model, the simulation results show that the greater the ultra-high is, the greater the average speed and average flow of the system are within a certain density range. It is concluded that reasonably setting up ultra-high on a road turning can improve the capacity of the road.
Acknowledgement. This work is supported by the National Natural Science Foundation of China (Grant Nos. 10662002 and 10865001), the National Basic Research Program of China (Grant No. 2006CB705500), the Natural Science Foundation of Guangxi (Grant No. 2011GXNSFA018145), and the research fund of the Guangxi Education Department (Grant Nos. 201012MS206 and 201010LX462).
References
1. Liang, Y., Xue, Y.: Acta Phys. Sin. 59, 5325 (2010)
2. Pan, J., Xue, Y., Liang, Y., Tang, T.: Chinese Physics B 18, 4169 (2009)
3. Jia, B., Li, X., Jiang, R., Gao, Z.: Acta Phys. Sin. 58, 6845 (2009)
4. Liang, Y., Pan, J., Xue, Y.: Guangxi Phys. 30, 8 (2009)
5. Li, S., Kong, L., Liu, M.: Guangxi Sciences 15, 47 (2008)
6. Liang, Y.: Guangxi Sciences 18, 44 (2011)
7. Zhao, X., Gao, Z., Jia, B.: Physica A 385, 645 (2007)
8. Liang, Y.: Journal of Sichuan Normal University (Natural Science) 34 (2011)
9. Nagel, K., Schreckenberg, M.: J. Phys. I France 2, 2221 (1992)
10. Bando, M., Hasebe, K., Nakayama, A., Shibata, A., Sugiyama, Y.: Physical Review E 51, 1035 (1995)
11. Zhang, H.M.: Transportation Research B 36, 275 (2002)
12. Helbing, D., Hennecke, A., Shvetsov, V., Treiber, M.: Transportation Research B 35, 183 (2001)
13. Tian, J., Jia, B., Li, X., Gao, Z.: Chinese Physics B 19, 01051 (2010)
14. Tang, T., Huang, H., Xu, X., Xue, Y.: Chinese Physics Letters 24, 1410 (2007)
15. Liang, Y., Liang, G.: Highways & Automotive Applications (2), 36 (2011)
Solving the Aircraft Assigning Problem by the Ant Colony Algorithm Tao Zhang, Jing Lin, Biao Qiu, and Yizhe Fu School of Information Management and Engineering, Shanghai University of Finance and Economics, Shanghai 200433, China [email protected]
Abstract. This paper formulates the aircraft assigning problem as a vehicle routing problem and constructs a mixed integer programming model. The model considers not only the link time and link airport between two consecutive flight strings, but also the available flying time of each aircraft. To solve the problem, an Ant Colony System (ACS) combined with the pheromone updating strategies of ASRank (Rank-based Version of Ant System) and MMAS (MAX-MIN Ant System) is proposed. Seven groups of initial flight string sets are used to test the method, and the important parameters of the algorithm are analyzed. The numerical results show that the method can effectively reduce the total link time between consecutive flight strings and obtain a satisfactory solution with a high convergence speed. Keywords: aircraft assigning, flight string, ant colony optimization (ACO), vehicle routing problem (VRP).
1 Introduction The aircraft assigning problem (AAP) is an important task in an airline's daily operation and has a decisive impact on the airline's normal operation and overall efficiency. Barnhart [1] and Boland [2] researched scheduling models based on flight strings and put forward a model mainly used to solve the aircraft maintenance routing problem involving only one maintenance type. Boland [2, 3] regarded the aircraft maintenance routing problem as an asymmetric traveling salesman problem with replenishment arcs and added a replenishment arc set to the space-time network model put forward by Clarke [4]. Considering a weekly aircraft assignment model that involves A-check and B-check maintenance types, Sriram and Haghani [5] constructed flight strings for one day and then built a multi-commodity network model for these flight strings. In the process of making the aircraft plan, Rexing [6] allowed the aircraft departure time to fluctuate within a time window, ensuring that each flight assignment has a greater chance of getting an available aircraft. Based on these studies, Bélanger et al. [7] studied large-scale periodic airline fleet assignment with time windows and developed a new branch-and-bound strategy embedded in a branch-and-price solution strategy. Sherali [8] combined the flight M. Zhu (Ed.): ICCIC 2011, Part VI, CCIS 236, pp. 179–187, 2011. © Springer-Verlag Berlin Heidelberg 2011
assignment problem with other flight planning processes and added short-term travel demand forecasts to the traditional aircraft assignment model. Deepening the research of [8], Haouari assigned a minimum-cost path to each aircraft under the constraints and used a network-flow-based heuristic to solve the model [9]. Du Yefu [10] proposed the optimization of flight strings in the process of optimizing airline flight frequency. Li [11] made a deep study of the flight string preparation problem occurring in flight assignment and built a flight string preparation model as a VRP (Vehicle Routing Problem); the model considered not only the constraints of the flight schedule, but also the constraints of passenger flow volume. After looking into the actual demand of various domestic airlines, this paper transforms the flight assignment problem into a vehicle routing problem (VRP), studies the weekly periodic flight assignment problem, and presents the concept of the virtual flight string. The AAP is an NP-hard combinatorial optimization problem; exact algorithms can hardly solve such problems in acceptable time, so metaheuristic algorithms have become a research hotspot both at home and abroad. Dorigo and Gambardella [12] presented the Ant Colony System, which is easier to implement but has the same properties as the ant colony algorithm. Since the ant colony algorithm has strong global search ability and can find good solutions, this paper takes it as the solution strategy for the VRP.
2 Problem and Model The aircraft assigning problem can be described as follows. Regard each flight string prepared by the commerce department as a client, each aircraft as a vehicle, and the sum of the link times between consecutive flight strings as the travel time between clients. Aircraft (vehicles) start from the warehouse (the virtual flight string) and serve all flight strings (customers); each string must be served exactly once by one aircraft; the weekly service time of each aircraft cannot exceed its weekly maximum available time; and the departure airport of the first flight of the first flight string (except the virtual flight string) must have an available aircraft. The task is to arrange the order of the clients served by each aircraft, that is, to assign a combination of several flight strings to each aircraft with the goal of minimizing the total link time. A flight string is a connection among flights, made in accordance with the natural links among them, and consists of several consecutive flights; flights on different flight dates belong to different flight strings. The parameters and variables of the model are as follows:
n : number of flight strings;
m : number of aircraft;
C : the set of all flight strings, C = {1, 2, 3, …, n};
V : the set of all aircraft, V = {1, 2, 3, …, m};
A : the set of all vertexes, A = {0} ∪ C, where 0 represents the virtual flight string;
W : the set of all airports owning aircraft at the beginning of a week, W = {1, 2, 3, …, w};
O_i : departure airport of the first flight in flight string i;
D_i : arrival airport of the last flight in flight string i;
t_di : the time when the first flight in flight string i leaves the airport;
t_ai : the time when the last flight in flight string i arrives at the airport;
GT : the minimum stop time between connecting flights;
Q_k : the available flight hours of aircraft k in a week;
P_e : the total number of aircraft owned by airport e at the beginning of a week.
The indicator λ_ie is defined as λ_ie = 1 if the departure airport of the first flight in flight string i is airport e, and λ_ie = 0 otherwise, where i ∈ C, e ∈ W.
The decision variable x_ijk is defined as x_ijk = 1 if aircraft k executes flight string j immediately after executing flight string i, and x_ijk = 0 otherwise, where i, j ∈ A, k ∈ V. In particular, x_0ik = 1 indicates that flight string i is the first one carried out by aircraft k; in other words, flight string i connects to the virtual flight string 0. The set of those
flight strings which connect to the virtual flight string is called the original set of client nodes. According to the description of parameters and variables above, we develop the model of the aircraft assignment VRP as follows:
Min Σ_{k∈V} Σ_{i∈C} Σ_{j∈C} (t_dj − t_ai) x_ijk.  (1)
s.t.
Σ_{k∈V} Σ_{i∈A} x_ijk = 1, ∀j ∈ C,  (2)
Σ_{i∈A} x_ijk − Σ_{i∈A} x_jik = 0, ∀k ∈ V, j ∈ C,  (3)
O_j · x_ijk = D_i · x_ijk, ∀i, j ∈ A, ∀k ∈ V,  (4)
t_dj · x_ijk ≥ (t_ai + GT) · x_ijk, ∀i, j ∈ A, ∀k ∈ V,  (5)
Σ_{i∈A} Σ_{j∈C} (t_aj − t_dj) x_ijk ≤ Q_k, ∀k ∈ V,  (6)
Σ_{k∈V} Σ_{i∈C} x_0ik · λ_ie ≤ P_e, ∀e ∈ W,  (7)
Σ_{i∈S} Σ_{j∈S} x_ijk ≤ |S| − 1, ∀S ⊆ C, ∀k ∈ V.  (8)
Formula (1) is the objective function, which minimizes the sum of link times between consecutive flight strings. Constraint (2) ensures that each flight string is served exactly once; constraint (3) ensures that each aircraft arriving at a client also leaves that client; constraint (4) ensures that the departure airport and arrival airport of two consecutive flight strings served by the same aircraft match; constraint (5) is the link time constraint; constraint (6) ensures that the weekly flight hours of each aircraft do not exceed its available hours for the week; constraint (7) ensures that the number of flight strings departing from airport e does not exceed the number of aircraft originally at airport e; constraint (8) eliminates sub-tours.
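As a concrete illustration of constraints (4) and (5), the following sketch checks whether one flight string may directly follow another on the same aircraft; the airport codes, times and the GT value are hypothetical, chosen only for demonstration.

```python
from dataclasses import dataclass

@dataclass
class FlightString:
    dep_airport: str  # O_i: departure airport of the first flight
    arr_airport: str  # D_i: arrival airport of the last flight
    t_d: int          # departure time of the first flight (minutes)
    t_a: int          # arrival time of the last flight (minutes)

GT = 40  # hypothetical minimum stop time between connecting flight strings (minutes)

def can_follow(i: FlightString, j: FlightString) -> bool:
    """Constraints (4) and (5): matching airport and enough link time."""
    return j.dep_airport == i.arr_airport and j.t_d >= i.t_a + GT

s1 = FlightString("PVG", "PEK", 480, 600)
s2 = FlightString("PEK", "PVG", 660, 780)
print(can_follow(s1, s2))  # True: same airport, 60 min >= GT
```

The objective (1) then sums j.t_d − i.t_a over every consecutive pair actually assigned.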
3 Algorithm Design Pseudo-random Probability Selection Rule. According to the pseudo-random probability selection rule, the next client node j chosen by an ant located at client node i is determined by formula (9):
j = argmax_{u∈M_i} {[τ(i,u)]^α · [η(i,u)]^β} if q < q0; otherwise j = u,  (9)
where q0 ∈ (0, 1) is a constant; q ∈ (0, 1) is a randomly generated probability; τ(i,j) is the pheromone amount between client i and client j; η(i,j) is the heuristic factor between client i and client j; α and β are the weights of the pheromone and the heuristic factor in the total information; M_i is the set of available client nodes when the ant at client node i selects the next client (the nodes not yet visited that satisfy the link time, airport and flight hour constraints); and u is a client chosen by formula (10). A random q is generated before selecting the next customer: if q < q0, the ant selects the client maximizing [τ(i,j)]^α · [η(i,j)]^β among all available clients reachable from i and makes it the next client to visit; if q ≥ q0, it chooses the next customer according to formula (10):
P_k(i,u) = [τ(i,u)]^α · [η(i,u)]^β / Σ_{j∈M_i} [τ(i,j)]^α · [η(i,j)]^β if u ∈ M_i, and P_k(i,u) = 0 otherwise,  (10)
where P_k(i,u) is the state transition probability of ant k moving from client node i to client node u.
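Formulas (9) and (10) together form the ACS pseudo-random proportional rule, which can be sketched as follows; the pheromone table, heuristic values and link times below are toy values, not data from the paper.

```python
import random

def select_next(i, M_i, tau, eta, alpha=1.0, beta=5.0, q0=0.9, rng=random):
    """ACS pseudo-random proportional rule, formulas (9) and (10)."""
    if not M_i:
        return None
    weight = {u: (tau[(i, u)] ** alpha) * (eta[(i, u)] ** beta) for u in M_i}
    if rng.random() < q0:            # exploitation branch of formula (9)
        return max(weight, key=weight.get)
    total = sum(weight.values())     # biased exploration, formula (10)
    r, acc = rng.random() * total, 0.0
    for u in M_i:
        acc += weight[u]
        if acc >= r:
            return u
    return M_i[-1]

tau = {(0, 1): 1.0, (0, 2): 1.0}
eta = {(0, 1): 1 / 30, (0, 2): 1 / 90}  # eta = 1 / link time, formula (13)
print(select_next(0, [1, 2], tau, eta, q0=1.0))  # always exploits -> client 1
```

With q0 = 0.9 as in the paper, nine moves out of ten on average are greedy and the rest follow the roulette-wheel probabilities of formula (10).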
Initial Pheromone and Local Update Rules. This paper uses the basic idea of the nearest neighbor method to initialize the pheromone. While seeking solutions, ants prefer to visit the node nearest to the current node when choosing the next node. The pheromone update consists of a local update and a global update. The local update means that when an ant moves from client i to client j, the pheromone on path (i, j) is updated according to formula (11):
τ(i,j) = (1 − ρ) · τ(i,j) + ρ · τ0,  (11)
where ρ ∈ (0, 1) is an adjustable variable representing the pheromone volatility factor, τ0 = 1/(n · Tnn), n is the number of flight strings, and Tnn is the total link time of the initial feasible solution constructed by the nearest neighbor method.
In ACS, the standard global update only reinforces the pheromone on the path belonging to the optimal solution. To make more effective use of good solutions, this paper follows the update mode of ASRank [13] and measures solutions by the total link time taken by each ant. All paths are ordered by total link time in ascending order, i.e., gaptime_1 ≤ gaptime_2 ≤ … ≤ gaptime_nnant (nnant is the number of ants), and each ant's path is given a different weight, a shorter path receiving a greater weight; the weight of the best path is w. The pheromone of each path is updated by formula (12):
τ(i,j) = (1 − ρ)·τ(i,j) + Σ_{r=1}^{w−1} (w − r) · Δτ_ij^r + w · Δτ_ij^gb,  (12)
where Δτ_ij^r = 1/gaptime_r, Δτ_ij^gb = 1/gaptime_gb, ρ ∈ (0, 1) is the pheromone volatility coefficient, gaptime_r is the total link time of the r-th shortest path, and gaptime_gb is the total link time of the global optimum solution. Meanwhile, this paper adopts the method of the MMAS algorithm [14] to avoid stagnation during the search: the pheromone on each path is limited to the range [τ_min, τ_max], with τ_max = nτ0/ρ, so that the pheromone intensity difference between paths does not become so large as to cause premature convergence to a local optimum.
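A sketch of the rank-based global update (12) with MMAS-style clamping, under the assumption that each solution is represented as a list of arcs together with its total link time; the weight w, the bounds and the toy paths below are illustrative, not the paper's tuned values.

```python
def global_update(tau, solutions, best_path, best_time, w=6, rho=0.1,
                  tau_min=0.01, tau_max=10.0):
    """Rank-based global pheromone update, formula (12), with MMAS clamping.

    solutions: list of (path, total_link_time), where path is a list of (i, j) arcs.
    """
    # evaporation term (1 - rho) * tau on every known arc
    for arc in tau:
        tau[arc] *= (1 - rho)
    # deposits from the w-1 best-ranked ants, weighted (w - r) / gaptime_r
    ranked = sorted(solutions, key=lambda s: s[1])[: w - 1]
    for r, (path, gaptime) in enumerate(ranked, start=1):
        for arc in path:
            tau[arc] = tau.get(arc, 0.0) + (w - r) / gaptime
    # deposit from the global-best path, weight w / gaptime_gb
    for arc in best_path:
        tau[arc] = tau.get(arc, 0.0) + w / best_time
    # MMAS: keep pheromone inside [tau_min, tau_max]
    for arc in tau:
        tau[arc] = min(max(tau[arc], tau_min), tau_max)
    return tau

tau = {(0, 1): 1.0, (1, 2): 1.0}
tau = global_update(tau, [([(0, 1)], 100.0), ([(1, 2)], 200.0)],
                    best_path=[(0, 1)], best_time=100.0)
print(tau)
```

The clamping step is what distinguishes this hybrid from plain ASRank: no arc can dominate or vanish entirely, which keeps exploration alive late in the run.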
Construction of the Heuristic Function. The heuristic factor is a core component of the total information on which solutions are constructed:
η(i,j) = 1/(t_dj − t_ai),  (13)
where t_dj − t_ai is the link time between flight string i and flight string j, i.e., the distance between two client nodes in the VRP. Flight string j is one of the available next client nodes of flight string i and is stored in the candidate list G_i corresponding to flight string i.
The total information is calculated as:
total(i,j) = [τ(i,j)]^α · [η(i,j)]^β.  (14)
The greater α is, the more likely ants are to choose paths taken by other ants, and the stronger the cooperation among ants becomes. β reflects the importance of the heuristic information in the search: the greater β is, the more likely ants are to choose the path with the shorter link time.
4 Computational Results and Analysis To test the feasibility of the model and the validity of the algorithm, the model is solved using an airline's actual data as an example. The algorithm is implemented in VC++ 6.0 and run under Windows XP on a PC with a Core 2 Duo CPU (1.80 GHz) and 1 GB of memory. To compare the effect of different initial node sets on the results, seven cases are contrasted experimentally. The parameter combination α = 1, β = 5, q0 = 0.9 is chosen as the best. Table 1 reports the experimental results for the seven data sets, each based on 10 runs.

Table 1. Experimental results for the seven data groups
No. | Flight strings | Best (min) | Average (min) | Worst (min) | Iterations | Time (s)
1 | 61 | 101940 | 102082 | 102190 | 117 | 161
2 | 124 | 101105 | 101108 | 101120 | 142 | 175
3 | 188 | 101105 | 101116 | 101140 | 148 | 180
4 | 251 | 101105 | 101121 | 101145 | 159 | 183
5 | 312 | 101105 | 101124 | 101150 | 170 | 184
6 | 372 | 101120 | 101134 | 101160 | 192 | 188
7 | 432 | 101140 | 101152 | 101180 | 199 | 191

From Table 1, the second data group (the 124 scheduled flight strings of Monday and Tuesday as the initial client node set) converges quickly to a good solution of 101105 min and gives the most stable results. If the initial node set contains too few scheduled flight strings, the computing time decreases but the optimal solution becomes harder to find; if it contains too many, the ants' search area expands, the computing time grows, and the optimal solution is again harder to obtain. Using the best parameters above (with the 124 scheduled flight strings as the initial client node set), one week of the airline's data is solved. Figure 1 shows the convergence curve of the best solution (total link time versus number of iterations).

Fig. 1. Convergence curve of the best solution
In Figure 1, the convergence is extremely fast within the first 10 iterations; between iterations 10 and 100 the convergence rate decreases, and the best objective value of 101105 is reached at about 142 iterations. The heuristic factor construction, the initial client node selection and the pheromone update strategy effectively improve the ACS algorithm. To compare the solutions obtained by our algorithm with those obtained by the manual method, we calculate the aircraft utilization of each, as shown in Table 2. Here,
Aircraft utilization = Total flying time / (Total flying time + Total link time).

Table 2. Comparison of aircraft utilization
Method | Total flying time (min) | Total link time (min) | Aircraft utilization
ACO algorithm | 140815 | 101105 | 58.2%
Manual arrangement | 140815 | 104900 | 57.3%
Table 2 shows that the aircraft utilization obtained by our algorithm is clearly higher than that of the manual method. Therefore, the model and algorithm can effectively reduce the link time between flight strings and improve aircraft efficiency.
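The utilization figures in Table 2 follow directly from the formula above:

```python
def aircraft_utilization(flying_min, link_min):
    """Aircraft utilization = flying time / (flying time + link time)."""
    return flying_min / (flying_min + link_min)

aco = aircraft_utilization(140815, 101105)       # ACO schedule
manual = aircraft_utilization(140815, 104900)    # manual schedule
print(f"ACO: {aco:.1%}, manual: {manual:.1%}")   # -> ACO: 58.2%, manual: 57.3%
```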
5 Conclusion This paper studied the aircraft assigning problem on a weekly basis and transformed it into a vehicle routing problem. We proposed the virtual flight string and established a mixed integer programming model. According to the characteristics of the model, an improved ant colony algorithm was developed to solve the aircraft assigning problem. In the experiments, the model and algorithm were tested on actual flight data. The numerical results show that the solutions obtained by our method are better than those obtained by the manual method. Acknowledgements. This work is partially supported by the Natural Science Fund of Shanghai under Grants No. 09ZR1420400 and 09ZR1403000, the National Natural Science Fund of China under Grants No. 60773124 and 70501018, and the 211 Project for Shanghai University of Finance and Economics of China (3rd phase, Leading Academic Discipline Program).
References
1. Barnhart, C., Boland, N., Clarke, L., Johnson, E., Nemhauser, G., Shenoi, R.: Flight string models for aircraft fleeting and routing. Transportation Science 32, 208–220 (1998)
2. Boland, N., Clarke, L., Nemhauser, G.: The asymmetric traveling salesman problem with replenishment arcs. European Journal of Operational Research 123, 408–427 (2000)
3. Mak, V., Boland, N.: Heuristic approaches to the asymmetric travelling salesman problem with replenishment arcs. International Transactions in Operational Research 7, 431–447 (2000)
4. Clarke, L., Johnson, E., Nemhauser, G., Zhu, Z.: The aircraft rotation problem. Annals of Operations Research 69, 33–46 (1997)
5. Sriram, C., Haghani, A.: An optimization model for aircraft maintenance scheduling and re-assignment. Transportation Research Part A 37, 29–48 (2003)
6. Rexing, B., Barnhart, C., Kniker, T., Jarrah, A., Krishnamurthy, N.: Airline fleet assignment with time windows. Transportation Science 34, 1–20 (2000)
7. Bélanger, N., Desaulniers, G., Soumis, F., Desrosiers, J.: Periodic airline fleet assignment with time windows, spacing constraints, and time dependent revenues. European Journal of Operational Research 175, 1754–1766 (2006)
8. Sherali, H.D., Bish, E.K., Zhu, X.: Airline fleet assignment concepts, models, and algorithms. European Journal of Operational Research 172, 1–30 (2006)
9. Haouari, M., Aissaoui, N., Mansour, F.Z.: Network flow based approaches for integrated aircraft fleeting and routing. European Journal of Operational Research 193, 591–599 (2009)
10. Du, Y.: An optimal method of scheduled flights for civil aircraft. Systems Engineering-Theory & Practice 8, 75–80 (1995)
11. Li, Y., Tan, N., Hao, G.: Study on flight string model and algorithm in flight scheduling. Journal of System Simulation 20, 612–615 (2008)
12. Dorigo, M., Gambardella, L.M.: Ant colonies for the travelling salesman problem. BioSystems 43, 73–81 (1997)
13. Bullnheimer, B., Hartl, R.F., Strauss, C.: A new rank based version of the ant system: A computational study. Central European Journal for Operations Research and Economics 7, 25–28 (1999)
14. Stutzle, T., Hoos, H.: Max-min ant system and local search for the traveling salesman problem. In: Proceedings of the IEEE International Conference on Evolutionary Computation. IEEE Press, New York (1997)
Generalization Bounds of Ranking via Query-Level Stability I
Xiangguang He¹, Wei Gao²,³, and Zhiyang Jia⁴
¹ Department of Information Engineering, Binzhou Polytechnic, Binzhou 256200, China
² Department of Information, Yunnan Normal University, Kunming 650092, China
³ Department of Mathematics, Soochow University, Suzhou 215006, China
⁴ Department of Information, Tourism and Literature College of Yunnan University, Lijiang 674100, China
[email protected]
Abstract. The quality of ranking determines the success or failure of information retrieval, and the goal of ranking is to learn a real-valued ranking function that induces an ordering over an instance space. We focus on the generalization ability of learning-to-rank algorithms for information retrieval (IR). The contribution of this paper is to give generalization bounds for such ranking algorithms via uniform (strong and weak) query-level stability, obtained by deleting one element from the sample set or changing one element in the sample set. Here we only give the corresponding definitions and list the lemmas we need; all results appear in "Generalization Bounds of Ranking via Query-Level Stability II". Keywords: ranking, algorithmic stability, generalization bounds, strong stability, weak stability.
1 Introduction A key issue in information retrieval is to return useful items according to the user's request, where the items are ranked by a certain ranking function. The ranking algorithm is therefore the most important component of a search engine, because it determines the quality of the list presented to the user. The ranking problem is formulated as learning a scoring function with small ranking error from given labeled samples. Famous ranking algorithms include RankBoost (see [1]), gradient descent ranking (see [2]), margin-based ranking (see [3]), P-Norm Push ranking (see [4]), ranking SVMs (see [5]), MFoM (see [6]), magnitude-preserving ranking (see [7]) and so on. Some theoretical analysis can be found in [8-12]. The generalization properties of ranking algorithms are a central focus of this research. Most generalization bounds for learning algorithms are based on measures of the complexity of the hypothesis class, such as the VC dimension, covering numbers, or Rademacher complexity. However, the notion of algorithmic stability can be used to derive bounds that are tailored to specific learning algorithms and exploit their particular properties. A ranking algorithm is called stable if, for a slight change of the samples, the ranking function does not change too much. M. Zhu (Ed.): ICCIC 2011, Part VI, CCIS 236, pp. 188–196, 2011. © Springer-Verlag Berlin Heidelberg 2011
Generalization Bounds of Ranking via Query-Level Stability I
We can learn generalization bounds for the extension of this ranking algorithm via uniform leave-one-query-out associate-level loss stability (see [13]). However, uniform stability is too restrictive for many learning algorithms (see [14]), and in many applications the demand on stability should be relaxed. As a continuation of the work in [13], this paper considers several kinds of "almost-everywhere" stability (strong and weak query-level stability) for the extension ranking algorithm raised in [13], and generalization bounds for such ranking algorithms are given as well. The organization of this paper is as follows: we describe the setting of the ranking problem in the next section, and then define notions of five kinds of stability. Using these notions, we derive the first results for stable ranking algorithms.
2 Setting
Assume that a query q is a random sample from the query space Q according to a probability distribution P_Q. For a query q, an associate ω^(q) and its ground truth g(ω^(q)) are sampled from the space Ω × G according to a joint probability distribution D_q, where Ω is the space of associates and G is the space of ground truths. Here the associate ω^(q) can be a single document, a pair of documents, or a set of documents, and correspondingly the ground truth g(ω^(q)) can be a relevance score (or class label), an order on a pair of documents, or a permutation (list) of documents. Let l(f; ω^(q), g(ω^(q))) denote a loss (referred to as the associate-level loss) defined on (ω^(q), g(ω^(q))) and a ranking function f. The expected query-level loss is defined as:

L(f; q) = ∫_{Ω×G} l(f; ω^(q), g(ω^(q))) D_q(dω^(q), dg(ω^(q))).
The empirical query-level loss is defined as:

L̂(f; q) = (1/n_q) Σ_{j=1}^{n_q} l(f; ω_j^(q), g(ω_j^(q))),

where (ω_j^(q), g(ω_j^(q))), j = 1, ..., n_q, stand for the n_q associates of q, which are sampled i.i.d. according to D_q. The empirical query-level loss is an estimate of the expected query-level loss, and it can be proven that the estimate is consistent. The goal of learning to rank is to select the ranking function f which minimizes the expected query-level risk defined as:
R_l(f) = E_Q L(f; q) = ∫_Q L(f; q) P_Q(dq).   (1)

In practice, P_Q is unknown. We have the training samples (q_1, S_1), ..., (q_r, S_r), where S_i = {(ω_1^(i), g(ω_1^(i))), ..., (ω_{n_i}^(i), g(ω_{n_i}^(i)))}, i = 1, ..., r, and n_i is the number of associates for query q_i. Here q_1, ..., q_r can be viewed as data sampled i.i.d. according
X. He, W. Gao, and Z. Jia
to P_Q, and (ω_j^(i), g(ω_j^(i))) as data sampled i.i.d. according to D_{q_i}, j = 1, ..., n_i, i = 1, ..., r. The empirical query-level risk is defined as:

R̂_l(f) = (1/r) Σ_{i=1}^{r} L̂(f; q_i).   (2)
The empirical query-level risk is an estimate of the expected query-level risk, and it can be proven that the estimate is consistent. This probabilistic formulation can cover most existing learning-to-rank algorithms. If we let the associate be a single document, a pair of documents, or a set of documents, we can respectively define pointwise, pairwise, or listwise losses, and develop pointwise, pairwise, or listwise approaches to learning to rank.
(a) Pointwise Case
For the document space D, we use a feature mapping function φ: Q × D → X (= R^d) to create a d-dimensional feature vector for each query-document pair. For every query q, suppose that the feature vector of a document is x^(q) and its relevance score (or class label) is y^(q); then (x^(q), y^(q)) can be viewed as a random sample from X × R according to a probability distribution D_q. If l(f; x^(q), y^(q)) is a pointwise loss (the square loss, for example), then the expected query-level loss turns out to be:

L(f; q) = ∫_{X×R} l(f; x^(q), y^(q)) D_q(dx^(q), dy^(q)).

Given training samples (q_1, S_1), ..., (q_r, S_r), where S_i = {(x_1^(i), y_1^(i)), ..., (x_{n_i}^(i), y_{n_i}^(i))}, i = 1, ..., r, the empirical query-level loss of query q_i (i = 1, ..., r) becomes:

L̂(f; q_i) = (1/n_i) Σ_{j=1}^{n_i} l(f; x_j^(i), y_j^(i)).
(b) Pairwise Case
For every query q, z^(q) = (x_1^(q), x_2^(q)) stands for a document pair associated with it. Moreover, set y^(q) = 1 if x_1^(q) is ranked above x_2^(q), and y^(q) = -1 otherwise, and let Y = {1, -1}. Then (x_1^(q), x_2^(q), y^(q)) can be viewed as a random sample from X^2 × Y according to a probability distribution D_q. If l(f; z^(q), y^(q)) is a pairwise loss (the hinge loss, for example), then the expected query-level loss turns out to be:

L(f; q) = ∫_{X^2×Y} l(f; z^(q), y^(q)) D_q(dz^(q), dy^(q)).

Given training samples (q_1, S_1), ..., (q_r, S_r), where S_i = {(z_1^(i), y_1^(i)), ..., (z_{n_i}^(i), y_{n_i}^(i))}, i = 1, ..., r, the empirical query-level loss of query q_i (i = 1, ..., r) becomes:

L̂(f; q_i) = (1/n_i) Σ_{j=1}^{n_i} l(f; z_j^(i), y_j^(i)).
(c) Listwise Case
For each query q, let s^(q) denote a set of m documents associated with it, and let π(s^(q)) ∈ Π denote a permutation of the documents in s^(q) according to their relevance degrees to the query, where Π is the space of all permutations on m documents. Then (s^(q), π(s^(q))) can be viewed as a random sample from X^m × Π according to a probability distribution D_q. If l(f; s^(q), π(s^(q))) is a listwise loss (the cross-entropy loss, for example), then the expected query-level loss turns out to be:

L(f; q) = ∫_{X^m×Π} l(f; s^(q), π(s^(q))) D_q(ds^(q), dπ(s^(q))).

Given training samples (q_1, S_1), ..., (q_r, S_r), where S_i = {(s_1^(i), π(s_1^(i))), ..., (s_{n_i}^(i), π(s_{n_i}^(i)))}, i = 1, ..., r, the empirical query-level loss of query q_i (i = 1, ..., r) becomes:

L̂(f; q_i) = (1/n_i) Σ_{j=1}^{n_i} l(f; s_j^(i), π(s_j^(i))).
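The pointwise instance of these averages is straightforward to compute once a scoring function and per-query samples are given. The sketch below is illustrative, not from the paper: the linear scorer, the square loss, and the toy data are all assumptions.

```python
# Empirical query-level loss and risk (pointwise case, square loss).
# The linear scorer and the toy data are illustrative assumptions.

def square_loss(f, x, y):
    # Pointwise associate-level loss l(f; x, y) = (f(x) - y)^2.
    return (f(x) - y) ** 2

def empirical_query_loss(f, S, loss=square_loss):
    # L_hat(f; q) = (1/n_q) * sum_j l(f; x_j, y_j) over the associates of q.
    return sum(loss(f, x, y) for x, y in S) / len(S)

def empirical_risk(f, queries, loss=square_loss):
    # R_hat(f) = (1/r) * sum_i L_hat(f; q_i): a uniform average over queries,
    # so a query with few associates weighs as much as a query with many.
    return sum(empirical_query_loss(f, S, loss) for S in queries) / len(queries)

# Two queries with different numbers of (feature, relevance) associates.
f = lambda x: 0.5 * x                      # assumed linear ranking function
q1 = [(2.0, 1.0), (4.0, 2.0)]              # L_hat = 0 (f fits q1 exactly)
q2 = [(2.0, 2.0)]                          # L_hat = (1.0 - 2.0)^2 = 1.0

print(empirical_risk(f, [q1, q2]))         # (0 + 1.0) / 2 = 0.5
```

Note that the risk averages per-query losses, not per-associate losses, which is exactly why the bounds below are phrased at the query level.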
3 Definitions
Y. Lan et al. (see [13]) defined uniform leave-one-query-out associate-level loss stability. Using the notions defined above, we define strong and weak leave-one-query-out associate-level loss stability, and uniform, strong and weak associate-level loss stability for the case where one element of the training sample is replaced. These are also good measures of how robust a ranking algorithm is. In the rest of the paper we assume 0 < δ_1, δ_2, δ_3, δ_4 < 1 and that the algorithm A satisfies the setting described above.
Definition 1 (Strong leave-one-query-out associate-level loss stability). Let A be a learning-to-rank algorithm, {(q_i, S_i), i = 1, ..., r} be the training set, l be the associate-level loss function, and τ be a function mapping an integer to a real number. We say that A has strong leave-one-query-out associate-level loss stability (τ_1, δ_1) with respect to l if for all q ∈ Q, (ω^(q), g(ω^(q))) ∈ Ω × G, the following inequality holds:

P_{{(q_i,S_i)}_{i=1}^r} { |l(f_{{(q_i,S_i)}_{i=1}^r}; ω_j^(q), g(ω_j^(q))) − l(f_{{(q_i,S_i)}_{i=1,i≠j}^r}; ω_j^(q), g(ω_j^(q)))| ≤ τ_1(r) } ≥ 1 − δ_1.
192
X. He, W. Gao, and Z. Jia r
Here ( q i , S i )}i=1,i≠ j stands for the samples (q1, S1), …, (qj-1, Sj-1), (qj+1, Sj+1), …, (qr, Sr),
f{( q , S )}r stands for the ranking function learned from
where (qj , Sj) is deleted.
i
i
i =1
r
{( q i , S i )}i =1 . We will use the notations hereafter.
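Definition 1 can be probed numerically for a concrete algorithm by retraining with one query deleted and recording the change in loss on a fixed associate. The sketch below does this for a toy least-squares scorer on synthetic data; the algorithm, the data, and the loss are all assumptions, so the observed gap merely illustrates the quantity |l(f_S; ·) − l(f_{S∖j}; ·)| that τ_1(r) bounds.

```python
import random

def train(queries):
    # Toy "ranking algorithm": 1-D least squares fit f(x) = a*x pooled over
    # all associates of all training queries (illustrative assumption).
    xs = [x for S in queries for x, _ in S]
    ys = [y for S in queries for _, y in S]
    a = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)
    return lambda x: a * x

random.seed(0)
r = 50
queries = [[(random.uniform(1, 2), random.uniform(0, 1)) for _ in range(5)]
           for _ in range(r)]
x0, y0 = 1.5, 0.5            # a fixed test associate (omega, g(omega))

f_full = train(queries)
gaps = []
for j in range(r):           # delete query j and retrain
    f_loo = train(queries[:j] + queries[j + 1:])
    gaps.append(abs((f_full(x0) - y0) ** 2 - (f_loo(x0) - y0) ** 2))

# The max over j plays the role of tau_1(r) observed on this one sample set.
print(max(gaps))
```

For a stable algorithm one expects this maximum to shrink as r grows, which is exactly the behavior the definitions formalize with (τ, δ) pairs.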
Definition 2 (Weak leave-one-query-out associate-level loss stability). Let A be a learning-to-rank algorithm, {(q_i, S_i), i = 1, ..., r} be the training set, l be the associate-level loss function, and τ be a function mapping an integer to a real number. We say that A has weak leave-one-query-out associate-level loss stability (τ_2, δ_2) with respect to l if for all q ∈ Q, (ω^(q), g(ω^(q))) ∈ Ω × G, the following inequality holds:

P_{{(q_i,S_i)}_{i=1}^r, j∈{1,...,r}} { |l(f_{{(q_i,S_i)}_{i=1}^r}; ω_j^(q), g(ω_j^(q))) − l(f_{{(q_i,S_i)}_{i=1,i≠j}^r}; ω_j^(q), g(ω_j^(q)))| ≤ τ_2(r) } ≥ 1 − δ_2.

Definition 3 (Uniform associate-level loss stability for the replacement case). Let A be a learning-to-rank algorithm, {(q_i, S_i), i = 1, ..., r} be the training set, l be the associate-level loss function, and τ be a function mapping an integer to a real number. We say that A has uniform associate-level loss stability τ_3 with respect to l if for all q_j ∈ Q, S_j ∈ (Ω × G)^{n_j}, j = 1, ..., r, q ∈ Q, (ω^(q), g(ω^(q))) ∈ Ω × G, the following inequality holds:

|l(f_{{(q_i,S_i)}_{i=1}^r}; ω_j^(q), g(ω_j^(q))) − l(f_{{(q_i,S_i)}_{i=1}^{r,j,q'_j}}; ω_j^(q), g(ω_j^(q)))| ≤ τ_3(r).

Here {(q_i, S_i)}_{i=1}^{r,j,q'_j} stands for the samples (q_1, S_1), ..., (q_{j-1}, S_{j-1}), (q'_j, S'_j), (q_{j+1}, S_{j+1}), ..., (q_r, S_r), where the query (q_j, S_j) is exchanged for another query (q'_j, S'_j), and S'_j refers to (ω_1^(j'), g(ω_1^(j'))), ..., (ω_{n_j}^(j'), g(ω_{n_j}^(j'))). We will use these notations hereafter.
Definition 4 (Strong associate-level loss stability for the replacement case). Let A be a learning-to-rank algorithm, {(q_i, S_i), i = 1, ..., r} be the training set, l be the associate-level loss function, and τ be a function mapping an integer to a real number. We say that A has strong associate-level loss stability (τ_4, δ_3) with respect to l if for all r, q ∈ Q, (ω^(q), g(ω^(q))) ∈ Ω × G, the following inequality holds:

P_{{(q_i,S_i)}_{i=1}^r} { |l(f_{{(q_i,S_i)}_{i=1}^r}; ω_j^(q), g(ω_j^(q))) − l(f_{{(q_i,S_i)}_{i=1}^{r,j,q'_j}}; ω_j^(q), g(ω_j^(q)))| ≤ τ_4(r) } ≥ 1 − δ_3.
Definition 5 (Weak associate-level loss stability for the replacement case). Let A be a learning-to-rank algorithm, {(q_i, S_i), i = 1, ..., r} be the training set, l be the associate-level loss function, and τ be a function mapping an integer to a real number. We say that A has weak associate-level loss stability (τ_5, δ_4) with respect to l if for all q ∈ Q, (ω^(q), g(ω^(q))) ∈ Ω × G, the following inequality holds:

P_{{(q_i,S_i)}_{i=1}^r, (q_j,S_j)∈Q×(Ω×G)} { |l(f_{{(q_i,S_i)}_{i=1}^r}; ω_j^(q), g(ω_j^(q))) − l(f_{{(q_i,S_i)}_{i=1}^{r,j,q'_j}}; ω_j^(q), g(ω_j^(q)))| ≤ τ_5(r) } ≥ 1 − δ_4.

4 Some Useful Lemmas
The main tool for obtaining generalization bounds for ranking algorithms with uniform query-level stability is the well-known McDiarmid inequality, stated as follows.
Lemma 1 [15]. Let X_1, ..., X_N be independent random variables, each taking values in a set C. Let φ: C^N → R be such that for each k ∈ {1, ..., N} there exists c_k > 0 such that

sup_{x_1,...,x_N∈C, x'_k∈C} |φ(x_1, ..., x_N) − φ(x_1, ..., x_{k-1}, x'_k, x_{k+1}, ..., x_N)| ≤ c_k.

Then for any ε > 0,

P{ |φ(X_1, ..., X_N) − E[φ(X_1, ..., X_N)]| ≥ ε } ≤ 2 exp( −2ε^2 / Σ_{k=1}^N c_k^2 ).
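As a quick numerical illustration of Lemma 1 (not part of the paper), one can take φ to be the mean of N bounded variables, so that c_k = 1/N and the bound becomes 2·exp(−2ε²N), and compare it with a simulated deviation frequency.

```python
import math, random

random.seed(0)
N, eps, trials = 100, 0.1, 20000
# phi = sample mean of N Bernoulli(1/2) variables in {0, 1}; changing one
# coordinate moves phi by at most c_k = 1/N, so sum c_k^2 = 1/N.
hits = 0
for _ in range(trials):
    phi = sum(random.randint(0, 1) for _ in range(N)) / N
    if abs(phi - 0.5) >= eps:
        hits += 1

empirical = hits / trials
bound = 2 * math.exp(-2 * eps**2 * N)   # 2*exp(-2*eps^2 / sum c_k^2)
print(empirical, bound)                  # the bound dominates the frequency
```

Here the simulated probability is well below the McDiarmid bound, as the inequality guarantees for any bounded-difference φ.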
In order to deal with the strong or weak query-level stability cases, we need extensions of the McDiarmid inequality. Note that the forms of Lemma 2, Lemma 3 and Lemma 4 given here differ from the original versions.
Lemma 2 [16]. Let X_1, ..., X_N be independent random variables, each taking values in a set C. There is a "bad" subset B ⊂ C^N with P((x_1, ..., x_N) ∈ B) = δ. Let φ: C^N → R be such that for each k ∈ {1, ..., N} there exist b ≥ c_k > 0 such that

sup_{x_1,...,x_N∈C^N−B, x'_k∈C} |φ(x_1, ..., x_N) − φ(x_1, ..., x_{k-1}, x'_k, x_{k+1}, ..., x_N)| ≤ c_k,
sup_{x_1,...,x_N∈C, x'_k∈C} |φ(x_1, ..., x_N) − φ(x_1, ..., x_{k-1}, x'_k, x_{k+1}, ..., x_N)| ≤ b.

Then for any ε > 0,

P{ |φ(X_1, ..., X_N) − E[φ(X_1, ..., X_N)]| ≥ ε } ≤ 2( exp( −ε^2 / (8 Σ_{k=1}^N c_k^2) ) + N^2 b δ / Σ_{k=1}^N c_k ).
Lemma 3 [16]. Let X_1, ..., X_N be independent random variables, each taking values in a set C. Let φ: C^N → R be such that for each k ∈ {1, ..., N} there exist b ≥ c_k > 0 and δ ≤ ( Σ_{k=1}^N c_k / (Nb) )^6 such that

P{ sup_{x'_k∈C} |φ(X_1, ..., X_N) − φ(X_1, ..., X_{k-1}, x'_k, X_{k+1}, ..., X_N)| ≤ c_k } ≥ 1 − δ,
sup_{x_1,...,x_N∈C, x'_k∈C} |φ(x_1, ..., x_N) − φ(x_1, ..., x_{k-1}, x'_k, x_{k+1}, ..., x_N)| ≤ b.

Then for any ε > 0,

P{ |φ(X_1, ..., X_N) − E[φ(X_1, ..., X_N)]| ≥ ε }
≤ 2( exp{ −ε^2 / (10 (1 + 2ε/(15 Σ_{k=1}^N c_k)) Σ_{k=1}^N c_k^2) } + (N^2 b δ^{1/2} / Σ_{k=1}^N c_k) exp{ εbN / (4 Σ_{k=1}^N c_k^2) } ) + N δ^{1/2}.
For the proofs of the second parts of Theorem 1 and Theorem 2 in the companion paper "Generalization Bounds of Ranking via Query-Level Stability II", we use the following simplified version for the weak case.
Lemma 4 [16]. Let X_1, ..., X_N be independent random variables, each taking values in a set C. Let φ: C^N → R satisfy the two condition inequalities of Lemma 3 with λ_k/N substituted for c_k and e^{−KN} substituted for δ. If 0 < ε ≤ min_k T(b, λ_k, K) and N ≥ max_k Δ(b, λ_k, K, ε), then

P{ |φ(X_1, ..., X_N) − E[φ(X_1, ..., X_N)]| ≥ ε } ≤ 4 exp( −ε^2 N^2 / (40 Σ_{i=1}^N λ_i^2) ).
The bounds T and Δ are:

T(b, λ_k, K) = min{ 15λ_k/2, λ_k^2 K / b, 4λ_k √K },
Δ(b, λ_k, K, ε) = max{ b/λ_k, λ_k √40, 3(24/K + 3) ln(24/K + 3), 1/ε }.
Lemma 5 [17]. Let X_1, ..., X_N be independent random variables, each taking values in a set C. Let φ: C^N → [−M, M] be such that for each k ∈ {1, ..., N} there exists c_k > 0 such that

P{ sup_{x'_k∈C} |φ(X_1, ..., X_N) − φ(X_1, ..., X_{k-1}, x'_k, X_{k+1}, ..., X_N)| ≤ c_k } ≥ 1 − δ.

Then for any q ≥ 2 and ε > 0,

P{ |φ(X_1, ..., X_N) − E[φ(X_1, ..., X_N)]| ≥ ε } ≤ (Nq)^{q/2} ( (2κ)^{q/2} c_k^q + (2M)^q δ ) / ε^q,

where κ ≈ 1.271.
5 Conclusion
In this paper we defined five kinds of query-level loss stability for ranking algorithms and collected some useful lemmas. Our generalization bounds via uniform associate-level loss stability, and via strong and weak stability for the leave-one-query-out and replacement cases, will be given in the companion paper "Generalization Bounds of Ranking via Query-Level Stability II".
References
1. Rudin, C., Schapire, R.E., Daubechies, I.: Boosting based on a smooth margin. In: Proceedings of the 16th Annual Conference on Computational Learning Theory, pp. 502–517 (2004)
2. Burges, C.: Learning to rank using gradient descent. In: Proceedings of the 22nd International Conference on Machine Learning, pp. 89–96 (2005)
3. Yan, R., Hauptmann, A.G.: Efficient margin-based rank learning algorithms for information retrieval. In: Sundaram, H., Naphade, M., Smith, J.R., Rui, Y. (eds.) CIVR 2006. LNCS, vol. 4071, pp. 113–122. Springer, Heidelberg (2006)
4. Rudin, C.: Ranking with a P-Norm Push. In: Lugosi, G., Simon, H.U. (eds.) COLT 2006. LNCS (LNAI), vol. 4005, pp. 589–604. Springer, Heidelberg (2006)
5. Joachims, T.: Optimizing search engines using clickthrough data. In: Proceedings of the 8th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 133–142. ACM Press, New York (2002)
6. Chua, T.S., Neo, S.Y., Goh, H.K., et al.: TRECVID 2005 by NUS PRIS. NIST TRECVID (2005)
7. Cortes, C., Mohri, M., Rastogi, A.: Magnitude-preserving ranking algorithms. In: Proceedings of the 24th International Conference on Machine Learning, Corvallis, OR (2007)
8. Kutin, S., Niyogi, P.: The interaction of stability and weakness in AdaBoost. Technical Report TR-2001-30, Computer Science Department, University of Chicago (2001)
9. Agarwal, S., Niyogi, P.: Stability and generalization of bipartite ranking algorithms. In: Auer, P., Meir, R. (eds.) COLT 2005. LNCS (LNAI), vol. 3559, pp. 32–47. Springer, Heidelberg (2005)
10. Agarwal, S., Niyogi, P.: Generalization bounds for ranking algorithms via algorithmic stability. Journal of Machine Learning Research 10, 441–474 (2009)
11. Rudin, C.: The P-Norm Push: A simple convex ranking algorithm that concentrates at the top of the list. Journal of Machine Learning Research 10, 2233–2271 (2009)
12. Gao, W., Zhang, Y., Liang, L., Xia, Y.: Stability analysis for ranking algorithms. In: IEEE International Conference on Information Theory and Information Security (ICITIS), Beijing, pp. 973–976 (December 2010)
13. Lan, Y., Liu, T., Qin, T., Ma, Z., Li, H.: Query-level stability and generalization in learning to rank. In: Proceedings of the 25th International Conference on Machine Learning, Helsinki, Finland (2008)
14. Kutin, S., Niyogi, P.: Almost-everywhere algorithmic stability and generalization error. In: Proceedings of the 18th Conference on Uncertainty in Artificial Intelligence (2002)
15. McDiarmid, C.: On the method of bounded differences. In: Surveys in Combinatorics 1989, pp. 148–188. Cambridge University Press, Cambridge (1989)
16. Kutin, S.: Extensions to McDiarmid's inequality when differences are bounded with high probability. Technical report, Department of Computer Science, The University of Chicago (2002)
17. Rakhlin, A., Mukherjee, S., Poggio, T.: Stability results in learning theory. Analysis and Applications 3, 397–417 (2005)
Generalization Bounds for Ranking Algorithm via Query-Level Stabilities Analysis Zhiyang Jia1, Wei Gao2,3, and Xiangguang He4 1 Department of Information Science and Technology, Tourism and Literature College of Yunnan University, Lijiang 674100, China 2 Department of Information, Yunnan Normal University, Kunming 650092, China 3 Department of Mathematics, Soochow University, Suzhou 215006, China 4 Department of Information Engineering, Binzhou Polytechnic, Binzhou 256200, China [email protected]
Abstract. The effectiveness of a ranking algorithm determines the quality of information retrieval, and the goal of a ranking algorithm is to learn a real-valued ranking function that induces an ordering over an instance space. We focus on the generalization ability of learning-to-rank algorithms for information retrieval (IR). As a continuation of our research on generalization bounds for ranking algorithms, the contribution of this paper is to give generalization bounds for such ranking algorithms via five kinds of stability. These stabilities are less demanding than uniform stability and fit more real applications. Keywords: ranking, algorithmic stability, generalization bounds, strong stability, weak stability.
1 Introduction
All the definitions and lemmas used here are listed in "Generalization Bounds of Ranking via Query-Level Stability" (see [1]). In this paper we continue that research and give the main results.
2 Main Results and Proofs
The first result of this paper concerns uniform associate-level loss stability; the trick is to use the technique of Theorem 1 in [2], and Lemma 1 of [1] (the McDiarmid inequality [3,4]) plays an important role.
Theorem 1. Let A be a learning-to-rank algorithm, (q_1, S_1), ..., (q_r, S_r) be r training samples, and let l be the associate-level loss function. Suppose that:
(1) For all (q_1, S_1), ..., (q_r, S_r), q ∈ Q, (ω^(q), g(ω^(q))) ∈ Ω × G,

l(f_{{(q_i,S_i)}_{i=1}^r}; ω_j^(q), g(ω_j^(q))) ≤ B.
M. Zhu (Ed.): ICCIC 2011, Part VI, CCIS 236, pp. 197–203, 2011. © Springer-Verlag Berlin Heidelberg 2011
(2) A has uniform query-level stability with coefficient τ_3.
Then for all δ ∈ (0, 1), with probability at least 1 − δ over the samples of {(q_i, S_i)}_{i=1}^r in the product space ∏_{i=1}^r {Q × (Ω × G)^∞}, the following inequality holds:

R_l(f_{{(q_i,S_i)}_{i=1}^r}) ≤ R̂_l(f_{{(q_i,S_i)}_{i=1}^r}) + 2τ_3(r) + (2rτ_3(r) + B) √( ln(1/δ) / (2r) ).   (1)
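The bound of Eq. (1) is easy to evaluate numerically. The sketch below plugs in illustrative values of B, τ_3(r), and δ (all assumed, not from the paper) to show that the slack shrinks with r provided r·τ_3(r) stays bounded, e.g. τ_3(r) = O(1/r).

```python
import math

def theorem1_bound(emp_risk, tau3, r, B, delta):
    # R_l <= R_hat + 2*tau3(r) + (2*r*tau3(r) + B) * sqrt(ln(1/delta) / (2r))
    return emp_risk + 2 * tau3(r) + (2 * r * tau3(r) + B) * math.sqrt(
        math.log(1 / delta) / (2 * r))

tau3 = lambda r: 1.0 / r     # assumed uniform stability coefficient, O(1/r)
for r in (100, 1000, 10000):
    print(r, theorem1_bound(0.2, tau3, r, B=1.0, delta=0.05))
```

With τ_3(r) = 1/r the confidence term behaves like O(√(ln(1/δ)/r)), so the printed bounds decrease toward the empirical risk as r grows.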
Proof. The result follows by applying the technique used in Theorem 1 of [2].
Theorem 2. Let A be a learning-to-rank algorithm, (q_1, S_1), ..., (q_r, S_r) be r training samples, and let l be the associate-level loss function. For all (q_1, S_1), ..., (q_r, S_r), q ∈ Q, (ω^(q), g(ω^(q))) ∈ Ω × G, suppose l(f_{{(q_i,S_i)}_{i=1}^r}; ω_j^(q), g(ω_j^(q))) ≤ B. Then:
(1) If A has strong leave-one-query-out associate-level loss stability with coefficient τ_1, then for all δ ∈ (0, 1), with probability at least 1 − δ over the samples of {(q_i, S_i)}_{i=1}^r in the product space ∏_{i=1}^r {Q × (Ω × G)^∞}, the following inequality holds:

R_l(f_{{(q_i,S_i)}_{i=1}^r}) ≤ R̂_l(f_{{(q_i,S_i)}_{i=1}^r}) + B + (4rτ_1(r) + B) √( (8/r) ln( 2(4τ_1 r + B) / (4τ_1 rδ + Bδ − 4r^2 Bδ_1) ) ).   (2)
if
(4τ 2 +
3(
0<
ε ≤
min {
1 5( 4τ 2 +
k
B 2 In(1 / δ 2 ) ) r r }, and m 2M
B ) r
4(4τ 2 +
,
2
≥
max { k
2M 4τ 2 +
,
B r
B In(1 / δ 2 ) ) r r
,
B (4τ 2 + ) 40 , r
24r 24r 1 + 3)In( +3), } . Then ∀ δ ∈ (0, 1), with probability ε In(1/ δ2 ) In(1/ δ 2 )
at least 1 −
∏
r
δ
r
over the samples of {( qi , S i )}i =1 in the product space
{Q × (Ω × G )∞ } , the following inequality holds:
i =1
R_l(f_{{(q_i,S_i)}_{i=1}^r}) ≤ R̂_l(f_{{(q_i,S_i)}_{i=1}^r}) + B + (4rτ_2(r) + B) √( (40/r) ln(4/δ) ).   (3)
Proof. 1) Let

ρ({q_i, S_i}_{i=1}^r) = R_l(f_{{(q_i,S_i)}_{i=1}^r}) − R̂_l(f_{{(q_i,S_i)}_{i=1}^r}),   (4)

and write

∫_{Ω_1} = ∫_Q ∫_{(Ω×G)^{n_1}} ... ∫_Q ∫_{(Ω×G)^{n_r}},   ∫_{Ω_2} = ∫_Q ∫_{Ω×G},   (5)

P_1(dω) = D_{q_r}^{n_r}(dS_r) P_Q(dq_r) ... D_{q_1}^{n_1}(dS_1) P_Q(dq_1),   (6)

P_2(dω') = D_q(dω^(q), dg(ω^(q))) P_Q(dq).   (7)

Divide ρ into two terms, ρ = ρ_1 − ρ_2, where

ρ_1({q_i, S_i}_{i=1}^r) = R_l(f_{{(q_i,S_i)}_{i=1}^r}) = ∫_{Ω_2} l(f_{{(q_i,S_i)}_{i=1}^r}; ω^(q), g(ω^(q))) P_2(dω'),   (8)

ρ_2({q_i, S_i}_{i=1}^r) = R̂_l(f_{{(q_i,S_i)}_{i=1}^r}) = (1/r) Σ_{i=1}^r (1/n_i) Σ_{j=1}^{n_i} l(f_{{(q_i,S_i)}_{i=1}^r}; ω_j^(q), g(ω_j^(q))).   (9)
If the sample set is not "bad", i.e., it satisfies the uniform stability condition, then for all (q_1, S_1), ..., (q_r, S_r), q, q' ∈ Q, S'_j ∈ {Q × (Ω × G)^{n'_j}}, (ω^(q), g(ω^(q))) ∈ Ω × G,

|l(f_{{(q_i,S_i)}_{i=1}^r}; ω_j^(q), g(ω_j^(q))) − l(f_{{(q_i,S_i)}_{i=1}^{r,j,q'_j}}; ω_j^(q), g(ω_j^(q)))| ≤ 2τ_1.   (10)

With Eq. (10), since ρ_1 is an integral functional, the following inequality holds:

|ρ_1({q_i, S_i}_{i=1}^r) − ρ_1({q_i, S_i}_{i=1}^{r,j,q'_j})| ≤ 2τ_1.   (11)

As for ρ_2, we have
|ρ_2({q_i, S_i}_{i=1}^r) − ρ_2({q_i, S_i}_{i=1}^{r,j,q'_j})|
≤ (1/r) Σ_{i=1,i≠j}^r (1/n_i) Σ_{s=1}^{n_i} |l(f_{{(q_i,S_i)}_{i=1}^r}; ω_s^(i), g(ω_s^(i))) − l(f_{{(q_i,S_i)}_{i=1}^{r,j,q'_j}}; ω_s^(i), g(ω_s^(i)))|
+ (1/r) | (1/n_j) Σ_{s=1}^{n_j} l(f_{{(q_i,S_i)}_{i=1}^r}; ω_s^(j), g(ω_s^(j))) − (1/n'_j) Σ_{s=1}^{n'_j} l(f_{{(q_i,S_i)}_{i=1}^{r,j,q'_j}}; ω_s^(j'), g(ω_s^(j'))) |
≤ 2τ_1 + B/r.   (12)

By jointly considering Eq. (11) and Eq. (12), we obtain:

|ρ({q_i, S_i}_{i=1}^r) − ρ({q_i, S_i}_{i=1}^{r,j,q'_j})| ≤ 4τ_1 + B/r.   (13)

For any sample set, since l is a non-negative function bounded by B, we have

|ρ_1({q_i, S_i}_{i=1}^r) − ρ_1({q_i, S_i}_{i=1}^{r,j,q'_j})| ≤ B,   (14)
|ρ_2({q_i, S_i}_{i=1}^r) − ρ_2({q_i, S_i}_{i=1}^{r,j,q'_j})| ≤ B.   (15)

Therefore,

|ρ({q_i, S_i}_{i=1}^r) − ρ({q_i, S_i}_{i=1}^{r,j,q'_j})| ≤ 2B.   (16)

Thus, applying Lemma 2 of [1] to ρ, we get for any ε > 0:

P_{{(q_i,S_i)}_{i=1}^r} { |ρ({q_i, S_i}_{i=1}^r) − ∫_{Ω_1} ρ({q_i, S_i}_{i=1}^r) P_1(dω)| ≥ ε } ≤ 2( exp( −ε^2 r / (8(4τ_1 r + B)^2) ) + 2r^2 Bδ_1 / (4τ_1 r + B) ).   (17)
∫
Ω1
ρ ({qi , Si }ir=1 )Pd ρ ({qi , Si }ir=1 ) Pd 1 ω ≤ ∫ 1 ω ≤ B.
Thus we get for any
Ω1
ε >0,
(18)
Generalization Bounds for Ranking Algorithm via Query-Level Stabilities Analysis
P{( q , S )}r { ρ ({qi , Si }ir=1 ) -B ≥ ε } ≤ 2( e−ε i
2
r / 8(4τ1r + B )2
i =1
i
The result follows by setting the right hand side equal to 2) It is also shows that (q j , S j ) ∈ Q × (Ω × G)
ρ
and solving for
(19)
ε. r
{( q i , S i )}i =1 ,
and
such that it’s satisfy
i =1
i
2r 2 Bδ ). 4τ 1r + B
satisfy the condition Eq. 10. Let
l ( f{( q ,S )}r ; ω (j q ) , g (ω (j q ) )) − l ( f{( q , S )}r i
δ
+
201
i
i
i =1,i ≠ j
; ω (j q ) , g (ω (j q ) )) ≤ τ 2 (r ) .
(20)
By the proof process in part 1), for each j ∈ {1, ..., r} the inequality of Eq. (12) holds with τ_2(r) substituted for τ_1(r), and for {(q_i, S_i)}_{i=1}^r the inequality of Eq. (13) also holds. It can be checked that the conditions stated in part (2) of this theorem satisfy the conditions of Lemma 4 in [1]. Thus, applying Lemma 4 of [1] to ρ, we obtain for ε > 0 satisfying those conditions:

P_{{(q_i,S_i)}_{i=1}^r} { |ρ({q_i, S_i}_{i=1}^r) − ∫_{Ω_1} ρ({q_i, S_i}_{i=1}^r) P_1(dω)| ≥ ε } ≤ 2 exp( −ε^2 r / (40(4rτ_2 + B)^2) ).   (21)

Combined with

|∫_{Ω_1} ρ({q_i, S_i}_{i=1}^r) P_1(dω)| ≤ ∫_{Ω_1} |ρ({q_i, S_i}_{i=1}^r)| P_1(dω) ≤ B,   (22)

we get, for ε > 0 satisfying the conditions of part (2):

P_{{(q_i,S_i)}_{i=1}^r} { ρ({q_i, S_i}_{i=1}^r) − B ≥ ε } ≤ 2 exp( −ε^2 r / (40(4rτ_2 + B)^2) ).   (23)

The result follows by setting the right-hand side equal to δ and solving for ε.
Theorem 3. Let A be a learning-to-rank algorithm, (q_1, S_1), ..., (q_r, S_r) be r training samples, and let l be the associate-level loss function. For all (q_1, S_1), ..., (q_r, S_r), q ∈ Q, (ω^(q), g(ω^(q))) ∈ Ω × G, suppose l(f_{{(q_i,S_i)}_{i=1}^r}; ω_j^(q), g(ω_j^(q))) ≤ B. Then:
(1) If A has strong associate-level loss stability with coefficient τ_4, then for all δ ∈ (0, 1), with probability at least 1 − δ over the samples of {(q_i, S_i)}_{i=1}^r in the product space ∏_{i=1}^r {Q × (Ω × G)^∞}, the following inequality holds:

R_l(f_{{(q_i,S_i)}_{i=1}^r}) ≤ R̂_l(f_{{(q_i,S_i)}_{i=1}^r}) + B + (2rτ_4(r) + B) √( (8/r) ln( 2(2τ_4 r + B) / (2δτ_4 r + δB − 4r^2 Bδ_3) ) ).   (24)

(2) If A has weak associate-level loss stability with coefficient τ_5, and if

0 < ε ≤ min{ 15(2τ_5 + B/r)/2, (2τ_5 + B/r)^2 r ln(1/δ_4)/(2M), 4(2τ_5 + B/r) √( ln(1/δ_4)/r ) },
r ≥ max{ 2M/(2τ_5 + B/r), (2τ_5 + B/r) √40, 3( 24r/ln(1/δ_4) + 3 ) ln( 24r/ln(1/δ_4) + 3 ), 1/ε },

then for all δ ∈ (0, 1), with probability at least 1 − δ over the samples of {(q_i, S_i)}_{i=1}^r in the product space ∏_{i=1}^r {Q × (Ω × G)^∞}, the following inequality holds:

R_l(f_{{(q_i,S_i)}_{i=1}^r}) ≤ R̂_l(f_{{(q_i,S_i)}_{i=1}^r}) + B + (2rτ_5(r) + B) √( (40/r) ln(4/δ) ).   (25)
Proof. The result follows by applying the technique used in Theorem 2.
Theorem 4. Let A be a learning-to-rank algorithm, (q_1, S_1), ..., (q_r, S_r) be r training samples, and let l be the associate-level loss function. For all (q_1, S_1), ..., (q_r, S_r), q ∈ Q, (ω^(q), g(ω^(q))) ∈ Ω × G, suppose l(f_{{(q_i,S_i)}_{i=1}^r}; ω_j^(q), g(ω_j^(q))) ≤ B, and suppose A has strong associate-level loss stability with coefficient τ_4. Then for all δ ∈ (0, 1), with probability at least 1 − δ over the samples of {(q_i, S_i)}_{i=1}^r in the product space ∏_{i=1}^r {Q × (Ω × G)^∞}, the following inequality holds:

R_l(f_{{(q_i,S_i)}_{i=1}^r}) ≤ R̂_l(f_{{(q_i,S_i)}_{i=1}^r}) + ( (rq)^{q/2} ( (2κ)^{q/2} (2τ_4 + B/r)^q + (2B)^q δ_3 ) / δ )^{1/q},   (26)

where κ ≈ 1.271.
Proof. By the proof shown in Theorem 2, we can get, for any ε > 0:

P_{{(q_i,S_i)}_{i=1}^r} { |ρ({q_i, S_i}_{i=1}^r)| ≥ ε } ≤ (rq)^{q/2} ( (2κ)^{q/2} (2τ_4 + B/r)^q + (2B)^q δ_3 ) / ε^q.   (27)

The result follows by setting the right-hand side equal to δ and solving for ε.
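Solving (27) for ε at confidence δ gives ε = ( (rq)^{q/2}((2κ)^{q/2}(2τ_4 + B/r)^q + (2B)^q δ_3)/δ )^{1/q}, the slack term of (26). The numeric sketch below (all parameter values assumed for illustration) shows how the moment order q trades off against the other quantities.

```python
import math

KAPPA = 1.271  # constant from Lemma 5 of the companion paper

def slack(q, r, tau4, B, delta3, delta):
    # epsilon solving (r*q)^{q/2} ((2*kappa)^{q/2} c^q + (2B)^q d3)/eps^q = delta,
    # with c = 2*tau4 + B/r as in Eq. (26).
    c = 2 * tau4 + B / r
    val = (r * q) ** (q / 2) * ((2 * KAPPA) ** (q / 2) * c ** q
                                + (2 * B) ** q * delta3) / delta
    return val ** (1 / q)

for q in (2, 4, 8):
    print(q, slack(q, r=1000, tau4=1e-3, B=1.0, delta3=1e-6, delta=0.05))
```

A larger q sharpens the dependence on δ (through δ^{1/q}) but inflates the (rq)^{1/2} prefactor, so the best moment order depends on the regime.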
3 Conclusion
In this paper, generalization bounds via strong and weak query-level stability are given for ranking algorithms, covering the cases of deleting one element from the sample set and of replacing one element of the sample set. These stabilities are less demanding than uniform stability and fit more real applications.
References
1. He, X., Gao, W., Jia, Z.: Generalization bounds of ranking via query-level stability. In: Proceedings of the 2nd International Conference on Intelligent Transportation Systems and Intelligent Computing (ITSIC 2011), Suzhou, China (June 2011)
2. Lan, Y., Liu, T., Qin, T., Ma, Z., Li, H.: Query-level stability and generalization in learning to rank. In: Proceedings of the 25th International Conference on Machine Learning, Helsinki, Finland (2008)
3. McDiarmid, C.: On the method of bounded differences. In: Surveys in Combinatorics 1989, pp. 148–188. Cambridge University Press, Cambridge (1989)
4. Kutin, S.: Extensions to McDiarmid's inequality when differences are bounded with high probability. Technical report, Department of Computer Science, The University of Chicago (2002)
On Harmonious Labelings of the Balanced Quintuple Shells Xi Yue School of Computer Science, Wuyi University Jiangmen, 529020, P.R. China [email protected]
Abstract. A multiple shell MS{n_1^{t_1}, n_2^{t_2}, ..., n_r^{t_r}} is a graph formed by t_i shells of width n_i, 1 ≤ i ≤ r, which have a common apex. This graph has Σ_{i=1}^r t_i(n_i − 1) + 1 vertices. A multiple shell is said to be balanced with width w if it is of the form MS{w^s} or MS{(w+1)^t w^s}. Deb and Limaye have conjectured that all multiple shells are harmonious. The conjecture has been shown true for the balanced double shells, the balanced triple shells and the balanced quadruple shells. In this paper, the conjecture is proved to be true for the balanced quintuple shells.
Keywords: harmonious graph, multiple shell, vertex labeling, edge labeling.
1 Introduction
In 1980 Graham and Sloane [3] gave a variation on graceful labeling of graphs. A simple, finite graph G with n vertices and q (≥ n) edges is said to be harmonious if there is an injection f: V(G) → Z_q, where Z_q is the group of integers modulo q, such that the induced function g: E(G) → Z_q defined by g(xy) = [f(x) + f(y)] mod q, xy ∈ E(G), is a bijection. Such a labeling of the vertices and edges is called a harmonious labeling of the graph. In a harmonious labeling the vertex labels are distinct and the induced edge labels are 0, 1, 2, ..., q−1. Graham and Sloane proved that the odd cycles C_{4m+1}, C_{4m+3}, the wheels W_n, n ≥ 3, and the Petersen graph are harmonious, while most graphs, including the even cycles, are not harmonious. L. Bolian and Z. Xiankun [4] proved that the graph C_n' obtained by joining a path to a vertex of C_n is harmonious if and only if it has an even number of edges, and that the helm H_n is harmonious when n is odd. S. C. Shee [5] gave a harmonious labeling of the graph obtained by identifying the center of the star S_m with a vertex of an odd cycle C_n. Yang [6] proved that the disjoint union C_{2k} ∪ C_{2j+1} of cycles C_{2k} and C_{2j+1} (k ≥ 2, j ≥ 1, (k, j) ≠ (2, 1)) is harmonious. For the literature on harmonious graphs we refer to [2] and the relevant references given therein. A shell S_{n,n−3} of width n is a graph obtained by taking n−3 concurrent chords in a cycle C_n on n vertices. The vertex at which all the chords are concurrent is called the apex. The two vertices adjacent to the apex have degree 2, the apex has degree n−1, and all the other vertices have degree 3.
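The definition above is directly checkable by machine for a small graph. The sketch below (not from the paper) verifies that f(v_i) = i gives a harmonious labeling of the odd cycle C_5, where n = q = 5.

```python
def is_harmonious(n_vertices, edges, f):
    # f: vertex -> label; harmonious iff the vertex labels are distinct and
    # the induced edge labels [f(u)+f(v)] mod q form a bijection onto Z_q.
    q = len(edges)
    labels = [f(v) for v in range(n_vertices)]
    if len(set(labels)) != n_vertices:
        return False
    edge_labels = {(f(u) + f(v)) % q for u, v in edges}
    return len(edge_labels) == q

# C_5: vertices 0..4 with the 5-cycle's edges; label each vertex by its index.
c5_edges = [(i, (i + 1) % 5) for i in range(5)]
print(is_harmonious(5, c5_edges, lambda v: v))   # True: edge sums mod 5 are 1,3,0,2,4
```

The same checker immediately rejects the even cycle C_4 under the identity labeling, consistent with Graham and Sloane's result that even cycles are not harmonious.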
M. Zhu (Ed.): ICCIC 2011, Part VI, CCIS 236, pp. 204–209, 2011. © Springer-Verlag Berlin Heidelberg 2011
A multiple shell MS{n_1^{t_1}, n_2^{t_2}, ..., n_r^{t_r}} is a graph formed by t_i shells of width n_i, 1 ≤ i ≤ r, which have a common apex. This graph has Σ_{i=1}^r t_i(n_i − 1) + 1 vertices. A multiple shell is said to be balanced with width w if it is of the form MS{w^s} or MS{(w+1)^t w^s}. If a multiple shell has in all k shells having a common apex, then it is called a k-tuple shell, i.e., a double shell if k = 2, a triple shell if k = 3, etc. Deb and Limaye [1] gave harmonious labelings of many families of cycle-related graphs, such as shell graphs, cycles with the maximum possible number of concurrent alternate chords, and some families of multiple shells. They conjectured that all multiple shells are harmonious. The conjecture has been shown true for the balanced double shells [1], the balanced triple shells [1] and the balanced quadruple shells [7]. In this paper, the conjecture is proved to be true for the balanced quintuple shells.
2 Balanced Quintuple Shells
Now we consider the case of balanced quintuple shells. We have:
Theorem. All the balanced quintuple shells of the form MS{(w+1)^t w^{5−t}} are harmonious.
Proof. Let G be a balanced quintuple shell on n vertices and m edges. We must have t shells of size w+1 and 5−t shells of size w. The cycles are C_j = {v_1, v_{s_{j1}}, v_{s_{j1}+1}, ..., v_{s_{j2}}}, 1 ≤ j ≤ 5, where

s_{j1} = (j−1)(w+1) + 2,                for j ≤ t+1,
s_{j1} = (j−1)(w+1) + 3 − (j−t),        for j ≥ t+2,
s_{j2} = (j−1)(w+1) + 2 + w,            for j ≤ t,
s_{j2} = (j−1)(w+1) + 2 + w − (j−t),    for j ≥ t+1.

This means that v_1 is the common apex and the 5w+t−10 chords are {v_1 v_i | s_{j1}+1 ≤ i ≤ s_{j2}−1, 1 ≤ j ≤ 5}.
Case 1: n ≡ 0 (mod 5). Let n = 5w+5; then m = 10w+3 and t = 4.
Case 1.1: w ≡ 1 (mod 2). We label the vertices as follows:

f(v_i) = 0,               for i = 1,
f(v_i) = (17w+7−i)/2,     for s_{11} ≤ i ≤ s_{32}−1 and i mod 2 = 0,
f(v_i) = w + (3−i)/2,     for s_{11}+1 ≤ i ≤ s_{12} and i mod 2 = 1,
f(v_i) = (9w−i)/2 + 3,    for s_{21}+1 ≤ i ≤ s_{22} and i mod 2 = 1,
f(v_i) = 7w + (9−i)/2,    for s_{31}+1 ≤ i ≤ s_{32} and i mod 2 = 1,
f(v_i) = 7w + 4 − i/2,    for s_{41} ≤ i ≤ s_{42}−1 and i mod 2 = 0,
f(v_i) = (13w−i)/2 + 4,   for s_{41}+1 ≤ i ≤ s_{52}−1 and i mod 2 = 1,
f(v_i) = (9w+7−i)/2,      for s_{51} ≤ i ≤ s_{52} and i mod 2 = 0.
It is easy to verify that f is an injective function from the vertex set V(G) of G to the set {0, 1, …, m−1}. Denote g(vi,vj) = [f(vi) + f(vj)] mod m. Now, we show that g is a one-to-one function from the edge set E(G) of G to the set {0, 1, …, m−1}.

Let D1 = {g(vi,vi+1) | s11 ≤ i ≤ s52−1, i ≠ s12, s22, s32, s42} = D11 ∪ D12 ∪ D13 ∪ D14 ∪ D15, where
D11 = {g(vi,vi+1) | s11 ≤ i ≤ s12−1} = {(19w+5)/2, (19w+3)/2, …, (17w+7)/2},
D12 = {g(vi,vi+1) | s21 ≤ i ≤ s22−1} = {2w, 2w−1, …, w+1},
D13 = {g(vi,vi+1) | s31 ≤ i ≤ s32−1} = {(7w+1)/2, (7w−1)/2, …, (5w+3)/2},
D14 = {g(vi,vi+1) | s41 ≤ i ≤ s42−1} = D141 ∪ D142,
D141 = {g(vi,vi+1) | s41 ≤ i ≤ s41+(w−1)/2} = {(w−1)/2, (w−3)/2, …, 0},
D142 = {g(vi,vi+1) | s41+(w+1)/2 ≤ i ≤ s42−1} = {10w+2, 10w+1, …, (19w+7)/2},
D15 = {g(vi,vi+1) | s51 ≤ i ≤ s52−1} = {7w+1, 7w, …, 6w+3}.

Let D2 = {g(v1,vi+1) | s11 ≤ i ≤ s52} = D21 ∪ D22 ∪ D23 ∪ D24 ∪ D25 ∪ D26 ∪ D27, where
D21 = {g(v1,vi+1) | s11 ≤ i ≤ s32−1 and i mod 2 = 0} = {(17w+5)/2, (17w+3)/2, …, 7w+2},
D22 = {g(v1,vi+1) | s11+1 ≤ i ≤ s12 and i mod 2 = 1} = {w, w−1, …, (w+1)/2},
D23 = {g(v1,vi+1) | s21+1 ≤ i ≤ s22 and i mod 2 = 1} = {4w+1, 4w, …, (7w+3)/2},
D24 = {g(v1,vi+1) | s31+1 ≤ i ≤ s32 and i mod 2 = 1} = {6w+2, 6w+1, …, (11w+5)/2},
D25 = {g(v1,vi+1) | s41 ≤ i ≤ s42−1 and i mod 2 = 0} = {(11w+3)/2, (11w+1)/2, …, 5w+2},
D26 = {g(v1,vi+1) | s41+1 ≤ i ≤ s52−1 and i mod 2 = 1} = {5w+1, 5w, …, 4w+2},
D27 = {g(v1,vi+1) | s51 ≤ i ≤ s52 and i mod 2 = 0} = {(5w+1)/2, (5w−1)/2, …, 2w+1}.
Let D be the label set of all edges; then

D = D1 ∪ D2 = D142 ∪ D11 ∪ D21 ∪ D15 ∪ D24 ∪ D25 ∪ D26 ∪ D23 ∪ D13 ∪ D27 ∪ D12 ∪ D22 ∪ D141
  = {10w+2, 10w+1, …, (19w+7)/2} ∪ {(19w+5)/2, (19w+3)/2, …, (17w+7)/2} ∪ {(17w+5)/2, (17w+3)/2, …, 7w+2} ∪ {7w+1, 7w, …, 6w+3} ∪ {6w+2, 6w+1, …, (11w+5)/2} ∪ {(11w+3)/2, (11w+1)/2, …, 5w+2} ∪ {5w+1, 5w, …, 4w+2} ∪ {4w+1, 4w, …, (7w+3)/2} ∪ {(7w+1)/2, (7w−1)/2, …, (5w+3)/2} ∪ {(5w+1)/2, (5w−1)/2, …, 2w+1} ∪ {2w, 2w−1, …, w+1} ∪ {w, w−1, …, (w+1)/2} ∪ {(w−1)/2, (w−3)/2, …, 0}
  = {10w+2, 10w+1, 10w, …, 1, 0}.

Case 1.2: w ≡ 0 (mod 2). We label the vertices as follows:

f(vi) = 0,                i = 1,
        (19w+8−i)/2,      s11 ≤ i ≤ s12 and i mod 2 = 0,
        (18w+7−i)/2,      s11+1 ≤ i ≤ s12−1 and i mod 2 = 1,
        (15w+7−i)/2,      s21 ≤ i ≤ s22 and i mod 2 = 1,
        (12w+6−i)/2,      s21+1 ≤ i ≤ s32 and i mod 2 = 0,
        (5w+5−i)/2,       s31+1 ≤ i ≤ s42 and i mod 2 = 1,
        (10w+6−i)/2,      s41+1 ≤ i ≤ s42−1 and i mod 2 = 0,
        (19w+10−i)/2,     s51 ≤ i ≤ s52−1 and i mod 2 = 0,
        (10w+7−i)/2,      s51+1 ≤ i ≤ s52 and i mod 2 = 1.
On Harmonious Labelings of the Balanced Quintuple Shells
207
It is easy to verify that f is an injective function from the vertex set V(G) of G to the set {0, 1, …, m−1}. It is obvious that the labels of the edges are pairwise different, so g maps E onto {0, 1, …, |E|−1}. According to the definition of a harmonious graph, we can conclude that the balanced quintuple shells are harmonious for n = 5w+5.

Case 2: n ≡ 4 (mod 5). Let n = 5w+4; then m = 10w+1 and t = 3.

Case 2.1: w ≡ 1 (mod 2). We label the vertices as follows:
f(vi) = 0,                i = 1,
        (19w+3−i)/2,      s11 ≤ i ≤ s22−1 and i mod 2 = 0,
        (14w+5−i)/2,      s11+1 ≤ i ≤ s12 and i mod 2 = 1,
        (4w+5−i)/2,       s21+1 ≤ i ≤ s32 and i mod 2 = 1,
        (9w+3−i)/2,       s31 ≤ i ≤ s32−1 and i mod 2 = 0,
        (14w+6−i)/2,      s41 ≤ i ≤ s42 and i mod 2 = 0,
        (9w+4−i)/2,       s41+1 ≤ i ≤ s42−1 and i mod 2 = 1,
        (14w+5−i)/2,      s51 ≤ i ≤ s52 and i mod 2 = 1,
        (19w+7−i)/2,      s51+1 ≤ i ≤ s52−1 and i mod 2 = 0.
Case 2.2: w ≡ 0 (mod 2). We label the vertices as follows:
f(vi) = 0,                i = 1,
        (16w+4−i)/2,      s11 ≤ i ≤ s12 and i mod 2 = 0,
        (15w+3−i)/2,      s11+1 ≤ i ≤ s12−1 and i mod 2 = 1,
        (7w+3−i)/2,       s21 ≤ i ≤ s22 and i mod 2 = 1,
        (4w+4−i)/2,       s21+1 ≤ i ≤ s32 and i mod 2 = 0,
        (19w+7−i)/2,      s31+1 ≤ i ≤ s32−1 and i mod 2 = 1,
        (17w+5−i)/2,      s41 ≤ i ≤ s52−1 and i mod 2 = 1,
        (10w+6−i)/2,      s41+1 ≤ i ≤ s42 and i mod 2 = 0,
        (16w+6−i)/2,      s51+1 ≤ i ≤ s52 and i mod 2 = 0.
By a proof similar to the one in Case 1, this assignment provides a harmonious labeling for n = 5w+4.

Case 3: n ≡ 3 (mod 5). Let n = 5w+3; then m = 10w−1 and t = 2.

Case 3.1: w ≡ 1 (mod 2). We label the vertices as follows:
f(vi) = 0,                i = 1,
        (16w+2−i)/2,      s11 ≤ i ≤ s22−1 and i mod 2 = 0,
        (14w+1−i)/2,      s11+1 ≤ i ≤ s12 and i mod 2 = 1,
        (7w+2−i)/2,       s21+1 ≤ i ≤ s32−1 and i mod 2 = 1,
        (10w+4−i)/2,      s31 ≤ i ≤ s32 and i mod 2 = 0,
        (10w+3−i)/2,      s41 ≤ i ≤ s42 and i mod 2 = 1,
        (14w+4−i)/2,      s41+1 ≤ i ≤ s42−1 and i mod 2 = 0,
        (8w+2−i)/2,       s51 ≤ i ≤ s52 and i mod 2 = 0,
        (23w+2−i)/2,      s51+1 ≤ i ≤ s52−1 and i mod 2 = 1.
Case 3.2: w ≡ 0 (mod 2). We label the vertices as follows:
f(vi) = 0,                i = 1,
        (19w−i)/2,        s11 ≤ i ≤ s12 and i mod 2 = 0,
        (16w+1−i)/2,      s11+1 ≤ i ≤ s12−1 and i mod 2 = 1,
        (3w+3−i)/2,       s21 ≤ i ≤ s22 and i mod 2 = 1,
        (6w+3−i)/2,       s21+1 ≤ i ≤ s22−1 and i mod 2 = 0,
        (15w+2−i)/2,      s31 ≤ i ≤ s42−1 and i mod 2 = 0,
        (13w+3−i)/2,      s31+1 ≤ i ≤ s32 and i mod 2 = 1,
        (12w+3−i)/2,      s41+1 ≤ i ≤ s52 and i mod 2 = 1,
        (14w+2−i)/2,      s51 ≤ i ≤ s52−1 and i mod 2 = 0.
By a proof similar to the one in Case 1, this assignment provides a harmonious labeling for n = 5w+3.

Case 4: n ≡ 2 (mod 5). Let n = 5w+2; then m = 10w−3 and t = 1.

Case 4.1: w ≡ 1 (mod 2). We label the vertices as follows:
f(vi) = 0,                i = 1,
        (19w−5−i)/2,      s11 ≤ i ≤ s12−1 and i mod 2 = 0,
        (2w+3−i)/2,       s11+1 ≤ i ≤ s12 and i mod 2 = 1,
        (13w+1−i)/2,      s21 ≤ i ≤ s22 and i mod 2 = 0,
        (19w−4−i)/2,      s21+1 ≤ i ≤ s22−1 and i mod 2 = 1,
        (13w−i)/2,        s31 ≤ i ≤ s32 and i mod 2 = 1,
        (7w+1−i)/2,       s31+1 ≤ i ≤ s32−1 and i mod 2 = 0,
        (10w+2−i)/2,      s41 ≤ i ≤ s42 and i mod 2 = 0,
        (20w−3−i)/2,      s41+1 ≤ i ≤ s42−1 and i mod 2 = 1,
        (10w+1−i)/2,      s51 ≤ i ≤ s52 and i mod 2 = 1,
        (12w+2−i)/2,      s51+1 ≤ i ≤ s52−1 and i mod 2 = 0.
Case 4.2: w ≡ 0 (mod 2). We label the vertices as follows:
f(vi) = 0,                i = 1,
        (2w+2−i)/2,       s11 ≤ i ≤ s12 and i mod 2 = 0,
        (13w−1−i)/2,      s11+1 ≤ i ≤ s12−1 and i mod 2 = 1,
        (20w−3−i)/2,      s21 ≤ i ≤ s32−1 and i mod 2 = 1,
        (10w−i)/2,        s21+1 ≤ i ≤ s22 and i mod 2 = 0,
        (8w+2−i)/2,       s31+1 ≤ i ≤ s42 and i mod 2 = 0,
        (15w−1−i)/2,      s41 ≤ i ≤ s52−1 and i mod 2 = 1,
        (14w−i)/2,        s51+1 ≤ i ≤ s52 and i mod 2 = 0.
By a proof similar to the one in Case 1, this assignment provides a harmonious labeling for n = 5w+2.

According to Cases 1−4, the balanced quintuple shells of the form MS{(w+1)^t, w^(5-t)} are harmonious.

Youssef [8] has shown that if G is harmonious then G^m is harmonious; hence MS{w^5} is harmonious. By the Theorem, we have

Corollary. All the balanced quintuple shells are harmonious.
3 Conclusion

Deb and Limaye [1] have conjectured that all multiple shells are harmonious. The conjecture has been shown to be true for the balanced double shells, the balanced triple shells and the balanced quadruple shells. In this paper, the conjecture is proved to be true for the balanced quintuple shells. The conjecture remains open for k ≥ 6.
References
1. Deb, P.K., Limaye, N.B.: On Harmonious Labelings of Some Cycle Related Graphs. Ars Combinatoria 65, 177–197 (2002)
2. Gallian, J.A.: A Dynamic Survey of Graph Labeling. The Electronic Journal of Combinatorics, #DS6 (2010)
3. Graham, R.L., Sloane, N.J.A.: On Additive Bases and Harmonious Graphs. SIAM J. Alg. Disc. Math. 1, 382–404 (1980)
4. Liu, B., Zhang, X.: On Harmonious Labelings of Graphs. Ars Combinatoria 36, 315–326 (1993)
5. Shee, S.-c.: On Harmonious and Related Graphs. Ars Combinatoria 23A, 237–247 (1987)
6. Yang, Y., Ming, L.W., Shuang, Z.Q.: Harmonious Graphs C2k ∪ C2j+1. Ars Combinatoria 62, 191–198 (2002)
7. Yang, Y., Xu, X., Xi, Y.: On Harmonious Labelings of the Balanced Quadruple Shells. Ars Combinatoria 75, 289–296 (2005)
8. Youssef, M.Z.: Two General Results on Harmonious Labelings. Ars Combinatoria 68, 225–230 (2003)
The Study of Vehicle Roll Stability Based on Fuzzy Control Zhu Maotao, Chen Yang, Qin Shaojun, and Xu Xing School of Automobile and Traffic Engineering, Jiangsu University, China [email protected], [email protected]
Abstract. The virtual model of the vehicle is built using ADAMS/Car. Yaw velocity and sideslip angle feedback controllers are designed based on the method of fuzzy control. Then co-simulation of vehicle stability is carried out with ADAMS/Car and MATLAB/Simulink. This provides a useful method for co-simulation and for vehicle roll stability control. Keywords: MATLAB/Simulink, Roll stability, fuzzy control.
1 Introduction

With the continuous development of automobile technology, security and handling stability have drawn more and more attention. When the vehicle is driven on a road with a low adhesion coefficient, the lateral force of the tyre often reaches its physical limit, and vehicle dynamic stability can be lost because of external disturbance or cornering, so traffic accidents happen frequently. Therefore, it is of great significance to improve vehicle roll stability [1,2].
2 Vehicle Dynamic Model

The vehicle is rear-engine driven; the front suspension is a helical-spring dependent suspension and the rear suspension is a parallel leaf-spring dependent suspension, as shown in Fig. 1.
Fig. 1. Vehicle virtual prototype model M. Zhu (Ed.): ICCIC 2011, Part VI, CCIS 236, pp. 210–217, 2011. © Springer-Verlag Berlin Heidelberg 2011
3 The Verification of the Model

A snake test and a double lane change test were carried out on the test field to prove the correctness of the virtual prototype model. According to GB/T 6323.1-94 [3], the initial speed of the snake test is 50 km/h, the number of stakes is 10, and L = 30 m. The test results and simulation results are shown as follows:
Fig. 2. Snake test of yaw velocity time response plot
Fig. 3. Snake simulation of yaw velocity time response plot
The initial speed of the double lane change test is 60 km/h; the test results and simulation results are shown as follows:
Fig. 4. Double lane change test of yaw velocity time response plot
Fig. 5. Double lane change simulation of yaw velocity time response plot
Comparing the test results with the simulations, the measured yaw velocity and lateral acceleration agree well with the simulated values. This shows that the virtual prototype is valid, which provides a reliable basis for the research on roll stability control.
4 Design of Fuzzy Controller

Yaw velocity is selected as the variable for the first controller, and a 2-D fuzzy controller is adopted to control it. The input variables are the error e(r) between the actual yaw velocity r and the ideal yaw velocity, and the magnitude of the error variance ec(r). The output variable u is the yawing moment MZ(r).

Table 1. Yaw velocity fuzzy control rules

         EC:  NB    NM    NS    ZE    PS    PM    PB
E   NB        PB    PB    PB    PB    PM    PS    ZE
    NM        PB    PB    PM    PM    PM    ZE    ZE
    NS        PM    PM    PM    PM    ZE    NS    NS
    NO        PM    PM    PS    ZE    NS    NM    NM
    PO        PM    PM    PS    ZE    NS    NM    NM
    PS        PS    PS    ZE    NM    NM    NM    NM
    PM        ZE    ZE    NM    NB    NB    NB    NB
    PB        ZE    ZE    NM    NB    NB    NB    NB
Table 2. Sideslip angle fuzzy control rules

         EC:  NB    NM    NS    ZE    PS    PM    PB
E   NB        NB    NB    NB    NB    NM    ZE    ZE
    NM        NB    NB    NB    NB    NM    ZE    ZE
    NS        NM    NM    NM    NM    ZE    PS    PS
    NO        NM    NM    NS    ZE    PS    PM    PM
    PO        NM    NM    NS    ZE    PS    PM    PM
    PS        NS    NS    ZE    PM    PM    PM    PM
    PM        ZE    ZE    PM    PB    PB    PB    PB
    PB        ZE    ZE    PM    PB    PB    PB    PB
Fig. 6. Yaw velocity fuzzy controller
Fig. 7. Sideslip angle fuzzy controller
The fuzzy sets of the error e(r), the error variance ec(r), and the output variable u are as follows. The fuzzy set of e(r) is {NB, NM, NS, NO, PO, PS, PM, PB}, where NB is negative big, NM negative medium, NS negative small, NO negative zero, PO positive zero, PS positive small, PM positive medium, PB positive big, and ZE zero. Distinguishing NO from PO is essential for improving stability precision. The principle for selecting the control moment is: when the error is large, the yaw control moment should be selected to reduce the error as quickly as possible; when the error is small, the yaw control moment should be selected to prevent overshoot and to keep the system stable. The yaw velocity fuzzy control rules are shown in Table 1, and the input-output relation in Fig. 6.

Sideslip angle is selected as the variable for the second controller; a 2-D fuzzy controller is again adopted. The input variables are the error e(β) between the actual sideslip angle β and the ideal sideslip angle, and the magnitude of the error variance ec(β). The output variable u is the sideslip moment MZ(β). The ranges of definition are the same as for the yaw velocity control, but the fuzzy control principle is different; it is shown in Table 2, and the input-output relation of the controller in Fig. 7.

When yaw velocity and sideslip angle are jointly controlled, the input variables are the yaw velocity error e(r) and the sideslip angle error e(β), and the output variable is the yawing moment MZ. The output of the joint feedback control is the total yawing moment, obtained as a weighted sum [4,5].
MZ = wr MZ(r) + wβ MZ(β)    (1)

In the equation, MZ is the total yawing moment; MZ(r) is the output of the yaw velocity controller and wr is its weighting factor; MZ(β) is the output of the sideslip angle controller and wβ is its weighting factor.
5 Co-simulation and Analysis of the Results

A 2-DOF model of vehicle handling and stability was created with MATLAB/Simulink. Yaw velocity and sideslip angle in the limit case were controlled by an additional yawing moment created by single-wheel braking. Before co-simulation, adams_server.py, decode.m and adams_plant.dll should be put in the working directory; otherwise MATLAB cannot connect to ADAMS during the simulation. In Plant Export under Controls, the input/output variables are entered in a dialog box, and the (.m) file is created automatically. The ADAMS Solver data file (.adm), driver control file (.dcf) and solver control file (.acf) are created automatically after the pavement file and control file are specified in File Driven Events. The (.m) file then needs to be edited to match the control file.
Fig. 8. Different feedback control of yaw velocity response
Set the working path of MATLAB to be the same as that of ADAMS. Type the command to open the (.m) file; then type adams_sys to call out the adams_sub block and connect it with the model of the yaw velocity control system [6,7]. In a similar way, the whole vehicle model is connected with the sideslip angle controller to create the block diagram of the co-simulation based on sideslip angle feedback control.
Fig. 9. Different feedback control of sideslip angle response
A simulation with a simple sinusoidal input was performed under the following conditions: the front steer angle is a simple sinusoidal input, the initial speed is 110 km/h, the adhesion coefficient of the road surface is 1, the frequency is 0.5 Hz, and the amplitude is 100°. The simulation results are shown in Figs. 8 and 9.
Fig. 10. Different feedback control of yaw velocity response
Fig. 11. Different feedback control of sideslip angle response
A simulation with an angle step input was performed under the following conditions: the front steer angle is a step input, the initial speed is 50 km/h, the adhesion coefficient of the road surface is 0.2, and the amplitude is 100°. The simulation results are shown in Figs. 10 and 11. A comparison of the yaw velocity and sideslip angle responses shows that both are far larger than in the ideal case.
6 Summary

In this paper, a vehicle virtual prototype model was established in ADAMS/Car, and its correctness was verified by tests. A fuzzy controller with yaw velocity feedback, a fuzzy controller with sideslip angle feedback, and a joint feedback controller were designed on the basis of fuzzy control theory, and a co-simulation model was established. Simulations of simple sinusoidal input and angle step input for handling stability were performed. Comparing the control effects of yaw velocity control, sideslip angle control and joint control showed that all of these methods can regulate the yaw velocity and improve stability. Joint control is better than single-variable control, and the fuzzy controller with yaw velocity feedback is better than the fuzzy controller with sideslip angle feedback.
References
1. Li, B.: Simulation Analysis of Vehicle Handling and Stability. Huazhong University of Science & Technology, Wuhan (2006)
2. Fan, C., Xiong, G., Zhou, M.: Application and Improvement of the Virtual Prototype MSC.ADAMS Software. China Machine Press, Beijing (2006)
3. GB/T 6323.1-94: Controllability and Stability Test Procedure for Automobiles – Pylon Course Slalom Test. China Standard Press, Beijing (1994)
4. An, L.: Control Method and Co-simulation Research of the Car Electronic Stability Program ESP. Nanjing University of Science and Technology, Nanjing (2009)
5. Ma, C.: The Study of a Vehicle ESP System Control Model Based on MATLAB. Nanjing University of Science and Technology, Nanjing (2008)
6. Getting Started Using ADAMS/Controls (2007)
7. ADAMS/Car User's Guide
Fast Taboo Search Algorithm for Solving Min-Max Vehicle Routing Problem Chunyu Ren School of Information Science and Technology, Heilongjiang University, Harbin, China [email protected]
Abstract. The paper is focused on the Min-Max Vehicle Routing Problem. According to the features of the problem, a fast taboo search algorithm is used to obtain a globally optimized solution. Firstly, a newly improved insertion method is applied to construct the initial solution in order to improve its feasibility. Secondly, three operations centred on the longest route are designed to speed up convergence and improve efficiency. Finally, the good performance of the algorithm is demonstrated by experimental calculation and by concrete examples of solving practical problems. Keywords: Fast taboo search algorithm, Min-Max Vehicle Routing Problem, Insertion method, three operations.
1 Introduction

With the development of modern logistics, the vehicle routing problem has received widespread attention. Optimizing the problem can increase the economic efficiency of logistics distribution, satisfy the diversified and individual needs of customers, and raise logistics service to a modern, scientific level. In practice there exists a type of problem whose aim is not to minimize the total distance of all routes, but to minimize the distance of the longest sub-route, which is called the Min-Max Vehicle Routing Problem (MMVRP). Examples are the arrangement of patrol routes for patrol vehicles [1], of delivery routes for airdropped goods in emergencies [2], and of delivery routes for postmen [3]. The min-max vehicle routing problem is a typical NP problem, and the main solution methods are modern intelligent algorithms. Liu Xia created a mathematical model to solve MMVRP and proposed an improved taboo search algorithm [4]. Michael first solved for the minimum bound of the objective function in MMVRP and then used a taboo search algorithm to obtain the solution [5]. Arkin divided the n routes created by MMVRP into n sub-regions and applied an approximation algorithm [6]. To raise the survival rate, Ozdamar established an MMVRP to reduce the distance of allocating basic life items and applied a heuristic algorithm based on greedy neighborhood search [7]. Corberan studied the routing of school commuting buses and applied a scatter search algorithm [8]. M. Zhu (Ed.): ICCIC 2011, Part VI, CCIS 236, pp. 218–223, 2011. © Springer-Verlag Berlin Heidelberg 2011
Considering the complexity of MMVRP, this article establishes a model of MMVRP and designs a fast taboo search algorithm to solve it. Experiments prove that the algorithm achieves not only better calculation results but also better calculation efficiency.
2 Model

Z = Min { Max_{k∈V} ∑_{i∈S} ∑_{j∈S} X_ijk d_ij }    (1)

subject to the constraints

∑_{k∈V} Y_ik = 1,  i ∈ H    (2)
∑_{i∈H} ∑_{j∈S} q_i X_ijk ≤ W_k,  k ∈ V    (3)
∑_{i∈S} X_ijk = Y_jk,  j ∈ S, k ∈ V    (4)
∑_{j∈S} X_ijk = Y_ik,  i ∈ S, k ∈ V    (5)
∑_{i∈m} ∑_{j∈m} X_ijk ≤ |m| − 1,  ∀m ⊆ {2,3,…,n}, k ∈ V    (6)
∑_{i∈S} ∑_{j∈S} X_ijk d_ij ≤ D_k,  k ∈ V    (7)

In the formulas: G = {g_r | r = 1,…,R} is the set of distribution centres (this paper has only one); H = {h_i | i = R+1,…,R+N} is the set of the N clients; S = G ∪ H is the set of all distribution centres and clients; V = {v_k | k = 1,…,K} is the set of vehicles; q_i is the demand of client i (i ∈ H); W_k is the loading capacity of vehicle k; d_ij is the straight-line distance from client i to client j; D_k is the maximum travel mileage of vehicle k.
3 Application of the Tabu Search Algorithm to MMVRP

Initial Solution Construction. Suppose h_k is the total number of customer points served by vehicle k, and the set R_k = {y_ik | 0 ≤ i ≤ h_k} contains the customer points of vehicle k. Y_ik denotes vehicle k at customer point i, and Y_0k is the distribution centre, the starting point of vehicle k. The concrete steps are as follows.
Step 1: Set the initial residual capacity of each vehicle to w1_k = w_k, with k = 0, h_k = 0, R_k = Φ.
Step 2: The demand of gene i in a chromosome is q_i; set k = 1.
Step 3: If q_i ≤ w1_k, set w1_k = Min{(w1_k − q_i), w_k}; otherwise, go to Step 6.
Step 4: If w1_k − q_i ≤ w_k and D_{i−1} + D_i ≤ D_k, set R_k = R_k ∪ {i} and h_k = h_k + 1; otherwise, go to Step 6.
Step 5: If k > K, set k = K; otherwise, k is unchanged.
Step 6: Set k = k + 1 and go to Step 3.
Step 7: Set i = i + 1 and go to Step 2.
Step 8: Repeat Steps 2 to 7. K records the total number of vehicles used, and R_k records a group of feasible paths.
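A simplified version of this construction can be sketched as follows (our code, not the paper's; the mileage check D_{i-1} + D_i ≤ D_k of Step 4 is omitted for brevity). Customers are scanned in order and appended to the first vehicle whose remaining capacity admits them:

```python
# Simplified sketch of the capacity-respecting initial solution (Steps 1-8).
def initial_solution(demands, capacity, n_vehicles):
    routes = [[] for _ in range(n_vehicles)]
    residual = [capacity] * n_vehicles        # w1_k for each vehicle
    for cust, q in demands.items():
        for k in range(n_vehicles):
            if q <= residual[k]:              # Step 3's capacity test
                routes[k].append(cust)        # Step 4: add customer to R_k
                residual[k] -= q
                break
        else:                                 # no vehicle fits (Step 6 exhausted)
            raise ValueError(f"no vehicle can take customer {cust}")
    return routes

# Demands borrowed from the first few clients of Table 1; capacity is ours.
demands = {1: 1.64, 2: 1.31, 3: 0.43, 4: 3.38, 5: 1.13}
print(initial_solution(demands, capacity=5, n_vehicles=2))  # [[1, 2, 3, 5], [4]]
```

Customer 4 (demand 3.38) overflows vehicle 1's remaining capacity and is pushed to vehicle 2, while the smaller customer 5 still fits in vehicle 1.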
Step 4: If w1k − qi ≤ wk and Di−1 + Di ≤ Dk , Rk = Rk ∪ {i} and hk = hk + 1 . Otherwise, it shifts to step 6. Step 5: If k > K , k = K . Otherwise, k = k . Step 6: k = k + 1 , shift to step 3. Step 7: i = i + 1 , shift to step 2. Step 8: Repeat from step 2 to step 7. K memorizes the total amount of all vehicles. Rk memorizes a group of feasible path. 1-exchange Neighborhood Operation. 1-exchange is to delete two clients in two routes, alternately insert them into their counterpart route, which can effectively boost the local search capability. Its neighborhood structure is the same as 1-move, but its radius can be larger. 2-opt Neighborhood Operation. 2-opt is used to conduct neighborhood search, which was to randomly choose the positions of two client nodes, and then exchange the clients between the two positions. k (i ) signified the neighbor point of the client point i in the
route l , and a (i, j ) signified to change the direction of the route from i to j . That was in the l route, the client points were: (0,1,2,..., n,0) , in it, 0 signified distribution centre. The procedures of the 2 − opt neighborhood operation were as such: Step1: i1 := 1, i := 0 ; Step2: if i > n − 2 , end; otherwise, turn to Step3; Step3: revise i2 := k (ii ), j1 := k (i2 ), j := i + 2 ; Step4: if j > n , turn to Step8, if not, turn to Step5; Step5: j2 := s ( j1 )
, change route l as such (1) a(i2 , j1 ) , (2) alternately
used (i1 , j1 ) and (i2 , j2 ) , substitute (i1 , i2 ) and ( j1 , j2 ) ; Step6: If the changed route l1 is feasible, and better than l , revise l , if not, turn to Step7; Step7: j1 := j2 , j := j + 1 , return to Step4; Step8: i1 := i2 , i := i + 1 , return to Step2.
2-opt* Neighborhood Operation. 2-opt* exchanges two edges in different routes in order to realize optimization between routes. In route l the client points are (0, 1, 2, …, n, 0) and in route k they are (0, 1, 2, …, m, 0), where 0 denotes the distribution centre.

Step 1: Randomly choose n client points in route l; for each client point i, choose a client point j nearby in route k; if one exists, exchange the chains (i, i+1) and (j, j+1);
Step 2: Conduct the 2-opt neighborhood operation on the exchanged routes l1 and k1 to obtain a feasible solution;
Step 3: Calculate the objective function f1 after the exchange; if f1 > f, go to Step 4; if not, go to Step 5;
Step 4: If the current best solution is not in the tabu list, accept it, put it into the list, and simultaneously remove the elements whose tabu tenure has expired; otherwise, go to Step 5;
Step 5: i = i + 1, go to Step 1;
Step 6: Repeat Steps 1–5 until the current best solution can no longer be updated.

Tabu Object and Length. The study takes the best solution of each iteration as the tabu object and puts it into the tabu list. The tabu length is the pivotal parameter of the algorithm, and its value decides how solutions are selected. The study sets the tabu length by random selection from 5 to 10.

Aspiration Criterion. The study adopts a criterion based on the fitness value: if all solutions of the candidate set are tabu, the best solution of the candidate set is released.

Stopping Rule. The study stops after a number of iterations fixed in advance, i.e., a sufficiently large positive number is chosen so that the total number of iterations does not exceed it. Fixing the number of iterations in advance effectively controls the running time of the algorithm and is easy to implement.
4 Experimental Calculation and Analysis

The data originate from reference [4]. There are one depot and 20 client nodes; the coordinates and demand of each node were created randomly, as indicated in Table 1 (the depot's number is 0). Six vehicles of the same type are given, and each vehicle's load capacity is 8.

Solution of the Fast Taboo Search Algorithm. After many trials, the fast taboo search algorithm adopts the following parameters: the maximum number of iterations is max_iter = 500, the taboo length is L = 5–10, and the number of candidate solutions is 50. The problem is solved ten times with random starts; the calculation results are shown in Table 2.

Table 1. Known conditions of the example

Item  x    y    Demand    Item  x    y    Demand
0     52   4    –         11    24   89   2.35
1     15   49   1.64      12    19   25   2.60
2     0    61   1.31      13    20   99   1.00
3     51   15   0.43      14    73   91   0.65
4     25   71   3.38      15    100  95   0.85
5     38   62   1.13      16    7    73   2.56
6     35   45   3.77      17    69   86   1.27
7     100  4    3.84      18    24   3    2.69
8     10   52   0.39      19    66   14   3.26
9     26   79   0.24      20    9    30   2.97
10    87   7    1.03
As Table 2 shows, the fast taboo search algorithm obtains high-quality solutions in all ten runs. The average total distance is 1093.085 km and six vehicles are used on average. The calculation results of the algorithm are relatively stable.
Table 2. Results of MMVRP using the fast taboo search algorithm

Calculation order   Total distance (km)   Longest line (km)   Vehicle amount
1                   1083.411              205.767             6
2                   1095.136              205.767             6
3                   1102.085              205.767             6
4                   1095.813              205.767             6
5                   1087.674              205.767             6
6                   1097.753              205.767             6
7                   1102.085              205.767             6
8                   1083.411              205.767             6
9                   1100.073              205.767             6
10                  1083.411              205.767             6
Average value       1093.085              205.767             6
Deviation           7.848                 0                   0
Here, the longest line is 205.767 km, with a corresponding optimal total length of 1083.411 km. The concrete routes can be seen in Table 3 and Fig. 1.

Table 3. Optimal results by FTS

Line No.   Running path      Mileage (km)
1          0-16-2-8-20-0     181.416
2          0-9-11-13-4-0     201.293
3          0-18-10-7-0       152.486
4          0-6-5-14-17-0     197.247
5          0-15-0            205.767
6          0-12-1-3-19-0     145.202
The total mileage is 1083.411 km; the longest line is 205.767 km.
Fig. 1. Optimal routes on solving MMVRP by FTS
Analysis of the Three Algorithms. Compared with the optimal scheme of reference [4], the experiments prove that this algorithm achieves not only better calculation results but also better calculation efficiency and a quicker convergence rate.

Table 4. Comparison among GA, TS and this algorithm

                   Genetic Algorithm   Tabu Search Algorithm   This Algorithm
Total mileage      1106.237 km         1095.136 km             1083.411 km
Average mileage    184.373 km          182.523 km              180.569 km
Longest line       205.767 km          205.767 km              205.767 km
5 Conclusions

The algorithm can also enlarge the search scope and avoid local optima, thereby ensuring the diversity of solutions. All in all, it neither scatters the solutions so much as to slow convergence, nor lets them over-converge into a local optimum.

Acknowledgment. This paper is supported by a project of the Heilongjiang Provincial Education Department of Science & Technology (No. 11551332).
References
1. Chawathe, S.S.: Organizing Hot-Spot Police Patrol Routes. In: International Conference on Intelligence and Security Informatics, vol. 1, pp. 79–86 (2007)
2. Han, Y., Guan, X., Shi, L.: Optimal Supply Location Selection and Routing for Emergency Material Delivery with Uncertain Demands. In: International Conference on Information Networking and Automation, vol. 1, pp. 87–92 (2010)
3. Applegate, D., Cook, W., Dash, S.: Solution of a Min-Max Vehicle Routing Problem. Informs Journal on Computing 14, 132–143 (2002)
4. Xia, L.: Research on Vehicle Routing Problem. PhD thesis, Huazhong University of Science and Technology, pp. 24–44 (2007)
5. Molloy, M., Reed, B.: A Bound on the Strong Chromatic Index of a Graph. Journal of Combinatorial Theory, Series B 69, 103–109 (1997)
6. Arkin, E.M., Hassin, R., Levin, A.: Approximations for Minimum and Min-Max Vehicle Routing Problems. Journal of Algorithms 59, 1–18 (2006)
7. Ozdamar, L., Yi, W.: Greedy Neighborhood Search for Disaster Relief and Evacuation Logistics. Intelligent Systems 23, 14–23 (2008)
8. Corberan, A., Fernandez, E., Laguna, M., Marti, R.: Heuristic Solutions to the Problem of Routing School Buses with Multiple Objectives. Journal of the Operational Research Society 53, 427–435 (2002)
Research on the Handover of the Compound Guidance for the Anti-ship Missile beyond Visual Range Zhao Yong-tao1, Hu Yun-an1, and Lin Jia-xin2 1 Department of Control Engineering, Naval Aeronautical and Astronautical University, Yantai, China 2 Department of Control Training, Naval Aeronautical and Astronautical University, Yantai, China [email protected]
Abstract. For the handover from midcourse guidance to terminal guidance of the anti-air missile beyond visual range, the process of target handover is presented through a study of the radar seeker's principle. A solution model for the preset antenna pointing angle error is presented based on an analysis of the major error sources. Furthermore, the variable structure guidance law is simulated under the interference of the pointing angle error. The simulation results validate the robustness of the variable structure guidance law against line-of-sight angle error. Keywords: radar seeker, compound guidance, target handover, pointing angle error, variable structure, robustness.
1 Introduction

Midcourse-terminal compound guidance is adopted by the anti-air missile beyond visual range, and the problem of target handover arises in the transition from midcourse guidance to terminal guidance [1-3]. Target handover means that the ship's radar gives the target information to the radar seeker of the anti-air missile, after which the seeker's antenna points at the target and the Doppler frequency is set [4-6]. Target handover consists of the distance handover, the velocity handover and the angle handover, of which the angle handover is the most difficult to realize. For the handover from midcourse guidance to terminal guidance of the anti-air missile beyond visual range, the process of target handover is presented through a study of the radar seeker's principle, and the solution formula for the preset antenna pointing angle error is derived based on an analysis of the major error sources. Furthermore, the variable structure guidance law is simulated under the interference of the pointing angle error. The simulation results validate the robustness of the variable structure guidance law against line-of-sight angle error.
2 The Process of the Target Handover

The radar seeker's control system is composed of the preset loop, the stabilization loop and the angle-tracking loop [6,7]. When switching from midcourse guidance to terminal

M. Zhu (Ed.): ICCIC 2011, Part VI, CCIS 236, pp. 224–231, 2011. © Springer-Verlag Berlin Heidelberg 2011
guidance, the preset loop and the stabilization loop are switched on, and the seeker's antenna points at the target using the information supplied by the ship's guidance radar. After the seeker captures the target, the preset loop is switched off and the angle-tracking loop is switched on. The sketch map of the angle tracking of the seeker's antenna is shown in Fig. 1.
Fig. 1. The sketch map of the angle tracking of the seeker’s antenna
In Fig. 1, M and T denote the missile and the target, and MT is the line of sight (LOS). q and q̇ are the angle and the angular rate of the LOS. qφ is the angle between the antenna axis and the horizontal plane. ε is the angle between MT and the antenna axis. φ is the pointing angle of the antenna, i.e., the angle between the missile axis and the antenna axis. ω is the rotation rate of the antenna. When the seeker is angle-tracking, the antenna tracks the LOS, so that q = qφ and ε = 0.
3 The Solution Model for the Preset Antenna Pointing Angle Error

The major sources of the preset antenna pointing angle error are the missile's measurement error, the target's measurement error and the seeker's pointing error.

1) Analysis of the Missile's Measurement Error. The missile's information is provided by the inertial navigation system, whose main error sources are component errors, setting errors and initial-condition errors [8,9]. Reference [10] derived the error formulas of the inertial navigation system (the angle-, velocity- and position-error formulas) as well as the system error caused by the accelerometer zero-position error, the initial-condition error and the gyro drift.

2) Analysis of the Target's Measurement Error. The target's information is provided by the ship's guidance radar. The sketch map of the orientation of the target is shown in Fig. 2.
Fig. 2. The sketch map of the orientation for the target
In Fig. 2, O and T denote the ship's guidance radar and the target, respectively. R is the target's oblique distance, A is the pitch angle and B is the azimuth angle. From Fig. 2 we have
x = R·cosA·sinB,  y = R·sinA,  z = R·cosA·cosB        (1)
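The conversion of Eq. 1 can be sketched as a small helper (angles in radians; the function and variable names are mine, not the paper's):

```python
import math

def target_cartesian(R, A, B):
    """Eq. 1: convert the guidance radar's oblique distance R, pitch
    angle A and azimuth angle B into Cartesian target coordinates."""
    x = R * math.cos(A) * math.sin(B)
    y = R * math.sin(A)
    z = R * math.cos(A) * math.cos(B)
    return x, y, z
```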
Differentiating Eq. 1 gives
δx = (∂x/∂R)·δR + (∂x/∂A)·δA + (∂x/∂B)·δB
δy = (∂y/∂R)·δR + (∂y/∂A)·δA + (∂y/∂B)·δB
δz = (∂z/∂R)·δR + (∂z/∂A)·δA + (∂z/∂B)·δB        (2)

That is,

[δx δy δz]^T = M·[δR δA δB]^T        (3)
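The linear error propagation of Eq. 3 can be checked numerically with a short sketch (the Jacobian rows are the partial derivatives of Eq. 1; names are assumptions of mine):

```python
import math

def orientation_jacobian(R, A, B):
    """Matrix M of Eq. 3: the Jacobian d(x, y, z)/d(R, A, B) obtained
    by differentiating Eq. 1 (angles in radians)."""
    cA, sA, cB, sB = math.cos(A), math.sin(A), math.cos(B), math.sin(B)
    return [[cA * sB, -R * sA * sB, R * cA * cB],   # x = R cosA sinB
            [sA,      R * cA,       0.0],           # y = R sinA
            [cA * cB, -R * sA * cB, -R * cA * sB]]  # z = R cosA cosB

def orientation_error(R, A, B, dR, dA, dB):
    """[dx dy dz]^T = M [dR dA dB]^T (Eq. 3)."""
    M = orientation_jacobian(R, A, B)
    return [m[0] * dR + m[1] * dA + m[2] * dB for m in M]
```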
In Eq. 3,

        | cosA·sinB   −R·sinA·sinB    R·cosA·cosB |
M =     | sinA         R·cosA         0           |
        | cosA·cosB   −R·sinA·cosB   −R·cosA·sinB |

Eq. 3 is the solution formula for the target orientation error caused by the guidance radar's distance error, pitch-angle error and azimuth-angle error.

3) Analysis of the Seeker's Pointing Error. The main sources of the seeker's pointing error are the seeker's electrical error, the setting error, the mechanical rotating error and the radome error [2,5]. Δd and σd denote the systematic and random parts of the electrical error, and Δa and σa the systematic and random parts of the setting
error. Δj and σj denote the systematic and random parts of the mechanical rotating error, and Δr and σr the systematic and random parts of the radome error. The systematic error Δφ1 and the random error σφ1 of the seeker's pointing error are therefore
Δφ1 = Δd + Δa + Δj + Δr
σφ1² = σd² + σa² + σj² + σr²        (4)
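Eq. 4's combination rule, systematic parts adding algebraically and random parts combining as a root-sum-square, is easily expressed in code (a minimal sketch; names are mine):

```python
import math

def combine_errors(systematic, random_parts):
    """Eq. 4: total systematic error is the algebraic sum of the four
    systematic sources; total random error is their root-sum-square."""
    delta = sum(systematic)
    sigma = math.sqrt(sum(s * s for s in random_parts))
    return delta, sigma
```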
4) The Solution Model of the Preset Antenna Pointing Angle Error. (Δxm, Δym, Δzm) and (σxm, σym, σzm) denote the systematic and random parts of the missile's measurement error; (Δxt, Δyt, Δzt) and (σxt, σyt, σzt) denote the systematic and random parts of the target's measurement error. At the switch from midcourse to terminal guidance, the LOS angle q is determined by the positions of the missile and the target:
q = arctan[ (yt − ym) / √((xt − xm)² + (zt − zm)²) ]        (5)
Rmt denotes the relative distance between the missile and the target, which is
Rmt = √((xt − xm)² + (yt − ym)² + (zt − zm)²)
Rxz denotes the projection of Rmt onto the horizontal plane, that is,
Rxz = √((xt − xm)² + (zt − zm)²)
Differentiating Eq. 5 gives
δq = (∂q/∂xm)·δxm + (∂q/∂xt)·δxt + (∂q/∂ym)·δym + (∂q/∂yt)·δyt + (∂q/∂zm)·δzm + (∂q/∂zt)·δzt        (6)
In Eq.6,
∂q/∂xm = (yt − ym)(xt − xm)/(Rxz·Rmt²),   ∂q/∂xt = −(yt − ym)(xt − xm)/(Rxz·Rmt²),
∂q/∂ym = −Rxz/Rmt²,   ∂q/∂yt = Rxz/Rmt²,
∂q/∂zm = (yt − ym)(zt − zm)/(Rxz·Rmt²),   ∂q/∂zt = −(yt − ym)(zt − zm)/(Rxz·Rmt²)
Hence

δq = (yt − ym)/(Rxz·Rmt²)·[(xt − xm)(δxm − δxt) + (zt − zm)(δzm − δzt)] + (Rxz/Rmt²)·(δyt − δym)        (7)

The measurement error is the algebraic sum of the systematic error and the random error:
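Eq. 7 can be checked numerically against a finite difference of the LOS angle of Eq. 5 (a sketch; positions in metres, function names are assumptions of mine):

```python
import math

def los_angle(xm, ym, zm, xt, yt, zt):
    """Eq. 5: LOS elevation angle from missile and target positions."""
    return math.atan2(yt - ym, math.hypot(xt - xm, zt - zm))

def delta_q(xm, ym, zm, xt, yt, zt, dxm, dym, dzm, dxt, dyt, dzt):
    """Closed-form first-order LOS-angle error of Eq. 7."""
    Rxz = math.hypot(xt - xm, zt - zm)
    Rmt2 = (xt - xm) ** 2 + (yt - ym) ** 2 + (zt - zm) ** 2
    horiz = (yt - ym) / (Rxz * Rmt2) * ((xt - xm) * (dxm - dxt)
                                        + (zt - zm) * (dzm - dzt))
    vert = Rxz / Rmt2 * (dyt - dym)
    return horiz + vert
```

For small measurement errors the closed form should agree with a direct perturbation of Eq. 5 to first order.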
δxm = Δxm + σxm,  δym = Δym + σym,  δzm = Δzm + σzm        (8)

and

δxt = Δxt + σxt,  δyt = Δyt + σyt,  δzt = Δzt + σzt        (9)
The systematic error Δq of q is the mathematical expectation of δq. From Eq. 8 and Eq. 9 we obtain
Δq = E[δq] = (yt − ym)/(Rxz·Rmt²)·[(xt − xm)(Δxm − Δxt) + (zt − zm)(Δzm − Δzt)] + (Rxz/Rmt²)·(Δyt − Δym)        (10)

The random error σq of q is the root-mean-square deviation of δq, i.e.

σq² = E[(δq − Δq)²]        (11)
From Eqs. 7 to 10 we get

σq² = (yt − ym)²/(Rxz²·Rmt⁴)·[(xt − xm)²·(σxm² + σxt²) + (zt − zm)²·(σzm² + σzt²)] + (Rxz²/Rmt⁴)·(σym² + σyt²)        (12)

From Fig. 1, the preset antenna pointing angle error δφ is

δφ = Δq + Δφ1 + √(σq² + σφ1²)        (13)
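Eqs. 10 to 13 can be assembled into one routine (a sketch; the tuples hold (x, y, z) error components and all names are mine):

```python
import math

def preset_pointing_error(pm, pt, sys_m, sys_t, sig_m, sig_t,
                          sys_seeker, sig_seeker):
    """Eq. 13: delta_phi = dq + d_phi1 + sqrt(sigma_q^2 + sigma_phi1^2),
    with dq from Eq. 10 and sigma_q^2 from Eq. 12."""
    dx, dy, dz = pt[0] - pm[0], pt[1] - pm[1], pt[2] - pm[2]
    Rxz = math.hypot(dx, dz)
    Rmt2 = dx * dx + dy * dy + dz * dz
    # Eq. 10: systematic LOS-angle error
    dq = dy / (Rxz * Rmt2) * (dx * (sys_m[0] - sys_t[0])
                              + dz * (sys_m[2] - sys_t[2])) \
        + Rxz / Rmt2 * (sys_t[1] - sys_m[1])
    # Eq. 12: random LOS-angle error (variance)
    var_q = dy * dy / (Rxz * Rxz * Rmt2 * Rmt2) * (
        dx * dx * (sig_m[0] ** 2 + sig_t[0] ** 2)
        + dz * dz * (sig_m[2] ** 2 + sig_t[2] ** 2)) \
        + Rxz * Rxz / (Rmt2 * Rmt2) * (sig_m[1] ** 2 + sig_t[1] ** 2)
    # Eq. 13: total preset pointing-angle error
    return dq + sys_seeker + math.sqrt(var_q + sig_seeker ** 2)
```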
4 Analysis of the Robustness of the Variable Structure Terminal Guidance Law

Δφs denotes the beam width of the seeker's antenna. Set Δφs = 15°, |δq| = Δφs/2, (xm0, ym0) = (0 m, 50 m), θm0 = 0°, (xt0, yt0) = (20 km, 10 m), θt = 180°, Vt = 600 m/s. Two kinds of measurement error δq of q are considered:
δq = 7.5°·(1 − 5t) for t ≤ 0.2 s,  δq = 0 for t > 0.2 s        (14)

and

δq = 7.5° for t ≤ 0.2 s,  δq = 0 for t > 0.2 s        (15)
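The two interference profiles of Eqs. 14 and 15, written as functions of time (in degrees; a direct transcription):

```python
def los_error_ramp(t):
    """Eq. 14: 7.5 deg decaying linearly to zero over the first 0.2 s."""
    return 7.5 * (1.0 - 5.0 * t) if t <= 0.2 else 0.0

def los_error_step(t):
    """Eq. 15: a constant 7.5 deg during the first 0.2 s, then zero."""
    return 7.5 if t <= 0.2 else 0.0
```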
Following reference [1], the two variable-structure terminal guidance laws are
u = [ −Ṙ·q̇ + vm·sin(q − θm) + k·(|Ṙ|/R)·S + ξ·sign(S) ] / [ vm·cos(q − θm) ]        (16)
and

u = [ vr·q̇ + k·vr·(|Ṙ|/R)·S + ξ·vr·sign(S) ] / [ vm·cos(q − Ar − θm) ]        (17)
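Assuming the garbled layout of Eq. 16 reads u = [−Ṙq̇ + vm·sin(q − θm) + k·(|Ṙ|/R)·S + ξ·sign(S)] / [vm·cos(q − θm)], which is a reading of the print rather than a certainty, the command can be evaluated as follows (all symbol names are assumptions):

```python
import math

def vs_guidance_command(R, R_dot, q, q_dot, theta_m, vm, S, k, xi):
    """Variable-structure command of Eq. 16 under the reading above;
    R is the relative range, R_dot its rate, q/q_dot the LOS angle and
    rate, theta_m the flight-path angle, S the sliding variable, and
    k, xi the gains."""
    sgn = (S > 0) - (S < 0)  # sign(S)
    num = (-R_dot * q_dot + vm * math.sin(q - theta_m)
           + k * abs(R_dot) / R * S + xi * sgn)
    return num / (vm * math.cos(q - theta_m))
```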
Simulating guidance law Eq. 16 produces the results shown in Figs. 3 to 5.
Fig. 3. The relative motion between the missile and the target
Fig. 4. The curve of the sliding mode S
Fig. 5. The curve of the sliding mode S
In Fig. 3, dotted line 3 is the missile's trajectory for δq = 0°, dashed line 2 is the trajectory for δq given by Eq. 14, thin solid line 1 is the trajectory for δq given by Eq. 15, and thick solid line 4 is the target's trajectory. Fig. 4 shows the curve of the sliding mode for δq given by Eq. 14, and Fig. 5 shows the curve of the sliding mode
when δq is given by Eq. 15. Figs. 3 to 5 validate the robustness of the variable-structure terminal guidance law (Eq. 16) against the interference of the line-of-sight angle error. Simulating guidance law Eq. 17 produces the results shown in Figs. 6 to 8. In Fig. 6, dotted line 3 is the missile's trajectory for δq = 0°, dashed line 2 is the trajectory for δq given by Eq. 14, thin solid line 1 is the trajectory for δq given by Eq. 15, and thick solid line 4 is the target's trajectory. Fig. 7 shows the sliding-mode curve for δq given by Eq. 14, and Fig. 8 that for δq given by Eq. 15. Figs. 6 to 8 validate the robustness of the variable-structure terminal guidance law (Eq. 17) against the interference of the line-of-sight angle error; moreover, the robustness of Eq. 17 is stronger than that of Eq. 16.
Fig. 6. The relative motion between the missile and the target
Fig. 7. The curve of the sliding mode S
Fig. 8. The curve of the sliding mode S
5 Conclusion

For the handover from midcourse to terminal guidance of the anti-air missile beyond visual range, the target-handover process was described on the basis of the radar seeker's operating principle, and a solution formula for the preset antenna pointing angle error was derived from an analysis of the major error sources. The variable-structure guidance law was then simulated under the interference of the pointing-angle error. The simulation results validate the robustness of the variable-structure guidance law against the interference of the line-of-sight angle error.
References
1. Zhao, Y.-t.: Research on the compound guidance for anti-air missile beyond visual range. Naval Aeronautical and Astronautical University (2008)
2. Liu, H.-j., Wang, L.-n.: Study on handover problem of compound guidance missile weapon. Modern Defence Technology 34(2), 29–33 (2006)
3. Liu, X.: Precision Guidance, Control and Simulation Technology. National Defense Industry Press (2006)
4. Cheng, F.-z.: Studies of guidance and combined control in terminal phase when intercepting tactical ballistic missile. Northwestern Polytechnical University (2002)
5. Luo, X.-s., Zhang, T.-q.: Study of handing over between midcourse guidance and terminal guidance for multi-purpose missiles. Journal of Ballistics 13(4), 47–50 (2001)
6. Peng, G.: Air Defense Missile Weapon Guidance and Control System Design. Aerospace Press (2005)
7. Mu, H.: Defense Missile Radar Seeker Design. Aerospace Press (2007)
8. Qu, J.-m.: Key technology research on the compound guidance. Aeronautical and Astronautical University, Beijing (2002)
9. Zhang, Y.-a.: Integrated Navigation and Guidance System. Haichao Press (2005)
10. Hu, X.-p.: Autonomous Navigation Theory and Application. National Defense Science and Technology University Press (2002)
Intelligent Traffic Control System Design Based on Single Chip Microcomputer

Xu Lei, Ye Sheng, Lu Guilin, and Zhang Zhen
School of Mechanics and Civil Engineering, China University of Mining and Technology, Xuzhou (221008), Jiangsu, China
[email protected]
Abstract. This paper describes an intelligent traffic control system implemented with a microcontroller and external counters. The system uses externally triggered pulses to simulate passing vehicles: the timer/counters collect the vehicle pulses, digitizing the traffic-flow information. The microcontroller then compares the automatically detected traffic flow with historical stored data and, through its control algorithm, sets an appropriate signal split, achieving intelligent control of the traffic lights. Keywords: microcontroller, intelligent control system, timer/counters.
1 Introduction
An urban traffic control system is mainly used for monitoring city traffic data, controlling traffic lights and guiding traffic through a computer-integrated management system; it has become the most important part of the modern urban traffic command system. Installing traffic lights at each intersection has become the most common and most effective means of managing vehicles and pedestrians, and using advanced information technology to transform the urban traffic system has become the consensus of urban traffic management.
2 System Composition
The traffic light control system receives and processes intersection traffic data and intelligently adjusts the on/off durations of the traffic lights according to the traffic flow. The control system has two main functions: first, detecting the east-west and south-north traffic flows at the intersection on the basis of experimental data, so as to provide data for the modelling method; second, using the control system for intelligent control of the traffic lights. For practical feasibility, this system uses LEDs in place of actual traffic lights, and the traffic flow is simulated with externally input pulses. Functionally, the whole control system divides into a traffic-flow detection module, an inspection module and a traffic-light control module. The general functional structure of the traffic light control system is shown in Fig. 1.
As can be seen from the figure, compared with a conventional traffic control system, this system has the following main features:

• The control module is a single-chip microcontroller; every functional module is built on IC chips, and the modules and chips are closely connected by a bus.
• The signal and data flow of the control system is one-directional: the counter system provides traffic data which, after processing by the microcontroller, become the traffic-light control instructions output to the light control system.
• The microcontroller can process the various signals in a timely manner.
• The devices share data over the bus, so the system structure is compact and data security is good.
• It can serve as the hardware foundation of a more intelligent control system.
Fig. 1. Composition of regulation system
3 System Design
The hardware mainly consists of ATMEL's AT89S52 microcontroller together with Intel's 8255 programmable parallel interface and 8253 programmable timer/counter. The AT89S52 is a low-power, high-performance CMOS 8-bit microcontroller with 8 KB of in-system programmable Flash memory on chip. Its instruction set and pinout are fully compatible with the 80C52; it provides 256 B of on-chip RAM, 32 I/O lines, three 16-bit timer/counters, a watchdog, six interrupt sources and a full-duplex serial port, which meets the practical needs of the system.

3.1 Counter

A dedicated external counter chip such as the 82C53 or 82C54 counts the externally input pulse signal, digitizing the external pulses. The road traffic flow thus enters the microcontroller as an identifiable pulse count, providing the basic data for the control program and for the output of control commands.
The concrete counting process with the external counter is as follows: an external crystal produces a counting pulse of a certain frequency; this serves as the time base against which the measured pulse signal is counted, the external chip counting pulses within the gate-control period; when the count ends, the low level produced at the external chip's output triggers a microcontroller interrupt, whereupon the microcontroller reads the count value back from the dedicated chip and runs the control algorithm. The counting-pulse frequency depends on the external crystal frequency; although a higher crystal frequency improves the accuracy of the pulse count, the pulse frequency fed to the external counter should not exceed 10 MHz. Implementing pulse counting with an external counter involves two main aspects: first, the hardware circuit design, including the connection of each microcontroller port and the interrupt-signal design; second, the MCU counting program, which controls each interrupt and timer to complete the corresponding pulse-counting process.

3.2 Traffic Signal Control Module
The traffic-light control module drives the lights on and off according to the instructions issued by the microcontroller. From the traffic-flow information obtained, the microcontroller runs the control algorithm, correctly judges the current vehicle flow, optimally adjusts the lights for the flow in each direction, and displays the result on LEDs simulating the actual situation. In this control system, LEDs simulate realistic traffic lights: each of the four directions (east, west, south and north) has three LEDs (red, yellow and green), twelve lamps in all, representing the lights of the four intersection approaches. The lights of each direction are driven through the 8255 to command the four approaches. Intelligent adjustment of the traffic lights is the core of this control system and the key point of the control program: the microcontroller obtains the vehicle counts for the east-west and south-north directions, compares them, takes the larger value as the benchmark, compares it with the stored historical vehicle information, and finally adjusts the signal durations according to the result of the comparison. Figure 2 shows the double-sided PCB layout of this intelligent traffic control system.
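The comparison logic described above (compare the two directions, take the larger count as the benchmark, consult the history, then adjust the green split) might be sketched like this; all names and tuning constants are assumptions of mine, not values from the paper:

```python
def adjust_green_times(ew_count, ns_count, historical_avg,
                       base_green=30, step=5, g_min=15, g_max=60):
    """Return (east-west, south-north) green durations in seconds: the
    heavier direction gains (or loses) `step` seconds depending on how
    the benchmark count compares with the stored history."""
    heavier_ew = ew_count >= ns_count
    benchmark = max(ew_count, ns_count)
    delta = step if benchmark > historical_avg else -step
    adjusted = max(g_min, min(g_max, base_green + delta))
    return (adjusted, base_green) if heavier_ew else (base_green, adjusted)
```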
4 System Control Scheme

4.1 Timing Signal Control Plan
Timing control of a single zone cannot adapt to changes in traffic flow, while multi-section control methods are difficult to configure. Combining the two overcomes both disadvantages and yields an effectively improved
control method, called "inductive-timing signal control". Under the inductive signal control mode, the traffic controller records the actual green time of each phase in every cycle and keeps running statistics. If during the measurement period the actual green time shows large variation, the traffic at the intersection is not very stable, and the inductive signal control method can be used for effective control.
Fig. 2. Intelligent traffic control system PCB figure
If, however, the actual green times measured within the prescribed number of cycles stay in a given range, the traffic has been relatively stable during this period and the timing signal control method can be used: the controller switches immediately to this operating mode, the best green time of each phase being the measured statistical average. Meanwhile it continues to record and analyse the actual green times; once the statistics exceed the allowed range, it switches to another operating mode or state. This control method overcomes many shortcomings and adapts effectively to the traffic characteristics of different periods.

4.2 Traffic Lights Fuzzy Control
Under normal conditions, inductive control detects vehicle data and, through analysis, produces the control solution for the next moment rather than
controlling the current traffic signals in real time; a period of delay must elapse. This can cause empty-waiting situations and requires an increasingly complex calculation program. Therefore the design can adopt a fuzzy control scheme for the traffic lights of the two directions of the intersection, making dynamic adjustments that favour the approach with the most waiting vehicles, so as to prevent traffic jams and achieve the best traffic control. In addition, to collect the vehicle count of the green direction accurately, pressure sensors can be installed on each approach of the intersection, generally two per approach, to obtain effective data.

4.3 Overall Scheme
Timing control, although not well suited to changing traffic flow, especially at lightly loaded intersections, allows a digital display of the remaining time of the current light colour, which lets drivers control their driving actions and start or stop in good time. Inductive control adapts to all kinds of intersections, but cannot conveniently drive a digital display of the remaining time. Therefore, to let the controller adapt to the intersection and control it effectively online, the two control methods can be combined. As for phases, four-phase control offers high safety, but it separates the lanes of the intersection and gives low traffic efficiency under light flow; two-phase control, although less safe, can be used at any intersection and gives higher traffic efficiency. Facing different actual situations, two-phase and four-phase control can therefore be combined to achieve effective control.
5 The Mathematical Model
We build the mathematical model of the traffic flow on a road section:
∂n/∂t + ∂q/∂x = 0        (1)
where n(x, t) is the number of cars per unit length (the density) and q(x, t) the number of cars per unit time (the flow).

If ∂q/∂x = 0, the number of cars entering equals the number leaving;
if ∂q/∂x > 0, the number entering is less than the number leaving;
if ∂q/∂x < 0, the number entering is greater than the number leaving.
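A discrete version of the conservation law (Eq. 1) makes the three cases concrete (a sketch; the fluxes q are taken at the cell boundaries):

```python
def update_density(n, q, dx, dt):
    """Explicit update of n_t + q_x = 0 on a 1-D road divided into cells:
    cell i gains dt/dx * (q[i] - q[i+1]) cars (inflow minus outflow).
    q must have len(n) + 1 entries, one per cell boundary."""
    return [ni - dt / dx * (q[i + 1] - q[i]) for i, ni in enumerate(n)]
```

With equal inflow and outflow (∂q/∂x = 0) the density is unchanged; a net inflow raises it and a net outflow lowers it.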
During a red light, cars enter but none leave, so ∂q/∂x < 0 and the car density increases.
During the green and yellow lights cars both enter and leave, and generally the car density decreases. The green light must guarantee that the vehicles stranded during the red light, plus a part of those that join during the green, have enough time to pass through the road section. The green time therefore comprises the driver's reaction and start-up time and the time the queue needs to cross the intersection. Let v0 be the legal speed, lr the length of the queue stranded at the red light, lg the length of the queue that joins during the green light, L the length of the intersection, a the vehicle acceleration, and T1 the driver's reaction plus start-up time. The green duration Tg is then
Tg = v0/(2a) + (lr + lg + L)/v0 + T1        (2)
During the yellow light, vehicles that have already crossed the stop line may continue, while those still behind it must stop. For a driver approaching the stop line who sees the yellow signal, the decision is either to stop or to pass through the intersection: stopping requires sufficient braking distance, and passing requires enough time to clear the intersection. The yellow time therefore comprises the driver's reaction time, the time needed to cross the intersection and the braking time. Let v0 be the legal speed, l0 the vehicle length, L the length of the intersection, μ the friction coefficient between the braking tires and the road, g the gravitational acceleration, and T0 the driver's reaction time. The yellow duration Ty is then

Ty = v0/(2μg) + (l0 + L)/v0 + T0        (3)

Assuming T0 = 0.5 s, l0 = 4.5 m, L = 10 m, μ = 0.2 and v0 = 30 km/h gives Ty = 4.32 s; from experience, Ty = 3 s is used. Assuming v0 = 30 km/h, lg = 50 m, L = 10 m, a = 4 m/s², and a driver reaction plus start-up time T1 = 0.8 s, we can draw the conclusions shown in Table 1.

Table 1. The approximate relationship between distance and time
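Eqs. 2 and 3 can be evaluated directly in code; the stranded queue length lr used below is an assumed value for illustration, since the paper's Table 1 data are not reproduced here:

```python
def green_time(v0, a, lr, lg, L, T1):
    """Eq. 2: Tg = v0/(2a) + (lr + lg + L)/v0 + T1."""
    return v0 / (2.0 * a) + (lr + lg + L) / v0 + T1

def yellow_time(v0, mu, l0, L, T0, g=9.8):
    """Eq. 3: Ty = v0/(2*mu*g) + (l0 + L)/v0 + T0."""
    return v0 / (2.0 * mu * g) + (l0 + L) / v0 + T0
```

With v0 = 30 km/h, μ = 0.2, l0 = 4.5 m, L = 10 m and T0 = 0.5 s this gives Ty ≈ 4.4 s, close to the paper's quoted 4.32 s; the small gap presumably comes from rounding or the value of g used.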
6 Summary
This paper uses ATMEL's 8-bit microcontroller and related peripherals to form a simple intelligent traffic control system. Its purpose is twofold: in a university environment it gives students a simulated platform for design experiments, whether verification-type or innovative; it can also offer transportation management departments some ideas for intelligent control, helping to regulate traffic more effectively and reasonably. It should be noted that, although most current research and applications use 32-bit microcontrollers, an 8-bit microcontroller is sufficient for the needs of this design, and building the whole system on the ATMEL 8-bit part also saves hardware cost; if more functions are configured or the data processing load grows, a 16-bit or 32-bit microcontroller can be considered.
Calculation and Measurement on Deformation of the Piezoelectric Pump Actuator

Xing Wang, Linhua Piao, and Quangang Yu
Sensor Technology Research Centre, Beijing Information and Technology University, Beijing, China
[email protected]
Abstract. The deformation of the piezoelectric pump actuator of the piezoelectric fluidic angular rate sensor is studied. First, the operating principle of the piezoelectric pump is analysed. Second, the deformation of the actuator is calculated by the finite element method and the equation of the deformation curve is obtained. Finally, based on the deformation-curve equation, a method for measuring and calculating the volume swept by the deformation is given and verified, making it possible to select the piezoelectric pump by this method. Keywords: Piezoelectric fluidic angular rate sensors, Piezoelectric pump, Deformation.
1 Introduction

The piezoelectric fluidic angular rate sensor performs the function of a gyroscope but has neither the rotating parts of a traditional gyroscope nor the solid-state inertial element of a piezoelectric vibrating gyroscope. It uses gas as its sensitive mass, which is extremely small, so it withstands high overload and strong impact and has a long service life. Because the piezoelectric pump has a simple structure, small volume and good controllability, and is free of electromagnetic interference, it has a favourable application prospect in micro-fluid conveying systems. The sensor operates by using the circulating air beam (jet flow) produced by the piezoelectric pump; a bridge circuit of hot-wire resistances detects the deflection of the air beam under the Coriolis force and outputs an electrical signal proportional to the angular rate [1]. The premise for the sensor to sense the angular rate is that the piezoelectric pump drives the gas into a recirculating flow inside the sensitive element. This requires the pump actuator to deform sufficiently to change the pump-chamber volume and drive gas into and out of the chamber [2]. The difficulty in installing and debugging the sensor is that gas circulation is always hard to establish, so, for a given sensitive-element structure, calculating the actuator deformation from test data allows piezoelectric pumps to be screened, which is significant for reducing the
difficulty of installing and debugging the sensor. At the same time, under a given excitation frequency the actuator deformation determines the air velocity, which directly affects the sensitivity of the sensor. Considering the actuator deformation is therefore significant for calculating the air velocity and improving the sensor's performance. The deformation of the pump actuator is in fact the deformation of the piezoceramic bimorph under certain constraints, so this paper analyses the bimorph deformation with the finite element method and the ANSYS software, obtains the fitted equation of the deformation curve, and gives a simple method for determining the actuator deformation in the piezoelectric fluidic angular rate sensor.
2 Working Principle of the Piezoelectric Pump

As shown in Fig. 1, the piezoelectric pump consists of a piezoceramic bimorph bonded to the pump bracket. With the bracket fixed, the bimorph undergoes periodic flexural vibration driven by the applied excitation voltage and drives the gas into a stable, continuous air beam (jet flow) in the closed cavity.
Fig. 1. The structure of the piezoelectric pump
As shown in Fig. 2, when the pump actuator (the piezoceramic bimorph) bends towards the pump block, the gas enclosed by the pump block, pump bracket and actuator is compressed in the pump chamber. The gas pours out of the chamber through the hole in the actuator, passes through the fluid feed port into the chute on the nozzle body and the collecting chamber enclosed by the inner shell, and is fed under pressure through the fluid feed pipe of the nozzle body to the nozzle. The nozzle produces a laminar air beam that impinges on the hot wires mounted on the heat-variable plug; the hot wires are placed in parallel. When an angular rate signal is input, the air beam deviates from the centre of symmetry of the hot wires in one direction, cooling the two hot wires differently, and the bridge detects the resulting electrical signal. The air beam returns through the air bleeder of the heat-variable plug to the exhaust chamber. Under the alternating excitation, the pump actuator then swings towards the opposite side of the pump block (the dashed line in Fig. 2); the pressure in the pump cavity decreases
and gas is drawn in from the exhaust chamber, so the gas leaving the air bleeder is sucked back into the pump cavity, forming a circulating fluid motion; the state of the gas movement is shown in Fig. 2. With the periodic vibration of the pump actuator, gas is continuously fed into the collecting chamber through the fluid feed port and discharged from the air bleeder into the exhaust chamber, finally forming a stable and continuous air beam [3].
1-pump block 2-pump bracket 3-fluid feed port 4-heat-variable plug 5-hotwire 6-collecting chamber 7-airflow 8-exhaust chamber 9-piezoceramic bimorphs 10-nozzle Fig. 2. The piezoelectric fluidic angular rate sensor sensitive element
3 Calculation of the Piezoelectric Pump Actuator Deformation

The piezoelectric pump of the piezoelectric fluidic angular rate sensor is axially symmetric, so cylindrical coordinates can be used; the choice of coordinates is shown in Fig. 3. PZT-5 piezoelectric ceramic wafers 0.2 mm thick and 20 mm in diameter are selected, with a 0.05 mm thick epoxy layer between the two wafers. To reduce the computing time, a quarter finite element model of the piezoceramic bimorph is created in the ANSYS 10.0 pre-processor, as shown in Fig. 4. The model is divided into 2240 elements, and a displacement constraint in the θ direction is applied on the
Fig. 3. Piezoceramic bimorphs
Fig. 4. The piezoelectric pump finite element model
oabe and odce planes, while the abcd surface is constrained in the θ, r and z directions (equivalent to the boundary conditions imposed on the bimorph by the pump bracket). From the finite element results, selecting forty equidistant points along the r direction of the bimorph gives the displacement of the piezoelectric pump, shown in Fig. 5. The maximum displacement occurs at r = 0 and is z0 = 628 μm; in other words, the bimorph deforms most at the centre of the circle, and the deformation decreases gradually towards the edge. For r greater than 8.5 mm, the displacement of each point is less than 2% of the maximum and can therefore be ignored in fitting the deformation curve; the fitted curve, shown in Fig. 6, is a parabola. The deformed shape of the pump actuator is thus approximately a parabolic curve, which the post-processing graphics in ANSYS also confirm. Rotating the parabola about the z-axis forms the solid of revolution whose volume is the volume swept by one bending vibration of the bimorph. The parabola equation is usually expressed as z = Ar² + Br + C
Fig. 5. The displacement curve of piezoceramic bimorphs
(1)
Calculation and Measurement on Deformation of the Piezoelectric Pump Actuator
Fig. 6. The displacement fitting curve
For the data of Fig. 5, the fitted parabolic equation is

z = −3.68r² + 105.2r − 639.5    (2)
According to the fitted parabolic equation, nine equidistant points are selected along the radial direction of the piezoceramic bimorph and their displacements zi1 (i = 0, 1, …, 8) are calculated with the fitted-curve equation; Table 1 compares them with the finite element results zi2. As can be seen from Table 1, the differences between the parabolic equation and the finite element data are very small: the minimum relative error is 0.2%, the maximum relative error is 6.9%, and the average relative error is 2.7%.

Table 1. The deformation displacement of piezoceramic bimorphs

r (mm)   zi1 (μm)   zi2 (μm)   Relative error (%)
0        −628.8     −639.5     1.7
1        −539.2     −538.0     0.2
2        −450.0     −443.8     1.4
3        −362.8     −357.0     1.6
4        −280.1     −277.6     2.4
5        −202.5     −205.5     0.9
6        −134.9     −140.8     4.3
7        −78.0      −83.4      6.9
8        −35.0      −33.4      4.6
By Eq. (2) and the definite integral method, the volume of one bending-vibration deformation of the piezoceramic bimorph can be obtained.
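This definite-integral step can be sketched numerically. The snippet below integrates the solid of revolution of the Eq. (2) curve with a simple trapezoidal rule; note that the paper's reported volume uses the measured curve of Eq. (3) and its own unit convention, so the figure printed here (in the mixed units mm²·μm, with r in mm and z in μm) is only illustrative:

```python
import numpy as np

# Fitted deflection curve from Eq. (2): z(r) = -3.68 r^2 + 105.2 r - 639.5,
# with r in mm and z in micrometres, over the fitting range 0..8.5 mm.
r = np.linspace(0.0, 8.5, 2001)
z = -3.68 * r**2 + 105.2 * r - 639.5

# Volume of the solid of revolution about the z-axis:
#   V = integral of 2*pi*r*|z(r)| dr   (trapezoidal rule, units mm^2 * um)
f = 2.0 * np.pi * r * np.abs(z)
V = np.sum((f[:-1] + f[1:]) / 2.0) * (r[1] - r[0])
print(f"deformation volume ~ {V:.4g} (mm^2 * um)")
```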
4 Testing of the Piezoelectric Pump Actuator Deformation

According to the above analysis, to determine the deformation of the piezoelectric pump actuator, the undetermined coefficients A, B and C of the parabola equation must first be found. In the test, the contact of a Mitutoyo dial indicator touches the piezoceramic bimorph at three points, r = 0, r = 4 mm and r = 8.5 mm. Their displacements are measured and substituted into the parabola equation; solving for the undetermined coefficients A, B and C fixes the equation, after which the deformation volume of the piezoelectric pump actuator is calculated by the definite integral method. A sample piezoelectric pump is taken whose actuator consists of two PZT-5 piezoelectric ceramic wafers, 20 mm in diameter and 0.2 mm thick, with a 0.05 mm thick epoxy layer between them. The displacements at r = 0, r = 4 mm and r = 8.5 mm on the piezoceramic bimorph, measured with the dial indicator, are z1 = −619 μm, z2 = −275 μm and z3 = −14 μm. Substituting these into Eq. (1) and solving for the undetermined coefficients A, B and C gives the parabola equation

z = −3.4r² + 98.7r − 610.1    (3)
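The three-point coefficient solve described above is a 3×3 linear system. A sketch using the displacement readings quoted in the text (the coefficients come out close to, but not exactly, those of Eq. (3), presumably because of rounding in the published values):

```python
import numpy as np

# Measured displacements at three radial positions quoted in the text
# (r in mm, z in micrometres).
r_pts = np.array([0.0, 4.0, 8.5])
z_pts = np.array([-619.0, -275.0, -14.0])

# z = A r^2 + B r + C  ->  linear system in (A, B, C).
M = np.column_stack([r_pts**2, r_pts, np.ones(3)])
A, B, C = np.linalg.solve(M, z_pts)
print(f"A = {A:.2f}, B = {B:.2f}, C = {C:.1f}")
```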
According to Eq. (3), the calculated deformation volume of the piezoelectric pump actuator is V = 3.591×10⁴ μm³. Compared with the deformation volume obtained from the finite element displacement method, the relative error is 9.6%.
5 Conclusions

In this paper, the working conditions of the piezoelectric pump of the piezoelectric fluidic angular rate sensor are analyzed, and the deformation of the piezoelectric pump actuator is calculated:
(1) The finite element method is used to calculate the displacement of the piezoelectric pump actuator, and a parabola is fitted to the displacement curve. The deformed surface of the actuator is close to a paraboloid; rotating the parabola about the z-axis forms a solid of revolution whose volume is the bending-vibration deformation volume of the piezoceramic bimorph. A method of calculating the actuator deformation from the parabola equation is proposed.
(2) Compared with the finite element method, the displacement of the piezoelectric pump fitted by the parabola equation has an average relative error of 2.7%.
(3) Compared with the finite element method, the deformation volume of the piezoelectric pump actuator obtained in the actual test has a relative error of 9.6%.
Acknowledgment. National Natural Science Foundation of China (60772012); Supported by program for New Century Excellent Talents in University; Beijing Excellent Training Grant; Supported by program for Beijing Key Laboratory Open Topic; Supported by program for Beijing New Century Talents Training Grant Project; Supported by program for Beijing Education Commission Technical Innovation Platform Foundation (KM201110772020); Supported by program for Modern Detection and Control Technology Ministry of Education Key Laboratory.
References

1. Cheng, G., Yang, Z., Zeng, P.: The preliminary study of the piezoelectric film fluid pump. Journal of Piezoelectrics and Acoustooptics 20(4), 233 (1998)
2. Li, Q., Su, Z.: The adaptive-filtering of the piezoelectric fluid spinning top. Journal of Chinese Inertial Technology 6, 31 (1998)
3. Wang, R.: Underwater acoustic material manual. Science Press, Beijing (1983)
FEM Analysis of the Jet Flow Characteristic in a Turning Cavity

Xing Wang, Linhua Piao, and Quangang Yu

Sensor Technology Research Centre, Beijing Information and Technology University, Beijing, China
[email protected]
Abstract. The jet flow characteristic in a turning cavity was analyzed by the finite element method. Using ANSYS-FLOTRAN CFD software, an entity model was built according to the actual size, and the finite element simulation was carried out through a series of procedures such as meshing, applying loads and solving. The flow field distribution was then calculated for different input angular rates. The results are as follows: (1) In the static state, the airflow velocity is symmetrically distributed about the cavity central axis. The maximum gas flow velocity appears near the cavity central axis, the airflow velocity gradually decreases on both sides of the axis, and the velocity is zero near the wall. (2) In the turning state, under the action of the Coriolis force, the centre of the jet flow is deflected. The flow field is asymmetrically distributed about the cavity central axis, and the flow velocity at each point changes with the angular rate. The piezoelectric fluidic angular rate sensor uses this characteristic to sense the angular rate.

Keywords: Finite element method, Angular rate, Flow field, Jet flow.
1 Introduction

At present, inertial devices are developing towards solid-state, miniaturized designs. The piezoelectric fluidic angular rate sensor is a solid-state inertial device that uses the Coriolis force, which deflects a circulating air beam, to measure angular quantities. Its characteristic is that the sensitive mass, a gas, is very small; in addition, it has no high-speed rotor, so it can still work properly after an impact of 16000g [1]. It is reported that the piezoelectric fluidic angular rate sensor has been used for terminally guided projectiles in the United States. Development of the piezoelectric fluidic angular rate sensor started at the Beijing Information Technology Institute in 1985, and the products have since repeatedly passed design finalization and type identification. The structure and fabrication technique of the piezoelectric fluidic angular rate sensor have been studied extensively. However, the jet behaviour in the sensitive cavity of the piezoelectric fluidic angular rate sensor has not been reported. Therefore, this paper uses the finite element method to calculate the flow velocity distribution for

M. Zhu (Ed.): ICCIC 2011, Part VI, CCIS 236, pp. 246–252, 2011. © Springer-Verlag Berlin Heidelberg 2011
a two-dimensional sensitive cavity with different input angular rates, which illustrates the jet distribution characteristic in the sensitive cavity of the piezoelectric fluidic angular rate sensor. The same method can also be used to further simulate and analyze the sensor, providing a simple and effective basis for designing and verifying sensors of this kind.
2 Mathematical Model

The physical model is shown in Fig. 1. The total length of the gas flow region is 15 mm, and the trapezoidal transition section is 5 mm long. The inlet is 2.5 mm wide and the outlet 9 mm wide. At a temperature of 20 ℃ and atmospheric pressure, the air density is 1.21 kg/m³ and the viscosity is 1.81×10⁻⁵ kg/(m·s).
Fig. 1. Two-dimension cavity schematic diagram (gas flow region with import at left and export at right; boundary points a–f and measurement line AB)
In the rectangular coordinate system, the equations of continuity and motion for two-dimensional incompressible viscous fluid flow are [2]

∂v_j/∂x_j = 0    (1)

∂v_i/∂t + v_j ∂v_i/∂x_j = f_i − (1/ρ) ∂p/∂x_i + ν ∂²v_i/(∂x_j ∂x_j)    (2)
In the formulas, i, j = 1, 2, with j the summation index; v_i is the velocity component, x_i and x_j are the space coordinates, f_i is the body force component per unit mass of fluid, p is the pressure acting on the fluid, ν is the kinematic viscosity and ρ is the gas density. Let the flow-field boundary be Γ, with Γ1 ∪ Γ2 = Γ and Γ1 ∩ Γ2 = Φ. The boundary conditions are

v_α = v̄_α  (on Γ1)    (3)

p_ij n_j = p̄_ni  (on Γ2)    (4)
In the formulas, Φ denotes the empty set, p_ij is the stress tensor component in the fluid, v̄_α is a given velocity component on Γ1, n_j is the outward unit normal on Γ2 and p̄_ni is a given pressure component on Γ2. From formulas (1) and (2), combined with the boundary conditions above, the integral (weak-form) expressions are

∫∫_Ω v_j ∂(δp)/∂x_j dΩ = ∫_Γ v_n δp dΓ    (5)

∫∫_Ω { ρ(∂v_α/∂t + v_j ∂v_α/∂x_j) δv_α + [ −p δ_αj + μ(∂v_α/∂x_j + ∂v_j/∂x_α) ] ∂(δv_α)/∂x_j } dΩ
    = ∫_Γ2 p̄_nα δv_α dΓ + ∫∫_Ω ρ f_α δv_α dΩ    (6)
In the formulas, Ω is the flow region in the plane, μ is the dynamic viscosity coefficient and v_n is the velocity component along n_j. With velocity v_α (α = 1, 2), the approximate expressions for the velocity and the pressure p within a finite element e are

v_α = v_αi(t) φ_i    (7)

p = p_k(t) ψ_k    (8)
In the formulas, φ_i = φ_i(x1, x2) (i = 1, 2, …, I_v) are the selected velocity interpolation functions, with I_v the total number of velocity nodes in the element; ψ_k = ψ_k(x1, x2) (k = 1, 2, …, I_p) are the selected pressure interpolation functions, with I_p the total number of pressure nodes in the element; v_αi(t) and p_k(t) are the velocity and pressure values at nodes i and k at time t. Substituting formulas (7) and (8) into (5) and (6) yields the element finite element characteristic expressions

A^(e)_ij v_αj + B^(e)_ijβl v_αj v_βl + C^(e)_βik p_k + D^(e)_iβl v_βl = E^(e)_αi    (9)

F^(e)_kβj v_βj = G^(e)_k    (10)
The indices in the above formulas and coefficient matrices run over α, β = 1, 2; i, j, l = 1, 2, …, I_v; and k = 1, 2, …, I_p. The expressions for the coefficient matrices can be found in the reference. Assembling the element characteristic expressions yields the global finite element equations. Assuming a Reynolds number of 900, on the boundaries baf and edc the x-axis velocity v1 = 0 and the y-axis velocity v2 = 0; on the boundary cb, the inlet airflow velocity is v1 = 1.5 m/s with v2 = 0; and p is zero at the outlet.
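The quoted Reynolds number is consistent with the stated air properties if the 9 mm outlet width is taken as the characteristic length (an assumption; the paper does not state which length it used):

```python
# Reynolds number check from the stated air properties.
rho = 1.21       # air density, kg/m^3
mu = 1.81e-5     # dynamic viscosity, kg/(m*s)
v = 1.5          # inlet velocity, m/s
L = 9e-3         # characteristic length, m (outlet width; assumption)

Re = rho * v * L / mu
print(f"Re = {Re:.0f}")   # comes out close to the quoted 900
```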
3 Solve with the Finite Element Method

The FLOTRAN CFD analysis function, an advanced tool in the ANSYS software, is used for analyzing two-dimensional and three-dimensional fluid flow fields. It usually involves three steps: modeling, applying loads and solving.
(1) As shown in Fig. 1, a trapezoidal and a rectangular surface represent the jet cavity. The four-node quadrilateral FLUID141 element is selected and the region is divided into 540 elements.
(2) On the inlet boundary, the nodal velocity along the x-axis is set to 1.5 m/s and the y-axis velocity to zero. The x-axial and y-axial velocities are set to zero on all walls, and a zero-pressure boundary condition is applied at the outlet. The air density is set to 1.21 kg/m³ and the viscosity to 1.81×10⁻⁵ kg/(m·s). Finally, the z-axial direction is set as the angular rate direction.
(3) The number of global iterations is set to 60; the problem is then solved and the results are read in. The airflow velocity vector diagram is displayed in the graphics window.

3.1 Results and Discussion

The airflow velocity vector diagrams for different angular rates in the cavity are shown in Fig. 2, and the flow field velocities for different angular rates are shown in Fig. 3. With the cavity static, the jet velocity is symmetrically distributed about the cavity central axis (the x-axis), the jet centre coincides with the cavity central axis, and the flow velocity gradually decreases along the flow path. When an angular rate is input to the cavity, the jet velocity becomes asymmetrically distributed about the cavity central axis and the flow velocity at each point changes with the angular rate.
(a) Static state
(b) The state of angular rate 20°/s
(c) The state of angular rate 40°/s Fig. 2. Airflow velocity vector diagram
(a) Static state
(b) The state of angular rate 20°/s
(c) The state of angular rate 40°/s Fig. 3. Two-dimension flow field contours diagram
To observe the jet velocity distribution more clearly, a straight line AB is drawn across the jet perpendicular to the x-axis, as shown in Fig. 1. From the finite element results, the velocity distribution along AB can be obtained, as shown in Fig. 4. In the static state, the maximum flow velocity occurs at the jet centre C and the flow velocity decreases on both sides; the velocity is zero near the wall, and the jet profile is a parabola symmetric about the cavity central axis. When the cavity turns at an angular rate of 20°/s, the jet centre deflects (the parabola vertex moves off the cavity central axis) from the static position C to D. When the cavity turns at 40°/s, the jet centre deflects from the static position C to E.
Fig. 4. Airflow velocity distribution on the line AB
Fig. 5. Relation curve between voltage and angular rate
The jet characteristic in the rotating closed cavity can be verified from the output characteristic of the piezoelectric fluidic angular rate sensor. In the sensitive cavity of the sensor, two hotwires are placed equidistant from the jet centre on a straight line perpendicular to it. The two hotwires serve as two arms of a Wheatstone bridge. In the static state, the airflow velocity is identical at the two hotwires; the bridge remains balanced and outputs a zero signal. In the turning state, the airflow velocity differs at the two hotwires; the bridge loses balance and outputs a voltage corresponding to the angular rate. The output characteristic curve of the piezoelectric fluidic angular rate sensor is shown in Fig. 5.
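The bridge behaviour described here can be sketched numerically; the supply voltage and resistances below are illustrative values, not sensor data, with each hotwire forming one half-bridge against a fixed resistor:

```python
# Wheatstone bridge output for two hotwire arms r1, r2 against fixed arms rf.
# All values are illustrative assumptions, not sensor data.
def bridge_output(r1, r2, rf=100.0, vs=5.0):
    """Differential voltage between the two half-bridge midpoints."""
    return vs * (r1 / (r1 + rf) - r2 / (r2 + rf))

print(bridge_output(100.0, 100.0))  # static: equal cooling, balanced bridge
print(bridge_output(100.5, 99.5))   # turning: unequal cooling unbalances it
```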
4 Conclusions

In this paper, the finite element method and ANSYS-FLOTRAN CFD software are used to calculate the two-dimensional flow field distribution in a rotating jet cavity. The results show that: (1) In the static state, the airflow velocity gradually decreases along the flow path, and the maximum airflow velocity occurs at the jet centre. The jet centre coincides with the cavity central axis, the flow velocity decreases on both sides of the axis, the velocity is zero near the wall, and the jet profile is a parabola symmetric about the cavity central axis. (2) In the turning state, under the action of the Coriolis force, the jet flow is deflected and the jet centre moves off the cavity central axis; the larger the angular rate, the larger the deflection. The flow field is asymmetrically distributed about the cavity central axis, and the flow velocity at each point changes with the angular rate. The piezoelectric fluidic angular rate sensor uses this characteristic to sense the angular rate.

Acknowledgment. National Natural Science Foundation of China (60772012); Supported by program for New Century Excellent Talents in University; Beijing Excellent Training Grant; Supported by program for Beijing Key Laboratory Open
Topic; Supported by program for Beijing New Century Talents Training Grant Project; Supported by program for Beijing Education Commission Technical Innovation Platform Foundation (KM201110772020); Supported by program for Modern Detection and Control Technology Ministry of Education Key Laboratory.
References

1. Zhang, F.: Modern Piezoelectricity. Science Press, Beijing (2002)
2. Zhang, B.: The Finite Element Method in Fluid Mechanics. China Machine Press, Beijing (1986)
Software Compensation of the Piezoelectric Fluidic Angular Rate Sensor

Xing Wang, Linhua Piao, and Quangang Yu

Sensor Technology Research Centre, Beijing Information and Technology University, Beijing, China
[email protected]
Abstract. The software compensation technology of the piezoelectric fluidic angular rate sensor is described, and the principle and procedure of the temperature and linearity compensation are provided. Using a microprocessor, the temperature and linearity compensation of the sensor is achieved by combining data interpolation with table look-up. The experimental results show that after software compensation, the sensor nonlinearity decreases from 2% to 0.5% and the operating temperature range widens from 0 ℃~45 ℃ to −40 ℃~55 ℃. The software compensation method is easy to use and can also be applied to other sensors of sufficient precision.
Keywords: Piezoelectric fluidic angular rate sensor, Software compensation, Microprocessor.
1 Introduction

The piezoelectric fluidic angular rate sensor has attractive characteristics compared with other sensors: a response time of less than 80 ms, long service life and low cost. In addition, it can still work properly after an impact of 16000g. However, the output temperature characteristic and linearity of the sensor are not very good, so it is difficult to apply in hostile environments. In this paper, the temperature characteristic and output characteristic of the sensor are analyzed, and a microcontroller and a temperature sensor are used to perform temperature compensation and linearity compensation, thereby widening the operating temperature range of the sensor and reducing its nonlinearity.
2 Working Principle

As shown in Fig. 1, the working principle is that the piezoelectric fluidic angular rate sensor makes the circulating air deviate from its original track through the Coriolis force and thereby measures the angular quantity. The circulating air is driven into air beams (jet flow) with velocity Vj by the piezoelectric pump excitation, and the signal is sensed by two parallel hotwires. When an angular rate ωi is input, the Coriolis acceleration makes the jet beams deviate from the centre location

M. Zhu (Ed.): ICCIC 2011, Part VI, CCIS 236, pp. 253–260, 2011. © Springer-Verlag Berlin Heidelberg 2011
and affect the hotwires. The hotwires are cooled by the jet, so their resistance changes, the bridge loses balance, and an electrical signal directly proportional to the angular rate is output. The sensitive mechanism of the hotwire follows the principle of energy exchange. The relationship between the angular rate and the velocity of the airflow beams passing the hotwire is [1]

V² = V_j² + Y² = V_j² + 4L²ω_i²    (1)

In the formula, V is the airflow velocity through the hotwire, Y is the deviation of the airflow beams and L is the distance between the hotwires and the nozzle.
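Equation (1) can be exercised with illustrative numbers (the jet velocity and hotwire-to-nozzle distance below are assumptions, not values from the paper); it also shows how small the velocity change per unit angular rate is:

```python
import math

# Airflow velocity at the hotwires for a given input angular rate (Eq. 1).
def hotwire_velocity(v_j, L, omega):
    """V = sqrt(Vj^2 + 4 L^2 omega^2); v_j in m/s, L in m, omega in rad/s."""
    return math.sqrt(v_j**2 + 4.0 * L**2 * omega**2)

v_j = 3.0                    # jet velocity, m/s (assumed)
L = 5e-3                     # hotwire-to-nozzle distance, m (assumed)
omega = math.radians(40.0)   # 40 deg/s input rate
V = hotwire_velocity(v_j, L, omega)
print(f"V = {V:.6f} m/s")    # only slightly above the jet velocity
```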
Fig. 1. Working principle of the sensor
3 Temperature Characteristic and Output Characteristic of the Sensor

When the ambient temperature is TH and the current provided by the current source is I, the hotwires emit heat, and their temperature rise relative to the ambient temperature TH is [2]

ΔT = I²R·η    (2)
In the formula, η = f(V) is the heat-transfer coefficient of the hotwire, which is related to the airflow velocity V. The resistance value of the hotwire is then

R = R_H(1 + αΔT)    (3)
In the formula, α is the temperature coefficient of the hotwires. At the temperature TH, the resistance value R_H is

R_H = R_0[1 + α(T_H − T_0)]    (4)
In the formula, R_0 is the resistance value of the hotwires at the normal temperature T_0; then

ΔT = I²R_0[1 + αΔT + α(T_H − T_0) + α²ΔT(T_H − T_0)]    (5)
Neglecting the second-order trace term and considering Eq. (2) together with the heat-transfer coefficient η = f(V), the relationship between the hotwire resistance and the airflow velocity under the action of current I is obtained:

R = R_H / [1 − I²R_0 α f(V)]    (6)

or

R = R_0[1 + α(T_H − T_0)] / [1 − I²R_0 α f(V)]    (7)
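Equation (7) can be evaluated numerically; every parameter below is an illustrative assumption (including treating f(V) as a constant heat-transfer figure), not sensor data:

```python
# Hotwire resistance under heating current (Eq. 7); illustrative values only.
R0 = 10.0            # resistance at reference temperature T0, ohms (assumed)
alpha = 0.004        # temperature coefficient, 1/K (assumed)
T0, TH = 20.0, 40.0  # reference and ambient temperature, deg C (assumed)
I = 0.05             # heating current, A (assumed)
fV = 100.0           # heat-transfer term f(V), K/W (assumed constant)

R = R0 * (1.0 + alpha * (TH - T0)) / (1.0 - I**2 * R0 * alpha * fV)
print(f"R = {R:.3f} ohms")
```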
From the above formulas, when the hotwire heating current is fixed, the sensor output is related not only to the airflow velocity but also to the fluid temperature TH. Clearly, to some extent a change of the ambient temperature will affect the angular rate measurement and introduce a measurement error. Meanwhile, in actual manufacture the temperature coefficients of a pair of hotwires in the sensitive element differ slightly, and the resistances of the bridge reference arms also have different temperature coefficients. Therefore, the sensor output drifts with temperature. Over a certain temperature range, the relationship between the sensor zero position and temperature is approximately a quadratic curve, that is

V_0(t_a) = V_0 + α_1(t_a − t_0) + α_2(t_a − t_0)²    (8)
In the formula, α_2 is much less than α, t_a is the ambient temperature and V_0 is the electrical zero at the temperature t_0. Experiment proves that the impact of ambient temperature on the sensor is mainly on the zero-position output, so the sensor must be compensated at the output port. The experimental results show that within a small angular rate range, the output voltage of the piezoelectric fluidic angular rate sensor is

V_o = V_0 + K_r ω    (9)
At this point the sensor output can be regarded as linear; ω is the angular rate in °/s and K_r is a constant. Beyond 45°/s, however, the nonlinearity increases, as shown in Fig. 2. Therefore, linear compensation of the sensor output characteristic is required when designing and making a wide-range piezoelectric fluidic angular rate sensor.
Fig. 2. Characteristic curve of the piezoelectric fluidic angular rate sensor
4 Compensation Method and Results

From the above analysis, the sensor compensation includes zero-temperature compensation and linearity compensation, which can be accomplished in hardware or software. Hardware compensation needs heavy manual debugging and has a high cost; in addition, its accuracy is limited and the working range is hard to widen. Therefore, the microprocessor ADμC812 and the temperature sensor AD590 are used for software compensation.

A. Selection of the microprocessor
The ADμC812 is a microprocessor produced by Analog Devices in the United States. The chip includes an 8052-compatible core, 8K bytes of Flash/electrically erasable program memory, 256 bytes of on-chip data RAM and 640 bytes of Flash/electrically erasable data memory. In addition, there are three 16-bit timers/counters, a watchdog timer, multiple interrupt sources, a UART asynchronous serial communication interface, I2C and SPI serial I/O ports, a power-supply monitoring function and abundant other hardware resources. Compared with other microprocessors, the most distinctive feature of this chip is that it integrates an 8-channel 12-bit ADC input and a 2-channel 12-bit D/A analog voltage output. Its operating temperature range is −40 ℃~+85 ℃, so it is suitable for industrial signal acquisition and intelligent control applications. The chip can also connect to a computer RS232 serial port through a simple asynchronous communication interface, which makes online program simulation, debugging and programming easy and saves the emulator usually needed for a microprocessor; it is thus both easy to debug and cost saving.

B. Compensation method
The compensation principle is shown in Fig. 3.
Fig. 3. Compensation schematic diagram
1) Zero-temperature compensation
(1) Carry out a temperature experiment and sample the zero-position output of the sensitive element at each temperature point. Create a zero-temperature parameter table, and write and save it in microprocessor memory starting at a given address.
(2) In normal operation, first sample the A/D input of the AD590 to obtain the current temperature datum VT. Using this datum, search the zero-temperature parameter table in program memory for the temperature interval [VT1, VT2] containing VT, then read the zero output voltage data V01 and V02 corresponding to VT1 and VT2.
(3) Use the linear interpolation formula (10) to obtain the zero voltage datum V0T corresponding to VT, and output it as an analog voltage through the D/A.
(4) Use the hardware circuit to cut off the zero position. This completes the zero-temperature compensation process.
V_0T = V_01 + (V_02 − V_01)(V_T − V_T1) / (V_T2 − V_T1)    (10)
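Steps (2) and (3) amount to a table search followed by the linear interpolation of Eq. (10); a minimal sketch with a made-up zero-temperature parameter table:

```python
import bisect

# Hypothetical zero-temperature parameter table (all values assumed):
# temperature readings VT and the corresponding zero-output voltages V0.
table_vt = [100, 200, 300, 400, 500]       # A/D temperature codes
table_v0 = [0.52, 0.50, 0.47, 0.43, 0.40]  # zero-output voltages, V

def zero_output(vt):
    """Find the interval [VT1, VT2] containing vt and apply Eq. (10)."""
    i = bisect.bisect_right(table_vt, vt) - 1
    i = max(0, min(i, len(table_vt) - 2))   # clamp to the table range
    vt1, vt2 = table_vt[i], table_vt[i + 1]
    v01, v02 = table_v0[i], table_v0[i + 1]
    return v01 + (v02 - v01) * (vt - vt1) / (vt2 - vt1)

print(zero_output(250))   # halfway between the 200 and 300 table entries
```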
2) Linearity compensation
(1) After the zero-temperature compensation described by formula (10), test the sensor output voltage V0 and obtain the output voltage values corresponding to the different angular rates. Create an angular rate versus output voltage parameter table.
(2) Apply the least-squares method to the table data to obtain the least-squares linear equation (11). Substituting the angular rate values into the equation gives the output voltage parameter table after angular rate linearity compensation.

V_0 = Aω + B    (11)
In the formula, V_0 is the sensor output voltage, ω is the angular rate, A is the slope and B the intercept of the least-squares line, determined by equations (12) and (13), and n is the number of monitoring points.

A = [ Σᵢ ωᵢV′oᵢ − (1/n) Σᵢ ωᵢ Σᵢ V′oᵢ ] / [ Σᵢ ωᵢ² − (1/n)(Σᵢ ωᵢ)² ]    (12)

B = [ Σᵢ ωᵢ² Σᵢ V′oᵢ − Σᵢ ωᵢ Σᵢ ωᵢV′oᵢ ] / [ n Σᵢ ωᵢ² − (Σᵢ ωᵢ)² ]    (13)

(all sums running over i = 1, …, n)
(3) From the previous uncompensated parameter table, obtain the pre-compensation output and compensation coefficient parameter table, then write and save it in microprocessor memory starting at a given address.
(4) In normal operation, after zero-temperature compensation, the output value changes with the angular rate. Use the current output value V0′ to look up the output-versus-linearity-compensation-coefficient table and obtain the compensation coefficient LV corresponding to V0′. Correct V0′ with LV to obtain the linearly compensated output; finally, the D/A outputs the final sensor output value V0.
The comparison of the sensor zero output before and after compensation at different temperatures is shown in Fig. 4, and the result of the sensor linearity compensation is shown in Fig. 5. After software compensation, the sensor linearity and working temperature range are improved by a large margin.
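Equations (12) and (13) are the standard least-squares slope and intercept. The sketch below applies them to made-up calibration points and cross-checks the result against numpy.polyfit:

```python
import numpy as np

# Hypothetical calibration data: angular rate (deg/s) vs. output voltage (V).
omega = np.array([0.0, 15.0, 30.0, 45.0, 60.0])
v_out = np.array([0.01, 0.31, 0.62, 0.90, 1.22])
n = len(omega)

# Slope A (Eq. 12) and intercept B (Eq. 13).
A = (np.sum(omega * v_out) - np.sum(omega) * np.sum(v_out) / n) / \
    (np.sum(omega**2) - np.sum(omega)**2 / n)
B = (np.sum(omega**2) * np.sum(v_out) - np.sum(omega) * np.sum(omega * v_out)) / \
    (n * np.sum(omega**2) - np.sum(omega)**2)

A_ref, B_ref = np.polyfit(omega, v_out, 1)   # same straight-line fit via numpy
print(f"A = {A:.5f} (polyfit {A_ref:.5f}), B = {B:.5f} (polyfit {B_ref:.5f})")
```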
Fig. 4. Output comparison with the sensor zero output compensation
Fig. 5. Output comparison with the sensor linear compensation
5 Conclusions

According to the sensor temperature characteristic and output characteristic, a microprocessor and a temperature sensor are used to perform temperature compensation and linearity compensation for the piezoelectric fluidic angular rate sensor by combining interpolation with table look-up. The experimental results show that the sensor nonlinearity is reduced and the operating temperature range is widened by a large margin: the nonlinearity decreases from 2% to 0.5% and the operating temperature range widens from 0 ℃~45 ℃ to −40 ℃~55 ℃. The software compensation method is easy to use and can also be applied to other sensors of sufficient precision.
Acknowledgment. National Natural Science Foundation of China (60772012); Supported by program for New Century Excellent Talents in University; Beijing Excellent Training Grant; Supported by program for Beijing Key Laboratory Open Topic; Supported by program for Beijing New Century Talents Training Grant Project; Supported by program for Beijing Education Commission Technical Innovation Platform Foundation (KM201110772020); Supported by program for Modern Detection and Control Technology Ministry of Education Key Laboratory.
References

1. Zhang, F.: Sensor Electronics. National Defence Industry Press, Beijing (1991)
2. Cai, H., Zhou, Z.: A hotwires flow sensor research for measuring micro flow. Journal of Measurement Technology 8, 13–16 (1998)
3. Zhong, Y., Huang, Q.: The high accuracy method of linearity and temperature compensation for intramodulated photodetector. Journal of Measurement and Control Technology 5, 27–29 (2000)
Finite Element Analysis for Airflow Angular Rate Sensor Temperature Field and Pressure

Xing Wang, Linhua Piao, and Quangang Yu

Sensor Technology Research Centre, Beijing Information and Technology University, Beijing, China
[email protected]
Abstract. The temperature field and pressure in the sensitive element of the airflow angular rate sensor were analyzed by the finite element method. Using ANSYS-FLOTRAN CFD software, a two-dimensional entity model was built according to the actual size, and the finite element simulation was carried out through a series of procedures such as meshing, applying loads and solving. The temperature field and pressure in the sensitive element were then calculated. The computed results show that, because of the heating of the two parallel hotwires, the temperature of the jet gases around the hotwires rises and a temperature gradient forms in the closed cavity; the pressure near the outlet side is 68% higher than the mean value at the other positions.

Keywords: Airflow angular rate sensor, Temperature field, Pressure, Finite element method.
1 Introduction

The airflow angular rate sensor is a solid-state inertial device that provides a gyro function without the moving parts of a traditional gyro or the suspended components of a piezoelectric gyroscope [1]. Its advantages are low cost and short response time, and its most notable characteristic is high impact resistance: the sensor can still work normally under an impact of 16000g. The working principle of the airflow angular rate sensor is that a piezoelectric pump generates circulating airflow beams (jet flow); a bridge circuit made of hotwires detects the offset of the airflow beams caused by the Coriolis force and outputs an electrical signal directly proportional to the angular rate. In this paper, the finite element analysis software ANSYS is used. According to the actual size of the sensor sensitive element, a two-dimensional model is built and the temperature field and pressure in the sensitive element are calculated, in order to guide the optimization of the sensitive element structure and look for ways to improve sensor performance.
2 Working Principle of the Sensor

As shown in Fig. 1, the working principle is that the airflow angular rate sensor makes the circulating airflow deviate from its original track through the Coriolis force and

M. Zhu (Ed.): ICCIC 2011, Part VI, CCIS 236, pp. 261–266, 2011. © Springer-Verlag Berlin Heidelberg 2011
X. Wang, L. Piao, and Q. Yu
thereby achieves the measurement of the angular rate. The circulating air is formed into airflow beams (jet flow), which are sensed by the two parallel hotwires. When an angular rate is input, the jet beam deviates from the center position and acts differently on the hotwires: the cooling of the hotwires by the jet flow changes, so their resistance changes, the currents change, the testing bridge loses balance, and an electrical signal ΔV directly proportional to the angular rate ωi is output.
Fig. 1. Working principle of the airflow angular rate sensor
3 Physical Model
Since the jet flow deflection of the airflow angular rate sensor takes place in the nozzle body, the airflow channel inside the nozzle body is chosen as the research object. As shown in Fig. 2, for modeling and calculation the nozzle body can be simplified to a two-dimensional structure, and the hotwires (r1, r2) can be simplified to point heat sources (q1, q2). A section is taken through the symmetry axis of the nozzle body, giving the two-dimensional airflow region.
Fig. 2. Simplified two-dimensional diagram of the sensitive cavity
Finite Element Analysis for Airflow Angular Rate Sensor Temperature Field and Pressure 263
The 20×13 mm² rectangle represents the cavity; the nozzle is a hole of 0.5 mm radius and the outlet a hole of 2 mm radius. At a temperature of 293 K and one atmosphere, the air density is 1.21 kg/m³, the viscosity is 1.81×10⁻⁵ kg/(m·s), and the heat source temperature (q1, q2) is 350 K. The airflow inlet velocity is set to 2 m/s and the Reynolds number is 5300.
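As a sanity check on these flow parameters, the Reynolds number Re = ρvL/μ can be evaluated from the quantities above. The characteristic length L used below is an assumption for illustration only; the paper does not state which dimension its Re = 5300 refers to.

```python
def reynolds(rho, v, L, mu):
    # Re = rho * v * L / mu  (dimensionless)
    return rho * v * L / mu

# values from the text: air at 293 K, inlet jet at 2 m/s
rho = 1.21      # kg/m^3
mu = 1.81e-5    # kg/(m*s)
v = 2.0         # m/s

# hypothetical characteristic lengths: nozzle diameter (1 mm) and cavity length (20 mm)
re_nozzle = reynolds(rho, v, 0.001, mu)
re_cavity = reynolds(rho, v, 0.020, mu)
```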
Fig. 3. The finite element model after meshing
4 Solving with the Finite Element Method
A FLOTRAN CFD analysis in ANSYS usually comprises three steps: modeling, applying loads and solving. A. Building the Two-Dimensional Entity Model (1) Choose the analysis type: the FLOTRAN CFD function of ANSYS, which is used for two-dimensional and three-dimensional fluid flow analysis [2]. (2) Define the element type: the four-node quadrilateral FLUID141 element, which can solve the two-dimensional temperature and pressure distribution of a single-phase Newtonian fluid. (3) Generate the entity model: a bottom-up modeling method is used, first defining the keypoints and then defining lines, areas and volumes through them. (4) Meshing: the quality of the mesh directly affects the accuracy of the calculation results [3]. The mesh is generated by controlling the number of segments on each line, with different numbers of segments for lines of different lengths [4]. In this paper the whole section is divided into 3600 elements; the result is shown in Fig. 3.
B. Applying Loads (1) Set boundary conditions: the velocities of the outer boundary nodes along the x-axis and y-axis are set to zero and their temperature to normal temperature (298 K); the jet flow velocity at the nozzle is set to 2 m/s; the outlet pressure is set to zero; the heat source temperature is set to 350 K. (2) Set analysis conditions: the analysis type is thermal analysis; the global iteration number is 200; the file overwrite frequency is 50; the output summary frequency is 50. (3) Set fluid properties: the fluid density, viscosity, thermal conductivity and specific heat are set to AIR-SI. (4) Set the environment parameter: the gravitational acceleration along the y-axis is g. The finite element model after applying loads is shown in Fig. 4.
Fig. 4. The finite element model after applying loads
C. Solving The TDMA solver is used, with the iteration step number set to 100. Finally, the solution is computed, the results are read, and the temperature field and pressure graphs are displayed.
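The TDMA (tri-diagonal matrix algorithm, also known as the Thomas algorithm) mentioned above solves tridiagonal linear systems in O(n) time, which is why CFD codes favor it for line-by-line sweeps. A minimal sketch of the algorithm (not ANSYS's internal implementation) is:

```python
def tdma(a, b, c, d):
    """Solve a tridiagonal system a[i]*x[i-1] + b[i]*x[i] + c[i]*x[i+1] = d[i].
    a[0] and c[-1] are unused. Returns the solution list x."""
    n = len(d)
    cp = [0.0] * n  # modified upper-diagonal coefficients
    dp = [0.0] * n  # modified right-hand side
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    # forward elimination
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    # back substitution
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```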
5 Calculation Results and Discussion
Because the temperature of the outer boundary nodes is fixed at 298 K, the temperature distribution on the outer boundary is identical everywhere and the internal temperature field cannot be seen from outside. To observe the temperature field and pressure distribution inside the closed cavity, the slice observation function of ANSYS FLOTRAN CFD is used: by translating the coordinate system, the two-dimensional temperature field and pressure graphs inside the closed cavity are obtained.
Fig. 5. The airflow angular rate sensor temperature field
The temperature field in the sensitive element of the airflow angular rate sensor is shown in Fig. 5. The results show that, owing to the heating of the two hotwires placed in parallel, the temperature of the jet gas around the hotwires rises and the temperature field forms a gradient in the closed cavity, distributed symmetrically about the y-axis. The pressure field in the sensitive element is shown in Fig. 6. The results show that the pressure near the outlet side is 68% higher than the mean value at other positions. Since a uniform pressure distribution in the cavity benefits the stability of the sensor system, the designer should consider splitting the single small gas outlet into outlets on both sides. This prevents the situation where a large angular rate input deflects the jet column too far, making the jet intensity in the cavity excessive, which causes an uneven pressure distribution and affects system stability.
Fig. 6. The airflow angular rate sensor pressure field
6 Conclusions
In this paper, the temperature and pressure fields of the airflow angular rate sensor are analyzed by the finite element method. The computed results show: (1) Owing to the heating of the two hotwires placed in parallel, the temperature of the jet gas around the hotwires rises and the temperature field forms a gradient in the closed cavity, distributed symmetrically about the y-axis. (2) The pressure near the outlet side is 68% higher than the mean value at other positions. As noted, the two-dimensional entity modeling analysis method can accurately reflect the temperature field and pressure distribution in the sensitive element of the airflow angular rate sensor. The analysis results are correct, reliable and easy to understand, and they also provide a useful basis for applied research on the airflow angular rate sensor. Acknowledgment. National Natural Science Foundation of China (60772012); Program for New Century Excellent Talents in University; Beijing Excellent Training Grant; Beijing Key Laboratory Open Topic Program; Beijing New Century Talents Training Grant Project; Beijing Education Commission Technical Innovation Platform Foundation (KM201110772020); Modern Detection and Control Technology Ministry of Education Key Laboratory Program.
References
1. Zhang, F.: Modern Piezoelectricity. Science Press, Beijing (2002)
2. Wang, G.: Practical Engineering Numerical Simulation and ANSYS Practice. Northwestern Polytechnical University Press, Xi'an (1999)
3. Wang, R., Chen, H., Wang, G.: Analysis of ANSYS finite element mesh dividing. Journal of Tianjin Polytechnic University 21(4), 8–11 (2002)
4. Cheng, K., Yin, G.: The research of building model in ANSYS. Journal of Computer Applications Technology 32(6), 39–40 (2005)
Control System of Electric Vehicle Stereo-Garage
Wang Lixia1, Yang Qiuhe1, and Yang Yuxiang2
1 Hangzhou Vocational & Technical College, China
2 Hangzhou Good Friend Precision Machinery Co., Ltd., Hangzhou, China
[email protected]
Abstract. A stereo-garage for electric vehicle recharging has been studied and developed. The charging-fee management system of the garage adopts an inductive IC card, an intelligent charging system, a safety charging socket and charging-state monitoring technology. Intelligent, automatic parking and charging are achieved, which effectively supports the development of electric vehicles. Keywords: electric vehicle, stereo-garage, charge, PLC.
1 Introduction
Under the dual constraints of the energy crisis and environmental protection, the electric vehicle is the main development direction for automobiles, but the difficulty of charging limits its adoption. A vehicle's "travel" time is much shorter than its "stop" time; if electric vehicles could charge automatically while parked in a garage, using them would become very convenient, which would greatly promote the electric vehicle industry. Since the 1990s, the relevant state departments have organized plans to promote electric vehicle R & D and industrialization [1]. According to the prediction of the 863 major energy-saving and new-energy vehicles project office of the National Ministry of Science, electric vehicle ownership in China is expected to reach 2.66 million by 2015. The electric vehicle charging equipment on the market today consists of charging stations and charging piles. Charging piles are installed in the ground and cannot effectively conserve floor space. Charging stations require greater investment, so there are few of them; for a private car, charging may mean driving dozens of kilometers and then waiting eight or nine hours at the station, which is quite inconvenient. Charging stations are currently suitable only for demonstration fleets of electric public transport vehicles, and they occupy a large area. With the advance of urbanization, land supply has become tighter and the need for parking equipment keeps growing [2]. The main types are lifting-and-transferring [3], laneway and vertical lifting [4]. But present parking equipment is designed for fuel vehicles and cannot solve the problem of charging electric vehicles while they are parked. Therefore, we developed a stereo-garage for electric vehicle recharging.
M. Zhu (Ed.): ICCIC 2011, Part VI, CCIS 236, pp. 267–272, 2011. © Springer-Verlag Berlin Heidelberg 2011
2 The Choice of Charging Mode
Electric vehicles generally have the following three charging modes [5]: General charging mode. The charging cable connects to the AC supply through a socket rated at no less than 16 A that meets the requirements of GB 2099.1. Charging normally takes 5 to 8 hours, which suits homes and parking lots. Fast charging mode. This requires a higher charging current, generally between 150 and 400 A [6], and charges the battery to 50–80% in 20 to 30 minutes. Its disadvantages include high cost and reduced battery life. Battery replacement [7]. The run-out battery is removed and replaced with a fully charged one at a charging station. This has two limitations: first, battery specifications would need to be standardized; second, it is a manual operation, unsuited to full automation. Therefore, the electric vehicle automatic stereo-garage adopts the general charging mode, using an intelligent charger or the AC supply.
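As a rough plausibility check of the 5-to-8-hour figure for the general mode, charging time can be estimated as battery energy divided by effective socket power. The 24 kWh pack size and 90% efficiency below are illustrative assumptions, not values from the paper.

```python
def charge_hours(battery_kwh, mains_volts, socket_amps, efficiency=0.9):
    # time [h] = energy [kWh] / effective charging power [kW]
    power_kw = mains_volts * socket_amps / 1000.0 * efficiency
    return battery_kwh / power_kw

# hypothetical 24 kWh pack on a 220 V / 16 A socket (the GB 2099.1 minimum rating)
hours = charge_hours(24.0, 220.0, 16.0)
```

With these assumed numbers the estimate lands inside the 5-to-8-hour window quoted for the general charging mode.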
3 Stereo-Garage Control System Stereo-garage control system structure is shown in Fig. 1.
Fig. 1. The composition of the stereo-garage control system (subsystems: safe operation, charge management, monitoring, auto-access control, and charging fee)
The stereo-garage is controlled by a programmable logic controller (PLC, type FX2N) [8] and actuated through S-P11 contactors. The host computer (an industrial control computer) communicates with the PLCs over an RS485 serial link [9], forming a local area network that achieves overall management of all parking spaces.
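The paper does not specify the application protocol used on the RS485 link; Modbus RTU is a common choice between an industrial PC and PLCs, so as an illustration only, here is a sketch of building a read-holding-registers request frame with the standard Modbus CRC-16:

```python
def crc16_modbus(frame: bytes) -> int:
    # standard Modbus CRC-16: polynomial 0xA001 (reflected), initial value 0xFFFF
    crc = 0xFFFF
    for byte in frame:
        crc ^= byte
        for _ in range(8):
            if crc & 1:
                crc = (crc >> 1) ^ 0xA001
            else:
                crc >>= 1
    return crc

def read_request(slave: int, start_reg: int, count: int) -> bytes:
    # Modbus RTU "read holding registers" (function code 0x03) request frame
    body = bytes([slave, 0x03,
                  start_reg >> 8, start_reg & 0xFF,
                  count >> 8, count & 0xFF])
    crc = crc16_modbus(body)
    return body + bytes([crc & 0xFF, crc >> 8])  # CRC is transmitted low byte first

# poll one register from the PLC at slave address 1
frame = read_request(1, 0, 1)
```

A convenient property of this CRC is that running it over a frame including its own CRC bytes yields zero, which is how a receiver validates the frame.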
4 Intelligent Charging System
The charging-fee management system is shown schematically in Fig. 2. The charge management system is controlled by the parking control system and includes the IC card device [10], the human-machine interface, the charging devices and the charging monitoring devices.
Fig. 2. Schematic diagram of the charging-fee management system (the stereo-garage control system links the human-machine interface and IC card device to, for each parking space 1# … n#, an intelligent charger, a charging device, a charging monitoring device, and the vehicle's BMS)
The electric vehicle charging control circuit is based on an embedded ARM processor; via the self-service card the user can perform authentication, balance inquiries, billing inquiries and other functions. Through the human-machine interface the user can select the charging mode: billing by time, charging by energy, automatic charge-to-full, or charging by mileage. The charging control system exchanges data with the garage control system over a wired network, while the charger system controller exchanges data with the battery management system (BMS) over a CAN bus. Human-machine interface. This is the interface through which users communicate with the equipment, using push-button operation for parking scheduling, vehicle access, charging and other operations. There are two modes: turning the selector switch to manual enters the manual mode used by maintenance personnel, while turning it to automatic enters the automatic mode for normal use. IC swipe devices. The entrances of the stereo-garage are equipped with IC swipe devices, used for parking, starting and ending charging, fee deduction, receipt printing and so on. Charging device. A dedicated safety socket is installed at each stereo-garage parking space. Charging monitoring device. The state-of-charge monitoring device consists of the ammeter, voltmeter and kilowatt-hour meter installed in the parking control box.
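The billing options above suggest a simple settlement model. In this sketch the tariff values and mode names are illustrative assumptions, not figures from the paper:

```python
# hedged sketch of charging-fee settlement for one parking session;
# tariffs below are hypothetical example values, not the garage's real rates
TARIFF = {"by_time": 2.0,     # yuan per hour of charging
          "by_energy": 1.2}   # yuan per kWh delivered

def settle(mode, hours=0.0, kwh=0.0):
    """Return the fee to deduct from the IC card for one session."""
    if mode == "by_time":
        return TARIFF["by_time"] * hours
    if mode in ("by_energy", "auto_full"):  # auto-full bills the energy actually delivered
        return TARIFF["by_energy"] * kwh
    raise ValueError("unknown charging mode: " + mode)

fee = settle("by_energy", kwh=18.5)  # e.g. 18.5 kWh delivered during the stay
```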
5 Operational Program of the Stereo-Garage
The garage operation flow is shown in Fig. 3. For vehicle storage, the owner presses the "parking" button on the human-machine interface; the parking instruction is transmitted to the control system, which commands the scheduling actuator to bring the called parking space to the designated location. The access control system opens, the owner drives the car onto the parking space and plugs the vehicle's charging cable into the dedicated charging socket on the space. The owner swipes the IC card at the swipe device at the entrance; the system starts timing, the intelligent charging system begins to release energy, and the charging current is delivered through the parking control system to the space to charge the vehicle. At the same time, the charging-state monitoring device starts to monitor the charging current, voltage and other parameters, and the access control system closes. The human-machine interface then instructs the owner to leave the parking equipment, and storage is completed.
Fig. 3. Operational program of the stereo-garage (flowchart covering: calling the parking space into position, opening the access control system, a vehicle-size check with alarm, plugging into the socket, IC-card swipes, charging-mode selection, settlement and receipt printing on pickup, unplugging the charger, and returning the parking space)
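The storage and retrieval flow of Fig. 3 can be sketched as a small state machine. The state and event names below are illustrative inventions, not identifiers from the garage's actual PLC program:

```python
# hedged sketch of the parking-session flow of Fig. 3; states and events are illustrative
TRANSITIONS = {
    ("idle", "press_parking"): "space_called",
    ("space_called", "space_in_place"): "gate_open",
    ("gate_open", "car_parked_and_plugged"): "await_card",
    ("await_card", "card_swiped"): "charging",
    ("charging", "press_pick"): "settling",
    ("settling", "card_swiped"): "gate_open_out",
    ("gate_open_out", "car_left"): "idle",
}

def step(state, event):
    """Advance the session state machine; an invalid event leaves the state unchanged."""
    return TRANSITIONS.get((state, event), state)

# a full store-then-retrieve session returns the system to idle
state = "idle"
for ev in ["press_parking", "space_in_place", "car_parked_and_plugged",
           "card_swiped", "press_pick", "card_swiped", "car_left"]:
    state = step(state, ev)
```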
For vehicle retrieval, the owner presses the "pick" button on the human-machine interface; the instruction is transmitted to the control system, which commands the scheduling actuator to bring the called parking space to the designated location, and the access control system opens. The owner swipes the IC card at the swipe device; the system stops timing and the intelligent charging system ends its power output. The swipe device displays the fee, deducts it from the IC card, and prints the charging receipt. The owner unplugs the charging cable from the dedicated socket, stows it, and drives out of the parking equipment; the access control system then closes and retrieval is complete.
Fig. 4. The stereo-garage
6 Conclusion
The developed stereo-garage (shown in Fig. 4) works well, and vehicles of all energy types can be parked in it. The charging-fee management system adopts IC cards, a safety charging socket and charging-condition monitoring technology. It is safe, efficient and easy to operate, and it automates parking and charging, which effectively supports the development of electric vehicles.
References
1. Cao, B.: New progress in China's electric vehicle technology 1, 114–117 (2007)
2. Wu, Y., Niu, W., Zhu, J.: Lifting and transferring parking monitoring and management system. Handling Equipment 7, 21–25 (2010)
3. Xu, N.: Smart parking application and research. Mechanical and Electrical Product Development and Innovation 1, 60–62 (2009)
4. Chang, H., Yang, Y.: Mechanical and electrical integration of the PLC control parking. Handling Equipment 8, 52–55 (2007)
5. Yang, J.: About electric vehicle charging modes. Electrical Technology 10, 10–11 (2010)
6. Information on, http://wenku.baidu.com/view/191dfecdda38376baf1fae7b.html
7. Wang, X.: Smart charging electric vehicle design and research. In: Proceedings of Society of Automotive Engineers of China Annual Conference 2003, pp. 169–174 (2003)
8. Liao, C.: PLC Based and Application. Mechanical Industry Press, Beijing (2011)
9. Information on, http://china.makepolo.com/product-detail/100024149628.html
10. Wang, J., Jiang, J.: Electric vehicle charging station information management system design and implementation. Micro-computer Information 5, 16–17 (2006)
Research on the Ameliorative Method of Wavelet Ridgeline Based on the Direct Wavelet Transform
Yan Zhe and Li Ping
National Key Laboratory of Mechatronic Engineering and Control, Beijing Institute of Technology, Beijing, China
[email protected]
Abstract. In this paper we propose an ameliorative wavelet ridgeline method based on the direct wavelet transform, building on intra-pulse characteristic analysis founded on wavelet transform theory. The method can effectively estimate the phase mutations of phase-coded signals, reduce the initial frequency estimation error caused by data truncation and boundary effects, obtain accurate intra-pulse modulation parameters, and improve noise immunity. It addresses the reconnaissance of phase-coded and special-system radar signals by airborne electronic reconnaissance systems in a complex electromagnetic environment. Keywords: Wavelet transform, wavelet ridgeline method, phase coding.
1 Introduction
In the electromagnetic environment of the modern battlefield, the number of radiating sources has increased dramatically, placing the acquisition system in a highly dense signal environment while the signal forms of the target emitters grow increasingly complex. Rapid carrier-frequency change, intra-pulse frequency and phase modulation, staggered PRI, noise-like spread-spectrum signals, polarization agility, polarization diversity and polarization-coded signals all destroy the regularity on which signal sorting and identification rely. As a result, the five parameters on which traditional sorting is based (carrier frequency, pulse width, pulse amplitude, time of arrival, DOA) can no longer fully describe modern radar signal characteristics, causing many problems in the practical application of airborne electronic surveillance systems, such as a serious batch-increasing phenomenon, low positioning accuracy, few usable positioning radars, and many false locations. To solve the problem of radar signal detection in the complex electromagnetic environment, a variety of new electronic intelligence reconnaissance technologies and signal processing methods have been adopted at home and abroad, comprehensively exploiting various signal features for sorting, identification and accurate positioning. Among them, intra-pulse feature analysis technology in recent
M. Zhu (Ed.): ICCIC 2011, Part VI, CCIS 236, pp. 273–281, 2011. © Springer-Verlag Berlin Heidelberg 2011
years has developed rapidly, thanks to the appearance of high-speed DSP chips and the maturing of DRFM technology. Methods such as time-domain correlation, modulation-domain analysis, timing cepstrum, digital intermediate-frequency processing and related extraction methods have been proposed and are beginning to be applied in practical reconnaissance systems. At present, the new generation of electronic countermeasure reconnaissance systems, while continuing to sort radar signals by the five classical parameters, also has a certain capability for intra-pulse feature analysis. References [3] and [4] propose effective methods for extracting intra-pulse features of radar signals using the wavelet transform and the approximate wavelet transform respectively, which can supplement the classical sorting features. This paper, also based on wavelet-transform intra-pulse feature analysis, proposes an improved direct wavelet ridgeline method used to estimate the phase mutations of phase-coded signals effectively, to reduce the initial frequency estimation error caused by data truncation and boundary effects, to obtain accurate intra-pulse modulation parameters, to improve noise immunity, and to guarantee convergence.
2 Wavelet Ridge Line Theory
The key to deriving the instantaneous frequency of a radar reconnaissance signal with the wavelet transform is calculating the wavelet ridge line. The ridge can be found by searching the peaks of the three-dimensional time-frequency magnitude plot of the wavelet transform, but this searches the entire time-frequency domain and is computationally very expensive. An iterative method instead needs only a few iterations to converge: starting from a freely chosen initial scale at an initial time, the phase of the wavelet coefficient at that time-scale point is computed, and a derivative-based iterative search finds a scale satisfying formula (4), so the ridge estimate is obtained without computing the wavelet coefficients over the whole time-frequency plane.

A. Wavelet ridge line method
Suppose a radar signal $s(t) = A_s(t)\cos[\varphi_s(t)]$ is an asymptotic signal with analytic form $Z_s(t) = A_s(t)\exp[j\varphi_s(t)]$, and suppose the wavelet $\psi(t) \in L^2(R)$ is an asymptotic wavelet with analytic form $Z_\psi(t) = A_\psi(t)\exp[j\varphi_\psi(t)]$. By the stationary-phase asymptotic expansion, the wavelet transform of the real signal is approximately

$$\mathrm{CWT}_s(a,b) = \frac{1}{\sqrt{a}}\int_{-\infty}^{+\infty} Z_s(t)\,Z_\psi^{*}\!\left(\frac{t-b}{a}\right)\mathrm{d}t \approx \frac{1}{\sqrt{a}}\,Z_s(t_s)\,\psi^{*}\!\left(\frac{t_s-b}{a}\right)\sqrt{\frac{2\pi}{\left|\varphi''_{a,b}(t_s)\right|}}\;e^{\,j\frac{\pi}{4}\operatorname{sgn}\left[\varphi''_{a,b}(t_s)\right]} \qquad (1)$$
The wavelet ridge line is defined as

$$R = \left\{(b,a) \in H^2(R),\; t_s(b,a) = b\right\} \qquad (2)$$

On the wavelet ridge line $\varphi'_{a,b}(t_s)\big|_{b} = 0$, which gives

$$a_r(b) = \varphi'_\psi(0)\,/\,\varphi'_s(0) \qquad (3)$$

so the ridge $a_r(b)$ characterizes the instantaneous frequency of the signal. Once the ridge $a_r(b)$ has been extracted, the instantaneous frequency and amplitude of the original signal can be obtained from the wavelet coefficients $\mathrm{CWT}_s(a,b)$:

$$\omega_s = \varphi'_s(0) = \frac{\varphi'_\psi(0)}{a_r(b)} = \frac{\omega_0}{a_r(b)} \qquad (4)$$

$$A_s(b) = \frac{\left|\mathrm{WT}_s(a_r(b),b)\right|\,\sqrt{a'_r(b)\,\varphi'_\psi(0) + \varphi''_\psi(0)}}{\sqrt{\pi/2}\;A_\psi(0)} \qquad (5)$$
The iterative process is as follows. Choose any initial scale $a_0(t_0)$ as the initial value, set $k = 0$, and iterate:

Step 1:

$$a_{i+1}(t_0 + kT) = \frac{\omega_0}{D_b\left[\psi_{a_i}(t_0 + kT)\right]} \qquad (6)$$

where $D_b$ denotes the difference operator with respect to $b$ and $\psi_{a_i}(\cdot)$ here denotes the phase of the wavelet coefficient at scale $a_i$.

Step 2:

$$D_b\left[\psi_{a_i}(t_0 + kT)\right] = \frac{\psi_{a_i}(t_0 + kT) - \psi_{a_i}(t_0 + (k-1)T)}{T} \qquad (7)$$

When $\left|a_{i+1} - a_i\right| / a_i \le \varepsilon$ (with $\varepsilon$ an arbitrarily small positive number), set $a(t_0 + kT) = a_i$, take $a_0(t_0 + (k+1)T) = a_i(t_0 + kT)$ and $k = k + 1$; otherwise continue with step 2. Repeat the iteration until all points have been computed. This fast iteration yields the wavelet ridge $a_r(b)$, from which the instantaneous frequency of the signal is calculated; this fixed-point ridge iteration is the wavelet ridge line method.
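The fixed-point iteration above can be sketched numerically. The sketch below assumes a Morlet wavelet with center frequency $\omega_0 = 6$ and a single-tone test signal (both illustrative choices), computes one wavelet coefficient per call by direct summation, and iterates the scale until the relative change falls below $\varepsilon$:

```python
import numpy as np

def morlet(t, w0=6.0):
    # Morlet wavelet: exp(-t^2/2) * exp(j*w0*t)
    return np.exp(-t**2 / 2) * np.exp(1j * w0 * t)

def cwt_coef(s, t, a, b_idx, w0=6.0):
    # single wavelet coefficient CWT_s(a, b) by direct summation
    dt = t[1] - t[0]
    b = t[b_idx]
    return a ** -0.5 * np.sum(s * np.conj(morlet((t - b) / a, w0))) * dt

def ridge_scale(s, t, b_idx, a0, w0=6.0, eps=1e-4, max_iter=50):
    # fixed-point iteration of formulas (6)-(7): a_{i+1} = w0 / D_b[phase at scale a_i]
    dt = t[1] - t[0]
    a = a0
    for _ in range(max_iter):
        p1 = cwt_coef(s, t, a, b_idx, w0)
        p0 = cwt_coef(s, t, a, b_idx - 1, w0)
        dphase = np.angle(p1 * np.conj(p0)) / dt  # wrap-safe phase difference
        a_new = w0 / dphase
        if abs(a_new - a) / a <= eps:
            return a_new
        a = a_new
    return a

# single-tone test: 5 Hz cosine sampled at 500 Hz
t = np.arange(0, 2, 0.002)
s = np.cos(2 * np.pi * 5.0 * t)
a_r = ridge_scale(s, t, len(t) // 2, a0=0.3)
f_est = 6.0 / a_r / (2 * np.pi)  # formula (4): omega = w0 / a_r
```

Note that even with an initial scale well off the ridge (a0 = 0.3 versus a ridge value near 0.19 for this tone), the iteration converges in a couple of steps, which is the efficiency argument made in the text.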
B. Advantages and disadvantages of the wavelet ridge line method
Simulation experiments show that the wavelet ridge line method extracts instantaneous signal parameters well: it can extract the instantaneous parameters of single-carrier-frequency signals, linear FM signals, nonlinear FM signals (such as sinusoidally modulated signals), phase-coded signals and many other modulation types. But the method still has the following weaknesses:
1) Freely choosing the initial scale a0(t0) as the initial value cannot guarantee convergence of the iteration. The convergence condition of the algorithm rests on the correspondence between the ridge scale and the instantaneous frequency of the signal; a blindly selected initial value may prevent the iteration from converging.
2) The iterative estimates contain disturbances, producing large deviations between the estimated values and the true values.
3) Because of data truncation, boundary effects, and improper selection of ω0 in the Morlet wavelet, the instantaneous frequency estimated at the start of the signal deviates greatly from the actual frequency; that is, the variance of the initial frequency estimate is large.
4) Since the wavelet ridge line method obtains intra-pulse modulation information by estimating the instantaneous frequency, it cannot completely identify the intra-pulse modulation features of phase-coded signals and is strongly affected by noise; a new algorithm must be studied for extracting the instantaneous intra-pulse parameters of phase-coded signals.
3 Improved Wavelet Ridge Line Method Based on the Direct Wavelet Transform
Study of phase-coded (PCM) signals shows that the instantaneous frequency cannot reveal their phase modulation function: if the wavelet ridge line method based on instantaneous frequency is applied to such a time-varying signal, the extracted instantaneous-frequency curve is simply a straight line. Moreover, because the frequency of a PCM signal is constant, it does not satisfy the asymptotic condition, so its wavelet transform coefficients cannot be computed by the stationary-phase principle. Fortunately, an expression for the wavelet transform of a phase-coded signal can be obtained by direct integration.
A. Some problems to be solved
1) The initial scale a0(t0) must not be chosen freely as the initial value of ar(t0). It should satisfy the following principle: given the instantaneous frequency ωs(t0) of the signal at time t0, the corresponding a0(t0) should fall near ar(t0), and ar(t0) and a0(t0) should not differ by orders of magnitude. We can take the central finite difference (CFD) estimate of the instantaneous frequency ωs(t0) at time t0 and then obtain a0(t0) from formula (4).
2) According to the initial instantaneous frequency estimate ωs(t0), the Morlet wavelet parameter ω0 must be chosen appropriately. For the Morlet wavelet to be asymptotic and admissible, ω0 should be no less than 5.33. The choice of ω0 is of vital significance for the frequency extraction error: different values of ω0 give clearly different extraction errors in different frequency bands, so choosing the appropriate ω0 for each band yields a more accurate frequency estimate.
3) Because the signal frequency range is very wide and the sampling frequency varies, the tolerance ε in step 2 of the iteration is hard to choose if the real signal frequency is used directly. Therefore, in the simulation of the improved wavelet ridge line algorithm, we normalize the actual signal frequency by the sampling frequency, assuming fs = 10, and then calculate. The actual signal frequency is recovered afterwards by multiplying the result by the actual sampling frequency.
4) Smooth the estimation results by taking expectations or by polynomial modeling, to eliminate the random disturbance caused by noise and reduce the estimation variance.
B. The wavelet transform of phase-coded signals
The continuous wavelet transform (CWT) is defined as
CWTψ s ) =| a |−1/ 2
∫
+∞
−∞
s (t )ψ (
t −b )dt a
( 8)
PCM signal analytical expressions:
s (t ) = A exp[jϕ (t )] exp(jωt )
( 9)
Morlet wavelet function expression:
ψ (t ) = exp(−t 2 / 2 + jωt ) = Aψ (t ) exp[jϕψ (t )]
( 10)
Infers PCM signal wavelet transform is: (CWTψ s )(a, b ) = 2 π A exp[ −
By (11), When
(ω a − ωψ ) 2 2
]exp{j[ω b + ϕ (b)]}
ωa −ωψ = 0 , | (CWTψ s )(a, b) |
( 11)
have great value that is mold of
wavelet transform coefficient and time scale factor b is irrelevant. PCM signal wavelet ridge line is a straight line at this moment, will ωa − ωψ = 0 generation into (11) obtains in the ridge line of wavelet coefficients phase:
278
Y. Zhe and L. Ping
b + ϕ (b ) (12) a By formula (12) can see, only request a wavelet ridge line wavelet transform coefficient of phase can get PCM signal phase modulation function ϕ (b) . Because in practical projects, our concern is PCM signal in phase point mutations phase changes of concrete numerical value, so just extracted adjacent code element between the phase mutations value. At this time, definition phase modulation function ϕ (b) the difference in value is: Arg ( a , b ) = ω b + ϕ ( b ) = ω ψ
ϕ ' (b) = Arg(b − Δb, aψ ) − Arg(b + Δb, aψ )
( 13)
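The phase-jump extraction of formulas (12)-(13) can be sketched numerically. The sketch below is an adaptation under illustrative assumptions (a 50 Hz BPSK test pulse with a π jump at t = 0.5 s, Morlet center frequency 6): the wavelet coefficient is computed at the fixed ridge scale $a_\psi = \omega_\psi/\omega$, the carrier term $\omega b$ is removed from its phase, and the residual phase reveals the jump.

```python
import numpy as np

def morlet(t, w_psi=6.0):
    # Morlet wavelet: exp(-t^2/2) * exp(j*w_psi*t)
    return np.exp(-t**2 / 2) * np.exp(1j * w_psi * t)

def cwt_fixed_scale(s, t, a, w_psi=6.0):
    # CWT of s at one fixed scale a for every shift b (conjugate kernel, as in formula (1))
    dt = t[1] - t[0]
    W = np.empty(len(t), dtype=complex)
    for i, b in enumerate(t):
        W[i] = abs(a) ** -0.5 * np.sum(s * np.conj(morlet((t - b) / a, w_psi))) * dt
    return W

# illustrative BPSK pulse: 50 Hz carrier, pi phase jump at t = 0.5 s
w_c = 2 * np.pi * 50.0
t = np.arange(0, 1, 0.001)
phi = np.where(t < 0.5, 0.0, np.pi)
s = np.cos(w_c * t + phi)

a_psi = 6.0 / w_c                                   # ridge scale: omega * a = omega_psi
W = cwt_fixed_scale(s, t, a_psi)
resid = np.angle(W * np.exp(-1j * w_c * t))          # Arg(a_psi, b) minus the carrier term
jump = np.angle(np.exp(1j * (resid[700] - resid[300])))  # phase change across the transition
```

Sampling the residual phase well before and well after the code transition (indices 300 and 700 here) avoids the unstable region where the coefficient magnitude dips at the jump itself.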
The steps of the method are:
1) Find the wavelet ridge $a_r(b)$ and the instantaneous frequency $\omega$ of the signal with the improved wavelet ridge line algorithm.
2) Compute the wavelet coefficient distribution $(\mathrm{CWT}_\psi s)(a_r(b), b)$ along the wavelet ridge $a_r(b)$.
3) Moving $b$ along the time axis, compute from formula (13) the set of differences of the phase modulation function along the wavelet ridge.
C. Tests on actual measured signals
In a certain type of electronic reconnaissance system, the sorting pre-processing of received phase-coded signals was imperfect, leading to a serious batch-increasing phenomenon and thereby reduced positioning accuracy. We replaced the original signal processing algorithm with the improved direct wavelet ridge line method based on the wavelet transform; the actual test conditions were as follows:
1) actual measurement of Bi-phase codes.
Fig. 1. The time-domain waveform for a monopulse of the Bi-phase code (amplitude vs. time sequence)
Fig. 2. The frequency-domain power spectrum for a monopulse of the Bi-phase code (power spectral density in dB vs. frequency in Hz)
Fig. 3. The jump-phase estimate for a monopulse of the Bi-phase code
2) actual measurement of four-phase codes.
Fig. 4. The time-domain waveform for a monopulse of the four-phase code (amplitude vs. time sequence)
Fig. 5. The frequency-domain power spectrum for a monopulse of the four-phase code (power spectral density in dB vs. frequency in Hz)
Fig. 6. The jump-phase estimate for a monopulse of the four-phase code
Comparative tests on real data show that the improved wavelet ridge line method based on instantaneous-phase extraction can accurately extract the phase-modulation function of phase-coded signals and can distinguish bi-phase codes from four-phase codes, with a quantization error of less than two sample points. Because what is extracted is the phase jump at a mutation point, and the wavelet modulus at a mutation point far exceeds that at the other sampling points, the algorithm also has strong anti-noise ability.
4 Conclusion
The traditional wavelet ridge line method cannot completely identify the intra-pulse modulation features of phase-coded signals and is strongly affected by noise. To address this, the paper proposes an improved method for extracting the instantaneous intra-pulse parameters of phase-coded signals. In this method we obtain the wavelet-transform
expression of the phase-coded signal by direct integration and build an improved direct wavelet ridge line method based on the wavelet transform. The method efficiently estimates the phase-jump values of phase-coded signals, reduces the initial frequency-estimation error caused by data truncation and boundary effects, and guarantees convergence. Simulation tests show that the method obtains accurate intra-pulse modulation parameters with low computational complexity and strong anti-noise ability.
Study on the Transportation Route Decision-Making of Hazardous Material Based on N-Shortest Path Algorithm and Entropy Model

Ma Changxi1, Guo Yixin1, and Qi Bo2
1 School of Traffic and Transportation, Lanzhou Jiaotong University, China
2 Personnel Department, Lanzhou Jiaotong University, China
[email protected]
Abstract. Transportation accidents involving hazardous materials can cause heavy casualties, long-term environmental pollution and huge economic losses, and the public and the media are very sensitive to such accidents. Selecting a reasonable transport route is therefore of great significance for accident prevention. For a hazardous-materials highway transportation network, this paper first defines the integrated transport cost and the integrated danger coefficient; the N-shortest path algorithm is then used to determine the parameters of the alternative transportation routes; next, the steps of an algorithm for finding the best transportation route are constructed using an entropy model; finally, an example demonstrates the feasibility of the decision-making method. The method can help the relevant government authorities and transport enterprises choose reasonable transportation routes.

Keywords: Hazardous material, Transportation route, Decision-making, N-shortest path algorithm, Entropy model.
1 Introduction
Transportation is an important link in the life cycle of hazardous materials, and their freight volume is increasing year by year. Although the accident rate is currently very low, an accident can cause heavy casualties, long-term environmental pollution and huge economic losses; the public and the media are very sensitive to such events, which can cause social instability. For example, a transportation accident involving explosive hazardous materials can seriously damage tunnels and bridges and may paralyze a transportation route for a long time, and an explosion may also cause heavy casualties and property losses. A leak of highly toxic hazardous material along a route can contaminate water resources and negatively affect production and life in the region. A transportation incident involving radioactive hazardous materials poses a serious potential risk to urban residents and public buildings; should an accident occur in such an area, the consequences would be disastrous.

M. Zhu (Ed.): ICCIC 2011, Part VI, CCIS 236, pp. 282–289, 2011. © Springer-Verlag Berlin Heidelberg 2011

Therefore it is necessary to select a path which has higher security and has the least influence on the
Study on the Transportation Route Decision-Making of Hazardous Material
283
surrounding people, property and environment from among a number of transportation routes, and to provide decision support to government monitoring departments and hazardous-material production enterprises.
The decision-making problem of hazardous-materials transportation routes is of great concern to governments and the public. Joy et al. used the shortest path algorithm in a number of empirical studies of hazardous-materials transport, and Brogan et al. expanded its applications. Glickman, Ivancie, Pijawka et al. and Kessler et al. studied route choice for hazardous-materials transportation using the population coverage fraction as the objective [1-6]. Batta and Chiu took as the selection criterion the product of the minimum total distance between hazardous-materials transport vehicles and the population accumulation centers in the areas influenced by the transport paths, and the total number of people affected [7]. Because a single-objective model cannot resolve the conflict between transportation risk and transportation cost, the multi-objective route problem for hazardous-materials transportation was developed further. Shorys et al. studied the multi-objective problem earliest. Shorys considered two objectives, minimizing driving distance and minimizing population coverage rate; he noted that the optimal decision must come from the Pareto optimal solution set, and that different weights can be applied to combine the two objectives [8]. Saccamonno and Chan tested three different transport strategies for hazardous materials: minimizing risk, minimizing accident rate, and minimizing operating cost. They also assessed the sensitivity of factors such as road state and route under a given environment, and analyzed the benefits and losses of each strategy with respect to accidents [9]. Current et al. put forward a two-objective model, minimizing the population covered by the path and minimizing driving distance, resulting in a balanced trade-off curve for an OD pair determined in advance [10]. Abkowitz and Cheng proposed a two-objective path model considering risk and cost; they treated disaster, injury and property damage as a whole to form the risk of hazardous-materials transport, and the Pareto optimal choice for a single OD pair was composed of these integrated risks and costs [11]. Zografos and Davis proposed a path model based on the rules of hazardous materials, the population affected by the risk, the risk to special populations, and loss of property, using transport time instead of cost; the model was treated as a multi-criterion shortest path problem and solved by goal programming with priorities [12]. Ma Changxi has studied the transport-path problem of hazardous materials under complex environments [13-16].
However, when choosing a transportation route for hazardous materials, it is not appropriate to base the decision only on the smallest integrated transport cost. Taking the path with the smallest integrated danger coefficient as the optimal choice has a certain rationality, but it does not take transport costs into account: usually the lowest integrated danger coefficient means the transport avoids densely populated areas and takes remote paths, which increases the transport length and leads to high transportation costs. For carriers, both goals must be considered together when choosing the path. This paper, for a hazardous-material transport network, proposes the definitions of integrated transport cost and integrated danger coefficient, then transforms the hazardous-material highway transportation path decision-making question into the
284
M. Changxi, G. Yixin, and Q. Bo
shortest route problem, and uses the N-shortest path algorithm and the entropy model to solve it. The paper is organized as follows: Section 2 defines the integrated transport cost and the integrated danger coefficient; Section 3 determines the parameters of the alternative transportation paths using the N-shortest path algorithm; Section 4 describes the steps of finding the best path based on the entropy model; Section 5 gives an example analysis; Section 6 presents the conclusions.
2 The Definition of Integrated Transport Cost and Integrated Danger Coefficient for Hazardous Materials
The integrated transport cost is defined as follows: considering the transport of hazardous materials at its highest degree of hazard, it is the sum of the expenses incurred on a road section, including fuel costs during the road transportation process and the tolls of toll expressways, i.e., the transportation cost.
The integrated danger coefficient is defined as follows: considering the transport of hazardous materials at its highest degree of hazard, it is a comprehensive index of the probability of an accident caused by the transport and the severity of the consequences of an accident occurring on the road section; it reflects the degree of risk of transporting hazardous materials on this path. Its value lies between 0 and 1: the larger the value, the less suitable the section is for transporting hazardous materials. This article assumes that when the integrated danger coefficient is greater than 0.6, the section is deleted and another road is selected for transport.
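The 0.6-threshold rule amounts to a simple pre-filter on the network before any route search. A minimal sketch follows; the graph layout and coefficient values here are invented for illustration and are not the paper's example network:

```python
def prune_dangerous_edges(graph, danger, threshold=0.6):
    """Drop road sections whose integrated danger coefficient exceeds the
    threshold (0.6 per the text) before searching for routes.
    `graph` maps node -> {neighbor: integrated transport cost};
    `danger` maps (node, neighbor) -> integrated danger coefficient."""
    return {u: {v: w for v, w in nbrs.items() if danger[(u, v)] <= threshold}
            for u, nbrs in graph.items()}

# illustrative network: the V4->V6 section is too dangerous and gets dropped
graph = {"V1": {"V2": 20, "V4": 25}, "V2": {"V5": 30},
         "V4": {"V6": 18}, "V5": {"V7": 20}, "V6": {"V7": 21}, "V7": {}}
danger = {("V1", "V2"): 0.35, ("V1", "V4"): 0.40, ("V2", "V5"): 0.29,
          ("V4", "V6"): 0.65, ("V5", "V7"): 0.30, ("V6", "V7"): 0.36}
safe = prune_dangerous_edges(graph, danger)
```

Running the route search on the pruned graph both speeds up the search and guarantees that no candidate route contains an unacceptably dangerous section.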
3 Determining the Hazardous-Material Alternative Path Parameters Based on the N-Shortest Path Algorithm
In deciding the transport route of hazardous materials, only the integrated transport costs and danger coefficients of the individual sections are known; how should the best path be chosen? We first use the N-shortest path algorithm to calculate the parameters of all alternative transportation paths in the network, the parameters of each path being its integrated transport cost and integrated danger coefficient, and then use the entropy model to determine the best path. The N-shortest path algorithm calculates the parameters of all alternative transportation paths as follows.
Suppose the directed network G = (V, E, W) (V: vertices; E: edges; W: edge lengths) is given, and the shortest paths from s to t are required. Define P as the set of complete paths from s to t. First use the labeling algorithm to find the shortest path p1 from s to t, then take p1 out of P and place it in the set Y:

P = P − {p1}    (1)

Then take the shortest remaining path p2 out of P and put it in Y. Proceeding in this way through n selections, the paths deposited in Y are the required ones. The problem now is how to find the shortest remaining path in P, for which the following definition is given.
If a path Vs Vb(1) Vb(2) … Vb(k) Vb(k+1) Vt exists in the set P, and for some path pm in the set Y the edge (Vb(k), Vb(k+1)) is not in pm while the part from Vb(k+1) to Vt is the same as in pm (allowing Vb(k+1) = Vt), then this path is called the deviation of pm at (Vb(k), Vb(k+1)), denoted D(pm, Vb(k), Vb(k+1)). The length of any path in the set P is greater than or equal to that of some deviation path of the set Y. Accordingly, the second shortest path p2 can be obtained on the basis of the shortest path p1 already found: on p1, starting from Vt and moving backwards to the vertex after Vs, for every vertex Vj find all edges (Vi, Vj) belonging to P−{p1} and obtain the deviation path D(p1, Vi, Vj) and its length; among all these deviation paths, the one with the smallest length is the second shortest path p2.
In the general case, if pj is a deviation of pi, let V(pj pi) = V(pj) − V(pi) be the difference between the vertex set of pj and the vertex set of pi, where V(pj) and V(pi) are the vertex sets of pj and pi respectively; then when calculating D(pj), only the vertices in V(pj pi) need be considered. In the same way, any number of shortest paths can be obtained step by step (as long as they exist). The calculation process for undirected networks is basically the same as for directed networks. If the network is a loop-free directed network, then the shortest paths from Vs to all vertices obtained the first time (i.e., the permanent labels of all vertices) remain applicable in each subsequent expansion of the shortest-path calculation. But in an undirected network (where each edge is treated as two directed edges with opposite directions and the same value), the permanent label (shortest path length) of each point obtained the first time may not be usable in the subsequent deviation-path searches. Generally, when the computer searches for a deviation path through (Vi, Vj), it first inspects the permanent label L(Vi) of the vertex Vi obtained when calculating p1: if L(Vi)+W(i,j) is not equal to L(Vj), then the shortest path from Vs to Vi found when seeking p1 is still valid; otherwise, all the outgoing edges of Vi should be disconnected and the shortest path from Vs to Vi recalculated. In this way, the parameters of each alternative transportation path are obtained.
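The labeling-plus-deviation-path scheme described above can be condensed, for illustration, into a best-first enumeration of loopless paths. This is a compact stand-in for the deviation-path algorithm rather than the algorithm itself, and the toy network's weights are invented:

```python
import heapq

def k_shortest_paths(graph, s, t, k):
    """Enumerate up to k loopless s-t paths in ascending cost order by
    best-first search over partial paths (a simple stand-in for the
    labeling-plus-deviation-path scheme described in the text)."""
    heap = [(0, [s])]          # (cost so far, partial path)
    found = []
    while heap and len(found) < k:
        cost, path = heapq.heappop(heap)
        if path[-1] == t:
            found.append((cost, path))
            continue
        for nxt, w in graph.get(path[-1], {}).items():
            if nxt not in path:                 # keep the path loopless
                heapq.heappush(heap, (cost + w, path + [nxt]))
    return found

# toy directed network with invented edge weights
graph = {"s": {"a": 1, "b": 4}, "a": {"b": 1, "t": 5}, "b": {"t": 1}, "t": {}}
paths = k_shortest_paths(graph, "s", "t", 3)
```

Because complete paths leave the heap in nondecreasing cost order, the first k extracted at t are the k shortest loopless routes; the deviation-path scheme achieves the same result more efficiently on large networks.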
4 The Decision-Making Steps of the Hazardous-Materials Transportation Route Based on the Entropy Model
To avoid deviations in the decision result caused by subjective factors, the entropy model is introduced here to determine the weight coefficients. In information theory, the entropy value reflects the degree of disorder of information: the smaller the value, the smaller the degree of disorder of the system, so the ordering and effectiveness of the information received by the system can be evaluated through the information entropy. That is, the index weights are determined through a judgment matrix constructed from the evaluating indicator values, which eliminates as far as possible the human disturbance in the calculation of each index weight and makes the evaluation more realistic. The calculation steps are as follows.
Step 1. Construct the judgment matrix of the m evaluating indicators for the n transportation paths:

R = (x_ji)_{n×m}  (i = 1, 2, …, m; j = 1, 2, …, n)    (2)
Step 2. Normalize the judgment matrix to obtain the matrix B, whose elements are:

b_ji = (x_ji − min x_ji) / (max x_ji − min x_ji)    (3)

where min x_ji and max x_ji are respectively the least satisfactory value and the most satisfactory value of the same evaluation index over the n paths.
Where minxji and maxxji respectively is the least satisfactory value and the most satisfactory value under the same evaluation index system j . Step3. Calculating the entropy of evaluating index. According to the definition of entropy, n appraisal things, m evaluating indexs ,the entropy of evaluating index is
Hi = −
1 ⎡n ⎤ f ji ln f ji ⎥ ∑ ⎢ ln n ⎣ 1 ⎦ f ji =
( 4)
b ji
( 5)
n
∑b J =1
ji
When fji=0, lnfji is insignificant, its definition should be amended, is defined as follows.
f ji =
1 + b ji
∑ (1 + b ) n
( 6)
ji
J =1
Step 4. Calculate the weight of each evaluating index:

W = (W_i)_{1×m}    (7)

W_i = (1 − H_i) / (m − ∑_{i=1}^{m} H_i)    (8)

∑_{i=1}^{m} W_i = 1    (9)
Step 5. Apply the comprehensive-value algorithm of AHP to calculate the integrated evaluation value of each transport path and thereby determine the optimum transport path.
To avoid the speed of decision-making being affected by too many alternative paths, this article adopts two measures. First, before using the N-shortest path algorithm to search for the N alternative paths, the sections with an integrated danger coefficient greater than 0.6 are deleted; this also guarantees the safety of hazardous-materials transportation. Second, the transportation paths with the larger integrated transport costs are discarded, reserving a certain number of alternative paths.
Study on the Transportation Route Decision-Making of Hazardous Material
287
5 An Example
Figure 1 shows a partial road network in South China; the values between two nodes are the integrated transportation cost and the integrated danger coefficient of each road section. Now suppose a batch of flammable hazardous materials is to be transported from V1 to V7; how is the optimum transportation route determined? We use the N-shortest path algorithm and put the weights into a matrix; note that the program implementing the N-shortest path algorithm introduces a virtual point, so the serial number of each node must be increased by 1. The point matrix is expressed as follows (−1 denotes no edge):
{{-1, 0,-1,-1,-1,-1,-1,-1},
{-1,-1, 1,-1,-1, 5,-1,-1},
{-1,-1,-1, 4, 2,-1,-1,-1},
{-1,-1,-1,-1, 1,-1,-1, 4},
{-1,-1,-1,-1,-1, 2, 2, 4},
{-1,-1, 1,-1,-1,-1,-1,-1},
{-1,-1,-1,-1,-1, 3,-1, 1},
{-1,-1,-1,-1,-1,-1,-1,-1}}
First, according to the integrated danger coefficients, the road sections whose coefficient is greater than 0.6 are deleted; the integrated danger coefficient of the V6 road section is 0.65, greater than 0.6, so this section is deleted. Then, running the program of the N-shortest path algorithm, the computed parameters of each transportation route are as shown in Table 1.
Fig. 1. Partial road network in South China
Having obtained Table 1, the comprehensive evaluation values of the above alternative paths are then calculated according to the calculation steps of the entropy model;
Table 1. The calculation results based on the N-shortest path algorithm

No.  Path          Integrated       Integrated danger        Biggest integrated
                   transport cost   coefficients             danger coefficient
1    1→4→6→7       64               0.40→0.33→0.36           0.40
2    1→2→5→7       70               0.35→0.29→0.30           0.35
3    1→3→5→7       71               0.42→0.40→0.30           0.42
4    1→4→6→5→7     73               0.40→0.33→0.41→0.30      0.41
5    1→2→5→6→7     75               0.35→0.39→0.41→0.36      0.41
through the calculation, the optimum path is 1→2→5→7: the integrated transport cost is 70, the largest integrated danger coefficient is 0.35, and the integrated danger coefficients along the road sections are 0.35, 0.29 and 0.30 respectively.
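As a check, the entropy-model steps of Section 4 can be run on the Table 1 figures. Both indicators are treated as "smaller is better", and a weighted sum of the normalized scores stands in for the AHP comprehensive-value step (an assumption, since the paper does not spell that step out); under these assumptions the computation reproduces 1→2→5→7 as the optimum:

```python
import numpy as np

# Table 1 indicators: integrated transport cost, biggest danger coefficient
cost   = np.array([64, 70, 71, 73, 75], dtype=float)
danger = np.array([0.40, 0.35, 0.42, 0.41, 0.41])

def normalize(x):
    # Eq. (3); both indicators are "smaller is better", so the minimum is
    # the most satisfactory value and the maximum the least satisfactory
    return (x.max() - x) / (x.max() - x.min())

B = np.column_stack([normalize(cost), normalize(danger)])   # n x m
n, m = B.shape

F = (1.0 + B) / (1.0 + B).sum(axis=0)        # Eq. (6), avoids ln 0
H = -(F * np.log(F)).sum(axis=0) / np.log(n)  # Eq. (4), entropy per index
W = (1.0 - H) / (m - H.sum())                 # Eq. (8), entropy weights

scores = B @ W           # weighted comprehensive value of each path
best = int(np.argmax(scores))   # row index into Table 1 (0-based)
```

The entropy weights here favor the danger indicator slightly, because its normalized values are more dispersed than the costs', which is exactly the behavior the entropy model is meant to provide.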
6 Conclusions
The accident rate of hazardous materials during transportation is very low, but it deserves full attention, because once an accident happens it causes heavy casualties, long-term environmental pollution and huge economic losses. For a hazardous-materials highway transportation network, this paper first defined the integrated transport cost and the integrated danger coefficient; the N-shortest path algorithm was then used to determine the parameters of the alternative transportation routes; next, the steps of an algorithm for finding the best transportation route were constructed using an entropy model; finally, an example demonstrated the feasibility of the decision-making method. The method can help the relevant government authorities and transport enterprises choose reasonable transportation routes. The construction of a decision support system for hazardous-materials transportation is the focus of future research.
Acknowledgments. The authors would like to thank the support of the Natural Science Foundation of China under Grants No. 61064012 and No. 60870008, and of the Science and Technology Plan Project of Gansu Province. The paper was also supported by the Lanzhou and Baiyin Municipal Science and Technology Projects.
References
1. Joy, D., et al.: Predicting Transportation Routes for Radioactive Wastes. In: Proc. Symp. Waste Mgmt., pp. 416–425 (1981)
2. Brogan, J., Cashwell, J.: Routing Models for the Transportation of Hazardous Materials: State-Level Enhancements and Modifications. Transportation Research Record 1020, 19–22 (1985)
3. Glickman, T.S.: Rerouting Railroad Shipments of Hazardous Materials to Avoid Populated Areas. Accident Analysis & Prevention 15, 329–335 (1983)
4. Ivancie, F.: Hazardous Material Routing Study. Final Report prepared by Portland Office of Energy Management, Portland, Ore. (1984)
5. Pijawka, D., Foote, S., Soedilo, A.: Risk Assessment of Transporting Hazardous Material: Route Analysis and Hazard Management. Transportation Research Record 1020, 1–6 (1985)
6. Kessler, D.: Establishing Hazardous Materials Truck Routes for Shipments Through the Dallas–Fort Worth Area. In: Recent Advances in Hazardous Materials Transportation Research, Transportation Research Board, pp. 79–87 (1986)
7. Batta, R., Chiu, S.: Optimal Obnoxious Paths on a Network: Transportation of Hazardous Materials. Operations Research 36, 84–92 (1988)
8. Shorys, D.: A Model for the Selection of Shipping Routes and Storage Locations for a Hazardous Substance (dissertation). Johns Hopkins University, Baltimore (1981)
9. Saccamonno, F.F., Chan, A.: Economic Evaluation of Routing Strategies for Hazardous Road Shipments. Transportation Research Record 1020, 12–18 (1985)
10. Current, J.R., et al.: The Minimum-Covering/Shortest-Path Problem. Decision Sciences 19, 490–503 (1988)
11. Abkowitz, M., Cheng, P.: Developing a Risk/Cost Framework for Routing Truck Movements of Hazardous Materials. Accident Analysis & Prevention 20, 39–51 (1988)
12. Zografos, K.G., Davis, C.F.: Multi-objective Programming Approach for Routing Hazardous Materials. Journal of Transportation Engineering 115, 661–673 (1989)
13. Ma, C.: Hazardous Material Highway Transportation Route Multi-attribute Decision-making Based on Neural Network Theory under Undeveloped Transport Network Environment. Modern Traffic Technology 6, 84–86 (2009)
14. Ma, C.: Hazardous Material Highway Transportation Route Decision-making under Developed Transport Network Environment. Journal of Transportation Engineering and Information 9, 134–139 (2009)
15. Ma, C.: Highway Transportation Route Decision-making Model of Hazardous Material under Certain Linguistic Environment. Journal of Lanzhou Jiaotong University 28, 115–118 (2009)
16. Ma, C., Li, Y., He, R.: Highway Transportation Route Decision-making Model of Hazardous Material under Uncertain Linguistic Environment. Journal of Wuhan University of Technology (Transportation Science and Engineering) 34, 916–919 (2010)
Encumbrance Analysis of Trip Decision Choosing for Urban Traffic Participants

Li Zhen-fu, He Jian-tong, and Zhao Chang-ping
College of Transportation Management, Dalian Maritime University, Dalian, China
[email protected]
Abstract. It is difficult to remove the encumbrances on trip decision choosing for traffic participants because the choices are deliberate. This paper analyzes the origin of the encumbrances on trip choosing for urban traffic participants. Drawing on research into the sustainable development of urban transportation and summarizing the history of transportation development and the experience of foreign cities, a comprehensive strategy system for removing the encumbrances on trip decision choosing is proposed, comprising mechanism construction, policy and capital construction (transportation infrastructure construction), as well as a technique system.
Keywords: Trip Decision Choosing, Urban Traffic, Traffic Participant, Encumbrance Analysis.
1 Introduction
The essence of sustainable urban transportation development is the coordinated development of the people of the city, nature, and society. A city with sustainably developed transportation should be a living organism whose transport runs with high efficiency, and the emphasis should be on the overall coordination and development of the city's whole traffic system. In the process of realizing sustainable urban transportation, part of the encumbrance is caused by lack of knowledge, but most of it is encumbrance of trip decision choosing; and because the act of the decision-maker is deliberate, the damage caused by such choosing is more destructive.
2 Analysis of the Encumbrance Classifications of Trip Decision Choosing for Urban Traffic Participants
The encumbrance caused by lack of knowledge "means no harm" to sustainable development: the "injurer" has no deliberate intention of damaging the ecological environment, undermining social stability, or profiting at the expense of others. However, most encumbrances at present are not due to lack of knowledge.

M. Zhu (Ed.): ICCIC 2011, Part VI, CCIS 236, pp. 290–296, 2011. © Springer-Verlag Berlin Heidelberg 2011

Human society has developed to the point where science and technology are fairly advanced and information travels fast, so the
Encumbrance Analysis of Trip Decision Choosing for Urban Traffic Participants
291
knowledge of preliminary sustainable development is already known to policymakers and the masses. The masses now know common facts such as: car exhaust causes air pollution; vehicle noise damages the urban environment; and urban road construction decreases crop acreage. But under the influence of certain mechanisms, people make bad choices: some break the law deliberately, and some even break the laws they are in charge of enforcing. Of course, these deliberate behaviors are related to the quality of individuals and groups, but also to the decision-making mechanism.
2.1 Decision Choosing Encumbrance under the Action of the Reference Point
Psychology shows that when people judge a sensation, such as light and dark, cold and hot, far and near, soft and hard, long and short, high and low, light and heavy, old and young, or big and small, they always identify a suitable level (a reference point) according to former experience, and then perceive the outside stimulus relative to that point. Individual feelings are always affected by former consumption, experience, forecasting and other reference points, and this has gradually been integrated into economic models. So when people make urban trip decisions on this basis, the results can be the opposite of sustainable development.
Government policymakers aggregate the marginal utility values of members according to the public interest, so the resulting value should in principle be accepted by the members. In fact, however, there are contradictions between the public, individual motivation and government policymakers because of their different reference points. The decisions of the public, of individuals and of government policymakers are each beyond reproach from the standpoint of reason, but individuals may act against the group decision in their own interest. For example, when the government realizes that motor transport causes urban congestion, it may take measures to limit the hours of motor transport in the city or to charge for it. If the fine is 5 yuan per hour, some people will simply ignore it, and if more than half of the people behave this way, the policy loses its meaning. In other words, even if government policy truly aggregates the preferences of every member and forms a fair and reasonable public utility value, it cannot actually make everyone act to maximize that value, and thus the decision encumbrance caused by the reference point is formed.
Another appearance of this decision encumbrance is that policymakers may give up a plan that benefits most of the people affected by the decision, because of the different reference points of the policymakers and those people, and this affects the sustainable development of urban transportation. For example, a government may take the economy as its reference point and choose to develop the motor industry and encourage people to buy cars, while ignoring the external diseconomy of car transportation and the damage to the environment from transportation pollution and other encumbrances.
2.2 Decision Choosing Encumbrance under the Action of the Tragedy of the Commons
In 1968 Hardin published an article named "The Tragedy of the Commons", in which he discussed the inevitability of the degeneration of common pasture [1]. Taking an open pasture as an example, he argued that rational herders will graze livestock as much
292
L. Zhen-fu, H. Jian-tong, and Z. Chang-ping
as possible, because the benefit of each additional animal belongs to the individual while the cost of excessive grazing is borne by all the herders; the degeneration of the common pasture is therefore inevitable. Hardin extended his theory to the problems of increasing population and environmental pollution.
The tragedy of the commons has two manifestations. One is not valuing the cost of a common resource when using it, because each user's share of the cost is negligible. The other is doing nothing for projects that would produce public benefits, because each person's share of those benefits is also negligible while the costs of action are borne by the actors. So no one does the good things, but some do the bad things. In the urban transportation system the participants are the herders: they all want the most convenient travel experience but give little consideration to its damage to the environment. For example, if a traveler chooses a car for a trip, even though this kind of trip greatly harms the environment and wastes transportation resources, the traveler's own share of the environmental pollution cost is very small and can be ignored. This is the encumbrance that the commons-tragedy decision mechanism imposes on the sustainable development of urban transportation.
2.3 Decision Choosing Encumbrance under the Action of Information Asymmetry and Newton's Second Law
Newton's second law indicates that if the composite force on an object is zero, its state of motion does not change; if the force is not zero, the symmetry is broken and an acceleration is produced. This analogy can also be used to describe the production of decision-choosing encumbrance among transportation participants with respect to the sustainable development of urban transportation.
If the information of each party in a competition is symmetrical, meaning that all parties have a fair opportunity to obtain information and the same means to process it, then even though distorted competition cannot be avoided, it cannot cause a serious imbalance, for equality and balance are difficult to break. But if information asymmetry arises, the imbalance is aggravated, and the situation develops in the direction opposite to sustainable development. The essence of urban transportation is to move people and objects toward their destinations with as little time (cost) as possible. Because of information asymmetry, however, travelers cannot obtain information about the state of the traffic network, bus running conditions, the most convenient roads, and so on. Under the action of Newton's second law, this causes travelers to make wrong decisions, waste transportation resources and cause traffic jams, so the sustainable development of urban transportation is retarded.
2.4 Decision Choosing Encumbrance under the Action of Social Psychological Factors
In the eyes of warm and unvarnished country people, urban people are famous for their inhospitality, and the bigger the city, the more distant the people. After a series of psychological tests, social psychologists have summarized several theories
Encumbrance Analysis of Trip Decision Choosing for Urban Traffic Participants
293
about the factors affecting altruistic behavior, and drew the conclusion that an environment of high density and heavy information load is the main reason for the inhospitality of the city. Because urban dwellers live in such an environment for a long time, they do not want to know about anything they consider irrelevant to their needs, interests and demands; they prefer to pass over unimportant communication and appear insensible to others' appeals. From one point of view, then, urban inhospitality can be regarded as a coping mechanism by which city dwellers protect themselves: by avoiding unnecessary communication with others they avoid unimportant energy consumption and emotional engagement. Ultimately, as density and information levels rise, communication between people decreases. In urban transportation, this inhospitality shows itself in a strong preference for individual travel, especially travel by car. People in this condition ignore the principles and ideas of sustainable development and choose the car, causing transportation problems such as pollution and congestion. The other influence of urban inhospitality is that people neglect communication with their neighbors and instead choose long-distance trips such as outings and shopping, which increases the pressure on transportation. Such trips could originally have been avoided, but they have increased greatly because of urban inhospitality, and they too contribute to the encumbrance on the sustainable development of urban transportation.

2.5 Decision-Choosing Encumbrance under the Action of the "Face" Factor

"Face" is an important and typical social and cultural phenomenon produced in social communication. For Chinese people, "face" represents reputation and status, and is gained through success and display.
"Face" is the symbol of an individual's social position and reputation, and the external reflection of an individual's respect and self-esteem. Paying attention to "face" is an important principle that controls and regulates the behavior of Chinese people. From the viewpoint of individual psychological development, the idea of "face" has an inner connection with the gradual formation and development of individual self-awareness. As self-awareness forms and develops, individuals in social life gradually pay attention to the impression they make on others and to others' responses and evaluations of them; at the same time, they care about their relationships with the people around them. Self-awareness thus governs one's thinking about others' attitudes and evaluations of oneself, about what kind of impression one should leave in different social situations, and about what kind of relationship will be formed through different interactions and behavioral responses. The "face" factor makes people prefer to travel by car, even when the car is not necessary; travelers now often choose the car even for trips they could make on foot. Travelling by car has become a symbol of status: using one's work unit's car can represent one's power and social class, and using a private car can represent one's wealth and status. All of these have become important material symbols in communication between people, and are the presentation of "face" in transportation. But there is no doubt that they produce more transport pollution and waste of resources.
294
L. Zhen-fu, H. Jian-tong, and Z. Chang-ping
3 Strategies for Breaking the Decision Encumbrance of Urban Transportation Participants

To break the decision encumbrance of urban transportation participants, one should start from the conceptions that produce the actions, and establish institutions, responsibilities, plans and so on that benefit transportation. The specific strategies are as follows.

A. Clearly define responsibilities

The typical performance of the tragedy of the commons is taking no care of public costs and ignoring public interests, which arises because the individual's share of the public cost and the public interest is too small. If responsibility can be defined clearly, and some method is used to connect the "common area" with the individual, then the public cost and the public interest are both connected with the individual, and everyone will take care of the costs and be willing to contribute to the public interests. The appropriate approach to car travel is to use economic methods to control the amount of usage rather than the amount of ownership. The motor industry has been developed as one of the country's pillar industries; limiting ownership would not only run against national macro policy but also retard the national economy and the technological development of the motor industry, swinging from one extreme to the other. In particular, as a major consumption item, the trend toward car ownership in every urban family is obvious as the economy develops and inhabitants' living standards rise. It is also hard to observe the principle of fairness when controlling ownership in the practice of transportation management. Besides, all of a car's pollution and energy waste is produced in the course of using it, so controlling the amount of usage reduces pollution and waste at the source.
Policy makers can learn from the experience of many foreign cities that make policy on the principle of "who uses, pays", and, when controlling the amount of usage, use economic and administrative methods to steer car development toward low emissions and high technology, so that the structure of the city's car fleet can meet the need to reduce pollution while improving travel speed. This paper therefore suggests pushing forward the fuel tax policy as soon as possible to guide car owners in controlling their car usage [2].

B. Realize the symmetry of transportation information

The asymmetry of transportation information is the main factor restricting the sustainable development of urban transportation. If information symmetry can be realized, the number of blind trips can be reduced, and travel time and distance saved. As a synthesis of advanced technologies, an intelligent transportation system can reduce the waste of urban road resources, improve transportation speed, save trip time and reduce trip pollution; its deployment can thus be said to embody the requirements of sustainable urban transportation development. From the aspect of urban transportation management, the government at present urgently needs to apply intelligent transportation management technology to collecting, processing and publishing information. These can
achieve two goals: one is realizing effective transportation management and improving the efficiency with which the transportation infrastructure is used; the other is realizing the informatization of transport services, which provides travel references for inhabitants and optimizes the travel structure [3]. For an inhabitant's trip, the completeness of transportation information concerns not only the travel time but also the travel structure of the road network. For drivers, timely transportation information can ensure the most economical route (the least time, the shortest distance, or the best overall result), reduce unnecessary detours and waiting time when jams occur, cut travel time, fuel consumption and exhaust emissions, and relieve congestion. For the public, timely transportation information is the reference for deciding the travel time, mode, line and service, so that uneconomical travel caused by insufficient information (e.g. long waits during the trip, overly complex routes) can be avoided.

C. Improve the transportation service level according to the public's trip psychology

The public's trip psychology is mainly concerned with the technical indicators, service level, flexibility and environmental factors of each traffic mode, and with the social and economic characteristics, behavior patterns and value orientation of the public themselves. The development of public transportation should therefore be considered from the following aspects.

1. Satisfy the public's psychology of time and efficiency

Road users always want a punctual and fast service, but at present the bus service encouraged in the sustainable development of urban transportation does not satisfy this need, so it is necessary to develop high-capacity public transportation and build dedicated bus lanes.
These can satisfy the psychological requirement for punctuality and speed, and because high-capacity transit can provide a more comfortable travel environment than the common bus, it can attract more travelers effectively. But developing high-capacity transit must take account of our country's situation and proceed gradually; one cannot simply pursue results that are high, new and special while forgetting the initial purpose.

2. Develop good public transportation values

Rely on government policy to develop good transportation values among the public, and make people recognize that travelling by bus is a virtue rather than a mark of the common man. In this way bus travel can effectively attract car users, and this psychology should be advocated [4].

3. Satisfy the public's psychology of convenience

No one dislikes solving problems at the nearest place rather than travelling elsewhere. For urban transportation, if the government can establish a rational environment for living, leisure, services, communication and work, people can resolve their problems with few trips or even without leaving home; this not only fulfills people's demands but also reduces the total amount of travel in society. On the other hand, road users always want more convenient transfer conditions. If the transport signs are not clear and concise and transfers are not convenient, the service for travelers will be poor. So it is
necessary now to establish rational junction stations that provide convenient transfer conditions for travelers. At the same time, station managers can create an artistic atmosphere to make a boring space a little more interesting, add media facilities so that a boring trip is not as stuffy as before, and add shops that provide convenience for travelers. The best result is that these shops satisfy travelers' needs and lead them to cut short a planned long trip.
4 Summary

The city is where human activities are concentrated, so keeping its development sustainable bears on the sustainable development of the whole region and even the country. Yet, guided by certain trip-decision mechanisms, people choose actions that retard the sustainable development of the city. These actions are of course related to the qualities of individuals and groups, but they are also connected with the decision mechanism itself. Based on a discussion of the factors that hamper city participants' trip decisions, this paper has proposed three strategies for reducing this problem, and has provided a new perspective for studying the sustainable development of cities.
References

1. Hardin, G.: The Tragedy of the Commons. Science 162, 1243–1248 (1968)
2. Pem, L., Ren, F.: Urban Traffic Management According to the Theory of Sustainable Development. Road Traffic and Safety 5(5), 4–8 (2005)
3. Yuan, H., Xu, A.: The Conception, Principle, and Development Strategy of Sustainable Transport. Road Traffic and Safety 5(5), 11–13 (2005)
4. Li, L.-b., Wu, B.: Effects of traveler psychology factors on development of public traffic. Journal of Chongqing Jiaotong University 23(3), 94–97 (2003)
Study on Indicators Forecasting Model of Regional Economic Development Based on Neural Network

Yang Jun-qi, Gao -xia, and Chen Li-jia

Shaanxi University of Science and Technology, Xi'an, Shaanxi, China
[email protected]
Abstract. Based on the neural network method and the relevant historical social and economic development index data, this paper establishes a mathematical model and a neural network model for predicting future land resource demand; the network is trained with data from 1992 to 2005. Trend analysis models for predicting and analyzing indicators such as population, production value and GDP are also established and applied to predict the construction land demand from 2010 to 2020. Taking the trend-forecast data of population, production value and GDP as input to the trained network, the future land demand is calculated. The simulation results of this method prove satisfactory when compared with the forecasting results of traditional statistical models.

Keywords: Forecasting, construction land, neural network, GDP, added value.
1 Introduction

Land resources are valuable non-renewable resources. With China's rapid economic development and accelerating urbanization, the demand for land resources is becoming increasingly urgent, and the proper management and use of limited land resources is a very important issue for management departments and policy makers. Many factors affect construction land, such as the level of economic development, population, non-agricultural population, industrial output, and the scale of fixed-asset investment, and the relationships among these factors are complex and nonlinear, which makes them difficult to study with traditional analysis methods. Using the neural network method, this paper establishes an analysis and forecasting model and compares its simulation results with the forecasting results of a traditional statistical trend model, so as to explore its feasibility and practicality.
2 MATLAB Neural Network Model Construction and Implementation

BP Network and MATLAB Implementation. This paper uses a neural network model to predict the construction land demand. Here, the GDP, the added value of the secondary and tertiary industries, the population, the non-agricultural population and the fixed-asset

M. Zhu (Ed.): ICCIC 2011, Part VI, CCIS 236, pp. 297–304, 2011.
© Springer-Verlag Berlin Heidelberg 2011
298
Y. Jun-qi, G. -xia, and C. Li-jia
investment scale are taken as the inputs of the neural network system, and the construction land demand as the output. A three-tier BP (back-propagation) network is adopted.
3 BP Network MATLAB Code Implementation [1]

3.1 Neural Network Model Simulation Learning Based on the Function Relations between Construction Land and Relevant Factors

Here, the GDP, the added value of the secondary and tertiary industries, the population, the non-agricultural population and the fixed-asset investment scale, all of which affect the construction land area, are taken as the inputs of the neural network system, and the construction land demand as the output. The data of Bin County, Shaanxi Province, from 1992 to 2005 are used as the network training data. In order to forecast the future construction land demand, trend forecasting models for the relevant items are established, and the single-item forecasts of the relevant indices for 2010 and 2020 are taken as network inputs, so that the future construction land scale and amount can be forecast.
4 Sample Input Data Processing

A three-tier BP neural network is used for the simulation forecasting of construction land. The numbers of nodes in the input, hidden and output tiers are 5×10×1; the activation functions are sigmoid (logsig); the learning rate is η = 0.9; and the learning training algorithm is the back-propagation (BP) algorithm. The GDP, the added value of the secondary and tertiary industries, the population, the non-agricultural population and the fixed-asset investment scale are taken as the inputs of the neural network system, with 14 years of historical data as the training samples. Because the magnitudes of the items differ greatly, the input and output data are first normalized; the processed data are shown in Table 1.

Table 1. Normalized values of the raw data
year | non-agricultural population | secondary & tertiary industry added value | fixed-asset investment | GDP | population | construction land
1992 | 0.337920 | 0.008023 | 0.003405 | 0.010215 | 0.725872 | 0.939033
1993 | 0.365237 | 0.010325 | 0.004853 | 0.013290 | 0.736743 | 0.942399
…    | …        | …        | …        | …        | …        | …
2005 | 0.5240809 | 0.070658 | 0.053745 | 0.080416 | 0.789368 | 0.966863
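The normalization step can be illustrated with a simple min-max scaling. The paper does not state the exact scaling constants it used (the Table 1 values suggest bounds wider than the 1992-2005 range, presumably chosen so that forecast-era inputs stay inside the scale), so the sketch below accepts explicit bounds, and the sample series is only illustrative:

```python
def min_max_normalize(series, lo=None, hi=None):
    """Scale values into [0, 1]; explicit bounds let values beyond the
    historical range (e.g. forecasts for 2010/2020) stay inside the scale."""
    lo = min(series) if lo is None else lo
    hi = max(series) if hi is None else hi
    return [(x - lo) / (hi - lo) for x in series]

# Illustrative series: non-agricultural population (ten thousand persons)
raw = [3.34, 3.61, 5.18]
print(min_max_normalize(raw))          # scaled over the sample's own range
print(min_max_normalize(raw, 0, 10))   # scaled over assumed wider bounds
```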
Study on Indicators Forecasting Model of Regional Economic Development
299
Sample Output Data Processing. The construction land demand is taken as the network output; the data are in Table 1. The network is established, trained and simulated with the MATLAB functions newff(), train() and sim(). The transfer functions of the hidden and output tiers are tansig and logsig, and the target error is 0.000001; the training process is shown in Fig. 1. Training is fast and the simulation error is small. The results are very satisfactory, as there is almost no difference between the simulated values and the actual values; the comparison is shown in Table 2 and Fig. 2. This model can therefore effectively simulate the complex mapping between the construction land demand and the related factors.
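The paper implements this with MATLAB's newff(), train() and sim(). As a rough, library-free illustration of the same 5-10-1 back-propagation scheme (sigmoid activations throughout, learning rate 0.9), a minimal NumPy sketch follows; the training data here are synthetic stand-ins, not the Bin County series:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# 5 inputs (GDP, added value, population, non-agricultural population,
# fixed-asset investment; all normalized) -> 1 output (construction land).
n_in, n_hid, n_out = 5, 10, 1
W1 = rng.normal(scale=0.5, size=(n_hid, n_in)); b1 = np.zeros(n_hid)
W2 = rng.normal(scale=0.5, size=(n_out, n_hid)); b2 = np.zeros(n_out)
eta = 0.9  # learning rate from the paper

# Synthetic training set: 14 "years" of 5 normalized indicators each.
X = rng.uniform(0.0, 1.0, size=(14, 5))
T = sigmoid(X.sum(axis=1, keepdims=True) - 2.5)  # made-up smooth target

for epoch in range(5000):
    for x, t in zip(X, T):
        h = sigmoid(W1 @ x + b1)          # hidden layer
        y = sigmoid(W2 @ h + b2)          # output layer
        # Back-propagate the squared error between target t and output y.
        delta_out = (y - t) * y * (1 - y)
        delta_hid = (W2.T @ delta_out) * h * (1 - h)
        W2 -= eta * np.outer(delta_out, h); b2 -= eta * delta_out
        W1 -= eta * np.outer(delta_hid, x); b1 -= eta * delta_hid

pred = sigmoid(W2 @ sigmoid(W1 @ X.T + b1[:, None]) + b2[:, None])
mse = float(np.mean((pred.T - T) ** 2))
print(mse)  # small after training
```

After training, new indicator combinations (e.g. the 2010/2020 scenario rows of Table 4) can be pushed through the same forward pass to obtain a simulated construction land demand.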
Fig. 1. BP neural network system training process
Fig. 2. Comparison between the network simulation values and the actual values of the training samples

Table 2. Sample data and network simulation results

year | construction land | BP simulation value | BP simulation deviation
1992 | 10068.51 | 10072.504 | 3.9936221
1993 | 10104.6  | 10074.023 | -30.577036
…    | …        | …         | …
2005 | 10366.9  | 10367.635 | 0.73548979
Network Simulation Forecasting for Future Construction Land Demand. The trained network can be used to make simulation forecasts of the future construction land demand (the network output) under different combinations of the impact factors (the network inputs). In order to forecast the construction land demand of Bin County in 2010 and 2020, the GDP, the added value of the secondary and tertiary industries, the population, the non-agricultural population and the fixed-asset investment scale of the same region in the same years must first be forecast. For this purpose the paper establishes the average growth rate model, the average increase/decrease model, the linear trend model, the quadratic curve trend model and the index (exponential) trend model. The index data from 1992 to 2005, the forecasting models and the forecasting results are shown in Table 3. Considering that there is some uncertainty, from every single-item

Table 3. The indices of Bin County from 1992 to 2005

year | construction land | non-agricultural population (ten thousand) | secondary & tertiary industry added value (billion yuan) | fixed-asset investment | GDP (billion yuan) | population (ten thousand)
1992 | 10068.51 | 3.34 | 2.58 | 0.87 | 4.85 | 46.07
1993 | 10104.6  | 3.61 | 3.32 | 1.24 | 6.31 | 46.76
…    | …        | …    | …    | …    | …    | …
2005 | 10366.9  | 5.18 | 22.72 | 13.73 | 38.18 | 50.1
Average development rate | 1.002249 | 1.034332 | 1.182159 | 1.236418 | 1.172007 | 1.006471
Average increase or decrease | 22.953076 | 0.141538 | 1.549230 | 0.989230 | 2.563846 | 0.31
Linear trend equation | 24.8664*t-39426.4173 | 0.1476*t-290.5667 | 1.2661*t-2520.6931 | 0.8227*t-1640.2054 | 2.0532*t-4086.3181 | 0.2900*t-531.3493
Quadratic trend equation | -2.1098*t^2+8457.9324*t-8466133.3906 | 0.00096*t^2-3.7232*t+3577.2357 | 0.0729*t^2-290.1086*t+288634.3028 | 0.1348*t^2-538.0037*t+536779.8304 | 0.1151*t^2-458.0136*t+455633.5402 | -0.0022*t^2+9.1021*t-9336.7717
Index trend equation | EXP(0.002428*t+4.3853) | EXP(0.03432*t-67.1311) | EXP(0.1462*t-289.9877) | EXP(0.2081*t-414.9099) | EXP(0.1324*t-261.8663) | EXP(0.006011*t-8.1359)

Forecast for 2020:
Average development rate | 10722.2011 | 8.594756 | 279.607644 | 331.24764 | 412.84854 | 55.189980
Table 3. (continued)

Forecast for 2020 (continued):
Average increase or decrease | 10711.1962 | 7.303076 | 45.958461 | 28.568462 | 76.637692 | 54.75
Linear trend forecast | 10803.613 | 7.505406 | 36.804461 | 21.566462 | 61.119230 | 54.539516
Quadratic trend forecast | 9862.62038 | 7.937315 | 69.317126 | 81.690692 | 112.455179 | 53.556233
Index trend forecast | 10818.6181 | 8.974686 | 188.104438 | 222.14114 | 254.607015 | 54.950876

Forecast for 2010:
Average development rate | 10484.0059 | 6.132407 | 52.455202 | 39.673266 | 84.4282 | 51.74224
Average increase or decrease | 10481.6654 | 5.887692 | 30.466153 | 18.676154 | 50.999230 | 51.65
Linear trend forecast | 10554.9495 | 6.029802 | 24.143582 | 13.339868 | 40.587362 | 51.639076
Quadratic trend forecast | 10310.207 | 6.142137 | 32.599791 | 28.97756 | 53.939313 | 51.383335
Index trend forecast | 10559.1477 | 6.367534 | 43.618832 | 27.731097 | 67.757078 | 51.745071
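Each of the five single-index models in Table 3 is a simple closed-form or least-squares fit. A sketch of how such fits and extrapolations can be produced (the series below is synthetic, not the Bin County data):

```python
import numpy as np

def average_rate_forecast(t, y, target):
    """Average development rate model: geometric mean growth ratio."""
    r = (y[-1] / y[0]) ** (1.0 / (len(y) - 1))
    return float(y[-1] * r ** (target - t[-1]))

def average_increment_forecast(t, y, target):
    """Average increase/decrease model: mean absolute annual change."""
    d = (y[-1] - y[0]) / (len(y) - 1)
    return float(y[-1] + d * (target - t[-1]))

def linear_forecast(t, y, target):
    return float(np.polyval(np.polyfit(t, y, 1), target))

def quadratic_forecast(t, y, target):
    return float(np.polyval(np.polyfit(t, y, 2), target))

def exponential_forecast(t, y, target):
    """Index (exponential) trend: least-squares fit of ln(y) on t."""
    b, a = np.polyfit(t, np.log(y), 1)
    return float(np.exp(b * target + a))

t = np.arange(1992, 2006)
y = 4.85 * 1.17 ** (t - 1992)   # synthetic GDP-like growth series

for f in (average_rate_forecast, average_increment_forecast,
          linear_forecast, quadratic_forecast, exponential_forecast):
    print(f.__name__, round(f(t, y, 2010), 2))
```

The five forecasts disagree (as the spread of Table 3's rows shows), which is why the paper then chooses the best-fitting model per index and brackets it with low/median/high scenarios.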
Data from the Bureau of Statistics of Bin County.

Table 4. Normalized factor values for 2010 and 2020 and the BP simulation values of construction land

scenario | non-agricultural population | secondary & tertiary industry added value | fixed-asset investment | GDP | population | construction land (BP simulation)
2010 low value combination | 0.527373 | 0.138663 | 0.092269 | 0.151153 | 0.692956 | 9414.8159
2010 median value combination | 0.620439 | 0.163132 | 0.108552 | 0.177827 | 0.815243 | 8829.8678
2010 high value combination | 0.713505 | 0.187602 | 0.124835 | 0.204501 | 0.937529 | 9835.5379
2020 low value combination | 0.739130 | 0.739130 | 0.739130 | 0.739130 | 0.739130 | 9426.8866
2020 median value combination | 0.869565 | 0.869565 | 0.869565 | 0.869565 | 0.869565 | 9426.8873
2020 high value combination | 1 | 1 | 1 | 1 | 1 | 9426.8675
forecasting model, the best-fitting model that reflects the trend is selected to forecast the GDP, the added value of the secondary and tertiary industries, the population, the non-agricultural population and the fixed-asset investment scale. On this basis the predicted values are properly enlarged or narrowed with reference to the relevant departments, and then for each index three values, respectively small, medium and large (see Table 4), are taken as network inputs for forecasting. The individual factor trends and analysis results are shown in Table 3, and Fig. 3 shows the construction land use trend from 1992 to 2005, in which a downward trend in recent years can be seen. The network simulation results are as follows:

Taking the low values 71.7640, 44.5869, 43.9809, 5.2125 and 23.5714, which are respectively the 2010 indices of GDP, secondary and tertiary added value, population, non-agricultural population and fixed-asset investment scale, the network simulation gives 9414.8159.

Taking the forecast values 84.4282, 52.4552, 51.7422, 6.1324 and 27.7311 for the same 2010 indices, the network simulation gives 8829.8678.

Taking the high values 97.0924, 60.3235, 59.5036, 7.0523 and 31.8908 for the same 2010 indices, the network simulation gives 9835.5379.

Taking the low values 350.9213, 237.6665, 46.9115, 7.3055 and 188.8200, which are respectively the 2020 indices of GDP, secondary and tertiary added value, population, non-agricultural population and fixed-asset investment scale, the network simulation gives 9426.8866.

Taking the forecast values 412.8485, 279.6076, 55.1900, 8.5948 and 222.1411 for the same 2020 indices, the network simulation gives 9426.8873.

Taking the high values 474.7758, 321.5488, 63.4685, 9.8840 and 255.4623 for the same 2020 indices, the network simulation gives 9426.8675.

From the simulation results it can be seen that the future demand for construction land shows a downward trend. In 2010, different construction land areas appear under different levels of economic development, but in 2020 the construction land areas hardly differ under different economic growth rates, which may be caused by more stringent land management. Fig. 3 also shows a downward trend, matching the forecasts of the single-item trend models (Table 3).
Fig. 3. The data change trend of Bin County from 1992 to 2005
5 Conclusion

This paper has mainly studied a neural network model of how the GDP, the added value of the secondary and tertiary industries, the population, the non-agricultural population and the fixed-asset investment scale affect the regional construction land area. The simulation results of the model prove satisfactory when compared with those of the traditional statistical trend forecasting models, so it can be applied to the simulation forecasting of construction land. For better network simulation, more historical data are needed as training samples, and the number of input indicators could be increased or reduced in order to screen out the indicators that have a greater impact. Owing to the shortage of data in the actual situation, the sample data were not divided into training data and test data; when enough data are available, such a division can be considered. In order to facilitate
readers' use of the model, this paper gives the main MATLAB code. Interested readers may contact the authors for the additional code and pattern content not presented here for reasons of space.
References

1. Yang, J.-q.: Model design and case analysis of fuzzy neural network of artificial customer's beverage taste. Beverage Industry (3), 27–30 (2006)
2. Dai, Q.-l.: Training and simulation on gear position decision for vehicle based on optimal algorithm of network. Chinese Journal of Mechanical Engineering 38(11), 124–127 (2002)
3. Huang, S.H., Zhang, H.C.: Artificial neural networks in manufacturing: concepts, applications, and perspectives. IEEE Transactions on Components, Packaging, and Manufacturing Technology 17(2), 212–228 (1994)
4. Li, Y.-q.: Mixed method of artificial neural network and its application on fault diagnosis for rotational machine. Chinese Journal of Mechanical Engineering 40(1), 127–130 (2004)
An Adaptive Vehicle Rear-End Collision Warning Algorithm Based on Neural Network

Zhou Wei¹, Song Xiang², Dong Xuan¹, and Li Xu²

¹ Key Laboratory of Operation Safety Technology on Transport Vehicles, Ministry of Communication, PRC, Research Institute of Highway, Ministry of Communications, Beijing, China
² School of Instrument Science and Engineering, Southeast University, Nanjing, China
[email protected]
Abstract. Most existing vehicle rear-end collision warning algorithms have poor adaptability and high false alarm and missed alarm rates. A two-level early warning model based on a safe-distance logic algorithm is discussed. The influence of road conditions, driver status and vehicle performance on the rear-end collision warning distance during driving is analyzed, and for different driving conditions a neural-network-based rear-end collision warning algorithm with an adaptive threshold is proposed, which can adapt to the different statuses of the three main elements: human, vehicle and road. The warning distances of the rear-end collision algorithm with and without the adaptive strategy are also compared by changing the real-time human-vehicle-road status. The simulation results show that the proposed algorithm adapts the warning distance and region, and its feasibility is verified.

Keywords: rear-end collision, adaptive, warning algorithm, neural network.
1 Introduction

In recent years, vehicle accidents have occurred frequently with the rapid growth of the vehicle population, and the casualties and property losses caused by these accidents are staggering. Accident statistics show that rear-end collisions occupy a considerable proportion: in 2009 there were 25,033 rear-end collisions, 10.50% of the total number of accidents, and the direct property loss they caused exceeded 200 million RMB, 23.80% of the total loss [1]. Research from Daimler-Benz shows that half of collision accidents could be avoided if the driver were warned 0.5 seconds before the accident, and 90% could be avoided with a warning 1 second before [2]. Rear-end collision warning systems have therefore been developed in many countries, and the warning algorithm is the key technology of such a system. Generally speaking, two classes of algorithm, based on distance to collision and time to collision, are used to compute the real-time warning value for rear-end collision avoidance [3]. The time-to-collision algorithm determines the security state by comparing the calculated collision time of the two vehicles with a security time

M. Zhu (Ed.): ICCIC 2011, Part VI, CCIS 236, pp. 305–314, 2011.
© Springer-Verlag Berlin Heidelberg 2011
306
Z. Wei et al.
threshold [4]. The distance-to-collision algorithm determines the security state by comparing the actual distance between the two vehicles with a security distance threshold [5]. In practical applications, since the driving characteristics of different drivers vary widely, the required safety time threshold is not consistent; it is difficult to match each driver's driving habits, so collision warning systems based on the time-to-collision algorithm are rare. This study is likewise based on the distance-to-collision algorithm. Among the existing algorithm models based on safe distance, there are three typical ones: the safe distance model based on kinematic analysis of vehicle braking [6][7], the safe distance model based on time-headway [8], and the driver preview safe distance model. However, the factors that lead to rear-end collisions are very complicated, involving the driver, the road, the vehicle itself and the driving environment. The importance of these complex factors is often ignored in traditional collision warning algorithms: the alarm threshold cannot be adjusted adaptively according to the vehicle's technical condition, road conditions, natural environment and driver status, which leads to higher false alarm and missed alarm rates, and the adaptability is also poor. To address these problems, this paper presents an adaptive vehicle rear-end collision warning algorithm based on time-headway, which adjusts the alarm threshold according to the real-time status of the three elements: human, vehicle and road.
2 Rear-End Collision Warning Model

A two-level warning model is used in this study: caution warning and danger warning. The safe distance, taking the driver's driving proficiency into account, is analyzed as follows. First the parameters are defined:

v1 (m/s): absolute speed of the following vehicle, collected by the speed sensor.
vr (m/s): relative velocity between the following and preceding vehicles, collected by the radar or CCD camera. Since rear-end collisions tend to occur when vr > 0, warning is started only if vr > 0.
v2 (m/s): speed of the preceding vehicle, v2 = v1 - vr.
a (m/s²): maximum deceleration of the vehicles.
t1 (s): brake delay time, the interval from the moment the alarm signal is issued to the moment the braking force begins to work. It contains two components: the driver reaction time ta and the braking system coordination time tc. During t1 the braking force is ignored and the vehicle is considered to move uniformly. According to the related literature, tc = 0.2 s.
t2 (s): deceleration build-up time, the interval during which the deceleration increases from zero to its maximum. During t2 the braking deceleration changes at a constant rate and the vehicle is considered to undergo variable deceleration motion. According to the related information, t2 = 0.2 s.
t3 (s): continuous braking time, from the moment the braking deceleration reaches its maximum to the moment the vehicle stops. During t3 the vehicle is considered to undergo uniform deceleration motion.
An Adaptive Vehicle Rear-End Collision Warning Algorithm Based on Neural Network
d (m): real-time spacing (relative distance) between the following and preceding vehicles, collected by radar or a CCD camera.
d0 (m): security headway offset.
dr (m): warning critical distance of the traditional algorithm.
d1 (m): caution warning critical distance.
d2 (m): danger warning critical distance.

According to the time-headway warning algorithm, the warning critical distance is the braking distance of the following vehicle plus the security headway offset, minus the driving distance of the preceding vehicle. The preceding vehicle brakes actively without driver reaction time, while the following vehicle starts braking only after the driver reaction time. So dr is given by
$$ d_r = v_1 t_1 + \frac{1}{2} v_1 t_2 + \frac{1}{2} a t_3^2 + d_0 - \left( v_2 t_c + \frac{1}{2} v_2 t_2 + \frac{1}{2} a \left( \frac{v_2}{a} \right)^2 \right) \qquad (1) $$

$$ = v_1 t_a + 0.3 v_r + \frac{v_1^2 - (v_1 - v_r)^2}{2a} + d_0 $$
The driver's proficiency influences the warning distance through the driver reaction time ta and the security headway offset d0. According to the related literature, the driver reaction time is normally taken as 0.3-2 s and the security headway offset as 0-6 m; the traditional algorithm uses the general values 0.8 s and 2 m, respectively. So
$$ d_r = 0.8 v_1 + 0.3 v_r + \frac{v_1^2 - (v_1 - v_r)^2}{2a} + 2 \qquad (2) $$
In this warning model, ta and d0 take general values, so the distance is too long for skilled drivers and too short for unskilled ones. Based on this consideration, this study adopts a two-tier warning system: the caution warning distance suits the unskilled driver and the danger warning distance suits the skilled driver.

Caution Warning Distance. In this case the unskilled driver is considered: the driver reaction time is longer and a longer security headway offset is required, so the reaction time is taken as 1.5 s and the offset as 4 m. Then
$$ d_1 = 1.5 v_1 + 0.3 v_r + \frac{v_1^2 - (v_1 - v_r)^2}{2a} + 4 \qquad (3) $$
This is the most dangerous extreme condition, so d1 is taken as the largest warning distance. Once d < d1, the caution warning is issued.
Z. Wei et al.
Danger Warning Distance. In this case the skilled driver is considered: the driver reaction time is shorter and the required security headway offset is shorter, so the reaction time is taken as 0.5 s and the offset as 1 m. Then
$$ d_2 = 0.5 v_1 + 0.3 v_r + \frac{v_1^2 - (v_1 - v_r)^2}{2a} + 1 \qquad (4) $$
Obviously d1 > d2. Once d < d2, the danger warning is issued. To describe the warning state, a warning coefficient η is defined as
$$ \eta = \frac{d - d_1}{d - d_2} \qquad (5) $$
The analysis shows
$$ \begin{cases} d > d_1 & \Leftrightarrow \text{safety} \Leftrightarrow 0 < \eta < 1 \\ d_2 < d < d_1 & \Leftrightarrow \text{caution} \Leftrightarrow \eta < 0 \\ d < d_2 & \Leftrightarrow \text{danger} \Leftrightarrow \eta > 1 \end{cases} \qquad (6) $$
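The two warning distances and the coefficient η of Eqs. (3)-(6) can be computed directly. The following is a minimal sketch (speeds in m/s; the traditional mean deceleration 6 m/s² is assumed as the default for a, and all function names are illustrative):

```python
def warning_distances(v1, vr, a=6.0):
    """Caution (d1) and danger (d2) warning distances, Eqs. (3) and (4).

    v1: following-vehicle speed (m/s), vr: relative speed (m/s),
    a: maximum braking deceleration (m/s^2).
    """
    # Common braking-distance term (v1^2 - v2^2) / (2a), with v2 = v1 - vr.
    brake = (v1 ** 2 - (v1 - vr) ** 2) / (2 * a)
    d1 = 1.5 * v1 + 0.3 * vr + brake + 4  # unskilled driver: ta = 1.5 s, d0 = 4 m
    d2 = 0.5 * v1 + 0.3 * vr + brake + 1  # skilled driver:   ta = 0.5 s, d0 = 1 m
    return d1, d2


def warning_state(d, d1, d2):
    """Classify via eta = (d - d1) / (d - d2), Eqs. (5)-(6).

    Note: undefined at d == d2 (division by zero).
    """
    eta = (d - d1) / (d - d2)
    if 0 < eta < 1:
        return "safety", eta
    if eta < 0:
        return "caution", eta
    return "danger", eta
```

For example, with v1 = vr = 20 m/s (static preceding vehicle) the distances evaluate to d1 ≈ 73.3 m and d2 ≈ 50.3 m, and actual spacings of 100 m, 60 m, and 40 m classify as safety, caution, and danger, respectively.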
3 Adaptive Strategy

The model above is derived for the general situation: the related parameters of the three driver-vehicle-road indicators take general or average values. In practice, due to real-time differences in driver, vehicle, and road status, the warning distance of the model tends to be too large or too small, resulting in higher false-alarm and missed-alarm rates. Therefore, the influence of the various factors on the model is analyzed below, and the model is adjusted adaptively according to these factors.

Road Factors. Road factors act mainly through the road friction coefficient, which in turn affects the maximum braking deceleration and ultimately the warning distance. The traditional algorithm generally takes the mean value of the maximum braking deceleration, 6 m/s². In this algorithm, the maximum braking deceleration is calculated by
$$ a = \varphi g \qquad (7) $$
In (7), g = 9.8 m/s² is the acceleration of gravity and φ is the road friction coefficient. The warning distance of this algorithm can thus adjust adaptively to the road condition, and the road information can be acquired in real time through the on-board sensors. The relationship between road condition, friction coefficient φ, and maximum braking deceleration a (m/s²) is shown in Table 1.
Table 1. Friction coefficient and maximum braking deceleration for different road conditions

Road type    Dry: φ / a (m/s²)    Moist: φ / a (m/s²)
cement       0.75 / 7.4           0.65 / 6.4
asphalt      0.70 / 6.8           0.60 / 5.9
soil         0.65 / 6.4           0.45 / 4.4
gravel       0.55 / 5.4           \
snow         0.15 / 1.5           \
icy          0.07 / 0.7           \
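Eq. (7) together with Table 1 amounts to a simple lookup. A minimal sketch (the dictionary values are taken from Table 1; the table's a values are rounded to one decimal, so e.g. 0.75 × 9.8 = 7.35 appears there as 7.4):

```python
G = 9.8  # acceleration of gravity, m/s^2

# Road friction coefficients phi from Table 1: {road: {condition: phi}}.
# Moist values for gravel, snow, and ice are not given in the table.
FRICTION = {
    "cement":  {"dry": 0.75, "moist": 0.65},
    "asphalt": {"dry": 0.70, "moist": 0.60},
    "soil":    {"dry": 0.65, "moist": 0.45},
    "gravel":  {"dry": 0.55},
    "snow":    {"dry": 0.15},
    "icy":     {"dry": 0.07},
}


def max_deceleration(road, condition="dry"):
    """Eq. (7): a = phi * g, with phi looked up from Table 1."""
    return FRICTION[road][condition] * G
```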
Driver Factors. Driving intention and fatigue are considered in this section.

a) Driving intention. The driving intention reflects whether the vehicle is under the driver's control or the driver is already aware of the danger, and thus determines whether to alarm. The warning is therefore shielded in the following states:
(1) When the driver steps on the brake pedal. This indicates that the driver's attention is focused and the vehicle is properly controlled, so no warning is necessary. The state is identified by detecting the brake pedal pressure.
(2) When the driver releases the accelerator pedal. This reflects that the driver is controlling the vehicle correctly, so no warning is necessary. The state is identified by detecting the accelerator pedal pressure.
(3) When the turn signal is on. This shows that the driver is consciously steering or changing lanes, so no warning is necessary. The state is identified by detecting the turn signal.
(4) When the vehicle speed is below 40 km/h. This indicates that the driver's attention is focused, so no warning is necessary. The state is identified by detecting the speed signal.

b) Driver fatigue. Driver fatigue is an important cause of rear-end collisions; it can be evaluated by a BP neural network from the road offset, steering wheel angle, and velocity [10], yielding a fatigue factor R. For R = 0.0-0.5 the driver is not considered fatigued; for R = 0.51-1.0 the driver is considered fatigued. An adjustment coefficient k is used in this algorithm, whose value reflects the influence of driver fatigue on the warning distance. MATLAB was employed for stepwise regression analysis, and after the simulation and verification results were analyzed, k is defined as follows:
$$ k = \begin{cases} 1 & 0 \le R \le 0.5 \\ 1 + (R - 0.5)^2 & 0.5 < R \le 1 \end{cases} \qquad (8) $$
The warning distance then becomes di = di·k (i = 1, 2).

Vehicle Factors. Among the vehicle performance characteristics, braking performance has the most important influence on the warning distance. The current braking performance is judged by a fuzzy neural network from the related brake parameters and classified as normal braking, poor braking, or braking failure. Another adjustment coefficient λ is used in this algorithm; MATLAB fuzzy calculation and simulation give λ = 1, 1.5, and 2 for the three statuses, respectively. The warning distance becomes di = di·λ (i = 1, 2).
Adaptive Algorithm Based on Neural Network. According to the above analysis of the rear-end collision model and the adaptive strategies, the algorithm employs two levels of neural networks. The first level consists of two sub-neural networks, whose inputs are the real-time information detected by the on-board sensors and whose outputs are the driver fatigue and the vehicle braking performance. The inputs of the second level are the outputs of the first level together with other variables such as the road friction coefficient, relative speed, and distance; its outputs are the caution and danger warning distances. The pre-warning level is then determined together with the driver intention. The flow chart is shown in Figure 1.

Fig. 1. The flow chart of the adaptive vehicle rear-end collision warning algorithm based on neural network
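The adaptive adjustments of this section can be sketched as follows. This is a minimal illustration only: the fatigue factor R and the braking status are assumed to come from the neural networks described above, the fatigued branch of k follows Eq. (8) as reconstructed here, and all function names are hypothetical:

```python
def warning_shielded(brake_pressed, accel_released, turn_signal_on, v1_kmh):
    """Driver-intention rules (1)-(4): suppress the warning when the driver
    is plainly in control or already aware of the situation."""
    return brake_pressed or accel_released or turn_signal_on or v1_kmh < 40


def fatigue_coefficient(R):
    """Eq. (8): k = 1 for R <= 0.5, else 1 + (R - 0.5)^2."""
    return 1.0 if R <= 0.5 else 1.0 + (R - 0.5) ** 2


# lambda values for the three braking statuses given in the text
BRAKING_COEFF = {"normal": 1.0, "poor": 1.5, "failure": 2.0}


def adjusted_distance(d, R, braking_status):
    """Scale a warning distance by the fatigue and braking coefficients:
    d_i <- d_i * k * lambda."""
    return d * fatigue_coefficient(R) * BRAKING_COEFF[braking_status]
```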
4 Simulation

MATLAB is used to build a simulation system whose purpose is to compare the anti-rear-end-collision model with the adaptive strategy against the non-adaptive model. First, the preceding vehicle is assumed static, the driver of the following vehicle is not fatigued, the two vehicles travel on a dry cement road, and the braking performance of the following vehicle is normal. The traditional algorithm and the adaptive algorithm proposed in this paper were applied in turn. The warning distance d of the traditional algorithm and the two-level warning distances d1 and d2 of the adaptive algorithm were calculated as the following-vehicle speed changes; the results are listed in Table 2 and plotted in Figure 2. The following-vehicle speed is considered from 40 km/h upward because the alarm strategy requires the following vehicle to be faster than the preceding vehicle and faster than 40 km/h.

Table 2. Warning distance comparison between the traditional and adaptive algorithms

Speed of following vehicle [km/h]   40         60      80      100      120
traditional d [m]                   24.51      43.48   67.60   96.86    131.26
adaptive d1 [m]                     shielded   52.77   77.37   106.14   139.08
adaptive d2 [m]                     shielded   33.10   52.14   75.36    102.74
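Table 2 can be reproduced directly from Eqs. (2)-(4). In the sketch below the traditional algorithm uses the mean deceleration a = 6 m/s², while the adaptive distances use a = 7.4 m/s² (dry cement, Table 1), which matches the tabulated values; with the preceding vehicle static, vr = v1:

```python
def to_ms(kmh):
    """Convert km/h to m/s."""
    return kmh / 3.6


def d_traditional(v1, vr, a=6.0):
    # Eq. (2): ta = 0.8 s, d0 = 2 m, mean deceleration a = 6 m/s^2
    return 0.8 * v1 + 0.3 * vr + (v1 ** 2 - (v1 - vr) ** 2) / (2 * a) + 2


def d_caution(v1, vr, a=7.4):
    # Eq. (3): ta = 1.5 s, d0 = 4 m; a = 7.4 m/s^2 for dry cement (Table 1)
    return 1.5 * v1 + 0.3 * vr + (v1 ** 2 - (v1 - vr) ** 2) / (2 * a) + 4


def d_danger(v1, vr, a=7.4):
    # Eq. (4): ta = 0.5 s, d0 = 1 m
    return 0.5 * v1 + 0.3 * vr + (v1 ** 2 - (v1 - vr) ** 2) / (2 * a) + 1


# Preceding vehicle static: vr = v1.  Reproduces the rows of Table 2.
for kmh in (60, 80, 100, 120):
    v = to_ms(kmh)
    print(kmh, round(d_traditional(v, v), 2),
          round(d_caution(v, v), 2), round(d_danger(v, v), 2))
```

At 60 km/h this yields d = 43.48 m, d1 = 52.77 m, and d2 = 33.1 m, in agreement with Table 2.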
Fig. 2. Algorithm simulation and comparison 1 (warning distance [m] versus following-vehicle speed [km/h])
As shown in Figure 2, the blue line is the warning distance of the traditional method, the green line the caution warning distance of the adaptive algorithm, and the red line the danger warning distance of the proposed algorithm. Under this condition the caution warning distance of the proposed algorithm is greater than that of the traditional algorithm, so an inexperienced driver has enough time to respond. The danger warning distance, however, is less than the traditional warning distance because the driver is not fatigued and the road conditions are good; for an experienced driver the warning distance and the warning region are significantly reduced, and false alarms can be effectively prevented. Next it is assumed that the preceding vehicle travels at a constant 60 km/h, the driver of the following vehicle is fatigued, the two vehicles travel on a snow-covered soil road, and the braking performance of the following vehicle is normal. The following-vehicle speed is again considered from 40 km/h according to the alarm strategy. The simulation results are shown in Figure 3: the blue line is the traditional warning distance, the green line the caution warning distance, and the red line the danger warning distance of the proposed algorithm.
Fig. 3. Algorithm simulation and comparison 2
It can be seen from Figure 3 that both the caution and danger warning distances of the proposed algorithm are greater than the traditional warning distance: the warning distance and the warning area are increased, missed alarms can be effectively prevented, and safety is enhanced. The traditional algorithm does not take the driving conditions into account in real time; under this condition the driver is fatigued and the road adhesion coefficient is very low, so even a skilled driver should be cautious. Compared with Figure 2, the preceding-vehicle speed increases, the friction coefficient decreases, and the driver fatigue differs, so the warning distance and the warning area change clearly. The algorithm can therefore adjust the warning distance adaptively according to real-time differences in the human-vehicle-road factors.
The simulations of Figures 2 and 3 assume that the speed of the preceding vehicle is constant. Next the impact of speed changes is considered, analyzed through the velocity change of both the preceding and following vehicles.

Fig. 4. Traditional algorithm warning distance

Fig. 5. Caution warning distance of the adaptive algorithm

Fig. 6. Danger warning distance of the adaptive algorithm

The following-vehicle speed is again considered from 40 km/h because the alarm strategy requires the following-vehicle speed to exceed 40 km/h. The human-vehicle-road factors are assumed to be in the normal state. The simulation results are shown in Figures 4 to 6, where the three axes represent the preceding-vehicle speed, the following-vehicle speed, and the warning distance, respectively.
5 Conclusions

The impacts of road conditions, driver status, and vehicle performance on the rear-end collision warning distance during vehicle operation are analyzed in this paper. An adaptive rear-end collision warning algorithm that adapts to the real-time human-vehicle-road state and adjusts the alarm threshold by neural network was proposed, and its feasibility was verified through simulations with changing real-time human-vehicle-road status, achieving adaptive adjustment of the warning region and the warning distance.

Acknowledgement. This research is supported by the National High Technology Research and Development Program 863 (Grant No. 2009AA11Z216).
References

1. Ministry of Public Security Traffic Management Bureau: PRC Road Accidents Statistical Report (2009), p. 8. Ministry of Public Security Traffic Management Research Institute, Wuxi (2010)
2. Shanghai Municipal Education Commission: Modern Automotive Safety Technology. Shanghai Jiaotong University Press, Shanghai (2006)
3. Chang, T.-H., Hsu, C.-S., Wang, C., Yang, L.-K.: Onboard Measurement and Warning Module for Irregular Vehicle Behavior. IEEE Transactions on Intelligent Transportation Systems 9(3), 501-513 (2008)
4. Yoshida, H., Awano, S., Nagai, M., Kamada, T.: Target Following Brake Control for Collision Avoidance Assist of Active Interface Vehicle. In: SICE-ICASE International Joint Conference 2006, Bexco, Busan, Korea, October 18-21, pp. 4436-4439 (2006)
5. Lee, K., Peng, H.: Evaluation of automotive forward collision warning and collision avoidance algorithms. Vehicle System Dynamics 43(10), 735-751 (2005)
6. Li, X.x., Li, B.c., Hou, D.z., Chen, G.w.: Basic study of rear-end collision warning system. China Journal of Highway and Transport 14(3), 93-95 (2001)
7. Wang, W.q., Wang, W.h., Zhong, Y.g., Yi, S.p.: Car-following safe distance control algorithm and implementation based on fuzzy inference. Journal of Traffic and Transportation Engineering 3(1), 72-75 (2003)
8. Xu, J., Du, W., Sun, H.: Safety distance about car-following. Journal of Traffic and Transportation Engineering 2(1), 101-104 (2002)
A Kind of Performance Improvement of Hamming Code Hongli Wang Department of Mathematical and Information Sciences, Tangshan Teacher’s College, The No. 156 in Jianshe Bei Road, Lubei District, 063000, Tangshan, China [email protected]
Abstract. Hamming code is a parity-based code in which the information bits and check bits must be interleaved at fixed positions in order to detect and correct errors. Based on the design principle of Hamming codes, this paper gives an improved method that places the check bits at the end of the code word, so that extracting the information bits is more convenient and the hardware implementation becomes easier. Keywords: Hamming code, parity code, performance improvement, error-correcting code.
1 Introduction

Hamming code is an extension of the parity code. It uses several check bits, each of which checks a different subset of the information bits; by arranging each parity bit over a suitable combination of the original data, errors can be found and corrected. Suppose there are m data bits; how long must the check part k be in order to correct a single error? A simple derivation: k check bits can take 2^k values. One of these values indicates that the data are correct, and the remaining 2^k − 1 values can indicate where an error lies, so if 2^k − 1 ≥ m + k (m + k being the total length after encoding), the k parity bits can in theory determine which bit is in error. The Hamming encoding steps are:
[1] Determine the number of check bits from the information bits: 2^k ≥ m + k + 1, where m is the number of information bits and k the number of check bits. The smallest k satisfying the inequality is the number of check digits.
[2] Determine the check-bit positions. The Hamming code numbers the bits from the left: the 1st bit is No. 1, the 2nd bit is No. 2, ..., the nth bit is No. n. The bits at the power-of-two positions (1, 2, 4, 8, etc.) are parity bits, and the remaining positions are filled with the m data bits.
[3] Form the parity bits. Each parity bit covers part of the data bits of the code word; its position determines which bits it verifies and which it skips:
Position 1: check one bit, skip one bit, alternating (1, 3, 5, 7, 9, 11, 13, 15, ...)
Position 2: check 2 bits, skip 2 bits, alternating (2, 3, 6, 7, 10, 11, 14, 15, ...)

M. Zhu (Ed.): ICCIC 2011, Part VI, CCIS 236, pp. 315-318, 2011. © Springer-Verlag Berlin Heidelberg 2011
Position 4: check 4 bits, skip 4 bits, alternating (4, 5, 6, 7, 12, 13, 14, 15, 20, 21, 22, 23, ...)
Position 8: check 8 bits, skip 8 bits, alternating (8-15, 24-31, 40-47, ...)
[4] Once the information bits are determined, the Hamming code can be obtained by the above method.
[5] At the receiving end, using the distribution rules of the information bits and parity bits, find the received data's information bits and parity bits and verify each check bit. Write down all the parity bits found to be wrong; adding all the erroneous check positions together gives the position of the error.
[6] Correct the error: the bit at the error position of the code word is inverted.
From the above principles and examples of Hamming code, we know that the Hamming code can detect and correct one error. However, in the Hamming code the information bits and check bits must be interleaved at fixed positions; in the 8421 Hamming code above, the parity bits must be placed at positions 8, 4, 2, and 1. In that case, extracting the information bits is inconvenient, and the hardware implementation becomes complicated.
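The steps above can be sketched as a small encoder for the classic interleaved Hamming code, together with the position-sum error-location rule of steps [5]-[6] (a sketch with even parity; function names are illustrative):

```python
def hamming_encode(data_bits):
    """Place data at non-power-of-two positions and fill even-parity check
    bits at positions 1, 2, 4, 8, ... (steps [1]-[4] above)."""
    m = len(data_bits)
    k = 0
    while 2 ** k < m + k + 1:      # step [1]: smallest k with 2^k >= m + k + 1
        k += 1
    n = m + k
    code = [0] * (n + 1)           # index 0 unused; positions are 1-based
    it = iter(data_bits)
    for pos in range(1, n + 1):
        if pos & (pos - 1):        # not a power of two -> data position
            code[pos] = next(it)
    for i in range(k):             # step [3]: parity over the covered positions
        p = 2 ** i
        for pos in range(1, n + 1):
            if pos != p and (pos & p):
                code[p] ^= code[pos]
    return code[1:]


def hamming_syndrome(code):
    """XOR of the 1-based positions of all set bits: 0 means no error,
    otherwise it is the position of the single erroneous bit (step [5])."""
    s = 0
    for pos, bit in enumerate(code, start=1):
        if bit:
            s ^= pos
    return s
```

For data 1011 this gives the (7,4) code word 0110011; flipping any single bit makes the syndrome equal to that bit's position.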
2 Performance Improvement

Instead of placing the check bits at fixed interleaved positions, we lay them only at the end of the code word, making it easier to extract the information bits. The Hamming code can thus be improved into a new error-correcting code. The new code is also composed of information bits and check bits, but now they satisfy the relation 2^k − 1 = n + k, where n is the number of information bits and k the number of check bits. This improvement is based on the encoding principle of Hamming codes: k parity bits can locate any single error among 2^k − 1 bits, whichever bit it is. In this improvement, if there are k check bits, the information bits number not 2^k − 1 but 2^k − 1 − k, letting the check digits occupy part of the code word. The following example describes the error-correction principle in detail. Based on the above relation, we take k = 4, n = 11, and data = 10111001011; the error-correcting bits are p1, p2, p3, p4.
b15 b14 b13 b12 b11 b10 b9 b8 b7 b6 b5 | b4 b3 b2 b1
 1   1   0   1   0   0  1  1  1  0  1  | p4 p3 p2 p1
In this scheme the detection of the information bits is separate from that of the check bits. We first compute the error-detection code:
p1 = b15 ⊕ b13 ⊕ b11 ⊕ b9 ⊕ b7 ⊕ b5 = 0
p2 = b15 ⊕ b14 ⊕ b11 ⊕ b10 ⊕ b7 ⊕ b6 = 1
p3 = b15 ⊕ b14 ⊕ b13 ⊕ b12 ⊕ b7 ⊕ b6 ⊕ b5 = 1
p4 = b15 ⊕ b14 ⊕ b13 ⊕ b12 ⊕ b11 ⊕ b10 ⊕ b9 ⊕ b8 = 1

so p4 p3 p2 p1 = 1110. Let p = p1 ⊕ p2 ⊕ p3 ⊕ p4, so p = 1.
[1] If the situation is the following. Sent code word (b15 ... b1):

1 1 0 1 0 0 1 1 1 0 1 1 1 1 0

Received code word (C15 ... C5, then p4' p3' p2' p1'), with bit 14 in error:

1 0 0 1 0 0 1 1 1 0 1 1 1 1 0

First inspect the error-detection code for errors:
p1' ⊕ p2' ⊕ p3' ⊕ p4' ⊕ p = 0, so the check bits contain no error. Then compute the syndromes:
s1 = C15 ⊕ C13 ⊕ C11 ⊕ C9 ⊕ C7 ⊕ C5 ⊕ p1' = 0
s2 = C15 ⊕ C14 ⊕ C11 ⊕ C10 ⊕ C7 ⊕ C6 ⊕ p2' = 1
s3 = C15 ⊕ C14 ⊕ C13 ⊕ C12 ⊕ C7 ⊕ C6 ⊕ C5 ⊕ p3' = 1
s4 = C15 ⊕ C14 ⊕ C13 ⊕ C12 ⊕ C11 ⊕ C10 ⊕ C9 ⊕ C8 ⊕ p4' = 1
so s4 s3 s2 s1 = 1110, whose decimal value is 14: bit 14 of the code word is in error. Correcting it, i.e., changing the 0 to 1, yields the correct code word.
[2] If the situation is the following. Sent code word (b15 ... b1):

1 1 0 1 0 0 1 1 1 0 1 1 1 1 0

Received code word (D15 ... D5, then p4 p3 p2 p1), with an error in one check bit:

1 1 0 1 0 0 1 1 1 0 1 1 0 1 0

First inspect the error-detection code for errors:
p1 ⊕ p2 ⊕ p3 ⊕ p4 ⊕ p = 1, so there is an error, and it lies in the error-detection code itself. The information bits are therefore correct and can be extracted directly from the code word; there is no need to locate or correct the erroneous check bit.
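The improved scheme above can be sketched end to end. The paper leaves the transport of the auxiliary check p implicit; in this sketch it is carried as an extra 16th bit, and all names are illustrative:

```python
INFO_POS = list(range(15, 4, -1))   # b15 .. b5 carry the 11 information bits


def encode(info):
    """info: 11 bits for positions b15..b5.  Returns b15..b1 plus p."""
    bits = dict(zip(INFO_POS, info))
    for i in range(4):              # p_{i+1} covers info positions with bit i set
        p = 0
        for pos in INFO_POS:
            if pos >> i & 1:
                p ^= bits[pos]
        bits[i + 1] = p             # p1..p4 occupy the last positions b1..b4
    overall = bits[1] ^ bits[2] ^ bits[3] ^ bits[4]   # auxiliary check p
    return [bits[pos] for pos in range(15, 0, -1)] + [overall]


def decode(word):
    """Return (corrected information bits b15..b5, syndrome)."""
    code, p = word[:15], word[15]
    bits = {15 - i: b for i, b in enumerate(code)}
    # The paper's first test: if p1 xor p2 xor p3 xor p4 xor p = 1, the single
    # error lies in the check bits and the info bits are extracted directly.
    if bits[1] ^ bits[2] ^ bits[3] ^ bits[4] ^ p:
        return [bits[pos] for pos in INFO_POS], 0
    s = 0                           # otherwise recompute the syndromes
    for i in range(4):
        t = bits[i + 1]
        for pos in INFO_POS:
            if pos >> i & 1:
                t ^= bits[pos]
        s |= t << i
    if s:
        bits[s] ^= 1                # info-bit error at position s: invert it
    return [bits[pos] for pos in INFO_POS], s
```

Encoding the example data reproduces the sent code word 110100111011110, and flipping bit 14 of the received word is located and corrected exactly as in case [1].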
Table 1. Relation between the number of parity bits and information bits

Parity bits k    Information bits n    Code length 2^k − 1
2                1                     3
3                4                     7
4                11                    15
5                26                    31
6                57                    63
7                120                   127
...              ...                   ...
Relative to the 8421 Hamming code, the improved error-correcting code has the following characteristics: [a] It is easier to extract the information bits than with the Hamming code, so the hardware support is relatively simple. [b] It corrects information-bit errors only and need not consider the dislocation of the parity bits, so the error-correction time can be shortened.
3 Conclusion

Similarly, the check bits could also be laid at the front of the code word. But whether the Hamming code described above or the improved error-correcting code is used, both can discover and correct only one error; if two errors occur during transfer, both codes fail. In a typical channel, however, the probability of two or more errors in one code word is very small, so these codes are of great value.
References 1. Shen, S., Chen, L.: Information and coding theory. Science Press, Beijing (2002) 2. Cao, X., Zhang, Z.: Information and coding theory. Beijing University of Posts and Telecommunications Press (2001) 3. Yu, H.: The principle and construction method of Hamming code. Computer and Modernization 2, 146–150 (2001) 4. Shan, Y.: Implementation of cyclic codes and their application in the forward error correction. Journal of The University of Petroleum 5, 98–99 (2001) 5. Cai, M.: Error control coding technology and its application in data communication. Journal of Shaoxing University 1, 44–46 (2003)
Intelligent Home System Based on WIFI Zhang Yu-han and Wang Jin-hai School of Electronics and Information Engineering, Tianjin Polytechnic University (TJPU), Tianjin, 300160, China [email protected]
Abstract. In this paper we design an intelligent home system based on WIFI. We use the IEEE 802.11 WIFI standard and the GPRS network to manage the family's internal network, covering the network structure, the design of the nodes, and remote control over the GPRS network. The system, with few peripheral circuits, low power consumption, easy control, and low cost, can be widely used in ordinary households. Keywords: WIFI, wireless sensor nodes, GPRS network, AT commands.
1 Introduction

With economic development, people look forward to a safe, comfortable, energy-saving, and convenient living environment. A smart home can centrally or remotely monitor family affairs by integrating computer technology, embedded technology, sensor technology, network communication technology, and control technology. It combines a security control subsystem, lighting control subsystem, multimedia entertainment subsystem, network connection, and other functions, meeting the pursuit of a high-quality life in the information age. The design connects the various home-life subsystems through network communication technology, achieving remote monitoring and real-time management of the residence and providing a full range of multi-functional services that make home life easier for users.
2 The Overall Program of System Design

The system consists of a WIFI wireless local area network, the GPRS network, multiple wireless sensor nodes, the central controller, and a mobile phone. The design idea is as follows. On the one hand, the terminal modules collect data; when an exception such as a fire is detected, the wireless sensor nodes form a self-organizing network, and the collected data are sent by multi-hop transmission to the central controller, which sends an alarm SMS through the GPRS network to inform the head of the household so that it can be dealt with immediately. On the other hand, the head of the household controls the home appliances through a mobile phone: a short message is sent to the central controller, which receives the message, analyzes it, and sends the appropriate commands to the corresponding terminal. Finally, the software platform of the smart home system manages the working status of the home electronic equipment [1]. The overall system design is shown in Fig. 1:

M. Zhu (Ed.): ICCIC 2011, Part VI, CCIS 236, pp. 319-327, 2011. © Springer-Verlag Berlin Heidelberg 2011
Fig. 1. System design plan (mobile phone - Internet/GPRS - central controller - WIFI - temperature module, gas density acquisition module, video module, and other nodes)
3 Intelligent Home System Network Structure and Network Technology

The system uses a central controller combined with function sub-modules, so a star topology is selected. Wireless access is used, with the LAN bus and WIFI networking for network expansion. The home network is connected to the Internet through an ADSL modem. Remote browser access is provided by the embedded operating system and its TCP/IP protocol stack: a Web server is set up and a complete remote-control site based on the standard HTTP protocol is developed, through which the central controller is controlled remotely. For sending SMS in dangerous situations, a GPRS module is also added to the system to provide the text-messaging function.
4 Central Controller

The S3C2410A processor manufactured by Samsung is the core module of this system [2]. Together with the AW-GH321 WIFI module and the MC35i GPRS module, it forms the wireless sensor network interface and the GPRS network interface; the processor and the modules are connected through these two interfaces. The architecture of the system is shown in Fig. 2:
Fig. 2. Controller hardware architecture of the smart home system (S3C2410 with NAND flash, SDRAM, LCD and touch panel, JTAG debug, buzzer, reset and power-supply circuits; the GPRS module on the UART and the WIFI module on the SPI interface)
Controller Design of Peripheral Circuits. The peripheral circuits include the power supply, reset circuit, alarm circuit, LCD display module, JTAG debug circuitry, and other parts.

WIFI Interface Circuit Design. The WIFI module uses the AW-GH321 as its main chip. The AW-GH321, built around the Marvell 88W8686 core, supports the IEEE 802.11b/g WLAN protocols, provides SDIO and G-SPI interfaces, and supports multiple power-saving modes, offering a simple design, a short development cycle, and low power consumption [3]. The WIFI module hardware connection is shown in Fig. 3:
Fig. 3. WIFI hardware interface connection diagram
Chip pins 3V_IO, 3V_PA, VIO_X2, and VIO_X1 are connected to the 3.3 V supply that powers the chip and the digital I/O ports; pins VDD18_X3 and VDD18A are connected to the 1.8 V supply for the analog I/O ports. EXT_REF_CLK is the external 26 MHz clock and SLEEP_CLK is 32.768 kHz. SPI_CLK, SPI_SDI, SPI_SCS, and SPI_SDO are the SPI interface pins. The SPI_SINT interrupt pin is connected to the EXINT2 pin of the S3C2410A.
5 WIFI Wireless LAN Technologies

WIFI is the IEEE-defined industry standard for wireless network communication, namely IEEE 802.11. The IEEE 802.11b wireless networking specification is an extension of IEEE 802.11 with a maximum bandwidth of 11 Mbps; if the signal is weak or suffers interference, the bandwidth falls back to 5.5, 2, or 1 Mbps, and this automatic adjustment effectively protects the stability and reliability of the network. Its main features are high speed and high reliability: in an open area the communication distance is up to 305 m, and in a closed area it is 76 m to 122 m. It also integrates easily with existing wired Ethernet, lowering the networking cost [4]. The 802.11 WIFI network protocol defines the physical layer, the media access control layer, and the logical link control layer, as shown in Table 1:

Table 1. The three-tier structure of 802.11

802.2 LLC (Logical Link Control)
802.11 MAC
802.11 PHY: FHSS | DSSS (802.11b, 11 Mbit/s, 2.4 GHz) | IR/DSSS | OFDM (802.11a, 54 Mbit/s, 5 GHz) | DSSS/OFDM (802.11g, 54 Mbit/s, 2.4 GHz)
The Configuration of the Wireless LAN Based on the Linux System. The OpenWrt wireless LAN configuration file is located at /etc/config/wireless. In the SecureCRT window, we enter the following command to open the built-in editor on the wireless file:

root@OpenWrt:~# vi /etc/config/wireless

Change the wireless configuration to:
config wifi-device radio0
    option type mac80211
    option channel 7
    option hwmode 11g
config 'wifi-iface'
    option 'device' 'radio0'
    option 'ssid' 'OpenWrt'
    option 'mode' 'ap'
    option 'network' 'lan'
    option 'encryption' 'wep'
    option 'key' '000000'

This enables the wireless LAN feature and sets the SSID to OpenWrt, the mode to 802.11g, the channel number to 7, the access mode to Access Point, the attached network to LAN, the encryption to WEP, and the password to 000000. The wireless LAN configuration of the S3C2410A is now finished.

Design of Resistance to Interference. Interfering signals may affect the WIFI network. In the IEEE 802.11b/g technology specifications, WIFI operates at a frequency of 2.4 GHz. The 2.4 GHz spectrum is divided into 14 overlapping, staggered 20 MHz wireless carrier channels whose center frequencies are spaced 5 MHz apart. Not all 14 channels can be used; this depends on regulatory constraints in different countries [5]. To avoid co-channel interference between duplicated WIFI signals, we change the WIFI channel number. The following command opens the built-in editor on the wireless file, located in /etc/config/:

root@OpenWrt:~# vi /etc/config/wireless

in which the line "option channel 7" can be modified to any value from 1 to 13.

Security Design. The WIFI encryption level is WEP by default in this design; to improve network security, it can be changed to WPA encryption. WPA is a subset of IEEE 802.11i and IEEE 802.1x, and its core is the Temporal Key Integrity Protocol (TKIP) [6]. To enhance security further, the WPA2 protocol can be adopted: it uses a similar automated key-exchange mechanism while protecting the user's investment in existing WPA infrastructure. To change the design from WEP to WPA, it is sufficient to modify the WIFI configuration file.
In/etc/config/wireless in: option 'encryption' 'wep' option 'key' '000000' Change the value of the encryption to WPA, and reconfigure the key value. Test Results. Fig. 4 shows the results of the data packet loss rate curve which is tested by iperf software tools, in the case of wireless LAN no encrypted; the results of the data
Z. Yu-han and W. Jin-hai
packet loss rate curve measured by the iperf tool with the wireless LAN unencrypted. Fig. 5 shows the throughput measured by the iperf tool on the wireless LAN in WPA encryption mode.
Fig. 4. Packet loss rate
Fig. 5. Network throughput
6 Design of Wireless Sensor Nodes The intelligent home network based on WIFI consists primarily of a number of wireless sensor nodes, wireless executive devices, and a home wireless control center. The sensor nodes are distributed in the region to be monitored, for data acquisition, processing, communications, etc.; the wireless executive devices drive the actuators to start sound and light alarms, surveillance and other functions; the home wireless control center processes and forwards the information from the wireless sensor nodes and provides remote control through the GPRS network. The system mainly collects room temperature, combustible-gas concentration and human-body infrared signals through the sensor nodes, to detect the indoor environment and possible illegal intrusion. As an example, the temperature sensors detect the local temperature. With no alarm present, acquisition runs once every 10 s; when an alarm arrives, it runs once every 3 s, waking up the WIFI communication service at the same time and sending the temperature data to the central controller; when a dangerous value occurs, acquisition runs once every 1 s, the WIFI communication service is woken up, and the temperature data is sent to the central controller.
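The adaptive sampling policy described above can be sketched as a small state-to-interval mapping. This is an illustrative sketch; the state names and the function are not from the paper's code:

```c
#include <assert.h>

/* Alarm states of a sensor node, as described in the text. */
enum node_state { STATE_NORMAL, STATE_ALARM, STATE_DANGER };

/* Return the sampling period in seconds for the current state:
   10 s when idle, 3 s once an alarm arrives, 1 s at dangerous values.
   In the alarm and danger states the node also wakes the WIFI link. */
static int sample_period_s(enum node_state s, int *wake_wifi)
{
    switch (s) {
    case STATE_ALARM:  *wake_wifi = 1; return 3;
    case STATE_DANGER: *wake_wifi = 1; return 1;
    default:           *wake_wifi = 0; return 10;
    }
}
```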
Intelligent Home System Based on WIFI
7 Design of GPRS Module Send and Receive SMS The GPRS modem in this system is the MC35i manufactured by Siemens. This high-performance GPRS module supports dual bands, EGSM 900 MHz and GSM 1800 MHz [7]. The module is connected through the serial port. The user's mobile phone communicates with the GPRS modem through GPRS, and the controller controls external devices by parsing serial commands, using AT commands as the software control interface. System Time Sequence of Sending and Receiving SMS 1) Time sequence of sending SMS. When the S3C2410A transmits collected data to mobile phone users, the controller first sends the command AT+CMGS="number" to the GPRS modem and then waits for the modem's feedback. If the feedback is correct, the controller sends the SMS content to the user. When the GPRS modem detects the end flag, it returns OK, which indicates one successful transmission. The flow chart of the send process is shown in Fig. 6. 2) Time sequence of receiving SMS. When users send messages to the S3C2410A, the S3C2410A first detects the new-message indication CMTI. When it detects a new message, the S3C2410A sends the CMGR instruction and reads the message. Because of the limited capacity of the SIM card, a subroutine issuing CMGD is called to delete the message after a new message has been read. The flow chart of the receive process is shown in Fig. 7.
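The send sequence can be illustrated by assembling the serial writes involved. AT+CMGS and the Ctrl-Z (0x1A) terminator are standard GSM 07.05 SMS commands; the helper function, number and buffer sizes below are illustrative, not the paper's code:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Build the two serial writes used to send one SMS in TEXT mode:
   first "AT+CMGS=\"<number>\"\r", then the message body terminated
   by Ctrl-Z (0x1A), which the modem answers with "OK" on success. */
static void build_cmgs(const char *number, const char *text,
                       char *cmd, size_t cmdsz, char *body, size_t bodysz)
{
    snprintf(cmd, cmdsz, "AT+CMGS=\"%s\"\r", number);
    snprintf(body, bodysz, "%s%c", text, 0x1A);
}
```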
Fig. 6. Send short message process flow chart
Fig. 7. Receive short message process flow chart
Test Results. In this system, the serial port design is the basis and also the most important part. In the hardware design, we added an RS232 interface to connect with a PC, so the S3C2410A's sending and receiving of data can be observed with a serial debugging assistant [8]. Fig. 8 shows the test interface when sending text messages in TEXT mode.
Fig. 8. TEXT mode to send SMS test interface
8 Conclusion With the development of wireless networks and data communication technologies, people's demands on their living environment have increased continually. The concept of
information was introduced into the home environment, opening a new chapter of family informatization and intelligence. The work realizes a smart home system with a WIFI network at its core, making home life easier through integrated management over the network. The innovations of the authors: because the system uses a WIFI wireless network, it needs less external circuitry, consumes less power, is simple to control and low in cost, so it can be widely applied in ordinary families; because it uses the WIFI wireless sensor network and the GPRS network as the transmission network, it solves the wiring complexity and high cost of traditional smart home systems. Future development directions: ensuring the security and stability of the system by improving the software and hardware design, speeding up the system's emergency response, reducing system power consumption, and evolving toward a PC-independent form with a complete free digital space and a wide range of services suitable for all users.
References 1. Cai, J.: Intelligent Home System Design and Implementation Based on Linux. Wuhan University, Wuhan (2010) 2. Samsung. S3C2410X 32-BIT Risc Microprocessor User’s Manual (2004) 3. Azure Wave Corp. AW-GH321 Datasheet Version 0.2 (May 31, 2007) 4. Xiong, H.: Discussion about WIFI Technique and Its Application. Communications and Information Technology (168) (2007) 5. Matthew, G.: 802.11 Authoritative Guide to The Wireless Network. Tsinghua University Press, Beijing (2002) 6. Li, L., Zhu, L., Ju, T.: The Research of 802.11 Wireless Network WPA security system. Journal of Nanjing University of Posts and Telecommunications 24(1), 78–81 (2004) 7. Simcom. SIM35i Hardware Interface Description v2.02 (June 2007) 8. Guo, H.: Design of The Wireless Terminal with Send and Receive Short Message Based on GSM Module TC35i. Southwest Petroleum University (2004)
A Channel Optimized Vector Quantizer Based on Equidistortion Principal and Wavelet Transform Wang Yue Institute of Electronic Information, Zhejiang Gongshang University, Hangzhou, 300018 , P.R. China [email protected]
Abstract. The paper presents a new algorithm to design a channel-optimized vector quantizer (COVQ) based on the equidistortion principle and the wavelet transform. The algorithm creates a new codebook vector near the sub-region with the biggest sub-distortion and then replaces the codebook vector with the smallest sub-distortion with this new vector, thereby equalizing the sub-distortions of all sub-regions. The algorithm achieves a significant improvement of COVQ performance over noisy channels, as confirmed by experimental results. Keywords: COVQ, wavelet transform, noisy channel, equidistortion principle.
1 Introduction Vector quantization (VQ) is an important and successful source coding technique in many digital communication applications. A vector quantizer operates by mapping a large set of input vectors into a finite set of representative codevectors. The transmitter transmits the index of the nearest codevector to the receiver over a noisy channel, and the receiver decodes the codevector associated with the index and uses it as an approximation of the input vector. However, transmitting VQ data over noisy channels corrupts the encoded information and consequently leads to severe distortions in the reconstructed output. One approach is to optimize the VQ index assignment to minimize the distortion. A channel-optimized vector quantizer (COVQ) designs the optimal vector quantization encoder according to the channel state and channel noise; it can obtain the best performance under a known channel condition. The key point of designing a vector quantizer is the codebook: that is, finding a codebook and an assignment algorithm that make the total distortion of the vector sequence smallest. The traditional LBG algorithm [1] performs vector quantization based on the nearest-neighbor condition and the centroid condition, but it has shortcomings such as dependence on the initial codebook and easy convergence to local optima. The tabu search codebook algorithm [2] can reach global convergence by exploiting temporal memory. R. Cierniak [3] proposed a frequency-sensitive competitive learning algorithm to overcome local optimization. J. S. Pan [4] combined the LBG and genetic algorithms. When designing a channel-optimized vector quantizer, M. Zhu (Ed.): ICCIC 2011, Part VI, CCIS 236, pp. 328–334, 2011. © Springer-Verlag Berlin Heidelberg 2011
it is not guaranteed to reach the global optimum if only the nearest-neighbor condition and the centroid condition are satisfied. Gersho [5] proposed the equidistortion principle. Zhu C proposed minimax partial distortion competitive learning (MMPDCL) to decrease the sub-region distortion. Ueda N [6] proposed an algorithm based on neural networks to make the distortion of each sub-region equal. Hamidreza [7] proposed a COVQ for symbol-by-symbol maximum a posteriori (MAP) hard-decision demodulated channels. In [8] an algorithm (EAIAA) based on an evolutionary algorithm was proposed for index assignment over noisy channels. This paper proposes an algorithm to design a channel-optimized vector quantizer (COVQ) based on the equidistortion principle and the wavelet transform. The algorithm creates a new codebook vector near the sub-region with the biggest sub-distortion and then replaces the codebook vector with the smallest sub-distortion with this new vector, thereby equalizing the sub-distortions of all sub-regions. The feasibility and efficiency of the algorithm are confirmed by experimental results.
2 Vector Quantization over Noisy Channels A typical VQ system contains a finite predetermined collection of codevectors (a codebook) and a vector distortion measure which, given two vectors, yields a distance (or distortion) between them. A sequence of input vectors is coded by associating each input vector with the binary index of the codevector whose distance from the input vector is minimized. This coding relies on properly assigning the indices to the codevectors, which is the index mapping π. The index is then transmitted through the noisy channel to a receiver, which decodes the codevector associated with the index (by a lookup operation) and uses it as an approximation of the original input vector. A block diagram of a noisy-channel vector quantizer is shown in Fig. 1.
Fig. 1. Block diagram of VQ on a discrete memoryless channel
In Fig. 1, En is the quantizer encoder, De is the quantizer decoder, and π is the index mapping. If the channel input index is i and the output index is j, the channel transition probability is p(j|i); with index assignment π(i), the total average distortion [11] can be written as
D = \frac{1}{k} \sum_{i=0}^{N-1} \int_{S_i} p(X) \Big\{ \sum_{j=0}^{N-1} p(j \mid \pi(i)) \, d(X, Y_j) \Big\} \, dX \qquad (1)

Then, for a given π and codebook Y, the optimal channel vector quantizer should satisfy:

S_i = \Big\{ X \,\Big|\, \sum_{j=0}^{N-1} p(j \mid \pi(i)) \, d(X, Y_j) = \min_{0 \le l \le N-1} \sum_{j=0}^{N-1} p(j \mid \pi(l)) \, d(X, Y_j) \Big\} \qquad (2)

Y_j = \frac{\sum_{i=0}^{N-1} p(j \mid \pi(i)) \int_{S_i} X \, p(X) \, dX}{\sum_{i=0}^{N-1} p(j \mid \pi(i)) \int_{S_i} p(X) \, dX} \qquad (3)
3 Channel Vector Quantizer Based on the Equidistortion Principle and Wavelet Transform (1) Codebook selection after wavelet transform. To concentrate the energy of the image data and decrease the quantization distortion, thereby enhancing the recovered image quality, our algorithm applies a 2- or 3-level biorthogonal 9/7 wavelet transform [9] to the original image. In general, vector selection after the transform picks different numbers of vectors and dimensions in each sub-band and trains them separately. In our algorithm, considering the parent-child relations of zerotree encoding [10], we combine the coefficients at the same spatial position in the different sub-bands into one vector, obtaining 16- or 64-dimensional vectors as shown in Fig. 2 and Fig. 3. Thus a unified codebook can be used, and after quantization an entropy encoder can be applied to enhance compression performance.
Fig. 2. 16 dimension codebook
Fig. 3. 64 dimension codebook
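The parent-child position relation used in zerotree coding can be sketched as follows. This is a generic illustration of the standard dyadic relation, not necessarily the paper's exact grouping:

```c
#include <assert.h>

/* In a dyadic wavelet decomposition, a detail coefficient at (r, c)
   in a finer level has its zerotree parent at (r/2, c/2) in the next
   coarser level. Grouping a parent with its 2x2 children across the
   detail orientations is one way to form same-position vectors like
   those of Fig. 2. */
static void zerotree_parent(int r, int c, int *pr, int *pc)
{
    *pr = r / 2;
    *pc = c / 2;
}
```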
(2) Equidistortion principle. Gersho [5] gave an asymptotic result relating the rate and average distortion of vector quantizers at high rate and small distortion: for a probability density p(X), when the number of codevectors is large enough, every sub-region of the optimal vector quantizer contributes equally to the average distortion. This asymptotic result is an ideal condition under the assumption that N is infinite; in real vector quantization the codebook size is finite and the sub-region distortions are not equal. A further necessary condition for an optimal vector quantizer is therefore to make them as equal as possible. In our algorithm, we introduce this principle into the design of the channel vector quantizer and make it satisfy three conditions: the nearest-neighbor condition, the centroid condition and the equidistortion principle. (3) Averaging the distortion. In our algorithm, we first sort the sub-region distortions, then apply a variation to the codebook vector that has the biggest distortion, generating a new codebook vector near it, and replace the codebook vector with the smallest distortion with this new one. One by one, m(t) codebook vectors are replaced in this way. m(t) should be a decreasing function of the iteration number t:
m(t) = \Big[\, c \times N \times \frac{1}{\sqrt{2\pi} \times shape} \exp\Big( \frac{-(t/T)^2}{2 \times shape^2} \Big) \Big] \qquad (4)

where N is the codebook size, c is a weighting coefficient, shape is a shape factor, and [·] is the integer operator. For each codebook vector j to be varied, a random vector \delta_k^j is added to Y_k^j to make the new vector Y_k^{*j}:

Y_k^{*j} = Y_k^j + \delta_k^j \qquad (5)

\delta_k^j = d \times N(e_{jk}^2, 0) \times e^{-t/T} \qquad (6)

where d is a weighting coefficient, k indexes the kth component of the vector, N(e_{jk}^2, 0) is a Gaussian random variable with mean 0 and variance e_{jk}^2, and e_{jk} is the average distortion of the kth component of the jth codebook vector:

e_{jk} = \frac{1}{M} \sum_{m=1}^{M} \big| x_k^m - y_k^j \big| \qquad (7)

where M is the number of training vectors in the jth sub-region and x_k^m is the kth component of the mth training vector. Equations (5), (6) and (7) make the new variation vector fall near the average distribution point of the training vectors.
(4) Algorithm Step 1: Initialization: create the codebook randomly; set the iteration number t and the number of codebook vectors to vary, m(t). Step 2: Averaging of distortion: first calculate the distortion of each sub-region and sort them in increasing order; then select the vector with the biggest distortion, vary it by (5), and replace the smallest-distortion codebook vector with the result; in this way, eliminate the vectors with smaller distortion gradually until the number of operations reaches m(t). Step 3: Assign each training vector to a sub-region by condition (2). Step 4: Calculate each sub-region center by (3) and generate the new codebook vectors. Step 5: End decision: t = t + 1; if t has not reached the iteration limit, return to Step 2; otherwise stop.
4 Experiment Results In our experiments, the training vectors are all taken from the coefficients of the Lena image after the wavelet transform. The vectors are 16-dimensional, as shown in Fig. 2. The codebook sizes are 32, 64, 128 and 256 respectively. The noisy channel is a binary symmetric channel with channel error probability ε = 10^-5, 10^-4, 10^-3, 10^-2, 10^-1. The number of iterations is 200, c is 0.4, d is 0.001 and shape is 0.5. The SNR results of our algorithm under different channel error probabilities are given in Fig. 4, for codebook size 256 and vector dimension 16. In this figure we compare the performance of the traditional GLA and of WT+GLA (wavelet transform + GLA) with our algorithm. Fig. 6 and Fig. 7 are the experimental results for codebook sizes 128 and 256 respectively, with channel error probability 0.01.
Fig. 4. Performance comparison of the 3 algorithms
Fig. 5. Lena original
Fig. 6. PSNR = 27.604 (N=128)
Fig. 7. PSNR = 28.385 (N=256)
Table 1 shows the PSNR of the 3 algorithms for different codebook sizes; the channel error probability is 0.01. Table 1. PSNR comparison
From the experimental results, we can see that our channel-optimized vector quantizer gains 1.5-2.4 dB in SNR compared with the GLA algorithm, and 0.5-1.4 dB compared with WT+GLA. As the codebook size increases, the performance increases too. When the codebook size is 16, our algorithm gains 0.2-0.5 dB over WT+GLA; when the codebook size is 256, it gains 0.8-1.4 dB over WT+GLA. This also illustrates the feasibility of the equidistortion principle.
5 Summary We proposed a channel-optimized vector quantizer based on the equidistortion principle and the wavelet transform. The algorithm averages the distortion over the different sub-regions and thereby increases the channel vector quantizer's performance under noisy channels. Simulation results confirm that the algorithm yields a significant reduction in average distortion compared to other algorithms.
References [1] Linde, Y., Buzo, A., Gray, R.M.: An algorithm for vector quantizer design. IEEE Trans. Commun., COM 28(1), 84–95 (1980) [2] Glover, F., Laguna, M.: Tabu Search, pp. 1–354. Kluwer Academic Publishers, Dordrecht (1997) [3] Cierniak, R., Rutkowski, L.: On image compression by competitive neural networks and optimal linear predictors. SP.Image Communication 15, 559–565 (2000) [4] Pan, J.S., McInnes, F.R., Jack, M.A.: VQ Codebook Design Using Genetic Algorithms. Electronics Letters 31(17), 1418–1419 (1995) [5] Zeger, K., Gersho, A.: Pseudo-Gray coding. IEEE Trans. Commun. 38(12), 2147–2158 (1990) [6] Ueda, N., Nakano, R.: A new competitive learning approach based on an equidistortion principle for designing optimal vector quantizers. Neural Networks 7(8), 1211–1227 (1994) [7] Saffar, H.E., Alajaji, F.: COVQ for MAP Hard-decision Demodulated Channels. IEEE Communication letters 13(1) (January 2009) [8] Li, T., Yu, S., Zhang, G.: An vector quantizer index assignment. The journal of electronical 30(6), 876–879 (2002) [9] Daubechies, I.: Ten Lectures on Wavelets. Rutgers University and AT&T Bell Laboratories (1992) [10] Shapiro, J.M.: Embedded image coding using zerotrees of wavelet coefficients. IEEE Trans. On SP 41(12), 3445–3462 (1993)
ESPI Field Strength Data Processing Based on Circle Queue Model Hongzhi Liu and Shaokun Li College of Computer and Information Engineering, Beijing Technology and Business University, Beijing, 100048, China [email protected]
Abstract. A GSM-R field strength testing system based on switched Ethernet was designed for the needs of high-speed railways. The testing system is mainly used to test GSM-R field strength along railways where the train speed is above 250 km/h. The system receives field strength data, displays the real-time curve, replays the curve and analyzes the field strength data. In the data-gathering part of the system, a circular queue model is used to process the real-time field strength data. GPS is used to obtain the current geographic position of the train and to calculate its average speed over a period of time for reference. Field strength graphs drawn by the testing software in a simulation experiment are presented. Keywords: field strength testing, GSM-R, circle queue, high-speed railway.
1 Introduction With the development of the national economy, China's railway construction has undergone tremendous changes. After several rounds of speed increases, train speed has risen dramatically, particularly for CRH trains, whose speed usually reaches 200 km/h or more [1]. On September 28, 2010, in a test on the Shanghai-Hangzhou high-speed railway from Hangzhou to Shanghai Hongqiao, the "Harmony" CRH380A developed by Sifang Locomotive Co., Ltd. reached a maximum speed of 416.6 km/h, which again refreshed the top speed of high-speed rail in the world. At the same time, high-speed railways place higher requirements on running safety. As the basis of modern railway communication scheduling, it is extremely important that the GSM-R field strength signal covers the railway in accordance with the relevant standards [2]. In China, GSM-R base stations are generally constructed on the basis of the original railway communication systems [3]. Therefore, it is necessary to measure the GSM-R field strength coverage for corresponding optimization after the reconstruction [4]. This requires analyzing a large number of GSM-R field strength data samples collected along the railway. The most intuitive form of data analysis is the curve, so pre-processing this large volume of data is critical. M. Zhu (Ed.): ICCIC 2011, Part VI, CCIS 236, pp. 335–342, 2011. © Springer-Verlag Berlin Heidelberg 2011
2 Composition of the System The ESPI is a spectrum and network analyzer developed by Rohde & Schwarz of Germany [5]. It can measure spectra in the range 9 kHz-7 GHz and supports remote control via a LAN interface. According to the ESPI manual, each unit can test up to 10 channels (four transmission channels, three low-adjacent channels and three high-adjacent channels), and a test computer can connect up to 6 ESPI units via Ethernet [6]. The testing system is composed of several ESPI units connected by switched Ethernet. Switched Ethernet uses store-and-forward mode, so each ESPI forms a separate collision domain with the host, which avoids the CSMA/CD contention of traditional Ethernet and improves the data throughput of the testing network [7]. The composition of the ESPI testing system is shown in Fig. 1. A computer with the GSM-R field strength testing software installed controls all the ESPI units connected through the switch. GPS provides the time trigger pulse for the ESPI and an axle rangefinder provides the distance trigger pulse.
Fig. 1. Composition of the ESPI testing system
In the process of testing GSM-R field strength, the test computer first establishes good physical and logical connections with each ESPI; the tester can then send control commands and receive the GSM-R field strength data. The control commands include initializing the ESPI, gathering its current state, settings for the data collection mode, and so on. In addition, the computer receives field strength data from the ESPI via Ethernet. Under normal circumstances fewer than 6 ESPI units are connected to the computer, and data communication occurs only between the test computer and each ESPI, never between ESPI units, so the testing network topology is relatively simple. We can establish a connection state table to represent whether each ESPI has established a good logical connection with the computer and to record the status of the connection between the computer and each ESPI.
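The connection state table mentioned above can be sketched as a small array of entries. The structure, field names and IP addresses are illustrative, not the paper's code:

```c
#include <assert.h>
#include <string.h>

#define MAX_ESPI 6   /* a test computer connects at most 6 ESPI units */

/* Connection state table: one entry per ESPI, recording whether a
   logical connection with the host has been established. */
struct espi_entry {
    char ip[16];
    int  connected;   /* 0 = not connected, 1 = logical connection OK */
};

static struct espi_entry g_table[MAX_ESPI];

static void table_set(int idx, const char *ip, int connected)
{
    strncpy(g_table[idx].ip, ip, sizeof g_table[idx].ip - 1);
    g_table[idx].ip[sizeof g_table[idx].ip - 1] = '\0';
    g_table[idx].connected = connected;
}

static int table_all_connected(int n)
{
    for (int i = 0; i < n; i++)
        if (!g_table[i].connected)
            return 0;
    return 1;
}
```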
Fig. 2. Flow chart of detecting ESPI connection
3 Detecting the Connection of the System After the connection is established between the computer and each ESPI via the LAN interface, the IP address or host name of the ESPI must be passed to the RSDLLibfind() function to establish the logical connection. The function provides a handle for access to the ESPI; if the ESPI is not found, the handle has a negative value. The function prototype is:

short WINAPI RSDLLibfind(char far *udName, short far *ibsta, short far *iberr, unsigned long far *ibcntl);

The parameters respectively represent the device name or IP address of the ESPI, the status word of the RSIB interface, the error variable of the status word, and the count variable, which is updated with the number of transferred bytes each time a read or write function is called. The code connecting the computer with the ESPI is as follows (a break is added after a successful connection so the retry loop terminates):

short ESPIConnect(char *ESPI_IPAddr)
{
    short hESPI = -1;                    /* ESPI handle */
    for (;;)
    {
        hESPI = RSDLLibfind(ESPI_IPAddr, /* IP address of ESPI */
                            &ibsta,      /* RSIB status word */
                            &iberr,      /* error variable */
                            &ibcntl);    /* byte count, not used */
        if (hESPI != -1)                 /* connect succeeded */
        {
            /* set receiver to remote control */
            RSDLLibwrt(hESPI, "INST:SEL REC", &ibsta, &iberr, &ibcntl);
            break;
        }
    }
    /* connect succeeded, return equipment handle */
    return hESPI;
}

The flow chart of detecting the connection between the computer and the ESPI is shown in Fig. 2. We obtain the ESPI handle as the return value after passing the ESPI's IP address to ESPIConnect(). The computer can then communicate with the ESPI, such as sending commands or receiving GSM-R field strength data.
4 Circular Queue Model In the testing process, the ESPI receives a large amount of data and transmits it to the computer for real-time processing. When the train speed is 350 km/h, each ESPI produces more than 2,400 data samples per second when testing only one frequency. The data type is float, so a data transfer rate of about 80 kbps is needed. Moreover, to improve efficiency, each ESPI usually tests more than one channel, so the amount of real-time data the test computer must process is quite large. Generally the test computer is connected to several ESPI units while receiving field strength data. To accelerate data processing, the system software creates a data-processing thread using the circular queue model for each ESPI. As shown in
Fig. 3. Circle queue model
Fig. 3, the size of the data-receiving buffer is n. When field strength data is read by the test computer, it is put into the buffer from 1 to n in order. At the same time, the data in the circular queue is copied to the data-processing buffer, so the circular queue buffer receives data from the ESPI cyclically. When the data-processing buffer is full, it is processed by the data-processing thread and then sent to the curve-drawing thread; the data is also stored in the database. We can define a data structure to represent the circular queue buffer, so that we can create and initialize it, read from or write to it, compute its used and free space, or flush it. The data structure is defined as follows:

typedef struct CircleBuffer
{
    CircleBufferUninitialise Uninitialise;
    CircleBufferWrite        Write;
    CircleBufferRead         Read;
    CircleFlush              Flush;
    CircleGetUsedSpace       GetUsedSize;
    CircleGetFreeSpace       GetFreeSize;
    CircleSetComplete        SetComplete;
    CircleIsComplete         IsComplete;
    BYTE*            m_pBuffer;
    unsigned int     m_iBufferSize;
    unsigned int     m_iReadCursor;
    unsigned int     m_iWriteCursor;
    HANDLE           m_evtDataAvailable;
    CRITICAL_SECTION m_csCircleBuffer;
    BOOL             m_bComplete;
} CircleBuffer;

In this structure, m_iBufferSize denotes the size of the circular buffer; m_iReadCursor and m_iWriteCursor denote the current read and write cursors respectively; m_evtDataAvailable is the event handle signaling available data in the buffer; and m_csCircleBuffer is the critical section for multi-threaded access. During data reception we use GPS to obtain the current latitude, longitude and altitude of the train [8], so that we can compute the distance the train covers within a time interval. Within the permitted measurement accuracy, the earth can be treated approximately as a sphere with an average radius of about 6378 km. The actual radius at the current location of the train is:
r = 6378 + cur_alti ÷ 1000        (1)

where cur_alti denotes the altitude (in meters) of the current location of the train. The distance of one degree of latitude is:

lat_unit = π r ÷ 180        (2)

The distance of one degree of longitude is:

lon_unit = 111.43 × cos(cur_lat × π ÷ 180)        (3)

And let

lon_diff = cur_lon − old_lon        (4)

lat_diff = cur_lat − old_lat        (5)

where cur_lon denotes the longitude of the current location of the train, old_lon the longitude of the last location, cur_lat the latitude of the current location, and old_lat the latitude of the last location. Then the distance covered by the train within a certain period of time can be calculated by the following formula:

distance = √( (lon_diff × lon_unit)² + (lat_diff × lat_unit)² )        (6)
We could use it to compute the average speed of the train in a certain period of time for reference.
5 Simulation Result In Fig. 4, the testing system has drawn the curves for a railway section from K621 to DaTong Station based on the field strength data. The testing frequency is 933 MHz, and the curves include the peak value, the RMS value and the mean value. The horizontal axis denotes the kilometer mark and the vertical axis denotes the field strength value (in dBm). Red, green and blue respectively denote the GSM-R field strength peak value, RMS value and mean value.
Fig. 4. Field strength curve of K621 to DaTong
We can see that the current speed of the train was 69.7 km/h, and that the next station was K559 with kilometer mark 559.5 km. In addition, we can conclude that the field strength was usually distributed between -110 dBm and -50 dBm, that the field strength near stations is stronger, and that the driving direction of the train was the up direction. Part of the field strength data of Fig. 4 is shown in Table 1, including the testing frequency, the kilometer mark, the field strength values and so on.

Table 1. Field strength data of K621 to DaTong

Frequency [MHz] | Mark [km] | Peak [dBm] | RMS [dBm] | Mean [dBm]
933.200 | 62.100 | -51.38 | -57.49 | -56.33
933.200 | 62.090 | -50.00 | -50.00 | -50.00
933.200 | 62.080 | -50.71 | -54.18 | -53.57
933.200 | 62.070 | -54.16 | -57.59 | -57.00
933.200 | 62.060 | -53.45 | -57.95 | -57.51
… | … | … | … | …
933.800 | 32.760 | -92.94 | -99.29 | -98.27
933.800 | 32.750 | -93.95 | -100.12 | -99.16
933.800 | 32.740 | -93.76 | -100.11 | -99.11
933.800 | 32.730 | -54.59 | -57.88 | -57.27
933.800 | 32.720 | -50.00 | -50.00 | -50.00
6 Conclusion This paper described ESPI field strength data processing based on a circular queue model. In the GSM-R field strength testing system, using the circular queue model in each receiving thread improves the data throughput and the data-processing speed of the software, so that GSM-R network designers can find problems in the GSM-R network in time and conduct effective analysis and resolution, ensuring that the GSM-R communication scheduling system works properly.
References 1. Long-term railway network plan (2008 adjustment). Republic of China Ministry of Railway (2008) (in Chinese) 2. Liu, H., Lin, M., Ma, J.: Research on Vehicle GSM-R Field Strength Testing System. Microcomputer Information (15), 115–117 (2007) (in Chinese) 3. Wu, H., Zhong, Z.-d.: GSM-R business and the application in China Railway. Mobile Communications (08), 18–22 (2007) (in Chinese) 4. Li, H., Liu, H.-z.: Design and realization of field-strength data acquisition system for GSM-R based on ESPI. Journal of Beijing Technology and Business University 27(3), 51–55 (2009) (in Chinese) 5. Supplement A to Operating Manual EMI Test Receiver ESPI3 and ESPI 7 (Firmware Version 1.72). Rohde & Schwarz Corporation
6. Remote Control of R&S Spectrum and Network Analyzers via LAN. Rohde & Schwarz Corporation (2003) 7. Li, S.-k., Liu, H.-z., Lin, M.: Study of GSM-R field strength testing system based on switched Ethernet. In: The latest progress of multi-agent system-papers of 2010 the Sixth National Multi-Agent System and Control Annual Conference, pp. 231–234. National Defense Industry Press, Beijing (2010) (in Chinese) 8. Liu, H., Bo, L.: Research on The Visualization of GSM-R Field Strength Testing Data. In: 2010 the 3rd IEEE International Conference on Computer Science and Information Technology, pp. 595–600 (2010)
The Research on Model of Security Surveillance in Software Engineering Based on Ant Colony Algorithm Hongzhi Liu and Xiaoyun Deng College of Computer and Information Engineering, Beijing Technology and Business University, 100048, Beijing, China [email protected]
Abstract. Combining the current situation of software engineering construction with construction safety risk, this paper analyzes the importance of software engineering construction risk and its evaluation. To ensure safety in software engineering construction, the paper presents a security surveillance model based on the ant colony algorithm, then implements and simulates the model and analyzes the results. The model can quantify the project's risk assessment and achieve the security surveillance functions effectively. Keywords: security surveillance, ant colony algorithm, software engineering, risk assessment.
1 Introduction

With the increasing scale and complexity of software, software quality and security have become increasingly important. As a means of assuring software quality and improving security, software engineering surveillance [1] is becoming increasingly important throughout the software development cycle. The purpose of surveillance [2] is to find the potential risks in software engineering, so that the probability of failure after the software is put into use is as small as possible. In software engineering construction, security surveillance is a very important part of the process, and surveillance mechanisms have received growing attention in software engineering in recent years. Traditional probability-optimization methods do not consider the consequences of accidents, which can sometimes lead to project decisions of greater risk, while dynamically determining operation according to the most severe faults and the on-site safe operating limits often leads to high operating costs. A security surveillance model based on the ant colony algorithm can effectively implement risk assessments at the various stages of a project and is of great theoretical and practical value in this field.
2 Establishment of the Surveillance Model

The ant colony algorithm is a new type of probabilistic search algorithm. It takes pheromone, a biological message substance, as the basis for the ants' follow-up actions and completes the optimization process through the coordination of the ants. This paper

M. Zhu (Ed.): ICCIC 2011, Part VI, CCIS 236, pp. 343–349, 2011. © Springer-Verlag Berlin Heidelberg 2011
carries out some research on applying the ant colony algorithm to software engineering risk assessment. A reasonable choice of project risk goals and of the risk-control program [3] should minimize the probability of risk loss, the cost of the risk-control measures, and the related influence on the selected service targets. According to the objective of software engineering risk management, the objective function takes the form
\[ \max f_m = V_m - W_n \tag{1} \]

\[ W_n = aW_t + (1-a)W_c \tag{2} \]

\[ W_c = \sum_{i=1}^{m}\sum_{j=0}^{n} c_{ij} \tag{3} \]

\[ V_m = \sum_{i=1}^{m}\sum_{j=0}^{n} V_{ij}\, q_{ij} \tag{4} \]
Here $f_m$ is the net income of the software engineering project; $V_m$ is its expected income; $W_n$ is the price paid to achieve the goals, including the cost $W_t$ of the risk threat and the input cost $W_c$ of implementing the risk-control measures. Because of budget constraints and the bounded rationality of the controlling side, the risk-management resources provided by the software services must be limited. The index $i = 1, 2, \ldots, m$ numbers the income-bearing software services, $m$ being the number of services; the index $j = 0, 1, 2, \ldots, n$ numbers the risk-control measures, where the 0th measure of risk $i$ means that no measure is selected and $n$ is the number of risk-management measures the services may choose from. $c_{ij}$ is the cost of control measure $j$ for service $i$. The risk factor $a$ expresses the weight ratio between the cost of risk loss and the cost of implementing the risk-control measures; $q_{ij}$ is the success probability of implementing target $j$ of software service $i$, and $V_{ij}$ is the value obtained by implementing it. By the inevitability and objectivity of the nature of risk, the occurrence of a risk event is closely related to the system state, the environment, and the level of risk management and control, and has nothing to do with the attitude of the decision makers. Therefore, the cost of the risk threat can be taken as approximately proportional to the integral of the risk loss over the measure cost, so

\[ W_t = \sum_{i=1}^{m}\sum_{j=0}^{n} \int_c (1-q_{ij})\, l_{ij}\, dc \tag{5} \]
According to (1)–(5), the objective function is

\[ f_m = \sum_{i=1}^{m}\sum_{j=0}^{n}\Bigl( V_{ij}\,q_{ij} - (1-a)\,c_{ij} - a\int_c (1-q_{ij})\,l_{ij}\,dc \Bigr) \tag{6} \]
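As a numerical check, objective (6) can be evaluated directly once the model quantities are fixed. The sketch below is purely illustrative: the helper name and the data values are ours, and the integral term is crudely approximated by (1 − q_ij)·l_ij·c_ij.

```python
def net_income(V, q, c, l, a):
    """Evaluate objective (6) over services i and measures j.

    V[i][j]: value of implementing target j of service i
    q[i][j]: success probability; c[i][j]: measure cost
    l[i][j]: risk loss; a: risk weight factor
    The integral of (1 - q_ij) * l_ij over the cost is
    approximated here by (1 - q_ij) * l_ij * c_ij (assumption).
    """
    total = 0.0
    for i in range(len(V)):
        for j in range(len(V[i])):
            risk_cost = (1 - q[i][j]) * l[i][j] * c[i][j]
            total += V[i][j] * q[i][j] - (1 - a) * c[i][j] - a * risk_cost
    return total

# Invented single-service, single-measure example:
f = net_income([[10.0]], [[0.8]], [[1.0]], [[2.0]], 0.3)
# 10*0.8 - 0.7*1.0 - 0.3*(0.2*2.0*1.0) = 7.18
```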
3 Realization of the Surveillance Model with the Algorithm

Content Framework for Software Engineering Security Surveillance. The specific content of software engineering security analysis and surveillance is shown in Fig. 1; the surveillance process can be divided into three phases: pre-surveillance, site surveillance, and post-surveillance.
Fig. 1. Content framework for software engineering security surveillance
The early stage mainly involves source-code review. Site surveillance includes stage verification of program security checks, system information-flow analysis, security configuration checks, security vulnerability checks, and security penetration testing [4]. The post stage is a security analysis of the system architecture.

Algorithm Principle and Realization. In the ant colony algorithm, an ant in action leaves pheromone that can be sensed by other ants of the same colony; the pheromone influences the actions of the ants that follow, which in turn deposit pheromone of their own, so the existing pheromone trail is reinforced and the cycle continues. In this way, a path traversed by more ants is more likely to be selected by later ants. Within a certain period, a short path is visited by more ants and thus accumulates more pheromone, so the probability of its being selected by other ants the next time is larger. This process continues until almost all the ants follow the shortest path. In solving the risk-planning problem, m ants represent m software project risk goals. First, the m ants are positioned at the starting point of the problem; each ant uses certain
state transition rules to move from one state to another until it finally reaches a service target and completes the corresponding selection of candidate risk-prevention measures for that target (a feasible solution of the risk-planning problem). Then, among the candidate risk-control measures chosen by all m ants, the k candidate risk measures with the minimum historical cost are identified [5], and the pheromone-update rules are applied to the current m possible targets to revise the pheromone intensity of each risk-control measure. The revision process simulates the ants' release of pheromone and the natural volatilization of pheromone. Finally, the revised pheromone guides the ants' search toward the optimal solution for controlling the software engineering target risks. The two rules mentioned above can be expressed as follows.

(1) The ant state transition rule. According to the characteristics of each risk target handled by an ant, each ant i has j positions to choose from. The transition probability is
\[
P_{ij}^{k} =
\begin{cases}
\dfrac{\tau_{ij}^{\alpha}\,\eta_{ij}^{\beta}}{\sum_{k \in J_i(k)} \tau_{ik}^{\alpha}\,\eta_{ik}^{\beta}}, & k \in J_i(k) \\[2ex]
0, & \text{else}
\end{cases}
\tag{7}
\]
In the formula, $P_{ij}^{k}$ is the probability that ant $k$ selects the risk measure, i.e., the possibility that the ant for project $i$ chooses measure $j$. The pheromone $\tau_{ij}$ represents the pheromone strength of applying the $j$th processing measure to the $i$th service project (the ant at position $j$); $\alpha$ and $\beta$ are non-negative parameters expressing the relative importance of the pheromone and of the heuristic information, respectively; $\eta_{ij}$ is the heuristic information, expressing the influence of the risk threat and the risk-control cost on the transition probability. The expression of $\eta_{ij}$ is
\[
\eta_{ij} = \frac{1}{\bigl[\,a \int_c (1-q_{ij})\, l_{ij}\, dc\,\bigr]^{\xi}\,\bigl[(1-a)\,c_{ij}\bigr]^{\gamma}}
\tag{8}
\]
Here $\xi$ and $\gamma$ are non-negative parameters representing the relative importance of the risk price and of the control cost, respectively.

(2) The pheromone correction rule. Once all ants have completed their selection of candidate risk-control measures (found a feasible solution of the risk-planning problem), a complete pheromone correction must be made for every risk measure. The correction rules are as follows:
\[ \tau_{ij}^{new} \leftarrow \rho\,\tau_{ij}^{old} + \Delta\tau_{ij} \tag{9} \]

\[ W_n = \sum_{i=1}^{m}\sum_{j=0}^{n} \omega_{ij} \tag{10} \]

\[ \omega_{ij} = a \int_c (1-q_{ij})\, l_{ij}\, dc + c_{ij} \tag{11} \]

\[
\tau_{ij} =
\begin{cases}
\dfrac{q_{ij}\,V_{ij}}{\omega_{ik} - \omega_{i,j+1}}, & \omega_{ij} < \omega_{i,j+1} \\[2ex]
0, & \omega_{ij} > \omega_{i,j+1}
\end{cases}
\tag{12}
\]

\[ \Delta\tau_{ij} = \sum_{k=1}^{m} \Delta\tau_{ij}^{k} \tag{13} \]

\[
\Delta\tau_{ij}^{k} =
\begin{cases}
Q / \omega_{ij}, & \text{ant } k \text{ chooses measure } j \\
0, & \text{otherwise}
\end{cases}
\tag{14}
\]
Here $0 < \rho < 1$ is a parameter denoting the degree of pheromone residue [6], $\omega_{ij}$ is the cost for each ant to implement the goal, and $Q$ is a constant. The purpose of the pheromone correction is to allocate more pheromone to the preventive measures with smaller risk cost; the correction rule not only deposits pheromone but also evaporates it appropriately. The correction is achieved not by individual ants but through the measures implemented by all ants, which accumulate experience; it has the fast-response effect of a distributed long-term memory and gives good timeliness to the risk-response capability.

Algorithm Process.
Step 1. nc ← 0 (nc is the iteration counter). For each risk measure j of service project i, set τij ← c (c is a small positive constant) and Δτij ← 0.
Step 2. Place the m ants on the n risk measures; each ant k randomly selects a measure from {0, 1, 2, ..., n} to generate an initial solution, and the corresponding objective function value is calculated.
Step 3. Each ant k calculates the transition probability pij according to Eq. (7) to select a control measure, then calculates the corresponding objective function value according to Eq. (6). If the result is better than the current optimal value, it is assigned to the current optimal value.
Step 4. nc ← nc + 1. If nc has reached the specified number of iterations, go to Step 5; otherwise update the pheromone according to Eq. (9) and return to Step 3.
Step 5. Output the optimal objective function value and the combination of risk-control measures.
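Assuming invented toy data and the same crude approximation of the integral term as before, Steps 1–5 can be sketched as a standard ant-colony loop. This is not the authors' exact implementation: the heuristic follows the spirit of Eq. (8), evaporation follows Eq. (9), and the deposit uses Q/ω as in Eqs. (11) and (14).

```python
import math
import random

def objective(sol, V, q, c, l, a):
    # Objective (6) for one complete selection sol[i] = chosen measure j.
    f = 0.0
    for i, j in enumerate(sol):
        risk = (1 - q[i][j]) * l[i][j] * c[i][j]   # approx. integral term
        f += V[i][j] * q[i][j] - (1 - a) * c[i][j] - a * risk
    return f

def aco(V, q, c, l, a=0.3, ants=20, iters=150,
        alpha=1.0, beta=2.0, rho=0.7, Q=20.0, seed=1):
    rng = random.Random(seed)
    m = len(V)
    tau = [[1.0] * len(V[i]) for i in range(m)]      # Step 1: init pheromone
    best, best_f = None, -math.inf
    for _ in range(iters):                           # Step 4: iterate
        sols = []
        for _k in range(ants):                       # Steps 2-3: build solutions
            sol = []
            for i in range(m):
                # heuristic: measures with smaller control + risk cost look better
                eta = [1.0 / (1e-9 + (1 - a) * c[i][j]
                              + a * (1 - q[i][j]) * l[i][j] * c[i][j])
                       for j in range(len(V[i]))]
                w = [tau[i][j] ** alpha * eta[j] ** beta
                     for j in range(len(V[i]))]
                sol.append(rng.choices(range(len(w)), weights=w)[0])
            sols.append(sol)
            f = objective(sol, V, q, c, l, a)
            if f > best_f:
                best, best_f = sol[:], f
        for i in range(m):                           # evaporation, cf. (9)
            for j in range(len(tau[i])):
                tau[i][j] *= rho
        for sol in sols:                             # deposit, cf. (11) and (14)
            for i, j in enumerate(sol):
                omega = a * (1 - q[i][j]) * l[i][j] * c[i][j] + c[i][j]
                tau[i][j] += Q / omega
    return best, best_f                              # Step 5

# Invented toy instance: measure 0 dominates for both risk targets.
V = [[10, 10], [10, 10]]
q = [[0.9, 0.1], [0.9, 0.1]]
c = [[1.0, 5.0], [1.0, 5.0]]
l = [[1.0, 10.0], [1.0, 10.0]]
best, best_f = aco(V, q, c, l, iters=50)
```

With this dominated instance the loop settles on selecting measure 0 for both targets, the solution that maximizes objective (6).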
4 The Simulation Results and Analysis

The selection of parameters in the ant colony algorithm directly affects its efficiency; therefore, a reasonable parameter combination is determined through simulation analysis in order to obtain the optimal solution.
Assume that the project goals are the software maintenance cost and the later stability of the products, and that there are four risk factors and eight profit goals in the project. The number of processing measures for each risk (doing no processing also counts as a measure) is shown in Table 1, and the probability p of each risk factor under the different measures is set according to experience; without constraints the problem has 3456 alternative schemes [7].

Table 1. The number and probability of each risk target's measures

Risk target        | 1   | 2   | 3    | 4   | 5   | 6   | 7    | 8
Number of measures | 2   | 3   | 3    | 2   | 4   | 3   | 2    | 4
Probability weight | 0.5 | 0.4 | 0.25 | 0.3 | 0.6 | 0.4 | 0.35 | 0.55
To prevent the algorithm from converging to a local optimum, the number of iterations should be large, but 1000 primary iterations of nc would definitely increase the computation; through calculation the iteration count was set to 150. If the parameters α, β, ρ are set incorrectly, solving becomes very slow and the quality of the obtained solutions relatively poor. After extensive computation the values were set to α = 0.0004, β = 5, ρ = 0.7, Q = 20, ξ ∈ [0.0001, 0.0009], γ ∈ [0.0001, 0.0009], and the minimum calculated risk coefficient a = 0.3. The calculation results show that the ant colony algorithm has obvious advantages in both the optimal rate and the runtime, demonstrating its excellent performance.
5 Conclusion

This paper presents a security surveillance model based on the ant colony algorithm and implements and simulates the model. The simulation results show that the surveillance model can effectively guarantee the safety of software engineering implementation; in particular, it can effectively choose the best risk-treatment measures so that the project's risk value is minimized. The algorithm model used in this paper has two main advantages: (1) As the scale of the risks increases, the risk measures suffer a combinatorial explosion. Because of the essentially random nature of the ant colony search, it does not easily fall into local optima; at the same time, the fitness-based probabilistic evolution of the algorithm ensures fast and reasonable convergence, showing a dynamic characteristic. (2) The optimal measure is found through the cooperative search of many ants, and the selection processes of most of the ants are synergistic. These features are consistent with the many requirements of software services and Internet marketing, so using the ant colony algorithm for risk-management planning of online services is important. In the next step we will optimize the risk loss probability and run simulations in actual or near-actual environments in order to improve the effectiveness of the model.

Acknowledgment. The authors thank Professor Naikang Ge for valuable comments. This research is supported by the Scientific Research Common Program of Beijing Municipal Commission of Education (No. KM200610011008).
References
[1] Liu, H., Ge, N.: Information Engineering Surveillance. China Electric Power Press, Beijing (2009) (in Chinese)
[2] Deng, X., Liu, H.: The Study of the Application for Internet of Things Security Surveillance with Cloud Computing. In: The First Session of the National Technology and Applications of Things, pp. 97–101 (June 2010) (in Chinese)
[3] Gao, L., Liu, H.: The Study of HMM-based Information Engineering Surveillance and Quality Assessment Model. In: Sixth National Multi-Agent Systems and Control Conference, pp. 214–218 (October 30, 2010) (in Chinese)
[4] Gao, L., Liu, H.: The Application of Hidden Markov Model in Information Engineering Quality Surveillance. Microcomputer and its Applications, 176–180 (November 2010) (in Chinese)
[5] Wang, G., Li, X., Wang, J.: Online Services Risk Management with Ant Algorithm. Operations Research and Management Science 18(3), 52–56 (2009) (in Chinese)
[6] Zhu, K., Yang, J., Chai, Y.: Risk Bubbles Concept in Risk Analysis Modeling. J. Tsinghua Univ. (Sci. & Tech.) 44(10), 1372–1375 (2004)
[7] Huang, M., Wu, X., Wang, X.: Virtual Enterprise Risk Programming Based on Ant System under Electronic Commerce. Computer Integrated Manufacturing Systems 11(10), 1456–1460 (2005) (in Chinese)
Realization on Decimal Frequency Divider Based on FPGA and Quartus II

Hu XiaoPing and Lin YunFeng

Lishui University, Lishui, China
[email protected]
Abstract. Altera's Quartus II is a comprehensive development tool that integrates all the tools involved in Altera's FPGA/CPLD development flow together with interfaces to third-party software; using it, designers can create, organize, and manage their own designs. For the needle-selection control system of textile machinery, this paper proposes an FPGA-based design of an arbitrary-integer frequency divider with equal duty ratio. It briefly introduces the characteristics and application domain of FPGAs, then discusses a few traditional integer frequency dividers, and finally presents the frequency-division theory and circuit design of a dual-modulus preset decimal frequency divider based on an FPGA. The frequency divider is simulated with Quartus II. Keywords: FPGA, Quartus II, VHDL, circuit simulation.
1 Introduction

With the development of microelectronic design technology, digital integrated circuits have gradually developed from vacuum tubes, transistors, and small- and medium-scale integrated circuits into today's application-specific integrated circuits (ASICs) and very-large-scale integrated circuits (VLSI). The emergence of the ASIC reduces production costs, improves system reliability, reduces the physical size of designs, and promotes the digitalization of society. But the long design cycle of an ASIC, its revision and massive investment costs, and its poor flexibility restrict its wider application. Hardware engineers hoped for a more flexible design approach in which, if necessary, large-scale digital logic could be designed and changed in the laboratory, and a self-developed ASIC could immediately be put into use; this is the basic idea of programmable logic devices [1-4]. Programmable logic devices have made considerable progress with the development of microelectronic manufacturing processes: from simple programmable read-only memories, UV-erasable read-only memories, and electrically erasable read-only memories (E2PROM), to programmable array logic (PAL) and generic array logic (GAL), and now to the complex programmable logic devices (CPLDs) and field-programmable gate arrays (FPGAs) that can realize ultra-large-scale combinational and sequential logic. Driven by process technology development and market needs, a new generation of FPGA/CPLD with ultra-large scale, high speed, and low power consumption has appeared. The newest FPGAs even

M. Zhu (Ed.): ICCIC 2011, Part VI, CCIS 236, pp. 350–356, 2011. © Springer-Verlag Berlin Heidelberg 2011
integrate central processing unit (CPU) or digital signal processor (DSP) cores, so that hardware/software co-design can be done within a single FPGA, providing powerful hardware support [3] for the realization of a system on a programmable chip [5].
2 FPGA Hardware Design of the Experiment Platform

There are at present more than a dozen CPLD/FPGA manufacturers; the three with the highest market shares are Altera, Xilinx, and Lattice, of which Altera and Xilinx together hold more than 60%. Most of these companies have their own FPGA product families, such as Lattice's EC/ECP series, Altera's APEX, Cyclone, and Stratix series, and Xilinx's Spartan and Virtex series. The companies' product mixes differ, as do their design methods and scopes of application. Based on a comprehensive comparison of the devices, development software, and market conditions of the various vendors, this paper selects a chip with wide market application at a good price, Altera's Cyclone-series EP1C3T144C8, as the main chip of the experiment board [6]. It has 2910 logic elements, 13 internal RAM blocks, and a maximum of 104 available I/O pins. The chip is versatile and affordable, providing a suitable software and hardware platform for a learning environment. The FPGA configuration chip is an EPCS1 with a capacity of 1 Mbit, which retains its data when powered down. The download circuit supports both JTAG and AS download modes, enabling online debugging and configuration download [7]. The software platform uses Altera's Quartus II development environment.
3 Principle of the Dual-Modulus Preset Decimal Frequency Divider

There are many ways to realize decimal frequency division, but the basic principle is the same: in a special way, a few cycles are made to count one pulse more (or one fewer) within a number of division cycles, so that over the whole counting period the circuit divides by a decimal ratio in the average sense. Let the decimal division ratio be K; then K can be expressed as

\[ K = N + 10^{-n} X \]

where n, N, X are positive integers and n is the number of digits of X, i.e., K has n decimal places. On the other hand, the division ratio can be written as K = M / P, where M is the number of input pulses and P the number of output pulses, so that

\[ M = KP = (N + 10^{-n} X)\,P . \]

Setting P = 10^n then gives

\[ M = 10^{n} N + X . \]
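The decomposition K = N + 10^(-n)·X and the pulse count M = 10^n·N + X can be computed mechanically from the decimal ratio; the helper below (name ours) illustrates this for K = 7.8.

```python
def preset_counts(K_str):
    # Split a decimal ratio "N.X" into the quantities of the derivation:
    # K = N + 10**-n * X, with P = 10**n output pulses
    # and M = 10**n * N + X input pulses.
    whole, frac = K_str.split(".")
    N, n, X = int(whole), len(frac), int(frac)
    P = 10 ** n
    M = 10 ** n * N + X
    return N, n, X, P, M

N, n, X, P, M = preset_counts("7.8")   # N=7, n=1, X=8, P=10, M=78
```

For K = 7.8 this gives 78 input pulses per 10 output pulses, i.e. an average ratio of 7.8, matching M / P = K.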
The above suggests a way to realize a decimal divider: X additional input pulses are absorbed over the course of 10^n divisions.

Circuit Composition. Each cycle performs an overall division by N + 10^{-n}X. The dual-modulus preset decimal frequency divider consists of an N/(N+1) dual-modulus divider, a control counter, and control logic. When the control point is at level 1, a ÷N division is performed; when it is at level 0, a ÷(N+1) division is performed. With an appropriate design of the control logic, within one cycle of 10^n output pulses at f0 the ÷(N+1) division is carried out X times, so the input fi receives X(N+1) + (10^n − X)N = 10^n·N + X pulses. The principle is shown in Fig. 1.
Fig. 1. Decimal divider principle diagram
With in-depth study of PLD devices, the PROM and PLA AND-OR array structures exposed some flaws: in the former, the number of AND-array input signal lines grows as a power of 2 with the number of inputs; the latter has a complex manufacturing process and slow devices. These two AND-OR array device types have been all but eliminated, while the PAL (and GAL) structure, with its programmable AND array and fixed OR array, has certain technical advantages: it is easy to program and low in cost, and is the current mainstream of simple PLDs. Usually, in PAL and GAL products, the largest number of product terms is 8.

The biggest difference between PAL and GAL lies in the output structure. The output structure of a PAL is fixed and cannot be programmed: once the chip model is chosen, the output structure is chosen too. According to the different output and feedback structures, PAL devices include programmable input/output structures, registered outputs with feedback, exclusive-OR structures, special combinational output-feedback structures, and arithmetic strobe structures; there are some 20 PAL models. The output structure of a GAL is user-definable and flexibly programmable; GAL's two basic models, GAL16V8 and GAL20V8, can replace dozens of PAL devices, which is why they are called generic programmable logic devices. PAL and GAL also differ in that a PAL can be programmed only once, whereas a GAL using the E2CMOS process can be reprogrammed a hundred times or more, even thousands of times, so GAL has gained a wider range of applications than PAL. The main drawbacks of GAL are that the device density is not large enough and the pins are not numerous enough; for the design of large systems, an FPGA or CPLD must be used.
When VHDL is used for hardware circuit design, it has the following characteristics compared with the traditional circuit design method. The design generally starts from the system requirements and proceeds from top to bottom, gradually refining the design content and finally completing the overall hardware design of the system. In the design process, the system is divided into three top-down levels of design:
The first level is the behavioral description. The so-called behavioral description is essentially a mathematical-model description of the system. In general, the purpose of describing the system's behavior in the initial phase of system design is to detect design problems by simulating the system behavior. At the behavioral-description stage, the actual operations and algorithms used for implementation are not really considered; what matters is whether the system structure and working process reach the system design requirements.

The second level is the RTL description, i.e., the register-transfer-level description (also known as the data-flow description). As mentioned earlier, a program describing the system structure at the behavioral level is highly abstract, so it is difficult to map it directly onto a concrete logic-element structure. To obtain a hardware realization, the behavioral VHDL program must be rewritten as an RTL-style VHDL program; in other words, a system described in RTL style can export the logical expressions required for logic synthesis.

The third level is logic synthesis: using a logic-synthesis tool, the RTL description is converted into a file expressed in terms of basic logic components. At this point, if necessary, the result of logic synthesis can be output as a logic schematic. The synthesized result can then be simulated at the gate level to check its timing relationships. After the gate-level synthesis tool generates a netlist and converts it into PLD programming code, the PLD for the designed hardware circuit can be programmed. As can be seen from this top-down design process, from the behavioral design at the top to the final logic synthesis, simulation checks should be made at every step; this helps early detection of design problems and can greatly shorten the design cycle.
Since the tools of many current PLD chip manufacturers support the VHDL programming language, a designer using VHDL can design the hardware of a digital system as a personal ASIC on PLD chips according to his own needs, without being subject to the restrictions of common off-the-shelf components.

(1) Early simulation of the system. As the top-down design process shows, three simulations are conducted in the course of system design: behavioral-level simulation, RTL-level simulation, and gate-level simulation. This three-level simulation runs through the whole system design process, so design problems can be found early, greatly reducing the system design cycle and saving much manpower and material.

(2) Reduced difficulty of hardware design. In the traditional design approach, designers are often required to write the circuit's logical expressions or truth tables (or, for sequential circuits, state tables) before designing the circuit. This work is very difficult and complicated, especially when the system is relatively complex. Using the VHDL hardware description language frees designers from the pain of writing logical expressions or truth tables, which greatly reduces the design effort and also shortens the design cycle.
(3) The main design documents are the VHDL source programs. Compared with traditional circuit diagrams, VHDL source code has many advantages. First, the data volume is small and easy to store. Second, it is highly reusable: when designing other hardware, library files, processes, and procedures describing some local hardware can be reused. Third, it is easy to read: reading the program is easier than reading a circuit schematic, and the working principle and logic of a circuit can easily be seen from the program, whereas deducing the working principle from a circuit diagram requires much more hardware knowledge and experience.

The value accumulated at each division is 10 minus the fractional digit of the frequency ratio. If the cumulative result is less than 10, an (N+1)-division is carried out; when it reaches 10, an N-division is carried out. In this example the accumulated value is 10 − 8 = 2; the first four cumulative results are less than 10, so divisions by 8 are performed. The fifth cumulative result is 10, so the tens digit is removed, the result returns to 0, and a division by 7 is performed at the same time. The process of frequency division is shown in Table 1.

Table 1. The process of frequency division, frequency ratio 7.8

Serial number | Cumulative result | Division ratio
1 | 2  | 8
2 | 4  | 8
3 | 6  | 8
4 | 8  | 8
5 | 10 | 7
6 | 2  | 8
7 | 4  | 8
8 | 6  | 8
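The accumulate-and-overflow process of Table 1 can be simulated directly. The sketch below (helper name ours) reproduces the ÷8/÷7 pattern for N = 7, X = 8 and confirms the 7.8 average over one cycle of ten output pulses.

```python
def division_schedule(N, X, steps):
    # Accumulate (10 - X) per output cycle; on reaching 10, subtract 10
    # and divide by N, otherwise divide by N + 1.
    acc, ratios = 0, []
    for _ in range(steps):
        acc += 10 - X
        if acc >= 10:
            acc -= 10              # the "remove the tens digit" step
            ratios.append(N)
        else:
            ratios.append(N + 1)
    return ratios

r = division_schedule(7, 8, 10)    # one full cycle of 10 output pulses
# sum(r) == 78 input pulses, i.e. an average ratio of 7.8
```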
The 7/8 dual-modulus preset decimal frequency divider can be described in HDL as follows. (The listing is in Verilog syntax; the ÷7 and ÷8 divider modules, here named fd7bits and fd8bits, are omitted.)

module fd78bits(reset, clkin, a, clkout);
  input  reset, clkin, a;
  output clkout;
  reg    clkout;
  wire   clkout1, clkout2;

  // sub-dividers (module definitions omitted)
  fd7bits fd71(reset, clkin, clkout1);   // divide-by-7
  fd8bits fd81(reset, clkin, clkout2);   // divide-by-8

  // select the active modulus according to the control signal a
  always @(*)
    if (a == 1'b1)
      clkout = clkout1;
    else
      clkout = clkout2;
endmodule
4 Clock Source Module Design

Cyclone devices contain one or two built-in enhanced PLLs, with which high-performance clock-management schemes can be developed, such as frequency synthesis, programmable phase shift, off-chip clock output, programmable duty cycle, loss-of-lock detection, and high-speed differential clock signal input and output. By rational use of the internal PLLs, Cyclone devices can simplify board-level design and timing problems and provide a better timing-control scheme. The Cyclone-series FPGA contains internal PLL circuits; from an external standard 20 MHz clock, the different frequencies required for the system clock can be produced. Each PLL can provide three outputs of different frequencies. The PLL scales the frequency by a multiplication factor m and a division factor n (plus post-scale counters), where m, n, and the post-scale counters can be set to values from 1 to 32.

The Cyclone PLL can realize a time-division-multiplexing function for an application, so that a particular circuit can run multiple times within a single clock cycle. Through time-division multiplexing, the designer can realize the desired logic function with fewer resources; sharing resources in this way increases the resources available within the chip. Each PLL can drive a differential or single-ended off-chip clock output. Each PLL has a pair of off-chip clock output pins, and the output pins can support multiple I/O standards. The off-chip clock output can be used as the onboard system clock or for synchronizing various devices. The clock feedback feature can be used to compensate for delays or to align the phase of the output clock with the input clock.
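As a numerical illustration of the scaling counters, the generic PLL relation f_out = f_in·m / (n·post-scale) can be applied; the helper below is ours and assumes this generic relation rather than any specific Cyclone megafunction interface.

```python
def pll_output_mhz(f_in_mhz, m, n, post):
    # Generic PLL frequency synthesis: multiply by m, divide by n and
    # by the post-scale counter; each factor may range from 1 to 32.
    for v in (m, n, post):
        assert 1 <= v <= 32, "counter settings are limited to 1..32"
    return f_in_mhz * m / (n * post)

f = pll_output_mhz(20, 5, 2, 1)    # 20 MHz reference scaled to 50 MHz
```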
5 Summary

The FPGA experiment hardware platform selected Altera's Cyclone-series EP1C3T144C8 as the main chip, and the power supply module, clock module, key input module, seven-segment LED display module, and serial communication module were designed. In the Quartus II software development environment, sample programs designed in VHDL verified the functionality of each module. The frequency divider implemented on the chip is described in VHDL and synthesized and simulated with Quartus II. The FPGA-based simulation results testify that the output clock frequency can satisfy the signal detection of the absolute encoder, which guarantees the functional requirements of the control system. It is also confirmed that the method has many advantages, such as feasible implementation, convenient upgrading, stable quality, and high efficiency, and the application of the divider also lays a good foundation for further improvement of the knitting machine's performance.
Acknowledgment. The work is supported by the Zhejiang Provincial Natural Science Foundation of China (No. Y1080434).
References
1. Ma, Y., Wang, D.-L., Wang, L.-Y.: CPLD/FPGA Programmable Logic Devices and Practical Tutorial, pp. 10–12. Mechanical Industry Press, Beijing (2006)
2. Wang, C., Wu, J.-H., Fan, L.-Z., Xue, N., Xue, X.-G.: Altera FPGA/CPLD Design (Fundamentals), pp. 11–13. People's Posts & Telecom Press, Beijing (2005)
3. Wang, Z.: Programmable Logic Device Design Principles and Procedures, pp. 16–18. National Defense Industry Press, Beijing (2007)
4. Huang, Z.: FPGA System Design and Practice, pp. 15–19. Electronic Industry Press, Beijing (2005)
5. Xu, Z.-j., Xu, G.-h.: CPLD/FPGA Development and Application. China Electronics Press, Beijing (2002)
6. Zhang, J.-g., Wu, X.-g.: CPLD Design and Simulation Technology Used on the Electronic Jacquard Circular Knitting Machine. Journal of Wuhan University of Science & Engineering (2005)
7. Behr, H.: New weft knitting machine peripheral equipment developments. Melliand Textilberichte 82(9), 698–699 (2001)
Design of Quality Control System for Information Engineering Surveillance Based on Multi-agent

Hongzhi Liu 1, Li Gao 2, and GuiLin Xing 3

1,2 College of Computer and Information Engineering, Beijing Technology and Business University, 100048, Beijing, China
3 Beijing Municipal Engineering Consulting Corporation, 100031, Beijing, China
[email protected]
Abstract. Applying multi-agent technology to the field of Information Engineering Quality Surveillance (IEQS) will allow IEQS to adjust better to the diversity and polymorphism of the network environment and bring Information Engineering Surveillance (IES) into the intelligent era. Through research on applying multi-agent technology to quality control for IES, we can solve the problems of efficiency and business intelligence for IES. This paper first designs the functions of a Quality Control System (QCS) for IES and carries out the agent-oriented analysis and design of the system using the Gaia method. Moreover, the designed HMM is applied to the quality assessment module. Finally, we propose a plan to optimize the communication mode and strategy of the system model. The multi-agent-based quality control hierarchy designed in this paper provides, to some extent, guidance for the development of future multi-agent-based QCSs for IES. Keywords: Information Engineering Surveillance, Multi-Agent, Quality Control System, Gaia method.
1 Introduction
IEQS is a management process that demands high technology [1]. Supervisors are required to trace the whole process dynamically: establishing and maintaining documents in a timely fashion, retrieving surveillance data immediately, and summarizing surveillance results objectively. It has become increasingly difficult for traditional surveillance methods to accomplish this task; the introduction of computer-aided management has greatly improved working efficiency and allows the construction of information system engineering to be supervised better. Following the ISO/IEC 14598 standard, an assessment process model oriented to the software life cycle and conforming to engineering practice was defined on the basis of KPIs and a software quality assessment model; a software engineering quality measurement model based on fuzzy technology and factor neural networks was established; and an IPSS based on J2EE was built. In light of this, this paper introduces multi-agent technology: we complete the multi-agent modeling of the QCS and apply the HMM quality assessment model to the quality assessment module [2]. We improve the QCS in order to make software quality assessment intelligent. M. Zhu (Ed.): ICCIC 2011, Part VI, CCIS 236, pp. 357–363, 2011. © Springer-Verlag Berlin Heidelberg 2011
2 System Design of IPSS-MASQC
The design of the multi-agent QCS is based on the IPSS framework [3], with optimizations and improvements made in the quality control modules; we also apply the hidden Markov quality assessment model to the system. We describe the functions of the IPSS-MASQC system using a use case graph (see Fig. 1). A use case is a functional unit of the system and can be described as a sequence of interactions between participants and the system. UML use case graphs can be used for agent-oriented modeling, enabling a complete analysis and design of the system's functions [4].
[Figure: use case graph. Actors: Developer, Supervisor, Owner, Administrator, Agent. Use cases: Report Delivery, Project Information, Cooperate and Communicate, Quality Assessment, Login, Document Management, Search Schedule, Memo, System Maintaining.]
Fig. 1. Use case graph for IPSS-MASQC system
3 Agent-Oriented System Modeling
Gaia is one of the first methods designed exclusively for MAS. Its central idea is to treat the analysis and design of a MAS as the process of establishing a computing organization. With the Gaia method we establish the role model and interaction model in the analysis stage, and the agent model, service model and acquaintance model in the design stage [5]. Defining the role model usually takes three steps: define the system roles, analyze the relation model, and define the role model. After analysis we abstracted two roles: Supervisor Handler, responsible for connections between developers and supervisors, and Developer Handler, responsible for connections between supervisors and developers. Taking the role of Developer Handler as an example, its permissions and activities can be defined as follows:
Developer Handler
Permissions: reads quality plan; generates check information; ……; generates real quality description; generates exception notice.
Activities: sign to agree in the delivery report of the quality plan; compare with the quality plan.
After determining the system roles, we must also examine the interactive relations between every role and the others and establish the interaction model.
Analysis Stage. (1) Interaction Model. The interaction between the two roles in the system is shown in Fig. 2 and Fig. 3.
[Figure: Quality Plan Check Request protocol, Supervisor Handler → Developer Handler (request supervisors to check the quality plan; upload plan, check); Quality Plan Check Response protocol, Developer Handler → Supervisor Handler (accept supervisors' quality plan; quality plan check information).]
Fig. 2. Check quality plan
[Figure: Quality Exception Request protocol, Supervisor Handler → Developer Handler (request supervisors to trace quality; quality description); Quality Exception Response protocol, Developer Handler → Supervisor Handler (accept supervisors' quality exception notice; discover exception, exception notice).]
Fig. 3. Dispose quality exception
(2) Role Model. Taking Developer Handler as an example, the role model was defined according to the template, as shown in Fig. 4.
Role Model: Developer Handler
Description: contact with developers
Protocols and Activities: Quality Plan Check Response, Quality Exception Response, Sign to agree, Compare with quality plan
Permissions: read quality plan, generate check information, generate real quality description, generate exception notice
Responsibilities:
  Liveness: (Quality Plan Check Response, Sign to agree) || (Compare with quality plan, Quality Exception Response)
  Safety: True
Fig. 4. Role model of Developer Handler
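A Gaia role schema is essentially a structured record, and mirroring it in code can be useful for tooling. The sketch below is only an illustration (Gaia prescribes no implementation; the class and field names are ours), encoding the Developer Handler schema of Fig. 4 in Python:

```python
from dataclasses import dataclass, field

@dataclass
class RoleSchema:
    """A Gaia role schema: protocols/activities, permissions, responsibilities."""
    name: str
    description: str
    protocols_and_activities: list = field(default_factory=list)
    permissions: list = field(default_factory=list)
    liveness: str = ""      # liveness expression; '||' denotes interleaving
    safety: str = "True"    # safety condition

developer_handler = RoleSchema(
    name="Developer Handler",
    description="Contact with developers",
    protocols_and_activities=[
        "Quality Plan Check Response", "Quality Exception Response",
        "Sign to agree", "Compare with quality plan",
    ],
    permissions=[
        "Read quality plan", "Generate check information",
        "Generate real quality description", "Generate exception notice",
    ],
    liveness="(Quality Plan Check Response, Sign to agree) || "
             "(Compare with quality plan, Quality Exception Response)",
)
```

Such a record could later be checked against the agent model to verify that every role is mapped to some agent type.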
Design Stage. A classical design process converts the models produced in the analysis stage into models at a lower level of abstraction that can be implemented easily. Agent-oriented design focuses on how the agent society cooperates and on what each agent must handle in order to achieve the system goal. (1) Agent Model. The Gaia agent model describes the basic agent types of the system. In design, we first define the basic agent types, then elaborate them and optimize the system structure. According to the mapping between roles and entities, we divided the agents of the QCS into three categories:
① Internal Interface Agent: manages system interactions between developers and supervisors.
② Application Form Agent: manages the submission and reading of all application forms, supervising the upload and download process.
③ Quality Agent: manages the quality tracing process; responsible for comparing against the quality plan to find quality exceptions.
The relationship between the three kinds of agents is shown in Fig. 5.
[Figure: on both the supervisor side and the developer side, an Internal Interface Agent interacts with an Application Form Agent and a Quality Agent.]
Fig. 5. Structure of multi-agent model
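As a rough sketch of the three agent categories and their message exchange (the class and method names are ours; the paper defines no implementation at this level):

```python
class Agent:
    """Minimal agent base: agents exchange messages through an inbox."""
    def __init__(self, name):
        self.name = name
        self.inbox = []

    def send(self, other, performative, content):
        # deliver a message record to the other agent's inbox
        other.inbox.append({"from": self.name,
                            "performative": performative,
                            "content": content})

class InternalInterfaceAgent(Agent):
    """Manages interactions between developers and supervisors."""

class ApplicationFormAgent(Agent):
    """Manages submission/reading of application forms (upload/download)."""

class QualityAgent(Agent):
    """Traces quality and compares results against the quality plan."""
    def check(self, planned, actual):
        # a quality exception occurs when actual quality deviates from the plan
        return "exception" if actual != planned else "ok"

supervisor_side = InternalInterfaceAgent("supervisor-interface")
form_agent = ApplicationFormAgent("form")
quality_agent = QualityAgent("quality")
supervisor_side.send(form_agent, "request", "upload quality plan")
```

In a real deployment each category would be subdivided by function and load, as described below.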
Every kind of agent is further divided according to function and load, forming agents responsible for each function. For example, the quality agent can be divided into a quality trace agent, a quality description agent and a quality exception agent; the application form agent can be divided into a quality plan check agent, a quality plan upload agent and a quality plan download agent. (2) Service Model. The Gaia service model determines the service, namely the function, that each agent can provide. In particular, every activity in the analysis stage corresponds to a service, while a service need not correspond to an activity. Each agent service is described by four attributes: input, output, precondition and postcondition. Input and output are easily obtained from the protocol model; precondition and postcondition represent the constraints on each service and can be obtained from the role's safety attribute. Taking the quality trace service of the internal interface agent as an example, the service model is shown in Fig. 6.
Agent Type: Internal Interface Agent
Service: Quality Trace
Input: Application Information
Output: Quality Check Information
Precondition: True
Postcondition: True
Fig. 6. Service description of quality trace
(3) Acquaintance Model. The acquaintance model is the simplest of the Gaia design models. It defines the communication links between agents, but defines neither the content and format of the interaction messages in detail nor when messages are sent or received. The main purpose of the acquaintance model is to find potential communication bottlenecks when the system is run, ensuring that the system is loosely coupled.
4 Optimization of Communication Mode and Strategy
In multi-agent systems the communication mode generally falls into two categories: blackboard mode and message passing mode [6]. We plan to adopt a mix of the two modes in the QCS. Information open to owners, developers and supervisors is put on the blackboard for everyone to consult, with low reliability; information confidential to some parties is put at a predetermined point where only authorized members may consult it and issue opinions, while other members may not participate, with high reliability; and some top-secret, real-time information needs a private line, a form of synchronized transmission, with
the highest reliability. After analyzing multi-agent communication strategies, we plan to design the agent communication model of the QCS based on KQML [7] and to optimize the communication and cooperation between agents with an ant colony algorithm. The ant colony model [8] is as follows. Suppose the colony contains m ants at time t, and let d_ij (i, j = 1, 2, …, n) be the length of the communication path between agent i and agent j. We adopt an undirected graph G(N, E), i.e. d_ij = d_ji, where N is the set of agents and E is the set of edges between communicating agents. Let A_i(t) be the number of ants located on agent i at time t, so that
\[
m = \sum_{i=1}^{n} A_i(t),
\]
and let τ_ij(t) be the amount of pheromone left on edge e(i, j) at time t. Initially the amount of pheromone on every path is equal: τ_ij(0) = C (C a constant, usually C = 0). Ant k (k = 1, 2, …, m) determines its moving direction according to the amount of pheromone on each path. The probability that ant k moves from i to j is
\[
p_{ij}^{k}(t) =
\begin{cases}
\dfrac{(\tau_{ij}(t))^{\alpha}(\eta_{ij})^{\beta}}{\sum_{s \in allowed_k}(\tau_{is}(t))^{\alpha}(\eta_{is})^{\beta}}, & j \in allowed_k \\[2mm]
0, & \text{otherwise}
\end{cases}
\tag{1}
\]
where allowed_k = {N − tabu_k} is the set of agents ant k may choose next, tabu_k is ant k's taboo list and tabu_k(s) is its s-th element; η_ij, determined in detail by some heuristic algorithm, represents the expected desirability of moving from i to j. When η_ij > 0, the ant moves from neighbor i to j with probability p_ij; when η_ij ≤ 0, the ant performs a neighborhood search of radius r around i. The parameters α and β control the relative weight of pheromone intensity and visibility, i.e. the accumulated information of the ants' movement and the heuristic factor in path selection; the transition probability thus balances visibility against pheromone intensity at time t. We plan to implement the system with JADE, a software framework developed entirely in Java that complies with the FIPA specifications; this middleware greatly simplifies the development of multi-agent systems.
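Formula (1) translates directly into code. The sketch below is a simplified illustration (pheromone and heuristic values are made up, and we assume η_ij > 0 throughout so the neighborhood-search branch is not needed):

```python
def transition_probs(tau, eta, i, allowed, alpha=1.0, beta=2.0):
    """Transition probabilities p_ij^k from node i, per formula (1).

    tau, eta: dicts mapping edge (i, j) -> pheromone amount / heuristic value.
    allowed:  agents ant k may still visit (N minus its taboo list).
    """
    weights = {j: (tau[(i, j)] ** alpha) * (eta[(i, j)] ** beta)
               for j in allowed}
    total = sum(weights.values())
    return {j: w / total for j, w in weights.items()}

# Toy example: an ant at agent 0 may move to agent 1 or agent 2.
tau = {(0, 1): 1.0, (0, 2): 1.0}   # equal initial pheromone on both edges
eta = {(0, 1): 2.0, (0, 2): 1.0}   # agent 1 looks twice as desirable
probs = transition_probs(tau, eta, 0, allowed=[1, 2])
# with alpha=1, beta=2 the weights are 4 and 1, so probs == {1: 0.8, 2: 0.2}
```

A full optimizer would additionally update τ after each tour (evaporation plus deposited pheromone), which formula (1) does not cover.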
5 Conclusions
In this paper we have achieved tentative results for multi-agent-based IEQS in both theoretical study and system design, which lays both the theoretical and
practical foundation for further development of a QCS for IES and makes a useful exploration toward establishing a multi-agent-based IES system. Applying multi-agent technology to the field of IES is a new undertaking worldwide; the related surveillance information systems are therefore still rudimentary. The system designed in this paper is still a rough design; in the next stage we will improve the design and finish the implementation. Acknowledgment. The authors thank Professor Naikang Ge for valuable comments. This research is supported by the Scientific Research Common Program of Beijing Municipal Commission of Education (No. KM200610011008).
References
1. Liu, H., Ge, N.: Information Engineering Surveillance. China Electric Power Press, Beijing (2009) (in Chinese)
2. Gao, L., Liu, H.: Research to Quality Assessment Model for Information Engineering Surveillance Based on HMM. In: The Sixth Academic Annual Conference of National Multi-Agent System and Control, pp. 258–261. National Defense Industry Press (2010) (in Chinese)
3. Yang, R., Liu, H.: Research to Surveillance Assessment of Designing Stage for the Process of E-Government. Computer Engineering and Applications 45, 177–179, 182 (2009) (in Chinese)
4. You, X., Shuai, D., Liu, S.: Design and Research of an Agent-Oriented Modeling Language Based on Extended UML. Mini-Micro Systems 27(3) (March 2006) (in Chinese)
5. Wang, Y.: Research on Intelligence E-Business System Based on Multi-Agent. Wuhan University of Technology, Wuhan (2005) (in Chinese)
6. Mina, R., Yoshiteru, N.: Agent-Based Approach to Complex Systems Modeling. European Journal of Operational Research 166(3), 717–725 (2005)
7. Tweedale, J., Ichalkaranje, N., Sioutis, C., Jarvis, B., Consoli, A., Phillips-Wren, G.: Innovations in Multi-Agent Systems. Journal of Network and Computer Applications 30(3), 1089–1115 (2007)
8. Yan, J., Li, W., Liu, M.: Task Allocation for MAS Based on Hybrid Ant Colony Algorithm. Application Research of Computers 26(1) (January 2009) (in Chinese)
A Study about Incentive Contract of Insurance Agent

Hu Yuxia1,2

1 Economy and Finance Institute of HuaQiao University, Fujian, China, 362000
2 LiMing Vocational College, Fujian, China, 362000
[email protected]
Abstract. Insurance agents have played a very important part in the development of the insurance industry, but the appearance of bad behaviour restricts its continued development. If we want to lead insurance agents to behave as expected, it is necessary to study the contract. This article uses the methods of game theory and information economics to analyse the incentive contract and, based on the analysis results, gives some reasonable and feasible advice to insurers for making a better contract. Keywords: Insurance agent, incentive contract, game theory and information economics.
1 Introduction
An insurance agent is an organization or individual who handles business for an insurer and receives a brokerage fee based on the amount of premium collected, under the insurer's delegation. In this article the term "insurance agent" refers specifically to individual insurance agents. Throughout history, insurance agents have played a very important part in the development of the insurance industry, especially in developing its market and business. However, everything has two sides: with the development of the insurance industry, the disadvantageous aspects of agents' behaviour have appeared, for example faking insurance policies (the XIECHENG WEB incident of faked air transportation risk policies), defaulting on insurance premiums, pocketing or diverting premiums, refunding premiums outside the regulations, bribing applicants, and misleading or deceiving applicants. These bad behaviours have caused many applicants to distrust insurers and to avoid taking out a policy even when they need one. In addition, the retention ratio of insurance agents is quite low, which results in poor service quality. All such problems hinder the healthy development of the insurance industry. "Behaviour is decided by the system": if we want to change the behaviour of insurance agents, we must make an in-depth study of the insurance agent system. Generally, the system mainly comprises an incentive system and a regulatory system; because the insurance industry in China developed later than in other countries and its laws and regulations are imperfect, what affects behaviour most is the incentive system. In our country, insurers incent agents mainly through commission: the agent is paid a proportion of the insurance
M. Zhu (Ed.): ICCIC 2011, Part VI, CCIS 236, pp. 364–369, 2011. © Springer-Verlag Berlin Heidelberg 2011
premium by the insurer, so the incentive system has effectively been replaced by the commission system, and the commission rate affects the agent's behaviour to a large extent. For the obvious reason, a higher commission can prompt the agent to work harder, the insurer's gross earnings increase with the agent's hard work, and the insurance market develops effectively and healthily. However, the insurance market is a typical information-asymmetry market: while the agent works, the insurer cannot supervise him effectively, and the appearance of moral hazard causes the agent's behaviour to deviate from the insurer's expectations. High commission alone cannot put an end to bad behaviour, and more and more problems have emerged. Therefore, the current agent incentive system needs to be improved and the contract with the agent optimized. We use the methods of game theory and information economics to study the incentive contract, in order to provide theoretical support for adjusting it.
2 The Foundation and Analysis of the Incentive Contract
Let A be the set of actions the insurance agent can choose, and a ∈ A a particular action. In general the agent has many kinds of actions, so a should be a multidimensional variable, but for convenience we suppose the agent's action is one-dimensional, with two values: a₁ (working normally) and a₂ (working with irregular behaviour). Let θ ∈ Θ be a random variable the agent cannot control (the "state of nature"). The joint effect of the exogenous variable θ and the endogenous variable a generates an observable outcome (premium) π(a, θ), with $\pi \in [\underline{\pi}, \overline{\pi}]$. If the agent works normally (a = a₁), the distribution and density functions of π are F₁(π) and f₁(π); if the agent works with irregular behaviour (a = a₂), they are F₂(π) and f₂(π). Suppose π is strictly increasing and concave in a: given θ, the harder the agent works, the higher the outcome, but with decreasing marginal returns. In particular, when the agent works normally the insurer obtains a premium π equal to the actual premium the agent received from the applicant, π = π_actual; when the agent works with irregular behaviour, even though the insurer obtains the premium, he suffers a loss d in long-term premium (because the irregular behaviour compromises the insurer's reputation and reduces future business volume). For convenience, suppose π = π_actual − d: the outcome π equals the actual premium minus the reputation loss d. The distribution functions of π in this article therefore satisfy first-order stochastic dominance: for the insurer, for all $\pi \in [\underline{\pi}, \overline{\pi}]$,
F₁(π) ≤ F₂(π), so the probability of a high outcome is greater when the agent works normally than when he works irregularly. In addition, π is strictly increasing and concave in θ: given the agent's action, the larger θ is, the higher the outcome, because favourable external circumstances make the public accept the insurance product more easily. Under the usual agent system in the Chinese insurance market, after making the contract with the agent, the insurer gains the outcome π(a, θ) through the agent's work and must pay the agent a commission s(π) (in China, commission is mainly decided by the actual premium); we take the insurer's expected utility function as v(π − s(π)). The agent, after signing the contract, chooses his action a, expends effort c(a), collects the actual premium π and delivers it to the insurer, then receives his commission s(π); his expected utility function is u(s(π) − c(a)). We suppose both insurer and agent are risk-averse rational agents, so v′ > 0, v″ < 0; u′ > 0, u″ < 0; c′ > 0, c″ > 0. The insurer wants the agent to work as normally as possible (a = a₁) and to eliminate or reduce irregular behaviour, so as to increase π and ultimately v. For equal π, c(a₁) > c(a₂): to collect the same premium, the agent must exert more effort working normally than working irregularly. If the agent wants to minimize his effort, he will reduce c(a₁) and increase c(a₂) where possible while still gaining a high s(π). The agent's purpose therefore differs from the insurer's: if the insurer wants the agent to work as expected, he must choose an appropriate incentive scheme s(π) to maximize his own utility. The contract must satisfy two constraints: IR (individual rationality) and IC (incentive compatibility). Using the parameterized distribution formulation given by Mirrlees (1976) and Holmstrom (1979), we represent the problem as follows:
\[
\max_{s(\pi)} \int v(\pi - s(\pi)) f_1(\pi)\,d\pi
\]
s.t.
\[
(\mathrm{IR})\quad \int u(s(\pi)) f_1(\pi)\,d\pi - c(a_1) \ge \bar{u}
\]
\[
(\mathrm{IC})\quad \int u(s(\pi)) f_1(\pi)\,d\pi - c(a_1) \ge \int u(s(\pi)) f_2(\pi)\,d\pi - c(a_2)
\]
The Lagrangian is
\[
L(s(\pi)) = \int v(\pi - s(\pi)) f_1(\pi)\,d\pi + \lambda\left[\int u(s(\pi)) f_1(\pi)\,d\pi - c(a_1) - \bar{u}\right] + \mu\left[\int u(s(\pi)) f_1(\pi)\,d\pi - c(a_1) - \int u(s(\pi)) f_2(\pi)\,d\pi + c(a_2)\right]
\]
The first-order condition of the optimization is
\[
-v'f_1(\pi) + \lambda u'f_1(\pi) + \mu u'f_1(\pi) - \mu u'f_2(\pi) = 0
\]
Then:
\[
\frac{v'(\pi - s(\pi))}{u'(s(\pi))} = \lambda + \mu\left[1 - \frac{f_2(\pi)}{f_1(\pi)}\right] \tag{1}
\]
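To see what formula (1) implies, consider a purely illustrative special case not taken from the paper: logarithmic utilities v(x) = u(x) = ln x, so v′(π − s)/u′(s) = s/(π − s). Setting this ratio equal to R(π) = λ + μ[1 − f₂(π)/f₁(π)] gives s(π) = πR(π)/(1 + R(π)), so the commission is lower exactly where the likelihood ratio f₂/f₁ is high (the λ and μ values below are assumed):

```python
def commission(pi, lr, lam=1.0, mu=0.5):
    """Pointwise s(pi) from formula (1) with log utilities (illustrative only).

    lr: likelihood ratio f2(pi)/f1(pi); lam, mu: Lagrange multipliers (assumed).
    With v = u = ln, v'(pi - s)/u'(s) = s/(pi - s) = R, hence s = pi*R/(1 + R).
    """
    r = lam + mu * (1.0 - lr)
    return pi * r / (1.0 + r)

# For the same outcome pi = 100, a high likelihood ratio (the outcome is more
# plausibly produced by irregular work) yields a lower commission:
s_suspicious = commission(100.0, lr=1.5)  # f2/f1 > 1 -> commission reduced
s_trusted = commission(100.0, lr=0.5)     # f2/f1 < 1 -> commission raised
```

This is only a sketch of the comparative statics; in the actual model λ and μ are determined endogenously by the IR and IC constraints.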
Holmstrom (1979) proved that μ > 0 under information asymmetry, because μ = 0 would destroy the incentive compatibility constraint; the agent must therefore bear some risk. Let s_λ(π) denote the optimal risk-sharing contract obtained when v′(π − s(π))/u′(s(π)) = λ, and s(π) the incentive contract obtained when v′(π − s(π))/u′(s(π)) = λ + μ[1 − f₂(π)/f₁(π)]. Then:
\[
s(\pi) \le s_\lambda(\pi) \quad \text{if } f_2(\pi)/f_1(\pi) \ge 1,
\]
\[
s(\pi) > s_\lambda(\pi) \quad \text{if } f_2(\pi)/f_1(\pi) < 1.
\]
That is, for a given outcome π: if the agent is more likely to reach π by irregular work than by normal work, the insurer adjusts the commission rate downward to reduce the agent's income in this state, since the agent more probably worked irregularly; conversely, if the agent is less likely to reach π by irregular work than by normal work, the insurer adjusts the commission rate upward to increase the agent's income in this state. In effect, let γ be the insurer's prior probability that the agent works normally. The insurer updates his posterior probability
\[
\tilde{\gamma}(\pi) = \frac{f_1(\pi)\gamma}{f_1(\pi)\gamma + f_2(\pi)(1 - \gamma)}
\]
from the outcome π he observes, and uses the likelihood ratio f₂(π)/f₁(π) to adjust the agent's income. As the principal, the insurer should therefore adjust the commission rate according to the agent's output; always raising the commission rate cannot make the agent eliminate bad behaviour completely. This conclusion indicates that the incentive system currently adopted by insurance companies should be adjusted. If we further suppose that the insurer can observe not only the outcome π but also other variables x connected with the agent's action
without any cost (for example, the insurance market supervising the agent's credit by recording all transaction information), then writing the credit variable x into the contract will reduce the agent's risk cost and lead the agent to work normally or even harder.
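The insurer's belief update described above is ordinary Bayes' rule on the two work modes; a small numeric sketch (the density values and prior below are made up):

```python
def posterior_normal(pi_f1, pi_f2, gamma):
    """Posterior probability that the agent worked normally, given the outcome
    density values f1(pi), f2(pi) and the prior gamma (Bayes' rule)."""
    return pi_f1 * gamma / (pi_f1 * gamma + pi_f2 * (1.0 - gamma))

# An outcome far more likely under normal work (f1 > f2) raises the insurer's
# belief above the prior.
prior = 0.6
post = posterior_normal(pi_f1=0.8, pi_f2=0.2, gamma=prior)
```

The likelihood ratio f₂(π)/f₁(π) that appears in formula (1) is exactly the quantity driving this update: the larger it is, the lower the posterior and the lower the commission.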
Let f_i(π, x) (i = 1, 2) denote the joint density function of π and x. If π and x are both written into the contract, the insurer's problem is to choose s(π, x) to maximize his utility function:
\[
\max_{s(\pi,x)} \iint v(\pi - s(\pi,x)) f_1(\pi,x)\,dx\,d\pi
\]
s.t.
\[
(\mathrm{IR})\quad \iint u(s(\pi,x)) f_1(\pi,x)\,dx\,d\pi - c(a_1) \ge \bar{u}
\]
\[
(\mathrm{IC})\quad \iint u(s(\pi,x)) f_1(\pi,x)\,dx\,d\pi - c(a_1) \ge \iint u(s(\pi,x)) f_2(\pi,x)\,dx\,d\pi - c(a_2)
\]
The Lagrangian is
\[
L(s(\pi,x)) = \iint v(\pi - s(\pi,x)) f_1(\pi,x)\,dx\,d\pi + \lambda\left[\iint u(s(\pi,x)) f_1(\pi,x)\,dx\,d\pi - c(a_1) - \bar{u}\right] + \mu\left[\iint u(s(\pi,x)) f_1(\pi,x)\,dx\,d\pi - c(a_1) - \iint u(s(\pi,x)) f_2(\pi,x)\,dx\,d\pi + c(a_2)\right]
\]
The first-order condition of the optimization is
\[
-v'f_1(\pi,x) + \lambda u'f_1(\pi,x) + \mu u'f_1(\pi,x) - \mu u'f_2(\pi,x) = 0,
\]
hence
\[
\frac{v'(\pi - s(\pi,x))}{u'(s(\pi,x))} = \lambda + \mu\left[1 - \frac{f_2(\pi,x)}{f_1(\pi,x)}\right] \tag{2}
\]
Obviously, it is meaningful to add the credit variable x to the contract only when
\[
\frac{f_2(\pi,x)}{f_1(\pi,x)} \ne \frac{f_2(\pi)}{f_1(\pi)}.
\]
Suppose f_i(π, x) satisfies first-order stochastic dominance and the monotone likelihood ratio condition. This means that if the agent has chosen a = a₁, low π and low x are less likely to appear together than low π alone; conversely, if the agent has chosen a = a₂, high π and high x are less likely to appear together than high π alone. A risk-averse agent, afraid of being punished by mistake, will therefore choose to work normally rather than irregularly in order to reduce his risk cost. As is known to all, supervising the agent's behaviour is very meaningful, but the cost of supervision is very high; the condition that "the other variables x connected with the agent's action are observable without any cost" cannot be met. So, under imperfect laws and institutions, it is pointless to write the variable into the contract, because it is not credible to the agent.

3 Conclusion
Through the analysis of the incentive contract of the insurance agent, we can draw the following conclusions. First, a single incentive such as commission
cannot eliminate the agent's bad behaviours. Second, a multivariate incentive may decrease some bad behaviours to some extent: for example, supervising the agent's credit and giving corresponding rewards or penalties according to that credit, alongside the commission incentive, can indeed decrease bad behaviours. Third, it is meaningful to supervise the agent's behaviour effectively, but this must rest on a perfect legal and institutional basis; otherwise the threats are just talk and can neither effectively restrain nor correctly guide the agent's behaviour. Combining these conclusions with the actual condition of the insurance industry in our country can give insurers useful reference when they design incentive contracts for agents; consequently, it can incent agents to work as the insurer expects as far as possible, and it can help the insurance industry develop healthily.
References
1. Zhang, W.Y.: Game Theory and Information Economics. SUP Bookstore, Peoples Publishing House of Shanghai, Shanghai (1996)
2. Liu, D.J.: Multi-dimensional examination and comprehensive management of the credit of insurance intermediary agent 3 (2006)
3. David Cummins, J., Doherty, N.A.: The economics of insurance intermediaries. Journal of Risk and Insurance (2010)
4. Kojima, K., Okura, M.: Effort Allocation of Insurance Agent under Asymmetric Information: An Analytical Approach. Journal of Risk and Insurance (2008)
Scientific Research Management/Evaluation/Decision Platform for CEPB

Zhang Shen1, Liu Zhongjing2, and Wang Hui-li3

1 Shang-qiu Electric Power Company of HeNan, China
2 Zhengzhou Electric Power College, China
3 San-men-xia Electric Power Company of HeNan, China
[email protected]
Abstract. Aiming at the problems in the scientific research management of the County Electric Power Bureau (CEPB), such as low management efficiency and a working process that ignores the relevance among its various details, which cause repetitive work and make objective evaluation of the research level impossible, this paper puts forward a CEPB scientific research management/evaluation/decision platform system based on an integrated SSH and Ajax framework. The paper then introduces the design of the system structure and function modules, and selects the key technologies to realize the system. The system can effectively improve the efficiency and standardization of scientific research management, realize life-cycle management of scientific research work, and better serve researchers in power energy. Keywords: CEPB scientific research, R&D, SSH, Ajax, Management/Evaluation/Decision Platform.
1 Introduction
At present, the research management departments of domestic electric power companies have good hardware and network conditions, and the management mode of the Chinese electric power system is intensive; the county electric power bureau (CEPB) sits at the end of the large power-system management hierarchy of the electric power company. Although very good scientific research management software exists, within that software the CEPB acts merely as a data acquisition terminal for declaring various research data. The scientific research management of the CEPB itself relies on manual work, or on Excel, Word and other office software, to handle large amounts of data. Such means can hardly keep track of the latest research timely and effectively, increase the workload, and provide no decision-making basis for the CEPB's research direction and development needs. Therefore, it is necessary to develop an integrated county-bureau research management, evaluation and decision platform covering technical reform, trademarks, researcher capacity assessment and research project management. M. Zhu (Ed.): ICCIC 2011, Part VI, CCIS 236, pp. 370–375, 2011. © Springer-Verlag Berlin Heidelberg 2011
2 System Requirements
A power-system scientific research project passes through roughly six stages: originality and argumentation, setting up, design, development, concluding, and popularization.
Project Originality and Argumentation. For the CEPB, project originality and argumentation mostly arise on site. There is no shortage of careful technical personnel among the creative power workers; they observe many on-site accidents and phenomena, are concerned with solving problems quickly, and can come up with many good ideas, which form a good background for research projects and technical reform. It is therefore necessary to provide an interface that makes it convenient for these personnel to write background material in a standard format, to let researchers run novelty searches through this platform, and to make it convenient to write technical analyses, system analyses and feasibility reports.
Setting Up. Many people are interested in technical-reform and research projects, but as the research project management unit, the CEPB must, with limited resources, ensure project quality, schedule and cost while reducing the risk of project failure. It is therefore necessary to assess researchers' capability with a view to successfully declaring provincial-company research projects, decide the team members accordingly, and have these personnel consult and collect data and documents, determine the research content, design the project research plan and technical route, and perform a feasibility analysis to decide whether to set up or declare the project. The platform needs to provide researcher capacity assessment, costing tools and prompts.
Once the provincial company approves a project, the county bureau's project management must go through the project startup, planning, execution, monitoring, and final acceptance processes. During this period the platform should offer document formats fully consistent with those of the State Grid Corporation, giving priority to compatibility with its research software.

Design and Development. The design phase mainly completes the planning, including research task decomposition, resource allocation, progress estimation, and project budgeting. The platform should have a certain intelligence, issuing error reminders for the schedule and the project budget. Carrying out the project according to plan is an important guarantee of success; especially for a large project, proceeding without a plan is unthinkable, and even when the plan undergoes major adjustment, the project should advance according to plan as far as possible. The platform should therefore provide reminder functions for both research personnel and research administrators. Once the project enters the implementation stage, people, tools, knowledge, and technology are fully deployed to meet the development requirements. During this period various work may be needed, such as organizational-structure adjustment, personnel assignment, transmission of design information, plan control and adjustment, and communication upward, downward, and sideways, so the platform should give cues to the relevant personnel. This is the stage where results are easiest to see, but many unforeseeable factors exist, so it is also the hardest stage to control.
Z. Shen, L. Zhongjing, and W. Hui-li
Concluding. In this stage the research summary, technical summary, work summary, and user manuals need to be written. With appraisal in mind, this stage also requires writing papers based on the experimental data and the observations made during the research, and applying for intellectual property rights or patents. Hardware devices must be submitted to an inspection authority; the resulting inspection reports, together with the project approval documents, are submitted for identification or acceptance, and the files go to users or experts to evaluate whether the project meets its expected goals. The platform should provide the corresponding templates.

Popularized Phase. The popularization phase promotes completed projects further within power companies, and certain funds are provided for it. The platform is therefore required to allow completed projects to be uploaded and published online, which has a certain advertising effect and facilitates the promotion of the programs.
3 System Architecture Design

In view of the above requirements, we designed the county power bureau research management and assessment system with a B/S architecture. It satisfies the need for networked office work, frees research information management from restrictions of time, place, and software environment, and also allows information bulletins to be published online. The system network structure is shown in Fig. 1. The system provides an open research information platform for county power bureau workers and the research personnel who give technical support, and offers the county power bureau a convenient means of research management.
Fig. 1. System network structure
This system applies JavaEE technology, integrating the SSH and Ajax frameworks. The system structure is shown in Fig. 2. Ajax sits in the presentation layer and, together with CSS and HTML, plays the role of the view in the MVC framework. The scientific research
Fig. 2. Integrated SSH and Ajax framework
management evaluation decision system of the CEPB implements its back-end functions with the SSH framework; the front end interacts with the user through Ajax, forwarding the user's Ajax requests to the server and displaying the server's responses in JSP pages. To realize the above design reliably and effectively, the system is developed with the Struts2 + Spring3 + Hibernate3 (SSH) frameworks. Struts2 is an MVC-based framework with the WebWork design philosophy at its core; it absorbs some strengths of Struts 1 and is highly scalable. Spring is an open-source framework created by Rod Johnson. Spring uses plain JavaBeans to accomplish things that previously could only be done with EJB, providing a lightweight solution for enterprise application development. Hibernate is an open-source object-relational mapping framework; it is a very lightweight object encapsulation of JDBC, letting Java programmers manipulate the database freely with object-oriented thinking. To reduce the complexity of the system design and to give the system good extensibility and maintainability, the research management system adopts a three-layer structure. This design is realized by the SSH frameworks as a JavaEE three-layer structure in which Hibernate serves as the persistence layer, used to persist data; Spring is the business-logic layer, managing the components (including DAOs, business logic, and Struts Actions) through beans; and Struts implements the presentation layer, controlling page navigation. To improve the user experience, RIA (Rich Internet Application) technology is used; RIA improves the interactivity of Web applications, offering a richer, more interactive, and more responsive user experience. The rich-client technology adopted in this system is mainly Ajax. With Ajax there is no client to download; software compatibility is better, and it integrates seamlessly with HTML.
The Ajax framework better solves web-application development problems, enhances the flexibility of the user experience, and increases customer satisfaction, resolving the deficiencies of the Struts and Spring frameworks.
4 System Function Module Design

The purpose of the CEPB's R&D management/evaluation/decision-making platform is to make R&D management informatized and standardized, effectively improve R&D management efficiency, realize whole-process management of R&D work, and better serve R&D personnel and the overall interests of the company. The system includes pre-project management, contract management, project management, fund management, results management, file and material management, R&D personnel assessment, and research-direction decision making, highly consistent with existing research management content and meeting current and future R&D management needs. Combining the unit's current R&D management situation and its development, the function modules of the system were determined, as Fig. 3 shows.
Fig. 3. The function module of CEPB’s R&D platform
5 Summary

The CEPB's R&D management/evaluation/decision-making platform uses a software architecture based on the mature and stable SSH frameworks, which guarantees the stability and extensibility of the system and makes interface operation more convenient and user-friendly. The platform has real practical and popularization value: it can effectively regulate R&D management and improve its efficiency. Above all, it frees the R&D project management department from repetitive tidying work so that it can focus on managing the projects themselves, thus strengthening technical innovation and promoting the transformation of scientific and technological achievements into real productivity.
References

1. Niu, X.-j., Guo, C.-k.: University scientific research management information system design and implementation. Journal of Agronomy (2007)
2. Zou, X.-f.: The important effect of university scientific research management and ways of innovating scientific research management. Journal of Chinese University Technology and Industrialization (2006)
3. The management of electric power research works in Japan, http://www.6lib.com
4. Ceng, L.J.: Application of the MVC pattern in a university scientific research file management information system. Submitted to Science and Technology Management Research (2010)
5. Ni, J.X.: Design and implementation of a research management system based on the Struts + Hibernate framework. Submitted to Computer Knowledge and Technology (2009)
Necessary and Sufficient Condition of Optimal Control to Stochastic Population System with FBM

RenJie and Qimin Zhang

School of Mathematics and Computer Science, Ningxia University, China
[email protected]
Abstract. In this paper, a class of stochastic population systems with fractional Brownian motion (FBM) is introduced. Necessary and sufficient conditions for optimal control of the stochastic population system are provided. The analysis uses Itô's formula and the stochastic maximum principle.

Keywords: Stochastic population system, stochastic maximum principle, necessary and sufficient conditions.
1 Introduction

Lately a great deal of attention has been given to systems with stochastic multiplicative noise, because modeling uncertainties with this kind of formulation has found many applications in engineering, finance, population models, immunology, etc. For example, Abel Cadenillas considers a stochastic control problem with linear dynamics with jumps, and provides both necessary and sufficient conditions of optimality [1]. The paper [2] studies the stochastic maximum principle in singular optimal control and gives necessary conditions for singular stochastic control problems with non-smooth data. It is well known that control problems for population systems are of great significance for species conservation, exploitation and management of renewable resources, and epidemic intervention. As far as age-structured population systems are concerned, researchers have developed an almost complete theory for the optimal control of deterministic population systems [3-5]. We consider the stochastic population system
d_t p = [ −∂p/∂r + kΔp − μ(r,t,x)p + f(r,t,x) ] dt + g(r,t,x) dB^H(t) + ∫_E h(r,t,x,η) N(dη,dt),   in Q = (0,A) × (0,T) × Ω,
p(0,t,x) = ∫_0^A β(r,t,x) p(r,t,x) dr,   in Q_T = (0,T) × Ω,
p(r,0,x) = p_0(r,x),   in Q_A = (0,A) × Ω,
p(r,t,x) = 0,   in Σ = (0,A) × (0,T) × ∂Ω.   (1)

M. Zhu (Ed.): ICCIC 2011, Part VI, CCIS 236, pp. 376–382, 2011. © Springer-Verlag Berlin Heidelberg 2011
where d_t p = ∂p/∂t, t ∈ (0,T), r ∈ (0,A), x ∈ Ω ⊂ ℝⁿ (1 ≤ n ≤ 3); p(r,t,x) denotes the population density of age r at time t and location x; β(r,t,x) denotes the fertility rate of females of age r at time t and spatial position x; μ(r,t,x) denotes the mortality rate of females of age r at time t and location x; Δ denotes the Laplace operator with respect to the space variable; and k > 0 is the diffusion coefficient. The terms f(r,t,x) dt + g(r,t,x) dB^H(t) + ∫_E h(r,t,x,η) N(dη,dt) denote the effects of the external environment on the population system, such as emigration, earthquakes, and so on. These effects have deterministic and random parts that depend on r, t, x, and p, with Hurst index H ∈ (0, 1/2]. It is more realistic to include the effects of stochastic environmental noise in age-structured population systems. For H = 1/2, Zhang gave several results for a stochastic age-structured population system with diffusion [6-8]. However, papers on the optimal control of stochastic population systems are few. Our work differs from [6-8] in that we put both fractional Brownian motion (H ∈ (0, 1/2]) and a Poisson process into the age-structured population system together. The stochastic model is an extension of Zhang [9]. In Section 2 we begin with some preliminary results and state the problem studied in this paper. In Section 3 we prove the necessary and sufficient condition of optimal control for the stochastic population system.
2 Preliminaries

Let (Ω, F, (F_t), P) be a probability space with a P-completed right-continuous filtration (F_t). Let M be a stationary (F_t)-Poisson point process on a fixed nonempty subset E of ℝ¹. We denote by m(dη) the characteristic measure of M and by N(dη,dt) the counting measure induced by M. We assume that m(E) < ∞. We then define Ñ(dη,dt) := N(dη,dt) − m(dη)dt, and note that Ñ is a Poisson martingale measure with characteristic m(dη)dt. Let
f(t,p,v) = A_t p + B_t v + C_t,   g(t,p,v) = D_t p + E_t v + F_t,   h(t,p,v,η) = G_t(η)p + H_t(η)v + I_t(η).

To ensure that the above stochastic differential equation makes sense, we shall consider only those (F_t)-predictable control processes v : [0,T] × Ω → U ⊂ ℝ^k that satisfy

P{ ∫_0^T |B_t v_t| dt < ∞,  ∫_0^T |E_t v_t| dt < ∞,  ∫_0^T ∫_E |H_t(η) v_t| m(dη) dt < ∞ } = 1.   (2)
Definition 2.1. The multiparameter fractional Brownian motion (MpfBm) is defined as the centered Gaussian process B^H = {B^H(t); t ∈ ℝ_+^N} such that

∀s,t ∈ ℝ_+^N:  E[B^H(s) B^H(t)] = (1/2)[ b([0,s])^{2H} + b([0,t])^{2H} − b([0,s] Δ [0,t])^{2H} ],

where b is a measure on ℝ^N and H ∈ (0, 1/2] is called the index of similarity.
Problem 2.1. We want to study the following control problem:

J(v) = (1/2) ∫_Ω |p(r,t,x;v) − z_d(r,t,x)|² dr dx + ρ‖v‖².   (3)

That is, we want to select a control v̂ ∈ U that minimizes the criterion J, where p(r,t,x;v) is the solution of (1) and z_d(r,t,x) is the expected state of the stochastic population system. Let

L(r,t,x,p,v) = ∫_Ω |p(v) − z_d|² dx + ρ ∫_Ω |v|² dx.   (4)
Definition 2.2. Let V be a fixed, nonempty, convex subset of ℝⁿ. For any x ∈ V, we shall denote by U(x) the class of control processes v : [0,T] × Ω → U which are (F_t)-predictable, satisfy condition (2), and are such that the corresponding trajectory p^v of (1) satisfies P{∀t ∈ [0,T] : p^v ∈ V} = 1. They will be called admissible controls. Whenever x is fixed, we shall denote U(x) by U, without danger of confusion.
Definition 2.3. A triple (q, s, w) of stochastic processes q : [0,T] × Ω → ℝⁿ, s : [0,T] × Ω → ζ(ℝ^d, ℝⁿ), and w : [0,T] × Ω × E → ℝⁿ is a solution of the adjoint equation if q is adapted, s and w(η) are predictable, and they satisfy

dq = [ −( −∂q/∂r + kΔq − μq + Aq ) + L_p − Ds − ∫_E G(η) w(η) m(dη) ] dt + s dB^H(t) + ∫_E w(η) Ñ(dη,dt),   in Q,
q(A,t,x) = 0,   in Ω_T,
q(r,T,x) = 0,   in Ω_A.   (5)
Definition 2.4. The Hamiltonian is the function H defined by

H(t,q,λ,y,v) = −L + ⟨q, ∂p/∂r + kΔp − μp + f⟩ + ⟨q, g⟩ + ∫_E ⟨w(η), h(η)⟩ m(dη).

Lemma 2.1. J is Gâteaux-differentiable, with differential given by

⟨J′(u), v⟩ = E[ ∫_0^T { ⟨p^v, L_p⟩ + ⟨v, L_v⟩ } dt + ⟨q, p^v⟩ ].   (6)
3 The Necessary and Sufficient Condition of Optimality

Applying Itô's formula, we get

⟨q, p^v⟩ − ⟨q_0, p_0^v⟩ = ∫_0^t [ ⟨p^v, L_p − Aq − Ds − ∫_E G(η)w(η) m(dη)⟩ + ⟨f, q⟩ ] dy
  + ∫_0^t [ ⟨p^v, s⟩ + ⟨g, q⟩ ] dB^H(y) + ∫_0^t ∫_E [ ⟨p^v, w(η)⟩ + ⟨h, q⟩ ] Ñ(dη, dy)
  + H ∫_0^t y^{2H−1} ⟨g, s⟩ dy + ∫_0^t ∫_E ⟨w(η), h⟩ m(dη) dy.

The above equation may be written as

R^v = ⟨q_0, p_0^v⟩ + ∫_0^t ⟨q, C⟩ dy + H ∫_0^t y^{2H−1} ⟨s, F⟩ dy + ∫_0^t ∫_E ⟨w(η), I(η)⟩ m(dη) dy + Q^v,   (7)

where we denote, for every
v ∈ U and t ∈ [0,T],

Q^v := ∫_0^t [ ⟨q, Dp^v + Ev + F⟩ + ⟨s, p^v⟩ ] dB^H(y) + ∫_0^t ∫_E [ ⟨p^v, w(η)⟩ + ⟨q, G(η)p + H(η)v + I(η)⟩ ] Ñ(dη, dy),   (8)

R^v := ⟨q, p^v⟩ − ∫_0^t [ ⟨p^v, L_p − Aq − Ds⟩ + ⟨Ap + Bv, q⟩ ] dy + H ∫_0^t y^{2H−1} ⟨s, Dp + Ev⟩ dy + ∫_0^t ∫_E ⟨w(η), G(η)p + H(η)v⟩ m(dη) dy.   (9)

We have to consider the following two cases:

Case 1: ∀v ∈ U, E(R_t^{v̂}) ≤ E(R_t^v);
Case 2: ∀v ∈ U, E(R_t^{v̂}) ≥ E(R_t^v).
Let us consider the function H : [0,T] × Ω × U → ℝ defined by

H(t,w,v) = L(t,p,v) − ⟨q(w), B(w)v⟩ − ⟨s(w), E(w)v⟩ − ∫_E ⟨w(η), H(η)v⟩ m(dη).
Theorem 3.1. If Case 1 holds, then a necessary condition for a control v̂ to be optimal for Problem 2.1 is that for every v ∈ U:

E[ ∫_0^T ⟨H_v(t, w, v̂_t(w)), v_t(w) − v̂_t(w)⟩ dt ] ≥ 0.   (10)

On the other hand, if Case 2 holds, then inequality (10) is a sufficient condition for the control v̂ to be optimal.

Proof. The optimal control problem consists of minimizing J(v) over v ∈ U, where J is a Gâteaux-differentiable convex functional with derivative given by (6). Hence, according to Proposition 2.2.1 of [9], a necessary and sufficient condition for v̂ to be a solution of Problem 2.1 is that for every v ∈ U: ⟨J′(v̂), v − v̂⟩ ≥ 0. Thus, v̂ is an optimal control if and only if for every v ∈ U

⟨J′(v̂), v − v̂⟩ = E[ ∫_0^T { ⟨p^v − p^{v̂}, L_p⟩ + ⟨v_t − v̂_t, L_v⟩ } dt + ⟨p^v − p^{v̂}, q⟩ ] ≥ 0.   (11)
In Case 1, we see that for every v ∈ U

E[ ∫_0^T ⟨H_v(t, w, v̂_t(w)), v_t(w) − v̂_t(w)⟩ dt ]
= E[ ∫_0^T { ⟨L_v, v_t − v̂_t⟩ + ⟨q, B(v_t − v̂_t)⟩ + ⟨s, E(v_t − v̂_t)⟩ + ∫_E ⟨w(η), H(η)(v_t − v̂_t)⟩ m(dη) } dt ]
= E[ ∫_0^T { ⟨L_v, v_t − v̂_t⟩ + ⟨q, B(v_t − v̂_t)⟩ + ⟨s, E(v_t − v̂_t)⟩ + ∫_E ⟨w(η), H(η)(v_t − v̂_t)⟩ m(dη) } dt ]
  − E[ ∫_0^T { ⟨p^v − p^{v̂}, L_p⟩ + ⟨v_t − v̂_t, L_v⟩ } dt + ⟨p^v − p^{v̂}, q⟩ ]
  + E[ ∫_0^T { ⟨p^v − p^{v̂}, L_p⟩ + ⟨v_t − v̂_t, L_v⟩ } dt + ⟨p^v − p^{v̂}, q⟩ ].

Thus, in Case 1, and in conjunction with (9), a necessary condition for a control v̂ to be optimal is that for every v ∈ U

E[ ∫_0^T ⟨H_v(t, w, v̂_t(w)), v_t(w) − v̂_t(w)⟩ dt ] ≥ E[ ∫_0^T { ⟨p^v − p^{v̂}, L_p⟩ + ⟨v_t − v̂_t, L_v⟩ } dt + ⟨p^v − p^{v̂}, q⟩ ],

which is equivalent to (10).
On the other hand, in Case 2, for every v ∈ U

E[ ∫_0^T ⟨H_v(t, w, v̂_t(w)), v_t(w) − v̂_t(w)⟩ dt ]
= E[ ∫_0^T { ⟨L_v, v_t − v̂_t⟩ + ⟨q, B(v_t − v̂_t)⟩ + ⟨s, E(v_t − v̂_t)⟩ + ∫_E ⟨w(η), H(η)(v_t − v̂_t)⟩ m(dη) } dt ]
= E[ ∫_0^T { ⟨L_v, v_t − v̂_t⟩ + ⟨q, B(v_t − v̂_t)⟩ + ⟨s, E(v_t − v̂_t)⟩ + ∫_E ⟨w(η), H(η)(v_t − v̂_t)⟩ m(dη) } dt ]
  − E[ ∫_0^T { ⟨p^v − p^{v̂}, L_p⟩ + ⟨v_t − v̂_t, L_v⟩ } dt + ⟨p^v − p^{v̂}, q⟩ ]
  + E[ ∫_0^T { ⟨p^v − p^{v̂}, L_p⟩ + ⟨v_t − v̂_t, L_v⟩ } dt + ⟨p^v − p^{v̂}, q⟩ ].

Thus, in Case 2, and in conjunction with (9), a sufficient condition for a control v̂ to be optimal is that for every v ∈ U

E[ ∫_0^T ⟨H_v(t, w, v̂_t(w)), v_t(w) − v̂_t(w)⟩ dt ] ≤ E[ ∫_0^T { ⟨p^v − p^{v̂}, L_p⟩ + ⟨v_t − v̂_t, L_v⟩ } dt + ⟨p^v − p^{v̂}, q⟩ ].

Hence, in Case 2, a sufficient condition for a control v̂ to be optimal is that (10) holds for every v ∈ U. ∎
4 Conclusions

We put both fractional Brownian motion (FBM) and a Poisson process into the age-structured population system and discussed the optimal control of the stochastic population system. Applying the stochastic maximum principle and Itô's formula, we provide both necessary and sufficient conditions of optimal control for the stochastic population system.
References

1. Cadenillas, A.: A stochastic maximum principle for systems with jumps, with applications to finance. Systems and Control Letters 47, 433–444 (2002)
2. Bahlali, K., Chighoub, F., Djehiche, B., Mezerdi, B.: Optimality necessary conditions in singular stochastic control problems with nonsmooth data. J. Math. Anal. Appl. 355, 479–494 (2009)
3. Renee Fister, K., Lenhart, S.: Optimal control of a competitive system with age-structure. J. Math. Anal. Appl. 291, 526–537 (2004)
4. Li, J.: Existence of the optimal distributed control for a class of non-linear population diffusion systems. Acta Mathematicae Applicatae Sinica 4, 15–23 (2005)
5. He, Z., Hong, S., Zhang, C.: Double control problems of age-distributed population dynamics. Nonlinear Analysis: Real World Applications 10, 3112–3121 (2009)
6. Zhang, Q., Han, C.: Convergence of numerical solutions to stochastic age-structured population system with diffusion. Applied Mathematics and Computation 07, 156 (2006)
7. Zhang, Q.: Exponential stability of numerical solutions to a stochastic age-structured population system with diffusion. Journal of Computational and Applied Mathematics 220, 22–33 (2008)
8. Zhang, Q.: Existence and uniqueness for a stochastic age-structured population system with diffusion. Applied Mathematical Modelling 32(2), 2197–2206 (2008)
9. Ekeland, I., Temam, R.: Convex Analysis and Variational Problems. North-Holland, American Elsevier, Amsterdam, New York (1976)
The Research on Newly Improved Bound Semi-supervised Support Vector Machine Learning Algorithm

Xue Deqian

Huzhou Teachers College, Huzhou, Zhejiang, China
[email protected]

Abstract. SVM is a pattern recognition method developed on the basis of the structural risk minimization principle of statistical learning theory; from limited sample information it seeks the best compromise between model complexity and generalization ability. As a supervised learning method, standard SVM classification follows the principle of supervised learning algorithms: learn a rule from a limited number of labeled samples and extend the rule to unknown, unlabeled samples.

Keywords: Semi-supervised support vector machine, learning algorithm, branch, bound.
1 Branch and Bound Semi-supervised Support Vector Machines

The main idea of the BBS3VM algorithm is shown by the branch and bound tree in Figure 1. First, the labeled training sample set is initialized and its objective function value is obtained (the root). The original problem is then divided: an unlabeled sample is selected and given each of the different labels in turn, and the corresponding objective function values are obtained (the child nodes). As the traversal deepens, more unlabeled samples are labeled and the labeled sample set gradually expands, until all unlabeled samples have been labeled (the leaf nodes). During the search, the loss contributed by each newly labeled sample increases the value of the objective function at the node. The initial labeled sample set is the root; the fully labeled sample set is a leaf node [1-4].
Fig. 1. Branch and Bound Tree

M. Zhu (Ed.): ICCIC 2011, Part VI, CCIS 236, pp. 383–390, 2011. © Springer-Verlag Berlin Heidelberg 2011
There are two key issues the BBS3VM algorithm needs to address: (1) the definition of the node lower bound; (2) the choice of which unlabeled sample to branch on.
2 Construction of the Branch and Bound Tree

The improved branch and bound semi-supervised support vector machine learning algorithm labels the unlabeled samples step by step while constructing the branch and bound tree, setting upper and lower bounds and branching strategies in the process to achieve the branching and pruning of the tree. It adopts a strategy similar to divide and conquer: all label vectors satisfying the constraints form the feasible region, and the upper and lower bounds are used as constraints to exclude large non-optimal areas of the feasible region, finally yielding a global optimal solution [5].

First, the meanings of the variables involved in the improved algorithm. The branch and bound tree takes the form of a binary tree, each node of which is expressed as a triple: the labeled sample set, the set of samples not yet labeled, and the lower bound of the node. (The upper bound of the branch and bound tree is defined in detail in the next section.) The two children of a node mark the selected sample with the two different labels. When the unlabeled set is empty, the node is a leaf node and all samples have their labels [6-7]; the organization is shown in Figure 1.
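As a concrete illustration, the node triple described above can be sketched as a small data structure (a hypothetical sketch; the field and method names are ours, not from the paper):

```python
from dataclasses import dataclass

@dataclass
class Node:
    """One node of the branch and bound tree: a partial labeling.

    labeled:     dict mapping sample index -> assigned label (+1 / -1)
    unlabeled:   indices of samples not yet labeled
    lower_bound: lower bound of the objective on this branch
    """
    labeled: dict
    unlabeled: list
    lower_bound: float = 0.0

    def is_leaf(self):
        # A leaf node has no unlabeled samples left.
        return len(self.unlabeled) == 0

    def branch(self, idx):
        """Create the two children that mark sample `idx` with +1 and -1."""
        rest = [i for i in self.unlabeled if i != idx]
        return (Node({**self.labeled, idx: +1}, rest, self.lower_bound),
                Node({**self.labeled, idx: -1}, rest, self.lower_bound))

# The initial labeled sample set is the root; branching labels sample 2.
root = Node(labeled={0: +1, 1: -1}, unlabeled=[2, 3])
pos_child, neg_child = root.branch(2)
```

Each branching step shrinks the unlabeled set by one, so a path from the root to a leaf assigns every label exactly once.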
3 Bound Rules

In the improved branch and bound semi-supervised support vector machine learning algorithm, the upper and lower bounds characterize the conditions for branching and pruning and play a crucial role in the speed of the global search. Their definitions are given below.

Define the Upper Bound. The upper bound in the IBBS3VM algorithm is the minimum objective function value of the current semi-supervised support vector machine, that is, the current best feasible solution; it defines the condition under which a subtree is pruned. In the global search over the branch and bound tree, only at leaf nodes, where all unlabeled samples have been assigned labels, is the value of the objective function (2.3) a candidate optimal value [8-11]. The upper bound of the branch and bound tree is defined as the current minimum over the k leaf nodes. At initialization the upper bound can be defined as infinity; of course, if there is enough prior knowledge to define a more reasonable upper bound, a tighter upper bound accelerates the pruning process to a certain extent [12]. As the global search deepens and the number of leaf nodes increases, the upper bound of the tree is continually updated until the search is complete and the global optimum is obtained.

Define the Lower Bound. For the branch and bound algorithm it is difficult to find a lower bound of the original optimization problem directly, but this can be implemented by continually refining the lower bound of each node of the branch and bound tree.
Define the lower bound of a node as a lower bound on the objective function at that node, namely:

lb = inf F_i(w, b, y^u).   (1)
According to the principle of the branch and bound tree, the original optimization problem can be transformed into sub-optimization problems on the feasible regions (i.e., the branches). On the basis of its dual form we construct:

D(α, y^u) = Σ_{i=1}^n α_i − (1/2) Σ_{i,j=1}^n α_i α_j y_i y_j K(x_i, x_j) − (1/2)( Σ_{i,j=1}^l α_i α_j δ_ij / (2C) + Σ_{p,q=l+1}^n α_p α_q δ_pq / (2C*) ),   (2)

with α_i ≥ 0, i = 1, …, n. To facilitate the description, formula (2) is called the pseudo-dual function. A proof that it gives a reasonable lower bound for the branch and bound algorithm is given below.

Theorem 1. At each node of the branch and bound tree, the value of the pseudo-dual function is a lower-bound estimate of the original optimization problem on that branch.

Proof: For any α consistent with the constraints,

D(α, y^u) ≤ max_α W(α),   (3)

and, for the dual problem, by the duality theorem,

max_α W(α) = min_{w,b} F(w, b).   (4)

According to the principle of branch and bound,

min_{w,b} F(w, b, y^u) = I(w, b, y^U),   (5)

so D(α, y^u) can be used as the lower bound of the original optimization problem. This completes the proof.

Obviously, for a leaf node all samples have labels and there is no lower-bound problem. The IBBS3VM algorithm uses an indicator vector for the current state of each node, recording its bounding state and its visiting state. When the bounding operation has been performed on a non-leaf node, its bounding indicator is set to 1; when the branching operation has been performed on a node, or the pruning rules apply so that it need not be decomposed further, its visiting indicator is set to 1, indicating that the node has been visited [13-14].
That can be used as the lower bound of the original optimization problem Proved. Obviously, for all the samples of leaf nodes have labels, there is no lower bound of the problem, this time there. Algorithm used in IBBS3VM an indicator vector that the current state of node,, node, respectively, and bound, visit the state. When a non-leaf node after the implementation of delimitation of operation, the delimitation of the node status indicator is set to 1; When the implementation of the branch operation node, or the application of the pruning rules without further decomposition can be The access node status indicator is set to 1, indicating that the node has been visited[13-14].
386
X. Deqian
4 Branching Method Pruning branch and bound algorithm based on the principle of operation generated by the branch node pruning criteria are met without further decomposition, the following definitions IBBS3VM pruning criteria are as follows: Pruning criteria 1: Optimize the test, the lower bound of the current node is greater than branch and bound tree industry. Pruning Guideline 2: Constraint test, the node no label on the sample set or set of semi-labeled samples do not meet the balance constraint. Pruning Rule 3: feasibility test, the sub-problem solving is completed, all the samples have a label, leaf node is reached. When a node has a lower bound and bound pruning conditions do not meet the needs of the branch nodes are operating. There are two branches during the key issues to consider: (1) the selection of sub-nodes strategy: no tags which should be selected as the next sample labeled samples; (2) labeling strategy for child nodes of the selected sample of the non-label label which label. In this paper, similar to the center distance ratio method to determine the label-free label samples on the current samples marked the highest reliability, to reduce the search process to modify the number of branches.
5 Sample Definition of Credibility In the two classification problems, the definition of unlabeled samples to the distance from the center positive class and negative class distance from the center to the ratio of the unlabeled samples on the working class credibility, namely:
μ+ ( xi ) =
D( xi , a ) D( xi , b)
(6)
Which are currently the center of class and negative class, the traditional Euclidean distance can not fully reflect the spatial distribution of the data, this paper documents [2] proposed to adjust the length of the line density had no label for the sample to the class centers distance, that is, the scaling factor for the data and the Euclidean distance. Such definitions can be adjusted by scaling factor to enlarge or reduce the line between two points on the Euclidean distance. No tags for all samples were calculated on the credibility of positive class, and the results are sorted, sort the results based on samples selected mark without labels, selection and labeling rules are as follows: When, then the label; Otherwise, be labeled.
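The center-distance-ratio credibility of Eq. (6) can be sketched as follows (plain Euclidean distances; the line-density scaling-factor adjustment of [2] is omitted here):

```python
import math

def center(points):
    """Component-wise mean of a list of equal-length vectors."""
    n = len(points)
    return [sum(p[k] for p in points) / n for k in range(len(points[0]))]

def dist(u, v):
    """Euclidean distance between two vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def credibility(x, pos_samples, neg_samples):
    """mu+(x) = D(x, a) / D(x, b), with a/b the positive/negative centers.

    A small ratio means x lies close to the positive-class center, so
    labeling it positive is highly reliable.
    """
    a, b = center(pos_samples), center(neg_samples)
    return dist(x, a) / dist(x, b)

pos = [[0.0, 0.0], [1.0, 0.0]]
neg = [[10.0, 0.0], [11.0, 0.0]]
# A point near the positive cluster gets a small ratio.
mu = credibility([0.5, 0.0], pos, neg)
```

Sorting unlabeled samples by this ratio (and by its reciprocal for the negative class) yields the "most credible" sample picked at each branching step.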
6 Algorithm Description

The IBBS3VM algorithm is described as follows. Premise: the initial labeled data set and the unlabeled sample set are given; the goal of the algorithm is to find the semi-supervised support vector machine classifier and the label vector of the unlabeled sample set. Initialization: train on the labeled sample set and find the upper and lower bounds. The algorithm then proceeds as follows:

Step 1: calculate the credibility of the unlabeled samples, select the sample with the highest credibility for branching, mark it with each category to generate the child nodes, and perform push(node_j), push(node_j+1);
Step 2: pop(node_j+1) and compute the lower bound of the node; if the lower bound is still below the current upper bound, return to Step 1; otherwise go to Step 3;
Step 3: judge by the pruning criteria, and prune the nodes meeting criteria 1 and 2;
Step 4: when a leaf node is reached, compute and update the upper bound of the tree.

Repeat steps (1)-(4) until the stack is empty, then output the result. As the depth-first search of the binary tree deepens, new loss terms keep entering the expression of the objective function, and each leaf node reached yields a candidate optimal value. The IBBS3VM algorithm performs a depth-first search of the branch and bound tree, using a stack as its data structure. The main advantages of this approach are: (1) among all the options it is the most space-saving; (2) the stack structure effectively retains the information of the last bounding operation: when a sub-problem is not eliminated, most of the information generated by the last bounding operation is kept, directly accelerating the bounding of the next sub-problem.
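The stack-based, depth-first loop of Steps 1–4 can be sketched as follows (the SVM objective and its lower bound are abstracted into caller-supplied functions; all names are illustrative, not from the paper):

```python
def branch_and_bound(labeled, unlabeled, lower_bound_fn, objective_fn, pick_fn):
    """Depth-first branch and bound over labelings of `unlabeled`.

    labeled:        dict index -> label for the initial labeled set
    lower_bound_fn: (labeling) -> lower bound of the objective on that branch
    objective_fn:   (complete labeling) -> objective value at a leaf
    pick_fn:        (labeling, remaining) -> most credible index to branch on
    Returns (best objective value, best complete labeling).
    """
    best_val, best_labeling = float("inf"), None   # upper bound = +infinity
    stack = [(dict(labeled), list(unlabeled))]
    while stack:
        lab, rest = stack.pop()
        if lower_bound_fn(lab) >= best_val:        # pruning criterion 1
            continue
        if not rest:                               # leaf: update the upper bound
            val = objective_fn(lab)
            if val < best_val:
                best_val, best_labeling = val, lab
            continue
        idx = pick_fn(lab, rest)                   # highest-credibility sample
        rest2 = [i for i in rest if i != idx]
        for y in (+1, -1):                         # push both children
            stack.append(({**lab, idx: y}, rest2))
    return best_val, best_labeling
```

With a trivial zero lower bound this degenerates to exhaustive search; the benefit comes from a bound tight enough to trigger the pruning test early.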
7 Simulation and Results Analysis

To validate the effectiveness of the proposed IBBS3VM algorithm, simulation experiments were carried out on five sample sets. The results are analyzed from three aspects: classification accuracy, training time, and parameter sensitivity.

Data Sets. Table 1 shows the characteristics of the experimental data sets. The g22c samples were generated by two normal distribution functions, with 2 labeled samples (1 per class). The g33c samples were generated by three normal distribution functions, with 10% of the samples of each class labeled and the remaining 90% unlabeled. The 2moons data set is a benchmark for semi-supervised learning algorithms; its two labeled samples are fixed, and the other, unlabeled samples are generated randomly in each experiment. The Text sample set is the mac and windows subset of newsgroups20 provided by Summer et al.; only the training part was selected after preprocessing, giving 780 samples in the two subsets, each sample with 7511 dimensions, of which 50 samples are labeled. COIL3, provided by Nene et al. [1], consists of the gray values of three targets photographed from different angles; two images of each target were randomly selected as labeled.
X. Deqian

Table 1. Description of the experimental data sets

Data set    Classes    Dims    Points    Labeled
g22c           2          2      500        2
g33c           3          3      311       28
2moons         2          3      512        2
Text           2       7534      781       45
COIL3          3       1013      221        5
Classification Accuracy. The Gaussian kernel function was chosen. For ease of comparison, all algorithms use the same parameters on the same data set, and classification performance is tested with 5-fold cross-validation. In addition, since only the performance of semi-supervised learning algorithms is compared here, the balance-constraint parameter is determined from the actual labels of the unlabeled sample set. Table 2 compares the accuracy of the algorithms.

Table 2. Comparison of classification precision (%)
Data set    ▽S3VM    CCCP    S3VMlight    BBS3VM    IBBS3VM
g22c         99.1     98.2      98.4        100        100
2moons       45.5     38.7      34.8        100        100
g33c         54.3     48.7      66.2        100        100
Text         95.7     95.5      92.4        100        100
COIL3        38.9     54.3      44.7        98.6       97.1
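The evaluation protocol behind Table 2 (5-fold cross-validation with shared parameters) can be sketched in pure Python. The nearest-mean classifier below is only a stand-in, since the S3VM variants compared in the paper have no standard library implementation:

```python
# Minimal 5-fold cross-validation loop, sketching the protocol used for
# Table 2. The nearest-mean classifier is a placeholder model.
def k_fold_indices(n, k=5):
    """Yield (train_idx, test_idx) pairs for k contiguous folds."""
    fold = n // k
    idx = list(range(n))
    for i in range(k):
        test = idx[i * fold:(i + 1) * fold] if i < k - 1 else idx[i * fold:]
        train = [j for j in idx if j not in set(test)]
        yield train, test

def cross_val_accuracy(fit, predict, X, y, k=5):
    accs = []
    for train, test in k_fold_indices(len(X), k):
        model = fit([X[i] for i in train], [y[i] for i in train])
        correct = sum(predict(model, X[i]) == y[i] for i in test)
        accs.append(correct / len(test))
    return sum(accs) / k

def fit(Xtr, ytr):
    """Nearest-mean stub: store the per-class mean."""
    means = {}
    for c in set(ytr):
        vals = [x for x, yy in zip(Xtr, ytr) if yy == c]
        means[c] = sum(vals) / len(vals)
    return means

def predict(model, x):
    return min(model, key=lambda c: abs(x - model[c]))

# Interleaved toy data: class 0 near 0, class 1 near 10.
print(cross_val_accuracy(fit, predict, [0, 10] * 10, [0, 1] * 10))  # 1.0
```

In the paper's experiments, `fit`/`predict` would be replaced by the respective S3VM trainer, with the same fold split reused for every algorithm.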
As can be seen from Table 2, the proposed IBBS3VM and BBS3VM both reach 100% accuracy on the g22c and 2moons data sets, which shows that these two algorithms can serve as performance baselines for comparing semi-supervised support vector machine learning algorithms. The COIL3 sample set is a challenge for semi-supervised SVM learning algorithms: the data set itself does not fully satisfy the cluster assumption, and Table 2 shows that the classification results of ▽S3VM, CCCP and S3VMlight on it are unsatisfactory [15-16]. Since BBS3VM and IBBS3VM perform a global search, their classification results are much better than those of the first three algorithms. All algorithms achieve very good classification results on g22c, mainly because this data set fully satisfies the cluster assumption. The data sets g33c and COIL3 involve more than two categories, and the classification results on them are less than ideal; this is mainly due to the use
of a one-versus-rest (1-n) strategy for these multi-class problems, and this strategy leads to a serious imbalance in the number of samples. With the 1-n strategy, each classifier is constructed by treating one class as the "positive class" while all other classes are mixed together as the "negative class"; the mixed "negative class" then forms a collection of multiple small sample clusters, which drives the semi-supervised learning problem into local minima. A one-versus-one (1-1) strategy, on the other hand, is difficult for semi-supervised learning when the number of labeled samples is very small.

Comparison of Training Time. Since ▽S3VM, CCCP and S3VMlight apply an annealing strategy to the entropy term during training, their training speed depends not only on the algorithm design but also on the number of iterations, so comparing the training times of all algorithms is of little meaning. This paper therefore compares the training time of BBS3VM and IBBS3VM only; the results are shown in Table 3.

Table 3. Comparison of training time
Data set    BBS3VM (s)    IBBS3VM (s)
g22c           218            176
2moons         181            149
g33c           282            226
Text           899            575
COIL3          365            231
As can be seen from Table 3, the IBBS3VM algorithm executes faster than BBS3VM, and this advantage becomes more pronounced as the number of samples increases. In the course of the experiments, the gap between the global optimal solution and the other local optimal solutions found by IBBS3VM on the COIL3 data set was small; in this case a large number of branches are cut off at the beginning of training, which is one of the reasons for its speed.
References 1. Hodge, V., Austin, J.: A survey of outlier detection methodologies. Artificial Intelligence Review 22(2), 85–126 (2004) 2. Bishop, C.M.: Novelty detection and neural network validation. IEEE Proceedings-Vision, Image and Signal processing 141(4), 217–222 (1994) 3. Toosi, A.N., Kahani, M.: A new approach to intrusion detection based on an evolutionary soft computing model using neuro-fuzzy classifiers. Computer Communications 30(10), 2201–2212 (2007) 4. Zanero, S., Serazzi, G.: Unsupervised learning algorithms for intrusion detection. In: IEEE Network Operations and Management Symposium, Osaka, pp. 1043–1048 (2008)
5. Yeung, D., Chow, C.: Parzen-window network intrusion detectors. In: Proceedings of the 16th International Conference on Pattern Recognition, Québec, pp. 385–388 (2002) 6. Campbell, C., Bennett, K.P.: A linear programming approach to novelty detection. In: Advances in Neural Information Processing Systems, Vancouver, pp. 395–401 (2001) 7. Feng, A., Chen, B.: Based on the local density of single-class classifier improvement algorithm for LP. Nanjing University of Aeronautics and Astronautics 38(6), 727–731 (2006) 8. Alberto, M., Javier, M.: One-Class Support Vector Machines and Density Estimation: The Precise Relation. In: Proceedings of Progress in Pattern Recognition, Puebla, pp. 216–223 (2004) 9. Tsang, I.W., Kwok, J.T., Li, S.: Learning the Kernel in Mahalanobis One-Class Support Vector Machines. In: International Joint Conference on Neural Networks, Vancouver, pp. 1169–1175 (2006) 10. Tao, Q., Wu, G., Wang, J.: A new maximum margin algorithm for one-class problems and its boosting implementation. Pattern Recognition 38(7), 1071–1077 (2005) 11. Dolia, A.N., Harris, C.J., Shawe-Taylor, J.S., et al.: Kernel ellipsoidal trimming. Computational Statistics and Data Analysis 52(1), 309–324 (2008) 12. Juszczak, P.: Learning to recognise, a study on one-class classification and active learning. PhD thesis, Delft University of Technology, pp. 117–125 (2006) 13. Langford, J., Shawe-Taylor, J.: PAC-Bayes and margins. In: Advances in Neural Information Processing Systems, Vancouver, pp. 439–446 (2003) 14. Tax, D., Juszczak, P.: Kernel whitening for one-class classification. International Journal of Pattern Recognition and Artificial Intelligence 17(3), 333–347 (2003) 15. Leung, K.K., Liu, S., Wu: A Reduction Algorithm for Support Vector Domain Description RSVDD. Xi’an University of Electronic Technology 35(005), 927–931 (2008) 16. Pyo, J.K., Hyung, J.C., Jin, Y.C.: Fast incremental learning for one-class support vector classifier using sample margin information. 
In: Proceedings of 19th International Conference on Pattern Recognition, Tampa, pp. 1–4 (2008)
The Application of Wireless Communications and Multi-agent System in Intelligent Transportation Systems

Wei Xiaowei
Chang'an University, Shaanxi College of Communication Technology
[email protected]
Abstract. With the help of wireless communications, each automotive vehicle will in the future be a unique node on the global communications network. This vehicular communications network, in turn, will support interactions within the automobile, with the surrounding environment, and directly with nearby vehicles. In summary, using multi-agent technology in traffic control and guidance can effectively alleviate the problems of over-centralized collaborative computing and heavy computational load. In traffic control and traffic guidance systems, the negotiation mechanism of multi-agent systems can reconcile conflicting goals in multi-objective optimization. Keywords: Intelligent Transportation Systems, Cooperative Communications, Autonomic Systems.
1 Introduction

The past few years have seen renewed and rapidly growing demand for vehicular communication technologies that enable safer, smarter, and greener transportation. Developing cost-effective wireless communication technologies to improve traffic safety has been a priority for government transportation agencies and automakers. An integrated control and guidance system based on multi-agent technology seeks to fuse the two functions of traffic control and traffic guidance. It can also effectively avoid the waste of communication resources and the breakdown of information exchange between the two systems caused by keeping traffic control and guidance as separate systems. By using a distributed hierarchical system structure, the computing resources of the system can be fully balanced, and information sharing between the traffic control and guidance systems can be fully realized. This organizational structure combines autonomy with master-slave coordination: it not only gives play to each agent's autonomy and flexibility, but also allows the system to be controlled in a coordinated way to a certain extent.
2 Traveling Safer with Communications Technologies Vehicle safety can be achieved using passive or active safety techniques. Passive safety techniques, such as safety belts, protect drivers and passengers after accidents occur. M. Zhu (Ed.): ICCIC 2011, Part VI, CCIS 236, pp. 391–397, 2011. © Springer-Verlag Berlin Heidelberg 2011
Active safety techniques seek to prevent accidents from happening. Communications technologies can enable active safety by, for example:

• Detecting and alerting drivers to potentially hazardous conditions ahead, such as icy roads, heavy fog, road construction, or accidents.
• Detecting and warning drivers of imminent dangers so that they can take immediate action. Such dangers may include unintended lane departure, crashing into a nearby vehicle, running a red light, or crashing into another vehicle that is running a red light.
• Taking control of the vehicle proactively (e.g., slowing down or stopping it) to prevent an imminent crash in case the driver fails to act.
Extensive research has taken place to develop vehicle-to-vehicle (V2V) communication technologies based on short-range radios, such as the Dedicated Short-Range Communications or DSRC radio in the licensed 5.9GHz frequency band, to support active safety applications [7]. Heavy research and trials have also been devoted to developing vehicle-to-infrastructure (V2I) technologies to support safety, mobility, and sustainability applications.
3 Intelligent Transportation System

In the study of coordinating traffic control and route guidance, scholars have proposed a number of coordination models and algorithms. H. Shimizu, M. Kobayashi and Y. Yonezawa (1995) proposed a conceptual combination of traffic flow guidance and traffic signal control as one of two traffic control systems, shown in Fig. 1.
Fig. 1. H. Shimizu's two control systems (block diagram: traffic control system, GPS, route guidance system, input/output signals, signal marks, signal control system, traffic signals, and traffic monitoring information)
In the late 1990s, China began to research this area. In particular, Jilin University, Tianjin University, Tongji University, Hebei University of Technology and other research institutions, relying on the National Natural Science Foundation and other projects, have carried out a series of cooperative studies on traffic flow guidance, traffic control, and related topics. Researchers at Tianjin University, such as Xu Yanyu and Wang Liang, among others,
have surveyed various combination patterns of traffic control and traffic guidance at home and abroad, analyzed the shortcomings of existing approaches, and proposed an improved framework combining the two systems (see Fig. 2). One approach is a total-amount control model that places control and guidance in equally important positions: with full consideration of the coupling between the two systems, it uses a unified optimization model to obtain the control and guidance strategies, aiming to optimize an overall road-network index in the shortest possible time. Another approach draws on the idea of hierarchical control: guidance is placed at the lower level and control is coordinated and optimized at the higher level, with the two levels interacting. In addition, Dr. Li Ruimin of Tsinghua University has put forward a multi-agent-based model integrating traffic control and traffic guidance, and Professor Wei Lianyu and Han Zhiqing of Hebei University of Technology have proposed a traffic coordination strategy based on self-organization and synergy theory. In summary, the coordinated development models of traffic flow guidance and control at home and abroad mainly emphasize the acquisition and sharing of traffic information, on which basis one can focus on traffic flow guidance, traffic control, or both, so as to achieve smooth traffic flow across the road network. The fundamental problem that intelligent transportation systems solve is smooth traffic. The route guidance system is an important part of ITS, a comprehensive application field involving transportation, electronics, electrical engineering, communication, computer science, geography and many other subjects.
The application of route guidance systems can save travel time, increase travel efficiency, reduce costs, and reduce fuel consumption and environmental pollution. It is now one of the most feasible programs for solving urban transportation problems and attracts growing attention from society.
4 Computational Intelligence Applications in Traffic Guidance

The information technology revolution at the center of the modern scientific and technological revolution is rising amid global prosperity and marks a historic leap from the industrial society to the information society; information and knowledge have become the most distinctive characteristics of the times. In the mid-1980s, traditional artificial intelligence faced severe difficulties in perception, understanding, learning, association, and imaginal thinking, and classical optimization algorithms showed their limitations in solving increasingly complex practical engineering problems. The continuous improvement of computing power and speed, together with the emergence of massively parallel processing technology and the gradual maturing of its theory, provided favorable conditions and a realistic possibility for the birth of computational intelligence. As soon as computational intelligence was put forward as a new field of study, it attracted the attention of experts in many fields and became an interdisciplinary research focus. In a multi-agent system, several agents cooperate to complete a task; how to design and construct the individual agent is thus one of the important constituent problems.
Fig. 2. Framework of the combined control and guidance system (block diagram: optimality criteria, manager, control system, guidance system, traffic flow prediction, ATIS, real-time traffic flow, traveler route selection, and user-optimal criterion)
Research on the model and theory of the single agent is very important for establishing multi-agent systems and for the coordination and cooperation of agents. Research on the single agent mainly covers the agent model, system structure, capabilities, and so on. A general agent should include sensors, decision control, mental state, knowledge, and communication components. Typical agent models mainly include the BDI model, the deliberative model, the reactive model, and the hybrid model.

BDI Model. The BDI model, proposed in 1987, uses beliefs, desires, and intentions to represent the agent's knowledge about the world, the goals it wishes to achieve, and the plans it is committed to; its structure is shown in Fig. 3. A BDI agent consists of its current beliefs, its current desires and goals, a plan library, and an intention structure. The plan library provides a set of plans; plans whose targets and preconditions match the current situation are used to generate a reasonable choice of behavior, and the intention structure executes the chosen plan according to the agent's beliefs and goals.

Structural Model of the Deliberative Agent. A deliberative agent has an explicit, knowledge-based reasoning capability, a symbolic model of the environment, and intelligent behavior produced by logical reasoning. A deliberative agent maintains a
Fig. 3. The BDI model of an agent (block diagram: environment, transducer, belief, desire, intention, rational inference, plan, knowledge, and effector)
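The belief–desire–intention cycle shown in Fig. 3 can be sketched as a small data structure. This is a toy illustration: the names, the deliberate/act split, and the traffic example are invented, not taken from the text.

```python
# Minimal BDI-style agent skeleton (illustrative only; the select/execute
# cycle and all names are assumptions, not from the paper).
from dataclasses import dataclass, field

@dataclass
class Plan:
    goal: str
    precondition: callable      # tests the current beliefs
    body: callable              # action run on the beliefs

@dataclass
class BDIAgent:
    beliefs: dict = field(default_factory=dict)
    desires: list = field(default_factory=list)     # goals to achieve
    plan_library: list = field(default_factory=list)
    intentions: list = field(default_factory=list)

    def deliberate(self):
        # Adopt plans whose goal matches a desire and whose
        # precondition holds under the current beliefs.
        for goal in self.desires:
            for plan in self.plan_library:
                if plan.goal == goal and plan.precondition(self.beliefs):
                    self.intentions.append(plan)

    def act(self):
        # Execute adopted intentions in order.
        while self.intentions:
            self.intentions.pop(0).body(self.beliefs)
```

For instance, a signal-control agent with the desire "clear_intersection" and a plan whose precondition is a "congested" belief would, after `deliberate()` and `act()`, have updated its beliefs with the chosen signal action.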
Fig. 4. Structure of the deliberative agent (block diagram: environment, sensors, information fusion, inner state, knowledge base, plan, objective, and effector action)
representative symbolic internal state. It includes knowledge representation, environment representation, problem-solving capability, and a concrete communication protocol, and it is dominant in knowledge-based architectures. The structure is shown in Fig. 4.

The Structure Model of the Reactive Agent. A reactive agent is a model without symbolic representation. It includes sensors that perceive state changes from inside and outside, processes that respond to groups of related events, and a mechanism that activates actions based on sensor information. It is dominant in distributed systems. The structure is shown in Fig. 5.
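The condition–action mechanism of a reactive agent can be sketched in a few lines. The rules and percept names below are invented for illustration in a traffic-signal setting; they are not from the text.

```python
# Sketch of a reactive (condition-action) traffic agent; rules fire in
# priority order and the first matching condition selects the action.
RULES = [
    (lambda p: p["queue_len"] > 20, "extend_green"),
    (lambda p: p["pedestrian_waiting"], "schedule_walk_phase"),
    (lambda p: True, "keep_current_plan"),      # default rule
]

def react(percept):
    """Return the action of the first rule whose condition matches."""
    for condition, action in RULES:
        if condition(percept):
            return action

# Example percept -> action
print(react({"queue_len": 35, "pedestrian_waiting": False}))  # extend_green
```

Unlike the deliberative agent, no internal world model is consulted: the percept is mapped straight to an action, which is what makes this style fast and suitable for distributed systems.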
Fig. 5. Structure of the reactive agent (block diagram: environment, sensors, current world state, condition-action rules, and effector action)
The Structure Model of the Hybrid Agent. A hybrid agent structure contains two or more subsystems, typically a deliberative subsystem and a reactive subsystem, organized hierarchically with the former built on the latter. The hybrid model overcomes the shortcomings of the two previous models, whose functions are not comprehensive and whose structures are not flexible enough. At the same time, an agent with such comprehensive capabilities can act and cooperate autonomously under the guidance of a real-time response target. The structure is shown in Fig. 6.
Fig. 6. Structure of the hybrid agent
Multi-Agent Systems (MAS) are a hot topic in artificial intelligence research and an important branch of Distributed Artificial Intelligence (DAI). The goal of MAS is to decompose large, complex systems (software and hardware) into small systems that communicate and coordinate with each other and are easy to manage.
5 Conclusion

As urbanization accelerates with rapid economic development and the growth of urban populations and motor vehicle ownership, urban road traffic congestion has become a common phenomenon throughout the world. Traffic accidents, energy waste, and environmental pollution have become important factors restricting the sustainable development of society and the economy. Intelligent transportation systems have therefore become an important direction of future development for solving urban transportation problems.
References 1. Huang, W., Chen, L.D.: The Introduction of Intelligent Transportation Systems (ITS). People’s Communications Press, Beijing (2001) 2. Chen, T.: Based on collaboration of the Urban Traffic Control and Guidance System theory and method of coordination. Jilin University (2006) 3. Chen, X.F.: Dynamic Optimization of Urban Traffic Signal Control Technology. Northwestern Polytechnic University (2003) 4. Liu, X.H., Wei, W., Peng, C.: Multi-agent-based urban traffic control and coordination of induction. Highways & Automotive 5 (2007) 5. Xu, L.Q.: Urban traffic flow guidance and control theory and model of integration. Jilin University (2000) 6. Li, R.M., Shi, Q.X.: Multi-agent systems based on urban traffic control and integration of induction. Highway and Transportation Research (May 2004) 7. He, J.L.: Development of multi-agent systems. Anhui University, Hefei (2004) 8. Du., C.H.: The study on the Computational Intelligence used in Urban Traffic Guidance System. Chongqing University PhD thesis (2009)
Study on Actuator and Generator Application of Electroactive Polymers

Jia Ji, Jianbo Cao, Jia Jiang, Wanlu Xu, Shiju E., Jie Yu, and Ruoyang Wang
College of Engineering, Zhejiang Normal University, Jinhua, Zhejiang 321004, P.R. China
[email protected]
Abstract. In order to alleviate the conflict among economic development, energy, and the environment, research on and experimental analysis of electroactive polymers was carried out. Electroactive polymers are a new style of clean and efficient energy material. Their performance and power-generation principle are introduced, their application status at home and abroad is analyzed, and their broad application prospects and development trends are revealed.

Keywords: Electroactive polymers, Application, Power generation, Development trend.

1 Introduction
The electroactive polymer (EAP) is a new style of material. Because of its unique and admirable electrical and mechanical features, EAP has attracted great attention [1]. EAP has many advantages: it is cheap, clean, highly efficient, easy to manufacture, and soft in texture. Although the material has been researched for almost 30 years, only in recent years have practical actuator materials really been obtained, for example in the United States (Stanford Research Institute, University of Illinois), Ireland, the United Kingdom, Italy, and Japan. Moreover, AMI (Artificial Muscle Inc.) of the United States specializes in an electroactive-polymer-based technology platform and is currently serving a market of approximately $4 billion including the industrial, medical, consumer, automotive, and aerospace areas [2]. Application research on EAPs in our country is receiving increasing attention. In recent years, domestic authors have successively published surveys of relevant research and applications of EAPs [3-7], and there are a small number of material research reports [8,9]. Domestic introduction and research are mostly located in the area of actuation; as a power generation material, EAP is still in its infancy.
2 Research and Application Abroad
EAPs have great adaptability. When used in the actuator field, EAP actuators can be produced with light weight, high drive efficiency, and vibration-resistant performance; EAPs are potential biomimetic materials. In addition, when EAPs are used in the generator field, they can generate electricity through changes in the size and shape of the polymer, and can be widely applied in the power generation field. M. Zhu (Ed.): ICCIC 2011, Part VI, CCIS 236, pp. 398–403, 2011. © Springer-Verlag Berlin Heidelberg 2011
Robotic Plastic Fish. The Japanese company Eamex in Osaka produced an electroactive-polymer-based robot fish [10]. The fish does not contain any mechanical parts: no motors, drive shafts, gears, or even batteries. The plastic fish is driven by the inherent force of the material and can behave like a real fish in the water: it swims because the plastic of its body is bent back and forth. This was among the first batch of commercial products based on EAPs.

Simulation of Muscle. EAPs show high toughness, high transmission strain, and internal damping capacity in the imitation of biological muscle, and are materials for manufacturing bio-inspired robots driven by biological signals. Yoseph Bar-Cohen is one of the pioneers in this field; he has been committed to developing a robot arm with simulated muscles, and the muscle he designed has been claimed to outperform that of a real man (shown in Fig. 1). Progress in this area will bring huge benefits, especially in the medical field, where it can be used to manufacture powerful artificial limbs [11].
Fig. 1. Simulation of muscle
Electrode Materials for Lithium-Ion Batteries. F. Denton et al. [12] described the feasibility of using EAPs as an overcharge-protection mechanism. Overcharge protection is achieved through the significant change of EAP electrical conductivity between the oxidized and reduced states and the reversibility of the process, shown in Fig. 2. In 1987, Japan put Li-Al/LiBF4-PC/PAn button batteries on the market; they became the first commercial plastic batteries [13].
Fig. 2. Principle of overcharge protection
Electroactive Membrane. In the early 1980s, Burgmayer and Murray proposed the concept of a gated ion membrane based on CEP (Conducting Electroactive Polymer). Fig. 3 shows the general principle of a dynamic CEP membrane [14]. The appliance is an ion-migration cell with a three-electrode system; it uses a standalone polypyrrole/p-toluene sulfonate (pTS) membrane as the working electrode. The oxidation and reduction states of the polymer membrane can be switched by this appliance, so the ion exchange of the polymer can be achieved and controlled.
Fig. 3. Principle of dynamic CEP membrane
Portable Generators Using Human Physical Energy. EAPs have a high energy ratio, large strain, flexibility, low material density, and good impact resistance, so they can be directly coupled with the actions of human joints and designed as light wearable generators with low cost and high efficiency. It is promising to use human actions (such as joint motion or heel strike) to generate power. Fig. 4 shows a shoe-heel strike generator developed by the Stanford Research Institute (SRI) in the United States [2].
Fig. 4. Heel generator based on electroactive polymers
EAPs have many excellent characteristics and therefore broad application prospects. With the development of the EAP materials manufacturing process, research on EAP power generation will receive increasing attention.
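The generation mode just described can be summarized by the standard variable-capacitance model of a dielectric elastomer generator. These are textbook relations, not formulas taken from the papers cited here:

```latex
% Capacitance of the elastomer film and energy stored on it
C = \varepsilon_0 \varepsilon_r \frac{A}{d}, \qquad E = \frac{Q^2}{2C}
```

Stretching the film increases its area $A$ and decreases its thickness $d$, raising the capacitance to $C_{\max}$; a priming charge $Q$ is injected while stretched, and mechanical relaxation then lowers the capacitance to $C_{\min}$ at nearly constant charge, so the electrical energy gained per cycle is

```latex
\Delta E = \frac{Q^2}{2}\left(\frac{1}{C_{\min}} - \frac{1}{C_{\max}}\right) > 0
```

which is why the mechanical work of a heel strike or a wave can be converted into electrical energy.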
2.1 Domestic Research and Application of EAP
Research on and application of EAP materials is receiving increasing attention in China. EAPs can work in actuator mode and in generator mode [15]. Current domestic introduction and research are mostly located in the area of actuation. The 863 Project "Design and Application Technology of Large-Deformation EAP Material for Natural Muscle Simulation" was approved in 2006. However, so far there are very few research reports and articles on EAP generators, their structures, and their applications from domestic universities and institutes. Tracking foreign research, Sun Yat-sen University obtained a Guangdong Province project on EAP actuators and sensors; it mainly studied single and dual pre-stretched actuators and their electromechanically coupled driving properties using acrylic polymer films, and developed a cylindrical single-degree-of-freedom actuator [7]. In addition, the Norwegian University of Science and Technology [16] and Tongji University in China [17] have begun research on EAP power generation. Hefei University of Technology started to design a one-dimensional stretching EAP actuator [18] and studied the mechanical property model of EAPs [19]. Anhui University of Technology conducted research and analysis on the mathematical modeling of EAP material properties [20]. Chongqing
Fig. 5. Experimental circuit schematic diagram
Fig. 6. Experimental result of EAP power generation
University of Technology studied the failure mechanism of EAPs [21]. E Shiju and other teachers at Zhejiang Normal University discussed the basic mechanism of the power generation mode of EAP materials based on the structural characteristics of thin-film EAPs, and studied the energy conversion process of the material with simulation and experiment. The experimental power-generation circuit schematic of EAPs is shown in Fig. 5, and the experimental result of EAP power generation is shown in Fig. 6 [15].

2.2 Development Trends
As a new material, EAP is gradually receiving more attention and has broad application prospects because of its many advantages.
1) EAP has light weight, low power consumption, low price, and other desirable characteristics. In particular, EAP is soft and its operating characteristics are similar to animal muscle; consequently, it has potential applications in flexible robots, micro-machines, medical treatment, bio-machines, aerospace, the military, toys, and so on.
2) Acrylic polymers and silicone rubber are typical representatives of the dielectric elastomers, one type of EAP. These materials respond quickly, deform greatly, and convert energy with high efficiency; they can be applied in many situations, such as displacement sensors, artificial muscle arms, pumps, robots, and bow and curved-profile sensors.
3) In the field of power generation, EAPs are particularly suitable for producing electricity from low-frequency, large-deformation energy sources, such as wind, wave, and tidal power, as well as individual soldier equipment.
With the development of research, the application of EAPs is very promising.
3 Conclusion
EAPs have received wide attention in the field of actuators. Because of their good biocompatibility, they are biomimetic materials with development potential. In the field of power generation, EAPs have broad application prospects in the energy industry because of their cheap, clean, and efficient features. However, at present a lot of research on EAPs still needs to be done to achieve these applications. The energy crisis concerns the whole world; the use of EAPs could greatly benefit our environment and economy.
References 1. Jumaily, A.M., Assim, A.: Proceedings of SPIE, vol. 5051, pp. 400–403 (2003) 2. Pelrine, R., Kornbluh, R., Eckerle, J., et al.: Proceedings of SPIE, vol. 4329, pp. 148–156 (2001) 3. Zhao, C.S., Yang, L.: Small & Special Electrical Machines (10), 1–6 (2006) 4. Han, F.F., Wang, Y., Liu, L.H., et al.: Mechanical Science and Technology 25(2), 214–232 (2006)
5. Wei, Q., Chen, H.L.: Sensor World (4), 6–10 (2007) 6. Peng, H., Yang, L., Li, H.F., et al.: Small & Special Electrical Machines (2), 57–61 (2008) 7. Dai, F.J., Qi, F.M., Zheng, S.S., et al.: Journal of Materials Science and Engineering 26(1), 156–160 (2008) 8. Wei, Y.Y., Feng, Z.H., Liu, Y.B., et al.: Journal of Functional Materials and Devices 12(6), 501–504 (2006) 9. Huang, W.S., Cong, Y.Q., Lin, B.P., et al.: Polyurethane Industry 21(3), 18–21 (2006) 10. Steven, A., Ke, J.H., Li, L.Z.: Scientific American (12), 44–51 (2003) 11. Cong, X.Q., Ma, X.D.: Recent Developments in Science & Technology Abroad (6), 18–19 (2002) 12. Howard, D.J.N., Anani, A.A., Fernandez, J.M.: Self-switching electrochemical cells and method of making same, USP: 6228516, (2001-06-02) 13. Tang, Z.Y., Liu, Q., Chen, Y.H., et al.: Chinese Journal of Power Sources 30(12), 1017–1019 (2006) 14. Peng, Z.L., Zhou, N.Q., Xie, X.L.: Modern Chemical Industry 23(S1), 48–51 (2003) 15. E, S.J., Zhu, X.L., Cao, J.B., et al.: Journal of Agricultural Machinery 41(9), 194–198 (2010) 16. Wang, K.: SINTEF Report (2008) 17. Lin, G.J., Chen, M.: 2009 International Conference on Energy and Environment Technology, Guilin, China, pp. 782–786 (2009) 18. Li, G.: Design of Unilateral Push-pull Actuator of Dielectric Elastomers China (2007) 19. Feng, M.L.: Mechanical Property Modeling of Dielectric Elastomer China (2007) 20. Chen, J., Lu, X.S.: Journal of Anhui University of Technology (Natural Science) (1), 50–54 (2007) 21. Zhao, C.Q., Lu, X.S., Zhang, Y.: Journal of Chongqing Institute of Technology (1), 25–28 (2007)
Research on Chinese Mobile e-Business Development Based on 3G

Li Chuang
School of Economics and Management, Henan Polytechnic University, Jiaozuo 454000, China
[email protected]
Abstract. With the rapid increase in mobile users, the continuous optimization of mobile networks, and the continual emergence of new business, Chinese mobile e-commerce shows a trend of rapid development, reflected in rapid growth in business volume, increasing user acceptance, further subdivided user groups, and so on. This article mainly describes the concept of mobile e-commerce, its advantages over traditional e-commerce, its technology and features, the impact of 3G technology on mobile e-commerce, and mobile e-commerce security and trends. The article then analyzes the factors hindering the development of Chinese mobile e-business and puts forward some suggestions. Keywords: Mobile e-commerce, Development situation, China, 3G.
1 Introduction

Mobile e-commerce is the promotion, buying, and selling of goods and services through electronic data communication networks that interface with wireless (or mobile) devices [1]. It uses mobile phones, PDAs, handheld computers, and other wireless terminals to conduct B2B, B2C, or C2C business. It combines the internet, mobile communication technology, short-range communication technology, and other information processing technologies so that people can carry out a variety of business activities at any time and in any place, such as online and offline shopping and trading, online electronic payment, business activities, financial activities, and related integrated service activities. The development of mobile e-commerce is closely related to the development of mobile communication technology. In the 1990s, mobile communication technology developed rapidly, mobile e-commerce gradually came into view, and academics began to study mobile e-commerce from theoretical and technical perspectives. In the development of mobile e-commerce, Japan, Korea, and some European countries hold leading positions. Japan's mobile e-commerce has moved completely into
M. Zhu (Ed.): ICCIC 2011, Part VI, CCIS 236, pp. 404–409, 2011. © Springer-Verlag Berlin Heidelberg 2011
the third-generation mobile communication (3G) era built on broadband infrastructure. 3G networks provide higher data speeds and serve as a high-speed information transmission platform to support mobile commerce [2]. Since 2009, China's mobile internet has entered a period of rapid development; the evolution of communications technology, the reduction of network tariffs, the enhancement of terminal hardware performance, and the improvement of application services all provide good prerequisites for users to access the internet through mobile devices. The issuance of 3G licenses streamlined the business division of the three major telecom operators. Large-scale construction of 3G network infrastructure started, which allows application service providers built on the telecommunications operators' network framework to offer richer and more interactive services through high-speed data transmission and extended network bandwidth. Mobile e-commerce is one of the most important wireless application services [3]. iResearch recently issued the "2010 China Mobile E-commerce Market Research Report". The data show that in 2009 the scale of China's mobile e-commerce transactions maintained rapid growth, reaching 5.3 billion Yuan, up 248.7 percent year on year. According to iResearch's forecast, in the next 2–3 years China's mobile e-commerce will enter a stage of explosive growth.
2 The Characteristics of Mobile e-Commerce

(1) Convenience. A mobile terminal is both a mobile communication tool and a mobile POS machine, as well as a mobile bank ATM. Users may conduct e-commerce transactions and banking services, including payment, at any time and in any place.
(2) Security. Customers using mobile banking can make use of a high-capacity SIM (Subscriber Identity Module) card and the bank's reliable key to encrypt information and transmit only ciphertext, which ensures safety and reliability.
(3) Rapidity and flexibility. Users may flexibly choose access and payment methods and set personalized information formats. The more e-commerce services there are and the simpler their form, the faster mobile e-commerce develops.
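The security characteristic above rests on a shared secret held by the SIM card and the bank. As an illustrative sketch only (the paper does not specify the actual scheme; the key, payload format, and function names here are hypothetical), a transaction payload can be authenticated with a keyed hash before transmission:

```python
import hmac
import hashlib

def sign_transaction(shared_key: bytes, payload: bytes) -> bytes:
    """Compute an HMAC-SHA256 tag over the transaction payload."""
    return hmac.new(shared_key, payload, hashlib.sha256).digest()

def verify_transaction(shared_key: bytes, payload: bytes, tag: bytes) -> bool:
    """Constant-time comparison guards against timing attacks."""
    return hmac.compare_digest(sign_transaction(shared_key, payload), tag)

key = b"sim-card-shared-secret"          # hypothetical key provisioned on the SIM
msg = b"PAY;to=merchant-001;amount=120"  # hypothetical transaction payload
tag = sign_transaction(key, msg)
```

In a real deployment the payload would additionally be encrypted end to end, as the paper describes; the sketch shows only the integrity check.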
3 Key Technology of Mobile e-Commerce

Mobile e-commerce arises from combining the internet, mobile communications, and other technologies; its key technologies include [4]:
(1) Wireless Application Protocol (WAP), one of the core technologies of mobile e-commerce. WAP is a communications protocol that provides an open, unified technology platform through which users can easily access information and services on the internet or an intranet.
(2) Mobile IP technology. This network technology realizes the TCP/IP roaming function and mainly solves the problem that a local area network cannot be extended. It provides the necessary supplement to the TCP/IP protocol suite and makes TCP/IP support network roaming.
(3) "Bluetooth" technology. Bluetooth is a short-range wireless connectivity standard jointly introduced by Ericsson, IBM, Nokia, Intel, and Toshiba. As a low-cost, low-power, short-range wireless communications technology, Bluetooth is intended to replace cable connections and realize wireless connections between digital devices, so that the most common computer and communications devices can easily communicate.
(4) General Packet Radio Service (GPRS). GPRS is a wireless wide-area network technology based on the GSM (Global System for Mobile Communications) standard, with rates up to 115 kb/s and fast access to data networks.
(5) 3G technology, the third-generation mobile communications technology. It provides at least 144 kb/s in high-mobility environments, 384 kb/s in pedestrian environments, and up to 2 Mb/s in low-mobility or indoor environments, so it can deliver excellent broadband multimedia services supporting high-quality voice, packet data, and multimedia business.
(6) Mobile positioning systems. They can provide emergency rescue, fleet management, ticketing, location-based charging, local news, weather, hotel information, and other services for tourists and office staff.
4 The Business Model of Mobile e-Commerce

The success or failure of business content development is considered the key to the development of third-generation mobile communication technology (3G). Compared with earlier mobile communication networks, 3G has higher data rates, transmitting up to 2.4 Mb/s, roughly 100 times the rate of 2G and 20 times that of 2.5G. Messaging, positioning, mobile commerce, and video games will become the key elements for establishing the best combination of mobile interconnection services. Owing to the higher transmission speed and increased frequency utilization, 3G can provide a greater variety of business content. Data, images, music, web browsing, and other data services will enable people to communicate with each other. Therefore, high speed, multimedia, and personalization should be the focus of third-generation (3G) services. Figure 1 shows the technology functions and business segments of third-generation mobile e-commerce [5].
[Figure 1 depicts the mobile e-commerce platform modules (mobile gateway, encryption and authentication, location-based services, billing, financial transactions, micropayment, advertising, user database) and the transaction flow among user terminals, trading posts, and the platform: (1) order, (2) send orders, (3) return the completed order form, (4) delivery; (A) upload application and authentication information, (B) provide the user password and authentication information.]

Fig. 1. Mobile e-commerce function module
5 The Factors Hindering the Development of Chinese Mobile e-Business

One of the most important factors hindering the development of Chinese mobile e-commerce is inappropriate regulatory and administrative measures, for example around "micropayment" and "third-party payment". Current policies are at a crossroads: government departments cannot determine what should be managed and what should not, so enterprises cannot follow a legitimate road and can only play "edge ball". Obviously, as a new
business, mobile e-commerce lacks clear industry standards, including access policy, regulatory policy, interconnection roaming, resource sharing, service quality assurance, service standards, and so on; all of these need clear rules to support the healthy and stable development of the 3G market. The advantages of mobile e-commerce applications have not been given full play, and the key reason is that ambiguous regulatory policy leaves operators with many doubts, so some innovative applications cannot be developed widely. Take the core business of mobile e-business, mobile payment, for example: its related policy has become the focus of all parties. The Electronic Signature Law and the electronic payment guidelines laid the foundation for mobile payment's policy and legal status, but mobile payment is intertwined with third-party payment, which has been considered a sensitive, grey area, so development is slow.

Mobile e-commerce is nevertheless a clear trend in China, and mobility has brought many new changes to electronic business. At the user level, the large user base, safety, and communication characteristics benefit the development of electronic commerce. At the technical level, the convergence of fixed and mobile networks and the integration of transmission and content result in new businesses and development models. At the market level, the development of networks, terminals, browsers, applications, and content lays the basis for integrated innovation. At the business level, pre-paid mobile communications and the model bundling mobile phones with credit cards are conducive to the development of mobile commerce. At the policy level, the country provides encouraging policies for the development of e-commerce, and the release of the e-commerce Eleventh Five-Year Plan is a major positive factor.
In short, relative to fixed-line e-commerce, mobile commerce will break through the bottlenecks existing at the support level, especially in security and credit; this has a positive impact on the development of payment, will promote the development of technical standards, and at the same time gives users important new value in transactions, payment, business, and management [6].
6 Conclusion

In short, many issues deserve attention in the development of mobile commerce, and they can be summarized in five areas. First, telecom operators will become comprehensive information service providers and finally the backbone of the e-commerce service industry. Second, the penetration rate of mobile information among SMEs is an important foundation of mobile commerce development. Third, the development of mobile commerce must take applications as the focus and adopt strategies that address users' current major obstacles: regarding consumer habits, promote and nurture users' interest in mobile e-commerce applications; regarding users' security concerns, strengthen and ensure the security of networks, applications, transactions, and information; regarding the credit system, implement a brand strategy and carefully choose partners to create famous brands; regarding service, innovate the mode and the industrial chain. Fourth, because of differences in financial development and user payment habits, mobile payment in China is currently in an early stage of development, whose basic characteristic is underdevelopment rather than overdevelopment. Fifth, since mobile e-commerce policy has no clearly restricted areas, operators need to learn the pioneering spirit of private enterprise and
actively communicate with the financial regulatory authorities to promote the development of mobile e-commerce [6].

Acknowledgement. This paper is supported by the PhD Funds of Henan Polytechnic University (No. B2006-13), with heartfelt thanks.
References 1. Peter, T.: Wireless/Mobile E-Commerce: Technologies, Applications, and Issues. In: Seventh Americas Conference on Information Systems, pp. 435–438 (2001) 2. Cao, S., Tian, C.: Development of Mobile E-commerce at Home and Abroad. Science Mosaic 7, 222–223 (2010) 3. Zhang, J.: A Review of Mobile E-commerce Development. Economic Research Guide 1, 189–190 (2011) 4. Wang, F.: On the Status and Development Tendency of M-Commerce. China Market 45, 109–110 (2008) 5. Lv, F.: 3G-based Mobile E-commerce Platform. Communications Today 11, 20–23 (2005) 6. Guo, X.: Mobile E-business Development Prospects and Policy Strategies, http://news.rfidworld.com.cn
The Statistical Static Timing Analysis of Gate-Level Circuit Design Margin in VLSI Design Zhao San-ping Hebi Vocational and Technical College [email protected]
Abstract. This paper investigates the effect of design margin relaxation on overall circuit performance metrics such as operating frequency, area, and power. The experimental results show that by designing a circuit with a relaxed design margin we can reduce the waste of design resources while retaining the advantages of the deterministic design infrastructure that is widely used in modern circuit design. In addition, if post-silicon optimization is applied to compensate for the yield loss generated by design margin relaxation, the yield of the circuit can be raised to the target timing yield with area and power benefits. Keywords: VLSI, Gate-level design margin, process variation.
1 Introduction

A well-known method for mitigating the variation of circuit performance is worst-case corner-based design [3]. Worst-case corner-based design estimates circuit characteristics such as timing and leakage by assuming that the process parameters of the circuit take worst-case values. Because this method deals with process variation in a relatively simple way, it is still widely used today. Unfortunately, in recent technologies, where the size of process variation is growing and its behavior is more complex, the pessimism of worst-case corner-based design keeps increasing [4]. The actual worst-case value of each process parameter occurs rarely, yet designing a circuit based on worst-case values makes meeting a given specification difficult, and the over-design problem of spending excessive resources to meet the design specification is becoming serious. Therefore, the need for a new design method to replace worst-case corner-based design is growing.

To overcome this problem, statistical design, which analytically evaluates circuit performance variation, has been proposed as a new and promising variation-aware analysis technique [5]. In statistical design, we use statistical static timing analysis (SSTA) for timing analysis. SSTA replaces the deterministic delay values of static timing analysis (STA) with random variables, whose means and sigmas, captured from manufactured parts, are used to represent circuit delay. On the other hand, it is not easy to apply SSTA to existing designs, for the following reasons. One reason is that statistical methods have generally suffered from being too slow to be readily usable in a practical design methodology, aside from the inherent difficulty of gathering statistical data representative of various levels of process variations and
extracting from these data information that can be effectively useful to the designer [6]. A second reason is that statistical design tools for the flow are not mature enough. Finally, changing the conventional deterministic design flow to a statistical design flow requires many process changes. Although these issues will not be resolved in the short term, the size of process variation continues to increase due to aggressive technology scaling. Therefore, before statistical design techniques can be applied to existing designs in practice, a new design methodology is required to substitute for worst-case corner-based design.

In [7], the authors proposed a new design methodology based on traditional static timing analysis (STA) that considers process variations (die-to-die (D2D) variation, within-die (WID) random variation, and WID systematic variation) using a relaxed corner, which lies somewhere within the extremes of device behavior, i.e., within the 3-sigma corner. The main idea of relaxed-corner design is that the "nominal" corner may call for a setting of ΔL = 0 (for channel length variation), the "worst" corner may call for a setting of ΔL = +3σ_L, and the "relaxed" corner may call for a setting of ΔL = +δσ_L with δ < 3. If we design the circuit using the relaxed corner (i.e., if the setting ΔL = +δσ_L is used for all devices), and if the circuit timing is verified using deterministic transistor-level STA, then the circuit will achieve the desired timing yield while reducing the waste of design resources. In this way, we can more easily design a low-power, high-performance circuit that meets the given design spec compared with worst-case corner-based design.

However, no published papers have considered mitigating the pessimism of conventional gate-level STA using a relaxed design margin that accounts for process variation. In gate-level STA, because each gate delay in the chip is set to its worst-case value (i.e., the 3-σ point of each gate delay distribution), it also gives pessimistic results and generates serious over-design problems. For this reason, this paper investigates the effect of gate-level design margin relaxation on overall circuit performance metrics such as yield and power. To verify this, we make an extensive comparison of the results of relaxed-design-margin-based STA. In addition, this paper discusses the benefits in design resources such as area and power that can be obtained through the optimization process.

This paper is organized as follows. Section 2 briefly introduces the mechanisms of gate-level and transistor-level STA. Section 3 shows and discusses the experimental results on the effect of corner relaxation on overall circuit performance metrics. Finally, Section 4 summarizes and concludes the paper.
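The pessimism of per-gate worst-casing can be seen with a small numeric sketch (not from the paper; the gate statistics are hypothetical and gate delays are assumed independent and normal): summing 3-σ corners grows linearly with path depth, while the statistical 3-σ path delay grows only with the square root of the depth.

```python
import math

def worst_case_path_delay(mu, sigma, n):
    # Deterministic corner: every gate simultaneously at its 3-sigma delay.
    return n * (mu + 3 * sigma)

def statistical_path_delay(mu, sigma, n):
    # Independent normal gate delays: means add, variances add.
    path_mu = n * mu
    path_sigma = math.sqrt(n) * sigma
    return path_mu + 3 * path_sigma  # 3-sigma quantile of the path delay

mu, sigma = 10.0, 1.0  # hypothetical gate delay statistics (ps)
for n in (1, 16, 64):
    wc = worst_case_path_delay(mu, sigma, n)
    st = statistical_path_delay(mu, sigma, n)
    print(f"depth {n}: worst-case {wc:.1f} ps, statistical {st:.1f} ps")
```

For a 16-gate path the worst-case estimate is 208 ps against a statistical 3-σ value of 172 ps, illustrating the over-design the paper aims to remove.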
2 Background

Transistor-Level Static Timing Analysis. In transistor-level STA, the worst-case delay of each gate can be represented as

Delay_Gate = f(L + 3σ_L, V + 3σ_V, T + 3σ_T, …)    (1)

where L is the channel length, V is the doping-dependent threshold voltage, and T is the gate oxide thickness. By designing the gate at the 3-σ corner of each process parameter, the induced gate delay covers the effect of process variation. After obtaining the gate delays, we can calculate the overall circuit delay (CD_WC) by implementing STA:

Delay_Gate ⇒ STA ⇒ CD_WC    (2)

In the case of transistor-level relaxed corner-based design, if we substitute δ for 3 in (1) and then implement STA, we obtain the overall circuit delay.
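The STA step in Eq. (2) is a longest-path computation over the gate netlist. A minimal sketch of that step (the netlist representation and gate names are illustrative, not from the paper):

```python
from collections import defaultdict, deque

def sta_circuit_delay(gate_delay, fanout, primary_outputs):
    """Worst-case circuit delay CD_WC: longest path through a gate DAG,
    where each gate carries its corner delay from Eq. (1)."""
    # Kahn topological order over the fanout graph.
    indeg = defaultdict(int)
    for g in gate_delay:
        indeg.setdefault(g, 0)
    for g, succs in fanout.items():
        for s in succs:
            indeg[s] += 1
    ready = deque(g for g, d in indeg.items() if d == 0)
    arrival = defaultdict(float)  # latest input arrival time per gate
    while ready:
        g = ready.popleft()
        finish = arrival[g] + gate_delay[g]
        for s in fanout.get(g, []):
            arrival[s] = max(arrival[s], finish)
            indeg[s] -= 1
            if indeg[s] == 0:
                ready.append(s)
    return max(arrival[o] + gate_delay[o] for o in primary_outputs)

# Tiny two-level example: g1 and g2 both drive g3.
delays = {"g1": 2.0, "g2": 3.0, "g3": 5.0}
fanout = {"g1": ["g3"], "g2": ["g3"]}
assert sta_circuit_delay(delays, fanout, ["g3"]) == 8.0
```

Relaxed-corner STA uses the same traversal; only the per-gate delay values change.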
Gate-Level Static Timing Analysis. In gate-level STA, the worst-case delay of each gate can be represented as

Delay_Gate = μ_Gate + 3σ_Gate    (3)

where μ_Gate is the mean of the gate delay and σ_Gate is its standard deviation; both are calculated beforehand by SPICE simulation at the transistor level. By designing the gate with a 3-σ_Gate margin on each gate delay distribution, the induced gate delay covers the effect of process variation. In the case of gate-level relaxed-design-margin design, if we substitute δ for 3 in (3) and then implement STA, we obtain the overall circuit delay.
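Eq. (3) reduces to a one-line computation; the numbers below are hypothetical gate statistics, not values from the paper:

```python
def gate_delay(mu, sigma, delta=3.0):
    """Eq. (3) with a relaxable margin: delay = mu + delta * sigma.
    delta = 3.0 reproduces the conventional worst-case margin;
    delta < 3 gives the relaxed design margin."""
    return mu + delta * sigma

mu, sigma = 50.0, 4.0  # hypothetical gate delay statistics (ps)
assert gate_delay(mu, sigma) == 62.0                      # 3-sigma worst case
assert abs(gate_delay(mu, sigma, delta=2.1) - 58.4) < 1e-9  # relaxed margin
```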
3 Experimental Results

The reason for the pessimism of conventional gate-level worst-case design is that, when implementing STA, each gate delay in the chip is set to its worst-case value. Because each chip has many gates, if chip size continues to increase, the degree of pessimism will further increase. In contrast to worst-case design, if we implement gate-level STA using a relaxed design margin (i.e., within the 3-σ_Gate margin), the circuit will still achieve the desired timing yield.

Table 1. Experimental Conditions

Technology:            32 nm predictive technology model (PTM) [8]
Process Parameters:    Gate length (L), gate oxide thickness (T), doping-dependent threshold voltage (V)
Parameter Variations:  3-σ value of each parameter variation: 14.1% of its mean; ratio of D2D to WID variation: 1:1
Benchmark Circuit:     ISCAS-85 circuits
Optimization Method:   Gate-sizing-based optimization
Synthesis:             Design Compiler (Synopsys Inc.)
Placement:             Astro (Synopsys Inc.)
Yield Analysis:        Monte-Carlo simulation with 10,000 samples
To verify the effect of gate-level design margin relaxation on overall circuit performance metrics, we investigated the effect of design margin reduction on timing yield using Monte-Carlo simulation and gate-level STA. We also verified the area and power benefits that can be achieved by design margin relaxation. The experimental conditions are given in Table 1.
4 Comparison of the Effect of Design Margin Relaxation on Overall Circuit Performance

4.1 Performance Metrics

If a relaxed design margin (δ·σ_Gate with δ < 3) is used for gate-level STA, the operating frequency of the benchmark circuits increases compared with that of the conventional method, which uses 3-σ_Gate as the design margin. Figure 1 shows the average operating frequency of the ISCAS-85 circuits as a function of the design margin point of each gate delay distribution. If we reduce the design margin to 1-σ_Gate, we can increase the operating frequency by almost 30%. Because the increased operating frequency can be traded for area and power optimization, as the degree of design margin relaxation increases, the area and power benefits obtained through optimization also increase. However, if the design margin is reduced all the way to 1-σ_Gate, yield loss occurs. Therefore, before performing gate-level STA based on a relaxed margin, we have to find the minimal design margin that ensures the target timing yield. To find the minimal design margin of each benchmark circuit, we performed gate-level STA several times at various target timing yields.
Fig. 1. Average operating frequency (y) of ISCAS-85 circuits vs. design margin point (x) of gate delay distribution
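The margin search described above can be sketched with a toy Monte-Carlo model (a single critical path of independent normal gate delays; all statistics are hypothetical — the paper's experiments use full ISCAS-85 netlists):

```python
import random

def timing_yield(mu, sigma, n_gates, delta, samples=10_000):
    """Fraction of Monte-Carlo 'chips' whose critical path meets the
    clock period implied by a delta-sigma per-gate design margin."""
    rng = random.Random(0)  # fixed seed for reproducibility
    period = n_gates * (mu + delta * sigma)
    ok = 0
    for _ in range(samples):
        path = sum(rng.gauss(mu, sigma) for _ in range(n_gates))
        ok += path <= period
    return ok / samples

def minimal_margin(mu, sigma, n_gates, target=0.9987, step=0.05):
    """Smallest delta (in `step` increments) whose simulated yield
    meets the target timing yield."""
    delta = 0.0
    while timing_yield(mu, sigma, n_gates, delta) < target:
        delta += step
    return delta
```

With 20 gates on the path, the minimal per-gate margin comes out well below 3-σ, mirroring the averaging effect behind Table 2.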
Table 2 shows the minimal design margin of each benchmark circuit at various target timing yields (99.87%, 99%, 90%, and 85%). As Table 2 shows, the minimal design margin of each benchmark circuit is considerably smaller than the worst-case margin; the worst-case margin is overestimated by almost 39% at the 99.87% target timing yield.
Table 2. Minimal Design Margin Point (in σ_Gate) of Benchmark Circuits vs. Target Timing Yields

Circuit   99.87%   99%     90%     85%
c432      2.16     1.72    1.02    0.86
c499      2.40     1.95    1.26    1.10
c880      2.02     1.57    0.87    0.71
c1355     2.22     1.79    1.12    0.97
c1908     2.20     1.75    1.04    0.88
c2670     2.10     1.63    0.93    0.76
c3540     2.00     1.57    0.91    0.75
c5315     2.12     1.67    0.97    0.81
c6288     2.27     1.80    1.10    0.94
c7552     2.13     1.68    0.99    0.83
AVG       2.16     1.71    1.02    0.86
To investigate the effect of design margin relaxation on circuit area, we performed area optimization by resizing gates with large slack. Figure 2 shows the area optimization result for circuit c432 as a function of the relaxed design margin. Compared with gate-level worst-case design, when the design margin decreases from the 3-σ_Gate margin to the 2.1-σ_Gate margin, the area of circuit c432 decreases by almost 46%. This 46% area reduction is meaningful: it increases the number of chips manufactured per wafer.
Fig. 2. Area optimization result (y) of c432 circuit vs. design margin point (x) of gate delay distribution; normalized by area of worst-case design
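The slack-driven optimization can be sketched greedily (a toy single-path model, not the paper's Synopsys flow; the 10%-area-for-10%-delay trade-off is a hypothetical assumption):

```python
def downsize_by_slack(gates, required_time, min_area=1.0):
    """Greedy area recovery: repeatedly shrink the gate with the largest
    area while the path still meets the required time.
    gates: dict name -> {"area": a, "delay": d}; shrinking a gate by
    10% area is assumed to add 10% delay (hypothetical trade-off)."""
    def path_delay():
        return sum(g["delay"] for g in gates.values())
    changed = True
    while changed:
        changed = False
        slack = required_time - path_delay()
        for name in sorted(gates, key=lambda n: -gates[n]["area"]):
            g = gates[name]
            delay_increase = 0.1 * g["delay"]
            if g["area"] > min_area and delay_increase <= slack:
                g["area"] *= 0.9
                g["delay"] *= 1.1
                changed = True
                break
    return sum(g["area"] for g in gates.values())

gates = {"g1": {"area": 4.0, "delay": 2.0}, "g2": {"area": 2.0, "delay": 1.0}}
final_area = downsize_by_slack(gates, required_time=4.0)
```

A relaxed margin shortens the nominal path delay, creating the slack that this kind of optimization converts into area (and, analogously, power) savings.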
In addition to the area benefit, the power of the c432 benchmark circuit designed under the various design margins must also be verified. The power optimization result for circuit c432 is shown in Figure 3; the optimization was likewise performed by resizing gates with large slack. Compared with worst-case design, when the design margin decreases from the 3-σ_Gate margin to the 1-σ_Gate margin, the power of circuit c432 decreases by almost 47%.
Fig. 3. Power optimization result (y) of c432 circuit vs. design margin point (x) of gate delay distribution; normalized by power of worst-case design
By reducing the design margin of each gate delay, we can mitigate the pessimism of conventional gate-level worst-case design. However, because a relaxed design margin is used for circuit design, if the degree of relaxation exceeds a certain limit, yield loss occurs. To quantify the yield loss, we performed 10,000 Monte-Carlo simulations. Table 3 shows the timing yields of the benchmark circuits for design margins of 1.6-σ_Gate, 2.1-σ_Gate, and 2.6-σ_Gate. If the 2.6-σ_Gate or 2.1-σ_Gate margin is used for circuit design, the circuit meets the target timing yield. In contrast, if the 1.6-σ_Gate margin is used, the circuit cannot meet the target timing yield; it induces yield loss. Therefore, to meet the target timing yield, we have to control the degree of design margin relaxation, or we have to apply additional yield enhancement techniques, such as adaptive body bias (ABB) or adaptive supply voltage scaling (ASV), to the entire circuit without causing leakage failure. In this way, we can raise the yield of the circuit to the target timing yield while keeping the area and power benefits. A systematic relaxed-design-margin calculation method is therefore required to mitigate the pessimism of conventional gate-level worst-case design.

Table 3. Timing Yield (%) of Benchmark Circuits vs. Design Margin Point (in σ_Gate)

Circuit   1.6     2.1     2.6
c432      99.36   99.98   100.00
c499      98.22   99.72   100.00
c880      99.65   99.92   100.00
c1355     98.82   99.80   100.00
c1908     99.13   99.89   100.00
c2670     99.46   99.98   100.00
c3540     99.50   99.93   100.00
c5315     99.27   99.93   100.00
c6288     99.48   99.92   100.00
c7552     99.51   99.93   100.00
AVG       99.24   99.90   100.00
5 Conclusions

Due to the continued scaling of semiconductor process technology, variations in process parameters such as gate length, oxide thickness, and threshold voltage have increased more and more. In VLSI designs, these variations cause large variation in circuit behavior characteristics such as timing, an important metric related to circuit yield. In this regime, process variation must be taken into consideration during the design process to ensure that a manufactured circuit meets its specifications.
References

1. Borkar, S., Karnik, T., Narendra, S., Tschanz, J., Keshavarzi, A., De, V.: Parameter variations and impact on circuits and microarchitecture. In: Proc. Des. Autom. Conf., pp. 338–342 (2003)
2. Nassif, S.R.: Statistical worst-case analysis for integrated circuits. In: Director, S.W., Maly, W. (eds.) Statistical Approach to VLSI. Advances in CAD for VLSI, vol. 8. North-Holland, Amsterdam (1994)
3. Nassif, S.R.: Design for variability in DSM technologies. In: Int. Symp. Quality Electron. Des., pp. 451–454 (2000)
4. Gattiker, A., Nassif, S., Dinakar, R., Long, C.: Timing yield estimation from static timing analysis. In: Proc. IEEE Int. Symp. Quality Electron. Des., San Jose, CA, March 26–28, pp. 437–442 (2001)
5. Visweswariah, C., Ravindran, K., Kalafala, K., Walker, S.G., Narayan, S.: First-order incremental block-based statistical timing analysis. In: DAC, pp. 331–336 (2004)
6. Blaauw, D., Chopra, K., Srivastava, A., Scheffer, L.: Statistical timing analysis: from basic principles to state of the art. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems 27(4) (April 2008)
7. Najm, F.N., Menezes, N., Ferzli, I.A.: A yield model for integrated circuits and its application to statistical timing analysis. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems 26(3), 574–591 (2007)
8. Balijepalli, A., Sinha, S., Cao, Y.: Compact modeling of carbon nanotube transistor for early stage process design exploration. In: ISLPED (2007)
Forensic Analysis Using Migration in Cloud Computing Environment

Gang Zhou 1, Qiang Cao 1, and Yonghao Mai 2

1 Computer Science Department, Huazhong University of Science & Technology, Wuhan, China
2 Electronic Evidence Laboratory, Hubei University of Police, Wuhan, China
[email protected]

Abstract. With the growing popularity of cloud computing applications, traditional methods of evidence collection have become outdated, and the new forensic environment calls for new ideas and means of obtaining evidence. Based on an analysis of the cloud computing environment, this paper demonstrates the new challenges of evidence collection in cloud computing application environments. After evaluating current forensic technology, it proposes a method of migrating the VM image as the key data in the cloud. We carried out experiments in a cloud computing environment and implemented a prototype system.

Keywords: Cloud Computing, Computer Forensics, Data Migration, Virtualization, Virtual Machine Image.
1 Introduction

The cloud can support low business costs while meeting the IT needs of customers, and many companies can satisfy their IT needs through the cloud. Individual users through
Fig. 1. Cloud computing functions and application structure
the cloud can likewise access a tremendous repository of large-scale shared information (see Figure 1). While cloud applications can provide enormous benefits to users, they can also be exploited for illegal activities. How to collect relevant data and evidence under this new mode of cloud computing application is a very great challenge [1,2].
2 Limitations of Traditional Forensics

Over the past several years, network crimes have increased rapidly. A variety of network monitoring tools have enhanced our means of computer forensics in routine work. But with the adoption of cloud computing, these forensic methods have become outdated, and the traditional ideas behind forensic technology have proven fundamentally inadequate [3]. The change from file-based evidence to electronic-based evidence necessitated a rapid change in routine work, but the response of computer forensics to new network technology has not been sufficient. In fact, many forensic methods differ in detail while sharing very similar basic ideas [4].
3 The Challenges of Evidence Collection in Cloud Computing

With the growing popularity of cloud applications, crime involving cloud computing applications will rise sharply [5]. Cloud services are especially difficult to control, because logs and data for multiple customers may be co-located or may be spread across an arbitrary set of locations. A principle of computer forensics is to preserve the target device as a complete image as far as possible; computer forensics reconstructs the activities leading to an event and determines the answers to "What did they do?" and "How did they do it?" The traditional forensic process performs analysis and processing in a relatively safe laboratory environment, which reduces the handling of evidence data to a "closed static analysis": the work lacks dynamic data processing, and the analysis tools are relatively simple [6].

E-discovery and live forensics are two evolving areas of digital forensics that an investigator can add to their arsenal in the fight against e-crime [7]. The typical model of e-discovery is shown in Figure 2. Electronic discovery refers to any process in which electronic data is sought, located, and secured with the intent of using it as evidence in a cybercrime case. E-discovery can be carried out offline on a particular computer, or it can be performed over a network. Live forensics is another method in the fight against cybercrime: the means and techniques of obtaining artifacts of evidential value from a machine that is running at the time of analysis. This can prove pivotal in cases where evidence is obtained from the machine's volatile Random Access Memory (RAM), for instance [8].

In recent years, general computing has traditionally seen users run duplicates of software programs on every computer they use. All of the files created by those programs are stored on the local machine that created them or on the network to which they are connected [9].
If a computer is connected to a network, the other machines connected to that network can share those files, yet computers beyond that
Forensic Analysis Using Migration in Cloud Computing Environment
419
network will have no access to the data. Cloud computing has changed this traditional model. Cloud computing is software that runs not on PCs or company servers but on computers and servers reachable over the Internet. The cloud offers private end users and companies of all sizes a huge pool of resources at remote terminals [10].
Fig. 2. The typical model of E-discovery
In cloud computing, users can access a variety of file types from any device capable of connecting to the Internet, from virtually anywhere in the world. This model will inevitably keep growing as users become more independent of their traditional desktop machines and demand portability, coupled with the ability to share all their data with whomever, whenever, and wherever they choose. With the advent of 3G, the network can offer cloud access from virtually any location in the world, enabling that mobility while maintaining a network connection to the data required for routine work. Current criminal activity focuses on hacking, phishing, pharming, DDoS attacks, viruses, trojans, spyware, and worms. Although the cloud model itself may be secure, cloud computing services are shared by many users, which makes the cloud environment unsafe in practice. Beyond the cloud service provider itself, the confidentiality of user data is an important issue. To use cloud computing services, users must place large amounts of data on the network and adopt web-based applications. Supervisors cannot guarantee the safety and effectiveness of these important data, because they sit at the periphery of the cloud service. Even if the cloud computing service itself is secure, the safety of the surrounding data cannot be guaranteed. Requiring cloud service providers to ensure the safety of all external data is impractical, and long-term protection cannot be guaranteed. Some access paths cannot be controlled with current forensic technology at all [11,12]: VPN access through an open network environment cannot be further inspected, and with the large number of new multimedia terminal devices reaching open networks over wireless links, such access can be neither detected nor controlled.
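The section's forensic principle of preserving a complete image implies that every acquired artifact (a RAM dump, a disk or VM image) must be fingerprinted at acquisition time so later tampering is detectable. A minimal sketch of this common safeguard, using chunked cryptographic hashing so arbitrarily large images fit in memory (the function name is illustrative, not from the paper):

```python
import hashlib

def acquire_hash(path: str, algorithm: str = "sha256",
                 chunk_size: int = 1 << 20) -> str:
    """Fingerprint an acquired evidence file (e.g., a RAM dump or disk image)
    by hashing it in chunks, so very large images need not be loaded fully."""
    h = hashlib.new(algorithm)
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            h.update(chunk)
    return h.hexdigest()
```

The resulting digest would typically be recorded in the chain-of-custody documentation and re-checked before each analysis session.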
420
G. Zhou, Q. Cao, and Y. Mai
4 The Way of Computer Forensics in Cloud Computing In general, the conventional computer forensics process comprises several steps: "Access", "Acquire", "Analyze", and "Report" (shown in Figure 3). The popularity of cloud computing makes investigation and evidence gathering a challenge. In addition to the aforementioned e-discovery and live forensics, we can use legislation to monitor and control business processes in cooperation with cloud service providers, and analyze system information including logs and snapshots.
Fig. 3. The computer forensics steps in cloud computing environment
Fig. 4. Cloud computing architecture diagram
Since the cloud can be seen as multiple virtual machines working together (shown in Figure 4), we focus on collecting evidence from a virtual machine. We propose treating VM (virtual machine) images as evidence in a cloud environment. VM images are unique entities in the cloud with special traits. They require high integrity, because they determine the initial states of running virtual machines, including their security states; the security and integrity of such images are the foundation of the overall security of the cloud. Many VM images are designed to be shared by different, often unrelated users. Each running virtual machine can be seen as a target device to analyze. Without affecting the normal operation of the entire platform, we can obtain virtual machine images through migration. Figure 4 shows a typical cloud structure: the platform provider offers effective account management and access control for individual users at the user interaction layer; the service presentation layer deploys services such as online storage and software as a service to end users; and the virtualization layer manages distributed computing resources through system, storage, and network virtualization, so that users need not care about host location, maintenance, fault tolerance, and so on. During migration, the platform changes its service mode, splitting the service response into two parts: one part keeps the front-end services running, while the other supports back-end synchronization of the migrating data. The migrated data includes the following sensitive data: (1) page table entries and directory entries, switched from write mode to read-only mode; (2) virtual machine control module thread priorities, switched from high to low, with reduced authority;
(3) the VMM's (virtual machine monitor's) scheduling data for the virtual machine to be migrated; (4) the VMM's hardware-device call data for the virtual machine to be migrated. The specific steps of the VM image migration process are shown in Figure 5.
Fig. 5. Implementation process of cloud migration
Implementation process of cloud migration: (1) the VMM connects to the VMM control command processor and sends the migration command; (2) the native processing thread and the native processing service are filled with the sensitive data;
(3) after the source service wakes up, the sensitive data is read from the corresponding native data; (4) the native processing service synchronizes (SYNs) the sensitive data; (5) the imaging processing service receives the SYN request and generates the corresponding service response; (6) the target response service returns the response to the source service; (7) after the source receives the SYN response with the sensitive data filled in, the native processing service wakes up the imaging service; (8) after waking up, the imaging processing service reads the sensitive data from the SYN service; (9) the native processing service sends the result to the mission control response service; (10) the native processing service synchronizes the virtual machine to be migrated with the imaging processing service and migrates it to the target; (11) when the migration completes, steps (6)-(9) are run again; (12) throughout the migration, the main VMM control service periodically checks the progress parameters of the source task and dynamically adjusts the migration rate and other parameters. After these processes, the whole system completes the data migration task, and the migrated image files can then be treated as evidence. We can reproduce the VM image in a real host environment.
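The essence of the twelve steps is a synchronization handshake: the source service sends each sensitive item, the imaging (target) service stores it and acknowledges, and migration completes only when every item has been acknowledged. A highly simplified, single-host simulation of that handshake is sketched below; the item names and the two-queue channel are illustrative assumptions, not the paper's actual protocol:

```python
import queue
import threading

# Hypothetical sensitive-data items, named after the paper's list (1)-(4).
SENSITIVE_KEYS = ["page_table_entries", "vm_thread_priority",
                  "vmm_sched_data", "vmm_device_data"]

def migrate(source_state: dict) -> dict:
    """Simulate the SYN/response handshake between the native (source)
    processing service and the imaging (target) processing service."""
    channel, acks = queue.Queue(), queue.Queue()
    target_state = {}

    def imaging_service():
        # Target side: receive each SYN, store the item, acknowledge it.
        for _ in SENSITIVE_KEYS:
            key, value = channel.get()
            target_state[key] = value
            acks.put(key)

    t = threading.Thread(target=imaging_service)
    t.start()
    for key in SENSITIVE_KEYS:      # source side: fill and SYN sensitive data
        channel.put((key, source_state[key]))
    for _ in SENSITIVE_KEYS:        # wait until the target acknowledges each item
        acks.get()
    t.join()
    return target_state             # migration complete: target mirrors source
```

A real VMM would of course move live memory and device state incrementally; the sketch only shows why the protocol needs per-item acknowledgement before the image can count as complete evidence.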
5 Access and Manage Virtual Machine Images To better access and manage virtual machine images, we propose a system that accesses VM images in read-only mode. Using a database, we combine the VM images and traditional hard disk images into a virtual electronic evidence library system.
Fig. 6. The VM image storage structure and management system
The overall storage structure is shown in Figure 6. To access a VM image, we can reload it into a virtual machine in one of two ways: loading it onto a virtual machine platform, or running it directly on real hardware. This ensures that the VM image is analyzed as evidence in read-only mode.
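Before an image from the evidence library is reloaded for analysis, its integrity should be verified and the file handle restricted to reads. A minimal sketch of that gate, assuming a SHA-256 digest was recorded when the image entered the library (the function and error message are illustrative):

```python
import hashlib

def open_image_read_only(path: str, recorded_sha256: str):
    """Verify a stored VM image against the digest recorded at acquisition
    time, then open it strictly read-only so the evidence cannot be altered."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    if h.hexdigest() != recorded_sha256:
        raise ValueError("image integrity check failed: possible tampering")
    return open(path, "rb")  # caller analyzes via this read-only handle
```

In a full system the read-only guarantee would also be enforced below the file level (e.g., by mounting a snapshot), but the digest check is the part that makes the evidence defensible.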
6 Conclusion In summary, despite the popularity of cloud computing applications, current evidence-gathering methods remain relatively backward. Many key aspects of evidence cannot be effectively controlled, and forensic tools and techniques need further development to keep up with increasingly fragmented data access methods. In future work, we need to study the model of cyber forensics further, along with the legal validity of evidence obtained through the migration process. Isolation techniques and dynamic-migration protection mechanisms can be used to preserve the legal validity of the evidence during migration.
References 1. Liao, N.D., Tian, S.F., Wang, T.H.: Network forensics based on fuzzy logic and expert system. Computer Communications 32(17), 1881–1892 (2009) 2. Khatir, M., Hejazi, S.M., Sneiders, E.: Two-dimensional evidence reliability amplification process model for digital forensics. In: Third International Annual Workshop on Digital Forensics and Incident Analysis (2008) 3. Hansen, J., Jul, E.: Self-migration of operating systems. In: Proceedings of the 11th ACM SIGOPS European Workshop. ACM, New York (2004) 4. Soltesz, S., Potzl, H., Fiuczynski, M., Bavier, A.: Container-based operating system virtualization: a scalable, high-performance alternative to hypervisors. In: Proceedings of the 2007 Conference on EuroSys, pp. 275–287. ACM Press, New York (2007) 5. Rogers, M.K., Seigfried, K.: The Future of Computer Forensics: A Needs Analysis Survey. Computers & Security 23(1), 12–16 (2004) 6. Gartner: Tough questions: Gartner tallies up seven cloud-computing security risks (2008), http://www.mbtmag.com/article/CA6578305.html 7. Menken, I.: Cloud Computing – The Complete Cornerstone Guide to Cloud Computing Best Practices. Emereo Pty Ltd, United States of America (2008) 8. Espiner, T.: Can business trust "immature" cloud computing? Not yet, warn experts (2008), http://software.silicon.com/security/0,39024655,39362814,00.htm (2009) 9. Mansfield-Devine, S.: Danger in the clouds. Network Security (12), 9–11 (2008) 10. Armbrust, M., Fox, A., Griffith, R., Joseph, A.D., et al.: Above the clouds: A Berkeley view of cloud computing. Technical Report UCB/EECS-2009-28 (2009) 11. Garfinkel, T., Rosenblum, M.: When virtual is harder than real: Security challenges in virtual machine based computing environments. In: Tenth Workshop on Hot Topics in Operating Systems (2005)
Research on Constitution and Application of Digital Learning Resources of Wu Culture Minli Dai, Caiyan Wu, Hongli Li, Min Wang, and Caidong Gu Department of Computer Engineering, Suzhou Vocational University, Suzhou, China [email protected]
Abstract. As a splendid traditional culture, Wu culture plays a very important role in accelerating the construction of school culture and the training of human resources. With the increasing demand from teachers and students for Wu culture resources in research and teaching, it has become imperative and important to build digital learning resources of Wu culture. Digital media technology and Internet technology are applied to digitize, organize, and develop related Wu culture resources, which are then put into teaching and research activities. The construction of digital learning resources of Wu culture can effectively exhibit the essence of Wu culture, better meet the teaching and research demands of teachers and students, and keep close pace with cultural heritage digitization at home and abroad. Keywords: Wu culture, digital, learning resource, national culture.
1 Introduction Human resource training in colleges today focuses mostly on local economic development. Against this background, more and more colleges open humanities courses that integrate the splendid local traditional culture. In Suzhou, Wu culture education for students is of great significance in strengthening students' own humanistic training, better serving local economic development, and efficiently inheriting and developing Wu culture. In practical teaching and research, however, the directly and effectively available Wu culture resources are very limited. The main reasons are as follows: the resources are scattered, the types of resources are not rich, the forms of the resources are monotonous, the fields of application are not well defined, and the intended audiences are unclear. Once the related Wu culture resources are digitized, organized, and developed, they can be put into teaching and research activities, which shows that the construction of digital Wu culture resources has very important significance.
2 Related Digital Technology In the field of digital learning resource construction, three main technologies are involved: digital resource creation technology, digital resource management technology, and digital resource service technology. M. Zhu (Ed.): ICCIC 2011, Part VI, CCIS 236, pp. 424–429, 2011. © Springer-Verlag Berlin Heidelberg 2011
Research on Constitution and Application of Digital Learning Resources of Wu Culture
425
Digital resource creation technology. This technology is mainly used to create all kinds of digital learning content, such as digital documents, digital video and audio, animations, courseware, and learning support software. The technologies currently applied to digitize learning resources include word processing, graphics and image processing, video and audio processing, Internet technology, virtual reality technology, and so on. Digital resource management technology. This technology provides effective management of all kinds of digital learning content. For example, a database is used to add and delete learning content and to set access permissions. The main technologies involved are resource storage, resource transmission, resource copyright protection, resource classification, etc. Digital resource service technology. This technology mainly provides users with learning resources and a learning environment, such as creating personalized learning environments for users, offering convenient search and resource delivery services, and presenting learning resources in the user's preferred form. The main technologies involved are user management, resource registration, resource publishing, resource retrieval, resource delivery, resource presentation, resource reuse, resource evaluation, and so on.
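The database role described above, adding and deleting learning content and setting access permissions, can be sketched with a tiny in-memory store; the table and column names here are illustrative assumptions, not the paper's actual schema:

```python
import sqlite3

def make_store() -> sqlite3.Connection:
    """Minimal resource store; 'permission' plays the access-right role."""
    db = sqlite3.connect(":memory:")
    db.execute(
        "CREATE TABLE resource ("
        " id INTEGER PRIMARY KEY,"
        " title TEXT NOT NULL,"
        " media TEXT NOT NULL,"       # e.g., text / image / audio / video
        " permission TEXT NOT NULL)"  # e.g., 'public' or 'restricted'
    )
    return db

def add_resource(db, title, media, permission="public"):
    db.execute("INSERT INTO resource (title, media, permission) VALUES (?, ?, ?)",
               (title, media, permission))

def delete_resource(db, title):
    db.execute("DELETE FROM resource WHERE title = ?", (title,))

def visible_to_public(db):
    rows = db.execute("SELECT title FROM resource WHERE permission = 'public'")
    return [r[0] for r in rows]
```

A production system would add the copyright-protection and classification layers the text mentions, but even this skeleton shows how permission settings separate what different users may see.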
3 The Constitution of Digital Learning Resource of Wu Culture The Significance of Building Learning Resources of Wu Culture. Wu culture is a regional culture and an important component of Chinese culture. The research, protection, development, and utilization of this splendid culture draw more and more attention from the government and from scholars. At present, many schools have opened related cultural courses, Suzhou Vocational University among them. The school has offered a Wu culture course for majors such as tourism management and human resource management since 2000. After years of effort, related textbooks have been compiled and published, corresponding multimedia courseware and an online testing system have been developed, the Wu Culture Corner, a distinctive Wu culture exhibition center, has been built, and the teaching achievements have won many awards [1]. However, with the rapid development of teaching and research, the existing Wu culture resources can no longer meet the demands of teachers and students. On the one hand, the course was first opened as a cultural literacy course for college students, which limits its beneficiaries; on the other hand, because of the limits of technical means and cost, the degree of digitization of the resources is low. The existing digital resources are mainly multimedia courseware and an online testing system, while many excellent resources remain in archives and exhibition halls, never sufficiently used. For example, the Wu Culture Corner combines cultural exhibition, research exchange, course teaching, and student cultural activities, but owing to the limits of space and time, the Wu culture content it can exhibit is very limited, and the number of teachers and students it can receive is severely constrained. Therefore, the
426
M. Dai et al.
launch of the research and development of digital learning resources of Wu culture is of great significance for enriching course teaching, improving individualized learning, meeting research demands, exhibiting the essence of Wu culture, and so on. The Classification of the Learning Resources of Wu Culture. Wu culture is rich and deep, and its contents are very wide, with a long time span and a vast regional span. Given the rich content, complex resource types, and many user groups involved in learning and researching Wu culture, one of the key tasks in building digital learning resources of Wu culture is how to organize the resources organically and classify them reasonably for use in classroom and non-classroom teaching. Here we classify the digital resources of Wu culture by four factors: manifestation, resource type, resource format, and target audience [2]. The manifestation of Wu culture. In the field of material culture, Wu culture is mainly embodied in water conservancy, rice planting, silk, residential architecture, and gardening; in the field of intangible culture, its contents are extremely rich, such as education, traditional medicine, literature, arts, sculpture, Kunqu opera, Pingtan, folk handicrafts, etc. The resource types of Wu culture. Following the classification standard for educational resources in CELTS-41, the resource types of Wu culture can be divided into multimedia materials, literature documents, multimedia courseware, teaching application cases, teaching design schemes, exam questions, test papers, resource catalog indexes, online courses, information-based learning support software, and so on. The resource formats of Wu culture. Following the media-format classification in CELTS-42, digital learning resources of Wu culture can be divided into text, graphics and images, audio files, video files, and Internet applications [3].
The target audiences of Wu culture resources. For different groups of people, digital resources of different fields and depths are provided, which improves the efficiency of resource retrieval and delivery. Here the target audience of the digital learning resources of Wu culture is divided into students, general public users, and professional researchers. Students can be subdivided into preschool, elementary school, junior middle school, senior middle school, junior college, undergraduate, master's, and doctoral levels; general public users can be divided into workers, farmers, teachers, doctors, and so on; and professional researchers can be divided into experts in ancient architecture, gardening, archaeology, literature, arts, traditional opera, etc. On this basis, the classification can be expanded according to the practical situation. The Content and Method of Building the Learning Resources of Wu Culture. The creation of basic digital resources. According to the demands of teaching, research, and exhibition, the creation of the basic digital learning resources of Wu culture is based on the classification above. The content mainly comprises multimedia materials, multimedia courseware, teaching application cases,
literature documents and indexes, exam questions with reference answers, online courses, and an interactive virtual exhibition system. Existing book literature, art works, ancient buildings, operas, and handicraft skills are turned into multimedia materials in all kinds of formats, such as text, images, audio, and video. Different manifestations of Wu culture can be turned into different formats: paintings, for example, can be turned into image files through digital photography, while operas are recorded as digital audio files. For some special works, taking special art works of the Wu region as an example, beyond ordinary image digitization, more complex information may need to be collected, such as 3D shape data, texture data, etc. According to teaching needs, the basic multimedia materials are used to develop standalone and web-based online courseware, and corresponding teaching cases are designed. Content management of the learning resources. Database technology is applied to build an item-information management platform. The information about all the basic digital resources created is stored in the database, which makes it convenient to aggregate and manage all kinds of learning resources. Internet presentation of the learning resources. To obtain digital learning information, users need an interactive interface. At present, most learning resource service systems use a portal website as such an interface, and the digital learning resources of Wu culture built here are one example. According to the audience classification, three shortcut entrances are provided, aimed respectively at students, general public users, and professional researchers. Users can quickly choose the resources they need according to their own situation.
This function is implemented by selectively presenting the needed information to users according to the resource classification of Wu culture. For example, after entering the student portal, the information presented is mainly online courseware, teaching application cases, related testing information, etc.; after entering the general public portal, the contents are mainly shown through video, audio, animation, and virtual reality interfaces, so that one can easily obtain related cultural information and, through the virtual reality interface, gain an interactive experience by rambling through scenes with virtual avatars; and after entering the professional researcher portal, the information presented consists of literature resources and professional information convenient for researchers.
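The portal behavior just described, each entrance showing only the resource types suited to its audience, amounts to filtering the resource catalog by a portal-to-type mapping. A minimal sketch, where the mapping and resource records are illustrative assumptions:

```python
# Illustrative mapping from each portal to the resource types the text says
# that portal presents; the exact type names are assumptions for the sketch.
PORTAL_TYPES = {
    "student": {"online courseware", "teaching case", "test"},
    "public": {"video", "audio", "animation", "virtual reality"},
    "researcher": {"literature", "professional information"},
}

def present(resources: list[dict], portal: str) -> list[dict]:
    """Return only the resources whose type matches the chosen portal."""
    allowed = PORTAL_TYPES[portal]
    return [r for r in resources if r["type"] in allowed]
```

In the real portal the same selection would be a database query keyed on the audience classification, but the filtering principle is identical.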
4 The Typical Application of Digital Learning Resource of Wu Culture The digital learning resources of Wu culture are well applied in teaching, research, exhibition, and so on. Take the application in teaching as an example. When the Wu culture course is taught as a basic cultural course in majors such as tourism, these learning resources can effectively expand the learning content and enrich its breadth and depth. Moreover, for professional courses in majors such as tour guiding, they make individualized teaching possible, improving the teaching
efficiency and quality. Since guide training is influenced and limited by many factors such as cost, time, region, climate, and safety, students rarely have the chance to visit every tourist site for practical learning. In this situation, the virtual presentation platform can be used for practical training, which can to a great extent break through the limits of time and space, largely improve teaching efficiency, reduce student management risks, and cut down on-site teaching expenses. The greatest problem the Wu culture course faces, as a general cultural elective open to all students, is resource shortage caused by too many enrollees. With this platform, it is possible to provide teaching resources for thousands of students at the same time, together with a browsing and interactive communication environment, making teaching both more informative and more engaging. For students from multimedia and related majors, it is a platform for creating and practicing their own works. Obtaining splendid Wu culture information from the resource platform as creative material can not only upgrade one's professional skills but also help one learn related knowledge of Wu culture, no doubt a good way of learning and practicing.
5 Conclusion This article discussed the classification methods, the contents, and the construction method of the digital learning resources of Wu culture, and introduced typical cases of applying these resources in teaching and research activities. The construction of digital learning resources of Wu culture can relieve the growing conflict between the limited Wu culture teaching resources and the teaching demands, allowing more people to enjoy the splendid resources of Wu culture and take part in its construction and transmission. Since the construction of digital learning resources is a complex systematic project, many more topics need further research; digital resource service technology, in particular, will be an important research direction hereafter. Acknowledgements. This research was supported by the Innovation Project of Suzhou Vocational University, the Opening Project of the Jiangsu Province Support Software Engineering R&D Center for Modern Information Technology Application in Enterprise (No. SX200901), and the School Youth Fund (2010SZDQ14).
References 1. Wu, E.: Wu Culture Courses and Their Teaching Material Construction. Journal of Suzhou College of Education (4) (2008) 2. Liu, Q., Liu, M., Xie, Y., Li, H., Hu, M.: Research and Application on Classification System of Educational Resources for Towns and Villages. E-Education Research (12) (2010) 3. Wu, D., Zhao, S., Yang, X., Zhang, Y.: A Study on E-Learning Resource Cataloging and Coding Strategy. Distance Education in China (1) (2009) 4. Dai, M., Zhou, D., Shi, B., Wang, M., Zhang, L., Gu, C.: Research on Virtual Tourist Guide Training System Based on Virtual Reality Technology. In: Proceedings of KAM, pp. 155–158 (2010)
5. Wu, E.: The Historical Background and Value of the Emergence of the Suzhou City Wall. Journal of Suzhou College of Education (2) (2009) 6. Tan, G., Zhong, Z.: The application of storytelling in digital museums. In: Proceedings of the 2009 IEEE 10th International Conference on Computer-Aided Industrial Design and Conceptual Design: E-Business, Creative Design, Manufacturing - CAID and CD 2009, pp. 1638–1641 (2009) 7. Tan, G., Zhong, Z.: Design and implementation of an intelligent character model in virtual environment. In: 2009 International Conference on Web Information Systems and Mining, WISM 2009, pp. 418–422 (2009)
Research on Digital Guide Training Platform Designing Minli Dai, Caidong Gu, Jinxiang Li, Fengqiu Tian, Defu Zhou, and Ligang Fang Department of Computer Engineering, Suzhou Vocational University, Suzhou, China [email protected]
Abstract. To better meet the demand for tour guide talent in the new age, a digital guide training platform is designed. The existing problems and shortcomings of traditional simulated guide training systems are expounded, the procedures and demands of guide training are analyzed, and the method of building the digital guide training platform is studied. This system helps students quickly master knowledge and improve their guiding ability, and it can also efficiently cut training expenses, reduce management risks, and greatly improve teaching effect and efficiency. Keywords: digital, guide training platform, micro-teaching, virtual reality.
1 Introduction With the rapid development of tourism, society's demand for highly qualified guides grows larger and larger. For vocational schools and colleges, building an advanced guide training system is an important part of training highly qualified guides. At present, tourism and related majors in most colleges have established training systems, but many urgent problems remain, owing to limits in management concepts, expense, and technical means. First, the teaching method is monotonous and the effect is not ideal: teaching is still based on textbooks, aided by multimedia courseware, with tourist scenes and environments exhibited through words, pictures, scenery, models, sand tables, and so on. Second, the teaching environments and facilities are backward, the teaching contents do not match the needs of real working positions, and the teaching resources are not rich enough. Students have great difficulty experiencing real working environments, and the knowledge and skills they learn cannot be sufficiently and efficiently practiced, which hurts the training effect. Using Internet technology and digital media technology to build a digital guide training platform can provide support for learning contents, learning methods, learning environments, and evaluation methods, solve the existing problems of traditional guide training, and improve its teaching effect and efficiency.
2 Related Technology The construction of the digital guide training platform involves many technologies, such as word processing technology, graphics and image processing technology, video and audio M. Zhu (Ed.): ICCIC 2011, Part VI, CCIS 236, pp. 430–435, 2011. © Springer-Verlag Berlin Heidelberg 2011
Research on Digital Guide Training Platform Designing
431
processing technology, Internet technology, database technology, virtual reality technology, etc. Here we focus only on the technologies for content exhibition and display in the training platform. Panorama Browse Technology [1]. Panorama browsing is the technique of obtaining a 3D stereoscopic impression through a panorama browser when observing a panoramic image. With a panorama browser or related plug-in components, the keyboard and mouse can control the browsing of scenes, enlarging, shrinking, and rotating the images. For viewers, panorama browsing gives a strong sense of presence; for developers, it offers a short creation cycle and low cost; and in Internet applications, it downloads quickly because the data volume is small. Panorama Ramble Technology [1]. Single-viewpoint panorama browsing cannot support roaming through scenes. Panorama rambling was developed to solve this problem: it links many single-viewpoint panoramas to make smooth transitions between viewpoints and spaces, improving on single-viewpoint browsing. This gives a better browsing effect and, to a certain degree, increases the sense of presence in multi-viewpoint scene roaming. Web 3D Technology [2,3]. At present there are many software tools for virtual reality development; familiar ones are Vega Prime, Virtools, Quest3D, VRP, etc. Vega Prime is currently one of the most widely used tools for 3D scene simulation, achieving a good balance between real-time performance and the vividness of large simulated areas. Virtools provides a visual graphical operating interface, so that developers can build many interactive programs with its built-in behavior modules. Quest3D is a virtual interactive development platform for both art designers and advanced programmers, with better rendering than Vega. VRP is a domestic Chinese 3D virtual reality platform.
It is designed so that art designers can create beautiful virtual scenes directly, without the participation of programmers, and it provides an SDK for advanced development. Digital Content Recording Technology. A recording and broadcasting system can make synchronized recordings of the computer screen, video signal, and audio signal, and create files in standard formats. While a teaching activity is being recorded, other users can watch a live broadcast of the teaching online. With cameras recording video of the learners, timely feedback and evaluation of the learners' activities are possible, as well as learner self-evaluation, peer evaluation among students, and comments from teachers. Since every micro-teaching classroom is connected through the network, teachers can remotely monitor and record a student's whole training process from the console, and create network resources that can be browsed, uploaded, downloaded, and so on.
3 Analysis of Application Demand
According to the different teaching content, the practical teaching activities of guide training can be divided into different phases: cognition practice teaching,
432
M. Dai et al.
classroom simulation teaching, practical course teaching, and field practice teaching [4]. Teaching activity follows the principle that theory and practice, knowledge and skill, keep the same pace.

Cognition Practice Teaching. In guide work, related theoretical knowledge is requisite. For knowledge of history, architecture, civilization, and geography, professional courses should be opened. Besides, for the theory courses involved in tourist guiding, not only should the related training be given, but a strengthened training environment and a simulated testing system should also be provided.

Classroom Simulation Teaching. After studying the related knowledge of tourist guiding, classroom simulation demonstrations are expected. Usually, teachers first narrate in class the background knowledge and related information of the scenes; then teachers give example guide explanations, showing students the guiding process; thirdly, some students, following the teachers' example, give imitating explanations on the platform; and finally, teachers comment on the students' performance.

Practical Courses Teaching. This phase provides strengthened practical training in tourist guide explanation, and it is mainly done in the micro-classroom. On the basis of the related theoretical knowledge having been mastered and the basic guiding process understood, strengthened practical training repeats exercises focused on the explanation content, explanation manner, and appearance during the guiding process.

Field Practice Teaching. Usually, teaching in this phase is done in real scenes, and it is a whole, entirely real process of tourist guiding. In practice, the students of a class go out for a tour as a group, taking turns playing the roles of national guides, local guides, and tour leaders, experiencing the guiding process themselves. Where possible, students can go to a travel agency for group-following and group-leading study.
This is the last key learning phase before students step into their real jobs.
4 Practical Training Platform Design
Whole Frame Design. According to the different application needs, the practical training platform is designed to fulfill the needs of teaching. The whole digital practical training platform contains five subsystems: the information resource subsystem, the virtual simulation subsystem, the micro-teaching subsystem, the learning activity management subsystem, and the maintenance management subsystem.

Information resource subsystem. The information resource subsystem is the base of the whole practical training platform, as all the information and knowledge used in practical training is stored in it. The related information mainly takes the forms of texts, video and audio, multimedia courseware, tests, scene panorama rambling, and scene 3D resources; besides these, resources created dynamically in teaching activities (such as students' exercises) can also become resources. In this subsystem, all sorts of resources are classified for management, and resources can be added, deleted, and revised.
Research on Digital Guide Training Platform Designing
433
[Figure: block diagram of the digital guide training platform and its five subsystems —
Information resource subsystem: resource classification, resource entry, resource revision, resource deletion;
Virtual simulation subsystem: resource browsing, real-time recording and playback;
Micro-teaching subsystem: real-time recording, resource browsing, recording upload, training process monitoring;
Learning activity management subsystem: learning process records, online tests and exams, exam question management, score management;
Maintenance management subsystem: user management, content checking.]

Fig. 1. The whole framework of the digital practical training platform
Virtual simulation subsystem. The virtual simulation subsystem mainly serves cognition teaching and classroom simulation teaching; some practical training content of the field practice teaching phase can also be carried out in it. Real training in real scenes is no doubt best; however, organizing students to go out is influenced and limited by many factors, such as cost, time, space, climate, and security. The virtual simulation subsystem can solve this problem to a great extent. It mainly uses a multi-channel screen projection system to simulate real scenes. With the help of scene panorama rambling resources and scene Web 3D resources, students can observe classical guide routes and learn classical guide explanations, and they can interactively control the scenes, making the whole training process close to reality. What is more, the whole training process can be recorded by camera and uploaded to the training platform resource base. The features of this subsystem are that the content is nonlinear, the scene can be observed from more than one angle, and the scene can be controlled, which provides many alternatives.

Micro-teaching subsystem. The micro-teaching subsystem mainly serves students' strengthened training. In traditional guide strengthened training, the common way is to mute the voice and hide the subtitles of recordings and have students practice dubbing. The sound is
recorded for checking and evaluation. If students' appearance and posture are also to be considered in strengthened training, that method makes no sense. Considering this problem, the micro-teaching subsystem was designed. Every computer in the micro-teaching subsystem is equipped with a camera and a microphone, and it can use the related resources in the information resource subsystem through the network at any time. In strengthened training, students can not only access all sorts of study resources but also observe their own real-time pictures captured by the cameras. Students can record their whole training process themselves (including the computer screen, training video, and sound), generating an individual resource for inspection and evaluation. Teachers can monitor the training pictures of all students through the network, and record and upload students' training processes.

Learning activity management subsystem. The learning activity management subsystem contains functions such as learning process recording, exam question management, online testing, and score management. Using this subsystem, students' log-in and study information can be recorded; besides, work such as creating examination papers, manual grading, and score inquiry can be done.

Maintenance management subsystem. The maintenance management subsystem mainly contains the functions of user management and content checking. Here the data of students and teachers can be managed, and the data of new users can be checked.

Role Design of the System. The system mainly designs three roles: teacher, student, and manager. All roles can browse all kinds of teaching resources after logging in. The permissions of teachers include exam question uploading, revising, and deleting; examination paper creating and checking; real-time recording of virtual simulation training; uploading of recorded resources; micro-teaching process monitoring; student learning record counting; etc.
After logging in, students can record their own training process in real time and upload the recorded resources in the micro-teaching subsystem; they can use the exam question database for online self-testing and online exams, and query their scores. Managers mainly manage system users; check the stock of uploaded exam questions, the content of the questions, and the real-time recorded resources; and sort, enter, revise, and delete basic resources.
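The three-role permission design described above can be sketched as a simple lookup table. The role and permission names below are illustrative assumptions for demonstration, not identifiers taken from the platform itself.

```python
# Illustrative sketch of the teacher/student/manager permission design.
# All names here are assumptions, not the platform's actual identifiers.
ROLE_PERMISSIONS = {
    "teacher": {
        "browse_resources", "manage_exam_questions", "create_exam_paper",
        "record_training", "upload_recordings", "monitor_micro_teaching",
        "view_learning_records",
    },
    "student": {
        "browse_resources", "record_training", "upload_recordings",
        "self_test_online", "exam_online", "query_scores",
    },
    "manager": {
        "browse_resources", "manage_users", "check_content",
        "sort_resources", "revise_resources", "delete_resources",
    },
}

def has_permission(role: str, action: str) -> bool:
    """Return True if the given role is allowed to perform the action."""
    return action in ROLE_PERMISSIONS.get(role, set())
```

Note that every role includes `browse_resources`, mirroring the rule that all roles can browse teaching resources after logging in.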
5 Conclusion

According to the different needs of guide training teaching, a digital guide training platform has been designed. The platform helps students master knowledge and improve their guiding ability, and it achieves much in cutting training costs, reducing management risk, and increasing teaching effectiveness and efficiency. The digital guide training platform is a complex systemic job. Although the basic plan of the platform design is provided here, much concrete work remains to be done. Much more research and development is needed to solve problems such as how to quickly create abundant learning resources, the classification standards and search techniques for learning resources, and how to manage study activities.
Acknowledgements. This research was supported by the Innovation Project of Suzhou Vocational University, the Opening Project of the JiangSu Province Support Software Engineering R&D Center for Modern Information Technology Application in Enterprise (No. SX200901), and the School Youth Fund (2010SZDQ14).
References

1. Dai, M., Tan, G., Zhou, D., Wang, M.: Research and Applications on the Digital Exhibition of the Lacquer Chest of the Tomb of Marquis Yi of the Zeng State. In: Proceedings of CNMT, pp. 1014–1017 (2009)
2. Dai, M., Zhou, D., Shi, B., Wang, M., Zhang, L., Gu, C.: Research on Virtual Tourist Guide Training System Based on Virtual Reality Technology. In: Proceedings of KAM, pp. 155–158 (2010)
3. Zhou, M., Tan, G., Zhong, Z., Hu, F.: Design and implementation of a virtual indoor roaming system based on Web3D. In: Proceedings of the 1st International Workshop on Education Technology and Computer Science, ETCS 2009, vol. 3, pp. 985–988 (2009)
4. Wang, J.: Constructing Tourist Practice Teaching System in Colleges. Journal of Jiaxing College 17(S1) (October 2005)
5. Tan, G., Zhong, Z.: The application of storytelling in digital museums. In: Proceedings of the 2009 IEEE 10th International Conference on Computer-Aided Industrial Design and Conceptual Design: E-Business, Creative Design, Manufacturing, CAID and CD 2009, pp. 1638–1641 (2009)
6. Tan, G., Zhong, Z.: Design and implementation of an intelligent character model in virtual environment. In: 2009 International Conference on Web Information Systems and Mining, WISM 2009, pp. 418–422 (2009)
A Hypothesis Testing Using the Total Time on Test from Censored Data as Test Statistic Shih-Chuan Cheng Department of Mathematics Creighton University Omaha, Nebraska 68178-2090, USA [email protected]
Abstract. The total time on test for censored data from an exponentially distributed population has been proved to be a prediction sufficient (or adequate) statistic by Nair and Cheng [10] in light of the works of Cheng and Mordeson [4] and others [1, 9, 12-14]. The test may be repeated many times. As a result, since the total time on test until the r-th ordered failure time is recorded for each test, several total times on test (for censored data) for the same system are available for analyzing the reliability of the system. The main objective of this proposal is to create a statistical test for the parameter of the exponential distribution on the basis of the total time on test from censored data.

Keywords: Sufficient statistics, Conditional independence, Prediction sufficient statistics, Factorization criterion, Order statistics, Censored data, Total time on test.
1 Introduction

A basic problem of statistics is that of predicting a future (that is, not yet observed) random variable Y on the basis of an observable random variable X, where θ is a parameter that does not concern us directly. The notion of prediction sufficient (or adequate) statistics, initiated by Fisher [7] and Skibinsky [11], deals with this concern. Subsequently, it has been extensively investigated in the literature, for example [1, 2, 6, 9-11]. In this article, we make a comprehensive study of a prediction sufficient (or adequate) statistic, the total time on test, when the data are assumed to be exponentially distributed and censored after the r-th failure. In reliability theory, all components of a system with several identical components may be put on test until the r-th smallest failure time occurs, and the total time on test is then calculated. The total time on test from exponentially distributed censored data has been proved to be a prediction sufficient (or adequate) statistic by Nair and Cheng [10] in light of the works of Cheng and Mordeson [4] and other literature such as [1, 9, 12-14].

Consider an observable random variable X and a not yet observed random variable Y. Throughout, we let P = { P_θ : θ ∈ Θ } denote the family of joint probability functions for (X, Y), that is, a family of probability distributions (measures); and let P_X = { f(x; θ) : θ ∈ Θ } denote the family of marginal probability distributions for X. Also, let T = T(X) be a measurable function of X.

M. Zhu (Ed.): ICCIC 2011, Part VI, CCIS 236, pp. 436–444, 2011. © Springer-Verlag Berlin Heidelberg 2011
2 Conditional Independence

Two random variables X and Y are said to be independent if the joint probability distribution of X and Y is equal to the product of the respective marginal probability distributions of X and Y. We write X ⊥ Y | θ to indicate that X and Y are independent with respect to the probability distribution P_θ. Similarly, two random variables X and Y are said to be conditionally independent given a third random variable T if the joint conditional probability distribution of X and Y given T is equal to the product of the respective marginal conditional probability distributions of X and Y given T. By writing X ⊥ Y | T, θ, we mean that X and Y are conditionally independent given T with respect to the probability distribution P_θ. We now present a few theorems concerning the notion of conditional independence; they can be found in the literature.

Theorem 1. If X ⊥ Y | T, θ and W = W(X), then X ⊥ Y | W, θ.

Theorem 2. The following assertions are equivalent.
(1) X ⊥ Y | T, θ.
(2) X ⊥ (Y, T) | T, θ.
(3) (X, T) ⊥ (Y, T) | T, θ.

Theorem 3. X ⊥ Y | T, θ if and only if, for any W = W(X), E(W | Y, T) = E(W | T).

Theorem 4. Let T = T(X) and W = W(T). If T ⊥ Y | W, θ and X ⊥ Y | T, θ, then X ⊥ Y | W, θ.

Theorem 5. If X ⊥ Y | W, θ, T = T(X) and W = W(T), then X ⊥ Y | T, θ.
3 Sufficient Statistics

In practice, X could denote a random sample X_1, X_2, ⋯, X_n of size n from a population, and the statistic T = T(X) is a measurable map of the observable X. By virtue of Fisher [7], a statistic T = T(X) is said to be a sufficient statistic for the parameter θ if the conditional probability distribution of X given T = T(X) does not depend on θ. Let P_X denote the family of marginal distributions for the observable X. We write T suff (X; P_X) (to be read as "T is a sufficient statistic with respect to P_X"), which denotes the fact that T = T(X) is a sufficient statistic for θ.
We let x = (x_1, x_2, ⋯, x_n) be an observed value of the random sample X = (X_1, X_2, ⋯, X_n) of size n. The likelihood, L(x_1, x_2, ⋯, x_n; θ) or simply L(θ), of the sample is defined to be the joint probability function evaluated at x_1, x_2, ⋯, x_n, that is,

L(θ) = L(x_1, x_2, ⋯, x_n; θ) = ∏_{i=1}^{n} f(x_i; θ).
A sufficient statistic may be identified in view of the following famous factorization criterion for sufficient statistics.

Theorem 6 [8]. The statistic T is a sufficient statistic for θ if and only if the likelihood of the sample, L, can be factored into two nonnegative measurable functions,

L(x_1, x_2, ⋯, x_n; θ) = g(t, θ) h(x_1, x_2, ⋯, x_n),

where g(t, θ) is a function only of t and θ, and h(x_1, x_2, ⋯, x_n) is a function not involving θ.
4 Prediction Sufficient Statistics

In the colorful language of R. A. Fisher [7] we may loosely define the notion of prediction sufficiency (or adequacy) in the following terms: the statistic T = T(X) is said to be prediction sufficient (or adequate) if T exhausts and summarizes in itself all the relevant information about Y that is contained in X. In 1967, Skibinsky [11] proposed the following mathematical framework for prediction sufficient (or adequate) statistics.

Definition 7. A statistic T is said to be prediction sufficient (or adequate) for X with respect to (Y, P) if
(1) T suff (X; P_X), and
(2) X ⊥ Y | T, θ for all θ ∈ Θ.

Following Skibinsky [11], we write T p-suff (X; Y, P) [to be read as "T is prediction sufficient (or adequate) for X with respect to (Y, P)"] as a mathematical shorthand for the proposition "T adequately summarizes (and exhausts) all information in X about Y with respect to the model P". This is in conformity with our use of the notation T suff (X; P_X) when we restrict our attention to the model P_X. The intuitive content of the above definition appears to be embedded in the following two propositions:
(1) Given T = T(X), no further details about the observable X can yield any additional information about θ. In other words, for making an inference on θ, the statistician needs to record only the T-value on X.
(2) If the inferred value of θ is the one that actually obtains, then the conditional distribution of Y given X depends on X only through T. In other words, if the
statistician proposes to predict Y by first predicting (or estimating) θ, then the T-value on X adequately summarizes (and exhausts) all the information in X about Y with respect to the model P.
5 Total Time on Test for Censored Data

We now investigate the result for the total time on test from censored data. Let X_1, X_2, ⋯, X_n denote a random sample of size n from a population with probability density function from an exponential family, and let X_(1) ≤ X_(2) ≤ ⋯ ≤ X_(n) denote the order statistics of X_1, X_2, ⋯, X_n. Let 1 ≤ r ≤ m ≤ n. Consider the problem of predicting a future X_(m) after observing X_(1), X_(2), ⋯, X_(r). The total time on test is given by

T = Σ_{i=1}^{r} X_(i) + (n − r) X_(r).
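As a minimal sketch (the function name is ours, not the paper's), T can be computed directly from the first r order statistics of a sample of n units:

```python
def total_time_on_test(ordered_failures, n):
    """Total time on test T = sum_{i=1}^{r} X_(i) + (n - r) * X_(r),
    where ordered_failures holds the first r order statistics of a
    sample of n units censored after the r-th failure."""
    r = len(ordered_failures)
    if not 1 <= r <= n:
        raise ValueError("need 1 <= r <= n")
    # Each of the n - r censored units survived at least until X_(r),
    # so it contributes X_(r) of time on test.
    return sum(ordered_failures) + (n - r) * ordered_failures[-1]
```

For example, with n = 5 units censored after the third failure at times 1, 2, 3, the statistic is 1 + 2 + 3 + 2·3 = 12.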
It has been proved in Nair and Cheng [10] that the total time on test from exponentially distributed censored data is a prediction sufficient (or adequate) statistic for a future failure time X_(m). That is,

Theorem 8. Let X_(1), X_(2), ⋯, X_(r) denote the order statistics of a random sample X_1, X_2, ⋯, X_n of size n from a population with probability density function from an exponential family:

{ f(x; θ) = B(θ) H(x) exp[Σ_{j=1}^{k} Q_j(θ) R_j(x)] : θ ∈ Θ }.

Let 1 ≤ r ≤ m ≤ n. Consider the problem of predicting a future X_(m) after observing X_(1), X_(2), ⋯, X_(r). Let P denote the family of joint probability distributions of the observable X = (X_(1), X_(2), ⋯, X_(r)) and the not yet observed X_(m). Then

(Σ_{i=1}^{r} X_(i), X_(r)) p-suff (X; X_(m), P).

That is, (Σ_{i=1}^{r} X_(i), X_(r)) is a prediction sufficient (or adequate) statistic for predicting X_(m), 1 ≤ r ≤ m ≤ n, on the basis of the observable X_(1), X_(2), ⋯, X_(r).

5.1 Likelihood Ratio Test
Assume that X_1, X_2, ⋯, X_n denotes a random sample of size n from an exponential distribution with mean θ, that is,

f(x; θ) = (1/θ) e^(−x/θ) for x > 0,

with the order statistics X_(1), X_(2), ⋯, X_(r). The survivor function is given by

S(x; θ) = e^(−x/θ) for x > 0.

Consider the problem of testing H_0: θ = θ_0 against H_a: θ ≠ θ_0 on the basis of the observable X_(1), X_(2), ⋯, X_(r), censored after X_(r).
In view of Theorem 8, the total time on test T = Σ_{i=1}^{r} X_(i) + (n − r) X_(r) is a function of the prediction sufficient (or adequate) statistic (Σ_{i=1}^{r} X_(i), X_(r)) for predicting X_(m), where 1 ≤ r ≤ m ≤ n.
Let U be the subset of {1, 2, ⋯, n} corresponding to the observations that are uncensored, and C be the subset of {1, 2, ⋯, n} corresponding to the observations that are right-censored. The likelihood function of the sample in this case is defined by

L(θ) = ∏_{i∈U} f(x_i; θ) · ∏_{i∈C} S(x_i; θ).
The first product can be written as the joint density of X_(1), X_(2), ⋯, X_(r), given by

∏_{i∈U} f(x_i; θ) = (r!) f(x_(1); θ) f(x_(2); θ) ⋯ f(x_(r); θ) = (r!/θ^r) exp(−(1/θ) Σ_{i=1}^{r} x_(i))

for x_(1) ≤ x_(2) ≤ ⋯ ≤ x_(r), and the second product can be written as

∏_{i∈C} S(x_i; θ) = exp[−(1/θ)(n − r) x_(r)].

Then

L(θ) = (r!/θ^r) exp(−(1/θ) Σ_{i=1}^{r} x_(i)) · exp[−(1/θ)(n − r) x_(r)]
     = (r!/θ^r) exp[−(1/θ)(Σ_{i=1}^{r} x_(i) + (n − r) x_(r))]
     = (r!/θ^r) exp(−T/θ).
The log-likelihood is thus given by

ln L(θ) = ln(r!) − r ln(θ) − T/θ.
Let Ω_0 = {θ : θ = θ_0} and Ω_a = {θ : θ ≠ θ_0} denote two disjoint (or mutually exclusive) sets of values of θ under H_0 and H_a, respectively, and let Ω = Ω_0 ∪ Ω_a = {θ : θ > 0}. Let max_{θ∈Ω_0} L(θ) denote the maximum value of L(θ) for θ ∈ Ω_0; it is usually the likelihood function with all unknown parameters replaced by their maximum-likelihood estimators, subject to the restriction that θ ∈ Ω_0. Similarly, let max_{θ∈Ω} L(θ) denote the maximum value of L(θ) for θ ∈ Ω, which can be obtained in a similar fashion. Under Ω_0 (or H_0: θ = θ_0), we see that

max_{θ∈Ω_0} L(θ) = (r!/θ_0^r) exp(−T/θ_0).

Under Ω, solving

d/dθ ln L(θ) = −r/θ + T/θ² = 0

yields the maximum-likelihood estimator of θ:

θ̂ = T/r = (1/r)[Σ_{i=1}^{r} X_(i) + (n − r) X_(r)].

Thus

max_{θ∈Ω} L(θ) = (r!/θ̂^r) exp(−T/θ̂).

The likelihood ratio test of H_0: θ = θ_0 versus H_a: θ ≠ θ_0 has a rejection region of the form λ ≤ c, where

λ = max_{θ∈Ω_0} L(θ) / max_{θ∈Ω} L(θ) = (θ̂/θ_0)^r exp[−(1/θ_0 − 1/θ̂) T],

for some appropriate constant c. Clearly, 0 ≤ λ ≤ 1. Note that λ is an exponential function of the total time on test T = Σ_{i=1}^{r} X_(i) + (n − r) X_(r). Hence, the rejection region λ ≤ c has an equivalent form in terms of T, that is, T ≤ k for some appropriate constant k. (See the figure below.)
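Since θ̂ = T/r, the ratio λ can be evaluated as a function of T alone; a small sketch (the helper name is ours):

```python
import math

def likelihood_ratio(T, r, theta0):
    """lambda = (theta_hat / theta0)^r * exp(-(1/theta0 - 1/theta_hat) * T),
    where theta_hat = T / r is the unrestricted MLE of the exponential mean."""
    theta_hat = T / r
    return (theta_hat / theta0) ** r * math.exp(
        -(1.0 / theta0 - 1.0 / theta_hat) * T
    )
```

As a function of T, λ attains its maximum value 1 at T = rθ_0 (where θ̂ = θ_0) and falls toward 0 as T moves away from rθ_0 in either direction.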
We have thus established the following theorem.

Theorem 9. Consider an exponential distribution with mean θ. The likelihood ratio test for H_0: θ = θ_0 against H_a: θ ≠ θ_0 on the basis of observing X_(1), X_(2), ⋯, X_(r) and censoring after X_(r) has a rejection region of the form T ≤ k, where T = Σ_{i=1}^{r} X_(i) + (n − r) X_(r) is the total time on test, for some appropriate constant k.

5.2 An Illustration
Consider an exponential distribution with mean θ. Now consider a censoring situation in which n units are put on test and observation continues until r units have failed. Suppose that the numbers of millions of revolutions of thirty ball bearings are censored after the twenty-third failure [6]. The ordered data to failure are

17.88  28.92  33.00  41.52  42.12  45.60  48.40  51.84  51.96  54.12  55.56  67.80  68.64  68.64  68.88  84.12  93.12  98.64  105.12  105.84  127.92  128.04  173.40

Assume that the number of millions of revolutions of a ball bearing has an exponential distribution with mean θ. Calculate
Σ_{i=1}^{23} X_(i) = 1,661.06.

In accordance with Theorem 8, (Σ_{i=1}^{23} X_(i), X_(23)) is a prediction sufficient (or adequate) statistic for predicting X_(m) (for m = 24, 25, ⋯, 30) based upon the information in the observed exponentially distributed data (X_(1), X_(2), ⋯, X_(23)).
Consequently, the total time on test is given by

T = Σ_{i=1}^{r} X_(i) + (n − r) X_(r) = 1,661.06 + (7)(173.40) = 2,874.86,

and the maximum-likelihood estimator of θ is given by

θ̂ = 2,874.86 / 23 = 124.99.
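The arithmetic of this illustration, together with the large-sample chi-square approximation the paper applies next, can be checked with a short script; the small difference from the paper's 0.0387998 comes from rounding θ̂ to 124.99 in the paper's hand calculation.

```python
import math

r, n = 23, 30
sum_failures = 1661.06      # sum of the first 23 ordered failure times
x_r = 173.40                # 23rd ordered failure time

T = sum_failures + (n - r) * x_r   # total time on test
theta_hat = T / r                  # maximum-likelihood estimate of theta

theta0 = 120.0
# -2 ln(lambda) for H0: theta = theta0 against Ha: theta != theta0
stat = -2 * r * math.log(theta_hat / theta0) + 2 * T * (1 / theta0 - 1 / theta_hat)

# p-value from the chi-square distribution with 1 degree of freedom:
# P(chi2_1 > x) = erfc(sqrt(x / 2))
p_value = math.erfc(math.sqrt(stat / 2))
```

With these inputs T = 2,874.86, θ̂ ≈ 124.99, the statistic is about 0.0388, and the p-value is roughly 0.84, far above 0.10, so H_0: θ = 120 is not rejected.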
In order to test H_0: θ = 120 versus H_a: θ ≠ 120, it is usually difficult to obtain the critical number k of the rejection region T ≤ k. In this case, we may apply the following theorem.
Theorem 10. Let X_1, X_2, ⋯, X_n denote a random sample with likelihood function L(θ). Consider testing H_0: θ ∈ Ω_0 versus H_a: θ ∈ Ω_a. Under certain regularity conditions, −2 ln(λ) is asymptotically chi-square distributed with ν degrees of freedom under H_0, where

ν = (number of independent parameters under Ω) − (number of independent parameters under Ω_0).

Now we calculate

−2 ln(λ) = −2r ln(θ̂/θ_0) + 2T(1/θ_0 − 1/θ̂)
         = −2(23) ln(124.99/120) + 2(2,874.86)(1/120 − 1/124.99)
         = 0.0387998,

which is asymptotically chi-square distributed with ν = 1 degree of freedom under H_0: θ = 120. Since
p-value = P(χ² > 0.0387998) > 0.10,

we do not reject H_0: θ = 120.

References

1. Basu, D., Cheng, S.-C.: A note on sufficiency in coherent models. Internat. J. Math. & Math. Sci. 4(3), 571–581 (1981)
2. Buckley, J.J.: Fuzzy Statistics. Springer, Heidelberg (2004)
3. Cheng, S.-C.: Crisp Confidence Estimation of the Parameter Involving in the Distribution of the Total Time on Test for Censored Data. In: Proceedings of the Academy of Business and Information Technology 2008 Conference, ABIT 2008 (2008)
4. Cheng, S.-C., Mordeson, J.N.: A Note on the Notion of Adequate Statistics. Chinese Journal of Mathematics, ROC 13(1), 23–41 (1985)
5. Cheng, S.-C., Mordeson, J.N.: Fuzzy Confidence Estimation of the Parameter Involving in the Distribution of the Total Time on Test for Censored Data. In: Proceedings of the 2008 North American Fuzzy Information Processing Society Annual Conference, NAFIPS 2008 (2008)
6. Crowder, M.J., Kimber, A.C., Smith, R.L., Sweeting, T.J.: Statistical Analysis of Reliability Data. Chapman & Hall, London (1994)
7. Fisher, R.A.: On the mathematical foundations of theoretical statistics. Phil. Trans. Roy. Soc. London A 222, 309–368 (1922)
8. Halmos, P.R., Savage, L.J.: Applications of the Radon-Nikodym Theorem to the theory of sufficient statistics. Ann. Math. Statist. 20, 225–241 (1949)
9. Lauritzen, S.L.: Sufficiency and time series analysis. Inst. Math. Statist. 11, 249–269 (1972)
10. Nair, P.S., Cheng, S.-C.: An Adequate Statistic for the Exponentially Distributed Censored Data. In: Proceedings of INTERFACE 2001, vol. 33, pp. 458–461 (2001)
11. Skibinsky, M.: Adequate subfields and sufficiency. Ann. Math. Statist. 38, 155–161 (1967)
12. Sugiura, M., Morimoto, H.: Factorization theorem for adequate σ-fields (in Japanese). Sūgaku 21, 286–289 (1969)
13. Takeuchi, K., Akahira, M.: Characterizations of prediction sufficiency (adequacy) in terms of risk functions. Ann. Statist. 3, 1018–1024 (1975)
14. Torgersen, E.N.: Prediction sufficiency when the loss function does not depend on the unknown parameter. Ann. Statist. 5, 155–163 (1977)
Collaborative Mechanism Based on Trust Network

Wei Hantian¹ and Wang Furong²

¹ School of Software, NanChang University, China
² School of Foreign Language, JiangXi University of Finance and Economics, China
[email protected]
Abstract. The way people interact in collaborative environments and social networks on the Web has evolved at a rapid pace over the last few years. Web-based collaborations and cross-organizational processes typically require dynamic trust between people and services. However, finding the right partner to work on joint tasks or to solve emerging problems in such scenarios is challenging due to the scale and temporary nature of collaborations. By calculating the pessimistic value of a path, the optimistic value of a path, and the comprehensive value over all paths, the results obtained give a new method to estimate the risk probability of collaboration.

Keywords: Collaborative Mechanism, Trust Network, Social Network.
1 Introduction

The ways people interact in collaborative environments and social networks on the Web have evolved over recent years with the development of information technology and the global-scale economy. An effective collaborative mechanism is conducive to increasing the productivity of enterprises and individuals, and promotes social harmony and economic development. The paradigm shift from closed systems to open, loosely coupled Web-services-based systems requires new approaches to support interactions. In distributed, cross-organizational collaboration scenarios, agents register their skills and capabilities as Human-Provided Services, using the very same technology as traditional Web services, to join a professional online help and support community. This approach is inspired by crowdsourcing techniques following the Web 2.0 paradigm. Agents represent the interests of different entities in real life, and each agent's goal is to achieve the interests of the entity it acts for. Agents wish to achieve the maximum benefit from a task, and various kinds of security risks inevitably appear in the process of cooperation. So the trust relationships among agents that promote collaboration in such a network environment attract increasing attention. Trust mechanisms play an important role in cooperation in human society, and many scholars believe they are an effective way to achieve agent cooperation. Most current trust models are evaluation models, such as the trust evaluation models proposed by Hu and by Grandison. In this paper, an interpersonal trust mechanism based on social networks is proposed to build a trust transfer mechanism in a coordination system.

M. Zhu (Ed.): ICCIC 2011, Part VI, CCIS 236, pp. 445–451, 2011. © Springer-Verlag Berlin Heidelberg 2011
2 Related Works

2.1 Collaborative System

Formation of Collaborative System. Using the success probability of collaboration between agents as the trust index forms a collaboration mechanism based on a trust cooperative system. One agent can be the collaboration agent of others in the system, in which each entity is assigned a different level of trust. Agents with high trust values to others become the preferred collaborative objects in the future, so after a certain period of time a relatively stable cooperative system is formed.

Evolution of Collaborative System. Agents and their link relationships can be represented as a directed graph in a trust-mechanism-based collaboration system. In the trust-based topology, some nodes have high in-degree because they can provide relatively better service. Nodes can form different self-organized gatherings, and both "small world" and "power law" characteristics are reflected in this structure of the cooperative system. Reducing the average distance between agents and increasing the degree of convergence of the agent network is conducive to nodes discovering service resources. The power-law phenomenon arises because agents collaborate with a certain preference: the interconnection probability values are uneven, and new agent nodes are biased toward collaborating with others of higher reliability, which results in some nodes' degrees continually increasing.

Social Network. A social network refers to the collection of social actors and the relationships between them. One can also say that a social network is composed of multiple points (social actors) and the connections between the points (the relationships between actors). The "points" in a social network are the various social actors; the "edges" are the various social relations between the actors. Relationships can be directed or undirected. At the same time, social relations can be expressed in multiple forms.
For example, an undirected relation can stand for communication and relationships between members, or trade relations between countries. Social network analysis (SNA) is the quantitative study of the relationships between actors in a social network. Graph theory is the mathematical basis of social network analysis; the formal description of social networks can be divided into social network graphs and social matrices. In graph theory, social networks can also be divided into directed and undirected graphs. A social network graph is composed of a set of nodes N = {n1, n2, ..., nk} and the connections between nodes L =
Fig. 1. Digraph of social network
Fig. 2. Matrix of social relations
{l1, l2, ..., lm}. In an undirected graph, the connection between nodes is a line; in a digraph (Fig. 1) the connections between nodes are directional, with arrows. The matrix of social relations (Fig. 2) is converted from the network of social relations; its elements represent the relationships between actors. The matrix of social relations is more standardized, which helps computers process it, and it is the basis for quantitative analysis by computer.

Out-Degree and In-Degree. The out-degree of a node is the number of ties the node sends to other participants; the in-degree is the number of ties other participants send to the node.

Pathway, Path, Shortest Distance. A pathway is a sequence of lines from a beginning point to an end point; one pathway can include many participants, and points and lines may repeat between the beginning and the end. A path places special emphasis on all points and lines being distinct; it is one line sequence of a pathway. The shortest distance is the length of the shortest path, the shortest line connection between two points.

Trust Network. A trust network is a subset of the network; within this subset, the participants have closer and warmer relationships with the other participants. It can be divided into four kinds based on different degrees of strictness. Here are a few commonly used measures of network analysis that the follow-up study in this paper will involve.
N-cliques: participants are defined as trust members if the distance between any two of them is at most n (often one).
N-clans: as long as a node has a relationship with some members within no more than n steps (usually 2), it can be defined as a member of the trust network.
K-plexes: if a node has a direct relationship with k members of an n-size trust network, then the node is a member of that trust network.
K-cores: if a participant has relationships with k members, it is allowed into the trust network no matter how many members it has no connection with.
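The degree and shortest-distance measures above are straightforward to compute on an adjacency-list representation of a digraph; a minimal sketch (function names are ours):

```python
from collections import deque

def out_degree(graph, node):
    """Number of ties the node sends (graph maps node -> set of successors)."""
    return len(graph.get(node, set()))

def in_degree(graph, node):
    """Number of ties the node receives from other participants."""
    return sum(1 for succs in graph.values() if node in succs)

def shortest_distance(graph, start, goal):
    """Length of the shortest directed path from start to goal (BFS),
    or None if goal is unreachable from start."""
    if start == goal:
        return 0
    seen, queue = {start}, deque([(start, 0)])
    while queue:
        node, dist = queue.popleft()
        for nxt in graph.get(node, set()):
            if nxt == goal:
                return dist + 1
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, dist + 1))
    return None
```

For the two-chain example used later (A → F → C and A → B → C), node A has out-degree 2, node C has in-degree 2, and the shortest distance from A to C is 2.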
3 Trust Model Framework

Trust Model. The key to building coordination mechanisms is establishing universal trust among all parties: only when people trust each other will they cooperate. Mutual trust presupposes mutual understanding, that is, mastering certain information about the other party and forming a feeling of recognition. The most important way to understand each other is through direct or indirect contact, so a social network based on trust can be established according to these associations. All traders are encouraged to interact even in the absence of previous contacts; with the occurrence of exchanges, the two sides establish a trust relationship based on feedback and gradually build their trust-based social network. Review assessments of collaborations represent the quality of transactions and also reflect the trust status of both parties. Fig. 3 describes the trust relationship between all the agents by a directed graph.
448
W. Hantian and W. Furong
Fig. 3. Trust network
Trust Calculation of Pathway. In the trust network, each node in the graph is an agent. Direct trust between nodes Xi and Xj is described as Xi → Xj; if there is no direct trust relationship between Xi and Xj, trust can be calculated through the trust transmission chains from Xi to Xj. For example, there may be two trust chains A → C: A → F → C and A → B → C. If there are many trust chains between X1 and Xn, the trust value must composite the recommendations of many agents. Let P(X1 → Xn) denote the trust probability event between X1 and Xn; then P̄(X1 → Xn) denotes the untrust probability event between X1 and Xn, and m is the number of trust chains. The probability is computed as Eq. (1), and likewise the trust value as Eq. (2):

P(X1 → Xn) = 1 − P̄(X1 → Xn)   (1)

T(X1, Xn) = 1 − T̄(X1, Xn)   (2)

Ti(X1, Xn) means the trust value of the i-th trust chain, and the overall trust value follows from Eqs. (3) and (4):

T̄(X1, Xn) = (1 − T1(X1, Xn)) × (1 − T2(X1, Xn)) × ... × (1 − Tm(X1, Xn)) = ∏i=1..m (1 − Ti(X1, Xn))   (3)

T(X1, Xn) = 1 − ∏i=1..m (1 − Ti(X1, Xn))   (4)
Trust Calculation of Path. If there is a trust path X1 → X2 → X3 → ... → Xn, then according to the conditions of trust transfer, T(X1, Xn), the trust value X1 places in Xn, can be regarded as the probability of the co-occurrence of the events X1 → X2, ..., Xn−1 → Xn, calculated as Eq. (5):

T(X1, Xn) = T(X1, X2) × T(X2, X3) × ... × T(Xn−1, Xn) = ∏i=1..n−1 T(Xi, Xi+1)   (5)
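Eqs. (3)–(5) can be sketched directly; the two chains A → F → C and A → B → C follow the example in the text, with hypothetical edge trust values:

```python
# Sketch of Eqs. (3)-(5): trust along a single chain is the product of the
# pairwise trust values (Eq. 5); several parallel chains between the same
# endpoints are combined through their complements (Eqs. 3-4).
# The edge trust values below are hypothetical.

def path_trust(edge_trusts):
    """Eq. (5): T(X1, Xn) = product of T(Xi, Xi+1) along the chain."""
    t = 1.0
    for e in edge_trusts:
        t *= e
    return t

def combined_trust(chain_trusts):
    """Eq. (4): T = 1 - product over chains of (1 - Ti)."""
    untrust = 1.0
    for t in chain_trusts:
        untrust *= (1.0 - t)   # Eq. (3): product of complements
    return 1.0 - untrust

# Two trust chains A -> C, as in the example: A -> F -> C and A -> B -> C.
t_afc = path_trust([0.9, 0.8])   # 0.72
t_abc = path_trust([0.7, 0.6])   # 0.42
t_ac = combined_trust([t_afc, t_abc])
print(round(t_ac, 4))            # 1 - 0.28 * 0.58 = 0.8376
```

Adding a second chain can only raise the combined trust, since each chain multiplies another factor below 1 into the untrust product.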
Direct Trust Formation. The evolution of direct trust between two nodes depends on their direct interaction or experience, through which they modify their direct trust value. This direct interaction is shown as the direct link between A and B in the network topology in Fig. 3. This portion of trust is critically important since it does not depend on the recommendation of any trusted neighbor. So when two nodes A and B interact directly, they change their direct trust value based on the satisfaction level of that interaction, as classified in Fig. 4.
Fig. 4. Classification of behavior (negative, neutral and positive behavior mapped onto a satisfaction scale from 0 to 1)
Optimal Hop Length. In our model, a node has the flexibility to define the maximum length of a recommendation path through the field of the request packet. The upper bound of the length is related to the weight factor and disposition value.
Len ≤ θ/ψ, where θ is the disposition value and ψ is the decrement rate of the weighting factor per node.
4 Case Analysis

For the paths {A,B,C,D,E,F,G}, {A,H,G} and {A,I,J,G} in Fig. 5, the algorithm calculates the pessimistic value of each path, the optimistic value of each path, and the comprehensive value over all paths. In our scenario we take θ = 0.5 and ψ = 0.05, so the maximum path length is θ/ψ = 10. A trust relationship cannot be built if the trust value is below 0.5. Since the trust value of each node in Fig. 6 is greater than 0.6, all the paths are effective. If the value line fluctuates badly, the trust is more uncertain. As the number of recommendation nodes increases, the pessimistic value of one path, the optimistic value of one path, and the comprehensive value over all paths were obtained in Fig. 7. It is obvious that they show a big deviation. A big deviation value may mean there is some risk in the collaboration, so it can be used to estimate the parties' willingness to collaborate with each other.
Fig. 5. A topology of devices with trust relationship (edges labeled with trust values)
Fig. 6. A topology of devices with trust relationship (curves: path 1, path 2, path 3; trust value per node)

Fig. 7. A topology of devices with trust relationship (curves: pessimistic, optimistic and comprehensive value versus the number of recommendation nodes)
5 Summary

The proposed collaboration solution effectively promotes the efficiency of collaboration between agents and improves the success rate of common tasks; it also improves the interactive performance of the entire network system and ensures the reliability of the selected collaborators. The paper derives collaborative mechanisms from a trust model based on interpersonal social networks. It uses direct trust and the trust chain approach to assess the credibility of an agent, and discusses in detail the calculation of trust in the cooperative system and the formation and evolution mechanisms of the collaboration system. Finally, by calculating the pessimistic value of one path, the optimistic value of one path, and the comprehensive value over all paths, the results give a new method to estimate the risk probability of collaboration.
References
1. Zuo, Y., Panda, B.: Component based trust management in the context of a virtual organization. In: ACM Symposium on Applied Computing, pp. 1582–1588 (2005)
2. Moody, P., Gruen, D., Muller, M.J., Tang, J.C., Moran, T.P.: Business activity patterns: a new model for collaborative business applications. IBM Systems Journal 45(4), 683–694 (2006)
3. Kochikar, V.P., Suresh, J.K.: Towards a knowledge-sharing organization: Some challenges faced on the Infosys journey. Infosys Technologies Limited, India (2004)
4. Perry-Smith, J.E., Shalley, C.E.: The social side of creativity: A static and dynamic social network perspective. Academy of Management Review 28, 89–106 (2003)
5. Liu, L., Antonopoulos, N., Mackin, S.: Managing peer-to-peer networks with human tactics in social interactions. Journal of Supercomputing 44, 217–236 (2008)
6. Bowles, S., Gintis, H.: The Evolution of Strong Reciprocity: Cooperation in Heterogeneous Populations. Theoretical Population Biology 65, 17–28 (2004)
7. Roy, L.: Tacit knowledge and knowledge management: The keys to sustainable competitive advantage. Organizational Dynamics 29, 164–178 (2001)
8. Cross, R., Borgatti, S.P., Parker, A.: Beyond answers: Dimensions of the advice network. Social Networks 23, 215–235 (2001)
9. Cross, R., Prusak, L.: The people who make organizations go or stop. Harvard Business Review 6, 5–12 (2002)
10. Sabater, J., Sierra, C.: Social regret, a reputation model based on social relations. SIGecom Exchanges 3(1), 44–56 (2002)
11. Dustdar, S., Hoffmann, T.: Interaction pattern detection in process oriented information systems. Data and Knowledge Engineering (DKE) 62(1), 138–155 (2007)
Design for PDA in Portable Testing System of UAV’s Engine Based on Wince YongHong Hu, Peng Wu, Wei Wan, and Lu Guo No. 365 Institute Northwestern Polytechnical University Xi’an, China [email protected]
Abstract. As the core equipment of a UAV, the engine's working condition directly affects whether the UAV can fly safely and reliably. There is therefore a need to test the UAV's engine in real time and reduce security risks. The portable testing system for the UAV's engine takes a PDA as the host computer, WinCE as the operating system, and testing software developed with EVC. Testing shows that the software can complete the test task. Keywords: UAV Engine, Testing, Portable, WINCE.
1 Introduction

A UAV (unmanned aerial vehicle) includes an aircraft system, avionics system, engine system and data link system. A problem in any subsystem is likely to lead directly to the failure of the UAV's combat mission, which requires comprehensive testing of all systems in advance to ensure the stability and reliability of the UAV. In ground testing, each system can be checked with an existing testing system. However, since the company's engine test system only has a test-drive table, the engine system cannot be tested while the UAV is flying. As the core equipment of the UAV, the engine's working condition directly affects whether the UAV can fly safely and reliably [1], so an efficient and accurate testing system is needed to test the engine in real time. While factory testing of the engine on the ground has achieved automation and digitization, real-time testing can significantly improve precision and efficiency, reduce security risks and improve the success rate of flights; in the development of new models, the saved test data also provides a reference for later research and development.
2 Design of Testing System

The portable testing system for the UAV is aimed at the oil and gas mixing ratio of the engine; the system includes the excitation, the measured unit and the testing unit [2]. The data collection box includes the microcontroller, data acquisition unit, pressure and flow sensors and power supply unit; the handheld computer includes the interface module, display module, memory module and self-test module (Fig. 1). M. Zhu (Ed.): ICCIC 2011, Part VI, CCIS 236, pp. 452–457, 2011. © Springer-Verlag Berlin Heidelberg 2011
Fig. 1. UAV’s Engine Testing System
3 Operating System and Development Software for the PDA

A PDA was chosen as the host computer because, compared with industrial machines and notebook computers, a PDA is inexpensive, has low power consumption and is easy to carry, while still fully meeting the requirements [3]. Embedded systems typically include both hardware and software. The hardware platform includes the CPU, memory and I/O ports. The software components include a real-time embedded operating system supporting multi-tasking, and applications: the applications control the operation and behavior of the system, and the operating system controls the interaction between the applications and the hardware. Since current CPUs and their peripheral hardware connections are reliable, software becomes the bottleneck in the development of embedded systems. For upper-layer application developers, an embedded system needs a concise and friendly interface and a reliable, easy-to-develop, multi-tasking, low-cost operating system. Therefore, once the embedded processor and peripheral hardware are selected, most of the work concentrates on the selection and development of embedded software. Linux development is more difficult because system maintenance is hard and requires a higher level of technical support; by comparison, WinCE development is easier, with a short development cycle, an improved kernel, a rich GUI and powerful development tools, so we chose WinCE as the underlying operating system of the PDA. An embedded operating system is closely tied to its customization and configuration tools and its integrated development environment. For Windows CE, we do not need to buy the Windows CE operating system itself; we can just buy the Platform Builder for CE.NET integrated development environment (PB), with which a Windows CE.NET operating system can be tailored and customized as needed.
The serial port code and the MFC user interface software were developed with EVC, including several key functional units: the self-test unit, the serial read unit, the data display unit and the data storage unit. The whole workflow of the handheld computer is: the handheld computer sends a self-test command to the microcontroller to test the subsystems; if the test result is correct, it reads the data (raising an alarm if a value is excessive), displays the real-time data, and stores the data in the form of a txt file.
454
Y. Hu et al.
4 PDA Functions

The handheld computer software components include self-test, data display, data storage and excessive-value alarm. Handheld computer self-test: the computer relies on a self-test program to check the memory, memory card, touch screen and other hardware. Serial self-test of the data acquisition and processing unit: when the computer's own self-test is complete, the software automatically starts the engine self-test module; the computer sends self-test data to start the MCU self-test procedure, and if the MCU returns an acknowledge signal to the computer, the communication function is normal. Data display: display the real-time data read through the serial port. Data storage: save the real-time measurements for later data analysis. Data exceeded alarm: once abnormal real-time data is detected, an alarm is raised.
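The functional units above can be outlined in a language-neutral way. The sketch below (in Python rather than the paper's EVC/C++) invents the command byte, acknowledge byte and alarm threshold purely for illustration:

```python
# Hedged sketch of the handheld workflow described above (self-test,
# read, exceeded-value alarm, store). The command byte 0x55, acknowledge
# byte 0xAA and the threshold are hypothetical placeholders, not the
# paper's actual protocol.

ALARM_THRESHOLD = 100.0   # hypothetical "exceeded" limit

def self_test(send, recv):
    """Send a self-test command to the MCU; communication is normal
    if an acknowledge byte comes back."""
    send(b"\x55")               # hypothetical self-test command
    return recv() == b"\xaa"    # hypothetical acknowledge signal

def process_frame(values, store):
    """Handle one frame of real-time data: store it and flag alarms."""
    alarms = [v > ALARM_THRESHOLD for v in values]
    store.append(values)        # saved for later analysis (txt file in the paper)
    return alarms

# Simulated MCU link and one data frame:
sent = []
store = []
ok = self_test(sent.append, lambda: b"\xaa")
alarms = process_frame([42.0, 120.5], store)
print(ok, alarms)   # True [False, True]
```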
5 Software Design The Structure of Testing Software as shown in Fig.2.
Fig. 2. Structure of Testing Software
The main window is used to display the real-time data (Fig. 3):
DCB dcb;
hCom = CreateFile(_T("COM1:"), GENERIC_READ | GENERIC_WRITE, 0, NULL, OPEN_EXISTING, 0, NULL);
GetCommState(hCom, &dcb);
dcb.BaudRate = 9600; // Set baud rate
dcb.ByteSize = 8;    // Set byte size
SetCommState(hCom, &dcb);
Design for PDA in Portable Testing System of UAV’s Engine Based on Wince
455
Fig. 3. Testing Interface Flow Chart
Display real-time data in an editable text box:
SetDlgItemText(IDC_FLOW2_CUR, (CString)data.flow2);
Read the content of an editable text box for storage:
MyGetDlgItemText(IDC_TEMP2_CUR, data.temp2, sizeof(data.temp2));
A modal dialog is used for parameter settings; some of the key code is as follows. After adding a new class for the dialog box, such as CStartDlg, the following statements in the main dialog create a modal dialog box that pops up the parameter-setting window:
CStartDlg dlg;
dlg.DoModal();
Benefit of a modal dialog: while this dialog is open the main window cannot be operated, which avoids misuse of the main window [4]. The self-test sends data through the serial port, reads the response and compares it; some of the key code is as follows (Fig. 4):
WriteFile(hCom, buf1, sizeof(buf1), &len, NULL);
ReadFile(hCom, buf, sizeof(buf), &len, NULL); // Read return data from the serial port
UCHAR key = (UCHAR)0xXX; // frame head
UCHAR* ptr = (UCHAR*)strstr((char*)buf, (char*)&key);
int tp[8] = {0};
if (ptr != NULL) {
    ptr++;
    if (*ptr == (UCHAR)0xXX) {
        ptr++;
        if (*ptr == (UCHAR)0xXX) {
            ptr++;
            for (int i = 0; i < 8; i++) { // assemble eight 16-bit values
                tp[i] += *ptr;
                tp[i] <<= 8;
                ptr++;
                tp[i] += *ptr;
                ptr++;
            }
        }
    }
}
for (int i = 0; i < 8; i++) {
    if (tp[i] != Y) { strcat(str0, str1); } // note the failed item
}
MessageBox(str0, "message", 0); // Pop up a message box
Fig. 4. Self-Test Flow Chart
The Store button stores the currently detected real-time data in a txt file under the corresponding file name; the code is as follows [5]:
// Get the system time
GetLocalTime(&time);
memset(datetime, 0, sizeof(datetime)); // The contents of datetime are set to 0
sprintf(datetime, "%4d%02d%02d%02d%02d%02d", time.wYear, time.wMonth, time.wDay, time.wHour, time.wMinute, time.wSecond); // Convert the system time to a string
hStoreFile = CreateFile(filename, GENERIC_READ | GENERIC_WRITE, 0, NULL, OPEN_ALWAYS, 0, NULL); // Open (create) a file and get a file handle; the file is named engine model + system time
WriteFile(hStoreFile, buf, strlen(buf), &len, NULL); // Write data to the file
6 Summary

Through a series of tests, the handheld computer program works well and achieves the storage and excessive-value alarm functions. A serial port was initially selected to transfer the test data; the interface can later be extended to USB, Ethernet or other devices, enabling the test system to meet the needs of different interfaces. The handheld computer's testing software is written in C, which has good portability and scalability, so it is applicable to most systems.
References
1. Niu, H.F., Niu, M.B., Lei, J.K., Yang, N.: Design of Auto Testing System for Multi Parameters and Multi Helming Mechanism. Experimental Technology and Management (2006)
2. Ding, L.J., Ding, H.S.: Design and Implementation of the Synthetic Measurement System of UAV’s Engine. Computer Automated Measurement & Control (2004)
3. Chen, Z.Q., Zhang, J.B., Jiang, H.: Design & Implementation of Portable Testing System for UAV. Measurement & Control Technology (2007)
4. He, Z.J.: Windows CE Embedded Systems. Beihang University Press (2006)
5. Zhang, D.Q., Tan, N.L., Wang, X.M., Jiao, F.C.: Windows CE Practical Development of Technology. Publishing House of Electronics Industry (2006)
Adaptive Particle Swarm Optimizers Li Li and Qin Yang* Sichuan Agricultural University, China [email protected]
Abstract. Particle swarm optimization (PSO) is a relatively new swarm-intelligence-based heuristic global optimization technique. Since PSO is easy to understand and implement, requires little parameter tuning and exhibits robust global convergence, it has attracted increasing interest from the scientific and engineering communities. In particle swarm optimization, each particle moves in the search space and updates its velocity according to the best previous positions already found by its neighbors (and itself), trying to find an even better position. This approach has proved to be powerful but needs parameters predefined by the user, like the swarm size, the neighborhood size and some coefficients, and tuning these parameters for a given problem may in fact be quite difficult. Keywords: adaptation, optimization, particle swarm.
1 Introduction

PSO has shown fast convergence speed and good global search ability. However, it sometimes has a slow fine-tuning ability for solution quality [2]. When solving problems with many local optima, PSO is more likely to be exploring a local optimum at the end of the run. To tackle this problem, many improved methods have been proposed. Shi and Eberhart introduced a parameter called the inertia weight into PSO to balance global exploration and local exploitation [3, 4].
2 Particle Swarm Optimization

Any PSO algorithm defines, for a given particle Pi, its position xi(t) = (xi,1(t), ..., xi,d(t), ..., xi,D(t)) and its velocity vi(t) = (vi,1(t), ..., vi,d(t), ..., vi,D(t)), the values at time t+1 depending on the ones at time t. To present the adaptation principles, here we choose a simple version of PSO, known as PSO Type 1’’. It has just one coefficient, but the principles can easily be extended to more complicated systems; we give here just the main formula:

* Corresponding author.
M. Zhu (Ed.): ICCIC 2011, Part VI, CCIS 236, pp. 458–460, 2011. © Springer-Verlag Berlin Heidelberg 2011
vi,d(t+1) = χ ( vi,d(t) + rand(0...φ/2)·(pi,d(t) − xi,d(t)) + rand(0...φ/2)·(gi,d(t) − xi,d(t)) ),
xi,d(t+1) = xi,d(t) + vi,d(t+1)   (1)

with

φ > 4,   χ = 2 / (φ − 2 + √(φ² − 4φ))

where pi = (pi,1, ..., pi,d, ..., pi,D) is the best position found so far by the particle Pi, gi = (gi,1, ..., gi,d, ..., gi,D) is the best position found so far by its neighbors (remember that one of them is the particle itself), and rand(0...u) stands for “a random number (uniform distribution) in [0, u].” Note that this random coefficient is a priori different for each component, so we cannot write just a vector equation like

vi(t+1) = χ ( vi(t) + rand(0...φ/2)·(pi(t) − xi(t)) + rand(0...φ/2)·(gi(t) − xi(t)) )   (2)
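A minimal sketch of one update step of Eq. (1); `chi` implements the constriction coefficient defined above, and the particle data in the example are hypothetical:

```python
# One component-wise update step of Eq. (1), the "Type 1''" constriction
# PSO. Positions and best positions below are hypothetical.
import math
import random

def chi(phi):
    """Constriction coefficient, defined for phi > 4."""
    return 2.0 / (phi - 2.0 + math.sqrt(phi * phi - 4.0 * phi))

def update(x, v, p, g, phi, rng=random.random):
    """Eq. (1): new velocity and position. A fresh random coefficient in
    [0, phi/2] is drawn for every component, as noted in the text."""
    new_v, new_x = [], []
    for d in range(len(x)):
        vd = chi(phi) * (v[d]
                         + rng() * (phi / 2.0) * (p[d] - x[d])
                         + rng() * (phi / 2.0) * (g[d] - x[d]))
        new_v.append(vd)
        new_x.append(x[d] + vd)
    return new_x, new_v

print(round(chi(4.1), 4))   # 0.7298
x, v = update([0.0, 0.0], [0.1, 0.1], [1.0, 1.0], [2.0, 2.0], 4.1)
```

With φ = 4.1 the constriction coefficient is χ ≈ 0.7298, a commonly quoted value for this form of PSO.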
3 What a Particle Knows

At each time step t, each particle Pi can obtain the following information from the particles Pj of its neighborhood: the position xj(t) (written just xj, to simplify); the best position ever found, pj; the first error function evaluation (we will need it for improvement evaluation), i.e. f(xj(t0,j)), where t0,j is the time step at which Pj was generated; and the velocity vj. From itself it also knows: the coefficient φi used instead of φ in (1); the neighbour list hi; the last time instant at which it performed any adaptation task; and the previous position value f(xi(t−1)). Although the process is designed to be as local as possible, the particle also needs a few pieces of global information:
460
L. Li and Q. Yang
an adaptive threshold Δ for “enough improvement,” as defined below; the time t, for, as we will see, adaptation has to be made just from time to time; and the swarm size N(t) (called simply N throughout the rest of the paper).
4 What a Particle Can Do

Using this information, the particle has to “think locally, act locally.” We assume it can perform, as in non-adaptive PSO, all computations required by the system (1), and can also compute f(x) for any given position x. Now, for the adaptation process, four actions are possible. First, we have to define how to compare a particle Pi with a particle Pj, i.e. to define an order relation ≻ (read “better than”) on the set of particles. We do not take into account the current positions, but only the best previous ones:

f(pi) ≤ f(pj) ⇔ Pi ≻ Pj   (4)

Note that this means we have Pi ≻ Pi. More generally, the order relation makes sense even for particles with exactly the same best error function value (which may occur, particularly in a discrete search space).
5 Improvement

The improvement for a given particle Pi is defined by

δ(Pi) = ( f(xi(t0,i)) − f(pi) ) / ( f(xi(t0,i)) + f(pi) )   (5)

It is a relative improvement, and it is always defined, for if the denominator were equal to zero a solution would already have been found and the algorithm would already have stopped. This paper presents an improved PSO algorithm: when the current particle is far from the global best particle, larger mutation step sizes are used, while when the current particle is near the global best particle, smaller mutation step sizes are used.
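Eq. (5) translates directly into code; the first and best error values below are hypothetical:

```python
# Eq. (5) as code: the relative improvement of a particle, comparing its
# first error value f(x_i(t_0,i)) with its best-so-far value f(p_i).
# The error values used in the example are hypothetical.

def improvement(f_first, f_best):
    """delta(Pi) = (f_first - f_best) / (f_first + f_best)."""
    return (f_first - f_best) / (f_first + f_best)

print(improvement(10.0, 2.0))   # 8/12, roughly 0.667
print(improvement(5.0, 5.0))    # 0.0 (no improvement yet)
```

The ratio lies in [0, 1) for non-negative error values, which makes it directly comparable with the adaptive threshold Δ mentioned above.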
References
1. Angeline, J.: Using Selection to Improve Particle Swarm Optimization. Presented at IEEE International Conference on Evolutionary Computation, Anchorage, Alaska (May 4-9, 1998)
2. Carlisle, A., Dozier, G.: Adapting Particle Swarm Optimization to Dynamic Environments. Presented at International Conference on Artificial Intelligence, Monte Carlo Resort, Las Vegas, Nevada, USA (1998)
3. Clerc, M.: The Swarm and the Queen: Towards a Deterministic and Adaptive Particle Swarm Optimization. Presented at Congress on Evolutionary Computation, Washington DC (1999)
4. Kennedy, J.: Stereotyping: Improving Particle Swarm Performance With Cluster Analysis. Presented at Congress on Evolutionary Computation (2000); Shi, Y., Eberhart, R.C.: A modified particle swarm optimizer. In: Proceedings of the Conference on Evolutionary Computation, pp. 69–73. IEEE Press, Piscataway (1998)
Based on Difference Signal Movement Examination Shadow Suppression Algorithm Hu ChangJie YuQuan south road 17, XiaoGanShi, Hubei, China [email protected]
Abstract. Background extraction methods based on statistics are widely applied in motion detection products, but their shortcoming is also obvious: because a single background model is used, the anti-jamming ability of such methods is insufficient. The Gaussian mixture model uses many background models, but the number of models determines the performance of the method, and using too many models reduces the efficiency of the system. This article combines statistical background extraction with a multi-color space, reducing disturbance from the environment while guaranteeing system efficiency. It is verified that this method can effectively suppress the disturbance brought by light intensity changes. Keywords: Background extraction, Shadow suppression, Brightness, Chromatic aberration.
1 Introduction

With the increasing social and personal demand for security, video surveillance has become increasingly popular, and surveillance that only provides images is increasingly unable to meet the demand, so more and more embedded video surveillance equipment carries an intelligent detection module. Moving foreground extraction is usually the first step of intelligent monitoring; feature analysis is then performed on the extracted foreground [1]. Since the surveillance camera is usually at a fixed location and does not move, a common approach is background subtraction. This method first models the collected images as a background, then subtracts the background model from each collected frame to get the moving foreground. Background subtraction usually has three stages [2]: (1) Background modeling and training: this is the process of establishing a background model. An image sequence is extracted from the video and, through learning, the background characteristics of the sequence are extracted; usually the brightness information of the images serves as the background model. (2) Foreground detection: an image of the video sequence is processed and compared with the background model established in the first step to obtain the moving foreground objects, usually by background subtraction. (3) Background update: the brightness used as the detection basis changes with the external environment over time, so in M. Zhu (Ed.): ICCIC 2011, Part VI, CCIS 236, pp. 461–466, 2011. © Springer-Verlag Berlin Heidelberg 2011
462
H. ChangJie
the future, after extraction, the background should be updated in real time, usually with an infinite-impulse-response method. But such methods have limitations in use: for example, a shadow cast by sunlight upon an object will be extracted as foreground; although the brightness change of the shadow is greater than that of the actual moving object, the shadow is in fact not the part we are interested in. This article presents a DSP-based improvement of the statistical background extraction method that suppresses the impact of light and shadow on background extraction.
2 Algorithm Description

Statistical Background Subtraction. The core frequency of a DSP is lower than that of a PC's CPU, hence the statistical background subtraction must be adapted to meet the needs of the DSP platform.

Background Model. Set a sequence threshold; when the number of collected image frames reaches this threshold, average the preceding image sequence, thus establishing the basic average brightness and color models [3], as in equation (1) below:

f′(x, y) = ( Σi=1..N fi(x, y) ) / N   (1)
where 1 ≤ x ≤ m, 1 ≤ y ≤ n; m and n are the image width and height, and N is the sequence threshold. When f(x, y) is the brightness signal the model is the brightness model; when f(x, y) is the color difference signal the model is the color model. The threshold is also used to initialize a global variance model. Decimals are used frequently in the subsequent calculations, and the DM64x+ series DSP is a fixed-point DSP, so the background model is stored in fixed-point decimal format.
Threshold used to initialize a global variance model. The subsequent calculations, the decimal will be used frequently, and DM64x + series of fixed-point DSP is the DSP, so the background model is established, the use of fixed-point decimal format. Extraction of Moving Foreground. Collected using a frame compared with the background model, when to meet formula (2), that the pixels as foreground.
1.( f (x, y ) − B ( x, y )) > V ′( x, y ) 2
2.( f (x, y ) − B ( x, y )) > V ( x, y ) 2
(2)
B( x, y ) is the brightness of the background model , V (x, y ) is initialized, when V ′( x, y ) the variance model is updated through the model variance model. Where
Because the result is a binary image, to save space each bit of the result image represents one pixel.

Background Model Update. The brightness of the monitored environment may change over time, requiring the background model to be updated in real time as well. During the update, non-background pixels are not updated, so the binary image extracted in the first step serves as the basis for updating background pixels. A shadow is not useful experimental data, but shadow pixels cannot be updated as background either, so the background update operation is completed after the foreground extraction rather than after shadow suppression. The real-time background update includes updating the background model and the variance model, according to formula (3).
B′(x, y) = (1 − α) · B(x, y) + α · f(x, y)
V′(x, y) = (1 − α) · V(x, y) + α · (f(x, y) − B′(x, y))²   (3)

Among them, B(x, y) is the brightness background model, B′(x, y) is the background model after the update, V(x, y) is the initial variance model, V′(x, y) is the variance model after the update, f(x, y) is the pixel value at point (x, y), and α is the weight coefficient of the update process.
Shadow and Light Suppression. The paper makes the following improvements to the classical statistical background extraction method. First, after the background is extracted and the resulting foreground has been morphologically processed, statistics are taken and the boundary values of each individual foreground region are recorded. In order to quickly search the boundary values of the separate foreground regions, connected-region labeling is first applied to the background extraction result: 1. Scan the image twice, from top to bottom and left to right. In the first scan, suppose the first non-background point scanned is A(i, j); check its left neighbor A(i−1, j) and its upper neighbor A(i, j−1). 2. If neither A(i−1, j) nor A(i, j−1) is marked, assign a new tag to A(i, j). 3. If exactly one of A(i−1, j) and A(i, j−1) is marked, give A(i, j) the same tag. 4. If both A(i−1, j) and A(i, j−1) are marked, then: if the two tags are the same, give A(i, j) that tag; if the two tags are different, mark A(i, j) with one of them and note that the two tags are equivalent. 5. Using the equivalence table, replace each tag with its lowest equivalent tag, so that pixels belonging to the same connected region but carrying different tags are re-marked consistently. Then each individual connected region is searched, and the upper, lower, left and right boundaries of each single connected region are identified as the boundaries of the foreground, so a rectangle corresponding to each foreground region can be identified. A second background extraction is performed within each rectangle, but on the basis of the color difference signals. Bounded by the rectangle of a connected region, the color difference signals within the rectangle, including the Cb and Cr signals, are extracted from the frame. The Cb and Cr signals within the rectangle are subtracted from the Cb and Cr signals of the background color model, and the result is compared with the color variance model.
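Steps 1–5 above can be sketched as a two-pass labeling routine; the small binary mask is hypothetical, and the equivalence table is kept as a simple parent map (one possible realization of the equivalence list in step 4):

```python
# Minimal sketch of the two-pass connected-region labeling steps 1-5
# above (4-connectivity: left and above neighbors), on a small
# hypothetical binary mask; 0 is background.

def label_regions(img):
    h, w = len(img), len(img[0])
    labels = [[0] * w for _ in range(h)]
    parent = {}            # equivalence table

    def find(a):           # resolve a label to its smallest equivalent
        while parent[a] != a:
            a = parent[a]
        return a

    next_label = 1
    for y in range(h):
        for x in range(w):
            if not img[y][x]:
                continue
            left = labels[y][x - 1] if x > 0 else 0
            above = labels[y - 1][x] if y > 0 else 0
            neighbors = [l for l in (left, above) if l]
            if not neighbors:                      # step 2: new label
                labels[y][x] = next_label
                parent[next_label] = next_label
                next_label += 1
            else:                                  # steps 3-4
                m = min(find(l) for l in neighbors)
                labels[y][x] = m
                for l in neighbors:                # note the equivalence
                    parent[find(l)] = m
    # step 5: second pass, replace each label by its smallest equivalent
    for y in range(h):
        for x in range(w):
            if labels[y][x]:
                labels[y][x] = find(labels[y][x])
    return labels

mask = [
    [1, 1, 0, 1],
    [0, 1, 0, 1],
    [0, 0, 0, 1],
]
result = label_regions(mask)
print(result)   # two regions: the left block and the right column
```

The bounding rectangle of each labeled region then follows directly from the minimum and maximum row and column of each label.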
If a pixel within the rectangle satisfies formula (4) for the color difference signals, then that pixel is color-difference foreground:

1. (fc(x, y) − Bc(x, y))² > Vc′(x, y)
2. (fc(x, y) − Bc(x, y))² > Vc(x, y)   (4)

where fc(x, y) is the color difference signal of the new frame, Bc(x, y) is the background color model after real-time updating, Vc′(x, y) is the updated color variance model, and Vc(x, y) is the initial color variance model.
Taking into account the object into view for a long time to become the new fixed background color models have established real-time updates, follow the color model update formula (5).
B'_c(x, y) = (1 - \beta) \, B_c(x, y) + \beta \, f_c(x, y)

V'_c(x, y) = (1 - \beta) \, V_c(x, y) + \beta \, (f_c(x, y) - B'_c(x, y))^2    (5)

where B_c(x, y) is the initial background color model, B'_c(x, y) is the background color model after real-time updating, f_c(x, y) is the color-difference signal of the new frame, V_c(x, y) is the initial color variance model, and V'_c(x, y) is the color variance model after real-time updating.
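The per-pixel update of formula (5) can be sketched as follows; the PixelModel structure and the choice of β are illustrative assumptions, not part of the original method.

```cpp
// Real-time update of the background color model and variance model,
// formula (5): B' = (1-beta)*B + beta*f,  V' = (1-beta)*V + beta*(f - B')^2.
struct PixelModel {
    double B;   // background chrominance (Cb or Cr) at this pixel
    double V;   // chrominance variance at this pixel
};

void updateColorModel(PixelModel& m, double f, double beta) {
    m.B = (1.0 - beta) * m.B + beta * f;         // B'_c(x, y)
    double d = f - m.B;
    m.V = (1.0 - beta) * m.V + beta * d * d;     // V'_c(x, y)
}
```

A small β makes the model absorb long-present objects slowly, which is exactly the behavior the text asks for.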
Fig. 1. Classical statistical background subtraction
Based on Difference Signal Movement Examination Shadow Suppression Algorithm
465
Fig. 2. Improved statistical background subtraction
Integrating formula (2) and formula (4), only pixels that satisfy both formula (2) and formula (4) are taken as true moving foreground; applying morphological operations to the combined binary image fills holes and removes noise.

Experimental Results

In this study, the detection algorithm is applied with vehicles as the target foreground. In the daytime the shadow cast by a vehicle strongly affects detection, sometimes increasing the amount of computation and reducing classification accuracy. Figure 1 shows the extraction result of the classical statistical background method. Figure 2 shows the result of further extraction, based on the color-difference signals, of the moving foreground obtained by classical statistical background extraction. Verification shows that the method effectively suppresses the influence of shadows on moving objects.
3 Conclusion

Aiming at the strong brightness variation that affects moving objects in outdoor environments, this paper improves the classical statistical background extraction method: on top of the original classical statistical background subtraction, a color model is additionally established from the two color-difference signals. When an object is lit, the most significant change is in brightness, so at that moment the color-difference characteristics reflect the moving object better than brightness does. The regions of moving objects are first extracted based on luminance; these regions are then screened further by comparison with the previously established color model, and only the parts of a region whose difference from the color model exceeds the threshold are regarded as foreground. This eliminates shadow and lighting effects. Experiments show that the method suppresses shadows and lighting changes well; its drawback is that when the colors of the moving object and the background are very close, extraction based on background brightness will degrade the foreground objects in the
same way [4]. Further extraction of the object outline is proposed as future work.
References
1. Minamata, Y.J., Tao, L.d., Xu, G., Peng, Y.n.: Background modeling under free camera movement. Journal of Image and Graphics 13(2) (February 2008)
2. Li, X., Yan-Yan, Zhang, Y.-J.: Analysis and comparison of several background modeling methods. In: Thirteenth National Conference on Graphics
3. Shuai, F., Xue, F., Xu, X.: Dynamic object detection algorithm and simulation based on background modeling. System Simulation 17(1) (January 2005)
4. Zhou, Z.Y., Hu, F.Q.: Target detection in dynamic scenes based on background modeling. Computer Engineering 34(24) (December 2008)
Application of Clustering Algorithm in Intelligent Transportation Data Analysis Long Qiong, Yu Jie, and Zhang Jinfang School of Civil Engineering, Hunan City College, Yiyang, China
[email protected]
Abstract. With the continuous development of data mining technology, applying data mining techniques to the transportation sector can serve transportation scientifically and reasonably. In intelligent transportation, the analysis of traffic flow data is very important, and analyzing traffic data intelligently is a difficult problem, so using new data mining techniques to replace traditional data analysis and interpretation methods is necessary and meaningful. A clustering algorithm is the process of grouping a collection of physical or abstract objects into multiple classes of similar objects. This paper describes the main data mining clustering algorithms, proposes a clustering-based method for dealing with traffic flow data, applies it to actual traffic data processing, and finally applies the clustering algorithm to the analysis of the traffic volume data of the various vehicle types at each highway toll station. Keywords: Clustering algorithm, intelligent transportation, data analysis, application.
1 Introduction

With the increasing popularity of the intelligent transportation system concept and the rapid development of its applications, the collection of traffic accident data and the testing of transportation systems have become its most important parts and should be developed with priority. Basic traffic and accident information mainly includes traffic flow, speed, vehicle spacing, vehicle type, road share, information on illegal vehicles, and traffic accident detection information. Traffic flow data and traffic information are commonly collected with induction-coil detectors. Using new data mining technology to replace traditional methods of data analysis and interpretation is necessary and meaningful. Given the uncertainty of traffic information, the traffic system can, on the basis of the traditional decision support system of database, knowledge base and model base, use the theory and technology of data warehousing, OLAP, data mining and expert systems to build a new generation of data analysis system, apply data mining methods (classification algorithms, clustering algorithms, decision tree algorithms, time-sequence algorithms, neural network algorithms, etc.), and study the establishment of mining models specific to traffic information to deal with traffic flow data. The traffic flow data include the information dynamically collected by a variety of sensors (CO/VI detector, light intensity detector, vehicle loop detector, wind speed and direction
M. Zhu (Ed.): ICCIC 2011, Part VI, CCIS 236, pp. 467–473, 2011.
© Springer-Verlag Berlin Heidelberg 2011
468
L. Qiong, Y. Jie, and Z. Jinfang
detector, etc.), which also include traffic speed, traffic volume and lane occupancy rate data. In the past, however, these vast amounts of data were neither effectively organized nor processed at a deep level. At present, with the constant development of data mining technology in different areas, the way people find information is changing. Transportation flow data are now huge in volume, often described as "abundant data, but lacking information". The fast-growing transportation data are generally stored in databases, so how to obtain useful information from these masses of data through data mining, and how to find the interconnections among the data, becomes an essential problem; applied research on data mining technology in transportation will promote the development of future highways. Studying the application of data mining technology to transportation flow data is therefore meaningful work: with the continuous development of data mining technology, applying it to the transportation industry reasonably and scientifically will effectively serve transportation.
2 The Summary of Clustering Algorithm

A clustering algorithm is the process of grouping a collection of physical or abstract objects into multiple classes of similar objects. A cluster is a set of data objects generated by the clustering: objects in the same cluster are similar to one another and dissimilar to objects in other clusters. Cluster analysis can be used as a standalone tool to obtain the distribution of the data, observe the characteristics of each cluster, and focus on specific clusters for further analysis. In addition, cluster analysis can serve as a pre-processing step for other algorithms (such as feature extraction and classification): clustering is run first to find potential relationships, and those algorithms then process the generated clusters. The quality of the clustering directly influences the analysis, so data mining places the following basic requirements on a clustering algorithm:
1. Scalability: many clustering algorithms are robust on small data sets, but clustering a large-scale database containing millions of data objects may produce biased results. Clustering algorithms therefore need to be highly scalable.
2. Constraint-based clustering: practical applications may require clustering under various constraints. Finding data groupings that both satisfy specific constraints and have good clustering properties is a challenging task.
3. Discovery of clusters of arbitrary shape: many clustering algorithms determine clusters using Euclidean or Manhattan distance and tend to find clusters of similar density and size with nearly spherical shape, but a cluster may take any shape. Algorithms that can discover clusters of arbitrary shape are therefore important.
4. Insensitivity to the input order: some clustering algorithms are sensitive to the order of the input data; for the same data set presented in different orders, the same algorithm may produce very different clustering results.
5. High-dimensional data processing: a database may contain many dimensions or attributes. Many clustering algorithms are good at handling one-dimensional or
Application of Clustering Algorithm in Intelligent Transportation Data Analysis
469
low-dimensional data; for high-dimensional data, however, the clustering quality is rarely assured, whereas in low dimensions the quality of a clustering can usually be judged well. Clustering algorithms therefore need to be able to handle high-dimensional data.
6. Anti-interference capability: in actual applications most data contain isolated points and unknown, missing or wrong values. A clustering algorithm should be able to resist such noisy data, otherwise the quality of the clustering results cannot be ensured.
3 K-Means Algorithm

K-means is an iterative clustering algorithm: during the iterations, objects are moved between clusters until the ideal clustering is obtained, and each cluster is represented by the mean value of its objects. In the clustering obtained by k-means, objects within a cluster have high similarity, while objects in different clusters are highly dissimilar. The algorithm proceeds as follows:
(1) From the n data objects, randomly select k objects as the initial cluster centers;
(2) Compute the average of each cluster and use it as the cluster's representative;
(3) Compute the distance of each object to these center objects, and re-assign each object according to the minimum distance;
(4) Return to the second step and re-compute the average of each (changed) cluster. This process is repeated until the criterion function no longer changes significantly or the cluster assignments no longer change.
Generally, k-means uses the squared-error criterion, defined as

E = \sum_{i=1}^{k} \sum_{p \in C_i} \lVert p - m_i \rVert^2    (1)
where E is the sum of the squared errors between all objects in the data set and their corresponding cluster centers, p is a given data object, and m_i is the mean of cluster C_i (both p and m_i are multi-dimensional). K-means is relatively scalable and efficient for large databases; the time complexity of the algorithm is O(tkn), where t is the number of iterations, k the number of clusters and n the number of objects. Under normal circumstances it terminates at a local optimum. However, k-means can only be used when the mean is meaningful and is not applicable to categorical variables; the number of clusters to generate must be given in advance; it is very sensitive to noise and abnormal data; and it cannot process clusters of non-convex shape.
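Steps (1)–(4) and the criterion of formula (1) can be sketched for one-dimensional data as follows; initializing the centers from the first k points and stopping when assignments stabilize are simplifying assumptions.

```cpp
#include <vector>
#include <cmath>
#include <cstddef>

// Simple 1-D k-means: assign each point to the nearest center (step 3),
// recompute each center as its cluster mean (step 2), and repeat (step 4)
// until the assignments stop changing, i.e., the criterion E of formula (1)
// no longer improves.
std::vector<int> kmeans1d(const std::vector<double>& p, int k, int maxIter = 100) {
    std::vector<double> m(p.begin(), p.begin() + k);   // initial centers: first k points (assumption)
    std::vector<int> assign(p.size(), -1);
    for (int it = 0; it < maxIter; ++it) {
        bool changed = false;
        for (std::size_t i = 0; i < p.size(); ++i) {   // nearest-center assignment
            int best = 0;
            for (int c = 1; c < k; ++c)
                if (std::fabs(p[i] - m[c]) < std::fabs(p[i] - m[best])) best = c;
            if (assign[i] != best) { assign[i] = best; changed = true; }
        }
        if (!changed) break;                           // assignments (and E) are stable
        std::vector<double> sum(k, 0.0);
        std::vector<int> cnt(k, 0);
        for (std::size_t i = 0; i < p.size(); ++i) { sum[assign[i]] += p[i]; ++cnt[assign[i]]; }
        for (int c = 0; c < k; ++c)                    // cluster means as new centers
            if (cnt[c]) m[c] = sum[c] / cnt[c];
    }
    return assign;
}
```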
4 K-Center Algorithm

The k-center (k-medoids) algorithm, also known as PAM, represents each cluster by an object near its center. First a representative object is randomly selected for each cluster, and each remaining object is assigned to the nearest cluster according to its distance from the representative object; representative objects are then repeatedly replaced with
470
L. Qiong, Y. Jie, and Z. Jinfang
non-representative objects in order to improve the quality of the clustering. The algorithm proceeds as follows:
(1) From the n data objects, randomly select k objects as the initial cluster representatives (centers);
(2) According to the representative object of each cluster, compute the distance of every object to these center objects, and re-assign each object according to the minimum distance;
(3) Randomly choose a non-central object O_random and compute the total cost difference of exchanging it with a center object O_j;
(4) If the cost difference is negative, exchange O_random and O_j to form the k center objects of the new clustering;
(5) Return to the second step and re-compute the center of each (changeable) cluster. This process is repeated until the criterion function no longer changes significantly or the objects no longer change; the criterion function is the same as in the k-means algorithm.
When there are noise and outlier data, the k-center algorithm is better than k-means, but its computation is costly, and the time complexity of the algorithm does not scale well to large databases.
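The exchange test of steps (3)–(4) compares the total distance of every object to its nearest medoid before and after a swap. A one-dimensional sketch of that cost, with the medoids given as indices into the data (an illustrative representation), is:

```cpp
#include <vector>
#include <cmath>
#include <algorithm>
#include <cstddef>

// Cost used by the k-medoids (PAM) exchange test: the total distance of every
// object to its nearest medoid. A swap O_random <-> O_j is accepted when it
// lowers this cost, i.e., when the cost difference is negative.
double totalCost(const std::vector<double>& p, const std::vector<int>& medoids) {
    double cost = 0.0;
    for (double x : p) {
        double best = std::fabs(x - p[medoids[0]]);
        for (std::size_t c = 1; c < medoids.size(); ++c)
            best = std::min(best, std::fabs(x - p[medoids[c]]));
        cost += best;
    }
    return cost;
}
```

Comparing totalCost for the current medoid set and for the set after a candidate swap implements step (4).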
5 Model-Based Clustering

The model-based approach assumes a model for each cluster and then looks for data sets that fit the model well. Such a model may be the distribution density function of the data points in space, or something else; an underlying assumption is that the data set is generated by a series of probability distributions. There are usually two directions: statistics-based methods and neural-network-based methods. COBWEB is a popular, simple incremental conceptual clustering algorithm whose input objects are described by categorical attributes; COBWEB creates a hierarchical clustering in the form of a classification tree. A classification tree differs from a decision tree: each node in the classification tree corresponds to a concept and contains a probabilistic description of that concept, summarizing the objects under the node. The probabilistic description includes the probability of the concept and conditional probabilities of the form P(A_i = V_ij | C_k), where A_i = V_ij is an attribute-value pair and C_k is a concept class (counts are accumulated and stored in each node to compute the probabilities). This is the difference from a decision tree, which labels branches rather than nodes and uses logical rather than probabilistic descriptors. The sibling nodes at a given level of the classification tree form a partition. To classify an object with the classification tree, a partial matching function is used to move down the path of "best" matching nodes in the tree. COBWEB uses a heuristic evaluation measure, called category utility, to guide the tree construction. Category utility (CU) is defined as:
CU = \frac{1}{n} \sum_{k=1}^{n} P(C_k) \left[ \sum_{i} \sum_{j} P(A_i = V_{ij} \mid C_k)^2 - \sum_{i} \sum_{j} P(A_i = V_{ij})^2 \right]    (2)
where {C_1, C_2, ..., C_n} are the concepts (or "categories") formed at a given level of the tree and n is their number. Category utility rewards both intra-class similarity and inter-class dissimilarity:
(1) The probability P(A_i = V_ij | C_k) reflects intra-class similarity. The larger this value, the larger the proportion of category members that share the attribute-value pair, and the more predictable the pair is for category members.
(2) The probability P(C_k | A_i = V_ij) reflects inter-class dissimilarity. The larger this value, the smaller the proportion of objects in contrasting categories that share the attribute-value pair, and the more predictive the pair is of the category.
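Formula (2) can be evaluated directly once the probability tables are known. In the sketch below the conditional and unconditional probabilities are passed in as flattened tables; this representation is illustrative, not COBWEB's actual node-count bookkeeping.

```cpp
#include <vector>
#include <cstddef>

// Category utility, formula (2): CU = (1/n) * sum_k P(C_k) *
//   [ sum_{i,j} P(A_i=V_ij | C_k)^2 - sum_{i,j} P(A_i=V_ij)^2 ].
// condP[k] holds P(A_i=V_ij | C_k) for every attribute-value pair, flattened;
// margP holds the matching unconditional probabilities P(A_i=V_ij).
double categoryUtility(const std::vector<double>& pC,
                       const std::vector<std::vector<double>>& condP,
                       const std::vector<double>& margP) {
    double margSq = 0.0;
    for (double p : margP) margSq += p * p;           // sum P(A_i=V_ij)^2
    double cu = 0.0;
    for (std::size_t k = 0; k < pC.size(); ++k) {
        double condSq = 0.0;
        for (double p : condP[k]) condSq += p * p;    // sum P(A_i=V_ij | C_k)^2
        cu += pC[k] * (condSq - margSq);
    }
    return cu / pC.size();                            // n = number of categories
}
```

For two equally likely classes that perfectly predict one binary attribute, the utility is positive; for classes that predict nothing beyond the marginals, it is zero.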
6 The Application of Clustering Algorithm in the Transportation

Clustering algorithms are widely applied in transportation. The main application areas are the following: cluster analysis of traffic flow, used in urban transportation corridor planning; cluster analysis of urban intersections, applied to traffic management and traffic flow forecasting; highway planning and design, which uses data mining clustering algorithms extensively; and the identification of accident-prone points on highways. Clustering algorithms fall into five categories: partition-based, hierarchy-based, density-based, grid-based and model-based. The question now is how to choose the appropriate algorithm for the analysis. Density-based methods regard clusters as high-density regions of objects in the data space separated by low-density regions, and are suitable for filtering noise and finding clusters of arbitrary shape. Grid-based clustering methods are suitable for handling high-dimensional data sets. Model-based algorithms locate clusters by constructing a spatial distribution density function reflecting the data points. Because the data here are the numbers of vehicles passing the different toll stations, hierarchy-based clustering and the k-means algorithm are adopted; both can be implemented efficiently and are fast clustering methods. Hierarchical clustering is therefore used: it provides a cluster analysis function and can perform cluster analysis of both variables and samples for a variety of data types.
Several issues should be noted when selecting the clustering factors. The factors must meet the needs of the cluster analysis: if the chosen factors cannot meet those needs, or cannot provide good discrimination for the analysis, the cluster analysis will be difficult.
1. The values should not differ by orders of magnitude; this can be solved by standardization.
2. The variables should not have strong linear relationships with one another.
472
L. Qiong, Y. Jie, and Z. Jinfang
3. The clustering factors should be strongly representative and reflect the traffic characteristics of the various toll stations.
Taken together, the clustering factors are chosen as the proportions of the traffic volume made up by passenger-car types 1 to 4 and by truck types 1 to 5 at each toll station. There are two types of hierarchical clustering, namely Q-type and R-type. Q-type clustering clusters the samples: samples with similar characteristics are brought together and samples with large differences are separated. R-type clustering clusters the variables: similar variables are gathered and very different variables are separated, so that a few representative variables can be chosen from each group of similar variables for analysis, reducing the number of variables and achieving dimension reduction. This study clusters the toll stations, i.e., it clusters samples, so it is a Q-type clustering. In the first step of the clustering algorithm, each toll station is regarded as its own category, so the initial n toll stations are divided into n classes. The distance between toll stations is then calculated by some algorithm, and the two toll stations with the smallest distance are merged into one category, so that the n classes become n-1 classes. There are many methods for calculating the distance; here the Euclidean distance is used. The equation is:
EUCLID(x, y) = \sqrt{ \sum_{i=1}^{k} (x_i - y_i)^2 }    (3)
Substituting the above vectors into the equation, the Euclidean distance of each pair of toll stations is calculated. Using the distances between categories, the closeness of the remaining individuals and small categories is measured, and the closest individual and small category are merged into one category; the distance used is the average distance between the individuals of the two subgroups. In other words, if a class contains more than one toll station, the distance to that class is the average of the distances to its elements. This process is repeated, gathering all individuals and small classes into ever larger categories, until all individuals come together to form a single category. First the distance between each pair of toll stations is computed; the program is as follows:

#include <iostream>
#include <cmath>
using namespace std;

double a[14][9] = { /* proportions of the nine vehicle types for each of the 14 toll stations */ };

int main() {
    for (int i = 0; i < 14; i++) {
        for (int k = i + 1; k < 14; k++) {
            double s = 0;
            for (int j = 0; j < 9; j++)
                s += (a[i][j] - a[k][j]) * (a[i][j] - a[k][j]);
            cout << "EUCLID(" << i << "," << k << ") = " << sqrt(s) << endl;
        }
    }
    return 0;
}
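The repeated merging described above (start from n single-station classes, merge the closest pair under the average distance, stop at the desired number of classes) can be sketched as follows; the data layout and target class count are illustrative.

```cpp
#include <vector>
#include <cmath>
#include <limits>
#include <cstddef>

// Agglomerative (Q-type) clustering sketch: every toll station starts as its
// own class; the two closest classes are repeatedly merged, with the distance
// between classes taken as the average pairwise Euclidean distance (average
// linkage), until the requested number of classes remains.
using Vec = std::vector<double>;

double euclid(const Vec& x, const Vec& y) {
    double s = 0.0;
    for (std::size_t i = 0; i < x.size(); ++i) s += (x[i] - y[i]) * (x[i] - y[i]);
    return std::sqrt(s);
}

std::vector<std::vector<int>> agglomerate(const std::vector<Vec>& data, int targetK) {
    std::vector<std::vector<int>> cls;
    for (int i = 0; i < (int)data.size(); ++i) cls.push_back({i}); // n initial classes
    while ((int)cls.size() > targetK) {
        double best = std::numeric_limits<double>::max();
        std::size_t bi = 0, bj = 1;
        for (std::size_t i = 0; i + 1 < cls.size(); ++i)
            for (std::size_t j = i + 1; j < cls.size(); ++j) {
                double d = 0.0;                        // average linkage distance
                for (int a : cls[i]) for (int b : cls[j]) d += euclid(data[a], data[b]);
                d /= cls[i].size() * cls[j].size();
                if (d < best) { best = d; bi = i; bj = j; }
            }
        cls[bi].insert(cls[bi].end(), cls[bj].begin(), cls[bj].end()); // merge closest pair
        cls.erase(cls.begin() + bj);
    }
    return cls;
}
```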
7 Summary

With the continuous development of data mining technology, applying data mining techniques to the transportation sector can serve transportation scientifically and reasonably. In intelligent transportation, the analysis of traffic flow data is very important, and analyzing traffic data intelligently is a difficult problem, so using new data mining techniques to replace traditional data analysis and interpretation methods is necessary and meaningful. A clustering algorithm is the process of grouping a collection of physical or abstract objects into multiple classes of similar objects. This paper has described the main data mining clustering algorithms, proposed a clustering-based method for dealing with traffic flow data, applied it to actual traffic data processing, and finally applied the clustering algorithm to the analysis of the traffic volume data of the various vehicle types at each highway toll station.
References
1. Wang, B., Wu, Z.: Data Mining Technology and its Application Situation. Statistics and Strategy 10, 1–2 (2009)
2. Cao, W.: Application Analysis of Data Mining in the Intelligent Transportation. Computer Engineer 7, 91–92 (2008)
3. Xiang, B., Qian, G.: Clustering Algorithm Study Summary. Database and Information Management, 1500–1501 (2009)
4. Zhou, D.: Research and Application of Clustering Algorithm in Data Mining, pp. 23–24. Tianjing University (2008)
5. Mao, G., Duan, L., Wang, S., Shi, Y.: Data Mining Principles and Algorithms, pp. 174–181. Tsinghua University Press, Beijing (2007)
Exploration and Research of Volterra Adaptive Filter Algorithm in Non-linear System Identification Wen Xinling, Ru Yi, and Chen Yu Zhengzhou Institute of Aeronautical Industry Management, China [email protected]
Abstract. In order to reduce the vibration and noise of an aircraft main wing through active control, research on non-linear adaptive filter algorithms is very important. This paper carries out a preliminary exploration of the Volterra LMS and RLS algorithms and compares the two algorithms in theory. Slowly changing system parameters can be tracked using the Volterra adaptive filter, and simulation shows that the convergence speed of the RLS algorithm is quicker than that of the LMS algorithm, although it requires a larger amount of calculation. Because of the complex non-linear characteristics involved, traditional algorithms suffer shortcomings such as slow convergence speed and limited convergence precision, which will be the emphasis of deeper research in the future. Keywords: Volterra series, LMS, RLS, non-linear adaptive filter, convergence speed.
1 Introduction

When an aircraft maneuvers at a large or moderate angle of attack, its wing produces unstable, separated, high-strength eddy-current loads. The interaction of these eddy-current loads with the airframe can cause strong vibration on the surface of the aircraft, wing and tail; in serious cases this affects the aircraft's maneuverability, and long-term vibration also reduces the fatigue life of the main wing. According to a certain control strategy, a control signal is obtained through amplification and compensation, and vibration control is realized by driving components on the corresponding structure after amplification and processing. Because the eddy-current load on an aircraft main wing is complex and usually exhibits a non-linear, chaotic relationship, research on non-linear adaptive filter algorithms is particularly important for vibration reduction and de-noising of the main wing. To solve non-linear problems, people have in recent years established various non-linear adaptive filtering methods, such as neural networks, homomorphic filters, morphological filters and Volterra filters. These methods are widely used in system identification, chaotic forecasting, image processing, spread-spectrum communication, and so on. Because the Volterra series is a kind of functional, under the condition that the input signal energy is limited, most non-linear systems can be approximated to arbitrary accuracy by a Volterra series.
M. Zhu (Ed.): ICCIC 2011, Part VI, CCIS 236, pp. 474–479, 2011.
© Springer-Verlag Berlin Heidelberg 2011
Exploration and Research of Volterra Adaptive Filter Algorithm
475
Therefore, the Volterra adaptive filter algorithm and its applications have attracted people's attention.
2 Volterra Filter of Non-linear System

The relationship between the input x(n) and output y(n) of a discrete non-linear Volterra system can be expressed as the Volterra series of formula (1) [1]:

y(n) = h_0 + \sum_{m_1=0}^{N-1} h_1(m_1) x(n-m_1) + \sum_{m_1=0}^{N-1} \sum_{m_2=m_1}^{N-1} h_2(m_1, m_2) x(n-m_1) x(n-m_2)
+ \sum_{m_1=0}^{N-1} \sum_{m_2=m_1}^{N-1} \sum_{m_3=m_2}^{N-1} h_3(m_1, m_2, m_3) x(n-m_1) x(n-m_2) x(n-m_3) + \cdots
+ \sum_{m_1=0}^{N-1} \sum_{m_2=m_1}^{N-1} \cdots \sum_{m_d=m_{d-1}}^{N-1} h_d(m_1, m_2, \ldots, m_d) x(n-m_1) x(n-m_2) \cdots x(n-m_d) + \cdots    (1)
From formula (1) we can see that the Volterra series captures the dynamic behavior of the system: it can be regarded as a Taylor series with memory, and can therefore describe non-linear dynamic systems. In the formula, h_d(m_1, m_2, ..., m_d) is called the d-th order Volterra kernel and N is the memory length. Formula (1) shows that a non-linear system has infinitely many Volterra kernels, so in practical applications truncation must be carried out, both in the order d and in the memory length N. The truncation should be chosen according to the actual system type and the required precision; usually a second-order Volterra series truncation model is adopted. Setting h_0 = 0, formula (1) simplifies to formula (2):
y(n) = \sum_{m_1=0}^{N-1} h_1(m_1) x(n-m_1) + \sum_{m_1=0}^{N-1} \sum_{m_2=m_1}^{N-1} h_2(m_1, m_2) x(n-m_1) x(n-m_2)    (2)
From formulas (1) and (2), when the order d and the memory length N are large, the amount of filter computation is large. The identification problem of a non-linear system based on the Volterra series is, for an unknown non-linear system, to use observations of the input and output signals and some identification rule, with online recursive methods, to identify the various orders of the system and determine the Volterra kernel coefficients, thereby defining the non-linear system. In formula (2), if the Volterra kernel is symmetric, h_d(m_1, m_2, ..., m_d) is unchanged under any permutation of m_1, m_2, ..., m_d; thus formula (2) has Num = N + N(N+1)/2 Volterra kernel coefficients. Define the system input vector X(n), output Y(n) and kernel vector H(n) as shown in formula (3).
476
W. Xinling, R. Yi, and C. Yu
X(n) = [x(n), x(n-1), \ldots, x(n-N+1), x^2(n), x(n)x(n-1), \ldots, x(n)x(n-N+1), x^2(n-1), \ldots, x^2(n-N+1)]^T

H(n) = [h_1(0; n), h_1(1; n), \ldots, h_1(N-1; n), h_2(0, 0; n), h_2(0, 1; n), \ldots, h_2(0, N-1; n), h_2(1, 1; n), \ldots, h_2(N-1, N-1; n)]^T

Y(n) = H^T(n) X(n)    (3)
Formulas (2) and (3) show that, through this state expansion, a non-linear system can be represented as a linear combination of the components of the input vector X(n); this is the advantage of representing a non-linear system with the Volterra series model [2]. By applying a Volterra adaptive filter algorithm, we can identify the unknown non-linear system, making the error signal e(n) minimal in some sense, i.e., minimizing some cost function J(n) of e(n). From Figure 1, W(n) is the coefficient vector of the Volterra filter; when the cost function J(n) reaches its minimum, we can take the kernel vector H(n) ≈ W(n). Different choices of the cost function J(n) yield different adaptive algorithms.
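The expansion of formula (3) can be sketched as follows; the function name and the history layout x(n), x(n-1), ..., x(n-N+1) are assumptions for illustration.

```cpp
#include <vector>
#include <cstddef>

// Second-order Volterra input vector of formula (3): the N delayed samples
// x(n-m1), followed by the N(N+1)/2 products x(n-m1)*x(n-m2) with m2 >= m1,
// giving Num = N + N(N+1)/2 entries. hist holds x(n), x(n-1), ..., x(n-N+1).
std::vector<double> volterraInput(const std::vector<double>& hist) {
    std::vector<double> X(hist);                       // linear part
    for (std::size_t m1 = 0; m1 < hist.size(); ++m1)
        for (std::size_t m2 = m1; m2 < hist.size(); ++m2)
            X.push_back(hist[m1] * hist[m2]);          // quadratic part
    return X;
}
```

The filter output Y(n) of formula (3) is then the inner product of the weight vector with this expanded vector.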
3 Volterra LMS Algorithm

Because the Volterra input vector is defined differently from the linear filter input vector, the convergence condition is also different. Let the desired signal of the Volterra series adaptive filter (i.e., the signal to be estimated) be y(n), and define the cost function J(n) as formula (4):

J(n) = e^2(n) = [y(n) - W^T(n) X(n)]^2    (4)
This corresponds to the traditional least mean-square error (LMS) algorithm; the Volterra filter LMS adaptive algorithm is given by formula (5):

e(n) = y(n) - W^T(n) X(n)
W(n+1) = W(n) + \mu X(n) e(n)    (5)
In formula (5), the initial value W(0) of W(n) can be set from prior knowledge, or simply as W(0) = [0 0 ... 0]^T. The step factor μ is a constant whose value is selected according to the convergence speed, tracking performance and stability of the LMS algorithm, and X(n)e(n) is the iterative update direction vector. Like all adaptive LMS algorithms, the Volterra algorithm has the convergence problem of reaching the optimal weight vector (namely the Volterra kernel vector) with minimum mean-square error. Usually, when formula (6) is met, convergence of the weights is ensured [3]:
0 < \mu < \frac{1}{\lambda_{\max}}    (6)
In formula (6), λ_max is the largest eigenvalue of the input correlation matrix R = E[X(n) X^T(n)]. In addition, because λ_max is not larger than the trace of the input correlation matrix (the sum of its diagonal elements), convergence of the weights is better guaranteed when μ satisfies formula (7):

0 < \mu < \frac{1}{\mathrm{tr}[R]}    (7)
Because the sum of the diagonal elements of the input correlation matrix R is usually easier to estimate than its largest eigenvalue, formula (7) is more convenient in applications; in practice, μ is generally selected at about 1/10 of the upper limit in formula (7) [4]. In the second-order truncated model, the first-order and second-order Volterra terms usually use different convergence factors, and the weight-vector iteration of the algorithm is formula (8):

W(n+1) = W(n) + \begin{bmatrix} \mu_1 & 0 \\ 0 & \mu_2 \end{bmatrix} e(n) X(n)    (8)
In scalar form this is formula (9):

w_{1, m_1}(n+1) = w_{1, m_1}(n) + \mu_1 e(n) x(n - m_1)
w_{2, m_1, m_2}(n+1) = w_{2, m_1, m_2}(n) + \mu_2 e(n) x(n - m_1) x(n - m_2)    (9)

where m_1 = 0, 1, ..., N-1 and m_2 = 0, 1, ..., N-1. The LMS algorithm has the advantage of a small amount of calculation, but the non-linearity of the system enlarges the eigenvalue spread of the input-signal correlation matrix, so its convergence speed becomes slow [5].
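One iteration of the update in formula (5) can be sketched as follows, using a single step size μ for simplicity; the two-factor version of formulas (8)/(9) would scale the linear and quadratic parts of X(n) by μ1 and μ2 respectively.

```cpp
#include <vector>
#include <cstddef>

// One Volterra LMS iteration, formula (5):
//   e(n) = y(n) - W^T(n) X(n),  W(n+1) = W(n) + mu * e(n) * X(n).
// Returns the error e(n). X is the expanded Volterra input vector.
double lmsStep(std::vector<double>& W, const std::vector<double>& X,
               double y, double mu) {
    double yhat = 0.0;
    for (std::size_t i = 0; i < W.size(); ++i) yhat += W[i] * X[i];
    double e = y - yhat;                               // a-priori error
    for (std::size_t i = 0; i < W.size(); ++i) W[i] += mu * e * X[i];
    return e;
}
```

Repeated calls with the same input shrink the error, illustrating convergence toward the kernel vector.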
4 Volterra RLS Algorithm

Define the cost function J(n) as formula (10):

J(n) = \sum_{k=0}^{n} \lambda^{n-k} \left[ y(k) - W^T(n) X(k) \right]^2    (10)
In formula (10), λ is called the forgetting factor; it is introduced mainly to track non-stationary components of the input signal. Setting the derivative of J(n) with respect to W(n) equal to zero yields the recursion for the optimal W(n), i.e., the RLS algorithm, which can be summarized as formula (11):
K(n) = \frac{\lambda^{-1} P(n-1) X(n)}{1 + \lambda^{-1} X^T(n) P(n-1) X(n)}

\varepsilon(n) = y(n) - W^T(n-1) X(n)
$$W(n) = W(n-1) + K(n)\varepsilon(n) \qquad (11)$$
The RLS algorithm has the advantage of fast convergence; its shortcoming is the large amount of calculation it requires [6].
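Formula (11) can be sketched on the extended (linear plus quadratic) input vector as follows. This is a minimal sketch, not from the paper: it assumes the standard inverse-correlation update P(n) = (1/λ)[P(n−1) − K(n)Xᵀ(n)P(n−1)], which the excerpt does not show, and the initialization δ and forgetting factor are illustrative.

```python
import numpy as np

def volterra_rls(x, d, N=2, lam=0.99, delta=100.0):
    """RLS adaptation (formula (11)) of a second-order Volterra filter.
    X(n) stacks the linear taps and their upper-triangular products."""
    quad_idx = [(i, j) for i in range(N) for j in range(i, N)]
    M = N + len(quad_idx)
    W = np.zeros(M)
    P = delta * np.eye(M)            # inverse-correlation estimate
    err = np.zeros(len(x))
    for n in range(N - 1, len(x)):
        taps = x[n - np.arange(N)]
        X = np.concatenate([taps, [taps[i] * taps[j] for i, j in quad_idx]])
        PX = P @ X
        K = PX / (lam + X @ PX)          # gain vector of formula (11)
        e = d[n] - W @ X                 # a priori error eps(n)
        W = W + K * e                    # weight update of formula (11)
        P = (P - np.outer(K, PX)) / lam  # standard inverse-correlation update
        err[n] = e
    return W, err
```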
5 Simulation and Summary

An adaptive Volterra filter can identify non-linear systems well and can track slowly changing system parameters. The RLS algorithm converges faster than the LMS algorithm, but it requires a large amount of calculation. The Volterra LMS and RLS algorithms were simulated in Matlab; the weight-coefficient error norm of LMS is shown in Figure 1.
Fig. 1. The weight coefficients norm of LMS
The weight-coefficient error norm of RLS is shown in Figure 2.
Fig. 2. The weight coefficients norm of RLS
Because Volterra series can describe non-linear systems with memory effects, adaptive Volterra filters can solve a broad class of non-linear system identification problems. Research usually focuses on the second-order Volterra system, from which the extension to higher-order Volterra systems is easy to discuss; however, as the order increases, the number of filter coefficients grows rapidly and the amount of calculation becomes large.

Acknowledgement. This work was supported by the Aeronautical Science Foundation of China (No. 2009ZD55001 and No. 2010ZD55006).
References

1. Liu, L., Hu, P., Han, J.: A Modified LMS Algorithm for Second-order Volterra Filter. Journal of China Institute of Communications, 122–123 (2002)
2. Long, J., Wang, Z., Xia, S., Duan, Z.: An Uncorrelated Variable Step-Size Normalized LMS Adaptive Algorithm. Computer Engineering & Science, 60–61 (2006)
3. Feuer, A., Weinstein, E.: Convergence analysis of LMS filters with uncorrelated Gaussian data. IEEE Trans. Acoust., Speech, Signal Process., 222–229 (1985)
4. Mathews, V.J.: Adaptive polynomial filters. IEEE Signal Processing Magazine, 10–26 (1991)
5. Gelfand, S.B., Wei, Y., Krogmeier, J.V.: The stability of variable step-size LMS algorithms. IEEE Trans. on Signal Processing, 3277–3288 (1999)
6. Wang, G., Wang, C.: Nonlinear systems identification based on adaptive Volterra filter. Electronics Optics & Control, 43 (2005)
Application of Improved Genetic Algorithms in Structural Optimization Design

Shengli Ai1 and Yude Wang2

1 Station of Construction Quality Supervision, Zhengzhou Coal Industry (Group) Co., Ltd., Zhengzhou, Henan, China, 450000
2 Security Division of Hebei University of Engineering, Handan, Hebei, China, 056038
[email protected]
Abstract. The genetic algorithms (GAs) are broadly applicable stochastic search and optimization techniques. However, there exists a premature convergence phenomenon in some GAs. To overcome this deficiency, two improved genetic algorithms are proposed in this study. The first one is a hybrid algorithm of the genetic algorithm and the downhill simplex method, while the second one is the combination of the genetic algorithm and the conjugate gradient method. Then, the mathematical optimization model of the 10 bar truss is built, and both of the improved algorithms are validated on the numerical example and compared with the simple genetic algorithm. The simulation results indicate that the two proposed techniques show stronger robustness in finding feasible optimum designs than the simple genetic algorithm.

Keywords: genetic algorithm, downhill simplex method, conjugate gradient method, optimization, truss.
1 Introduction

Structural optimization design is to select design variables such that the cost or weight of the structure is minimized while all the design constraints are satisfied. Genetic algorithms, as popular and powerful biologically inspired optimization techniques, are perhaps the most widely known types of evolutionary computation methods today. In the past few years the genetic algorithm community has turned much of its attention to optimization problems in industrial engineering, and many improved genetic algorithms have been proposed. Vedat Togan and Ayse T. Daloglu [1] discussed two new self-adaptive member grouping strategies and a new strategy to set the initial population in the genetic algorithm (GA). D. Devaraj [2] presented an improved genetic algorithm approach for solving the multi-objective reactive power dispatch problem. Eysa Salajegheh and Saeed Gholizadeh [3] proposed two different strategies to reduce the computational cost of the standard GA. Kaveh and Khanlari [4] used the genetic algorithm to identify the mechanism corresponding to the least possible load factor. Ahn et al. [5] developed a memory-efficient elitist genetic algorithm for solving complex optimization problems quickly and effectively. Wang, R.L. and Okazaki, K. [6] presented an improved genetic algorithm in which crossover and mutation are performed conditionally instead of probabilistically. Kaveh and Kalatjari [7] performed size and topology optimization of trusses using a genetic algorithm, the force method, and some concepts of graph theory. In this study, two improved genetic algorithms are proposed and validated through an example of the 10 bar truss problem. The analysis shows that both of the improved genetic algorithms can provide better results in finding optimal solutions than the simple genetic algorithm.

M. Zhu (Ed.): ICCIC 2011, Part VI, CCIS 236, pp. 480–487, 2011. © Springer-Verlag Berlin Heidelberg 2011
2 Combination of Genetic Algorithms and Other Algorithms

Regular Genetic Algorithms. Genetic algorithms (GAs) are robust stochastic global optimization techniques based on the mechanisms of natural selection and natural genetics. They are population-based search algorithms with selection, crossover, mutation, and inversion operations. These population-based search techniques distinguish genetic algorithms from traditional point-by-point engineering optimization techniques. Regular genetic algorithms can be described as follows:

$$GA = (P(0), N, L, s, g, p, f, t), \qquad (1)$$

where $P(0) = (p_1(0), p_2(0), \ldots, p_N(0)) \in I^N$ is the initial population; $I = B^L = \{0,1\}^L$ is the encoding space of binary strings, whose length is $L$; $N$ is the population size; $L$ is the length of the binary strings; $s$ is the selection strategy; $g$ are the genetic operators, including the selection operator $Q_r$, the crossover operator $Q_c$ and the mutation operator $Q_m$; $p$ are the operating probabilities of the genetic operators, including the selection probability $P_r$, the crossover probability $P_c$ and the mutation probability $P_m$; $f$ is the fitness function; $t$ is the stopping criterion.

Downhill Simplex Method. The downhill simplex method is due to Nelder and Mead. The method requires only function evaluations, not the calculation of derivatives. In $N$-dimensional space, a simplex is a polyhedron with $N+1$ points (or vertices). We choose $N+1$ points and define an initial simplex. The method iteratively updates the worst point by four operations: reflection, reflection and expansion, one-dimensional contraction, and multiple contraction [8].
First, order and re-label the $N+1$ points so that $f(x_{N+1}) > \ldots > f(x_2) > f(x_1)$; then generate a trial point $x_r$ by reflection, such that $x_r = x_g + a \times (x_g - x_{N+1})$, where $x_g$ is the centroid of the $N$ best points in the vertices of the simplex [9]. The downhill simplex method repeats a series of steps as shown in Fig. 1.
Fig. 1. Procedure of the downhill simplex method
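The loop of Fig. 1 can be sketched as follows. This is a simplified Nelder–Mead variant, not the paper's implementation: it assumes the common coefficients (reflection 1, expansion 2, contraction 0.5, shrink 0.5) and uses only an inside contraction rather than the full one-dimensional/multiple contraction distinction.

```python
import numpy as np

def downhill_simplex(f, x0, step=0.5, iters=300):
    """Minimal downhill simplex: reflection, expansion, contraction,
    and shrink toward the best vertex."""
    n = len(x0)
    # initial simplex: x0 plus n points perturbed along the axes
    simplex = [np.array(x0, float)] + [np.array(x0, float) + step * e
                                       for e in np.eye(n)]
    for _ in range(iters):
        simplex.sort(key=f)                        # best first, worst last
        best, worst = simplex[0], simplex[-1]
        centroid = np.mean(simplex[:-1], axis=0)   # centroid x_g of the n best
        xr = centroid + 1.0 * (centroid - worst)   # reflection
        if f(xr) < f(best):
            xe = centroid + 2.0 * (centroid - worst)   # expansion
            simplex[-1] = xe if f(xe) < f(xr) else xr
        elif f(xr) < f(simplex[-2]):               # accept simple reflection
            simplex[-1] = xr
        else:
            xc = centroid + 0.5 * (worst - centroid)   # inside contraction
            if f(xc) < f(worst):
                simplex[-1] = xc
            else:                                  # shrink toward the best point
                simplex = [best] + [best + 0.5 * (v - best)
                                    for v in simplex[1:]]
    return min(simplex, key=f)
```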
Conjugate Gradient Method. The conjugate gradient method can be applied to minimize a non-linear function $f$ by considering its gradient $\nabla f(x^{(k)})$. Suppose

$$f(x) = \frac{1}{2} x^T Q x + b^T x + c, \quad x \in R^n, \qquad (2)$$

in which $Q$ is a real symmetric positive definite matrix, $b$ is a constant vector and $c$ is a constant. Take $x^{(1)}$ as the initial starting point and $p_1 = -\nabla f(x^{(1)})$ as the first step direction. We can get $(x^{(2)}, p_2), \ldots, (x^{(k)}, p_k)$ by an iterative process of the form

$$x^{(k+1)} = x^{(k)} + \alpha_k p_k, \qquad (3)$$

in which

$$\alpha_k = \arg\min_{\alpha} f(x^{(k)} + \alpha p_k). \qquad (4)$$

If $\nabla f(x^{(k)}) = 0$, the iteration comes to an end; otherwise let

$$p_{k+1} = -\nabla f(x^{(k+1)}) + \lambda_k p_k. \qquad (5)$$
Here

$$\lambda_k = \frac{p_k^T Q \nabla f(x^{(k+1)})}{p_k^T Q p_k}. \qquad (6)$$

For the purpose of reducing computational complexity, Eq. (6) should be revised before the conjugate gradient method is applied to general minimization problems [10,11]. The procedure of modification is as follows:

$$\lambda_k = \frac{p_k^T Q \nabla f(x^{(k+1)})}{p_k^T Q p_k}
= \frac{(Q \alpha_k p_k)^T \nabla f(x^{(k+1)})}{(Q \alpha_k p_k)^T p_k}
= \frac{[Q(x^{(k+1)} - x^{(k)})]^T \nabla f(x^{(k+1)})}{[Q(x^{(k+1)} - x^{(k)})]^T p_k}
= \frac{(\nabla f(x^{(k+1)}) - \nabla f(x^{(k)}))^T \nabla f(x^{(k+1)})}{(\nabla f(x^{(k+1)}) - \nabla f(x^{(k)}))^T p_k}
= \frac{\|\nabla f(x^{(k+1)})\|^2}{\|\nabla f(x^{(k)})\|^2} \qquad (7)$$
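Eqs. (3)–(5) with the Q-free $\lambda_k$ of Eq. (7) (the Fletcher–Reeves form) can be sketched as follows; the line-search routine is supplied by the caller, and the quadratic test problem of Eq. (2) is an illustrative assumption.

```python
import numpy as np

def conjugate_gradient(grad, x0, alpha_search, tol=1e-8, iters=100):
    """Fletcher-Reeves conjugate gradient per Eqs. (3)-(5),
    with lambda_k = ||g_{k+1}||^2 / ||g_k||^2 from Eq. (7)."""
    x = np.array(x0, float)
    g = grad(x)
    p = -g                                # first direction p1 = -grad f(x1)
    for _ in range(iters):
        if np.linalg.norm(g) < tol:       # stop when the gradient vanishes
            break
        alpha = alpha_search(x, p)        # line search of Eq. (4)
        x = x + alpha * p                 # Eq. (3)
        g_new = grad(x)
        lam = (g_new @ g_new) / (g @ g)   # Eq. (7)
        p = -g_new + lam * p              # Eq. (5)
        g = g_new
    return x

# illustrative quadratic of Eq. (2): f(x) = 1/2 x^T Q x + b^T x
Q = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([-1.0, -2.0])
grad = lambda x: Q @ x + b
# exact line search for a quadratic: alpha = -g^T p / (p^T Q p)
alpha_search = lambda x, p: -(grad(x) @ p) / (p @ Q @ p)
x_star = conjugate_gradient(grad, [0.0, 0.0], alpha_search)
```

With an exact line search on a quadratic, the iteration terminates after at most $n$ steps at the stationary point $Qx + b = 0$.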
Numerical Example. The 10 bar cantilever truss problem, due to its simple configuration, has been used as a benchmark to verify the efficiency of diverse optimization methods [12]. The initial geometry of the structure is shown in Fig. 2, and the loading conditions, material properties and constraints are given in Table 1.

Fig. 2. Initial geometry of the 10 bar truss
Table 1. Loading and design conditions for the 10 bar truss problem shown in Fig. 2

Properties                          Symbol    Value
Nodal load                          P         454 (kN)
Young's modulus                     E         6.896×10^4 (MPa)
Density                             ρ         2.77×10^-3 (kg/cm^3)
Bar length                          l         914.4 (cm)
Allowable stress                    [σmax]    ±177.25 (MPa)
Allowable vertical displacement     μmax      ±5.08 (cm)
Cross-sectional area (A1–A4)        x1        0.645 (cm^2) ≤ x1 ≤ 200 (cm^2)
Cross-sectional area (A5, A6)       x2        0.645 (cm^2) ≤ x2 ≤ 200 (cm^2)
Cross-sectional area (A7–A10)       x3        0.645 (cm^2) ≤ x3 ≤ 200 (cm^2)
The mathematical optimization model is as follows:

$$\begin{aligned}
\min\ & W(x_1, x_2, x_3) = \sum_{i=1}^{10} \rho_i l_i A_i \\
\text{s.t.}\ & \sigma_1 \le \sigma_{\max}, \ \ldots, \ \sigma_{10} \le \sigma_{\max}, \\
& \mu_1 \le \mu_{\max}, \ \ldots, \ \mu_4 \le \mu_{\max}, \\
& 0.645 \le x_1, x_2, x_3 \le 200
\end{aligned} \qquad (8)$$

Fig. 3. 3D graph of the iteration process of weight
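A common way to hand model (8) to a genetic algorithm is an exterior penalty function. The following sketch is not from the paper: the member lengths and member-to-group mapping are illustrative assumptions (the true layout follows Fig. 2), and the stress and deflection arrays would come from a truss finite-element analysis that is not shown here.

```python
import numpy as np

RHO = 2.77e-3                    # density (kg/cm^3), Table 1
L_BAR = 914.4                    # bar length (cm), Table 1
SIG_MAX, MU_MAX = 177.25, 5.08   # allowable stress (MPa) / deflection (cm)

# Hypothetical member lengths (six straight bars, four diagonals) and
# the member -> design-variable grouping of Table 1.
LENGTHS = L_BAR * np.array([1, 1, 1, 1, 1, 1,
                            np.sqrt(2), np.sqrt(2), np.sqrt(2), np.sqrt(2)])
GROUP = [0, 0, 0, 0, 1, 1, 2, 2, 2, 2]   # A1-A4 -> x1, A5-A6 -> x2, A7-A10 -> x3

def weight(x):
    """Objective of model (8): W = sum_i rho_i * l_i * A_i."""
    areas = np.asarray(x)[GROUP]
    return float(np.sum(RHO * LENGTHS * areas))

def penalized_fitness(x, stresses, deflections, r=1e3):
    """Exterior-penalty fitness: weight plus quadratic penalties for any
    violated stress or displacement bound of model (8).  The stresses and
    deflections must be supplied by a truss FEM analysis (not shown)."""
    viol = np.concatenate([np.maximum(0.0, np.abs(stresses) / SIG_MAX - 1.0),
                           np.maximum(0.0, np.abs(deflections) / MU_MAX - 1.0)])
    return weight(x) + r * float(np.sum(viol ** 2))
```

A feasible design incurs no penalty, so the GA minimizes pure weight inside the feasible region and is pushed back toward it outside.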
As mentioned before, the 10 bar truss problem is a benchmark problem for structural optimization. This problem has been solved by numerous mathematical as well as heuristic methods. In this paper, two improved algorithms based on genetic algorithms are applied to solve the problem. Improved algorithm 1 is a mixed algorithm of the genetic algorithm and the downhill simplex method, while improved algorithm 2 is the combination of the genetic algorithm and the conjugate gradient method. For the purpose of comparison, the results of the simple genetic algorithm are also provided, as shown in Fig. 4.

Fig. 4. Iteration process of the objective function
From the results shown in Fig. 4 it can be seen that the iteration processes of the two improved algorithms show basically the same tendency compared with that of the simple genetic algorithm. Besides the three design variables $x_1, x_2, x_3$, there are also 14 state variables, $\sigma_1, \ldots, \sigma_{10}$ and $\mu_1, \ldots, \mu_4$, which are subject to the stress and displacement constraints. Compared with the simple genetic algorithm, the two improved genetic algorithms in this study provide comparable results for the 14 state variables, as shown in Fig. 5.
Fig. 5. Comparison of state variables among the three different algorithms
A comparison of the results from the two improved algorithms with the simple genetic algorithm is presented in Table 2 and Fig. 6. It can be seen from the comparison that the results of the two improved algorithms are almost consistent, and both proposed techniques provide better results than the simple genetic algorithm.

Table 2. Comparison of results using different algorithms for the 10 bar truss problem

Algorithms                  x1 (cm^2)   x2 (cm^2)   x3 (cm^2)   W (N)
Improved algorithm 1        64.081      13.684      49.701      1430.678
Improved algorithm 2        64.079      13.682      49.703      1430.682
Simple genetic algorithm    63.160      10.273      53.804      1462.871

Fig. 6. Comparison of results for the 10 bar truss problem
3 Conclusions

In this paper, two improved genetic algorithms have been proposed to overcome the premature convergence deficiency existing in genetic algorithms. The first improved algorithm is a hybrid approach of the genetic algorithm and the downhill simplex method, while the second one is the combination of the genetic algorithm and the conjugate gradient method. One advantage of the two algorithms is that they depend little on the initial population. The new approaches have been validated through an example of the 10 bar truss problem and compared with the simple genetic algorithm. The numerical and graphical results clearly show that the improved algorithms are more robust in terms of accuracy, efficiency and reliability.
References

1. Togan, V., Daloglu, A.T.: An improved genetic algorithm with initial population strategy and self-adaptive member grouping. Computers and Structures 86(12), 1204–1218 (2008)
2. Devaraj, D.: Improved genetic algorithm for multi-objective reactive power dispatch problem. European Transactions on Electrical Power 17(6), 569–581 (2007)
3. Salajegheh, E., Gholizadeh, S.: Optimum design of structures by an improved genetic algorithm using neural networks. Advances in Engineering Software 36(12), 757–767 (2005)
4. Kaveh, A., Khanlari, K.: Collapse load factor of planar frames using modified genetic algorithm. Commun. Numer. Meth. Eng. 20, 911–925 (2004)
5. Ahn, W., Kim, P., Ramakrishna, S.: A memory-efficient elitist genetic algorithm. In: Wyrzykowski, R., Dongarra, J., Paprzycki, M., Waśniewski, J. (eds.) PPAM 2004. LNCS, vol. 3019, pp. 552–559. Springer, Heidelberg (2004)
6. Wang, R.L., Okazaki, K.: An improved genetic algorithm with conditional genetic operators and its application to set-covering problem. Soft Computing 11(7), 687–694 (2007)
7. Kaveh, A., Kalatjari, V.: Topology optimization of trusses using genetic algorithm, force method and graph theory. Int. J. Numer. Meth. Eng. 58(5), 771–791 (2003)
8. Floridia, C., Moraes, D.: Fast on-line OSNR measurements based on polarisation-nulling method with downhill simplex algorithm. Electronics Letters 44(15), 926–927 (2008)
9. Robin, F., Orzati, A., Moreno, E., Homan, J., Bachtold, W.: Simulation and evolutionary optimization of electron-beam lithography with genetic and simplex-downhill algorithms. IEEE Transactions on Evolutionary Computation 7, 69–82 (2003)
10. Ademoyero, O., Bartholomew-Biggs, M., Davies, J., Parkhurst, C.: Conjugate gradient algorithms and the Galerkin boundary element method. Computers and Mathematics with Applications 48(3), 399–410 (2004)
11. Maurin, B., Motro, R.: Investigation of minimal forms with conjugate gradient method. International Journal of Solids and Structures 38, 2387–2399 (2001)
12. Nicholas, A., Behdinan, K., Fawaz, Z.: Applicability and Viability of a GA based Finite Element Analysis Architecture for Structural Design Optimization. Computers and Structures 81, 2259–2271 (2003)
Research on Intelligent Schedule of Public Traffic Vehicles Based on Heuristic Genetic Algorithm

Liangguo Yu

School of Information Engineering, Nanchang Institute of Technology, Nanchang, Jiangxi, China
[email protected]
Abstract. This paper studies intelligent bus scheduling based on a heuristic genetic algorithm, with specialized handling of each genetic operator. The method makes full use of the characteristics of genetic algorithms, improves the intelligence of bus scheduling, improves operational efficiency, and effectively improves the static scheduling of public traffic vehicles.

Keywords: Intelligent Schedule, Genetic Algorithm, Encoding.
1 Introduction

The phenomenon of modern traffic congestion shows that the key to solving urban traffic problems is to establish a more complete public transportation system and to improve both road capacity and the level of public vehicle management. Taking the Beijing No. 220 bus line as the research object, this paper studies the intellectualization of the public transportation system, realizes intelligent bus scheduling with a heuristic genetic algorithm, and improves operational efficiency.
2 Heuristic Genetic Algorithm to Optimize the Scheduling of Transit Vehicles

The task of transit vehicle operation scheduling is to manage effectively and allocate reasonably the limited resources, to balance vehicle supply and demand, and to reach the desired targets as well as possible. Artificial intelligence methods are introduced into transit operation scheduling management: intelligent algorithms, scheduling experience and expertise are used to reduce the search space and to find satisfactory solutions, forming a reasonable and effective operation plan.
3 The Basic Principle of the Genetic Algorithm

The Genetic Algorithm (GA) was proposed by Professor John Holland of the University of Michigan in the United States in 1975 [1]. It is a random search algorithm that simulates natural selection and the biological genetic mechanism, a computational model of the biological evolutionary process that draws on Darwin's survival of the fittest and on genetic selection and natural elimination. In the genetic algorithm, the answer to the problem is expressed as chromosomes, and each chromosome is an individual; each individual is assigned a fitness representing how well that individual is adapted to the environment. A population is composed of several individuals. In each generation of the population's evolution, the genetic operations of selection, crossover and mutation produce a new population. Individuals with large fitness values are inherited with larger probability, and the crossover and mutation operations can produce individuals with greater fitness [2]. Under the action of the genetic algorithm, the population not only evolves but finally converges to the optimal answer of the problem.

M. Zhu (Ed.): ICCIC 2011, Part VI, CCIS 236, pp. 488–491, 2011. © Springer-Verlag Berlin Heidelberg 2011
4 The Basic Principle of the Heuristic Genetic Algorithm

The heuristic genetic scheduling algorithm combines a heuristic algorithm with a genetic algorithm; the algorithm is shown in Fig. 1. The public transport scheduling problem means arranging, on a fixed driving route and according to different times and certain order relations, a reasonable operating timetable for the transport vehicles, in order to balance supply and demand and satisfy the system performance indicators [3]. The index optimized by the algorithm is, in normal or peak hours and according to the bus operation norms, to make the operating cycle time of the transport vehicles as short as possible while trying not to cause delays. The objective function is therefore defined as:

Fig. 1. Flow chart of the heuristic genetic algorithm
$$t = \sum_{i \in M} \max T_i + p \sum_{j \in J} \max(0, t_{sj} - t_{dj}) \qquad (1)$$

where $t$ is the time between the departure of the first bus and the return of the $K$-th bus; $T_i$ is the time of one trip; $t_{sj}$ and $t_{dj}$ are the actual service time of reaching station $j$ and the scheduled time, respectively; and $p$ is the penalty rate.

The genetic algorithm in this paper adopts the real-number coding method: each gene position of a chromosome holds a real value that is a decision variable. The chromosome corresponding to $X = [x_1, x_2, x_3, \ldots, x_n]^T$ is $x_1, x_2, x_3, \ldots, x_n$. Part of the initial population is obtained by the heuristic scheduling algorithm, and the other part is produced by an algorithm that randomly constructs coding chains. The genetic operators include the selection operator, the crossover operator and the mutation operator; they act in turn on $P(t)$, producing a new generation $P(t+1)$, as shown in Fig. 2.
Fig. 2. Flow chart of the genetic algorithm
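Objective (1) can be sketched directly (a minimal sketch, reading the first term as the total of the trip times $T_i$; the penalty rate p and the example times are illustrative assumptions):

```python
def schedule_cost(trip_times, service_times, due_times, p=10.0):
    """Objective (1): total trip time plus a penalty p for each time unit
    a bus reaches a station later than its scheduled time."""
    t = sum(trip_times)                    # sum over i in M of T_i
    delay = sum(max(0.0, ts - td)          # sum over j of max(0, t_sj - t_dj)
                for ts, td in zip(service_times, due_times))
    return t + p * delay
```

For example, two trips of 30 and 32 minutes with one station served 2 minutes late and p = 2 cost 62 + 2·2 = 66.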
The flow of the genetic algorithm is:

1. Initialize the population; the population size is 50. Half is produced by the heuristic scheduling algorithm, and half is randomly generated.
2. In P(t), compute the fitness of each coding string according to the objective function formula above.
3. Make a selection to determine the parent generation.
4. Perform single-point crossover to generate two new offspring.
5. Perform the mutation operation; the probability is p_m = 0.001.
6. Apply the heuristic search to the two newborn coding chains.
7. Calculate the fitness of the offspring; into the subsequent generation P(t+1), choose from parents and offspring the individuals with high fitness.
8. Repeat steps 3–7 until the stopping criterion is met; finally, the individual with the highest fitness in P(t) is copied directly into P(t+1), t = t + 1, and the optimal answer is output.
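Steps 1–8 above can be sketched as an elitist real-coded GA. This is a minimal sketch: the heuristic seeding and heuristic search of steps 1 and 6 are represented only by a caller-supplied seed population, truncation selection stands in for the unspecified selection operator, and the bounds are illustrative.

```python
import random

def elitist_ga(fitness, n_genes, seed_pop, pop_size=50, pc=0.9, pm=0.001,
               gens=100, bounds=(0.0, 60.0)):
    """Elitist real-coded GA following the 8-step flow (lower fitness
    is better, matching the cost of objective (1))."""
    lo, hi = bounds
    rand_ind = lambda: [random.uniform(lo, hi) for _ in range(n_genes)]
    pop = [list(s) for s in seed_pop][:pop_size // 2]        # step 1: seeded half
    pop += [rand_ind() for _ in range(pop_size - len(pop))]  # step 1: random half
    for _ in range(gens):
        pop.sort(key=fitness)                    # step 2: evaluate and rank
        new_pop = [list(pop[0])]                 # step 8: elitism
        while len(new_pop) < pop_size:
            a, b = random.sample(pop[:pop_size // 2], 2)   # step 3: selection
            child = list(a)
            if random.random() < pc:                       # step 4: 1-point crossover
                cut = random.randrange(1, n_genes)
                child = a[:cut] + b[cut:]
            for i in range(n_genes):                       # step 5: mutation, pm=0.001
                if random.random() < pm:
                    child[i] = random.uniform(lo, hi)
            new_pop.append(child)
        pop = new_pop
    return min(pop, key=fitness)
```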
5 Conclusion

This paper studies intelligent bus scheduling based on a heuristic genetic method. According to the specific operating conditions of buses, a fast and convenient intelligent scheduling algorithm is used to optimize bus operation, producing a practical bus operation scheme, further improving the efficiency of management and improving the service level.
References

1. Karaoglu, B., Topcuoglu, H., Gurgen, F.: Evolutionary algorithms for location area management. In: Rothlauf, F., Branke, J., Cagnoni, S., Corne, D.W., Drechsler, R., Jin, Y., Machado, P., Marchiori, E., Romero, J., Smith, G.D., Squillero, G. (eds.) EvoWorkshops 2005. LNCS, vol. 3449, pp. 175–184. Springer, Heidelberg (2005)
2. Muhammad, J., Hussain, A., Neskovic, A., Magill, E.: New neural network based mobile location estimation in a metropolitan area. In: Duch, W., Kacprzyk, J., Oja, E., Zadrożny, S. (eds.) ICANN 2005. LNCS, vol. 3697, pp. 935–941. Springer, Heidelberg (2005)
3. Cho, S.-B.: Fusion of neural networks with fuzzy logic and genetic algorithm. Integrated Computer-Aided Engineering 9(4), 363–372 (2002)
The Integration Framework of Train Scheduling and Control Based on Model Predictive Control

Chao Mi and Yonghua Zhou

School of Electronic and Information Engineering, Beijing Jiaotong University, China
[email protected]
Abstract. The integration of railway signaling and communication realizes the bidirectional information transmission between trains and ground control centers. In order to take full advantage of such information integration, this paper presents the integration architecture of train scheduling and control based on the model predictive control with the hierarchical structure of top (macroscopic), middle (mesoscopic) and bottom (microscopic) levels. The top level realizes the real-time train scheduling, the middle level guarantees trains to run with safety distances and implement the real-time operation plans produced by the top level, and the bottom level accomplishes the energy control and drives the trains to run according to the set-points engendered by the middle level. The proposed hierarchical feedback control can optimize the train operation, shorten the departure interval, improve the railway line density, ensure the safety of railway transportation, and eventually fulfill the Real-time, Automatic, Predictive, Intelligent Scheduling and control for hIgh-Speed Trains (RAPISIST). Keywords: train scheduling, train control, integration, model predictive control.
1 Introduction

With the development of the integration technique of railway signaling and communication, train control is advancing toward networked, intelligent and synthetic systems. This integration allows information to be transmitted through fiber optic cable and wireless networks in addition to the traditional metal wire, so that the ground control center can quickly acquire real-time information on train operation status, thereby realizing information integration. In the traditional train control system, the train determines the movement authority (MA) through the coding information of track circuits. Under the environment of integration of railway signaling and communication (IRSC), trains can report their positions and speeds through the communication network to the radio block center (RBC), and the RBC transmits the MA back to the trains. However, the advantages of bidirectional information transmission have not been sufficiently exploited to ensure high safety, speed, density and robustness of train operation. The locomotive equipment mainly controls the train from the perspective of safety, comfort, energy efficiency, etc., whereas the ground control center should optimize the operation within a specific zone or the whole railway network. Thus, when the train operation status changes, the ground control center, such as the centralized traffic control (CTC) system, should, if necessary, adjust the plans correspondingly. The current CTC system mainly constitutes, issues and adjusts the running plans with human participation, enacts and issues plans of temporary speed restriction, and controls the remote signal equipment as well as train operation. However, its ability for automatic, real-time and intelligent adjustment is not powerful, so the system lacks the ability to resist disturbances such as traction failure, communication interruption, balise loss, track circuit failure, unexpected delay, bad weather and so on. The IRSC provides the data support: real-time commands lead to the corresponding control, and the control results incurred or the external factors will conversely result in the real-time adjustment of plans. Under the environment of IRSC, real-time scheduling and control are mutually dependent and correlated, and their real-time integration is a pending problem for current train control. Besides, under the environment of IRSC, adjacent trains can know each other's real-time operation status and model information. If full advantage of such information is taken to establish a new algorithm for train control, safety will be further improved, the departure interval will be correspondingly shortened, and the railway line density will hence be increased [1]. Model predictive control (MPC) has been utilized for train scheduling [2, 3], a novel control algorithm based on model predictive control under the environment of IRSC has been proposed in [1], and the hierarchical integration architecture of real-time train scheduling and control based on MPC has been presented in [1]. This paper further addresses the related conceptual problems of the integration architecture of real-time train scheduling and control.

M. Zhu (Ed.): ICCIC 2011, Part VI, CCIS 236, pp. 492–499, 2011. © Springer-Verlag Berlin Heidelberg 2011
2 Review of Train Scheduling and Control

Train Scheduling. Similar to the mathematical description of resource scheduling on the shop floor, train scheduling can be described by models of mathematical programming with constraints. The description of the resource (block sections or stations) constraints is that a train cannot release its currently occupied resource before it finds the available resources. The solution methods are mainly based on constraint handling [4], heuristics [5-11], simulation [12] and sub-problems [13], etc. Caprara et al. [4] adopted Lagrangian relaxation to deal with the constraints of the real-world timetabling problem. References [5-9] dispatch trains based on the alternative graphs proposed in [10]. D'Ariano et al. [5] employed a branch and bound approach to deal with the conflict decisions in train scheduling, which was compared with the algorithms of First Come First Served (FCFS), First Leave First Served (FLFS) and Avoid Most Critical Completion time (AMCC), and whose advantages over those three methods were demonstrated. D'Ariano et al. [6] solved conflicts through the strategy of speed coordination under the perturbations of real-time timetables. D'Ariano et al. [7] introduced buffer time into the optimization of timetables and compared the performances of the four scheduling approaches in [5]. Corman et al. [8] utilized a Tabu algorithm to accelerate the implementation of the scheduling algorithm. Liu and Kozan [9] proposed a feasibility satisfaction procedure to schedule trains. Zhou and Zhong [11] presented a solution method with guaranteed optimality combining branch and bound and Lagrangian relaxation. Li et al. [12] put forth a simulation-based approach to the train scheduling problem which can obtain feasible scheduling plans faster based on the global information of trains. Salido et al. [13] brought forward domain-dependent distributed constraint models for railway scheduling problems.

Train Control. The classic PID controller, modern control theory [14-18], intelligent control [19-22], multi-objective methods [14, 15, 19] and so on have been utilized for train control. Zhuan and Xia [14] presented output feedback to adjust the speed of heavy haul trains. Chou and Xia [15] proposed a closed-loop cruise controller to minimize the running cost of heavy-haul trains while simultaneously considering velocity tracking, in-train force management and energy usage. Howlett et al. [16] put forward a calculation method for the critical switching points, in the globally optimal sense, on steep gradients using the new principle of local energy minimization. Ding et al. [17] brought forward an algorithm for energy-efficient train control under the moving block system to avoid unnecessary braking. Gu et al. [18] adopted a smooth variable structure controller to achieve high-precision position and velocity tracking by directly designing the control rate. Jia and Zhang [19] advanced a fuzzy multi-objective optimal control to realize an intelligent automatic train control system. Jandaghian et al. [20] devised fuzzy controller-based train speed control and dispatching. Lin and Sheu [21] applied the neuro-dynamic programming method to automatically regulate the trains of a metro line. Wang et al. [22] presented a novel algorithm for automatic train operation based on iterative learning theory.

From the above survey, we can see that at present train scheduling and train control are studied in largely independent fields. However, under the environment of IRSC, in order to take full advantage of such information integration, the real-time organic integration of train scheduling and control is indispensable to accomplish optimized railway operation with high speed, density and safety, and a robust ability to resist disturbances.
3 The Hierarchical Integration Architecture

The hierarchical integration architecture of real-time train scheduling and control is shown in Fig. 1.

Top (Macroscopic) Level. Disturbed by external factors such as communication interruption and balise loss, a train may not run according to its predetermined plan, which causes the actual train graph to deviate from the planned one. At the macroscopic level, the automatic adjustment of train operation plans is performed according to the current status of trains, such as positions and speeds, and optimized scheduling strategies are enacted that specify the start time and the running time for the various track, block or wireless block sections.

Middle (Mesoscopic) Level. The adjusted plans of train operation are transmitted to the locomotive. The locomotive executes the adjusted plans under the precondition of a safe distance, and generates the relationship curve between the acting force (including traction and braking forces), motor speed and section running time, so that the train runs through the specified distance at the corresponding speeds.
The Integration Framework of Train Scheduling and Control
[Fig. 1 comprises three levels linked by the communication network: the top (macroscopic) level, with the planned timetable, conflict detection and operation optimization, the predicted timetable, and the top-level speed setup in the ground control center (CTC and RBC systems); the middle (mesoscopic) level, with train safety control, the safe-distance and top-level speed setpoints, train operation simulation and simulation of train dynamics; and the bottom (microscopic) level, with the traction and motor control systems, the motor speed setpoint and simulation of motor dynamics. The network exchanges the position, speed and model of this train, the adjusted plan, and the positions, speeds and models of adjacent trains, together with ATP and ATO functions.]
Fig. 1. The hierarchical integration architecture of real-time train scheduling and control [1]
Bottom (Microscopic) Level. According to the relationship curve between the acting force, the motor speed and the section running time, the bottom level controls the energy supply, drives the traction motors and corresponding appliances, and makes the train run at the corresponding speeds (including acceleration and deceleration).
4 Optimization Models at Different Levels

Each level has its own optimization objective: the upper level provides the optimized set-points, and the lower level imposes constraints on the upper level. The core of the optimization problem is how the bottom level imposes appropriate constraints on the upper levels so that the strategies of train scheduling and control are implementable, do not affect transport efficiency, and thus have appropriate robustness and flexibility.
C. Mi and Y. Zhou
Top-Level Optimization Model. The objective is to make the deviation of the predicted times from the planned times as small as possible while satisfying the corresponding constraints, i.e.:

min J_1 = ∑_i ∑_{j∈S} w_{i,j} (t_{i,j} − T_{i,j})^2    (1)

where w_{i,j} denotes the weighting coefficients, t_{i,j} and T_{i,j} stand for the predicted and planned times at which train i arrives at station j, and S is the subset of stations. The constraints include the railway lines, the start times at originating stations, the running times and occupation of block sections, the safety distance, and the times at which trains enter and leave stations.

Middle-Level Optimization Model. At the middle level, the acting force and the corresponding speed are solved so that the train runs through the specified section in the specified time, i.e.:
∫_{t_{i,j}}^{t_{i,j+1}} v_top(i,t) dt = ∫_{t_{i,j}}^{t_{i,j+1}} v_mid(i,t) dt    (2)

where v_top(i,t) and v_mid(i,t) denote the speed of train i at the top level and the middle level, respectively, and t_{i,j} specifies the arrival time of train i at station j.
Based on Eq. (2), combining the acceleration and deceleration curves and the stable speed, v_mid(i,t) can be obtained. The objective function at the middle level is as follows:

min J_2 = ∑_{t=t_0}^{t_0+P} (v_mid(i,t) − v_opt(i,t))^2    (3)

where P is the prediction horizon and v_opt(i,t) is the speed set-point engendered by the middle level for the bottom level. The basic relationship model between the acting force and the train speed is:

m dv_opt/dt = F − w(v_opt) − g(x)    (4)
where m is the train mass, F is the acting force (when F < 0, it is a braking force), w(v_opt) is the resistance related to the train speed v_opt, and g(x) is the correction of the resistance corresponding to the train position x. The safety distance constraint is denoted as:
x_{i1} − x_{i2} ≥ d(v_opt)    (5)

which implies that the difference between the positions x_{i1} and x_{i2} of trains i1 and i2 is greater than the set value d(v_opt).
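As a concrete illustration of Eqs. (4) and (5), the following sketch integrates the point-mass train model with a forward-Euler step and checks the speed-dependent safety distance. The resistance law w(v) (a Davis-type quadratic), the gradient term g(x) and the distance function d(v) are hypothetical placeholders, not values from the paper.

```python
# Toy forward-Euler simulation of m*dv/dt = F - w(v) - g(x) from Eq. (4),
# plus the safety-distance check of Eq. (5); all parameter values are made up.

def simulate_train(m, F, w, g, v0=0.0, x0=0.0, dt=0.1, steps=600):
    """Integrate speed and position; F, w, g are callables (assumed forms)."""
    v, x = v0, x0
    traj = []
    for k in range(steps):
        a = (F(v, x) - w(v) - g(x)) / m   # acceleration from Eq. (4)
        v = max(0.0, v + a * dt)          # no negative speed
        x = x + v * dt
        traj.append((k * dt, x, v))
    return traj

def safe(x_lead, x_follow, v_follow, d):
    """Safety constraint of Eq. (5): x_i1 - x_i2 >= d(v_opt)."""
    return x_lead - x_follow >= d(v_follow)

# illustrative run: constant traction, quadratic resistance, flat track
traj = simulate_train(
    m=400e3,                                    # 400 t train
    F=lambda v, x: 300e3,                       # constant 300 kN traction
    w=lambda v: 2.0e3 + 80.0 * v + 8.0 * v * v, # Davis-type resistance (assumed)
    g=lambda x: 0.0)                            # no gradient correction
```

Swapping in measured resistance coefficients and track gradients would turn this toy loop into the kind of prediction model that the middle-level objective (3) optimizes over.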
Bottom-Level Optimization Model. The control objective is to make the motor speed approach the set-point v_opt(i,t) while satisfying the balance relationship of energy conservation. The objective function is as follows:

min J_3 = ∑_{t=t_0}^{t_0+P} (v_opt(i,t) − v(i,t))^2    (6)
where v(i,t) is the motor speed. The power conservation relationship is:

∑_k u_k i_k = F(t) v_mid(t) + ∑_k P_k^lost    (7)

where u_k i_k is the input power of the kth motor, and P_k^lost is the power loss of the kth motor during the energy conversion process. Other constraints include the dynamic equation of the motor and the input voltage, current and frequency.

At the different levels, the models have their respective simplifications; thus, corresponding adjustment and compensation should be undertaken. MPC is applied at all three levels: it minimizes the errors between the prediction and the set-point from the upper level while satisfying the corresponding constraints. At the macroscopic level, the optimization model should satisfy the dependent relationship between time, space and speed, where the speed and time constraints originate from the middle level and are eventually determined by the bottom level. At the mesoscopic level, the relationship among the acting force, acceleration and train mass should be satisfied, where the acting-force constraints, which are equivalent to the input current constraints of the motor, originate from the bottom level. At the bottom level, the relationship of energy conservation should be satisfied when the energy release is controlled.

At the top level, the optimization problem is NP-hard and can be tackled by heuristic rules, Lagrangian constraint relaxation, and branch and bound. At the middle and bottom levels, constrained optimization methods such as the Levenberg-Marquardt algorithm [3] and genetic algorithms can be employed. For high-speed trains, the optimization algorithm should consider the dynamic variation of train positions during the computing time, and the corresponding compensation should be applied.
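The receding-horizon MPC pattern described above can be sketched as follows. This is not the authors' implementation: the scalar first-order plant and the brute-force enumeration of candidate controls merely stand in for the real train/motor models and the Levenberg-Marquardt or genetic-algorithm solvers mentioned in the text.

```python
# Generic MPC loop: at each step, minimize the squared error between the
# predicted output and the upper-level set-point over horizon P (cf. Eqs. (3)
# and (6)), apply the first control move, then re-optimize.

def mpc_step(v, setpoint, P, dt, plant, candidates):
    """Pick the control u minimizing the horizon-P squared tracking error."""
    best_u, best_cost = None, float("inf")
    for u in candidates:                   # toy stand-in for a real solver
        v_pred, cost = v, 0.0
        for _ in range(P):
            v_pred = plant(v_pred, u, dt)  # prediction model
            cost += (v_pred - setpoint) ** 2
        if cost < best_cost:
            best_u, best_cost = u, cost
    return best_u

plant = lambda v, u, dt: v + dt * (u - 0.05 * v)   # toy first-order dynamics
v, history = 0.0, []
for _ in range(100):                               # closed loop: re-plan each step
    u = mpc_step(v, setpoint=20.0, P=10, dt=0.5,
                 plant=plant, candidates=[x * 0.5 for x in range(0, 9)])
    v = plant(v, u, 0.5)
    history.append(v)
```

The same skeleton applies at all three levels; only the plant model, the set-point source and the constraint set change.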
5 Summary

In this paper, we have proposed a hierarchical integration architecture for real-time train scheduling and control based on MPC, in order to promote safety, increase speed and line density, and improve the ability to resist disturbances under the
environment of IRSC. The solution technique can utilize constraint relaxation and error propagation, similar to the weight optimization of a neural network. In particular, we emphasize the equivalency of the running time (constraint) at each track or block section at the top level [2]; thus a very accurate simulation aligned with the actual case is generally not necessary in the top-level prediction model. Similarly, the optimization model at the middle level considers the operability of the acting force, although in the actual case the acting process might differ slightly. Thus, the equivalence method of imposing constraints does not lose its generality and applicability even though the prediction model might be inaccurate. The real-time integration of train scheduling and control will accomplish Real-time, Automatic, Predictive, Intelligent Scheduling and control for hIgh-Speed Trains, in short RAPISIST, which can be interpreted as the assistant of rapid trains.

Acknowledgment. This work is supported by the Fundamental Research Funds for the Central Universities (Grant No. 2009JBM006).
References

1. Zhou, Y., Wang, Y.: Key Engineering Materials 467-469, 2143 (2011)
2. Zhou, Y., Wang, Y., Wu, P., Wang, P.: Proceedings of International Conference on Electrical Engineering and Automatic Control 2010 (ICEEAC 2010), vol. 3, p. 410. IEEE Press, Los Alamitos (2010)
3. Wang, P., Zhou, Y., Chen, J., Wang, Y., Wu, P.: Proceedings of 2010 Second Global Congress on Intelligent Systems (GCIS 2010), p. 47. IEEE Press, Los Alamitos (2010)
4. Caprara, A., Monaci, M., Toth, P., Guida, P.L.: Discrete Applied Mathematics 154, 738 (2006)
5. D'Ariano, A., Pacciarelli, D., Pranzo, M.: European Journal of Operational Research 183, 643 (2007)
6. D'Ariano, A., Pranzo, M., Hansen, I.A.: IEEE Transactions on Intelligent Transportation Systems 8, 208 (2007)
7. D'Ariano, A., Pacciarelli, D., Pranzo, M.: Transportation Research, Part C 16, 232 (2008)
8. Corman, F., D'Ariano, A., Pacciarelli, D., Pranzo, M.: Transportation Research, Part B 44, 175 (2010)
9. Liu, S.Q., Kozan, E.: Computers & Operations Research 36, 2840 (2009)
10. Mascis, A., Pacciarelli, D.: European Journal of Operational Research 143, 498 (2002)
11. Zhou, X., Zhong, M.: Transportation Research, Part B 41, 320 (2007)
12. Li, F., Gao, Z., Li, K., Li, Y.: Transportation Research, Part B 42, 1008 (2008)
13. Salido, M.A., Abril, M., Barber, F., Ingolotti, L., Tormos, P., Lova, A.: Knowledge-Based Systems 20, 186 (2007)
14. Zhuan, X., Xia, X.: Automatica 44, 242 (2008)
15. Chou, M., Xia, X.: Control Engineering Practice 15, 511 (2007)
16. Howlett, P.G., Pudney, P.J., Vu, X.: Automatica 45, 2692 (2009)
17. Ding, Y., Bai, Y., Liu, F., Mao, B.: Proceedings of 2009 World Congress on Computer Science and Information Engineering, p. 498. IEEE Press, Los Alamitos (2009)
18. Gu, Q., Tang, T., Song, Y.: Proceedings of 2010 Chinese Control and Decision Conference, p. 3239. IEEE Press, Los Alamitos (2010)
19. Jia, L.-M., Zhang, X.-D.: Engineering Applications of Artificial Intelligence 6, 153 (1993)
20. Jandaghian, M., Setayeshi, S., Keymanesh, M., Arabalibeik, H.: Proceedings of the 11th International IEEE Conference on Intelligent Transportation Systems, p. 681. IEEE Press, Los Alamitos (2008)
21. Lin, W.-S., Sheu, J.-W.: Proceedings of 2009 International Joint Conference on Neural Networks, p. 1807. IEEE Press, Los Alamitos (2009)
22. Wang, Y., Hou, Z., Li, X.: Proceedings of 2008 International Conference on Service Operations and Logistics, and Informatics, p. 1766. IEEE Press, Los Alamitos (2008)
A Design of Anonymous Identity Generation Mechanism with Traceability for VANETs

An-Ta Liu1, Henry Ker-Chang Chang2, and Herbert Hsuan Heng Lai3

1 Ph.D. Student, Graduate Institute of Business Administration, Taiwan
2 Professor, Graduate Institute of Information Management, Taiwan
3 Master, Graduate Institute of Information Management, Taiwan
[email protected]
Abstract. Wireless technologies are now very popular. Thanks to wireless technologies, many industries, such as the vehicle and transportation industry, have become more expandable; PAPAGO and Garmin are examples. Vehicles can get any kind of information they need from other vehicles or from communication infrastructure, so governments can use this property to improve road safety, traffic management, driver convenience and related applications and services. In this article, we propose a secure anonymous identity generation mechanism and a secure anonymity trace mechanism. The anonymous identities, combined with an identity-based cryptosystem, can realize the basic security requirements and privacy protection in VANETs. The anonymity property is attached to the proposed mechanism to protect personal privacy, and the real identity behind an anonymity can be traced by the police or a law enforcement authority when necessary. The authors also revise Wang et al.'s work.

Keywords: VANETs, Telematics, Security, Privacy, ID-based, anonymity.
1 Introduction
Traffic safety and management have always been serious issues of concern to many countries. The Intelligent Transportation System (ITS) is an approach that can facilitate road safety, traffic management, and traffic information integration for drivers, passengers, and managers [1]; PAPAGO and Garmin are examples. With Telematics, ITS can be more powerful and can increase road safety by giving drivers more time to react to danger. Safety-related and traffic-related information also gives drivers more time to make the right decisions. A VANET is a wireless communication network based on the mobile ad-hoc network (MANET) topology. The topology of a MANET changes frequently [2]. In a VANET, nodes are vehicles moving at high speed (60 km/h~300 km/h). The MANET communication standards and protocols are not suitable for VANETs, because vehicles do not have the low-power, low-capability, and low-memory problems of typical MANET nodes. The main challenges are the high speed and the large dimensions of a VANET [3]. M. Zhu (Ed.): ICCIC 2011, Part VI, CCIS 236, pp. 500–510, 2011. © Springer-Verlag Berlin Heidelberg 2011
Many governmental ITS projects and research efforts have been launched by organizations such as the Taiwan Intelligent Transport Society, the U.S.A. Vehicle Safety Consortium [4, 5], the European Union Car-2-Car Communication Consortium [6], the European Road Telematics Implementation Coordination [7], etc. The IEEE Dedicated Short Range Communications (DSRC) [8] research team proposed about 40 kinds of applications in the early stage, and other applications are under continuing development. VANET applications can be divided into two groups: safety-related and non-safety-related applications. Each group can further be divided into three sub-groups: vehicle-to-infrastructure (V-2-I), vehicle-to-vehicle (V-2-V), and vehicle-to-person (V-2-P) [9]. Fig. 1 illustrates a safety-related application.
Fig. 1. The accident-avoidance message is broadcast in VANETs when an accident happens
If we properly design a security mechanism and a secure architecture from the beginning of the implementation, it will not be difficult to prevent security or privacy attacks. Raya and Hubaux [10] and Wang et al. [11] proposed using anonymous key pairs to protect personal privacy, but both ideas were only concepts; the details were not described. In 2008, Lin et al. [12] pointed out the privacy and identity-related problems in IEEE standard 1609.2. We therefore propose a mechanism based on an identity-based cryptosystem to provide anonymous identities and corresponding key pairs.
2 Related Work

The authors revise the review of Wang et al.'s [11] secure communication scheme; in this section we review the session key establishment and the group session key establishment of that scheme. Fig. 2 shows the pairwise session key establishment scheme. Before two vehicles start to communicate, they have to exchange a session key in a secure way. Wang's
scheme is based on Diffie-Hellman key exchange. For group applications, establishing secure groups with secret group keys is a better solution. There is a group leader L in the center of the group cell. L distributes the group key SK to the members by broadcasting, encrypting SK with each member's public key. L also sends hash values of the receivers' public keys to help each receiver identify which encrypted group key to decrypt, and signs the message with L's private key to ensure the legality of the key distribution messages. The group session key establishment scheme is shown in Fig. 3.
A → B : M1 (ask B to communicate, with Diffie-Hellman parameters a, q, Y_A), Sig_PrKA[M1 | T], Cert_A
B → A : M2 (respond with Diffie-Hellman parameter Y_B), Sig_PrKB[M2 | T], Cert_B, HMAC_sk(M2)
A → B : M3 (session key is built), HMAC_sk(M3)
Transmit subsequent encrypted messages with signature:
A → B : E_sk[m | Sig_PrKA[HMAC_sk(m)]]

Fig. 2. Wang's pairwise session key establishment scheme
Distribute the group key SK to A, B and C:
L → ∗ : H_A, {SK}_PuKA, H_B, {SK}_PuKB, H_C, {SK}_PuKC, Sig_PrKL[the whole message]
Using SK to encrypt a message:
L → ∗ : E_SK[m]
Encrypted message with signature and HMAC:
L → ∗ : E_SK[m | Sig_PrKL[HMAC_SK(m)]]  or  L → ∗ : E_SK[m], Sig_PrKL[HMAC_SK(E_SK[m])]
When a new node D enters the group:
L → D : {SK}_PuKD, Sig_PrKL[{SK}_PuKD]

Fig. 3. Wang's group session key establishment scheme
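For illustration, here is a toy version of the Diffie-Hellman exchange underlying the pairwise scheme in Fig. 2 (the signatures, certificates and HMACs of the figure are omitted; the 32-bit prime is for readability only and is far too small for real use):

```python
# Toy Diffie-Hellman: A sends (a, q, Y_A) in M1, B replies with Y_B in M2,
# and both sides derive the same session key sk by hashing the shared secret.
import hashlib
import secrets

q = 0xFFFFFFFB  # small prime modulus 2**32 - 5 (illustrative, NOT secure)
a = 5           # generator

x_A = secrets.randbelow(q - 2) + 1   # A's private exponent
x_B = secrets.randbelow(q - 2) + 1   # B's private exponent
Y_A = pow(a, x_A, q)                 # public value sent in message M1
Y_B = pow(a, x_B, q)                 # public value sent in message M2

# each side combines its private exponent with the other's public value
sk_A = hashlib.sha256(str(pow(Y_B, x_A, q)).encode()).hexdigest()
sk_B = hashlib.sha256(str(pow(Y_A, x_B, q)).encode()).hexdigest()
assert sk_A == sk_B                  # identical session key on both ends
```

A deployment would use a 2048-bit (or larger) group and authenticate Y_A, Y_B with the signatures and certificates shown in Fig. 2 to prevent man-in-the-middle attacks.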
Neither of the two key establishment schemes can protect personal privacy when necessary. The privacy of broadcast message sources should be considered, too.
3 Anonymous Key Pair Application Mechanism and Anonymity Trace Mechanism

The proposed mechanism consists of two sub-mechanisms, an anonymous key pair application mechanism and a trace mechanism, and is focused on V-2-V communication. The anonymous identities, combined with Identity-Based Cryptography (IBC) [12], can realize the basic security requirements and privacy protection. In the proposed schemes, we assume that the key generation center (KGC) and MVO are legal and secure with respect to information protection. Fig. 4 illustrates the operations of the anonymous key pair application mechanism. When a buyer buys a new vehicle, the buyer has to apply for a physical and unique LP for it. The electronic license plate (ELP) and the anonymous key pairs can be applied for at the same time.
Fig. 4. Anonymous key pair application mechanism description and architecture
The purpose of the system is to generate anonymous key pairs securely and to prevent inside attacks from KGC or MVO. MVO applies for anonymous key pairs from KGC on behalf of the applicant, but the applicant's real identities are known only to MVO; KGC cannot obtain the real identities during the generation procedure. MVO cannot learn the anonymous key pairs even though these key pairs are transmitted to the applicant through MVO. Thus, the anonymity cannot be matched with the real identity by either side alone. We assume that all network transmissions are protected by the secure socket layer (SSL).
The operation procedure and principle of the proposed mechanism are as follows:
1. The applicant takes or sends an application for an LP, the ex-factory statement and the manufacturer's certificate of the vehicle to MVO, together with related information that can prove the applicant's real identity.
2. After receiving the related information and certificate from the applicant, MVO first verifies them.
3. MVO generates a unique ELP for the corresponding LP. The ELP is electronic data that can be stored in a smart card or in a hardware security module (HSM). An HSM is a physical device, in the form of a plug-in card or an external security device attached to a general-purpose computer, that can securely generate and store data and perform cryptographic or sensitive operations. An HSM provides both logical and physical protection of data, and many HSM systems can securely back up the keys they handle, either in wrapped form via the computer's operating system or externally using a smartcard.
4. MVO uses an HMAC function, keyed with any one of the applicant's real identities (name, phone number, e-mail address, or identification number), to calculate a unique MAC value for the ELP.
5. MVO signs the MAC value and the related information with MVO's private key. The signature and the certificate prove that the MAC value and the messages are sent by MVO.
6. MVO securely stores the MAC value and the applicant's related information in its database.
7. MVO sends the MAC value and the information that KGC needs to know to KGC through a secure socket layer (SSL) channel.
8. KGC verifies the message by checking the signature and the certificate of MVO.
9. KGC generates one or more identities. These identities are combined with the ELP to produce a group of anonymous identities for the applicant.
10. KGC uses these anonymous identities to generate corresponding key pairs, which will be used in the identity-based VANET environment.
11. KGC securely stores the MAC, the anonymous identities, and the corresponding key pairs in its database.
12. KGC signs these anonymous identities and corresponding key pairs with KGC's private key and then sends them back to MVO. The signature and the certificate prove that the message is sent by KGC. Most importantly, MVO can only see the MAC value and information unrelated to the anonymous identities.
13. MVO signs the message from KGC and gives it to the corresponding applicant. After the applicant gets the anonymous identities and corresponding key pairs, he/she can use them in all communication applications.

3.1 Application Phase
As shown in Fig. 5, the applicant chooses two random numbers (r1, r2). These are used as secret keys so that MVO and KGC can send messages back secretly: MVO cannot see the content of the message sent back by KGC, and the content of the message sent to the applicant by MVO cannot be seen by others either.
Fig. 5. Application phase

Table 1. Mechanism symbols
Table 2. Example of RIN
After MVO verifies the applicant's identity and the vehicle's related information, MVO generates the ELP. Because KGC must not see the correct ELP, MVO uses a MAC function to blind the ELP: MVO uses any one of the applicant's real identities as the key and uses a number (RIN) to record which real identity serves as the key. Table 2 shows an example of RIN. MVO uses an area code (AC) to record which MVO applied for anonymous identities for the applicant. RIN and AC are used in the trace mechanism, which we describe in detail in Section 3.2. The ELP is unique, so the MAC value is unique. Because of the one-way property of the MAC, and because KGC does not know the MAC key, KGC cannot compute the correct ELP even though it holds the MAC value. The last operation in this phase is that MVO sends the MAC value with a timestamp (t1) and MVO's certificate (CertM) to KGC. To ensure the confidentiality of the message, it is encrypted with KGC's public key (PUK) before transmission.

3.2 Key Generating Phase
As shown in Fig. 6, KGC generates anonymous identities and corresponding key pairs for the applicant. After KGC verifies the message and MVO's identity, it generates a sequence number (SN), which is a record number, and a group of user identities (UIDi); the parameter i is the order of the UID. KGC uses a MAC function to calculate a MAC value for each UIDi; the MAC key is the MAC value of the ELP.
Fig. 6. Key generating phase
For the purpose of tracing in the future, the police or law enforcement authority places a secret key in KGC. KGC uses the secret key to encrypt the SN and then concatenates it with the MAC value of the UID. The corresponding key pair is generated at the same time by the scheme reviewed in Section 2.2. The operation is shown in Fig. 7.
for (i = 1; i ≤ j; i++) {
    AID_i = MAC_EMAC(UID_i) || SE_kPK(SN)
    Q_AID_i = H1(AID_i)
    D_AID_i = s · Q_AID_i
}
// j is the total number of UIDs; for all i, j ≥ 1
Fig. 7. Operation of the anonymous identities generating phase
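A minimal Python sketch of the generation loop in Fig. 7, with hypothetical inputs and toy cryptographic stand-ins: the XOR "encryption" of the SN replaces a real symmetric cipher SE, and the modular product D = s·Q replaces pairing-based IBC key extraction.

```python
# MVO blinds the ELP with an HMAC keyed by one real identity (so KGC never
# learns the plate); KGC then forms AID_i = MAC_EMAC(UID_i) || SE_kPK(SN)
# and a toy IBC private key D_AID_i = s * H1(AID_i) mod p.
import hashlib
import hmac

def mvo_blind_elp(elp: bytes, real_identity_key: bytes) -> bytes:
    """MVO side: EMAC = HMAC(real-identity key, ELP)."""
    return hmac.new(real_identity_key, elp, hashlib.sha256).digest()

def encrypt_sn(sn: int, police_key: bytes) -> bytes:
    """Toy invertible stand-in for SE_kPK(SN): XOR with a key-derived stream."""
    stream = hashlib.sha256(police_key).digest()[:8]
    return bytes(a ^ b for a, b in zip(sn.to_bytes(8, "big"), stream))

def kgc_make_aids(emac: bytes, uids, sn: int, police_key: bytes, s: int, p: int):
    """KGC side: build each AID_i and its toy IBC private key."""
    enc_sn = encrypt_sn(sn, police_key)
    pairs = []
    for uid in uids:
        aid = hmac.new(emac, uid, hashlib.sha256).digest() + enc_sn
        q_aid = int.from_bytes(hashlib.sha256(aid).digest(), "big") % p  # Q = H1(AID)
        pairs.append((aid, (s * q_aid) % p))                             # D = s * Q
    return pairs

emac = mvo_blind_elp(b"ELP-1234", b"phone:0912-000-000")   # hypothetical inputs
pairs = kgc_make_aids(emac, [b"UID-1", b"UID-2", b"UID-3"],
                      sn=42, police_key=b"police-shared-key",
                      s=0xC0FFEE, p=2**255 - 19)
```

Note how the design goals surface in the code: KGC only ever touches `emac`, never the ELP or the real identity, while the encrypted SN embedded in each AID is what later lets the authority holding `police_key` start the trace.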
After the applicant gets the anonymous identities, he/she can optionally use the real identity or an anonymous identity to request services or broadcast messages whenever he/she wants to remain anonymous and prevent tracing by others. Anonymous key pairs can be stored in a smartcard or in an intelligent car key to prevent impersonation if the vehicle is stolen.

3.3 Anonymity Trace Mechanism

The police or law enforcement authorities can trace an anonymity from a safety-related broadcast message when necessary by using this mechanism. Fig. 8 shows the proposed mechanism.
Fig. 8. Anonymity tracing mechanism
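Before walking through the steps, the data flow of Fig. 8 can be pictured as a two-stage lookup (the record layouts below are assumed for illustration): the SN recovered from the broadcast message indexes KGC's database, and the (EMAC, RIN, AC) triple returned by KGC indexes MVO's database, so only an authority holding both responses learns the real identity.

```python
# Toy two-stage trace lookup; all record contents are hypothetical, and the
# signing/encryption wrapping each request and response is omitted.
kgc_db = {42: {"EMAC": "9f3aab01", "RIN": 2, "AC": "TPE"}}   # KGC: SN -> record

mvo_db = {("9f3aab01", 2, "TPE"):                            # MVO: triple -> identity
          {"name": "A. Driver", "plate": "ELP-1234"}}

def trace(sn: int) -> dict:
    """Resolve a recovered SN to the real identity via KGC, then MVO."""
    rec = kgc_db[sn]                                     # KGC returns (EMAC, RIN, AC)
    return mvo_db[(rec["EMAC"], rec["RIN"], rec["AC"])]  # MVO resolves the identity

identity = trace(42)
```

Neither database alone links an anonymity to a person, which is the insider-attack protection argued in Section 4.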
If the police or law enforcement authorities already have the message, they can conduct the anonymity trace mechanism. The operation steps of the proposed trace mechanism are described below:
1. The police or law enforcement authorities use the secret key, which is shared only with KGC, to get the record sequence number (SN) and then generate a request message. The request sender signs the request and hides the (SN) by encrypting it.
2. The police or law enforcement authorities send the complete request message (RKGC) to KGC.
3. KGC verifies the signature and decrypts the ciphertext to get the (SN), and then finds the corresponding (EMAC) and (SEkM(RIN || AC)) according to the (SN). Before sending them back, KGC signs them and likewise hides the (EMAC) and (SEkM(RIN || AC)) by encryption.
4. KGC sends the response message (E5) back to the request sender with secure protection.
5. The request sender gets the (EMAC) and (SEkM(RIN || AC)) that match the (SN) from (E5) and then generates another request message (RMVO). The request sender again signs the request and hides the (EMAC) and (SEkM(RIN || AC)) by encrypting them.
6. The police or law enforcement authorities send the complete request message (RMVO) to MVO.
7. MVO verifies the signature and decrypts the ciphertext to get (EMAC, RIN, AC), and then finds the related information, such as the real identities behind the anonymity, according to (EMAC, RIN, AC). Before sending them back, MVO signs the related information and hides it by encrypting it.
8. MVO sends the response message (E6) back to the request sender with secure protection. The police or law enforcement authorities can then get the information they want after verifying and decrypting (E6).

4 Security Analysis

In this section we first analyze the security of the anonymous key pair generation mechanism and the anonymity tracing mechanism; we then analyze the security of VANET communication using anonymous key pairs and compare it with other schemes.

4.1 Security Analysis for the Anonymous Key Pair Application Mechanism
1. Anonymity: The ELP is the material of the anonymous key pairs, but MVO blinds this material with an HMAC function. Even though KGC has the HMAC value, KGC does not know the HMAC key and cannot recover the original input (the ELP). Therefore, KGC does not know the applicant's real identity.
2. Unforgeability: The material of the anonymous key pairs is signed by MVO, so KGC can verify its legality. Conversely, MVO can verify KGC's signature to ensure the legality of the anonymous key pairs.
3. Authentication: Because MVO and KGC attach their certificates and signatures, they can recognize each other's identity.
4. Confidentiality: All messages are protected by encryption and the secure socket layer, so confidentiality is ensured.
5. Insider attack protection: The anonymity cannot be matched with the real identity because of the functionality of (r2) and the anonymity property (1) above. The random number (r2) is encrypted with KGC's public key before the applicant transmits it, so (r2) cannot be known by MVO. After the anonymous key pairs are generated, KGC encrypts them with the secret key (r2) before sending them back to the applicant through MVO, so the anonymous key pairs cannot be seen by MVO. An inside attacker in either of the two departments who wants to match the anonymity with the real identity must have the authority that the police or law enforcement authorities have; without it, an attacker cannot get the information from both sides.

4.2 Security Analysis for the Anonymity Tracing Mechanism

1. Authentication: Because the police or law enforcement authorities attach their certificates and signatures, their identities can be recognized.
2. Confidentiality: All messages are protected by encryption, so confidentiality is ensured.
3. Replay attack protection: This problem is solved by using timestamps.

4.3 Security Analysis of VANET Communication Using Anonymous Key Pairs

1. Anonymity: A driver can randomly choose an anonymous identity with which to sign messages before transmission.
2. Authentication: An anonymous signature can be verified with the corresponding anonymous identity. If the signature cannot be verified, the anonymous identity was not generated by KGC under IBC.
3. Confidentiality: A sender can use the receiver's anonymous identity to encrypt messages before transmission.
4. Traceability: The police or law enforcement authorities can trace anonymities using the anonymity tracing mechanism.
5. Integrity: Hash values are used to prevent attackers from modifying messages in the air.
6. Non-repudiation: All broadcast messages have to be signed by a legal anonymous identity. If someone forges a traffic or safety-related message, the source's real identity can be traced from the forged message.
5 Conclusion
In this work, we propose an anonymous key pair generation mechanism and an anonymity tracing mechanism that satisfy the security requirements, with privacy preservation, for IBC in VANETs. The mechanism makes the anonymous key pair generation procedure secure: the anonymous identity cannot easily be matched to a real identity by an insider attack, yet it can be traced by the police or law enforcement authorities when necessary.
References

1. Qian, Y., Moayeri, N.: Design of secure and application-oriented VANETs. In: Proceedings of IEEE VTC 2008-Spring, Singapore (May 11-14, 2008)
2. Intelligent Transport Systems of Taiwan, http://www.its-taiwan.org.tw/
3. Plößl, K., Federrath, H.: A privacy aware and efficient security infrastructure for vehicular ad hoc networks. Computer Standards & Interfaces 30(6), 390–397 (2008)
4. Li, C.-T., Hwang, M.-S., Chu, Y.-P.: A secure and efficient communication scheme with authenticated key establishment and privacy preserving for vehicular Ad-Hoc networks. Computer Communications (2008)
5. Kamat, P., Baliga, A., Trappe, W.: An identity-based security framework for VANETs. In: International Conference on Mobile Computing and Networking, Los Angeles, California, USA, pp. 94–95 (September 2006)
6. Hubaux, J.-P., Capkun, S., Luo, J.: The security and privacy of smart vehicles. IEEE Security & Privacy 2(3), 49–55 (2004)
7. ITS America, http://www.itsa.org/
8. European Road Telematics Implementation Coordination (ERTICO), http://www.ertico.com/
9. U.S. Dept. Transportation, National Highway Traffic Safety Administration, Vehicle Safety Communications Project - Final Report (April 2006), http://wwwnrd.nhtsa.dot.gov/pdf/nrd-12/060419-0843/PDFTOC.htm
10. Raya, M., Papadimitratos, P., Hubaux, J.-P.: Securing vehicular communications. IEEE Wireless Communications Magazine (2006)
11. Wang, N.-W., Huang, Y.-M., Chen, W.-M.: A novel secure communication scheme in vehicular Ad-Hoc networks. Computer Communications 31(12) (2008)
12. Chen, L.: Identity-based cryptography, http://www.sti.uniurb.it/events/fosad06/papers/Chen-fosad06.pdf
An Improvement of the Mobile Database Replication Model

Yang Chang Chun, Ye Zhi Min, and Shen Xiao Ling

Department of Information Science and Engineering, Changzhou University, Changzhou, China
[email protected]
Abstract. On the basis of the three-level model, this article takes related-transaction result sets as the unit of the synchronous processing model: instead of the traditional tuple or single-transaction result sets, related-transaction result sets are used as the synchronization granularity, together with a conflict resolution strategy for related transactions. This effectively reduces the amount of data transmitted during synchronization and improves the efficiency with which mobile databases use wireless link resources. In addition, the upload transaction queue structure is improved: the traditional upload form of read set, write set and result set is replaced by a form of data set and result set, which improves upload efficiency. Finally, simulation experiments compare the improved model with the traditional synchronous processing model and demonstrate its superiority.

Keywords: mobile database, synchronization granularity, data synchronization.
1 Introduction
Data replication and caching are among the most important technologies in mobile databases. Their main purpose is to improve the availability and reliability of the database system and the performance of data access. In recent years, much research has been done on mobile database replication.
Multi-version conflict resolution [1]. Advantages: transactions do not need to be redone on the server, which reduces the server's burden, and multi-version management greatly improves the success rate of transaction validation. Shortcoming: high system complexity, so usability and scalability are poor.
Three-level replication [2]. Advantages: a data broadcasting mechanism is introduced, increasing system scalability, and the three-level replication mechanism can be applied flexibly to a variety of mobile databases. Shortcomings: its realization is complex, it consumes more system resources, and it places higher performance requirements on the mobile host.
This shows that the existing synchronous replication mechanisms are imperfect. This article presents a processing model that takes related-transaction result sets as the synchronization granularity, combined with a conflict resolution strategy for related transactions. At the same time it improves the upload transaction queue structure, effectively reducing the amount of data transmitted during synchronization and improving how efficiently the mobile database uses wireless link resources.
M. Zhu (Ed.): ICCIC 2011, Part VI, CCIS 236, pp. 511–516, 2011. © Springer-Verlag Berlin Heidelberg 2011
2 The Improved Replication Model
A New Upload Transaction Queue Structure. The upload queue stores transactions that have already been committed on the client and are waiting to be uploaded, to interact with the server. Traditionally, each transaction unit in the upload queue consists of a reading set, a write set, and a result set (Table 1). Storing every data object that was read wastes space on the mobile device, whose resources and wireless uplink bandwidth are limited; it also forces excess data to be compared during collision detection, increasing system cost and reducing transmission efficiency.

Table 1. Traditional upload transaction queue structure

Mobile transaction | Reading set | Write set | Result set
MT1                | Readingset1 | Writeset1 | Resultset1
MT2                | Readingset2 | Writeset2 | Resultset2
Therefore, this article adopts a new upload transaction queue structure that saves buffer space, shortens collision detection, and effectively improves synchronization efficiency [3] (Table 2). Assume the database management system generates a unique number NUM for each row.
Definition 1 (Dataset). Before a transaction commits, SQL statements are used to analyze its operation sequence; each operation is converted to a Select operation, yielding the collection of affected data objects together with their serial numbers. For a Delete operation, e.g. delete * from student where age=30, the corresponding Select statement is select * from student where age=30. For an Update operation, e.g. update student set name='Job' where id=5, the corresponding Select statement is select name from student where id=5. An Insert operation needs no conversion; its entry is set directly to φ.
Definition 2 (Resultset). After the transaction commits, the collection of all updated or inserted data objects, together with their serial numbers, is recorded. A Delete operation needs no conversion; its entry is set directly to φ.
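As an illustration of Definition 1, the conversion from DML statements to the Select statements that build the Dataset can be sketched as follows. This sketch is ours, not the paper's implementation; the function name and the regex-based parsing are illustrative only:

```python
import re

def to_dataset_query(stmt: str):
    """Convert a DML statement into the Select that captures its Dataset.

    Returns None (the paper's phi) for operations that need no conversion.
    """
    stmt = stmt.strip().rstrip(";")
    # DELETE * FROM t WHERE cond  ->  SELECT * FROM t WHERE cond
    m = re.match(r"delete\s+\*?\s*from\s+(\w+)\s+where\s+(.+)", stmt, re.I)
    if m:
        return f"select * from {m.group(1)} where {m.group(2)}"
    # UPDATE t SET col=val WHERE cond  ->  SELECT col FROM t WHERE cond
    m = re.match(r"update\s+(\w+)\s+set\s+(\w+)\s*=\s*\S+\s+where\s+(.+)", stmt, re.I)
    if m:
        return f"select {m.group(2)} from {m.group(1)} where {m.group(3)}"
    # INSERT needs no conversion: the Dataset entry is phi
    if re.match(r"insert\b", stmt, re.I):
        return None
    return stmt  # a plain Select already describes its own Dataset
```

The generated Select can then be executed to collect the affected rows and their NUM serial numbers.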
Table 2. Improved upload transaction queue structure

Mobile transaction | Dataset  | Resultset
MT1                | Dataset1 | Resultset1
MT2                | Dataset2 | Resultset2
For example, suppose a transaction MTi contains the following sequence of operations on the student table (Table 3):
Select Name from student where ID=4;
Update student set age=33 where name='wangw';
Insert into student (ID, Name, Sex, Age) values (6, 'xiaol', 'Female', 20);
Delete * from student where ID=2;

Table 3. Student

ID | Name     | Sex    | Age
1  | Zhangsan | male   | 20
2  | Lisi     | male   | 23
3  | Wangw    | female | 25
4  | XiaoHong | female | 19
After parsing, the upload transaction unit is obtained (Table 4):

Table 4. Parsed upload transaction unit

Mobile transaction | Dataset                    | Resultset
MTi                | NUM 'XiaoHong';            | φ;
                   | NUM 25;                    | NUM 33;
                   | φ;                         | NUM 6, 'xiaol', 'female', 20;
                   | NUM 2, 'Lisi', 'male', 23; | φ;
Related-Transaction Result Sets as the Synchronization Granularity. Mobile synchronization granularity is traditionally divided into data-set synchronization and transaction-level synchronization, but both are imperfect. This article takes related-transaction result sets as the synchronization granularity, a synchronization mode lying between the data-set level and the transaction level.
Definition 3 [4]. In the mobile transaction upload queue, consider two mobile transactions MTi and MTj (i < j ≤ n), with MTi committed before MTj. If DataSet(MTi) ∩ DataSet(MTj) = a ≠ φ, the two transactions are said to be related transactions with respect to a.
Definition 4 [5]. In collision detection, a transaction conflicts if any one of its data objects conflicts.
Conflict Resolution Strategy Based on Related Transactions
Theorem 1 [6]. In collision detection, if MTi conflicts because of a data object a, then among the transactions subsequent to MTi, every transaction related to MTi with respect to a must also conflict.
Therefore, in collision detection, if a transaction conflicts because of a data object, then not only must that transaction be rolled back, but so must every transaction related to it with respect to that data object. This method reduces the burden on the uplink to a certain extent and saves collision detection time.
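A minimal sketch of Definition 3 and the cascading rollback implied by Theorem 1 (the identifiers and the set-based representation of datasets are our own assumptions, not the paper's code):

```python
def related(dataset_i, dataset_j):
    """Definition 3: two transactions are related iff their datasets intersect."""
    return set(dataset_i) & set(dataset_j)

def cascade_rollback(queue, i, obj):
    """Theorem 1: if queue[i] conflicts on serial number obj, every later
    transaction related to it with respect to obj must also roll back."""
    victims = [i]
    for j in range(i + 1, len(queue)):
        if obj in related(queue[i]["dataset"], queue[j]["dataset"]):
            victims.append(j)
    return victims

# Upload queue with datasets keyed by row serial numbers (NUM values)
queue = [
    {"name": "MT1", "dataset": {4, 3, 2}},
    {"name": "MT2", "dataset": {3, 5}},
    {"name": "MT3", "dataset": {7}},
]
```

Here a conflict of MT1 on serial number 3 would drag MT2 into the rollback as well, while MT3 is unaffected.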
3 The Improved Synchronization Algorithm
Collision Detection. During collision detection, the server first uses the dataset-resultset generator to produce the dataset corresponding to the client's transaction, and then simply compares the dataset in the upload queue with the server's dataset. If they are the same, the transaction passes collision detection and its resultset is globally committed to the master database. If they differ, the transaction is rolled back; the server determines the serial number of the data object that caused the collision, searches subsequent transactions for tuples with the same serial number, and, for each transaction found to contain such a data object, rolls that transaction back as well. For an Insert operation, collision detection only needs to check whether the master database's entity integrity would be violated; if not, the server generates a serial number for the new tuple. After all transactions have finished collision detection, the server completes the modifications and, via download, passes the new serial numbers to the client to refresh the local database.
Upload Process.
Step 1: the client's transactions are converted into dataset/resultset form by the dataset-resultset generator, producing upload transaction units that are deposited in the upload queue.
Step 2: the corresponding tuples are locked; the first and second mobile transactions in the upload queue are compared, and the definition of related transactions determines whether MT1 is related to MT2.
Step 3: if MT1 is related to MT2, the related data objects are marked as UT1, and the datasets and resultsets of MT1 and MT2 are merged into UnionSet(MT1, MT2); the merged resultset holds the values after the last related transaction commits. If MT1 and MT2 are unrelated, they are synchronized with the master database in order, MT1 then MT2, and both are deleted from the upload queue.
Step 4: MT3 is taken from the upload queue, and the definition of related transactions determines whether MT3 is related to UT1. If it is, Step 3 is repeated: MT3 is merged with UnionSet(MT1, MT2) into UnionSet(MT1, MT2, MT3), the merged set of the three interrelated transactions. In this case the merged set is synchronized with the master database directly rather than cached while waiting to judge the fourth transaction. If MT3 and UT1 are unrelated, UnionSet(MT1, MT2) and then MT3 are synchronized with the master database in order.
Step 5: for MT4, if all previous transactions' synchronization has completed, execute Step 2; otherwise continue with Step 3, and so on.
Download Process. This article adopts a mixed download/broadcast mode: when the client is busy, it simply requests data that is not yet on the client; when the
client is in the idle state, it requests updates to existing data; in addition, clients obtain hot data by data broadcasting at regular intervals. For the first two cases, the download process is divided into the following three steps:
Step 1: according to the last download time and the server's database log files, or according to the download request, determine which collection of data objects needs to be downloaded.
Step 2: to ensure the consistency of the read data, apply the appropriate tuple locks.
Step 3: transfer the data to the output buffer.

4 Improved System Testing and Analysis
Experiment Environment. The experimental system is developed in Java with MyEclipse as the development tool. A LAN environment simulates the wireless environment, with link bandwidth controlled by padding the transmitted data with spaces. The LAN contains one server and three clients (Table 5); multiple client threads simulate multiple clients, and the average interval is set to 10 ms.

Table 5. Software and hardware environment

Role   | CPU                                  | Memory | System platform
Server | Intel i5, 2.53 GHz                   | 2 GB   | Windows 7
Client | Intel Core 2 Duo, 2.2 GHz            | 1 GB   | Windows XP
Client | AMD Dual-Core Opteron 2212, 2000 MHz | 1 GB   | Windows 2000 Server
Client | Intel Core2 E7500, 2930 MHz          | 1 GB   | Linux
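The upload-process steps of Section 3 (relatedness check, merging into a UnionSet, ordered synchronization) can be sketched roughly as follows. This is a simplified variant under our own naming, not a line-by-line transcription of Steps 2–5; datasets and resultsets are modeled as dicts keyed by row serial number (NUM), and merging keeps the last committed value, as the paper specifies:

```python
def merge(u, v):
    """Merge two upload units: union the datasets; for each serial number,
    keep the resultset value of the later (last-committed) transaction."""
    dataset = dict(u["dataset"])
    dataset.update(v["dataset"])
    resultset = dict(u["resultset"])
    resultset.update(v["resultset"])
    return {"dataset": dataset, "resultset": resultset}

def process_upload_queue(queue, synchronize):
    """Walk the queue, merging related transactions; flush a pending
    (possibly merged) unit to the master database when the next
    transaction is unrelated to it."""
    pending = None
    for mt in queue:
        if pending is None:
            pending = mt
        elif set(pending["dataset"]) & set(mt["dataset"]):  # related (Def. 3)
            pending = merge(pending, mt)
        else:  # unrelated: synchronize in order
            synchronize(pending)
            pending = mt
    if pending is not None:
        synchronize(pending)
```

Because related transactions are merged before transmission, intermediate values never cross the wireless link, which is the source of the bandwidth saving claimed in the abstract.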
A TCP connection is kept between each client and the server: clients submit through the uplink and receive results through the downlink. Communication between the clients and the server is implemented with sockets [7] (Fig. 1).

Fig. 1. Socket communication model
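A hypothetical client-side sketch of such a socket exchange (the length-prefixed framing, the function name, and the default address are our own assumptions, not described in the paper):

```python
import socket

def sync_upload(unit_bytes: bytes, host: str = "127.0.0.1", port: int = 9090) -> bytes:
    """Send one serialized upload-transaction unit over the uplink and
    receive the server's result over the downlink, using a simple
    4-byte length prefix to frame each message."""
    with socket.create_connection((host, port)) as s:
        s.sendall(len(unit_bytes).to_bytes(4, "big") + unit_bytes)
        size = int.from_bytes(s.recv(4), "big")
        data = b""
        while len(data) < size:
            chunk = s.recv(size - len(data))
            if not chunk:
                break
            data += chunk
        return data
```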
Experimental Result. To demonstrate the advantage of the improved method, the model that adds related-transaction result sets as the synchronization granularity (combined with the conflict resolution strategies) on top of the improved upload transaction queue is compared against a model that improves only the upload transaction queue. The experiment shows that the former transmits noticeably less synchronization data than the latter: merging the result sets of related transactions eliminates intermediate data, so the quantity of data transmitted drops accordingly, matching the theoretical analysis. This confirms the superiority of taking related-transaction result sets as the synchronization granularity.
References 1. Kistler, J.: Disconnected Operation in the Coda File System, pp. 6–22. Carnegie Mellon University, Pittsburgh (1993) 2. Keller, A.M., Basu, J.: A predicate-based caching scheme for client-server database architectures. VLDB Journal 5(1), 38–52 (1996) 3. Pissinou, N.: A new framework for handling mobile clients in a client-server database system. Computer Communications 23, 899–952 (2000) 4. Cao, G.H.: Proactive Power-Aware Cache Management for Mobile Computing Systems. IEEE Transactions on Computers 51(6), 596–621 (2002) 5. Gray, J., Helland, P., O'Neil, P., et al.: The dangers of replication and a solution. In: Proceedings of the ACM SIGMOD Conference, pp. 173–181 (1996) 6. Zheng, B., Lee, D.L.: Processing Location-Dependent Queries in a Multi-Cell Wireless Environment. In: MobiDE 2001, Santa Barbara, CA, USA, pp. 68–73 (2001) 7. Want, R., Schilit, B.: Expanding the Horizons of Location-Aware Computing. IEEE Computer, 1124–1131 (2001)
Software Design and Realization of Altimeter Synthetically Detector
Shi Yanli, Tan Zhongji, and Shi Yanbin
Dept. of Electronic Engineering, Aviation University of Air Force, Changchun, 130022, China
[email protected]
Abstract. Testing the altimeter is an important part of verifying the function of the aircraft navigation system, and the synthetical altimeter detector standardizes the testing of a given type of altimeter. This paper introduces the overall design and system composition of the altimeter detector, analyzes the software design procedure of the instrument in detail, and finally reports experimental tests of some of the detector's performance. Practice indicates that the detector's sixteen-bit D/A converter improves detection precision, and that installing all microwave channels and fittings inside the instrument further improves its effectiveness; the instrument is mistake-proof and easy to use, and offers good military and economic benefits. Keywords: Altimeter, Procedure Structure, Microwave Control, Experiment.
1 Introduction The altimeter synthetical detector is a comprehensive testing instrument developed to meet the maintenance and check-up needs of overhaul factories and the army, following the test craft and requirements of the related radar altimeter. The instrument adopts comparatively advanced present-day technology, backed by accurate measurement techniques. With the cooperation of general-purpose instruments it can measure the receiver or the indicator of an altimeter alone, and it can run all test projects of the related radar altimeter, for example: the sensitivity, altitude precision, voltage slope, and zero-altitude measurement point of the radar altimeter receiver; at the same time it can test the indicator precision, alarm altitude, self-detection, and alert flag control of the radar altimeter indicator, and carry out these checks accurately. The tester can be used to test and check the given type of radar altimeter and verify that every performance index meets the technical requirements.
2 The Global Design of the System
The instrument adopts the FPGA device EP1C6Q240C8N produced by Altera as its core control device and follows an integrated design. It is mainly composed of the following parts: the power circuit, the EP1C6Q240C8N programmable-device control circuit, the keyboard and indicator circuit, the voltage acquisition circuit, the velocity simulation circuit, the altitude simulation circuit, the microwave control circuit, the high-voltage output circuit, etc. The programmable device controls and completes all of the system's work; the working state is chosen through the keyboard and shown on the display. The principle block diagram of the instrument is shown in Fig. 1.
M. Zhu (Ed.): ICCIC 2011, Part VI, CCIS 236, pp. 517–521, 2011. © Springer-Verlag Berlin Heidelberg 2011
3 The Software Design of the System
The system software is designed around the Altera programmable device EP1C6Q240C8N, using the VHDL language in the Quartus 5.1 integrated environment, and implements all of the system's control functions, data gathering and processing, analog-to-digital conversion, and display. The circuit mainly performs the following functions: handling the panel switches and buttons that select the working state of the altimeter detector; controlling voltage acquisition, conversion, and processing; controlling the generation of the distance analog signal; controlling the generation of the velocity analog signal; controlling the production of the various voltages needed for measurement; and controlling the attenuation and delay of the microwave path.
Fig. 1. Block diagram of altimeter comprehensive detector
Fig. 2. The overall procedure control diagram (flow: beginning → equipment initialization → equipment checking → keyboard choosing among indicator checking, alarm altitude checking, sensitivity testing, altitude simulation, velocity simulation, and voltage measuring → data calculation → number indicator)
The overall test procedure is shown in Fig. 2. After power is applied, the detector first performs initialization, covering the programme-controlled attenuator, the radio-frequency relay, the A/D converter, the D/A converter, and the modulation circuit. Then the checkout equipment starts working: at power-on it checks the capture time and the −50 V voltage. The test project is then chosen under keyboard control; according to the chosen project, each port switch is controlled, the gathered data are filtered and calculated, and finally the measurement result is sent to
the number indicator. The detector adopts a portable box structure and is small, light, and easy to use and operate.
4 Experimental Results
1. Attenuation of the radio-frequency path. Requirement: resolution 1 dB; precision ±1 dB.

Table 1. Attenuation of the radio-frequency path (unit: dB)

Requirement       | 52   | 63   | 65   | 71   | 72   | 74   | 80   | 82   | 114   | 127
Measurement value | 51.2 | 63.5 | 64.3 | 70.2 | 72.3 | 73.8 | 80.4 | 82.3 | 114.5 | 127
2. Delay time of the radio-frequency output. Requirement: zero height 0 m, precision ±0.3 m; high height 500 m, precision ±1.8 m.

Table 2. Delay time of the radio-frequency output (unit: m)

Requirement       | 0   | 500
Measurement value | 0.1 | 500.3
3. Distance simulation. Requirement: error smaller than 0.15 m below 150 m; error smaller than 0.75 m above 150 m.

Table 3. Distance simulation (unit: m)

Requirement       | 0   | 30    | 60   | 90   | 120   | 300   | 600   | 900   | 1200   | 1500
Measurement value | 0.1 | 30.05 | 60.1 | 89.9 | 120.1 | 300.2 | 600.1 | 900.2 | 1200.5 | 1500.4
The experiment indicates that the detector tested every performance item of the altimeter above and that all results meet the testing requirements with relatively high precision. From the experimental results we can draw some conclusions: the detector's sixteen-bit D/A converter improves detection precision, and installing all microwave channels and fittings inside the instrument further improves its effectiveness; the instrument is mistake-proof and easy to use, and offers good military and economic benefits.
5 Conclusion
The altimeter detector has been checked by numerous experiments in different army units. It meets the need when measuring the receiver alone, measuring the indicator alone, or testing the system function, and it can run all test projects of the given type of radar
altimeter; its accuracy and dependability meet the demands. The instrument is practical and reliable; by adopting new technology and new devices its dependability is greatly improved. It not only brings the engineering level and precision of the designed apparatus up to the current standard, but also offers the conditions for exchanging information with the increasingly common digitized ground apparatus.
References 1. Yang, Q.: The Credibility Standard Employment Guide. China Standard Press, Beijing (2000) 2. Support and Rationale Document for the Software Communications Architecture Specification, Appendix C: Step 1 Architecture Definition Report (ADR), 63 (June 2000) 3. Xie, H., Zhou, J.: Prediction and elimination of a district in the radiation field of a TACAN guidance station. Aviation Electronic Technology 33(1), 20–23 (2002) 4. Hao, S., Hu, Y.: Domestic production of navigation correction software for a certain type of plane. Journal of Air Force Engineering University 3(1), 48–50 (2002) 5. Chang, S., Tan, Z.J., Li, N.: The design of an interphone communicating and recording detector. Journal of Computer Measurement and Control 16(1), 135–137 (2007) 6. Tan, Z.J., Chang, S., Li, N.: The apparatus study of a certain type of TACAN system. Journal of Computer Measurement and Control 16(1), 87–89 (2007) 7. Tan, Z.J., Gao, Y., Xiao, L.Z.: The design of a language and information system. Journal of Computer Measurement and Control 14(2), 230–232 (2006) 8. Makaya, C., Pierre, S.: An Interworking Architecture for Heterogeneous IP Wireless Networks. In: Third International Conference on Wireless and Mobile Communications, ICWMC 2007, p. 16 (March 2007)
Emulational Research of Spread Spectrum Communication in the Multipath Fading Channel
Shi Yanli, Shi Yanbin, and Yu Haixia
Dept. of Electronic Engineering, Aviation University of Air Force, Changchun, 130022, China
[email protected]
Abstract. Spread spectrum communication is a way of delivering information with good anti-interference ability. This paper introduces the basic principles and structure of direct sequence spread spectrum. In the SystemView software we set up a simulation model of the spread spectrum communication system, add a Rummler interference channel and a self-defined channel, then run the simulation and comparatively analyze the transmission of the spread spectrum signal in the different channels. The analysis certifies the anti-interference ability of spread spectrum communication. Keywords: Spread Spectrum Communication, Rummler, Self-defined channel, Anti-interference.
1 Introduction In the more than thirty years since the mid-1950s, spread spectrum communication has developed rapidly and gained ever wider application in communication, data transmission, information privacy, reservation systems, ranging, positioning, and so on, revealing its strong vitality. Multipath interference is a common problem in satellite communication, scatter communication, mobile communication, and aircraft-satellite communication, but spread spectrum systems can overcome it effectively. This paper carries out simulation research on this issue and analyzes the anti-interference ability of direct sequence spread spectrum communication.
2 Emulational Analysis of Direct Sequence Spread Spectrum Communication In a spread spectrum communication system, the transmitter adds spread spectrum modulation and the receiver adds spread spectrum demodulation. The direct sequence spread M. Zhu (Ed.): ICCIC 2011, Part VI, CCIS 236, pp. 522–527, 2011. © Springer-Verlag Berlin Heidelberg 2011
spectrum (DS) and frequency hopping (FH) are the two techniques most widely employed in spread spectrum communication. A simulation model of the DS system can be set up in SystemView. In the transmitter, the data information produced by a random sequence generator is converted from absolute code to relative code and modulated onto the radio-frequency carrier by a 2PSK modulator; multiplying by the spreading code then produces the spread spectrum signal at the centre frequency. In the receiver, the signal is converted to an intermediate-frequency signal through a band-pass filter and mixer, then despread and demodulated, and the relative code is converted back to absolute code to recover the original information. The simulation results are shown in the following figures:
Fig. 1. Original code
Fig. 2. Demodulated Spread Spectrum output signal
Fig. 3. Spread Spectrum signal
Fig. 4. Spectrum of Spread Spectrum signal
In the ideal channel without multipath interference, the direct sequence spread spectrum communication system transmits the signal without distortion: the signal recovered by despreading and demodulation in the receiver is identical to the original signal in the transmitter, with only a certain delay.
3 Emulational Analysis of the Rummler Channel The Rummler fading channel is a three-ray channel model composed of the direct ray and reflected energy mixed together; the reflected signal changes the amplitude and phase of the direct signal, forming the multipath signal. This kind of three-ray fading channel is applied extensively in the simulation of the digital
Fig. 5. Original code
Fig. 6. Demodulated Spread Spectrum output signal
Fig. 7. Spread Spectrum signal
Fig. 8. Spectrum of Spread Spectrum signal
microwave relay communication link with line of sight (LOS). It is a statistical model based on the transfer function. After setting up the system and module parameters according to the model, we run the circuit described above; through the SystemView analysis window and the receiver calculator, the waveform at each point can be observed directly and the system's noise resistance analyzed. The simulation results are shown in the figures above. From these waveforms it follows that after multipath interference and Gaussian noise are added to the direct sequence spread spectrum system, the time-domain spread spectrum signal shows serious fading and its frequency spectrum shows serious interference, but no distortion occurs in the despread output signal, so the signal can be transmitted accurately and reliably. Obviously, when the Rummler channel introduces multipath interference, the spread spectrum communication system retains good anti-interference ability.
4 SystemView Simulation Analysis of a Self-Defined Channel In practical engineering, the channel model is far more complicated than the idealized one assumed above. City and suburb, plain and mountain region, and climate changes all yield different channel models and delay parameters. The idealized equivalent model can no longer meet the simulation requirements, but SystemView has considered this situation: its basic communication library offers a special simulation model just for multipath channels. The simulation results are shown in the following figures:
Fig. 9. Original code
Fig. 10. Demodulated Spread Spectrum output signal
Fig. 11. Spread Spectrum signal
Fig. 12. Spectrum of Spread Spectrum signal
From the above simulation waveforms it follows that after multipath interference and Gaussian noise are added to the direct sequence spread spectrum system, the time-domain spread spectrum signal shows serious fading and its frequency spectrum shows serious interference, but no distortion occurs in the despread output signal, so the signal can be transmitted accurately and reliably. Obviously, when the self-defined channel introduces multipath interference, the spread spectrum communication system retains good anti-interference ability.
5 Conclusion From the above simulation study, we know that the spread spectrum communication system has good anti-interference ability. As new, advanced, multi-functional, ultra-high-power, ultra-broadband interference systems appear and evolve, electronic countermeasure warfare will become fiercer. To guarantee that essential information is conveyed safely and reliably on the high-technology battlefield of the future, research on communication anti-interference will extend to more fields; the anti-interference ability of military communication in particular will certainly be strengthened, and communication anti-interference technology is sure to develop further.
References 1. Li, S.: The development and prospect of spread spectrum communication. Journal of the University of Electronic Science and Technology (1996) 2. Wang, X.-l.: Digital Communication Theory. Xi'an University of Electronic Science and Technology (2009) 3. Wang, H.-k.: Theory and Technology of Mobile Communication. Tsinghua University, China (2009) 4. Stiffler, J.: Theory of Synchronous Communication. Prentice-Hall, Inc., Englewood Cliffs (1971) 5. Miki, T., Ohya, T., Yoshino, H., Umeda, N.: The Overview of the 4th Generation Mobile Communication System. In: 2005 Fifth International Conference on Information, Communications and Signal Processing, pp. 1600–1604 (2005) 6. Benali, O., El-Khazen, K., Garrec, D., Guiraudou, M.: A framework for an evolutionary path toward 4G by means of cooperation of networks. IEEE Communications Magazine 42, 82–89 (2004) 7. Makaya, C., Pierre, S.: An Interworking Architecture for Heterogeneous IP Wireless Networks. In: Third International Conference on Wireless and Mobile Communications, ICWMC 2007, p. 16 (March 2007) 8. Li, Y., Jia, Y.: Design of Experimental Teaching in the System of Communication Based on SystemView. Agricultural Network Information, 119–122 (2010)
A Pilot Study on Virtual Pathology Laboratory
Fan Pengcheng¹,², Zhou Mingquan¹, and Xu Xiaoyan³
¹ School of Information Science and Technology, Beijing Normal University, Beijing 100875, China
² College of Mathematics and Science, Inner Mongolia Normal University, Hohhot 010020, China
³ Dept. of Pathology, Pathology Staffroom, Inner Mongolia Medical College, Hohhot 010059, China
[email protected]
Abstract. Virtual reality (VR) is being applied to a wide range of medical areas. In medical education, VR technology provides a cheap and simple way to learn and practice pathology experiments in any place and at any time. This paper reviews the development of the Virtual Pathology Laboratory (VPL), lists the possibilities of this new type of tutorial application, and then describes the framework of a simple VPL system using virtual slides and 3-D specimen models. We hope this can give a hint on how to teach pathology experiments in colleges, especially in the less developed areas of China. Keywords: Virtual Reality, Pathology, education, laboratory.
1 Introduction Pathology mainly researches the occurrence, development, and transformation rules of disease from a morphological perspective, and it is strongly intuitive and practical. In pathology education it is therefore necessary for students to watch and study a sufficient quantity and variety of typical gross and biopsy specimens through experiments, so experimental teaching plays a very important role. In recent years, however, as the enrollment of China's higher education has expanded, laboratory construction has lagged behind the rapid growth in the number of college students, leaving many colleges in a predicament with experimental teaching. In pathology this predicament usually shows in the following aspects: a lack of experimental sites decreases the practice rounds; an insufficient stock of specimens gives labs too few chances to supply students; limited experimental class time leaves students without sufficient time in the laboratory to review and examine specimens; and a shortage of experimental staff forces many teachers to guide the experiment course all day long, which affects both the teaching effect and the teachers' health. Solving these problems requires modern technology, one branch of which is virtual reality (VR). By using VR technology, along with database and network technology, we can establish a virtual pathology laboratory, where the limitation of the teaching resources M. Zhu (Ed.): ICCIC 2011, Part VI, CCIS 236, pp. 528–535, 2011. © Springer-Verlag Berlin Heidelberg 2011
has been removed. All a pathology student needs is a computer equipped with some special hardware and software, through which he or she can browse, learn and practice at any time and in any place. This will be very useful for the wider development of pathology experimental teaching and for improving the experimental effect. In this paper, we design a prototype of a pathology experimental teaching system based on virtual reality, and we hope it will give some hints for higher medical education.
2 The Possibility of a Virtual Pathology Laboratory
2.1 Virtual Reality and Its Application in Pathology
Virtual reality (VR) is a very powerful and compelling computer application. It provides humans with a means to interact with computer models through new human-computer interfaces and thus to experience these models almost realistically. In 1993, Kaltenborn [1] foretold some medical applications of VR and outlined the future prospects of medical VR applications for the first time. Nowadays, with the development of this attractive technology, VR is being applied to a wide range of medical areas, including medical education, training, surgery and diagnostic assistance. In medical education, VR opens new realms in the teaching of medicine and creates new, effective learning procedures for students, one of which is VR-based pathology education. Before the emergence of VR-based pathology (virtual pathology for short), telepathology, the practice of pathology at a distance, had advanced continuously since 1986. In recent years, benefiting from worldwide progress in image digitization and transmission, "virtual slides" were introduced into pathology diagnosis and education, and the concept of "telepathology" was promoted to "virtual pathology" [2]. Today, VR-based pathology systems are widely used in education. The trend at medical schools in the United States is to go entirely digital in their pathology courses, discarding student light microscopes and building virtual slide laboratories. In the United States and Europe, virtual pathology technology has been commercialized by more than 30 companies, and virtual pathology is already the enabling technology for new, innovative laboratory services.
Some practical VR-based pathology teaching systems have been developed and promoted in medical education, such as the "Virtual Pathology" lab of Stanford University [3], the "Pathology Project" of the Medlink Company [4], the "Second Life" project of Sunderland University [4], the Medical Readiness Trainer (MRT) [5], VisualizeR [6], TelePol [7], and VizClass [8]. "Virtual Pathology" is a virtual pathology lab containing web pages with virtual slides that allow zooming and panning over huge images of whole tissue sections, covering the fundamental aspects of pathology. The "Pathology Project" is a virtual pathology course in addition to the physical course; it contains nearly all of the same content as the actual 'physical' pathology course and offers students a superb opportunity to take the course from their own home or school at their own pace. "Second Life" is a project that supplies a shared learning resource centered around a virtual pathology lab; it will be used to deliver a pilot scenario in which students can study and carry out a virtual Full Blood
530
F. Pengcheng, Z. Mingquan, and X. Xiaoyan
Count task. The MRT constitutes a medical education environment in which the integration of fully immersive virtual reality with highly advanced medical simulation technologies and medical databases exposes the trainee to a vast range of complex medical situations. VisualizeR is a teaching system of the University of California San Diego (UCSD) School of Medicine's Learning Resources Center (LRC), which has been actively engaged in developing and implementing a VR-based application for education and training. TelePol is a telepathology lab project designed for the School of Medicine, University Malaysia Sabah; it gave a better understanding of static and dynamic telepathology and of the role of TelePol in medical tele-education, and concluded with a positive effect on education. VizClass provides a completely digital, interactive workspace for research and education. There are also some research projects that can serve as preparations for the development of VR-based pathology education. Ackerman [9] described the Visible Human Project (VHP), whose Visible Human database provides computer simulation of the live human body; this is a good resource for gross specimen recognition and will be very helpful for pathology education. Digital Prospect [10] was an experiment that demonstrated a fully automated procedure for digitizing an entire pathology slide, so that one CD-ROM can store a whole glass slide, along with a self-installing program that provides a microscope-simulator facility. This allows pathologists to examine a virtual case on their computer in a manner similar to looking at a glass slide on a conventional microscope, and permits a new, computer-based approach to proficiency testing in histopathology and cytopathology.
The use of virtual slides should also encourage the diffusion of national quality assurance programs, which at present suffer from certain organizational and logistical limitations.
2.2 Advantages of the Virtual Pathology Laboratory
The Virtual Pathology Laboratory (VPL) system is an integrated environment based on computer graphics, virtual reality, databases, and computer networks. In this environment, users can effectively use distributed virtual slide data, information, 3D simulated equipment, virtual gross specimens, etc. To improve the usefulness and verify the effectiveness of such systems, many researchers have studied this area. Grimes [11] proposed a prototype of a virtual microscope that provides the same diagnostic capabilities as a real microscope, placing the remote pathologist in a virtual world with the capabilities of a "real" microscope. Sato [12] reviewed applications of virtual microscopy and virtual slides for teaching and diagnosis, and pointed out their wider applicability compared with real microscopes and real glass slides. Costello [13] and Fontelo [14] evaluated the diagnostic accuracy and acceptability of the Virtual Pathology Slide for telepathology; their results showed that it can be used to make correct diagnostic decisions and has potential applications in medical education in countries with developing economies. Hudson [15] compared the quality of traditional learning formats with computer-interactive courses and showed the advantage of computer-aided learning in pathology. Dee [16] gave further evidence of the advantages of virtual microscopy
(VM) in education, showing the accessibility and efficiency of VM for teaching histopathology to medical students and pointing out that VM may also be effectively implemented in other medical-student teaching models, including integrated and problem-based learning curricula and the classical pathology laboratory. Miedzybrodzka [17] demonstrated that VR-based pathology can be used in a variety of settings to increase curriculum flexibility. Rocha [18] and Weinstein [19] discussed the applications and challenges of virtual pathology and foretold its future prospects. Through these experiments, the researchers established that, compared with traditional education, the Virtual Pathology Laboratory system has several advantages, including:
- Elimination of restrictions on time and place: students can practice anytime and anywhere;
- Deeper and broader teaching content through the sharing of educational resources;
- Cultivation of students' ability to analyze and solve problems independently, deepening their understanding of concepts;
- A solution to the shortage of teachers, especially in less developed areas;
- Liberation of teachers from heavy teaching tasks, giving them more time to focus on the organization and presentation of learning content.
Therefore, the Virtual Pathology Laboratory system will be adopted by more and more medical colleges and will be more widely used in the near future.
3 Architecture and Functions of the Virtual Pathology Laboratory System
3.1 The Architecture
There are two main tasks in the pathology experiment. One is to help students understand the interrelation of the anatomical structures of gross specimens in 3D space so as to grasp the technique of making pathology slides; the other is to learn how to read pathology slides, the most important skill for pathology students. For these purposes, we designed an architecture for a pathology experimental education system, based on the research of our lab, which gives pathology students richer learning materials and a motivating learning experience. The general architecture of the Virtual Pathology Laboratory is illustrated in Fig. 1. The architecture is a server-client structure. On the server side, learning materials such as 3D experimental devices and virtual slides are modeled with special equipment and techniques and then put into a multidimensional database, along with knowledge about the learning resources. The server and client use HTTP as the communication protocol, with CGI and servlets for server applications. On the client side, the Virtual Pathology Laboratory platform is designed for the users. On it, 3D models of gross specimens and virtual slides are demonstrated, and the users can experience a true
sense of presence and can choose a highly personalized way of learning. Moreover, through the network, students can practice the techniques at almost any time and place, and as many times as they want. After students finish their courses, an evaluation system gives them appropriate feedback. The architecture of the whole system is highly flexible and easy to extend: we can offer more functionality by adding more server applications, so the system can also serve other fields of medical education. Furthermore, a distributed computing environment can be applied to accelerate computation and keep the load balanced.
Fig. 1. The general architecture of Virtual Pathology Laboratory System
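The HTTP-based server-client exchange described above can be sketched in a few lines. This is a minimal illustration only: the base URL, the tile endpoint layout and the 256-pixel tile size are assumptions made for the example, not details taken from the VPL design.

```python
from urllib.parse import urlencode

BASE = "http://vpl-server.example/slides"  # hypothetical server address

def tile_url(slide_id: str, level: int, col: int, row: int) -> str:
    """Build the URL a client would request for one virtual-slide tile."""
    query = urlencode({"level": level, "col": col, "row": row})
    return f"{BASE}/{slide_id}/tile?{query}"

def tiles_for_viewport(x0, y0, width, height, tile_size=256):
    """Compute which (col, row) tiles cover a viewport at one zoom level,
    so the client fetches only the visible part of a huge slide image."""
    c0, r0 = x0 // tile_size, y0 // tile_size
    c1 = (x0 + width - 1) // tile_size
    r1 = (y0 + height - 1) // tile_size
    return [(c, r) for r in range(r0, r1 + 1) for c in range(c0, c1 + 1)]
```

Fetching tiles on demand like this is what keeps whole-slide browsing responsive over the network: the client never downloads the full gigapixel image at once.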
3.2 The Functions of the Sub-systems
3.2.1 Access to Experimental Resources
Access to experimental resources is the preparation work for the whole system, and some new techniques and equipment are used. In this stage, we use imaging equipment such as MRI, CT or ultrasound to obtain digital images of human organs and perform 3D reconstruction with our own 3D software; alternatively, we can obtain these 3D models directly by scanning the gross specimens with a 3D laser scanner. For the pathology sections, we use a digital slide scanner to produce whole-slide images (so-called virtual slides), a technology that has been commercialized for about 10 years.
3.2.2 Multidimensional Database
The database is the foundation of the whole system, on which all of the following work proceeds, so it is very important. Since this is a learning system, we need not only the 3D models themselves but also teaching knowledge, and we usually perform online analytical processing (OLAP); therefore the database should be a multidimensional one. A multidimensional database can receive data from a variety of relational databases and structure the information into categories and sections that can be accessed in a number of different ways, which meets our needs properly.
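As a sketch of what "accessed in a number of different ways" means, the toy grouping below lets the same slide records be rolled up along any combination of dimensions, OLAP-style. The record fields (organ, disease, chapter) are hypothetical and invented for illustration, not the system's actual schema.

```python
from collections import defaultdict

# Hypothetical slide records; field names are illustrative only.
SLIDES = [
    {"id": "s01", "organ": "liver", "disease": "cirrhosis", "chapter": 5},
    {"id": "s02", "organ": "liver", "disease": "hepatitis", "chapter": 5},
    {"id": "s03", "organ": "lung",  "disease": "carcinoma", "chapter": 8},
]

def group_by(records, *dims):
    """Group records along any chosen dimensions (a toy OLAP cube):
    the same data can be viewed by organ, by chapter, or both at once."""
    cube = defaultdict(list)
    for r in records:
        cube[tuple(r[d] for d in dims)].append(r["id"])
    return dict(cube)
```

A real deployment would push this grouping into the multidimensional database itself rather than into client code; the sketch only shows the access pattern the text describes.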
3.2.3 Learning Platform
The Learning Platform is client software designed for the students; pathological practice is the main content of this sub-system. The required learning materials can be retrieved from the server side through the network. Through a well-organized software interface, students can easily access the 3D gross specimens and virtual slides and learn the relevant knowledge. The platform also records the learning procedure and gives appropriate feedback. On the learning platform, we can also add a few auxiliary functions, such as Clinical Pathology Discussion, Self-Test, and Pathological Common Sense. Clinical discussion is very helpful for expanding the students' pathological knowledge and improving their ability to analyze and solve difficult clinical problems, but it used to be hard to implement in traditional teaching because of time limits; now it can be realized in the virtual learning environment created by the Virtual Pathology platform. Moreover, the platform can cultivate students' interest, which is the motive power of study. In the Self-Test module, specimens and virtual slides, along with diagnostic questions, are shown randomly according to the selected chapters; after students answer these questions, the correct answers and some comments are given, which enhances the learning effect. Pathological Common Sense mainly introduces the conventional slice production process, tissue dyeing methods, and basic conventional pathological methods, increasing students' knowledge of pathology.
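A Self-Test module of the kind described — drawing random slide questions from the selected chapters and scoring the answers for feedback — might look like the following sketch. The question bank, chapter numbers and diagnoses are invented for illustration.

```python
import random

# Toy question bank keyed by chapter: (slide id, correct diagnosis).
# Contents are illustrative only, not real teaching material.
QUESTIONS = {
    5: [("s01", "cirrhosis"), ("s02", "hepatitis")],
    8: [("s03", "carcinoma")],
}

def draw_quiz(chapters, n, rng=random):
    """Randomly draw up to n slide questions from the selected chapters."""
    pool = [q for ch in chapters for q in QUESTIONS.get(ch, [])]
    return rng.sample(pool, min(n, len(pool)))

def score(quiz, answers):
    """Fraction of correct diagnoses, used to give feedback after the test."""
    correct = sum(1 for (_, truth), given in zip(quiz, answers) if given == truth)
    return correct / len(quiz) if quiz else 0.0
```

Random selection per chapter means each review session shows a fresh combination of specimens, which is the property the text highlights for reinforcing the learning effect.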
4 System Features
4.1 Immersion
The most important feature of VR is immersion, so the immersion characteristic of the VPL uses role-playing and varying perspectives as a fundamental way of studying. To enhance immersion, users can wear stereo glasses to view the images shown on the display, or a 3D display can be used to show stereo images.
4.2 Interaction
Unlike common video instruction material, the VPL gives learners the ability to control the learning system. During the study experience, users can interact with the virtual environment and manipulate the virtual slides or 3D virtual human organs shown on the display. For example, users can zoom the virtual slides in and out to get a proper view, or rotate a model to watch it from different viewpoints.
4.3 Collaboration
The VPL is a kind of virtual learning environment (VLE), based on the network, in which resources are freely shared. Therefore, the study process can enhance collaboration between learners in a group. In the VPL, each learner can be represented as a virtual avatar, and several avatars can study cooperatively and interact with one another as needed.
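The zoom interaction in Section 4.2 hides a small piece of viewport arithmetic: zooming should keep the tissue under the cursor fixed on screen rather than zooming about the image corner. A sketch of that calculation follows; the coordinate convention (viewport origin in slide coordinates, screen position = (point − origin) × scale) is an assumption for the example, not taken from the paper.

```python
def zoom_viewport(x0, y0, scale, factor, cx, cy):
    """Zoom a slide viewport about the cursor point (cx, cy), given in
    slide coordinates, so the tissue under the cursor stays put on screen.
    (x0, y0) is the slide coordinate of the viewport's top-left corner."""
    # Screen position of a slide point p is (p - origin) * scale; requiring
    # (cx - nx0) * scale * factor == (cx - x0) * scale gives the new origin.
    nx0 = cx - (cx - x0) / factor
    ny0 = cy - (cy - y0) / factor
    return nx0, ny0, scale * factor
```

Note the operation is invertible: zooming in by a factor and back out by its reciprocal about the same point restores the original viewport, which is the behavior a learner expects while exploring a slide.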
5 Summary
VR-based education and training in medicine is the trend of the future. The prototype Virtual Pathology Laboratory (VPL) system we have presented here takes advantage of the potential power of networks, databases, and VR technology. With the popularization of higher education, it is necessary to create a university without fences and an unlocked virtual laboratory; this enables resource sharing and brand-new teaching ideas, and is a necessary means of promoting education informatization and achieving educational equity. Meanwhile, legal and regulatory issues in telepathology are being addressed and are regarded as a potential catalyst for the next wave of telepathology advances, applications, and implementations. Once these questions are solved satisfactorily, virtual pathology will one day be used in clinical applications. Acknowledgements. This project is supported by the 863 Project of China (Grant No. 2008AA01Z301).
References
1. Kaltenborn, K.F., Rienhoff, O.: Virtual reality in medicine. Methods of Information in Medicine 32(5), 407–417 (1993)
2. Kayser, K., Kayser, G., Radziszowski, D., et al.: From telepathology to virtual pathology institution: the new world of digital pathology. Rom. J. Morphol. Embryol. 45, 3–9 (1999)
3. Virtual Pathology, http://virtualpathology.stanford.edu/
4. Pathology Project, http://www.medlink-uk.org/contact.htm
5. Pletcher, T., Bier, K., Lubitz, D.V.: An Immersive Virtual Reality Platform for Medical Education: Introduction to the Medical Readiness Trainer, vol. 5025. IEEE Computer Society, Washington, DC, USA (2000)
6. Hoffman, H., Murray, M., Curlee, R., et al.: Anatomic VisualizeR: Teaching and Learning Anatomy with Virtual Reality. Information Technologies in Medicine I, 205–218 (2001)
7. Iftikhar, M., Masood, K., Song, T.T.: A model proposal for tele-pathology labs (TelePol). In: eConf 2009, pp. 11–12. ACM, New York (2009)
8. VizClass, http://vis.ucsd.edu/mediawiki/index.php/Research_Projects:_VizClass
9. Ackerman, M.J.: The Visible Human Project: a resource for education. Academic Medicine 74(6) (1999)
10. Demichelis, F., Della, M.V., Forti, S., et al.: Digital storage of glass slides for quality assurance in histopathology and cytopathology. J. Telemed. Telecare 8(3), 138–142 (2002)
11. Grimes, G.J., McClellan, S.A., Goldman, J., et al.: Applications of virtual reality technology in pathology. Stud. Health Technol. Inform. 39, 319–327 (1997)
12. Sato, T., et al.: Progression in diagnostic pathology: development of virtual microscopy and its applications 55(4), 344–350 (2007)
13. Costello, S.S., Johnston, D.J., Dervan, P.A., et al.: Development and evaluation of the virtual pathology slide: a new tool in telepathology. J. Med. Internet Res. 5(2), e11 (2003)
14. Fontelo, P., Dinino, E., Johansen, K., et al.: Virtual Microscopy: Potential Applications in Medical Education and Telemedicine in Countries with Developing Economies, vol. 153. IEEE Computer Society, Washington, DC, USA (2005)
15. Hudson, J.N.: Computer-aided learning in the real world of medical education: does the quality of interaction with the computer affect student learning? Medical Education 38(8), 887–895 (2004)
16. Dee, F.R., Meyerholz, D.K.: Teaching Medical Pathology in the Twenty-First Century: Virtual Microscopy Applications. Journal of Veterinary Medical Education 34(4), 431–436 (2007)
17. Miedzybrodzka, Z., Haites, N.: Teaching undergraduates about familial breast cancer: comparison of a computer assisted learning (CAL) package with a traditional tutorial approach (2009)
18. Rocha, R., Vassallo, J., Soares, F., et al.: Digital slides: present status of a tool for consultation, teaching, and quality control in pathology. Pathol. Res. Pract. 205(11), 735–741 (2009)
19. Weinstein, R.S., Graham, A.R., Richter, L.C., et al.: Overview of telepathology, virtual microscopy, and whole slide imaging: prospects for the future. Hum. Pathol. 40(8), 1057–1069 (2009)
Research and Practice on Applicative “Return to Engineering” Educational Mode for College Students of Electro-mechanical Major Jianshu Cao Dept. of Mechanical Eng., Beijing Institute of Petro-chemical Technology, Beijing 102617, China [email protected]
Abstract. Cultivating senior applicative engineering-technical talents capable of sustainable development is an educational feature of our school. "Return to Engineering" and the cooperative education of manufacture, study and research are important methods for cultivating applicative talents and improving the educational quality of engineering students. Based on an analysis of "Return to Engineering" and the cooperative education of manufacture, study and research, this paper makes a positive exploration of, and puts into practice, the talent cultivation mode, course system reform and practical educational system construction for mechanical majors, so as to develop students' engineering application ability and awareness and improve their comprehensive qualities through "Return to Engineering" and the cooperation of manufacture, study and research. Keywords: Return to Engineering, educational and teaching system, cooperation of manufacture, study and research.
1 Introduction
Higher engineering education aims to cultivate engineers needed by industry and adapted to the demands of social development. Since the founding of the first higher engineering schools, including Warrington College in the middle of the 18th century, the higher engineering education community has been working toward this aim. The educational concept "Return to Engineering Practice" was proposed against the current background in which higher engineering education is disconnected from practical engineering and deviates from its talent cultivation target. It complies with the rules of higher education, can guide higher engineering schools onto the right route for cultivating the modern engineers that are required, and should be the direction for higher engineering educational reform [1]. Since 2000, we have been engaged in the research and practice of a series of projects, including the Ministry of Education reform project "Research and Practice on Modern Engineering Technical Talent Cultivation System Reform of Common Engineering Schools". We integrate the demands of petrochemical industrial and social development, and try to create an excellent educational environment and management
M. Zhu (Ed.): ICCIC 2011, Part VI, CCIS 236, pp. 536–543, 2011. © Springer-Verlag Berlin Heidelberg 2011
mechanism, strengthen disciplinary and professional construction, further educational and teaching reform in engineering, consider the urgent talent-cultivation demands of the petrochemical and mechanical industries and of Beijing's municipal economic construction and social development, and implement educational and teaching reform with attention to engineering realities. We have gained rich practical experience and reform achievements in talent-mode reform and engineering applicative talent cultivation, laying the theoretical and practical foundation for establishing and developing "Return to Engineering" and the educational and teaching system based on the cooperation of manufacture, study and research in common engineering schools.
2 Reform Thinking and Contents
Reform thinking: under the guidance of "macro-engineering perspectives" and "Return to Engineering", with disciplinary and professional construction as the core carrier, taking the integration of talent cultivation, disciplinary construction and professional construction as the acting points, featuring the cooperative education of manufacture, study and research, and targeting the development of students' engineering practical ability, innovative ability and employability, we explore and try to establish a set of educational and teaching systems for "industry-oriented engineers with sustainable development" for mechanical majors at common engineering schools.
The main contents of the research include:
① Research the engineering educational features and rules of a group of domestic and foreign universities, condense the disciplinary professional direction, and construct the cultivation plan for engineering-application-oriented mechanical professional talents.
② Based on course design thinking that integrates knowledge, ability and quality, construct the course contents and teaching system for mechanical majors.
③ With the "engineering formation" concept, construct a practical teaching system "based on experiments and technical skill training, centered on design, and relying on engineering training, scientific research training and social practice".
④ According to the concept of "rooted in Beijing, oriented toward industry, maintaining features and returning to engineering", construct an engineering educational practice base centered on "the cooperation of manufacture, study and research".
⑤ Further the educational reform and strengthen engineering practice and the development of innovative ability.
3 Analysis of "Return to Engineering" in Higher Engineering Education
Since the 1990s, the American engineering education community has set off the "Return to Engineering" trend, advocating engineering education's return to the original meaning of engineering. The proposal of "Return to Engineering" thinking has a profound social background: it embodies human society's deep introspection on the prevalence and overuse of scientism since the 19th century. In recent years, people have grown increasingly aware that human utilization of natural resources and the protection of the
living environment, and so on, are in severe crisis. Essentially, this crisis is the result and embodiment of the overuse of scientism: when the effects of scientific technology are exaggerated, people put themselves on the opposite side of nature and become its conqueror or even invader. Since the 1980s, human beings have reflected profoundly on this crisis and realized that social development must pay more attention to sustainability, integrity and fairness. In the field of engineering education, due to the direct influence of scientism, the vision of engineering education was long limited to the scope of scientific technology; the contexts of science, the humanities and engineering were dissevered, and the original connotation of engineering as a system was dissimilated. Therefore, as scientism is profoundly reconsidered, engineering education's purely scientific orientation is questioned as well. Returning to engineering's original connotation and stressing the integrity and complexity of engineering has gradually become the common view of the engineering community [1-3]. Like engineering itself, higher engineering education is multi-leveled and diversified. In academic circles there are many debates on the specifications and levels of engineering-education talent cultivation. However, it is commonly agreed that every country and school has its own educational features and talent-cultivation orientation. As research has revealed, American and German engineering education follow two different modes, and both have attracted extensive attention as successful models: American higher engineering education has an apparent "scientific mode", while Germany stresses the "technical mode". However, both countries have basically similar requirements for engineer cultivation.
The difference lies in that German engineers' basic training is finished during their study in school, while American engineers acquire it through both university and corporate training. Some scholars have also found that American engineering education has gradually developed from the "technical mode" through the "scientific mode" to the current "engineering mode", while continually aiming at a future new mode. In consideration of China's current economic system, it is difficult for enterprises to cooperate with schools in engineering education, so most of the engineering practical training in Chinese engineering education can only be finished in schools, similar to German engineering education. In 2004 and 2005, the NAE and NSF published "The Engineer of 2020: Visions of Engineering in the New Century" and "Educating the Engineer of 2020: Adapting Engineering Education to the New Century". Based on a detailed analysis of the technical, social, international and professional backgrounds of engineering practice in 2020, the two reports describe the expectations of future engineers and their key characteristics, build common visions of future engineering and engineers for engineers, teachers, employers and students, make a strategic design for the reform and development of engineering education, and provide reform methods and specific measures adapted to future demand.
4 Analysis of the Cooperative Education of Manufacture, Study and Research
The cooperation of manufacture, study and research is the cooperation of enterprises, universities and research institutes. It is a social and economic activity for enterprises,
universities and research institutes to rely on their respective advantageous resources, achieve an optimized combination of different elements, cultivate innovative talents, produce as many innovative achievements as possible and promote industrial development. Cooperative education of manufacture, study and research is a new educational mode realized through the cooperation of schools, enterprises and research institutes, integrating theoretical study with practical training to develop students' practical ability and innovative spirit and thus comprehensively improve their qualities. It complies with current economic and technical development rules and represents the common trend of global higher-educational reform and development. Actively exploring this new mode of cooperative education under the new situation is urgent for higher-educational reform and of great practical significance for the cultivation of new-type talents. The implementation of cooperative education of manufacture, study and research complies with social and economic development, with national higher-education development trends, and with enterprises' demand for market competitiveness aided by scientific and technical progress; it facilitates the development of higher educational courses and lays a solid foundation for the cultivation of applicative talents with innovative awareness and innovative thinking ability [3].
5 Construction and Practice of the "Return to Engineering" Educational and Teaching System for Electro-mechanical Majors
5.1 Engineering-Application-Oriented Cultivation Plan for Electro-mechanical Majors
We keep taking disciplinary and professional construction as the main line, letting enterprises participate in the talent cultivation process and optimizing the applicative talent cultivation environment for electro-mechanical majors. In consideration of the development demands of the country, Beijing and the petroleum and petrochemical industries, and relying on the prevailing disciplines, three professional feature directions have been condensed: the mechanical design, manufacture and automation major (a Beijing municipal key construction major) takes advanced manufacturing technologies as its main professional direction; the mechanical electronic engineering major (a Beijing municipal brand-building major) takes optical, mechanical and electronic integration technologies as its main professional direction; and the measurement and control technology and instrumentation major takes precision instruments and machinery as its feature direction. Centered on the "improvement of students' engineering awareness and practical ability" and targeted at the cultivation of "industry-oriented engineers with sustainable development", the electro-mechanical majors pay equal attention to mechanical and electronic education, integrate optical, mechanical, electronic and hydraulic education, and construct an engineering-application-oriented talent cultivation plan and course system for electro-mechanical majors (as shown in Fig. 1).
Fig. 1. Frame Diagram on Engineering Applicative Talent Cultivation System of Electro-mechanical Majors
5.2 Building a "3 Platforms + Professional Orientation" Course System Centered on Course Groups
In consideration of the demand for cultivating engineering applicative talents, we plan a new course system comprising five serial course sets built on three educational platforms: a platform of humanistic quality education, natural science and technical basics; a platform of related discipline basics; and a platform of professional orientation. At the same time, we attach importance to the construction of course groups within the professional system, as shown in Fig. 2.
[Fig. 2 shows the layered course-group system: professional courses at the top; a platform of professional basic courses of the Electro-Mechanical Department, comprising course groups on optical, mechanical and electrical integration, modern digital design and manufacturing, modern testing instrumentation, mechanical manufacturing, mechanical design, electrical and electronic technology, and electro-mechanical measurement and control technology; and, at the bottom, a platform of general education courses comprising public courses and basic courses on humanities, social science and natural science.]
Fig. 2. Course Group System of the Electro-Mechanical Department for Undergraduate Students
Research and Practice on Applicative “Return to Engineering” Educational Mode
541
• Platform of Public Basic Courses: comprising general education, mathematics and physics basics and engineering quality training, it embodies a "macro-engineering perspective". Its main purpose is to give students a broad scope of knowledge, a solid foundation in mathematics and physics, and training in basic engineering qualities.
• Public Platform of the Electro-Mechanical Department: comprising the common professional basic courses of the electro-mechanical department and professional practical training. According to the course-group construction plan, four course groups are built: electro-mechanical measurement and control technology basics; optical, mechanical and electrical integration; modern digital design and manufacturing; and modern testing instrumentation. The platform reflects the principle of "multidisciplinary intersection and integration" and mainly develops students' ability to solve practical engineering problems by comprehensively applying the knowledge they have acquired.
• Educational Platform for Personality Development: including public elective courses, interdisciplinary electives, extracurricular disciplinary contests, etc.
• Professional Orientation and Graduation Design: based on the engineering frontline and the current demand for engineering technical talent, orientation courses are set flexibly and the graduation design is planned as a whole.

5.3 Build a Practical Educational System Adapted to "CDIO Engineering Education"

In accordance with the idea of "engineering formation", a practical education system was built that is "based on experiments and technical skill training, centered on design, and supported by engineering training, research training and social practice".
• Based on Experiments and Technical Skill Training. The past mode in which experiments merely served theory courses is changed, and experiments are optimally integrated with teaching content. Through mechanical engineering training, metalwork internships, electrical and electronic technique internships, mechanical basic experiments, and computer and control basic experiments, students' technical operating skills and practical ability are developed systematically, together with their basic experimental skills and ability in comprehensive problem analysis.
• Centered on Engineering Design. Design education is implemented throughout the whole process of engineering practical education, including introducing mathematics experiments and modeling for freshmen. Through design experiment projects with different contents, disciplinary basic course design, professional course design and graduation design, practical education in design is strengthened across all four years, preparing students to become engineers.
• Supported by Engineering Practical Training and Disciplinary Contests. Through the courses of modern mechanical engineering training, scientific research training, innovative design and college-student engineering research training, integrate
practical engineering training, graduation design and disciplinary contests, and develop students' ability in independent thinking and in engineering project research and implementation.

5.4 Build a Practical Base for Engineering Education Centered on the "Cooperation of Manufacture, Study and Research"

In accordance with the concept of being "rooted in Beijing, oriented to industry, maintaining features and returning to engineering", a practical base for engineering education was built centered on the cooperation of manufacture, study and research.
• Build a Professional Laboratory, First-Class among Similar Domestic Colleges, that Focuses on Developing Students' Engineering Application and Innovation Ability. The basic idea of the laboratory construction is to purchase and self-design advanced, industrial-application-oriented experimental equipment and build a first-class innovation and practice base among similar domestic colleges. The laboratory is the practice-oriented innovative educational base of the Ministry of Education's early-21st-century scientific and engineering educational and teaching reform project "Construction of an Innovative Educational System and Its Practical Base".
• Implement the Cooperation of Manufacture, Study and Research. Cooperation agreements have been signed with 12 enterprises and companies. Relying on social resources, high-level professional educational laboratories that embody disciplinary and professional features have been co-constructed, establishing a close connection between the school and enterprises and greatly improving the laboratories' overall capabilities.
• Expand Internship and Practical Channels, listing exhibitions of the latest international electro-mechanical equipment technologies in the practical educational system. Students are organized to visit high-level industrial exhibitions such as the International Machine Tool Exhibition, the International Instrumentation Exhibition and the International Mechanical-Electrical Integration Technologies Exhibition, to broaden their vision, help them understand the latest disciplinary trends and arouse their initiative in study.
5.5 Deepen Educational Reform and Strengthen Engineering Practice and the Cultivation of Innovation Ability

The subject team actively conducts educational research and reform. Since 2000 it has hosted or participated in 20 educational reform projects initiated by the Ministry of Education, Beijing Municipality and the school, and has undertaken 55 URT and teacher scientific research projects with over 500 student participants. Numerous research and contest achievements have been gained, including more than 50 awards such as third place in the National Robot Contest and first place in the Beijing Municipal Electronic Design Contest.
6 Conclusion

Since 2000 the system has been practiced fully in the three majors of the electro-mechanical department (mechanical electronic engineering, a national feature major; mechanical design, manufacture and automation, a Beijing municipal key construction major; and measurement and control technology and instrumentation). Rich practical experience and reform achievements have been gained in the reform of the talent cultivation mode and in engineering-oriented applied talent cultivation, laying a solid theoretical and practical foundation for the establishment and development of an educational and teaching system centered on "returning to engineering" and the cooperation of manufacture, study and research in ordinary engineering schools.
References

1. Liu, D., Wang, X., Hua, Y.: Research and Practice on Mechanical Engineering Applicative Talents Cultivation and Educational System Cored with Ability Development. China Senior Education Research (11), 85–88 (2009)
2. Li, W., Cao, J., Jiao, X.: Construction and Practice of Applicative Educational System of Electro-Mechanical Majors. Laboratory Research and Exploration 26(10), 88–91 (2007)
3. Zhu, Y., Ni, T., Chang, X.: Stick to Cooperation of Manufacture, Study and Research, Cultivate Applicative Talents. China Senior Education Research (2), 47–48 (2004)
4. Liu, J., Wang, C., Li, K.: Levels of Applicative Talents and Cultivation of it in Practical Link. Heilongjiang Senior Education Research 136(8), 126–128 (2005)
5. Zhong, X., Qu, Z.: Exploration and Practice on Construction of Modern Mechanical Engineering Applicative Talent Cultivation Plan for College Students. China Education of University (11), 37–38 (2005)
Engineering Test of Biological Aerated Filter to Treat Wastewater

Weiliang Wang

College of Population, Resources and Environment, Shandong Normal University, Jinan, Shandong 250014, China
[email protected]
Abstract. A BAF loaded with a new ceramic filter media was tested for wastewater treatment in comparison with a BAF loaded with biogenic ceramisite agitator. Operational parameters such as hydraulic retention time, reflux ratio and backwash cycle were optimized. The results showed that when the average influent CODcr was 135.6 mg/L, NH4+-N 42.1 mg/L and TP 0.69 mg/L, the removal efficiencies of the BAF loaded with the new ceramic filter media were 81.2%, 99.8% and 68.1% respectively at HRT = 1.5 h. The new ceramic filter media required less backwash water and washed better than the ceramisite agitator under the same conditions because of its low density and high mechanical robustness. The optimal backwash periodicity was two days with an expansion coefficient of 20%-30%. The optimal reflux ratio was 150% when two BAFs were connected in series for nitrogen removal.

Keywords: new ceramic filter media, BAF, hydraulic retention time, backwash, reflux ratio.
1 Introduction

Biological aerated filters (BAFs) are attractive for their many merits, such as small footprint, high efficiency, high effluent quality and a simple process flow without a secondary sedimentation tank. BAFs can remove SS, COD, BOD, AOX, nitrogen, phosphorus, etc. from wastewater. Their mechanisms mainly include filtration, adsorption and biological metabolism. Traditional filter media have many disadvantages, such as high specific gravity, weak mechanical robustness and inconvenient backwashing (Ryu et al., 2008; Farabegoli et al., 2009; Chang et al., 2009; Pujol et al., 1998; Moore et al., 2001; Clark et al., 1997; Mann and Stephenson, 1997; Peladan et al., 1996; Gilmore et al., 1999; Kent et al., 1996). In this paper, a new type of filter media, a new ceramic filter media, was applied in a BAF.
2 Materials and Methods

Wastewater Quality. The influent for the BAFs was the effluent from a diatomite-enhanced primary treatment system. Table 1 presents the influent water quality.

M. Zhu (Ed.): ICCIC 2011, Part VI, CCIS 236, pp. 544–550, 2011. © Springer-Verlag Berlin Heidelberg 2011
Table 1. Influent water quality / mg·L-1

CODcr       BOD5        SS          NH4+-N     PO43--P  TP
96.5-180.7  60.1-101.8  61.8-104.6  28.9-47.6  0.2-0.9  0.4-1.3
Performance Parameters of Media. The new ceramic filter media was loaded in one BAF and ceramisite agitator in the other. The ceramisite agitator is mainly made of clay, whereas the new ceramic filter media is mainly made of alumina and silicon oxide and has many merits, such as high mechanical robustness, a coarse surface, low specific density and a degree of positive surface charge. The main performance parameters of the two media are presented in Table 2.

Process Flowchart and Test Facility. Two BAFs were used in the experiment. Both have a height of 2.8 m and an inner diameter of 0.15 m. The bottom of each BAF is a mixing chamber for inflow and air, and the media depth is 1.5 m. Figure 1 shows the BAF process.

Table 2. Main performance parameters of ceramisite agitator and new ceramic filter media
Media                     Diameter /mm  Stacking density /g·(cm3)-1  Density /g·(cm3)-1
Ceramisite agitator       4~6           0.89                         1.56
New ceramic filter media  4~6           0.05                         1.12
[Fig. 1 layout: inflow tank → pump → BAF → pump → BAF, with an air compressor supplying air, a backwash tank, and a pumped line to the backwash tank.]

Fig. 1. Biological aerated filters process
546
W. Wang
3 Results and Discussion

Influence of HRT in BAFs. Test parameters and operational conditions of the BAFs are summarized in Table 3.

Table 3. Test parameters and operational conditions of BAFs
Parameter                     Stages A1/B1  Stages A2/B2  Stages A3/B3
T /℃                          18-25         18-25         18-25
Flow rate /L·h-1              13.3          17.7          26.5
HRT /h                        2             1.5           1
Applied load /kgCOD·(m3·d)-1  1.18          1.76          2.03
Filtering velocity /m·h-1     0.75          1             1.5
Backwash periodicity /d       2             2             2
DO /mg·L-1                    3             3             3
Experimental results of the two BAFs with different HRTs and different media are presented in Table 4. The backwash periodicity was 2 d and DO was 3 mg/L in all stages. When the HRT was 2 h, the removal efficiencies of CODcr and NH4+-N were 81.4% and 100% respectively for the BAF with the new ceramic filter media, and 81.6% and 100% respectively for the BAF with ceramisite agitator. When the HRT was 1 h, the removal efficiencies of CODcr and NH4+-N were 70.6% and 74.6% respectively for the BAF with the new ceramic filter media, and 70.8% and 73.8% respectively for the BAF with ceramisite agitator. It can be concluded that the pollutant removal effectiveness of the two BAFs with the two different media is similar under the same experimental conditions. When the HRT was 2 h, the average effluent CODcr, NH4+-N and TP were 22 mg/L, 0 mg/L and 0.17 mg/L respectively and pollutant removal was best, but a 2 h HRT requires more construction expenditure and management cost for the same inflow. The removal efficiencies of CODcr and NH4+-N were 81% and almost 100% respectively at an HRT of 1.5 h, similar to those at HRT = 2 h, whereas when the HRT was further decreased to 1 h, pollutant removal fell greatly. So 1.5 h was taken as the optimal HRT of the BAFs.

Backwash Experiment in Two BAFs. Effluent water quality was used to determine the backwash frequency, because the head loss of the two BAFs remained quite small over the whole backwash cycle. The evolution of the effluent after backwash of the BAF with the new ceramic filter media, at an HRT of 1.5 h and a backwash periodicity of 3 d, is presented in Table 5. Effluent quality clearly deteriorated on the 3rd day after backwash. Therefore, 2 d was taken as the optimal backwash periodicity of the BAFs.
Table 4. Experimental results of two BAFs with different HRTs and different media

                    CODcr  BOD5  NH4+-N  PO43--P  TP    SS    NO3--N  NO2--N
Stage A1 (new ceramic media, HRT = 2 h)
  Influent /mg·L-1  120.5  70.2  40.3    0.40     0.73  85.6  0.11    0.20
  Effluent /mg·L-1  22.4   5.4   0       0.15     0.17  7.2   30.0    0.45
  Removal /%        81.4   92.3  100     62.5     76.7  91.6  –       –
Stage A2 (new ceramic media, HRT = 1.5 h)
  Influent /mg·L-1  135.6  78.6  42.1    0.51     0.69  81.9  0.11    0.21
  Effluent /mg·L-1  25.6   6.2   0.1     0.20     0.22  7.9   29.8    0.41
  Removal /%        81.2   92.1  99.8    60.8     68.1  90.3  –       –
Stage A3 (new ceramic media, HRT = 1 h)
  Influent /mg·L-1  119.6  74.5  41.3    0.44     0.72  85.2  0.12    0.20
  Effluent /mg·L-1  35.2   10.3  10.5    0.29     0.31  14.1  20.4    0.58
  Removal /%        70.6   86.2  74.6    34.1     56.9  83.4  –       –
Stage B1 (ceramisite agitator, HRT = 2 h)
  Influent /mg·L-1  120.5  70.2  40.3    0.40     0.73  85.6  0.11    0.20
  Effluent /mg·L-1  22.2   5.3   0       0.13     0.16  7.4   30.2    0.40
  Removal /%        81.6   92.5  100     67.5     78.1  91.4  –       –
Stage B2 (ceramisite agitator, HRT = 1.5 h)
  Influent /mg·L-1  135.6  78.6  41.1    0.51     0.69  81.9  0.11    0.21
  Effluent /mg·L-1  26.1   6.1   0.2     0.19     0.23  7.9   29.6    0.45
  Removal /%        80.8   92.2  99.5    62.7     66.7  90.4  –       –
Stage B3 (ceramisite agitator, HRT = 1 h)
  Influent /mg·L-1  119.6  74.5  41.3    0.44     0.72  85.2  0.12    0.20
  Effluent /mg·L-1  34.9   11.2  10.8    0.30     0.31  13.9  20.6    0.52
  Removal /%        70.8   85.0  73.8    31.8     56.9  83.7  –       –

(NO3--N and NO2--N are reported as influent and effluent concentrations only.)
Table 5. Evolution of effluent after backwash /mg·L-1

Parameter  1st day after backwash  2nd day after backwash  3rd day after backwash
CODcr      24.2                    25.9                    39.6
BOD5       6.1                     6.5                     11.9
SS         7.8                     8.5                     17.5
TP         0.21                    0.23                    0.31
NH4+-N     0                       0.2                     2.1
NO3--N     29.9                    29.8                    27.6
The water quality variation of the effluent during backwash, at an HRT of 1.5 h and a backwash periodicity of 2 d, is presented in Table 6. Backwash evidently affected effluent quality mainly during the 1.5 h after backwash. Effluent quality indices such as CODcr, SS and BOD5 were affected markedly because microorganisms in the upper part of the BAF were disturbed and their activity was impaired during backwash.
Table 6. Water quality variation of effluent during backwash /mg·L-1

Time after backwash  -0.5 h  0     0.5 h  1 h   1.5 h  2 h
CODcr                25.9    26.7  32.1   26.5  24.    24.1
BOD5                 6.5     5.3   10.5   7.4   6.2    6.1
SS                   8.5     9.2   18.6   9.5   7.8    7.8
TP                   0.23    0.22  0.26   0.23  0.22   0.21
NH4+-N               0.2     0.2   0      0     0      0
NO3--N               29.8    16.1  15.6   25.4  29.8   29.9
The densities of the ceramisite agitator and the new ceramic filter media were 1.56 g/cm3 and 1.12 g/cm3 respectively. The ceramisite agitator fluidized in clumps that were very difficult to scatter, whereas the new ceramic filter media dispersed as it fluidized and was easy to backwash. With the same amounts of backwash water and gas, the degree of expansion of the new ceramic filter media was 30% while that of the ceramisite agitator was less than 20%. To obtain the same backwash effectiveness at an HRT of 1.5 h and a backwash periodicity of 2 d, the BAF with ceramisite agitator consumed 90 L of water while the BAF with the new ceramic filter media consumed 60 L. Evidently the lighter new ceramic filter media has many merits, such as a smaller backwash water demand, better backwash effectiveness, convenient backwashing and lower energy consumption. The diameter of the ceramisite agitator diminished obviously over the six-month experiment and its bed height fell by 5%, whereas the diameter of the new ceramic filter media did not change. It can be concluded that the mechanical robustness of the new ceramic filter media is superior to that of the ceramisite agitator.

Experiment of Two BAFs in Series. Two BAFs were connected in series and internal circulation was added to increase denitrification ability. One BAF was aerated for nitrification and the other was not aerated, for denitrification. The influent flowed from the denitrification BAF to the nitrification BAF in series. Reflux ratios of 100%, 150% and 200% were tested, and the HRT of the nitrification BAF was 1.5 h. Test parameters and operational conditions of the BAFs in series are summarized in Table 7, and the experimental results with different reflux ratios are presented in Table 8. The results showed that when the reflux ratio was 200%, the effluent NH4+-N was 1.4 mg/L and NO3--N 9.5 mg/L, and the sum of the NH4+-N and NO3--N concentrations was the minimum of the three stages.
However, the sum of the NH4+-N and NO3--N concentrations only decreased from 12.5 to 11.2 mg/L when the reflux ratio was raised from 150% to 200%, while energy consumption increased and nitrification was impaired by the shortened HRT. So 150% was taken as the optimal reflux ratio of the BAFs in series.
Table 7. Test parameters and operational conditions of BAFs in series

Parameter                     Stage 4  Stage 5  Stage 6
T /℃                          25-27    25-27    25-27
Flow rate /L·h-1              17.7     17.7     17.7
HRT /h                        3        3        3
Applied load /kgCOD·(m3·d)-1  0.86     0.81     0.86
Filtering velocity /m·h-1     1        1        1
Backwash periodicity /d       2        2        2
DO /mg·L-1                    3        3        3
Reflux ratio /%               100      150      200
Table 8. Experimental results of two BAFs in series with different reflux ratios

Parameter                  Stage 4  Stage 5  Stage 6
CODcr    Influent /mg·L-1  125.5    120.6    129.8
         Effluent /mg·L-1  18.2     19.6     22.6
         Removal /%        85.5     83.7     82.6
BOD5     Influent /mg·L-1  75.4     78.5     75.9
         Effluent /mg·L-1  4.5      5.0      6.1
         Removal /%        94.0     93.6     92.0
NH4+-N   Influent /mg·L-1  41.5     40.9     42.0
         Effluent /mg·L-1  0        0        1.4
         Removal /%        100      100      96.7
PO43--P  Influent /mg·L-1  0.45     0.46     0.45
         Effluent /mg·L-1  0.09     0.10     0.10
         Removal /%        80.0     78.3     77.8
4 Summary

The optimal HRT of both the BAF with ceramisite agitator and the BAF with the new ceramic filter media was 1.5 h, and the two BAFs had similar effectiveness in the optimal stage. When the average influent CODcr was 135.6 mg/L, NH4+-N 42.1 mg/L and TP 0.69 mg/L, the removal efficiencies of the BAF loaded with the new ceramic filter media were 81.2%, 99.8% and 68.1% respectively at HRT = 1.5 h. The new ceramic filter media is superior to the ceramisite agitator and has many merits, such as lower density, smaller backwash water demand, better backwash effectiveness,
convenient backwashing, better mechanical robustness and lower energy consumption. It is thus feasible to substitute the ceramisite agitator with the new ceramic filter media. The optimal backwash periodicity of the BAFs was 2 d and the optimal degree of media expansion was 20%-30%. Backwash affected effluent quality mainly during the 1.5 h after backwash. Denitrification ability was increased when the two BAFs were connected in series and internal circulation was added, and the optimal reflux ratio was 150%.

Acknowledgment. The Tuandao Wastewater Treatment Plant of Qingdao provided the site and facilities for the experiment, and the Environment Protection Bureau of Shandong Province financed the project (Grant no. 200208). The authors wish to thank the teachers and students of the research team for their assistance in the laboratory and their contributions to the discussion section of this paper.
References

1. Clark, T., Stephenson, T., Pearce, P.A.: Phosphorus removal by chemical precipitation in a Biological Aerated Filter. Water Research 31(10), 2557–2563 (1997)
2. Chang, W.S., Tran, H.T., Park, D.H., Zhang, R.H., Ahn, D.H.: Ammonium nitrogen removal characteristics of zeolite media in a Biological Aerated Filter (BAF) for the treatment of textile wastewater. Journal of Industrial and Engineering Chemistry 15(4), 524–528 (2009)
3. Farabegoli, G., Chiavola, A., Rolle, E.: The Biological Aerated Filter (BAF) as alternative treatment for domestic sewage: optimization of plant performance. Journal of Hazardous Materials 171(1-3), 1126–1132 (2009)
4. Gilmore, K.R., Husovitz, K.J., Holst, T., Love, N.G.: Influence of organic and ammonia loading on nitrifier activity and nitrification performance for a two-stage biological aerated filter system. Water Science and Technology 39(7), 227–234 (1999)
5. Kent, T.D., Fitzpatrick, C.S.B., Williams, S.C.: Testing of Biological Aerated Filter media. Water Science and Technology 34(3-4), 363–370 (1996)
6. Mann, A.T., Stephenson, T.: Modelling Biological Aerated Filters for wastewater treatment. Water Research 31(10), 2443–2448 (1997)
7. Moore, R., Quarmby, J., Stephenson, T.: The effects of media size on the performance of Biological Aerated Filters. Water Research 35(10), 2514–2522 (2001)
8. Peladan, J.G., Lemmel, H., Pujol, R.: High nitrification rate with upflow biofiltration. Water Science and Technology 34(1-2), 253–347 (1996)
9. Pujol, R., Lemmel, H., Gousailles, M.: A keypoint of nitrification in an upflow biofiltration reactor. Water Science and Technology 38(3), 43–49 (1998)
10. Ryu, H.D., Kim, D., Lim, H.E., Lee, S.I.: Nitrogen removal from low carbon-to-nitrogen wastewater in four-stage biological aerated filter system. Process Biochemistry 43(7), 729–735 (2008)
The Design of Propeller LED Based on AT89S52

Xu Zelong1, Zhang Hongbing2, Hong Hao2, and Jiang Lianbo1,*

1 College of Mechatronics and Control Engineering, Shenzhen University, China, 518060
2 Laboratory and Facility Management Division, Shenzhen University, China, 518060
[email protected]
Abstract. Based on the persistence-of-vision principle, a novel clock with a rotating LED display is developed. The 8-bit microcontroller AT89S52 is used as the core controller. The rotating board consists of the core controller, 20 LEDs and other components. A magnet is used for synchronization: each time the board passes the same place, a Hall effect sensor senses the magnet and generates an interrupt. During operation, the microcontroller keeps track of time and changes the pattern of the 20 LEDs with exact timing to simulate a 20x180 array of LEDs while the propeller is spun by the motor. The illusion of a flat image is produced by the moving LED array. Since the clock sits on a spinning board, its power is fed through the spinning armature of a DC motor.

Keywords: AT89S52, persistence of vision, DC motor, propeller LED.
1 Introduction

Light emitting diodes (LEDs) have been used for years, primarily in the electronics industry as circuit board indicators. Increased brightness, lower production costs and the availability of a variety of colors have led to LEDs successfully replacing incandescent lamps in architectural, commercial and industrial applications. LEDs present many advantages over incandescent light sources, including lower energy consumption, longer lifetime, improved robustness, smaller size, faster switching, and greater durability and reliability [2]. In the information age, with the rapid release of all types of information, an increasing number of combined LED dot-matrix displays are used to publish multimedia information [3]. With the development of LED display technology, embedded technology has been widely applied in large-screen LED display systems, but such systems need much hardware, their power consumption rises, their probability of failure increases, and the designs are not easy to change or upgrade later [4,5]. There is no need to use so many LEDs. As Figure 1 shows, only 20 LEDs are needed, arranged in a line at equally spaced points. As the board rotates, the moving LED array produces a floating image, so a desired image such as a clock face, words or Chinese characters can be programmed and displayed. This approach is easy to use, simplifies the system design, and reduces both the size of the system and the cost of production.
Corresponding author.
M. Zhu (Ed.): ICCIC 2011, Part VI, CCIS 236, pp. 551–557, 2011. © Springer-Verlag Berlin Heidelberg 2011
552
X. Zelong et al.
Owing to the versatility of this design, it can be widely used on many occasions where information needs to be displayed.
Fig. 1. The overview of LED propeller
2 System Overview

The microcontroller is programmed in C. This project consists of two main circuits: the rotating circuit board and the DC motor controller board. The rotating board has several primary parts: one AT89S52 microcontroller, 20 LEDs, two 1000 μF capacitors, one Hall sensor, 220 Ω and 5 kΩ resistors, insulated copper wire and connecting wires. A small piece of magnet acts as the trigger point for remote sensing. The DC motor controller circuit uses a slip ring as one electrode and a contact wire attached to the motor shaft as the other. It has 7805 ICs for 5 V regulation for the motor and the board respectively. A 5 V DC supply powers the motor controller circuit, which in turn provides power to the LED circuit and the DC motor. The system block diagram is shown in Figure 2.
Fig. 2. System block diagram
3 Hardware Design

The hardware design mainly includes five modules: the AT89S52 core controller system, the 5 V regulator that supplies the motor controller circuit, the 20 display LEDs, the Hall effect sensor and the motor control. It is a simple and effective circuit for the propeller LED control system.

3.1 Main Control Circuit

The AT89S52 single-chip microcomputer is selected to control the circuit. It is an 8-bit CMOS single-chip microcomputer with low power consumption and high performance. The chip contains 8 KB of ISP Flash EPROM, which can be rewritten more than 1000 times, and 256 B of RAM. The device is manufactured with Atmel's high-density nonvolatile memory technology. It provides functions such as dedicated 16-bit timer/counters, is compatible with the standard 80C51 instruction set, and integrates an 8-bit central processor with ISP Flash on one chip, offering a highly cost-effective solution for many embedded control systems [6].

3.2 Voltage Regulator

For the 5 V supply, an LM7805 fixed 5 V regulator is used. Besides the regulator itself, capacitors are required at the input and output; their values are taken from the datasheet. The circuit is shown in Figure 3.
Fig. 3. 5V Regulator schematic
3.3 LED Circuit

The LEDs are placed in a row and attached to the rotating board. The LED circuit uses high-brightness LEDs. Because each LED is lit only very briefly, the drive current must be high; the LEDs are driven with the highest practical current, a minimum of 20 mA, to achieve the brightest possible image. To display a standard 7x5 font, at least 7 LEDs are needed; to show pictures and Chinese characters (displayed in a 16x16 font) it is better to have more, so 20 LEDs are used instead of only 7. During operation the LEDs convert the electrical signals from the microcontroller into optical signals for the human eye.

3.4 Hall Effect Sensor

The Hall effect sensor is a transducer that varies its output voltage in response to changes in a magnetic field. Hall sensors are used here for positioning. A magnet is used to
complete the synchronization. Each time the sensor passes the magnet, the Hall effect sensor generates an interrupt; once the magnet has moved past, the sensor output returns to its original state, ready for the next revolution. The Hall effect sensor's interface circuit is shown in Fig. 4.
Fig. 4. Hall effect sensor interface
3.5 DC Motor Driving

A DC motor is used to rotate the circuit board. The microcontroller executes the program and transmits the signals to the line of LEDs, which outputs the desired pattern, while the DC motor rotates the circuit board so that the image appears. If the displayed image is not clear, the DC motor speed is adjusted until it is visible. When the DC motor rotates, the floating image appears: the synchronization of the motor speed and the LED blinking makes the image visible to human eyes. The motor controller circuit controls the DC motor, and its speed is adjustable. DC motors seem quite simple: apply a voltage across the two terminals and the motor spins. DC motors are non-polarized, meaning the supply voltage can be reversed so that the motor rotates in either direction, forward or backward. The DC motor chosen is rated at about 6 V. Voltage is directly related to motor torque: the more voltage supplied, the higher the torque produced. The DC motor circuit is shown in Figure 5.
Fig. 5. DC motor driving circuit
4 Software Design

The programming flow chart is shown in Figure 6. When the program starts, it runs "INITIAL" and then checks "has it turned a circle", i.e., whether the rotating board has completed a revolution. If it has, the Hall effect sensor produces a signal that sends the core controller into the interrupt routine. At this point the AT89S52 calculates the time, executes the program and transmits the signals to the LEDs, which output the desired pattern; as the board rotates, the LEDs display the intended image. If the 30 ms timer has not yet expired, the program loops back to "the current display"; when it expires, it proceeds to the next display type. After the final block the flow starts again at "INITIAL", and this process continues until the power is switched off.
Fig. 6. Software chart flow
5 Conclusion

This propeller LED control system is built around a single-chip microcomputer. The design is notable for its simple circuit, compact structure and low production cost. It combines two control techniques: motor control and LED control. The propeller LED system has great practical significance and can be widely used in factories, hospitals, office buildings and other environments. Fig. 7 shows the rotating board, the motor controller and the whole hardware. Fig. 8 shows the display effect, including an analog clock and the display of Chinese characters and the letters SZU.
Fig. 7. Hardware components
Fig. 8. Display effect
Acknowledgment. This work is supported by Laboratory and Equipment Administration opening fund of Shenzhen University with Grant No. 2010152 and undergraduate innovative lab of Guangdong province with Grant No. 2011036.
References
1. Wikipedia: Persistence of Vision, http://en.wikipedia.org/wiki/Persistence_of_vision
2. Wikipedia: LED, http://en.wikipedia.org/wiki/Led
3. Chen, X., Tang, C.: Intelligent LED Display Technology with SCM and CPLD. In: Proc. of IITA 2009, pp. 260–263 (2009)
4. Davies, J.H.: Digital Input, Output, and Displays. In: MSP430 Microcontroller Basics, pp. 207–274 (2008)
5. Cai, M.Q.: MCS-51 Series Single-Chip Microcomputer System and Its Application. Higher Education Press, Beijing (1992)
6. Intel Corporation: MCS-51 Family of Single-Chip Microcomputers: User's Manual. Intel Corporation, Santa Clara (1981)
Author Index
Ai, Shengli VI-480 Aithal, Himajit IV-351 AliHosseinalipour V-36 Anil kumar, A. IV-351 Aslam, Mohammed Zahid IV-260
Babaei, Shahram V-36 Bailong, Liu III-39, III-47 Bangjun, Lu VI-124 Bangyong, Hu II-60 Bao, Liwei IV-93 Bi, Guoan IV-224 Bin, Dai II-436, II-443 Bin, Li III-151 Bing, Hu V-476 Bingxue, Han IV-265 Bo, Qi VI-282 Bo, Sun I-526, II-475 Bo, Wu V-74 Bo, Yang IV-1 Bo, Zhou V-461 Bu, Yingyong I-335 Cai, Nengbin IV-402 Cai, Ning II-87 Cai, Xiaonan I-519 Cai, Xiaoqing VI-166 Cao, An-Jie IV-376 Cao, Fengwen IV-189 Cao, Jianbo VI-398 Cao, Jianshu VI-536 Cao, Qiang VI-417 Cao, Yukun III-232 Chang, Henry Ker-Chang VI-500 Chang, Ling-Wei V-483 Chang, Yinxia IV-427 Chang, Zhengwei IV-167 ChangJie, Hu VI-461 Chang-ping, Zhao VI-290 Changxi, Ma VI-282 Changyuan, He V-211 Chao, Hu IV-42 Chao, Yan II-530 Chaoshi, Cai I-395, III-321
Chen, Bin V-170, V-175 Chen, Cheng II-157 Chen, Chuan IV-369 Chen, Chun IV-376 Chen, Haijian III-508 Chen, Haiyuan III-9 Chen, Hong-Ren III-407 Chen, Huiying V-201 Chen, Lingling V-125 Chen, Weiping III-508 Chen, Xiaodong V-175 Chen, Xinglin VI-110 Chen, Yan VI-152 Chen, Yanhui VI-84, VI-89 Chen, Yu-Jui V-10 Cheng, Jiaji IV-280 Cheng, Li V-321 Cheng, Shih-Chuan VI-436 Cheng, Yingjie III-74 Chengcheng, Jiang I-288 Chenguang, Zhao IV-136, IV-144, IV-151 Chi, Xiaoni I-143 Chiang, Yea-Lih III-407 Chong, Guo VI-50 Chuan, Tang I-29 Chuang, Li VI-404 Chujian, Wang I-191, I-366 Chun, Huang I-36 Chun, Yang Chang VI-511 Chunhong, Zhang II-326 ChunJin, Tian III-207 Chunling, Zhang V-381 Chunqin, Zhang III-369 Congdong, Li I-288 Congmei, Wan V-321 Cui, Kang IV-59 Cui, Yanqiu I-550 Cui-lin, Zhang V-461 Da, Zheng V-94 Dai, Minli VI-424, VI-430 Dai, Wei-min V-100 Danxia, Bi V-105
Dasen, Li II-405 Deng, Fang II-294 Deng, Hui V-201 Deng, Jianping IV-189 Deng, Nan IV-402 Deng, Xianhe II-396 Deng, Xiaoyun VI-343 Deng, Xubin VI-26 Deng, Yibing V-316 Deqian, Xue VI-383 Ding, Feng II-350 Dong, Hao VI-1 Dong, Liu III-292 Dong, Xu III-39, III-47 Dong, Yu VI-50 Dong-Ping, Liu II-303 Du, Jiang V-523 Du, Maobao V-365 Du, Wencai III-1 E., Shiju VI-398
Fan, Hongda VI-1 Fan, Jihua III-515 Fan, Tongliang IV-433 Fan, Zhao IV-441 Fang, He II-172 Fang, Ligang VI-430 Fang, Qiang VI-166 Fang, Sun IV-242 Fang, Yuan II-274 Fanjie, Bu II-382 Fei, Zhou II-281 Feng, Lei V-304 Feng, Lou III-312 Feng, Lv II-101 Feng, Pengxiao IV-172 Feng, Wenlong III-1 Feng, Yuan II-194 Fengling, Wang I-262 Fengxiang, Chen V-234 Fu, Wenzhi IV-172 Fu, Xixu V-43 Fu, Yizhe VI-179 Fuhua, Xuan I-275 Furong, Wang VI-445, V-511 Gaijuan, Tan V-234 Gai-ning, Han VI-39 Gan, Jing III-427
Gang, Chen I-492 Gao, Cheng I-359 Gao, Fei III-427 Gao, Haiyan III-433 Gao, Jin IV-306 Gao, Junli VI-166 Gao, Li VI-357 Gao, Shuli V-226 Gao, Wei VI-188, VI-197 Gao, Xin II-391 Gao, Xiuju V-365 Gao, Zhijie III-17 Gong, Jun V-529 Gong, Xiaoyan II-194 Gong, Xizhang V-43 Gu, Caidong VI-424, VI-430 Guan, Xianjun I-21 Guangyu, Zhai IV-503 Guilin, Lu VI-232 Guo, Changgeng III-442 Guo, Fachang IV-382 Guo, Lejiang II-194 Guo, Lina III-494 Guo, Lu VI-452 Guo, Shuting V-288 Guo, Wei V-428, V-435 Guo, Wenping III-488 Guo, Xinbao I-403, I-409 Guo, Yanli V-226 Guo, Zhiyun III-284 Guo, Zirui I-359 Guohong V-64 Guohong, Li I-248 Guojin, Chen III-299, III-305 Guojing, Xiong II-267 Guo-song, Jiang I-320, I-328 Haicheng, Xu IV-10 Hailong, Sun I-161 Hai-qi, Feng V-374 Haitao, Hong VI-50 Haiwen, Li IV-335 Haixia, Wan I-484 Haixia, Yu VI-522 HamidehJafarian V-36 Han, Baoyuan IV-450 Han, Dong IV-464 Han, Hua I-478 Han, Xinchao III-401 Han, Xu V-100
Hang, Ling-li V-268 Hantian, Wei VI-445, V-511 Hao, Fei Lin V-304 Hao, Hong VI-551 Hao, Yitong II-143 Hau, Chuan-Shou V-239 He, Jilin V-137 He, Juan III-103 He, Li III-174 He, Lijuan III-337 He, Siqi II-420 He, Weisong VI-94, VI-100 He, Xiangguang VI-188, VI-197 He, Yang II-312 He, Yinghao IV-32 He, Yong V-304 He, Zhuzhu II-458 Heng, Chen III-89 Hengkai, Li I-213 Hong, Liang IV-181 Hong, Lu I-465 Hongbing, Zhang VI-551 Hongjun, Liu I-533 Hong-li, Zhang I-302 Hongmei, Jiang IV-159 Hongmei, Tang III-345, III-355, III-363 Hongwei, Luo III-183 Hou, Shouming III-337 Hou, Xuefeng IV-369 Hu, Caimei IV-81 Hu, Jianfeng IV-456 Hu, Jun V-409 Hu, Wenfa V-281, V-288 Hu, YongHong VI-452 Hu, Zhigang V-246 Hu, Zhiwei IV-392 Hu, Zong IV-54 Hua, Wang Guo VI-157 Huan, Wang V-518 Huang, Changqin V-258 Huang, De-Fa V-239 Huang, Haifeng I-428 Huang, Hanmin I-94 Huang, Hexiao III-508 Huang, Jun V-268 Huang, Qiong II-287 Huang, Tao III-174 Huang, Weitong V-117 Huang, Xiaodi V-468 Huang, Yu-Chun V-239
Huang, Zhiqiu V-409 Huanhuai, Zhou V-334 Huijuan, Ying V-334 Hui-li, Wang VI-370 Huili, Zhang I-161 Huixia, Wang II-150, II-172 Huixin, Jin I-248 Jangamshetti, D.S. IV-351 Jen, Yen-Huai V-483 Ji, Jia VI-398 Jia, Guangshe V-125 Jia, Zhiyang VI-188, VI-197 Jian, Wang V-374 Jian, Zhou III-143 Jiang, Fuhua II-396 Jiang, Jia VI-398 Jiang, Xuping III-482 Jiang, Yuantao II-17, II-95 Jian-Hao, Xu VI-34 Jianhong, Sun IV-1, IV-10 Jian-Min, Yao III-151 Jianping, Li I-395, III-321 Jianping, Tao III-143 Jianqi, Han VI-50 Jian-tong, He VI-290 Jianwen, Cao IV-503 Jianxin, Gao V-518 Jianzheng, Yi IV-342 Jiao, Linan V-189 Jia-xin, Lin VI-224 Jie, Jin II-303 Jie, Quan I-413 Jie, Xu V-82 Jie, Yu VI-467 Jieping, Han I-132 Jin, Haiyi V-328 Jin, Min II-73 Jin, Wang V-82 Jinfa, Shi III-453, III-465 Jinfang, Zhang VI-467 Jing, Liang V-207 Jing, Tu I-184 Jing, Zhao III-292 Jing, Zhou I-66, I-71 Jing-xin, Chen I-343, I-351 Jingzhong, Liu II-318 Jin-hai, Wang VI-319 Jinhui, Lei III-207 Jinwei, Fu IV-10
Jinwu, Yuan III-420 Jiuzhi, Mao I-313 Jou, Shyh-Jye V-10 Jun, Li VI-148 Jun, Song V-137 Jun, Wang V-82 Jun, Zhang VI-45 Jun-qi, Yang VI-297 Junsheng, Li IV-1 Jyothi, N.M. III-328 Kai, Zhang V-133 Ke, Xiaoyu II-73 Kebin, Huang II-150 Kewen, Geng VI-124 Kun, Shi VI-66 Lai, Herbert Hsuan Heng VI-500 Lan, Jingli II-518 Lee, Xuetao II-226 Lei, Xu VI-232 Lei, Yang II-109 Lei, Yu V-193 Li, Chen II-303 Li, Cungui II-499 Li, Deyang III-263 Li, Dou Hui VI-157 Li, Fengri VI-20 Li, Fengying IV-101, IV-110 Li, Guanglei III-433 Li, Guangzheng III-81 Li, Haibin I-115 Li, Haiyan IV-233 Li, Hongli VI-424 Li, Houjie I-550 Li, Hua I-380 Li, Hui III-174 Li, Jia-Hui IV-316 Li, Jianfeng IV-297 Li, Jianling IV-233 Li, Jinglin III-502 Li, Jinxiang VI-430 Li, Kuang-Yao V-239 Li, Li VI-458 Li, Liwei III-241 Li, Luyi III-192, III-394 Li, Mingzhe IV-172 Li, Na II-202 Li, Peng V-56 Li, Qi I-359
Li, RuZhang V-18 Li, Shaokun VI-335 Li, Shenghong IV-441 Li, Shijun IV-233 Li, Wan V-416 Li, Wang V-346 Li, Wenbin IV-392 Li, WenSheng I-101 Li, Xiangdong III-401 Li, Xiumei IV-224 Li, Yang IV-42 Li, Yanlai IV-297 Li, Ying V-443, V-449, V-455 Li, Yu II-128 Li, YuJing V-18 Li, Zhenlong III-488 Lian, Jianbo I-451 Lianbo, Jiang VI-551 Liang, Wen-Qian II-450 Liang, Yuechen III-232 Liang-feng, Shen I-255 Liangtao, Sun I-492 Liao, GaoHua VI-7, V-498, V-504 Liao, Jiaping III-174 Lieya, Gu I-8 Lifen, Xie II-34 Li-jia, Chen VI-297 Lijun, Shao V-105 Li Jun, Sun II-428, II-436, II-443 Liminzhi I-513 Lin, Chien-Yu V-483 Lin, Haibo II-414 Lin, Ho-Hsiu V-483 Lin, Jing VI-179 Lina, Wang IV-59 Ling, Chen I-29, IV-59 Ling, Shen Xiao VI-511 Lingrong, Da II-373 Li-ping, Li V-221 Liping, Pang V-82 Lisheng, Wang V-234 Liu, An-Ta VI-500 Liu, Bao IV-427 Liu, Bingwu I-437 Liu, Bojia V-111 Liu, Bosong IV-32 Liu, Chunli III-255 Liu, Daohua V-491 Liu, Deli II-181 Liu, Gui-Ying II-342
Liu, Hong VI-74 Liu, Hongming III-116 Liu, Hongzhi VI-335, VI-343, VI-357 Liu, Jia V-504 Liu, Jiayi II-1 Liu, Jingwei IV-491 Liu, Jixin IV-360 Liu, June I-143 Liu, Jun-Min II-493 Liu, Li V-258 Liu, Lianchen III-158 Liu, Lianzhong II-164 Liu, LinTao V-18 Liu, Linyuan V-409 Liu, Shiwang II-181 Liu, Tao III-442 Liu, Wenbai V-316 Liu, Xiaojing V-117 Liu, Xiaojun I-59, II-164 Liu, Xin V-491 Liu, XingLi IV-181 Liu, Yang VI-110 Liu, Yanzhong II-235 Liu, Yongsheng I-471 Liu, Yongxian III-337 Liu, Yuewen II-458 Liu, Zhaotian IV-233 Liu, Zhi-qiang III-112 Liu, Zhixin I-177 Liurong, Hong V-389 Liuxiaoning V-64 Lixia, Wang VI-267 Lixing, Ding V-50 Li’yan II-22 Li-yan, Chen I-76, I-83 Liyu, Chen I-166, I-172 Liyulong I-513 Long, Chen VI-50 Long, Hai IV-25 Long, Lifang II-181 Long, Shun II-450 Long, Xingwu IV-252 Lu, Hong V-428, V-435 Lu, Hongtao IV-392, IV-402 Lu, Jing I-519 Lu, Ling V-258 Lu, Xiaocheng II-294 Lu, Y.M. V-353, V-359 Lu, Zhijian II-47 Luo, Rong I-222
Luo, Yumei II-414 Lv, Qingchu IV-93 Lv, Rongsheng II-211 Lv, Xiafu IV-280 Ma, Chunlei I-471 Ma, Jian V-164 Ma, Lixin V-328 Ma, Qing-Xun II-116, II-122 Ma, Sen IV-369 Ma, Yuan V-189 Ma, Zengjun IV-93 Ma, Zhonghua III-103 Mai, Yonghao VI-417 Mamaghani, Nasrin Dastranj III-22 Maotao, Zhu VI-210, V-211 Maoxing, Shen VI-148 Masud, Md. Anwar Hossain V-468 Meilin, Wang V-181 Meng, Hua V-69 Meng, Yi-Le V-10 Mengmeng, Gong V-82 Mi, Chao VI-492 Miao, J. V-353 Milong, Li I-457 Min, Ye Zhi VI-511 Ming qiang, Zhu I-150, I-206 Mingqiang, Zhu II-81 Mingquan, Zhou VI-528 Na, Wang I-233 Naifei, Ren V-133 Nan, Li I-533 Nan, Shizong IV-476 Nie, GuoXin V-523 Nie, Zhanglong VI-138 Ning, Ai V-334 Ning, Cai I-36 Ning, Yuan V-416 Nirmala, C.R. III-328 Niu, Huizhuo IV-369 Niu, Xiaoke IV-289 Pan, Dongming V-43 Pan, Min I-446 Pan, Rong II-1 Pan, Yingchun V-170 Pan, Zhifang IV-392 Pei, Xudong VI-105
Peng, Fenglin III-482 Peng, Hao IV-335 Peng, Jianhan III-508 Peng, Jian-Liang II-136 Peng, Yan V-309 Pengcheng, Fan VI-528 Pengcheng, Zhao II-405 Piao, Linhua VI-239, VI-246, VI-253, VI-261 Ping, Li VI-273 Pinxin, Fu V-181 Qi, Lixia I-124 Qi, Zhang IV-219 Qian, Minping IV-491 Qiaolian, Cheng V-370 Qin, G.H. V-89 Qin, Zhou I-302 Qingguo, Liu III-130 Qinghai, Chen IV-335 Qingjia, Geng V-105 Qingling, Liu IV-273, V-24 Qingyun, Dai V-181 Qinhai, Ma I-238 Qiong, Long VI-467 Qiu, Biao VI-179 Qiu, YunJie IV-402 Qiuhe, Yang VI-267 Qiyi, Zhang VI-124 Qu, Baozhong III-255 Qun, Zhai III-143 Qun, Zhang III-377, III-386 Ramaswamy, V. III-328 Rao, Shuibing V-137 Ren, Chunyu VI-218 Ren, Hai Jun V-56 Ren, Honge VI-131 Ren, Jianfeng III-494 Ren, Mingming I-115 Ren, Qiang III-276 Ren, Shengbing V-246 Ren, Wei III-81 RenJie VI-376 Rijie, Cong I-132 Rubo, Zhang III-39, III-47 Rui, Chen II-303 Rui, Zhao I-248, I-313 Ruihong, Zhang II-253 Ruirui, Zhang IV-204, IV-212
Runyang, Zhong V-181 Ru’yuan, Li II-22 Saghafi, Fatemeh III-22 Samizadeh, Reza III-22 San-ping, Zhao VI-13, VI-410 Sha, Hu IV-470 Shan, Shimin IV-32 Shang, Jiaxing III-158 Shang, Yuanyuan IV-369, IV-450 Shangchun, Fan V-321 Shao, Qiang I-109 Shaojun, Qin VI-210 Shen, Ming Wei V-304 Shen, Qiqiang III-95 Shen, Yiwen II-47 Shen, Zhang VI-370 Sheng, Ye VI-232 Shi, Danda V-316 Shi, Guoliang IV-48 Shi, Li IV-289 Shi, Ming-wang V-143 Shi, Wang V-105 Shi, Yan II-414 Shi, Yi VI-131 Shidong, Li V-296 Shou-Yong, Zhang II-428, II-436, II-443 Shu, Xiaohao III-95 Shu, Xin V-164 Shuai, Wang V-221 Shuang, Pan III-30 Song, Haitao VI-166 Song, Meina III-284 Song, Yichen IV-73 Song, Yu IV-73 Sreedevi, A IV-351 Su, Donghai V-404 Sun, Qibo III-502 Sun, Zhaoyun V-189 Sun, Zhong-qiang V-100 Sunqi I-513 Sunxu I-484 Suozhu, Wang I-14 Tan, Liguo VI-110 Tang, Dejun V-328 Tang, Fang Fang V-56 Tang, Fei I-478 Tang, Hengyao II-274 Tang, Peng II-1
Tang, Xin II-17 Tang, Xinhuai V-1 Tanming, Liu II-331 Tao, Li IV-204, IV-212 Tao, Zedan II-357 Tian, Fengbo IV-280 Tian, Fengqiu VI-430 Tian, Ling III-241 Tianqing, Xiao IV-10 Ting, Chen I-387 Tong, Guangji II-499, II-510, II-518 Tong, Ruo-feng IV-198 Tu, Chunxia I-46, I-59 Wan, Hong IV-289 Wan, Wei VI-452 Wan, Zhenkai III-9, IV-484 Wang, Bing II-484 Wang, Chen V-328 Wang, Chengxi I-451 Wang, Chonglu I-222 Wang, Chunhui I-500 Wang, Dan IV-433 Wang, Dongxue IV-297 Wang, Fei I-88 Wang, Feng V-201 Wang, Fengling II-128 Wang, Fumin I-198 Wang, Haiping III-224 Wang, Hongli VI-315 Wang, Huasheng II-350 Wang, Hui-Jin II-450 Wang, JiaLian V-252, V-422 Wang, Jian II-211 Wang, Jianhua IV-48 Wang, Jianqing V-111 Wang, Jie V-404 Wang, Jing V-100 Wang, Jinyu I-335 Wang, Li-Chih V-483 Wang, Lijie V-529 Wang, Linlin V-275 Wang, Luzhuang IV-93 Wang, Min VI-424 Wang, Qian III-166, III-284 Wang, Ruoyang VI-398 Wang, Ruo-Yun IV-376 Wang, Shangguang III-502
Wang, Shanshan II-458 Wang, Shijun IV-464 Wang, Shi-Lin IV-376, IV-441 Wang, Shimei I-428 Wang, Shuyan II-10 Wang, Tiankuo II-510 Wang, Ting V-449, V-455 Wang, Weiliang VI-544 Wang, Xia I-21 Wang, Xiaohong III-241 Wang, Xiaohui I-269 Wang, Xiaoya II-420 Wang, Xiaoying V-117 Wang, Xing VI-239, VI-246, VI-253, VI-261 Wang, Yan IV-508 Wang, Y.C. V-359 Wang, Yiran III-247 Wang, Yongping III-276 Wang, YouHua V-18 Wang, Yu IV-252 Wang, Yude VI-480 Wang, Yuqiang V-170 Wang, Zhenxing III-166 Wang, Zhizhong IV-289 Wei, Cai II-150, II-172 Wei, Cheng-Wen V-10 Wei, Fengjuan II-235 Wei, Guo IV-252 Wei, Li III-30 Wei, Lin VI-79 Wei, Ling-ling III-212, III-218 Wei, Liu I-351 Wei, Ou V-409 Wei, Xianmin IV-418, IV-422 Wei, Yang II-253 Wei, Yu-Ting III-407 Wei, Zhou VI-305 Weihong, Chen III-89 Weihua, Liu III-183, III-369 Weihua, Xie V-30 Weimin, Wu IV-242 Weiqiong, He IV-219 Weiwei, Fang III-321 Weixi, Han I-465 Wen, Chengyu I-177 Wen, Jun Hao V-56 Wendi, Ma II-364 Wenping, Zhang I-184 Wu, Bin V-246
Wu, Caiyan VI-424 Wu, Di II-143 WU, Guoshi III-247 Wu, Hao I-380 Wu, Kaijun V-43 Wu, Peng VI-452 Wu, Xiaofang IV-32 Wu, Xiwei II-357 Wu, Xue-li V-69 Wu, Yanqiang IV-470 Wu, Yong II-493 Wu, Zhongbing I-109 Xi, Ba IV-325 Xi, JunMei VI-7 -xia, Gao VI-297 Xia, Li IV-273, V-24 Xiang, Hongmei VI-94, VI-100 Xiang, Jun II-181 Xiang, Qian V-215 Xiang, Song VI-305 Xiang Li, Wang II-428 Xianzhang, Feng III-453, III-465 Xiao, Weng III-130 Xiao-hong, Zhang IV-411 Xiaolin, Chen II-281 Xiao-ling, He I-320, I-328 Xiaona, Zhou I-313 XiaoPing, Hu VI-350 Xiaosai, Li V-340 Xiaosheng, Liu I-213 Xiaowei, Wei VI-391 Xiaoxia, Zhao III-207 Xiaoya, He II-259 Xiaoyan, Xu VI-528 Xiao-ying, Wang I-343 Xiaoyong, Li II-364, II-382 Xie, Dong IV-25 Xie, Hualong III-337 Xie, Li V-1 Xie, Lihui II-218 Xie, Luning III-122 Xie, Qiang-lai III-212, III-218 Xie, Xiaona IV-167 Xie, Xing-Zhe IV-316 Xie, Zhengxiang IV-280 Xie, Zhimin I-21 Xifeng, Xue VI-148 Xijun, Liu V-476
Xilan, Feng III-453, III-465 Xiliang, Dai VI-124 Xilong, Jiang IV-219 Xin, Xiao IV-204, IV-212 Xin, Zhanhong I-222 Xing, GuiLin VI-357 Xing, Wang V-461 Xing, Xu VI-210 Xinhua, An II-40 Xinling, Wen VI-66, VI-474 Xinzhong, Xiong II-52 Xinzhong, Zhang I-373 Xi ping, Zhang I-150, I-206 Xiucheng, Dong V-340 Xu, Dawei IV-450 Xu, Jing III-166 Xu, Kaiquan II-458 Xu, Li VI-305 Xu, Ming III-95 Xu, Shuang I-550 Xu, Wanlu VI-398 xu, Wenke VI-20 Xu, Zhifeng II-17 Xu, Zhou III-64 Xuan, Dong VI-305 Xue, Qingshui IV-101, IV-110 Xuemei, Hou V-193 Xuemei, Li V-50 Xuemei, Tang V-158 Xuhua, Chen IV-342 Xuhua, Shi V-148, V-153 Xuhui, Wang II-22 Xun, Jin IV-252 Xuxiu IV-521 Xu-yang, Liu V-207 YaChao, Huang V-30 Yachun, Dai V-133 Yamin, Qin I-373 Yan, Fu I-14 Yan, Jun I-115 Yan, Peng V-74 Yan, Qingyou II-420 Yan, Shou III-158 Yan, Yunyang IV-120 Yan, Zhang IV-325 Yanbin, Shi VI-517, VI-522 Yang, Chen VI-210 Yang, Fangchun III-502
Yang, Fei IV-88 Yang, Hao III-122 Yang, Lianhe IV-476 Yang, Li Chen III-292 Yang, Liming V-365 Yang, Qian II-164 Yang, Qin VI-458 Yang, Renfa II-287 Yang, Song I-238, I-395 Yang, Wei III-112 Yang, Xinhua IV-450 Yang, Xue I-101, I-124 Yang, You-dong V-164 Yang, Yue II-87 Yanli, Shi VI-517, VI-522 Yanli, Xu IV-136, IV-144, IV-151, IV-159 Yanping, Liu III-369 Yan-yuan, Zhang VI-79 Yanzhen, Guo I-248 Yao, Lei-Yue III-218 Yaqiong, Wei I-132 Yazhou, Chen V-211 Ye, H.C. V-89 Yi, Cui V-82 Yi, Jing-bing III-54 Yi, Ru VI-474 Yifan, Shen III-312 Yihui, Wu IV-335 Yin, Jinghai IV-456 Yin, Qiuju IV-66 Yin, Zhang I-313 Yinfang, Jiang V-133 Ying, Mai I-280 Yingfang, Li IV-1 Yingjun, Feng IV-144, IV-151 Yingying, Ding III-89 Ying-ying, Zhang IV-42 Yixin, Guo VI-282 Yong, Wu I-156 Yong-feng, Li VI-39 Yongqiang, He III-413, III-420 Yongsheng, Huang V-518 Yong-tao, Zhao VI-224 Yongzheng, Kang I-161 You, Mingqing III-200 Youmei, Wang IV-18 Youqu, Lin II-326 Yu, Chen VI-66, VI-474 Yu, Cheng III-377, III-386
Yu, Chuanchun IV-120 Yu, Deng I-8 Yu, Jie VI-398 Yu, Liangguo VI-488 Yu, Quangang VI-239, VI-246, VI-253, VI-261 Yu, Shuxiu II-226 Yu, Siqin II-95 Yu, Tao I-244 Yu, Tingting V-281 Yu, Yan IV-48 Yu, Yao II-143 Yu, Zhichao I-52 Yuan, Fang I-76, I-83 Yuan, Feng III-473 Yuan, Qi III-312 Yuan, Xin-wei III-54 Yuanqing, Wang II-530 Yuanquan, Shi IV-204, IV-212 Yuanyuan, Zhang V-374 Yue, Wang VI-328 Yue, Xi VI-204 Yu-han, Zhang VI-319 Yuhong, Li I-465 Yuhua, He I-1 Yujuan, Liang VI-173 Yun, Wang V-133 Yun-an, Hu VI-224 Yunfang, Chen IV-242 YunFeng, Lin VI-350 Yunna, Wu IV-325 Yuxia, Hu VI-364 Yuxiang, Li V-105 Yuxiang, Yang VI-267 zelong, Xu VI-551 Zeng, Jie III-433 Zeng, Zhiyuan VI-152 Zhai, Ju-huai V-143 Zhan, Yulong II-143 Zhan, Yunjun I-421 Zhang, Bo I-437 Zhang, David IV-297 Zhang, Fan IV-32 Zhang, Fulin III-200 Zhang, Haijun I-437 Zhang, Haohan IV-172 Zhang, Hongjing II-10 Zhang, Hongzhi IV-297
Zhang, Huaping VI-1 Zhang, Huiying V-416 Zhang, Jian VI-131 Zhang, Jianhua V-69 Zhang, Jie IV-93 Zhang, Kai V-404 Zhang, Kuai-Juan II-136 Zhang, Laishun IV-464 Zhang, Liancheng III-166 Zhang, Liang IV-484 Zhang, Liguo IV-508 Zhang, Ling II-466 Zhang, Minghong III-135 Zhang, Minghua I-451 Zhang, Qimin VI-376 Zhang, RuiTao V-18 Zhang, Sen V-529 Zhang, Shu V-43 Zhang, Shuang-Cai II-342 Zhang, Sixiang IV-427 Zhang, Tao VI-179 Zhang, Tingxian IV-360 Zhang, Wei II-350 Zhang, Wuyi III-64, III-74 Zhang, Xianzhi II-194 Zhang, Xiaolin I-500 Zhang, Xiuhong V-404 Zhang, Yi III-268 Zhang, Yingqian II-414 Zhang, Yong I-269 Zhang, Yu IV-189 Zhang, Yuanliang VI-117 Zhang, Yujin IV-441 Zhang, Yun II-66 Zhang, Zaichen IV-382 Zhang, Zhiyuan III-135 Zhangyanpeng I-513 Zhanqing, Ma V-399 Zhao, Hong V-275 Zhao, Huifeng II-244 Zhao, Jiyin I-550 Zhao, Liu V-321 Zhao, Min IV-306 Zhao, Xiaoming III-488 Zhao, Xiuhong III-433 Zhao, Ying III-276 Zhaochun, Wu II-26 Zhaoyang, Zhang II-530 Zhe, Yan VI-273 Zhen, Lu II-187
Zhen, Ran V-69 Zhen, Zhang VI-232 Zhen-fu, Li VI-290 Zheng, Chuiyong II-202 Zheng, Fanglin III-394 Zheng, Jun V-409 Zheng, Lin-tao IV-198 Zheng, Qian V-321 Zheng, Qiusheng III-401 Zheng, Xianyong I-94 Zheng, Yanlin III-192, III-394 Zheng, Yongliang II-181 Zhenying, Xu V-133 Zhi, Kun IV-66 Zhi’an, Wang II-22 Zhiben, Jie II-259 Zhibing, Liu II-172 Zhibo, Li V-193 Zhi-gang, Gan I-295 Zhi-guang, Zhang IV-411 Zhijun, Zhang II-109 Zhiqiang, Duan IV-342 Zhiqiang, Jiang III-453, III-465 Zhiqiang, Wang I-262 Zhisuo, Xu V-346 Zhiwen, Zhang II-101 Zhixiang, Tian II-373 Zhiyuan, Kang I-543 Zhong, Luo III-442 Zhong, Shaochun III-192 Zhong, Yuling II-181 Zhongji, Tan VI-517 Zhongjing, Liu VI-370 Zhonglin, He I-1, I-166 Zhongyan, Wang III-130 Zhou, Defu VI-430 Zhou, De-Qun II-466 Zhou, Fang II-493 Zhou, Fanzhao I-507 Zhou, Feng I-109 Zhou, Gang VI-417 Zhou, Hong IV-120 Zhou, Jing-Jing VI-58 Zhou, Lijuan V-309 Zhou, Wei IV-427 Zhou, Yonghua VI-492 Zhou, Zheng I-507 Zhu, JieBin V-498 Zhu, Jingwei III-508 Zhu, Libin I-335
Zhu, Lili II-420 Zhu, Linlin III-135 Zhu, Quanyin IV-120, IV-189 Zhu, Xi VI-152 Zhuanghua, Lu V-389
Zhuping, Du V-193 Zou, Qiong IV-129 Zunfeng, Liu V-381 ZuoMing IV-514 Zuxu, Zou II-81