Lecture Notes in Geoinformation and Cartography
Series Editors: William Cartwright, Georg Gartner, Liqiu Meng, Michael Peterson
Alias Abdul-Rahman · Sisi Zlatanova · Volker Coors (Eds.)
Innovations in 3D Geo Information Systems
With 438 Figures
Springer
Editors:
Dr. Alias Abdul-Rahman
Department of Geoinformatics, Faculty of Geoinformation Science and Engineering
Universiti Teknologi Malaysia
81310 Skudai, Johor, Malaysia
Email:
[email protected]
Dr. Sisi Zlatanova
Section GISt, OTB
Delft University of Technology
Postbus 5030, 2600 GA Delft, The Netherlands
Email: [email protected]

Dr. Volker Coors
Geoinformatics, Faculty Geomatics, Mathematics and Computer Science
University of Applied Sciences Stuttgart
Schellingstr. 24, 70174 Stuttgart, Germany
Email:
[email protected]
ISBN-10: 3-540-36997-X Springer Berlin Heidelberg New York
ISBN-13: 978-3-540-36997-4 Springer Berlin Heidelberg New York
Library of Congress Control Number: 2006930254

This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilm or in any other way, and storage in data banks. Duplication of this publication or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965, in its current version, and permission for use must always be obtained from Springer-Verlag. Violations are liable to prosecution under the German Copyright Law.

Springer is a part of Springer Science+Business Media
springeronline.com

© Springer-Verlag Berlin Heidelberg 2006

The use of general descriptive names, registered names, trademarks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

Cover design: E. Kirchner, Heidelberg
Production: A. Oelschlager
Typesetting: Camera-ready by the Editors
Printed on acid-free paper 30/2132/AO 54321
Foreword

Three-dimensional (3D) geoinformation has been a subject of interest within the geoinformation (GIS) community and among other related professionals since the early 1990s. Recently the subject has been receiving more attention as users demand better information about phenomena and about the surrounding objects. A lot of effort and time has been invested in this kind of 3D geoinformation solution, but operational systems are still hardly available on the mass market. For this reason the Department of Geoinformatics, Faculty of Geoinformation Science and Engineering, Universiti Teknologi Malaysia took the initiative to organize the first International Workshop on 3D Geoinformation in South East Asia, held on 7-8 August 2006 in Kuala Lumpur, Malaysia. The workshop covered fundamental aspects of 3D geoinformation as well as 3D data collection (LIDAR and other means), 3D spatial data modeling, 3D databases, 3D spatial analysis, 3D visualization, and some applications of 3D geoinformation. The workshop gathered researchers, system developers, practitioners, and end users from 21 countries, coming from 5 different continents. They had a great opportunity to discuss new research developments, ideas, theories, and possible applications in this very specialized but fast emerging field. This event has been supported by several other universities, amongst which TU Delft, The Netherlands and the University of Applied Sciences, Stuttgart, Germany; government agencies such as the Department of Survey and Mapping Malaysia (JUPEM); professional unions such as ISPRS (International Society for Photogrammetry and Remote Sensing) and the Land Surveyors Board of Peninsular Malaysia (LJT); and international software vendors such as Autodesk. Their commitment and support gives us strength and surely motivates all of us working in this very specialized subject towards better solutions and services.
Goal and objectives

The workshop emphasizes the third dimension of geographical information science (GISc). It is meant to be an interdisciplinary forum for leading researchers in related areas to present the latest 3D developments and 3D applications, to discuss cutting-edge 3D technology, to exchange re-
search ideas and to promote international collaboration in this field. The workshop concentrates on the following areas:
• Data collection and modeling: advanced approaches for 3D data collection, 3D reconstruction and methods for representation.
• Data management: topological, geometrical and network models for the maintenance of 3D geo-information.
• Data analysis and visualization: frameworks for representing 3D spatial relationships, 3D spatial analysis and algorithms for navigation, interpolation, etc.; advanced VR, AR, MR visualization; 3D visualization on mobile devices.
• 3D applications: city models, cadastre, LBS, etc.
The Paper Selection Process
The International Workshop on 3D Geoinformation is a refereed workshop. We received more than 70 technical papers. After a careful selection we accepted 51 papers, which are included in this book. The selected papers were divided into two categories, i.e., oral and poster presentations, as indicated in this book.
Acknowledgement
We thank our major sponsors Autodesk, the Land Surveyors Board of Peninsular Malaysia (LJT), and the Department of Survey and Mapping Malaysia for their excellent support and contributions. Thanks also go to software and geo-services vendors and companies for their participation in the exhibition. Finally, we highly appreciate the efforts, help and contributions from the committee members (local and international) and other individuals.
Alias Abdul Rahman, Sisi Zlatanova, Volker Coors
10 July 2006
Table of Contents
Keynotes

3D Geometries in Spatial DBMS    1
Sisi Zlatanova (The Netherlands)

A Web 3D Service for Navigation Applications    15
Jorg Haist, Thorsten Reitz, Volker Coors (Germany)
3D Spatial Data Acquisition - LIDAR and Digital Photogrammetry

Oral Contributions

Integration of Photogrammetric and LIDAR Data in a Multi-Primitive Triangulation Environment    29
A. F. Habib, S. Shin, C. Kim, and Muhannad Al-Durgham (Canada and South Korea)

LIDAR-Aided True Orthophoto and DBM Generation System    47
A. F. Habib and C. Kim (Canada)

Surface Matching Strategy for Quality Control of LIDAR Data    67
A. F. Habib and Rita W. Cheng (Canada)

On-line Integration of Photogrammetry and GIS to Generate Fully Structured Data for GIS    85
Hamid Ebadi, Farshid Farnood Ahmadi (Iran)
3D Spatial Data Modelling and Representation

Oral Contributions

3D Integral Modelling for City Surface and Subsurface    95
Wang Yanbing, Wu Lixin, Shi Wenzhong and Li Xiaojuan (China and Hong Kong)

Spatial Object Structure for Handling 3D Geodata in GRIFINOR    107
Eric Kjems and Jan Kolar (Denmark)

The Study and Applications of Object-Oriented Hyper-Graph Spatio-Temporal Reasoning Model    119
Luo Jing, Cui Weihong and Niu Zhenguo (China)

Using 3D Fuzzy Topological Relationships for Checking of Spatial Relations between Dynamic Air Pollution Cloud and City Population Density    129
Roozbeh Shad, Mohamad Reza Malek, Mohammed Saadi Mesgari, and Alireza Vafaeinezhad (Iran)

3D Modelling Moving Objects Under Uncertainty Conditions    139
Tala Shokri, M. R. Delavar, M. R. Malek, A. U. Frank, and G. Navratil (Iran and Austria)

Research on a Feature Based Spatio-Temporal Data Model    151
Weihong Cui, Wenzhong Shi, Xiaojuan Li, Luo Jing, and Zhenguo Niu (China and Hong Kong)

0D Feature in 3D Planar Polygon Testing for 3D Spatial Analysis    169
Chen Tet Khuan and Alias Abdul Rahman (Malaysia)

Definition of the 3D Content and Geometric Level of Congruence of Numeric Cartography    185
R. Brumana, F. Fassi, and F. Prandi (Italy)

3D Multi-scale Modelling of the Interior of the Real Villa of Monza (Italy)    195
C. Achille and F. Fassi (Italy)
3D GIS Frameworks

Oral Contributions

On the Road to 3D Geographic Systems: Important Aspects of Global Model-Mapping Technology    207
Jan Kolar (Denmark)

Cristage: A 3D GIS with a Logical Crystallographic Layer to Enable Complex Analysis    225
B. Poupeau and Olivier Bonin (France)

The Democratizing Potential of Geographic Exploration Systems (GES) Through the Example of GRIFINOR    235
Lars Bodum and Marie Jaegly (Denmark)

The Integration Methods of 3D GIS and 3D CAD    245
Li Juan, Tor Yam Khoon, and Zhu Qing (China and Singapore)

3D Navigation for 3D GIS - Initial Requirements    259
Ivin Amri Musliman, Alias Abdul Rahman, and Volker Coors (Malaysia and Germany)

Web-based GIS-Transportation Framework Data Services Using GML, SVG and X3D    269
Hak-Hoon Kim and Kiwon Lee (South Korea)

3D Geo-Database Implementation Using Craniofacial Geometric Morphometrics Database System    279
Deni Suwardhi and Halim Setan (Malaysia)

GIS-based Multidimensional Approach for Modelling Infrastructure Interdependency    295
Rifaat Abdalla, Haris Ali, and Vincent Tao (Canada)

Conception of a 3D Geodata Web Service for the Support of Indoor Navigation with GNSS    307
Stephen Mas, Wolfgang Reinhardt and Fei Wang (Germany)
3D Objects Reconstruction

Oral Contributions

Reconstruction of 3D Model based on Laser Scanning    317
Lu Xingchang and Liu Xianlin (China)

Automatic Generation of Pseudo Continuous LoDs for 3D Polyhedral Building Model    333
Jiann-Yeou Rau, Liang-Chien Chen, Fuan Tsai, Kuo-Hsin Hsiao and Wei-Chen Hsu (Taiwan)

Reconstruction of Complex Buildings Using LIDAR and 2D Maps    345
Tee-Ann Teo, Jiann-Yeou Rau, Liang-Chien Chen, Jin-King Liu, and Wei-Chen Hsu (Taiwan)

Building Reconstruction - Outside and In    355
Christopher Gold, Rebecca Tse, and Hugo Ledoux (United Kingdom)

Skeletonization of Laser-Scanned Trees in the 3D Raster Domain    371
Ben Gorte (The Netherlands)

Automated 3D Modelling of Buildings in Suburban Areas based on Integration of Image and Height Data    381
Kourosh Khoshelham (Iran)

Automatically Extracting 3D Models and Network Analysis for Indoors    395
Ismail R. Karas, Fatmagul Batuk, Abdullah E. Akay, and Ibrahim Baz (Turkey)
3D City Modelling

Oral Contributions

Improving the Realism of Existing 3D City Models    405
Martin Kada, Norbert Haala, and Susanne Becker (Germany)

Different Quality Level Processes and Products for Ground-based 3D City and Road Modeling    417
Bahman Soheilian, Olivier Tournaire, Lionel Penard, Nicolas Paparoditis (France)

Texture Generation and Mapping using Video Sequences for 3D Building Models    429
Fuan Tsai, Cheng-Hsuan Chen, Jin-King Liu, and Kuo-Hsing Hsiao (Taiwan)

Design and Implementation of Mobile 3D City Landscape Authoring/Rendering System    439
Seung-Yub Kim and Kiwon Lee (South Korea)

Macro to Micro Archeological Documentation: Building a 3D GIS Model for Jerash City and the Artemis Temple    447
Nedal Al-Hanbali, Omar Al Bayari, Bassam Saleh, Husam Almasri, Emmanuel Baltsavias (Jordan and Switzerland)

Building 3D GIS Modelling Applications in Jordan: Methodology and Implementation Aspects    469
Nedal Al-Hanbali, Eyad Fadda, and Sameeh Rawashdeh (Jordan)
3D Mapping, Cadastre and Utility

Oral Contributions

Moving Towards 3D - From a National Mapping Agency    491
Dave Capstick and Guy Heathcote (United Kingdom)

An Approach for 3D Visualization of Pipelines    501
Y. Du and S. Zlatanova (China and The Netherlands)

Developing Malaysian 3D Cadastre System - Preliminary Findings    519
Muhammad Imzan Hassan, Alias Abdul Rahman, and Jantien Stoter (Malaysia and The Netherlands)

Developing 3D Registration System for 3D Cadastre    535
Mohd Hasif Ahmad Nasruddin and Alias Abdul Rahman (Malaysia)
3D Visualization

Oral Contributions

Volumetric Spatio-Temporal Data Model    547
Mohd Shafry Mohd Rahim, A. R. Sharif, S. Mansor, A. Rodzi Mahmud, and Mohammad Ashari Alias (Malaysia)

Use of 3D Visualization in Natural Disaster Risk Assessment for Urban Areas    557
S. Kemec and H. S. Duzgun (Turkey)

Development and Design of 3D Virtual Laboratory for Chemistry Subject based on Constructivism-Cognitivism-Contextual Approach    567
Hajah Norasiken Bakar and Halimah Badioze Zaman (Malaysia)

The 3D Fusion and Visualization of Phototopographic Data    589
Yang Ming Hui, Ren Wei Chun, Guan Hong Wei, and Wang Kai (China)

Integrating a Computational Fluid Dynamics Simulation and Visualization with a 3D Virtual Walkthrough - A Case Study of Putrajaya    599
Puteri Shireen Jahnkassim, Maisarah Ali, Noor Hanita A. Majid, and Mansor Ibrahim (Malaysia)

A Geospatial Approach to Managing Public Housing on Superlots    615
Jack Barton and Jim Plume (Australia)

3D Visualization and Virtual Reality for Cultural Heritage Diagnostic    629
L. Colizzi, F. De Pascalis, and F. Fassi (Italy)
3D Terrain Modeling and Digital Orthophoto Generation

Oral Contributions

True Orthophoto Generation from High Resolution Satellite Imagery    641
Ayman F. Habib, K. I. Bang, C. J. Kim, and S. W. Shin (Canada)

Development of Country Mosaic using IRS-WIFS Data    657
Ch. Venkateswara Rao, P. Sathyanarayana, D. S. Jain, A. S. Manjunath, and K. M. M. Rao (India)

Digital Terrain Models Derived from SRTM Data and Kriging    673
T. Bernardes, I. Gontijo, H. Andrade, T. G. C. Vieira, and H. M. R. Alves (Brazil)

The St Mark's Basilica Pavement: The Digital Orthophoto 3D Realization to the Real Scale 1:1 for the Modelling and the Conservative Restoration    683
L. Fregonese, C. C. Monti, G. Monti, and L. Taffurelli (Italy)
Poster Contributions

The Application of GIS in Maritime Boundary Delimitation    695
I Made Andi Arsana, Chris Rizos, Clive Schofield (Indonesia and Australia)

Integration of GIS and Digital Photogrammetry in Building Space Analysis    721
Mokhtar Azizi Mohd Din and M. Yaziz Ahmad (Malaysia)

An Integration of Digital Photogrammetry and GIS for Historical Building Documentation    737
Seyed Yousef Sadjadi (United Kingdom)

Reconstruction of Three Dimensional Ocean Bathymetry Using Polarised TOPSAR Data    749
Maged Marghany and Mazlan Hashim (Malaysia)
Author Index

A. U. Frank, 139
A. F. Habib, 67
A. S. Manjunath, 657
Alias Abdul Rahman, 169, 259, 519, 535
Alireza Vafaeinezhad, 129
Ayman Habib, 47
B. Poupeau, 225
Bahman Soheilian, 417
Ben Gorte, 371
C. C. Monti, 683
Ch. Venkateswara Rao, 657
Changjae Kim, 47
Chen Tet Khuan, 169
Cheng-Hsuan Chen, 429
D. S. Jain, 657
Dave Capstick, 491
Deni Suwardhi, 279
Eric Kjems, 107
Eyad Fadda, 469
F. De Pascalis, 629
F. Fassi, 185, 629
F. Prandi, 185
Fuan Tsai, 333, 429
G. Monti, 683
G. Navratil, 139
Guan Hong Wei, 589
Guy Heathcote, 491
H. Andrade, 673
H. M. R. Alves, 673
Halim Setan, 279
Halimah Badioze Zaman, 567
Haris Ali, 295
I. Gontijo, 673
Ivin Amri Musliman, 259
Jack Barton, 615
Jan Kolar, 107, 207
Jantien Stoter, 519
Jiann-Yeou Rau, 333, 345
Jim Plume, 615
Jin-King Liu, 345, 429
Jorg Haist, 15
K. M. M. Rao, 657
Kourosh Khoshelham, 381
Kuo-Hsin Hsiao, 333
Kuo-Hsing Hsiao, 429
L. Colizzi, 629
L. Fregonese, 683
L. Taffurelli, 683
Li Juan, 245
Li Xiaojuan, 95
Liang-Chien Chen, 333, 345
Lionel Penard, 417
M. R. Delavar, 139
M. R. Malek, 139
Maged Marghany, 749
Mazlan Hashim, 749
Mohamad Reza Malek, 129
Mohammad Saadi Mesgari, 129
Mohammed Yaziz Ahmad, 721
Mohd Hasif Ahmad Nasruddin, 535
Mokhtar Azizi Mohd Din, 721
Muhammad Imzan Hassan, 519
Nedal Al-Hanbali, 469
Nicolas Paparoditis, 417
Norasiken Bakar, 567
O. Bonin, 225
Olivier Tournaire, 417
P. Sathyanarayana, 657
R. Brumana, 185
R. W. T. Cheng, 67
Ren Wei Chun, 589
Rifaat Abdalla, 295
Roozbeh Shad, 129
Sameeh Rawashdeh, 469
Seyed Yousef Sadjadi, 737
Shi Wenzhong, 95
Shokri, T., 139
Sisi Zlatanova, 1
T. Bernardes, 673
T. G. C. Vieira, 673
Tee-Ann Teo, 345
Thorsten Reitz, 15
Tor Yam Khoon, 245
Vincent Tao, 295
Volker Coors, 15, 259
Wang Kai, 589
Wang Yanbing, 95
Wei-Chen Hsu, 333, 345
Wu Lixin, 95
Yang Ming Hui, 589
Zhu Qing, 245
Program committee

Chair
Alias Abdul Rahman, Universiti Teknologi Malaysia

Co-chair
Sisi Zlatanova, Delft University of Technology, The Netherlands

Members
Christopher Gold, University of Glamorgan, United Kingdom
Daniel Holweg, Fraunhofer Institute, Germany
Halim Setan, Universiti Teknologi Malaysia
Jae-Hong Yom, Sejong University, South Korea
Jantien Stoter, ITC, The Netherlands
Jiyeong Lee, University of North Carolina at Charlotte, United States
Lars Bodum, Aalborg University, Denmark
Lixin Wu, China University of Mining and Technologies, Beijing, China
Martin Kada, University of Stuttgart, Germany
Milan Konechny, University of Masaryk, Brno, Czech Republic
Michael Goodchild, University of California at Santa Barbara, United States
Michael Hahn, University of Applied Sciences Stuttgart, Germany
Monika Sester, University of Hannover, Germany
Morakot Pilouk, ESRI, California, United States
Norbert Haala, University of Stuttgart, Germany
Qingquan Li, Wuhan University, China
Roland Billen, Universite de Liege, Belgium
Ryosuke Shibasaki, University of Tokyo, Japan
Sylvie Servigne, INSA-Lyon, France
Taher Buyong, International Islamic Univ., Malaysia
Thomas Kolbe, University of Bonn, Germany
Vincent Tao, York University, Toronto, Canada
Volker Coors, University of Applied Sciences Stuttgart, Germany
Wenzhong Shi, The Hong Kong Polytechnic University, Hong Kong

Local Organizing Committee
The Local Organizing Committee consists of staff members of the Department of Geoinformatics, Faculty of Geoinformation Science and Engineering, Universiti Teknologi Malaysia (UTM), and the Department of Survey and Mapping, Malaysia (JUPEM)
Abd Razak Abu Bakar, UTM
Ahmad Fauzi Nordin, JUPEM
Alias Abdul Rahman, UTM
Halim Setan, UTM
Ivin Amri Musliman, UTM
Mohamad Nor Said, UTM
Muhammad Imzan Hassan, UTM
Norkhair Ibrahim, UTM
Shahabudin Amerudin, UTM
Zamri Ismail, UTM
3D Geometries in Spatial DBMS

Sisi Zlatanova
GISt, Delft University of Technology, The Netherlands
[email protected]
Abstract

Database management systems (DBMS) have changed significantly over the last several years. From systems dealing with the management of administrative data they have evolved into spatial DBMS providing spatial data types, spatial indexing and extended spatial functionality. Mainstream DBMS mostly support a geometry model, but the first natively supported topology model is already a fact. Although the provided functionality is limited to the second dimension, various options exist for the management of three-dimensional data. This paper discusses some of them, presents current research and outlines directions for further extended management of 3D data. The paper is limited to the geometry model, i.e. topology issues are not covered.
Evolving to Spatial DBMS

DBMS have traditionally been used to handle large volumes of data and to ensure the logical consistency and integrity of data, which have also become major requirements in handling spatial data. For years, spatial data used to be organized in dual architectures consisting of separate data management for administrative data in a Relational DBMS (RDBMS) and spatial data in a GIS. That is to say, spatial data has been managed in separate files using proprietary formats. This approach could easily result in inconsistency of data, e.g. when deleting a record in the database no mechanism exists to check the corresponding spatial counterpart. A solution to the problems of the dual architecture was a layered architecture, in which all data is maintained in a single RDBMS. Since spatial data types were at that time
not supported at DBMS level, knowledge about spatial data types was maintained in middleware (Vijlbrief and van Oosterom, 1992). Presently, most mainstream DBMS offer spatial data types and spatial functions, usually in an object-relational spatial extension to the RDBMS (Zlatanova and Stoter, 2006). Storing spatial data and performing spatial analysis can be completed with SQL queries. Additionally, integrated queries on both spatial and non-spatial parts of features can be executed at database level. The spatial data types and spatial operations reflect only simple two-dimensional features, though embedded in 3D space. This support of 3D/4D coordinates allows for alternatives in the management of 3D features. This paper elaborates on current possibilities of DBMS to maintain 3D data. The next section discusses management and visualisation of volumetric objects, 3D lines and 3D points. Then the paper reports on prototype implementations of new data types completed at section GISt, Delft University of Technology, The Netherlands. Standardization activities within the Open Geospatial Consortium are briefly outlined with respect to recent new initiatives. The last section concludes on demands and expectations for the 3D geometry model.
3D data in the DBMS within current techniques

Providing the spatial extension, DBMS have immediately been challenged by the third dimension. A number of experiments were performed by several researchers to investigate possibilities to store, query and visualize features with their 3D coordinates (Arens et al 2006, Stoter and Zlatanova 2003, Pu, 2005, Zlatanova et al 2002, Zlatanova et al 2004). The good news is: 3D data can be organized in DBMS, retrieved and rendered by front-end applications. However, there is also bad news: since no 3D data type is currently supported by any DBMS, the user remains responsible for the validation of the objects as well as for implementing true 3D functionality. These conclusions, with small variations, are consistent for all mainstream DBMS: Oracle, IBM DB2, Informix, Ingres, PostGIS and MySQL. All of them offer 2D data types (basically point, line, polygon) but support 3D/4D coordinates (except Ingres, which is 2D) and offer a large number of functions more or less compliant with the Open Geospatial Consortium (OGC) standards (see below). Most of the functions are only 2D (except PostGIS, which supports a few 3D operations), i.e. although not reporting an error, they omit the z-coordinate. A bit frustrating is the implementation
of spatial functions and operations: it varies with the DBMS. The statement select c from b where a < 100, where c and a are of numerical data types, can be executed in every DBMS. However, if c is a spatial data type and a is a given distance, the SQL statement becomes dependent on the specific implementation. In some cases (e.g. Oracle Spatial), even the names of the spatial data types are not that apparent. Oracle Spatial has one complex data type SDO_GEOMETRY, composed of several parameters indicating the geometry type, the dimension, and an array with the x,y,z coordinates. At present the geometry model of Oracle Spatial is the most supported by GIS and Architecture, Engineering and Construction (AEC) systems, but there is a strong tendency for changes (e.g. PostGIS is supported by GRASS and ESRI). The text below discusses possible organisation of the 3D real-world features (volumetric, line, point) in current DBMS. Note, the presented approaches are not dependent on the DBMS.
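To illustrate how implementation-specific such a query becomes, a minimal sketch for Oracle Spatial is given below. The table and column names are hypothetical; the SDO_GEOMETRY type and the SDO_WITHIN_DISTANCE operator are part of Oracle Spatial, whereas other DBMS would express the same query with different function names (e.g. ST_DWithin in PostGIS).

-- Hypothetical table: each building stored as an SDO_GEOMETRY value.
CREATE TABLE building (
  building_id NUMBER PRIMARY KEY,
  geom        MDSYS.SDO_GEOMETRY
);

-- Select all buildings within 100 m of a query geometry (:query_geom).
-- In Oracle the distance is passed as a text parameter and the operator
-- returns the string 'TRUE'; a spatial index on geom (plus an entry in
-- USER_SDO_GEOM_METADATA) is required for the operator to work.
SELECT b.building_id
FROM   building b
WHERE  SDO_WITHIN_DISTANCE(b.geom, :query_geom, 'distance=100') = 'TRUE';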
Fig. 1. Visualisation of buildings and surface, represented by simple polygons in Oracle Spatial
3D volumetric objects

The most discussed 3D features are volumetric objects, which can be used for modelling man-made objects, such as buildings, and natural objects, such as geological formations. To have those managed in the database, the
user can choose between: 1) using the DBMS data types polygon and multipolygon or 2) creating a user-defined data type. The three candidates for a simple volumetric object are the polyhedron, the triangulated polyhedron and the tetrahedron (see the next section for definitions) and all three can be easily realized with the provided data types. Moreover, there is no practical difference in the implementation of the polyhedron and the triangulated polyhedron, since a separate triangle data type does not exist. The tetrahedron would allow for a somewhat simpler representation since it has only four triangles. The first option, i.e. defining a 3D feature as a list of polygons, can be realized by two columns in one relational table, i.e. feature_ID and a column for the spatial data type (i.e. polygon). If the reality is quite complex, leading to many 3D features with shared polygons, the DBMS table should be normalized. This means that the polygons have to be organized in a separate table (containing polygon_ID and a column for the spatial data type) and the 3D feature table should contain the indices to the composing polygons. Clearly, the separate relational table for volumetric objects would be simpler if the volumetric object is a tetrahedron. It can be organized in a table with a finite number of columns: one for the ID of the tetrahedron and four for the composing polygons (triangles). In general, such an organization can be seen as a partial topological model, since the 3D object is defined by references to the composing polygons. Fig. 1 is an example of a two-table, polyhedron data storage. In the second approach, a 3D object is stored using the data type multipolygon, i.e. all the polygons are listed inside the data type, which is practically one record in the relational database. This case requires only one table, which may contain only two columns: feature_ID and a column for the spatial data type. Various examples of these representations can be found in Stoter and Zlatanova, 2003. An apparent advantage of the 3D multipolygon approach is the one-to-one correspondence between a record and an object. Furthermore the 3D multipolygon (compared to a list of polygons) is recognized as one object by front-end applications (GIS/CAD). For example, a 3D multipolygon is visualised as grouped polygons in Bentley Microstation. However, in case of editing, the group still has to be ungrouped into composing polygons (Zlatanova et al 2002), i.e. the group is not recognised as a 3D shape. Indeed, both representations are not recognized by the DBMS as a volumetric object, i.e. they are still polygons and thus the 3D objects cannot be validated. The objects can be indexed as 3D polygons but not as 3D volumetric objects. Spatial operations can be performed, as well, but on the 'flat' polygons, i.e. the z-coordinate will not be considered. Moreover, both representations are highly inefficient in terms of storage space. The coordinates of the points in a volumetric object are repeated at least three
times (being part of minimum three neighbouring polygons), either in the 3D multipolygon record or in the list of polygons. User-defined spatial data types can be implemented using different approaches, from a simple SQL create type statement to more complex implementations using a Procedural Language (PL), Java, C++, etc. The common drawback of such an implementation is the impossibility of applying the native spatial functionality (operations and indexing) of the DBMS. Moreover, the user-defined spatial data types cannot be stored in the same column as the natively supported spatial data types. Visualisation in front-end applications would be possible only by developing individual connections. User-defined spatial data types, however, are very useful for prototyping and for approval of new concepts. Two examples of user-defined spatial data types are discussed later in the text.
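The two table organisations described above can be sketched as follows. This is only an illustrative layout with hypothetical table and column names, using Oracle-style SDO_GEOMETRY; the same idea applies to any DBMS offering polygon and multipolygon types.

-- Option 1: normalized, two-table storage. Each body references its
-- composing polygons, which are stored only once in a separate table.
CREATE TABLE face (
  polygon_id NUMBER PRIMARY KEY,
  geom       MDSYS.SDO_GEOMETRY        -- one planar polygon
);
CREATE TABLE body (
  feature_id NUMBER,
  polygon_id NUMBER REFERENCES face(polygon_id)
);

-- Option 2: one record per object, all boundary polygons packed into
-- a single multipolygon value.
CREATE TABLE body_mp (
  feature_id NUMBER PRIMARY KEY,
  geom       MDSYS.SDO_GEOMETRY        -- multipolygon listing all faces
);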
Fig. 2. 3D visualisation of pipelines, organised as lines in Oracle Spatial
3D line objects

Typical examples of 3D line objects are utility networks: pipeline and cable networks. Utility data and systems have always been predominantly two-dimensional. Only recently have investigations been initiated towards maintaining utility networks in three dimensions. The motivation for this is the extended usage of underground space and therefore the apparent need for
more sophisticated mechanisms for visualizing several networks in one environment. Utility networks (represented as lines with 3D coordinates) can be readily managed in a DBMS using the supported spatial data types line or multiline. If required by an application, some point objects (valves, connectors, etc.) can be separately organized as points. The only trouble with 3D lines is the visualisation in 3D scenes. It is often recommended that 3D lines be substituted with tiny cylinders when rendered. Indeed, such a substitution is justified only for visualisation purposes. Therefore Du and Zlatanova (2006) suggest keeping the original data as 3D lines and creating the 3D cylinders on the fly only for the visualization (Fig. 2).
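As a sketch of this approach (hypothetical table name, Oracle-style encoding), a pipe segment can be stored as a single 3D line string; the gtype 3002 denotes a three-dimensional line in Oracle Spatial, while other DBMS would use e.g. a LINESTRING with z-values.

-- One pipe segment stored as a 3D polyline (x,y,z per vertex).
CREATE TABLE pipeline (
  pipe_id NUMBER PRIMARY KEY,
  geom    MDSYS.SDO_GEOMETRY
);

INSERT INTO pipeline VALUES (
  1,
  MDSYS.SDO_GEOMETRY(
    3002, NULL, NULL,                          -- 3D line, no SRID
    MDSYS.SDO_ELEM_INFO_ARRAY(1, 2, 1),        -- straight-line segments
    MDSYS.SDO_ORDINATE_ARRAY(0,0,-2,  10,0,-2,  10,5,-3)
  )
);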
Fig. 3. 3D visualisation of point clouds, managed as points in a DBMS
3D point clouds

Until recently, 3D point objects were relatively rare in real-world data sets and little attention was given to them. But with the advances of sensor technology, laser scanning techniques have become available, which produce large amounts of very specific 3D point data. Theoretically, these points can also be organised in a DBMS by either 1) using the supported spatial data types point (Fig. 3) and multipoint or 2) creating a user-defined type. Depending on the type of data (raw or processed data), the user might
decide for either of the representations. Usually the most common format of processed laser scan data consists of seven parameters: the x,y,z-coordinates, the intensity and the colour represented by r,g,b-values. The advantage of the point data type is the possibility to manage all these attributes for each individual point. The major disadvantage refers to data storage and indexing, which are very expensive (one record per point). The multipoint data type can be efficiently indexed, but the points lose their identification, which might be important for modelling purposes (Meijers et al 2005). Furthermore, the number of points in one multipoint has to be carefully considered for an acceptable performance (Hoefsloot, 2006). Depending on the point distribution and the size of the point cloud, the operations can become time consuming and thus difficult to handle.
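A minimal sketch of the per-point alternative (hypothetical table name, Oracle-style 3D point encoding with gtype 3001) could look as follows; the attribute columns carry intensity and r,g,b for each point, which is exactly what the multipoint representation cannot preserve per point.

-- One record per laser point: expensive, but keeps per-point attributes.
CREATE TABLE laser_point (
  point_id  NUMBER PRIMARY KEY,
  intensity NUMBER,
  r NUMBER, g NUMBER, b NUMBER,
  geom      MDSYS.SDO_GEOMETRY
);

INSERT INTO laser_point VALUES (
  1, 143, 120, 118, 95,
  MDSYS.SDO_GEOMETRY(3001, NULL,
    MDSYS.SDO_POINT_TYPE(85123.4, 447211.9, 3.27), NULL, NULL)
);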
New 3D spatial data types

As shown above, 3D real-world features can be stored and indexed in a DBMS and retrieved for visualisation and editing in front-end applications, but they can be analysed only as 2D features. A true 3D geometry data type (polyhedron and/or tetrahedron) and corresponding 3D spatial operations (validation, volume, length, intersection, etc.) are missing in all DBMS. Furthermore, a simple 3D volumetric data type would still be insufficient for handling many shapes (cone, sphere) available in AEC/CAD applications. The sections below briefly present two implementations of new 3D data types, i.e. the polyhedron and the NURBS surface.

3D polyhedron

A 3D spatial data type is the first most important development to be made by DBMS. A simple 3D object can be represented basically in three different ways: as a polyhedron (consisting of an arbitrary number of planar polygons, each with an arbitrary number of points), a triangulated polyhedron (consisting of an arbitrary number of triangles) or a tetrahedron (composed of four triangles). All three representations have advantages and disadvantages (Zlatanova et al 2004). The tetrahedron is the simplest 3D object, consisting of a finite number of points and triangles. While advantageous for computations and consistency checks (Penninga, 2005), tetrahedrons are less appropriate for modelling man-made objects such as buildings, bridges, tunnels, etc., because the interior would require a subdivision into tetrahedrons (which should be omitted for visualisation). However,
tetrahedrons are widely used in modelling geological formations (Breunig and Zlatanova, 2006). The polyhedron is the most appropriate representation for man-made objects, but its validation is quite complex (Arens et al, 2005). The triangulated polyhedron is a compromise between the two, ensuring at least the simplicity of the polygons.
Fig. 4. 3D polyhedron (Arens 2003)
Arens 2003, and Arens et al 2005, have selected the polyhedron for implementation, since it is the most complex data type, requiring strict validation rules. A polyhedron is defined as a bounded subset of space, enclosed by a finite set of planar polygons (not self-intersecting) such that every edge of a polygon is shared by exactly one other polygon. The polyhedron bounds a single volume, i.e. from every point on the boundary, every other point on the boundary can be reached via the interior. The polyhedron has a clearly defined inside/outside space, i.e. it is orientable. The polyhedron defined in this way can also have cavities (Fig. 4). The polyhedron data type is implemented in the Oracle Spatial object-oriented data model, but the formalism is generic. To avoid duplication of point coordinates (as mentioned above), the description has two sections: 1) a list of all the point coordinates and 2) a sequence of polygons, each composed of a list with indices to the point coordinates of the first section. The validity of the new data type is controlled by a specially designed function, which checks the definition rules. Several tests were performed on the new data type and the results were positive. In the very short term, a tetrahedron data type will be prototyped following the formalism and implementation considerations described in Penninga et al 2006.
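A purely illustrative sketch of such a user-defined type is given below; this is not the actual definition by Arens et al, only an assumption of how the two-section description (a flat coordinate list plus per-polygon index lists) could be declared in SQL.

-- Coordinates are stored once; polygons reference vertices by index.
CREATE TYPE coord_array  AS VARRAY(1048576) OF NUMBER;
CREATE TYPE index_array  AS VARRAY(1048576) OF INTEGER;
CREATE TYPE offset_array AS VARRAY(32767)   OF INTEGER;

CREATE TYPE polyhedron AS OBJECT (
  coords       coord_array,   -- x,y,z triples, each vertex stored once
  poly_offsets offset_array,  -- where each polygon starts in vertex_idx
  vertex_idx   index_array    -- vertex indices, polygon after polygon
);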
3D freeform curves and surfaces

Freeform curves and surfaces such as Bezier, B-spline and NURBS are becoming progressively important for modelling buildings, towers, tunnels, etc. Very often these models need to be integrated with 3D GIS for investigation and adjustment of the construction (e.g. for wind resistance). One option for such an integrated environment can be the DBMS. Therefore we have initiated research aiming at developing data types that can be used by AEC applications. Among the large number of mathematical representations of 3D space (used in the CAD and AEC worlds), NURBS is the first candidate to be considered. Some of the most important NURBS characteristics are (Piegl and Tiller, 1997):
• NURBS offer a common mathematical form for both standard analytical shapes (e.g. cone, sphere) and freeform shapes,
• the shapes described by NURBS can be evaluated reasonably fast by numerically stable and accurate algorithms,
• an important characteristic for modelling real-world objects is that they are invariant under affine as well as perspective transformations.
The only drawback of NURBS is the extra storage needed to define traditional shapes (e.g. circles). Using NURBS data types, a circle can be represented in different ways, but the complexity is much higher compared to its mathematical definition (i.e. radius and centre point). The definition of a NURBS curve consists of several quite complex equations, which can be found elsewhere in the literature. Based on these equations, the parameters to be included in the data type are specified as: the control points, their weights, the knot sequence and a degree value. These parameters double in the case of a surface (Pu, 2005). A NURBS curve requires fulfilment of a number of conditions, such as: the degree is greater than 1, the number of control points is greater than 3, degree = number of knots - number of control points - 1, the number of weight values is equal to the number of control points, each weight value is greater than 0, and the knot vector is non-decreasing and has more than one knot. The new data type has been prototyped for Oracle Spatial, but outside the Oracle Spatial SDO_GEOMETRY model, which means it can be readily used for any spatial DBMS (PostGIS, MySQL, Informix, etc.). Besides the validation function, a few simple spatial functions (rotation, translation, scale and distance) were developed. Since the data type is much more complex compared to the 3D polyhedron data type, special care was taken for the visualisation in AEC software, i.e. an engine was developed to map the NURBS data type to the internal representation of Microstation and AutoCAD (Fig. 5). Two NURBS models were tested with the developed data type for retrieval, editing and posting.
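For reference, the standard rational curve form behind these parameters (control points P_i, weights w_i, degree p, and the knots entering the B-spline basis functions N_{i,p}), as given by Piegl and Tiller (1997), is

C(u) = \frac{\sum_{i=0}^{n} N_{i,p}(u)\, w_i\, P_i}{\sum_{i=0}^{n} N_{i,p}(u)\, w_i}, \qquad a \le u \le b

with n + 1 control points and a non-decreasing knot vector of n + p + 2 values, which is the counting rule reflected in the validity conditions listed above.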
Fig. 5. NURBS building retrieved from DBMS

Tests have convincingly demonstrated that appropriate data types for efficient management of freeform curves and surfaces can be created at DBMS level. The design is compliant with the OGC Abstract Specifications (see below). But the spatial functionality provided for the NURBS data type has to be very carefully considered. The mathematics behind many NURBS operations might appear too complex for implementation at DBMS level.
Standardization initiatives: Open Geospatial Consortium

With respect to spatial data management and interoperability, several standardization initiatives have been set off, e.g. ISO, IAI, Web3D, etc. The Open Geospatial Consortium will be mentioned here, because its activities have largely motivated the spatial developments in RDBMS. The mission of the OGC, founded in 1994, is to enable interoperability of geo-services. OGC produces Abstract Specifications and Implementation Specifications (OGC, 2001). The aim of the Abstract Specifications is to create and document a conceptual model sufficient to create the Implementation Specifications. The Abstract Specifications are subdivided into Topics, each of them related to different aspects of geo-spatial services. Topic 1 (identical to ISO 19107) is the most important one, discussing the spatial schema for representing features. It should be noticed that the Abstract Specifications discuss a wide range of primitives (also those used in the CAD domain) such as cone, sphere, triangulated surfaces and freeform shapes such as B-splines and NURBS. Applying this framework, there should be no real problem representing the real world in 3D, even using different representations. However, the Abstract Specifications provide only the general conceptual semantics. The way these can be implemented at different platforms (based on CORBA, OLE/COM and SQL) is described in three different Implementation Specifications. The Implementation Specifications of
interest for DBMS are the Simple Feature Specifications for SQL. Those provide guidance for implementing data types and spatial functions in DBMS. These specifications have greatly contributed to standardizing the implementation of spatial functionality in DBMS. However, they are only 2D, i.e. limited to simple primitives such as points, lines and polygons (surfaces), and cannot meet the three-dimensional demands. OGC has already started numerous new initiatives to meet the challenges. Currently 36 projects (working groups) are discussing various aspects of data integration and exchange in the Specification Program alone, and almost all of them attempt to handle the third dimension. The most relevant is the work of the CAD-GIS working group (Case, 2005). The mission of this group is to span a bridge between CAD, AEC systems and GIS by finding opportunities to improve interoperability of geospatial data and services across these domains. Incompatibilities at various levels (semantic, geometry, topology) contribute to the complexity of the problem (Zlatanova and Prosperi, 2006). To suggest an appropriate schema for the exchange of 3D spatial data, this group will consider several on-going developments, i.e. LandXML (for land survey and construction, initiated in 1999 by Autodesk and EAS-E members), LandGML, CityGML (GML for city models), aecXML (for AEC, including information about projects, documents, materials, parts, organizations, professionals, etc.), TransXML (a project aiming at XML schemas for exchange of transportation data), IFC (the Industry Foundation Classes, used to define architectural and construction-related CAD graphic data as 3D real-world objects), OpenFlight (an industry standard real-time 3D scene description format), 3D ShapeFile (ESRI), X3D, etc. There is a firm belief that the work of this group will contribute to the extension of the Implementation Specifications toward real 3D data management.
Concluding remarks

In the last five years DBMS have made a large step toward maintaining geometries as GIS used to manage them. The support of 2D objects with 3D coordinates is adopted by all mainstream DBMS. The offered functions and operations are predominantly in the 2D domain. The DBMS spatial schemas have to be extended to fully represent the third dimension (first with a simple volumetric object and later with more complex 3D data types). Concepts for 3D objects and prototype implementations are already reported; DBMS have to make the next step and natively support them. 3D operations and functions have to be developed not only for the volumetric
object but also for all other objects embedded in 3D space. 3D functionality is next to be considered. It should be remembered that a Spatial DBMS is a place for storage and management, and less intended for extensive analyses. The 3D functionality should not be completely taken away from front-end applications such as GIS and CAD/AEC. The 3D Spatial DBMS should provide the basic (generic) 3D functions, such as computing volumes and finding neighbours. Complex analyses have to be attributed to the applications. Some existing data types are clearly not sufficient for the purposes of some applications. A very typical example is the multipoint. It was definitely not designed for large amounts of points such as those from laser scanning. DBMS have so far failed to handle such amounts of data efficiently. Such points need special treatment. On the one hand, with the progress of data collection techniques, the amounts of points will only increase. Many laser-scanning companies are increasingly getting concerned about the management of such data. On the other hand, the advances in 3D modelling would require more intelligent management of both raw and processed data. Clearly a new spatial data type with an internal structure and index has to be developed. A triangle (or TIN) data type is also in high demand. It is likely that TINs will continue to be widely used for all kinds of complex surface representations in GIS. Most of the terrain representations presently maintained in GIS, as well as many CAD designs (meshes), are TIN representations. TINs can be stored in DBMS using the polygon data type. This data type, however, is designed for an arbitrary number of vertices and is thus over-attributed for triangles; a simpler, adapted data type is required. Maintenance of multiple representations to be used as Levels-of-Detail is another critical aspect of large three-dimensional models. Still the management of multiple representations is far from formalized. A very promising initiative is CityGML, whose concepts can further be incorporated in the Spatial Schema and later in the Implementation Specifications. Management of texture and mechanisms for texture mapping and texture draping are critical for the management of realistic 3D city models. 3D objects usually need more attributes for visualisation compared to 2D objects. Moreover, very often 3D objects are textured with images from the real world. As AEC and GIS applications come together, the question of linking textures to geometries will gain importance. Textures can be understood as 'presentation' attributes of 3D objects and can also be encoded in the data types. As mentioned above, the update of the OGC Implementation Specifications is very critical. A standardized vision on 3D objects and functions on them will stimulate DBMS toward 3D implementations. The first
extension should be the volumetric object. To be able to provide stronger management of objects from AEC and GIS, the Implementation Specifications have to be extended with other, more complex 3D objects such as parametric shapes, freeform curves and surfaces. The 3D spatial functionality regarding those complex 3D spatial data types has to be further defined. Note, the Abstract Specifications provide only the Spatial Schema and do not discuss intended functionality. A 3D Spatial DBMS has to offer an appropriate 3D user interface. To date, 3D user interfaces have not yet been exhaustively examined. Future 3D query interfaces should support the formulation of complex SQL-like mixed spatial and non-spatial database queries as well as 3D graphical input supporting the intuitive graphical formulation of 3D queries. Only a few CAD/AEC interfaces (e.g. Bentley, Autodesk with respect to Oracle Spatial) are flexible enough to support the manipulation and database update of 3D objects. Further extended 3D solutions are expected primarily from the coupling of AEC software with Spatial DBMS. AEC applications offer a rich set of 3D modelling and visualisation tools that can be further extended toward specifying 3D spatial (SQL) queries.
Acknowledgements This research has been completed in GDMC of Section GISt, OTB, Delft University of Technology, The Netherlands and largely supported by GDMC partners. The author is specifically thankful to the developers of Bentley Inc. and Oracle Spatial for their help and constructive discussions.
References

Arens, C., 2003, Maintaining Reality; Modelling 3D spatial objects in a GeoDBMS using a 3D primitive, Master's Thesis, TU Delft, 2003, 76 p.
Arens, C., J.E. Stoter, and P.J.M. van Oosterom, 2005, Modelling 3D spatial objects in a geo-DBMS using a 3D primitive, in: Computers & Geosciences, Volume 31, 2, pp. 165-177
Bittner, T., M. Donnelly and S. Winter, 2006, Ontology and Semantic Interoperability, in: Zlatanova and Prosperi (Eds.) Large-scale 3D Data Integration: Challenges and Opportunities, Taylor & Francis, a CRC Press book, pp. 139-160
Breunig, M. and S. Zlatanova, 2005, 3D Geo-DBMS, Chapter 4 in S. Zlatanova & D. Prosperi (Eds.) Large-scale 3D Data Integration: Challenges and Opportunities, Taylor & Francis, a CRC Press book, pp. 88-113
Case, T., 2005, CAD/GIS Working Group, Charter and Mission, available at http://portal.opengeospatial.org, 4 p.
INSPIRE, 2004, http://www.ec-gis.org/inspire/
Hoefsloot, M., 2006, Storing Point Clouds in DBMS, MSc Thesis, available at: http://www.gdmc.nl/publications, 80 p.
Meijers, M., S. Zlatanova and N. Pfeifer, 2005, 3D geoinformation indoors: structuring for evacuation, in: Proceedings of Next generation 3D city models, 21-22 June, Bonn, Germany, 6 p.
OGC, 2001, OpenGIS Consortium, The OpenGIS Abstract Specification, Topic 1: Feature Geometry (ISO 19107 Spatial Schema), Version 5, edited by J.H. Herring, OpenGIS Project Document Number 01-101, Wayland, Mass., USA
OGC, 2005, Open GIS Specifications, Simple Feature Specifications for SQL (SFS), http://www.opengeospatial.org/specs/?page=specs
Oosterom, van P., J. Stoter, W. Quak and S. Zlatanova, 2002, The balance between geometry and topology, in: D. Richardson and P. van Oosterom (Eds.), Advances in Spatial Data Handling, 10th International Symposium on Spatial Data Handling, Springer-Verlag, Berlin, pp. 209-224
Oosterom, van P., J. Stoter, and E. Jansen, 2006, Bridging the worlds of CAD and GIS, in: Zlatanova and Prosperi (Eds.) Large-scale 3D Data Integration: Challenges and Opportunities, CRC Press, pp. 9-36
Penninga, F., 2005, 3D Topographic Data Modelling: Why Rigidity Is Preferable to Pragmatism, in: Spatial Information Theory, COSIT'05, Vol. 3693 of Lecture Notes in Computer Science, Springer, pp. 409-425
Penninga, F., van Oosterom, P. and Kazar, B. M., 2006, A TEN-based DBMS approach for 3D Topographic Data Modelling, in: Spatial Data Handling '06
Piegl, L. and Tiller, W., 1997, The NURBS Book, 2nd Edition, Springer-Verlag
Pu, S., 2005, Managing Freeform Curves and Surfaces in a Spatial DBMS, MSc Thesis, TU Delft, 2005, 77 p., available at http://www.gdmc.nl/publications
Stoter, J. and S. Zlatanova, 2003, Visualisation and editing of 3D objects organised in a DBMS, Proceedings of the EuroSDR Com V Workshop on Visualisation and Rendering, 22-24 January 2003, Enschede, The Netherlands, 16 p.
Vijlbrief, T. and P.J.M. van Oosterom, 1992, GEO++: An extensible GIS, Proceedings 5th International Symposium on Spatial Data Handling, August, Charleston, South Carolina, USA
Zlatanova, S., D. Holweg and V. Coors, 2004, Geometrical and topological models for real-time GIS, in: Proceedings of UDMS 2004, 27-29 October, Chioggia, Italy, CDROM, 10 p.
Zlatanova, S. and D. Prosperi (Eds.), 2006, Large-scale 3D Data Integration: Challenges and Opportunities, Taylor & Francis/CRC Press
Zlatanova, S. and J. Stoter, 2006, The role of DBMS in the new generation GIS architecture, Chapter 8 in S. Rana & J. Sharma (Eds.) Frontiers of Geographic Information Technology, Springer, pp. 155-180
Zlatanova, S., A.A. Rahman and M. Pilouk, 2002, Trends in 3D GIS development, in: Journal of Geospatial Engineering, Vol. 4, No. 2, pp. 1-10
A Web 3D Service for Navigation Applications

Jorg Haist (1), Thorsten Reitz (1), Volker Coors (2)

(1) Fraunhofer Institute for Computer Graphics, Darmstadt, Germany
[email protected], [email protected]

(2) University of Applied Sciences, Stuttgart, Germany
[email protected]
Abstract: The paper focuses on current and upcoming developments of web services that provide 3D spatial data for several client devices. The Fraunhofer Institute for Computer Graphics is actively involved in the development of a Web 3D Service and has implemented extensions to this service proposal. Looking at these extensions, we describe the motivation, the implementation, and the application in different areas, especially navigation systems. When mobile devices are used in navigation applications, the performance problems of low bandwidth have to be solved. To transfer 3D data, different extending approaches, such as compression to reduce idle time, are presented. Applications for cyclists, hikers and tourists building on these approaches are described. Here, we concentrate on visualization aspects like height profiles or 3D visualisation on cell phones.
Introduction

With the growing usage of three-dimensional geographic systems (3D GIS) and related systems like Google Earth, the need for usable standards is obvious. While standards for data formats are well-discussed and also developed, specific standards for services are in early development. Common data formats for 3D spatial data are the Geography Markup Language (GML), its profiles like CityGML (Kolbe et al. 2005) and proprietary types like Shape-files or Autodesk's DXF. Also, GeoTIFF for 2.5D data and the Virtual Reality Modeling Language (VRML) and its successor X3D are often
used. Regarding services, the Web Feature Service (WFS) and its extension, the transactional WFS, are important because of their capability to distribute GML and their already high acceptance. Despite their disadvantage of providing only a projection of 3D data, the Web Terrain Service (WTS) and the Web Map Service (WMS) are also used to derive products from 3D data. In contrast to the WMS and WTS, the Web 3D Service (W3DS) (Quadt and Kolbe 2005), as a portrayal service for three-dimensional geodata, delivers 3D scene graphs. These scene graphs are rendered by the client and can be interactively explored by the user. The W3DS merges different types (layers) of 3D data into one scene graph. One of the first implementations of the W3DS was integrated into the CityServer3D. The department Graphic Information Systems of the Fraunhofer Institute for Computer Graphics has developed the CityServer3D system to implement new concepts and mechanisms for 3D GIS in current and future research projects. Due to its architecture the CityServer3D provides a platform that can easily be extended by new modules. Based on a three-tier architecture, the system provides dynamically combinable data sources and can be accessed via several service interfaces. The geometric models are managed, for example, in the GeoBase21 database, processed by server components and delivered to clients. Due to the wide spectrum of supported interfaces, ranging from a custom Request/Response mechanism to standardized Web Services, these can be 3rd-party clients or clients that were specifically developed for the CityServer3D. With a facade technology that enables the integration of new service interfaces, the existing WMS facade was extended by the W3DS functionality. Now, the CityServer3D provides the possibility to apply 3D city models in several application areas like tourism, navigation, risk management, decision support systems, or urban planning.
Visualization for Navigation Projects

New mobile and ubiquitous computing applications are rapidly becoming feasible, providing people access to online resources at any time and everywhere. A Location-based Computing (LBC) (Zadorozhny and Chrysanthis 2004) infrastructure comprises a distributed mobile computing environment where each mobile device is location-aware and wirelessly linked. As location awareness is the core of LBC, most Location-Based Services and applications rely on a global tracking technology and a large portion of geospatial information as provided by GIS. Typical Location-Based Services are city guide applications and mostly deal with three major problems:
Where am I now? How do I get to my point of interest? Where can I find something? Nowadays, LBS can provide feasible answers to these questions - the problems are solved. New aspects come up in the context of mobile services, such as personalisation, provisioning of attractive content and up-to-date information, but the usability and graphical rendition in the presentation of these services also gain in importance. The user has - besides navigation and orientation support - a demand for background information and more detailed visualisations of objects located on his route. In a 3D map the navigational value of a map increases due to the high visual correspondence between map objects and real-world objects. This correspondence allows the user to recognize buildings easily (Coors et al. 2004), which leads to a more intuitive navigation inside the map in comparison to a 2D map. Combined with an augmented reality user interface, a 3D map with dynamically created content fitting the user's information needs will be the ultimate navigational aid for Location-Based Services. Some milestone research projects and prototypes on the way to geospatial augmented reality and location-aware systems are the ARCHEOGUIDE project (ARCHEOGUIDE 2002), the MARS project (Hollerer and Feiner 2004), the GEIST project (GEIST 2004), and the ULTRA project. To our knowledge the first 3D maps for cell phones, running on a Nokia communicator, were developed in the Tellmaris project (TELLMARIS 2003). The research on 3D maps for mobile devices is continued in the m-Loma project (Nurminen 2006). For further details see (Blechschmied et al. 2005).
Architecture

The Web 3D Service

The Web 3D Service, as a portrayal service for three-dimensional spatial data, delivers graphic objects from a given spatial extent. Several formats can be chosen to deliver the scene. The data is delivered as a 3D scene graph and therefore the client has to render the scene. The query format is related to that of the WMS and the WTS, and constraints can be given in a query. The two main requests are GetCapabilities, to receive information about the service's capabilities,
and GetScene, to receive a three-dimensional scene. Some important parameters of a GetScene query are:
SRS: Spatial reference system
POI: Point coordinates according to SRS
BBOX: A two-dimensional bounding box, defined by Xmin, Ymin, Xmax, Ymax
FORMAT: MIME type of the delivered scene
LAYERS: A list of layers
POC: The Point of Camera, or viewing point, describes the position of the camera
STYLES: A list of so-called styled layer descriptors for each layer
There are also parameters like PITCH (angle of inclination), YAW (azimuth), ROLL (rotation around the viewing vector) and some more which are optional or conditional. An example query to a W3DS would be:

http://clustera.igd.fraunhofer.de:7070/CityServer3D_R1.0.0/WMS?
request=GetScene&
exceptions=text/plain&
version=0.3.0&
srs=EPSG:16263&
bbox=62574.91,36549.58,63023.33,37102.43&
format=model/vrml&
poi=62800,0,-36800&
poc=62300,200,-37505
In addition to the shown query, a layer list and style information can be provided. Here, layers are understood as thematically different types of 3D data. The layer structure, however, depends on the data structure of the server which offers the W3DS services. Since there is no standard way of grouping the features in a data repository - layers of the features in a data source can be regarded as grouping elements - the first step is to call the W3DS's GetCapabilities function. This will usually be triggered and also interpreted by a human user, to get the information which layers are available and which ones should actually be retrieved in the GetScene requests that will follow. Common layers in a 3D city model often are buildings, street furniture or digital terrain models. As the mandatory output format of a GetScene request, VRML was defined. For the result of the GetCapabilities operation, XML is used. The choice to define VRML as the least common denominator was mainly driven by the aforementioned broad use of this format. Looking at classic internet-based scenarios with a desktop 3D-accelerated computer
connected with good bandwidth to the Internet, VRML has the advantage of being supported by many applications, e.g. browser plug-ins. Leaving these scenarios and looking towards mobile scenarios, e.g. pedestrian navigation using cell phones, the files produced in VRML are too big to be transferred to the device in acceptable time. Also, the VRML support on mobile devices is very heterogeneous. The W3DS description also advises integrating GeoVRML and/or X3D as formats to be supported. The advantage of both formats is that they overcome some shortcomings of VRML, but both lack broad adoption in the market. Due to the requirements of the W3DS, some other formats are being discussed:
- CityGML, with its inclusion of X3D elements, could be applicable to some scenarios where GML is used as an exchange format for visualization purposes. Here it has to be kept in mind that GML is neither meant as a visualization format nor does it usually map to a scene graph; the client would have to contain a powerful parsing engine for this task. Also, the W3DS does not aim at being a geodata exchange service, so for spatial data exchange services the WFS is the better choice.
- M3G (JSR-184), as a scene graph format for mobile devices using J2ME. Having the growing support for Java ME in mind, this nowadays seems to be a good solution to achieve high acceptance and also to reduce implementation effort due to Java's concept of "write once, run anywhere".

W3DS Usage and Extensions

From the implementation and use of the W3DS in the described mobile navigation scenarios and applications, it becomes clear that while the idea of complementing the WFS and WMS standards with a W3DS to handle visualization-centric requests for 3D geo-data is very promising, it still needs to be extended in several points to fulfil its role. The problems that occurred during our work are the following:

VRML as Exchange Format
As mentioned, VRML is not well supported on mobile clients, especially on cell phones, which is an unacceptable restriction with regard to consumer services. Also, other 3D formats are hardly available on cell phones. J2ME as an API offers the possibility to implement client software which is capable of querying, receiving and rendering 3D scenes.
An approach could be to create a VRML viewer for cell phones or to integrate M3G support into mobile applications. J2ME can be combined with the M3G API, which makes an implementation straightforward.

Low Bandwidth in Combination with VRML
Another problem occurring with the use of VRML is caused by the low bandwidths available in today's mobile telecommunication networks. UMTS is often claimed to be the solution for any performance problem of mobile applications. However, the bandwidth within a UMTS cell cannot be predicted: all devices which are logged in to a cell have to share the available bandwidth. This leads to a more or less unpredictable bandwidth situation and limits - depending on the actual infrastructure development - the use especially where it might be required the most. For example, in cities where LBS and routing would be required, a high density of UMTS users will be logged in. This is a problem with which navigation systems using 3D city models will have to cope. There is a clear need to reduce the network traffic as much as possible. Here, 3D scenes formatted as VRML grow too big. By using M3G, the file size can be reduced in two steps. Table 1 compares the two formats:
Table 1: Comparison of VRML and M3G file sizes. For a detailed description of the definition of Levels of Detail for urban models see (CityServer3D 2005) and (Kolbe et al. 2005)

File content                              Complexity        VRML     M3G      M3G compressed
Fraunhofer IGD (LOD1)                     20 triangles      1,24 KB  267 B    403 B
Fraunhofer IGD (LOD2)                     120 triangles     4,68 KB  1,34 KB  1,34 KB
Part of the city of Darmstadt, Germany    15000 triangles   1,06 MB  59,9 KB  3,93 KB
Reducing the Data Transfer Volume by Compression

In a first step towards achieving a high compression, we currently use the ZIP encoding, as it is already supported by J2ME. However, by using a compression that exploits the specific properties of 3D models, especially the topological information of meshes, much higher rates could be achieved. An example of such a compression is the Delphi algorithm (Coors and Rossignac 2004), which uses a prediction scheme to reduce connectivity information down to less than 2 bits per triangle. Adding support for such a compression scheme to a standard like M3G as an optional feature would allow a very network-efficient transfer of even complex geometries, at the cost of slightly higher computing time on the side of the server and the client. The client could then tell the server which compression schemes it can decode and which one it would prefer, and the server would answer accordingly.
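As an illustration of this kind of encoding step, the following sketch shows GZIP compression and decompression of an encoded scene using java.util.zip. It is an assumption-laden example rather than the CityServer3D implementation: the class and method names are invented, standard Java SE GZIP is used as a stand-in for the ZIP encoding mentioned above, and on real MIDP devices a corresponding decompression routine would have to be provided by the client application itself.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

public class SceneCompressionSketch {

    /** Compresses an encoded scene (e.g. VRML or M3G bytes) before transfer. */
    public static byte[] compress(byte[] scene) throws Exception {
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        try (GZIPOutputStream gzip = new GZIPOutputStream(buffer)) {
            gzip.write(scene);
        }
        return buffer.toByteArray();
    }

    /** Restores the original scene bytes on the receiving side. */
    public static byte[] decompress(byte[] compressed) throws Exception {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        try (GZIPInputStream gzip = new GZIPInputStream(new ByteArrayInputStream(compressed))) {
            byte[] chunk = new byte[4096];
            int read;
            while ((read = gzip.read(chunk)) != -1) {
                out.write(chunk, 0, read);
            }
        }
        return out.toByteArray();
    }
}
```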
Extension of Query Constraints
The W3DS allows constraints to be defined via two parameters. The minimum bounding box is used to define the spatial selection, whereas layers can be used to define a thematic selection. The second possibility strongly depends on the layer structure a service offers. If the layer structure does not fit the application, it cannot be used. This could lead to the situation that W3DS instances would only be set up for special application scenarios and would therefore be too restricted. Within the market of geographic information systems, approximately two thirds of the effort is spent on data acquisition and conversion. Because of this, it should be possible for data and service providers to set up services which can be used independently of specific application scenarios. Here, more powerful query capabilities are conceivable. In one of our scenarios, an extension to request objects via an identifier instead of the BBOX will be used. The identifier can be a single identifier, e.g. if just one building is requested, or a list of identifiers, e.g. if a set of buildings is requested. These identifiers can be of different types. The identifiers used by the system, e.g. the land register identifier key, are appropriate for technical users, while for end consumers a more user-friendly access has to be found. Here, the integration of names, e.g. "Darmstadt", as identifiers is to be favoured, whereby it has to be taken into account that the service has to resolve these identifying names with the help of thesauri and/or gazetteers.

Extensions of Spatial Query Constraints
Navigation always involves movement. Using a cell phone online navigation system in combination with a W3DS that supports only the BBOX as spatial constraint, the whole scene has to be downloaded at the beginning of the trip. Since this leads to big scenes, the data transfer will take a long time, the client device will have to handle big scenes, and a lot of unused data will be transferred. Moreover, it is often not known at the beginning where the trip is going to end, so that it is not possible to get the entire scene at the start. In geographic information processing, spatial objects are also used to define spatial constraints in queries, e.g. a two-dimensional polygon can be used to query a result set out of a three-dimensional city model. Since the CityServer3D internally works with spatial constraint objects for queries, the W3DS was easy to adapt to these mechanisms. For navigation systems on mobile devices, this approach enables the client to manage the data transfer intelligently. So, if the user moves in reality
and wants a location-based visualization, only the missing features are requested from the server and transferred to the client. For this, the client has to be capable of dynamically extending the displayed scene.

Extension of LOD Support within Query Formulation
Modern systems which manage three-dimensional city models also support the management of LODs of features. In this context, LOD is used as a term for persistent geometries of different quality levels. Definitions can be found in (CityServer3D 2005) and (Greger et al. 2004). The W3DS does not support a dedicated query parameter to access different LODs. Without an extension of the service, two workarounds can be considered. A first possibility is to use the layer or style query parameter to transfer information about the LOD wanted by the client. This approach can be justified by thinking of the LODs as different styles in which a layer is displayed. However, the mentioned LODs usually have more than a purely visual purpose; using the layer descriptors this way therefore mixes up different application domains and has to be considered a dirty solution. A second way might be that the service decides by itself which LOD is the right one to choose and deliver. This approach raises some interesting questions concerning the assessment of spatial and semantic aspects of features in a city model. But given the heterogeneous hardware capabilities of mobile devices, a W3DS service cannot determine the correct LOD, since the service does not have any information about the bandwidth and rendering power of the requesting device. By extending the W3DS with a LOD parameter, we give the device the power to decide which model is requested. Here, too, we cannot get an exact prediction of what will be delivered by the service and what can be processed, but on the client we at least approximately know what can be rendered. The LOD parameter has to be quite flexible (an example request is sketched after this list):
- Assignment to a layer: All features of a layer are delivered in the defined LOD.
- Assignment to a query: All result features of a query are delivered in the requested LOD.
- Assignment according to the available resources: In case of bandwidth or processing restrictions, a definition of an upper limit for LODs is useful. If no other LODs are given in the query, it can then be assured that the service does not deliver any LODs above the set limit.
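Assuming the proposed LOD parameter were added with the same key/value syntax as the existing parameters - a pure assumption, since the extension is not yet part of the service description - a request could look like the following, where the client caps the level of detail at LOD 2 because of its limited resources:

http://example.org/w3ds?
request=GetScene&
version=0.3.0&
srs=EPSG:16263&
bbox=62574.91,36549.58,63023.33,37102.43&
format=model/vrml&
lod=2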
Applications

For our applications we use the CityServer3D technology, which is extended according to the requirements of the specific application. The CityServer3D consists of several components which together form a system to visualize, manipulate, and manage 3D spatial data such as city models and digital elevation models. The system serves as a technical platform to realize solutions for different application scenarios like tourism, navigation, risk management, decision support systems or urban planning. Figure 1 shows a non-photorealistic visualization of the area around the Fraunhofer Institute for Computer Graphics.
Figure 1 Sample Visualization of a city model using sketch rendering
The CityServer3D uses facade mechanisms within its interface layer, which also provides the OGC services. The W3DS uses this facade of the interface layer and is not restricted to the VRML converter of the server's converter layer: other appropriate formats can be integrated into the W3DS interface as long as they are implemented as components in the converter layer. The W3DS is already used in projects of the Fraunhofer Institute for Computer Graphics. Regarding 3D maps on mobile devices, two complementary solutions using the W3DS were developed.
Route Extension

Planning, analysis and visualization of route objects for the navigation of cyclists and hikers are handled by the route extension. To support mobility for the targeted user groups, the CityServer3D was extended by three
components which enable users to store routes and to extend them into travel guides. These extensions correspond to three different user groups (roles). The data administrator maintains the stored models, which are mainly routes, two-dimensional vector-based mapping data and digital elevation models. So-called authors use the system to generate, import and store routes which are part of travel guides. End users browse travel guides and are provided with the functionality to buy interesting guides.
Figure 2 Dynamically generated height profile
To generate new routes, authors can define a route manually or import a route from GPS devices. The new or manipulated route is stored in the CityServer3D system and is exported as XML, as an overview map, and as an altitude profile image. Figure 2 shows a sample profile and the use of colours to show the gradients of route segments. Owing to the users' requirements, a highly configurable but simple and well-known visualization - the height profile - was used. Within the definition of a route, an author also provides information about important waypoints and POIs. While waypoints are part of a route object and belong to the three-dimensional line object, POIs are not part of the route geometry but are assigned to a route because of their importance. POIs can be of several types, which are predefined by an application-specific code list. They mark an interesting point with respect to the navigation, e.g. a high tower, or to the route theme, e.g. an archaeological site.
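To make the distinction between route geometry and assigned POIs concrete, the following is a minimal sketch of how such objects might be represented; the class and field names are illustrative assumptions and do not reflect the actual CityServer3D data model.

```java
import java.util.ArrayList;
import java.util.List;

public class RouteSketch {

    /** A 3D vertex that is part of the route line geometry. */
    static class Waypoint {
        double x, y, z;
        Waypoint(double x, double y, double z) { this.x = x; this.y = y; this.z = z; }
    }

    /** A point of interest assigned to the route but not part of its geometry. */
    static class Poi {
        String typeCode;   // from an application-specific code list, e.g. "TOWER"
        String name;
        double x, y, z;
        Poi(String typeCode, String name, double x, double y, double z) {
            this.typeCode = typeCode; this.name = name;
            this.x = x; this.y = y; this.z = z;
        }
    }

    static class Route {
        final List<Waypoint> waypoints = new ArrayList<Waypoint>(); // the three-dimensional line object
        final List<Poi> pois = new ArrayList<Poi>();                // assigned to the route, not embedded in it
    }
}
```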
3D Visualization

To visualize POIs and landmarks in 3D maps on mobile devices, M3G in combination with a self-developed Java-based mobile viewer is used. POIs and landmarks can be used to answer the question: How do I get from my present position to the desired destination? Up to now this typical question
asked by visitors of an unfamiliar place has been answered by using two-dimensional maps. These maps are indeed able to give a rough overview of a large area, since cartographic functions are well researched and developed for two-dimensional maps. For orientation in smaller areas, however, two-dimensional maps are often insufficient, as they do not take landmarks into account. Thus, informational needs like the current position and the direction to go are not directly deducible from the map, leading to problems in finding the way to a desired destination. By supporting the visualization of different LODs, a situation-dependent and query-adaptive visualisation of city models can be generated. To visualize a route from a specific point - like the current position of the user - to a destination, an optical accentuation and detailed presentation of objects of high importance to the route is sufficient; surrounding buildings can be shown at a lower level of detail. This enables the transmission of large-area maps in a short time, with low costs for the user, and the presentation of maps on a cell phone in spite of its limited resources. Figure 3 shows two approaches to visualizing 3D features on cell phones.
Figure 3 (left) M3G-based visualization of Darmstadt on a Siemens S65 cell phone (right) Streamed movie visualizing landmarks and buildings of less interest
Figure 3 (left) shows the mobile viewer on a Java-enabled cell phone. Navigation within the loaded model is done via shortcuts (e.g. to access a bird's eye view), buttons, and the joystick if it is available. In this version of the mobile viewer, the W3DS is used to query and receive the required data. Figure 3 (right) shows a non-Java version which presumes the capability of receiving and rendering videos on the client device. Here, the Sony P900 was used to visualize the route, which was prepared and
rendered on the server. With this approach, Java is not needed, but the user can no longer interact with the model.
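To make the Java-based approach more tangible, the following is a minimal sketch of an M3G (JSR-184) viewer canvas. It is illustrative only and not the mobile viewer developed in our projects: the class name is invented, error handling and user interaction are omitted, and the scene is assumed to be delivered by the W3DS as an .m3g resource.

```java
import javax.microedition.lcdui.Canvas;
import javax.microedition.lcdui.Graphics;
import javax.microedition.m3g.Graphics3D;
import javax.microedition.m3g.Loader;
import javax.microedition.m3g.Object3D;
import javax.microedition.m3g.World;

public class M3gViewerCanvas extends Canvas {

    private World world;

    public M3gViewerCanvas(String sceneUrl) {
        try {
            // Load the scene graph delivered by the W3DS (assumed here to be an M3G file).
            Object3D[] roots = Loader.load(sceneUrl);
            for (int i = 0; i < roots.length; i++) {
                if (roots[i] instanceof World) {
                    world = (World) roots[i];
                    break;
                }
            }
        } catch (Exception e) {
            // A real viewer would report the error to the user.
        }
    }

    protected void paint(Graphics g) {
        if (world == null) {
            return;
        }
        Graphics3D g3d = Graphics3D.getInstance();
        try {
            g3d.bindTarget(g);   // bind the 2D graphics context as render target
            g3d.render(world);   // render the retained-mode scene graph
        } finally {
            g3d.releaseTarget();
        }
    }
}
```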
Conclusions & Outlook

In this paper we provided an overview of the current status of the W3DS and proposed some extensions for its further development, motivated by our work. Making 3D visualization of spatial data available via web services seems to us an important function for building networked and connected applications; as a well-known example, Google Earth springs to mind. For a wide usage of such services, e.g. in navigation systems, there is nowadays a lack of standards. Here we propose to use the W3DS, which is still in development but shows promising possibilities. Especially its relationship to existing standards eases the set-up and usage of the W3DS. Besides the described extensions, others are possible. In navigation scenarios, especially if a route is planned, an enhanced scene could be useful. By extending the scene with a route object and a related animation, the user could receive an animated preview of the trip. Especially hikers and bicyclists could improve their travel preparations. In this case, an animation along the whole route would most probably not be helpful, since the information depth would not be well distributed. So, it should be possible to integrate not one but several animation objects, which can be assigned to a route. This approach could be used to provide the user with animations of hotspots, e.g. crossings or points of interest.
References

ARCHEOGUIDE (2004) project website: http://archeoguide.intranet.gr/ (accessed 6 July 2006)
Blechschmied H, Coors V, and Etz M (2005) Interaction and Visualization of 3D City Models for Location-Based Services. In: Zlatanova S and Prosperi D (2005) Large-Scale 3D Data Integration: Challenges and Opportunities, CRC Press, Taylor & Francis Group, New York
CityServer3D: http://www.invisip.de/CityServer3D/ (accessed 6 July 2006)
Coors V, Elting C, Kray C, Laakso K (2004) Presenting Route Instructions on Mobile Devices - From Textual Directions to 3D Visualization. In: Dykes J, MacEachren AM, Kraak MJ (eds) Exploring Geovisualization. Elsevier, Amsterdam, 2005, pp 529-550
Coors V, and Rossignac J (2004) Delphi: geometry-based connectivity prediction in triangle mesh compression. In: The Visual Computer, International Journal of Computer Graphics, Vol. 20, Number 8-9, Springer Verlag, Berlin Heidelberg, November 2004, pp 507-520
GEIST (2004) project website: http://www.igd.tbg.de/igd-a5/projects/geist.html (accessed 6 July 2006)
Haist J, and Coors V (2005) The W3DS-Interface of CityServer3D. In: Kolbe, Greger (eds); European Spatial Data Research (EuroSDR) et al.: Next Generation 3D City Models, pp 63-67
Hollerer T, and Feiner S (2004) Mobile Augmented Reality. In: Karimi HA and Hammad A (eds) Telegeoinformatics: Location-Based Computing and Services, CRC Press, pp 221-260
Kolbe T, Greger G, and Plumer L (2005) CityGML - Interoperable Access to 3D City Models. In: Oosterom P, Zlatanova S, and Fendel E (eds) Proceedings of the Int. Symposium on Geo-information for Disaster Management, Springer Verlag
LoVEUS (2004) project website: http://loveus.intranet.gr (accessed 24 January 2005)
Nurminen A (2006) m-Loma - a mobile 3D city map. In: Proceedings of Web3D 2006, ACM Press, New York, pp 7-18
Reitz T (2005) Architecture of an interoperable 3D GIS under special consideration of visualization applications. Master Thesis, Fachhochschule Furtwangen
TELLMARIS (2003) project website: http://www.igd.tbg.de/igda5/projects/geist.html (accessed 6 July 2006)
Zadorozhny V and Chrysanthis P (2004) Location-Based Computing. In: Karimi HA and Hammad A (eds) Telegeoinformatics: Location-Based Computing and Services, CRC Press, pp 145-170
Integration of Photogrammetric and LIDAR Data in a Multi-Primitive Triangulation Environment

A.F. Habib a, S. Shin b, C. Kim a, Mohannad Al-Durgham a
"Geomatics Engineering, University of Calgary, 2500 University Drive NW, Calgary, AB, Canada T2N IN4 -
[email protected], (cjkim, mmaldurg)@ucalgary.ca. "Spatial Information Research Team. Telematics & USN Research Division Electronics and Telecommunications Research Institute, South Korea
Abstract Photogrammetric mapping procedures have gone through major developments as a result of the significant improvements in its underlying technologies. For example, the continuous development of digital imaging systems has lead to the steady adoption of digital frame and line cameras in mapping activities. Moreover, the availability of GPS/INS systems facilitated the direct geo-referencing of the acquired imagery. Still, photogrammetric datasets taken without the aid of positioning and navigation systems need control information for the purpose of surface reconstruction. So far, distinct point features have been the primary source of control for photogrammetric triangulation although other higher-order features are available and can be used. In addition to photogrammetric data, LIDAR systems supply dense geometric surface information in the form of three dimensional coordinates of laser footprints with respect to a global reference system. Considering the accuracy improvement of LIDAR systems in the recent years, which is propelled by the continuous advancement in GPS/INS technology, LIDAR data is considered a viable supply of control for photogrammetric geo-referencing. In this paper, alternative methodologies will be devised for the purpose of integrating LIDAR data into the photogrammetric triangulation. Such methodologies will deal with two main issues: utilized primitives and the respective mathematical models. More specifically, two methodologies will be introduced that utilize straight-line and areal features derived from both datasets as the primitives. The first methodology directly incorporates LIDAR lines as control information in the photogrammetric triangulation, while in the second methodology, LIDAR
patches are used to geo-reference the photogrammetric model. The feasibility of the devised methods will be investigated through experimental results with real data.

Keywords: LIDAR, photogrammetry, triangulation, linear features, areal features.
1 Introduction A diverse range of spatial data acquisition systems is now available onboard satellite, aerial, and terrestrial mapping platforms. The diversity starts with analog and digital frame cameras and continues to include linear array scanners. In the past few years, imaging sensors witnessed vast development as a result of an enormous advancement in the digital technology. For example, the increasing sensor size and storage capacity of digital frame cameras have lead to their application in traditional and new mapping functions. However, due to technical limitations, current digital frame cameras are not capable of providing geometric resolution and ground coverage similar to those associated with analog frame cameras. To alleviate such a limitation, push-broom scanners (line cameras) have been developed and used onboard satellite and aerial platforms. However, rigorous modeling of the imaging process of push-broom scanners is far more complicated than the modeling of frame sensors (Poli, 2004; Toutin, 2004; Lee et al, 2002; Habib et aI, 2001; Lee et al, 2000). For example, in the absence of the geo-referencing parameters, the narrow Angular Field of View (AFOV) associated with current push-broom scanners are causing instability in the triangulation procedure due to strong correlation between the components of the Exterior Orientation Parameters (EOP). Therefore, there has been a tremendous body of research that dealt with the derivation of alternative models for line cameras to circumvent such instability (Tao et al, 2004; Habib et aI, 2004; Fraser and Hanley, 2003; Fraser et al, 2002; Hanley et al, 2002; Tao and Hu, 2001). In addition to line cameras, the low cost of medium-format digital frame cameras is leading to their frequent adoption by the mapping community. Apart from imaging systems, LIDAR scanning is rapidly taking its place in the mapping industry as a fast and cost-effective 3D data acquisition technology. A LIDAR system is comprised of laser ranging and GPSIINS units. The increased accuracy and affordability of GPSIINS units are the main reasons behind their expanding role in the geo-referencing of LIDAR systems. Current LIDAR systems can measure several thousand points per second and are capable of providing high spatial density of observed coordinates far in excess of that derived from traditional surveying and photo-
grammetric techniques. Such a large sample size can provide a wealth of information for many applications such as surface reconstruction, structural monitoring, orthophoto generation, and city modeling. However, there is no inherent redundancy in reconstructed surface from a LIDAR system. Therefore, the overall accuracy depends on the accuracy and calibration of the different components comprising the LIDAR system. Moreover, the positional nature of LIDAR data collection makes it difficult to derive semantic information from the captured surfaces - e.g., material and types of observed structures, even with the presence of intensity images (Wehr, 1999; Baltsavias, 1999). Considering the characteristics of acquired spatial data from imaging and LIDAR systems, one can argue that their integration will be beneficial for accurate and complete description of the object space. However, the synergic characteristics of both systems can be fully utilized only after ensuring that both datasets are geo-referenced relative to the same reference frame (Habib and Schenk, 1999; Chen et al, 2004). Traditionally, photogrammetric geo-referencing is either indirectly established with the help of Ground Control Points (GCP) or directly defined using GPS/INS units on board the imaging platform (Cramer et al, 2000; Wegmann et al, 2004). On the other hand, LIDAR geo-referencing is directly established through the GPS/INS components of a LIDAR system. In this regard, this paper presents alternative methodologies for utilizing LIDAR features as a source of control for photogrammetric geo-referencing. These methodologies have two main advantages. First, they will ensure the co-alignment of the LIDAR and photogrammetric data to a common reference frame as defined by the GPSIINS unit of the LIDAR system. Moreover, LIDAR features will eliminate the need for ground control points to establish the georeferencing parameters of the photogrammetric data. The possibility of utilizing LIDAR data as a source of control for photogrammetric georeferencing hinges on the ability to identify common features in both datasets. Therefore, the first objective of the developed methodologies is concerned with the selection of appropriate primitives. Afterwards, the mathematical models, which can be utilized in the triangulation procedure, for relating LIDAR and photogrammetric primitives will be introduced. Another objective of the proposed methodologies is having them flexible enough to allow for the incorporation of the identified primitives in scenes captured by frame and line cameras. In other words, the developed methodologies should handle multi-sensory data while using a wide range of primitives (i.e., multi-primitive and multi-sensor triangulation methodologies).
The paper will start by a brief discussion of possible primitives for relating the LIDAR and photogrammetric data together with the respective mathematical models for their incorporation in the triangulation procedure. The feasibility and the performance of the developed multi-primitive and multisensor triangulation will be outlined through the experimental results section using real data. Finally, the paper will conclude with final remarks and recommendations for future research.
2 Triangulation Primitives A triangulation process relies on the identification of common primitives for relating the involved datasets to the reference frame defined by the control information. Traditionally, photogrammetric triangulation has been based on point primitives. Considering photogrammetric and LIDAR data, relating a LIDAR footprint to the corresponding point in the imagery is almost impossible, Figure 1. Therefore, point primitives are not appropriate for the task at hand. Linear and areal features are other potential primitives that can be more suitable for relating LIDAR and photogrammetric data. Linear features can be directly identified in the imagery while conjugate LIDAR lines can be extracted through planar patch segmentation and intersection. Alternatively, LIDAR lines can be directly identified in the intensity images produced by most of today's LIDAR systems. It should be noted that extracted linear features from planar patch segmentation and intersection are more accurate than the extracted features from intensity images. The lower quality of extracted features from the intensity images is caused by the utilized interpolation process for converting the irregular LIDAR footprints to a raster grid. Other than linear features, areal primitives in photogrammetric datasets can be defined using their boundaries, which can be identified in the imagery. Such primitives include, for example, roof tops, lakes, and other homogeneous regions. In LIDAR datasets, areal regions can be derived through planar patch segmentation techniques. Another issue, which is related to the primitives selection, is their representation in both photogrammetric and LIDAR data. In this regard, image space lines can be represented by a sequence of image points along the feature, Figure 2a. This is an appealing representation since it can handle image space linear features in the presence of distortions as they will cause deviations from straightness in the image space. Furthermore, such a representation will allow for the inclusion of linear features in scenes captured by line cameras since perturbations in the flight trajectory would lead to deviations from straightness in image space linear features corresponding
to object space straight lines (Habib et al, 2002). It should be noted that the selected intermediate points along corresponding line segments in overlapping scenes need not be conjugate. As for the LIDAR data, object lines can be represented by their end points, Figure 2b. The points defining the LIDAR line need not be visible in the imagery.
(a) (b) Fig. 1. Imagery (a) and LIDAR (b) coverage of an urban area
(a) (b) Fig. 2. Image space linear features defined by a sequence of intermediate points (a) while corresponding LIDAR lines are defined by their end points (b)

For areal primitives, photogrammetric planar patches can be represented by three points (e.g., corner points, Figure 3a) along their boundaries. These points should be identified in all overlapping imagery. Similar to the linear features, this representation is valid for scenes captured by frame and line cameras. On the other hand, LIDAR patches can be represented
by the laser footprints defining that patch, Figure 3b. These points can be directly derived from planar patch segmentation techniques. Having settled the primitive selection and representation issues, the next section will focus on explaining the proposed mathematical models for relating the photogrammetric and LIDAR primitives within the triangulation procedure.
(a) (b) Fig. 3. Image space planar features are represented by three points (a) while LIDAR patches are defined by the points comprising that patch (b)
3 Mathematical Models

Utilizing Straight-Line Primitives

This section will focus on deriving the mathematical constraint for relating LIDAR and photogrammetric lines, which are represented by end points in the object space and by a sequence of intermediate points in the image space, respectively. From this perspective, the photogrammetric datasets will be aligned to the LIDAR reference frame through the direct incorporation of LIDAR lines as the source of control. The photogrammetric and LIDAR measurements along corresponding linear features can be related to each other through the coplanarity constraint (see Eq. 3.1). This constraint indicates that the vector from the perspective centre to any intermediate image point along the image line is contained within the plane defined by the perspective centre of that image and the two points defining the LIDAR line. In other words, for a given intermediate image point k, the points (X1, Y1, Z1), (X2, Y2, Z2), the perspective centre (X0, Y0, Z0), and the image point (xk, yk, 0) are coplanar, Figure 4.
$$(V_1 \times V_2) \cdot V_3 = 0 \qquad (3.1)$$

where
$V_1$ is the vector connecting the perspective centre to the first end point of the LIDAR line,
$V_2$ is the vector connecting the perspective centre to the second end point of the LIDAR line, and
$V_3$ is the vector connecting the perspective centre to an intermediate point along the corresponding image line.
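For concreteness, the three vectors can be written out explicitly. The following expansion is not spelled out above and assumes standard photogrammetric notation: a perspective centre $(X_0, Y_0, Z_0)$, a principal point $(x_p, y_p)$, a principal distance $c$, and the image-to-object rotation matrix $R(\omega, \varphi, \kappa)$ of the image (or scan line) containing the intermediate point:

$$
V_1 = \begin{bmatrix} X_1 - X_0 \\ Y_1 - Y_0 \\ Z_1 - Z_0 \end{bmatrix}, \qquad
V_2 = \begin{bmatrix} X_2 - X_0 \\ Y_2 - Y_0 \\ Z_2 - Z_0 \end{bmatrix}, \qquad
V_3 = R(\omega, \varphi, \kappa) \begin{bmatrix} x_k - x_p \\ y_k - y_p \\ -c \end{bmatrix},
$$

so that all three vectors are expressed in the object space reference frame before evaluating Eq. 3.1.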
It should be noted that the above constraint can be introduced for all the intermediate points along the image space linear feature. Moreover, the coplanarity constraint is valid for both frame and line cameras. For scenes captured by line cameras, the involved EOP should correspond to the image associated with the intermediate point under consideration. For frame cameras with known IOP, a maximum of two independent constraints can be defined for a given image. However, for self-calibration procedures, additional constraints will help in the recovery of the IOP, since the distortion pattern will change from one intermediate point to the next along the image space linear feature. On the other hand, the coplanarity constraint helps in a better recovery of the EOP associated with line cameras. Such a contribution is attributed to the fact that the system's trajectory will affect the shape of the linear feature in the image space.
Fig. 4. Perspective transformation between image and LIDAR straight lines and the coplanarity constraint for intermediate points along the line
Utilizing Planar Patches

This section will focus on deriving the mathematical constraint for relating LIDAR and photogrammetric patches, which are represented by a set of points in the object space and by three points in the image space, respectively. As an example, let us consider a surface patch that is represented by two sets of points, namely the photogrammetric set S_PH = {A, B, C} and the LIDAR set S_L = {(X_P, Y_P, Z_P), P = 1 to n}, Figure 5. Since the LIDAR points are randomly distributed, no point-to-point correspondence can be assumed between the datasets. For the photogrammetric points, the image and object space coordinates will be related to each other through the collinearity equations. On the other hand, LIDAR points belonging to a certain planar surface patch should coincide with the photogrammetric patch representing the same object space surface, Figure 5. The coplanarity of the LIDAR and photogrammetric points can be mathematically expressed as in Eq. 3.2.
$$
V = \begin{vmatrix}
X_P - X_A & Y_P - Y_A & Z_P - Z_A \\
X_B - X_A & Y_B - Y_A & Z_B - Z_A \\
X_C - X_A & Y_C - Y_A & Z_C - Z_A
\end{vmatrix} = 0 \qquad (3.2)
$$
Fig. 5. Coplanarity of photogrammetric and LIDAR patches
The above constraint is used as the mathematical model for incorporating LIDAR points into the photogrammetric triangulation. In physical terms, this constraint means that the normal distance between any LIDAR point and the corresponding photogrammetric surface should be zero, or the volume of the tetrahedron comprised of the four points is equal to zero. This constraint is applied for all LIDAR points comprising this surface patch. It should be noted that the above constraint is valid for both frame and line cameras. Another advantage of this approach is the possibility of using such a constraint for LIDAR system calibration due to the fact that raw LIDAR points are explicitly incorporated in the mathematical model.
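As a small worked example of this constraint, the sketch below evaluates the determinant of Eq. 3.2 for one LIDAR point against a photogrammetric patch given in object space coordinates; it is illustrative only and not the authors' implementation.

```java
public class PatchConstraintSketch {

    /**
     * Returns the determinant of Eq. 3.2 for LIDAR point p and patch points a, b, c,
     * each given as {X, Y, Z}; the value should be (close to) zero if p lies in the plane.
     */
    public static double coplanarityResidual(double[] p, double[] a, double[] b, double[] c) {
        double[] ap = {p[0] - a[0], p[1] - a[1], p[2] - a[2]};
        double[] ab = {b[0] - a[0], b[1] - a[1], b[2] - a[2]};
        double[] ac = {c[0] - a[0], c[1] - a[1], c[2] - a[2]};
        // Scalar triple product ap . (ab x ac), i.e. the 3x3 determinant of Eq. 3.2
        // and six times the signed volume of the tetrahedron (P, A, B, C).
        return ap[0] * (ab[1] * ac[2] - ab[2] * ac[1])
             - ap[1] * (ab[0] * ac[2] - ab[2] * ac[0])
             + ap[2] * (ab[0] * ac[1] - ab[1] * ac[0]);
    }

    public static void main(String[] args) {
        double[] a = {0, 0, 10}, b = {10, 0, 10}, c = {0, 10, 10};  // a horizontal patch at Z = 10
        double[] p = {3, 4, 10};                                    // a LIDAR point on that plane
        System.out.println(coplanarityResidual(p, a, b, c));        // prints 0.0
    }
}
```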
4 Experimental Results
To validate the feasibility and applicability of the above methodologies, multi-sensory datasets were solicited and analyzed. The conducted ex-
periments will involve three types of sensors, namely a medium-format digital frame camera, a satellite-based line camera, and a LIDAR system. These experiments will focus on investigating the following issues:
• Validity of using the line-based geo-referencing procedure for scenes captured by frame and line cameras.
• Validity of using the patch-based geo-referencing procedure for scenes captured by frame and line cameras.
• Impact of integrating satellite scenes, aerial scenes, and LIDAR data in a unified bundle adjustment procedure.
The first dataset is comprised of three blocks of 6 frame digital images captured in April 2005 by the Applanix Digital Sensor System (DSS) over the city of Daejeon in South Korea from an altitude of 1500 m. The DSS camera has 16 mega-pixels (9 µm pixel size) and a 55 mm focal length. The second dataset includes an IKONOS stereo-pair, which was captured in November 2001, over the same area. It should be noted that these scenes are raw imagery that did not go through any geometric correction and are provided for research purposes. Finally, a multi-strip LIDAR coverage, corresponding to the DSS coverage, was collected using the OPTECH ALTM 2070 with an average point density of 2.67 points/m² from an altitude of 975 m. An example of one of the DSS image blocks and a visualization of the corresponding LIDAR coverage can be seen in Figure 6. On the other hand, Figure 7 shows the IKONOS coverage and the location of the DSS image blocks.
(a) (b) Fig. 6. DSS middle image block (a) and the corresponding LIDAR cloud (b), the circles in (a) indicate the location of extracted linear and areal primitives from the LIDAR data.
To extract the LIDAR control, a total of 139 planar patches and 138 linear features have been manually identified through planar patch segmentation and intersection, respectively. Figure 6 illustrates the locations of the extracted features from the middle LIDAR point cloud within the IKONOS scenes. The corresponding linear and areal features have been digitized in the DSS and IKONOS scenes. To evaluate the performance of the different geo-referencing techniques, a set of 70 Ground Control Points (GCP) was also acquired; refer to Figure 7 for the distribution of these points. The performance of the point-based, line-based, and patch-based geo-referencing techniques will be evaluated using Root Mean Square Error (RMSE) analysis. For the different experiments, a portion of the available GCP was used as control in the bundle adjustment, while the rest was used as check points. It is worth mentioning that none of the available control points are visible in any of the DSS imagery.
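The RMSE measure is not spelled out in the text; assuming the conventional definition of the total RMSE over $n$ check points, the reported values correspond to

$$
\mathrm{RMSE}_{\mathrm{total}} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\Big[(X_i - \hat{X}_i)^2 + (Y_i - \hat{Y}_i)^2 + (Z_i - \hat{Z}_i)^2\Big]},
$$

where $(X_i, Y_i, Z_i)$ are the surveyed check-point coordinates and $(\hat{X}_i, \hat{Y}_i, \hat{Z}_i)$ the coordinates reconstructed from the triangulation.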
Fig. 7. IKONOS scene coverage with the three patches covered by the DSS imagery and LIDAR data
To investigate the performance of the various geo-referencing methodologies, we conducted the following experiments (the resulting total RMSE values for these experiments are reported in Table 1):
• Photogrammetric triangulation of the IKONOS scenes while varying the number of utilized ground control points.
• Photogrammetric triangulation of the IKONOS and DSS scenes while varying the number of utilized ground control points.
• Photogrammetric triangulation of the IKONOS and DSS scenes while varying the number of LIDAR lines (45 and 138 lines) together with a changing number of ground control points.
• Photogrammetric triangulation of the IKONOS and DSS scenes while varying the number of LIDAR patches (45 and 139 patches) together with a changing number of ground control points.

Table 1: Check-point analysis (total RMSE, in metres) for multi-sensor and multi-primitive triangulation
No. of GCPs | IKONOS only: points only | IKONOS + 18 DSS frame images: points only | points + 138 control lines | points + 45 control lines | points + 139 control patches | points + 45 control patches
0  | N/A   | N/A    | 3.065 | 3.091 | 5.406 | 5.860
1  | N/A   | N/A    | 3.013 | 3.075 | 5.388 | 6.401
2  | N/A   | N/A    | 3.127 | 3.199 | 4.816 | 5.245
3  | N/A   | 21.322 | 2.863 | 2.845 | 2.932 | 3.093
4  | N/A   | 19.956 | 2.707 | 2.754 | 2.638 | 3.054
5  | N/A   | 4.342  | 2.716 | 2.740 | 2.628 | 2.720
6  | 3.668 | 3.352  | 2.692 | 2.741 | 2.607 | 2.711
7  | 3.936 | 3.039  | 2.696 | 2.728 | 2.549 | 2.561
8  | 3.591 | 3.417  | 2.566 | 2.532 | 2.45  | 2.717
9  | 4.079 | 2.543  | 2.552 | 2.521 | 2.435 | 2.511
10 | 3.068 | 2.499  | 2.559 | 2.525 | 2.432 | 2.492
15 | 3.152 | 2.413  | 2.458 | 2.434 | 2.397 | 2.444
40 | 2.013 | 2.087  | 2.112 | 2.095 | 2.030 | 2.054
In Table 1, "N/A" means that no solution was attainable (i.e., the provided control is not sufficient to establish the datum for the triangulation procedure). Examining Table 1, one can make the following remarks:
• Utilizing points as the only source of control for the triangulation of the stereo IKONOS scenes requires a minimum of six GCP.
• Including DSS imagery in the triangulation of the IKONOS scenes reduced the control requirement for convergence to three GCP. Therefore, it is clearly visible that incorporating satellite scenes with a few frame images allows for photogrammetric reconstruction while reducing the ground control point requirement.
• LIDAR linear features are sufficient for establishing the geo-referencing of the IKONOS and DSS scenes without the need for any additional source of control. As can be seen in the fourth and fifth columns of Table 1, incorporating additional control points in the triangulation procedure did not significantly improve the reconstruction outcome. Moreover, by comparing the fourth and fifth columns, one can see that increasing the number of linear features from 45 to 138 does not significantly improve the quality of the triangulation outcome.
• LIDAR patches are sufficient for establishing the geo-referencing of the IKONOS and DSS scenes without the need for an additional source of control. However, as can be seen in the sixth and seventh columns of Table 1, incorporating a few control points had a significant impact in improving the results (using 3 GCP and 139 control patches, the total RMSE was reduced from 5.406 m to 2.932 m). Incorporating additional control points (i.e., beyond 3 GCP) did not have a significant impact. The improvement in the reconstruction outcome using a few GCP can be
attributed to the fact that the majority of the utilized patches are mainly horizontal with mild slopes, as they represent building roofs. Therefore, the estimation of the model shifts in the X and Y directions is not accurate enough. Incorporating vertical or steep patches could have solved this problem. However, such patches were not available in the provided dataset. Moreover, comparing the sixth and seventh columns, it can be seen that increasing the number of control patches from 45 to 139 did not significantly improve the quality of the triangulation outcome.
• Comparing the different geo-referencing techniques, it can be seen that patch-based and line-based geo-referencing resulted in a better outcome than point-based geo-referencing. Such an improvement supports the benefit of adopting a multi-sensor and multi-primitive triangulation procedure.
In an additional experiment, we utilized the derived EOP from the multi-sensor triangulation of frame and line camera scenes together with the LIDAR surface to generate orthophotos. Figure 8 shows sample patches, where the IKONOS and DSS orthophotos are laid side by side. As can be seen in Figure 8-a, the generated orthophotos are quite compatible, as illustrated by the smooth continuity of observed features between the DSS and IKONOS orthophotos. Figure 8-b shows object space changes between the moments of capture of the IKONOS and DSS imagery. Therefore, it is evident that multi-sensor triangulation of imagery from frame and line cameras improves the quality of the derived object space while offering an environment for accurate geo-referencing of temporal imagery. Following the geo-referencing, the involved imagery can be analyzed for change detection applications using the derived and properly geo-referenced orthophotos.
(a) (b) Fig. 8. Change detection between DSS (color) and IKONOS (B/W) orthophotos. Smooth transition between the two orthophotos can be observed in (a) while discontinuities are observed in (b) due to changes in the object space
5 Conclusions and Recommendations for Future Work

The continuous advancement in mapping technology demands the development of commensurate processing methodologies to take advantage of the synergistic characteristics of available geo-spatial data. In this regard, it is quite evident that integrating LIDAR and photogrammetric data is essential for assuring an accurate and complete description of the object space. This paper presented alternative methodologies for aligning LIDAR and photogrammetric data relative to a common reference frame using linear and areal primitives. The developed methodologies are suited to the characteristics of these datasets. Moreover, the introduced methodologies are general enough in the sense that they can be directly applied to scenes captured by line and frame cameras. The experimental results have shown that the utilization of LIDAR-derived primitives as the source of control for photogrammetric geo-referencing yields slightly better results when compared with point-based geo-referencing techniques. Moreover, it has been shown that the incorporation of sparse frame imagery together with satellite scenes improves the results by taking advantage of the geometric strength of frame cameras to compensate for the inherently weak geometry of line cameras onboard imaging satellites. In this regard, the combination of aerial and space scenes improves the coverage extent as well as the geometric quality of the derived object space. The incorporation of LIDAR data, aerial images, and satellite scenes in a single
triangulation procedure would also assure the co-registration of these datasets relative to a common reference frame, which would be valuable for orthophoto generation and change detection applications. Future research will focus on the automation of the extraction of linear and areal features from photogrammetric and LIDAR data as well as on establishing the correspondence between conjugate primitives. In addition, the multi-sensor triangulation environment can be used as a quality assurance and quality control procedure for the individual systems. For example, LIDAR-derived features can be used as a source of control for camera calibration. Alternatively, photogrammetric patches can be used for LIDAR calibration by using raw LIDAR measurements in the developed coplanarity constraint. Finally, we will investigate the development of new visualization tools for an easier portrayal of the registration outcomes, such as draping perspective images on LIDAR data to provide 3D textured models.
References

Baltsavias, E., 1999. A comparison between photogrammetry and laser scanning, ISPRS Journal of Photogrammetry & Remote Sensing, 54(1): 83-94.
Chen, L., Teo, T., Shao, Y., Lai, Y., Rau, J., 2004. Fusion of LIDAR data and optical imagery for building modeling, International Archives of Photogrammetry and Remote Sensing, 35(B4): 732-737.
Cramer, M., Stallmann, D., Haala, N., 2000. Direct georeferencing using GPS/Inertial exterior orientations for photogrammetric applications, International Archives of Photogrammetry and Remote Sensing, 33(B3): 198-205.
Fraser, C., Hanley, H., Yamakawa, T., 2002. High-Precision Geopositioning from IKONOS Satellite Imagery, Proceedings of ACSM-ASPRS 2002, Washington, DC, USA, unpaginated CD-ROM.
Fraser, C., Hanley, H., 2003. Bias compensation in Rational Functions for IKONOS Satellite Imagery, Photogrammetric Engineering & Remote Sensing, 69(1): 53-57.
Habib, A., Schenk, T., 1999. A new approach for matching surfaces from laser scanners and optical sensors, International Archives of Photogrammetry and Remote Sensing, 32(3W14): 55-61.
Habib, A., Lee, Y., Morgan, M., 2001. Bundle Adjustment with Self-Calibration of Line Cameras using Straight Lines, Joint Workshop of ISPRS WG I/2, I/5 and IV/7: High Resolution Mapping from Space 2001, 19-21 September 2001, University of Hanover, Hanover, Germany, unpaginated CD-ROM.
Habib, A., Kim, E., Morgan, M., Couloigner, I., 2004. DEM Generation from High Resolution Satellite Imagery Using Parallel Projection Model, Proceedings of the XXth ISPRS Congress, Commission 1, TS: HRS DEM Generation from SPOT-5 HRS Data, 12-23 July 2004, Istanbul, Turkey, pp. 393-398.
Habib, A., Morgan, M., Lee, Y., 2002. Bundle adjustment with self-calibration using straight lines, Photogrammetric Record, 17(100): 635-650.
Hanley, H., Yamakawa, T., Fraser, C., 2002. Sensor Orientation for High Resolution Satellite Imagery, Pecora 15/Land Satellite Information IV/ISPRS Commission I/FIEOS, 10-15 November 2002, Denver, USA, unpaginated CD-ROM.
Lee, C., "Thesis, H., Bethel, J., Mikhail, E., 2000. Rigorous Mathematical Modeling of Airborne Pushbroom Imaging Systems, Photograrnmetric Engineering & Remote Sensing, 66(4): pp. 385-392. Lee, Y., I-Iabib, A., 2002. Pose Estimation of Line Cameras Using Linear Features. Proceedings of ISPRS Commission III Symposium "Photogrammetric Cornputer Vision", Graz, Austria, September 9 - 13,2002. Poli, D., 2004. Orientation of Satellite and Airborne Imagery from Multi-line Pushbroom Sensors with a Rigorous Sensor Model, International Archives of Photogrammetry and Remote Sensing, Istanbul, Turkey, 34(B1): pp. 130-135. Tao, V., Hu, Y., Jiang, W., 2004. Photogrammetric Exploitation of IKONOS Imagery for Mapping Applications, International Journal of Remote Sensing, 25(14): pp. 2833-2853. Tao, V., Hu, Y., 2001. A Comprehensive Study of Rational Function Model for Photogrammetric Processing, Photogrammetric Engineering & Remote Sensing, 67(12): pp. 1347-1357. Toutin, T., 2004. DSM Generation and Evaluation from QuickBird Stereo Images with 3D Physical Modeling and Elevation Accuracy Evaluation, International Journal of Remote Sensing, 25(22): pp. 5181-5192. Wegmann, H., Heipke, C., Jacobsen, K.~ 2004. Direct sensor orientation based on GPS network solutions, International Archives of Photogrammetry and Remote Sensing. 35(B1): 153-158. Wehr, A., Lohr, U., 1999. Airborne laser scanning---an introduction and overview, ISPRS Journal Of Photogrammetry And Remote Sensing 54(2-3):68-82.
LIDAR-Aided True Orthophoto and DBM Generation System

Ayman Habib a, Changjae Kim a

a Geomatics Engineering, University of Calgary, 2500 University Drive NW, Calgary, AB, Canada T2N 1N4 - [[email protected], [email protected]]
Abstract

Orthophotos have been utilized as basic components in various GIS applications due to their uniform scale and the absence of relief displacement. Differential rectification has traditionally been used for orthophoto generation. For large scale imagery over urban areas, differential rectification produces severe artifacts in the form of double mapped areas at object space locations with abrupt changes in slope. Such artifacts are removed through true orthophoto generation methodologies, which are based on the identification of occluded portions of the object space in the involved imagery. Basically, true orthophotos should have correct positional information and corresponding gray values. There are two requirements for achieving these characteristics of true orthophotos: there must be no false visibilities/occlusions, and the building boundaries must not be wavy. To satisfy the first requirement, a new method for occlusion detection and true orthophoto generation is introduced in this paper. The second requirement, non-wavy building boundaries, can be fulfilled by generating and utilizing a DBM in the true orthophoto generation procedure. A new segmentation algorithm based on a neighborhood definition which considers the physical shapes of the involved surfaces is introduced to obtain the planar patches of which man-made structures mainly consist. The implementation of the DBM generation methodology using segmented planar patches is complicated and requires in-depth investigation; hence the research on the generation of the DBM and on the refinement of true orthophotos is still in progress. This paper suggests a new system that achieves accurate true orthophotos without introducing any external DBM information. The feasibility and performance of the suggested techniques are verified through experimental results with real data.
Keywords: True orthophoto, differential rectification, digital surface model, visibility analysis, building hypothesis, segmentation, digital building model
1. Introduction Remote sensing imagery is usually acquired through perspective projection, in which light rays reflected from the object space pass through the perspective center of the imaging sensor. Such projection results in scale variation and relief displacement in the acquired imagery. Orthophoto generation aims at eliminating relief displacement from perspective imagery. As a result, orthophotos are characterized by having uniform scale and showing objects in their true geographical locations. In other words, orthophotos have the same characteristics as maps. Therefore, the user can position objects, measure distances, compute areas, quantify changes, and derive other useful information from available orthophotos. Such uses make orthophotos an important component of GIS databases. The production of orthophotos requires the availability of a digital image and a digital surface model (DSM), as well as the internal and external characteristics of the imaging sensor (Kraus, 1993). With the increased use of digital cameras, LIDAR systems, and GPS/INS geo-referencing units, the mapping community has easy access to all the essential components for orthophoto production. Differential rectification has traditionally been used for orthophoto generation. For large scale imagery over urban areas, differential rectification produces severe artifacts in the form of double mapped areas at object space locations with abrupt variations in relief (e.g. in the vicinity of edges of buildings). An example of the double mapping problem is illustrated in Fig. 1, in which the perspective image and the generated orthophoto are shown. As can be seen in Fig. l(a), the vertical structures have significant relief displacements, which cause considerable occlusions in the object space. The generated orthophoto in Fig. l(b) shows that the relief effects along the building facades have been removed. However, double mapped areas, which are enclosed by solid black lines, occupy occluded portions of the object space. Double mapped areas constitute a severe degradation of the generated orthophoto and are a major obstacle to its interpretability. Such artifacts are removed through true orthophoto generation methodologies, which are based on the identification of occluded portions of the object space in the involved imagery. Existing methodologies suffer from several problems, such as their sensitivity to the sampling interval of the
digital surface model (DSM) as it relates to the ground sampling distance (GSD) of the imaging sensor. Fig. 2 illustrates the original perspective image and the true orthophoto generated using the Z-buffer method, which is the standard method (Catmull, 1974; Amhar et aI., 1998; Rau et aI., 2000; Rau et aI., 2002; Sheng et aI., 2003). This method detects occlusion cells and assigns black colors to them. Theoretically, all the cells located in the occluded areas should be colored black. However, some cells or lines with non -black colors exist in these areas. Fig. 2(b) shows the true orthophoto results for a building, which is enclosed by the dashed circle in Fig. 2(a), using the Z-buffer method. Moreover, current methodologies rely on the availability of a DBM, which requires additional and expensive preprocessing. This paper presents new methodology for true orthophoto generation that circumvents the problems associated with existing techniques.
(a) (b) Fig. 1. Perspective image (a) and the corresponding orthophoto (b) with double mapped areas enclosed by solid black lines
(a) (b) Fig. 2. Perspective image (a) and the corresponding true orthophoto (b) using the Z-buffer method
The DSM utilized in the true orthophoto generation procedure is interpolated from the random LIDAR points over the mapped area. However, the LIDAR points have a very low probability of hitting the exact edges of a physical shape, due to the random and discrete nature of the points. Therefore, the utilized DSM , which inherits the characteristics of the associated laser points, does not represent the exact boundaries of the buildings. Moreover, the DSM is in the form of regular grid based on discrete representation which will assign breakline information to the closest grid cells. Fig. 3 shows a zoomed-in building that is cut from the generated true orthophoto and does not have clean boundaries. This phenomenon conflicts with the basic characteristics of a true orthophoto, which are the representation of accurate positions and gray values of the surfaces, and will lead users to misinterpret the object information. To resolve this problem, the use of the DBM, which has accurate building boundary information, is essential. To refine true orthophotos, either an external DBM can be introduced or a new DBM can be generated. In this paper, a new DBM will be generated and utilized to produce complete true orthophotos without wavy building boundaries.
Fig. 3. Zoomed-in building with wavy building boundaries in the generated true orthophoto
The following section describes the proposed true orthophoto generation technique and compares it with the existing method; the extraction of building hypotheses from the visibility map is also briefly explained. Afterwards, the newly suggested planar patch segmentation methodology, which is based on the extracted building hypotheses, is presented in Section 3. Finally, concluding remarks and current research are summarized.
2. Initial true orthophoto generation and building hypothesis

The quality of the produced true orthophotos relies heavily on the performance of the visibility analysis within the true orthophoto techniques. Poor performance causes false visibility/occlusion in the final products, which users may misinterpret. To improve the performance of the visibility analysis, an advanced true orthophoto technique is proposed and compared with the existing method. Moreover, the other degradation of the produced true orthophotos, which comes from wavy building boundaries, can be resolved by generating and utilizing a DBM in the true orthophoto procedure. The building hypothesis is one of the essential elements for DBM generation. After manipulating a visibility map, which includes information about the existence of man-made structures, a building hypothesis is produced and utilized in the DBM generation procedure.
2.1. Initial true orthophoto generation

In a perspective projection, the top and bottom of a man-made structure are projected onto the image space as two points that are spatially separated by a relief displacement. This displacement is expected to take place along the radial direction, emanating from the image space nadir point (Mikhail, 2001). The radial extent of the relief displacement is the source of occlusions/invisibilities in the perspective imagery. The presence of occlusions can be discerned by sequentially checking the off-nadir angles to the lines of sight connecting the perspective center to the DSM points, along a radial direction starting from the object space nadir point. In the remainder of this paper, the off-nadir angle to the line of sight will be denoted as the α angle; see Fig. 4. Because there is no relief displacement associated with the object space nadir point, one can be assured that this point will always be visible in the acquired image. As one moves away from the object space nadir point, it is expected that the α angle will increase. When considering DSM cells along the radial direction, as long as there is an increase in the α angle as one moves away from the nadir point, these cells will be visible in the image. On the other hand, occlusions will take place whenever there is an apparent decrease in the α angle while moving away from the nadir point. This occlusion will persist until the α angle exceeds the angle associated with the last visible point; see Fig. 4. The angle-based technique, which will be denoted as the radial sweep method, is introduced for occlusion detection and true orthophoto generation.
Fig. 4. Using the off-nadir angle to the line of sight as a means of detecting occlusions

The radial sweep method considers individual cells in the DSM by scanning along radial directions from the object space nadir point. After classifying the DSM cells along the first radial direction, one moves to the next
radial direction by incrementing the azimuth by a given value of Δθ. This process is repeated until the whole range of azimuth values is considered. Detected occlusions along the radial directions can be stored in a visibility map with the same dimensions as the DSM. Finally, the gray values at visible DSM cells are imported from the original image using traditional differential rectification techniques. In this way, occluded areas are left blank, thus producing a true orthophoto. Fig. 5 illustrates the conceptual basis of the radial sweep method for occlusion detection.
Fig. 5. Conceptual basis of the radial sweep method for occlusion detection
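The occlusion test along a single radial profile can be sketched as follows. This is an illustrative Python fragment, not the authors' implementation: the DSM is assumed to be available as arrays of cell heights and horizontal distances from the object space nadir point, and the function name detect_occlusions_along_profile is hypothetical.

```python
import numpy as np

def detect_occlusions_along_profile(cell_heights, cell_distances, pc_height):
    """Classify the DSM cells of one radial profile as visible or occluded.

    cell_heights   : heights of the DSM cells, ordered outwards from the
                     object space nadir point
    cell_distances : horizontal distances of those cells from the nadir point
    pc_height      : height of the perspective centre above the datum

    A cell is visible only while the off-nadir angle (alpha) of its line of
    sight keeps increasing as one moves away from the nadir point.
    """
    visible = np.zeros(len(cell_heights), dtype=bool)
    max_alpha = -np.inf
    for i, (h, d) in enumerate(zip(cell_heights, cell_distances)):
        alpha = np.arctan2(d, pc_height - h)   # off-nadir angle of the line of sight
        if alpha > max_alpha:                  # angle still increasing -> visible cell
            visible[i] = True
            max_alpha = alpha
        # otherwise the cell lies behind a closer, higher cell and remains
        # occluded until alpha exceeds the angle of the last visible point
    return visible
```

The full procedure repeats this test for every azimuth; the adaptive variant described next gradually reduces the azimuth increment away from the nadir point so that boundary cells are not skipped.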
A critical decision in the implementation of the radial sweep method is the choice of the azimuth increment value (Δθ). A small value will make the process time consuming and inefficient because the DSM cells close to the nadir point will be revisited repeatedly. On the other hand, a coarse selection of the azimuth increment value will lead to non-visited DSM cells at the boundaries. To avoid this problem, an adaptive radial sweep is proposed, in which the azimuth increment value is decreased gradually as one moves away from the nadir point, as shown in Fig. 6.
Fig. 6. Decreasing azimuth increment values while moving away from nadir point
Produced true orthophotos using the existing and new methodologies are compared to one another in Fig. 7. The building enclosed by the dashed circle in Fig. 7(a) has relief displacement along the radial direction. The true orthophotos of the building created using the different techniques are generated while detecting and removing the occlusions. The adaptive radial sweep method, Fig. 7(c), achieved a significant improvement in quality compared to the Z-buffer method, Fig. 7(b).
Fig. 7. Comparison of true orthophoto generation methodologies: original imagery (a), Z-buffer method (b), and adaptive radial sweep method (c)
The drawbacks of the existing true orthophoto generation technique are overcome by the proposed technique, which does not produce any false visibilities/occlusions in the initial true orthophotos. However, the initial true orthophotos do not have clean building boundaries due to the mentioned characteristics of the utilized DSM. Therefore, incorporating a DBM into the true orthophoto procedure is crucial to produce true orthophotos without wavy building boundaries. The following sections explain the building hypothesis generation and planar patch segmentation techniques, which are the essential elements for DBM generation. The visibility map, which records the visibility and invisibility of the DSM cells, is utilized for building hypothesis generation. The implementation of the DBM generation methodology is complicated and requires in-depth investigation. Therefore, current research is still focusing on the DBM generation and true orthophoto refinement techniques, building on the accomplished building hypothesis generation and planar patch segmentation.
2.2. Building hypothesis

The true orthophoto generation procedure produces an initial true orthophoto and a building hypothesis. However, the building hypothesis does not come from this procedure directly. Several data processing steps are introduced in the suggested system to generate the building hypothesis. We begin with the idea that man-made structures will generate regular occlusion areas, while natural features usually have irregular occlusion shapes. Accordingly, regular shapes of occlusions indicate the existence of man-made structures around them. Based on this idea, visibility maps, which contain information about visible and invisible DSM cells, are utilized for building hypothesis generation. Fig. 8(a) shows a visibility map generated from a true orthophoto generation procedure. Black cells indicate occlusion areas, and white cells indicate visible points. However, this visibility map includes occlusion areas from both man-made structures and natural features, the latter of which are beyond our interest. Therefore, image processing techniques, such as opening, closing, dilation, erosion, labeling, and segmentation, are applied to the visibility map to remove occlusions that come from small natural features; Fig. 8(b) shows the resulting map. Finally, the building hypotheses are detected after analyzing the extent of the occlusions in the refined visibility map. The areas enclosed by solid rectangles in Fig. 8(b) were detected as building hypotheses. Subsequently, the raw LIDAR points that belong to the building hypothesis areas will be extracted from the whole dataset to be utilized in the DBM process.
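The image processing chain from the visibility map to building hypotheses could look roughly as follows. This sketch uses SciPy's morphology routines; the number of opening/closing iterations and the minimum blob size are illustrative placeholders rather than the values used by the authors.

```python
import numpy as np
from scipy import ndimage

def building_hypotheses(visibility_map, min_cells=200):
    """Derive building hypothesis bounding boxes from a binary visibility map.

    visibility_map : 2D boolean array, True where a DSM cell is occluded
    min_cells      : occlusion blobs smaller than this are treated as natural
                     features (e.g. trees) and discarded (placeholder value)
    """
    # opening/closing suppress small, irregular occlusions left by vegetation
    occluded = ndimage.binary_opening(visibility_map, iterations=2)
    occluded = ndimage.binary_closing(occluded, iterations=2)

    # label the remaining occlusion blobs and keep only the large, regular ones
    labels, n_blobs = ndimage.label(occluded)
    hypotheses = []
    for blob_id in range(1, n_blobs + 1):
        rows, cols = np.where(labels == blob_id)
        if rows.size >= min_cells:
            # bounding box of the occlusion flags a building hypothesis area;
            # the raw LIDAR points inside it feed the DBM generation step
            hypotheses.append((rows.min(), rows.max(), cols.min(), cols.max()))
    return hypotheses
```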
Fig. 8. Visibility map from true orthophoto process (a) and detected building hypothesis after applying image processing steps (b)

The selected building hypotheses from the visibility map include the buildings, as well as other features, which are not considered in this research. Therefore, these building hypotheses must be refined through further segmentation processes, which are explained in the next section.
3. Planar patch segmentation

The objective of segmentation in this research is to refine the building hypotheses generated in the previous section. This goal can be achieved by partitioning the random LIDAR points into meaningful objects. A common approach for segmentation begins with the definition of the neighborhood of a point. Every involved point has its own neighboring points, identified according to the neighborhood definition. A set of attributes for each point is then computed using its neighboring points. Finally, the segmentation is implemented based on the similarity of the computed attributes of the points. In other words, points that have homogeneous attributes are separated from the remaining points and are represented as a distinct group. As previously mentioned, the DBM generation process is concerned mainly with man-made structures, which consist mostly of planar patches. Therefore, planar patches are our target for segmentation in this research. The three main steps in planar patch segmentation, which are neighborhood definition, planar patch attribute computation, and patch segmentation, will now be explained in detail.
3.1. Neighborhood definition

Neighborhood definition is the starting point for planar patch segmentation. The way in which the neighborhood of a LIDAR point in question is defined significantly affects the computed set of attributes of the point. Three different types of neighborhood definitions will be introduced and reviewed. One way in which the neighborhood of a point can be defined is by considering its proximity to other points after projection onto the XY plane. The Triangular Irregular Network (TIN) is the well-known technique based on this neighborhood definition and has mainly been used for analyzing random 3D points. However, serious drawbacks are revealed when points are located close together on the XY plane but do not belong to the same physical shape. Fig. 9(a) shows an example of neighborhood definition using the TIN methodology; points on the wall are falsely considered to be neighbors of points on the roof, and a similar problem occurs with the points that are scanned from the tree. Another type of neighborhood definition establishes proximity according to the Euclidian distance between points in 3D space. The spherical method is based upon this definition. The points that are located inside a sphere defined around the point in question are considered to be in its neighborhood; see Fig. 9(b). Even though three-dimensional relationships between random points are considered, the neighborhood definition criteria do not take into account the physical shapes of objects. This spherical definition has similar drawbacks to those of the triangulation approach because points belonging to different patches are often included in the same neighborhood. To avoid the problems that occur with these two types of neighborhood definitions, a different definition, which considers both the three-dimensional relationships between random points and the physical shapes of surfaces, is introduced and employed in this research (Filin and Pfeifer, 2005). The physical shapes of surfaces upon which associated points are located are incorporated into the neighborhood definition. This means that points located on the same surface are considered as possible neighbors, while taking into account the proximity of the points. Points on different surfaces, on the other hand, are not considered neighbors, even if they are spatially close. This definition increases the homogeneity among neighbors. The schematic concept of the adaptive cylinder method, which follows from this definition, is illustrated in Fig. 9(c). Neighbors are determined using a cylinder whose axis direction changes according to the physical shape of the object in question. Therefore, this neighborhood definition is referred to as the adaptive cylindrical neighborhood
58
Ayman Habib, Changjae Kim
definition. As shown in Fig. 9(c), points on the wall are not included as neighbors of the points on the roof.
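A minimal sketch of the adaptive cylindrical membership test is given below; it assumes that a unit normal of the surface fitted locally around the point in question is already available, and the parameter names are illustrative rather than taken from the paper.

```python
import numpy as np

def adaptive_cylinder_neighbors(points, query, unit_normal, radius, half_height):
    """Select neighbours of `query` inside a cylinder whose axis follows the
    local surface normal (assumed to be an already estimated unit vector).

    points      : (N, 3) array of LIDAR points
    query       : (3,) point in question
    unit_normal : (3,) unit normal of the surface fitted around `query`
    radius      : cylinder radius (proximity along the surface)
    half_height : half of the cylinder height; points on a different surface
                  (e.g. a wall below a roof) fall outside this band
    """
    diff = points - query
    offset_along_normal = diff @ unit_normal            # distance across the surface
    in_plane = diff - np.outer(offset_along_normal, unit_normal)
    mask = (np.abs(offset_along_normal) <= half_height) & \
           (np.linalg.norm(in_plane, axis=1) <= radius)
    return points[mask]
```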
Fig. 9. Different neighborhood definition methods: TIN (a), sphere (b), and adaptive cylinder (c)
3.2 Planar patch attribute computation
Planar patch attributes are computed based on the neighboring points identified using the adaptive cylinder method. A plane for each point is defined using its neighboring points after introducing an origin. In addition, the roughness of the surface (the variance component of the plane fitting) is acquired during the plane fitting implementation. As illustrated in Fig. 10, a normal vector from the origin to the plane is defined. Therefore, each point has a normal vector and a value for the roughness of the surface. The three components (nx, ny, nz) of the normal vector are derived and utilized for planar patch segmentation, as described in the next section.
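The attribute computation might be sketched as follows, under the assumption that the plane is fitted to the neighbourhood by least squares (here via SVD, which the paper does not prescribe); the function returns the normal vector from the origin to the plane, its magnitude, and the residual variance used as the roughness measure.

```python
import numpy as np

def planar_patch_attributes(neighbors, origin):
    """Fit a plane to the neighbourhood of a point and return the normal
    vector from `origin` to that plane plus a roughness measure.

    neighbors : (N, 3) LIDAR points in the adaptive-cylinder neighbourhood
    origin    : (3,) reference origin used for the attribute computation
    """
    centroid = neighbors.mean(axis=0)
    # the right singular vector with the smallest singular value is the
    # direction of the best-fit plane normal through the centroid
    _, _, vt = np.linalg.svd(neighbors - centroid, full_matrices=False)
    unit_normal = vt[-1]

    # signed distance from the origin to the fitted plane along the normal
    rho = (centroid - origin) @ unit_normal
    normal_from_origin = rho * unit_normal          # n = (nx, ny, nz)

    # roughness: variance of the plane-fit residuals
    residuals = (neighbors - centroid) @ unit_normal
    roughness = residuals.var()
    return normal_from_origin, np.abs(rho), roughness
```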
Fig. 10. Defined vector from an origin to the computed plane for a given point using its neighboring points
3.3 Patch segmentation

After acquiring the planar patch attributes for all points, patch segmentation is implemented in the attribute space. Many segmentation algorithms have been developed to partition random LIDAR points into meaningful objects. These algorithms can, in general, be categorized into two groups: local approaches and global approaches. The local approach starts from seed points that represent distinct regions. These points are grown based on similarity and discontinuity criteria until they cover the entire regions. The performance of this technique depends heavily on the choice of seed points. Ideally, one seed must be assigned to each distinct region; usually, the user supervises the selection of seed points. However, in order to implement a region growing algorithm in an automated and unsupervised environment, a complicated strategy based on the geometric characteristics of a digital surface is necessary (Besl and Jain, 1998). The other approach, in contrast, considers the homogeneity of the attributes of all associated points at once, without involving seed points in the segmentation process. Therefore, this technique is referred to as the global approach. Various techniques that are based on this approach will be mentioned while considering their computational efficiency and their ambiguity of segmentation, which leads to the grouping of separate patches as one. All these techniques use a voting scheme with an accumulator array that is constructed in the attribute space. The dimension of the accumulator array depends on the attributes utilized in the technique in
question. The roughness of the surface, one of the attributes, classifies the points by determining whether they belong to planar patches. As mentioned earlier, our target objects are planar patches, of which the majority of man-made structures consist. Only the selected points belonging to planar patches will be utilized for segmentation, regardless of the methodology employed. The three components of the normal vector, (nx, ny, nz), can be used as attributes in the voting scheme. One normal vector defines only one plane; therefore, a segmentation technique using these attributes will partition the associated points into distinct regions without any ambiguity of segmentation. However, the use of these attributes requires the construction of an accumulator array that has three dimensions for the three vector components. A voting scheme that uses a three-dimensional accumulator array is expensive and complicated in terms of computational efficiency. To reduce the three-dimensional accumulator array to two dimensions, the slopes of the normal vector in the x and y directions are used instead of the three normal vector components as attributes for the planar patch segmentation (Elaksher and Bethel, 2002). Even though this method reduces the dimension of the accumulator array, the ambiguity of segmentation is a problem. Parallel planes that have the same normal vector slopes in the x and y directions but different offsets in the z direction will be segmented as one group if this method is used. This problem can be resolved through spatial analysis of the data after the segmentation is complete. In this research, the magnitude of the normal vector will be utilized as an attribute. One should note that the number of origins is related to the ambiguity of the planar patch segmentation and affects the results thereof when this attribute is used. Moreover, the number of origins determines the dimension of the accumulator array. If we have one origin for the planar patch attribute computation, only a one-dimensional accumulator array is necessary. However, different planes with the same attribute values may exist; the planes tangent to a sphere centered on the origin are an example. In this case, it is difficult to separate these patches in the attribute space, hence the introduction of two origins in this research. They will reduce the ambiguity of segmentation significantly. In addition, only a two-dimensional accumulator array is necessary for the voting scheme. Fig. 11 illustrates the schematic concept of the introduction of two origins. If one origin (origin 1) is located at a position that equalizes the magnitudes of the normal vectors of the points on the planes, all these points will be accumulated at the same position. Therefore, it is impossible to separate these points into three groups in the attribute space. This problem can be resolved by introducing another origin (origin 2). As Fig. 11(a) shows, the magnitudes of the normal vectors of the points with respect to the second
origin will be different from one another. Thus, the points that had ambiguity of segmentation when associated with one origin are separated into three different groups after introducing the second origin; see Fig. 11(b). The points belonging to different planes in the object space will produce different accumulated peaks in the attribute space. After aggregating the points belonging to each peak, the planes corresponding to the peaks will be segmented. Fig. 12 shows an example of patch segmentation. A region that includes a building with two planar patches is selected as the area of interest in Fig. 12(a). Two high peaks are generated in the accumulator array from the points that belong to the patches; see Fig. 12(b). Finally, the points corresponding to the two peaks are separated into two groups, as shown in Fig. 12(c).
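A rough sketch of the two-origin voting scheme is shown below; the accumulator is kept as a sparse dictionary, and the bin size and peak threshold are illustrative values only.

```python
import numpy as np

def segment_patches(magnitudes_o1, magnitudes_o2, bin_size=0.25, min_votes=50):
    """Group planar LIDAR points by voting in a 2D accumulator whose axes are
    the normal-vector magnitudes computed with respect to two origins.

    magnitudes_o1, magnitudes_o2 : per-point |n| attributes (equal length)
    """
    i = np.floor(np.asarray(magnitudes_o1) / bin_size).astype(int)
    j = np.floor(np.asarray(magnitudes_o2) / bin_size).astype(int)

    accumulator = {}
    for idx, cell in enumerate(zip(i, j)):
        accumulator.setdefault(cell, []).append(idx)

    # every sufficiently populated cell is taken as a peak, i.e. one planar patch
    patches = [np.asarray(members) for members in accumulator.values()
               if len(members) >= min_votes]
    return patches   # lists of point indices, one per segmented patch
```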
Fig. 11. Introducing two origins (a) and a 2D accumulator array for the segmentation (b)
Fig. 12. Segmentation results of a building: digital image (a), accumulator array (b), and segmented patches (c)
To verify the segmentation results visually, the segmented patches are projected onto the image space using the exterior and interior characteristics of the imaging sensor. Fig. 13 shows two patches that are projected onto the stereo images. The projected patches 1 and 2, which are enclosed by solid and dashed lines, respectively, are located almost exactly on top of the roofs in the images.
Fig. 13. Projected LIDAR points and boundaries from the extracted patches onto the left (a) and right imagery (b)
4. Concluding remarks and current research

The widespread adoption of GIS databases has increased the demand for orthophotos. At the same time, the improved resolutions and performance of current imaging, ranging, and geo-referencing systems are enabling the production of high quality orthophotos. Unfortunately, high resolution imaging sensors magnify relief displacement effects, especially when mapping over urban areas. For this imagery, true orthophoto generation techniques are essential in order to ensure reliable interpretability and to maintain the high quality of the available data. Therefore, there has been significant interest within the photogrammetric community to develop true orthophoto generation methodologies. True orthophotos should have correct positional information and corresponding gray values. There are two requirements for achieving the characteristics of true orthophotos: there
must be no false visibilities/occlusions, and the building boundaries must not be wavy. The adaptive radial sweep, a new method for occlusion detection and true orthophoto generation, is introduced in this paper to satisfy the first requirement while overcoming the drawbacks of existing true orthophoto generation techniques. The second requirement, non-wavy building boundaries, can be fulfilled by generating and utilizing a DBM in the true orthophoto generation procedure. As a part of the DBM generation procedure, a new segmentation algorithm based on a neighborhood definition that considers the physical shapes of the involved surfaces is also introduced in this research. Moreover, it is shown that the segmentation algorithm can be efficiently implemented in a 2D accumulator array with a significant reduction in segmentation ambiguities. Research in the generation of the DBM and in the refinement of true orthophotos is still in progress. Therefore, current research is focusing on the development of the remaining procedures required to obtain refined true orthophotos. Segmented patches and the involved images will be used to produce accurate boundaries of the patches in the object space, while utilizing the synergistic properties of the combination of LIDAR and photogrammetric data. Traditionally, automated matching has been implemented to find conjugate points between two images, using image processing techniques. However, the automated matching procedure does not guarantee that the selected conjugate points in two images represent the same feature. Moreover, this matching ambiguity is worsened when dealing with large scale imagery, due to serious relief displacement. This problem will be resolved by introducing segmented patches into the matching process. Through this matching process, accurate boundary information for the planar patches will be established in 3D space. The boundary representation (B-rep) method will be used to represent generic building models and their elements. Vertices will be extracted, and their topological relationships will be determined, while applying image processing techniques such as line-following algorithms. Adjacent vertices will be grouped to form the building wire-frames. Fig. 14 shows an example of the building wire-frames that will be constructed in this research. The implementation of the DBM generation methodology is complicated and requires in-depth investigation, even though the conceptual steps can be explained simply. Building models constructed using the B-rep method will be used to refine the original DSM and the true orthophotos.
Fig. 14. Building Wire-Frames
As a last step, the true orthophoto refinement will be carried out using the original DSM, the imagery, the external and internal characteristics of the imaging sensor, and the acquired DBM. One of the most important requirements for true orthophoto refinement is an accurate DSM, especially for man-made structures. Therefore, the generated DBM will be superimposed on the existing DSM. Using this methodology, we can produce accurate true orthophotos without wavy building boundaries using the refined DSM.
References

Amhar, F., J. Josef, and C. Ries, 1998. The generation of true orthophotos using a 3D building model in conjunction with a conventional DTM, International Archives of Photogrammetry and Remote Sensing, 32(Part 4):16-22.
Besl, P.J. and Jain, R.C., 1998. Segmentation through variable order surface fitting, IEEE Transactions on Pattern Analysis and Machine Intelligence, 10(2):167-192.
Catmull, E., 1974. A Subdivision Algorithm for Computer Display of Curved Surfaces, Ph.D. dissertation, Department of Computer Science, University of Utah, Salt Lake City, Utah.
Elaksher, A. and Bethel, J., 2002. Reconstructing 3D buildings from LIDAR data, ISPRS Commission III Symposium 2002, 9-13 September, Graz, Austria.
Filin, S. and Pfeifer, N., 2005. Neighborhood systems for airborne laser data, Photogrammetric Engineering & Remote Sensing, 71(6):743-755.
Kraus, K., 1993. Photogrammetry, Dümmler Verlag, Bonn, Volume 1, 397p.
Mikhail, E., 2001. Introduction to Modern Photogrammetry, John Wiley & Sons, Inc., New York, 479p.
Rau, J., N. Chen, and L. Chen, 2000. Hidden compensation and shadow enhancement for true orthophoto generation, Proceedings of the Asian Conference on Remote Sensing 2000, 4-8 December, 2000, Taipei, unpaginated CD-ROM.
Rau, J., N. Chen, and L. Chen, 2002. True orthophoto generation of built-up areas using multi-view images, Photogrammetric Engineering and Remote Sensing, 68(6):581-588.
Sheng, Y., P. Gong, and G. Biging, 2003. True orthoimage production for forested areas from large-scale aerial photographs, Photogrammetric Engineering and Remote Sensing, 69(3):259-266.
Acknowledgement

We would like to thank the GEOIDE (GEOmatics for Informed DEcisions) Network of Centers of Excellence of Canada for their financial support of this research (511#43). The authors are also indebted to the Technology Institute for Development - LACTEC - UFPR, Brazil and Terrapoint Canada for providing the LIDAR/image data and the valuable feedback.
Surface Matching Strategy for Quality Control of LIDAR Data

A.F. Habib *, R.W.T. Cheng

Geomatics Engineering, University of Calgary, 2500 University Drive NW, Calgary, AB, Canada T2N 1N4 - (habib, rwtcheng) @ (geomatics.ucalgary.ca, ucalgary.ca)
Abstract

Light Detection and Ranging (LIDAR) is emerging as a fast and accurate technology for acquiring 3D coordinates of object space points at high density. The accuracy of the collected data depends on the data acquisition procedure and the calibration quality of the involved sub-systems. Unlike photogrammetric cameras, current LIDAR systems do not furnish the capability of system calibration by the end users. In other words, LIDAR technology can be considered as a black box from the end user's perspective. Therefore, the users are left with quality control procedures as the only means for ensuring data integrity, correctness, and completeness. Several methods are commonly used for LIDAR quality control, such as comparing interpolated range and intensity images or checking the coincidence of conjugate features extracted from overlapping LIDAR strips. However, these approaches require post-processing of the LIDAR data in the form of interpolation, segmentation, and/or feature extraction. Such procedures are time consuming and might introduce artefacts, which will influence the quality control outcome. Therefore, this research is focusing on developing a quality control approach based on surface matching of raw LIDAR data without any post-processing. The algorithm establishes the correspondence between overlapping LIDAR surfaces and estimates the transformation parameters (e.g., translations and rotations) relating them. In the absence of any biases, zero translations and rotations between overlapping strips should be expected. Thus, any deviations from these values indicate a bias in the data acquisition system. Experimental results from real data have shown the feasibility of using the proposed algorithm for detecting biases between overlapping LIDAR strips. This finding has been
* Corresponding author.
confirmed by comparing the results with those derived using extracted conjugate linear features from the same data.

Keywords: LIDAR, Quality Assurance, Quality Control, Surface Matching, Registration
1. Introduction
In recent years, Light Detection and Ranging (LIDAR) systems have emerged as a cost-effective, fast, and accurate technology for direct acquisition of highly dense 3D positional data on physical surfaces. Positional information is derived through direct geo-referencing of the LIDAR unit using global positioning systems (GPS) combined with high-end inertial navigation systems (INS) (Figure 1). In addition to the range data, modern LIDAR systems can also capture intensity images over the mapped area. As a result of these advances, LIDAR is being more extensively used in mapping and GIS applications. It should be noted that the quality of the derived LIDAR data depends on the accuracy of the involved sub-systems (i.e., laser scan unit, GPS, and INS) and the calibration parameters relating these components (i.e., boresighting parameters). Equation 1 expresses the mathematical derivation of the ground coordinates of an object point based on the different measurements and calibration parameters associated with a LIDAR system. This equation relates four coordinate systems that are associated with the geo-referencing procedure: the ground coordinate system, the inertial measurement unit (IMU) body frame, the laser unit coordinate system, and the laser beam coordinate system (refer to Figure 1). The presence of biases in the involved components, such as a laser beam bias, a boresighting bias (i.e., angular and spatial biases between the system components), and GPS/IMU noise, will affect the accuracy of the derived coordinates of the object points. To understand the effect of such errors, a simulation process can be performed by adding biases and noise to simulated LIDAR data and comparing the reconstructed surface with the original one. Generally speaking, for linear scanners, the laser beam range bias affects the vertical measurements (i.e., the Z coordinate), while the laser beam angular bias affects mostly the X coordinate (assuming that the flight direction coincides with the Y axis). The boresighting offset bias affects all three directions. Finally, an angular boresighting bias will mainly affect the horizontal coordinates of the derived LIDAR point cloud.
Fig. 1. Coordinates and parameters involved in a LIDAR acquisition system
r_i^m = r_GPS^m + R_INS^m(t) · R_laserunit^INS · r^laserunit + R_INS^m(t) · R_laserunit^INS · R_laserbeam^laserunit · ρ_i        (1)

Where:
r_i^m is the ground coordinate vector of the object point,
r_GPS^m is the ground coordinate vector of the GPS antenna phase centre,
r^laserunit is the offset between the laser firing point and the GPS phase centre with respect to the laser unit coordinate system,
R_INS^m(t) is the rotation matrix relating the IMU body frame and the ground coordinate system at time t,
R_laserunit^INS is the rotation matrix between the IMU body frame and the laser unit coordinate system,
R_laserbeam^laserunit is the rotation matrix between the laser unit and laser beam coordinate systems, and
ρ_i is the range measurement acquired by the laser system.
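Equation 1 can be evaluated for a single pulse along the following lines. This is a hedged sketch in which all rotation matrices and the lever-arm offset are assumed to be given (e.g. from the GPS/INS solution and the boresighting calibration), and the range is passed as a vector expressed in the laser beam frame; the function name is illustrative.

```python
import numpy as np

def lidar_ground_point(r_gps_m, R_ins_m, R_lu_ins, r_offset_lu, R_lb_lu, rho_vec):
    """Evaluate Equation 1 for one laser pulse (illustrative names).

    r_gps_m     : (3,) ground coordinates of the GPS antenna phase centre
    R_ins_m     : (3, 3) rotation from the IMU body frame to the ground frame at time t
    R_lu_ins    : (3, 3) rotation from the laser unit frame to the IMU body frame
    r_offset_lu : (3,) laser firing point / GPS phase centre offset in the laser unit frame
    R_lb_lu     : (3, 3) rotation from the laser beam frame to the laser unit frame
    rho_vec     : (3,) range expressed as a vector in the laser beam frame,
                  e.g. (0, 0, -rho) if that frame's z-axis points upward (an assumption)
    """
    # any bias in the lever arm, the boresighting rotations, or the range
    # propagates directly into the derived ground coordinates
    return r_gps_m + R_ins_m @ R_lu_ins @ (r_offset_lu + R_lb_lu @ rho_vec)
```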
Typically, LIDAR data are supplied by the system in the form of X, Y, and Z coordinates of captured surfaces with the expected accuracies in the vertical and horizontal directions. For modern systems, the derived coordinates are usually coupled with intensity data. For any data acquisition system, quality assurance procedures are essential to ensure that the derived data meets the users' needs and accuracy requirements. These procedures, which are established prior to the mapping mission, include system calibrations, flight planning, and setting up the GPS base stations. For example, system calibration can be established by estimating the boresighting parameters in Equation 1 while minimizing the normal distance between a LIDAR footprint and a control surface. Such a calibration procedure requires a carefully planned flight configuration and the transparency of the data acquisition process (i.e., the availability of the raw LIDAR measurements). However, current LIDAR systems do not furnish the raw measurements. Thus, as far as the user is concerned, the acquisition system is a black box. Such a nature leaves the users with quality control procedures as the only means for assessing the LIDAR system performance.

Since there is no redundancy in the LIDAR measurements, as suggested by Equation 1, we cannot have explicit measures to assess the quality of LIDAR-derived surfaces. This is in contrast to photogrammetric procedures, where the bundle adjustment procedure provides measures of the quality of derived positional information (e.g., the variance component and the variance-covariance matrix for the derived coordinates). Therefore, the users need to devise other quality control procedures to check the integrity and correctness of provided LIDAR data. Since LIDAR data is always acquired in overlapping strips from different flight lines, a common quality control procedure is to check the coincidence of conjugate features in overlapping strips. The degree of coincidence of such features can be used as a measure to describe the data quality and identify the presence of systematic biases. This quality control procedure can be accomplished by one of three approaches: 1) comparing interpolated range or intensity images from overlapping strips, 2) comparing conjugate features (e.g., linear features) extracted from overlapping LIDAR strips, or 3) using specifically designed LIDAR targets in a check analysis procedure.

For the first approach, the range measurements of overlapping areas are interpolated into a regular grid and represented as range images. Image differencing is then performed and the deviations between the two images can be used as a measure for quality control (Figure 2). Similarly, intensity measurements can be interpolated into a grid to
form intensity images. Conjugate features can be identified in the intensity images and the coordinates of these features can then be compared (Figure 3). Differences in the derived coordinates of conjugate features indicate the presence of biases in the data acquisition system. The main issue with this approach is that interpolation can lead to artefacts in the range/intensity images especially for urban areas where discontinuities in the range and intensity data are quite common. Such artefacts will lead to incorrect conclusions about the quality of the LIDAR data.
Fig. 2. Interpolated range images generated from two overlapping LIDAR strips (left and middle) and their difference image (right) with red and green showing positive and negative differences, respectively
Fig. 3. Conjugate features are identified in interpolated intensity images and their coordinates are compared

The second approach for LIDAR quality control directly compares extracted conjugate features from the data without any interpolation. For example, linear features are commonly found in urban scenes (e.g., building edges and road boundaries) and can be extracted by performing planar patch segmentation and intersection. The coincidence of the extracted features from overlapping surfaces can be checked by performing an absolute orientation procedure that estimates the necessary transformation parameters for co-aligning conjugate features [1]. Since overlapping strips are given with respect to the same reference frame as defined by the onboard GPS unit, zero translations and rotations should be required to relate the
surfaces. If the derived parameters from the absolute orientation procedure significantly deviate from these expected values, the presence of biases in the data acquisition system can be inferred. The third approach for quality control is to directly compare the coordinates of specially designed LIDAR targets, which have been independently surveyed. LIDAR measurements over these targets should be processed through range and intensity data segmentation to come up with the LIDAR coordinates. Finally, the LIDAR and the surveyed coordinates are compared to come up with a measure of the quality of the derived surface (e.g., Root Mean Square Error - RMSE). It is quite evident that establishing and surveying LIDAR targets is an expensive and time consuming procedure. In addition, inaccessibility of the mapped site will be an obstacle for such a measure.

The above-mentioned quality control approaches require post-processing of the raw LIDAR data. Thus, the validity of the derived measures relies on the amount of errors introduced from the processing steps. To mitigate such a dependency, this research focuses on developing an approach based on surface matching and registration of the raw LIDAR data (i.e., without the need for interpolation or segmentation) to identify biases in the data acquisition system. In other words, the quality control is performed by automated registration of two overlapping LIDAR strips while checking for consistent deviations between them. The same procedure can also be performed by the co-registration of a LIDAR surface with a control surface (e.g., obtained from a photogrammetric or GPS survey). The former can be used as an internal quality control measure while the latter can be used as an external one. The next section will introduce the methodology for surface matching and registration. Section 3 describes the employed LIDAR data for an internal quality control experiment while Section 4 presents the results after applying the proposed algorithm. Finally, the last section provides conclusions and future research directions.
2. Surface Matching and Registration
The proposed procedure is based on surface registration for the co-alignment of overlapping surfaces, which might be obtained from adjacent LIDAR strips for an internal quality control measure. Alternatively, the involved surfaces might correspond to LIDAR data and an independently generated control surface for an external quality control measure. The registration procedure should deal with the raw data without the need for any processing. Moreover, the correspondence between conjugate surface
elements should not be a pre-requisite for such a procedure. The following subsections describe the adopted registration paradigm and the surface matching methodology. For a more detailed description of the proposed algorithm, interested readers can refer to [2] and [3].
2.1 Registration Paradigm
Any registration process should deal with four issues: 1) registration primitives (e.g., points, lines, or areal features) to represent the surfaces of interest, 2) a transformation function to mathematically relate the associated reference frames, 3) a similarity measure to describe the coincidence of conjugate primitives, and 4) a matching strategy to automate the registration process. The following list describes these issues as they pertain to the problem at hand.

1) Registration primitives: 3D points (i.e., raw LIDAR points) are used to represent the first surface (S1), and triangular patches (derived from the original 3D points) are used to represent the second surface (S2). The utilization of planar patches to represent the second surface is essential as we cannot assume point-to-point correspondence between the involved surfaces. Figure 4 shows an example of the first and second surface representation in this algorithm.
Fig. 4. LIDAR surfaces represented by points for S1 (left) and triangular patches for S2 (right)
2) Transformation function: The 3D similarity transformation (Equation 2) is employed to map S1 onto S2 with seven transformation parameters: three translations (X_T, Y_T, and Z_T), three rotations (ω, φ, κ), and a scale factor (S). In the absence of biases, we should have zero translations, zero rotations, and a unit scale factor. If the estimated parameters significantly deviate from these values, then the presence of a bias can be inferred.
Moreover, the amount of deviation from the theoretical values can be used as the quality control measure.

[X'  Y'  Z']^T = [X_T  Y_T  Z_T]^T + S · R(ω, φ, κ) · [X  Y  Z]^T        (2)

Where:
X, Y, Z are the coordinates of a point in S1,
X', Y', Z' are the coordinates of the transformed point with respect to the reference frame of S2, and
R is the rotation matrix between the two reference frames as defined by the rotation angles ω, φ, and κ.
3) Similarity measure: The similarity measure mathematically describes the necessary constraints for ensuring the coincidence of conjugate primitives. In this regard, the coplanarity condition (which states that a point from S1, after applying the appropriate transformation parameters, should be coplanar with its conjugate patch from S2) is used. In other words, the volume enclosed by the point and the patch should be zero (Equation 3, Figure 5). One point-patch pair provides one constraint in the form of Equation 3; thus, to solve for the seven transformation parameters, seven point-patch pairs are needed.
        | X'   Y'   Z'   1 |
V  =    | X_a  Y_a  Z_a  1 |  =  0        (3)
        | X_b  Y_b  Z_b  1 |
        | X_c  Y_c  Z_c  1 |

Where:
X', Y', Z' are the coordinates of the transformed point from S1 (Equation 2), and
a, b, c denote the three vertices of the conjugate patch from S2.
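A small sketch of Equations 2 and 3 is given below; the function names are illustrative, and one such constraint would be written for each of the seven (or more) point-patch pairs.

```python
import numpy as np

def transform_point(p, translation, scale, R):
    """Equation 2: map a point of S1 into the reference frame of S2."""
    return translation + scale * (R @ p)

def enclosed_volume(p_transformed, a, b, c):
    """Equation 3: 4x4 determinant proportional to the volume enclosed by the
    transformed point and the patch vertices a, b, c; it should vanish for a
    correctly matched (coplanar) point-patch pair."""
    M = np.vstack([np.append(p_transformed, 1.0),
                   np.append(a, 1.0),
                   np.append(b, 1.0),
                   np.append(c, 1.0)])
    return np.linalg.det(M)
```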
Fig. 5. Coplanarity constraint describes the fact that the volume enclosed by a conjugate point-patch pair should be zero
4) Matching strategy: The matching strategy is a controlling framework that utilizes the primitives, transformation function, and similarity measure to automatically establish the correspondence between conjugate surface elements while solving for the involved parameters in the transformation function. The Modified Iterated Hough Transform (MIHT) [2] and the Iterative Closest Point (ICP) algorithm [4] are employed as the matching strategy. MIHT is based on a voting scheme to initially solve for the transformation parameters and establish the correspondence between conjugate surface elements. On the other hand, the ICP algorithm refines the established correspondences and estimated parameters from the MIHT procedure. The following subsection describes the utilization of MIHT and ICP procedures to solve the registration and surface matching problem.
2.2 Surface Matching Methodology

The surface matching algorithm begins by setting initial approximations for the seven transformation parameters. Using these approximate values, points in S1 are transformed into the reference frame associated with S2. The MIHT is first used to perform the matching between the surfaces by considering all possible point-patch pairs between the transformed points in S1 and the patches in S2. Each hypothesized match can be used to solve for one of the transformation parameters, such as X_T, by satisfying the coplanarity constraint (Equation 3), while keeping the remaining parameters as constants. Another hypothesized match will lead to another value of X_T. For all possible point-patch pairs, different values of X_T are solved for using the coplanarity constraint. The resulting values from the hypothesized matches are stored in an accumulator array. The correct point-patch matches will lead to the same X_T value and thus will manifest themselves as a
peak in the accumulator array (Figure 6). The initial approximation of X_T will then be updated with this peak value. The remaining six parameters are then solved for sequentially in the same manner. The entire process is iterated by decreasing the accumulator array cell size to reflect the improved quality of the derived parameters. The point-patch pairs that contributed to the detected peak reflect established correspondences between conjugate surface elements.
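The voting step for a single parameter can be sketched as follows, reusing transform_point and enclosed_volume from the earlier sketch. Because the enclosed volume is linear in X_T when the other parameters are fixed, each hypothesized point-patch pair yields one candidate X_T, and the accumulator peak updates the approximation; the names and the bin size are illustrative only.

```python
import numpy as np

def vote_for_xt(points_s1, patches_s2, params, bin_size=0.5):
    """One MIHT voting pass for the X translation; all other parameters fixed.

    points_s1  : (N, 3) points of S1
    patches_s2 : list of (a, b, c) vertex triples of the S2 triangles
    params     : current approximations {'YT', 'ZT', 'S', 'R'} (R is 3x3)
    """
    votes = {}
    for p in points_s1:
        for (a, b, c) in patches_s2:
            # the enclosed volume is linear in XT, so two evaluations fix its root
            t0 = np.array([0.0, params['YT'], params['ZT']])
            t1 = np.array([1.0, params['YT'], params['ZT']])
            v0 = enclosed_volume(transform_point(p, t0, params['S'], params['R']), a, b, c)
            v1 = enclosed_volume(transform_point(p, t1, params['S'], params['R']), a, b, c)
            if np.isclose(v1, v0):
                continue                      # constraint insensitive to XT for this pair
            xt = -v0 / (v1 - v0)              # value of XT satisfying the constraint
            cell = int(round(xt / bin_size))
            votes[cell] = votes.get(cell, 0) + 1
    # the accumulator peak is the most supported XT value
    return max(votes, key=votes.get) * bin_size
```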
Fig. 6. An accumulator array with the peak indicating the most probable solution
Since the point density of the LIDAR data is very close to the noise level of the data, MIHT might not lead to smooth convergence of the estimated parameters. To ensure smooth convergence, the MIHT is coupled with the ICP algorithm. More specifically, the estimated parameters from the last MIHT iteration are used as the starting point for ICP. Since the ICP algorithm is known to be very sensitive to the quality of the initial approximations, the implementation of the MIHT ensures the availability of reliable approximate values. The ICP then establishes correspondence by performing matching between the two surfaces. A point and a patch are considered a matched pair if the following criteria are met: 1) a point matches a patch if its normal distance is the shortest when compared to other patches; and 2) the projected point onto the plane of the patch has to be inside the patch (Figure 7).
Fig. 7. Matching criteria: 1) shortest normal distance (top) and 2) projected point is inside the patch (bottom); a projected point is inside the patch when a ray from it intersects the patch boundary an odd number of times, and outside when the number of intersections is even
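The two matching criteria can be sketched as follows; note that the inside/outside decision is made here with barycentric coordinates, an equivalent substitute for the odd/even ray-intersection test illustrated in Fig. 7, and the function name is illustrative.

```python
import numpy as np

def match_point_to_patch(p, triangle):
    """Evaluate the ICP matching criteria for one point-patch pair:
    return (normal_distance, inside) for the triangle (a, b, c)."""
    a, b, c = triangle
    n = np.cross(b - a, c - a)
    n = n / np.linalg.norm(n)
    normal_distance = abs((p - a) @ n)

    # project the point onto the plane of the patch
    q = p - ((p - a) @ n) * n

    # barycentric coordinates decide whether the projection falls inside
    v0, v1, v2 = b - a, c - a, q - a
    d00, d01, d11 = v0 @ v0, v0 @ v1, v1 @ v1
    d20, d21 = v2 @ v0, v2 @ v1
    denom = d00 * d11 - d01 * d01
    beta = (d11 * d20 - d01 * d21) / denom
    gamma = (d00 * d21 - d01 * d20) / denom
    inside = (beta >= 0.0) and (gamma >= 0.0) and (beta + gamma <= 1.0)
    return normal_distance, inside
```

For each point, the matched patch would be the one with the smallest normal distance among the patches for which the projected point falls inside.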
The transformation parameters are then solved for using the established matches in a least squares adjustment procedure. The matching and parameter estimation are repeated until there is no change from one iteration to the next. It should be noted that the points that are classified as non-matches (e.g., errors or discrepancies between the surfaces) are excluded from the adjustment process, leading to a highly robust solution.
2.3 Outcome Measures

The estimated transformation parameters and the associated standard deviations (SD) from the last iteration can be analyzed for quality control purposes. As previously mentioned, in the absence of system biases, the theoretical transformation parameters relating two overlapping surfaces should be zero translations and rotations, and a unit scale factor. Significant deviations between the estimated parameters and the theoretical ones indicate the presence of biases in the data acquisition system. In addition, the average normal distance (root mean square - RMS) between the matched point-patch pairs can be used to quantify the quality of fit of the two surfaces after the registration. To confirm the feasibility of the registration process, qualitative assessments of the matching results are necessary. This can be performed by displaying the matched and non-matched points from S1 after being transformed onto the reference surface S2 using the estimated parameters. In addition, non-matched points can also be projected onto an ortho-photo of the coverage areas. Such visualization will show how well the LIDAR surfaces are registered and provide insights into why certain points are classified as non-matches.
3. Data
A LIDAR dataset that covers an urban area in Brazil was used to test the feasibility of the proposed surface matching approach for quality control. The data is given with respect to the World Geodetic System 1984 (WGS84) reference frame. It was captured by an Optech ALTM 2050 airborne laser scanner (Optech Incorporated, Toronto, Canada) from an average flying height of 975 m. The average point density is approximately 2.24 points/m² (i.e., ~0.7 m point spacing). According to the flight and sensor specifications, this data is expected to have horizontal and vertical accuracies of 0.5 m and 0.15 m, respectively. Two adjacent and partially overlapping strips were used for surface registration and matching (Figure 8). As can be seen in this figure, the covered area mainly includes buildings, vegetation (e.g., trees), roads, and other man-made structures. Two neighbouring strips, S1 and S2, were extracted for testing the feasibility of the proposed procedure. The strips are comprised of 44,156 and 22,799 points, respectively. The 22,799 points of S2 were further processed to generate 45,520 triangular patches based on Delaunay triangulation in MATLAB (The MathWorks, Inc., Natick, MA, version 6.5). Figure 4 shows portions of S1 and S2 for this data.
Fig. 8. Range images of the overlapping LIDAR strips, S1 (left) and S2 (right), used for the experiment.
4. Experimental Results and Discussions
The surface matching algorithm was applied to the above-mentioned LIDAR strips and Table 1 summarizes the results. It is worth noting that the surface registration and matching procedure did converge even when
using rough approximations for the transformation parameters. The derived RMS of the normal distance (0.142 m) suggests successful registration with a high quality of fit between the two surfaces, especially when considering the accuracy of the data. It is also clear from the table that the estimated transformation parameters deviate from the expected values (zero rotations and translations and a unit scale factor) and that these values are significantly larger than the respective standard deviations, especially in the horizontal directions (X_T and Y_T). Such an observation indicates that some biases do exist between the two LIDAR strips. It is speculated that these biases in the horizontal directions are due to angular boresighting biases between the GPS/INS and laser scanning units. However, the effect of these biases is still within the noise levels of the data.

Table 1. Transformation parameters derived from the surface matching algorithm for the LIDAR data.

                               X_T (m)    Y_T (m)    Z_T (m)    S          ω (°)      φ (°)      κ (°)
Initial Approx.                3.000      -3.000     3.000      1.000      -3.000     3.000      -3.000
Expected Parameters            0.000      0.000      0.000      1.000      0.000      0.000      0.000
Estimated Parameters (±SD)     -0.660     -0.367     0.007      1.001      -0.017     0.002      0.003
                               (1.26e-3)  (1.55e-3)  (2.44e-3)  (2.20e-5)  (6.40e-5)  (1.14e-4)  (1.80e-5)
Estimated Variance Component   0.123
RMS of the Normal Distance     0.142 m
To visualize the matching results and to investigate why certain points/areas are classified as non-matches, the points from S1 were transformed with the estimated transformation parameters and displayed together with S2 (Figure 9). The non-matching points (red) appear to be located around the same areas (e.g., edges of buildings). To provide justification for these non-matches, the points from S1 were also projected onto an ortho-photo of the area (Figure 10). The non-matches (red) on the left side are from a non-overlapping area between the two strips. A closer investigation of the non-matches (Figure 10) indicates that they were mainly located along the edges of buildings and around areas with vegetation (e.g., trees). This can be explained by the fact that these physical surfaces, with sudden height changes, were not modelled correctly by the
irregularly shaped triangular patches (refer to Figure 4). It should be noted here that these visualization steps are not necessary for LIDAR quality control purposes, but they were explained here to verify the registration results and to show that the surface matching algorithm can also identify discrepancies (e.g., changes or blunders) between the surfaces.
Fig. 9. A section of the co-registered surfaces with the green mesh representing S2, the blue points representing the matches and red points the non-matches from S1
Fig. 10. Matched (blue) and non-matched (red) points from S1 displayed on an ortho-photo of the coverage area (left). The non-matched points were mainly located along edges of buildings and around areas with vegetation (right)

To verify the results of the surface registration and matching, we conducted a quality control procedure based on checking the degree of coincidence between identified linear features in the same data. In a parallel study, we extracted 164 conjugate linear features [5]. This was accomplished by first performing plane segmentation and fitting to the laser points, followed by neighbouring plane intersection to obtain linear features (e.g., ridges along roof tops). This procedure is expected to produce linear features with higher accuracy than the LIDAR data itself. It
should be noted that conjugate linear features from the overlapping strips were manually identified and used in an absolute orientation procedure to derive the seven 3D similarity transformation parameters relating the two surfaces [1]. Table 2 reports the results from the absolute orientation procedure together with the results from the proposed surface matching approach. It is clear that both approaches produced transformation parameters with similar values, with average differences of 0.2 m for the translations and 0.007° for the rotations. These differences are within the noise level of the data. Similar to what is reported by the surface registration and matching, the results from the linear features also show significant biases in the horizontal directions (i.e., X_T and Y_T deviate from the expected values of zero and are larger than their standard deviations). Therefore, we can confirm that the surface matching approach is a valid technique for quality control of LIDAR data. However, the surface matching does not require any processing of the LIDAR data and it does not rely on the availability of linear features in the data. Moreover, the surface matching procedure automatically establishes the correspondences between conjugate surface elements while estimating the transformation parameters.

Table 2. Transformation parameters and standard deviations derived from linear features and surface matching of the LIDAR data.

Method                                 X_T (m)    Y_T (m)    Z_T (m)    S          ω (°)      φ (°)      κ (°)
Linear Features & Abs. Orient. (±SD)   -0.418     -0.209     -0.019     1.000      -0.010     0.017      0.003
                                       (2.98e-2)  (2.79e-2)  (7.87e-2)  (2.30e-5)  (2.29e-2)  (3.78e-2)  (1.30e-3)
Surface Matching (±SD)                 -0.660     -0.367     0.007      1.001      -0.017     0.002      0.003
                                       (1.26e-3)  (1.55e-3)  (2.44e-3)  (2.20e-5)  (6.40e-5)  (1.14e-4)  (1.80e-5)
5. Conclusions and Future Directions
This paper presents an automatic surface matching and registration algorithm for the quality control of LIDAR data. In general, quality control is an essential procedure to ensure that the derived data from a given system meets the users' requirements. For LIDAR, quality control is critical since the users' role in the quality assurance process is very limited due to the non-transparent nature of current LIDAR systems. The proposed surface matching and registration technique is an automated process that works
with the raw data without the need for any prior processing. Deviations of the estimated transformation parameters from the theoretical ones (zero rotations and translations and a unit scale factor) are used as internal and external quality control measures to inspect the presence of biases in the data acquisition system. When dealing with overlapping LIDAR strips, the deviations are considered as an internal quality control. On the other hand, external quality control measures can be derived by comparing the LIDAR surface with another version of the surface that has been independently and accurately acquired. Experimental results with real data have shown the feasibility of the proposed algorithm in detecting biases in the horizontal directions between two overlapping LIDAR strips. These errors were speculated to be due to angular boresighting biases. The surface matching results were also validated against those derived by using conjugate linear features in a line-based absolute orientation procedure.

Compared to the other quality control techniques, the proposed approach does not require any interpolation or segmentation of the LIDAR data. Moreover, it can be applied to any coverage area with no requirement of having LIDAR targets or structures with linear features (e.g., urban areas). It should be noted that in current specifications, only the vertical accuracy is carefully verified by the data provider with precisely surveyed check surfaces. On the other hand, horizontal accuracies are more difficult to verify due to the lack of distinct topographic features for testing. Also, no specific regulations and verification requirements are defined by the photogrammetric and mapping societies for reporting horizontal accuracies of LIDAR data [6]. However, both horizontal and vertical accuracies should be assessed for quality control since errors in both directions can greatly affect the accuracy of mapping products generated from LIDAR (e.g., digital elevation models). This is especially important for applications such as marine navigation and floodplain management where highly accurate mapping products are needed.

The presented methodology constitutes an effective and economic tool for checking the quality of derived LIDAR surfaces. In the future, the algorithm will be applied for external quality control, where the matching is performed between a LIDAR strip and a control surface of the coverage area. Another extension is to implement the surface matching algorithm in multiple sub-sections of the LIDAR surface to check whether the detected biases are consistent throughout the entire dataset. The identification of biases is the first step to quality control. In the future, the research will be extended to allow for the justification and compensation of detected biases.
References
1. Habib A.F., Ghanma M.S., Morgan M.F., and Al-Ruzouq R.E., 2005. Photogrammetric and Lidar Data Registration Using Linear Features. Photogrammetric Engineering and Remote Sensing, 71(6), pp. 699-707.
2. Habib A.F., Lee Y., and Morgan M., 2001. Surface matching and change detection using a modified Hough transformation for robust parameter estimation. Photogrammetric Record, 17(98), pp. 303-315.
3. Habib A.F., Cheng R.W.T., Kim E.-M., Mitishita E., Frayne R., and Ronsky J.L., 2006. Automatic surface matching for the registration of LIDAR data and MR imagery. Electronics and Telecommunication Research Institute Journal, 28(2), pp. 162-174.
4. Besl P. and McKay N., 1992. A method for registration of 3-D shapes. IEEE Transactions on Pattern Analysis and Machine Intelligence, 14, pp. 239-256.
5. Lee I.-B., Habib A.F., Yu K.-Y., and Kim Y.-L., 2005. Segmentation and extraction of linear features for detecting discrepancies between LIDAR data strips. In: The International Geoscience and Remote Sensing Symposium, Seoul, Korea, 4 pages.
6. American Society for Photogrammetry and Remote Sensing, 2004. ASPRS guidelines: vertical accuracy reporting for LIDAR data, V1.0, pp. 1-20.
Acknowledgement
This research work has been conducted under the auspices of the GEOIDE Research Network through its partial financial support of the project. The authors would like to acknowledge the valuable feedback from Terrapoint Canada Inc. We are grateful to the Technology Institute for Development (LACTEC - UFPR) for supplying the LIDAR data. Finally, the authors would like to acknowledge the British Columbia Base Mapping and Geomatics Services for its interest and support of current and future research for the development of standards and specifications for LIDAR data.
On-line Integration of Photogrammetry and GIS to Generate Fully Structured Data for GIS
Hamid Ebadi, Farshid Farnood Ahmadi
Faculty of Geodesy & Geomatics Engineering, K.N. Toosi University of Technology, No 1346, Valiasr Ave, Mirdamad Cross, Tehran, Iran, Tel: +98218786212, Fax: +98218786213, E-mail:
[email protected], farshid_famood@sina.kntu.ac.ir
Abstract
An important issue in the implementation of a GIS is the preparation of the data to be entered into it. To produce spatial data for GIS using photogrammetric techniques, the conventional approach is to utilize photogrammetric and GIS systems individually (an off-line procedure). Preparing spatial data in this way has several problems, such as: difficulties in converting spatial data from CAD systems to GIS, reduction of spatial data accuracy in the editing process, impossibility of adding attribute data to spatial data simultaneously, and increased cost and time of the data production and preparation process. By on-line integration of photogrammetric and GIS systems, feature digitizing from photogrammetric models can be performed in an interface GIS environment. With this approach, the generated data can be saved in the standard structure and format defined by the GIS environment and used directly for GIS analysis without further editing. In this paper, the design and development of a software package called On-line Integrated Photogrammetric GIS (OIPGIS) is described; it was successfully implemented for the first time. The system overcomes the problems of the off-line approach and significantly reduces the time and cost of spatial data production.
1 Introduction
One of the most significant stages in the implementation of a GIS is the preparation of data to be entered into the system. It could be said that the
most important factor in a successful GIS project is the quality of the entered data [1]. Spatial information extracted from aerial photos using photogrammetric techniques is considered one of the most important resources for generating spatial data in GIS. Nowadays, the importance of imagery as a resource for obtaining spatial data is increasing; in the near future around 50% of the existing data in organizations concerned with the management of spatial information will be obtained by this approach [3]. The current, conventional method of preparing spatial data for GIS by photogrammetric techniques is to use photogrammetric and GIS systems individually. In this method, the spatial data are first generated in a CAD-based environment as digitised maps, without considering the characteristics and requirements of the GIS; thereafter the spatial data are prepared and edited and the structure required for entering the data into the GIS is created. After this stage, the data format is converted from the CAD system to the GIS system. Finally, after entering the spatial data into the GIS, the attribute data must be amalgamated with the spatial data through a separate process (Figure 1).
Fig. 1. Stages of the conventional approach for preparation of spatial data to be entered in GIS.
Spatial data preparation based on this method poses several problems, described below.

Difficulties in Conversion of Spatial Data from CAD Systems to GIS
Most commercial GIS software has a conversion module to convert the spatial data format from CAD-based systems to
GIS. If the spatial data are stored in the CAD environment under the constraints and conditions required by the feature extraction operations, the conversion of the data format is straightforward. In most cases, however, these constraints are not applied when storing spatial data in the CAD environment, and consequently the conversion process creates many problems, whose types and categories differ depending on the structure of the file coming from the CAD environment. The cause of these problems lies in the differences between CAD and GIS design, which lead to a great disparity between the data models of the two systems [4]. The major differences between the two systems that can affect the conversion process are as follows [4]:
• Differences in spatial database structure
• Differences between the concept of a layer (in CAD systems) and a theme (in GIS systems)
• Differences in the usable coordinate systems
• Differences in the procedures for storing features.
Consequently, some editing of the CAD system output is necessary before it can be entered into the format conversion environment.

Reduction of the Spatial Data Accuracy in the Editing Process

All of the analyses defined in GIS are based on digital processing and analysis of spatial and attribute data. Although spatial data analysis in digitised form has advantages (e.g., accuracy, speed and cost), slight errors in spatial data that are negligible in CAD systems become significant and can cause serious problems in GIS analysis [2]. For instance, if the distance between the endpoints of some line segments is within the map scale accuracy, the disconnection of the line segments does not cause a problem in a CAD system, but it is identified as a serious problem in GIS. Therefore, in addition to the editing operations described in the previous section, further editing operations are necessary to prepare spatial data for GIS analyses. Because the off-line data editing is done in a separate stage, without the photogrammetric model from
which the features were extracted, applying these editing operations will in most situations reduce the spatial data accuracy [2].

Impossibility of Adding Attribute Data to Spatial Data at the Time of Spatial Data Production
In spatial data production using the photogrammetric method, the 3-dimensional model produced from aerial photos, which is the source of spatial data in photogrammetric systems, can also be a proper source for some of the attribute data related to the features in the study area. Since, in the conventional method, GIS and photogrammetric systems are used independently, it is not possible to access an attribute database simultaneously with the feature digitisation process, and for this reason attribute data cannot be attached to spatial data during feature digitisation.

Increasing the Cost and Time of Spatial Data Production and Preparation for Entering into GIS
Carrying out each stage of spatial data production and preparation in separate stages, including digitising features from the model produced in the photogrammetric system, editing the spatial data to create the structure needed for entering the information into GIS, converting the data format, and finally adding attribute data, requires different sections and working groups, appropriate hardware, and experts. This imposes substantial cost and time on spatial data production and preparation; moreover, the independence of these sections lengthens the working stages.
2 On-line Integration of Photogrammetry and GIS
The aim of integrating several systems into a single integrated system is to access the capabilities of all (or a number of) the systems simultaneously and to compensate for the shortcomings of each system with the capabilities of the others. As pointed out in the preceding sections, individual usage of photogrammetry and GIS systems leads to serious problems during production and preparation of spatial data. By on-line integration of these two systems, the digitising of features from the model created in the photogram-
metric system is done in a GIS interface environment. Because the spatial data can be stored directly in the GIS format and entered into GIS analysis without further editing and/or conversion, problems such as converting spatial data from CAD format to GIS format and the reduction of spatial data accuracy during the editing procedure are eliminated. Also, the access to attribute databases that GIS systems give to users makes it possible to attach attribute data extracted from the photogrammetric model to the spatial data during the feature digitising process. Consequently, the integration of photogrammetry and GIS reduces the distance between spatial data production systems and processing systems and therefore reduces the time and cost of producing and preparing spatial data to be entered into GIS. The first step in integrating photogrammetric and GIS systems is to create an interface system between them. This system can be a device for converting the data format while the data are sent from the photogrammetric environment to a format suitable for GIS, a device for structuring the spatial data, or a combination of the two (Figure 2).
Fig. 2. Creating an interface system between photogrammetric and GIS systems.
This interface system is the core of the integration, and its structure determines the type and operation of the integrated system. It is therefore necessary to investigate thoroughly the targets and requirements of the project and the users' expectations during the design and implementation of the system.
3 Main Structure of Interface System
As mentioned before, the aim of this research is to perform an on-line integration between photogrammetric and GIS systems in order to create an appropriate structure for the spatial data at the time of digitising features from the model created in the photogrammetric system. Thus, the interface system must have the following characteristics and capabilities:
• Ability to receive the 3-dimensional points extracted from the model in the photogrammetric system and to send them to the interface GIS environment simultaneously with feature digitising.
• Ability to depict and draw each feature in the interface GIS environment simultaneously with its digitising in the photogrammetric environment.
• Possibility to add attribute data to features during digitising.
• Capability to save spatial and attribute data according to the standard format of GIS systems, directly in three segregated feature layers (point, line and polygon).
Accordingly, four major sections have been considered in the general scheme:
• Input section
• Documentation and feature drawing section
• Attribute information receiving section
• Storage section
The main structure of the system is shown in Figure 3; a conceptual sketch of these sections follows the figure.
Fig. 3. Main structure of interface system
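The four sections above can be pictured as a small pipeline: points arrive from the photogrammetric system, the feature is drawn, attributes are attached, and the result is filed in the matching GIS layer. The sketch below is purely conceptual; the actual OIPGIS implementation is in Visual Basic on top of MicroStation, and all class and method names here are invented, with Java used only for illustration.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

/** Conceptual sketch of the four interface-system sections:
 *  input, drawing/documentation, attribute capture, and GIS storage. */
public class InterfacePipeline {

    public enum FeatureType { POINT, LINE, POLYGON }

    /** One digitised feature: 3D vertices plus attribute data. */
    public static class Feature {
        final FeatureType type;
        final List<double[]> vertices = new ArrayList<>();
        final Map<String, String> attributes = new HashMap<>();
        Feature(FeatureType type) { this.type = type; }
    }

    private Feature current;
    private final List<Feature> pointLayer = new ArrayList<>();
    private final List<Feature> lineLayer = new ArrayList<>();
    private final List<Feature> polygonLayer = new ArrayList<>();

    /** Input section: a 3D point arrives from the photogrammetric system. */
    public void receivePoint(FeatureType type, double x, double y, double z) {
        if (current == null) current = new Feature(type);
        current.vertices.add(new double[] { x, y, z });
        // The drawing section would echo the vertex in the GIS view here.
    }

    /** Attribute section: attach attribute data while digitising. */
    public void addAttribute(String key, String value) {
        current.attributes.put(key, value);
    }

    /** Storage section: file the finished feature in the matching GIS layer. */
    public void finishFeature() {
        switch (current.type) {
            case POINT:   pointLayer.add(current);   break;
            case LINE:    lineLayer.add(current);    break;
            case POLYGON: polygonLayer.add(current); break;
        }
        current = null;
    }
}
```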
4 Implementation and Test of System
The first version of the interface system, presented as OIPGIS (On-line Integrated Photogrammetric GIS), was implemented in the Visual Basic programming language. Since the OIPGIS system is designed on the basis of photogrammetric systems that utilize the MicroStation environment for digitising, and uses the MicroStation environment as an interface environment to
receive points, MicroStation must be installed on the computer in order to use the system. The MDI (Multiple Document Interface) technique has been used in the implementation of the OIPGIS interface system and offers an appropriate, user-friendly system (Figure 4).
Fig. 4. OIPGIS user interface
The port in the OIPGIS input section is designed in such a way that it controls the MicroStation environment completely when the input section is activated, and enables the OIPGIS system to receive as measured points the input points entered from input devices such as a Stereo-Link, mouse or digitizer. It can therefore be said that the OIPGIS system is independent of the input devices. It is also possible to use the system for the on-line connection to GIS of digitizers whose documentation environment is MicroStation. The OIPGIS system was tested and evaluated by two methods after its implementation, and the results were satisfactory. In the first method, after creating the 3-dimensional model in the photogrammetric system, the digitising operation for each feature type (point, line and poly-
gon) was applied in different sections of the model. In this method, attribute data were added to the features during the digitising operation. In the second method, a scanned paper map was entered into the MicroStation environment and superimposed on the drawing file as a background; the map was then digitised with the OIPGIS system using an on-screen digitising procedure (Figure 5).
Fig. 5. Digitising of scanned map with OIPGIS system
5 Conclusions and Recommendations
The design and implementation of the OIPGIS system, as the first integrated system for on-line integration of photogrammetric and GIS systems, can be considered a great step towards reducing the distance between spatial data production systems and Geographic Information Systems (GIS), and a significant development in the production and analysis of spatial data. With on-line integration of photogrammetric and GIS systems, the digitising of features from the model created in the photogrammetric system is carried out in a GIS interface environment, and the spatial data are saved into the GIS with the appropriate standard data structure. Thus the spatial data can be entered directly into GIS analysis without further editing or conversion. Hence, by using these systems, problems such as converting spatial data from CAD format to GIS format and the reduction of spatial data accuracy during the editing procedure are eliminated. Also, the access to attribute databases that GIS systems give to users makes it possible to attach attribute data extracted from the photogrammetric model to the spatial data during the feature digitising process. As a result, the integration of photogrammetry and GIS reduces the distance between spatial data production systems and processing
systems and therefore reduces the time and cost of spatial data production and preparation. Given the problems arising from using photogrammetric and GIS systems individually, and considering the capabilities of their on-line integration, it is recommended to replace the conventional methods with OIPGIS systems in order to reduce the time and cost of producing and preparing spatial data to be entered into GIS. Research is underway to further develop the integrated system for topology creation, detection and repair of errors in spatial data, and automatic and intelligent creation of logical relations among features.
References
1. Ebadi, H., Farnood Ahmadi, F., Varshosaz, M., 2005. On-line Integration of Photogrammetric and CAD Based Systems with Emphasis on Logical Relations among Features. ISGIS, Malaysia.
2. Farnood Ahmadi, F., 2004. On-line Integration of Photogrammetric and CAD Based Systems with Emphasis on Logical Relations among Features. M.Sc. Thesis, Faculty of Geodesy and Geomatics Engineering, K.N. Toosi University of Technology.
3. Woodsford, P.A., 2005. System Architecture for Integrating GIS and Photogrammetric Data Acquisition. ICWG III/V.
4. NPS Spatial Data Specifications, 2005. A Step-By-Step Guide to Converting dwg CAD Files to GIS Shapefiles. Boston, Massachusetts.
3D Integral Modeling for City Surface & Subsurface
Wang Yanbing a, Wu Lixin a,b, Shi Wenzhong c, Li Xiaojuan a
a Institute of 3D Information Acquisition & Application, Capital Normal University, Beijing, China; b Center for GIS/GPS/RS & Digital Mine Research, Northeastern University, Shenyang, China; c Center for Advanced GIS Research, LSGI, The Hong Kong Polytechnic University, Hong Kong, China
Abstract
With rapid urban development, city space has extended from the surface to the subsurface. As the primary data source for representing city spatial information, 3D city spatial data are characterized by multiple objects, heterogeneity and multiple structures. Referring to the geo-surface, they can be classified into three kinds: above-surface data, surface data and subsurface data. Current research on 3D city spatial information systems naturally divides into two branches, 3D City GIS (3D CGIS) and 3D Geological Modeling (3DGM). The former emphasizes the 3D visualization of buildings and the city terrain, while the latter emphasizes the visualization of geological bodies and structures. However, it is extremely important for city planning and construction to integrate all of the city spatial information, including above-surface, surface and subsurface objects, to conduct integral analysis and spatial manipulation. 3D spatial modeling is the key to, and the basis of, 3D GIS. 3D spatial modeling methods differ according to the objects being modeled; based on this difference, they can be divided into two kinds, Geographical Spatial Modeling and Geological Spatial Modeling. After contrasting the available 3D spatial modeling methods, this paper presents some key issues in 3D spatial data modeling for geographical objects, surface buildings and geological objects integrated seamlessly with a TIN as the coupling interface. On this basis, the paper introduces the conceptual model of an Object-Oriented 3D Integrated Spatial Data Model (OO3D-ISDM), which is comprised of 4 spatial elements, i.e. point, line, face and body, and 4 geometric primitives, i.e. vertex, segment, triangle
and generalized tri-prism (GTP). The spatial model represents the geometry of surface buildings and geographical objects with triangles, and geological objects with GTPs. Any of the represented objects, no matter whether surface buildings, terrain or subsurface objects, can be described with the basic geometric element, i.e. the triangle. So the 3D spatial objects (surface buildings, terrain and geological objects) can be integrated together with the TIN as the coupling interface. Based on OO3D-ISDM, an application case in the central business district (CBD) of municipal Beijing is introduced. The case shows the potential applications of OO3D-ISDM in the domains of digital city, digital geotechnical engineering, and so on.
Keywords: 3D spatial model, integral modeling, object oriented, seamless, GTP, digital city.
1 Preface
GeoSpace is a continuous 3D space. Representing geospace and its objects in 3D is one of the important issues in GIS. Current 3D GIS research mainly focuses on spatial objects above the surface. With the rapid development of computer science and spatial information technology, the spatial data that GIS manages include not only natural geographical data but also extend to above-surface and subsurface objects, both natural and man-made (Fig 1). The integrated description, representation and analysis of surface and subsurface spatial data is an important category in GIS.
Fig. 1. A typical skeleton of geospace
City spatial information is the most complex data in geospace information and is representative of 3D GIS research. User requirements for 3D city modeling in current 3D GIS mainly concern spatial objects above the surface, with little or no attention to subsurface objects. With rapid urbanization, however, more and more issues are influenced directly or indirectly by the city subsurface environment, including city geology, hydrology, engineering geology and tunnels. Much more study should therefore focus on real 3D spatial information and on its integration. 3D GIS is an efficient method for acquiring, storing, managing, visualizing and analyzing the integral information of the surface and subsurface, and is an important platform for supporting research on sustainable urban development. However, different 3D GIS modeling methods are usually used in separate domains. The models presented during the past decades are usually divided into above-surface, surface and subsurface modeling methods according to the spatial objects they model, and the spaces these models describe are called Geographical Space and Geological Space respectively. Usually, current 3D GIS describes Geographical Space objects, while Geological Space objects are represented in geological software (Wu LX, 2003). In a word, integral real 3D spatial data of a city should contain Geographical Space and Geological Space data, and a real 3D GIS should describe and represent all of the spatial objects of both spaces. The basis of a 3D GIS for a digital city is to construct a new 3D data model for the integration of urban Geographical Space and Geological Space. According to the characteristics of geospace objects, they are classified into three kinds: geographical objects, surface objects and geological objects.
2 3D Spatial Data Model
The data model and data structure are the basic contents of spatial data organization, and they are key to GIS software development. The concept model is the core of spatial data model design and is used to define rules for representing the attributes of the spatial model. In 3D GIS modeling, features' locations and shapes are in 3D space, and they occupy their own space. The main difference between a 2D GIS model and a 3D GIS model is that in the former a feature's attributes are a function of spatial position, and changes of attributes follow changes of geometry (position, shape, size), whereas 3D GIS modeling objects carry their own attributes.
A further modification in 3D GIS models is that they allow operations with body objects. To the spatial objects that 2D GIS defines (point, arc, face), 3D GIS models add another object: body. Because of the extension in dimension, 3D GIS models differ greatly from 2D GIS models in topology and geometry; for example, a geological stratum is a face object, while a geological body or a building is a body object. Modeling in 3D GIS is therefore more complex than in 2D GIS. Many researchers have studied 3D spatial modeling and presented various 3D spatial data models. According to the objects studied, the models can be divided into two types: 3D geographical spatial object models and 3D geological spatial object models.
2.1 Geographical Space Objects Modeling
Geographical Space objects mainly refer to constructions on the ground, such as buildings, bridges and towers. The concept of the "Formal Data Structure" was introduced for 2D GIS by Molenaar in 1989 and extended in 1992 to comprise 3D information and texture as the 3D Formal Data Structure (3D FDS). The model is comprised of four elementary objects (point, line, surface, body) and four primitives (node, arc, edge, face). The topology of 3D FDS, with its is-in and is-on relations, is expressed explicitly between: point and body, arc and body, node and face, and arc and face (Fig. 2). Based on 3D FDS, models such as the n-tuple model, the Simplified Spatial Model (SSM), the Urban Data Model (UDM) and the Object-Oriented 3D Data Model (OO3D Model) have been presented.
Fig. 2. 3D Formal Data Structure
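To make the distinction between elementary objects and primitives concrete, the short sketch below encodes the 3D FDS vocabulary as described above. It is only an illustrative reading of the description, not Molenaar's formal definition, and all class names are invented.

```java
import java.util.List;

/** Illustrative encoding of the 3D FDS vocabulary: four elementary
 *  objects (point, line, surface, body) built from four primitives
 *  (node, arc, edge, face). */
class Node   { double x, y, z; }
class Arc    { Node from, to; }
class Edge   { Arc arc; boolean forward; }   // oriented use of an arc in a face border
class Face   { List<Edge> border; }          // closed ring of edges

class PointObject   { Node node; }
class LineObject    { List<Arc> arcs; }
class SurfaceObject { List<Face> faces; }
class BodyObject    { List<Face> boundary; } // faces bounding the solid
```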
2.2 Geological Space Objects Modeling
Subsurface objects mainly refer to man-made engineering bodies such as tunnels, subways and caves, and natural geological bodies such as strata, faults and ore bodies. This paper considers the modeling of geological bodies, which is also called three-dimensional geological modeling (3DGM). Usually, geological models are divided into three kinds: surface models, solid models and integrated models. They include: the TIN model, grid model, block model, tetrahedral network (TEN) model, pyramid model, wireframe model, B-Rep model, Multi-DEM model, CSG model, octree model, GTP model, TEN+Octree, TIN+Octree, and so on. All of these models are used to describe the occurrence of geological bodies and to calculate possible reserves (Houlding S 1994). Some typical models are introduced here.
1. TEN
TEN was introduced by Pilouk to overcome some difficulties of 3D FDS in modeling objects with indiscernible boundaries. According to the definitions, TEN has four constructive objects (tetrahedron, triangle, edge, node). The topological relationships are given by the adjacency of the spatial objects. A body object is composed of tetrahedrons, a surface object of triangles, a line object of arcs, and a point object of nodes. The TEN model is appropriate for representing irregularities in the real world, such as terrain, soil, geological objects, etc. Since the model uses the simplex-complex concept, TEN can be expected to cover the scope of possible topological relations in 3D space (see Fig. 3). While algorithms for constrained triangulation are discussed in the literature, the construction of constrained tetrahedrons in 3D is still under research.
Fig. 3. TEN: n-dimensional conceptual model
2. Generalized tri-prism (GTP) model
The GTP model, of broad suitability, was proposed in (Qi A W, Wu LX 2002; Wu LX 2004). The GTP is comprised of six primitives: node, TIN-edge, side-edge, TIN-face, side-face and GTP. In addition, three intermediary diagonal lines in each GTP component are applied temporarily for spatial operations. Six groups of topological relations between the six primitives are carefully designed for geo-spatial inquiry and geo-spatial analysis. The GTP model uses the upper and lower triangles of the GTP to express the geological stratum, and expresses the spatial relations among the strata by means of the side faces of the GTP. In Fig 4 the wide lines represent the drill holes. The GTP is bounded by two non-parallel triangles and three irregular side faces, which may be non-coplanar when the drill holes are not parallel. The tri-prism is a special case of the GTP in which the drill holes are parallel, and a GTP degrades into a tetrahedron or a pyramid when one triangle face or one side-edge, respectively, shrinks to a point. Hence the name GTP.
Fig. 4. The principle of the GTP model based on unparallel drill holes
The GTP model can thus efficiently make use of drill hole data, sampling each interface point to identify the different geological strata. The GTP model can not only model the strata in real 3D, but can also represent the layer interfaces of the strata by means of collections of TIN faces.
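As a rough illustration of how a GTP column between two geological interfaces sampled along three (possibly non-parallel) drill holes might be represented, including the degenerate cases just mentioned, consider the sketch below. It only mirrors the description in the text; the field layout and the tolerance-based degeneracy test are simplifications and are not the authors' implementation.

```java
/** Simplified GTP element between two geological interfaces sampled
 *  along three drill holes (one vertex per hole per interface). */
public class Gtp {
    final double[][] upper;   // upper TIN triangle: {p1, p2, p3}, each {x, y, z}
    final double[][] lower;   // lower TIN triangle: {q1, q2, q3}

    public Gtp(double[][] upper, double[][] lower) {
        this.upper = upper;
        this.lower = lower;
        // The three side faces are (p1,p2,q2,q1), (p2,p3,q3,q2), (p3,p1,q1,q3);
        // they need not be planar when the drill holes are not parallel.
    }

    /** A GTP degrades to a tetrahedron when one triangle face shrinks to a point,
     *  or to a pyramid when one side-edge shrinks to a point. */
    public String shapeKind(double eps) {
        if (collapsed(upper, eps) || collapsed(lower, eps)) return "tetrahedron";
        int zeroSideEdges = 0;
        for (int i = 0; i < 3; i++) {
            if (distance(upper[i], lower[i]) < eps) zeroSideEdges++;
        }
        return zeroSideEdges == 1 ? "pyramid" : "general tri-prism";
    }

    /** True when the three vertices of a triangle coincide (face shrunk to a point). */
    private static boolean collapsed(double[][] tri, double eps) {
        return distance(tri[0], tri[1]) < eps
            && distance(tri[1], tri[2]) < eps
            && distance(tri[2], tri[0]) < eps;
    }

    private static double distance(double[] a, double[] b) {
        double dx = a[0] - b[0], dy = a[1] - b[1], dz = a[2] - b[2];
        return Math.sqrt(dx * dx + dy * dy + dz * dz);
    }

    public static void main(String[] args) {
        double[][] up = { {0, 0, 10}, {5, 0, 11}, {0, 5, 9} };
        double[][] lo = { {0, 0, 4},  {5, 0, 5},  {0, 5, 3} };
        System.out.println(new Gtp(up, lo).shapeKind(1e-6));  // general tri-prism
    }
}
```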
2.3 Comparison of Geographical and Geological Models
3D spatial data modeling is the basis of 3D GIS, but none of the above models is universal. This paper compares these models with respect to geometry, primitives, field of use, and their advantages and disadvantages in order to give some guidance (Table 1).
3 Integration of TIN and GTP
According to the above comparison, the authors build a 3D spatial data model integrating TIN and GTP. TIN is used for modeling surface and geographical objects, while GTP is used for modeling geological objects; moreover, the combination can be used for the spatial integration of surface and subsurface objects (Wang Yanbing 2005). The new 3D spatial integral data model based on the integration of TIN and GTP, named the object-oriented 3D integral spatial data model (OO3D-ISDM), is presented. In OO3D-ISDM, geographical objects and geological objects are integrated seamlessly based on TIN-coupling. In OO3D-ISDM, the same type of geometrical element can carry different attributes at the geographical level. Point, line, face and body describe all the spatial objects. According to the inherited attributes of the objects and their topological relations, the constructive elements are divided into vertex, segment, triangle and GTP (see Fig. 5).
Fig. 5. The spatial data model of OO3D-ISDM for city surface and subsurface integration
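A compact way to read Fig. 5 is as two parallel hierarchies, spatial objects on one side and constructive primitives on the other, joined through the terrain TIN. The sketch below is one possible encoding of that mapping; the class names and the choice of fields are invented here and are not the authors' data structures.

```java
import java.util.List;

/** Illustrative encoding of the OO3D-ISDM mapping: spatial objects
 *  (point, line, face, body) described by constructive primitives
 *  (vertex, segment, triangle, GTP). */
class Vertex   { double x, y, z; }
class Segment  { Vertex a, b; }
class Triangle { Vertex a, b, c; }
class GtpCell  { Triangle upper, lower; }       // side faces implied by the two triangles

class PointObj { Vertex vertex; }
class LineObj  { List<Segment> segments; }
class FaceObj  { List<Triangle> triangles; }    // surface buildings, terrain TIN
class BodyObj  { List<GtpCell> cells; }         // geological strata

// The terrain TIN triangles referenced both by FaceObj (surface) and by the
// topmost GtpCell of a BodyObj act as the coupling interface between
// Geographical Space and Geological Space.
```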
Table 1. Comparison of 3D spatial data model
• 3D FDS (Molenaar, 1990). Use: 3D city models. Primitives: node, arc, face, edge; entity objects: point, line, surface, body (single-value map). Advantage: adjacency between spatial and non-spatial objects is easy to express. Disadvantage: difficult to express complex spatial objects.
• Tuple model (Pigot, 1992). Use: geology, complex buildings. Primitives: 0-3 cells, cell complexes. Advantage: topological expression, n-manifold, easy maintenance. Disadvantage: difficult to express complex spatial objects.
• SSM (Zlatanova, 2000). Use: web-oriented visualization and query. Primitives: node, planar face. Advantage: easy dynamic update and modification, small storage. Disadvantage: abstract primitives are complex; constructive objects are multi-valued.
• UDM (Coors, 2003). Use: city visualization. Primitives: node, triangle (triangulation). Advantage: simple abstract primitives, small storage, facial visualization. Disadvantage: dynamic update and modification are complex.
• OO3D (Shi Wenzhong, Yang Bisheng, 2002). Use: city visualization. Primitives: node, segment, triangle; entity objects: point, line, surface, volume (object-oriented modeling). Advantage: complex objects, LOD, visualization. Disadvantage: indistinct topology, little 3D spatial analysis functionality.
• 3D TIN (Abdul, 2000). Use: terrain. Entity objects: point, line, surface, body (single-value map / FDS). Advantage: facial visualization. Disadvantage: no attributes in objects.
• Grid. Use: terrain (surface division). Primitives: squares, rectangles. Advantage: facial visualization. Disadvantage: massive data.
• TEN (Pilouk, 1996). Use: geo-information, pollution clouds. Primitives: node, arc, triangle, tetrahedron (simplex concept). Advantage: facial visualization, complex bodies and faces. Disadvantage: difficult to visualize complex objects, great storage.
• Octree (Hunter, 1978). Use: CAD/CAM, geology, sea. Primitive: cube (body division). Advantage: simple structure, inside attribute expression, spatial transformation. Disadvantage: difficult to express the geometrical boundary of geological objects; massive data, data redundancy.
• Solid model (Houlding, 1994). Use: geology, mining. Primitive: component. Advantage: fits the expression of complex inner structure. Disadvantage: much manual work.
• GTP (Wu Lixin, 2004). Use: geological engineering. Primitives: node, TIN-edge, side-edge, TIN-face, side-face, GTP, diagonal. Advantage: complete topology, 3D expression of geology based on drill hole data. Disadvantage: difficult to visualize complex geological objects.
• TIN+CSG (Li Qingquan, 1998). Use: 3D city models. Advantage: the two models are integrated in one user window. Disadvantage: difficult to express complex geological objects such as faults, folds and fissures.
• TIN+Octree (Shi Wenzhong, 1996). Use: 3D city models, geological engineering. Primitives: node, segment, triangle, octree. Advantage: TIN expresses the surface and the octree the inside; topology expression, easy query. Disadvantage: the octree data change with changes of the TIN and are easily disturbed by the pointer.
• TEN+Octree (Li Qingquan, Li Deren). Use: geological engineering. Advantage: the octree expresses the whole and TEN the parts; improves the precision of objects and reduces data storage. Disadvantage: difficult to build the topology of spatial objects.
4 Case Study in Digital City
A real-3D GIS software platform, GeoMo3D, has been developed based on OO3D-ISDM with Visual C++, OpenGL and SQL Server for the integral modeling of city spatial objects. In the system, experimental data including drill hole data, building data, road data and terrain data of the central business district (CBD) of municipal Beijing were applied to test the model.
In the Central Park area of the CBD, the elevation of the buildings is 23.4 to 87.6 m, and there are 57 drill holes. Based on the drill holes, photogrammetry data and planning data, the seamless integration of the terrain surface, above-surface and subsurface in the Central Park of the CBD was realized. Figs 6 and 7 show the seamless spatial integration and the query for the drill holes under the Central Park, respectively.
Fig. 7. Seamless Integration
Fig. 6. Query for the drill holes under the central park
Conclusions
With rapid urban development, city space is extending rapidly from the surface to the subsurface. 3D city spatial data have the characteristics of heterogeneity, multi-object, multi-scale and multi-structure. However, current modeling research on 3D city spatial information systems does not support the spatial integration of all of the surface, above-surface and subsurface data. This paper presented a conceptual model, OO3D-ISDM, to integrate seamlessly the above-surface objects, surface objects and subsurface objects based on TIN-coupling. OO3D-ISDM is a real 3D spatial model and is capable of representing the topology of the primary elements (vertex, segment, triangle, GTP) and spatial objects (point, line, face, body). Based on OO3D-ISDM, a real-3D GIS software platform, GeoMo3D, has been developed and successfully applied to the real 3D spatial modeling of the central business district (CBD) of municipal Beijing. The potential applications of OO3D-ISDM and GeoMo3D in the domains of digital city, geotechnical engineering and digital mine are broad.
References
Alexander Koninger, 1998. 3D-GIS for Urban Purposes. Geoinformatica, 2:1, 79-103.
Arnaud de la Losa, Bernard Cervelle, 1999. 3D Topological modeling and visualization for 3D GIS. Computers & Graphics, 2:3, 469-478.
Coors, V., 2003. 3D GIS in networking environments. Computers, Environment and Urban Systems, 27: 345-357.
Houlding, Simon, 1994. 3D geoscience modeling: computer techniques for geological characterization. Berlin; New York: Springer-Verlag.
Li Qingquan, Li Deren, 1997. Three-Dimensional Run-Encoding (3DRE) for Octree. Journal of Wuhan Technical University of Surveying and Mapping, 22(2): 102-106.
Li Qingquan, Li Deren, 1998. Research on the Conceptual Frame of the Integration of 3D Spatial Data Model. Topography transaction, 27(4): 325-330.
Molenaar, M., 1990. A formal data structure for 3D vector maps. Proceedings of EGIS'90, Vol. 2, Amsterdam, The Netherlands, pp. 770-781.
Peter van Oosterom, Jantien Stoter, Wilko Quak, Sisi Zlatanova, 2002. The Balance Between Geometry and Topology. Advances in Spatial Data Handling, 10th International Symposium on Spatial Data Handling, 209-224.
Pigot, S., 1995. A topological model for a 3-dimensional Spatial Information System. PhD thesis, University of Tasmania, Australia.
Pilouk, M., Tempfli, K., 1994. An object-oriented approach to the Unified Data Structure of DTM and GIS. Proceedings of ISPRS, Commission IV, Vol. 30, Part 4, Athens, USA.
Qi Anwen, Wu Lixin, et al., 2002. Analogous Tri-prism: A new 3D geo-science modeling methodology. Journal of China Coal Society, 27:2, 158-163.
Rikkers, R., Molenaar, M., Stuiver, J., 1993. A Query Oriented Implementation of a 3D Topologic Data Structure. EGIS'93, Vol. 2, Genoa, Italy, pp. 1411-1420.
Shi Wenzhong, 1996. A Hybrid Model for Three-Dimensional GIS. Geoinformatics, 1: 400-409.
Shi Wenzhong, Yang Bisheng, Li Qingquan, 2003. An Object-Oriented Data Model for Complex Objects in Three-Dimensional Geographic Information Systems. International Journal of Geographic Information Science, 17:5, 411-430.
Wang Yanbing, 2005. 3D Spatial Modeling for City Surface and Subsurface Seamless Integration Based on TIN-Coupling. PhD Dissertation.
Wu Lixin, Shi Wenzhong, Christopher Gold, 2003. Spatial Modeling Technologies for 3D GIS and 3D GMS. Geography and Geo-Information Science, 19:1, 5-11.
Wu Lixin, 2004. Topological relations embodied in a generalized tri-prism (GTP) model for a 3D geoscience modeling system. Computers & Geosciences, 30: 405-418.
Zlatanova, Siyka, 2000. 3D GIS for Urban Development. ITC Dissertation.
Spatial Object Structure for Handling 3D Geodata in GRIFINOR
Eric Kjems a,*, Jan Kolar a
a Centre for 3DGI, Aalborg University, Niels Jernes Vej 14, Aalborg DK-9220, Denmark. E-mail: {kjems, kolar}@3dgi.dk
Abstract
This paper discusses the use of spatial objects in general and in relation to the GRIFINOR software platform (Bodum, Kjems et al. 2005). Here, objects have been created in a way that focuses on the overall demands of fast data retrieval, a dynamic object structure and level of detail (LOD) (Dollner and Buchholz 2005). Objects can be conceived as many different things. Though objects in the GRIFINOR system (http://www.grifinor.net) are used primarily as a spatial representation of real world features, they are not limited to it. The objects can contain geometric information as well as all other kinds of information that can be described in a digital form. Objects in GRIFINOR are implemented as Java objects, and a hierarchical structure has been maintained for physical object structures such as a building and its sub-parts. Therefore, there are no limitations to the extension of the objects, which in relation to the real world means that the representation of features can be extended with more and more detail and information. Since the geodata used in GRIFINOR are stored as individual objects and not as one large model file, it is easy to handle very large datasets distributed over several databases. The paper discusses advantages and disadvantages of this object-oriented approach and presents examples of use. Objects in GRIFINOR are not restricted to representing buildings or other geometrically well-defined features (Stoter and Zlatanova 2003), (Zlatanova 1998). Every part of the world represented in the GRIFINOR container is defined as an object. From a computer science point of view
creating objects even for terrain features is not difficult, but from a representational point of view terrain and other large surface areas are hard to define. So in terms of representing the real world, the ontology used in the system matters and influences the object structure significantly, but with regard to storing data in objects the ontology is of less importance and merely a question of applications. When focusing narrowly on the creation of objects around the demands of the GRIFINOR system, one might experience some drawbacks in other fields, e.g. in the field of data exchange. But the approach chosen in the GRIFINOR system actually makes it fairly easy to exchange data, since the data model in the Java object is highly generalized and not dependent on any format such as XML. This paper discusses the questions of data exchange in relation to handling objects, in particular in connection with the GRIFINOR system. Interoperability is a key feature for almost every application.
1. Introduction
The object-oriented approach described in this paper is a new way of dealing with geographically related information. The approach has been implemented in a 3D-GIS platform called GRIFINOR. The system has been described and presented on several occasions (Bodum, Kjems et al. 2005); however, understanding the design philosophy and background of GRIFINOR is essential for understanding the way geographically related objects are handled, and therefore a short introduction is necessary. GRIFINOR is the outcome of many discussions, several attempts, and finally some good programming. The idea of handling the world in three dimensions is not new now, nor was it back in 1999 when the ideas for GRIFINOR evolved (Raper 1989). At that point GIS were mainly oriented towards two-dimensional spatial analysis, and 3D-GIS were more correctly 2.5D-GIS, since the third dimension (heights and elevations) was handled as attributes of points, lines and polygons. There is nothing wrong with that: GIS had evolved through decades, and modelling the world in two dimensions makes a lot of sense in most cases. But in GRIFINOR we wanted to show the world as it is and not as coloured maps. So the main motivation for GRIFINOR was: "How to handle all kinds of features we see in our daily life". At that point we were not really concerned about whether these features were of any importance for any kind of analysis, or for that matter what size or shape the features should
have to be most suitable for the system. We simply liked the idea of being able to store all kinds of features from the real world in a container without any limitation. Here, no limits also means no model boundaries due to file sizes or lack of data. We wanted to retrieve data from a database in a way that lets the data stream be controlled by the user through his or her navigation and interaction on the screen. This means that if one moves in a certain direction, all available data simply pop up on the screen. One does not have to write SQL (Structured Query Language) queries or other kinds of import commands to extend the model in use. So the position, in terms of (X, Y, Z), and the direction in which one moves control the data retrieval. GRIFINOR has become a platform for other applications that merely use the object-oriented data handling provided by that platform. The overall focus of the following sections is the use of GRIFINOR for geographical data and for storing features from the real world. This approach is also to be understood as an application in relation to GRIFINOR and shows the potential of handling geographical data as objects. The following is a review of handling objects in general and of how this has influenced the way GRIFINOR is built; a toy sketch of the position-driven retrieval is given below.
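The sketch below illustrates the idea of retrieval driven purely by the viewer's position: objects within a radius of the current viewpoint are fetched, and nothing else has to be asked for explicitly. All class names are invented, and the real GRIFINOR indexing is considerably more involved than this linear scan.

```java
import java.util.ArrayList;
import java.util.List;

/** Toy position-driven retrieval: as the viewpoint moves, objects within
 *  a given radius are fetched and handed to the renderer, without the
 *  user issuing any explicit query. */
public class ProximityLoader {

    public static class StoredObject {
        final double x, y, z;
        final Object payload;                 // any stored feature
        StoredObject(double x, double y, double z, Object payload) {
            this.x = x; this.y = y; this.z = z; this.payload = payload;
        }
    }

    private final List<StoredObject> store = new ArrayList<>();

    public void add(StoredObject o) { store.add(o); }

    /** Everything within 'radius' of the current viewpoint (x, y, z). */
    public List<StoredObject> visibleFrom(double x, double y, double z, double radius) {
        List<StoredObject> hits = new ArrayList<>();
        double r2 = radius * radius;
        for (StoredObject o : store) {
            double dx = o.x - x, dy = o.y - y, dz = o.z - z;
            if (dx * dx + dy * dy + dz * dz <= r2) hits.add(o);
        }
        return hits;  // a real system would use a spatial index, not a linear scan
    }
}
```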
2. What are Objects
When Nygaard and Dahl invented the object-oriented approach back in the 1960s while developing Simula 67, it was actually done with the intention of dealing with real world topics such as real world phenomena and physical modelling (Birtwistle, Dahl et al. 1973). Therefore, it is rather sensible to use the object-oriented approach developed within the computer science community for describing real world features; features that reach beyond their physical dimensions and representation. The number of real world features is almost infinite, which makes it impossible to prepare even an object-oriented system that can handle them all. But object-orientation provides the possibility of defining an object of almost arbitrary kind. Figure 1 shows a simple example of an object-oriented data model that can be used in GRIFINOR.
[Figure 1 is a UML-style class diagram: class StreetLight (+id, +poleID, +mountID, +lampID) aggregates Pole (+id, +slID), Mounting (+id, +slID) and Lamp (+id, +slID, +checkBulb(): int); Bulb (+installDate: Date, +type: String) is part of Lamp; the abstract class grifinor.datarep.GO (+cid, +gm: Geometry) is extended by the classes that carry geometry. Arrows denote inheritance (extension) and aggregation (one class being part of another).]
Fig. 1. A possible street light object.
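Rendered as plain Java, the model in Fig. 1 might look roughly like the sketch below. The class grifinor.datarep.GO, the attribute names and the checkBulb operation are taken from the figure and the surrounding text; the field types, the stand-in Geometry class and the behaviour of checkBulb are filled in here purely for illustration.

```java
import java.util.Date;

class Geometry { }   // placeholder for whatever geometry representation is used

// Stand-in for the abstract grifinor.datarep.GO class: anything that
// carries geometry and can be indexed and visualized extends it.
abstract class GO {
    int cid;
    Geometry gm;
}

class StreetLight extends GO {
    int id, poleID, mountID, lampID;
}

class Pole extends GO {
    int id;
    int slID;          // back-reference to the owning StreetLight
}

class Mounting extends GO {
    int id;
    int slID;
}

class Bulb {           // no geometry of its own, so it does not extend GO
    Date installDate;
    String type;
}

class Lamp extends GO {
    int id;
    int slID;
    Bulb bulb;         // may be null: "bulb missing"

    /** Days since the bulb was installed; -1 when no bulb is mounted.
     *  A client could raise an alert when this exceeds a service interval. */
    int checkBulb() {
        if (bulb == null) return -1;
        long ms = System.currentTimeMillis() - bulb.installDate.getTime();
        return (int) (ms / (1000L * 60 * 60 * 24));
    }
}
```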
In the above example a street light is decomposed into four sub-parts, all defined as objects. Grifinor.datarep.GO is the main control object, and objects that contain geometry are defined as extensions of it (follow the arrows). At the same time, the object StreetLight defines its sub-parts as the objects Pole, Lamp and Mounting, and further the Bulb is part of the object Lamp. So the Bulb refers to the Lamp, which in turn logically belongs to the StreetLight, but with regard to geometry the object Lamp is controlled directly by the GO object. We thus find that objects consist of two major parts: a logical hierarchy and a control mechanism for drawing the geometry. Storing objects one by one, each in a separate file, is obviously a challenge for data storage, but apart from the huge number of small files this data model gives you other advantages. A few of them are enumerated in the following:
• Objects can act independently of other objects.
• Objects can have relations to each other.
• Objects can be used to store all features in separate files (cf. Figure 1), while traditional models typically contain all features together in one file. Since each feature of a small area, a city, a country or the whole world can be loaded into the system as a separate object, this way of storing data suits a distributed server structure well.
• Objects can be handled easily within a working system. This means that a single object can, for instance, be updated without interfering with others. One single object can simply replace another one. An object can
also change itself over time, which opens up the possibility of dynamic objects.
• Objects are built from lines of code. These code lines can be adapted to any kind of imaginable feature, including ones arising in the future, as long as it is possible to define the feature in a programming language. Therefore, objects can handle structures and demands which may not be obvious at the moment but may become relevant in the future.
• Objects can contain instructional code, which could be small applications letting objects react and behave in specific ways.
A set of vector data consisting of lines and polygons to describe the world is not enough anymore. Objects in GRIFINOR are designed as a consequence of that fact: the modelled world calls for a new data model. The object-oriented data model opens up new possibilities within the geographic information community. First of all, we can define data without thinking about topology or the way features are structured. Objects in GRIFINOR hold their own topology and are geometrically independent of others. Being able to handle each existing real world feature freely, as a separate entity independent of the neighbouring object, is a relief. Secondly, since we chose Java for defining each object, we thereby also chose an open standard and rejected an internal data format that stores data in binary code as most commercial systems do. In continuation of that, perhaps the biggest advantage is that objects open up the possibility of handling any kind of exchange format or data structure, because Java objects can simply contain whatever is necessary. The following sections will elaborate on these statements and thereby explain the unique way data are stored and handled as objects in GRIFINOR.
3. Data Modeling Using Objects
GRIFINOR uses no internal ontology as provided by standards such as GML, IFC or SEDRIS, but allows taking advantage of all such specifications. The ontology comes with the data and supports the original data models as they are needed by different applications. In the example depicted in Figure 1 a custom model is used for the street light, but it is possible to use any other standard or custom model. Some important modelling concepts are demonstrated by the example. Lamp cannot exist without Bulb, although Bulb can be "null", representing the fact that a bulb is missing. Note that Bulb does not "know" that it is part of Lamp. In contrast, Lamp "knows" which StreetLight it belongs to thanks to a stored
reference slID. Which of these two approaches is the better one for representing an aggregation depends solely on the application. Using call-back references allows tracing of "parents" but limits the reuse of the class: in the example Lamp can be used only with StreetLight, while Bulb can be reused in any kind of object that needs a representation of a bulb. The class grifinor.datarep.GO is an abstract class that provides the ability to store and retrieve objects in GRIFINOR, which is the subject of the next section. So if you want to be able to identify a street light with its pole, mounting, lamp and bulb, this information has to be stored in the objects. If the street light is identified as only one coherent geometry in the input data set, it will of course only be able to refer to the object GO as such. The most important issue to understand regarding GRIFINOR and the way the system copes with its objects is that all handling of objects must be through an extension of the object Grifinor.datarep.GO and is related to the geometry of the feature stored in the object (cf. Figure 1). If you want your system to handle e.g. the IFC standard, you will need to create a data model that can retrieve the geometry and the data structure of an IFC file. Here you must decide whether you primarily want to retrieve the geometry and just show the model, or whether you want to re-establish the IFC data model as related objects in your application (cf. Figure 1). The choice is also related to the interactivity in the client later on, since pointing at geometry in an application only yields information if that possibility was taken care of during data storage. Therefore, it is more or less the application and its purpose that decide the number of separate objects and how extensive an object hierarchy is to be stored together with supplementary information. Information stored in an object that contains geometry can be retrieved without knowing what to look for: the system simply withdraws the information stored together with the object. So there are no limitations such as requiring a specific designation for objects. GRIFINOR does not expect a certain amount of information provided in a certain way; it handles data the way they are stored. This gives an amazing freedom for storing and retrieving data. For a lot of people, this way of handling data and information may be considered somewhat anarchistic and even controversial. But essentially this freedom comes with the choice of using objects. It is not possible to think of all kinds of features the world may hold in the future; therefore an "open-ended" data structure that can be shaped continuously and will be able to manage all kinds of future aspects, as long as they can be coded somehow, is preferable.
4. How Objects are Stored
The first attempt at data storage in GRIFINOR was to look at Oracle Spatial 9. Oracle has a lot of functionality for handling data in a spatial relational database (Groger, Reuter et al. 2004), (Coors and Zipf 2005), but at the same time databases in general have a lot of constraints due to the table structure and the stringent way logic and relations between the stored geometry, topology, attributes etc. are controlled. In GRIFINOR we did not want to deal with constraints before we had actually designed our system. If we had chosen to develop the system on a database like Oracle Spatial, it would have hindered us from obtaining a system as open as GRIFINOR. This openness was intended from a user point of view, not from a technical point of view. GRIFINOR requires that all features, or information about features, are connected to some kind of geometry. The database search index is connected to the geometry in order to handle level of detail (LOD) in the system. Therefore it would be impossible to find any kind of data which is not connected to geometry. If you want to retrieve some information, you will have to point at or somehow select the geometry that is connected to this information. This concept is enforced by the grifinor.datarep.GO class from our example. The system accesses the data through an indexing mechanism that is based on how close the data are to a given point. In order to access the data, they must be stored using the index. Only objects that extend grifinor.datarep.GO can be stored using the index. Therefore, in the example, features represented by StreetLight, Pole, Mounting and Lamp will be retrieved and visualized by the system automatically when you are virtually within their proximity. Note that the Bulb does not extend the GO class; nevertheless the information carried in it can be retrieved thanks to Lamp. It is important to realize that the extension of GO is necessary in order to retrieve and visualize data using the system, not in order to store objects. Technically, any Java object that is allowed to be serialized can be stored in the system. Serialization is a standard functionality provided by the JVM (Java Virtual Machine, http://en.wikipedia.org/wiki/Java_virtual_machine) that allows saving objects onto a storage medium, such as a hard disk or a memory buffer, or transmitting objects over a network. Features in GRIFINOR are stored as Java objects. This is essential and opens a whole new range of possibilities for data storage within the domain of geo-information. Every piece of geometry, attribute information or even other applications can be stored in GRIFINOR. There are no
constraints on what kind of objects you want to include, which also means that you can handle objects of almost any scale. In Figure 2 you can see the range of scales GRIFINOR has to cope with: from the size of a molecule to the whole world, an order of magnitude of 10^16. Even though the main working range of GRIFINOR lies between millimetres, or perhaps metres, and about 100 km, the range easily extends further from a geographical point of view. The limitations of scale we encounter here are due to computational limits. The limitation is tightly connected to the number of bits available to describe a coordinate. A common graphics card, which is responsible for the graphics shown on the screen, has a limitation of 32 bits. Figure 2 shows two different ways of spending 32 bits on precision. We have chosen to describe the world in millimetres using the integer number type, thereby avoiding spending bits on fractions of a metre that are not used anyway. The more bits you have at your disposal, the more you can enhance the scalability of your system. GRIFINOR at the moment covers the range from millimetres to thousands of kilometres. This gives you the possibility to look from outside of the world down to details within a building.
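Because any serializable Java object can be stored, persisting a feature is essentially standard Java serialization. The sketch below writes a small feature, with coordinates kept as integer millimetres, to a memory buffer and reads it back; the SimpleFeature class and its fields are invented here, and the 64-bit long fields are a simplification of the sketch rather than a statement about GRIFINOR's internal layout.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

/** Minimal serializable feature with coordinates kept as integer millimetres,
 *  so no bits are spent on sub-millimetre fractions. */
class SimpleFeature implements Serializable {
    private static final long serialVersionUID = 1L;
    long xMm, yMm, zMm;
    String name;
    SimpleFeature(long xMm, long yMm, long zMm, String name) {
        this.xMm = xMm; this.yMm = yMm; this.zMm = zMm; this.name = name;
    }
}

public class SerializationDemo {
    public static void main(String[] args) throws Exception {
        SimpleFeature f = new SimpleFeature(3_678_000_000L, 123_456_789L,
                                            5_123_000_000L, "street light 42");
        // Serialize to a memory buffer (a file or network stream works the same way).
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(buf)) {
            out.writeObject(f);
        }
        // Deserialize and check the round trip.
        try (ObjectInputStream in =
                 new ObjectInputStream(new ByteArrayInputStream(buf.toByteArray()))) {
            SimpleFeature back = (SimpleFeature) in.readObject();
            System.out.println(back.name + " at " + back.xMm + " mm");
        }
    }
}
```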
Fig. 2. Scales of the world compared to computational limitations: precision of positioning in geographic metric space by standard numeric representations (double-precision floating point, single-precision floating point, and single-precision 32-bit integer).
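The following small Java sketch illustrates the 32-bit trade-off discussed above (an illustration only, not GRIFINOR code): a single-precision float loses millimetre detail at distances of roughly a thousand kilometres, whereas a 32-bit integer counting millimetres keeps that detail within a range of about ±2147 km.

```java
// Illustration of the 32-bit precision trade-off discussed in the text.
public class PrecisionDemo {
    public static void main(String[] args) {
        double exactMeters = 1_234_567.891;                      // ~1235 km from origin, mm detail

        float asFloat = (float) exactMeters;                      // 32-bit float: ~7 significant digits
        int asMilli = (int) Math.round(exactMeters * 1000.0);     // 32-bit int counting millimetres

        System.out.println("float representation : " + asFloat);  // millimetre detail is lost
        System.out.println("integer millimetres  : " + asMilli);  // millimetre detail preserved
        System.out.println("int range limit (km) : " + Integer.MAX_VALUE / 1_000_000);
    }
}
```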
5. Interoperability Using Objects

As mentioned earlier, GRIFINOR is designed as a distributed server-client system. This means that not only one but several servers can be used as data storage and data providers, and that several locations around the world can participate and provide data for GRIFINOR. This makes a lot of sense taking into account that GRIFINOR is a true global
system that covers the whole world seamlessly. Everybody around the world with, for instance, access to local data can provide data to the GRIFINOR system or community, simply by adding geometry as Java objects maintaining geocentric coordinates, without the use of any kind of projection.

Storing features as Java objects gives you a lot of possibilities not previously known or used in GIS applications. Apart from geometry and an object identification given by a class name, attributes in Java can contain instructional code. This means, for example, that you can give some kind of access control to an object and its attached information, or you can activate specific routines which are enabled when a user virtually gets within a certain geographic distance of that object. A simple example would be a door that opens when you get close to it. One could also imagine that another application gets executed when you point at a specific object, or that it somehow is triggered by an external action. Objects in GRIFINOR are very flexible and open up a whole new range of applications with georeferenced information and activities. Every piece of geometry can contain code. This gives the possibility to create interoperable applications, for instance for location-based services (LBS), dynamic model-maps for tourists, or simply new interfaces for information services. Objects can move, change shape and texture. They can play a sound or your favorite melody when a certain object gets into a state of your preference. Objects can be updated dynamically, which means that data can at all times be updated separately for each object without having to load a whole new dataset.

In the proposed solution all this functionality can come from resources on the network together with the data. And, more importantly, the functionality is defined in the objects together with the data model for an application. In our trivial example depicted in figure 1, each instance of Lamp knows which StreetLight it belongs to, and thanks to the extension of the GO class it has geometry and can be indexed for instant visual exploration. This has been introduced in the previous sections. But, as mentioned already, algorithms can also be defined within objects that perform operations on data. For example Lamp has a method checkBulb, which can get the date the bulb was last exchanged, compare it with the current time, and, if the period exceeds a certain limit, launch an alert message on the screen. It is important that GRIFINOR preserves this property of objects across the network in the same way as it communicates the data that constitute the objects. This provides a great deal of interoperability, because the piece of software that can process or visualize the data can come together with the data. The checkBulb method is de facto a new part of the system which is nearly perfectly interoperable on all clients of the
GRIFINOR platform. Thus, GRIFINOR can provide three-dimensional geo-visualization for different kinds of data, but also user-specific software components and information services. Giving objects a kind of independent life opens up the possibility to make them more "intelligent". In other words, we have the possibility to connect objects and make them dependent on external influences such as neighboring objects or external signals, or to let objects change through time.
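A minimal sketch of this idea is given below. The method name checkBulb comes from the text, while the signature, the one-year threshold and the alert mechanism are assumptions for illustration only.

```java
// Hypothetical sketch: the Lamp object carries its own maintenance logic, so any client
// that receives the serialized object can also run checkBulb. Threshold and alert are assumed.
class Lamp implements java.io.Serializable {
    long lastExchangeMillis;                           // when the bulb was last exchanged

    Lamp(long lastExchangeMillis) { this.lastExchangeMillis = lastExchangeMillis; }

    /** Compares the last exchange date with the current time and alerts if too old. */
    void checkBulb(long maxAgeMillis) {
        long age = System.currentTimeMillis() - lastExchangeMillis;
        if (age > maxAgeMillis) {
            System.out.println("ALERT: bulb is " + age / 86_400_000L + " days old, exchange it.");
        }
    }

    public static void main(String[] args) {
        Lamp lamp = new Lamp(System.currentTimeMillis() - 400L * 86_400_000L); // ~400 days ago
        lamp.checkBulb(365L * 86_400_000L);                                    // one-year limit
    }
}
```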
6. Conclusion

The way GRIFINOR handles data and stores objects opens up a completely new approach to GIS. Traditionally we have used technical maps or other kinds of vector geometry and identified elements like footprints of roofs or trees. In a true 3D system you can simply look at the model and see that this is a building or a tree. A modelled three-dimensional interface connected with geographical information provides some advantages as well as some disadvantages. GRIFINOR is not designed to compete with existing GIS software, which is primarily used for two-dimensional analyses, but is meant as a supplement for handling geodata in three dimensions. GRIFINOR is most of all a new interface for exploring data; what you put into it and use it for is up to you. Seeing and exploring are the key features. While traditional GIS applications focus on exploring geography through cartography and scientific analyses, 3D-GIS explores and enhances your knowledge of an area in a way much closer to what you would experience in real life: you can simply recognize the geography much more easily. The point of involving 3D in GIS is to enhance our vision and present the world in a more intuitively perceptible way, without any kind of data abstraction. Therefore 3D-GIS has completely new issues on its agenda compared with traditional GIS. It is suddenly important to represent the world as it is, no more, no less. This leads to the recognition that traditional GIS are not sufficient, or rather no longer appropriate, once the third dimension is involved, even though it is possible to handle objects in traditional RDBMSs, a task which is not an easy one. As presented in this paper, we have chosen the object-oriented approach in GRIFINOR. Mainly the advantages of that approach have been enumerated here, while the disadvantages have intentionally been omitted. Perhaps not very professional or scientific, but one can be sure that the advocates of the traditional way of handling data will take care of that part.
In the state GRIFINOR is in now, no applications have been made in earnest in which the possibilities of object handling presented in this paper are used. But, as presented earlier, since objects are built with Java code there are no real limitations apart from one's imagination. At the moment GRIFINOR only reads data in 3DS or VRML format and places the geometry in the model-map according to geocentric coordinates, but the potential of this platform is huge, as described above. A client for GRIFINOR can be built on any computer hardware that can show graphics on a screen, big or small; everything from a phone to a large VR facility is a possibility. If you want to build the application with a distributed server/client structure you will also need an Internet connection. Finally, figure 3 illustrates a typical 3D-GIS scale example taken from GRIFINOR, going from the world perspective in the first view, down to city level in the second, further down to building level in the third, and finally, in the fourth, to a level where interior with high detail can be observed.
Fig. 3. An example of scales in GRIFINOR.
References

Birtwistle, G. M., Dahl, O. J., et al. (1973). SIMULA Begin. AUERBACH Publishers Inc.
Bodum, L., Kjems, E., et al. (2005). GRIFINOR: Integrated Object-Oriented Solution for Navigating Real-time 3D Virtual Environments. In: Geo-information for Disaster Management, P. v. Oosterom, S. Zlatanova and E. M. Fendel (eds.). Berlin, Springer Verlag.
Coors, V. and Zipf, A. (2005). 3D-Geoinformationssysteme: Grundlagen und Anwendungen. Heidelberg, Herbert Wichmann Verlag.
Döllner, J. and Buchholz, H. (2005). "Continuous level-of-detail modeling of buildings in 3D city models." GIS'05: Proceedings of the 13th ACM International Workshop on Geographic Information Systems: 173-181.
Gröger, G., Reuter, M., et al. (2004). Representation of a 3D City Model in Spatial Object-Relational Databases. XXth ISPRS Congress, Geo-Imagery Bridging Continents, Commission 4. ISPRS, Turkey: 6.
Raper, J. (1989). GIS - Three Dimensional Applications in Geographic Information Systems. London, Taylor & Francis.
Stoter, J. and Zlatanova, S. (2003). Visualisation and editing of 3D objects organised in a DBMS. EuroSDR Comm. 5 Workshop on Visualisation and Rendering, ITC, Enschede, The Netherlands, EuroSDR.
Zlatanova, S. (1998). Data Structuring and Visualization of 3D Urban Data. Association of Geographic Laboratories in Europe (AGILE), Enschede, The Netherlands.
The Study and Application of Object-oriented Hyper-graph Spatio-temporal Reasoning Model

Luo Jing, Cui Weihong and Niu Zhenguo

Institute of Remote Sensing Applications, Chinese Academy of Sciences, Beijing, China. E-mail:
[email protected]
Abstract

This paper introduces the basic theory of hyper-graphs and the development status of spatio-temporal models, building on the results of earlier work. Then, by comparing various spatio-temporal data models and spatio-temporal reasoning models, and combining object-oriented technology with the practical needs of production, we construct an object-oriented hyper-graph spatio-temporal data model and spatio-temporal reasoning model and apply them in real production. In practice, this spatio-temporal reasoning model realizes the integration of space and time and has a complete spatio-temporal connotation; both for the development of spatio-temporal reasoning models and for the management of rotating parcels it has theoretical meaning and applied value.

Keywords: Object-oriented, Spatio-temporal data model, Hyper-graph, Spatio-temporal reasoning model.
1. Introduction

Spatio-temporal reasoning is a mechanism that attempts to treat how objects holding time in GIS evolve with time. How to record these changes is bound up with how to describe the object. When historical databases were studied, two concepts of time were put forward: event time and system time. The event time refers to when the change happened in the real world; the system time is used to track the change of the record [12]. Generally, we only consider the event time (Raafat 1994, Cheng 1995). Accordingly, there are three manners of expressing the change of a spatial object (Klopprogge 1981, Arian 1986, and Gadia 1988).
The first is to build a new version of the object in the relation table when one or many objects change in an event. The second is to define a new version only for the changed object. The third is only to add a new attribute field to the changed object; this is not only convenient to query, it also reduces data redundancy (a brief sketch of this third manner is given after this paragraph). Because the hyper-graph has two important features, a strict theoretical basis and a flexible structure, our emphasis is on how to make use of hyper-graph theory, combined with the application, to put forward a new spatio-temporal reasoning model. This paper develops a hyper-graph-based, profit-driven spatio-temporal reasoning model under the regional sustainable hyper-graph data model [8] and the feature-based spatio-temporal data model [7]. The paper starts by introducing the development status of spatio-temporal data models, then builds a hyper-graph-based spatio-temporal data model using object-oriented technology, and lastly combines the two so as to implement the application framework, which is tested and applied in the "YunNan province digital agriculture 3S system".
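The following is a minimal, hypothetical Java sketch of the third manner (adding a time-stamped attribute field instead of copying the whole record); the class and field names are assumptions made purely for illustration.

```java
// Sketch of the "new attribute field" manner: when a parcel changes, only a
// time-stamped crop entry is appended; the rest of the record is never duplicated.
import java.util.ArrayList;
import java.util.List;

class Parcel {
    static class CropEntry {                       // the added, time-stamped attribute field
        final int year;
        final String crop;
        CropEntry(int year, String crop) { this.year = year; this.crop = crop; }
    }

    final String id;
    final List<CropEntry> history = new ArrayList<>();

    Parcel(String id) { this.id = id; }

    void recordCrop(int year, String crop) { history.add(new CropEntry(year, crop)); }

    String cropIn(int year) {
        for (CropEntry e : history) if (e.year == year) return e.crop;
        return null;                               // no record for that year
    }
}
```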
2. The development status of the hyper-graph model

Professor F. Bouille of the University of Paris advanced the hyper-graph model in 1977. When the hyper-graph-based data structure (HBDS) theory was advanced, it attracted broad attention from international scientists. Bouille (1979) advanced six kinds of abstract data types in the hyper-graph-based data model and hyper-graph-based data structure (HBDS), and developed the theory and manner of hyper-graph modeling and database design (Bouille, 1983, 1994). The concept of attribute has a broad coverage in the hyper-graph data model: an attribute may be a simple numerical value, an n-dimensional array, a program or principle (such as an obligation condition) or a structure, and it can itself be described by a hyper-graph [11]. Thereby, attributes describe not only the static features of an object but also the dynamic features, where the dynamic features refer to the operations that describe the instance or type. The relationship between types and instances is diverse; the relation may be hierarchical or non-hierarchical. The relationship between hyper-graph classes expresses the longitudinal relationship of hyper-graph types, distinctly showing the inheritance, union and aggregation relationships of types in the object-oriented model. The non-hierarchical relationships of the hyper-
graph model express the transverse relationships of hyper-graph types; these relationships are our basis for processing spatial analysis. Spatio-temporal data models mainly include the following types: the Spatial Temporal Cube (Hagerstrand, 1970; Rudcer, 1977; Szego, 1987); Snapshots (Ross, 1985); Base State with Amendments (Langran, 1990; Peuquet, 1994); the Space-time Composite (Chrisman, 1983); and the Spatial Temporal Domain (Peuquet, 1994). Reviewing the development of the above models, we find that current spatio-temporal data models still have the following shortcomings: many models take traditional GIS as their basis, it is hard to realize spatio-temporal integration, the spatio-temporal connotation is simple, and so on [7]. The existence of these problems is inconvenient for the development of GIS. To speed up the development of GIS, we must solve these problems, or at least some aspect of them, as soon as possible. Therefore, in the following work we will try to use object-oriented technology combined with the strict theoretical basis and flexible structure of hyper-graphs to build and develop a new spatio-temporal reasoning model.
3. The hyper-graph spatio-temporal data model based on objects

Because the spatio-temporal data model is the precondition for spatio-temporal reasoning, and with the development of GPS, RS and GIS, the study of spatio-temporal data models has become a hotspot and an important frontier task of GIS [8].

3.1. Meaning of the object-oriented hyper-graph spatio-temporal data model
An entity may change in the course of its spatio-temporal evolvement, for example it may come into being, disappear, split or combine. For the spatio-temporal object entities of actual tobacco planting, namely the planting parcels, both spatial information and attribute information will change with time. The extension of object-oriented technology brings hope to new spatio-temporal data models. An object-oriented geographical data model can build a conceptual model that sufficiently expresses geographical space and time. By defining diverse semantic abstraction mechanisms, we can adopt a manner consistent with the scientific cognition model so as to build the geographical spatio-temporal
data model. Moreover, as the hyper-graph model has object features as well as its inherent hyper-graph and spatial topology features, the hyper-graph-based object model can express richer abstract relationship data than a general object model [11].

3.2 The object-oriented hyper-graph spatio-temporal data types
Geographical phenomena have three basic features; in other words, a GIS database has three basic kinds of data, referring to time, attribute and space. Spatio-temporal data refer to spatial data which evolve with time. We have previously advanced a spatio-temporal conceptual model based on the hyper-graph, with fifteen abstract data types which can completely reflect spatial and temporal attributes, spatial and temporal functions, as well as spatial correlation [7]. There are two kinds of data in our spatio-temporal reasoning model: one is the "spatio-temporal object", the other is "spatio-temporal continuous data".

Spatio-temporal object data. The spatio-temporal object is the main body of the spatio-temporal reasoning model. It refers to objects that change with time, and the emphasis is on describing the spatio-temporal character of the independent object (namely, the change of the spatial position or spatial area held by the spatio-temporal object) and the relationships with other objects (such spatial relationships as topology of time, direction, distance and so on).

Spatio-temporal continuous data. Spatio-temporal continuous data generally have the spatial area divided into many continuous units, such as pixels of a remote sensing image or districts of land; we then consider the time attribute of every unit in the continuous intervals.

This paper tries to build an object-oriented hyper-graph spatio-temporal data model and to further resolve spatio-temporal integration in the hyper-graph-based spatio-temporal data model from these two points. The model structure is expressed in figure 1. In the data model, entities are abstracted into geographical object models and feature object models, defined here using object-oriented technology. A feature object describes the object's features; it relates to the geographical object through the features it holds. A geographical object is used to specify the feature object in this model; mapping the same geographical object to different attributes, we can have the same geographical object defined as different objects, such as a point object, line object, polygon object and other objects. The spatio-temporal rela-
tionship of geographical objects can be linked through the objects' spatio-temporal feature relationships. Feature objects include the spatial feature, the temporal feature and the parcel shape feature, and the geographical object expresses the shape feature.
Fig. 1. Profit-driving spatio-temporal data model based on hypergraph theory
4. The building of the spatio-temporal reasoning model

Currently developed spatio-temporal models may be divided into theoretical models and application-oriented models. Driven by real application demands, application-oriented spatio-temporal modeling has produced many representative models in recent years [13]. For example, Muller put forward a spatio-temporal model used to describe qualitative spatio-temporal relationships of motion [2]. Pissinou and colleagues advanced a spatio-temporal relation model used in multimedia GIS in 2001 [3]; this model extends the thirteen temporal interval relationships advanced by Allen [1] into higher dimensions, and it is used to describe the topological and directional relations between objects, taking the concept of the video frame to express time. It is a discrete, qualitative model and hard to use outside multimedia. Erwig [7] used spatio-temporal predicates to express the spatio-temporal relations between objects for the first time; this model aimed at extending the traditional relational database to a spatio-temporal database, and considered the topological relations among the spatial relationships. Andrea and colleagues [14] advanced a model based on the
object movement time interval and the spatial relation interval; this model is used to analyze the spatio-temporal relations between robots in robot football, and it also considers only the qualitative directional and distance relations among the spatial relations.

In a spatio-temporal model, the confirmation of the spatio-temporal factors is the basis for spatio-temporal reasoning. According to the spatio-temporal data standard advanced by ISO/TC 211 [9], we divide the spatio-temporal factors into temporal attribute and spatial attribute, temporal function and spatial function, temporal relationship and spatial relationship. This division is the scientific basis for the framework of spatio-temporal integration, and the temporal and spatial relations in particular are the basis for spatio-temporal reasoning. With the support of hyper-graph theory, most spatio-temporal relations, especially the direct and indirect relations, can be implemented by the hyper-graph link, multi-link, hyper-link and hyper multi-link. As for fuzzy relations, they can be implemented by the fuzzy reasoning of the hyper-graph. The object's spatial elements relate to the temporal attribute in two manners. One is the temporal feature held by the object itself, such as the relation of the same parcel at different times; the other involves diverse elements, every element having its own temporal feature in the relation, such as the relation of different parcels at different times [9]. In the model advanced above, the object is taken as one spatio-temporal whole, so the entity's spatial attribute and temporal attribute are perfectly related.

According to the requirements of monitoring parcels, the parcels are the object entities on which we perform spatio-temporal reasoning, so the core problem becomes the spatio-temporal link between objects. Rotation involves comparing the soil fertility and crop product over the rotation periods. We therefore selected the economic benefit attribute and soil fertility as the main line of our spatio-temporal reasoning study, and described the links between types in the manner of section three. As the link is under the type attribute, it is a link relation based on the hyper-graph. By comparing the driving factors of economic benefit and soil fertility, we can monitor the change information of parcels so as to achieve the aim of crop rotation spatio-temporal reasoning. This spatio-temporal reasoning model is expressed in figure 2.
Fig. 2. The flow chart of spatio-temporal reasoning (spatial/temporal attributes, functions and relations linking the county, tobacco station and tobacco field levels through hyper-graph types and links)
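The following is a minimal, hypothetical Java sketch of the rotation check outlined above; the class name, the data layout and the simple crop-change rule are assumptions, not the actual implementation used in the YunNan system.

```java
// Minimal sketch (assumed class and rule): a parcel is flagged as "rotated" when the crop
// recorded for year t differs from the crop of year t-1; economic benefit and soil fertility
// of the rotated parcels could then be compared, as the text outlines.
import java.util.Map;
import java.util.TreeMap;

class RotationReasoner {
    /** crops maps year -> crop name for one parcel. */
    static boolean rotatedIn(Map<Integer, String> crops, int year) {
        String current = crops.get(year);
        String previous = crops.get(year - 1);
        return current != null && previous != null && !current.equals(previous);
    }

    public static void main(String[] args) {
        Map<Integer, String> parcel = new TreeMap<>();
        parcel.put(2004, "Corn");
        parcel.put(2005, "Tobacco");
        System.out.println("Rotated in 2005: " + rotatedIn(parcel, 2005)); // true
    }
}
```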
5. Case study

We took the change of the spatial attributes of parcels in consecutive years as the basis for deciding whether the parcels were rotated, and this manner provides monitoring of tobacco planting. Through monitoring, we can optimize the planting mode, boost the profitable income, and provide a theoretical basis for managing tobacco planting. We have implemented this technical path by monitoring the tobacco planting rotation status in YunNan province.
Fig. 3. Result of reasoning on rotation (time 2005.7, crop Tobacco, event: lack of nitrogen; time 2004.7, crop Corn, event: lack of nitrogen)
6. Conclusions

Presently, YunNan province cannot completely implement a free-style rotation mode driven entirely by economic profit. It is possible to compare the product and quality of parcels, under the control of a set of indices, so as to decide whether a parcel is to be rotated. Therefore, in the future, we will consider how to shape a spatio-temporal reasoning mechanism based on remote sensing data, with profit as the driving factor, which synthetically considers the social economic level and the environmental conditions that affect the rotation decision. Such a spatio-temporal reasoning mechanism will greatly reduce the manpower consumed in the production process, and so greatly economize on cost.
Acknowledgments

The work described in this paper was supported by a grant from the NSF of China (No. 40201020) and the project "YunNan province digital agriculture 3S system". The authors would also like to thank Professor Cui W.H. for his support.
Reference literature

[1] C. Berge. Graphs and Hypergraphs. Amsterdam: North Holland, 1973.
[2] L. Collatz. Different types of hyper-graph and several applications. The third graph theory special meeting memo in Czechoslovakia, Graph and other combinatorial topics, Aug 24-27, 1982: 30-49.
[3] M. Karomski. Random hypertree, Allocations and partitions, ibid: 159-162.
[4] Aviezri S. Fraenkel. A deletion game on hypergraphs. Discrete Applied Mathematics 30, 1991: 155-162.
[5] Endre Boros. On shift stable hyper-graphs. Discrete Mathematics 87 (1991): 81-84.
[6] Andreas Brandstadt, Victor D. Chepoi, Feodor F. Dragan. The algorithmic use of hypertree structure and maximum neighborhood orderings. Discrete Applied Mathematics 82 (1998): 43-77.
[7] Cui W.H. Development of a feature-based spatio-temporal data model for dynamic monitoring of land use change. Mapping magazine, Vol. 33, No. 2, May 2004: 138-145.
[8] Cui W.H. The study of spatial data structure. Science publishing company, 1995.
[9] ISO/TC 211. Final text of CD 19115 Geographic information. Part 7: Spatial Schema; Part 8: Temporal Schema [R], 2001 (CD).
[10] Usery E.L. Feature-Based Geographic Information System Model. Photogrammetric Engineering & Remote Sensing, Vol. 62, No. 7 [J], 1996: 833-838.
[11] Zhang Jin. The object-oriented hyper-graph spatio-temporal data model. Mapping magazine, May 1999.
[12] Shu Hong, Chen Jun, Du Daosheng, Zhou Yongqian. The object-oriented spatio-temporal data model. Journal of Wuhan Technical University of Surveying and Mapping, 1997.9.
[13] Liu Dayou, Hu He, Wang Shengsheng, Xie Qi. The study development of spatio-temporal reasoning. Journal of Software, Vol. 15, No. 8, August 2004.
[14] Langran G. Time in Geographic Information Systems. Taylor & Francis, London New York Philadelphia [M], 1992: 121-157.
Using 3D Fuzzy topological relationships for Checking of Spatial Relations between Dynamic Air Pollution Cloud and City Population Density

Roozbeh Shad, Mohamad Reza Malek, Mohammad Saadi Mesgari, Alireza Vafaeinezhad

Faculty of Geodesy and Geomatics Eng., K.N. Toosi University of Technology, No 1346, Mirdamad cross, Valiasr st., Tehran, IRAN. Email: {Rouzbeh_Shad.Avafaeinezhad}@yahoo.com,
[email protected],
[email protected]
Abstract

One of the important mechanisms for representing topological relations between spatial objects is spatial topology. These relationships help us to model the relations between spatial objects in a simple and efficient manner. In achieving this goal there are some difficulties, such as the complexity of three-dimensional relationships, the fuzzy spatial properties of objects, and the dynamic relationships between them. For example, suppose there is an air pollution cloud moving through the spatial area of a city. This phenomenon can be defined as a 3D fuzzy object in which each point has a membership value in the universal pollution set. Likewise, the density of population can be defined as a 3D fuzzy object in which each point has a membership value in the universal population density set of the urban area. Both of them are therefore 3D fuzzy objects, and they have dynamic topological relationships which help urban managers and decision makers to solve related problems such as managing population distribution, controlling traffic, arranging health care decisions and controlling air pollution. With respect to the above, in this paper we examine how 3D fuzzy topological relations between the two mentioned phenomena can be defined, and we propose an efficient method for modeling them in a GIS environment. We thereby come closer to 3D fuzzy modeling and to designing a fuzzy user interface. Finally, by creating a small knowledge base system we show a new horizon for implementing this application in spatial systems.

Keywords: GIS, Fuzzy, 3D, Topology, Population density, Pollution cloud
1. Introduction

In a GIS environment we usually map spatial features schematically for representation and use mathematical and simulation models for processing, analysis and decision making [1]. In achieving these goals there are some difficulties, such as the complexity of defining boundaries for some features, the increasing volume of data with 3D spatial properties, and the dynamic (spatio-temporal) properties of objects. These difficulties make GIS models more complex and less effective. It is therefore necessary for spatial systems to use flexible models which are able to solve these problems efficiently for certain kinds of features. One group of features which is very important in people's lives is the environmental conditions. There is a dynamic interaction between human life and environmental impact (Figure 1). Many things, such as economic decisions, cultural variations and political strategies, are related to the environment in which people live.
Fig. 1. Interaction between environment and human life
In this case study we consider air pollution as an environmental parameter which can directly affect the quality of the physical and mental health of people and can indirectly create economic, cultural and political problems in society. A pollution cloud consists of poisonous suspended materials (such as CO, CO2, SO2, ...) which are very harmful for people's health, especially in metropolitan areas with a highly concentrated population. We can therefore consider this parameter as a 3D, dynamic, undefined-boundary spatial feature in a GIS environment and check its topological relationships with the population density area using GIS spatial analysis tools. Like the pollution cloud, population density can be considered a 3D, dynamic, undefined-boundary spatial object which can be influenced by the pollution cloud. Several approaches have been proposed for identifying topological relations between crisp spatial objects. Corbett (1979) introduced the algebraic topological structure for cartographic modeling. Allen (1983) identified 13 relations between two temporal intervals. The breakthrough on topological relations between spatial objects was made by the well-known
4-intersection and 9-intersection approaches proposed by Egenhofer and Franzosa (1991). A lot of research has been done on this aspect (Egenhofer and Herring 1990a, 1990b, Herring 1991, Egenhofer et al. 1994a, 1994b, Egenhofer and Franzosa 1994, van Oosterom 1997, Molenaar 1996, Chen et al. 2001). On the other hand, Kainz et al. (1993) investigated topological relations from the perspective of poset and lattice theory. Randell et al. (1992) described topological relations using their RCC (Region Connection Calculus) theory, which is based on logic. In this paper our main purpose is focused on 3D topological modeling of the relations between the two fuzzy objects mentioned above (3DFT). We call these objects "fuzzy objects" because their boundaries are undefined and we must describe them using fuzzy concepts in the GIS environment [2,3,4,5].
2. Fuzzy properties of Spatial Objects

Our world consists of indefinite spatial objects which are continuous and defined in an indefinite space. In the GIS environment we use data abstraction and digitalization, defining a spatial dimension, boundary and different attributes for each conventional object in the real world. These data are entered into the spatial models digitally; it is therefore necessary to define them mathematically and to increase the models' flexibility, efficiency and simplicity. Based on point set theory, each region A consists of an interior (A°), a boundary (∂A) and an exterior (A^e). The Egenhofer 9-intersection matrix uses these concepts for defining topological relations between crisp regions. Here we define a 3D fuzzy region based on point set theory.

Definition 1: If a simple fuzzy region is called A, we can define its interior as follows:

    μ_A°(x,y,z) = μ_A(x,y,z)  for (x,y,z) ∈ supp(A)°;  0  otherwise        (1)
where μ_A(x,y,z) is called the membership value and supp(A) is defined as:

    supp(A) = { (x,y,z) : μ_A(x,y,z) > 0 }        (2)
There are different definitions of the boundary described by researchers; we use the definition below.

Definition 2: If A is considered a fuzzy region, the membership function of ∂A is defined as follows [6,7]:

    μ_∂A(x,y,z) = 2 · min[ μ_A(x,y,z), 1 − μ_A(x,y,z) ]        (3)

This boundary satisfies the following conditions: (1) the maximum membership value is reached at μ_A(x,y,z) = 1/2; (2) if a point belongs with certainty to the interior or the exterior, then μ_∂A(x,y,z) = 0; (3) the boundary of A^e is equal to the boundary of A°.
Definition 3: If A is a simple fuzzy region, the membership value of A^e can be defined as follows:

    μ_A^e(x,y,z) = μ_A(x,y,z)  for (x,y,z) ∈ supp(A)^e;  0  otherwise        (4)
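A small Java sketch of the boundary formula of Definition 2 is given below; the radially decreasing membership function is purely an assumed toy example, not data from the case study.

```java
// Sketch of the fuzzy boundary of Eq. (3) for a toy 3D fuzzy region: membership decreases
// linearly from 1 at the centre to 0 at an assumed radius of 100 units.
public class FuzzyRegion {
    /** Membership of the fuzzy region A at (x, y, z). */
    static double muA(double x, double y, double z) {
        double r = 100.0;                                  // assumed cloud radius
        double d = Math.sqrt(x * x + y * y + z * z);
        return Math.max(0.0, 1.0 - d / r);
    }

    /** Boundary membership, Eq. (3): 2 * min(muA, 1 - muA). */
    static double muBoundary(double x, double y, double z) {
        double m = muA(x, y, z);
        return 2.0 * Math.min(m, 1.0 - m);
    }

    public static void main(String[] args) {
        System.out.println("core point boundary membership     = " + muBoundary(0, 0, 0));  // 0.0
        System.out.println("half-way point boundary membership = " + muBoundary(50, 0, 0)); // 1.0
    }
}
```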
3. Binary Topological Relations for 3D Fuzzy Regions

In this section we use the Egenhofer 9-intersection matrix [8] to represent 3D topological relations between two simple 3D objects. In 3D space we can divide objects into point (P), line (L), surface (S) and body (B) according to their topological dimension. Accordingly, different kinds of relations may occur between two spatial objects, related to their topological dimensions, which can be denoted by R(P,L), R(L,S), ... [9].
The topological intersection between two objects depends on parameters such as the space dimension, the topological dimension and the kind of boundary (continuous or not). For example, two regions cannot intersect without an intersection of their boundaries. In 3D space, object boundaries have a 2D topological dimension and are continuous, and we have four kinds of topological elements (0D, 1D, 2D, 3D).
Fig. 2. 3D topological relation between two simple objects
These topological elements can be defined as follows:

    0D_B = {P1, P2, P3, P4}            0D_A = {P'1, P'2, P'3, P'4}
    1D_B = {L1, L2, L3, L4, L5, L6}    1D_A = {L'1, L'2, L'3, L'4, L'5, L'6}
    2D_B = {S1, S2, S3, S4}            2D_A = {S'1, S'2, S'3, S'4}
    3D_B = {B}                         3D_A = {B'}

Each element of A can be related to every element of B, and the kind of these relationships can be shown using the 9-intersection matrix. The number of element-pair relationships is then given by the following table.
Table 1. Number of topological relations between two simple 3D objects

            0D_B   1D_B   2D_B   3D_B
    0D_A     16     24     16      4
    1D_A     24     36     24      6
    2D_A     16     24     16      4
    3D_A      4      6      4      1
Many of the relations in this table are not real, and we cannot find such relationships in the real world. Using some constraints and conditions we can extract the real relationships between two simple objects. With respect to our application, the pollution cloud and the population density are considered two body objects, so we have one relation between 3D and 3D elements, which represents 512 (2^9) possible states of the 9-intersection matrix. Of these matrices only 8 are real; they are shown in figure 3.
Fig. 3. The eight topological relations between two simple 3D bodies: disjoint, meet, contains, covers, inside, coveredBy, equal, overlap [9]
Therefore, we can create the 9-intersection matrix using a characteristic function Γ and the definite set V9:

    V(A,B) = | A° ∩ B°    A° ∩ ∂B    A° ∩ B^e  |
             | ∂A ∩ B°    ∂A ∩ ∂B    ∂A ∩ B^e  |        (5)
             | A^e ∩ B°   A^e ∩ ∂B   A^e ∩ B^e |

    Γ(v) = 1  if v ≠ ∅;  0  otherwise        (6)

4. Fuzzy 9-intersection matrix
If the fuzzy 9-intersection matrix between A and B is denoted F9, we can say [10]:

    μ_F9(v) = h(v)  for all v ∈ V9        (7)

where h denotes the height, i.e. the maximum membership value, of the intersection of the two objects A and B.
    F(A,B) = | h(A° ∩ B°)    h(A° ∩ ∂B)    h(A° ∩ B^e)  |
             | h(∂A ∩ B°)    h(∂A ∩ ∂B)    h(∂A ∩ B^e)  |        (8)
             | h(A^e ∩ B°)   h(A^e ∩ ∂B)   h(A^e ∩ B^e) |
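The following sketch approximates the fuzzy 9-intersection matrix of Eq. (8) by sampling on a coarse grid. The use of min as the t-norm, the use of 1 − μ as the exterior membership, and the two spherical membership functions are all assumptions made for illustration and do not reproduce the paper's procedure exactly.

```java
// Rough, discretized sketch of Eq. (8): the height h of each cell is approximated by the
// maximum over a sampling grid of min(part of A, part of B). Membership functions are toy examples.
public class Fuzzy9Intersection {
    interface Mu { double at(double x, double y, double z); }

    // {interior, boundary, exterior} memberships derived from a single membership value
    // (exterior taken here as 1 - mu, an assumption; boundary follows Eq. (3)).
    static double[] parts(double m) {
        return new double[]{ m, 2.0 * Math.min(m, 1.0 - m), 1.0 - m };
    }

    static double[][] matrix(Mu a, Mu b) {
        double[][] f = new double[3][3];
        for (double x = -200; x <= 200; x += 10)          // coarse sampling grid (assumed extent)
            for (double y = -200; y <= 200; y += 10)
                for (double z = 0; z <= 200; z += 10) {
                    double[] pa = parts(a.at(x, y, z));
                    double[] pb = parts(b.at(x, y, z));
                    for (int i = 0; i < 3; i++)
                        for (int j = 0; j < 3; j++)
                            f[i][j] = Math.max(f[i][j], Math.min(pa[i], pb[j]));
                }
        return f;
    }

    public static void main(String[] args) {
        Mu cloud  = (x, y, z) -> Math.max(0, 1 - Math.sqrt(x * x + y * y + (z - 100) * (z - 100)) / 120);
        Mu people = (x, y, z) -> Math.max(0, 1 - Math.sqrt(x * x + y * y + z * z) / 150);
        double[][] f = matrix(cloud, people);
        for (double[] row : f) System.out.printf("%.2f  %.2f  %.2f%n", row[0], row[1], row[2]);
    }
}
```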
5. Similarity

Using the similarity concept between two objects which have crisp and fuzzy topological relations, the kind and the strength of the relation can be determined. To achieve this, we first determine the kind of relation between the two objects using the 9-intersection matrix. Then it is necessary to determine the spatial points which certainly belong to the boundary set (μ_∂A = 1). Consequently, we can calculate the amount of similarity between the two matrices using the t-norm and s-norm concepts [10]:

    μ(F9, r) = [ (F9 t-norm r) s-norm (F9′ t-norm r′) ]_γ        (9)

where F9′ and r′ denote the complements of F9 and r. In this relation γ is the minimum membership value of the computed fuzzy set, and μ(F9, r) shows the amount of similarity between F9 and r1, r2, r3, r4, r5, r6, r7 or r8 (the crisp matrices of the eight relations). Consequently, the value of μ(F9, r) determines the kind of 3D topological relation and its strength. By classifying μ(F9, r) and labeling each class, the strength of the relation is expressed using the terms "no", "slightly", "somewhat", "mostly" and "clearly":
    q(r) = 'no'         if μ(F9, r) ≤ 0.03
           'slightly'   if 0.03 < μ(F9, r) ≤ 0.3
           'somewhat'   if 0.3  < μ(F9, r) ≤ 0.6
           'mostly'     if 0.6  < μ(F9, r) ≤ 0.96
           'clearly'    if 0.96 < μ(F9, r)                (10)
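A minimal sketch mapping the similarity value to the linguistic labels of Eq. (10) is shown below; only the thresholds are taken from the equation, everything else is assumed for illustration.

```java
// Sketch of the classification of Eq. (10): similarity value -> linguistic label.
public class SimilarityLabel {
    static String q(double mu) {
        if (mu <= 0.03) return "no";
        if (mu <= 0.3)  return "slightly";
        if (mu <= 0.6)  return "somewhat";
        if (mu <= 0.96) return "mostly";
        return "clearly";
    }

    public static void main(String[] args) {
        System.out.println("similarity 0.72 -> " + q(0.72) + " (e.g. \"mostly overlap\")");
    }
}
```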
6. Designing a spatial knowledge base system
Fuzzy spatial knowledge base systems work on the basis of fuzzy spatial knowledge. Designing and implementing these kinds of systems and connecting them to spatial interfaces helps us to obtain reliable and stable outputs. Linguistic variables play a very important role in the conditional records which are used in the structure of the knowledge base as
If-Then rules. For this purpose, we must define the topological relations and linguistic terms efficiently and enter them as If-Then rules into the knowledge base. The effect of the air pollution cloud on the population density is then formulated using fuzzy topology and expert knowledge in a control system.
Fig. 3. Control system architecture (crisp topological relations → fuzzifying using similarity → knowledge base → inference engine)
Considering the importance of dynamic fuzzy topological relations for decision making, the knowledge base is designed using the following rules, and the air pollution control center can then make deductions online, based on these rules, using the following inference engine:

    B'(y) = s-norm( A'(x) t-norm R_{A→B}(x, y) ) = s-norm( t-norm( t-norm( A_i(x_i) ), B_i(y_i) ) )        (11)

where A_i(x_i) are the "If" parts of the rules, B_i(y_i) are the "Then" parts of the rules, A'(x) is the actual condition observed in our system, and R_{A→B}(x, y) is the relational equation which creates the relationship between A_i and B_i.
Fig. 4. Defining fuzzy rules (membership functions A(t) for the "If" part and B(t) for the "Then" part)
Consequently, the system can make inferences and announce the level of danger to the air pollution center using the above rules. Based on real-time analysis, the air pollution center will then be able to make decisions for controlling air pollution and determining its effect on people's lives.
7. Conclusions

The analysis presented here provides a system for reasoning about 3D topological relations between uncertain 3D spatial objects in a GIS environment. Modeling this problem plays an important role in real-time decision making and in designing warning systems, especially for air pollution and its effect on people's lives. The most significant result here is the design of an effective procedure for extracting 3D fuzzy topological relationships between two 3D spatial regions (air pollution and population density). This is done on the basis of a similarity computation between the 3D fuzzy 9-intersection matrix and the crisp Egenhofer 9-intersection matrix. The linguistic variables extracted from the similarity computation enable an intuitive, informative and generalized description to be made for the knowledge rules.
Acknowledgements

I am very grateful to Dr. Malek and Dr. Mesgari for their invaluable comments on this paper.
References

[1] P.A. Burrough, Natural objects with indeterminate boundaries, in: P.A. Burrough, A.U. Frank (Eds.), Geographic Objects with Indeterminate Boundaries, GISDATA II, Taylor & Francis, Great Britain, 1996, pp. 3-28.
[2] J.T. Bjørke, Fuzzy set theoretic approach to the definition of topological spatial relations, in: J.T. Bjørke (Ed.), ScanGIS'95, 12-14 June 1995, Department of Surveying and Mapping, Norwegian Institute of Technology, 7034 Trondheim, Norway, pp. 197-206.
[3] E. Clementini, P. Di Felice, An algebraic model for spatial objects with indeterminate boundaries, in: P.A. Burrough, A.U. Frank (Eds.), Geographic Objects with Indeterminate Boundaries, GISDATA II, Taylor & Francis, Great Britain, 1996, pp. 155-169.
[4] A.G. Cohn, N.M. Gotts, The egg-yolk representation of regions with indeterminate boundaries, in: P.A. Burrough, A.U. Frank (Eds.), Geographic Objects with Indeterminate Boundaries, GISDATA II, Taylor & Francis, Great Britain, 1996, pp. 171-187.
[5] A.J. Roy, J.G. Stell, Spatial relations between indeterminate regions, Internat. J. Approx. Reasoning 27 (2001) 205-234.
[6] E.C. IbWa, I. TarrWes, On the boundary of fuzzy sets, Fuzzy Sets and Systems 89 (1997) 113-119.
[7] Y. Leung, On the imprecision of boundaries, Geographical Anal. 19 (2) (1987) 125-151.
[8] M.J. Egenhofer, R.D. Franzosa, Point-set topological spatial relations, Internat. J. Geographical Inform. Systems 5 (2) (1991) 161-174.
[9] Egenhofer, M.J., Herring, J.R., 1990. A mathematical framework for the definition of topological relationships, in: Proceedings of the Fourth International Symposium on Spatial Data Handling, Zurich, Switzerland, pp. 803-813.
[10] R.R. Yager, Some procedures for selecting fuzzy set-theoretic operators, Internat. J. General Systems 8 (1982) 115-124.
[12] Corbett J., 1979. Topological principles of cartography. Technical Report 48, Department of Commerce, Bureau of the Census.
[13] Allen J.F., 1983. Maintaining knowledge about temporal intervals. Communications of the ACM 26(11): 832-843.
[14] Egenhofer M.J. and Franzosa R., 1991. Point-set topological spatial relations. International Journal of Geographic Information Systems, 5(2): 161-174.
[15] Egenhofer M.J. and Franzosa R., 1994. On the equivalence of topological relations. International Journal of Geographical Information Systems, 8(6): 133-152.
[16] Randell D.A., Cui, Z. and Cohn, A.G., 1992. A spatial logic based on regions and connection. In: Proceedings of the 3rd International Conference on Knowledge Representation and Reasoning, Morgan Kaufmann, San Mateo, pp. 165-176.
3D Modeling Moving Objects under Uncertainty Conditions

Shokri, T.¹, M. R. Delavar¹, M. R. Malek¹, A. U. Frank¹,² and G. Navratil²

¹Center of Excellence in Geomatics Engineering and Disaster Management, Dept. of Surveying and Geomatics Engineering, Engineering Faculty, University of Tehran, Tehran, Iran.
tala.shokri@eng.ut.ac.ir, [email protected], [email protected]
²Institute of Geoinformation and Cartography, TU WIEN, Austria
[email protected], navratil@geoinfo.tuwien.ac.at
Abstract

Geospatial information systems (GIS) have been applied in modeling environmental and ecological systems. 3D moving objects are spatio-temporal objects whose location and/or extent change over time, and they are among the recent evolutions that emerged to fulfill some of the new requirements of the GI community. Many earlier works were based on the assumption that exact trajectory information was available (or could be obtained) at every time instant. Unfortunately, this assumption cannot be guaranteed in real applications, where trajectory information is inherently associated with uncertainty and a lack of complete and precise knowledge. In this paper we explore how a trajectory is influenced by uncertainty. We then study the nature of 3D moving object trajectories in the presence of uncertainty and introduce two data models for the uncertain trajectories used to represent moving objects. Using these models in a case study, we obtain the most probable answer for a 3D moving object path under uncertain conditions and lack of knowledge.

Keywords: moving object, sampling error, uncertainty, temporal GIS, trajectory uncertainty
1. Introduction

For some spatio-temporal applications it can be assumed that the modeled world is precise and bounded. While these simplifying assumptions are sufficient in some applications, they are unsuitable for many others, such as navigational applications that manage data with spatial and/or temporal extents. Moving object databases appear in numerous applications such as emergency services, navigational and military services, flight management and tracking, m-commerce, and various location-based services such as fleet management, vehicle tracking and mobile advertisements. These advancements demand new techniques for managing and querying changing location information. One of the key research issues with moving objects is the management of uncertainty. Many earlier works were based on the assumption that exact trajectory information was available (or could be obtained) at every time instant. Unfortunately, this assumption cannot be guaranteed in real applications, where trajectory information is inherently associated with uncertainty and a lack of complete and precise knowledge. Inspired by the importance of this subject, we first explore how a trajectory is influenced by uncertainty, and then study the nature of moving object trajectories in the presence of uncertainty. We introduce two data models for uncertain trajectories used to represent moving objects in 3D space (2D positional and 1D temporal). Using these models, we can query uncertain databases of moving objects with measurement errors.

There are several works on modeling and querying moving objects with uncertainty. Pfoser and Jensen [7] discuss spatio-temporal indeterminacy for moving object data and present a formal quantitative approach to include uncertainty in modeling the moving object. The authors limit the uncertainty to the previous positions of the moving objects, and the error may become very large as time passes [7]. They describe a methodology to compute and utilize error information of moving objects' trajectories. The approach, however, is limited to point objects, and it does not take temporal errors into account. Pfoser and Tryfona [8] take a more pragmatic approach in that the world is modeled in terms of spatial data types, and fuzziness is expressed in relation to the data types and the operations on them. Trajcevski et al. [3] address the problem of querying moving object databases which capture the inherent uncertainty associated with the location of moving objects. They propose a model of the trajectory as a 3D cylindrical body that incorporates uncertainty in a manner that enables
efficient querying. Yu et al. [1] propose a practical framework and mathematical basis for managing and capturing multidimensional, continuously changing data objects. Mokhtar and Su [4] introduce a data model for uncertain trajectories of moving objects in which the trajectory is a vector of uniform stochastic processes. Ding and Güting [2] discuss how the uncertainty of network-constrained moving objects can be reduced by using reasonable modeling methods and location update policies, and present a framework to support variable accuracies in representing the locations of moving objects. The rest of the article is structured as follows. In section 2 we define uncertainty concepts for a trajectory. Section 3 presents two major sources of uncertainty in a trajectory. In section 4 we present our two proposed models for querying a database of trajectories with uncertainty, using weighted interpolation for a 3D moving object's location. Finally, section 5 gives concluding remarks and outlines directions for future work.
2. Uncertainty in Moving Objects

Uncertainty is an inherent property of the location information of moving objects. Unless uncertainty is captured in the model and query language used, the burden of factoring uncertainty into the answers to queries is left to the user. For example, consider a ship equipped with GPS that can transmit its positions to a central computer, where the data is processed and utilized. Example queries occurring in such an application are as follows:
• Which ship is nearest to a destroyed ship?
• At what time will "ship A" reach "island B"?
• Compute the best direction of motion for the ship in order not to bump into a sighted rock.
If we consider the uncertainty in the information and the trajectory, these questions have no clear answers. Taking uncertainty into account, we can restate the questions as follows:
• Which ship will be, with a 50% probability, within 100 meters of ship A in 20 minutes?
• How likely is it that "ship A" reaches "island B" without being hit by a storm that will happen between 9:00 and 12:00?
• Compute the minimum temporal range such that ship A could be in region B.
To answer these questions we need an abstract model with quantifiable uncertainty and by using that model, we can obtain the most probable answer.
3. Types of Uncertainty in Trajectory

A first step in incorporating uncertainty into a representation of trajectories is to quantify it. In this section we therefore define the errors introduced by the trajectory acquisition process.
3.1. Measurement error

Generally, an error can be introduced by inaccurate measurements [5]. The accuracy, and thus the quality, of the measurement depends largely on the technique used. This paper assumes that GPS is used for the sampling of positions. Two assumptions are generally made when talking about the accuracy of GPS. First, the error distribution is assumed to be Gaussian. Second, the horizontal error distribution is assumed to be circular [9]. Figure 1 visualizes the error distribution. In addition to the mean, the standard deviation σ is a characteristic parameter of a normal distribution. Within the range of ±σ of the mean, 39.35% of the probability is concentrated in a bivariate (2-dimensional) normal distribution.
Fig. 1. Positional error in the GPS [7]
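As a brief check of the 39.35% figure: if the x and y errors are independent Gaussians with equal standard deviation σ, the radial error follows a Rayleigh distribution, and the probability of falling within a circle of radius σ is

    P(r ≤ σ) = 1 − e^(−σ² / (2σ²)) = 1 − e^(−1/2) ≈ 0.3935,

i.e. about 39.35%.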
3.2. Uncertainty in sampling

We capture the movement of an object by sampling its position with a GPS receiver at regular time intervals. This introduces uncertainty about the position of the object that is affected by the frequency with which position samples are taken, i.e., the sampling rate. This, in turn, may be set
by considering the speed of the object and the desired maximum distance between consecutive samples [7]. Looking at Figure 2(a), one would assume that the straight line best resembles the actual trajectory of the object. In Figure 2(b) we obtain a better trajectory, more similar to the actual one, by increasing the sampling rate or decreasing the speed of the moving object. The difference between the two trajectories shows the "uncertainty".
Fig. 2. Uncertainty in a trajectory: (a) simple trajectory; (b) trajectory with an increased number of sampling points
For better understanding, consider the trajectory in a time interval [t1, t2] delimited by consecutive samples. We know two positions, P1 and P2, as well as the object's maximum speed, Vm (see Figure 3). If the object moves at maximum speed Vm from P1 and its trajectory is a straight line, its position at time tx will be on a circle of radius r1 = Vm(tx − t1) around P1 (the smaller dotted circle in Figure 3). Thus, the points on the circle represent the furthest positions away from P1 the object can reach at time tx. If the object's speed is lower than Vm, or its trajectory is not a straight line, the object's position at time tx will be somewhere within the area bounded by the circle of radius r1. Next, we know that the object will be at position P2 at time t2.
Fig. 3. Uncertainty between samples [8]
Thus, applying the same assumptions again, the object's position at time tx is on the circle with radius r2 = Vm(t2 − tx) around P2. If the object moves more slowly or its trajectory is not a straight line, it is somewhere within the area
bounded by the dotted circle. The above constraints on the position of the object mean that the object can be anywhere in the intersection of the two circular areas at time tx. This intersection is shown by the shaded area in Figure 3. In the following, we present two models to handle this uncertainty in order to calculate the most probable answer within the shaded area.
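A small Java sketch of this feasibility test is given below; the function and variable names are illustrative and not taken from the paper.

```java
// Sketch of the constraint implied by Figure 3: a candidate (x, y) at time tx is possible
// only if it is reachable from P1 and can still reach P2 at maximum speed Vm.
public class UncertaintyLens {
    static boolean feasible(double x, double y, double tx,
                            double x1, double y1, double t1,
                            double x2, double y2, double t2, double vMax) {
        double r1 = vMax * (tx - t1);                 // radius reachable from P1 by time tx
        double r2 = vMax * (t2 - tx);                 // radius from which P2 is still reachable
        return Math.hypot(x - x1, y - y1) <= r1 && Math.hypot(x - x2, y - y2) <= r2;
    }

    public static void main(String[] args) {
        // P1 = (0, 0) at t = 0 s, P2 = (80, 0) at t = 10 s, Vm = 10 m/s, candidates at t = 5 s
        System.out.println(feasible(40, 20, 5, 0, 0, 0, 80, 0, 10, 10));   // true  (inside the lens)
        System.out.println(feasible(40, 40, 5, 0, 0, 0, 80, 0, 10, 10));   // false (outside the lens)
    }
}
```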
4. Two Models for Trajectories with Uncertainty

As mentioned, with GPS technology a 3D moving object's position can be determined instantaneously with some error, and the moving object's speed is affected accordingly. This leads us to introduce a new approach to estimating the 3D position (x, y, t) between sampling points, or the future position of a moving object, in order to obtain the most probable answer under uncertainty. We will not consider any error connected to the times of measurement; we assume that we know precisely the time when a position sample was observed. This assumption seems justified when using GPS, with its precise clocks, as the measuring device. In this section we introduce two models for our uncertain trajectory.
4.1. The first model

First, we want to find a moving object's 3D position in the future. As shown in Figure 4, we have a moving object whose positions at certain time intervals are recorded, and we want to find its position at a precise time t_n in the future. In this model it is assumed that the direction of the object's movement is known and definite. Considering a variable speed for the moving object, we assume maximum and minimum speeds, called Vmax and Vmin, between each two points. They are calculated as follows:

    V_i,min = (d_i − ε_i) / (t_{i+1} − t_i)        (1)

    V_i,max = (d_i + ε_i) / (t_{i+1} − t_i)        (2)
where ε_i is the measurement error, d_i is the distance between consecutive points, i is a counter over the sampling points, and n is the number of points, as shown in Figure 4. As represented in Equations (3) and (4), we can calculate d_n,min and d_n,max (the minimum and maximum distances between the last sampling point and the moving object's 3D position at the precise future time t_n), and estimate the uncertainty range R_n for the new position at time t_n:

    d_n,max = max(V_i,max) · (t_n − t_{n−1})        (3)

    d_n,min = min(V_i,min) · (t_n − t_{n−1})        (4)

    R_n = [ d_n,min , d_n,max ]        (5)
Fig. 4. Trajectory of a moving object with uncertainty

With this method we can calculate a span within which the moving object will actually be at time t_n (Equation (5)).
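The following sketch implements Equations (1) to (5) under the stated assumptions (known direction of movement, symmetric measurement error per segment); the array layout, names and example values are illustrative only.

```java
// Sketch of the first model: per-segment speed bounds and the resulting distance span at tn.
public class Model1Span {
    /** Returns {dMin, dMax}: the span of distances from the last sample at future time tn. */
    static double[] span(double[] d, double[] eps, double[] t, double tn) {
        double vMin = Double.POSITIVE_INFINITY, vMax = 0.0;
        for (int i = 0; i < d.length; i++) {
            double dt = t[i + 1] - t[i];
            vMin = Math.min(vMin, (d[i] - eps[i]) / dt);   // Eq. (1)
            vMax = Math.max(vMax, (d[i] + eps[i]) / dt);   // Eq. (2)
        }
        double lead = tn - t[t.length - 1];                // time beyond the last sample
        return new double[]{ vMin * lead, vMax * lead };   // Eqs. (3)-(5)
    }

    public static void main(String[] args) {
        double[] d   = {50, 48, 52};                       // distances between samples (m)
        double[] eps = {1, 1, 1};                          // measurement errors (m)
        double[] t   = {0, 10, 20, 30};                    // sample times (s)
        double[] r   = span(d, eps, t, 35);
        System.out.printf("object between %.1f m and %.1f m from the last sample%n", r[0], r[1]);
    }
}
```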
4.2. The second model

In the second model we want to find the unknown position M of a moving object. The accuracy of each point differs depending on its measuring technique, so the points in this model have different errors. In comparison with the model shown in Figure 4, in this model we consider more sampling points and calculate the probable positions M_i by 3D linear interpolation between each point i and point n. Then, in order to calculate the most probable answer among the probable positions, we use weighted interpolation among these positions, in which the weights for all the points are given by Equations (6) and (7) [6]:
    w_1i = (1 / σ_i) / Σ_{k=1..n} (1 / σ_k)        (6)

    w_2i = (1 / d_i) / Σ_{k=1..n} (1 / d_k)        (7)

where σ_i is the measurement error of sampling point i and d_i is the spatio-temporal distance between each point and the unknown position M. w_1i is the weight corresponding to the measurement error and w_2i is the weight introduced for the spatio-temporal distance. d'_i is the projection of d_i into 2-dimensional space (the x and y coordinates, Equation (8)), n is the number of sampling points and i is a counter over the sampling points. In Equations (8) and (9), (x_i, y_i, t_i) are the 3D coordinates of sampling point i and (x, y, t) are the 3D coordinates of the unknown point M (approximated per point by the probable position M_i, see below):

    d'_i = √( (x_i − x)² + (y_i − y)² )        (8)

    d_i = √( (x_i − x)² + (y_i − y)² + (t_i − t)² )        (9)
In order to calculate d_i, we consider an approximate 3D position for M, called M_i, at time t_x, which can be calculated by 3D linear interpolation between the two desired points. Finally, we can combine the two introduced weights and calculate M(x, y, t) as presented in Equation (10) [6]. We have considered five sampling points of the trajectory of a moving object, of which some have larger measurement errors than the others: point 5, shown in Figure 5, is determined with precise coordinates, points 1 and 4 are determined with an accuracy of 1 meter, and the others have 3 meter accuracy. We can then find an unknown position at a precise time t_x for the moving object shown in Figure 5.
Fig. 5. Trajectory of a moving object in 3D space
x = Σ_i=1..n w_i x_i     (10)
All database information is shown in Table 1 and the final results of the modeling are shown in Table 2. In this example, four estimates of the unknown point are calculated by 3D linear interpolation between each of the points and point 5.

Table 1. Database information for the trajectory

Point   1     2      3      4     5
X (m)   112   255    419    580   651
Y (m)   183   254.5  336.5  417   452.5
Table 2. The results of 3D modeling

Point   t (s)   d (m)     W2      W1      XiM (m)   YiM (m)   d' (m)    W2
1       0       398.388   0.058   0.375   464.92    359.46    394.573   0.058
2       31      201.918   0.115   0.125   434.32    344.16    200.487   0.115
3       42      81.331    0.286   0.125   490.81    372.4     80.285    0.280
4       65      42.959    0.541   0.375   542.63    398.32    41.779    0.537
5       84      M
Finally, we calculate the most probable position of the unknown point by weighted interpolation as follows:

x = Σ_i=1..n w_i x_i = 506.32

y = Σ_i=1..n w_i y_i = 379.05
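The weighted-interpolation step can be illustrated with the following C++ sketch (our own addition; the paper does not state exactly how the weights of Equations (6) and (7) are combined in Equation (10), so the simple averaging used here is an assumption and the output will not reproduce the published 506.32 and 379.05 exactly).

#include <cstdio>
#include <vector>

struct Candidate { double sigma, d, x, y; };  // error, spatiotemporal distance, interpolated position M_i

int main() {
    // Candidate positions M_i built from Table 2; the sigma values (1 m and 3 m) follow the text.
    std::vector<Candidate> c = {
        {1.0, 398.388, 464.92, 359.46},
        {3.0, 201.918, 434.32, 344.16},
        {3.0,  81.331, 490.81, 372.40},
        {1.0,  42.959, 542.63, 398.32},
    };
    double sumInvSigma = 0.0, sumInvD = 0.0;
    for (const auto& ci : c) { sumInvSigma += 1.0 / ci.sigma; sumInvD += 1.0 / ci.d; }

    double x = 0.0, y = 0.0;
    for (const auto& ci : c) {
        double w1 = (1.0 / ci.sigma) / sumInvSigma;   // Equation (6): inverse-error weight
        double w2 = (1.0 / ci.d) / sumInvD;           // Equation (7): inverse-distance weight
        double w  = 0.5 * (w1 + w2);                  // assumed combination of the two weights
        x += w * ci.x;                                 // Equation (10)
        y += w * ci.y;
    }
    std::printf("most probable position: x = %.2f, y = %.2f\n", x, y);
}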
5. Conclusions and Future Research
This paper has proposed two models for acquiring and representing the movements of point objects under locational uncertainty. The 3D positions of objects were sampled at selected points in time; in the first model, the position of a 3D object is predicted at an exact future time while accounting for the growth of uncertainty. In the second model, the positions (x, y, t) between these points at a given time are obtained using weighted interpolation. Results show that by implementing these models we can locate a moving object at a specific time closer to its exact location than with conventional methods. This work points to several directions for future research. Firstly, for the representation of the movement, we chose to interpolate linearly between measured positions; more advanced techniques could be used for this purpose. Secondly, two types of error measures were considered; additionally, time error could be taken into account. In reality, the space considered will typically contain roads, railroad tracks, lakes, or other infrastructure that may be taken into account to reduce overall uncertainty and error in the database.
References
[1] Yu, B., S. D. Prager and T. Builey, "The isoscale-triangle uncertainty model: A spatiotemporal data model for continuously changing data", Proceedings, 4th Workshop on Dynamic & Multi-Dimensional GIS, Wales, September 2005, pp. 179-183.
[2] Ding, Z. and R. H. Güting, "Uncertainty management for network constrained moving objects", Proc. 15th Intl. Conf. on Database and Expert Systems Applications, DEXA, Zaragoza, Spain, 2004, pp. 411-421.
[3] Trajcevski, G., O. Wolfson, and B. Xu, "Modeling and Querying Trajectories of Moving Objects with Uncertainty", Technical Report UIC-EECS-01-2, May 2001.
[4] Mokhtar, H. and J. Su, "Universal trajectory queries for moving object databases", Proc. IEEE International Conference on Mobile Data Management, Berkeley, California, USA, Jan. 2004, pp. 114-124.
[5] Leick, A., GPS Satellite Surveying, John Wiley & Sons, Inc., 1995.
[6] Li, L. and P. Revesz, "Interpolation methods for spatiotemporal geographic data", Computers, Environment and Urban Systems, Vol. 27, pp. 1-27, 2003.
[7] Pfoser, D. and C. S. Jensen, "Capturing the uncertainty of moving-object representations", Proc. 6th International Symposium on the Advances in Spatial Databases, Hong Kong, July 20-23, 1999, pp. 111-132.
[8] Pfoser, D. and N. Tryfona, "Capturing Fuzziness and Uncertainty of Spatiotemporal Objects", A TIMECENTER Technical Report TR-59, August 8, 2001.
[9] Van Diggelen, F., "GPS accuracy: lies, damn lies, and statistics", GPS World, 5(8), 1998.
Research on a feature based spatio-temporal data model

Weihong Cui 1,2, Wenzhong Shi 2, Xiaojuan Li 1, Luojing 1, Zhenguo Niu 1
1 Institute of Remote Sensing Application, CAS, 100101, Beijing, China
2 Department of Land Surveying and Geo-informatics, The Hong Kong Polytechnic University, Hong Kong
{[email protected]}
Abstract
This paper presents the design and development of a feature-based spatio-temporal data model (FBSTDM) with integrated spatial and temporal elements and a complete spatio-temporal intension, for the dynamic monitoring of land use. Firstly, the fundamental theoretical framework of the feature-based spatio-temporal data model and its data structure is developed on the basis of hyper-graph theory. Secondly, a spatio-temporal database based on the developed FBSTDM is designed. Thirdly, an application to land use monitoring employing an integrated technology of GPS, RS and GIS is introduced. The paper is structured into four major sections: the conceptual model, the selection of features and the spatio-temporal assignment of values, database design, and a case study.
Keywords: feature, spatio-temporal data model, hyper-graph
1. Introduction
The study of spatio-temporal data models has long been a hotspot and a major frontier task. J. Allen (1983), B. Knight (1995), E. L. Usery (1996), C. Roswell (1998), G. Langran (1992), Chen Jun (1999) and Shu Hong (2000), among others, have all made significant contributions to spatio-temporal modeling, spatio-temporal semantics and spatio-temporal data relations. Entering the information society of the 21st century, as globalization and informatization accelerate, spatio-temporal models and the study of temporal GIS have become hot topics. Global monitoring, environmental monitoring, soil and water loss and land change, the
development of the digital earth, as well as the current dynamic monitoring of land and resources at national and regional levels, all require the theory and support of spatio-temporal data models. The existing spatio-temporal data models include the following types: the space-time cube (Hagerstrand, 1970; Rudcer, 1977; Szego, 1987); snapshots (Ross, 1985); base state with amendments (Langran, 1990; Peuquet, 1994); the space-time composite (Chrisman, 1983); and the spatio-temporal domain (Peuquet, 1994) [8]. Reviewing the development of spatio-temporal modeling, many of these models take traditional GIS as their basis, which makes it very difficult to achieve the integration of space and time, so some remarkable defects remain:
(1) The separation of space and time. Many current spatio-temporal models separate the spatial from the temporal, but in the real world spatio-temporal attributes, functions and relations form a unity; separating space and time does not fit this requirement.
(2) The spatio-temporal intension is too simple. In many spatio-temporal models, "time" refers only to the temporal location of a geographical phenomenon and "space" only to its geometrical shape and geographical coordinates. Such a simple intension cannot reflect the process by which a geographical phenomenon appears, grows and disappears, nor the correlations and functions between the temporal and the spatial.
(3) Theory and application are not sufficiently combined. Some spatio-temporal data models place particular stress on, or stop at, theoretical study, while many theories still need to be tested in applications.
How can these problems be resolved? Apparently there is little room for development within the original GIS data models and their spatio-temporal extensions, because the traditional GIS data model is based on the vector or raster model. As Goodchild once indicated, "the current GIS data model can't operate basic geometrical objects or groups of objects" and "can't support all geographical analysis" (Goodchild, 1987). The traditional GIS therefore finds it very difficult to achieve the integration of space and time and unified analysis. This paper attempts to resolve the integration problem and the over-simple intension of spatio-temporal models from two directions. Firstly, we establish a spatio-temporal model that takes the "feature" as its basis; secondly, we establish a hyper-graph spatio-temporal data structure. Finally, we combine these two aspects so as to carry the spatio-temporal model from theory to technology, and then test it in the "land use dynamic monitoring" application of the national '95 scientific project.
What is a geographical feature? A feature is an abstraction of geographical phenomena (ISO TC/211 2001) [6], or the basic unit of geographical space information (Open GIS, Chan Kelly, 1999); the information standards of ISO TC/211 are based on a feature conceptual model (R. Rugg 1988, 1998), and models that take the feature as their basis have become a hotspot in GIS modeling research (M. J. Egenhofer 1995, W. Kuhn 1995, A. Y. Tang 1996, M. Adcuns and E. L. Usery 1996) [10]. Our model is based on the two levels of feature type and feature instance. A feature type refers to geographical phenomena that share the same characteristics, such as roads, airports, rivers, farmland and bridges. A feature instance refers to a concrete geographical object, such as the Changjiang River, the Capital Airport or national highway 102. Since the HBDS theory (F. Bouille, 1979) [2,3] was put forward, it has attracted the attention of scientists in many countries. We have used the hyper-graph based data structure (HBDS) since the 1980's, when we built a city information system for two blocks of Richmond (Rugg and Cui 1984). Afterwards, we developed a regional sustainable development hyper-graph data model with the support of the national natural science fund (Cui W. H., 1995). The present study absorbs the geographical feature concept and develops it into a new hyper-graph spatio-temporal data model. This paper is made up of four parts: the hyper-graph conceptual model; feature selection and the assignment of spatio-temporal attribute values; database design; and a case study. The first part sums up the hyper-graph-based spatio-temporal conceptual model, describing the hyper-graph's abstract spatio-temporal data types, relations and feature-based spatio-temporal links down to the spatio-temporal data directly operated on in the system. The second and third parts introduce how to select feature types, how to assign values and how to construct the spatio-temporal database. The fourth part presents the land use dynamic monitoring case study, which is based on the hyper-graph spatio-temporal model.
2. Hyper-graph Spatial-Temporal Concept Model
A hyper-graph is a graph in which more than two vertices can be linked by the same edge (Berge, C. 1970, p. 389). In other words, a hyper-graph is a graph of graphs. A graph G is defined as:

G = (V, R)
where V = (V1, V2, V3, ..., Vn) is the set of vertices and R = (R1, R2, R3, ..., Rn) is the set of arcs. The hyper-graph H is defined as H = (V, R), where V = (V1, V2, ..., Vn) is a finite set and R = (E_i | i in I) is a family of subsets of V; each E_i is an edge. The correspondence R is called the hyper-graph on V; |V| = n is the order of the hyper-graph, V1, V2, ..., Vn are its vertices, and E1, E2, ..., Em are its edge subclasses.
The hyper-graph based data structure (HBDS) was proposed by F. Bouille (1979). This data model is based on hyper-graph and set theory. Bouille built six kinds of abstract data type (ADT) and developed the analysis approach of the hyper-graph model and of database design (Bouille, 1983). The theory of the hyper-graph data model is well developed in Europe, especially in France (F. Salge 1989): a hyper-graph engine has been adopted to control dynamic terrain change in a 3D dynamic system (Batton-Hubert, 1994), and the National Geographical Institute of France designed and developed a national basic database based on the hyper-graph conceptual model (Salge, 1989). In the USA, Rugg (1982) carried out research on urban information systems, road systems and census information systems. The hyper-graph has also been used to build object-oriented systems and geographical expert systems (Bouille 1992, 1994). In China, hyper-graphs were applied to develop a fast-reaction urban GPS system (Cui 1995) and a regional sustainable decision support system (Cui 1995, 1997). In Canada and Japan, theoretical research on and applications of the hyper-graph data model have also been studied (Tosio Kitagawa 1981). The hyper-graph has a strict mathematical basis and a flexible structure, which makes it very suitable for building the spatio-temporal conceptual model.

2.1. Spatio-temporal data model based on hyper-graph
The entity is the kernel of the hyper-graph based spatio-temporal data model. An entity embraces space, time, attributes, functions and relationships. First, we introduce the hyper-graph based spatial data model. Hyper-graph data are composed of sets, elements, attributes and relationships:

(data) ::= (set) | (element) | (attribute) | (relationship)

For spatial data, the set is equivalent to the class and the element is equivalent to the object, that is:

(spatial data) ::= (class) | (object) | (attribute) | (link)

Bouille (1979) proposed six ADTs: object, class, object attribute, class attribute, link between objects and link between classes. These six ADTs cover all phenomena of the real world, but this model cannot
satisfy the needs of spatio-temporal modeling, so we must extend the six ADTs. The development of the new model considers the following factors:
(1) According to ISO/TC211, geographical features exist at two levels: feature type and feature instance.
(2) Geographical features are described as discrete objects at the instance level. A feature instance is related to its geographical coordinates and time coordinate (ISO/TC211); that is to say, a feature instance has both space and time dimensions. Space and time are necessary to a feature instance.
(3) Feature types and feature instances include three basic elements: feature function, feature attribute and feature relation.
Based on the above three factors, we have put forward the framework and conceptual model of the hyper-graph spatio-temporal data model (see Figures 1 and 2), so as to reflect the correlation and intension of every abstract data type of the spatio-temporal data model.
Fig. 1. The framework of the feature-based spatio-temporal data model
Feature type
Temporal linkbetweenfeature instances Spatiallink betweenfeatureinstances Spatio-temporal Link betweenfeature instarces Feature instance Spatialfunction
Feature type function ~___
Feature type attrib1.ie Feature type ielatiorship
link betweenfeetuie types
Feature instance Spatialattribute Featuieinstance temnorel function Feature instarcetemnoral attribute Feature instarcespatial reationshin
~_--+'""":1'!t8
Feature instarce temporal ielatiorships Figure 2.Thespatio-temporel datamodel bared 0 n the hypergraph
(1) Hyper-graph Abstract Spatial-Temporal Data Types (HASTDT)
According to the above two charts, we obtain the following 15 HASTDT:
1. Feature type (FT)
2. Feature type function (FTF)
3. Feature type attribute (FTA)
4. Feature type relationship (FTR)
5. Link between feature types (LFT)
6. Feature instance (FI)
7. Feature instance spatial attribute (FISA)
8. Feature instance temporal attribute (FITA)
9. Feature instance spatial function (FISF)
10. Feature instance temporal function (FITF)
11. Feature instance spatial relationship (FISR)
12. Feature instance temporal relationship (FITR)
13. Link between spatial feature instances (LIS)
14. Link between temporal feature instances (LIT)
15. Link between spatial and temporal feature instances (LIST)
(2) Comparison of the feature based hyper-graph spatio-temporal model with set theory and hyper-graph theory
The 15 kinds of HASTDT are based on set theory and hyper-graph theory. The following table compares set theory, hyper-graph theory and the feature based hyper-graph spatio-temporal model.
Set theory   Hyper-graph    Feature based spatio-temporal model
Set          Class          Feature type
Element      Object         Feature instance
Property     Attribute      Feature type function; Feature type attribute; Feature instance spatial attribute; Feature instance spatial function; Feature instance temporal attribute; Feature instance temporal function
Relation     Relationship   Feature type relationship; Feature instance relationship; Link between feature types; Link between spatial feature instances; Link between temporal feature instances; Link between spatial and temporal feature instances
The contents of these three columns are tightly linked. The feature-based hyper-graph spatio-temporal data model is a spatio-temporally integrated data model founded on set theory and hyper-graph theory.
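To make the abstract data types more tangible, the following C++ sketch (our own illustration, not part of the paper; all type and member names are assumptions) models a feature type, its instances with spatial and temporal attributes, and a hyper-graph edge that links several instances.

#include <string>
#include <vector>

// Feature instance: the concrete geographical object (FI), carrying spatial and
// temporal attributes (FISA, FITA) in the sense of the 15 HASTDT.
struct FeatureInstance {
    int id;
    std::vector<double> xy;          // FISA: boundary coordinates (x, y pairs)
    std::string temporalState;       // FITA: e.g. "S_96" means "state since 1996" (see section 3.2)
    std::string temporalFunction;    // FITF: "G" growing, "S" stabilization, "D" degeneration
};

// Feature type: the class level (FT) with its attributes (FTA) and instances.
struct FeatureType {
    std::string name;                        // e.g. "cropland", "river"
    std::vector<std::string> attributes;     // FTA
    std::vector<FeatureInstance> instances;  // FI
};

// A hyper-graph edge links an arbitrary number of instances (LIS / LIT / LIST).
struct HyperEdge {
    std::string kind;                // "spatial", "temporal" or "spatio-temporal"
    std::vector<int> instanceIds;    // more than two vertices may share one edge
};

int main() {
    FeatureType cropland{"cropland", {"paddy field", "dry land", "vegetable land"}, {}};
    cropland.instances.push_back({12, {}, "S_96", "S"});
    cropland.instances.push_back({16, {}, "S_90", "D"});
    HyperEdge succession{"temporal", {12, 16}};   // e.g. parcel succession "SU_12,16"
    (void)succession;
}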
2.2. The relationships and links of the feature-based spatio-temporal data
(1) The link between spatio-temporal data
In a hyper-graph, data can be extracted and separated through links. When we deal with the links between feature types or feature instances, these links are replaced by the level structure of link types or link instances. In other words, feature types, feature instances, functions and attributes, including time or space, can all be conveyed through hyper-graph links. In the course of this conveyance, if a type or instance is an arc that carries a value, this value is conveyed to the vertex; consequently, the arc's value becomes the vertex's value, and the vertex becomes a latent (hidden) type (Figure 3).
(a) The link between feature types
When we follow the links between feature types, the chart of LFT can express a tree structure. LFT becomes a new type and, at the same time, a new set of links. All the feature instances of a feature type are latent. According to the feature-based spatio-temporal data model, an instance includes geographical and temporal factors; through the link between feature types, instances can be extracted.
(b) The link between feature instances
Among the 15 HASTDT, three are links between instances (LIS, LIT, LIST). When linking, the attribute, function or relationship is the value of the arc; when this value is conveyed to the vertices, the arc value becomes the vertices' value and the vertices become a hidden class. Hence the attributes, functions and relationships linked through the instance type become hidden instance attributes, functions and relationships.
Fig. 3. Link class -- hidden class
(2) Data conveyance and extraction
Spatio-temporal data can be conveyed and extracted by the links in the hidden structure. FT_i is a feature type, where i is the code of the type ranging from 1 to the number of types; FT_ji is a feature instance, where j is the code of the instance ranging from 1 to the number of instances. (The selection of feature types for land use monitoring is discussed in detail in the following section.) An example introduces how to convey and extract data through hyper-graph links, as shown in Figure 4 (hidden structure). FT_1 is a feature type with 6 instances, and every instance has 8 hyper-graph abstract spatio-temporal data types. After being assigned values, the eight data types form a new type -- a hidden type. We can convey the feature instances from the hidden type to obtain their temporal attributes. The assignment is based on the temporal attribute (see 3.2); the values are S_85, S_90, S_95 and S_98, meaning that the temporal attribute is a "state" (see 3.2) holding since 1985, 1990, 1995 and 1998 respectively. There are two conveyances: the first conveys FT_1 to the 8 ASTDT, where each ASTDT includes six instances; the second conveys from the instance temporal attribute to the different temporal attribute values. The result is that two instances (No. 1 and No. 4) belong to S_85, two instances (No. 2 and No. 6) belong to S_90, one instance (No. 3) belongs to S_95 and one (No. 5) to S_98.
Research on a feature based spatio-temporal data model
159
• --.......-
Convey
Ul1st
:.;;.»>"
,~
Abstract
..--------.
C-(7, 'Ho(.~-. o <, .........•....
..
'-1)} \,-2 \'~'\- ~
S 85 S 90 S 95 Fig.4. Example for data convey and extract
S 98
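The grouping performed by the second conveyance can be sketched in a few lines of C++ (our own illustration; the container choice is an assumption, and instance No. 5 is assigned S_98 following the reading of the example above).

#include <cstdio>
#include <map>
#include <string>
#include <vector>

int main() {
    // Instance number -> temporal attribute value ("state since ...") from the example.
    std::vector<std::pair<int, std::string>> instances = {
        {1, "S_85"}, {2, "S_90"}, {3, "S_95"}, {4, "S_85"}, {5, "S_98"}, {6, "S_90"}};

    // Second conveyance: group the instances by their temporal attribute value.
    std::map<std::string, std::vector<int>> byState;
    for (const auto& inst : instances) byState[inst.second].push_back(inst.first);

    for (const auto& group : byState) {
        std::printf("%s:", group.first.c_str());
        for (int id : group.second) std::printf(" %d", id);
        std::printf("\n");
    }
}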
2.3. Direct manipulation of HASTDT in the system
The feature based spatio-temporal data structure is developed on the basis of set theory, so different data types can be operated on as aggregates, which is very convenient for users. These operations generally include the following three types:
(1) Direct operations on a feature type
a. The set of linked feature types
b. The set of relations of a feature type
c. The upper (super) feature type
d. The lower (sub) feature type
e. The feature instance data set correlated with a feature type
f. The spatio-temporal data set of feature instances for each HASTDT
(2) Direct operations on a feature instance
a. The instance data set that reflects a spatio-temporal relationship
b. The instance data set whose spatio-temporal relationship is specified
c. The data set whose feature instance is specified
(3) Operations on spatio-temporal relationships
By picking up the spatio-temporal links mentioned in Figure 5, we can carry out:
a. Spatial distribution under the condition of a specified temporal attribute
b. Spatial distribution development trends under the condition of specified temporal functions
c. Spatial analysis of instances with different spatio-temporal relationships
3. Selecting features and assigning spatio-temporal values
3.1. Selecting features
Selecting features is not only a scientific task but also an application consideration. The former requires a scientific treatment of the feature's definition, the structure of features and the relationships inside features, to guarantee scientific soundness. The latter requires that the feature system can be ratified by the public to assure its authority. In the United States, much work on feature definition was done in the 1980's by Working Group III of the National Committee for Digital Cartographic Data Standards. Later, SDTS and NSDI collected 2800 feature definitions on this aspect, including 1300 entity terms and 300 attribute terms, from which 200 feature type standards and 244 attribute standards were derived (Rugg 1995); great achievements have been made in these areas. In recent years, ISO/TC211 has set down the feature classification method and the temporal schema, completed under the leadership of Rugg and Roswell respectively. These works have played an important role in the international standardization of geographical information.
In China, land types are classified from a land use point of view in the land use dynamic monitoring application. This kind of classification is not a feature classification. Firstly, geographical information features take basic geographical data as their object. Secondly, China has its own traditional management system for land, which is completely different from the international system of parcels, districts and zoning. Thirdly, the Chinese Management Bureau of Land has issued a standard for investigating land use, which includes 8 classes and 46 sub-classes, and has been using it for over 15 years. Therefore, although we considered the classification used in land use investigation, we cannot treat it as equivalent to a feature classification. Summarizing the above points, we need to create new features for land use dynamic monitoring. First of all, we keep the feature framework and definition provided by ISO TC/211, namely that "geographic features are digitally represented at two levels: instances and types." Then we select, group and reclassify the land classification
used in land use dynamic monitoring according to the classification standard definitions of ISO/TC211. Of the 8 first-level land use types (cropland, garden, woodland, grassland, urban and industrial land, traffic land, water area, unused land), four (cropland, garden, woodland, grassland) belong to feature types; the other land use types are classified from a "use" point of view, which is different from the "feature" concept, and belong to "super classes" in the hyper-graph sense. For example, water area itself is a super class in the hyper-graph concept, but its second level, rivers and reservoirs, all belong to feature types. Therefore the land use classes defined by the national land and resources department need to be selected, grouped and reclassified. With the aim of keeping our national land use classification system while at the same time linking up to the internationally used feature classification standard, we treat land use forms such as paddy field, dry land and vegetable land, which belong to cropland, as attributes. There are 24 second-level land use types in total, which are regrouped into 31 sub-classes. At the same time, we keep the ID number of the original land use classification as the attribute code attached to the hyper-link, so as to allow interchange with our national land use classification.
3.2. Assigning spatio-temporal values
Assigning spatio-temporal values is an important part of building the feature based spatio-temporal model. In assigning values, we must consider the spatio-temporal reference system, regional characteristics, the standards in use and the requirements of spatio-temporal analysis. The temporal system used for the FBSTDM case study in Baotou includes:
quantitative values: 1) years, 2) period, 3) duration, 4) persistence;
qualitative values: 1) instant, 2) growing, stabilization, degeneration, 3) simple temporal relationship, succession temporal relationship.
The spatial and temporal value assignments for the FBSTDM model are as follows:
a. Assignment of values for temporal attributes
There are two kinds of temporal attributes: events and states. We normally do not treat "event" as a temporal attribute, because land use has no "event" characteristic; instead, we use "state" as the temporal attribute so as to record the origination year of the state. We assign "E" for events and "S" for states, followed by a two-digit value xx showing the temporal starting value of a state. For example, assigning S_96 to cropland means that the temporal attribute was assigned
"states" since 1996, or cropland assign s_96_98 means that "states" since 1996 to 1998.
b. Assignment of values for temporal relationships
We have two kinds of temporal relationships: the simple temporal relationship and the succession temporal relationship. Here we assign "SI" as the value of a simple temporal relationship, and "SU_iden1,iden2,..." as the value of a succession temporal relationship. Feature succession always includes both spatial and temporal aspects. For example, "SU_12,16" means that the parcel succeeded parcel No. 12 and parcel No. 16.
c. Assignment of values for temporal functions
The feature temporal function describes a change in the value of a feature attribute (Temporal Schema, ISO/TC211 1998). For land use we use the following description:
G - growing
S - stabilization
D - degeneration
d. Assignment of values for spatial attributes
The spatial attribute is the spatial description of a feature. Different feature types have different descriptions. A description consists of two parts: a name and a value.
e. Assignment of values for spatial relationships
The spatial relationship is specified as the topological relationship between instances. To the topological relationship we assign:
"B"  beginning point
"E"  end point
"LF" left face
"RF" right face
f. Assignment of values for spatial functions
The spatial feature function represents the behavior of a feature, so we need to assign the feature function name and its description. For example:
Cropland   assign: crop-possible
Road       assign: passable
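A small parser for the attribute codes introduced in this section might look as follows (our own C++ illustration; the function names and exact string handling are assumptions, and two-digit years are interpreted as 19xx following the paper's examples).

#include <cstdio>
#include <sstream>
#include <string>
#include <vector>

// Decode a temporal attribute such as "S_96" or "S_96_98" into a readable description.
std::string decodeTemporalAttribute(const std::string& code) {
    if (code.size() < 4 || (code[0] != 'S' && code[0] != 'E') || code[1] != '_')
        return "unknown temporal attribute";
    std::string kind = (code[0] == 'S') ? "state" : "event";
    std::string from = "19" + code.substr(2, 2);                 // assumed 19xx years
    if (code.size() >= 7 && code[4] == '_')                      // e.g. "S_96_98"
        return kind + " from " + from + " to 19" + code.substr(5, 2);
    return kind + " since " + from;                              // e.g. "S_96"
}

// Decode a succession relationship such as "SU_12,16" into the predecessor parcel ids.
std::vector<int> decodeSuccession(const std::string& code) {
    std::vector<int> ids;
    if (code.rfind("SU_", 0) != 0) return ids;
    std::stringstream ss(code.substr(3));
    std::string tok;
    while (std::getline(ss, tok, ',')) ids.push_back(std::stoi(tok));
    return ids;
}

int main() {
    std::printf("%s\n", decodeTemporalAttribute("S_96").c_str());
    std::printf("%s\n", decodeTemporalAttribute("S_96_98").c_str());
    for (int id : decodeSuccession("SU_12,16")) std::printf("predecessor parcel %d\n", id);
}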
4. Case study of dynamic monitoring of land use based on features
We tested the spatio-temporal dynamic data model for monitoring land use in Baotou, Inner Mongolia. The study area was 2008 km2. The objective of the study was to provide a demonstration of dynamic monitoring of land use based
on the integrated technologies of remote sensing, GIS and GPS at county level in China. The main steps of dynamic monitoring of land use are:
- Taking the 1:10,000 scale as the basis, obtaining land use data and building the basic database by inputting all the data; at the same time, building the 0-cell, 1-cell and 2-cell databases.
- Comparing the remote sensing image with the land use database to find changed areas.
- Surveying changed areas with GPS, inputting the change data into the land use database and building a dynamic change archive of land use.
From Figure 6, we can see that the whole framework consists of classes, hyper-classes, links and hyper-links. A hyper-class is a class set consisting of two or more classes linked by the same relationship R; for example, road and railway are classes, and traffic area is their hyper-class. A hyper-link is a link between two hyper-classes or between a hyper-class and a class. In the dynamic monitoring of land use there are four classes (feature types) and four hyper-classes. Geographic units include county and township, which form two classes. Three classes of data were captured: GPS data, land surveying data and TM image data. Two classes of data were obtained via data processing: segment and polygon data. The integrated GPS data were then transferred into GIS format and adjusted to form a new topology. Temporal data were obtained through field investigation and input into the GPS data's header file; the other data, concerning spatial and temporal functions, were obtained from other databases and input into the FBSTDM's header files. Based on the above procedure, we constructed the feature-based spatio-temporal data model and the land use spatio-temporal database to implement spatio-temporal query and spatio-temporal reasoning.
4.1. The spatio-temporal query based on the hyper-graph
The feature-based data model in this paper is established on the 15 abstract spatio-temporal data types. The hyper-graph based spatio-temporal query not only supports the usual spatio-temporal queries, such as spatio-temporal snapshots, but also the spatio-temporal relationships of each feature instance, and thus accomplishes spatial analysis and spatio-temporal analysis. Figure 6 illustrates the hyper-graph based spatial relationship query. The map shows that users can query common points or lines as well as adjoining polygons; the arrow indicates the queried borderline, and the pentagram represents the polygon that intersects the queried borderline.
Fig. 6. Hyper-graph based spatial relationship query

Fig. 7. Feature-based spatial-temporal reasoning
4.2. Spatio-temporal reasoning based on the hyper-graph
Reasoning is a form of judgement that infers unknown facts from known conditions. Figure 7 shows spatio-temporal reasoning based on the hyper-graph, which involves the feature instance's temporal attributes, spatial attributes, functions and relationships, as well as the hyper-graph's spatio-temporal links, the links between feature types, and the links between feature instances. The arrow indicates the state of the residential area before expansion; the pentagram is the part extended by reasoning.
5. Conclusions
The feature based spatio-temporal data model (FBSTDM) takes the feature as its basis, hyper-graph theory as its foundation, and land use dynamic monitoring in Baotou as its case study. It was a new exploratory task within the national '95 scientific project and received the "'95 national important scientific excellent achievement" certificate. The results show that the FBSTDM model is very flexible and helpful for users, not only for dynamic monitoring but also for spatial and temporal reasoning.
Acknowledgments
The work described in this paper was supported by a grant from the Research Grants Council of the Hong Kong SAR (Project No. PolyU5093), the NSF of China (Project No. 40201020) and the Science and Technology Ministry of China (Project No. 96-BOI-07). The authors would also like to thank Dr. R. Rugg of Virginia Commonwealth University for his comments on feature definition and the hyper-graph data structure.
References
Batton-Hubert, M., Automated searching of 3D cartographic shape submitted to expansion basis on the DTTM/STM environment: application to GIS and environmental surveying systems, GIS BRNO 1994 Proceedings, 1994.
Berge, C., Graphs and Hyper-graphs, translated by E. Minieka, North Holland Publishing Company, New York, 1973.
Bouille, F., The Hyper-graph Based Data Structure and its Application to Data Structure and Complex System Modelling, Report to the Air Force European Office of Aerospace Research and Development, 1979.
Bouille, F., Survey of the HBDS application to cartography and mapping, 1983.
Bouille, F., Object-Oriented Methodology of Structuring Multiscale Embedded Networks, GIS BRNO 1994 Proceedings, 1994.
Claramunt, C. and Theriault, M., Managing time in GIS: an event-oriented approach, Proceedings of the International Workshop on Temporal Databases: Recent Advances on Temporal Databases, 1995.
Chang, T. and Molenaar, M., A process oriented spatio-temporal data model to support physical environment modeling, Proceedings of the 8th International Symposium on Spatial Data Handling, 1998.
Cobert, J., A General Topological Model for Spatial Reference, US Census, 1985.
Cui, W. H., Hyper-graph data structure approach in a Regional Sustainable Decision Support System, 1995.
Cui, W. H., Digital Earth, 1999, in publication.
Cui, W. H., Integration of GIS with GPS and RS for regional and urban information systems, GIS Conference Proceedings, Hong Kong, 1994.
Feature cataloguing methodology, ISO/TC 211 n 604, 1998, ISO/CD 15046-10, ISO/TC211/WG3.
Feature Geometry, Version 3, Open GIS Consortium, Inc., Document Number 98101, The Open GIS Feature (Open GIS Consortium, Inc., 1998).
Kraak, Menno-Jan, GIS Cartography: visual decision support for spatio-temporal data handling, IJGIS, Vol. 9, No. 6, 1995.
Goodchild, Michael F., A spatial analytical perspective on geographical information systems, IJGIS, Vol. 1, No. 4, pp. 327-334, 1987.
Goodchild, Michael F., Performance evaluation and work-load estimation for geographic information systems, IJGIS, Vol. 1, No. 1, 1987.
Langran, Gail, Time in Geographic Information Systems, 1992.
Mason, D. C. et al., Handling four-dimensional geo-referenced data in environmental GIS, IJGIS, Vol. *, No. 2, 1994.
Mitasova, Helena et al., Modelling spatially and temporally distributed phenomena: new methods and tools for GRASS GIS, IJGIS, Vol. 9, No. 4, 1995.
Peuquet, Donna J. et al., An event-based spatio-temporal data model (ESTDM) for temporal analysis of geographical data, IJGIS, Vol. 9, No. 1, 1995.
Rules for application schema, ISO/TC 211 n 631, 1998, ISO/CD 15046-9, ISO/TC 211/WG2.
Rugg, Robert D., A feature based planning support system, Computers, Environment and Urban Systems, 16(3): 219-26.
Rugg, Robert D., Building a Hyper-graph-Based Data Structure: The Examples of Census Geography and Road Systems, 1982.
Rugg, Robert D., Feature Definition: Methodology Used by Working Group 3 of the National Committee for Digital Cartographic Data Standards, URISA Proceedings, 1989.
Rugg, Robert D., Defining Standard Features for Land Use Applications, Cartography and GIS, Vol. 22, No. 3, 1995.
Rugg, Robert D. and Cui, W. H., Creating a Hyper-graph-Structured Data Base for Urban Cartographic Analysis, USGS report, 1989.
Rugg, Robert D. and Cui, W. H., Development of a Feature-Based Planning Support System, project proposal, VCU, Richmond, 1990.
Shi, W. Z. and Zhang, M. W., Object-oriented approach for spatial, temporal and attribute data modelling, Proceedings of GIS/LIS'95, Vol. 2, 1995.
Shi, W. Z. and Matthew Y. C. Pang, A spatio-temporal model for Digital Earth, "Digital Earth", in publication, 1999.
Spatial Schema, ISO/TC211 n 637, 1999, ISO/CD 15046-7, ISO/TC211/WG2.
Tang, Agatha Y. et al., A spatial data model design for feature-based geographical information systems, Int. J. Geographical Information Systems, Vol. 10, No. 5, 1996.
Temporal Schema, ISO/TC 211 n 619, 1998, ISO/CD 15046-8.
Usery, E. Lynn, A Feature-Based Geographic Information System Model, Photogrammetric Engineering & Remote Sensing, Vol. 62, No. 7, 1996.
Usery, E. Lynn, Category Theory and Structure of Features in Geographic Information Systems, Cartography and GIS, Vol. 20, No. 1, 1993.
0-D Feature in 3D Planar Polygon Testing for 3D Spatial Analysis

Chen Tet Khuan and Alias Abdul-Rahman
Department of Geoinformatics, Faculty of Geoinformation Science and Engineering, Universiti Teknologi Malaysia, 81310 UTM Skudai, Malaysia
{kenchen, alias}@fksg.utm.my
Abstract
In this paper, we discuss the 0D feature-in-3D planar polygon problem. It has been recognized that 3D union and 3D intersection are among the important spatial operators in 3D GIS, and the 0D feature-in-polygon test is one of the underlying problems. Though it is a classic problem, it has not been addressed appropriately in the literature. Moreover, the 0D feature-in-polygon query is rather complicated when implemented in a 3D spatial information system. From the perspective of 3D spatial analysis, the general 0D feature-in-3D planar polygon problem should be formulated in a suitable way. Our basic idea is to solve a general 0D feature-in-3D planar polygon problem that includes all special conditions. The method provides an essential mathematical algorithm applicable to real objects and offers an approach that can be implemented in further 3D analytical operations, e.g. 3D union or 3D intersection, for 3D GIS.
KEY WORDS: 0D feature, 3D planar polygon, and 3D GIS
1 Introduction
In GIS, a very natural problem drawing on the field of computational geometry is the determination of a 0D feature in a polygon: given a 0D feature N and an arbitrary closed polygon P represented as an array of n points P0, P1, ..., Pn-1, Pn = P0, determine whether N is inside or outside the polygon P. In the literature (Foley et al. (1990); Haines (1994b); Harrington (1983); Nievergelt and Hinrichs (1993); O'Rourke (1998);
Sedgewick (1998); Weiler (1994); Woo et al. (1997)), two main definitions can be found. The first is the even-odd rule, in which a line is drawn from N to some other point S that is guaranteed to lie outside the polygon; if this line NS crosses the edges e_i = PiPi+1 of the polygon an odd number of times, then N is inside P, otherwise it is outside (see Figure 1a). This rule can easily be turned into an algorithm that loops over the edges of P, decides for each edge whether it crosses the line or not, and counts the crossings. We discuss these issues in detail in Section 4. The second definition is based on the winding number of N with respect to P, which is the number of revolutions made around that point while traveling once along P; by definition, N is inside the polygon if the winding number is nonzero, as shown in Figure 1(b). Various implementations of these two strategies exist (Franklin (2005); Haines (1994a); Haines (1994b); Mehlhorn and Naher (1999); O'Rourke (1998); Sedgewick (1998); Stein (1997); Theoharis and Bohm (1999)), which differ in the way the intersection between the line and an edge is computed in order to determine whether the 0D feature is inside or outside the polygon. However, some problems still need to be investigated, since these determination strategies are rather fragmented, i.e. in terms of applying them to a 3D planar polygon instead of a 2D one, or when the y-intercept ray crosses the vertices or an edge of the 3D polygon in certain special cases.
Fig. 1. Determination of a 0D feature in a polygon: (a) odd-even, and (b) winding methods
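For reference, a compact 2D winding-number test can be written as follows (our own C++ sketch, not the authors' implementation; it follows the standard crossing-based formulation of the winding number).

#include <cstdio>
#include <vector>

struct Pt { double x, y; };

// >0 if c is left of the directed line a->b, <0 if right, 0 if collinear.
static double isLeft(const Pt& a, const Pt& b, const Pt& c) {
    return (b.x - a.x) * (c.y - a.y) - (c.x - a.x) * (b.y - a.y);
}

// Winding number of point n with respect to the closed polygon p; nonzero means inside.
int windingNumber(const Pt& n, const std::vector<Pt>& p) {
    int wn = 0;
    for (std::size_t i = 0; i < p.size(); ++i) {
        const Pt& a = p[i];
        const Pt& b = p[(i + 1) % p.size()];
        if (a.y <= n.y) {
            if (b.y > n.y && isLeft(a, b, n) > 0) ++wn;   // upward crossing, n strictly left
        } else {
            if (b.y <= n.y && isLeft(a, b, n) < 0) --wn;  // downward crossing, n strictly right
        }
    }
    return wn;
}

int main() {
    std::vector<Pt> square = {{0, 0}, {10, 0}, {10, 10}, {0, 10}};
    std::printf("wn(5,5) = %d, wn(15,5) = %d\n",
                windingNumber({5, 5}, square), windingNumber({15, 5}, square));
}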
In this paper, we concentrate on a simple but complete strategy for determining whether a 0D feature is inside or outside a 3D planar polygon. The algorithm fully covers the third dimension in order to apply to 3D situations. The paper is organized as follows: first, a short discussion of the 3D plane equation is given in section 2, covering some examples in order to verify the mathematical function. Then, the
determination of whether a 0D feature is on, above or below the 3D planar polygon follows in section 3. The determination is only continued to the next stage if this test is successfully passed, i.e. the 0D feature lies on the 3D planar polygon. The determination of whether the 0D feature is located inside or outside the 3D planar polygon is discussed in detail in section 4. The paper presents experiments on the algorithm in section 5 and concludes the research in section 6.
2 The 3D Plane Equation
There are many ways to represent a plane. Some work in any dimension, and some work only in 3D. In any dimension, one can always specify 3 non-collinear points P0 = (x0, y0, z0), P1 = (x1, y1, z1), P2 = (x2, y2, z2) as the vertices of a triangle, the most primitive planar object. In 3D, this uniquely defines the plane of points (x, y, z) satisfying the equation:
| x - x0     y - y0     z - z0  |
| x1 - x0    y1 - y0    z1 - z0 |  =  0        (Eq. 1)
| x2 - x0    y2 - y0    z2 - z0 |

Ax + By + Cz + D = 0                           (Eq. 2)
This determinant expands to the general form of the plane equation (Eq. 2), with normal Pn = (A, B, C), where (x, y, z) denotes any point on the plane. For any planar polygon with more than 3 vertices, the plane equation is the same as the equation of a triangle formed by any 3 of its points: the polygon and the triangle in Figure 2 have the same plane equation if and only if they are coplanar. Therefore, to compute the equation of any polygon with more than 3 vertices, any 3 vertices can be used. Figure 3 shows an example of calculating the plane equation of a rectangle.
Fig. 2. Both triangle and polygon are co-planar
Fig. 3. Rectangle
To calculate the plane equation of a polygon with more than 3 vertices, e.g. the rectangle of Figure 3, only 3 vertices are used. In the first case, P1, P2 and P3 are used. The plane equation of the rectangle in Figure 3 is computed as follows. Given P1 (5,5,10), P2 (5,10,10), and P3 (10,10,10):
| x - 5      y - 5      z - 10  |
| 5 - 5      10 - 5     10 - 10 |  =  0
| 10 - 5     10 - 5     10 - 10 |

=> (x - 5)*[0 - 0] - (y - 5)*[0 - 0] + (z - 10)*[0 - 25] = 0
=> [0x - 0y - 25z + 250] / -25 = 0     (simplify the equation by dividing by a factor of -25)
=> 0x + 0y + z - 10 = 0

Therefore, Pn (A, B, C) = (0, 0, 1), and D = -10.
In order to prove that the plane equation is unique for any co-planar polygon, P1, P2 and P4 are used in the second case. Given P1 (5,5,10), P2 (5,10,10), and P4 (10,5,10), the plane equation is:
| x - 5      y - 5      z - 10  |
| 5 - 5      10 - 5     10 - 10 |  =  0
| 10 - 5     5 - 5      10 - 10 |

=> (x - 5)*[0 - 0] - (y - 5)*[0 - 0] + (z - 10)*[0 - 25] = 0
=> [0x - 0y - 25z + 250] / -25 = 0     (simplify the equation by dividing by a factor of -25)
=> 0x + 0y + z - 10 = 0
Therefore, Pn (A, B, C) = (0, 0, 1), and D = -10.
These two cases prove that, for any polygon with more than 3 vertices, the plane equation is unique and can be computed using only 3 of its vertices.
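The computation of section 2 can be condensed into a small routine. The following C++ sketch (our own illustration; the struct and function names are assumptions) derives (A, B, C, D) from three non-collinear vertices via the cross product, which is equivalent to expanding the determinant of Eq. (1).

#include <cstdio>

struct Vec3 { double x, y, z; };

// Plane coefficients A, B, C, D such that Ax + By + Cz + D = 0.
struct Plane { double A, B, C, D; };

// Build the plane through three non-collinear points (expansion of Eq. 1).
Plane planeFrom3Points(const Vec3& p0, const Vec3& p1, const Vec3& p2) {
    Vec3 u{p1.x - p0.x, p1.y - p0.y, p1.z - p0.z};
    Vec3 v{p2.x - p0.x, p2.y - p0.y, p2.z - p0.z};
    Plane pl;
    pl.A = u.y * v.z - u.z * v.y;          // normal = u x v
    pl.B = u.z * v.x - u.x * v.z;
    pl.C = u.x * v.y - u.y * v.x;
    pl.D = -(pl.A * p0.x + pl.B * p0.y + pl.C * p0.z);
    return pl;
}

int main() {
    // The rectangle example of Figure 3, using P1, P2, P3.
    Plane p = planeFrom3Points({5, 5, 10}, {5, 10, 10}, {10, 10, 10});
    // Result (0, 0, -25, 250) equals Pn = (0, 0, 1), D = -10 up to the common factor -25.
    std::printf("A=%g B=%g C=%g D=%g\n", p.A, p.B, p.C, p.D);
}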
3 Determination of 0D feature on/above/below a 3D planar polygon
To determine whether a 0D feature is inside a 3D planar polygon, one important condition must be fulfilled first: the 0D feature must be located on the 3D plane. To verify this condition, the plane equation from Eq. (2) is used: Ax + By + Cz + D = 0, with normal Pn = (A, B, C) and the 0D feature at (x, y, z). Note that the 3D polygon is created from a set of points, and the sequence of this point set is important in determining whether a 0D feature is located on, above or below the 3D planar polygon. If the sequence of points is counter-clockwise, any point that fulfils Ax + By + Cz + D > 0 lies above the 3D plane; conversely, a 0D feature is located below the 3D plane when Ax + By + Cz + D < 0, and on the 3D plane when Ax + By + Cz + D = 0. Figure 4 illustrates the two different point-sequence cases.
Fig. 4. The point set sequence of a polygon: (a) counter-clockwise, and (b) clockwise
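Using the plane coefficients, the on/above/below classification of section 3 reduces to a sign test; a possible C++ sketch, reusing the Plane and Vec3 types from the previous sketch (our own illustration; an exact-zero comparison is used here, whereas a real implementation would need a tolerance), is:

// Classify a 0D feature n against the plane Ax + By + Cz + D = 0 of a
// counter-clockwise polygon: +1 above, -1 below, 0 on the plane.
int classifyAgainstPlane(const Plane& pl, const Vec3& n) {
    double s = pl.A * n.x + pl.B * n.y + pl.C * n.z + pl.D;
    if (s > 0) return +1;   // above the 3D plane
    if (s < 0) return -1;   // below the 3D plane
    return 0;               // on the 3D plane: continue with the inside/outside test
}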
4 Determination of 0D feature inside/outside a 3D planar polygon
After a 0D feature has been verified to be on (rather than above or below) the 3D planar polygon, the second condition follows: the 0D feature must lie within the boundary or interior of the 3D planar polygon. To determine whether a 0D feature is inside or outside a 3D polygon, the XY plane is tested first. If the 0D feature is located in the interior of the 3D polygon, the YZ plane test is skipped; this is sufficient because only 0D features that are co-planar with the 3D polygon are tested at this stage, and a 0D feature located above or below the 3D planar polygon is definitely outside the polygon. Conversely, if the 0D feature is located on the border of the 3D polygon in the XY plane, the YZ plane test is carried out.
4.1 The XY plane test
In most common tests for detecting a point inside a polygon, either an x-intercept or a y-intercept ray is used. Figure 5 shows a simple test for a point inside a 2D polygon: if the number of intersection points between the y-intercept ray and the border of the polygon is even, the point is outside the polygon, otherwise it is inside.
Fig. 5. Amount of intersecting points: (a) odd, and (b) even
However, there are 4 special cases that exist in reality:
Case 1: the y-intercept ray intersects the border of the polygon at a line (one of the intersection points)
In this case, the y-intercept ray crosses the border of the polygon along a line. Figure 6 shows an example of the intersection between the y-intercept ray and the border of the polygon. Here the intersection is counted: the total number of intersections in Figure 6a is 1, although the y-intercept ray intersects 2 vertices, i.e. a whole edge.
Fig. 6. The intersection between the y-intercept ray and the border of the polygon: (a) inside, and (b) outside
Case 2: the y-intercept ray intersects the border of the polygon at a line (NOT one of the intersection points)
In this case, the y-intercept ray also crosses the border of the polygon along a line. Figure 7 shows an example of the intersection between the y-intercept ray and the border of the polygon. Here the intersection is not counted: the total number of intersections in Figure 7a is 1, although the y-intercept ray intersects a line and a vertex.
Fig. 7. The intersection between the y-intercept ray and the border of the polygon: (a) inside, and (b) outside

Case 3: the y-intercept ray intersects the border of the polygon at a point (one of the intersection points)
In this case, the y-intercept ray crosses the border of the polygon at a vertex. Figure 8 shows an example of the intersection between the y-intercept ray and a vertex of the polygon. Here the intersection is counted: the total number of intersections in Figure 8a is 1.
Fig. 8. The intersection between the y-intercept ray and the border of the polygon: (a) inside, and (b) outside
Case 4: the y-intercept ray intersects the border of the polygon at a point (NOT one of the intersection points)
In this case, the y-intercept ray crosses the border of the polygon at a vertex. Figure 9 shows an example of the intersection between the y-intercept ray and a vertex of the polygon. Here the intersection is not counted: the total number of intersections in Figure 9a is 1, although the y-intercept ray intersects 2 vertices.
Fig. 9. The intersection between the y-intercept ray and the border of the polygon: (a) inside, and (b) outside

4.2 The YZ plane test
The YZ plane test is carried out if and only if the 0D feature is located on the border of the 3D polygon in the XY plane. Figure 10 shows a 0D feature located on the border of the 3D polygon in the XY plane.
Fig. 10. The 0D feature is located at the border of the 3D polygon in the XY plane
In certain cases, a 0D feature is located on the border of the 3D polygon in the XY plane but is actually located outside the polygon. Figure 11 shows a 0D feature located on the border of the 3D polygon in the XY plane but outside the polygon in reality.
Fig. 11. The 0D feature is located at the border of the 3D polygon in the XY plane, but is
(a) outside, and (b) inside the polygon in reality.

Referring to Figure 11, the YZ plane test needs to be carried out in order to verify correctly whether the 0D feature is located inside or outside the 3D planar polygon. The YZ plane test is the same as the XY plane test: it also computes the number of intersections between the y-intercept ray of the 0D feature and the border of the 3D polygon. If the number is even, the 0D feature is located outside the polygon; otherwise it is located inside the 3D polygon. All the special cases mentioned in the XY plane test need to be handled as well.
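The even/odd counting used in both the XY and YZ plane tests can be sketched as follows (our own C++ illustration; it implements the basic crossing-number rule on a chosen coordinate pair and uses the common half-open edge convention to avoid double-counting vertices, which is one way to address the special cases of section 4.1, not necessarily the authors' exact rule).

#include <vector>

struct Pt2 { double u, v; };

// Crossing-number (even/odd) test: true if point q is inside polygon poly,
// counting crossings of the ray running in the +u direction from q.
// Each edge is treated as half-open in v so shared vertices are not counted twice.
bool insideByCrossings(const Pt2& q, const std::vector<Pt2>& poly) {
    bool inside = false;
    std::size_t n = poly.size();
    for (std::size_t i = 0, j = n - 1; i < n; j = i++) {
        const Pt2& a = poly[i];
        const Pt2& b = poly[j];
        bool straddles = (a.v > q.v) != (b.v > q.v);
        if (straddles) {
            double uCross = a.u + (q.v - a.v) * (b.u - a.u) / (b.v - a.v);
            if (q.u < uCross) inside = !inside;   // toggle on each crossing; odd count => inside
        }
    }
    return inside;
}

For the XY plane test the polygon vertices are projected to (x, y), and for the YZ plane test to (y, z), before calling the same routine.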
5 Experiment and discussions
The experiment implements a C++ program to test the approach described in sections 2, 3 and 4. Figure 12 shows the methodology for determining whether the 0D feature is located inside or outside the 3D planar polygon.
Fig. 12. The procedure for determining whether the 0D feature is inside/outside the 3D planar polygon
Fig. 13. (a) Experiment, and (b) Software module

The experiment given in Figure 13 verifies the capability of the methodology of Figure 12 to produce the correct result. A C++ program is used to create a software interface that contains all the related algorithms given in Figure 12. In this experiment, a 3D polygon created from 4 vertices is tested: P1 (300,600,300), P2 (300,300,300), P3 (300,300,600), and P4 (300,600,600). The 0D feature is given as N (300,400,400). The software module reads the dataset from a text file that contains the 3D polygon and the 0D feature data. The interface reads P1 as the ID number for the 3D planar polygon, whereas N1 is the ID number for the 0D feature (see Figure 13b). The following triplet is the dataset for the 0D feature; after that, the vertices of the 3D polygon follow until the character "END" is reached. The result of the implementation is given as a text file (see Figure 14).
Fig. 14. The result
6 Concluding Remarks
The paper presents an approach for determining whether a 0D feature is inside or outside a 3D planar polygon. The benefit of this approach is that it provides a simple GIS analysis for point features inside or outside 3D planar polygons. The methodology was tested using a C++ program and the results agree with the algorithm. The initial results show that the approach works for typical 3D planar polygons. This work could be extended to a much larger research effort that includes 3D union, 3D intersection and other operations. The initial outcomes of this experiment will be utilized in 3D GIS software development in the near future, i.e. for the 3D analytical operations.
References
[1] Foley, J. D., A. van Dam, S. K. Feiner, J. F. Hughes (1990), Computer Graphics: Principles and Practice, 2nd Edition, Addison-Wesley.
[2] Franklin, R. (2005), PNPOLY - Point inclusion in polygon test, http://www.ecse.rpi.edu/Homepages/wrf/geom/pnpoly.html
[3] Haines, E. (1994a), Point in polygon inside/outside code, http://www.acm.org/tog/GraphicsGems/gemsiv/ptpoly_haines/ptinpoly.c
[4] Haines, E. (1994b), Point in polygon strategies, in: P. Heckbert (Ed.), Graphics Gems IV, Academic Press, Boston, MA, pp. 24-46.
[5] Harrington, S. (1983), Computer Graphics: A Programming Approach, McGraw-Hill.
[6] Mehlhorn, K., and S. Naher (1999), LEDA: A Platform for Combinatorial and Geometric Computing, Cambridge University Press.
[7] Nievergelt, J., and K. Hinrichs (1993), Algorithms and Data Structures: With Applications to Graphics and Geometry, Prentice-Hall.
[8] O'Rourke, J. (1998), Computational Geometry in C, 2nd Edition, Cambridge University Press.
[9] Sedgewick, R. (1998), Algorithms, 2nd Edition, Addison-Wesley.
[10] Stein, B. (1997), A point about polygons, Linux Journal 35.
[11] Theoharis, T., and A. Bohm (1999), Computer Graphics: Principles & Algorithms, Symmetria.
[12] Weiler, K. (1994), An incremental angle point in polygon test, in: P. Heckbert (Ed.), Graphics Gems IV, Academic Press, Boston, MA, pp. 16-23.
[13] Woo, M., J. Neider, and T. Davis (1997), OpenGL Programming Guide, 2nd Edition, Addison-Wesley.
Appendix

FILE *InputData  = fopen("Input.dat", "rt");
FILE *OutputData = fopen("Output.dat", "w");

// Read planar polygon dataset P and 0D data N(x,y,z)
// Calculate the plane equation of the planar polygon: Ax + By + Cz + D = 0

IF (A*xN + B*yN + C*zN + D == 0) {
    // N(x,y,z) is located on the planar polygon
    // N(x,y,z) will be tested for the other conditions
}
else {
    // N(x,y,z) is NOT located on the planar polygon
    fprintf(OutputData, "OD is OUTSIDE planar polygon = %f %f %f\n", Nx, Ny, Nz);
    exit;
}

// Determination of min & max of the x-, y- and z-coordinates of polygon P
IF ((MinXp < xN < MaxXp) && (MinYp < yN < MaxYp) && (MinZp < zN < MaxZp)) {
    // N(x,y,z) is located on the planar polygon
    // N(x,y,z) will be tested for the other conditions
}
else {
    // N(x,y,z) is NOT located on the planar polygon
    fprintf(OutputData, "OD is OUTSIDE planar polygon = %f %f %f\n", Nx, Ny, Nz);
    exit;
}

// Compute N(x,y,z) INSIDE or OUTSIDE the polygon for the (X & Y) and (Y & Z) planes
IF (N(x,y,z) touches the BORDER of polygon P in the X & Y plane) {
    // check the Y & Z plane
    IF (N(x,y,z) touches the BORDER of polygon P in the Y & Z plane) {
        // N(x,y,z) is located INSIDE the polygon P
        fprintf(OutputData, "OD is INSIDE planar polygon = %f %f %f\n", Nx, Ny, Nz);
    }
    ELSE {
        // Calculate the number of intersection points along the Z-axis
        IF (amount is odd) {   // MUST involve all special cases, please refer to Sec. 4.1
            // N(x,y,z) is located INSIDE the polygon P
            fprintf(OutputData, "OD is INSIDE planar polygon = %f %f %f\n", Nx, Ny, Nz);
        }
        ELSE {
            // N(x,y,z) is located OUTSIDE the polygon P
            fprintf(OutputData, "OD is OUTSIDE planar polygon = %f %f %f\n", Nx, Ny, Nz);
        }
    }
}
ELSE {
    // Calculate the number of intersection points along the Y-axis
    IF (amount is odd) {   // MUST involve all special cases, please refer to Sec. 4.1
        // N(x,y,z) is located INSIDE the polygon P
        fprintf(OutputData, "OD is INSIDE planar polygon = %f %f %f\n", Nx, Ny, Nz);
    }
    ELSE {
        // N(x,y,z) is located OUTSIDE the polygon P
        fprintf(OutputData, "OD is OUTSIDE planar polygon = %f %f %f\n", Nx, Ny, Nz);
    }
}
Definition of the 3D content and geometric level of congruence of numeric cartography

R. Brumana, F. Fassi, F. Prandi
DIIAR, Politecnico di Milano, P.zza Leonardo da Vinci 32, Milano, Italy
Abstract
The research concerns the 3D content of digital cartography and its subsequent migration to GIS platforms. We chose to work on different datasets (for example, Milan's 3D cartography at scale 1:1000 in DXF format). The DXF format is less evolved and less flexible than the GML language. This encoding was studied in the first part of the project, but its complexity and incomplete formal definition led us to concentrate on the geometry and the 3D congruence rather than on the language. The next step was to evaluate the type of geometric components that allow the transfer of the information content necessary for a complete description of the examined object, independently of the chosen representation environment. For example, to correctly define a spot height or a contour line in three dimensions it is enough to have a simple geometric primitive with a specified attribute; but this type of information is not sufficient for objects with complex geometry. We try to define the minimal three-dimensional information content that can be associated with an element to obtain its complete 3D definition, while increasing the restitution workload as little as possible. We began a systematic operation to recognise the elements that are most useful for generating 3D terrain models, such as scarps, bridges and canals. For the other typologies of elements that are represented in the restitution phase, we are defining the geometric components needed for their complete three-dimensional representation, with particular attention to the geometric congruence between the elements: the study tries to define the topological constraints among the elements (as in the 2D cartographic representation). A further step is to verify the possibility of using these 3D geometric structures inside GIS and geo-database platforms and to test the capacity of these systems to manage complex 3D data structures.
Introduction
This work is part of a PRIN research project, titled "Evolved structure of digital mapping for the GIS and the web" and coordinated by Prof. Galetto, which aims to define completely the theme of the 3D content of numeric cartography and its connection with the DTM. This is an open problem, both for the content and the congruence concept, and for the inconsistency between the third dimension of the elements and the DTM. A second problem concerns the data structure: at present numeric cartography is distributed in DWG or DXF format, which is not suitable for storing all the data contained in a topographic database. It is necessary to start using languages and formats that allow the storage and distribution of digital mapping data. This involves a review of the current logical, semantic and physical schemas of the data. In the case of digital map production it is also important to consider the cost of the photogrammetric restitution and the difficulty of extending the topological model from 2D to 3D. The goal of the research is to identify the three-dimensional geometric components sufficient to define the geometry of every single entity, using the standards defined in the ISO TC/211 [1] documents and, at the Italian level, in the Ln_007_1, 007_2, 007_3 documents of the "Intesa Stato Regioni" [2].
Data Structure
For the test area it was decided to use a part of the new 1:1000 scale digital map of Milan [3]. This map has advanced properties concerning 3D content: for example, all the buildings carry information about the elevation of the footprint, of the eaves and of the ridge of the roofs [4]. However, it also has many gaps that prevent the automatic derivation of a three-dimensional model in a GIS environment. The map, in DWG-DXF format, uses only 3D points and lines for the representation of the 3D objects. Without substantial editing work these geometric objects do not allow the final user to understand the 3D reality. For example, the construction of a 3D building model using the roof-ridge information requires at least a basic knowledge of 3D modelling. For this reason it is important to define a data structure that
allows the management of the 3D geo-information. This structure, however, must not increase the photogrammetric restitution work considerably. In recent years the rapid growth of GIS technologies and of 3D viewers has increased the need to define structures for 3D geographic data, structures that provide final users with data that are coherent and congruent with the physical reality to be represented. The first step of the work was the analysis of the geometric primitives used for the representation of every physical element contained in the map. The aim of this analysis is to check whether the geometric structures used for the representation are adequate to completely describe the 3D content of the elements. All the information contained in the examined CAD dataset is stored as 3D points or 3D polylines, but this kind of structure causes many difficulties in the understanding and management of the 3D data in GIS software. In the GIS system the information about the third dimension is stored as a geometric property of the vertices but not as attribute information. For example, a 3D polyline with vertices at different elevations will be correctly imported at the geometric level, but the stored elevation attribute will be only one value chosen arbitrarily among the n vertices that compose the polyline.
Fig. 1. The figure shows the difference between the geometric properties, where every vertex has its own elevation value, and the table of attributes, where only one elevation is stored
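The gap between per-vertex elevations and a single attribute value can be closed by deriving explicit elevation attributes when the polyline is loaded. The following is a minimal sketch of that idea, assuming a simple vertex array; it is an illustration written for this text, not the tool used by the authors.

// Derive explicit elevation attributes (min, max, mean Z) from the vertices of a 3D polyline,
// so the values survive the transfer into an attribute table instead of one arbitrary Z.
public final class PolylineElevation {

    public static double[] elevationStats(double[][] vertices) {  // vertices[i] = {x, y, z}
        double min = Double.POSITIVE_INFINITY, max = Double.NEGATIVE_INFINITY, sum = 0.0;
        for (double[] v : vertices) {
            min = Math.min(min, v[2]);
            max = Math.max(max, v[2]);
            sum += v[2];
        }
        return new double[] {min, max, sum / vertices.length};
    }

    public static void main(String[] args) {
        double[][] polyline = {{0, 0, 115.1}, {5, 0, 115.3}, {10, 2, 116.9}};
        double[] s = elevationStats(polyline);
        System.out.printf("zmin=%.2f zmax=%.2f zmean=%.2f%n", s[0], s[1], s[2]);
    }
}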
The 3D polygon data structure is nevertheless useful for modelling geographic features; for this reason it is important that the software is able to manage this geometry.
Fig. 2. Wrong import of a 3D polyline in GIS software: the imported polygon is flat
An important aid to the standardization of the geometries is contained in the "Intesa GIS"1 documents for the realization of topographic databases. These documents define the formal conceptual model used for the specification of the spatial component of each class (a class defines the common properties of a set of homogeneous objects). This model is related to the ISO TC211 standard, in particular to the ISO 19107 "Geographic Information - Spatial Schema" document, where the indications for the complete definition of the classes are specified. In particular, for every class the spatial component typology is defined.
Fig. 3. ISO TC211 hierarchy of the feature class
1 "Intesa GIS": agreement between State, Regions and Local Authorities for the realization of Geographical Information Systems, approved by the State-Regions conference on 26 September 1996
Definition of the geometry and Data Processing

The definition of the geometry requires the analysis of different aspects and related problems:
- singling out the semantic feature classes
- identifying the corresponding geographic entities
- logical-topological relations
- minimal additional photogrammetric work and data survey

For the specification of the semantic feature classes the "Intesa" documents have been used as a basis. These semantic feature classes have been compared with the layers of the dataset, in order to investigate the differences between the two logical models. This analysis, which registered many differences between the two models, makes it clear that the semantic model must be unique. The research has highlighted the need to integrate the specification of the conceptual model of the "Intesa", particularly for some 3D objects in the urban context. Starting from this specification, a first control of the geometric structures used to store the data has been carried out. The problems concerning the 3D geometric structure of some particular feature classes are listed here. Three classes of problems have been examined, classified as follows:
- the geometric structure is defined, but the rules for the data acquisition are not defined
- the geometric structure defined is not adequate to completely define the 3D object
- the geometric structure is defined, but the topological model for the feature is not defined

The first type of problem is typical of structures such as a railway scarp. In this case the geometric entity suitable for the three-dimensional description of the feature is defined. The spatial component of this class is a 3D Complex Ring, but it is important to define a series of rules for the data acquisition. In fact the ring used to draw the object consists of three parts: the scarp's head, the scarp's foot and the scarp's fictitious closure. It is important that, during the acquisition of the data, the elevation of the head vertices is greater than the elevation of the foot. Likewise the ground points, which are represented with a 3D point, must be contained between two contour lines drawn with 3D polylines. In these cases, where the geometric structure is adequate to represent the 3D objects, it is enough to define the rules for the data acquisition. These rules
can be defined in a topological model of the data. It is possible to create tools for the control of these rules; for example, a LISP tool has been implemented for the control of the ground points. The tool selects the points whose height is not congruent with the other objects (contour lines, street polygons, etc.), along the lines of the sketch below.
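A minimal sketch of such a consistency check is given below, assuming each ground point already knows the elevations of the two contour lines that enclose it; the class and method names are invented for the example and do not correspond to the authors' LISP tool.

import java.util.ArrayList;
import java.util.List;

// Flag ground points whose elevation is not between the elevations of the
// two contour lines that enclose them (a simplified congruence control).
public final class GroundPointCheck {

    public static final class GroundPoint {
        final double z;            // measured elevation of the point
        final double lowerContour; // elevation of the enclosing lower contour line
        final double upperContour; // elevation of the enclosing upper contour line
        GroundPoint(double z, double lowerContour, double upperContour) {
            this.z = z; this.lowerContour = lowerContour; this.upperContour = upperContour;
        }
    }

    public static List<GroundPoint> suspicious(List<GroundPoint> points) {
        List<GroundPoint> bad = new ArrayList<>();
        for (GroundPoint p : points) {
            if (p.z < p.lowerContour || p.z > p.upperContour) {
                bad.add(p);   // elevation incongruent with the surrounding contours
            }
        }
        return bad;
    }

    public static void main(String[] args) {
        List<GroundPoint> pts = List.of(
            new GroundPoint(115.2, 115.0, 116.0),   // consistent
            new GroundPoint(118.4, 115.0, 116.0));  // incongruent, will be flagged
        System.out.println(suspicious(pts).size() + " suspicious point(s)");
    }
}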
Fig. 4. Representation of the railway scarp before and after the definition of the rules of geometric acquisition
Fig. 5. Visualization of the wall before and after the definition of the geometric structure of the class. On the right the geometric component of the class is a 3D polyline; on the left the geometric component of the class is a 3D Complex Ring
The geometry used to represent the wall feature class is a 3D polyline acquired in correspondence of the head of the wall. This element is adequate for a 2D representation but is not enough for the three-dimensional view of the class. Furthermore, for this kind of feature, direct
3D photogrammetric acquisition of a 3D "complex ring" structure is too onerous. A specific tool has therefore been implemented for the generation of vertical 3D complex ring features starting from the polyline geometry, along the lines of the sketch below.
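As a rough illustration of that generation step, the sketch below extrudes the 3D polyline of the wall head straight down to a given base elevation, producing one vertical quadrilateral face per polyline segment; the constant base elevation and the absence of a terrain intersection are assumptions made here for the example, and this is not the authors' tool.

import java.util.ArrayList;
import java.util.List;

// Extrude the 3D polyline of a wall head down to a base elevation, producing
// one vertical quadrilateral face per segment (a simplified "vertical ring").
public final class WallExtrusion {

    // Each face is returned as 4 corner points, each point as {x, y, z}.
    public static List<double[][]> extrude(double[][] head, double baseZ) {
        List<double[][]> faces = new ArrayList<>();
        for (int i = 0; i + 1 < head.length; i++) {
            double[] a = head[i];
            double[] b = head[i + 1];
            faces.add(new double[][] {
                {a[0], a[1], a[2]},   // top of segment start
                {b[0], b[1], b[2]},   // top of segment end
                {b[0], b[1], baseZ},  // foot below segment end
                {a[0], a[1], baseZ}   // foot below segment start
            });
        }
        return faces;
    }

    public static void main(String[] args) {
        double[][] wallHead = {{0, 0, 118.0}, {6, 0, 118.2}, {6, 4, 118.1}};
        System.out.println(extrude(wallHead, 115.0).size() + " vertical faces generated");
    }
}

The same extrusion idea, applied to a closed building footprint with the eaves height as the top, corresponds to the multipatch generation described for buildings below.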
Fig. 6. Visualization of the lisp tool for the 3D wall generation.
The vertical 3D Complex Ring is a useful structure for features with a large vertical component; moreover it is completely compatible with the GIS data structure. The buildings have been modelled with a multipatch structure [5], a type of geometry that allows the representation of finite closed volumes that have a closed polyline as footprint. By means of a tool developed in Visual Basic it is possible to transform polygonal elements, such as buildings in the 2D cartography, into multipatches with heights equal to the heights of the buildings themselves. There are, especially in urban areas [7], some features that are difficult to model, for various reasons: because they are complex or because the data are not sufficient for the definition of the geometry. The table below shows a series of feature typologies with their modelling problems and the possible troubleshooting.
Table 1. Examples of geographic features with the corresponding geometries and 3D modelling problems

Feature    | Entity          | Problem                                                            | Troubleshooting
Building   | 3D Complex Ring | Acquisition of only the boundary of the base of the feature        | Introduction of the multipatch structure, which allows the generation of the walls
Wall       | 3D Polyline     | The geometry does not allow the complete definition of the object  | Automatic generation of the vertical 3D Complex Ring starting from the 3D Polyline
Staircase  | ???             | Complex object; the geometry is not defined                        | Use of tools for the manual modelling of the feature
Scarp      | 3D Complex Ring | The geometry of the feature is not correctly acquired              | Definition of a series of rules for the data acquisition
Another problem is the topological structure of the data [6]. For all the features contained in the dataset a 2D topological model has been implemented. The next step of the research will be the implementation of a 3D topological model in the data model.
Data acquisition and integration
The dataset used for this experimentation is a portion of the 1:1000 scale digital map of the municipality of Milan. In this map, as in the new generation of topographic databases, it is possible that much information useful, or sometimes indispensable, for the modelling is absent. In these cases, as in the case of ancient and small historical sites, it is necessary to acquire new and different information to complete the cartographic dataset. Particularly in the case of the generation of 3D front views, where a large number of details is required, an integration of the data is necessary. The
goal of this part of the research is the definition of methodologies for the integration of different data useful for the generation of 3D structures. Obviously the 3D survey used for the data integration must be consistent with the representation scale of the 3D model. New survey techniques, such as RTK GPS and terrestrial laser scanning, have been tested for the data integration. These data are not intended to substitute the cartographic data, but they can be useful where the cartographic data are incomplete or to acquire structures with complex geometry.
Fig. 7. Use of terrestrial laser scanner data for the data integration of a dam's river basin
For example, with a GPS survey we can quickly acquire the significant data of a complex structure such as a staircase (top and base elevation and number of stairs) and create a simplified model of the structure, as sketched below. The terrestrial laser scanner (TLS) allows acquisition from different points of view. The traditional techniques for map production (aerial images, aerial laser scanning) observe the reality from a top view; the TLS observes the world from a frontal point of view. This can be a resource for the acquisition of data in complex or particular environments such as historical centres or coast lines.
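The following sketch illustrates how such a simplified staircase model can be derived from only the top elevation, the base elevation and the number of stairs; the geometry (equal risers along a straight run) is an assumption made here for the example.

// Build a simplified staircase profile from base elevation, top elevation and
// the number of stairs, assuming equal risers along a straight run.
public final class SimpleStaircase {

    // Returns the elevation of the tread surface of each stair, base excluded.
    public static double[] treadElevations(double baseZ, double topZ, int stairs) {
        double riser = (topZ - baseZ) / stairs;
        double[] z = new double[stairs];
        for (int i = 0; i < stairs; i++) {
            z[i] = baseZ + riser * (i + 1);
        }
        return z;
    }

    public static void main(String[] args) {
        double[] z = treadElevations(121.30, 123.10, 12);   // 12 stairs, 1.80 m total rise
        System.out.printf("riser = %.3f m, first tread = %.2f, last tread = %.2f%n",
                z[0] - 121.30, z[0], z[z.length - 1]);
    }
}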
Conclusions

The development of 3D visualization techniques calls for an improvement of the management of three-dimensional data in GIS systems. This management needs a definition of data structures and rules. The goal of this work is to define the geometric structures needed to represent the geographic objects. For many kinds of features the correct geometry for a 3D representation has been defined; for others it has been enough to define the rules for a correct acquisition of the data. All the defined geometric structures are compatible with the ISO TC211 standards and allow a correct management in
the GIS environment. Many problems remain, such as the generation of complex structures or the absence of data. In these cases the research aims to define methodologies for the integration of traditional aerial photogrammetric data with different kinds of survey, such as laser scanning or GPS. Another open problem is the extension of the 2D topological model of the dataset to the third dimension. Many topological models have been proposed by various authors; the next step of the research will be the implementation of such a model in the data structure. Last but not least is the problem of the data format for the storage of the geographic information. It is important to use an interoperable and flexible format for the production and the sharing of the information. The typical GIS format is usually not flexible for feature generation and editing while, on the contrary, the CAD or modelling formats are flexible for the generation of the 3D structures but are not adequate for storing the information. The new generation of extensible languages, such as GML, can overcome this problem. A future aspect of the research could be the integration between the different environments and languages for complete interoperability across systems.
References
[1] ISO 19125-1 2004, Geographic information/Geomatics
[2] Intesa Stato Regioni Enti Locali. Sistemi Informativi Territoriali, comitato tecnico di coordinamento: specifiche per la realizzazione dei data base topografici di interesse generale. Specifiche di contenuto: gli strati, i temi, le classi. Intesa GIS/WG01, Vol. 1, n. 1007_1
[3] Bezoari G, Monti C, Selvini A (2004) La cartografia numerica della città di Milano: interventi per il collaudo. Attuali metodologie per il rilevamento a grande scala e per il monitoraggio. Convegno Nazionale SIFET, Chia Laguna, Cagliari
[4] Norme tecniche per la realizzazione di cartografia numerica alla scala nominale 1:1000 e 1:2000. Bollettino ufficiale della Regione Lombardia, 3° supplemento straordinario al N.7, 18 febbraio 2000
[5] Ulm K (2005) 3D city model from aerial images. Geo-informatics, January-February
[6] Zlatanova S, Rahman A A, Shi W (2002) Topology for 3D spatial objects. International Symposium and Exhibition on Geoinformation, Kuala Lumpur, Malaysia, CDROM, pp 7
[7] Varshosaz M (2003) True realistic 3-D models of buildings in urban areas. International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Vol. XXXIV, pp 5-10
3D multi-scale modelling of the interior of the Real Villa of Monza (ITALY) C. Achille, F. Fassi DIIAR - Politecnico di Milano, P.zza Leonardo da Vinci, 32 20133 Milano, Italy
Abstract

The Real Villa in Monza (ITALY) was founded as a symbol of the prestige and greatness of the Habsburg family. Maria Teresa decided to build it when she named her third-born son Ferdinand General Governor of the Austrian Reign of Lombardy. The work started in 1777 and was directed by the court architect Giuseppe Piermarini. This work takes place within the "International Design Competition for the Refurbishment and Enhancement of the Real Villa in Monza and its Gardens" (http://villarealemonza.regione.lombardia.it). The aim is to generate materials to support the restoration and the representation of the Villa at different levels of detail. The work implies the realization of a multi-scale modelling: a smaller scale to represent the whole complexity of the building with its global constructive structure (1:20 - 1:50) and for virtual navigation or web presentation, and a larger scale to model small details, useful for the restoration and reproduction of particulars. The Laser Scanner Leica HDS3000 is used to create fast 3D models; topographic and digital photogrammetric surveys are used to create detailed DTMs of vaults and walls. The Leica Laser Tracker LTD706 is used for reverse engineering and rapid prototyping of small pieces of decorations or parts of ornaments.
The modelling pyramid
Fig. 1. The four steps of the four-year modelling work in the Real Villa of Monza, from the general setting to highly detailed decorations
Step 1: small-scale modelling at scale 1:100 - 1:200
Fig. 2. Axonometric and front views of the complete complex of the Real Villa
From horizontal and vertical profiles we built a simplified 3D model of the whole Villa; this simple model is mapped with the high resolution rectified images [10] that we produced for the wall facades. This simple raster-vector
model has great potential because it offers useful support to the design and to the contextualization of the newly planned interventions [1] [4].
Step 2: Modelling of the interior of the Villa at scale 1:20 - 1:50
The second step of our work concerns the interior of the Real Villa. All the rooms of the noble floor were modelled at scales 1:20 and 1:50, using different methodologies and technologies and with different purposes. We use principally laser scanning and photogrammetry, naturally supported by classical topographic surveys. The laser scanner is a Leica HDS3000 and for the photographic campaign we employ the Rollei DB44 Metric. The laser scanner allows the "state of fact" to be recreated significantly more quickly than classical topographic or manual survey. We use it to extract the geometric content of the objects, such as discontinuity lines, edges, guide lines of the vaults, profiles and sections, and to supply simplified or moderately accurate DSMs for the construction of digital orthophotos. Photogrammetry furnishes high precision and high resolution colour information to match with the produced 3D model. This second step can be divided into two subsections depending on the aim of the modelling process. Step 2A has the aim of producing a navigable 3D model of the Villa, videos and images for high quality representation of the interior. Step 2B has the purpose of studying and comprehending the architecture and of supplying detailed and accurate 3D models to architects and restorers.
Step 2a: For virtual navigation and high quality representation.
For the modelling of the Villa's interiors we began in a classic topographic way. The rest of the work is an experimentation to test, compare and combine different kinds of instruments and new methodologies.
1 System performance between 0 and 50 m scan range: accuracy of the distance measurement = 4 mm; angular accuracy (both vertical and horizontal) = 60 microradians; target acquisition accuracy = 1.5 mm; modelled surface precision = 2 mm.
2 Rollei DB44 Metric, CCD chip of 4080 x 4076 pixels on a 36.720 x 36.684 mm format. The camera has calibrated optics with focal lengths of 40 mm, 80 mm and 150 mm.
Fig. 3. The 3D model of the Piermarini great staircase was produced with classic topographic measurements of sections and guide lines of the vault. The aim of the survey is to describe the geometric form of the vault, skinning the decorative apparatus. The whole box is dressed with high quality orthophotos and rectified images
Fig. 4. The upper image shows the cylindrical development of the walls of a room for virtual navigation with QuickTime VR. In this case the walls are not texturized. The bottom image shows the rendered image of the Ball Room.
Step 3: Modelling of the interior of the Villa at scale 1:20 - 1:50 for geometric and architectural studies and production
To elaborate the point clouds we use Cyclone and CloudWorks, and we test in particular the functionality for section extraction and polyline fitting. From the laser clouds we extract the generatrices of the forms to obtain a smooth model, necessary for the construction of the orthophotos, with a resolution that must be compatible with the representation scale and with the required detail.
Fig. 5. All the models and orthophotos of the interior of the Villa are realized from topographic, photogrammetric and laser scanner data at a nominal scale of 1:20. Everything is georeferenced in Gauss-Boaga coordinates
An example: The vaults of the Throne Room and of the Birds Room
The vault of the Throne Room is a typical cloister vault generated by two barrel vaults with the same direction line. The other one is another type of cloister vault with a central plane. The aim of the research is to study and interpret the two different geometric construction rules from the scanned data and to extract the geometric properties in semi-automatic or automatic mode, testing different software and algorithms for this purpose. The result of this test is that the automatic mode gives false and inaccurate results. The best way to proceed is to extract section slices, to draw approximate profiles manually and to fit the result later with approximation algorithms.
Fig. 6. In the upper images, the vault of the Throne Room. From left to right: the geometric scheme of the cloister vault, the point cloud of the vault with the section lines, and a profile extraction performed with CloudWorks. The bottom images show the longitudinal and transversal profiles of the Birds Room vault. In the two right images the blue ellipses are automatically extracted by interpolating the points with geometric primitives on the section plane [9].
- ""'' -'"''-.- 11.
~~
~-=-
Fig. 7. In the left image the construction of the Birds Room model can be seen: the vault is a NURBS surface extracted from laser data and the wall is constructed with AutoCAD and 3D Studio from topographic and manual measurements. On the right the high resolution (1 mm) orthophoto is shown. To build it we used a DSM extracted from the meshed laser surface. In this case we operated in this way because the vault is painted and lacks boiseries
Step 4: Modelling of details at scale 1:1 or greater
The last step of the modelling pyramid is the 3D modelling of details. The Villa is rich in wooden decorations and friezes that are in bad condition or incomplete. In particular, these ornaments consist of repetitive geometric figures around doors and windows. The aim of the work is to prototype the decorations that are still in situ, to produce the casts and to replace or complete the whole ornamental apparatus of the Villa. For this reason we need to produce a very detailed 3D model of these decorations. We use the Leica T-Scan system [11], a laser scanner normally used for precision mechanical measurements. This kind of instrument allows work in difficult conditions, with high precision, deep resolution and very good handiness and flexibility.
Fig. 8. Leica Laser Tracker System LTD706 with T-Scan and T-Probe
The product of the scanning is a very large point cloud. The aim of the research is to test some software packages in order to create a complete 3D model of this ornament and to prototype it.

3 Tested with the kind support of Leica Metrology Italy.
4 The measurement uncertainty of a spatial length "UL" is ±60 µm for distances < 8.5 m; for distances > 8.5 m the spatial length uncertainty "UL" is ±7 µm/m. The measurement uncertainty of a sphere radius "UR" (the deviation between a measured sphere radius and its nominal value) is ±50 µm for distances < 8.5 m and ±7 µm/m for distances > 8.5 m. The measurement uncertainty of a plane surface "UP" is defined as the 2-sigma value of all deviations from the best-fit plane calculated with all measured points and is ±95 µm + 3 µm/m.
5 Test software: 30-day time-limited licences.
Starting from a laser scanner point cloud and arriving at a good 3D model is neither simple nor automatic. Each software package (such as Rapidform, PolyWorks, Spider) can work with a large number of points and elaborate the model with more advanced functions. The large amount of data forces the use of very powerful (and expensive) graphic workstations.
Fig. 9. A phase of the survey. In these pictures the difficulties caused by the position of the frieze and the real-time creation of the surface can be seen (circa 10 million points, circa a 250 MB ASCII file)
After the segmentation of the whole cloud into different parts, some filtering operations and a high quality decimation were applied. The filter was applied to detect and eliminate outliers and scattered points; the decimation samples points on the surface according to its curvature: the algorithm eliminates more points in flat areas and conserves all the points where the curvature radius is smaller. The second phase consists of the triangulation; in this case we found some problems due to the kind of laser data. The cloud coming from the T-Scan laser is not structured laser data, so the triangulation is more complicated: many reversed normals, many holes and gaps, spikes and singularities in the mesh construction. Thanks to the rich amount of data we were able to clean up and eliminate all these problems.
6 These elaborations were performed with the Rapidform software.
Fig. 10. An example of the closing of large holes. This operation is very simple with the PolyWorks tools, which allow the creation of a NURBS surface that fills the hole and automatically triangulate this surface, attaching it to the mesh
Fig. 11. In the left image the irregular and confused original mesh can be seen. In the right image the surface is re-meshed with Rapidform. This plug-in gives a more organized surface and corrects some errors and singularities of the original mesh. The application of this plug-in simplifies all the hole-filling operations and the eventual transformation of the mesh into NURBS surfaces
Fig. 12. Detail of the prototyped ornament. The high level of detail of the object is immediately apparent. This kind of prototype can be used to build silicone moulds or stamps for wax or sand casting
References
[1] Ministero della Ricerca Scientifica e Tecnologica (1999) Emergenza rilievo. Applicazioni di metodi operativi al rilievo per la valorizzazione e il restauro dei beni architettonici e ambientali. Roma, ed Kappa
[2] Vettore A (2001) Proceedings Italia-Canada Workshop on 3D Digital Imaging and Modeling Applications of: Heritage, Medicine & Land. Padova
[3] Breyman GA (2003) Archi, Volte, Cupole. ed Librerie Dedalo, Roma
[4] Ministero per i beni e le attività culturali, Soprintendenza per i Beni Architettonici e per il Paesaggio di Milano (2003) I quaderni della Villa Reale di Monza. A cura di Marina Rosa, ed BetaGamma, Viterbo
[5] Ministero per i beni e le attività culturali, Soprintendenza per i Beni Architettonici e per il Paesaggio di Milano (2006) I quaderni della Villa Reale di Monza. A cura di Marina Rosa, ed BetaGamma, Viterbo
[6] Bonora V, Chieli A, Spano A, Tucci G (2003) 3D metric-modelling for knowledge and documentation of architectural structures (Royal Palace in Turin). International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Vol. XXXIV, Part 5/W12, pp 60-65, Ancona
[7] Guidi G, Angelo Beraldin J, Atzeni C (2004) High-accuracy 3-D modeling of cultural heritage: the digitizing of Donatello's "Maddalena". IEEE Transactions on Image Processing, Vol. 13, No. 3
[8] Lingua A, Piumatti P, Rinaudo F (2003) Digital photogrammetry: a standard approach to cultural heritage survey. International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Vol. XXXIV, Part 5/W12, pp 210-215, Ancona
[9] Monti C, Brumana R, Fregonese L, Achille C, Fassi F (2005) From points clouds to surface analysis. Case studies, problems and perspectives. Italy-Canada 2005 Workshop on 3D digital imaging and modeling applications of: heritage, industry, medicine & land, Padova
[10] Monti G, Fassi F, Savi C (2004) I fotopiani di Villa Reale. In: Milano. Il fotopiano digitale, Cicero, pp 76-79
[11] http://www.leica-geosystems.com/corporate/en/lgs_405.htm
On The Road To 3D Geographic Systems: Important Aspects of Global Model-Mapping Technology Jan Kolar Centre for 3DGI, Aalborg University, Niels Jernes Vej 14, Aalborg DK-9220, Denmark. E-mail:
[email protected]
Abstract. What are the main concepts behind new geographic systems like Google Earth or NASA World Wind? How do they relate to current GIS mechanisms? What new do they bring to society? And what are the differences compared to cartographic maps? These questions are currently being asked at the IT departments of companies dealing with geographic information as well as in various agencies that collect, maintain and provide geographic data. This text approaches the hype around these technologies from a more academic perspective. The main question addressed here has rather the following form: what new problems do these systems address in terms of research and science? The argumentation concentrates on geographic spatial referencing and on the uniform organization of geographic data on a network, and relates them to Internet services and to methods of navigation in three-dimensional virtual environments. These seemingly separate topics are identified as cornerstones of complex systems that attempt to augment information services with a three-dimensional model of the environment we inhabit. One possible solution and implementation of the core concepts is introduced. This software platform is provided to the community as an open source project called GRIFINOR.
1. Introduction

In terms of geography, three-dimensional representation brings some new possibilities to mapping which traditional cartographic maps cannot provide. One of the most prominent is the ability to create a model of an entire planet in scale 1:1 without restrictions on the dimensionality of the model's
geometry. This is a tempting challenge for researchers and companies, with many problems to solve. One issue is how to acquire all necessary measurements with sufficient accuracy and how to process them in order to construct the model. A question of equal importance is how to design a technology that allows people to exploit the model. This text addresses the latter question and, based on the experience from the GRIFINOR project, attempts to identify common aspects that these technologies have to deal with. A research subject targeted on a technology that provides interaction with a three-dimensional digital model of the entire planet at one-to-one scale tends to be complex to define. It can be recognized subjectively, as it deals with modelling of the environment in which we live. Therefore terminology, including the name for this subject, remains unclear, although multiple terms have been used when referring to such technology. The term "digital earth" (Gore 1998) was officially accepted in 1999 by an inter-agency working group established in the United States. But the terms "virtual earth" or "virtual globe" have since been used with a similar meaning. From the commercial sector recently came the term "geographic exploration systems", first used at ESRI's User Conference in 2005. Sometimes two of these terms are used in the same text referring to one technology. Finally, variants based on the more traditional term geographic information systems (GIS) (Tomlinson 2000) are used, such as "3D GIS", "2nd generation GIS" or "global GIS", with overlapping meanings. Nevertheless there are very few research works that address the relevant technological concepts. And if they exist, such as (Cignoni et al. 2003) or (Aasgaard and Sevaldrud 2001), they originate from different domains of science and do not use the above mentioned terms. In order to refer to the three-dimensional graphical content which the technology in focus should deal with, the term model-map is used in this article. The term was used in (Kjems and Kolar 2005) and refers to a three-dimensional model of geographic features in an area, which is geographically referenced like traditional cartographic maps. The term global model-map is used when it is important to stress that the model covers the entire planet, since a model-map can have only local coverage. The main part of the article consists of four sections. Each section concentrates on arguments about a selected issue that is considered important for the systems providing a global model-map.

1 June 2006: http://www.grifinor.net
2 June 2006: http://www.geoplace.com/Uploads/FeatureArticle/0601dr.asp
The selection of the aspects aims at addressing problems that are missing or remain unsolved compared to GIS, which is considered the most closely related technology. The identification of the cornerstones, as well as all the ideas, comes from the GRIFINOR project. The arguments collected in each section are supported by the solutions that have been implemented in the GRIFINOR system. These solutions are used as examples, but they demonstrate how all of the cornerstones mentioned in the article can be found in a single system.
2. Spatial referencing and coordinate systems

Any geographic software must select which coordinate system to use. It should be stressed that this is an elementary decision that influences the algorithms of the implemented functions, the data representation as well as the data management. Additionally, in the case of three-dimensional applications, a reference surface for heights must be chosen as well. This is of the same importance for these systems as the horizontal position. Regarding the coordinate system, every important GIS today is built around the spatial concepts used for the creation of cartographic maps. The data are organized in planar "tiles" and analysed using planar Euclidean geometry. Planar geometry has many advantages in terms of computational complexity and provides sufficient results for many practical applications. However, one must realize that each planar "tile" is associated with a transformation process to a selected cartographic projection, as illustrated in Figure 1.d. Dealing with the projections has some negative aspects, such as geometric distortion, limited spatial range, numerical precision, diversity of projected reference systems and others (Kjems and Kolar 2005), which requires specialized staff, complicates the development of a computer system and affects data exchange and interoperability. The reference surface for three-dimensional mapping introduces a conceptual paradox into GIS technology. One should consider the plane of a cartographic map as an approximation of the shape of the Earth (or of a part of it). A plane has a simple definition and, since its mathematical description is explicit for each map, storage of any data for its definition is unnecessary. This is a great advantage, which is well exploited in GIS systems. But the plane as a representation of the planet's figure coincides with the representation of the terrain relief. On the map both surfaces are enforced to be de facto the same plane, and for users the conceptual difference is usually unimportant or, in the worst case, can be unclear. Nevertheless, model-maps that represent fractions of topography in three dimensions are today also built on top of these "tiles". This causes the
paradox mentioned above, because three-dimensional geometry is used to represent geographic features in a skewed space that is defined above the projected "tile", even though this is technically unnecessary. Consequently all the negative aspects of projections mentioned in the previous paragraph are inherited, while all of them could simply be avoided.
Fig. 1. Coordinate systems and reference surfaces in GRIFINOR
For the design of new systems providing a global model-map it is suggested to turn to a geocentric coordinate system as the primary system. Concepts for data representation, data management and all elementary functions should be based on a single origin and orientation that can be used for spatial referencing around the entire planet. The paradox mentioned above can be considered a consequence of the evolution of GIS into three-dimensional space. However, the global model-map systems follow a different concept of spatial referencing. It is important to realize that this difference lies at the very core of these systems and affects nearly every part of the system, including the data. Therefore, in terms of system design and development, the change from today's GIS to global three-dimensional systems cannot be regarded as an evolution. Due to the conceptual differences it would rather be a revolution, meaning that new systems need to be developed.
2.1. Spatial Referencing in GRIFINOR
GRIFINOR eliminates the use of cartographic projections from the concepts of all elementary parts of the system, including the data representation. Three-dimensional coordinates with origin coinciding with the Earth's centre of mass are used for referencing. The definition of the axes is analogous to the World Geodetic System (WGS), meaning that the z-axis is defined by the North Pole and the x-axis lies in the equatorial plane and points to the Prime Meridian, where the reference pole and the meridian are defined by the International Earth Rotation and Reference Systems Service (IERS). For this origin and orientation, three systems of coordinates are used in GRIFINOR when convenient. Right-handed Cartesian coordinates [x, y, z] (see Figure 1.c) are used for data representation, visualization and indexing (see the next section). Spherical coordinates [Φ, λ, r] (see Figures 1.a and 1.b) are used for indexing and for the representation of the geopotential model of the Earth that is described later in this section. Geographic coordinates [φ, λ, H], where H is the ellipsoidal height (see Figures 1.a and 1.b), are currently used only for reporting a position on the Earth to the user. Conversions between these systems have been implemented in GRIFINOR. In GRIFINOR, height is as important a measure as the position on a mathematical surface such as a plane or an ellipsoid. However height, as a geodetic term, has multiple definitions (Torge 2001) and must be applied correctly. For the sake of clarity, only a distinction between normal height and ellipsoidal height is made here. Normal height has a physical explanation, while ellipsoidal height has a merely geometric meaning. The reference surface for normal height is defined by all points with the same gravity potential, which is closely associated with the mean ocean surface. Such a reference surface is the geoid (see Figure 1.b). The difference between ellipsoidal and normal heights can exceed a hundred meters depending on the location on the Earth. This imprecision is unacceptable for a global model-mapping system that should also be capable of addressing details about constructions in urban areas. Since the definition of normal height works only under the presumption that the coordinates of the point are determined in the gravity field of the Earth, a model of that field is implemented in GRIFINOR. For this purpose a geodetic method for the gravity potential based on spherical harmonics (Torge 2001) was implemented.
3 June 2006: http://en.wikipedia.org/wiki/WGS84
4 June 2006: http://www.iers.org
The coefficients of the Earth Geopotential Model 1996 (Lemoine et al. 1998) have been used as data about the variations of the gravity potential.
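One of the conversions mentioned above, from geographic coordinates [φ, λ, H] on the WGS84 ellipsoid to geocentric Cartesian coordinates [x, y, z], follows the standard geodetic formula. The sketch below shows only that conversion; it is a textbook formula and is not taken from the GRIFINOR source code.

// Convert geographic coordinates (latitude, longitude, ellipsoidal height) on the
// WGS84 ellipsoid to geocentric Cartesian coordinates [x, y, z] in metres.
public final class GeodeticToCartesian {

    static final double A  = 6378137.0;            // WGS84 semi-major axis (m)
    static final double F  = 1.0 / 298.257223563;  // WGS84 flattening
    static final double E2 = F * (2.0 - F);        // first eccentricity squared

    public static double[] toCartesian(double latDeg, double lonDeg, double h) {
        double lat = Math.toRadians(latDeg);
        double lon = Math.toRadians(lonDeg);
        double sinLat = Math.sin(lat);
        // Prime-vertical radius of curvature at this latitude
        double n = A / Math.sqrt(1.0 - E2 * sinLat * sinLat);
        double x = (n + h) * Math.cos(lat) * Math.cos(lon);
        double y = (n + h) * Math.cos(lat) * Math.sin(lon);
        double z = (n * (1.0 - E2) + h) * sinLat;
        return new double[] {x, y, z};
    }

    public static void main(String[] args) {
        double[] p = toCartesian(57.0, 9.9, 40.0);   // roughly Aalborg
        System.out.printf("x=%.1f y=%.1f z=%.1f%n", p[0], p[1], p[2]);
    }
}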
3. Spatial indexing and queries

Spatial indexing, as a subject dealing with the organization of data and with algorithms that aid rapid access, can be regarded as "quiet waters" in GIS. Today's GIS are based on two-dimensional indexing techniques such as R-trees or quadtrees (Silberschatz et al. 1997), which were solved thirty years ago. However, as identified in the previous section, systems dealing with global model-mapping have a different approach to spatial referencing and therefore need a different indexing technique than current GIS systems. It is safe to expect that some three-dimensional variations such as octrees can be used. But the development of alternative indexing approaches might also be considered, especially in the domain of spherical or even geographic coordinates. A combination of several indexing approaches may also be feasible, as there are geographic data of diverse nature. It was mentioned in the previous section that the plane of a map is regarded as a representation of the Earth's figure and that the same plane coincides with the representation of the terrain relief. This leads to a scenario in which the terrain data are "optional" for maps. In terms of spatial indexing it is important that any additional data about the shape of the terrain, such as contours or elevation points, can usually be indexed using the same mechanisms as other point, linear or polygonal features. This is because they are represented with respect to the same planar geometry. However, this scenario may change when shifting to three-dimensional space. It definitely changes when dealing with systems for global model-mapping, where either geoid or terrain data play a defining role similar to that of the plane of a map. In model-maps the geometry of the terrain is clearly distinct from the geoid that represents the figure of the Earth. These surfaces are different in space, and neither of them has a simple or explicit mathematical description. Data must be stored in order to define both of these surfaces. The importance of the terrain representation increases dramatically in model-maps, because it provides the surface that nearly all other geographic features are related to. Both of these surfaces have a common
5 The storage requirements for data describing the geoid are lower by several orders of magnitude than the requirements for terrain data of the same geometric precision.
characteristic of being continuous around the entire globe, which might make them convenient to index in terms of spherical geometry. This also distinguishes these surfaces conceptually from many other geographic features. The arguments given above support the use of multiple indexing techniques: a separate index for data representing phenomena that span the entire globe, such as the topographic surface, while another indexing method can be suitable for individual geographic features such as man-made constructions or vegetation. However, this suggestion cannot be generalized. How convenient it is to use multiple indices depends on the nature of the data and on the queries that the index supports. A good example of a query type that all global model-mapping systems need to deal with has the form "retrieve the part of the model-map that should be shown at a given position", which is also the generic query used for visualization.
3.1. Spatial Indices in GRIFINOR

The primary goal of indexing in GRIFINOR is to group data about geographic features according to their position and importance for visualization. This is necessary in order to aid visualization queries, which are essential for updates of the three-dimensional scene of the model-map. GRIFINOR uses two original indices. Properties common to both indices are:
- they are space driven
- they use a hash index on top of a B-tree.

GRIFINOR implements space-driven indices because they ensure a fixed, invariant subdivision of the space. This is feasible when storing the content of an index on different hosts is required. GRIFINOR uses a two-level indexing mechanism. On the lower level the indices exploit a sorted ordering of data for storage through a B-tree (Silberschatz et al. 1997). The common implementation of the B-tree then relies closely on two different hash indices. Hash indices, in contrast to ordered indices, are based on the data being distributed uniformly across a given range of "buckets", where the "bucket" in which a record is stored is determined by a function. By the two-level mechanism GRIFINOR achieves that the use of an index can be split between two more or less independent components, as sketched below. Both of these properties are crucial for the network architecture of GRIFINOR, which is described in the next section. The rest of this section introduces the hash part of the indices.
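A minimal sketch of the two-level idea is given below: a hash-style function maps a feature to a cell identifier, and a sorted map (standing in for the B-tree level) stores the features under that identifier. The cell function shown here is a deliberately trivial placeholder, not the GIG or DSO functions described next.

import java.util.ArrayList;
import java.util.List;
import java.util.TreeMap;

// Two-level indexing sketch: a "hash" level computes a cell identifier from a position,
// and a sorted map (standing in for a B-tree) stores features under that identifier.
public final class TwoLevelIndex {

    private final TreeMap<String, List<String>> store = new TreeMap<>();

    // Placeholder cell function: a coarse integer grid key; GIG/DSO would go here.
    static String cellId(double x, double y, double z, double cellSize) {
        long cx = (long) Math.floor(x / cellSize);
        long cy = (long) Math.floor(y / cellSize);
        long cz = (long) Math.floor(z / cellSize);
        return cx + "/" + cy + "/" + cz;
    }

    public void insert(String featureId, double x, double y, double z) {
        store.computeIfAbsent(cellId(x, y, z, 1000.0), k -> new ArrayList<>()).add(featureId);
    }

    public List<String> query(double x, double y, double z) {
        return store.getOrDefault(cellId(x, y, z, 1000.0), List.of());
    }

    public static void main(String[] args) {
        TwoLevelIndex idx = new TwoLevelIndex();
        idx.insert("house-42", 3510.0, 120.0, -250.0);
        System.out.println(idx.query(3600.0, 500.0, -100.0));   // same 1 km cell -> [house-42]
    }
}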
3.1.1 GIG

GRIFINOR implements the global indexing grid (GIG), a method first introduced in (Kolar 2004a) and further developed in the author's doctoral thesis. The solution belongs to the field of discrete global grids (Sahr et al. 2003). The subdivision of the three-dimensional space is based on a tessellation of the sphere using a geometric structure called the Voronoi diagram. A similar concept was introduced in (Lukatela 1987). In contrast to Lukatela's work, where the tessellation depends on the distribution of data in space, GIG defines a space-driven subdivision. Another important property of GIG is that the subdivision is defined at many levels of resolution and thus supports multi-resolution data representation, which is a requirement for any global model-map system.
Fig. 2. Geometric interpretation of GIG tessellation and GIG cell on the unit sphere .
The subdivision unit of GIG is a cell, as illustrated in Figure 2. A cell consists of all the points with an angular distance to the vector of the cell's centroid smaller than to the vector of any other centroid. Note that the geometric representation of the cell is a cone that goes to infinity, and thus the cells tessellate the entire three-dimensional space. GIG is based on the spherical coordinates introduced in the previous section. Details about GIG can be found in the works mentioned above. GRIFINOR uses GIG for the data representation of the topographic surface and of the geoid. The solutions are addressed
6 June 2006: http://en.wikipedia.org/wiki/Voronoi
in (Kolar 2004b) and (Kolar and Ilsoe 2005) and in the author's doctoral thesis.
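The assignment of a point to a GIG cell reduces to finding the centroid direction with the smallest angular distance, i.e. the largest dot product with the point's unit direction vector. The sketch below shows only that assignment step, with a hand-picked set of centroid directions; the actual GIG centroid layout and multi-resolution levels are not reproduced here.

// Assign a point to the cell whose centroid direction has the smallest angular
// distance to the point's direction (largest dot product of unit vectors).
public final class GigCellAssignment {

    static double[] normalize(double[] v) {
        double len = Math.sqrt(v[0] * v[0] + v[1] * v[1] + v[2] * v[2]);
        return new double[] {v[0] / len, v[1] / len, v[2] / len};
    }

    // centroids: unit direction vectors of the cell centroids on the sphere.
    public static int cellOf(double[] point, double[][] centroids) {
        double[] p = normalize(point);
        int best = 0;
        double bestDot = -2.0;
        for (int i = 0; i < centroids.length; i++) {
            double[] c = centroids[i];
            double dot = p[0] * c[0] + p[1] * c[1] + p[2] * c[2];
            if (dot > bestDot) {      // larger dot product = smaller angular distance
                bestDot = dot;
                best = i;
            }
        }
        return best;
    }

    public static void main(String[] args) {
        double[][] centroids = {{1, 0, 0}, {0, 1, 0}, {0, 0, 1}, {-1, 0, 0}, {0, -1, 0}, {0, 0, -1}};
        double[] point = {4.1e6, 7.0e5, 4.8e6};   // geocentric position, metres
        System.out.println("cell index: " + cellOf(point, centroids));   // index of the nearest centroid
    }
}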
3.1.2 DSO

For the indexing of individual geographic features, such as houses, GRIFINOR deploys an original depth-smart octree (DSO) method. DSO is an octree-like hierarchical subdivision with variable depth, which allows an arbitrary number of records to be associated with any subdivision unit. The units have a significant spatial interpretation as cubical cells, where each cell on a given subdivision level has one eighth of the volume of its parent cell, as depicted in Figure 3.a. The root cell sets the limit of the spatial range of the index. In GRIFINOR the root cell has sides approximately 34360 kilometres long (see Figure 3.b), with the centre of the cell and the cell orientation coinciding with the origin and orientation introduced in the previous section. In DSO each cell is identified with a unique string (identifier), which is used by the B-tree index when the data are actually stored (the same holds for GIG). The construction of an identifier is based on a binary subdivision which is applied to the root cell in each dimension. This means that the identifier consists of three sub-strings. The example depicted in Figure 3.c shows the sub-strings for the first three DSO levels. The implementation of DSO in GRIFINOR has 31 levels.
Fig. 3. DSO subdivision and construction of DSO cell identifier
The main idea behind DSO is that features that are more important for visualization are indexed in the larger cells, at subdivision levels closer to the root. The measure of importance must be solved as a separate task. Through this mechanism GRIFINOR also deals with multiple levels of detail of a single feature. Although both the measure of visual impor-
tance and multiple levels of detail are major subjects in GRIFINOR, their description is outside the scope of this text.
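The binary construction of a DSO-style cell identifier can be sketched as follows: at every level each coordinate is tested against the midpoint of the current cell interval, appending '0' or '1' to that coordinate's sub-string. The root cell size and the concatenation of the three sub-strings are assumptions made for this example; the real DSO encoding may differ in detail.

// Sketch of a DSO-style identifier: binary subdivision of a cube, one bit per level
// and per axis, producing three sub-strings that together identify a cell.
public final class DsoIdentifier {

    static final double ROOT_SIZE = 34_360_000.0;   // root cell side in metres (approx.)

    static String axisBits(double value, int levels) {
        StringBuilder bits = new StringBuilder();
        double lo = -ROOT_SIZE / 2.0, hi = ROOT_SIZE / 2.0;
        for (int level = 0; level < levels; level++) {
            double mid = (lo + hi) / 2.0;
            if (value < mid) { bits.append('0'); hi = mid; }   // lower half of the interval
            else             { bits.append('1'); lo = mid; }   // upper half of the interval
        }
        return bits.toString();
    }

    // Identifier = the three per-axis sub-strings joined with '.' (a choice made here).
    public static String identifier(double x, double y, double z, int levels) {
        return axisBits(x, levels) + "." + axisBits(y, levels) + "." + axisBits(z, levels);
    }

    public static void main(String[] args) {
        // A point near the Earth's surface, geocentric coordinates in metres, 3 levels deep.
        System.out.println(identifier(3.51e6, 7.8e5, 5.26e6, 3));   // prints "100.100.101"
    }
}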
4. Interoperable network service

Interoperability and data exchange remain unsolved problems for GIS. This suggests the same situation when dealing with global model-mapping systems. A solution to interoperability is considered the third cornerstone that should be addressed in the new systems. Generally speaking this implies the ability to provide the model-map over the network so that it can be used by different applications. The core issues are the communication of data between two system components on the network and the overall network architecture of these components. A solution would result in a network service similar to those provided by map servers, operating system updates, or by decentralized systems like BitTorrent. Such a capability is advantageous in today's information society. Communicating data so that a model-mapping system works over the network is closely related to the concepts of data management, which have been addressed in the previous section. In general the communication consists of query messages and response messages sent between the system components. The query messages are closely associated with the queries supported by the indexing mechanism. The definition of such a messaging mechanism, referred to as a protocol, should be regarded as part of the systems for global model-mapping. For planar maps standard definitions of a protocol exist, for example the web map service (WMS) provided by OGC. However, the protocol for global model-mapping systems cannot be the same as that of WMS, due to the different concepts of spatial referencing and indexing (see the previous sections), but also because the visualization of the three-dimensional scene and the navigation in it (see the next section) are quite different. In this respect the OGC web feature service (WFS) is much more flexible, since it deals with the exchange of individual geographic features from which a model-map can be constructed. It should be stressed that the protocol and the functionality of the system are tightly coupled. The protocol transfers data in some internal format, where part of the data is used to control what operation should be performed.

7 June 2006: http://en.wikipedia.org/wiki/Bittorent
8 June 2006: http://en.wikipedia.org/wiki/Web_Map_Service
9 June 2006: http://en.wikipedia.org/wiki/Web_Feature_Service
For example, if a response message of the well-known hyper-text transfer protocol (HTTP) contains the response code "404", this tells the web browser that the requested page cannot be found. HTTP can also transfer the actual data that have been requested. Such data are typically in some exchange format, e.g. the well-known hyper-text mark-up language (HTML). However, the design of a protocol for global model-map systems that would maximize interoperability remains a problem. The efforts of standards committees are focused on specifications of data associated with exchange formats. This definitely supports interoperability, but this approach works only under the presumption that the global model-mapping systems are fully and exclusively compliant with a selected standard, e.g. CityGML or SEDRIS. This presumption is difficult to fulfil because these standards are extensive and their adoption is a complex process. Also the state of interoperability achieved by this approach is very fragile, because any alternative solution or custom extension may easily ruin the status. Additionally, the standards themselves are changing. Regarding the set-up of the system components on the network, a client-server architecture may be applied. The client-server model is based on the distinction of the client from the server. Each instance of the client software can send query messages to a server. For example, a web browser acts as a client when visualizing a map stored on a web server. However, an alternative peer-to-peer architecture may be considered for global model-mapping systems. An important goal in establishing a peer-to-peer network is that all clients provide resources, including bandwidth, storage space and computing power. Thus, as system components arrive and demand on the system increases, the total capacity of the system also increases. This is not true of a client-server architecture with a fixed set of servers, in which adding more clients could mean slower data transfer for all users. The distributed nature of peer-to-peer networks also increases robustness in case of failures by replicating data over multiple peers. Finally, by enabling peers to find the data without relying on a centralized index server, there is no single point of failure in the system.
4.1. Interoperability in GRIFINOR

GRIFINOR deals with interoperability on the level of the programming language instead of a strict specification of the data representation.

10 June 2006: http://www.citygml.org
11 June 2006: http://www.sedris.org/it_is.htm
It has been decided to keep GRIFINOR independent of any particular standard specification of geographic features and to leave the possibility to add suitable data representations separately for each application. Therefore the implementation of GRIFINOR ensures scalability in the data representation while keeping the interoperability of the system intact. Since the system is coded in Java, which is an object-oriented technology, both the communication protocol and the data representation of features are defined in terms of objects. The important part of the protocol is implemented in the class grifinor.godb.GRIFINMessage. Instances of this class represent query messages that are sent between system components over the network. The data representation of any feature that can be viewed in GRIFINOR must extend the class grifinor.datarep.GO. GO is a Java abstract class and ensures the scalability for custom definitions of geographic features. It allows de facto arbitrary definitions of features, their meaning, the relationships between them and their functionality. With this in mind, GRIFINOR must be regarded as a platform that supports the definition of custom data representations and application-specific functionalities.
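As an illustration of this extension mechanism, the sketch below defines a custom feature type on top of an assumed abstract base class. The base class shown here is a stand-in written for this example; the actual grifinor.datarep.GO class will have a different and richer interface.

// Sketch of a custom feature extending an assumed abstract base class.
// "GeoObject" stands in for the abstract base (in GRIFINOR: grifinor.datarep.GO).
abstract class GeoObject {
    // Geocentric position of the feature, metres.
    public abstract double[] position();
    // Hook used by the platform, e.g. to build a textual or visual representation.
    public abstract String describe();
}

// A simple application-defined feature type: a building with an eaves height.
final class Building extends GeoObject {
    private final double[] centre;   // geocentric coordinates of the footprint centre
    private final double height;     // eaves height above the footprint, metres

    Building(double[] centre, double height) {
        this.centre = centre;
        this.height = height;
    }

    @Override public double[] position() { return centre.clone(); }

    @Override public String describe() {
        return String.format("Building, %.1f m high, at [%.0f, %.0f, %.0f]",
                height, centre[0], centre[1], centre[2]);
    }

    public static void main(String[] args) {
        GeoObject b = new Building(new double[] {3.51e6, 7.8e5, 5.26e6}, 12.5);
        System.out.println(b.describe());
    }
}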
[Figure 4 shows GRIFINOR's components as a block diagram: an application programming interface (API), a query processor, an active query manager, an object store, a viewer, a server catalogue and a server list.]
Fig. 4. Schema of GRIFINOR's network architecture.
The interoperability within the GRIFINOR platform is provided by implementing a custom class loader. This means that even if a system component receives an unknown object defined by a custom application, it will be able to obtain the executable definition of the class from the provider and modify itself at runtime. This ensures that both data and processing methods can be exchanged in GRIFINOR. More about this is described in (Kjems and Kolar 2006).
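The general Java mechanism that such a design can rely on is ClassLoader.defineClass, which turns bytecode received over the network into a usable class at runtime. The sketch below is a minimal illustration of that mechanism, not GRIFINOR's actual class loader.

```java
// Minimal illustration of defining a class at runtime from bytecode received
// over the network; this is NOT GRIFINOR's actual class loader.
class RemoteDefinitionLoader extends ClassLoader {

    RemoteDefinitionLoader(ClassLoader parent) {
        super(parent);
    }

    // Turn the executable definition obtained from the data provider into a class.
    Class<?> defineRemoteClass(String className, byte[] bytecode) {
        return defineClass(className, bytecode, 0, bytecode.length);
    }
}
```

A receiving component could then deserialize the previously unknown object against the newly defined class, so that both its data and its processing methods become available locally.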
Currently GRIFINOR works in a client-server manner. Nevertheless, the design is such that the client and server components are tightly coupled. This means that a GRIFINOR instance can behave as both client and server, which makes it possible to build a peer-to-peer network. However, a specification of such a network is currently missing.
5. Navigation mode in 3D

The nature of a scene consisting of three-dimensional graphics is very different from that of a two-dimensional digital image. The usual display devices attached to computers have two-dimensional screens, which suit the visualization of images perfectly. There is a good chance that no navigation is necessary in order to view and explore an image, especially on high-resolution displays. In contrast, a three-dimensional scene without navigation possibilities immediately loses its potential for exploration and can easily become useless. The navigation options are also richer when dealing with a three-dimensional scene compared to panning and zooming in two-dimensional images. A navigation mode that allows users to move through and interact with the model-map is identified as the last fundamental aspect in this article. Any system aiming to support geographical exploration of a model-map must provide a method that allows users to change the view-point in the scene. Very little has been found in the literature about this subject.
5.1. Geo-embedded Navigation

GRIFINOR provides a relatively easy and intuitive solution for exploring a global model-map. It has been found that exploration is more natural when the "up" vector of the viewer is aligned with the gravitational up vector at the current position. This means that the roll rotation (see Figure 5.a) is eliminated and users do not need to control it. The navigation remains intuitive when navigating through interiors of buildings, around urban areas, as well as at global scale around the planet. Orientation is performed in the local coordinate system as depicted in Figure 5.a. The up direction vector is determined using the current position in geocentric coordinates, and movement is relative to the immediate perspective of the viewer controlled by the user. Moving along a global model-map with a pitch angle of zero should keep the user at the same altitude and, after passing through an entire orbit around the planet, bring the user back to the original position. Due to the imposition of geographical orientation rules, we have chosen to
name this family of navigation modes geo-embedded navigation (Ilsoe and Kolar 2005). The algorithm of the geo-embedded navigation is performed between rendering frames. The first step takes into account the changes to orientation, which users control with the mouse. Based on the new orientation angles, an orientation transform matrix is constructed. This matrix is used to transform the requested movement, which is also controlled by the mouse, into geocentric coordinates. In the geocentric coordinates a new position candidate is found and the error in vertical position is removed, as depicted in Figure 5.b. Then a new orientation at the new position is computed. Finally the new orientation and position are set for the next frame, which is rendered on the screen. The detailed explanation of the algorithm along with pseudocode is presented in (Ilsoe and Kolar 2005) and has been fully implemented in the GRIFINOR system.
[Figure 5 shows the local navigation frame: the localUp and localFwd vectors, the yaw angle, and the positions pos and pos1 used during the update step.]
Fig. 5. Basic spatial concepts of geo-embedded navigation.
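As a self-contained sketch of the altitude-preserving step described above (simplified to a spherical planet), the candidate position is re-projected onto the sphere of the original altitude after every movement. This is not the GRIFINOR implementation; the full algorithm, including the orientation transform, is given in (Ilsoe and Kolar 2005).

```java
// Self-contained sketch of the altitude-preserving step of geo-embedded
// navigation on a spherical approximation of the planet. NOT the GRIFINOR
// implementation; see Ilsoe and Kolar (2005) for the actual algorithm.
public class GeoEmbeddedNavSketch {

    static double[] step(double[] pos, double[] moveGeocentric) {
        // Current altitude measured as the distance from the planet centre.
        double altitude = norm(pos);

        // 1. Candidate position after applying the requested movement,
        //    already expressed in geocentric coordinates.
        double[] candidate = {
            pos[0] + moveGeocentric[0],
            pos[1] + moveGeocentric[1],
            pos[2] + moveGeocentric[2]
        };

        // 2. Remove the vertical error: re-project the candidate onto the
        //    sphere of the original altitude, so a zero-pitch movement keeps
        //    the viewer at a constant height and eventually orbits the planet.
        double c = norm(candidate);
        return new double[] {
            candidate[0] / c * altitude,
            candidate[1] / c * altitude,
            candidate[2] / c * altitude
        };
    }

    static double norm(double[] v) {
        return Math.sqrt(v[0] * v[0] + v[1] * v[1] + v[2] * v[2]);
    }
}
```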
A relation to another cornerstone of GRIFINOR is important. The DSO global indexing method introduced above is, at its eleventh level of subdivision, congruent with the local coordinate system used by geo-embedded navigation (see Figure 5.a). This is due to a technological limitation of the accelerated graphics hardware available today, which internally uses 32-bit floating-point numbers. As a consequence, the model-map that should be rendered on the screen must use the same representation of numbers. However, a model-map of the entire planet does not fit into a single coordinate system using 32-bit floats with sufficient accuracy.
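A small experiment illustrates the limitation: at geocentric magnitudes close to the Earth radius, a 32-bit float can no longer represent sub-metre offsets, whereas a 64-bit double can. The snippet below is only indicative.

```java
// Demonstrates the loss of sub-metre precision of 32-bit floats at
// geocentric magnitudes (the Earth radius is roughly 6.37 * 10^6 m).
public class FloatPrecisionDemo {
    public static void main(String[] args) {
        double earthRadius = 6_371_000.0;   // metres
        double offset = 0.25;               // a 25 cm displacement

        float  f = (float) (earthRadius + offset);
        double d = earthRadius + offset;

        // The float collapses to a nearby representable value, so the
        // 25 cm offset is lost; the double keeps it.
        System.out.println("float : " + (f - (float) earthRadius));
        System.out.println("double: " + (d - earthRadius));
    }
}
```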
6. Conclusions and directions for future research

This article addresses four aspects inherent to systems dealing with global model-mapping. These aspects aim to point out problems which are new or which should be reconsidered compared to the solutions used in traditional GIS. It is proposed that, since there are no technological restrictions to representing features in three dimensions, geocentric coordinates should be used as the principal coordinate system, thus avoiding cartographic projections at the level of system development. It has been shown that elementary concepts of geographic databases, which are inseparable components of GIS, namely the indexing mechanisms, need to be reconsidered. This is a direct consequence of the first aspect regarding the selection of the coordinate system. It is also argued that the ability to work over a network is a vital requirement for any information system, which influences many essential decisions about the technology. Therefore it should be addressed in the design of global model-mapping systems from the start. Finally, this article stresses the very different nature of a three-dimensional scene, which is a new aspect compared to traditional GIS. It is argued that the mode of navigation in a three-dimensional scene is essential for systems providing a global model-map.

Solutions to the different cornerstones of the emerging global geographic technology were demonstrated through implementations in the GRIFINOR platform. These solutions serve as an example. They illustrate how the main aspects addressed in this article relate to a single coherent technology that is able to provide a global model-map to users or other applications. The introduced indexing mechanisms do not enforce the use of any kind of projection. GRIFINOR is therefore one of the very first systems (if not the first) with such a property. However, this causes problems when dealing with image data, which by definition are meant for visualization on flat displays. As a consequence, aerial and satellite images or topographic maps can be treated as appearance attributes of the terrain geometry, in the same way as a facade texture is an attribute of the geometry of a building. Dealing with terrain textures in a rigorous way remains a subject for future work.

The proposed solution to interoperability allows the freedom to implement different schemas of data representation, with or without topology, together with analytical functionality. Although this is conceptually an elegant, object-oriented solution to interoperability, it has a drawback: GRIFINOR is a platform based on Java technology, therefore interoperable applications and implementations of various data representations must be made in Java as well. Several subjects for future research are related to
the ability to provide a model-map as a network service. GRIFINOR will in particular have to deal with multiple hosts and dynamic searches for the data resources available on the network. Finally, the implementation of the GRIFINOR platform presented in this article is provided to the community in the form of open source code. Anybody who is interested can study the topics addressed here, experiment with other, possibly better solutions, or extend it for custom applications of the global model-map.
References

R. Aasgaard and T. Sevaldrud (2001), Distributed Handling of Level of Detail Surfaces with Binary Triangle Trees. In Proceedings: ScanGIS'2001 - The 8th Scandinavian Research Conference on Geographical Information Science, Aas, Norway.
P. Cignoni, F. Ganovelli, E. Gobbetti, F. Marton, F. Ponchio and R. Scopigno (2003), Planet-Sized Batched Dynamic Adaptive Meshes (P-BDAM). In Proceedings: IEEE Visualization, IEEE Computer Society Press, pp 147-155.
A. Gore (1998), The digital Earth: understanding our planet in the 21st century. Keynote: Grand Opening Gala of the California Science Center, Los Angeles, CA, USA.
F. G. Lemoine et al. (1998), The Development of the Joint NASA GSFC and NIMA Geopotential Model EGM96. NASA Goddard Space Flight Center, Maryland, USA.
H. Lukatela (1987), Hipparchus Geopositioning Model: An Overview. Eighth International Symposium on Computer-Assisted Cartography (Auto-Carto 8), Baltimore, Maryland, USA.
P. M. Ilsoe and J. Kolar (2005), Geo Embedded Navigation. Geo-information for Disaster Management, Springer-Verlag, pp 1163-1172.
E. Kjems and J. Kolar (2005), From mapping to virtual geography. Proceedings CUPUM 2005, London, U.K.
E. Kjems and J. Kolar (2006), Spatial object structure for handling 3D geodata in Grifinor. In this publication.
J. Kolar and P. Ilsoe (2005), Flexible Terrain Representation Using Runtime Surface Reconstruction. Proceedings of the WSCG Conference, Pilsen, the Czech Republic.
J. Kolar (2004a), Global indexing of 3D vector geographic features. In Proceedings: International Society for Photogrammetry and Remote Sensing 20th Congress, pp 669-672.
J. Kolar (2004b), Representation of geographic terrain surface using global indexing. In Proceedings: 12th International Conference on Geoinformatics - Geospatial Information Research, pp 321-329.
K. Sahr, D. White and A. J. Kimerling (2003), Geodesic Discrete Global Grid Systems. Cartography and Geographic Information Science, vol. 30, pp 121-134.
A. Silberschatz, H. F. Korth and S. Sudarshan (1997), Database System Concepts, Third Edition, The McGraw-Hill Companies, Inc.
W. Torge (2001), Geodesy, Walter de Gruyter GmbH & Co.
R. Tomlinson (2003), Thinking about GIS, Environmental Systems Research Institute Press, Redlands, California, USA.
Cristage: A 3D GIS with a Logical Crystallographic Layer to Enable Complex Analyses

B. Poupeau, O. Bonin
IGN, COGIT Lab., 2-4 avenue Pasteur, F-94165 Saint-Mandé Cedex, France - {benoit.poupeau, olivier.bonin}@ign.fr
Abstract

Research on 3D GIS has mainly focused on the geometrical and topological modeling of objects to ensure data coherence and correct visualization (Molenaar, 1992; Pilouk, 1996; Zlatanova, 2000). While well suited to applications where visualization is important (urban planning, real estate market, ...), these models are not suited to computing complex queries such as the propagation of a natural phenomenon (e.g. a roof collapsing in a quarry). This paper presents Cristage (Poupeau, 2006), a 3D GIS prototype that enables complex analyses to be performed. Cristage is composed of two layers. The first layer deals with data coherence, visualization and the computation of simple volumetric queries (intersection, inclusion). It relies on Zlatanova's SSM (Zlatanova, 2000) and a spatial enumeration of objects by tetrahedrons. The second level is a logical layer on which analyses are performed. Each geographical object is abstracted by one or several crystals and is embedded in a 3D mesh. Mesh adjacencies are stored in this level. These adjacencies are purely topological: they correspond to adjacencies between the real geographical features, and not necessarily between their geometrical representations. An adjacency is described in Cristage by a "contact object", which is a 2-Cell that bounds two 3-Cells (a 3-Cell is the topological representation of a crystal). To study geographical phenomena, we add semantic information to the "contact objects" of interest. Then a "coupling object" gathers a set of "contact objects" (topologically associated to a cell-tuple). It represents a 2D geographical object that structures 3D space. Thus DTMs, faults or stratigraphic joints are "coupling objects". The logical layer of Cristage with its "coupling objects" constitutes the basis for performing spatial analysis and assessing phenomena propagation (Bonin, 2006).
Introduction

The recent progress of GIS in 3D visualization (Kofler et al., 1996; Koninger and Bartel, 1998; Huang et al., 2001), 3D topology and spatial analysis (Brisson, 1990; de La Losa, 2000; Billen, 2002), Web 3D GIS (Coors, 2002) and 3D GIS interfaces (Verbree et al., 1999) concerns numerous applications such as geosciences, urban planning, cadastre or telecommunication (Stoter and Ploeger, 2002). During the last decades, research has emphasized data models in urban areas to ensure good visualization with data coherence. Most of these "topological" models are built on a geometric modeling such as boundary representation (Zlatanova, 2000; de La Losa, 2000; ...) or spatial enumeration (Pilouk, 1996; Penninga, 2005; ...). In that case, they serve as a support to study the topological relationships between two objects: disjunction, adjacency, intersection and inclusion (Egenhofer, 1990; Billen, 2002). These complementary studies allow one to "require recognition of the relation between two objects" and to "identify a relation regardless of the dimension of objects" (Zlatanova, 2000). However, these works are not sufficient to perform complex spatial queries implying numerous geographical objects. For example, in order to know which objects are affected by a natural hazard such as an underground collapse, it is necessary to answer queries of this kind: Which parts of a geological bed can collapse? The roof (the upper part), the wall (the lower part) or both of them? Which objects are concerned by the collapse phenomenon (geological beds, the DTM, buildings)? This example brings out the interest of characterizing semantic parts of geographical objects as well as of inferring relationships between geographical objects in order to analyze them.

First, this paper introduces a new view of topological relations aiming at the analysis of the propagation of a natural phenomenon such as a collapse. Then Cristage (Poupeau, 2006), a 3D GIS designed to assess hazards, is presented. This GIS is composed of two layers. The first one is dedicated to topological and geometrical modeling to ensure good visualization and to compute on-the-fly volumetric queries. The second layer is a logical layer. It gives an abstraction and a description of each geographical object with one or several crystallographic forms. Each of these crystallographic forms is embedded into a 3D crystallographic mesh. This second layer is also used to manage relations between geographical objects. A cell-tuple structure (Brisson, 1990), called "coupling" in Cristage, describes the logical adjacency between 3D meshes.
Functional Analysis

In the case of a complex natural hazard such as an underground collapse, the existing topological structures (de La Losa, 2000; Zlatanova, 2000) and models of topological relations (Egenhofer, 1990; Billen, 2002) are not sufficient to analyze the repercussions on the geological layers above and on the buildings.
Fig. 1. An underground collapsing phenomenon (after http://www.prim.net)
The 9-intersection model (Egenhofer and Herring, 1990), together with Zlatanova's SSM, is used to illustrate the example of Fig. 1. The inclusion relation, before the collapse, between the cavity $C_a$ and the geological layer $Gl_6$ is formalized as:

$$
R(C_a, Gl_6) =
\begin{bmatrix}
C_a^{\circ} \cap Gl_6^{\circ} & C_a^{\circ} \cap \partial Gl_6 & C_a^{\circ} \cap Gl_6^{-} \\
\partial C_a \cap Gl_6^{\circ} & \partial C_a \cap \partial Gl_6 & \partial C_a \cap Gl_6^{-} \\
C_a^{-} \cap Gl_6^{\circ} & C_a^{-} \cap \partial Gl_6 & C_a^{-} \cap Gl_6^{-}
\end{bmatrix}
=
\begin{bmatrix}
0 & 0 & 0 \\
0 & 1 & 0 \\
0 & 1 & 1
\end{bmatrix}
$$
If the roof $C_r$ falls down, the relations change: $C_a$ and $C_r$ merge to create a unique cave $C_m$. Finally, when the collapse spreads to the house, all the relations between the large cave $C_m$, the geological layers and the house can be expressed as:

$$
R(C_m, Gl_i) =
\begin{bmatrix}
C_m^{\circ} \cap Gl_i^{\circ} & C_m^{\circ} \cap \partial Gl_i & C_m^{\circ} \cap Gl_i^{-} \\
\partial C_m \cap Gl_i^{\circ} & \partial C_m \cap \partial Gl_i & \partial C_m \cap Gl_i^{-} \\
C_m^{-} \cap Gl_i^{\circ} & C_m^{-} \cap \partial Gl_i & C_m^{-} \cap Gl_i^{-}
\end{bmatrix}
=
\begin{bmatrix}
1 & 0 & 1 \\
1 & 1 & 1 \\
0 & 0 & 1
\end{bmatrix}
$$
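To make the formalism concrete, a 9-intersection matrix can be held as a 3x3 boolean array and compared against reference patterns. The sketch below is a generic illustration (it is not part of Cristage), and the pattern shown is the textbook one for a region properly contained in another.

```java
// Generic illustration of the 9-intersection formalism (not Cristage code).
// Rows: interior, boundary, exterior of A; columns: the same parts of B.
// true means that the corresponding intersection is non-empty.
public class NineIntersection {

    // Textbook pattern for "A is properly inside B".
    static final boolean[][] INSIDE = {
        { true,  false, false },   // A interior meets only the interior of B
        { true,  false, false },   // A boundary meets only the interior of B
        { true,  true,  true  }    // A exterior meets every part of B
    };

    static boolean matches(boolean[][] relation, boolean[][] pattern) {
        for (int i = 0; i < 3; i++)
            for (int j = 0; j < 3; j++)
                if (relation[i][j] != pattern[i][j]) return false;
        return true;
    }
}
```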
While very useful to describe the topological relations between two objects of different dimensions, these formalisms are limited when it comes to analyzing the changes in topological relations. This example illustrates several drawbacks of these formalisms:
- Inferring from these formalisms and extracting the set of geographical objects concerned by the collapse is difficult without heavy computations and relation algebra;
- Topological structures, such as 3D FDS or SSM, are very useful for data coherence. However, it would be more powerful to complement them with semantic information characterizing the main parts of geographical objects. Without this semantic information there is no means to distinguish between the walls and the roofs of a geological layer. To analyze a complex phenomenon, it is important to know which parts of an object can be connected with other objects. For instance, it would be interesting to know whether the roof of geological layer 6 is adjacent to the wall of geological layer 5 (Fig. 1);
- Some of the spatial relations described through these formalisms do not occur in reality. For instance, Egenhofer's model contains 512 possibilities, among which 49 cannot be represented in reality (Li, 2006).

Different data producers may provide data of different natures: DTM, underground and topographic data, etc. The models of topological relations need coherent data to be really exploited. Otherwise, they imply increasing computing time and unreliable results. These drawbacks put the stress on the lack of spatial characterization of geographical objects, and on the lack of explicit relations between connected objects that may not be geometrically adjacent in the dataset.

Formalisms for geographical relations are powerful tools to describe relations and to reason on them. They extend to objects of different dimensions (possibly non-manifold), and enable all cases that may be encountered in geographical datasets to be covered. However, they require geometrically and topologically coherent data to be applied. We choose a less rich approach to cope with incoherent data, by restricting the possible relations between two 3D objects to adjacency (with a surfacic contact) and disjunction. Note that for 3D objects, intersections are less likely to occur if we stick to feature models close to the real world. From the postulate that each real object is locally adjacent to another one, Cristage supplies a logical layer to manage contacts between crystallographic abstractions of geographical objects. Contacts between geographical objects are handled through relations between the faces of crystallographic meshes. Thus Cristage manages a topological representation between objects that has no geometrical counterpart (meshes are not adjacent, only real-world features are). This topological model is a support for complex analyses.
3D Geographic object modeling

Most of the existing topological structures are based on a B-Rep model. This kind of model does not make possible complex analyses or extensive
data coherence assessment between different providers, but is well adapted to visualization and data storage. Cristage proposes both a topological and geometrical layer, as well as a logical layer. The first layer (Fig. 2) is composed of two geometrical representations:
- A weak B-Rep, in association with a part of Zlatanova's SSM (topological structure), using only Faces and Nodes. It ensures visualization, data coherence and the capability to move around objects;
- A spatial enumeration whose volume element (voxel) is a tetrahedron. It allows computing "on-the-fly" volumetric queries such as the volume of a geographical object or the intersection volume between two objects.
Fig. 2. UML diagram of Cristage, geometrical part
The second layer provides a logical analysis of the geographical object (Fig. 6). The study of the symmetry elements (axes, planes and centre of symmetry) enables each geographical object to be matched to one or several crystallographic forms. Each crystallographic form, characterized by its own combination of symmetry elements, belongs to one of the thirty-two symmetry groups, which themselves belong to one of the seven crystallographic systems (Fig. 3). Each crystallographic system has an elementary 3D mesh. The geometry of a mesh is defined by three vectors (a local coordinate system Ox, Oy, Oz), three lengths (a, b, c) and three angles (α, β, γ). Each plane of the crystallographic form can be characterized by a simple system of indices introduced by Miller. This indexation is helpful to extract interesting parts of a geographical object. For example, in Fig. 3, the face with index (001) represents the roof of the house. The use of crystallographic forms, rebuilt from their 3D meshes and their symmetry elements, allows geographical objects to be simplified while keeping
their global shapes. The Miller indexation makes it easy to reason on one object and to model spatial relations between objects.
[Figure 3 shows three of the seven crystallographic systems (cubic, orthorhombic and monoclinic) with their elementary meshes and symmetry elements, and a building abstracted as a monoclinic prism whose main parts are indexed with Miller indices such as (001), (010) and (100).]
Fig. 3. Three of the seven crystallographic systems (left, after Pomerol, 2003) and the crystallographic abstraction of a geographical object (right).
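As a purely illustrative sketch of the Miller indexation idea, a crystallographic abstraction could store its faces keyed by their Miller indices, so that semantically important parts such as the roof, indexed (001) in the example of Fig. 3, can be retrieved directly. The class and method names below are assumptions, not Cristage's actual data structures.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch only: a crystallographic form stored as faces keyed by Miller indices.
public class CrystalForm {
    // Key "hkl", e.g. "001" for the roof plane in the house example of Fig. 3.
    private final Map<String, double[]> faceNormals = new HashMap<>();

    public void addFace(int h, int k, int l, double[] outwardNormal) {
        faceNormals.put(h + "" + k + "" + l, outwardNormal.clone());
    }

    // Retrieve a semantically important part, e.g. the roof indexed (001).
    public double[] getFace(int h, int k, int l) {
        return faceNormals.get(h + "" + k + "" + l);
    }
}
```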
This logical modeling is the first step towards characterizing the main parts of a geographical object (the upper area and the lower area of a geological bed) and giving them semantic information (the roof and the wall of a geological bed). Besides, this high-level description emphasizes the parts of the geographical objects likely to be in contact with other ones and simplifies the explicit storage of these relations. The definition and the management of these explicit bounds are further described in the sequel of this paper.
Coupling Object

The two previous sections illustrate the limits of formalisms based on the study of intersections, and present Cristage's logical layer. Cristage's approach is to handle adjacencies between crystallographic meshes in order to manage geographical relations between objects. As the faces of the crystallographic forms are linked to the geometrical representation of objects, it is possible to go down to the geometrical layer when needed. Thus the logical layer in Cristage is a helper to manipulate a set of related geographical objects and to analyze the repercussions of a natural hazard involving several objects (e.g. landslide, collapse). To connect geographical objects,
adjacencies are deduced from the geometrical representation (clear adjacencies, near disjunctions, small intersections, inclusions, etc.) with the help of computational geometry algorithms. They can also be made explicit by hand. In terms of topological modeling, each crystallographic form is associated to a 3-Cell. An n-Cell is a topological manifold homeomorphic to an n-simplex or an n-ball. A 2-Cell, named "contact object" in Cristage, connects two 3-Cells and represents the logical adjacency between two crystallographic forms. If a "contact object" bears semantic information, a first idea is to call it an "interface object", and to gather interface objects to describe surfacic structures (e.g. DTM, faults). Thus, these surfacic structures ("coupling objects") can be thought of as cell-tuples. Formally, a cell-tuple is defined as a finite set of cells and respects these properties:
- the interiors of cells from the cell-tuple are disjoint;
- the boundary of every cell from the cell-tuple belongs to the cell-tuple;
- each cell is connected to another by a cell from the cell-tuple.

For example, a geological layer of clay can be subdivided into different blocks according to its humidity. A crystallographic form is abstracted from each of the sub-blocks, and the 2-Cells connecting the blocks are "contact objects". Moreover, if a house is built on this geological layer, the 2-Cell that constitutes the logical adjacency relation will be an "interface object", as it has a geographical significance (natural terrain) (Fig. 4).
[Figure 4 shows two schematic situations: objects A and B linked by a contact object, and a clay layer and a limestone layer linked by an interface object, the stratigraphic joint.]
Fig. 4. A contact object and an interface object
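A minimal data-structure sketch of these notions could represent crystals as 3-Cells, adjacencies as 2-Cell contact objects (which become interface objects when they carry semantics), and a coupling object as the collection of contacts forming a surfacic structure such as a stratigraphic joint or a DTM. The names below are assumptions made for illustration, not the Cristage implementation.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of Cristage's logical layer (assumed names, not the actual code).
class Cell3 {                      // topological representation of a crystal
    String featureId;
    Cell3(String featureId) { this.featureId = featureId; }
}

class ContactObject {              // a 2-Cell bounding two 3-Cells
    Cell3 a, b;
    String semantics;              // null for a plain contact; e.g. "stratigraphic joint"
    ContactObject(Cell3 a, Cell3 b, String semantics) {
        this.a = a; this.b = b; this.semantics = semantics;
    }
    boolean isInterfaceObject() { return semantics != null; }
}

class CouplingObject {             // a set of contact objects forming a 2D structure
    final List<ContactObject> contacts = new ArrayList<>();
    void add(ContactObject c) { contacts.add(c); }
}
```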
However, this modeling of surfacic geographical objects as cell-tuples is not totally satisfying, because of the lack of a geometrical representation for these objects. If we go further in the philosophy of Cristage, which considers each object to be volumetric, we can also consider geographical entities such as DTMs and faults to be associated to crystals, and thus to 3-Cells. Actually, crystallography can manage degenerate volumes (surfacic entities or open shapes) through surfacic crystallographic forms embedded in 3D meshes. We extend this possibility to more degenerate primitives (namely lines and points). Thus, a DTM or a fault remains a surfacic object in the system in terms of geometrical representation, but is associated to a crystal and to a 3D mesh. A stratigraphic joint between two geological layers will be topologically described by the crystal associated to the joint
(3-Cell), the crystals associated to the upper and lower geological layers, and a cell-tuple describing the contacts between these objects. From a geological point of view, we consider a fault area rather than the interface itself. To summarize, each geographical object, whatever its dimension (0D, 1D, 2D or 3D), is associated to a crystallographic form (possibly 0D, 1D or 2D) and to a 3D mesh. A crystallographic form is only explicitly computed for 2D and 3D geographical objects. This description allows objects of different dimensions to be manipulated in a very consistent way. An object such as a DTM or a fault is associated both to a cell-tuple and to a 3-Cell to allow a large array of queries. Unlike other models based on cell-tuples (Brisson, 1990), this kind of topological structure is not used to model geographical objects and to deduce their geometrical representations. Crystal topology has a specific role in Cristage: to manage the logical adjacency relations between objects so that it is possible to spread the effects or the evolution of a natural phenomenon.
Fig. 5. A fault constitutes a "coupling object" between two blocks of geological beds.
Conclusion and outlooks

This paper is a first step towards making 3D spatial relations explicit and towards the study of complex natural hazards. Indeed, the crystallographic characterization enables one to focus on some important parts of geographical objects. Besides, the addition of a topological representation of adjacencies between crystallographic forms helps to reason on geographical objects
without the requirement of perfectly coherent geometrical representations (Fig. 6).
Fig. 6. UML diagrams of Cristage, logical part
Moreover, the fact that the topological relations between objects in Cristage are limited to adjacency, and that all objects, whatever their dimension, are abstracted to crystals with 3D meshes, forms a solid framework to analyze the spatial spreading of phenomena such as a landslide. Finally, "coupling objects" (cell-tuples) as well as their associated 3D meshes are useful to propagate natural phenomena along themselves (e.g. a fault) or through their own structure (e.g. a DTM suffering from the effects of an underground collapse). This layer can also be a support to correct the topology in the geometrical layer and to ensure good visualization of objects. Existing formalisms could also enhance the logical layer of Cristage. To conclude, a relation algebra is being studied to reason on the crystallographic forms in terms of cardinal relations (North, South, East, West, Up, Bottom, etc.) (Bonin, 2006). This algebra, in association with crystallographic properties and adjacency relations, is the starting point to model a complex natural phenomenon such as a collapse.
Bibliography

Billen R., 2002, Nouvelle perception de la spatialité des objets et de leurs relations. Développement d'une modélisation tridimensionnelle de l'information spatiale, Université de Liège
Bonin O. and Poupeau B., 2006, Orientation et position de solides dans l'espace, In Proceedings of RTE'06, Nantes, accepted, to be published
Brisson E., 1990, Representation of d-dimensional geometric objects, PhD thesis, University of Washington
Coors V., 2002, 3D GIS in networking environments, CEUS, 17
De La Losa A., 2000, Modélisation de la troisième dimension dans les bases de données géographiques, Université de Marne-la-Vallée
Egenhofer M. and Herring J. R., 1990, A mathematical framework for the definition of topological relationships, in Proceedings of the Fourth International Symposium on SDH, 803-813
Huang B., Jiang B. and Lin H., 2001, An integration of GIS, virtual reality and the internet for visualization, analysis and exploration of spatial data. International Journal of Geographical Information Science, 15, pp. 439-456
Kofler M., Rehatschek H. and Gruber M., 1996, A database for a 3D GIS for urban environments supporting photo-realistic visualization, http://www.icg.tugraz.ac.at/isprs96
Koninger A. and Bartel S., 1998, 3D-GIS for urban purposes. Geoinformatics, 2, pp. 79-103
Li S., 2006, A complete classification of topological relations using the 9-intersection method, International Journal of Geographical Information Science, 20, 6, pp. 589-610
Molenaar M., 1990, A formal data structure for 3D vector maps, in Proceedings of EGIS'90, 2, pp. 770-781
Penninga F., 2005, 3D Topographic Data Modelling: Why Rigidity Is Preferable to Pragmatism, COSIT 2005, Ellicottville, 409-425
Pilouk M., 1996, Integrated modelling for 3D GIS, ITC, The Netherlands
Pomerol C., Lagabrielle Y. and Renard M., 2003, Eléments de géologie, Dunod, pp. 287-295
Poupeau B. and Bonin O., 2006, 3D analysis with high-level primitives: a crystallographic approach. In Proceedings of SDH'06, accepted, to be published
Stoter J. E. and Ploeger H. D., 2002, Multiple use of space: current practice of registration and development of a 3D cadastre, Proceedings UDMS 2002
Verbree E., van Maren G. and Germs R., 1999, Interaction in virtual world views - linking 3D GIS with VR. International Journal of Geographical Information Science, 19, pp. 385-396
Zlatanova S., 2000, 3D GIS for urban development, ITC, The Netherlands
Zlatanova S., Rahman A. A. and Pilouk M., 2002, 3D GIS: current status and perspectives, in Proceedings of the Joint Conference on Geo-spatial Theory, Processing and Applications
The Democratizing Potential of Geographic Exploration Systems (GES) Through the Example of GRIFINOR

Lars BODUM and Marie JAEGLY
Centre for 3D GeoInformation, Aalborg University, Niels Jernes Vej 14, DK-9220 Aalborg, Denmark. E-mail: {lbo, jaegly}@3dgi.dk
Introduction

How can geographic information become a facilitating part of a democratization project? This question has been asked many times since geographic information systems (GIS) were invented about 40 years ago, and many possible answers have been given. Most of these answers have been in the form of either a planning support system or a decision support system. The main arguments against these systems have been the lack of individual control of content and poor representations of reality. Traditional geographic information systems have simply failed to match the demands within this area. Geographical information (geo-information) can participate in a democratizing effort at several different levels. The efforts mentioned above make use of geographical data in democratization efforts in other domains, i.e. democratization through geo-information. However, there can be little use of geo-information for democratization if widespread access to data is not a reality, that is to say democratization of geo-information. Access to data is of limited interest without the capabilities to analyze and use it. Therefore access to methods of analysis is a third level. Finally, the creation and distribution of geographical data has to be taken into account when talking about democratization and geo-information.

This article looks at geo-information from a communicative perspective. It argues that technologies of global representation of the Earth in 3D, recently labeled Geographic Exploration Systems (GES), offer a great democratization potential at each of the levels outlined above. Several GES are already available for use, with different features and levels of interaction available to the user. Amongst these systems, this article focuses on the particular case of GRIFINOR [1]. GRIFINOR is a newcomer in the group of GES, developed at the Centre for 3D GeoInformation at Aalborg
University (http://www.grifinor.net). Its particularity compared to other GES is that it is a decentralized system. We argue that this particular feature offers great opportunities for a democratic use of geo-information. GRIFINOR is an open source project. This article justifies the choice of the open source development model for a GES.

Keywords: Three-dimensional, Democratizing, Geovisualization, Geographic Exploration Systems (GES), Global, Geocentric, Distributed, Open Source.
Geo-Information, the Internet and Democracy

In this section, we explain our view that information, and especially geographic information, participates not only in understanding the world, but also in shaping it. Therefore the creation and publication of geographical information are politically significant activities. The rise of the Internet as a mainstream medium revolutionized how information is produced and shared, and this also applies to geo-information.
Geo-information participates in shaping the world

The very first words of Benkler's "The Wealth of Networks" clearly express the importance of information in modern societies: "Information, knowledge, and culture are central to human freedom and human development. How they are produced and exchanged in our society critically affects the way we see the state of the world as it is and might be; who decides these questions; and how we, as societies and polities, come to understand what can and ought to be done." [2]

Our knowledge of the world passes through its representation. What we know of the world is not only described but also defined by the information we receive. Therefore creating and spreading information participates in shaping reality. This is especially relevant when it comes to geographical information, since in this case the representation aims at describing very essential aspects of the world. Even though the subject of geographical information is very tangible, according to Raper it cannot be considered as neutral: "In GISc many would argue that realist representation is an ideology associated with a desire to intervene in the world. Hence, Baudrillard
(1983) argued that reality can be reflected, perverted, denied or invented in representation. The question is, can such interventions be emancipatory? This would imply that GISc must remain aware of the social dimensions of its representational ideologies and seek to work with social theory rather than attempting to dismiss it [...]" [3] p17.

If we follow Raper's point of view, all efforts in creating geographical information can influence reality, because they are made with this intention in mind. "All map-making has a certain motivation [...] scientific map inventory as conducted by contemporary mapping agencies around the world cannot be regarded as socially, culturally, and politically neutral [...] mapping as representation in the broad sense remains a contested activity shaped by the objectives of the sponsors of the cartography" [3] p76.

Therefore the act of creating geo-information should also be considered in its social dimensions, just as in other domains of information.
Democratization of information with the Internet network

The rise of the Internet, and especially the development of the World Wide Web (WWW) into a mainstream medium, brought deep changes in how information is spread, but also in how it is produced. A decade ago, information was put together and published by a limited number of media professionals. Nowadays the Internet, with its decentralized structure, makes every personal computer the equivalent of a printing room. Everyone - in places where the technology is available - can potentially publish his or her own information: the distribution of information has shifted from a few-to-many to a many-to-many structure.
Democratization of geo-information: GES

The change brought along by the Internet also influences geo-information, and contributes to democratization at all the levels outlined in the introduction. This process has led to numerous technologies combining the Internet and geo-information, such as the different Internet mapping initiatives developed through Open Geospatial Consortium specifications such as Web Map Services (WMS) and Web Feature Services (WFS). In a parallel development stream, the focus has been on geovisualization as a contrast to the traditional interface of the paper map. By exploring the possibilities of the new media, several initiatives were taken to define a new scientific paradigm as the successor to cartography as it had been known for decades [4-7]. One of the focus areas in this development has been a
multi-dimensional interface to virtual environments, such as GeoVRML [8]. At the same time an important project called the Alexandria Digital Library outlined the perspectives of a future spatial data infrastructure, where geography would be the ultimate interface to information of all kinds through an interactive map-based or globe-based interface [9]. This interest in a new kind of spatial data infrastructure was in particular one of the triggering factors for the whole idea of Digital Earth, which became reality a year later.

In 1998, Al Gore, then Vice President of the United States, gave a famous speech that is still considered a milestone in geographical information. He introduced the broad public to the concept of Digital Earth, and defined it as a "... multi-resolution, three-dimensional representation of the planet, into which we can embed vast quantities of geo-referenced data" [10]. He saw it ultimately as a way to "... turn a flood of raw data into understandable information about our society and our planet" [10]. This was because the Digital Earth would provide an interface to broadcast and access all kinds of geographically referenced data. The democratizing potential of this technology lies in the possibility of making the huge amounts of information contained in geographical data available to people. It would give people access to the data, but mainly it makes the data meaningful and usable for particular purposes. The three-dimensional aspect of Digital Earth has many different advantages, apart from an inherent enhancement of the visualization experience. A three-dimensional environment reproduces the context in which humans evolve in their everyday life, therefore making navigation intuitive. Furthermore, a three-dimensional model is more politically neutral than a two-dimensional map: "The choice of projection may be taken out of the hand of the surveyor and laid over to a political decision, where the size of neighboring countries may influence the choice of projection. As long as we choose a level of abstraction and a mathematical representation to render maps on paper or in digital form, the right solution is a compromise controlled by many parameters" [11].

Since Al Gore's speech, many developments no doubt fall under the technology that he was considering. These developments have been termed Geographic Exploration Systems (GES) by David Maguire, and are mainly exemplified by the GeoFusion technology that powers NASA WorldWind, Google Earth, and ESRI's ArcGIS Explorer of 2006.
These systems have taken the world of geo-information one step closer to democratic access. Geographical data is made available through the Internet, in the form of virtual globes. The next step in this evolution seems to be the shift from geo-information as the content of the data itself (when you access Google Earth just to watch satellite images) to geo-information as an interface to other information, applications, or communication. Google Earth already includes some of these dimensions. The content includes hyperlinks, and users can add and exchange their placemarks, or create their own 3D models and view them on the virtual globe. The following parts of this article present GRIFINOR as a GES, and how it contributes to the process of geo-information democratization.
GRIFINOR

This section describes how the GRIFINOR system, a newcomer in the group of Geographic Exploration Systems, takes the democratization of geo-information a step further. Although GRIFINOR fits the description of a GES, this term does not express the additional features of the system. It is best described as a distributed platform for geo-visualization. GRIFINOR is the software implementation of a new Internet service, which provides Geographical References For the Internet Network (GRFIN). It organizes data according to geocentric coordinates. Data and information are represented and accessed visually through a 3D virtual globe. In this respect, it is similar to other GES. There are three main differences, however: GRIFINOR is accessible directly from a web browser, it is object-oriented, and it is decentralized.
Fig. 1. A screenshot of GRIFINOR as the GES works in its first version.
Decentralized

The main originality of GRIFINOR, compared to other GES like Google Earth, is that it is a decentralized system. Each computer with a GRIFINOR client installed can also be a server and broadcast its own data. The clients' role is to provide users with a viewer and navigation tools, while servers provide storage and data management technology for the virtual model. The system retrieves data directly from various databases when needed, without passing through a central server. The result is a decentralized network that works like a Gnutella-style network. The concept behind the Gnutella network is simple. To join the network, a client has to know the address of at least one node already on the network. Once the client is connected to this node, it can broadcast a ping to find the addresses of other nodes. The basic idea is that each node maintains a connection to a number of other nodes, normally about five. To search the network for a resource, the client sends a query message to each of the nodes it is connected to. They forward the message, and when a resource is found the result, i.e. the resource name and address, is propagated back along the path.
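The sketch below is a generic illustration of such Gnutella-style query flooding, not GRIFINOR's actual protocol; the neighbour bookkeeping, the time-to-live value and the message identifiers are simplified assumptions.

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Generic sketch of Gnutella-style query flooding (not GRIFINOR's protocol).
class Node {
    final String address;
    final List<Node> neighbours = new ArrayList<>();   // typically about five
    final Set<String> resources = new HashSet<>();
    private final Set<String> seenQueries = new HashSet<>();

    Node(String address) { this.address = address; }

    // Forward the query to neighbours; when the resource is found, the
    // holding node's address is propagated back along the same path.
    String query(String queryId, String resource, int ttl) {
        if (ttl == 0 || !seenQueries.add(queryId)) return null;
        if (resources.contains(resource)) return address;
        for (Node n : neighbours) {
            String hit = n.query(queryId, resource, ttl - 1);
            if (hit != null) return hit;
        }
        return null;
    }
}
```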
Any geo-referenced data can be fed into GRIFINOR, visualized on one's own computer, but also shared for the whole world to see. This can be used to view large-scale geographic datasets, such as satellite images and 3D city models, but also other kinds of data. An individual can place a model of his or her own house in the global model to see how it interacts with the surroundings. A company can locate and visualize retail points or make marketing analyses and show them directly in GRIFINOR. The system can be used to view and navigate your own data, or to share it with other users who decide to visualize data from your server. This particular feature makes GRIFINOR a powerful tool for democratization in a future spatial data infrastructure.

Object-oriented

GRIFINOR is programmed in Java, which is an object-oriented language. Each geographic feature (building, vegetation, terrain) is treated as an independent Java object, which can interact with other Java objects, or with application programs. When requested by the client, objects are sent over to be processed and visualized. This leaves great flexibility to create all kinds of applications and application-specific content. Buildings in the prototype of GRIFINOR were created from a combination of building footprints in the detailed vector-based map (1:1,000) and roof elements extracted from aerial laser scanning (LIDAR). It was possible within 2-3 days to create 85% of the 80,000 buildings in Aalborg in this semi-automated way [12]. But the reconstruction method is not the important part. Whether it is done by laser scanning or by photogrammetry, the important part of this work is to define an object classification that is suitable for the application running on top of GRIFINOR. This classification should also be able to handle Level-of-Detail (LOD). For city modeling a reasonable choice would be CityGML [13]. The terrain is not based on raster data, as it is for example in Google Earth, but on a TIN (triangulated irregular network). Google Earth's solution offers a great technology for smooth visualization, which the TIN solution cannot match. However, the solution implemented in GRIFINOR has the advantage that it allows the terrain to be treated as a programming object. This opens opportunities for further development. TINs are used in industry, for example to retrieve data for the computation of a height profile along a given line or an accurate line-of-sight analysis. These kinds of analytical applications are possible in GRIFINOR. The system has been designed as a platform for applications. It can therefore support not only visualization functions, but also analytical applications, going towards a GIS in three dimensions.
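As an illustration of what treating the terrain as a programming object makes possible, the sketch below samples a height profile along a line on a TIN. The Tin interface and the sampling scheme are assumptions made for illustration only, not GRIFINOR's actual terrain API.

```java
// Illustrative sketch: sampling a height profile along a line on a TIN.
// The Tin interface is an assumption, not GRIFINOR's actual terrain API.
interface Tin {
    // Returns the interpolated terrain height at planar position (x, y).
    double heightAt(double x, double y);
}

class HeightProfile {
    // Sample n+1 heights evenly spaced between (x0, y0) and (x1, y1).
    static double[] along(Tin tin, double x0, double y0, double x1, double y1, int n) {
        double[] profile = new double[n + 1];
        for (int i = 0; i <= n; i++) {
            double t = (double) i / n;
            profile[i] = tin.heightAt(x0 + t * (x1 - x0), y0 + t * (y1 - y0));
        }
        return profile;
    }
}
```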
GRIFINOR is programmed in Java, which makes it platform independent. It can be launched directly from a web browser, which makes it very broadly accessible. All that is required is to have the Java runtime environment installed on the computer.
Open source

In this article we are dealing with democratization, through and of geo-information. From this point of view, the choice of an open source distribution is easily justified. For a computer program to be developed as "open source" means that the human-readable code is distributed, not only the computer-readable binaries. Having the code makes it possible for a programmer to understand and modify the program to suit his or her own needs. Open source programs usually come with a license that specifies the conditions under which the code can be used. The most well-known licenses (such as the GNU Public License) give the licensees the freedom to redistribute the program, with or without modifications. But this must happen under the terms of the same license, which will continue to stand even when the code is distributed and redistributed. GRIFINOR is distributed under the LGPL license, which gives the right to use, modify, and redistribute the program, as long as it is redistributed under the terms of the same license. This choice was made because we consider that GRIFINOR has the potential to empower actors of a limited size to broadcast their data on a decentralized platform, very much as the Internet does for text and other media. Such a potential can be realized only if a network dynamic is started, and individuals adopt the new system for their own activity. Therefore, users should be able to take the platform and modify it to suit their needs. Furthermore, making this software open source increases the chances that it will be useful to society. This is in fact also a unique possibility to spread the technology to parts of society that rarely have a fair chance of acquiring the integrated and proprietary software that dominates the GIS market right now. From an economic perspective, many discussions are taking place about an open source business model, which exceeds the scope of this article. To support the use of the system, as well as the spreading of the technology by distribution of the source code, a Web portal (www.grifinor.net) has been developed by the GRIFINOR team.
Conclusions

In this paper we have looked at the possibility that GES can change the very foundation of spatial data infrastructures when it comes to the democratization of geo-information, from the ideas of "Digital Earth" to the situation of today, where several vendors have launched equivalent solutions supporting the idea of GES. The second part of the paper presented a new GES called GRIFINOR, whose major principles, such as a distributable network, object-orientation and open source, were explained. With the launch of GRIFINOR, a new and exciting platform for geographic exploration is accessible to everyone. The next step in this evolution will be the development of specific applications that can run on the platform. We hope this will take place in the near future.
Acknowledgements

This initiative under the Centre for 3D GeoInformation is funded by:
• European Regional Development Fund (ERDF)
• Aalborg University, Denmark
• Kort & Matrikelstyrelsen (Danish National Survey and Cadastre)
• COWI A/S, Denmark (formerly known as Kampsax)
• Informi GIS, Denmark
References

[1] L. Bodum, E. Kjems, J. Kolar, P. M. Ilsoe, and J. Overby, "GRIFINOR: Integrated Object-Oriented Solution for Navigating Real-time 3D Virtual Environments," in Geo-information for Disaster Management, P. van Oosterom, S. Zlatanova, and E. M. Fendel, Eds. Berlin: Springer Verlag, 2005.
[2] Y. Benkler, The wealth of networks: how social production transforms markets and freedom. London: Yale University Press, 2006.
[3] J. Raper, Multidimensional geographical information science. London: Taylor & Francis, 2001.
[4] A. M. MacEachren and M.-J. Kraak, "Research Challenges in Geovisualization," Cartography and Geographic Information Science, vol. 28, pp. 3-12, 2001.
[5] A. M. MacEachren and D. R. F. Taylor, Visualization in modern cartography, 1st ed. Oxford, U.K.; New York: Pergamon, 1994.
[6] J. Dykes, A. MacEachren, and M.-J. Kraak, "Exploring Geovisualization," International Cartographic Association: Elsevier Science Ltd, 2005, pp. 732.
[7] J. Dykes, K. M. Moore, and D. Fairbairn, "From Chernoff to Imhof and Beyond: VRML & Cartography," presented at the Fourth Symposium on Virtual Reality Modeling Language, Paderborn, Germany, 1999.
[8] R. Martin, I. Lee, and G. L. Yvan, "Under the hood of GeoVRML 1.0," presented at the Fifth Symposium on Virtual Reality Modeling Language, Monterey, California, United States, 2000.
[9] M. F. Goodchild, "Towards a geography of geographic information in a digital world," Computers, Environment and Urban Systems, vol. 21, pp. 377-391, 1997.
[10] A. Gore, "The Digital Earth: Understanding our planet in the 21st Century," California Science Center, Los Angeles, 1998.
[11] E. Kjems, "From mapping to virtual geography," presented at CUPUM '05: Computers in Urban Planning and Urban Management: Abstracts of the 9th International Conference, London, 2005.
[12] J. Overby, L. Bodum, E. Kjems, and P. M. Ilsoe, "Automatic 3D building reconstruction from airborne laser scanning and cadastral data using Hough transformation," presented at the XXth ISPRS Congress, Istanbul, 2004.
[13] T. H. Kolbe, G. Gröger, and L. Plümer, "CityGML - Interoperable Access to 3D City Models," in Geo-information for Disaster Management, P. van Oosterom, S. Zlatanova, and E. M. Fendel, Eds. Berlin: Springer Verlag, 2005, pp. 883-900.
The Integration Methods of 3D GIS and 3D CAD

LI Juan 1, TOR Yam Khoon 2, ZHU Qing 1
1 LIESMARS, Wuhan University, P.R. China. Email:
[email protected]
2 School of Civil & Environmental Engineering, NTU, Singapore. Email:
[email protected]
Abstract

GIS (Geographical Information System) and CAD (Computer Aided Design) have evolved greatly over the past 30 years. GIS and CAD describe the same real-world objects, but they belong to different domains. The spatial and non-spatial data involved in the two systems are quite different, and the two systems cater for different applications. With the advancement of information technology and the 3D representation of geospatial information, more and more 3D applications demand that both of them be used together. This paper focuses on how to integrate 3D GIS and 3D CAD. After reviewing the development of both 3D GIS and 3D CAD systems, this paper compares their differences in domain/purpose, coordinate system, object type, and so on. Then five approaches for the integration of 3D GIS and 3D CAD systems are introduced: data exchange; direct data import; sharing the API of a database; CAD-based GIS systems/GIS-based CAD systems; and formal semantics and integrated data management. Based on the first approach, this paper designs a scheme for integrating 3D GIS and 3D CAD systems, taking into consideration that CAD data may be in a local coordinate system not compatible with GIS data, the information loss due to data translation, the translation of data between the complex geometric description in CAD and the simple geometric description in GIS, and so on. According to this scheme, and based on the VGEGIS software platform developed by LIESMARS of Wuhan University and AutoCAD 2006, a
prototype system is developed to demonstrate the integration of 3D GIS and 3D CAD.
Keywords: 3D GIS, CAD, integration, DXF, geometry translation
1. Introduction

GIS (Geographic Information System) and CAD (Computer Aided Design) systems have evolved over the past 30 years. In its original concept, CAD software was designed for geometric models of relatively small size vis-a-vis models of geographical scale. In contrast, GIS software was primarily designed to deal with geospatial models covering a relatively vast extent in area. Its fundamental feature is to create and maintain spatial (the graphics) and non-spatial (attribute information) data. GIS is used to represent existing objects and it places emphasis on spatial analysis, whilst CAD is used to represent non-existing objects and it generally lacks the capability of spatial analysis. As CAD has developed into AEC (Architecture, Engineering, and Construction) solutions and GIS has developed into 3D solutions, there is a renewed interest in large-scale geo-information among the diverse set of AEC and GIS end users (Zlatanova and Prosperi, 2005). To optimize the use of land resources, which are limited in an island state like Singapore, extending the utilization of land resources to the subsurface is very important. Thus land resources can no longer be treated as two-dimensional, so there is a need to create a 3D GIS system and also to synergize it with design-based CAD/AEC systems. Successful integration of 3D GIS and CAD/AEC is crucial to an intelligent spatial information management system, which is not limited to the efficient and effective archiving and management of land resources, but also covers the planning and design of infrastructure - both surface and subsurface. More and more integrated GIS-CAD systems (Fig. 1) are required by a variety of businesses and technologies, because both systems provide information on and deliver representations of the same real-world (man-made) objects in each phase of the project life cycle; for example, one wants to know where
objects are as well as the details about them. The first question can be answered with a GIS system, while a CAD system can represent objects in detail. A CAD system has limited and slow database links and poor spatial analysis as compared with GIS. And GIS systems trying to be CAD systems are often accused of being overly complex and of having limited sketching and drafting capabilities. So, fundamentally, GIS and CAD systems are different. Thus, the differences between CAD and GIS need to be studied and understood before integrating them.
Fig. 1. The integrated GIS-CAD system (adapted from Autodesk, 2004)
2. Methods of Integration CAD and GIS systems traditionally focus on different domains and purposes. There are a number of different approaches for integrating GIS and CAD.
2.1. File Translation File translation involves the conversion of data from one file format to another, i.e. converting CAD data to a GIS data format and vice versa. DXF can be used as the intermediate file format (Fig. 2). Because of the differences in
CAD and GIS data models and file formats, users need to specify the syntactic and semantic mapping. Unfortunately, the process is not flawless, and data is often lost. The loss of information due to translation cannot be ignored, but with careful attention it can be minimized.
Fig. 2. File Translation (Maguire, 2003)
2.2. Direct Data Import Direct data import (Fig. 3) is conceptually similar to file translation, except that data is read and converted on the fly into memory. It does not need an intermediate format, just an in-memory representation in the GIS. For example, ArcView can read MicroStation DGN files directly, so the CAD data can be visualized, queried, and printed in the same map as other GIS data.
Fig. 3. Direct Data Import (Maguire, 2003)
2.3. Share Access to a Database An alternative to data conversion is shared access to a database. Technically, one system embeds an application programming interface (API) that allows access to data on the fly. ESRI's CAD Client extension to ArcSDE
allows MicroStation or AutoCAD users to store and retrieve CAD elements and GIS features in a DBMS (ESRI, 2003a, 2003b). The ArcSDE access API is embedded inside the CAD system. The API shows up as tools in the CAD user interface (Fig. 4).
Fig. 4. Share Access to a Database (Maguire, 2003)
2.4. Integrated GIS-CAD System Many CAD systems incorporate GIS tools such as thematic mapping and geoprocessing, and GIS programs are adding simple drawing and editing tools typically used in CAD applications. ESRI's ArcInfo v8 was a big step in the right direction: it provides better drawing and editing capabilities, including snaps, true curves and other CAD-like features. ArcGIS 9.1 and 9.2 have CAD editing tools such as fillet, inverse and proportion, but they still lack the functionality of the tools in traditional CAD programs. GIS programs are not expected to match the quality and precision of CAD drawing and editing tools in the near future (Sipes, 2006).
2.5. Formal Semantic and Integrated Data Management GIS and CAD have been developed independently over many years. Although there are overlaps between the data and operations they support, it is not easy to integrate them. One efficient solution for integrating two different systems is to achieve interoperability between them.
Interoperability is the ability of a system, or components of a system, to provide information portability and inter-application cooperative process control. Two geographical databases X and Y can interoperate if X can send requests for services R to Y on a mutual understanding of R by X and Y, and Y can return responses S to X based on a mutual understanding of S as responses to R by X and Y (Bishr, 1998). Heterogeneities between two different information systems, such as data heterogeneity and operation heterogeneity, cause most of the problems related to interoperability.
3. Integration Based on File Translation In this paper, file translation is chosen to achieve the integration of 3D GIS and 3D CAD. File translation involves the conversion of data from one file format to another. First, the 3D GIS and 3D CAD platforms need to be chosen. Then an intermediate file format for the translation between the two systems has to be selected.
3.1. The Platform of 3D GIS and 3D CAD The 3D GIS platform used is VGEGIS, developed by LIESMARS of Wuhan University. The major characteristic of this software is the automatic generation of large 3D city models from city 3D coding data, GIS data, CAD data, etc., and it provides advanced tools for three-dimensional visualization, analysis and surface generation. The 3D CAD platform is AutoCAD, a product of Autodesk. AutoCAD is 2D drafting and detailing and 3D design software that enables one to create with speed, share with ease and manage with efficiency.
3.2. Intermediate File for File Translation This paper uses the DXF file as the intermediate file. DXF (Drawing Exchange Format) was first introduced by Autodesk and is one of the most widely used data exchange formats.
A DXF file contains 2D and 3D drawing components, known as entities. A DXF file can represent almost any CAD drawing using these entities, and it can group entities together (such as windows, doors, etc.) and reuse them later in the file. Essentially, a DXF file is composed of pairs of codes and associated values. The codes, known as group codes, indicate the type of the value that follows. Using these group-code/value pairs, a DXF file is organized into sections comprising records, each containing a group code and a data item; each group code and each value is on its own line in the DXF file. Generally the BLOCKS section and the ENTITIES section contain the geometry, and the ENTITIES section can contain a number of different entity types.
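To make the group-code/value organization concrete, the following C++ sketch (not part of the original prototype; the function names and the restriction to LINE entities are illustrative) reads a DXF file as alternating code/value lines and collects the start and end points of LINE entities, whose coordinates are stored under group codes 10/20/30 and 11/21/31:

#include <cstdlib>
#include <fstream>
#include <string>
#include <vector>

struct Point3 { double x = 0, y = 0, z = 0; };
struct LineEntity { Point3 start, end; };

static std::string trim(const std::string& s) {
    std::size_t b = s.find_first_not_of(" \t\r\n");
    std::size_t e = s.find_last_not_of(" \t\r\n");
    return (b == std::string::npos) ? "" : s.substr(b, e - b + 1);
}

// Read alternating group-code / value lines and keep the LINE entities.
std::vector<LineEntity> readDxfLines(const std::string& path) {
    std::ifstream in(path);
    std::vector<LineEntity> lines;
    std::string codeLine, valueLine;
    LineEntity cur;
    bool inLine = false;
    while (std::getline(in, codeLine) && std::getline(in, valueLine)) {
        int code = std::atoi(codeLine.c_str());
        std::string value = trim(valueLine);
        if (code == 0) {                      // group code 0 starts a new entity
            if (inLine) lines.push_back(cur);
            inLine = (value == "LINE");
            cur = LineEntity();
        } else if (inLine) {
            double v = std::atof(value.c_str());
            switch (code) {
                case 10: cur.start.x = v; break;  case 20: cur.start.y = v; break;
                case 30: cur.start.z = v; break;  case 11: cur.end.x = v;   break;
                case 21: cur.end.y = v;   break;  case 31: cur.end.z = v;   break;
            }
        }
    }
    if (inLine) lines.push_back(cur);
    return lines;
}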
3.3. Geometry Conversion Geometry is a very important part of both CAD and GIS systems; both are information systems that involve geometry for many different purposes. CAD can be classified as a system that deals with both movable objects (tables, cars, airplanes, etc.) and immovable objects (plants, buildings, houses, railways, roads, bridges, tunnels, etc.), whereas immovable objects are the sole domain of GIS. Geometry conversion between CAD and GIS for immovable objects is therefore essential in the integration of these two systems.
3.3.1. Coordinate
Sometimes the coordinate system used in CAD differs from the GIS coordinate system. When converting data between the two systems, we therefore need to resolve some conversion problems, for instance transforming coordinates from a left-handed to a right-handed system, changing units, and so on.
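As a simple illustration (not from the paper; the mirrored axis, the unit factor and the offsets are assumptions that depend on the actual data sets), such a conversion can be expressed as a per-point transformation:

struct Point3 { double x, y, z; };

// Example transformation from a local, left-handed CAD frame in millimetres
// to a right-handed GIS frame in metres. Which axis is mirrored, the unit
// scale and the translation to the global origin all depend on the data.
Point3 cadToGis(const Point3& p,
                double unitScale = 0.001,
                double dx = 0.0, double dy = 0.0, double dz = 0.0) {
    Point3 q;
    q.x =  p.x * unitScale + dx;
    q.y = -p.y * unitScale + dy;   // mirror one axis to switch handedness
    q.z =  p.z * unitScale + dz;
    return q;
}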
3.3.2. Line
In AutoCAD, one can create a variety of lines, such as single lines and multiple line segments - with and without arcs - known as Line, Polyline, 3D Polyline and Multiline (Fig. 5).
Fig. 5. Line conversion between CAD and GIS
In VGEGIS, there is only one type of line, comprising vertices and segments; the simplest line consists of two vertices and one segment. Since the line types of the two systems are very different, the difficulty lies in translating between the multitude of line types in AutoCAD and the single line type in VGEGIS. Within this constraint, all line types in AutoCAD are translated into VGEGIS lines. Some of the lines in AutoCAD, e.g. the Polyline, do not have z values in their vertex coordinates; their elevation attribute, if any, is used as the z value for the vertices of the line. After extracting the geometric information and the other properties of the line, e.g. color, width and style, from the AutoCAD drawing file, all kinds of lines can be imported into VGEGIS according to the VGEGIS data structure. Conversely, when translating lines from VGEGIS to AutoCAD, the question of which AutoCAD line type to use arises, because there are several. The 3D Polyline, which has x, y, z coordinates and comprises several vertices and multiple segments, is the closest equivalent of the VGEGIS line and is therefore the line type adopted in the translation.
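As an illustration of this flattening (not the original prototype code; the record layouts are simplified stand-ins for the AutoCAD and VGEGIS structures), a 2D polyline with an elevation attribute can be converted as follows:

#include <vector>

struct Vertex3 { double x, y, z; };

// Simplified stand-in for an AutoCAD polyline record.
struct AcadPolyline {
    std::vector<Vertex3> vertices;   // z is unused for 2D polylines
    bool   hasZ;                     // true for 3D Polylines
    double elevation;                // elevation attribute, if any
};

// Simplified stand-in for the single VGEGIS line type.
struct VgeLine { std::vector<Vertex3> vertices; };

// All AutoCAD line types are flattened into the VGEGIS line; for 2D
// polylines the elevation attribute supplies the missing z value.
VgeLine toVgeLine(const AcadPolyline& pl) {
    VgeLine line;
    for (Vertex3 v : pl.vertices) {
        if (!pl.hasZ) v.z = pl.elevation;
        line.vertices.push_back(v);
    }
    return line;
}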
3.3.3. Surface
In AutoCAD, the surface modeler defines faceted surfaces using a polygonal mesh. Because the faces of the mesh are planar, the mesh can only approximate
curved surfaces. There are several types of surfaces in AutoCAD, i.e. 3D face, ruled surface, tabulated surface, revolved surface, edge-defined surface, predefined 3D surface, general surface meshes, etc. In the drawing database, these surfaces are recognized as 3DFACE, POLYFACE MESH or POLYGON MESH (Fig. 6). In VGEGIS, there are two kinds of surfaces: the generic surface and the vertical surface. The generic surface can be used to represent roads, lakes, geological surfaces or any other surface object, and it can be either concave or convex; each facet of a generic surface consists of 3 vertices, and the drawing file records the boundary points of the surface for further spatial analysis. The vertical surface can be used to represent billboards, walls, etc., and consists of 4 vertices. Since the vertical surface is a special surface in VGEGIS, surfaces from AutoCAD are imported into VGEGIS as generic surfaces. An AutoCAD surface with faces of more than 3 vertices has to be triangulated before importing, since the generic surface consists of facets with 3 vertices each. In translating a surface from VGEGIS to AutoCAD, the POLYFACE MESH, which consists of facets with three or four vertices, similar to the VGEGIS generic surface, was adopted as the most suitable type for representing the VGEGIS surface.
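The required triangulation can be done per face; the sketch below (illustrative, not the original code) uses a simple fan triangulation, which is sufficient for the planar, convex three- or four-vertex faces of a polygonal mesh; more general faces would need a method such as ear clipping:

#include <cstddef>
#include <vector>

struct Vertex3 { double x, y, z; };
struct Facet3  { Vertex3 a, b, c; };   // 3-vertex facet of a generic surface

// Fan triangulation of a planar, convex face: (v0,v1,v2), (v0,v2,v3), ...
std::vector<Facet3> triangulateFace(const std::vector<Vertex3>& face) {
    std::vector<Facet3> facets;
    for (std::size_t i = 1; i + 1 < face.size(); ++i)
        facets.push_back({ face[0], face[i], face[i + 1] });
    return facets;
}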
Fig. 6. Surface conversion: AutoCAD surfaces (3D face, ruled surface, etc.) are stored in the drawing database as 3DFACE, POLYFACE MESH or POLYGON MESH and mapped to the VGEGIS generic and vertical surfaces.
3.3.4. Solid
A solid object represents the entire volume of an object. Complex solid shapes are easier to construct and edit than wireframes and meshes. There are several types of solids in AutoCAD, including the basic solid shapes (box, cone, cylinder, etc.), solid torus, solid wedge, extruded solid, revolved solid and composite solid. All these kinds of solids are represented as 3DSOLID in the AutoCAD drawing database. In VGEGIS, there are four kinds of solids, called polygon volume, rectangle volume, sphere and column. The top surface of a polygon volume is a polygon and that of a rectangle volume is a rectangle; they can be used to describe all kinds of buildings, bridges, courts, etc. The sphere can be used to represent spheres of any size. In the DXF file, the 3DSOLID is encoded in a proprietary format, so this paper uses two methods to resolve this problem: one is to convert the 3DSOLID to an appropriate representation (such as a POLYFACE MESH) within CAD; the other is to insert the 3DSOLID into VGEGIS as a point, i.e. as a single object.
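For the second, point-based fallback, one possibility (an assumption on our part; the paper does not specify which reference point is used) is to represent the solid by the centre of its bounding box:

struct Point3 { double x, y, z; };
struct Box3   { Point3 min, max; };   // axis-aligned bounding box of the solid

// Represent a proprietary 3DSOLID in the GIS as a single point object so
// that it can still be located and queried without its exact geometry.
Point3 solidAsPoint(const Box3& bounds) {
    return { (bounds.min.x + bounds.max.x) / 2.0,
             (bounds.min.y + bounds.max.y) / 2.0,
             (bounds.min.z + bounds.max.z) / 2.0 };
}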
Fig. 7. Solid translation between AutoCAD (3DSOLID, POLYFACE MESH) and GIS
3.4 Information Loss In this paper, translation works by "mapping" data from one format to another. For data translation the end result is actually three copies of the data, because an intermediary format, i.e. DXF, is used. Because the formats are different, there is rarely a one-to-one match for every object, which results in information loss. For example, although the AutoCAD package supports a mathematical arc, VGEGIS does not. The mathematical definition may thus be lost and a stroked arc created in its place, i.e. a precise, three-point arc drawn in AutoCAD is typically converted to a series of straight-line segments in VGEGIS. To reduce this kind of information loss we can increase the density of the straight-line segments, i.e. use more straight-line segments to describe the arc; the more segments are used, the more precise the approximation is.
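Such arc stroking can be sketched as follows (illustrative code, not from the paper; the arc is assumed to lie in a horizontal plane and the angles are in radians):

#include <cmath>
#include <vector>

struct Vertex3 { double x, y, z; };

// Approximate a circular arc by n straight segments (n + 1 vertices).
// A larger n gives a closer approximation at the cost of more vertices.
std::vector<Vertex3> strokeArc(double cx, double cy, double z, double radius,
                               double startAngle, double endAngle, int n) {
    std::vector<Vertex3> pts;
    for (int i = 0; i <= n; ++i) {
        double a = startAngle + (endAngle - startAngle) * i / n;
        pts.push_back({ cx + radius * std::cos(a),
                        cy + radius * std::sin(a), z });
    }
    return pts;
}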
4 IMPLEMENTATION The data-exchange approach to GIS-CAD integration is implemented in C++. First, it is necessary to identify (a) the object types that are going to be modeled in CAD and GIS, (b) the functionalities and (c) the output that are expected from each system. The architecture of the approach is illustrated in Fig. 8, and an example of the two-way transfer of terrain surfaces, lines and 3D buildings between AutoCAD and VGEGIS is displayed in Fig. 9.
Fig. 8. The architecture (terrain analysis, geology profile analysis, 3D model analysis, 3D model query)
Fig. 9. Model translation between AutoCAD and VGEGIS
5 CONCLUSION
This paper focuses on how to achieve the integration of a 3D GIS and a 3D CAD system. Among the five integration methods, the file translation method was chosen. Using this method, the paper implements the geometry translation, covering lines, surfaces and solids, between 3D GIS and 3D CAD and largely resolves the technical differences in the geometry. This shows that file translation is a pragmatic and efficient approach to integrating the two systems. A tight integration, however, will require a combined solution at the data representation level. Currently, CAD-GIS integration projects tend to be project specific and handled case by case. Model conversions are seldom based on purely geometric "translations", so semantics is important. One efficient solution for integrating two different systems is to achieve interoperability between them; future work will therefore address semantic interoperability.
References
Autodesk (2004), CAD and GIS - Critical Tools, Critical Links.
Bishr Y. (1998), Overcoming the semantic and other barriers to GIS interoperability. Int J Geographical Inf Sci, 12(4):299-314.
ESRI (2003a), Using CAD in ArcGIS, An ESRI Technical Paper, June 2003.
ESRI (2003b), Creating Compatible CAD Data for ArcGIS Software, An ESRI Technical Paper, June 2003.
Maguire DJ (2003), Improving CAD-GIS interoperability, http://www.esri.com/news/arcnews/winter0203articles/improving-cad.html
Sipes JL (2006), Spatial Technologies - Integrating CAD and GIS, http://gis.cadalyst.com/
Zlatanova S, Prosperi D (2005), Large-Scale 3D Data Integration, New York: Taylor & Francis.
3D Navigation for 3D-GIS - Initial Requirements
Ivin Amri Musliman 1, Alias Abdul Rahman 1 and Volker Coors 2
1 Dept. of Geoinformatics, Faculty of Geoinformation Science & Engineering, Universiti Teknologi Malaysia, 81310 Skudai, Johor, Malaysia. ivinamri@gmx.de, alias@fksg.utm.my
2 Faculty of Geomatics, Computer Science and Mathematics, Hochschule für Technik, Schellingstr. 24, 70174 Stuttgart, Germany. volker.coors@hft-stuttgart.de
Abstract The needs for three-dimensional (3D) visualization and navigation within a 3D-GIS environment are growing and expanding rapidly in a variety of fields. In the steady shift from traditional two-dimensional (2D) GIS towards 3D-GIS, large amounts of accurate 3D data sets (e.g. city models) have to be produced in a short period of time and provided widely on the market. This requires a number of specific issues to be investigated, e.g. 3D routing accuracy, appropriate means to visualize 3D spatial analysis, and tools to effortlessly explore and navigate through large models in real time, with correct texture and geometry. There have been many studies on 3D landscapes and urban and city models. The rapid advancement in science and technology has opened wide options for changing and developing current methods and concepts. Virtual Reality (VR) is one of those developments; it gives a sense of presence in a virtual environment and enables users to visualize, query and explore 3D data. Such systems can help not only laymen, who often have trouble understanding or interpreting complex data, but also experts in decision making. The objective of this paper is to discuss some initial requirements of the proposed solution towards 3D-GIS. Eventually, this paper will serve as a starting point for a more challenging research idea. The focus of this research is to investigate and implement 3D navigation techniques and solutions for 3D-GIS. An investigation of the support of navigation in a real-world environment will be carried out. This will include research on the benefits of using a 3D network model (non-planar graph) compared to 2D, on how to use visual landmarks in route descriptions, and on using 3D geometry to obtain more accurate routing (in buildings, in narrow
streets, etc.). As for the implementation, a GUI provides the users with means (e.g. fill-out forms) to specify SQL queries, and to interact with and visualize the 3D outcomes in a virtual reality environment. This opens up the ability to distribute and navigate accurately in 3D virtual worlds. The initial study on the Klang Valley will go through data conversion processes from different formats such as laser data, VRML, CAD and 3D shapefiles in a first-person-view environment, using a system developed with VRML, Java and the .Net compiler. The dataset structure will be in the form of various 2D, 2.5D and 3D arrays of height fields. KEY WORDS: 3D-GIS, Navigation, Visualization, Virtual Reality.
1. INTRODUCTION
The needs for three-dimensional (3D) visualization and navigation within a 3D-GIS environment are growing and expanding rapidly in a variety of fields. This requires a number of specific issues to be investigated, e.g. 3D routing accuracy, appropriate means to visualize 3D spatial analysis, and tools to effortlessly explore and navigate through large models in real time, with correct texture and geometry. There have been many studies on 3D landscapes and urban and city models, but they focus on typical three-dimensional visualization of geo-data. No further functionality, e.g. topological relations between the objects or object identities, is attached to or maintained for these visualization elements. Therefore, current 3D-GIS are widely recognized as mere "pretty models" of a landscape or a city. In fact, in some map navigation situations, due to the limitations of the viewing device, large data sets and improper data management, it is hard to visualize, generalize and even maintain landmarks on the map. To overcome this problem, a description for each possible route and point of interest on the map is created. Currently, mainly geometry is used to generate descriptions such as "turn left in 50 m", presented to the user either as voice or as text. GPS attached to the viewing device also has certain drawbacks: it needs direct contact with the satellites and it gives no idea of a landmark's whereabouts. If the signal is lost or multipath occurs, the point of interest or route description is no longer valid, giving wrong information or directions to the user. If visual landmarks are present in the route description, however, it could give a better understanding; with visual support the instruction could be "turn left at the red building on the right side".
There are many promising applications within reach if we can take advantage of the "knowledge" of the third dimension. For example, in 3D navigation one can have more accurate point-to-point network routing based on a 3D network model than on a 2D model, provided an accurate data set is available. It can also be used to determine the position and orientation of a person within an urban environment (Jasnoch et al., 2000). A virtual tourist can use this information to get from one place of interest to another along an accurate and efficient route and to retrieve detailed information about that place, which is an advantage in travel planning. The overall aim of the study is to investigate the support of navigation in a real-world environment. This includes, firstly, research on the benefits of using a 3D network model (non-planar graph) compared to 2D, not limited to buildings but also at street level; secondly, an emphasis on how to use and generate visual landmarks for route descriptions. For the implementation stage, a GUI will be developed which provides the users with means (e.g. fill-out forms) to specify SQL queries, and to interact with and visualize the 3D outcomes in a near-virtual-reality environment (PC based). This paper describes possible research on 3D navigation development as one 3D-GIS application. Firstly, a simple concept of 3D navigation for 3D-GIS is briefly presented, and several research questions are pointed out. Secondly, the paper concentrates on the methodology for the development of a simple viewing prototype of the 3D navigation solution, with a short description of the experiment. Final remarks and further work summarize and conclude the research on 3D navigation for 3D-GIS.
2. 3D-GIS
In comparison to the advancements in 3D visualization, relatively little has been accomplished in the realization of practical 3D-GIS. The obvious reason remains: the transition to 3D means an even greater diversity of object types and spatial relationships, as well as very large data volumes. In a 2D GIS, a feature or phenomenon is represented as an area of grid cells or as an area within a polygon boundary. A 3D GIS, on the other hand, deals with volumes. Consider a cube: instead of looking at just its faces, there must also be information about what lies inside the cube. A 3D GIS requires this information to be complete and continuous.
2.1 3D Navigation Van Driel (1989) recognized that the advantage of 3D lies in the way we see the information. It is estimated that 50 percent of the brain's neurons are involved in vision, and it is believed that 3D displays stimulate more neurons, involving a larger portion of the brain in the problem-solving process. With 2D contour maps, for example, the mind must first build a conceptual model of the relief before any analysis can be made. Considering the cartographic complexity of some terrain, this can be a difficult task for even the most dexterous mind. A 3D display, however, simulates spatial reality, allowing the viewer to more quickly recognize and understand changes in elevation and texture. The concept applies to 3D navigation as well. When users route themselves to an unknown destination (point), it is good to have a system that can visualize the map in a way that makes them understand it better and assists them to the destination along the most efficient route. Currently, most navigation systems tend to display the map by changing the map's horizon perspective (to look like a 3D scene). This is merely a nicer map presentation and does not mean much without additional help such as landmarks on the map. A 3D navigation system, on the other hand, should be able to show the 3D objects surrounding the path to the destination. These 3D objects are also known as visual landmarks; a visual landmark has much the same shape as the real-world object, with generalized detail. Navigating through a 3D dataset also makes sense in terms of accuracy, since the data involved are accurate. Applications such as 3D network analysis can be applied here, so the user may have alternative solutions with minimal distance or cost for each route to the destination. For example, in exploring a 3D city one can query one's current location and then choose where to go next, either to a known point/location of interest or within a certain radius (e.g. 1 km from a point). Options such as fastest route, shortest route or minimum cost can then be chosen. From the derived results it is possible to know which route should be taken according to the desired option (e.g. the shortest and minimal-cost route). The route is not limited to road (line-feature) networks; it is also possible to show routes across terrain (if overlaid with DSM data), although navigation rules then need to be applied, e.g. whether the search is allowed to cross the boundary of the line network or not. While navigating along the chosen route, detailed information (attributes) of the surrounding 3D objects can be viewed as well, as they are cross-referenced with another database.
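As an illustration of such a 3D network analysis (not part of the paper's prototype; the names and data structures are assumptions), a shortest-route search can run a standard Dijkstra algorithm on a non-planar graph whose edge costs are true 3D distances:

#include <cmath>
#include <functional>
#include <limits>
#include <queue>
#include <utility>
#include <vector>

struct Node3 { double x, y, z; };
struct Edge  { int to; double cost; };

double dist3(const Node3& a, const Node3& b) {
    return std::sqrt((a.x - b.x) * (a.x - b.x) +
                     (a.y - b.y) * (a.y - b.y) +
                     (a.z - b.z) * (a.z - b.z));
}

// Undirected edge whose cost is the true 3D length, so a steep street or a
// ramp inside a building is weighted by its real, not planimetric, length.
void addEdge(std::vector<std::vector<Edge>>& adj,
             const std::vector<Node3>& nodes, int u, int v) {
    double c = dist3(nodes[u], nodes[v]);
    adj[u].push_back({v, c});
    adj[v].push_back({u, c});
}

// Dijkstra shortest-path distances from a source node over the 3D network.
std::vector<double> shortestFrom(int source,
                                 const std::vector<std::vector<Edge>>& adj) {
    const double inf = std::numeric_limits<double>::infinity();
    std::vector<double> d(adj.size(), inf);
    using Item = std::pair<double, int>;
    std::priority_queue<Item, std::vector<Item>, std::greater<Item>> pq;
    d[source] = 0.0;
    pq.push({0.0, source});
    while (!pq.empty()) {
        auto [dist, u] = pq.top();
        pq.pop();
        if (dist > d[u]) continue;
        for (const Edge& e : adj[u])
            if (d[u] + e.cost < d[e.to]) {
                d[e.to] = d[u] + e.cost;
                pq.push({d[e.to], e.to});
            }
    }
    return d;
}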
2.2 The Research Questions In order to start investigating 3D navigation techniques and solutions for 3D-GIS, the present techniques and solutions first had to be thoroughly examined and described. In this research, even with the support of visual landmarks in navigation, several research ideas and possible methods remain gaps that need to be addressed, and the following research questions are raised: • How can visual landmark descriptions be generated as a solution for 3D navigation? How do visual landmarks influence 3D visualization as navigation support? • What is the most suitable spatial modeling for 3D navigation in a 3D-GIS environment? Will mobile augmented- or mixed-reality visualization bring a large benefit to 3D navigation? Regarding data accuracy, to achieve accurate routing (in buildings, in narrow streets, etc.), 3D geometry and surfaces obtained from laser and related data will be used. Emphasis is put on how OGC standards could be implemented in such a system, especially on uncertainties about the integration of the several data formats and systems that will be handled in the 3D navigation solution. Possible methods of 3D-GIS database regeneration and integration by reconstructing 3D buildings will, as mentioned above, be one of the main issues discussed in this study; the focus will be on processes rather than on technical devices. • Does generalizing rooftops in large-scale maps affect the accuracy of the generated 3D-GIS database? Generalization of 3D objects plays a major role in 3D navigation. It is possible to use Level-of-Detail (LoD), compression (e.g. Edgebreaker, Delphi, etc.) or even a multi-resolution modeling approach to improve data storage and retrieval and to avoid time-consuming data rendering and querying. • Will applying data compression to the generated 3D-GIS database improve data retrieval, or would the use of LoD be better? What about a tile-based method (Zhu et al., 2002), or multi-resolution modeling (Coors & Flick, 1998)? Nowadays, multi-resolution models are only used for raster images and height fields. Gueziec has introduced an algorithm using a multi-resolution model in VRML (Gueziec et al., 1998). However, none
of these multi-resolution modeling approaches focuses on redundancy-free transmission of different LoDs on demand in a web-based application. • How can multi-resolution modeling be implemented in a 3D navigation solution without transmission redundancy between the different LoDs?
3. METHODOLOGY
The exploration of 3D worlds in near-VR systems ensures a certain level of virtual reality technique. Existing VRML, X3D, Java, GML, SVG and other web standards and software modules allow the development of a GUI with limited effort, relying on operations provided by browsers or viewers. Therefore VRML and Java will be employed as the front-end engine to the 3D-GIS model. As a start, a simple viewing prototype has been developed. The prototype renders and visualizes the 3D city model of Putrajaya and tries to connect the objects in the 3D scene with their attributes in the land registry dataset. Laser data of the Putrajaya area obtained from the national survey department are used, and the true distances for accurate routing will be based on the surface model derived from the laser data. The motivation was to give the user the ability to interact with the 3D scene not only by moving around, but also by accessing the attribute information by clicking on objects or by 'intelligently' displaying the information on the screen. The given 3D dataset consists only of geometry information, so an algorithm had to be devised that connects the geometry with the attributes. This section covers the design and development of a simple viewing prototype of the 3D navigation solution. The simple application prototype is just initial progress towards the full development, showing how 3D navigation for 3D-GIS can be done. Although the test application is a desktop application, it is possible to realize the concept on a website as well: the VRML viewer can be embedded in web pages and the application part could be realized as a Java applet which interacts with the viewer.
3.1 Tools The development environment is VB.Net and, for VRML visualization, the Cortona Viewer by Parallel Graphics. Parallel Graphics supports developers with the Cortona SDK (User Guide), which allows the Cortona VRML
Viewer to be added to .Net projects as an ActiveX component. There are also event handler classes for mouse events and other interface functions, so one can fully interact with the viewer.
3.2 Visualization
The 3D geometry dataset consists of several (temporary) ASCII files, dumped from or connected directly to databases, which contain point tables. Besides the x, y, z coordinates, each point has the attributes houseindex, wallindex and kindofwall (roof, wall, floor). With the help of these attributes it is possible to create an IndexedFaceSet node (IFS); the fields used are shown in Figure 1. This VRML node seems to be the best solution for describing the building geometry, because it can easily describe complex objects consisting of several faces.
IndexedFaceSet {
  exposedField SFNode  color      NULL
  exposedField SFNode  coord      NULL
  field        MFInt32 coordIndex
  field        MFInt32 colorIndex
}
Fig. 1. Part of VRML Specification '97
Converting the (temporary) ASCII information into VRML syntax means going through the ASCII table, reading the coordinates and the attributes, and then deciding when to create a new building (IFS node) or a new face, and when to change the color of the created face so that roof and walls are colored differently based on the selected attribute. To be able to identify the objects, which is essential for the connection, each IFS node has to be given a unique identifier. In VRML it is possible to give a node a name, called a DEF name. In the ASCII file the index for the buildings always starts at one, so if this index alone is used and the 3D scene consists of more than one temporary ASCII file, there is no unique identifier for the buildings. Therefore a combination of file name and index is used to form a kind of primary key for each building, which is also used for the connection later on.
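A minimal sketch of this export step is given below (illustrative only, not the prototype's actual code; the DEF name is assumed to be the file name, stripped of its extension and of characters that are invalid in VRML identifiers, concatenated with the building index):

#include <fstream>
#include <string>
#include <vector>

struct Vertex3 { double x, y, z; };

// Write one building as a DEF-named Shape with an IndexedFaceSet geometry.
// In coordIndex, the value -1 closes a face, as in the VRML specification.
void writeBuildingIfs(std::ofstream& out, const std::string& defPrefix,
                      int houseIndex,
                      const std::vector<Vertex3>& points,
                      const std::vector<int>& coordIndex) {
    out << "DEF " << defPrefix << "_" << houseIndex << " Shape {\n"
        << "  geometry IndexedFaceSet {\n"
        << "    coord Coordinate { point [\n";
    for (const Vertex3& p : points)
        out << "      " << p.x << " " << p.y << " " << p.z << ",\n";
    out << "    ] }\n"
        << "    coordIndex [ ";
    for (int i : coordIndex) out << i << " ";
    out << "]\n  }\n}\n";
}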
3.3 Connection to attribute data
The connection is done by a geometrical test based on the point-in-polygon concept. Each building in the temporary ASCII file has a representative point inside its geometry. The location of this point in the 2D land registry is checked, and if it lies inside a land registry building's outline, the dataset connected to that outline is considered to be connected to the 3D building. This connection (Abdul Mannan, 2004) is saved in a text file (see Table 1) to avoid repeating the test each time a model is loaded.
Table 1. Connection table: DEF name of the 3D object, DBF index of the land registry record, and the representative 3D coordinates.
In this "connection table" the DEF name of the 3D object and the corresponding dbf index of the land registry dataset are saved. The representative coordinates are stored in addition for query and analysis purposes. The concept of the test is fairly simple, but there are some more difficult points in the programming. To test whether the representative point lies inside the polygonal outline, it is connected to a very distant point by a line, and the intersections of this line with the outline are counted. An even number of intersections means the point is outside; an odd number means the point is inside. The test also works if the geometry has void areas inside (Finley, 1998).
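The crossing test can be sketched as follows (a minimal illustration of the even-odd rule in the spirit of Finley (1998), not the prototype's actual code):

#include <cstddef>
#include <vector>

struct Point2 { double x, y; };

// Even-odd (ray crossing) test: cast a horizontal ray from p to the right
// and count how often it crosses the polygon outline; an odd count means
// the point lies inside the outline.
bool pointInPolygon(const Point2& p, const std::vector<Point2>& poly) {
    bool inside = false;
    std::size_t n = poly.size();
    for (std::size_t i = 0, j = n - 1; i < n; j = i++) {
        bool spansY = (poly[i].y > p.y) != (poly[j].y > p.y);
        if (spansY) {
            double xAtY = poly[j].x + (poly[i].x - poly[j].x) *
                          (p.y - poly[j].y) / (poly[i].y - poly[j].y);
            if (p.x < xAtY) inside = !inside;
        }
    }
    return inside;
}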
Fig. 2. The anticipated view for 3D Navigation for 3D-GIS.
4. FINAL REMARKS AND FURTHER WORKS
This paper has discussed a few problems that arise in implementing 3D navigation for 3D-GIS. An investigation of the support of navigation in a real-world environment will be carried out to find techniques and solutions for 3D navigation. Several research ideas and some initial requirements of a 3D navigation solution towards 3D-GIS have also been discussed, and to make them more tangible, a simple viewing prototype has been developed as an initial stage of implementation. Eventually, this paper will serve as a starting point for a more challenging research effort and will be part of an ongoing research programme on 3D-GIS. Currently, map navigation solutions on the market do not fully support 3D features and functions; they can mainly be found as street navigation systems in cars and, in a few cases, as web-based applications. One major drawback of such systems is the limited support of navigation in a real-world environment: the systems only support a 2D environment, only basic network analysis can be done, and map landmarks are not emphasized in the presentation. For example, the maps served in car navigation systems do not help much in visualizing objects such as buildings, landscapes, landmarks, etc.,
thus making it hard for the user to recognize them. The map displays buildings or places of interest as point features or as square boxes, user interaction with the map is not possible, and consequently the information (attributes) for each map feature is not shown completely. Although there are a few works devoted to 3D navigation, the research is concentrated around a few basic research ideas. Integration of 3D navigation with the current direction and technology of 3D-GIS raises research topics at the database level, towards standard object descriptors and operations. It is expected that the outcome of this research can help laymen, who often have trouble understanding or interpreting complex data, as well as experts in decision making. It is hoped that further work can focus on data regeneration and management for 3D-GIS. 3D-GIS functionality still needs to be addressed: 3D buffering, 3D shortest route and 3D intervisibility are some of the most appealing research topics for integration with a 3D navigation system.
REFERENCES
Coors, V. and Flick, S., 1998. Integrating Levels of Detail in a Web-based 3D GIS.
Coors, V. and Jung, V., 1998. Using VRML as an Interface to the 3D Data Warehouse, in: Proceedings of VRML'98, New York.
Finley, Darel R., 1998. Point-In-Polygon Algorithm. http://www.alienryderflex.com/polygon/
Gueziec, A. et al., 1998. Simplicial Maps for Progressive Transmission of Polygonal Surfaces. Third Symposium on the Virtual Reality Modeling Language, Monterey, California, USA, ACM Press.
M. Abdul Mannan and Bogdahn, Juergen, 2004. Virtual Environments in Planning Affairs, in: 20th ISPRS Proceedings, Istanbul, Turkey.
Morley, Chris, 2002. Tool: libVRML97. http://www.vermontel.net/~cmorley/vrml.htm
User Guide, Cortona SDK 4.1. http://www.parallelgraphics.com/developer/
Uwe Jasnoch, Volker Coors and Ursula Kretschmer, 2000. Applications of 3D-GIS.
Van Driel, Nicholas J., 1989. "Three dimensional display of geologic data".
VRML '97 Specification. http://www.web3d.org/x3d/specifications/vrml/ISO_IEC_14772-All/index.html
Zlatanova, Siyka, 2000. 3D GIS for Urban Development, PhD thesis, ICGV, Graz UT, Austria and ITC, The Netherlands, ITC Dissertation Series No. 69. http://www.gdmc.nl/zlatanova/PhDthesis/pdf/content.html
Web-based GIS-Transportation Framework Data Services using GML, SVG and X3D
Hak-Hoon Kim and Kiwon Lee
Dept. of Information System Engineering, Hansung University, Seoul, Korea, 136-792
Abstract In Korea, most GIS-related national standards are produced by TTA (Telecommunications Technology Association). Among the dozens of nation-wide GIS standards by TTA, the transportation data model is considered a framework standard database model for the transportation application domain. In this study, multiple types of geographic features or objects defined in the TTA transportation data model were encoded in GML for building XML databases, taking into account practical applicability in GIS applications. For this process, GML editor software was implemented with the main functions of (1) supporting and customizing map style sheets, (2) attribute editing, (3) adding/deleting/manipulating geo-base features in the TTA data model and in a user-defined model for database extension, (4) web publishing through a general web browser, and (5) data transfer to SVG (Scalable Vector Graphics) for 2D transportation features and to X3D for 3D complex features. Functions handling SVG and X3D from a GML or XML database can be used as major components of web-based geospatial data services supporting complex transportation application data models, as well as a nation-wide standard database model including 2D and 3D geospatial data sets. This approach can be utilized in web-based GIS service applications related to standard data models containing complex 3D data sets.
1. Introduction In recent GIS-based enterprise application systems with large physical databases, the efficient use of interoperability, standards and new web technologies has been regarded as one of the important issues.
Regarding interoperability and standards, OGC-GML (GML of the Open Geospatial Consortium, Inc.) [4][6], as an international XML standard for geo-spatial information [8], is known to show advantages over conventional GIS applications handling value-added contents and features from various types of sources: it works in a general web browser, allows easy customizing of map styling, editing and linking, gives control over content, offers better querying capabilities, and so forth. Besides these aspects, OGC-GML should be considered a practical factor or physical concept due to its effectiveness as an international and industrial standard. In Korea, TTA (Telecommunications Technology Association) [7] has released most GIS-related national standards. Among the numerous GIS standards of TTA, the transportation data model is a kind of framework standard model of geographic information for the transportation application domain. In addition, web-based 2D/3D graphic processing technology [2][3][5], a kind of new web technology, can also be considered important for smart web application development. In this study, these three aspects are considered from the viewpoint of smart web GIS application development, and a prototype using them is designed and implemented.
2. Applied Technologies: Overview 2.1. Geography Markup Language (GML) According to the World Wide Web Consortium (W3C), the eXtensible Markup Language (XML) is a simple, very flexible text format derived from SGML (ISO 8879). It is playing an increasingly important role in the exchange of a wide variety of data on the web and elsewhere [8]. GML is a modeling language and an XML-based encoding for geographic information, and it supports spatial and non-spatial properties of objects. The core concept of GML is the feature, which represents physical and abstract objects in the world. A feature is described by properties (name, type, value); spatial properties are properties of a feature that have a geometric object as value. In GML 3.0, geometric objects support both 2D and 3D (e.g. points, polygons, polylines, curves, surfaces and solids) [6].
2.2. TTA transportation data model The purpose of the TTA standard transportation data model is to standardize geographic information for the transportation application domain by defining classes such as road, rail and structure. This model can be applied in various transportation applications. The feature classes of the TTA transportation model are defined in Table 1 [7]. Table 1. Feature classes of the TTA transportation model [7]
Theme: Transportation
Feature classes: TRN_RoadNetwork, TRN_RoadBG, TRN_RailNetwork, TRN_RailBG, TRN_Structure
Fig. 1. UML class diagram of Road, a part of the TTA transportation model [7].
Fig. 1 represents the UML class diagram of Road, a part of the TTA transportation model. The Road class, derived from the abstract feature type, is composed of TRN_RoadNetwork and TRN_RoadBG. Each of these has feature classes, namely TRN_RoadNetwork_RoadSegment and TRN_RoadNetwork_RoadJunction, and TRN_RoadBG_RoadSegment and TRN_RoadBG_Intersection, respectively. The feature classes have one-to-one relationships.
TRN_RoadNetwork_RoadSegment and TRN_RoadNetwork_RoadJunction express topology through start- and end-point relationships.
2.3. Scalable Vector Graphics (SVG) and eXtensible 3D (X3D) SVG and X3D are XML-based standard formats for authoring two-dimensional and three-dimensional web graphics. They make it possible to create visually rich and functionally smart interfaces for the visualization and rendering of web data. SVG and X3D have many advantages. First, they provide high-quality web graphics and can dynamically generate graphics from real-time XML data. Second, they can be interactive and scriptable, utilizing XSLT, JavaScript, the DOM (Document Object Model), Java, Visual Basic, etc. Third, because they are XML-based languages, they integrate with other members of the XML family, and the graphics are in a human-readable text format. Last, they are open and royalty-free standards [5],[8].
3. Smart Web-based GIS application: A Prototype A key to successful web-service-based applications is the separation of data, logic and presentation. Web services that depend directly on the data require the whole architecture to be modified whenever the data structure changes, which can duplicate data and increase the cost of maintaining the application. Fig. 2 shows a comparison of three systems: the end-to-end XML system, the file-based system and the closed application. The XML application of Fig. 2(C) meets this condition, because it separates data, logic and presentation through XSLT transformation. Following this strategy, we implemented a prototype application system based on the TTA transportation model using GML in a web environment. For an efficient web GIS application, the system is composed of a three-layer architecture, consisting of a data layer, a logic layer and a presentation layer (Fig. 3).
Fig. 2. Three systems for web-based applications [3]: a file-based system, a closed application, and an end-to-end XML system, labelled (A) good, (B) problematic and (C) best case.
Fig. 3. Design strategy using standard web technologies: an end-to-end XML system with a GML generator, SVG for two-dimensional web graphics and X3D for three-dimensional web graphics.
In the data layer, information about the multiple geographic features or objects defined in the TTA transportation data model is encoded in OGC-GML. In the logic layer, the created GML files are exported to SVG and X3D files by XSLT for web publishing through a general browser. In the presentation layer, the SVG and X3D files are visualized by a web browser, which must have SVG and X3D viewer plug-ins installed. Fig. 4 shows the GML
Generator application, which is implemented with MSXML 4.0, Visual C++ 6.0 MFC and the DOM API. The application provides the main functions of supporting map style sheets, attribute editing, adding/deleting/manipulating geo-base features in the TTA data model and in a user-defined model for database extension, web publishing through a general browser, and map export to SVG and X3D with additional properties for 3D. In this prototype, the ESRI shapefile [1] is used as the exchange format with external systems.
Fig. 4. GML-based database manipulation and linking with physical databases built with the TTA transportation data model, including the web graphic transformation function (GML tree view, GML editor, feature layer addition, SVG transformation, and import of shp/dbf files).
<X3D profile='Immersive'>
  <Scene>
    <Shape>
      <Appearance> ... </Appearance>
      ...
    </Shape>
    <Shape>
      <Appearance>
        <Material diffuseColor='1 0 0'/>
      </Appearance>
      <Box/>
    </Shape>
  </Scene>
</X3D>
Fig. 5. X3D sample code.
Fig. 5 represents sample X3D sub-code. In general, the scene graph is constructed by defining 3D objects under the <Scene> node. In this code, <Box>, one of the 3D primitive nodes, can be used for 3D buildings and major roads, while 3D image texturing is carried out using an <ImageTexture> node inside <Appearance>. For location information, the <Transform> node is used for model transformations such as translation, rotation and scaling. Fig. 6(A) and (B) show a 2D SVG-processed sub-scene of the TTA model [7] and a sub-scene resulting from styling with 3D attributes composed of shape, location and properties of the GML database through XSLT processing, respectively. In addition to the 3D primitives, X3D provides useful nodes such as IndexedFaceSet, IndexedLineSet and PointSet, capable of geo-referencing each vertex. In practice, it is necessary to use these nodes for more accurate and exact 3D urban modeling.
Fig. 6. Examples of Web 2D/3D graphic implementation results: (A) SVG case and (B) X3D case of Fig. 5.
4. Conclusions The practical use case in this study, handling an actual standard database model, OGC-GML, and SVG/X3D web graphics, can serve as one of the main components in the development of non-commercialized GIS application systems. Because such an application system can be easily maintained and its geographic information shared by using the XML standard format, it provides a basis for efficient web GIS applications. In this study, for a smart web-based GIS application, information about multiple geographic features or objects defined in the TTA transportation data model was encoded in OGC-GML and verified in an XML-database environment, taking into account practical applicability in GIS application building without commercial GIS software. This process consists of linking one standard database model and requires implementing GML editor software with the main functions of (1) supporting map style sheets, (2) attribute editing, (3) adding/deleting/manipulating geo-base features in the standard data model and in a user-defined model for database extension, (4) web publishing through a general browser, and (5) map export to SVG and X3D formats.
References
[1] ESRI, 1998. ESRI Shapefile Technical Description, An ESRI White Paper.
[2] Geroimenko, V. and C. Chen, 2005. Visualizing Information Using SVG and X3D.
[3] Kolbe, T. H., G. Gröger, and L. Plümer, 2005. CityGML - Interoperable Access to 3D City Models.
[4] ISO, ISO/TC 211/WG 4/PT 19135 N 005r3, GML 3.0 Specification.
[5] ISO/IEC 19775:2004, eXtensible 3D (X3D) Abstract Specification.
[6] Ron L., D. S. Burggraf, M. Trninic, and L. Rae, 2004. Geography Markup Language: Foundation for the Geo-Web, Wiley.
[7] Telecommunication Technology Association, 2003. TTAS.OT-10.0021.
[8] URL: http://www.w3.org/XML/
3D Geo-database Implementation using Craniofacial Geometric Morphometrics Database System
Deni Suwardhi and Halim Setan
Medical Imaging Research Group (MIRG), Dept. of Geomatics, Faculty of Geoinformation Science & Engineering, Universiti Teknologi Malaysia (UTM), Malaysia.
Abstract A Craniofacial Database System has been developed for storage, manipulation and analysis based on the geometric morphometrics concept. The main objective of the system is to store the normal craniofacial (face and/or skull) data of the Malaysian population. The 'normal' data are used as a reference in many areas such as surgical planning, forensics, protective gear design, and anthropological research. The data with which we are working are surface scans, typically triangular meshes, which may or may not have a texture image associated with them. The data can be obtained from photogrammetry systems, laser scanning systems, or as iso-surfaces from volume images (CT or MRI images). Our geometrical model is based on existing standards of the Open GIS Consortium. New data types have been added to the OpenGIS SQL Specification: Solid, Voxel, Tetrahedron, Pixel, Triangle, 3DRaster, TetrahedronMesh, 2DRaster, TriangularMesh, Node and Vertices. The implementation was done in PostGIS, an extension module for the PostgreSQL ORDBMS; PostGIS follows the OpenGIS "Simple Features Specification for SQL". Some functions to enable spatial queries were added in the C language, and the modification is called PostGISPlus (PostGIS+). Because the characteristics of this geometrical data model are similar to those of geospatial data, we try to implement a 3D geo-database in our database system. This paper reports the problems encountered in the implementation and their solutions. KEY WORDS: Craniofacial, Geometric Morphometrics, OpenGIS, 3D Geo-Database
Introduction
Background
The word craniofacial is derived from cranium, referring to the skull, and facial, referring to the face. Craniofacial data are needed in many fields (Chong et al., 2004). Measurement, or anthropometry, of the surface of the human head and face is necessary to design personal protective items (respirators, eyeglasses, and other head/face gear), to identify missing persons in forensics, and to help predict growth patterns and evaluate patients in craniomaxillofacial surgery. Anthropometric data of the 'normal range' group (Farkas, 1994) of the population are needed to plan craniofacial reconstruction for malformation patients, because the normal data are often used as the correct dimensions for surgery. The step towards this goal is to build a database containing sets of "normal" craniofacial data which allow the current shape of a patient to be compared with a typical "normal" shape, taking factors such as the age and sex of the patient into consideration. In addition, the normal data are required for forensic applications. One important technical problem that we faced in the craniofacial database is that of managing surface data from laser scanners (and volumetric data from CT scanners), processing them, storing the results of the statistical shape analyses, and making them available to all the institutions participating in the project. To solve this problem, we have developed a spatial database system based on the geo-database approach. We enhanced the OpenGIS simple feature specification by adding new 3D primitive graphic classes and implemented the enhancement in the PostGIS spatial database system.
Objective
Since we started with the OpenGIS simple feature specification as a reference, we want to try implementing a 3D geo-database within our craniofacial geometric morphometrics database management system. This paper reports the design and implementation of a geo-database using the craniofacial geometric morphometrics database. We first describe the geometric morphometrics concept and the development of the craniofacial geometric database, followed by the differences between spatial databases in the medical and GIS environments (Section 2). In Section 3, the relevant literature on spatial data models and structures
is highlighted, followed by the OpenGIS specification and the presentation of our proposed data structure for integration into an object-relational DBMS. The implementation process is described in Section 4, which presents a database application to store, manage, retrieve and analyze craniofacial data as complex objects in the PostgreSQL DBMS, followed by the 3D visualization software. Finally, Section 5 concludes the paper and indicates research directions for future work.
Geometric Morphometrics (GM)
Morphometrics is literally the measurement of appearance; practically, it is the measurement of biological shape. Bookstein (1982) defined morphometrics as the fusion of geometry and biology, dealing with the study of form in two- or three-dimensional space. Geometric morphometrics began in the early 1980s and continues as the application of superimposition algorithms to coordinate data collected at anatomical landmarks. These techniques go beyond traditional multivariate statistics of linear or angular measurements by separating size and shape differences between biological forms. These tools are relevant to applications in craniofacial surgical planning, forensics, anthropology and the design of personal protective items.
Data and method
Landmarks
For analysis purposes, shape is described by a finite number of points on the object's surface, which are called landmarks. A landmark is thus a point of correspondence on each object that matches between and within populations. Three basic types of landmarks are generally used (Dryden and Mardia, 1998):
1. Anatomical landmarks.
2. Mathematical landmarks.
3. Pseudo landmarks.
Traditional Methods
To perform a shape analysis, a biologist traditionally measures ratios of distances between landmarks, or angles, and then submits these to a multivariate analysis. This approach has been called 'multivariate morphometrics'
in biology (Dryden and Mardia, 1998). A considerable amount of work has been carried out in multivariate morphometrics using distances, ratios, angles, etc., and it is still very commonly used in the biological literature.
Geometrical Methods
Like traditional morphometrics, geometric morphometric methods allow statistical conclusions, but importantly, they present several advantages. They allow the geometry of the studied objects to be better retained. They often enable the quantification of features that are difficult to measure with traditional measurements and are therefore usually described only qualitatively. Finally, they allow morphological differences to be visualized in specimen space using interactive computer graphics. This ability to visualize morphological differences is invaluable, particularly when dealing with complex 3D anatomical structures.
Dense Correspondence Surfaces
In addition to applying existing geometric morphometric methods to answer the questions in craniofacial studies, we are working on the further development of geometric morphometric methods using pseudo-landmarks (vertices on outlines, surfaces and volumes) on surface data. In our work, we adapt and enhance the algorithms of Subsol et al. (1998) and Hutton et al. (2001) for automatically building three-dimensional anatomical line features and dense correspondence mesh surfaces, respectively. Once our algorithm computes dense correspondence surfaces, all models can be re-meshed using identical connectivity. This enables a whole series of applications ranging from averaging to the transfer of geometric or attribute properties, such as anatomical landmarks or textures, from one model to a whole set of models. It also means that new scans can be manipulated and analyzed fully automatically, without needing to be manually annotated with landmarks before analysis.
Craniofacial GM Database Management System
The research attempts to build a database system in which forensic investigators, personal protective items designers, medical researchers, anthropologists and, especially, surgical planners will post, query, and analyze data, processed data, and the results of the analysis.
The requirements for a database of practical use in the craniofacial information system are quite challenging:
1. The database should store not only clinical data and numerical results of the analysis, but also more complex data like volumes, images, and surfaces.
2. Models from the database must be ready for surgical planning and simulation software, which usually uses two methods: finite element and mass-spring elastic analysis.
3. Using dense correspondence forms, primary keys and fast indexing mechanisms that speed up the database searches have to be designed to optimize the database at design time.
Fig. 1. Geometric primitives for craniofacial data: triangle and triangular mesh, tetrahedron and tetrahedron mesh, pixel and raster (2D image), voxel and multi-voxel (3D image)

Table 1. Geometry objects in the Craniofacial Database

No   Class Name              Geometry Type
1    Exam
  a  Image                   2D Grid
  b  CT Image                3D Grid
  c  X-Ray Image             2D Grid
  d  Laser Image (Surface)   Triangular Meshes
  e  Laser Image (Texture)   2D Grid
2    Model
  a  Landmark                3D Point
  b  Mass-Spring Model       Triangular Meshes
  c  Finite Element Model    Tetrahedron Meshes
  d  3D Chainmail            3D Grid
There are two important spatial classes in the Craniofacial Database, the Exam and Model classes. The Exam class has data types covering the whole raw data. The important sources of raw data are laser-scan and Computed Tomography (CT) images, which provide dense surface and volumetric information of the head and face. Sometimes X-ray images are also used for 2-D morphometric analysis. Figure 1 visualizes the graphic primitives used for the representation of craniofacial data, and Table 1 lists the spatial objects in the Craniofacial Database. Each object has a spatial type correlated with a geometry type.
The Differences between CGM and GIS Databases
The Database Management System (DBMS) technology of the GIS environment can be applied in the medical or biology environment with some improvements, and these improvements or new developments can in turn be introduced back into GIS applications. The improvements are needed because there are some differences between the technologies and the environments. Belussi et al. (1996) give an example of the differences between spatial databases in GIS and in the medical field. A spatial database in GIS consists of sets of objects embedded in some space. This implies that spatial links between objects and space, and also between the objects themselves, have to be represented in the model. A spatial database in the medical field consists of sets of objects that describe different instances of the same object type; these instances could belong, for example, to a temporal series, or they could describe a set of distinct physical objects. Considering this difference, Belussi et al. (1996) categorize the two most important kinds of predicates in object retrieval:
1. Spatial predicates
2. Similarity predicates
Notice that for the definition of both these types of predicates an important role is played by spatial relationships, which are based on topological and metric properties. In GIS, such relationships are derived from the embedding of all objects in the same reference space, and spatial predicates are used to query objects based on these relationships. In the medical field, similarity predicates can be based on spatial relationships between objects: two objects are similar if the same spatial relationships exist among their components.
3-D Spatial Data Model and Structure
In the literature, many conceptual models for 3-D spatial objects have been proposed. This chapter reviews the spatial models and how to implement them in a readable and efficient computer format.
Spatial Models
The 3-D spatial models are classified into surface-based and solid-based models, and this classification can be characterized by two different views of space. In the surface-based view, space is empty and just littered with objects; each object is represented individually. In the solid-based view, space is continuous; this view is used heavily in the geosciences to study the variability of a phenomenon (called a field), e.g. terrain elevation, water salinity or the percentage of gold in rock. View 1 of space is what is used in vector-based GIS and, in 3-D, is often used for modeling a city with separate, unattached buildings. View 2 is what geology, for example, is about. We need to discretize a field to model it in a computer. A continuous field can be decomposed:
1. Regularly: with elements having the same shape
2. Irregularly: with simplices (e.g. a Delaunay triangulation or tetrahedralization), or with arbitrary polyhedra.
Each approach has its strong points, weaknesses and fields of appropriate application. One of the problems with surfaces is that it is very easy to make mistakes, e.g. creating internal representations that are physically not possible or not complete. Solid modeling is good for a variety of purposes beyond guaranteeing physically realizable objects; it is easy to derive properties such as length and volume from solids. One problem with solid models is that the rendering algorithms are often difficult or produce results of lower quality than surface models. This is often resolved by transforming the solid model into a boundary model prior to rendering.
Spatial Data Structures
We must differentiate between spatial data models and spatial data structures: spatial data models are useful as the foundation for spatial data structures, and a spatial data model is implemented using a selected spatial data structure. Grid, shape model, facet model, and B-rep are examples of surface-based representations. 3-D array, octree, Constructive Solid
Geometry (CSG) and 3-D TIN (Tetrahedral Network, TEN) are examples of volume-based representations. Based on a review of the proposal of Gold and Ledoux (2005), the main 3D data structures can be classified as follows:
1. 2-D (surface) decomposition (b-rep, for a single solid object):
   • half-edge, winged-edge, quad-edge, doubly connected edge list (DCEL)
   • triangulation (TIN)
   • GIS topological models (Zlatanova et al., 2004)
2. Regular 3-D (volume) decomposition:
   • voxel, octree
3. Irregular 3-D (volume) decomposition:
   • facet-edge, G-maps, generalisation of the half-edge to 3-D, and simple tetrahedralization
4. CSG (Constructive Solid Geometry - Boolean combinations of simple solids)
5. Non-manifold 3-D structures.
OGC Specifications
The Open Geospatial Consortium (OGC) is a non-profit organization dedicated to open geoprocessing systems. The OGC provides standards in the GIS field and recommends directions for GIS researchers. One of the main tasks of the OGC is to develop specifications, which follows strict procedures and policies. The OGC creates two kinds of specifications: Abstract specifications and Implementation specifications.
Fig. 2. OpenGIS simple feature geometry class hierarchy
A spatial DBMS supporting the OGC implementation specification should support the following spatial types (Figure 2): Geometry, Point, Curve, LineString, Line, LinearRing, Polygon, Surface, GeometryCollection, MultiCurve, MultiLineString, MultiPoint, MultiPolygon, and MultiSurface.
Database Development
One of the most important aspects of the craniofacial database development is the integration of volumes (solids), 2-D images, 3-D images and surfaces into the database as data types. Volumes and surfaces (images are a sub-type of surfaces and volumes) are defined as data types that can be included as columns in an object-relational database, and a suitable algebra is defined in order to manipulate them. We therefore propose a model that is based on the OpenGIS Simple Features Specification, so that the manipulation is more compact and easier to handle.
Geometric Data Model and Structure
We would like to generate a "complete" 3-D data structure that can integrate at least categories 1, 2 and 3 of the structures briefly reviewed in Section 3.2. Boundary modeling (b-rep) is the starting point for our model design; it involves representing a solid object as a set of bounding faces, which requires information about the connectivity of the faces. A tetrahedron consists of 4 triangular faces that form a closed object in 3-D coordinate space (Fig. 3.a). The object is well defined, because the three points of every triangle always lie in the same plane, and it is relatively easy to create functions that work on this primitive. The disadvantage is that it can take many tetrahedrons to construct one factual object (Arens et al., 2005). To overcome this disadvantage, we created the tetrahedron-collection or tetrahedron-mesh. This is consistent with the concept of the GeometryCollection class defined in the OpenGIS Simple Features Specification. Figure 3.a shows a tetrahedron object and Figure 3.b shows tetrahedron meshes that represent 3-D geology layers. In 3-D GIS applications, most of the work done for modeling cities uses vector-based models: each building or object is represented with a b-rep (boundary representation). Most of the efforts of this community are to develop models to store individual objects and to detect 'topological relationships' between 3D objects, i.e. to know where and whether objects touch or intersect each other. Doing this assumes, in our model, that only individual objects
are stored in the database as polyhedra, and topological relationships are restored 'on-the-fly'.
Fig. 3. (a) tetrahedron, (b) tetrahedron mesh for a 3D geology model, (c) polyhedron (Tetgen, 2006)
A polyhedron is made up of several flat faces that enclose a volume (see Figure 3.c). An advantage is that one polyhedron equals one factual object. Because a polyhedron can have holes in the exterior and interior boundary (shell), it can model many types of objects. Triangle, tetrahedron, and polyhedron are composed of vertices, edges, faces and an incidence relationship on them. The arrangement of the vertices of faces in the outer boundaries is counter-clockwise, as seen from the outside of an object, and the arrangement of the vertices of faces in the inner boundaries is clockwise (and all inner rings are in reverse order). In other words, the normal vector of a face points to the outside of the object.
Definition of Abstract Data Types for New Geometric Classes
To achieve this integration, some data types must be added to the OpenGIS SQL Specification. They are Solid, Voxel, Tetrahedron, Pixel, Triangle, 3DRaster, TetrahedronMesh, 2DRaster, TriangularMesh, Polyhedron, Node and Vertices, and the relationships between them are shown in Figure 4 using UML (Unified Modeling Language). The relationships between the new classes are as follows: Triangle and Pixel are new subclasses of Surface; Solid is a new subclass of Geometry; Tetrahedron and Voxel are new subclasses of Solid; TriangularMesh and 2DRaster are new subclasses of MultiSurface; MultiSolid is a new subclass of GeometryCollection; and TetrahedronMesh and 3DRaster are new subclasses of MultiSolid. To support the construction of Triangle and Tetrahedron geometries, the Vertices and Node classes were created. The Vertices class is identical to the MultiPoint class but has a different meaning in the representation of real features. A Node is a 'symbolic pointer' to a point in Vertices.
Fig. 4. Enhanced OpenGIS geometry class hierarchy
Data Definition Language
In PostgreSQL, creating a new base type requires implementing functions that operate on the type in a low-level language, usually C. A user-defined type must always have input and output functions. These functions determine how the type appears in strings (for input by the user and output to the user) and how the type is organized in memory. An accepted way to represent the new geometry classes in memory is through corresponding C structures; these C structures were compiled and held in the libpostplus.dll file. The SQL-level registration of such a type is sketched below.
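The paper does not reproduce the registration statements themselves, so the following is only a rough, hypothetical sketch of how a base type such as Tetrahedron could be declared in PostgreSQL; the function names, the 'libpostplus' entry points and the storage settings are assumptions made for illustration, not the project's actual definitions.

-- Hypothetical sketch: register a TETRAHEDRON base type whose input and output
-- functions are implemented in C and compiled into the libpostplus library.
CREATE TYPE tetrahedron;                       -- shell type, completed below

CREATE FUNCTION tetrahedron_in(cstring)
    RETURNS tetrahedron
    AS 'libpostplus', 'tetrahedron_in'         -- assumed entry point
    LANGUAGE C IMMUTABLE STRICT;

CREATE FUNCTION tetrahedron_out(tetrahedron)
    RETURNS cstring
    AS 'libpostplus', 'tetrahedron_out'        -- assumed entry point
    LANGUAGE C IMMUTABLE STRICT;

CREATE TYPE tetrahedron (
    INPUT          = tetrahedron_in,
    OUTPUT         = tetrahedron_out,
    INTERNALLENGTH = VARIABLE                  -- vertex and connectivity lists vary in size
);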
Schema of Data Table
Once the geometric data types are created in the PostgreSQL environment, we can create tables implementing our database design. Creating a table with spatial data is done in two stages: create a normal non-spatial table and add a spatial column to the table using the OpenGIS "AddGeometryColumn" function. In order to create a table named "models" containing geometric data, we can use the following SQL statements:

CREATE TABLE models (
    id         int4,
    type       varchar(128),
    patient_id varchar(11)
);
SELECT AddGeometryColumn('models', 'the_geom', -1, 'GEOMETRY3D', 3);

In the OpenGIS specification, data can be organized using the WKT or WKB representation. For the purpose of supporting the new geometry data, the corresponding WKT and WKB structures have to be given. Table 2 lists the WKT representations.

Table 2. WKT Representations

Geometry Type     WKT Representation                                          Description
Tetrahedron       'TETRAHEDRON (VERTICES (10 10 10, 20 20 20, 30 30 30,       Tetrahedron
                  40 40 40), CONNECTIVITY (1 2 3 4))'
TetrahedronMesh   'TETRAHEDRONMESH (VERTICES (10 10 10, 20 20 20, 30 30 30),  Tetrahedron mesh consisting
                  CONNECTIVITY (1 2 3 4, 2 3 4 5))'                           of 2 tetrahedrons
Polyhedron        'POLYHEDRON (VERTICES (10 10 10, 20 20 20, 30 30 30),       Polyhedron
                  CONNECTIVITY (1 2 3 4, 2 3 4 5))'
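Assuming that a constructor in the style of the OpenGIS GeomFromText function were extended to accept these new WKT forms (an assumption; the paper does not show the insert syntax, and the patient identifier below is invented), a record with a tetrahedron mesh could then be stored with an ordinary INSERT:

-- Hypothetical insert using the extended WKT of Table 2.
INSERT INTO models (id, type, patient_id, the_geom)
VALUES (1, 'FEM', 'P-0001',
        GeomFromText('TETRAHEDRONMESH (VERTICES (10 10 10, 20 20 20, 30 30 30),
                      CONNECTIVITY (1 2 3 4, 2 3 4 5))', -1));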
Data Manipulation Language and Examples
Manipulation operations on the geometric data types include selections on the associated values, volume and surface creation operations, aggregate operations like averages of certain quantities, distance, volume and area computations, and join, intersection and union operations. Considering that the current structure of SQL, with the SELECT-FROM-WHERE block, is complex enough for naive users, some statements with SELECT-FROM-WHERE clauses are taken as case studies. The following two samples are a simple and a complex query.
1. To export the triangular meshes of a given patient and create an image in VRML format:
   SELECT aswrl(the_geom) FROM models WHERE type='FEM'
2. To ask for all pairs of patients p and q such that p has an abnormality while q is normal, but such that the "abnormal portion" of p's mandible is more extended than q's mandible:
   SELECT p.patientid, q.patientid
   FROM model p, model q
   WHERE (p.diagnosis = 'abnormal' AND q.diagnosis = 'normal')
   AND area(difference(p.the_geom, q.the_geom))
Spatial Index
PostgreSQL supports three kinds of indexes by default: B-Tree indexes, R-Tree indexes, and GiST indexes. In this research, GiST is used. GiST stands for "Generalized Search Tree" and is a generic form of indexing. In addition to GIS indexing, GiST is used to speed up searches on all kinds of irregular data structures (integer arrays, spectral data, etc.) which are impossible to handle with normal B-Tree indexing. To query for similar objects in the Craniofacial Geometric Morphometrics Database, we first transform the correspondence facial models to be invariant with respect to scale, position and rotation by means of a modified Principal Component Analysis (PCA). After this normalization step, feature vectors are extracted to capture certain aspects of the models. Therefore, GiST indexing structures are used for the typically high-dimensional feature vector data. Figure 5 shows the similarity search idea.
Fig. 5. Main idea behind the similarity search system: feature extraction turns a 3D model into a high-dimensional feature vector, which is inserted into a high-dimensional index structure
GiST (Generalized Search Trees) indexes break up data into "things to one side", "things which overlap" and "things which are inside", and can be used on a wide range of data types, including GIS data. PostGIS uses an R-Tree index implemented on top of GiST to index GIS data.
Fig. 6. R-Tree indexing for 2-dimensional bounding boxes
The 3-D bounding box of a geometry object is the object of indexing, so for each geometry object there is xmin, ymin, zmin, xmax, ymax and zmax information. Figure 6 illustrates how the bounding boxes are indexed by the R-tree structure.
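The paper does not give the exact index definition; as an illustration only, a GiST index over the geometry column of the models table could be declared roughly as follows (the index name, and the availability of a suitable GiST operator class for the 3-D types, are assumptions):

-- Hypothetical GiST index on the geometry column; what the index organises
-- internally are the 3-D bounding boxes of the stored geometries.
CREATE INDEX models_the_geom_idx ON models USING GIST (the_geom);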
Validation
It is important that the geometric data be checked when it is inserted or changed in the DBMS. Checking the geometry of the spatial objects is called validation (Arens et al., 2005). PostGIS uses the GEOS library to provide geometry tests (Touches, Contains, Intersects) and operations (Buffer, GeomUnion, Difference). Because GEOS is not suitable for the validation of our new 3-D geometry types, the program TetGen (Tetgen, 2006), developed by Hang Si, is used for the validation of the new 3-D geometry data types. The inputs of TetGen are called piecewise linear complexes (PLCs). Any polyhedron is a PLC. Furthermore, PLCs are more flexible than polyhedra for representing three-dimensional geometric objects. The definition of PLCs requires that they must be closed under taking intersections, that is, two segments can only intersect at a shared point, and two facets are either completely disjoint or intersect only at shared segments or vertices or a union of shared segments and vertices (because facets can be non-convex). Another restriction on the facets of PLCs is that the point set used to define a facet must be coplanar. Figure 7 shows non-closed configurations as examples.
Fig. 7. An invalid PLC example: two vertices and one segment are missing (Tetgen, 2006)
However, geometry objects created by most GIS or CAD tools usually do not satisfy this condition. TetGen can check and find all the intersecting facets of a 3-D geometry record and report them in the query result.
Visualization
Geometric objects and meshes are best understood through visualization. We used two types of programs: our own client/server application and a VRML viewer. The Borland Delphi programming tools enriched with GLScene components (GLScene, 2006) were used in order to simplify the procedures of creating real 3-D scenes in the client/server application; GLScene is an OpenGL-based 3-D library for Delphi. The Zeos component (Zeos, 2006) is used for the database connection. The Zeos library is a component set for Delphi to access several database engines; its protocol property allows the selection of the database server, and it supports PostgreSQL and other popular DBMSs. A number of functions enabling the user to modify the visualized 3-D scene and make it dynamic were introduced into the created application, referred to as "3-D Visualization". The choice of a given option is made in the unrolling menu, using scroll and mouse buttons.
Fig. 8. Client/server application: facial data and skull visualization
Fig. 9. Visualizations of 3D geodatabase by our client/server application
Figure 9 shows 3D spatial data visualization by our client/server application: Figure 9.a is a visualization of building polyhedra, Figure 9.b is a visualization of a terrain surface and Figure 9.c is a visualization of terrain by a tetrahedral mesh.
Conclusions
The new geometric data types implemented in the extended PostGIS are suitable for a geo-spatial database. The polyhedron structure and the tetrahedron mesh can be used for representing building objects in city modeling and solid terrain, respectively. The TetGen library supports some operations in the system, especially the 3-D geometry validation process. In further work we will include the Computational Geometry Algorithms Library (CGAL, 2006) to add 3D spatial operations. Finally, a graphics library based on the OpenGL standard was used in our original application designed for the dynamic 3-D visualization of objects registered in the database; the Delphi programming tools enriched with GLScene components were used in order to simplify the procedures of creating real 3D scenes. The program was tested, and the application proposed in this study enables dynamic and photorealistic 3D visualization of the objects recorded in the database.
References
Arens, C., J.E. Stoter, and P.J.M. van Oosterom, 2003. Modelling 3D spatial objects in a GeoDBMS using a 3D primitive. In Proceedings AGILE 2003, Lyon, France, April 2003.
Belussi, A., Bertino, E., Biavasco, A., and Rizzo, S. (1996). Filtering Distance Queries in Image Retrieval. In: Subrahmanian, V.S. and Jajodia, S. (eds.) Multimedia Database Systems. Berlin, Germany: Springer-Verlag, 185-211.
Bookstein, F.L. 1991. Morphometric Tools for Landmark Data. Cambridge University Press.
Chong, A.K., Majid, Z.B., Ahmad, A.B., Setan, H.B. and Samsudin, A.B.R. 2004. The users of a national craniofacial database. New Zealand Surveyor, 294: 15-18.
Dryden, I.L., and Mardia, K.V. 1998. Statistical shape analysis. London: John Wiley and Sons.
Farkas, L.G. (ed.) 1994. Anthropometry of Head and Face. 2nd ed. New York: Raven Press.
Hutton, T.J., Buxton, B.F., and Hammond, P. 2001. Dense surface point distribution models of the human face. In Proc. IEEE Workshop on Mathematical Methods in Biomedical Image Analysis, Kauai, Hawaii, pp. 153-160.
Subsol, G., Thirion, J.P. and Ayache, N. (1998). A scheme for automatically building three-dimensional morphometric anatomical atlases: application to a skull atlas. Medical Image Analysis, 2(1): 37-60.
Zlatanova, S., A. A. Rahman, and W. Shi, 2004. Topological models and frameworks objects. Computers, Environment and Urban Systems, 30: 419-428.
GIS-based Multidimensional Approach for Modeling Infrastructure Interdependency
Rifaat Abdalla, Harris Ali, and Vincent Tao
GeoICT Lab, York University, 4700 Keele Street, Toronto, ON, M3J 1P3, Canada. Email [Abdalla.jaoj Oyorku.ca
Faculty of Environmental Studies, York University, 4700 Keele Street, Toronto, ON, M3J 1P3, Canada. Email: [email protected]
Abstract
Information technology has been challenged to improve the efficiency and effectiveness of managing the four phases of natural disasters (the preparedness, mitigation, response and recovery phases). Addressing the interrelationships between different critical infrastructure sectors during disasters is a complex process. This paper presents a multidimensional approach that addresses the issue of Location-Based Infrastructure Interdependency (LBII).
Key Words: Disaster Management, Emergency Management, Infrastructure, Interdependency, Modeling, GIS
Introduction
Disasters are dynamic processes (Alexander, 1993) and, by their very nature, are spatially oriented (Waugh, 1995). According to Montoya-Morales (2002), most current tools that are used for disaster management focus on the temporal component of the four phases of disaster management, leaving an obvious gap in dealing with the spatial element. The emphasis on the spatial dimension makes GIS technologies ideal for simulating the complex spatial relationships among critical infrastructures (i.e., their interdependencies) while still being able to integrate other modeling tools. Nash et al. (2005) showed that temporal GIS can effectively combine both the temporal and the spatial dimensions. Several studies, including (Briggs, 2005; Dietzel et al., 2005; Giardino et al., 2004; Gupta and
Singh, 2005; Abdalla et al., 2006; Laben, 2002), outlined the importance of addressing the spatio-temporal effects related to disaster management. These studies have tried to establish efficient and advanced information systems that can accommodate multiple events with the aid of GIS. Infrastructure interdependency is a new multidisciplinary field of research. Despite the clear definition of infrastructure sectors in Canada, there is no consensus about a precise definition of the set of activities and operations that shape this field. Decision-making and support tools, aided by case studies and scenario development simulations, are key research areas in this new field (Trudeau, 2004). The estimation of risk for a particular infrastructure sector cannot be achieved without a complete conceptualization of the vulnerabilities and hazards surrounding it. Technology, in particular GIS, can play a very positive role in this regard. GIS simulations and decision-making models can provide transferable solutions that can be used for similar scenarios regardless of the location. Research in infrastructure interdependency has evolved as a branch of disaster and emergency management very recently. In Canada, the first report that addressed the issue of infrastructure interdependency was published by the Department of National Defence, National Contingency Planning Committee, in 2000. This study was initially prepared as a stage in the preparation for Y2K compatibility; eventually, it became a major reference on infrastructure interdependency. Since then, very few publications have come into existence. Particular concerns were raised in order to address the serious questions regarding infrastructure interdependency following certain events, such as the power blackout of August 2003 in parts of Ontario and the SARS outbreak of 2003 in Toronto. These two events have illustrated key interdependencies of critical infrastructures and have contributed to expediting the process of dealing with the issue of infrastructure interdependency. The September 11, 2001 events in the US and other human-induced threats have added to the importance of determining interdependencies among infrastructure systems in Canada.
Utility of GIS in Infrastructure Interdependency Research
The work reported by Abdalla and Tao (2005b), some of which is the backbone of this dissertation, is unique in Canada and among the very few international studies that highlight the contribution of GIS to the new field of infrastructure interdependency. There are many GIS analytical techniques usable for infrastructure interdependency modeling. In this section,
the utility of these techniques for addressing particular issues related to infrastructure interdependency research will be highlighted.
Attributes-based Analysis
The power of GIS stems from its capacity to combine both the spatial attributes and the graphical representation of a feature. There are powerful analytical functions that can be used to model critical infrastructure interdependency. Functions like Query Within can provide very useful information about particular sector elements within a specific location. The Summarize Table function can provide important information about a specific subset of attributes for a particular location. Index Attributes functions can provide details about indexing critical elements for a particular infrastructure sector unit.
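Although the study works with desktop GIS functions rather than a spatial SQL environment, a within-location attribute query of the kind described above can be illustrated as follows; the table and column names are hypothetical and serve only to make the idea concrete:

-- Illustrative only (PostGIS-style SQL, not the desktop GIS workflow of the study):
-- count the critical-infrastructure elements of each sector inside a selected zone.
SELECT i.sector, COUNT(*) AS elements_in_zone
FROM   infrastructure i, risk_zones z
WHERE  z.zone_id = 5
  AND  ST_Within(i.geom, z.geom)
GROUP  BY i.sector;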
Node-based Feature Analysis
Node-based feature analysis, or point analysis, can provide useful information when modeling LBII. Functions like Distance can provide actual distance information between different critical infrastructure sectors, or between critical facilities like hospitals, schools and others. Point-based analysis can also provide very useful information through attribute analysis; for instance, point features for an emergency medical service can provide many attributes about its coverage area, its capacity and possible alternatives at peak times.
Area-based Analysis
Area-based analysis provides advanced functions for polygon features. Functions like Dissolve and Eliminate can be used for the simulation of "what if" scenarios. They can provide detailed information about new polygon features that might be created, with respect to area, neighboring infrastructures and adjacent facilities. Clip operations can be used in conducting polygon-based analysis to show what features might look like in extreme situations. Particular features can also be Split. Buffer operations can be used as spatial constraints in particular sectors; for example, the user can identify a buffer of 500 meters as a zonation.
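As a sketch of how such a 500-metre buffer constraint could be expressed (again PostGIS-style SQL rather than the tools used in the study, with hypothetical table and column names):

-- Illustrative only: facilities lying within 500 m of a damaged pipeline segment,
-- i.e. inside its 500 m buffer zone (distance in the units of the projection).
SELECT f.name
FROM   facilities f, pipelines p
WHERE  p.status = 'damaged'
  AND  ST_DWithin(f.geom, p.geom, 500);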
Network-based Analysis
Network-based analysis provides a wealth of GIS operations that can be used for modeling infrastructure interdependency. Network operations include line operations such as Buffer, which can be used on infrastructure layers to apply a buffer of a specific distance around a feature. The Dissolve operation can be used to unify features (e.g. to dissolve two line segments into one line). Intersect can be used to perform feature intersection operations; for instance, in simulation modeling this function can be used to provide information about location attributes that might need to be split as a result of intersecting with a particular feature. Optimal Route Finding is very useful in determining the shortest path between two locations; in emergency situations this can be of use when trying to determine the optimal path between facilities and services.
Raster-based Analysis
Raster-based analysis provides functionality that is important when dealing with elevation data and with image analysis. There are a number of 3D analysis functions that are based on raster analysis. These include Contouring, which can be used to provide linear elevation features derived from an elevation grid; this function can be of great use in modeling density grids for a particular distribution. Slope and aspect analysis and hillshade analysis are also important and provide information, for instance, when dealing with plume analysis. This information can be of great practical importance.
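Optimal route finding of the kind mentioned in the network-based analysis above is usually delegated to a routing engine; purely as an illustration, a shortest-path query in a PostGIS/pgRouting environment (not the software used in the study, and with hypothetical table, column and node identifiers) might look like:

-- Illustrative only: shortest path over a road network between two network nodes,
-- e.g. from a fire-station node to a hospital node.
SELECT *
FROM   pgr_dijkstra(
         'SELECT id, source, target, length_m AS cost FROM road_edges',
         1001,    -- start node id (hypothetical)
         2077     -- end node id (hypothetical)
       );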
Concepts of Infrastructure Interdependency
Increasing complexity and interconnectedness among infrastructures has resulted in a range of interdependencies. These interdependencies have introduced new vulnerabilities and risks to our society. Canada's critical infrastructures are those physical and information technology facilities, networks and assets which, if disrupted or destroyed, would have a serious impact on the health, safety, security or economic well-being of Canadians, or the effective functioning of governments in Canada (Trudeau, 2004). There is a limited understanding of Canada's infrastructure interdependencies and vulnerabilities, of the methods for measuring and quantifying them, and of how to mitigate interdependencies. Having said that, geographic
interdependency can further be scaled down to emphasize location-specific geographic interdependency.
Modeling infrastructure interdependency
When dealing with the issue of modeling infrastructure interdependency for disaster management, there is always a question about the effectiveness of modeling tools in mimicking very complex real-world situations. Another question arises about the accuracy and validity of these models and to what level decision-makers can trust them. Uncertainty and model accuracy factors are critical and might influence or misguide informed decision-making when it is rapidly required in extreme situations. Another issue with modeling infrastructure interdependency is to what level these models fit extreme situations and how everyday operations can be balanced with security concerns. Rinaldi (2004) has classified infrastructure interdependency models into six types, as follows.
Fig. 1. Dimensions of infrastructure interdependency (the dimensions include infrastructure characteristics and types of interdependencies)

1. Aggregate supply and demand tools. This category of tools evaluates the total demand for infrastructure services in a region and the ability to supply those services.
2. Dynamic simulations. This model type employs dynamic simulations to examine infrastructure operations, the effects of disruptions, and the associated downstream consequences.
3. Agent-based models. In this modeling technique, physical components of infrastructures can be readily modeled as agents, allowing analyses of the operational characteristics and physical states of infrastructures.
4. Physics-based models. Physical aspects of infrastructures can be analyzed with standard engineering techniques. For example, power flow and stability analyses can be performed on electric power grids.
5. Population mobility models. This model examines the movement of entities through urban regions. For example, the entities may be people following their daily routines in a city.
6. Leontief Input-Output Models. Leontief's model of economic flows can be applied to provide a linear, aggregated, time-independent analysis of the generation, flow, and consumption of various commodities among infrastructure sectors.
Case study
The mock storyline for the earthquake used in this paper focuses on a shallow 7.3 MMI subduction earthquake occurring in the Strait of Georgia (latitude 49.45, longitude 123.941) with no surface rupture. At this magnitude and location, it is plausible to have the following occur: a landslide on Hornby Island, fracture building damage in the City of Vancouver, a dam breach (assumed) and flood inundation on the west coast (tsunami wave). Figure 2 shows the study area and the scenario details.
Population at risk
Using the shake map and based on the magnitude distribution, it was possible to identify population categories in each risk zone. The obtained results were then summarized as shown in Fig. 2. Abdalla and Tao (2004) and Amdahl (2001) have shown a shakemap of California's Northridge earthquake, which applies the principle of using soil classification to detect the most vulnerable zones. Further analysis was conducted to provide the density of the population at risk in each zone. The obtained population potential loss density was in complete agreement with the results obtained from the building damage density analysis, as shown in Fig. 3. A tabular report that outlines the distribution of the population at risk was produced using GIS analytical capabilities.
Fig. 3. Population at risk distribution by zone (chart: population by MMI zones)
Fig. 4. Map showing vulnerable population
Critical infrastructure at risk
The City of Vancouver's critical infrastructure assets were explored and analyzed using GIS. It was possible to model and visualize the critical infrastructure in the area that is vulnerable to damage. All infrastructure sectors at risk were selected using GIS analysis functions and, after they were identified on the map (Abdalla and Tao, 2005a), it was possible to produce a table that lists the vulnerable segments based on the data. Table 1 shows highways that are vulnerable to damage. The table contains information about the name of each highway, in which municipality it is located, its class, and the highway length. It is also possible to estimate the potential financial loss for each highway class based on a generic cost-per-kilometer estimation.
Table 1. Attributes of vulnerable highways
[Table 1 lists the vulnerable highway segments (e.g. the Alberni Highway, the Island Highway and the Parksville Bypass in the Nanaimo Subdivision B area) with their street name, left/right municipality, postal code, province (BC), unique ID and shape length.]
Conclusions
The application of GIS in disaster management activities has clearly proven its feasibility. Today's emergency management planning at all levels utilizes GIS data and functionality for the simulation of "what if" scenarios. This paper has presented efforts that contribute to this field. The simulated earthquake model for the City of Vancouver has clearly demonstrated that GIS can further be utilized in modeling location-based critical infrastructure interdependency. The presented model has provided a visual, quantitative model of the infrastructure and assets that are vulnerable within a limited spatial extent. Geospatial data stored in municipal spatial databases and provincial data banks can be very usable when modeling small-scale scenarios; however, large-scale simulations may require data and systems interoperability for an effective decision-making process.
References
Abdalla, R. and Tao, V., 2004. Applications of 3D Web-Based GIS in Earthquake Disaster Modeling and Visualisation. Environmental Informatics Archives, 2(2004): 814-817.
Abdalla, R. and Tao, V., 2005a. Integrated Distributed GIS Approach for Earthquake Disaster Modeling and Visualization. In: van Oosterom, Zlatanova and Fendel (Editors), Geo-information for Disaster Management. Springer, New York, pp. 1183-1192.
Abdalla, R. and Tao, V., 2005b. A Network-Centric Spatial Decision Support System for Modeling Infrastructure Interdependency, Joint Infrastructure Interdependencies Research Program Annual Symposium, Ottawa.
Abdalla, R., Tao, V., Wu, H. and Maqsood, I., 2006. A GIS-supported 3D approach for flood risk assessment of the Qu'Appelle River, Southern Saskatchewan. Int. J. Risk Assessment and Management, 6(4/5/6): 440-455.
Alexander, D., 1993. Natural Disasters. Chapman & Hall, New York, 632 pp.
Amdahl, G., 2001. Disaster response: GIS for public safety. ESRI Press, Redlands, Calif., 108 pp.
Briggs, D., 2005. The role of GIS: Coping with space (and time) in air pollution exposure assessment. Journal of Toxicology and Environmental Health, Part A - Current Issues, 68(13-14): 1243-1261.
Dietzel, C., Herold, M., Hemphill, J.J. and Clarke, K.C., 2005. Spatio-temporal dynamics in California's central valley: Empirical links to urban theory. International Journal of Geographical Information Science, 19(2): 175-195.
Giardino, M., Giordan, D. and Ambrogio, S., 2004. GIS technologies for data collection, management and visualization of large slope instabilities: two applications in the Western Italian Alps. Natural Hazards and Earth System Sciences, 4(2): 197-211.
Gupta, P.K. and Singh, A.P., 2005. Disaster management for the Nandira watershed district, Angul (Orissa), India, using temporal Remote Sensing data and GIS. Environmental Monitoring and Assessment, 104(1-3): 425-436.
Laben, C., 2002. Integration of remote sensing data and geographic information system technology for emergency managers and their applications at the Pacific Disaster Center. Optical Engineering, 41(9): 2129-2136.
Montoya-Morales, A., 2002. Urban Disaster Management: A case study of Earthquake Risk Assessment in Cartago, Costa Rica. Ph.D. Thesis, International Institute for Geo-Information Science and Earth Observation (ITC), Enschede, The Netherlands.
Nash, E., James, P. and Parker, D., 2005. A model for spatio-temporal network planning. Computers & Geosciences, 31(2): 135-143.
Rinaldi, S.M., 2004. Modeling and Simulating Critical Infrastructures and Their Interdependencies, Proceedings of the IEEE 37th Hawaii International Conference on System Sciences, Hawaii, pp. 1-8.
Trudeau, M., 2004. Infrastructure Interdependencies Workshop Report. PSEPC, Ottawa, pp. 27.
Waugh, W.L., 1995. Geographic Information Systems - the Case of Disaster Management. Social Science Computer Review, 13(4): 422-431.
Conception of a 3D Geodata Web Service for the Support of Indoor Navigation with GNSS1
Stephan Mas, Wolfgang Reinhardt and Fei Wang
AGIS - Arbeitsgemeinschaft GIS, Universität der Bundeswehr München, Werner-Heisenberg-Weg 39, 85577 Neubiberg, Germany
E-mail: {Stephan.Maes|Wolfgang.Reinhardt|Fei.Wang}@unibw.de
http://www.unibw.de/bauvl1/geoinformatik/AGIS
Abstract
This paper addresses the concept of a 3D Geodata Web Service for the support of indoor navigation with satellite-based positioning systems like Galileo or GPS. The presented work is part of a project called INDOOR, which principally aims at the support of satellite positioning and navigation within buildings. The intended web service shall provide 3D building data to mobile clients, where it will be used to calculate the indoor signal dispersion and to provide location-based information to the user. After an exposition of the detailed requirements on the service, CityGML has been selected as the best fitting data format. It is shown how the inherent material properties of 3D building elements can be included in CityGML. Furthermore, the possibility to transfer data that enables a personalised and situation-aware 3D route calculation is briefly discussed.
1 The work presented in this paper is part of the project INDOOR (http://www.indoor-navigation.de), funded by the German Aerospace Center (Deutsches Zentrum für Luft- und Raumfahrt, DLR) with financial resources of the German Ministry of Education and Research (BMBF) under grant no. 50 NA 0504. The demands on the proposed web service, extensively described in paragraph 3, mainly arise from consultations with the INDOOR coordinator IfEN GmbH.
1. Introduction
After the great hype around LBS (Location Based Services) at the beginning of the century, numerous solutions have in the meantime demonstrated their feasibility, for example the services implemented in the EU-funded research projects PARAMOUNT [Sayda et al. 2002, Mundle et al. 2004] and LoVeus [Blechschmied et al. 2004]. The positioning of many of these services relies on GNSS (Global Navigation Satellite System) signals, in particular because of their global availability and precision. For most applications, satellite-based positioning with Galileo or GPS still depends on the direct "visibility" of the satellites. Hence the mentioned location-aware services can only be provided for outdoor environments with an adequate GNSS signal availability, and not for buildings (indoor) or areas with comparable signal shadowing effects (border areas between indoor and outdoor). To extend the positioning capabilities to indoor environments (inside buildings), additional localization techniques are required. In recent years a multitude of such indoor positioning technologies has been developed. Most of them are based on the concept of "pseudolites"2, on wireless communication networks like WLAN [Zehner et al. 2005] or Bluetooth [Genco 2005], make use of installed beacons like RFID [Okuda et al. 2005] or infrared [Brassart et al. 2000], or apply additional positioning sensor systems [Okuda et al. 2005]. However, when a precise localisation is needed, all of these techniques have the disadvantage that they require an additional infrastructure installed in the buildings. An expansion of GNSS positioning to indoor environments would do without such local installations and would therefore be much more convenient. As shown by Eissfeller et al. (2005), depending on the building materials, sensitive outdoor receivers are capable of acquiring the damped GNSS signals inside a building, but the waiting time for such a measurement is by far too long for a practical application. The aim of the INDOOR project is to bridge these deficits by developing new GNSS signal processing methods and new receiver systems. This paper is focused on one part of the research, aiming at the setup of a geodata web service providing 3D building information to mobile clients. It will enable the dedicated simulation software to calculate the GNSS signal dispersion at the current position and to use this information for the improvement of the positioning accuracy. The remainder of this paper provides a more detailed insight into the objectives of the INDOOR project in the next paragraph. Based on that, the
2 http://en.wikipedia.org/wiki/Pseudolite
requirements on the demanded 3D geodata web service, which provides the building information necessary to model the GNSS signal dispersion, will be defined in chapter 3. In chapter 4 an overview of possible data formats and standards will be given and an appropriate choice is made for the project. Paragraphs 5 and 6 address the particular concepts for the inclusion of material properties in the building model and the 3D routing data.
2. The INDOOR Project
The principal aim of the INDOOR project is to support satellite positioning and navigation within buildings or in critical border areas between indoor and outdoor areas. Therefore, the main focus within the project is on the development of new signal processing methods and new receiver systems, the connection to modern wireless communication systems as well as external sensors, and the transmission and ray-tracing modelling of the signal dispersion in buildings. Investigations will be focused on how building parts, like for example walls, influence the GNSS (Global Navigation Satellite System) signal transmission and which properties, like materials or age, are necessary to model the signal dispersion. Based on the results of this research, a 3D geodata web service, which provides the building information, will be implemented. On a mobile device equipped with an indoor-capable GNSS receiver developed in the project, such information could then be used for the simulation of the GNSS signal strength at the current position in the building and therewith for the improvement of the positioning accuracy. The project's results will be exemplified and demonstrated by application scenarios within the domains of SAR (Search and Rescue) and logistics. For security-sensitive applications (e.g. public responsibilities of security politics, administration and services) and traffic-logistics applications, indoor navigation is of particular interest. For example, the exact localisation of an emergency call within a building and, later on, the reliable localisation and navigation of the SAR teams can be deciding factors for the rescue mission. The availability of GNSS positioning within buildings offers new possibilities for LBS and other navigation applications.
3. Requirements on the 3D Geodata Web Service
The main objective of the proposed web service is to provide the information necessary to simulate the GNSS signal strength within a building.
Therefore, all building elements passed through by the available satellite signals on their direct paths between the satellites and the receiver antenna have to be known. Building element information must contain the geometries as well as certain material parameters. Generally it should be possible to link a specific material (like concrete or steel) to the properties (e.g. physical constants, refractive index) relevant for the signal dispersion calculation. Since it might not always be possible to assign a single material to a building element, alternatively an attribute containing a signal effect factor will be assigned. This factor has to be acquired through test measurements in the field. To simulate the signal dispersion, a model which only contains the main building elements, like walls, ceiling, roof and basement, will not be sufficient. Every object in the building which has a measurable influence on the signal has to be considered in the model. This also includes things like beams, pipes, doors, curtain walls, bigger cables and railings. In summary, for this application a complex model for the appropriate representation of buildings has to be developed. Furthermore, the data shall be visualised on the mobile devices of users (e.g. PDAs or rugged tablet PCs for the SAR teams), provide location-based information and enable 3D routing calculations, in particular within buildings. The visualisation should contain detailed information about the indoor environment. It should comprise main landmarks, like furniture (e.g. the information desk in the entrance hall of an airport) or main way posts, and textures for surfaces like walls or floors. Within the INDOOR project the routing calculations will be made on the client side, which means that the data for the routing calculation has to be made available through the service. The final route description or verbal route instructions will be calculated by the client application. Nevertheless, the available data has to be adequate for the following requirements on the route calculation. It should be personalised and adjustable to specific situations, like for example an emergency rescue or logistic transports of heavyweight material. In complex public buildings, like for example an airport, the accessibility of a path depends not only on the direction and the means of transport (e.g. on foot or using a wheelchair). Many of the paths are only accessible for employees of the airport or the security staff. Other paths should only be used under certain circumstances; for example, an emergency exit window should only be smashed in emergency cases. At the same time these circumstances will probably prohibit the use of an elevator. These examples demonstrate the importance of considering personalisation and the current situation as affordances [Raubal and Worboys 1999] of the paths in the route calculation. Emphasis is also put on the consideration of the weights of graph edges and the path optimisation strategy, like for example shortest path or a path
optimised with regard to certain parameters. In emergency cases, main exit routes should be chosen preferentially to allow for a safe and controlled exit; thus a simple graph weighting according to the length of the edges will not suffice. In general, international standards are chosen preferentially for the service interfaces and transfer data formats. But, as will be shown in the next paragraph, the different areas of application with their varying requirements on the building model and the service interfaces are not yet fully supported by any existing system or standard.
4. Overview on possible Data Formats and Service Interface Standards
As mentioned in the previous chapter, international standards are preferred for the service interfaces and the transfer data formats. But the manifold requirements, namely
1. detailed 3D visualisation including textures,
2. semantic information about building elements,
3. material properties of the building elements,
4. and personalised and situation-aware routing possibilities,
are not yet fully supported by any available standard. Existing 3D graphics standards, like VRML or its successor X3D [ISO/IEC 19775:2004], accomplish all requirements in terms of 3D visualisation, independently of the required level of detail (LoD). But they do not provide semantic information or additional thematic attributes for the graphical objects. With those standards it is not possible to define building elements or indoor installations as object classes (e.g. roof or door) and to attach the corresponding material properties. The IFC (Industry Foundation Classes, [IFC 2.x.2]) defines an exchange format for building models. It has been specified by the IAI (International Alliance for Interoperability) and has become a publicly available international standard. IFC provides a very detailed semantic model for building elements and building installations. Its known lack of defined concepts for outdoor object types (e.g. streets or vegetation), as pointed out by [Kolbe et al. 2005], is not crucial for indoor applications. IFC is supported by most CAD software products, in particular by those specialised on architectural design.
CityGML [OGC 06-057] is probably the latest data format for city and building models. It is a GML (Geography Markup Language) profile, which defines building object types based on the IFC model. CityGML provides a geometric, topological and semantic data model. It is less detailed than IFC, but since it is extensible, missing object types can be added for specific applications. The required material properties can also be added to each object type. Compared to IFC, CityGML additionally provides the possibility to define different LoDs for each object, and it contains more defined outdoor object types, which are in particular necessary for city models. For indoor applications only the most comprehensive LoD4 is useful, since this is the only one which considers interior structures like rooms, stairs and furniture. Anyway, for border areas between indoor and outdoor, which are also part of the INDOOR application frame, less detailed LoDs are a nice feature. For example, they can be used for the visualisation of the buildings in the direct neighbourhood of the target building. Another advantage of CityGML is the definition of terrain intersection curves for each building, which allow for a better integration of buildings and the surrounding terrain [Kolbe et al. 2005]. The available interfaces of the OGC (Open Geospatial Consortium) and their different areas of application are schematised in Figure 1. For the INDOOR project prototype system, only mobile client devices that have sufficient processing power will be selected. This means that the balancing between the client and the server is not a main issue.
Fig. 1. Different balancing schemes between client and server: thin, medium and thick client/server configurations with Select, Element Generator, Render and Display stages and the associated OGC interfaces (e.g. WFS, W3DS, WMS, WTS) (taken from [OGC 05-019])
The WMS (Web Map Service) and the WTS (Web Terrain Service), shown on the right side of Fig. 1, are not appropriate for this application, since with these interfaces the whole rendering is done on the server and only bitmaps are available for the client. The necessary 3-dimensional geometries and all semantic information of the object types are not transferable with these interfaces. The only "real" candidates are the W3DS (Web 3D Service) and the WFS (Web Feature Service). The W3DS merges different object types into a single scene graph, representing a visual representation of the basic 3D geodata. Server requests are defined through the viewing parameters, like angle of view, distance etc. The resulting scene graph is rendered on the client side. This allows for interaction possibilities for the user, but they are limited by the scene graph. The semantic characteristics and relations are not contained in a scene graph [OGC 05-019]. Thus the W3DS could be an appropriate solution for the 3D visualisation, but the other three main requirements mentioned at the beginning of the chapter are not fulfilled. The only interface which also provides the semantic information of the objects is the WFS. This standard allows for server requests using an ID, a bounding box or specific spatial and other operators. The selection of an appropriate interface for the service is not fully independent of the chosen transfer format. While the W3DS supports numerous formats (VRML, X3D), the WFS mainly supports GML as transfer format; thus only CityGML can be chosen, since the other formats do not support GML. Recapitulating, CityGML is the best fitting format for the requirements on visualisation and availability of semantic information about building elements and building installations. Since CityGML is extensible, the required material and routing information can be added to the object type definitions or included through additional object types. Furthermore, CityGML can conveniently be published through the WFS interface, which is widely accepted and implemented in available software products.
5. Inclusion of Material Properties

The example code in the appendix of the paper illustrates how inherent material properties of 3D building elements can be included in CityGML. For this purpose we introduce the InherentMaterialInfo element, which is treated like the genericAttribute defined in CityGML. It differs from the Material attribute defined in CityGML, since the latter defines the appearance of objects and not the physical properties of their inherent materials. The example demonstrates schematically how different material
layers can be defined, in this case for a door. In the same way the material properties of every layered object can be described.
6. Concepts for 3D Routing

To apply existing shortest path algorithms (e.g. the Dijkstra algorithm) to indoor routing, the network of rooms has to be converted into a graph network representing the walking paths. But since the paths people use within a building are not as well defined as, for example, streets are for cars, it is difficult to perform this conversion automatically. In particular in huge rooms, e.g. the entrance hall of an airport, an infinite number of possible walking paths exist. Every object which is a potential routing target must be represented by a node in the graph network. Furthermore, the information necessary to consider personalisation and the current situation in a route calculation cannot be attached directly to the existing building elements. For example, rooms do not represent the boundaries of the accessible area if dividing walls or other barriers are used to restrict the entry for certain persons. CityGML does not provide the possibility to define such areas of uniform accessibility. For these reasons we decided to model the graph network independently of the building elements and not to rely on an automatic conversion of the network of rooms. The defined graph network is represented by conventional nodes and directed edges. The nodes refer to the building elements or installations they represent in the graph network. The edges contain additional attributes about their accessibility and weighting information for each situation which has to be considered. If an edge passes a particular landmark, a corresponding link is stored. Possible restrictions along an edge, like for example the width of doors for logistic applications or the slope and the number of stairs for wheelchairs, are also included. Encoded in GML, the resulting graph network can also be published through the WFS interface, enabling the client to calculate the routes and verbal route instructions.
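The following minimal sketch shows how such a graph with situation-dependent edge weights and accessibility flags can be searched with the Dijkstra algorithm; node names, attributes and weight values are invented for illustration and are not taken from the INDOOR data model.

# Sketch of a shortest-path search over a directed graph whose edges carry
# per-situation weights and accessibility flags (e.g. "wheelchair").
import heapq

# edges[node] = list of (target, {"weights": {...}, "accessible": {...}})
edges = {
    "entrance": [("hall", {"weights": {"normal": 10, "fire": 40},
                           "accessible": {"wheelchair": True}})],
    "hall":     [("room_101", {"weights": {"normal": 5, "fire": 5},
                               "accessible": {"wheelchair": False}})],
    "room_101": [],
}

def shortest_path(start, goal, situation="normal", profile="pedestrian"):
    """Dijkstra search honouring the current situation and the user profile."""
    queue = [(0.0, start, [start])]
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for target, attrs in edges.get(node, []):
            if not attrs["accessible"].get(profile, True):
                continue                      # edge is blocked for this profile
            w = attrs["weights"].get(situation, float("inf"))
            heapq.heappush(queue, (cost + w, target, path + [target]))
    return float("inf"), []

print(shortest_path("entrance", "room_101", profile="pedestrian"))   # reachable
print(shortest_path("entrance", "room_101", profile="wheelchair"))   # blocked edge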
Conclusion

The paper introduces the support of satellite positioning and navigation within buildings as a new area of application for 3D geodata formats and the corresponding web service interfaces. CityGML is currently the best fitting transfer format for the introduced server requirements on visualisation
and on the availability of semantic information about building elements and building installations. Since CityGML is extensible, the required material and routing information can be added to the object type definitions or included through additional object types. For 3D route calculations a graph network defined independently of the building data has been preferred, since a personalised and situation-aware routing based only on 3D building data can hardly be realised. In our future work we will validate the concepts introduced in this paper through practical tests. This will also include investigations of the effort necessary to provide such complex 3D models of buildings for indoor navigation applications.
Bibliography

Brassart, E.; Pegard, C.; Mouaddib, M. (2000): Localization using infrared beacons. In: Robotica 18-2, March 2000, pp. 153-161.
Blechschmied, Heiko; Holweg, Daniel; Jasnoch, Uwe (2004): LoVeus - Multimedia geführte Stadtrundgänge. In: Geoinformation und Mobilität - von der Forschung zur praktischen Anwendung: Beiträge zu den Münsteraner GI-Tagen 2004. Raubal, Martin u.a. (Hrsg.); IfGIprints 22, Münster, 2004, pp. 155-165.
Eissfeller, Bernd; Teuber, Andreas; Zucker, Peter (2005): Indoor GPS: Ist der Satellitenempfang in Gebäuden möglich? In: ZfV, Zeitschrift für Vermessung, Heft 4/2005, 130. Jahrgang, ISSN 1618-8950.
Genco, Alessandro (2005): Three Step Bluetooth Positioning. In: Lecture Notes in Computer Science, Volume 3479, April 2005, pp. 52-62.
ISO/IEC 19775:2004: Extensible 3D (X3D). International Standard.
IFC 2.x.2: Industry Foundation Classes, IAI, Model Implementation Guide; Editor: T. Liebich, Version 1.7, 18 March 2004.
Kolbe, Thomas H.; Gröger, Gerhard; Plümer, Lutz (2005): CityGML - Interoperable access to 3D City Models. In: Proceedings of the Int. Symposium on Geo-Information for Disaster Management, Delft, 2005.
Mundle, H.; Sayda, F.; Löhnert, E.; Wittmann, E. (2004): Location Based Service für Bergsteiger durch Integration von mobilem Internet, Satellitennavigation und Geoinformation. In: Der Alpenraum und seine Herausforderungen an Orientierung, Navigation und Informationsaustausch. Ahorn 2004.
OGC 05-019: Web 3D Service (W3DS). OGC Discussion Paper, Version 0.3.0, Editors: Udo Quadt, Thomas H. Kolbe, 02 February 2005.
OGC 06-057: OpenGIS CityGML Implementation Specification (City Geography Markup Language), Version 0.3.0, Editors: Gerhard Gröger, Thomas H. Kolbe, Angela Czerwinski, 03 June 2006.
Okuda, Kenji; Yeh, Shun-yuan; Wu, Chon-in; Chang, Keng-hao; Chu, Hao-hua (2005): The GETA Sandals: A Footprint Location Tracking System. In: Lecture Notes in Computer Science, Volume 3479, April 2005, pp. 120-131.
Raubal, Martin; Worboys, Michael (1999): A Formal Model of the Process of Wayfinding in Built Environments. In: Lecture Notes in Computer Science, Volume 1661, Jan. 1999.
Sayda, F.; Reinhardt, W.; Wittmann, E. (2002): GI and Location Based Services for Mountaineers. In: Geoinformatics, Magazine for Geo-IT Professionals, 2002, Nr. 5.
Zehner, M.; Bannicke, K.; Bill, R. (2005): Positionierungsansätze mittels WLAN-Ausbreitungsmodellen. In: Beiträge zu den Münsteraner GI-Tagen 2005, Münster, Germany, pp. 15-24.
Appendix
<materialLayerSet>
  <materialLayer id="1">
    <material>wood</material>
    <physicalConstant>2.54</physicalConstant>
    <refractiveIndex>1.25</refractiveIndex>
    10cm
    false
  </materialLayer>
  <materialLayer id="2">
    <material>air</material>
    <physicalConstant>1.53</physicalConstant>
    5cm
    true
  </materialLayer>
  <materialLayer id="3">
    <material>wood</material>
    <physicalConstant>2.5</physicalConstant>
    <refractiveIndex>1.2</refractiveIndex>
    10cm
    false
  </materialLayer>
</materialLayerSet>
Reconstruction of 3D Model Based on Laser Scanning
LU Xingchang 1,2, LIU Xianlin 1
1 Northeast Institute of Geography and Agricultural Ecology, CAS, Changchun, China 130012
2 Jilin University, Changchun, China 130026
E-mail: [email protected]
Abstract

Three-dimensional laser scanning can be used to collect the spatial locations of points rapidly and abundantly and to obtain three-dimensional coordinates of the target surface, which provides a new technical means for the rapid creation of three-dimensional image models of objects. Obtaining 3D models of objects is a challenging problem and has been an important research topic in photogrammetry and computer vision for many years. In this paper, a three-dimensional modelling study of spatial objects is carried out using the spatial data captured with a ground-based 3D laser scanner, and a method of using laser range images for the reconstruction of 3D scene models is proposed. The approach implements a collection of algorithms for 3D reconstruction, namely image segmentation and range registration based on planar features. After acquisition of the range data, the images are segmented to extract planar features and registered to determine the initial configuration between the sensor coordinate systems of two views. Finally, triangular meshes are built to generate a three-dimensional surface model. The results of an outdoor experiment of reconstructing a statue indicate that the method is accurate and robust. The potential applications are wide, including facilities management for the construction industry and the creation of reality models to be used in general areas of virtual reality, for example virtual studios, virtualized reality for content-related applications, social telepresence, architecture and others.
Keywords: Laser scanning, three-dimensional modelling, image segmentation, image registration
Introduction

3D (three-dimensional) laser scanning is a new means of capturing spatial data that has developed rapidly in recent years; it provides automatic, rapid and reliable tools for directly reconstructing precise geometric models of object targets. Local spatial details can be obtained and three-dimensional objects reconstructed by using a 3D laser scanning system, and the ability to construct such models from real structures is thought to be promising, which opens up a diversity of additional opportunities. Realistic 3D geometric models are becoming increasingly important to many computer applications using virtual and augmented reality, particularly in architecture, design, heritage, simulators and entertainment applications. Therefore, 3D object reconstruction using laser scanners has attracted increasing attention. In general, the technique of 3D reconstruction of real objects has mainly been concerned with the reconstruction of small objects. The efficiency and accuracy of the technique have been demonstrated in obtaining a map of the ocean floor (Kamgar-Parsi et al., 1991), in generating 3D models of teeth, statues and so on (Champleboux et al., 1992), in creating textured models of indoor objects (Sequeira et al., 1998), and in extracting buildings using an air-borne system (Haala et al., 1998). The extraction of high-quality models of small objects using high-resolution range cameras based on structured light is the main application of such techniques (Soucy et al., 1996). Recently, with the development of laser techniques, there have been attempts to use this technology for the reconstruction of large objects (Beraldin et al., 1997; Zhao et al., 1999; Frueh et al., 2001) and for three-dimensional urban modelling (Z. Huijing et al., 2001). In practice, the conventional methods for modelling urban buildings are mainly implemented by aerial photogrammetry, i.e., the 3D coordinates of building roofs are directly acquired to create geometric models, while the textures of the buildings are extracted from aerial images and mapped onto the geometric models (Grau, 1997; M. Pollefeys et al., 2000). However, building geometric models by this means is time-consuming and of low accuracy; it cannot capture 3D spatial data rapidly nor create 3D building models accurately. Moreover, modelling buildings for large object reconstruction is a highly interactive process involving standard CAD tools. More sophisticated tools based on ground photogrammetry methods are combined with CAD tools (Chapman et al., 1994). Three-dimensional laser scanning, on the other hand, can be used to collect the spatial locations of points rapidly and abundantly, and to model urban buildings accurately. The building features are first extracted from the 3D laser scanning data, and
then combined with CCD image data to generate realistic 3D models (Dinesh M, 2002; LI Bijun et al., 2003). The main advantage is the high efficiency and accuracy in 3D geometric modelling. Commonly, a set of 2D (two-dimensional) ordered points is obtained by using a ground-based laser range finder (LRF) to scan an object, in which each point carries the range information of the corresponding scene; it can therefore be called a range image. To acquire the whole scene, it is necessary to scan the target from different locations and to register the range images into a common coordinate system. In this paper, urban buildings are used as the research objects to explore the technique of reconstructing 3D object models using the spatial data acquired by a laser scanning system. The approach to the problem of reconstructing 3D models of the real world is based on spatial data acquired with a terrestrial laser range finder. The principle of this approach is to use the laser range data to build the 3D scene model. Since large-scale scenes contain a massive number of planes, the points cloud data are segmented based on planes after acquisition of the scene data. Then range registration is carried out using the planar features extracted from the segmentation. With full triangular meshes built, the final 3D scene model is eventually created.
Technical line

The regular buildings discussed in this paper have the following characteristics: first, the facades of the buildings are basically vertical to the ground; second, the building surfaces are roughly smooth; third, the corners of the buildings are on the whole rectangular. The adopted scheme is as follows: the objects are first extracted from the original scanning observations according to the known data and separated from the topographical data, and the data are then denoised to get rid of the influence of measurement noise and occlusions (such as trees) and to obtain the whole information of the buildings. Next, the buildings are detected and recognized via image segmentation based on planar features, and the successive laser measurement sections are wholly matched and corrected in terms of the characteristics of the buildings. Accordingly, the feature points of the buildings and the 2D planar features are obtained. Afterwards, the original measurement data are re-sampled and recalculated according to the overall correction to get the 3D scanning coordinates of the building surfaces. The 3D object models are eventually reconstructed.
The flow chart of the technical line is shown in Figure 1.

Fig. 1. Flow chart of the technical line of 3D model reconstruction based on laser scanning: the original laser scanning measurement data are reduced to building data, matched and corrected into 2D planar data of the buildings, and registered to obtain the 3D object model of the buildings
Spatial data acquisition

The essence of acquiring 3D spatial information is to collect spatial positioning data, which can be quickly and accurately captured by using laser scanning technology. To reconstruct geometric models of the buildings, an instrument for capturing the 3D coordinates of the building surfaces is needed. The RIEGL LMS (Laser Measurement Scanner), a product of the Austrian RIEGL Company, provides a wide field of view, high accuracy and fast data acquisition for 3D reconstruction and can be used to rapidly acquire high-quality 3D data of buildings. In this research, the 3D laser range finder RIEGL LMS-Z420i (Fig. 2) has been used as the platform to acquire the 3D data. The software platform RiSCAN PRO is used to control the scanning process for the specific building entities and reflection reference points and to capture them to the greatest possible extent. As several overlapping scanning images need to be obtained, 11 reflection reference points are chosen to scan the target entities, as shown in figure 3. The white dots in the figure are the reflection reference points, and the table to the right lists the relevant information of the points. After the spatial coordinates of each sampling point on the surface have been acquired, a point set called a "points cloud" is obtained, in which every pixel carries a distance and an angle (Fig. 4). The geometric location information of the spatial entities, the launching density of the points cloud and
the image information from the external camera are eventually acquired by using the 3D laser scanner. The raw data are stored in specific project archives.
Fig. 2. RIEGL 3D laser range finder LMS-Z420i
Fig. 3. Reflection reference points and their relevant information
Fig. 4. Points cloud obtained by the 3D laser scanner
Data processing

The captured points cloud data must be preprocessed to eliminate errors and to obtain efficient data for 3D building modelling.
Building extraction

The data captured through laser scanning generally contain terrain data, building data and other surface data. The aim of building extraction is to separate the target buildings from the laser measurements and to provide the basis for the follow-up data processing. The points with height values exceeding the instrument height are extracted first. Then the building data are separated from other non-terrain data (such as trees) if the mean laser detected distance is smaller than a given threshold.
Noise filtering

The occlusions (e.g. trees, pedestrians and vehicles) between the 3D laser scanner and the measured object generate noise such as scattered points and empty holes behind the measured object after laser scanning. Data filtering removes the influence of measurement noise and occlusions (such as trees, etc.) to obtain the building data.
To distinguish points situated on buildings from those on other objects, special filtering algorithms are needed. They depend on the scanning technique used and the point density. The denoising procedure can be described as follows: object points are identified according to the echo signal intensity of the laser scanning process. If the echo signal intensity is below the given threshold, the distance signal value is invalid and the point is filtered out. This step is repeated several times with changing thresholds, finally leading to more precise values. The odd points are removed by using a median filtering algorithm, and the front occlusions are also removed by using a surface fitting algorithm. The erroneous points and the rough points contained in the original points cloud data are removed through the above approach. The export operation is performed after the separation of buildings, the removal of invalid points and the extraction of target zones are completed in RiSCAN PRO, which is attached to the laser scanner. Figure 5 is the point-cloud depth image of the target building after the removal of noise. In order to provide data for later processing, the delivered document contains at least the following information: first, the three-dimensional coordinates of each point within the target area; second, the index in the whole scanning viewpoint of each point within the target zone; third, the number of rows and columns of each viewpoint. The coordinates of each point and the number of the target areas are obtained from the .3PF (binary) file, and the index of each point in the whole viewpoint is then retrieved through the two text files.
Fig. 5. Points cloud image of the target building after filtering
Range image segmentation

Range image segmentation is to segment the original image R(i, j) into disjoint sub-regions {C_null, C_1, C_2, ..., C_n}, each of which can be fitted by a polynomial surface P(S). Here, C_null represents scattered points that do not fall onto any surface. An algorithm of quadratic surface fitting has been developed to segment range images (Besl et al., 1988). A new method is proposed here after improving the quadratic surface fitting algorithm. First of all, a covariance matrix A is built within the k x k neighbourhood of each point P in the range image:

A = \sum_{i} (v_i - m)(v_i - m)^T    (1)
Here, v_i is a neighbourhood point and m is the centre of gravity of the neighbourhood point set. Then all the neighbourhood points are traversed in turn and it is judged whether the current point P belongs to the common aggregate of the neighbourhood points, according to the eigenvalues of the matrix A and their corresponding eigenvectors. At last, some aggregates are generated and the points in the same aggregate lie in the same plane. Thus the range image has been segmented into different planes and the point-cloud data are identified and classified. The bad points are eliminated through the planar segmentation of the range image. Correct segmentation is achieved by adjusting the threshold appropriately. Detecting feature points for geometric extraction from the range image is the most important step.
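A minimal sketch of this eigenvalue-based planarity test is given below, assuming the neighbourhood points are already gathered into an array; the tolerance is illustrative only, and the full region-growing and aggregation logic of the proposed method is not shown.

# Planarity test on the k x k neighbourhood of a range-image point: the
# covariance matrix (Eq. 1) is built and its eigenvalues indicate whether the
# neighbourhood lies on a common plane.
import numpy as np

def neighbourhood_is_planar(points, rel_tol=0.01):
    """points: (k*k, 3) array of 3D neighbourhood points of a range-image pixel."""
    m = points.mean(axis=0)                       # centre of gravity
    d = points - m
    A = d.T @ d                                   # covariance matrix (Eq. 1)
    eigenvalues = np.sort(np.linalg.eigvalsh(A))  # ascending order
    # planar if the smallest eigenvalue (out-of-plane scatter) is tiny
    return eigenvalues[0] <= rel_tol * eigenvalues.sum()

def plane_normal(points):
    """Normal of the best-fit plane: eigenvector of the smallest eigenvalue."""
    d = points - points.mean(axis=0)
    _, vecs = np.linalg.eigh(d.T @ d)
    return vecs[:, 0]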
Geometric correction

The objective feature points, such as boundary points and corner points, are extracted, and from them the feature lines and surfaces are built. The geometric features gained from the filtered data need to be rectified for further application. A Hough transformation is used to connect discontinuous edge pixels and to extract linear information from the scattered points, which results in the border curve. Thus most of the major measured object data are retained, and the scattered points caused by penetration through transparent objects are deleted as well. The processed points and lines, considering the overall character of the buildings, express the planar features of the buildings much better and more accurately. Accordingly, every measured facade is wholly matched and corrected.
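The following sketch illustrates the Hough voting step on scattered 2D edge points; the parameter resolutions and the vote threshold are illustrative and are not the values used in this study.

# Minimal Hough-transform sketch for detecting straight border lines in a set
# of scattered edge points; each point votes for all (rho, theta) it lies on.
import numpy as np

def hough_lines(points, rho_res=0.05, n_theta=180, min_votes=20):
    """points: (N, 2) array of edge-point coordinates; returns (rho, theta) peaks."""
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    rhos = points[:, 0, None] * np.cos(thetas) + points[:, 1, None] * np.sin(thetas)
    rho_min, rho_max = rhos.min(), rhos.max()
    n_rho = int((rho_max - rho_min) / rho_res) + 1
    acc = np.zeros((n_rho, n_theta), dtype=int)
    rho_idx = ((rhos - rho_min) / rho_res).astype(int)
    for j in range(n_theta):                       # accumulate votes per angle
        acc[:, j] = np.bincount(rho_idx[:, j], minlength=n_rho)
    peaks = np.argwhere(acc >= min_votes)
    return [(rho_min + i * rho_res, thetas[j]) for i, j in peaks]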
3D model reconstruction

A model is an abstraction of the entity object. The geometric description of a model is the foundation of the design, analysis, simulation and study of the handled objects. Three-dimensional modelling of spatial objects offers the possibility of simulating and modelling the real world in the computer.
Original data re-sampling

3D scanning coordinates reflecting the geometric features of the building surfaces are retrieved after the original measurement data are re-sampled according to the overall correction information. That is to say, the purpose of re-sampling is mainly to acquire correct geometric information of the building facades. The algorithm is as follows: First, each scanning line, viewed as a partially smooth curve, is locally fitted by a second-order curve from top to bottom (or from left to right) to generate an approximate range image. Then the depth and normal continuity of the fitted points are tested and the edge mapping chart is created, from which the 3D boundary is extracted. Next, different edge chains are acquired via two-dimensional grid chain code tracking, and the edge inflexions are searched and identified along the edge chains; edge chains with a length less than a certain threshold are removed. Finally, the range images are divided into N sub-images on the basis of actual need, with only one overlapping edge between them, in which the filtered points are sampled. Through re-sampling the data, the computing time is reduced significantly.
Range image registration

In order to obtain the whole 3D model of a building, the range images acquired from different locations need to be registered into a common coordinate system, because the 3D coordinates of the points in a range image are relative to their own scanning space. In short, it is necessary to solve the coordinate system transformation. The plane is selected as the registration feature in this research. After the planar segmentation made before, several point sets have been obtained, each of which corresponds to a plane. By a planar fit to every point set, the normal vector is acquired. So (n, m) can be used to describe a plane (Fig. 6); here, n is the normal vector of the plane, and m is the centre of gravity of the point set falling into the plane.
Fig. 6. Plane pairs after range segmentation
As shown in figure 6, k pairs of planes corresponding to point sets are manually selected from the two groups of range images. Suppose that the normal vector and centre of gravity of plane i selected in the first points cloud are n_i and m_i, and the corresponding ones in the second points cloud are n_i' and m_i'. Since normal vectors are invariant under translation, the rotation matrix R is calculated first by using the corresponding normal vectors. This is done by minimising

\sum_{i=1}^{k} w_i \left\| n_i' - R\, n_i \right\|^2    (2)
Here, w_i is a weight between 0 and 1 set for each pair of planes, reflecting the importance of the feature in the registration. Using the method of quaternions (Ma et al., 1998), the rotation matrix R is calculated. Then the translation vector t is obtained by minimising the distances between the corresponding centres of gravity along the normal directions. The following formula

\left( \sum_{i=1}^{k} w_i\, n_i'\, n_i'^{T} \right) t = \sum_{i=1}^{k} w_i\, n_i'\, n_i'^{T} \left( m_i' - R\, m_i \right)    (3)

is derived from

\min_{t} \sum_{i=1}^{k} w_i \left( n_i'^{T} \left( R\, m_i + t - m_i' \right) \right)^2    (4)
by differentiating with respect to t. Based on the equations above (Eq. 3), the translation vector t is obtained. For range registration based on plane features it is easy to determine the corresponding relationship, because a plane has a large area and is therefore easy to recognise; at the same time it is convenient to operate on. Moreover, since each plane is obtained by fitting many points, the method is robust. In the software platform RiSCAN PRO, the scanned images are positioned and matched to the scanned data by using the 11 specific reflection targets selected before as ground control points, exploiting their high contrast. The points cloud data from the different locations are put together and registered in a common project coordinate system (PRCS).
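A compact sketch of this plane-based registration is given below; note that the rotation is computed here with an SVD (Kabsch) solution instead of the quaternion method cited above, and at least three pairs of non-parallel planes are assumed.

# Plane-based registration sketch: rotation from corresponding plane normals,
# translation from the weighted point-to-plane distances between the centres
# of gravity (normal equations of Eq. 4).
import numpy as np

def register_planes(n1, m1, n2, m2, w):
    """n1, m1: (k,3) normals/centres in view 1; n2, m2: same in view 2; w: (k,) weights."""
    # rotation: minimise sum_i w_i ||n2_i - R n1_i||^2 (Kabsch/SVD solution)
    H = (w[:, None] * n1).T @ n2
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    # translation: minimise sum_i w_i (n2_i . (R m1_i + t - m2_i))^2
    A = np.einsum("i,ij,ik->jk", w, n2, n2)              # sum w_i n2_i n2_i^T
    b = np.einsum("i,ij,ik,ik->j", w, n2, n2, m2 - m1 @ R.T)
    t = np.linalg.solve(A, b)                            # needs non-parallel normals
    return R, t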
Triangular mesh generation

Generally, the data captured by a 3D laser scanner are a dispersed points cloud, which cannot represent the surface of the object in the 3D scene well. Using the points cloud to build a polygon mesh is an approximation of the surface of the scanned object. Since the range image itself is an organised 2D point set, which contains the adjacency between the corresponding spatial points, the triangular mesh can be generated simply by judging and selecting the corresponding adjacency. Taking a neighbourhood consisting of 4 adjacent points as a basic unit, 0, 1 or 2 triangles can be created according to some judging conditions. Considering the connection directions, the ultimate number of possible mesh shapes is 7, as shown below (Fig. 7). Supposing the range image is made up of m x n points, the triangular mesh of the whole 3D scene can be generated after the (m-1) x (n-1) basic units have been processed.
Fig. 7. Mesh shapes generated from basic unit of 4 adjacent points
The generated triangular mesh approximates the 3D surfaces of the buildings better. Finally, the linear models are transformed into surface models, from which volume models are further generated by computer graphics calculations.
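A simplified sketch of this per-unit triangulation is given below; the validity mask stands in for the judging conditions, and a real implementation would also reject triangles that span large depth discontinuities.

# Mesh an organised range image: each 2 x 2 block of neighbouring samples
# yields 0, 1 or 2 triangles depending on which of its points are valid.
import numpy as np

def triangulate_range_image(points, valid):
    """points: (m, n, 3) array of 3D samples; valid: (m, n) boolean mask."""
    triangles = []
    m, n = valid.shape
    for i in range(m - 1):
        for j in range(n - 1):
            quad = [(i, j), (i, j + 1), (i + 1, j), (i + 1, j + 1)]
            good = [p for p in quad if valid[p]]
            if len(good) == 4:
                # two triangles along one diagonal of the basic unit
                triangles.append((quad[0], quad[1], quad[2]))
                triangles.append((quad[1], quad[3], quad[2]))
            elif len(good) == 3:
                triangles.append(tuple(good))      # a single triangle
            # fewer than 3 valid points: no triangle for this unit
    return [tuple(points[p] for p in tri) for tri in triangles]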
Experiment and results

An experiment was conducted to reconstruct a statue on the campus of Capital Normal University. A series of laser range views was measured, then segmented and sequentially aligned to a common coordinate system through pair-wise registration of neighbouring views. Here, a set of experimental results in reconstructing a 3D model using laser range views measured on the campus of Capital Normal University is presented. The laser range images used in the experiment were measured over [-180°, +180°] in horizontal rotation angle and [-30°, +50°] in vertical rotation angle, with a resolution of 50 samples per degree both horizontally and vertically. Since some major statue surfaces are about 50 m away from the sensor's location, the spatial resolution of the laser range points on the major statue surfaces is about 1 point per square centimetre. After the plane segmentation of the range image, laser range points which do not belong to any planar face are triangulated. Figure 8 shows the range image after planar segmentation. It can be seen from figure 8 that the statue surface has been segmented into several planes, each of which is displayed in a different colour. Neighbouring view pairs are registered, and the views are integrated into the coordinate system of viewpoint 1 in a sequential mode, as shown in figure 9 and figure 10. The result of the registration shows that the data of two neighbouring views reinforce each other, and the overlaps are compactly stitched.
Fig. 8. Range image after planar segmentation
Fig. 9. Range image of two views before registration
Fig. 10. Range image after registration
Conclusions

In this paper, a technical method and process for 3D building model reconstruction using laser range images is discussed and developed through the extraction of target features. The RIEGL LMS-Z420i 3D laser range finder with the RiSCAN PRO software platform is the major instrument used to capture the laser range images. In the 3D data processing, a new method based on planar feature extraction is proposed to segment and register the range images, and an improved method for building triangular meshes is introduced, which is beneficial for the reconstruction of 3D models. Through an outdoor experiment of reconstructing a statue using views of laser images, the laser range images are registered between the coordinate systems of two views, and the 3D model of the statue is promptly reconstructed as well. It is demonstrated that the reconstruction of a 3D building using laser range images is feasible and rapid.
On the other hand, some major problems have been encountered at the same time, which will have to be solved in further studies. A stronger surface fitting algorithm is needed for segmentation, registration and 3D model creation. Furthermore, the acquisition of building details and the identification of measurement errors need to be explored in depth. Future research will focus on the efficiency of the surface fitting algorithm and on the extraction precision of irregular building boundaries and facades.
Acknowledgements

The authors want to acknowledge the various collaborators who have participated in the realization of the project discussed in this paper. We are indebted to Prof. ZHANG Aiwu for her logistical support throughout the work we did at the Key Lab of Three Dimensional Information Acquisition & Application. Many thanks to our colleagues in the lab group. Special thanks to Dr. ZHANG Xuexia, Dr. WANG Maojun and the anonymous reviewers for hints and corrections. We would like to thank Dr. SID Bo and Dr. DUAN Fuzhou for the provision of the data processing. Thanks also to RIEGL Co. for its technical guidance and direction.
References

[1] Beraldin, J.A., Cournoyer, L., Rioux, M., Blais, F., El-Hakim, S.F., Godin, G., 1997. Object model creation from multiple range images: acquisition, calibration, model building and verification. Proc. Int. Conf. on Recent Advances in 3-D Digital Imaging and Modeling, Ottawa, Canada, pp. 326-333.
[2] B. Kamgar-Parsi, J.L. Jones, and A. Rosenfeld, 1989. Registration of multiple overlapping range images without distinctive features. IEEE Trans. Pattern Anal. & Mach. Intell., Vol. 11, No. 11, pp. 1158-1167.
[3] Bornaz, L., Rinaudo, F., 2004. Terrestrial laser scanner data processing. XXth ISPRS Congress, Istanbul.
[4] Briese, Ch., Pfeiffer, N., Haring, A., 2003. Laserscanning and Photogrammetry for Modelling of the Statue Marc Anton. Proceedings Laserscanning - Datenerfassung und anwendungsorientierte Modellierung, IPF, TU Wien.
[5] C. Frueh, A. Zakhor, 2001. 3D model generation of cities using aerial photographs and ground level laser scans. Computer Vision and Pattern Recognition, Hawaii, USA, pp. II-31-8, Vol. 2.
[6] Chapman, D.P., Deacon, A.T., Hamid, A., 1994. Hazmap: a remote digital measurement system for work in hazardous environments. Photogrammetric Record 14(83), pp. 747-758.
[7] Dinesh, M., Ryosuke, S., 2002. Auto-extraction of Urban Features from Vehicle-Borne Laser Data. Symposium on Geospatial Theory, Processing and Applications, Ottawa.
[8] G.G. Slabaugh, W.B. Culbertson, T. Malzbender, M.R. Stevens, and R.W. Schaefer, 2004. Methods for volumetric reconstruction of visual scenes. Int. J. Computer Vision, No. 57, pp. 179-199.
[9] Grau, O., 1997. A scene analysis system for the generation of 3-D models. Proc. Int. Conf. on Recent Advances in 3-D Digital Imaging and Modeling, Ottawa, Canada, pp. 221-228.
[10] Huijing ZHAO, R. SHIBASAKI, 2000. Reconstruction of Textured Urban 3D Model by Fusing Ground-Based Laser Range and CCD Images. IEICE Trans. Inf. & Syst., Vol. E83-D, No. 7, pp. 1429-1440.
[11] H. Zhao, R. Shibasaki, 1999. A system for reconstructing urban 3D objects using ground-based range and CCD sensors. In: Urban Multi-Media/3D Mapping Workshop, Institute of Industrial Science (IIS), The University of Tokyo.
[12] I. Stamos, P.E. Allen, 2000. 3-D model construction using range and image data. Computer Vision and Pattern Recognition, Hilton Head Island, pp. 531-536.
[13] LI Bijun, FANG Zhixiang et al., 2003. Extraction of Building's Feature from Laser Scanning Data. Geomatics and Information Science of Wuhan University, 28(1): pp. 65-70.
[14] M. Pollefeys, R. Koch, M. Vergauwen et al., 2000. Automated reconstruction of 3D scenes from sequences of images. ISPRS Journal of Photogrammetry & Remote Sensing, 55: 251-267.
[15] N. Haala, C. Brenner, and C. Statter, 1998. An integrated system for urban model generation. IAPRS, Vol. XXXII, Part 2, pp. 96-103.
[16] Riegl, J., Studnicka, N., Ullrich, A., 2003. Merging and processing of laser scan data and high-resolution digital images acquired with a hybrid 3D laser sensor. Proceedings of CIPA XIX International Symposium, Commission V, WG5, Antalya.
[17] RIEGL Laser Measurement Systems GmbH, 2005. Technical data at
www.riegl.co.nl.
[18] Sequeira, V., 1996. Active range sensing for three-dimensional environment reconstruction. PhD Thesis, Dept. of Electrical and Computer Engineering, IST - Technical University of Lisbon, Portugal.
[19] Slob, S., and H.R.G.K. Hack, 2004. 3D Terrestrial Laser Scanning as a New Field Measurement and Monitoring Technique. In: Hack, H.R.G.K., Azzam, R., and R. Charlier (eds.), Engineering Geology for Infrastructure Planning in Europe: a European Perspective. Springer, Berlin (Lecture Notes in Earth Sciences, No. 104), pp. 179-190.
[20] Soucy, M., Godin, G., Baribeau, R., Blais, F., Rioux, M., 1996. Sensors and algorithms for the construction of digital 3D-colour models of real objects.
Proc. IEEE Int. Conf. on Image Processing: Special Session on Range Image Analysis, ICIP'96, Lausanne, Switzerland, pp. 409-412.
[21] Stamos, I., Leordeanu, M., 2003. Automated feature-based range registration of urban scenes of large scale. Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Madison, No. 2, pp. 18-20.
[22] V. Sequeira, K. Ng, E. Wolfart, J.G.M. Goncalves, D. Hogg, 1999. Automated reconstruction of 3D models from real environments. ISPRS Journal of Photogrammetry & Remote Sensing, No. 54, pp. 1-22.
[23] Wagner T. Correa, Manuel M. Oliveira, Claudio T. Silva, Jianning Wang, 2002. Modeling and Rendering of Real Environments. RITA, Volume 9, No. 2.
[24] Y. Chen and G. Medioni, 1992. Object modeling by registration of multiple range images. Image and Vision Computing, Vol. 10, No. 3, pp. 145-155.
[25] Zhang Aiwu, Sun Weidong, Li Fengting, 2005. Basic Processing Methods of 3D Geometrical Signals from Large-Scale Scenes. Journal of Computer-Aided Design & Computer Graphics, Vol. 17, No. 7, pp. 1486-1491.
[26] Zhang Aiwu, Sun Weidong, et al., 2004. Basic data processing for creating textured 3D model of urban scenes from laser scans and CCD images. Proceedings of the Asian Conference on Computer Vision, Seoul, Korea, pp. 19-24.
[27] Z. Huijing and S. Ryosuke, 2001. Reconstructing Textured CAD Model of Urban Environment Using Vehicle-Borne Laser Range Scanners and Line Cameras. Proc. 2nd Int'l Workshop on Computer Vision Systems, LNCS 2095, pp. 284-297.
Automatic Generation of Pseudo Continuous LoDs for 3D Polyhedral Building Model
Jiann-Yeou Rau 1, Liang-Chien Chen 1, Fuan Tsai 1, Kuo-Hsin Hsiao 2 & Wei-Chen Hsu 2
1 Center for Space and Remote Sensing Research, National Central University, Jhong-Li, Taoyuan, TAIWAN {jyrau, lcchen, ftsai}@csrsr.ncu.edu.tw
2 Researcher, Energy and Environment Lab., Industrial Technology Research Institute, Chu-Tung, Hsin Chu, Taiwan. {HKS, ianhsu}@itri.org.tw
Abstract

This paper proposes an algorithm for the automatic generation of Pseudo-Continuous Levels-of-Detail (PCLoDs) for 3D polyhedral building models. In computer graphics, Continuous LoDs (CLoD) for terrain geometry or general objects mean that the number of triangles changes by one or two triangles between two consecutive LoDs. Since the geometrical structure of sparsely distributed 3D building models is inherently different from that of continuous terrain, the creation of CLoD for 3D building models is impractical. In this study, a group of connected/contacting polyhedrons with vertical walls is considered as "one building", so that the generalization can be applied to each building separately. The most detailed polyhedral building model was created by the SMS algorithm using three-dimensional building structure lines measured from aerial photos. In a 3D visualization system, a maximum distinguishable "feature resolution" can be estimated depending on the viewer's distance. The feature resolution is used to detect small 3D features in "one building". Two polyhedrons are merged if the height difference between them is smaller than the feature resolution. A wall is eliminated if its length is smaller than the feature resolution. In order to maintain the building's principal structure, a piping process is necessary after wall collapsing. In an interactive 3D browsing system the viewing distance varies continuously, and the corresponding PCLoD building model can thus be generated immediately. Experimental results demonstrate that the number of triangles
was reduced as a function of the logarithm of the feature resolution. Additionally, the building's main structure can be preserved. Some case studies are presented to illustrate the capability and feasibility of the proposed method for both regular and irregular types of buildings.
1. Introduction
The process of cyber city modeling is a generalization from the complex 3D world to digitalized geometric data in the computer. Important spatial features such as roads, buildings, bridges, rivers, lakes, trees, etc. are digitized as two- or three-dimensional objects that compose a digital city. Among them, the 3D building models are the most significant in a cyber city. From the application point of view, a more detailed building model provides more geometrical information for spatial analysis. However, since the computer has a limited capacity of resources for computation, storage and memory, the geometry of the building model has to be generalized to reduce its complexity, so that the efficiency of rendering and computation is increased. That means different Levels-of-Detail (LoDs) have to be generated for real-time visualization applications as a compromise between detailed structures and browsing efficiency. In computer graphics, the efficiency of 3D browsing depends highly on the number of triangles and textures to be rendered. Many algorithms for terrain simplification have been discussed in the field of computer vision. For example, Hoppe [1] and Garland & Heckbert [2] introduced an edge collapse transformation to simplify surface geometry, resulting in continuous LoDs that can be applied for progressive meshing applications. For 3D building model generalization, Sester [3] proposed a least squares adjustment method to simplify building ground plans. Kada [4] adopted a similar concept and applied it to 3D polyhedral building models without addressing the generation of LoDs. Based on scale-space theory, Mayer [5] suggested using a sequence of morphological operations to generalize 3D building models; however, that method is suitable for CAD-type and orthogonal building models only. In this paper, we adopt the term "Pseudo-Continuous LoDs" for the generalization of 3D polyhedral building models. A group of connected/contacting polyhedrons is considered as "one building", so that the generalization can be applied to each building model. The idea of PCLoD comes from the digital aerial camera. The computer screen corresponds to the camera's focal plane with a fixed CCD sensor size, frame area and focal length. When the computer renders and simulates a virtual
landscape, the process is similar to photographic imaging. The building's detailed structure is easier to recognize when the viewer gets close to the target, and the same holds for computer rendering on the screen. On the contrary, when the viewer's distance to the target becomes longer, the smaller building structures become indistinguishable, and we can thus ignore them during computer rendering. In this study, we focus on the generalization of 3D polyhedral building models. The most detailed "3D polyhedral building model" was generated semi-automatically based on the SPLIT-MERGE-SHAPE (SMS) algorithm [6]. The SMS algorithm utilizes 3D building structure lines measured from aerial stereo-photos. The 3D structure lines can be derived from semi-automatic stereo-measurement [8] or extracted from high-resolution satellite images and/or aerial imagery [9]. Buildings with gable roofs, flat roofs, regular or irregular ground plans can be described. In a 3D visualization system, a maximum distinguishable "feature resolution" can be estimated depending on the viewer's distance. The feature resolution is used to detect small 3D features in "one building". For example, if the height difference between two connected polyhedrons is smaller than the feature resolution, they are merged into one polyhedron. If a roof edge has a length smaller than the feature resolution, its corresponding wall is eliminated by wall collapsing, followed by a piping process along the building's principal structure. In an interactive 3D browsing system the viewing distance varies continuously, and the corresponding PCLoD building model can thus be generated immediately. In this paper, some case studies are presented to illustrate the capability and feasibility of the proposed method.
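As an illustration (not the authors' formula), the pinhole-camera analogy described above can be turned into a simple distance-dependent estimate of the feature resolution; the virtual CCD pixel size and focal length below are arbitrary example values.

# Hypothetical sketch: smallest distinguishable object size at a given viewing
# distance, assuming a pinhole camera with the stated virtual pixel size and
# focal length (illustrative values only).
def feature_resolution(distance_m, pixel_size_mm=0.009, focal_length_mm=50.0):
    """Smallest object size (in metres) that still covers about one screen pixel."""
    return distance_m * pixel_size_mm / focal_length_mm

# e.g. at 5 km viewing distance one pixel corresponds to roughly 0.9 m
print(feature_resolution(5000.0))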
2. Methodology

The generalization of a building model is divided into two parts, i.e. the treatment of small roof height variations and the elimination of insignificant walls. Fig. 1 illustrates the flowchart of the proposed generalization method. It contains two major parts, i.e. polyhedron merging and wall collapse, which are in charge of the above two simplification steps respectively. In the following sections, the proposed method for generating PCLoD building models is described.
Fig. 1. Flowchart of PCLoD generation: starting from the 3D polyhedral building models, topology construction, principal structure detection for a given feature resolution (R) and comparison of height differences with the neighbourhood drive the polyhedron merging; detection of insignificant walls, wall collapse and piping along the principal structure then yield the PCLoD 3D building models
2.3 Wall Collapse

The purpose of the previous polyhedron merging is mainly to treat the generalization of the roof-top structure. The result still keeps detailed building ground plans, such as intrusions or pillars located at building corners. In 1996, Hoppe [1] introduced an edge collapse transformation that eliminates two neighbouring triangles by merging two vertices into one. This operation cannot be applied directly to simplify a polyhedral building model. For example, Fig. 3 (A) illustrates three connected vertical walls, each of which is composed of two triangles. If the roof-top edge of the middle wall were eliminated by merging its two corners into one, a sloped wall would be introduced, as shown in Fig. 3 (B). In this paper, we adopt the same concept of edge collapse but eliminate the whole wall rather than only one edge. The same example is shown in Fig. 3 (C)-(D). The purpose of wall collapsing is thus to simplify the geometric building structure by detecting and eliminating those insignificant features.
Fig. 3. Wall collapse operation for 3D building model.
Before wall collapsing, we need to detect the insignificant geometric structures by searching for all short walls in one polyhedron. The shortest walls are eliminated one by one up to a threshold length, which is the same as in the principal structure analysis. However, in order to maintain the building's main structure, an additional co-linear processing is necessary. During wall collapsing, independent walls and shared walls are treated separately.
• For a sequence of shared walls related to the same neighbouring polyhedron, the corresponding wall of that neighbour has to be eliminated at the same time. A fixed point is determined first as the centre location of the insignificant wall, denoted by the symbol "o" in Fig. 3 (D). The end point of the previous wall and the start point of the following wall are both moved to the fixed point. In case the previous wall or the following wall is related to a different neighbour, the fixed point is determined by the junction point of the two walls corresponding to different neighbours. This avoids a gap between two neighbouring polyhedrons that could arise when using the centre location as the fixed point.
• For independent walls, we have to maintain the principal structure of the building, so the vertices of a principal structure wall cannot be moved during wall collapsing. That means the fixed point is set at the junction of the insignificant wall and the principal wall. Since the independent walls construct the building boundary, the visual effect is better and the number of walls can be reduced further by applying a co-linear process along the principal structure. We adopt a piping technique to accomplish this procedure: a wall belonging to the principal structure of the building is chosen as the pipe axis, and the radius of the pipe is the same as the feature resolution. Any wall contained in the pipe is projected onto the pipe axis, except for wall vertices that are connected to a shared wall, to avoid the gap effect. A simplified sketch of the collapse step is given after this list.
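The sketch below collapses ground-plan edges shorter than the feature resolution by merging their endpoints at the edge midpoint; shortest-first ordering, shared-wall handling and the piping process are omitted for brevity.

# Simplified wall-collapse sketch on a 2D building footprint: edges shorter
# than the feature resolution are collapsed to their midpoint.
def collapse_short_walls(footprint, resolution):
    """footprint: list of (x, y) corner points of a closed ground plan."""
    pts = list(footprint)
    changed = True
    while changed and len(pts) > 3:
        changed = False
        for i in range(len(pts)):
            a, b = pts[i], pts[(i + 1) % len(pts)]
            length = ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5
            if length < resolution:
                mid = ((a[0] + b[0]) / 2.0, (a[1] + b[1]) / 2.0)
                pts[i] = mid                       # move both endpoints ...
                del pts[(i + 1) % len(pts)]        # ... and drop the duplicate
                changed = True
                break
    return pts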
3. Case Study

For the efficiency of computer graphics rendering, the total number of triangles is the most important quantity. It becomes crucial when the distance between the viewer and the target gets longer, because the number of buildings within the view frustum then becomes larger. In this section we estimate the reduction rate as the feature resolution (R) becomes bigger, which corresponds to moving the viewer farther from the target. Fig. 4 illustrates an example of a complex building model used in the experiment. Fig. 4(A) is the original 3D polyhedral building model. We change the feature resolution from 2 meters to 35 meters; the generalization results are shown in Fig. 4(B)-(I), respectively. Fig. 5 plots the total number of triangles (y-axis) versus the feature resolution (x-axis). The total number of triangles includes the triangles of the roofs and the walls. In Fig. 5, the numbers next to the diamond symbols denote the total number of triangles, which is reduced from 756 to 22 from the finest to the coarsest feature resolution. A logarithmic function (Ln) was chosen to fit the reduction rate. The fit has a correlation coefficient of 0.98 and is plotted in Fig. 5 with a red dashed line for reference. The high correlation coefficient provides information about the triangle reduction rate of the proposed generalization algorithm. To test the feasibility of the proposed algorithm, Fig. 6 demonstrates another example of automatic generalization for an irregularly shaped building. Fig. 6 (A) is the ground plan of the original building model; its corresponding 3D view is shown in Fig. 6 (B). Fig. 6 (C) and (D) are the final generalization results using a feature resolution of 10 meters. The result demonstrates that the principal structure of the building is preserved, which reduces the total number of triangles from 719 to
Fig. 4. Generalization results for a complex building model: (A) original; (B) R=2m; (C) R=4m; (D) R=6m; (E) R=8m; (F) R=10m; (G) R=15m; (H) R=20m; (I) R=35m
Fig. 5. Total number of triangles (y) versus feature resolution R in metres (x): the counts decrease from 756 at the finest feature resolution, through intermediate values of 504, 376, 262, 232, 207, 164 and 124, down to 22 at R=35 m; the fitted trend is y = -306.84 Ln(x) + 730.57 with R^2 = 0.9828
4. Conclusions

This paper presents an automatic generalization approach for 3D polyhedral building models. Since a continuous generalization is not applicable to 3D building models, we propose to generate Pseudo-Continuous LoDs by changing the feature resolution. The feature resolution can be estimated from the image scale and the virtual CCD size. The number of triangles of a complex building is reduced significantly as a function of the logarithm of the feature resolution. In the meantime, the principal structure of the building is preserved to avoid visual artefacts. The proposed algorithm can also be applied to the generalization of irregularly shaped buildings. The experimental results demonstrate that the proposed algorithm is effective in reducing the number of triangles while maintaining the principal structure of a building. A reduction rate of more than 10 times can be achieved on average. The result is applicable to 3D real-time visualization applications of a cyber city when the viewer's distance is taken into account. For future research, combining aggregation of adjacent buildings into the generalization is necessary to further reduce the amount of geometry data. For photo-realistic visualization applications, the automatic generation of multi-resolution facade textures from the corresponding level of the building models is necessary. Then the proposed generalization algorithm would be valuable for diverse 3D applications.
Acknowledgment

This paper is partially supported by the National Science Council under project numbers NSC 89-2211-E-008-086 and NSC 93-2211-E-008-026, and by NCU-ITRI950304.
References

Hoppe, H., 1996, "Progressive meshes", Proceedings of ACM SIGGRAPH 96, pp. 325-334.
Garland, M., Heckbert, P., 1997, "Surface simplification using quadric error metrics", Proceedings of ACM SIGGRAPH 97, pp. 206-216.
Sester, M., 2000, "Generalization based on least squares adjustment", International Archives of Photogrammetry and Remote Sensing, Amsterdam, Netherlands, Vol. XXXIII, Part B4, pp. 931-938.
Kada, M., 2002, "Automatic generalisation of 3D building models", International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Volume XXXIV, Part 4, pp. 243-248.
Mayer, H., 2005, "Scale-spaces for generalization of 3D buildings", International Journal of Geographical Information Science, Vol. 19, No. 8-9, Sep.-Oct. 2005, pp. 975-997.
Rau, J. Y., and Chen, L. C., 2003, "Robust reconstruction of building models from three-dimensional line segments", Photogrammetric Engineering and Remote Sensing, Vol. 69, No. 2, pp. 181-188.
Rau, J. Y., Chen, L. C., and Wang, G. H., 2004, "An Interactive Scheme for Building Modeling Using the Split-Merge-Shape Algorithm", International Archives of Photogrammetry and Remote Sensing, Vol. 35, No. B3, pp. 584-589.
Chen, L. C., Teo, T. A., Shao, Y. C., Lai, Y. C., and Rau, J. Y., 2004, "Fusion of LIDAR Data and Optical Imagery for Building Modeling", International Archives of Photogrammetry and Remote Sensing, Vol. 35, No. B4, pp. 732-737.
Reconstruction of Complex Buildings using LIDAR and 2D Maps
Tee-Ann Teo 1, Jiann-Yeou Rau 1, Liang-Chien Chen 1, Jin-King Liu 2, and Wei-Chen Hsu 2
1 Center for Space and Remote Sensing Research, National Central University, Taiwan {ann, jyrau, lcchen}@csrsr.ncu.edu.tw
2 Energy and Resources Laboratories, Industrial Technology Research Institute, Taiwan {jkliu, ianhsu}@itri.org.tw
Abstract

The extraction of building models from remotely sensed data is an important task in geographic information systems. This investigation describes an approach that integrates LIDAR data and 2D building boundaries for building reconstruction. The proposed scheme comprises three major parts: (1) data pre-processing, (2) building primitive extraction, and (3) shaping. In the data pre-processing, the LIDAR point clouds are structured by Triangulated Irregular Networks (TIN) and interpolated into a Digital Surface Model (DSM). The building boundaries are traced to form closed polygons. In the building primitive extraction, we apply a Canny edge detector to the DSM to determine the step edges. The ridge lines are extracted from the raw point clouds by using a TIN-based region growing technique, in which coplanarity and adjacency are considered. Finally, the building polygon is divided into several building primitives by the extracted structure lines. In the building reconstruction, we shape the roof of each building primitive by using the LIDAR point clouds. The proposed method has been tested with LIDAR data acquired by a Leica ALS50 system and a 1/1000 topographic map. Experimental results indicate that the proposed scheme reaches high reliability, and the shaping error of the reconstructed models is better than 0.20 meter.
1. Introduction

Building modeling in a cyber space is an essential task in applications of three-dimensional Geographic Information Systems (GIS). The extracted building models are useful for urban planning [1], urban modeling [2], disaster management [3], as well as other applications. Traditionally, the reconstruction of building models is performed by using aerial photography, a procedure that requires a vast amount of manpower, resources, cost and time. As an emerging technology, the airborne LIDAR (LIght Detection And Ranging) system provides a promising alternative. Its high precision in laser ranging and scanning orientation makes decimeter accuracy for the ground surface possible. The three-dimensional point clouds acquired by an airborne LIDAR system provide abundant 3D information, while large scale vector maps preserve accurate building boundaries. Thus, we propose here a scheme that integrates LIDAR data and building boundaries for building reconstruction. Several investigators have reported on the generation of building models from LIDAR and vector maps. The reconstruction strategies can be classified into two categories, i.e., model-driven and data-driven. Model-driven reconstruction is a top-down strategy which starts with hypotheses of synthetic building models and then verifies the correctness of the models with the LIDAR point clouds. In the model-driven strategy, the building boundaries may be split into several primitives by some rules. A number of 3D parametric primitives are generated from the 2D primitives, and the best fitting 3D parametric primitives are then selected using the LIDAR point clouds. The building model is obtained by merging all 3D building primitives [4, 5, 6]. However, this method is restricted to the available types of 3D parametric primitives. Data-driven reconstruction is a bottom-up strategy which extracts plane and line features in the beginning and then groups them into a building model through a hypothesis process. Overby et al. [7] present a data-driven approach based on the 3D Hough transform. Schwalbe et al. [8] extract the plane features using an orthogonal point cloud projection; the ridge line is obtained by plane intersection. However, these results are limited to gable roofs, and step edges or height jump lines are not considered in that investigation. The crucial issue of building modeling from LIDAR point clouds is to extract the facet features. The facet features can be obtained by using grid data with an image-based region technique [9]. The interpolation process is required for resampling the data into a regular grid, which is accompanied by interpolation errors. Elaksher and Bethel [10] transform the point clouds into a parameter space and use a clustering technique to extract the facet
[Figure: workflow of the proposed scheme - data pre-processing (building polygon tracing, LIDAR DSM generation, LIDAR TIN construction, point-in-polygon examination), building primitive extraction (step edge extraction, ridge line extraction) and building shaping (building primitive generation, roof determination)]
3.1. Step Edge Extraction

The step edge is a kind of building structure line between two facets, which describes the height jump between roofs. We detect the step edges from the rasterized LIDAR DSM. First, the edge features within a building polygon are obtained by applying a Canny operator. The Canny edge detection algorithm is known as an optimal edge detector; a more detailed description of the Canny operator can be found in [12]. Sometimes the edges of the building boundaries are also extracted, so we set a distance threshold to remove the edge features that overlap or lie close to the building boundaries. After extracting the edge features, we perform straight line tracking by using the strip algorithm [16]. The idea of the strip algorithm is to extract straight lines from adjacent pixels within a pipeline width. Once the straight lines are extracted, a length threshold is applied to remove the short lines caused by noise. Then we include the building direction derived from the building boundaries in the straight line regularization: if the direction of a straight line is close to the building direction, the straight line is adjusted to be parallel to the building direction. Finally, we obtain the step edges. The steps of step edge extraction are illustrated in Fig. 4.
Fig. 4. Illustration of step edge extraction. (a) Digital Surface Model, (b) edges detected by the Canny operator, (c) extracted step edges
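The step-edge stage can be sketched in a few lines. The following Python fragment is only an illustration of the idea, not the authors' implementation: it assumes the DSM is available as a NumPy array, uses OpenCV's Canny detector and probabilistic Hough transform in place of the strip algorithm of [16], and all thresholds (boundary distance, minimum length, angle tolerance) are hypothetical values.

```python
import numpy as np
import cv2  # OpenCV, assumed available for Canny and Hough line extraction

def extract_step_edges(dsm, building_mask, building_dir_deg,
                       boundary_dist=2.0, min_len=10, angle_tol=10.0):
    """Sketch of step-edge extraction inside one building polygon.

    dsm              : 2D float array (rasterized LIDAR DSM)
    building_mask    : 2D bool array, True inside the building polygon
    building_dir_deg : dominant building direction from the 2D map (degrees)
    boundary_dist    : suppression distance from the boundary (pixels)
    """
    # 1. Detect edges on the DSM, restricted to the building polygon.
    norm = cv2.normalize(dsm, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    edges = cv2.Canny(norm, 50, 150)
    edges[~building_mask] = 0

    # 2. Suppress edge pixels that lie too close to the building boundary.
    inside = cv2.distanceTransform(building_mask.astype(np.uint8), cv2.DIST_L2, 3)
    edges[inside < boundary_dist] = 0

    # 3. Trace straight segments (probabilistic Hough used here instead of the
    #    strip algorithm of the paper) and drop short, noisy segments.
    segs = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=20,
                           minLineLength=min_len, maxLineGap=3)
    step_edges = []
    if segs is None:
        return step_edges
    for x1, y1, x2, y2 in segs[:, 0, :]:
        ang = np.degrees(np.arctan2(y2 - y1, x2 - x1)) % 180.0
        # 4. Regularize: snap lines nearly parallel (or perpendicular) to the
        #    building direction onto that direction.
        for target in (building_dir_deg % 180.0, (building_dir_deg + 90) % 180.0):
            if min(abs(ang - target), 180 - abs(ang - target)) < angle_tol:
                ang = target
                break
        step_edges.append(((x1, y1), (x2, y2), ang))
    return step_edges
```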
3.2. Ridge Line Extraction
The ridge line is a building structure line formed by the intersection of two facets. In order to obtain the intersection line, facet extraction is an important process in this step. A TIN-based region growing procedure is employed for facet extraction. The coplanarity and adjacency between triangles are considered in the TIN-based region growing. The coplanarity criterion is the distance of a triangle's center to the plane: when triangles meet the coplanarity criterion, they are merged into a new facet. The process starts by selecting a seed triangle and determining the initial plane parameters. If the distance of a neighbouring triangle to the initial plane is smaller than the threshold, the two triangles are combined.
The reference plane parameters are recalculated using all of the triangles that belong to the region. The seed region grows as long as the distance remains under the threshold. A new seed triangle is chosen when the region stops growing, and the region-growing step stops when all triangles have been tested. Due to errors in the LIDAR data, the detected regions may contain fragmentary triangles; small regions are therefore merged with the neighbouring region that has the closest normal vector. Once the planar segments are extracted, the ridge line is obtained by the intersection of two adjacent planar facets. Fig. 5 shows an example of ridge line extraction.
Fig. 5. Illustration of ridge line extraction. (a) LIDAR TIN-mesh, (b) extracted facets, (c) extracted ridge lines
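A compact sketch of the TIN-based region growing described above is given below. The TIN is assumed to be supplied as plain arrays (triangle vertex indices, centroids and edge-adjacency lists), and the coplanarity threshold is an illustrative value rather than the one used in the paper.

```python
import numpy as np
from collections import deque

def fit_plane(points):
    """Least-squares plane through 3D points; returns unit normal n and offset d (n.x = d)."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    n = vt[-1]
    return n, n @ centroid

def grow_regions(centroids, vertices, triangles, neighbours, dist_thresh=0.15):
    """centroids[i]  : centroid of triangle i
       vertices, triangles : TIN geometry (triangles indexes into vertices)
       neighbours[i] : indices of triangles sharing an edge with triangle i
       Returns one region label per triangle."""
    n_tri = len(triangles)
    labels = np.full(n_tri, -1, dtype=int)
    region = 0
    for seed in range(n_tri):
        if labels[seed] != -1:
            continue
        labels[seed] = region
        members = [seed]
        n, d = fit_plane(vertices[triangles[seed]])    # initial plane from the seed
        queue = deque([seed])
        while queue:
            t = queue.popleft()
            for nb in neighbours[t]:
                if labels[nb] != -1:
                    continue
                # coplanarity test: distance of the neighbour's centroid to the plane
                if abs(n @ centroids[nb] - d) < dist_thresh:
                    labels[nb] = region
                    members.append(nb)
                    queue.append(nb)
                    # refit the reference plane with all triangles of the region
                    pts = vertices[np.unique(triangles[members].ravel())]
                    n, d = fit_plane(pts)
        region += 1
    return labels
```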
3.3. Building Primitive Generation
After extracting the building structure lines, we use them to split the building polygon into several building primitives. Each building primitive represents a small element of the building; in other words, a complex building is divided into many simple building primitives, and the complex building is reconstructed by combining all the building primitives together. Splitting and merging are the two major steps in primitive generation [13]. First, we extend all the structure lines to the building boundaries and calculate the intersection points among the structure lines; the building polygon is thereby split into many small polygons. We then merge polygons by examining the shared lines between them. Note that the height information is ignored in this step. Fig. 6 illustrates the building primitive generation.
Fig. 6. Illustration of building primitive generation. (a) building structure lines and boundaries, (b) splitting by structure lines, (c) results of building primitive generation
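The split step can be illustrated with standard 2D geometry tools. The sketch below uses the Shapely library (an assumption; the paper does not name any software): structure lines are extended across the building outline, the outline and the lines are noded together, and the enclosed faces are rebuilt by polygonization. The merge step and the extension factor are simplified.

```python
from shapely.geometry import Polygon, LineString
from shapely.ops import unary_union, polygonize
from shapely import affinity

def split_building_polygon(boundary_coords, structure_lines, extend_factor=10.0):
    """boundary_coords : [(x, y), ...] of the 2D building polygon
       structure_lines : list of ((x1, y1), (x2, y2)) step edges / ridge lines
       Returns the building primitives as Shapely polygons (heights ignored,
       as in the paper's split step)."""
    building = Polygon(boundary_coords)

    # Extend every structure line well beyond the building so it cuts the outline.
    extended = [affinity.scale(LineString(seg), xfact=extend_factor,
                               yfact=extend_factor, origin='center')
                for seg in structure_lines]

    # Node the building outline with the extended lines and rebuild the faces.
    linework = unary_union([building.boundary] + extended)
    pieces = [p for p in polygonize(linework)
              if p.representative_point().within(building)]
    return pieces

# Usage sketch on a hypothetical L-shaped building split by one step edge:
outline = [(0, 0), (20, 0), (20, 10), (10, 10), (10, 20), (0, 20)]
primitives = split_building_polygon(outline, [((10, 0), (10, 10))])
```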
4. Building Shaping
The objective of building shaping is to shape the roof-top of each building primitive. The LIDAR points located within each building primitive are used to perform the building shaping. The TIN-based region growing procedure described above is employed at this stage. Once the planar segments are extracted, we use least squares regression to determine the plane parameters of each planar segment. The 3D plane information is then used to define an appropriate roof, and the building model is obtained by merging all 3D building primitives. Fig. 7 shows the building shaping for building reconstruction.
Fig. 7. Illustration of building shaping. (a) building primitives and LIDAR point clouds, (b) reconstructed DBM
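The least-squares roof fit used in the shaping step can be written very compactly, assuming the LIDAR points of one primitive are already isolated. The sketch below solves the vertical form z = ax + by + c, which is adequate for non-vertical roof facets; the synthetic example data are hypothetical.

```python
import numpy as np

def fit_roof_plane(points):
    """Least-squares plane z = a*x + b*y + c through LIDAR points (N x 3 array).
    Returns (a, b, c) and the RMS of the vertical residuals."""
    A = np.column_stack([points[:, 0], points[:, 1], np.ones(len(points))])
    coeffs, *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    residuals = points[:, 2] - A @ coeffs
    return coeffs, np.sqrt(np.mean(residuals ** 2))

# Example with synthetic points on a gently pitched roof plus noise:
rng = np.random.default_rng(0)
xy = rng.uniform(0, 20, size=(200, 2))
z = 0.18 * xy[:, 0] + 5.0 + rng.normal(0, 0.05, 200)
(a, b, c), rms = fit_roof_plane(np.column_stack([xy, z]))
```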
5. Experimental Results
The LIDAR data used in this research cover an area of the Industrial Technology Research Institute in northern Taiwan. The data were obtained by a Leica ALS 50 system. The average density of the LIDAR point clouds is 1.7 pts/m². The discrete LIDAR points are rasterized to a DSM with a pixel size of 0.5 m. The vector maps used have a scale of 1:1,000. Fig. 8 shows the LIDAR DSM overlaid with the vector maps, and Fig. 9 shows a satellite image of the test area. In this test area, the roof types include flat, gable, and multiple roofs. The proposed method is applied to the test area. The step edges are obtained from the LIDAR DSM with the Canny edge detector. The 3D facets are extracted by TIN-based region growing, and the building structure lines are extracted from the 3D facets. After the building primitive generation, we use the LIDAR point clouds to shape the building roof-tops. The resulting building models are shown in Fig. 10. Comparing the vector maps and the generated building models, the generated building models are more detailed than the vector maps; the inner structure lines have clearly been extracted successfully.
As on-site building measurement data are lacking, we use the LIDAR point clouds to evaluate the relative accuracy, which we refer to in this paper as the shaping error. First, we select the buildings with flat roofs in the test area; then, the non-roof LIDAR points for each building are removed manually. Comparing the roof-top planes of the reconstructed models with the LIDAR point clouds, the RMSE of the shaping error is 0.20 m.
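The shaping error could be computed as follows; this is a reconstruction of the evaluation described above (RMSE of point-to-plane distances for manually cleaned roof points), not the authors' code.

```python
import numpy as np

def shaping_error(roof_points, plane_normal, plane_point):
    """RMSE of orthogonal distances between roof LIDAR points and the
    reconstructed roof plane (given by a unit normal and a point on it)."""
    n = np.asarray(plane_normal, dtype=float)
    n = n / np.linalg.norm(n)
    d = (roof_points - plane_point) @ n      # signed point-to-plane distances
    return np.sqrt(np.mean(d ** 2))

# For a flat roof the plane normal is (0, 0, 1), so the RMSE reduces to the
# vertical scatter of the points about the modelled roof height:
pts = np.array([[0.0, 0.0, 10.05], [1.0, 0.0, 9.90], [0.0, 1.0, 10.18]])
err = shaping_error(pts, (0, 0, 1), (0, 0, 10.0))
```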
Fig. 8. DSM with building boundaries
Fig. 9. Satellite image of the test area
Fig. 10. Results of building models in the test area. (a) top view of the generated DBM, (b) perspective view of the generated DBM
Conclusions
In this investigation, we have presented a scheme for the reconstruction of building models by the fusion of LIDAR data and large-scale vector maps. The experimental results demonstrate the potential of the proposed building reconstruction scheme. The proposed method takes advantage of the high horizontal accuracy of the vector maps and the high vertical accuracy of the LIDAR data. The shaping error of the generated building models is 0.20 m, which corresponds to the vertical accuracy of the LIDAR data.
Acknowledgement This investigation was partially supported by the NCU-ITRI Joint Research Center under Project No. NCU-ITRI 950304. The authors would like to thank the Department of Land Administration of Taiwan and China Engineering Consultants, Inc. for providing the test data sets.
References
1. Hamilton, A., Wang, H., Tanyer, A.M., Arayici, Y., Zhang, X., and Song, Y., 2005. Urban information model for city planning, ITcon, Vol. 10, pp. 55-67.
2. Ulm, K., and Wang, X., 2005. Efficient reality-based 3D city modeling with CyberCity Modeler - management in ArcGIS (ESRI) and visualization with TerrainView, Proceedings of the 1st International Workshop on Next Generation 3D City Models, 21-22 June 2005, Bonn, Germany, on CD-ROM.
3. Dash, J., Steinle, E., Singh, R.P., and Bahr, H.P., 2004. Automatic building extraction from laser scanning data: an input tool for disaster management, Advances in Space Research, Vol. 33, pp. 317-322.
4. Haala, N., and Brenner, C., 1999. Virtual city models from laser altimeter and 2D map data, Photogrammetric Engineering & Remote Sensing, Vol. 65, No. 7, pp. 787-795.
5. Vosselman, G., and Dijkman, S., 2001. 3D building model reconstruction from point clouds and ground plans, IAPRS, Vol. XXXIV, Part 3/W4, pp. 37-43.
6. Sugihara, K., and Hayashi, Y., 2003. Semi-automatic generation of 3-D building model by the integration of CG and GIS, International Geoscience and Remote Sensing Symposium (IGARSS), Vol. 6, pp. 3919-3921.
7. Overby, J., Bodum, L., Kjems, E., and Iisoe, P.M., 2004. Automatic 3D building reconstruction from airborne laser scanning and cadastral data using Hough transform, IAPRS, Vol. XXXV, Part B3, pp. 296-301.
8. Schwalbe, E., 2004. 3D building model generation from airborne laserscanner data by straight line detection in specific orthogonal projections, IAPRS, Vol. XXXV, Part B3, pp. 249-254.
9. Cho, W., Jwa, Y.S., Chang, H.I., and Lee, S.H., 2004. Pseudo-grid based building extraction using airborne LIDAR data, IAPRS, Vol. XXXV, Part B3, pp. 378-383.
10. Elaksher, A.F., and Bethel, J.S., 2002. Reconstructing 3D buildings from LIDAR data, IAPRS, Vol. XXXIV, Part 3A, pp. 787-795.
11. Hofmann, A.D., 2004. Analysis of TIN-structure parameter spaces in airborne laser scanner data for 3-D building model generation, IAPRS, Vol. XXXV, Part 3, pp. 302-307.
12. Canny, J., 1986. A computational approach to edge detection, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. PAMI-8, No. 6, pp. 679-698.
13. Rau, J.Y., and Chen, L.C., 2003. Robust reconstruction of building models from three-dimensional line segments, Photogrammetric Engineering & Remote Sensing, Vol. 69, pp. 181-188.
14. Mortenson, M.E., 1999. Mathematics for computer graphics applications, Industrial Press, New York, 2nd edition, pp. 202-204.
15. Golias, N.A., and Dutton, R.W., 1997. Delaunay triangulation and 3D adaptive mesh generation, Finite Elements in Analysis and Design, Vol. 25, pp. 331-341.
16. Leung, M.K., and Yang, Y.H., 1990. Dynamic strip algorithm in curve fitting, Computer Vision, Graphics and Image Processing, Vol. 51, pp. 146-165.
Building Reconstruction - Outside and In
Christopher Gold, Rebecca Tse and Hugo Ledoux
School of Computing, University of Glamorgan Pontypridd, Wales, CF37 1DL United Kingdom. E-mail: [email protected]
Abstract The modelling or reconstruction of buildings has two aspects - on the one hand we need a data structure and the associated geometric information, and on the other hand we need a set of tools to construct the building incrementally. This paper discusses both of these aspects, but starts from the simpler exterior model and geometry determination, and then looks at representations of the building interiors. Our starting point is a set of raw LIDAR data, as this is becoming readily available for many areas. This is then triangulated in the x-y plane using standard Delaunay techniques to produce a TIN. The LIDAR values will then show buildings as regions of high elevation compared with the ground. Our initial objective is to extrude these buildings from the landscape in such a manner that they have well defined wall and roof planes. We may have already been provided with the building footprint from national mapping information, or we may need to extract it from the triangulation. We do this by superimposing a coarse Voronoi cell structure on the data, and identifying wall segments within each. We then examine the triangulated interior (roof) data, identify planar segments and connect them to form the final surface model of the building embedded in the terrain. This is done using Euler Operators and Quad-Edges. Building interiors are added by using an extension of these.
1. Previous work We have previously demonstrated (Tse and Gold, 2002) that a TIN may be represented with advantage using the Quad-Edge data structure of Guibas and Stolfi (1985), and that this structure is closely related to the basic Euler Operators used in CAD systems for boundary representation (b-rep)
modelling of exterior surfaces (e.g. Mantyla, 1988; Lee, 1999). Other structures may also be used (e.g. Baumgart, 1972; Weiler, 1986). Based on this equivalence the TIN model may be modified using these Euler Operators, to permit the modelling of seamless exterior surfaces of buildings or other structures, for example by the splitting or merging of faces, the creation or deletion of bridges or tunnels, etc. Fig. 1 shows the basic QuadEdge element, and Fig. 2 shows the elementary Splice Operator. Fig. 3 shows a simple triangulation network, the Quad-Edges and the topological loops around faces and nodes. Note that this represents both the primal and the dual structure: loops around Delaunay nodes are equivalent to Voronoi cell boundaries, and loops around Voronoi nodes are equivalent to Delaunay triangle boundaries.
Fig. 1: Quad-Edge structure
Fig. 2: Splice operation
Fig. 3. Quad-Edge navigation
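For readers unfamiliar with the structure, a minimal Python rendering of the Quad-Edge record and the Splice operator of Guibas and Stolfi (1985) is given below; geometry and the Euler Operators built on top of them are omitted, and the class layout is ours rather than the authors'.

```python
class QuadEdge:
    """One of the four directed edges in a quad-edge record."""
    def __init__(self):
        self.rot = None    # the dual edge, rotated 90 degrees counter-clockwise
        self.onext = None  # next edge counter-clockwise around the origin
        self.data = None   # vertex (primal) or face (dual) attached to the origin

def make_edge():
    # Create an isolated quad-edge record: e, e.rot, e.sym, e.rot.sym.
    e0, e1, e2, e3 = QuadEdge(), QuadEdge(), QuadEdge(), QuadEdge()
    e0.rot, e1.rot, e2.rot, e3.rot = e1, e2, e3, e0
    e0.onext, e2.onext = e0, e2        # primal directed edges loop to themselves
    e1.onext, e3.onext = e3, e1        # dual edges form a loop around the single face
    return e0

def sym(e):
    return e.rot.rot                   # same edge, opposite direction

def splice(a, b):
    # The single topological operator: joins or splits the origin rings of a and b
    # (and, simultaneously, the corresponding dual rings).
    alpha, beta = a.onext.rot, b.onext.rot
    a.onext, b.onext = b.onext, a.onext
    alpha.onext, beta.onext = beta.onext, alpha.onext
```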
Euler Operators to create a tunnel or bridge
In the CAD industry the most common elementary operations on surface models (b-reps) are called Euler Operators. These have been shown (Tse and Gold, 2002) to be simply constructed from the Quad-Edge operations Make-Edge (for a new edge) and Splice (to join or split two Quad-Edges). Each Euler Operator has an inverse. Shown here are MEV (Make Edge and Vertex, Fig. 4), MEF (Make Edge and Face, Fig. 5) and how to construct a tunnel or a bridge (Fig. 6). Fig. 7 shows two simple examples.
Fig. 4. Connecting two Quad-Edges with MEV
Fig. 5. Splitting a polygon with MEF
Fig. 6. Tunnel construction
Fig. 7. Bridges and tunnels
Building Extraction with Provided Mapping Information
If we are given the building footprint we may insert points into the terrain model along these boundary lines, and use the Euler Operators to extrude the building vertically (Tse and Gold, 2001). This will give a flat roof at the average height of the LIDAR data within the boundary, with the building modelled by Quad-Edges. Fig. 8 shows the terrain with the building footprints, and Fig. 9 shows the extruded buildings.
Fig. 8. Building footprints on terrain
Fig. 9. Extruded buildings
2. Building Construction from LIDAR data alone In principle it should be possible to extract good approximations of buildings from a sufficiently dense set of elevation data. In practice this is difficult. There are two steps: firstly to extract the vertical walls, and then to model the roof. There are two basic approaches: to attempt to fit a predefined template to the data (e.g. Vosselman, 2003; Rottensteiner and Briese, 2003); or to attempt to construct a building-like shape by extracting features from elevation data (e.g. vertical walls or roof planes). The first approach is limited by the models included in the system, while the second will only approximate the building form, and will often need subsequent rectification. We take the second approach, and only assume properties of "buildings" when absolutely necessary.
Automatic Wall Extraction
While it is not difficult to identify the near-vertical triangles in the TIN, it is not a simple task to form a complete building from these segments. The remote sensing literature has many examples of attempts to first detect line segments and then glue them together. Our approach is always to preserve a tessellation model, with connectivity, rather than attempting to connect line segments. We apply a coarse Voronoi diagram over the original data, with perhaps 50-100 LIDAR points in each cell. We then attempt to modify these cells so that the building boundaries (defined as a partition between "high" points and "low" points) are a subset of the Voronoi cell edges. We can split cells along the high/low edge to achieve this.
Proposition 1: Buildings are collections of contiguous elevations that are higher than the surrounding terrain. Their boundaries are "walls".
The approach is based on calculating the eigenvalues and eigenvectors of the 3 x 3 variance-covariance matrix of the coordinates of the points within a cell. The first eigenvector (with the largest eigenvalue) "explains" as much of the overall variance as possible, the second (perpendicular) eigenvector explains as much as possible of what is left, and the third (perpendicular to the other two) contains the residue. (For example, a wrinkled piece of paper might have the first eigenvector oriented along the length of the paper, the second along its width, and the third "looking" along the wrinkles.) Thus the eigenvector of the smallest eigenvalue indicates the orientation of a wall segment, if present, and looks along it.
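A sketch of this eigen-analysis for a single coarse Voronoi cell is shown below; the function name is ours, and only the covariance step is implemented - the wall-presence test follows in the next paragraph.

```python
import numpy as np

def candidate_wall_direction(points):
    """points : (N, 3) LIDAR coordinates inside one coarse Voronoi cell.
    Returns the eigenvector of the smallest eigenvalue of the 3x3
    variance-covariance matrix of the coordinates; following the text,
    this vector is taken as the orientation of a wall segment, if one
    is present in the cell."""
    cov = np.cov(points.T)                  # 3 x 3 variance-covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
    return eigvecs[:, 0]                    # eigenvector of the smallest eigenvalue

# The separating line between "high" and "low" elevations is then sought
# parallel to this direction (projected into the x-y plane), as described
# in the following paragraph.
```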
The next step is to locate the line parallel to the smallest eigenvector that best separates "high" elevations from "low" ones within each Voronoi cell. This is achieved iteratively, by testing various positions of this line in order to find the greatest difference between the means of the elevation values in the Voronoi cell that are on each side of the line. (In order to minimize the effect of sloping roofs or terrain, only those elevations close to the line are used.) If this maximum difference is not sufficiently large then no wall segment was detected.
Proposition 2: Walls have a specified minimum height, and this height difference is achieved within a very few "pixels".
The Voronoi cells are then split along these lines, by adding a generator on each side of this line, at the mid-point. This gives a set of "high" Voronoi cells surrounded by "low" ones. Building boundaries are then determined by walking around the cells forming the high region, using the topological consistency of the Voronoi tessellation. This must form a closed region, or else the high region is not considered to be a building. The building outline is then estimated from the Voronoi boundary segments, but only those that were created with the eigenvector technique, not those Voronoi cell boundaries that only connect them. Fig. 10 shows the Voronoi structure and the "high" LIDAR points, as well as the wall segments detected. Fig. 11 shows the "high" Voronoi cells before they are split, and Fig. 12 shows the approximation of the walls based on the split cells.
Proposition 3: A building consists of a high region entirely surrounded by walls.
Fig. 10. Voronoi cells and wall segments
Fig. 11. Initial "high" Voronoi cells
Fig. 12. "High" Voronoi cells after splitting
Roof Modelling
Simple Roof
Once the building boundary is determined, the interior points may be used to model the roof structure. Unlike other techniques, no assumptions are made about the form of the roof, except that it is made up of planar segments. (The method may be extended to detect other basic shapes if required.) Each of the interior triangles has an associated vector normal. The "smallest" eigenvector is used as described previously to estimate the main axis of the roof; all vector normals are plotted on a unit semicircle perpendicular to this, and clusters are found where all normals are very close to the same orientation. The mean of each cluster indicates the orientation of one or more planar roof segments with the same orientation. This works well even if the data is fairly noisy and the scatter of the vector normals is fairly large. (If there are two or more parallel planes in the roof structure, these may be separated at this stage by constructing the Delaunay triangulation in x-y space for the data points of the cluster, extracting the Minimum Spanning Tree, and separating the two or more parallel roof portions. The general technique is described later.) Fig. 13 shows a simple roof, Fig. 14 shows the vector normals and Fig. 15 shows the clusters of unit vectors (small circles) on the semicircle. Each roof plane is described by its vector normal plus a "visible" point on the roof plane that is within the bounds of the cluster used to estimate the vector normal. This is usually just the mean of the x-y-z values of the relevant data points. It is not necessary that all triangles within a roof segment have vector normals in the cluster - just enough to detect the plane. Fig. 16 shows the resulting roof planes for clean data, and Fig. 17 for noisy data. Intersections of wall and roof planes are then calculated, and the building extruded or bevelled using Euler Operators to give the final b-rep building form.
Proposition 4: Roofs are made up of planar segments, most of whose constituent triangles have similar vector normals.
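The clustering on the unit semicircle might be realized along the following lines; this is a simplified sketch (angles are clustered by splitting at large gaps, and the wrap-around at 0/180 degrees is ignored), with illustrative thresholds rather than the values used by the authors.

```python
import numpy as np

def cluster_roof_normals(normals, roof_axis, gap_deg=15.0):
    """normals   : (N, 3) unit normals of the roof triangles
       roof_axis : unit vector of the roof's main axis (from the smallest eigenvector)
       Projects each normal onto the plane perpendicular to the axis, maps it to an
       angle on a semicircle, and forms clusters by splitting the sorted angles at
       large gaps. Returns a list of index arrays, one per candidate roof plane."""
    a = roof_axis / np.linalg.norm(roof_axis)
    # Two directions spanning the plane perpendicular to the roof axis.
    tmp = np.array([1.0, 0.0, 0.0]) if abs(a[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    u = np.cross(a, tmp); u /= np.linalg.norm(u)
    v = np.cross(a, u)
    # Angles on the semicircle (0..180 degrees); wrap-around is ignored here.
    ang = np.degrees(np.arctan2(normals @ v, normals @ u)) % 180.0
    order = np.argsort(ang)
    gaps = np.diff(ang[order])
    cut = np.where(gaps > gap_deg)[0] + 1     # split positions between clusters
    return [order[idx] for idx in np.split(np.arange(len(ang)), cut)]
```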
Fig. 13. Simple roof
Fig. 14. Roof vector normals
Fig. 15. Vector clusters on the unit semicircle
Fig. 16. Roof planes from clean data
Fig. 17. Roof planes from noisy data
Complex Roof
The above method only works for roofs with a simple axis. Where planar segments have many orientations a different technique is required, using the projection of the vector normals onto the unit hemisphere. Others have used different projections (e.g. Hofmann et al., 2003). Where several triangles have similar orientations - usually from the same roof segment - there will be a cluster of points that can be determined by first constructing the Delaunay triangulation on the hemisphere and then extracting the Minimum Spanning Tree (MST, which is a subset of the Delaunay triangulation). These clusters are then used to identify planar segments as before. Fig. 18 shows a complex roof, while Fig. 19 shows a 2D projection of the triangulation of the vector normals on the hemisphere and Fig. 20 shows this in a 3D view. Fig. 21 shows the MST in 2D, and Fig. 22 shows it in 3D. Four clusters can be seen, corresponding to the four roof planes. Fig. 23 shows the classified roof triangles, and Fig. 24 shows the fitted planes. Roof planes can be considered to meet at triple junctions (cases of four planes meeting can be treated as two triple junctions with a very short edge between them). Vector algebra gives a concise calculation of these plane intersections, and thus the general form of a roof may be represented as a triangulation of the "visible points" on each roof plane. The building's form may then be extruded using Euler Operators as previously described.
Proposition 5: The relationships between roof planes may be represented as a dual triangulation.
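A simplified version of this clustering is sketched below using SciPy. As a shortcut, the minimum spanning tree is built over all pairwise angular distances between normals rather than being restricted to the Delaunay triangulation on the hemisphere, so it only approximates the procedure described above; the cut angle is an assumed value.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import minimum_spanning_tree, connected_components

def cluster_normals_mst(normals, cut_angle_deg=20.0):
    """normals : (N, 3) unit normals of roof triangles, oriented upwards.
    Builds an MST over the angular distances between normals, cuts edges
    longer than the threshold, and returns one cluster label per normal."""
    n = normals / np.linalg.norm(normals, axis=1, keepdims=True)
    ang = np.degrees(np.arccos(np.clip(n @ n.T, -1.0, 1.0)))  # pairwise angles
    mst = minimum_spanning_tree(csr_matrix(ang)).toarray()
    mst[mst > cut_angle_deg] = 0.0          # cut long MST edges between clusters
    graph = csr_matrix((mst + mst.T) > 0)
    n_clusters, labels = connected_components(graph, directed=False)
    return n_clusters, labels

# Example with two roof planes of distinct orientation plus noise:
rng = np.random.default_rng(2)
plane_a = np.array([0.3, 0.0, 0.95]); plane_b = np.array([-0.3, 0.0, 0.95])
normals = np.vstack([plane_a + rng.normal(0, 0.02, (30, 3)),
                     plane_b + rng.normal(0, 0.02, (30, 3))])
k, labels = cluster_normals_mst(normals)    # expect k == 2
```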
Fig. 18. Complex roof surface
Fig. 19. Triangulation of unit hemisphere
Fig. 20. Unit hemisphere - 3D view
Fig. 21. MST of unit hemisphere
Fig. 22. MST - 3D view
Fig. 23. Roof triangle clusters from normals
Fig. 24. Roof planes from clusters
Building Interiors
This gives the exterior building shape, embedded in the topography, defined with the Quad-Edge structure. However, where interior information is to be added this structure needs to be augmented. The Quad-Edge structure described previously has, in addition to its topological pointers to adjacent edges, two pointers to the vertices of the primal edge and two pointers to the left and right faces (Fig. 1). However, in 3D the dual of a face is an edge (Fig. 25), so these pointers may be assigned to Quad-Edges of dual cells. The Augmented Quad-Edge structure has been developed to represent space-filling cells individually with Quad-Edge structures, linked through the equivalent set of dual cell edges. Thus if the original building (and adjacent ground) determined from LIDAR data was constructed as a single "Polyhedral Earth" model, using the standard Quad-Edge structure, the dual edges referred to by the original face pointers would each connect to "Earth" at one end and "Air" at the other.
Proposition 6: Building exteriors, together with the adjacent terrain, form a portion of the global "Polyhedral Earth".
The Augmented Quad-Edge is designed to link face pointers in one space with Quad-Edges in the dual space - a good example of this is the 3D Voronoi diagram and dual Delaunay tetrahedralization, where Delaunay edges represent Voronoi faces, and vice versa. This structure is ideally placed to represent cellular complexes and their adjacencies (Ledoux and Gold, In Press). Thus the addition of a hollow building interior is achieved by adding an interior cell where most of the edges correspond to those forming the previously-constructed building exterior. Additional rooms may be added by partitioning this interior cell, while the simultaneously-constructed dual cell edges form a structure for navigating the interior
(Fig. 26: two tetrahedra are linked by the dual Voronoi edges that penetrate the common face). Fig. 27 shows the basic relationships between adjacent rooms using this structure, consisting of the standard Quad-Edge operators and the "through" and "adjacent" operators from the Augmented Quad-Edge. Clearly this approach is equally valid for subterranean constructions as well as above-ground buildings.
Proposition 7: Building interiors may be constructed as individual polyhedra, linked together and to the exterior by edges of the dual graph.
Fig. 25. 3D dual relationships
Fig. 26. 3D navigation via the dual
Fig. 27. Relationships between rooms
Conclusions
We have outlined a procedure for the direct extraction of building exteriors from LIDAR data without any prior knowledge of the data. This is based on five propositions that are necessary in order to recognize the basic elements of our buildings. Two additional propositions are given in order to provide a topological context for building interiors. While the resulting
building forms have only limited precision, they can be rectified with additional conditions as desired in any particular urban context.
References
Baumgart, B.G. (1972): Winged edge polyhedron representation. Stanford University Computer Science Department, Stanford Artificial Intelligence Report No. CS-320.
Guibas, L., and J. Stolfi (1985): Primitives for the Manipulation of General Subdivisions and the Computation of Voronoi Diagrams. ACM Transactions on Graphics, Vol. 4, No. 2, pp. 74-123.
Hofmann, A.D., Maas, H.-G. and Streilein, A. (2003): Derivation of roof types by cluster analysis in parameter spaces of airborne laserscanner point clouds. In H.-G. Maas, G. Vosselman, and A. Streilein, editors, Proceedings of the ISPRS working group III/3 workshop '3-D reconstruction from airborne laserscanner and InSAR data', volume 34, Part 3/W13, Dresden, Germany, 2003. Institute of Photogrammetry and Remote Sensing, Dresden University of Technology.
Ledoux, H. and Gold, C.M. (In Press): Simultaneous storage of primal and dual three-dimensional subdivisions. Computers, Environment and Urban Systems.
Lee, K. (1999): Principles of CAD/CAM/CAE Systems, Seoul National University, Korea.
Mantyla, M. (1988): An introduction to solid modelling, Computer Science Press, Rockville, MD.
Rottensteiner, F. and Briese, C. (2003): Automatic generation of building models from LIDAR data and the integration of aerial images. In H.-G. Maas, G. Vosselman, and A. Streilein, eds., Proceedings of the ISPRS working group III/3 workshop '3-D reconstruction from airborne laserscanner and InSAR data', volume 34, Session IV. Institute of Photogrammetry and Remote Sensing, Dresden University of Technology, Dresden, Germany.
Tse, R.O.C. and Gold, C.M. (2001): Terrain, dinosaurs and cadastres - options for three-dimensional modelling. In C. Lemmen and P. van Oosterom, eds., Proceedings: International Workshop on "3D Cadastres", pp. 243-257, Delft, The Netherlands.
Tse, R.O.C. and Gold, C.M. (2002): "TIN Meets CAD - Extending the TIN Concept in GIS", Proceedings: ICCS 2002, Lecture Notes in Computer Science, Dongarra and A.G. Hoekstra (Eds.), Vol. 2331, March, Springer, Amsterdam, The Netherlands, pp. 135-143.
Weiler, K.J. (1986): Topological structures for geometric modeling. PhD thesis, Rensselaer Polytechnic Institute, University Microfilms International, New York, U.S.A.
Vosselman, G. (2003): 3D reconstruction of roads and trees for city modelling. In H.-G. Maas, G. Vosselman, and A. Streilein, eds., Proceedings of the ISPRS working group III/3 workshop '3-D reconstruction from airborne laserscanner and InSAR data', volume 34, Part 3/W13. Institute of Photogrammetry and Remote Sensing, Dresden University of Technology, Dresden, Germany.
Skeletonization of Laser-Scanned Trees in the 3D Raster Domain
Ben Gorte, TU Delft, the Netherlands
Abstract The paper introduces storage and processing of 3-dimensional point clouds, obtained by terrestrial laser scanning, in the 3D raster domain. The objects under consideration are trees in production orchards. The purpose is to automatically identify the structure of such trees in terms of the number of branches, their lengths and their thicknesses. An important step in the process is skeletonization. On the basis of a previously developed methodology, a new skeletonization algorithm is developed, which delivers improved results.
Introduction
The paper presents a method to analyze point clouds of terrestrial laser scans that are recorded inside a production orchard. The purpose is to reconstruct trees in 3D, in order to acquire a model which allows for measurements that are considerably more accurate than currently used manual measurement methods. These could be used by fruit production experts, e.g. to calculate parameters which allow drawing conclusions about the expected yields. The reconstruction of the tree crowns is the basis for investigations about the vitality or the competitive status of orchard trees. The ability to create interactively explorable three-dimensional models is another goal of this approach. A tree to be reconstructed is scanned from all sides by locating the scanner at a number of viewpoints (normally 4) around the tree and merging the resulting scans into a single point cloud. Because of the nature of laser scanning, it may be expected that the coordinates of the reflected points are located at the surfaces of the branches. Scans are taken during the winter season when the trees are without leaves. At the final stage of the reconstruction process a cylinder fitting process is carried out for each of the significant branches: a cylinder is fitted
through those points that are at the surface of a particular branch. Before this is possible, however, it is necessary to have a rough idea which points belong to that particular branch, in other words to know the initial subset of points that should be involved in the fitting process. This process itself will be able to identify erroneous points (that do not fit) within the initial subset, and possibly to add points to the subset that are in the vicinity and fit on the same cylinder. The crucial point in the above is to detect structures in the point cloud which represent the tree trunk and the branches. In addition, points that do not seem to correspond to any branch must be identified as noise. This is the problem addressed in this paper. We present an algorithm that subdivides (segments) a laser point cloud of a tree into subsets that correspond to the main branches. The algorithm segments a point cloud according to branches and at the same time identifies how the branches are connected to each other. As a result, a "tree" data structure is constructed, which represents the topology of the branches within a tree. Basically, the task of the algorithm boils down to detecting elongated, cylindrical structures in a point cloud. However, point cloud segmentation can be considered a difficult subject, especially in the presence of the aforementioned noise, gaps within branches (missing points, e.g. caused by occlusions by other branches) and varying point densities (as a result of varying distance from the scanner). Establishing the topology of branches on the basis of point clouds is an unsolved problem as well. Therefore we decided to perform this part of the analysis in the 3-dimensional raster domain. Using morphological operations, which are well known from 2D image processing, connectivity and neighborhood relations between voxels in a 3D raster can be established much more easily than between the original (x,y,z) points. The extension of these operations to the 3D raster domain is described here. After converting the point cloud to a 3D raster, 3D skeletonization is applied, which reduces the stem and branches to single voxel thickness. Then, Dijkstra's minimum spanning tree algorithm ensures that the skeleton from the previous step becomes a tree in the data structuring sense. The topology can now be established, in which the stem and the branches obtain unique IDs. Finally, a 3D distance transform is applied to assign to each point in the original set the ID of the nearest branch. As a result, the points are grouped by ID according to the tree structure, so that each scan point can be uniquely attached to a main branch. A crucial step in the procedure is skeletonization, which is the focus of the remainder of the paper. It will describe two variants of skeletonization,
and its role in the entire reconstruction process. Before that, we introduce some general notions of objects represented in 3D raster space.
3D raster space
A 3D raster space consists of a regular 3-dimensional block made up of integer coordinates (p, l, c), denoting positions in terms of planes (from top to bottom), lines (from back to front) and columns (from left to right). At each position a voxel is located, in which a value V(p,l,c) is stored. An object can be represented in 3D raster space by assigning the value 1 to voxels belonging to the object, while the remaining voxels obtain the value 0 (background). When converting a laser-scanned point cloud to a 3D raster, the (x,y,z) space containing the points is regularly subdivided into a large number of planes, lines and columns, and therefore into small cubes corresponding to the voxels. The size of the cubes, measured in (x,y,z) units, is the resolution of the 3D raster. Next, voxels are given the value 1 if the corresponding cubes contain one or more laser points, or 0 if this is not the case. Alternatively, it is possible to assign to each voxel the number of laser points located in the corresponding cube, which yields a multi-valued raster representing a measure of point density. Any voxel (p,l,c) that is not at an outer boundary of the 3D raster space has 26 neighboring voxels, to be subdivided into three categories, depending on whether the cubes of the voxel and the neighboring cube have a face, an edge or a corner in common. This yields 6, 12 and 8 neighbors respectively. Adjacency between voxels can be defined according to whether only the first category of neighbors is considered (6-adjacency), or the first two categories (18-adjacency), or all three (26-adjacency). An object is defined as a set of adjacent voxels, after choosing one of the adjacency definitions. This means that one can step between any two voxels within a single object over a path containing adjacent object voxels. When thin objects are to be represented (with a thickness smaller than the resolution), 26-adjacency is the most adequate choice, which will be considered from now on.
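Converting a point cloud to such a raster is straightforward; the sketch below (NumPy, with our own function names) counts the laser points per cube, so that both the binary and the multi-valued raster are available, and enumerates the 26 neighbours of a voxel. The mapping of (p,l,c) onto the coordinate axes is a simplification.

```python
import numpy as np

def voxelize(points, resolution):
    """Convert an (N, 3) laser point cloud to a 3D raster.
    Returns an array holding the number of points per cube (so 'value > 0'
    is the binary object/background raster) and the raster origin."""
    origin = points.min(axis=0)
    idx = np.floor((points - origin) / resolution).astype(int)
    shape = idx.max(axis=0) + 1
    raster = np.zeros(shape, dtype=np.int32)
    np.add.at(raster, (idx[:, 0], idx[:, 1], idx[:, 2]), 1)  # point count per voxel
    return raster, origin

def neighbours_26(p, l, c, shape):
    """Indices of the up-to-26 neighbours (face, edge and corner) of voxel (p, l, c)."""
    out = []
    for dp in (-1, 0, 1):
        for dl in (-1, 0, 1):
            for dc in (-1, 0, 1):
                if dp == dl == dc == 0:
                    continue
                q = (p + dp, l + dl, c + dc)
                if all(0 <= q[i] < shape[i] for i in range(3)):
                    out.append(q)
    return out
```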
Skeletonization
In a binary (0/1, i.e. object/background) 3D raster space, skeletonization is, loosely speaking, the operation that removes as many object voxels as
possible (i.e. changes voxel values from 1 to 0), while preserving the object structure. One should generally distinguish between surface skeletonization and line skeletonization, depending on whether single-voxel-thick surfaces will be further reduced to lines, or not [1]. In our application of revealing the structure of trees from laser scans, line skeletonization is the appropriate choice. "Preserving the object structure" means here that the extremities of the branches should remain in place; the length of the branches is not affected, but only the thickness of the stem and the branches is reduced as much as possible, i.e. to single voxel width under 26-adjacency, of course without breaking them into separate objects. Skeletonization has been a difficult problem in 2D and is even more so in 3D, but nowadays various solutions can be found in the literature [2, 3, 5, 9]. Skeletonization algorithms iteratively peel layers from the outside of objects, and some approaches can be subdivided according to the number of sub-iterations within a single iteration. This number can be 6, 8 or 12, depending on whether a cubic object to be skeletonized would be 'attacked' from the faces, corners or edges, respectively. In [7] we implemented an 8 sub-iteration method, described in [6], which shows excellent performance.
Pre-processing and post-processing
Having put skeletonization in the center of the process, two problems remain:
1. The tree under consideration may not be represented as a single object. Due to occlusions, and due to the fact that we aim for high resolutions, single trees are often represented by disjoint sets of object voxels. The skeletonization will therefore yield several separate skeletons for a single tree.
2. The generated skeleton may not adequately and unambiguously represent the tree structure, for example because of loops that are caused by touching branches.
The first problem was handled by pre-processing [7], where 3D extensions of morphological operations [8], such as dilation and closing, were applied to close gaps between the different parts, unfortunately at the expense of distorting the shape of the objects. To solve the second problem, a post-processing step finds the structure of the tree as represented by the skeleton, in terms of branches and of junctions connecting those branches. As suggested in [4], this step is based on Dijkstra's algorithm, the well-known method for finding shortest routes in
a weighted directed graph (representing, for example, a road network in a route planner), and yielding a spanning tree of the graph.
Fig. 1. 2D illustration of skeleton segmentation. A. input skeleton; B. graph nodes and edges; C. spanning tree with shortest distances; D. segments and topology.
First, the graph is created. Its nodes are the voxels that participate in the skeleton (Fig. 1.A), and any pair of adjacent voxels generates two arcs between the nodes, one in either direction (Fig. 1.B, where each pair of arcs is represented by only one line, however). Arcs are assigned lengths of 3, 4 or 5, depending on whether the two voxels share a face, an edge or a corner respectively [10]. Given such a graph, Dijkstra's algorithm finds the shortest route to a destination node (the root of the tree) from all other nodes in the graph. In our application, the lowest voxel of the tree trunk serves as the destination. The collection of shortest routes leading from anywhere to the root forms a spanning tree of the graph. The term 'tree' in the graph-theoretical sense is not chosen by accident: the spanning tree that we find provides a logical model for a tree in a rasterized laser point cloud. Any ambiguities, such as loops caused by falsely-connected branches, which might still exist after skeletonization, are resolved now (Fig. 1.C); how realistic the results are depends mostly on the success of the previous steps.
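The graph construction and Dijkstra search might look as follows in Python; the 3/4/5 arc lengths follow [10], while the data layout (dictionaries keyed by voxel index) is our own simplification.

```python
import heapq
import numpy as np

def dijkstra_voxels(raster, root):
    """Shortest routes from every object voxel to the root voxel, with arc
    lengths 3, 4 or 5 for face-, edge- and corner-adjacent voxels.
    raster : 3D array, nonzero = object voxel;  root : (p, l, c) tuple.
    Returns dictionaries of distances and parents (the next voxel on the
    route to the root), which together encode the spanning tree."""
    dist = {root: 0}
    parent = {root: None}
    heap = [(0, root)]
    shape = raster.shape
    while heap:
        d, v = heapq.heappop(heap)
        if d > dist.get(v, np.inf):
            continue                              # stale heap entry
        p, l, c = v
        for dp in (-1, 0, 1):
            for dl in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    if dp == dl == dc == 0:
                        continue
                    w = (p + dp, l + dl, c + dc)
                    if not all(0 <= w[i] < shape[i] for i in range(3)):
                        continue
                    if raster[w] == 0:
                        continue
                    step = 2 + abs(dp) + abs(dl) + abs(dc)   # 3, 4 or 5
                    if d + step < dist.get(w, np.inf):
                        dist[w] = d + step
                        parent[w] = v
                        heapq.heappush(heap, (d + step, w))
    return dist, parent
```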
As a highly desired side effect, each node (except the root) now knows its parent, i.e. the next node along the shortest route to the root. By traversing the tree once more in a so-called segmentation step, it is easy to find branches and their junctions, junctions being nodes (voxels) that appear as a parent more than once. During this traversal each voxel is assigned a unique branch identification number. Each time a junction is found, a new identification number is given to this voxel and to the other voxels in the same branch closer to the root (Fig. 1.D).
The alternative: Dijkstra skeletonization
As an alternative to the above-described procedure we propose to apply the Dijkstra algorithm, including the segmentation step, directly to the original (un-skeletonized) raster data set. As before, it will find for each object voxel the shortest route to a designated voxel at the base of the tree, and it will identify branches and junctions, at least in the graph sense. Note that a single real-tree branch may now consist of multiple graph-sense branches, running parallel to each other. Within this result, we can now identify so-called leafs: voxels at the starting points of routes, as opposed to voxels that are intermediate points (on routes from any starting point to the root). The segmentation step identifies a (graph-sense) branch for every leaf. The distance from each leaf to the root is known, and is assigned to the branch as well. These distances are propagated down to the root, so finally every voxel obtains an attribute value that reflects the length of the longest route passing through that voxel. The idea is now the following: if a leaf, having a certain distance value, is in the vicinity of a branch (belonging to another leaf) with a larger distance value, then the leaf does not contribute to the skeleton and can be removed, together with its associated branch. These leafs can be easily identified using a local maximum operation in a neighborhood whose size quantifies the notion of "vicinity". In this way, a large number of redundant leafs and branches are removed very quickly. When applying this process iteratively (Fig. 2), after a few iterations only leafs and branches remain that represent shortest routes from the extremities of the (real-world) tree to the root: a segmented skeleton.
Fig. 2. Intermediate result of Dijkstra skeletonization: pixels in red will be removed during the next iteration. Left: entire tree; right: detail
Multi-resolution approach
The remaining problem, as before, concerns gaps in the original data set, potentially leading to multiple skeletons being formed for a single tree. Dijkstra skeletonization, as described in the previous section, lends itself to application in a multi-resolution setting. First, the dataset is subsampled in a number of steps, each time decreasing the resolution by a factor of 2, until (almost) the entire data set is captured within a single object. This can be automatically checked using a connected component algorithm (Fig. 3).
Fig. 3. Input data at 4 different resolution levels
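The subsampling and the connectivity check could be sketched as follows, assuming a binary NumPy raster and SciPy's connected-component labelling; the 95% "almost one object" criterion is an assumed threshold, not taken from the paper.

```python
import numpy as np
from scipy import ndimage

def halve_resolution(raster):
    """Downsample a binary 3D raster by a factor of 2 in each dimension:
    a coarse voxel is set if any of its 2x2x2 fine voxels is set."""
    p, l, c = (s - s % 2 for s in raster.shape)       # crop to even dimensions
    r = raster[:p, :l, :c].astype(bool)
    return r.reshape(p // 2, 2, l // 2, 2, c // 2, 2).any(axis=(1, 3, 5))

def coarsest_connected_level(raster, max_levels=6):
    """Keep halving the resolution until (almost) the whole tree forms a single
    26-connected object, checked with a connected-component labelling."""
    level = 0
    current = raster.astype(bool)
    structure = np.ones((3, 3, 3), dtype=bool)        # 26-adjacency
    while level < max_levels:
        labels, n = ndimage.label(current, structure=structure)
        sizes = np.bincount(labels.ravel())[1:]       # object sizes, background excluded
        if n <= 1 or sizes.max() >= 0.95 * sizes.sum():
            return current, level
        current = halve_resolution(current)
        level += 1
    return current, level
```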
Now, Dijkstra skeletonization is executed at the coarsest resolution. Next, the leafs are removed from the skeleton (Fig. 4A and 4B). Then, the remainder is super-sampled into the next higher resolution and combined there with the original data set (Fig. 4C) . The removal of leafs from the skeleton serves to ensure that the extremities of the tree are provided by the original data set.
Fig. 4. A. Skeleton with leafs; B. detail of A; C. combination of up-scaled skeleton with original data set at the next higher resolution level.
Here, the first iteration runs with a small refinement: Routes should preferably follow the original data set, and only in case of gaps pass through the up-scaled skeleton. This can be easily achieved by giving distances between adjacent voxels a larger value when one of these (or both) do not belong to the original data set. These voxels will be by-passed whenever possible. Subsequent iterations do not need this refinement. The entire process continues bottom-up until the highest resolution is reached.
Fig. 5. Results. Left: original skeletonization; right: Dijkstra skeletonization
Conclusion
The paper reviews a previously established method for revealing the structure of trees from laser-scanned point clouds, after transferring these into the 3D raster domain. On this basis, it proposes a new method, which is more efficient and produces, qualitatively speaking, superior results on a real-life data set (Fig. 5). The main advantage of the newly proposed method is that it closely follows the original data, which need not be affected by pre-processing. More generally, the aim was to demonstrate the power and versatility of representing 3D information in the 3D raster domain for a particular application.
References
[1] K. Palagyi, E. Sorantin, E. Balogh, A. Kuba, Cs. Halmai, B. Erdohelyi, K. Hausegger: A sequential 3D thinning algorithm and its medical applications, in Proc. 17th Int. Conf. Information Processing in Medical Imaging, IPMI 2001, Davis, USA, Lecture Notes in Computer Science, Springer, 2001.
[2] Gunilla Borgefors, Ingela Nystrom and Gabriella Sanniti Di Baja, Computing skeletons in three dimensions, Pattern Recognition 32 (1999) 1225-1236.
[3] Christophe Lohou and Gilles Bertrand, A new 3D 12-subiteration thinning algorithm based on P-simple points, IWCIA, 2001.
[4] K. Palagyi, J. Tschirren, M. Sonka: Quantitative analysis of intrathoracic airway trees: methods and validation, LNCS 2732, Springer, 2003, 222-233.
[5] K. Palagyi, E. Sorantin, E. Balogh, A. Kuba, C. Halmai, B. Erdohelyi and K. Hausegger, A sequential 3D thinning algorithm and its medical applications, in M.F. Insana and R.M. Leahy (Eds.): IPMI 2001, LNCS 2082, Springer 2001, pp. 409-415.
[6] Kalman Palagyi and Attila Kuba, Directional 3D thinning using 8 subiterations, LNCS 1568, Springer, 1999, 325-336.
[7] Norbert Pfeifer and Ben Gorte, Automatic reconstruction of single trees from terrestrial laser scans, IAPRS 35, Istanbul.
[8] J. Serra, Image Analysis and Mathematical Morphology, Academic Press, London, 1982.
[9] Alexandru Telea and Anna Vilanova, A robust level-set algorithm for centerline extraction, Joint EUROGRAPHICS-IEEE Symposium on Visualisation (2003).
[10] B. Verwer, Local Distances for Distance Transformations in Two and Three Dimensions, Pattern Recognition Letters Vol. 12, No. 11 (1991), 671-682.
Automated 3D Modeling of Buildings in Suburban Areas Based on Integration of Image and Height Data
Kourosh Khoshelham
Center of Excellence in Surveying Engineering and Disaster Management, Dept. of Surveying and Geomatics Engineering, University of Tehran, Iran.
Abstract This paper presents an automated method for 3D modeling of buildings in suburban areas through the integration of image and height data. The method is based on matching a CAD model of the building against the image, while the selection of the CAD model relies on clues derived from height data. The matching procedure makes use of straight line segments extracted from the image and grouped on the basis of proximity and parallelism relations. For the selection of the model, a process of fitting planar faces to height data guided by a segmentation of image data is employed. Roof planes are recognized by taking into account the height of each plane over a DTM of the scene. The integration strategy proposed in this paper is capable of exploiting accurate and reliable information from both sources of data. Incomplete image regions can be refined by using clues from height data, and incomplete image edges can be efficiently handled in the model matching process. The reconstruction strategy takes advantage of the high accuracy of laser range data, but is not influenced by the low spatial resolution of the DSM. An experiment is conducted to evaluate the performance of the proposed approach. Results indicate the promising performance of the proposed approach in reconstructing buildings with an acceptable accuracy at a reasonable computational cost.
1. Introduction Automated generation of 3D models of buildings in urban and suburban areas has long been an attractive research topic for scientists within the
area of digital photogrammetry and computer vision. A large number of methods have been developed to fully automate the process of building extraction. These methods mainly vary in the utilized source of input data and the type of output model. The input data that have been used for automated building extraction mostly include aerial imagery, laser range data, 2D maps and combinations of them. Recent developments show a trend towards integrating image and laser range data. Complementary properties of image and laser range data suggest that these sources can be used as synergistic partners in automated building extraction. Aerial images can be obtained in a very high spatial resolution; however, 3D information extracted from such images is still very uncertain. In contrast, laser scanners provide accurate measurements of the terrain surface, but in a relatively low spatial resolution. This has led researchers to the idea of integrating image and height data for automated building extraction (Cord et al., 2001; Jaynes et al., 2003; Rottensteiner and Jansa, 2002; Rottensteiner et al., 2004).
In terms of output model, two major types of building models have been commonly used, namely specific models and generic models. Specific models are parameterized CAD models, which are specific to the domain of buildings since they are tightly constrained with geometric constraints that most buildings exhibit. These constraints may limit the number of buildings that can be modeled; but in return, they guarantee that other objects cannot be modeled using this type of model. Modeling with parameterized models is often based on a hypothesize-and-verify strategy whereby a match is found between a number of hypothesized models and the features extracted from the data (Gruen, 1998; Jaynes et al., 2003; Khoshelham and Li, 2004). Generic models, on the other hand, are able to model a broader range of buildings because they represent looser constraints. The shortcoming of generic models is that other objects that satisfy the constraints might as well be modeled. Also, generic models carry no semantics about the type of the reconstructed building. Boundary representation with polyhedral models is the most commonly used method for generic modeling. In this method, planar faces are extracted from the image or height data and then stitched together to form a polyhedral model (Henricsson, 1998; Khoshelham, 2004; Vosselman, 1999). An intermediate type of model is the component-based model that consists of parameterized building parts with fixed geometry and unknown parameters. As component-based models cover a broader range of buildings, they have also been referred to as generic models (Braun et al., 1995; Heuel and Kolbe, 2001). Fig. 1 shows different types of models.
Fig. 1. Types of models. A. parameterized model; B. component-based model; C. polyhedral model.
The performance of the modeling with generic polyhedral models highly relies on the completeness of extracted features. This is because the correctness of extracted features cannot be verified due to the lack of tight constraints in generic models. Therefore, if extracted features are incorrect or some correct features are missed, the modeling process is likely to fail. Parameterized models enjoy a higher level of robustness because the presence of tight constraints and the known geometry of such models greatly reduce the sensitivity of the modeling process to incomplete features.
This paper presents an automated method for modeling buildings in suburban areas with parameterized models based on the integration of image and laser range data. Parameterized models are suitable for suburban areas since buildings in suburbs often have simple shapes. The modeling process is based on matching one or more hypothesized models with straight line segments extracted from the image. The generation of hypotheses relies on a roof plane reconstruction process, which works on the basis of fitting planar faces to height data guided by a segmentation of image data. The paper has six sections. Section 2 provides an overview of the proposed method. Section 3 describes the roof plane reconstruction process for the generation of model hypotheses. In section 4, the model matching process for the verification of the hypotheses and computation of the parameters of the correct model is discussed. Section 5 describes the experimental evaluation of the method. Conclusions are drawn in section 6.
2. An overview of the method In order to take advantage of parameterized models, a number of building models with fixed geometry are constructed and stored as sets of parameters in a library of models. The proposed method is basically a search for a
correct match between one of the models in the library and the set of linear image features. This search can be performed in a brute-force fashion by exhausting the entire library and matching all the models against image features. In practice, however, an exhaustive search can be very expensive because model matching is a relatively costly process and the library may contain a very large number of building models. Therefore, we adopt a hypothesize-and-verify strategy to limit the search and speed up the matching process. For the generation of model hypotheses, we employ a split-and-merge method that has been used for the reconstruction of building roofs from image and height data (Khoshelham, 2005; Khoshelham et al., 2005). In this method, first a segmentation of the image data is obtained. A plane fitting algorithm is then used to find planar surfaces in the height points within every image region. The plane fitting algorithm is followed by a split-and-merge process where incomplete image regions are refined and roof planes are identified. The number of the roof planes is used as an index to the library of parameterized models in order to select a limited number of model hypotheses. The verification of the hypotheses is based on a model matching method (Lowe, 1987; Khoshelham and Li, 2004). In this method, the hypothesized models are matched against the set of straight line segments extracted from the image. In addition to model parameters, a fitness error is also computed for each match. A correct match is identified by examining the fitness error as well as computed parameters.
3. Generation of model hypotheses
For the generation of model hypotheses first a segmentation of the image is obtained. Every extracted image region is coupled with a number of height points from a last echo laser scanner DSM, which project to that region, provided that both the image and DSM are referenced to the same datum. A robust regression method based on the least median of squares (LMS) (Rousseeuw and Leroy, 1987) is employed to fit planar surfaces to the height points contained in each image region. The LMS-type plane fitting algorithm is iteratively applied to each image region and its associated height points until no more planes are found. The split-and-merge process makes use of these detected planes to identify incomplete regions. If multiple planes are found in a single region, then the region is split according to the number of the planes. In case neighboring regions each have a single plane but their planes are coplanar, then the two regions are merged. In
other words, the split-and-merge process establishes a one-to-one correspondence between image regions and planar surfaces in height data. Fig. 2 illustrates the process of plane fitting in an overgrown region.
Fig. 2. Plane fitting process. A. A sample aerial image of a building; B. The initial segmentation of the image representing an overgrown region and its associated height points; C. Two planes are detected in the height points contained in the overgrown region (depicted with blue points and red points); D. The height points as seen from a different angle.
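The LMS plane fit of the previous section can be sketched as a random-sampling search for the plane with the smallest median of squared residuals; the number of trials is an arbitrary choice, and the function is only an illustration of the idea in Rousseeuw and Leroy (1987), not the implementation used here.

```python
import numpy as np

def lms_plane(points, n_trials=500, rng=None):
    """Least-median-of-squares plane fit to the height points of one image region.
    points : (N, 3) array. Returns (median squared residual, unit normal n, offset d)
    of the best candidate plane n.x = d."""
    rng = np.random.default_rng(rng)
    best = None
    for _ in range(n_trials):
        sample = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(n)
        if norm < 1e-9:                          # degenerate (collinear) sample
            continue
        n = n / norm
        d = n @ sample[0]
        med = np.median((points @ n - d) ** 2)   # median of squared residuals
        if best is None or med < best[0]:
            best = (med, n, d)
    return best

# Iterating this fit and removing the inliers of each detected plane yields the
# multiple planes per region that drive the split-and-merge step described above.
```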
For every region-plane two average height values are derived, one from the DSM and one from a DTM of the scene. The difference between these two values indicates whether the region-plane belongs to a roof. Usually a threshold of about 2 m can be applied to separate roof from non-roof planes. More details on the process of roof plane reconstruction can be found in Khoshelham (2005). The number of detected roof planes for each building serves to form the model hypotheses. As can be seen in the example model library of Fig. 3, models in the library are categorized with respect to the number of roof planes. Therefore, having determined the number of roof planes, the search in the library will be narrowed down to only one class of building models. Usually all the models in the relevant class are hypothesized, although the slope of the roof planes can also be used to further reduce the number of hypotheses.
Fig. 3. An example of a model library in which models are categorized based on the number of roof planes (arrows denote slope). Class 1: one roof plane; Class 2: two roof planes; Class 3: three roof planes; Class 4: four roof planes; Class 5: five roof planes; Class 6: six roof planes.
4. Verification of the hypotheses

Model matching (Lowe, 1987) is an efficient method for the verification of the presence of a 3D object in a 2D image, provided that a model of the object is available. Having generated a number of model hypotheses, the model matching method can be used to verify the hypotheses and find the correct match. The matching algorithm requires corresponding lines in image and model to be selected beforehand. For the selection of image lines, a line extraction algorithm is applied to the image data. The extracted line segments undergo a perceptual grouping process, which selects a set of image lines that form a structure based on some perceptual relations such as proximity and parallelism. Fig. 4 demonstrates an example of the performance of perceptual grouping in the selection of a set of lines that are most relevant to a polyhedral object (here a building) based on proximity and parallelism relations.
Fig. 4. Perceptual grouping of image lines based on proximity and parallelism relations.
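The following sketch indicates one simple way such a grouping could be realized in Python; the thresholds and the greedy merging scheme are assumptions for illustration, not the grouping procedure actually used by the author:

```python
import numpy as np

def orientation(seg):
    """Undirected orientation of a segment ((x1, y1), (x2, y2)) in [0, pi)."""
    (x1, y1), (x2, y2) = seg
    return np.arctan2(y2 - y1, x2 - x1) % np.pi

def related(s1, s2, max_gap=5.0, max_angle=np.radians(10)):
    """Segments are related if close and roughly parallel or perpendicular."""
    d = abs(orientation(s1) - orientation(s2))
    d = min(d, np.pi - d)
    gap = min(np.linalg.norm(np.subtract(p, q)) for p in s1 for q in s2)
    return gap < max_gap and (d < max_angle or abs(d - np.pi / 2) < max_angle)

def group_lines(segments, **kwargs):
    """Greedily collect segments that share a perceptual relation into groups."""
    groups = []
    for seg in segments:
        for group in groups:
            if any(related(seg, other, **kwargs) for other in group):
                group.append(seg)
                break
        else:
            groups.append([seg])
    return groups
```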
The same perceptual relations are used to select the corresponding model lines. Every hypothesized model is rearranged as a set of lines in a graph structure interlinked with respect to proximity and parallelism relations. The selection of corresponding model lines is a search in this graph structure for those model lines that carry the same relations as the set of grouped image lines retains. Usually multiple sets of model lines satisfy the proximity and parallelism criteria, resulting in an increase in the number of hypotheses. Some properties of the grouped image lines, e.g. the length and the orientation of the longest line in the group, can be exploited for the computation of initial values for the parameters of the hypothesized models. Once the corresponding image and model lines are selected and the initial values for the model parameters are computed, the set of model lines is projected to the image space and the distance between the endpoints of every model line and the corresponding image line is computed. With the computed distances a set of observation equations is formed that contain the parameters of the model as unknowns. These unknown model parameters are estimated in a least squares adjustment that minimizes the sum of squared distances. The estimated parameters for every hypothesized model and its selected sets of model lines, in addition to a fitness error computed as the mean of distance residuals after the adjustment, form the basis for the verification of the hypotheses. When an incorrect hypothesis is matched, the least squares adjustment will usually result in implausible estimations for the unknown parameters (e.g. negative length or width). The fitness error will also be high in the case of an incorrect match. A successful match is found
as one with plausible parameters and a minimum fitness error. Further details on the application of model matching to the verification of hypotheses can be found in Khoshelham and Li (2004).
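The verification step can be mimicked with a generic least-squares solver. The sketch below assumes a caller-supplied projection function and treats the perpendicular endpoint-to-line distances as the observations; it is only a rough stand-in for the matching of Lowe (1987) and Khoshelham and Li (2004), and all names are invented here:

```python
import numpy as np
from scipy.optimize import least_squares

def point_line_distance(p, line):
    """Signed perpendicular distance from point p to the infinite image line."""
    a, b = np.asarray(line[0], float), np.asarray(line[1], float)
    d = b - a
    n = np.array([-d[1], d[0]]) / np.linalg.norm(d)  # unit normal of the line
    return float(np.dot(np.asarray(p, float) - a, n))

def residuals(params, project, image_lines):
    """Distances of projected model-line endpoints to their matched image lines."""
    res = []
    for (p1, p2), img_line in zip(project(params), image_lines):
        res += [point_line_distance(p1, img_line), point_line_distance(p2, img_line)]
    return res

def match_model(project, image_lines, x0):
    """Estimate model parameters and report the mean distance residual (fitness)."""
    fit = least_squares(residuals, x0, args=(project, image_lines))
    return fit.x, float(np.mean(np.abs(fit.fun)))
```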
5. Experimental results

A set of data consisting of an aerial orthoimage, a laser scanner DSM and a DTM of a suburban part of the city of Memmingen, Germany, was obtained for the experimental evaluation of the method. The orthoimage was resampled by the supplier with a pixel size of 0.5 m, while the height data had a 1.0 m ground resolution. Fig. 5 shows the orthoimage and the last echo DSM of the scene. A segmentation of the image was obtained using a morphological watershed algorithm. The split-and-merge process was then applied to the extracted image regions and roof region-planes were reconstructed. Fig. 6 depicts the reconstructed roof region-planes. As can be seen, in most cases roof planes are correctly reconstructed, although there are cases where roof planes are missed or some region-planes are wrongly detected as roof planes. The results of the roof reconstruction process are summarized in Table 1. The numbering of the buildings is shown in Fig. 7.
Fig. 5. A. Aerial orthoimage of the scene of the experiment; B. Last echo laser scanner DSM of the scene.
Fig. 6. Roof region-planes reconstructed in the split-and-merge process.
Fig. 7. Numbering of buildings.
Table 1. Performance of the roof plane reconstruction process. Columns: Bldg. No.; Total No. of roof planes; No. of detected planes; No. of missed roof planes; No. of planes wrongly detected as roof (buildings numbered 1-14 as in Fig. 7).
The reconstructed roof planes served for the generation of hypotheses. All hypothesized models were matched against the grouped image lines and verified according to the result of the matching. Model matching was
carried out in 2D since the 3D parametric forms of the roof planes were already derived and only the 2D roof boundaries were unknown. Table 2 summarizes the fitness errors computed for the successful matches. As expected, a correct match was not found in cases where roof planes were not correctly reconstructed. Fig. 8 shows the roof models found in the matching process. It can be observed that the performance of the model matching algorithm is not influenced by the incompleteness of the image lines because perpendicular distances are used as observations in the least squares estimation model. A visualization of the reconstructed 3D models is shown in Fig. 9.

Table 2. Fitness errors for successful matches. Columns: Bldg. No.; Matching fitness error (pixels) (buildings numbered as in Fig. 7).
Fig. 8. Roof matching process. A. Original images; B. Grouped image lines (in red), corresponding lines of the initialized roof model (in white), and matched model lines in the last iteration (in yellow) for the successful matches; C. Complete roof models overlaid on images.
Fig. 9. 3D visualization of the reconstructed models.
6. Conclusions

In this paper a method was presented for automated 3D reconstruction and modeling of buildings from aerial image and laser range data. The method was shown to be capable of exploiting accurate and reliable information from both sources of data. Incomplete image regions can be refined by using clues from height data, and incomplete image edges can be efficiently handled in the model matching process. The reconstruction strategy takes advantage of the high accuracy of laser range data, but is not influenced by the low spatial resolution of the DSM. The experimental results also indicate the promising performance of the proposed strategy in automated 3D reconstruction of buildings in a suburban area.
References

Braun, C. et al., 1995. Models for photogrammetric building reconstruction. Computers & Graphics, 19(1): 109-118.
Cord, M., Jordan, M. and Cocquerez, J.-P., 2001. Accurate building structure recovery from high resolution aerial imagery. Computer Vision and Image Understanding, 82: 138-173.
Gruen, A., 1998. TOBAGO -- a semi-automated approach for the generation of 3D building models. ISPRS Journal of Photogrammetry and Remote Sensing, 53(2): 108-118.
Henricsson, O., 1998. The Role of Color Attributes and Similarity Grouping in 3D Building Reconstruction. Computer Vision and Image Understanding, 72(2): 163-184.
Heuel, S. and Kolbe, T.H., 2001. Building reconstruction: the dilemma of generic versus specific models. Künstliche Intelligenz, 3: 57-62.
Jaynes, C., Riseman, E. and Hanson, A., 2003. Recognition and reconstruction of buildings from multiple aerial images. Computer Vision and Image Understanding, 90(1): 68-98.
Khoshelham, K., 2004. Building extraction from multiple data sources: a data fusion framework for reconstruction of generic models. International Archives of Photogrammetry and Remote Sensing, 35(B3): 980-986.
Khoshelham, K., 2005. Region refinement and parametric reconstruction of building roofs by integration of image and height data. In: U. Stilla, F. Rottensteiner and S. Hinz (Editors), CMRT05, Vienna, Austria, pp. 3-8.
Khoshelham, K. and Li, Z.L., 2004. A model-based approach to semi-automated reconstruction of buildings from aerial images. Photogrammetric Record, 19(108): 342-359.
Khoshelham, K., Li, Z.L. and King, B., 2005. A split-and-merge technique for automated reconstruction of roof planes. Photogrammetric Engineering and Remote Sensing, 71(7): 855-862.
Lowe, D.G., 1987. Three-dimensional object recognition from single two-dimensional images. Artificial Intelligence, 31(3): 355-395.
Rottensteiner, F. and Jansa, J., 2002. Automatic extraction of buildings from LIDAR data and aerial images. International Archives of Photogrammetry and Remote Sensing, Volume XXXIV/4, pp. 569-574.
Rottensteiner, F., Trinder, J., Clode, S. and Kubik, K., 2004. Fusing airborne laser scanner data and aerial imagery for the automatic extraction of buildings in densely built-up areas. International Archives of Photogrammetry and Remote Sensing, 35(B3): 512-517.
Rousseeuw, P.J. and Leroy, A.M., 1987. Robust regression and outlier detection. John Wiley & Sons, New York, 329 pp.
Vosselman, G., 1999. Building reconstruction using planar faces in very high density height data. ISPRS Conference on Automatic Extraction of GIS Objects from Digital Imagery, Munich, pp. 87-92.
Automatically Extracting 3D Models and Network Analysis for Indoors

Ismail R. Karas 1, Fatmagul Batuk 2, Abdullah E. Akay 3, Ibrahim Baz 1
1 Gebze Institute of Technology, Department of Geodesy and Photogrammetry Engineering, Gebze, Kocaeli, Turkey - [email protected]
2 Yildiz Technical University, Department of Geodesy and Photogrammetry Engineering, Istanbul, Turkey
3 Kahramanmaras Sutcu Imam University, Department of Forest Engineering, Kahramanmaras, Turkey
Abstract

Areas such as emergency services, transportation, security, visitor guiding, etc. are the subjects of 3D network analysis applications. In particular, the problem of evacuating buildings safely through the shortest path has become more important than ever in the case of extraordinary circumstances (i.e. disastrous accidents, massive terrorist attacks) happening in the complex and tall buildings of today's world. This study presents a model which, first, automatically extracts the geometry and topology of a building; second, computes the distances among all entities and records them into the geodatabase; and then investigates the shortest path between two user-specified entities by using 3D network analysis based on a modified Dijkstra algorithm. The model also provides the user with a 3D interactive visualization feature. This paper briefly describes the model and presents a simple application; the model is described in detail elsewhere.
1. Introduction

Areas such as emergency services, transportation, security, visitor guiding, etc. are the subjects of 3D network analysis applications for indoors. In particular, the problem of evacuating buildings safely through the shortest path has become more important than ever in the case of extraordinary circumstances (i.e. disastrous accidents, massive terrorist attacks) happening in the complex and tall buildings of today's world.
Interiors of buildings are often represented as two-dimensional spaces with attributes attached to them. Therefore, most of the available navigation programs primarily use 2D plans for visualization and communication features for indoors. These programs have not been structured with respect to the functionality of the building, but largely with respect to the navigation/visualization routes. However, in the last few years, there has been an increasing interest in modeling based on the functionality of interiors (Meijers et al., 2005). Lee (2001) has developed a topological data model, the Node-Relation Structure (NRS). In this model, the Straight Medial Axis Transformation technique was used to obtain the network model of a building. Gillieron and Merminod (2003) have developed a personal navigation system for indoor applications. They have built a topological model for navigation applications and implemented route guidance and map matching algorithms. Meijers et al. (2005) have reported a semantic model representing the 3D structuring of interiors to be used for an intelligent computation of evacuation routes, and they have demonstrated and tested the model in an implementation. Pu and Zlatanova (2005) have pointed out that automatically extracting geometry and logic models of a building is difficult and that the nodes and links have to be created manually or half-manually. In this study, a new method was developed to automatically extract the 3D Building Model (geometry) and Network (logic) Model from the architectural plan of a building. In the designed system, it is possible to perform network analysis based on the models generated by using the automatic extraction method. In the following sections, this method will be described and the automation and 3D analysis systems will be introduced.
2. 3D Models of the Building

In order to perform network analysis indoors, 3D models of the building should be generated. The initial model is called the "Building Model (BM)", which provides 3D visualization of the building. The 3D BM is stored only for visualization and drawing purposes and does not contain any topologic information. The main objective is to let the user see the geometry of the buildings on screen (Figure 1a). The 3D network model is called the "Network Model (NM)", which represents the path network in the building based on a graph representation. The NM contains the paths between the entities in the building. Each entity is defined
as a node, and links between the nodes are also stored in this model. Therefore, the NM is a topological model since it contains connectivity information (Figure 1b). Besides, it contains geometric information such as coordinates of the nodes and link lengths to provide visualization and cost (shortest/optimum path) computation. The model, which is represented both geometrically and topologically, is called the "Geometric Network Model" by Lee (2001) (Figure 1c). Modeling the corridors which connect the rooms in the building is very important. To ensure correct cost computation, these corridors are modeled as a separate graph formed by nodes, while every other entity is represented by a single node. In order to visualize the path alignments, the geometry of the corridors should be designed. Therefore, the corridor, which is modeled separately, becomes a sub-graph of the NM (Meijers et al., 2005).
Fig. 1. (a) 3D Building Model. (b) Topologic Model (No geometry). (c) Network Model (Topology + Geometry)
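A geometric network model of this kind can be held in a very small data structure. The sketch below only illustrates the idea described above (nodes carrying 3D coordinates, links carrying Euclidean costs); the class and identifiers are invented here and do not reproduce the authors' implementation:

```python
import math
from dataclasses import dataclass, field

@dataclass
class GeometricNetworkModel:
    """Geometry (node coordinates) plus topology (links with Euclidean costs)."""
    nodes: dict = field(default_factory=dict)   # node_id -> (x, y, z)
    links: dict = field(default_factory=dict)   # node_id -> {neighbour: length}

    def add_node(self, node_id, x, y, z):
        self.nodes[node_id] = (x, y, z)
        self.links.setdefault(node_id, {})

    def add_link(self, a, b):
        length = math.dist(self.nodes[a], self.nodes[b])  # link cost
        self.links[a][b] = length
        self.links[b][a] = length
        return length

# Example: a room node connected to a corridor node on the same floor.
nm = GeometricNetworkModel()
nm.add_node("room_12", 26.0, 14.5, 3.0)
nm.add_node("corridor_3", 26.0, 10.0, 3.0)
nm.add_link("room_12", "corridor_3")
```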
3. Implementation of the System

The system contains two modules. The first module automatically generates the BM and NM from the architectural plan, and the second module is the analysis module, which computes the best path in a modeled building and provides 3D visualization and analysis of the path. By using the method developed here, it is possible to generate the 3D BM and NM based on the floor plan of the building. Architectural plans of buildings are generally either drawn on paper as blueprints or stored in vector format. Blueprints can be stored in raster format after being scanned. Whether the floor plan information is in raster or vector format, or even drawn on a form by a user, this method can be applied in any case. The key point is that images of the floor plan should be pictured on the form of the user interface. The models are then generated automatically by analyzing and processing these images. Therefore, the method is basically an image processing method. The method depends on generating the middle points of the lines. The BM and NM are generated and stored in a geo-database after a series of processes including generating and vectorizing the lines and optimization.
3.1. Generating 3D Building Model

By using the method described in the previous section, the BM is generated in the first stage. As shown in Figure 2, first the lines are determined, and then the lines and points that together constitute the floor plan are obtained by determining intersection points in an optimization process (Figure 2b). Based on user-defined data such as floor numbers and heights, the 3D BM is generated after designing each floor automatically by assigning different elevation values to the floor plan (Figure 2c-d).
Fig. 2. (a) Floor plan of building. (b) Obtaining points and lines. (c) Designing the floors. (d) 3D Building Model.
3.2. Generating 3D Network Model

3.2.1. Generating Corridors
In the NM, the corridor is the main backbone of the floor plan since it connects the rooms with all the other entities in the building. Therefore, determining and modeling the corridor is very important. Once the corridor has been indicated by the user, the algorithm leaves only the corridor in the image and then determines its middle lines based on the method described in the previous section. After a number of processes on the selected middle lines, the topological model and the coordinates of the corridor are found, as seen in Figure 3.
Fig. 3. Generating Corridors.
3.2.2. Generating the Rooms
In determining the rooms, the corridor is excluded from the image and only the rooms are left. Then, by applying the method, the middle points of the rooms are determined and defined as the nodes which represent the rooms (Figure 4).
Fig. 4. Generating the Rooms.
3.2.3. Integrating Corridors with Rooms
After locating the nodes that indicate the corridor and the rooms, the user interactively points out which room nodes connect with which corridor nodes, and the geometric network for the 2D floor plan is generated (Figure 5a). After the stairs (or elevator) nodes are indicated by the user, the network is automatically designed by assigning different elevation values for each floor based on data such as floor number and floor height, and then the 3D NM is generated, as seen in Figure 5b.
Fig. 5. Generating Network Model.
3.3. Geo-Database

After generating the BM and NM as explained in the previous section, they are transferred to a geo-database to be stored. This section describes the structure of this database. The BM contains two different tables: point and line tables (Figure 6). The point table contains all the intersection points, while the line table contains the lines which are generated by connecting the intersection points. The lines are used to draw the 3D model of the building on the screen. The BM is generated only for visualization of the building. When a user looks at the building model, he can easily analyze the network model, which is otherwise difficult to comprehend visually.
Fig. 6. Points and Lines tables of BM in the Geo-Database.
The NM also contains two different tables: nodes and links tables (Figure 7). Even though the table structures in the NM look like the ones in the BM, each node is unique and its coordinates are stored in the node table of the NM. The network can be visualized and the distance between the nodes can be computed based on the
coordinates. The link tables contain the connectivity relationships between the nodes. Besides, the cost between two nodes is stored in the link table. The cost can be travel distance computed from the coordinates of the nodes (Euclidean lengths) or travel time.
Fig. 7. Nodes and Links tables of NM in the Geo-Database.
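As a rough illustration of how such node and link tables might be filled, the snippet below uses Python's built-in sqlite3 module; the table layout and column names are invented here and are not the schema of the authors' geo-database:

```python
import math
import sqlite3

conn = sqlite3.connect("network_model.db")
conn.execute("CREATE TABLE IF NOT EXISTS nodes"
             " (node_id INTEGER PRIMARY KEY, x REAL, y REAL, z REAL)")
conn.execute("CREATE TABLE IF NOT EXISTS links"
             " (from_id INTEGER, to_id INTEGER, cost REAL)")

def add_link(conn, from_id, to_id):
    """Insert a link whose cost is the Euclidean distance between its end nodes."""
    coords = [conn.execute("SELECT x, y, z FROM nodes WHERE node_id = ?",
                           (nid,)).fetchone() for nid in (from_id, to_id)]
    cost = math.dist(coords[0], coords[1])
    conn.execute("INSERT INTO links VALUES (?, ?, ?)", (from_id, to_id, cost))
    conn.commit()
    return cost
```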
3.4. Determining the optimum path

After storing the BM and NM information in the database, the shortest (optimum) paths are generated. Once all the paths are determined by using the Dijkstra algorithm, a table is developed for each node in the network. The node tables contain the paths from a node to each and every one of the nodes in the network. Therefore, the number of node tables equals the number of nodes. Figure 8 shows the node table for node 44. This table provides the information on the shortest paths from node 44 to all the other nodes. For example, the table indicates that the shortest path from node 44 to node 29 contains the following nodes: Node: 29 - Path: 23, Node: 23 - Path: 26, Node: 26 - Path: 46, Node: 46 - Path: 43, Node: 43 - Path: 44. Therefore, the optimum path between nodes 44 and 29 follows the nodes 29-23-26-46-43-44.
Fig. 8. Optimum paths generated by using Dijkstra algorithm for Node 44.
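The shortest-path computation and the per-node predecessor tables can be sketched as follows. This is a standard binary-heap Dijkstra written for illustration; the adjacency format matches the link tables above, and the node numbers in the comments are only the example from the text:

```python
import heapq

def dijkstra(links, source):
    """Return (distance, predecessor) tables for all nodes reachable from source.

    links: dict node -> {neighbour: cost}, as stored in the link table.
    """
    dist, pred = {source: 0.0}, {source: None}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        for v, cost in links.get(u, {}).items():
            nd = d + cost
            if nd < dist.get(v, float("inf")):
                dist[v], pred[v] = nd, u
                heapq.heappush(heap, (nd, v))
    return dist, pred

def path_to(pred, target):
    """Walk the predecessor table back to the source node, e.g. 44 -> ... -> 29."""
    path = []
    while target is not None:
        path.append(target)
        target = pred[target]
    return list(reversed(path))
```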
3.5. Network Analysis

Network analysis can be performed after generating and storing the shortest path information in the database. It is possible to visualize the building model and the whole network in 3D (Figure 9a). The optimum path and cost between two user-defined entities can be determined. The optimum path can be analyzed and visualized in 3D (Figure 9b).
Fig. 9. (a) 3D visualisation of Building and Network Models. (b) 3D visualisation of optimum path between two entities.
4. Conclusions

In this study, a new method has been developed to automatically generate building and network models for buildings which are subject to 3D network analysis. By using the software designed based on the generated models, it was shown that the model data can be produced quickly, efficiently, and systematically and can be stored in a geo-database to perform network analysis.
References

Gillieron P, Merminod B (2003) "Personal navigation system for indoor applications", 11th IAIN World Congress
Lee J (2001) "3D Data Model for Representing Topological Relations of Urban Features", Proceedings of the 21st Annual ESRI International User Conference, San Diego, CA, USA
Meijers M, Zlatanova S and Pfeifer N (2005) "3D geoinformation indoors: structuring for evacuation", in: Proceedings of Next generation 3D city models, 21-22 June, Bonn, Germany, 6 p.
Pu S and Zlatanova S (2005) "Evacuation route calculation of inner buildings", in: PJM van Oosterom, S Zlatanova & EM Fendel (Eds.), Geo-information for disaster management, Springer Verlag, Heidelberg, pp. 1143-1161
Improving the Realism of Existing 3D City Models

Martin Kada, Norbert Haala, Susanne Becker
Institute for Photogrammetry, Universität Stuttgart, Germany
[email protected]
Abstract

Within the paper, a novel approach for the reconstruction of geometric details of building facades is presented. It is based on 3D point clouds from terrestrial laser scanning. By a segmentation process, the approximate boundaries of the windows are detected and a cell decomposition of the facade is created. A classification of the cells determines a symmetric window arrangement of maximum likelihood.
Introduction

The acquisition of 3D city and landscape models has been a topic of major interest and a number of algorithms have become available both for the automatic and semiautomatic data collection. Usually based on aerial data like stereo images or LIDAR, these algorithms provide an efficient way for generating area covering urban models. Although such building models already allow a number of applications, further quality improvement is required for some scenarios. A realistic visualization from a pedestrian's viewpoint is one example where the quality and amount of detail needs to be improved. This can be achieved e.g. by the additional use of terrestrial images mapped against the building facades [1] (see figure 1). Real world imagery, however, is only feasible to a certain degree. Balconies, ledges or windows e.g. will not appear realistically if oblique views are generated. This situation can only be slightly improved if rendering techniques like bump or displacement maps are used [2]. Thus, a geometric refinement of the building facades is necessary. Tools for the generation of polyhedral building models are either based on constructive solid geometry (CSG) or a boundary representation (B-Rep) approach. In the following section, the pros and cons of both approaches will be discussed. This will motivate the use of cell decomposi-
tion as our favored form of solid modeling in order to reconstruct the facade geometry of coarse building models.
Fig. 1. 3D city model of Stuttgart with terrestrially captured facade textures.
Solid Modeling

The buildings of 3D city models are generally reconstructed and represented as 3D solids. There exist a number of theoretical concepts for solid modeling, the most prominent being boundary representation (B-Rep) and constructive solid geometry (CSG).
In boundary representation, the geometry of an object is defined by its surface boundary, which consists of vertices, edges and faces. A reconstruction from airborne laser scanning data can e.g. be directly generated by triangulating the 2.5D point clouds. However, the architectural characteristics of buildings like right angles and parallel faces are not captured in such an approach. Therefore, a number of reconstruction algorithms first extract planar regions of appropriate size in a segmentation step. A polyhedral building model is then constructed from the resulting segments by intersection and step edge generation. Although numerous such approaches exist, an implementation that produces topologically correct boundary repre-
sentations is difficult [3]. This is additionally aggravated if geometric constraints, such as meeting surfaces, parallelism and rectangularity, need to be guaranteed. Regularization conditions are more easily met if constructive solid geometry is used [4]. Simple primitives are combined using Boolean operators such as union, intersection and difference. This way of modeling objects is more intuitive than specifying boundary surfaces directly. A CSG representation is also always valid, as the combination of Boolean operations yields topologically correct objects. Because simple primitives allow robust parameter estimation for error prone measurements, the CSG concept has been very popular for building reconstruction. If supplied with an appropriate set of primitives, even complex buildings can be constructed. However, the visualization and analysis of the data requires a transformation into boundary representation. This so-called boundary evaluation is conceptually not difficult, but problems of numerical precision and unstable intersection algorithms can prohibit the generation of a valid object topology [5]. In order to overcome the aforementioned problems, we demonstrate the applicability of cell decomposition as a tool for 3D facade reconstruction. Cell decomposition is a general form of spatial-partitioning representation which subdivides space into a set of simple primitives. A complex object is constructed by a set of adjoining primitives that are glued together. The gluing operation is a restricted form of a spatial union operation. As the primitives are not allowed to intersect one another, the implementation of algorithms does not suffer from numerical instabilities. In contrast to the constructive nature of the CSG representation, cell decomposition is generally used as an auxiliary representation for analytical computations [6]. However, we will demonstrate in the following chapters that cell decomposition can be an effective tool for the automatic reconstruction of topologically correct facade models from terrestrial LIDAR point clouds.
Facade Refinement by Terrestrial LIDAR

For a number of applications, 3D city models extracted from aerial data are sufficient. Some tasks, however, require an increased amount of geometric detail for the respective 3D building models. One example is the realistic visualisation as seen from a pedestrian viewpoint. As already mentioned, this can be achieved by mapping terrestrial images or displacement maps against the facades. These techniques have their limitations though, as bal-
conies, windows and doors will look disturbed when seen from oblique views or close distance. This comes from the fact that rectangular geometries cannot be substituted by 2D or 2.5D maps. A geometric refinement of the building facades is therefore necessary for an improved visual appearance. As will be demonstrated in the following sections, a realistic refinement of window objects based on cell decomposition is well suited for this task.
Pre-processing

In principle, window silhouettes can be detected from terrestrial images [7]. We, however, use a densely sampled point cloud from terrestrial laser scanning as it contains a considerable amount of geometric detail. Such data sets are generally collected from multiple viewpoints in order to allow for a complete coverage of the scene and to avoid any occlusions due to other objects. It is therefore required that a co-registration and geo-coding of the different scans is done in a pre-processing step. For this purpose, control point information is traditionally used from specially designed scan targets. Alternatively, an approximate direct geo-referencing can be combined with an automatic alignment to existing 3D building models [8]. With this information, the point clouds can be transformed into a common reference system and the relevant point measurements extracted for each facade by a simple buffer operation.
Fig. 2. 3D point cloud used for the geometric reconstruction of a building facade.
For instance, the facade points of our example data set were first selected by a buffer operation and then transformed to a local coordinate system. The reference plane was determined for this purpose from the 3D
points by a robust estimation process. Due to this mapping of the points, parts of the processing can be simplified by the use of 2D and 2.5D algorithms. Figure 2 shows the transformed LIDAR points that were originally measured with an HDS 3000 scanner at an approximate point spacing of about 4 cm.
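The buffer selection and the mapping to facade-local coordinates can be sketched as below; the function, its arguments and the 0.5 m buffer are assumptions chosen for illustration, with the facade plane supposed to be known from the coarse building model:

```python
import numpy as np

def facade_local_points(points, plane_point, plane_normal, buffer=0.5):
    """Keep points within `buffer` metres of the facade plane and express them
    in facade-local coordinates (u along the facade, v up, w = signed depth)."""
    n = plane_normal / np.linalg.norm(plane_normal)
    up = np.array([0.0, 0.0, 1.0])
    u_axis = np.cross(up, n)
    u_axis /= np.linalg.norm(u_axis)
    v_axis = np.cross(n, u_axis)
    rel = points - plane_point
    depth = rel @ n
    keep = np.abs(depth) < buffer          # the simple buffer operation
    return np.stack([rel[keep] @ u_axis, rel[keep] @ v_axis, depth[keep]], axis=1)
```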
Cell Generation

The idea of our reconstruction algorithm is to first partition a 3D object with a flat front face into 3D cells. This can either be a coarse building model or some block-like object of the same size as the facade. We then determine which cells are in the window areas and discard them, carving out the window geometry in the process. The remaining cells are for now considered to form a homogeneous facade and are therefore glued together to form the refined 3D facade model. The difficulty is finding planar delimiters from the LIDAR points that generate a good working set of cells. Because our focus is on the geometry of the windows, we need to identify the points that were measured at the window borders. We find those points by a segmentation process.
Point cloud segmentation

As can be seen in figure 2, fewer 3D points are usually measured on the facade at window areas. This is due to specular reflections of the LIDAR pulses on the glass or points that refer to the inner part of the building and were therefore cut off in the pre-processing stage. If we only consider points that lie on or in front of the estimated facade plane, then the windows are areas with no point measurements. Thus, our point cloud segmentation algorithm detects edges by these no data areas. In principle, such no data areas can also be the result of occlusions. However, if the facade was measured from different viewpoints, these occlusions can in most cases be avoided, so only a reduction of the point density can be observed in these areas. The segmentation differentiates four types of window borders: two horizontal types for the top and bottom, and two vertical types for the left and right window border. As an example, the edge points of the left border of a window are detected if no neighbor measurements to their right side can be found in a pre-defined search radius. This is necessary as no edge points would be identified otherwise. We used a search radius a little higher than the scan point distance on the facade. As can be seen in figure 3, a lot of points can be correctly identified this way, although the algo-
rithm often fails to find points at window corners. This is not a real problem, as long as there are enough points to determine the horizontal and vertical boundaries. These are depicted in figure 4 as horizontal and vertical lines, which can be estimated from the non-isolated edge points.
Fig. 3. Detected edge points at horizontal and vertical window structures.
Fig. 4. Detected horizontal and vertical window lines.
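A brute-force version of this border-point test, written for the left window border only, could look like the following; the facade-plane coordinates and the search radius are assumed inputs, and the other three border types would be handled symmetrically:

```python
import numpy as np

def left_border_points(points_uv, search_radius=0.06):
    """Flag points with no neighbour to their right within the search radius.

    points_uv: (N, 2) array of facade-plane coordinates (u horizontal, v vertical).
    The returned boolean mask marks candidate left-border edge points (a no-data
    area lies to their right). Brute force O(N^2), kept simple for clarity.
    """
    flags = np.zeros(len(points_uv), dtype=bool)
    for i, (u, v) in enumerate(points_uv):
        du = points_uv[:, 0] - u
        dv = points_uv[:, 1] - v
        has_right_neighbour = ((du > 0) & (np.hypot(du, dv) < search_radius)).any()
        flags[i] = not has_right_neighbour
    return flags
```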
Spatial-Partitioning
Each detected boundary line defines a partition plane that is perpendicular to the building facade. They are then used to intersect a cuboid that is aligned with and of the size of the facade. Its depth is two times the depth of the windows, which is available from the LIDAR measurements at the cross bars. This is detected by searching for a plane parallel to the facade by
shifting it in the plane's normal direction. The result of the partitioning of the cuboid by the planes is a set of small 3D cells.
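Since all partition planes are perpendicular to the facade, the cells can be generated as axis-aligned boxes in facade coordinates. The sketch below (names and interface invented here) illustrates the idea:

```python
from itertools import product

def partition_facade(width, height, depth, u_cuts, v_cuts):
    """Split the facade-aligned cuboid into 3D cells at the detected cut lines.

    u_cuts / v_cuts: vertical / horizontal window-boundary positions in facade
    coordinates. Returns boxes as (u0, u1, v0, v1, w0, w1) tuples.
    """
    us = [0.0] + sorted(c for c in u_cuts if 0.0 < c < width) + [width]
    vs = [0.0] + sorted(c for c in v_cuts if 0.0 < c < height) + [height]
    return [(u0, u1, v0, v1, 0.0, depth)
            for (u0, u1), (v0, v1) in product(zip(us, us[1:]), zip(vs, vs[1:]))]
```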
Classification of 3D cells

According to the general outline of our algorithm, all the generated 3D cells have to be classified into building and non-building cells. For this purpose, a binary 'point-availability-map' is generated. Within this image, which is depicted in figure 5, black pixels are grid elements where LIDAR points are available. In contrast, white pixels define raster elements with no 3D point measurements. Of course, the already extracted edge points in figure 3 and the resulting structures in figure 4 are more accurate than this rasterized image. However, this limited accuracy is acceptable since the binary image is only used for the classification of the 3D cells, as they are already created from the detected horizontal and vertical window lines. This is implemented by computing the ratio of facade to non-facade pixels for each generated 3D cell.
Fig. 5. Binary point-availability-map.
As a consequence of the relatively coarse rasterization and the limited accuracy of the edge detection, the 3D cells usually do not consist exclusively of facade pixels or window pixels. Within the classification, 3D cells including more than 70% facade pixels are defined as facade solids, whereas 3D cells with less than 10% facade pixels are assumed to be window cells. These segments are depicted in figure 6 as grey and white cells, respectively.
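The per-cell classification by pixel ratio can be written compactly. The sketch below uses the 70%/10% thresholds from the text, while the raster indexing and the function interface are assumptions made for illustration:

```python
import numpy as np

def classify_cell(point_map, cell, pixel_size):
    """Classify one 3D cell from the binary point-availability-map.

    point_map: 2D boolean array, True where LIDAR points were measured.
    cell: (u0, u1, v0, v1) footprint of the cell on the facade plane (metres).
    """
    u0, u1, v0, v1 = cell
    c0, c1 = int(u0 / pixel_size), max(int(u1 / pixel_size), int(u0 / pixel_size) + 1)
    r0, r1 = int(v0 / pixel_size), max(int(v1 / pixel_size), int(v0 / pixel_size) + 1)
    patch = point_map[r0:r1, c0:c1]
    ratio = float(patch.mean()) if patch.size else 0.0
    if ratio > 0.70:
        return "facade"
    if ratio < 0.10:
        return "window"
    return "uncertain"   # resolved later from neighbourhood constraints
```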
Fig. 6. Classification of 3D cells before (left) and after enhancement (right).
Classification Enhancements

While most of the 3D cells can be classified reliably, the result is uncertain especially at window borders or in areas with little point coverage. Such cells with a relative coverage between 10% and 70% are represented by the black segments in the left image of figure 6. For the final classification of these cells, depicted in the right image of figure 6, neighborhood relationships as well as constraints concerning the simplicity of the resulting window objects are used. As an example, elements between two window cells are assumed to belong to the facade, so two small windows are reconstructed instead of one large window. This is justified by the fact that facade points have actually been measured in this area. Additionally, the alignment as well as the size of proximate windows is ensured. For this purpose the classification of uncertain cells is defined depending on their neighbors in horizontal and vertical direction. Within this process it is also guaranteed that the merge of window cells will result in convex window objects.
Fig. 7. Integration of additional facade cells.
As it is depicted in figure 7, additional facade cells can be integrated easily if necessary. Figure 7 shows the LIDAR measurement for two
closely neighbored windows. Since in this situation only one vertical line was detected, a single window is reconstructed (figure 7b). To overcome this problem, the window object is separated into two smaller cells by an additional facade cell. This configuration is kept if it can be verified as a valid assumption, which occurs if facade points were actually measured at this position (figure 7c).
Results and Conclusion

The final result of the building facade reconstruction from terrestrial LIDAR is depicted in figure 8. For this example, window areas were cut out from a coarse model. While the windows are represented by polyhedral cells, curved primitives can also be integrated in the reconstruction process. This is demonstrated exemplarily by the round-headed door of the building.
Fig. 8. Refined facade of a 3D building model.
Within the paper, an approach for facade reconstruction based on cell decomposition was presented. The use of cell decomposition proved to be very flexible to add details to an existing building model. While in our ap-
proach windows are represented by indentations, details can also be added as protrusions to the facade. Another option is to efficiently subtract rooms from an existing 3D model if measurements in the interior of the building are available. Still there is enough room for further algorithmic improvement. However, in our opinion the concept of generating 3D cells by the mutual intersection of planes already proved to be very promising and has a great potential for processes aiming at the reconstruction of building models at different scales.
Acknowledgments

The research described in this paper is funded by the "Deutsche Forschungsgemeinschaft" (DFG - German Research Foundation). The research takes place within the Center of Excellence No. 627 "NEXUS - Spatial World Models for Mobile Context-Aware Applications" at the University of Stuttgart. The geometry of the building models is provided by Stadtmessungsamt Stuttgart (City Surveying Office of Stuttgart).
References

[1] Kada, M., Klinec, D., Haala, N.: Facade Texturing for Rendering 3D City Models. In: Proceedings of the ASPRS 2005 Annual Conference, Baltimore, USA, pp. 78-85 (2005)
[2] Bohm, J.: Terrestrial Laser Scanning - A Supplementary Approach for 3D Documentation and Animation. In: Photogrammetric Week '05, Wichmann Verlag, pp. 263-271 (2005)
[3] Rottensteiner, F.: Semi-Automatic Extraction of Buildings Based on Hybrid Adjustment using 3D Surface Models and Management of Building Data in a TIS. PhD thesis, TU Wien (2001)
[4] Brenner, C.: Modelling 3D Objects Using Weak CSG Primitives. In: Proceedings of the ISPRS Symposium, Vol. 34, Part B, Istanbul, Turkey (2004)
[5] Hoffmann, C.M.: Geometric & Solid Modelling. Morgan Kaufmann Publishers, Inc., San Mateo, CA, USA (1989)
[6] Mantyla, M.: An Introduction to Solid Modeling. Computer Science Press, Maryland, U.S.A. (1988)
[7] Mayer, H., Reznik, S.: Building Facade Interpretation from Image Sequences. In: International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, Vol. 36-3/W23 (2005)
[8] Bohm, J. & Haala, N.: Efficient Integration of Aerial and Terrestrial Laser Data for Virtual City Modeling Using LASERMAPS. IAPRS Vol. 36 Part 3/W19 ISPRS Workshop Laser scanning, pp.192-197 (2005).
Different Quality Level Processes and Products for Ground-based 3D City and Road Modeling

Bahman Soheilian 1, Olivier Tournaire 1,2, Lionel Penard 1, Nicolas Paparoditis 1
1 Institut Geographique National, Laboratoire MATIS - 4, avenue Pasteur, 94165 Saint-Mande Cedex, France - firstname.lastname@ign.fr
2 Universite de Marne-la-Vallee, Laboratoire Geomateriaux et Geologie de l'Ingenieur - 5, Bd Descartes, 77454 Marne-la-Vallee, France
Abstract

This paper proposes ideas and some solutions for providing different quality level processes and products of a ground-based Mobile Mapping Data Collection System (MMDCS) for 3D city and road modelling. The primary products are image-based databases with different georeferencing qualities. Then cooperative applications of terrestrial and aerial images for higher level processes such as road-mark extraction and building reconstruction are discussed. We will also show that an image database has great potential to improve the quality of positioning in autonomous navigation applications.

Keywords: 3D city modelling, Mobile mapping, Georeferenced image database, Road modelling, Facade modelling, Autonomous navigation.
1. Introduction

There is a growing need for 3D city and road models for very various applications: city planning, wave propagation analysis, architectural state documentation, realistic gaming, autonomous navigation, traffic monitoring and virtual tourism projects. Since the beginning of the 90s, IGN's MATIS laboratory has been conducting research in the field of fully automatic 3D building reconstruction from digital aerial images. Production results with the developed tools are very promising, especially from an automation rate point of view [1]. This
automation is absolutely necessary to provide high quality products with lower prices and higher updating cycles. Most of the research efforts in 3D reconstruction and topographic object extraction in the recent past years have gone into building reconstruction. Nevertheless, for many applications, roads, vegetation, and even building facades have to be modelled with a higher level of detail. Even though new digital aerial imagery sensors potentially allow centimetric ground pixel sizes, there is no real need for such rich information on buildings except for extracting roof superstructures (chimneys, dormer windows). Photogrammetric ground-based acquisition systems together with Lidar devices are much more suited than aerial systems to model facades and streets. What should an ideal 3D city and road model contain? Indeed, the needs of users are very different in terms of database contents, level of detail, and relative and absolute accuracies. To satisfy all possible customers, a very efficient and simple product would be to provide a database of georeferenced terrestrial stereopairs covering the whole city. This would mean that customers, with adapted software that already exists at low prices (e.g. Photomodeller, Real Viz), could then reconstruct a model of the scene adapted to their needs and budget. Nevertheless, some specific objects can be extracted in a systematic way (e.g. pavements, road marks, facade geometry).
Fig. 1. Horizontal base images (left) - vertical base images (middle) - The Stereopolis Mobile Mapping System (right)
IGN has developed the Stereopolis Mobile Mapping System (MMS) at the MATIS laboratory for generating the georeferenced image database [2]. Stereopolis is equipped with three pairs of 4000 by 4000 CCD cameras (a horizontal stereo-base for roads and two vertical ones for facades) and
georeferencing devices (GPS/INS) (see figure 1). The system provides good imaging geometry and coverage of the object space. Most MMSs are based on direct georeferencing devices like GPS/INS [3]. However, in dense urban areas GPS masks and multi-paths corrupt the quality of measurements and provide only 50 cm to 1 m absolute positioning accuracy. In order to satisfy all the users, we provide a fine georeferencing by post-processing at the best possible accuracy (5 to 10 cm absolute accuracy). We define as level I.A product of our system the directly georeferenced image database with 50 cm to 1 m absolute accuracy (decimetric). We define as level I.B product of our system the photogrammetrically processed georeferenced image database with 5 to 10 cm relative and absolute accuracies (sub-decimetric). In this paper, we will show how these two databases I.A and I.B can be used for some modelling and to generate higher level products, e.g. II.A, II.B, etc., integrating a city and road modelling process. These products can be raster, vector (3D features) or both, such as 3D textured models for instance. Section 2 provides a brief introduction to our fine georeferencing strategy. Section 3 describes the cooperation of aerial imagery and fine georeferenced terrestrial images to generate automatically a 3D road-mark database. Section 4 describes a potential application of our image database for an autonomous navigation application. Section 5 explains the application of the fine georeferenced terrestrial image database for texturing buildings' facades.
2. Fine terrestrial images georeferencing

As mentioned before, the direct georeferencing devices provide only a decimetric absolute positioning. So, for sub-decimetric georeferencing accuracy, our idea is to take benefit from the good absolute position given by the aerial images. Indeed, a classical bundle adjustment performed for the aerial image orientation leads to an absolute accuracy better than 10 cm for a pixel size of nearly 20 cm. If we are able to match corresponding parts of both terrestrial and aerial images, we will have the aerial image absolute positioning accuracy for the terrestrial image, thus leading to a sub-decimetric pose estimation for the MMS. The matching process uses the images as the key frame for refining the position by creating orthophotos from both points of view and trying to find the best absolute position by maximizing a multi-image correlation score [4].
We first select in the terrestrial images a contrasted road area. Then we compute the corresponding DSM and orthoimage (figure 2, left). Then, the DSM and the orthoimage are under-sampled at the aerial resolution (figure 2, middle, right) and orthoimages from multiple aerial images are computed using this new DSM (figure 3).
Fig. 2. Terrestrial orthoimage at full resolution (left) - Under-sampled DSM (middle) - Under-sampled orthoimage (right).
An iterative process is then launched to find the best position for the vehicle by maximizing a correlation score. The DSM is moved in object space with different offsets (dx, dy, dz) and the related orthoimages are computed and correlated to obtain a similarity measure. The best score gives the best position for the DSM, thus leading to the best absolute position for the vehicle [5]. This process provides an absolute positioning accuracy better than 10 cm and constitutes the fine georeferenced image database.
Fig. 3. Three orthoimages computed for a given offset (dx, dy, dz) from three different aerial images.
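The offset search can be pictured as a small grid search maximizing a normalized correlation score. In the sketch below, the callback render_ortho and the step settings are hypothetical placeholders, not the actual interface of the authors' system:

```python
import numpy as np
from itertools import product

def ncc(a, b):
    """Normalized cross-correlation between two equally sized orthoimage patches."""
    a = (a - a.mean()) / (a.std() + 1e-9)
    b = (b - b.mean()) / (b.std() + 1e-9)
    return float((a * b).mean())

def best_offset(render_ortho, aerial_orthos, steps=5, spacing=0.1):
    """Try (dx, dy, dz) offsets and keep the one with the highest mean correlation.

    render_ortho(dx, dy, dz) is assumed to return the orthoimage computed from
    the DSM shifted by that offset; aerial_orthos are the aerial orthoimages.
    """
    offsets = [s * spacing for s in range(-steps, steps + 1)]
    best, best_score = (0.0, 0.0, 0.0), -np.inf
    for dx, dy, dz in product(offsets, repeat=3):
        patch = render_ortho(dx, dy, dz)
        score = float(np.mean([ncc(patch, aerial) for aerial in aerial_orthos]))
        if score > best_score:
            best, best_score = (dx, dy, dz), score
    return best, best_score
```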
3. Road mapping applications

Road network extraction has been an active field of research for many years. Indeed, it is an interesting topic for road database update or creation [6]. Some strategies use multiple clues (vehicles, road-marks ...) for detecting and reconstructing roads [7], but accuracy remains poor. We have focused on the road-marks because they give much helpful information (topological, semantic and geometric) on the road network, e.g. number of circulation lanes, circulation ways, road width.
In this section we will first describe our strategy for road-mark extraction from aerial images, and then we will explain how these extracted road-marks can be used as ROIs (Regions Of Interest) to perform a finer reconstruction of the same road-marks using our terrestrial sub-decimetric georeferenced image database.
3.1. Aerial road-marks reconstruction

Our strategy, as described in [8], uses a geometric approach combined with perceptual grouping to obtain a fine localisation of the road-marks. Some external knowledge is also introduced to constrain the detection of specific objects. Some results on a zebra-crossing and a dashed lines network are shown in figures 4 and 5. The algorithm has been evaluated using a reference database of 114 stripes surveyed with GPS and a total station; 95% of the stripes are detected and well reconstructed.
Fig. 4. 20 cm ground pixel size aerial images of Amiens, France (top) - 3D reconstruction of the zebra-crossing (bottom left) - Projection of the 3D reconstruction on an image (bottom right).
Fig. 5. Dashed lines detection on a 20 cm ground pixel size image.
3.2. Terrestrial road-mark reconstruction

As explained before, the reconstructed road-marks from aerial images provide ROIs to cut down the number of terrestrial images to be processed. We have focused firstly on zebra-crossings. The reconstruction is performed by a contour matching with a dynamic programming approach between the images taken by our MMS horizontal stereo-base. This provides 3D reconstructed contours. A model-based approach that uses some a priori known specifications on the bands (width and minimal length) is used to generate a quasi parallelogram model for each band. The method provides 1 cm relative accuracy [9]. The absolute accuracy, depending on the georeferencing quality, is better than 10 cm (see section 2). The detection rate is about 92%. Figure 6 illustrates some results. We are now working on an adaptation of the same algorithm for lane reconstruction.
Fig. 6. Image projection of reconstructed zebra-crossing in stereo images (left and middle) - 3D reconstructed model (right).
4. Autonomous navigation

The most important component of autonomous navigation is permanent and reliable localization. Indeed, for autonomous navigation, a robot needs to know where it is located and what its trajectory relative to its environment is. This information is generally given to the robot by first driving it manually on a trajectory in a learning mode. The robot takes images along this trajectory. These images are then positioned (online or offline) and constitute a learning image database. The robot can then localize itself relative to the same trajectory autonomously. The online positioning is performed by comparing the actual image (taken in autonomous navigation mode) with the nearest image in the learning mode image database. Two main approaches exist. The first one, as explained in [10], estimates directly the position by a 2D matching of interest points between the image taken in autonomous navigation mode and the nearest image of the learning mode. In this approach
our sub-decimetric georeferenced image database can provide the needed learning mode data for a desired trajectory with a higher positioning accuracy. The second one creates a 3D map of the scene (3D interest points) from the georeferenced image database. The position of an image taken in autonomous navigation mode is then computed by a 3D resection using the 3D interest points and their image positions [11]. Thanks to the very high positioning accuracy of our sub-decimetric georeferenced images, a more accurate 3D map can be generated (see figure 7, left and middle). This can considerably improve the quality of positioning in autonomous navigation.
Fig. 7. 3D mapping (learning mode) by matching interest points between the sub-decimetric positioned stereo images of our MMS (left, middle) - example of an image in autonomous navigation mode (right). Even though false matches can occur, the majority of matches are correct, and the false ones can be handled by robust techniques.
Finally, we believe that road-marks are very good landmarks to be used in autonomous navigation as they are available nearly all the time on the vehicle trajectories (roads). In addition, their online detection is quite easy (see [12]) and they provide good 3D positioning accuracy. In this case, either the aerial reconstructed road-mark database or the terrestrial one can be used, depending on the accuracy needed for navigation.
5. Facade applications
As mentioned before, the aerial imagery does not provide accurate facade geometry and texture. In order to realize realistic models suitable for walkthroughs, it is necessary to add images taken from the ground. MMS are well adapted to automate the collection of this data. One of the key points is to ensure the geometric coherence of terrestrial and aerial data, that is to say to register the terrestrial views. Depending on the desired result, different qualities of georeferencing have to be achieved.
5.1. Direct texturing

A basic method directly uses the result of GPS/INS fusion as the correct image registration solution, and the textures are directly created. Position and orientation errors imply offsets, often visible near the edges of facades. An illustration is given on figure 8. But such a product can be easily available on a wide area, without any development. A better result could be achieved if the terrestrial images were aligned on the 3D model. This has already been investigated by some authors such as [13]. Their methods rely on the fact that the initial pose is good enough to ensure that building contours in the image are close to the projection of the 3D model in the same image. A transformation is computed by matching image contours with projected contours, and after this step a refined orientation is available as well as well-adapted textures. Such a product gives satisfying results, and can be completely automated. Nevertheless, 3D models are generalized representations of reality (figure 8 shows an example of a generalized gutter line), and facades are represented as planes. This is clearly not always the case in practice. Corrections due to the facades' relief are not applied to the textures: these images are not ortho-rectified and thus artifacts can appear.
5.2. Facade geometry reconstruction and orthorectification

A top level result would consist in producing the ortho-rectified texture. This requires the 3D shape of the facades to be recovered. In the following, we present our framework to produce 3D facade models and ortho-rectified textures; a previous paper [14] gives more details. This application requires all camera parameters to be known with very high precision. Extrinsic parameters are calculated with a bundle adjustment, but this requires a good initial solution to ensure the convergence of the iterative optimization. This is the reason why a sub-decimetric georeferencing is needed as initial solution.
5.2.1. Digital Facade Models (DFM)
We aim at reconstructing the topographic surface of the buildings' facades. The facade images acquired by the MMS give a sufficient overlap: each facade point is seen in at least 4 images. The DFM is calculated by a multi-image correlation-based method. It consists in dividing the 3D space into a set of voxels, whose size depends on the desired accuracy. Then, the best depth is calculated for each position on the facade plane by evaluating
a multi-image correlation score. It was previously shown [14] that better results could be obtained with an image-based geometry rather than an object-based geometry. In order to reduce noise on the resulting surface, a regularization step is performed, which is an adaptation of the graph cut algorithm developed in [15]. The whole process follows a multi-resolution framework, thus speeding up the algorithm. Results are shown on figure 9.
5.2.2. Ortho-rectification With a DFM, an ortho-rectified texture can be produced (see figure 10).
Fig. 8. Textures mapped to the wrong building due to pose errors (ellipse) and gutter line generalization. Fig. 9. Original image (left) and shaded facade model (right).
Fig. 10. Ortho-rectified texture of the whole building
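The ortho-rectification step itself can be summarized by a small sketch: for every cell of a regular grid on the facade plane, the DFM supplies the relief, the corresponding 3D point is projected into a source image with the known camera model, and the colour is sampled. The grid spacing, the `project` function and the nearest-neighbour sampling of an RGB source image are assumptions made only for illustration.

```python
import numpy as np

def orthorectify_facade(dfm, image, project, gsd=0.01, origin=(0.0, 0.0)):
    """Minimal ortho-rectification sketch: walk a regular grid on the facade
    plane, take the depth from the DFM, project the 3D point into a source
    image with the known camera model `project(X, Y, Z) -> (x, y)` and sample
    the colour (nearest neighbour for brevity)."""
    h, w = dfm.shape
    ortho = np.zeros((h, w, 3), dtype=image.dtype)
    for j in range(h):
        for i in range(w):
            X = origin[0] + i * gsd        # facade-plane abscissa of the cell
            Y = origin[1] + j * gsd        # facade-plane height of the cell
            Z = dfm[j, i]                  # relief (depth) from the DFM
            x, y = project(X, Y, Z)
            xi, yi = int(round(x)), int(round(y))
            if 0 <= xi < image.shape[1] and 0 <= yi < image.shape[0]:
                ortho[j, i] = image[yi, xi]
    return ortho
```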
6. Conclusion We have presented the different levels of products of our MMS. The two primary-level products are terrestrial image databases with different georeferencing qualities. We have also shown how to use aerial and terrestrial images cooperatively to reconstruct road-mark and facade databases as higher-level products, and we have discussed the quality of the results. Users can choose one product or another depending on their needs in terms of quality and cost. We believe that the sub-decimetric georeferenced image database is an interesting product because it enables users to access a rich
source of localized information. So users can reconstruct all their objects of interest to constitute a city model adapted to their application.
References
1. Flamanc, D., Maillet, G., Jibrini, H., 3D City Models: An Operational Approach Using Aerial Images and Cadastral Maps. In IAPRS, Vol. 34, Part 3/W8, Munich (1998).
2. Paparoditis, N., Bentrah, O., Penard, L., Tournaire, O., Soheilian, B., Deveau, M., Automatic 3D recording and modeling of large scale cities: the ARCHIPOLIS project. In Recording, Modeling and Visualisation of Cultural Heritage, Ascona (2005).
3. Zhao, H., Shibasaki, R., Reconstructing textured CAD model of urban environment using vehicle-borne laser range scanners and line cameras. In Proceedings of ICVS'01, pages 284-297. Springer-Verlag (2001).
4. Paparoditis, N., Thom, C., Jibrini, H., Surface reconstruction in urban areas from multiple views of aerial digital frames. IAPRS, Vol. 33, Part B3 (supplement): 43-50 (2000).
5. Tournaire, O., Soheilian, B., Paparoditis, N., Towards a sub-decimetric georeferencing of ground-based mobile mapping systems in urban areas: matching ground-based and aerial imagery using road-marks. IAPRS, Proceedings of ISPRS Commission I Symposium, Paris, France (2006).
6. Zhang, C., Updating of cartographic road databases by image analysis. PhD Thesis, Institut für Geodäsie und Photogrammetrie, Zürich (2003).
7. Hinz, S., Automatische Extraktion urbaner Straßennetze aus Luftbildern. Dissertationen, Deutsche Geodätische Kommission (2004).
8. Tournaire, O., Paparoditis, N., Jung, F., Cervelle, B., 3D road-marks reconstruction from multiple calibrated aerial images. IAPRS & SIS, Proceedings of ISPRS Commission III Symposium, PCV06, Bonn, Germany (2006).
9. Soheilian, B., Paparoditis, N., Boldo, D., Rudant, J.P., 3D reconstruction of zebra crossings from stereo rig images of a ground-based mobile mapping system. Proceedings of ISPRS Commission V Symposium, 'Image Engineering and Vision Metrology', Dresden, Germany (2006).
10. Remazeilles, A., Chaumette, F., Gros, P., Robot motion control from a visual memory. In IEEE ICRA'04, Vol. 4, pp. 4695-4700 (2004).
11. Royer, E., Bom, J., Dhome, M., Thuilot, B., Lhuillier, M., Marmoiton, F., Outdoor autonomous navigation using monocular vision. IEEE/RSJ IROS 2005, pages 3395-3400, Edmonton, Canada (2005).
12. Bertozzi, M., Broggi, A., GOLD: A parallel real-time stereo vision system for generic obstacle and lane detection. IEEE Trans. on Image Processing 7(1), 62-81 (1998).
13. Böhm, J., Haala, N., Kapusy, P., Automated appearance-based building detection in terrestrial images. IAPRS, Volume XXXIV, Part 5, ISPRS Commission V Symposium, Corfu, 491-495 (2002).
14. Penard, L., Paparoditis, N., Pierrot-Deseilligny, M., 3D building facade reconstruction under mesh form from multiple wide angle views. Proceedings of the ISPRS Working Group V/4 Workshop 3D-ARCH, Venice, Italy (2005).
15. Roy, S., Cox, I., A maximum-flow formulation of the n-camera stereo correspondence problem. International Conference on Computer Vision (1998).
Texture Generation and Mapping Using Video Sequences for 3D Building Models
Fuan Tsai 1, Cheng-Hsuan Chen 1, Jin-Kim Liu 2 and Kuo-Hsing Hsiao 2
1 Center for Space and Remote Sensing Research, National Central University, Zhong-Li, Taiwan. 2 Energy and Resources Laboratory, Industrial Technology Research Institute, Hsin-Chu, Taiwan
Abstract A three-dimensional (3D) building model is one of the most important components in a cyber city implementation and application. This study developed an effective and highly automated system to generate and map (near) photo-realistic texture attributes onto 3D building models using digital video sequences. The system extracted frames with overlapped textures of building facades and integrated them to produce complete texture images. Interest points on the extracted video frames were identified using corner detectors and matched with normalized cross-correlation for seamless stitching. Shadows and foreign objects were identified and removed with morphological algorithms and mended by mirroring neighborhood textures. Completed mosaicked texture images were mapped onto corresponding model facets by linear or parametric transformation. Test examples demonstrate that the developed system can effectively generate seamless photo-realistic texture images and correctly map them onto complicated 3D building models with high efficiency. Keywords: texture mapping; video mosaic; 3D building model; cyber city; visualization
1. Introduction The cyber city is one of the emerging topics in 3D geoinformatics. A cyber city resembles the layouts, activities and functionalities of a real city in a computer-generated environment. In the real world, buildings are the most ubiquitous objects. Similarly, the building model is one of the most important
components in a cyber city implementation. The geometric outlines of buildings can be reconstructed effectively from remote sensing data using a variety of algorithms [1-3]. However, most currently reconstructed models do not have sufficient or accurate texture attributes for the exteriors or facades of buildings. The lack of accurate textures not only makes 3D building models less realistic, it may also fail to provide necessary information in certain cyber city applications. Shading models with pseudo textures is a commonly adopted approach in visualization [4], but it does not represent true building textures. For realistic texture mapping of 3D building models, using images acquired with digital cameras is a viable approach [5-9]. However, a large-scale cyber city implementation would require a significant number of images for complete texture mapping. Digital video (DV) is a more efficient source of raw texture images. Therefore, this study developed a highly automated system for photo-realistic texture generation and mapping from video sequences for cyber city visualization and applications.
2. Texture mapping of 3D building models Video data have been used in virtual environments for visual effects or city model visualization [10-12]. To use DV for realistic texture mapping of 3D building models, a few issues must be addressed, including: • Photogrammetric correction: establishing relationships between texture images and models. One way to accomplish this task is to acquire camera parameters with additional equipment [13]. A few algorithms have been developed to reconstruct original or related camera parameters, projection geometry and pose, such as using correlations of overlapped images [5], vanishing points [14], or vision-based modeling [15]. Another approach for photogrammetric correction of raw texture images is to register a group of correlated images to a common image space [8, 16]. • Mosaicking: stitching multiple images to generate a complete texture that is continuous in geometry and color shading. Photogrammetric correction should have addressed the geometric issues, but the colors and shadings of all frames must also be integrated. Histogram matching or equalization is popular for adjusting the color distributions of images into the same range [17]. However, directly applying it to close-range DV data may cause serious misrendering. Weighted blending algorithms, such as alpha blending (feathering) [18], pyramid blending [19] or gradient domain stitching [20], are more likely to generate smooth and seamless results.
• Removal and mending of non-interested regions: shadows and regions blocked by other buildings or by foreign objects such as trees and cars should be identified and removed from texture mosaics and mended with correct or similar texture blocks. • Transformation: mapping generated texture mosaics to model facets. This is a transformation from image or texture space to object space. Depending on the photogrammetric correction, this can be done by projective texture mapping [5, 7, 14, 15] or by linear transformation in either object space or parametric space as described in [8].
3. Methodology The developed texture generation and mapping system can be divided into several phases, as described below.
3.1. Preprocessing The raw texture images were collected using a Sony digital video camcorder (Sony DCR-DVD201). Video frames were extracted from the video sequences to generate a texture mosaic. Each extracted frame image had a 40% to 60% overlap with the adjacent frame image.
3.2. Image registration This step transforms a group of texture frames of the same facade to a common image space. Two types of registration were performed. Regular (planar) texture images were registered to the building model directly. The registration was done by interactively identifying four roof and ground corners on the image. Their coordinates were obtained from building layout CAD files and digital terrain models. A few points in between were then interpolated from roof-ground point pairs. These points were then used as tie points to register the image to the building model using the eight-parameter algorithm and least-squares fitting. The registration result is shown in Fig. 1.
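The eight-parameter algorithm mentioned above is the standard 2D projective transformation. As a hedged illustration (not the authors' exact code), the sketch below estimates the eight parameters from four or more roof/ground tie points by linear least squares and then applies the transform.

```python
import numpy as np

def fit_projective(src, dst):
    """Estimate the eight parameters of a 2D projective transform
    x' = (a*x + b*y + c) / (g*x + h*y + 1),  y' = (d*x + e*y + f) / (g*x + h*y + 1)
    from >= 4 point pairs (src -> dst) by linear least squares."""
    A, rhs = [], []
    for (x, y), (xp, yp) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -x * xp, -y * xp]); rhs.append(xp)
        A.append([0, 0, 0, x, y, 1, -x * yp, -y * yp]); rhs.append(yp)
    params, *_ = np.linalg.lstsq(np.asarray(A, float), np.asarray(rhs, float), rcond=None)
    return params  # (a, b, c, d, e, f, g, h)

def apply_projective(params, pts):
    """Map image points into the model (object) plane with the fitted parameters."""
    a, b, c, d, e, f, g, h = params
    out = []
    for x, y in pts:
        w = g * x + h * y + 1.0
        out.append(((a * x + b * y + c) / w, (d * x + e * y + f) / w))
    return out
```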
Fig. 1: Registration of a regular-shaped facade to model.
For irregular-shaped (non-planar) facades, a two-step polynomial fitting registration algorithm [8] was applied to transform them to a common image space (but not the object space).
3.3. Mosaicking First, the Harris Corner Detector [20] and non-maximum suppression were applied to each frame to generate interest points (Fig. 2). Then, tie points were identified by matching the interest points with a simple but effective algorithm. Since adjacent frames in a video sequence had almost identical viewing conditions, the overlapped region of two frames should have high correlation. Examining a pair of extracted frames at a time, the relative displacements (in image coordinates) of the two frames should be identical whether the left or the right frame is used as the reference. This can be used to preliminarily match the detected corner points. In addition, because the two frames had 40% to 60% overlaps, a Normalized Cross Correlation (NCC) operation was carried out on every pair of adjacent frames, using the ending 40% of the left frame as the target window, the whole image of the right frame as the search window and 0.9 as the correlation threshold, to further identify highly correlated (matched) points as the tie points. Fig. 3 shows an example of the matching result.
Fig. 2: Detected corner points.
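A minimal sketch of this matching step is given below. It assumes the Harris corners have already been detected (as integer pixel coordinates) and simply pairs corners in the overlap region by maximum NCC over small windows, keeping pairs above the 0.9 threshold mentioned above; the window size, the 60% overlap boundary and the lack of an explicit displacement-consistency check are illustrative assumptions, not the authors' exact parameters.

```python
import numpy as np

def ncc(a, b):
    """Zero-mean normalized cross-correlation of two equally sized patches."""
    a = a - a.mean(); b = b - b.mean()
    return float((a * b).sum() / (np.sqrt((a * a).sum() * (b * b).sum()) + 1e-12))

def match_corners(left, right, corners_left, corners_right, win=7, threshold=0.9):
    """Match detected corner points between two adjacent grayscale frames.
    Corners in the ending ~40% of the left frame are compared against all
    right-frame corners; a pair is kept only if its NCC exceeds the threshold."""
    h, w = left.shape
    half = win // 2
    overlap_x = int(0.6 * w)           # corners to the right of this lie in the overlap
    matches = []
    for (xl, yl) in corners_left:
        if xl < overlap_x or not (half <= yl < h - half and xl < w - half):
            continue
        patch_l = left[yl - half:yl + half + 1, xl - half:xl + half + 1]
        best, best_pt = -1.0, None
        for (xr, yr) in corners_right:
            if not (half <= xr < right.shape[1] - half and half <= yr < right.shape[0] - half):
                continue
            patch_r = right[yr - half:yr + half + 1, xr - half:xr + half + 1]
            s = ncc(patch_l, patch_r)
            if s > best:
                best, best_pt = s, (xr, yr)
        if best >= threshold and best_pt is not None:
            matches.append(((xl, yl), best_pt, best))
    return matches  # candidate tie points; a consistent-displacement check can prune outliers
```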
Fig. 3: Interest point matching using image displacement criteria and NCC.
After tie points were identified in all frames, they were merged together to form a texture mosaic. The blending algorithm demonstrated in [8] was
Fig. 4: Mosaicking of two overlapped frames.
used to adjust the colors and shadings in overlapped areas. A mosaic of two frames is displayed in Fig. 4. As displayed in the figure, the mosaic is continuous in geometry and seamless in colors and shadings.
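For readers unfamiliar with weighted blending, the following is a generic linear feathering (alpha blending) sketch for two already registered, horizontally adjacent frames. It is only an illustration of the principle; the actual blending algorithm used here is the one of [8], which may differ in detail.

```python
import numpy as np

def feather_blend(left, right, overlap):
    """Blend two registered, horizontally adjacent images of equal height with a
    linear feather (alpha ramp) across an overlap of `overlap` pixels."""
    h, wl = left.shape[:2]
    wr = right.shape[1]
    out_w = wl + wr - overlap
    mosaic = np.zeros((h, out_w) + left.shape[2:], dtype=float)
    mosaic[:, :wl - overlap] = left[:, :wl - overlap]   # left-only region
    mosaic[:, wl:] = right[:, overlap:]                 # right-only region
    alpha = np.linspace(1.0, 0.0, overlap)              # weight of the left image
    alpha = alpha[None, :, None] if left.ndim == 3 else alpha[None, :]
    mosaic[:, wl - overlap:wl] = alpha * left[:, wl - overlap:] + (1 - alpha) * right[:, :overlap]
    return mosaic.astype(left.dtype)
```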
3.4. Removal and mending of non-interested regions The developed system utilized a Greenness Index (GI) and morphological operations to remove non-interested regions. GI was used to separate areas blocked by green vegetation, while morphological "close" (i.e. dilation then erosion) and "bottom-hat" (i.e. subtracting the original from the closed image) operations segmented texture images according to a specified structur-
ing element. The removal of non-interested regions using these algorithms is demonstrated in Fig. 5; a sketch of this masking idea is given after this paragraph. This study also developed a self-mending algorithm. Most building facade textures have repeated patterns (e.g. windows). Therefore, it is possible to identify the areas of repeated patterns and their mirroring axis, so the removed regions can be refilled by mirroring correct texture blocks. The mirroring axis was also identified using a combination of image morphology and region growing. Fig. 6 shows the mirroring axis of a complete mosaic and the patched-up image. After texture mosaics of all building facades were generated and patched up, they were then mapped onto corresponding model facets.
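The sketch below illustrates the masking idea with a simple Greenness Index and a grey-scale close/bottom-hat, using SciPy's morphology routines. The GI formula, the thresholds and the structuring-element size are assumptions for illustration, not the values used in this study.

```python
import numpy as np
from scipy import ndimage

def noninterest_mask(rgb, gi_threshold=0.05, struct_size=15):
    """Flag non-interest regions in a facade texture: green vegetation via a
    simple Greenness Index, plus blocked/shadowed areas via a grey-scale
    bottom-hat (closing minus original) with a square structuring element."""
    rgb = rgb.astype(float) / 255.0
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    gi = g - 0.5 * (r + b)                       # how much green exceeds red/blue
    vegetation = gi > gi_threshold

    grey = rgb.mean(axis=2)
    closed = ndimage.grey_closing(grey, size=(struct_size, struct_size))
    bottom_hat = closed - grey                   # morphological "close" minus original
    blocked = bottom_hat > bottom_hat.mean() + 2 * bottom_hat.std()

    return vegetation | blocked                  # True where the texture must be mended
```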
Fig. 5: Removal of non-interested regions.
Fig. 6: Mirroring axis of a texture mosaic (top) and the result after patching up non-interested regions.
4. Experimental Results The developed texture generation and mapping system was applied to a block on the NCU campus to test its performance. In addition to the texture images acquired using DV, 3D polyhedral building models and DTM data were obtained for the experiment. Fig. 7 displays two perspective views of the testing area. Fig. 8 simulates a street view of the same area. As demonstrated in these figures, the developed algorithms successfully generated texture images for building facades and correctly mapped them onto corresponding model facets. The correction and mosaicking algorithms were effective in generating texture mosaics with continuous geometric outlines and seamless color shadings. Most of the non-interested regions were removed using the GI and morphological operations and patched up reasonably. (Only parts of the lower texture areas are still blocked by trees or other small objects.) More importantly, the developed system requires little human interaction except when identifying the initial four tie points for photogrammetric corrections and measuring the image displacements when adjusting irregular-shaped facade textures. The rest of the algorithms are highly automatic.
5. Conclusions This study developed a highly automated system to perform photo-realistic texture generation and mapping using video sequences for 3D building models. The system employed corner detection algorithms to identify interest points. The interest points were then matched based on an image displacement restriction and normalized cross correlation measures. The mosaicking algorithms were effective in generating complete texture images that are continuous in both the geometry and color domains. Non-interested regions could also be removed automatically with GI and morphological operations and then mended by mirroring correct texture blocks, where the mirroring axis on each mosaic was also identified automatically using image morphology and region growing. The resultant 3D building models contained more complete and accurate attributes and had photo-realistic appearances. More importantly, the algorithms were more efficient compared to photograph-based texture generation and mapping systems. This should be a valuable addition to large-scale cyber city implementations and applications.
Fig. 7: Two perspective views of the test area.
Fig. 8: Simulation of a street view in the test area.
Acknowledgment This investigation was partially supported by the NCU-ITRI Joint Research Center under Project No. NCU-ITRI 950304 and National Science Council under Project No. NSC-94-2211-E-008-031.
References
[1] Rau, J-Y and L-C Chen, 2003, Robust reconstruction of building models from three-dimensional line segments. PE&RS, 69(2), pp. 181-188.
[2] Vosselman, G. and S. Dijkman, 2001, 3D building model reconstruction from point clouds and ground plans. Int'l Archives of Photogrammetry and Remote Sensing, XXXIV-3/W4, pp. 37-43.
[3] Chen, L-C, T-A Teo, J-Y Rau, J-K Liu and W-C Hsu, 2005, Building reconstruction from LIDAR data and aerial imagery. IGARSS'05, 4, pp. 2846-2849.
[4] Beck, M., 2003, Real-time visualization of big 3D city models. Int'l Archives of Photogrammetry and Remote Sensing, XXXIV-5/W10.
[5] Coorg, S. and S. Teller, 2000, Spherical mosaics with quaternions and dense correlation. Int'l J. Computer Vision, 37(3), pp. 259-273.
[6] Gunadi, C. R., H. Shimizu, K. Kodama and K. Aizawa, 2002, Construction of large-scale virtual environment by fusing range data, texture images, and airborne altimetry data. 3DPVT'02, pp. 772-775.
[7] Lee, S. C., S. K. Jung and R. Nevatia, 2002, Automatic integration of facade textures into 3D building models with projective geometry based line clustering. EUROGRAPHICS 2002, 21(3), pp. 259-273.
[8] Tsai, F., H-C Lin, J-K Liu and K-H Hsiao, 2005, Semiautomatic texture generation and transformation for cyber city building models. IGARSS'05, 7, pp. 4980-4983.
[9] Zheng, J.Y. and M. Shi, 2003, Mapping cityspaces to cyber space. CW2003, pp. 166-173.
[10] Chon, J., T. Fuse and E. Shimizu, 2004, Urban visualization through video mosaics based on 3-D multibaselines. International Archives of Photogrammetry and Remote Sensing, XXXV-B3, pp. 727-731.
[11] Gibson, S., R.J. Hubbold, J. Cook and T.L.J. Howard, 2003, Interactive reconstruction of virtual environments from video sequences. Computers & Graphics, 27(2), pp. 293-301.
[12] Nicolas, H., 2001, New methods for dynamic mosaicing. IEEE Transactions on Image Processing, 10(8), pp. 1239-1251.
[13] Spann, J.R. and K.S. Kaufman, 2000, Photogrammetry using 3D graphics and projective textures. IAPRS, Amsterdam, vol. 33.
[14] Guillou, E., D. Meneveaux, E. Maisel and K. Bouatouch, 2000, Using vanishing points for camera calibration and coarse 3D reconstruction from a single image. The Visual Computer, 16, pp. 396-410.
[15] Kumar, R., H.S. Sawhney, Y. Guo, S. Hsu and S. Samarasekera, 2000, 3D manipulation of motion imagery. Proc. ICIP 2000, vol. 1, pp. 17-20.
[16] Kim, D.H., Y.I. Yoon and J.S. Choi, 2003, An efficient method to build panoramic image mosaics. Pattern Recognition Letters, 24, pp. 2421-2429.
[17] Du, Y., J. Cihlar, J. Beaubien and R. Latifovic, 2001, Radiometric normalization, composition, and quality control for satellite high resolution image mosaics over large areas. IEEE TGARS, 39(3), pp. 623-634.
[18] Uyttendaele, M., A. Eden and R. Szeliski, 2001, Eliminating ghosting and exposure artifacts in image mosaics. CVPR 2001, 2, pp. 509-516.
[19] Adelson, E.H., C.H. Anderson, J.R. Bergen, P.J. Burt and J.M. Ogden, 1984, Pyramid methods in image processing. RCA Engineer, 29(6), pp. 33-41.
[20] Levin, A., A. Zomet, S. Peleg and Y. Weiss, 2004, Seamless image stitching in the gradient domain. ECCV 2004, LNCS 3024, pp. 377-389.
Design and Implementation of Mobile 3D City Landscape Authoring/Rendering System
Seung-Yub Kim and Kiwon Lee
Dept. of Information System Engineering, Hansung University, Seoul, Korea, 136-792
Abstract 3D mobile GIS applications are at an early development stage in the GIS domain, compared to other GIS applications using handheld devices on wireless communications and various sensor systems. In this study, an integrated 3D authoring and rendering system based on an object-oriented approach and running on handheld devices was proposed, designed and implemented; for the graphics pipeline processing in this system, OpenGL (Open Graphics Library) and OpenGL ES (Embedded Systems) are used for the stand-alone platform and the mobile platform, respectively. In these systems, 3D objects such as terrain, buildings, roads and user-defined geometric objects can be modeled and integrated for augmented 3D landscape scene generation. In both systems, the 3D urban landscape feature authoring system, based on a geo-based spatial feature database schema definition, is composed of several functions: modeling, editing and manipulating 3D landscape objects, generating geometrically complex type features, and supporting both database and file systems with manipulation of attributes for 3D objects. As well, for realistic scene rendering, texture mapping of complex types of 3D objects with an image library is also possible. On the mobile system, the main graphical user interface and core components were implemented under EVC 4.0 MFC and tested on PDAs, an iPAQ H4100 and an LG-DMP M80 device with Pocket PC. It is expected that such dual-interfaced 3D geo-spatial information systems supporting registration, modeling, and rendering can be effectively utilized for 3D urban environment analysis, 3D simulation and 3D navigation related to further 3D mobile mapping.
1. Introduction In general, 3D modeling and rendering can be regarded as data production and visualization, basic functions in a conventional GIS. Generic modeling with a stand-alone 3D GIS should be considered together with database issues such as handling and managing 3D geometry or topology attributes [3][6], whereas 3D rendering and visualization is basically related to 3D computer graphics [4]. 3D geo-spatial information in GIS meets various industry needs such as ITS, LBS, vehicle navigation/guidance, augmented landscape modeling and photo-realistic visualization [2],[5],[7],[8],[9],[10]. Most mobile GIS-based applications mainly concern manipulation and real-time display of 2D geo-based features. Recently, modeling, analyzing, and rendering of 3D geo-based spatial features have become available on stand-alone platforms. In contrast, 3D mobile GIS is at an early development stage in the GIS domain, compared to other GIS applications using handheld devices on wireless communications and various sensor systems. Although the limited resources of mobile devices, such as small memory, slow CPUs, low power and small screens, challenge developers who must handle large volumes of geo-based models and a multi-stage graphics pipeline for visualization, such systems are in demand because of the major advantages of mobile devices, namely portability and mobility. Despite studies in 3D GIS, it is hard to find a mobile 3D GIS with authoring and rendering for 3D geo-based features. It has been recognized that the hardware and software of mobile devices are insufficient for efficiently authoring and rendering geo-based 3D information. In this study, we attempted to design and implement a mobile 3D city landscape authoring/rendering system for the mobile platform, with an additional stand-alone version, capable of modeling and rendering 3D data with the same data structure for any user. In this system design, 3D features can be separately processed with the functions of authoring and rendering of terrain segments, building segments, traffic segments, vegetation segments, and user-defined models. The 3D features in this system are basically for a 3D urban landscape model, composed of many landscape components and segments. Implementation of this system focused on an efficient architecture and simplification.
2. System Design and Implementation 2.1. Mobile Graphic API The available mobile 3D APIs are Direct3D Mobile, JSR-184 and OpenGL ES [1]. Direct3D Mobile is supported only by WinCE 5.0, has a high cost, and is dependent on the Windows operating system. JSR-184, for J2ME-based GSM (Global System for Mobile communication) devices, is optimized for the Java environment. We therefore selected OpenGL ES for 3D graphics processing and the pixel pipeline in this system because of its advantages, such as independence from the operating system, royalty-free licensing, and its close relationship to OpenGL. Direct3D Mobile and JSR-184 are also available, but they were not considered at this implementation stage because of their dependency and lower computational performance. OpenGL ES is a mobile 3D graphics API for embedded systems defined as a subset of OpenGL; it is a low-level, lightweight API for advanced embedded graphics using derived subset profiles of the full OpenGL API, the most widely adopted cross-platform 3D graphics API. It provides a well-documented, standardized API for mobile devices, an optimized graphics pipeline for 3D data, and platform independence. These advantages make it possible to bring 3D graphics processing to embedded systems.
2.2. 3D Data Model and System Design This study focused on the development of an authoring and rendering system for a mobile 3D city landscape. We adopted a feature-based approach, dealing separately with terrain, building, traffic, and vegetation among the city components. Detailed urban landscape components, like tall and small buildings, parks with trees and fences, roads and railroads, and utilities, can also be handled with this feature-based approach. The 3D authoring and rendering system based on the feature-based approach was designed and implemented using OpenGL (Open Graphics Library) and OpenGL ES (Embedded Systems) for the stand-alone platform and the mobile platform, respectively, exploiting the close connection between OpenGL and OpenGL ES. Figure 1 shows the system design strategy and the operational view. In the stand-alone system, the authoring system based on a 3D geo-based spatial feature database schema definition is composed of several functions handling urban landscape features: modeling, editing and manipulating 3D landscape objects; generating geometrically complex type features; supporting both database and file systems; handling attributes for 3D ob-
jects, Texture mapping of complex types of 3D objects with image library . While, in the mobile 3D system, additional functions as well as those of stand-alone 3D system are provided: Sync-sharing and linking of 3D database model schema, Optimizing of integrated multiple 3D landscape objects, and Rendering of texture-mapped 3D landscape objects. In these dual systems, 3D objects such as terrain, building, road and user-defined geometric ones can be modeled and integrated for augmented 3D landscape scene generation. (st and- alone]
Fig. 1. System design strategy and operational view of the target system in this study.
In terrain feature modeling, shown in Figure 2, DEM and TIN data can be used to extract 3D coordinates, and surface relief is generated through a TIN to construct the terrain surface (a sketch of converting a DEM grid into renderable triangles is given after Fig. 2). In building feature modeling (Figure 3), a building can be modeled by compounding 3D primitives such as cubes, triangles, squares, pentagons and hexagons into a generated complex type. Traffic features are typically composed of roads and traffic utilities. A road can be modeled using a polygon with four 3D coordinates, and a road network is generated by combining the generated road models. Traffic utilities can be modeled using the same method as the road feature. In some cases, complex traffic utility models may use 3D geometric modeling and billboarding, a common graphics technique. But, unlike other features
with actual 3D coordinates, vegetation features such as trees, flowers, fences, rocks, sky and water are based on image-based 3D models. Figure 4 represents road, traffic utility, and vegetation feature generation.
Fig. 2. Schematic view of 3D terrain feature modeling and rendering in a mobile handheld device (TIN generation/rendering process for terrain: DEM, 3D coordinate extraction, rendering, and texture mapping or image draping).
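The terrain step can be illustrated with a short sketch that converts a regular DEM grid into the vertex and triangle-index arrays a mobile graphics API such as OpenGL ES would consume; the cell size and the regular two-triangles-per-cell split are assumptions, since the system described here may instead use an irregular TIN.

```python
import numpy as np

def dem_to_mesh(dem, cell=1.0):
    """Convert a regular DEM grid into vertex and triangle-index arrays by
    splitting each grid cell into two triangles (a simple regular 'TIN')."""
    rows, cols = dem.shape
    xs, ys = np.meshgrid(np.arange(cols) * cell, np.arange(rows) * cell)
    vertices = np.column_stack([xs.ravel(), ys.ravel(), dem.ravel()]).astype(np.float32)

    triangles = []
    for j in range(rows - 1):
        for i in range(cols - 1):
            v0 = j * cols + i          # indices of the four corners of this cell
            v1 = v0 + 1
            v2 = v0 + cols
            v3 = v2 + 1
            triangles.append((v0, v2, v1))
            triangles.append((v1, v2, v3))
    return vertices, np.asarray(triangles, dtype=np.uint32)
```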
Fig. 3. Schematic view of 3D complex-typed feature modeling and rendering in the mobile handheld device (compounding 3D primitives; multi-object building using feature models for a complex building model).
Fig. 4. Schematic view of modeling and rendering for 3D road and utility features, traffic signboards and road-side vegetation (texture mapping and alpha blending for traffic features; billboarding and alpha blending for vegetation, with quad corners at Center ± (Up ± Right)).
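The billboard construction indicated in Fig. 4 (corners at Center ± (Up ± Right)) can be written compactly as follows; the half-size parameter and the way the right axis is derived from the camera position and up vector are illustrative assumptions.

```python
import numpy as np

def billboard_corners(center, camera_pos, camera_up, half_size=1.0):
    """Corners of a camera-facing quad (billboard) for an image-based vegetation
    feature, following the Center +/- (Up +/- Right) construction sketched in Fig. 4."""
    center = np.asarray(center, float)
    view = np.asarray(camera_pos, float) - center
    view /= np.linalg.norm(view)
    right = np.cross(np.asarray(camera_up, float), view)
    right = half_size * right / np.linalg.norm(right)
    up = half_size * np.asarray(camera_up, float) / np.linalg.norm(camera_up)
    return np.array([center + (up - right),    # upper left
                     center + (up + right),    # upper right
                     center + (right - up),    # lower right
                     center - (up + right)])   # lower left
```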
2.3. Implementation and Results Although this study focused on a mobile 3D GIS, the stand-alone system also needed to be implemented, so that its functions could be reused in the target system, with a common architecture, graphics pipeline process and 3D graphics API connection between the stand-alone and mobile systems. The basic architecture of this system covers data I/O, the user interface, the major graphics processes and data manipulation. First, the data input/output functions support a file system of binary/ASCII type and a database system with a query interface, generating attribute data and the created geometry data. The major graphics processes model, project, and render terrain, buildings, roads and other urban 3D features. Data manipulation links geometry and attribute information and all data used through the file system or database system. In the stand-alone system, the main graphical user interface, database and core components were implemented under Visual C++ 6.0 MFC and tested on a Windows XP desktop PC. In the mobile system, the main graphical user interface and core components were implemented under EVC 4.0 MFC and tested on PDAs, an iPAQ H4100 and an LG-DMP M80 device based on Pocket PC. Figure 5 shows actual snapshots of the 3D modeling and integrated rendering processes.
Fig . 5. Snap-shots of implementation results: (A) Rendering 3D featur es, (B) Authoring process of complex typed building feature on PDA.
3. Conclusion In this study, we designed and implemented an authoring and rendering system as the core of a mobile 3D GIS. Its authoring functions comprise several core functions handling urban landscape features: modeling, editing and manipulating 3D landscape objects; generating geometrically complex type features; storage and management of database attributes of 3D objects; and texture mapping of complex types of 3D objects with users' image libraries. The rendering functions comprise optimization of integrated multiple 3D landscape objects, rendering of texture-mapped 3D landscape objects, and real-time perspective navigation of the integrated 3D scene using user interaction. Using a 3D graphics API on a dual platform, we designed a prototype 3D GIS with an efficient real-time terrain rendering system and photo-realistic models using texture mapping, applying blending and billboarding functions. It is expected that mobile 3D geo-spatial information systems supporting registration, modeling, and rendering can be effectively utilized for real-time 3D urban planning and 3D mobile mapping on site.
4. References
[1] Astle, D. and D. Durnil, 2004. OpenGL ES Game Development, Premier Press, 293p.
[2] Stock, C., I. D. Bishop and R. Green, 2006. Exploring landscape changes using an envisioning system in rural community workshops, Landscape and Urban Planning.
[3] Coors, V., U. Jasnoch, and V. Jung, 1999. Using the Virtual Table as an interaction platform for collaborative urban planning, Computers & Graphics, 23: 487-496.
[4] Ervin, S. M. and H. H. Hasbrouck, 2001. Landscape Modeling: Digital Techniques for Landscape Visualization, McGraw-Hill, 289p.
[5] Huang, B. and C. Claramunt, 2004. Environmental simulation within a virtual environment, ISPRS Journal of Photogrammetry & Remote Sensing, 58: 73-84.
[6] Losa, A. and B. Cervelle, 1999. 3D topological modeling and visualisation for 3D GIS, Computers & Graphics, 23: 469-478.
[7] Peng, C., 2006. In-situ 3D concept design with a virtual city, Design Studies, 27: 439-455.
[8] Pullar, D. V. and M. Tidey, 2001. Coupling 3D visualisation to qualitative assessment of built environment designs, Landscape and Urban Planning, 55: 29-40.
[9] Ranzinger, M. and G. Gleixner, 1997. GIS datasets for 3D urban planning, Computers, Environment and Urban Systems, 21: 159-173.
[10] Suveg, I. and G. Vosselman, 2004. Reconstruction of 3D building models from aerial images and maps, ISPRS Journal of Photogrammetry & Remote Sensing, 58: 202-224.
Macro to Micro Archaeological Documentation: Building a 3D GIS Model for Jerash City and the Artemis Temple
Nedal Al-Hanbali 1, Omar Al Bayari 1, Bassam Saleh 1, Husam Almasri 1, Emmanuel Baltsavias 2
1 Surveying & Geomatics Eng. Dept., Al-Balqa Applied University, Al Salt, Jordan, [email protected], [email protected], [email protected], [email protected]
2 Institute of Geodesy and Photogrammetry, Swiss Federal Institute of Technology (ETH) Zurich, CH-8093 Zurich, Switzerland, [email protected]
Abstract Digital photogrammetry and geographic information system (GIS) techniques have a direct and major role for fast and accurate measurements to generate 3D object models and perform reverse engineering. Hence, these techniques can be favorably employed also for archaeological documentation and preservation. This work serves as a pilot project to illustrate the capabilities of such techniques and also aims at generating some expertise for a longer-term objective of a national project that can be carried out in cooperation with the Department of Antiquities, Jordan. The main objectives are threefold. The first one is to build a 3D virtual reality model of the Artemis Temple that will serve as an important documentation of the Temple. It was never documented as such before. The second one is to construct a GIS model of the Jerash City. Finally, the Artemis model will be incorporated in a 3D GIS model, which will conclude Phase I of this project. The documentation is in fact very important as all temple areas will be modeled with very accurate measurements and detailed texture, which can allow visualization, preservation and reconstruction of the temple.
1. Introduction Jerash city has a significant importance in the ancient history of Jordan. The ancient town of Jerash (Gerasa or Jarash) is located in Jordan and has a remarkable record of human settlement since Neolithic times (see Figures 1 & 2). Few ancient towns are as complete and well-preserved as
Jerash, a city complex that once was a thriving commercial zone and part of the Decapolis. Built in the 2nd century BC, the city was conquered in 63 BC by the Roman general Pompey. The town reached its peak in the 2nd century and declined after a series of Christian and Muslim invasions and earthquakes in the mid 8th century.
Fig. 1. Relief map of Jordan showing Jerash.
Fig. 2. Historic sites in Jordan (http://www.tourism.jo/Inside/Map.asp).
Jerash is a high-profile archaeological site with theatres, public squares, baths and temples (Artemis and Zeus) and, although well-preserved, the great majority of the monuments are unrecorded and unprotected and are constantly endangered by development projects. Due to population growth and the expansion of modern Jerash, many important monuments in the site began to disappear (cemeteries, monuments from the Bronze and Iron Age), and therefore the monitoring and digital documentation of the site, including both the built and the natural environment, are essential for the conservation and protection of the cultural heritage of the ancient city. Archaeological researchers are currently aware of the capabilities and the importance of
modern technologies, such as satellite images, remote sensing, photogrammetry and GIS (Grosman, 2000), to their work. New survey technologies play an important role in the mapping and the management of archaeological data (Kvamme, 1999; Bewley and Raczkowski, 2002; Grün et al., 2002; Lertlum, 2003). The Department of Geomatics Engineering at Al Balqa Applied University has started to use geomatics technologies and techniques such as GPS, satellite images, photogrammetry and classical geodetic instrumentation to build a 3D GIS and precise base maps for documentation purposes at this archaeological site. The objective is to start a national project that can be organized in cooperation with the Department of Antiquities, Jordan, for comprehensive documentation of the Jerash city. This work is planned to be carried out from the macro level, i.e. the modeling of the modern and ancient city via 3D GIS, to the micro level, i.e. building 3D virtual models as well as 3D GIS databases for each monument. The paper presents the first phase of our implementation, which serves as a pilot project. The performed tasks include: 1. 3D GIS modeling of the modern and ancient Jerash. Initial work using GPS, satellite images, and surveying was performed to model the modern Jerash city around the ancient monuments, as well as the ancient city. Modern base maps were produced to include the residential areas around the archaeological site in addition to the ancient site, and a DTM and orthophotos of Jerash were produced. These were used in the GIS modelling process (Al Bayari, 2005). 2. Detailed 3D GIS modeling of the monuments and the stones in the ancient city. 3. A 3D virtual reality model of the Artemis temple that will be subsequently incorporated into a GIS environment.
2. 3D Modeling of Jerash City The objective of this work is the production and development of a base map for the old and new Jerash city. This base map can be used in the archaeological GIS at different levels (see Al Bayari, 2005). At the first level (national level), it allows the user to zoom into different sites in the country and select sites based on different criteria. This includes basic information on the site, old and new images, and information about related objects in the whole city. At the second level, it displays a detailed map of the site and its general components. At the third level, it portrays detailed data of selected monuments and displays a plan of the monument, as well
and the stereo capability of the sensor. Such satellite images can supply substantial topographic and thematic information to a GIS database (e.g. the produced orthophoto base map in Figure 3). In this work, we are not interested in discussing the photogrammetric aspects but in using existing software to get reliable results and also in using the capabilities of our digital photogrammetry lab and modern techniques for documentation and 3D modeling. The produced digital map, using satellite images, digital photogrammetry and GIS, could be used for classifying and analyzing spatially distributed archaeological, geomorphological, and ecological data, for historic and prehistoric studies (see Figure 4).
3. GIS and archaeology IT developments facilitate new research into using GIS to assess the environmental and social risks of archaeological sites (Niknami, 2002).
Fig. 4. Building a 3D GIS environment for the modern and ancient Jerash.
The Department of Antiquities of Jordan uses JADIS (Jordan Antiquities Database & Information System, see JADIS, 2005) as its archaeological database. However, the data collected for the JADIS database is fairly general, while in our project we intend to produce a precise and detailed geospatial database for Jerash. Completion of the Jerash geospatial database and its regional setting is the most important goal. This database could then be used and managed by the Department of Antiquities and possibly integrated with JADIS.
Fig. 5. JADIS environment (Jordan Antiquities Database & Information System).
Subsequent web-based mapping (e.g., ArcIMS) will enable data access and visualization by archaeologists working on the site who are not experts in GIS. In the second stage, spatial data from a larger area around the city will be used to assess the impact and risk to the site. Applications of GIS in archaeology are relatively new and will be enhanced by the proposed GIS data model. The research will develop new but widely applicable methodologies for site risk assessment, the results of which will contribute to developing policies for the preservation of sensitive archaeological sites. GIS technology will show the exact location of the archaeological site and all the monuments and their distribution within the site. A virtual reality environment will be set up to allow persons to visit some of the most important monuments of the site, hoping to offer complete virtual reality visits for every part of the site in the near future. Finally, we intend to allow access to our data to everybody through a website. The GIS data model and the base map will greatly help government officials working in the field of archaeological heritage management, as well as all scientists interested in the site, and will support tourism in Jordan.
The archaeological geodatabase offers a valuable tool for better management, monitoring and updating of the site, since new information and data can be added to it according to the changing conditions of the site (Harrower et al., 2002). It will also enhance the capability of those working on actual conservation and restoration of archaeological sites and their monuments. As soon as the data is integrated in the GIS, a map can give a global view of the status of a whole site, as well as detailed information on a specific monument. An updated database, especially at an accessible web site, can better aid researchers, professionals, students and others, as it disseminates current information.
Fig. 6. Suggested GIS data model for archaeology in Jordan, applied to Jerash city.
4. Building a 3D Virtual Reality Model of the ancient Jerash City The objective is to build a 3D virtual reality model of the ancient city of Jerash, which will serve among others as an important information source for tourists to view the area before visiting the site. Accuracy is not a main concern in this application; what is important is a fast, automated procedure. Four main steps were taken to achieve this goal, as follows: 1. The first step is the production of high resolution orthophotos (25 cm ground pixel size) and a DTM of the ancient city from stereopairs of aerial photos. The orthophotos will be used as the default base layer for all other types of data layers.
2. In the second step, the detailed blue-print tourist map for the area was scanned, registered, digitized and added to the GIS layers. These GIS layers form the base for all subsequent field work needed to complete the documentation of all site features, following the GIS data-model suggested above (Figure 6). Completing these layers through field trips will be based on using a PDA that has a built-in GPS receiver and a camera. Some experiences on that have already been collected. Using ArcPAD software, the GIS base maps were inserted in the PDA and going through each monument-feature in the map the field worker can: verify its position on the map using the built-in GPS receiver, take photos of the monument to document it and add these as attributes to the GIS feature, add any remarks or information about the feature in its attribute list, measure heights with a laser spot to add 3D information, add comments about the condition of the monument and finally save all data according to the data-model requirements. This procedure reduced and optimised other previously used data collection methods and minimized the time to execute this work. Figure 7 shows some clips of the GIS base layers and the used PDA (HP IPAQ 6515). 3. The third step is to generate 3D models of individual temples, theatres, churches etc. (surface wireframe). To achieve this, three steps were followed. The first step was to extract height and import dimension information from the aerial stereopairs. The second was to take terrestrial photos of the monument showing all details with overlap between the shots to help connecting subsequently the elements together and to also make some field measurements of small details. In the third step, using CAD software, these photos were registered (i.e. affine transformation) using the measurements made in the stereo models and the field measurements. All details are then digitized from the photos. Finally, using the 3D features of the CAD software and the GIS base maps, all digitized details are put together to generate the 3D surface model of the monument. 4. The fourth step is to combine all models together in one virtual environment. 3D Studio Max (3DSMax) was used for this purpose. Also, texture can be added at this stage. Figure 8 illustrates some of these 3D models. An animation video can be further produced for visualisation. In addition, these models can be linked as hyper-links to the 2D GIS environment, and thus by clicking on one monument, an animated virtual model will run. Furthermore, in the Arcscene environment of the ESRI software, 3D models can be imported as 3D shape files and thus, one can perform some 3D GIS browsing in this environment.
Fig. 7. Clips of the generated GIS base layers for the ancient Jerash City. The figure to the left shows the PDA model that was used for field work.
5. Building a 3D Model of the Artemis Temple The objective is to build a 3D virtual reality model of the Artemis Temple (see Figures 3, 7 and 9) that will serve as an important and accurate documentation of the temple. It was never documented as such before. The Artemis model will then be used in a 3D GIS model, which when completed will constitute the end of the pilot project (Phase I), towards a comprehensive national project and better archaeological documentation of Jerash. The Artemis temple is quite large: it is about 53 m long with a 13 m part for the entrance and stairs, 15 m high and 23 m wide. The interior chamber is 26 m long and 13 m wide, see Figure 10.
Fig. 10. Top view of the Artemis temple map with the control point distribution.
The preliminary field work was performed in April 2005. Some parts of the building's faces have been reconstructed. The task requires a huge amount of work and will take quite some time to complete. One has to plan such work very carefully; otherwise, it may become an overwhelming operation with vast amounts of image data, some of which may be useless. Figure 11 illustrates a flowchart diagram of the steps taken to achieve our objectives. The main steps to fulfill this work are: 1. Perform detailed planning for the data acquisition strategies, i.e. selection of control point and camera positions. 2. Use digital photogrammetry techniques: using the camera calibration parameters and control points, adjust and orient the photos in order to start measuring the building faces. 3. Find the best matching procedure to speed up and automate the surface measurement process. 4. Build the virtual reality models that can be incorporated into the 3D GIS.
Fig. 11. Flowchart of the work stages for building the 3D virtual reality model of the temple: instruments, equipment and software requirements; camera calibration; merging of sub-projects; orthophoto extraction; accuracy checking; editing and completion of the final 3D model; and video generation and on-line visualization on the web.
5.1. Construction of the control point field This is very important for the generation of the 3D model. It is quite time consuming and has to be well planned and performed accurately. Figure 10 illustrates the control field of the geodetic network constructed around the Artemis Temple using GPS, a laser theodolite and total stations. Figure 12 shows the target control points mounted on the wall that are used to orient the images in a common coordinate system and perform 3D measurements. All points shown in Figure 12 are known in one local coordinate system that is referenced to the national system. As shown in Figures 10 and 12, several types of control fields were used, as follows: 1. GPS points to define a local datum with respect to the national one. 2. 16 geodetic network points were established, from which all target control points were measured. A total station and least squares adjustment (using the LISCAD software) were used for this task. 3. Two types of target control points were used to determine the camera positions: rectangular tape pieces stuck on the monument (green in Figure 12) and tennis balls with a nail stuck by tape on stones (red in Figure 12). Over 160 such points were measured with an estimated accuracy of about 1 cm. The ball control points were not stable enough and were finally not used.
Fig. 12. The target control points used for camera orientation determination (tape control points and ball control points on the southern wall and the northern side; camera orientation process).
5.2. Acquisition and orientation of photos
The digital camera used for this project is a high resolution digital camera (4000 × 4000 pixels) with the Kodak ProBack Plus sensor, a Hasselblad body and a 38 mm Biogon lens. The ground pixel size was on average 3 mm, with an average distance from the camera to the wall of about 12 m. The average distance between camera stations was 4 m. At each camera station, images were taken at heights of 1.6 m and 1.95 m using a tripod, and at about 4.5 m using a ladder. At each height, 3 images were taken, one frontal and two (right and left) at a 45 deg angle. In narrow spaces (e.g. corridors between the base wall and the temple wall), the images had a small base and, due to the oblique view of the objects, the point measurement accuracy was poor. The camera was calibrated using the 10-parameter model of D. Brown with the iWitness software and special color coded targets (a sketch of the distortion terms of this model is given after the photo list below). The amount of work required for image acquisition is quite intensive. About 800 photos were taken for this project, and out of these the following were finally selected: • 11 photos for camera calibration, using the special targets of the iWitness software. • 8 photos chosen for camera calibration using the PhotoModeler software. • 78 photos used to build a 3D model of the southern walls, see Figures 13e and 13f. • 28 photos used to build a 3D model of the western walls, see Figure 13b. • 65 photos used to build a 3D model of the northern walls, see Figure 13a. • 37 photos used to build a 3D model of the front face, see Figure 13c. • 73 photos used to build a 3D model of the inner and outer rooms, see Figure 13d.
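As a hedged illustration of the distortion part of Brown's model, the sketch below evaluates the radial (k1, k2, k3) and decentering (p1, p2) terms and applies a first-order correction to measured image coordinates. The coefficient values, and the omission of the remaining interior-orientation parameters of the full 10-parameter model, are assumptions made only for this example, not the values estimated for the camera used here.

```python
import numpy as np

def brown_distortion(x, y, k1=0.0, k2=0.0, k3=0.0, p1=0.0, p2=0.0, x0=0.0, y0=0.0):
    """Displacement predicted by the radial and decentering terms of Brown's
    distortion model for image coordinates (x, y), relative to the principal
    point (x0, y0)."""
    xb, yb = x - x0, y - y0
    r2 = xb * xb + yb * yb
    radial = k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3
    dx = xb * radial + p1 * (r2 + 2 * xb * xb) + 2 * p2 * xb * yb
    dy = yb * radial + p2 * (r2 + 2 * yb * yb) + 2 * p1 * xb * yb
    return dx, dy

def undistort(points, **coeffs):
    """First-order correction: subtract the modelled distortion from measured points."""
    out = []
    for x, y in points:
        dx, dy = brown_distortion(x, y, **coeffs)
        out.append((x - dx, y - dy))
    return np.asarray(out)
```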
Fig. 13a. The Northern part.
Fig. 13b. The Western part.
Fig. 13c. The Front part.
Fig. 13d. Inner and outer rooms.
Fig. 13e. The Southern part.
Fig. 13f. The Southern part.
More photos, and additional field work, may still be needed to complete missing parts of the 3D model at the columns. As shown in Figure 14, the acquired photos were taken from different camera stations and views to cover the whole area of the project with mul-
tiple overlapping images and different viewing angles to achieve good ray intersection.
Fig. 14. Distribution of camera stations used to acquire suitable photos to cover all sides and the interior of the monument; (a) Southern part, (b) Northern part, (c) Eastern part, and (d) interior.
5.3. Building the 3D Model Due to the huge amount of photos, the project was divided into 5 parts (Southern, Western, Northern, Front and Interior parts). Each temple part was measured separately and then the results were exported to the 3DSMax format to build the virtual 3D model. The coordinates of the target control points were transformed from WGS84 to a local coordinate system, suitable for further software processing, by using a 3D conformal transformation (a sketch of such a transformation is given after the procedure below). The procedure for building the 3D model using PhotoModeler is as follows: 1. Set the approximate project size and the units of the object coordinate system: the approximate size is 20 m. 2. Give the camera parameters such as focal length, physical sensor size, number of pixels and initial distortion parameters. Then, after importing the initial photos, one can perform camera calibration and edit some of these parameters, e.g. to correct the principal point position and the lens distortion parameters. 3. Import the initial photos. 4. Measure image points: first, tens of tie points were measured in the first photo imported into the project; then these tie points were measured in the next imported photo that had overlap with the first one, and so on. At this stage the control points can also be measured and used as tie points. 5. Process the project: this procedure solves for the exterior orientation parameters with respect to a relative object coordinate system that is defined in the program. Any photo that we want to process must have at least 6 tie points with good distribution and a good intersection angle with another photo in the same overlap area. After orienting the first photos, we added new photos, measured tie points and repeated the processing. 6. Check the residuals and precision of the tie points. Delete or adjust the positions of the points with high residuals or poor precision. 7. Reorient all photos in the same project with the coordinate system of the control points. To do this, we have to change the property of all photos to "don't use in processing" and "unoriented". Then, we assign the measured control points as control and import their object coordinates, the property of the photos is changed to "use and adjust", and the bundle adjustment is run again. 8. Import new photos in the narrow areas, then do the referencing and processing. 9. Check the residuals and precision of the tie points and delete or adjust the positions of the points with high residuals or poor precision.
10. Measure lines, 3D surfaces and curves. Cylinders were used to model the shape of some surfaces. More than 21,000 points were measured to create surfaces and lines. The accuracy of the 3D measurements was 1.5-2 cm with good images and clear areas, while in narrow areas and with poorly oriented images it deteriorated to 5-10 cm. 11. Generate orthophotos and export texture files. 12. Mosaic the orthophotos with Photoshop, as PhotoModeler cannot perform mosaicking. 13. Export the 3D results to AutoCAD format (the best scenario for large projects to get stable results) and then import them into the 3DSMax software. Merge the projects in AutoCAD or 3DSMax. 14. Edit the models, complete missing parts and render them with the exported orthophotos to build the virtual model in the 3DSMax environment (see Figure 15). In particular, floors were added and the column bases and capitals were modeled. Add illumination and atmospheric effects and generate a video.
Fig. 15. Left: the geometric surface model (top left) and the projected texture with two zoom-ins (top right). Right: adding column bases and capitals to the model.
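The 3D conformal (seven-parameter similarity) transformation mentioned above maps WGS84 coordinates into the local system through a scale factor, a rotation and a translation. The following minimal Python sketch shows how such a transformation can be estimated from common points by least squares; the coordinate values and variable names are illustrative assumptions, not data from the project.

```python
import numpy as np

def fit_3d_conformal(src, dst):
    """Estimate scale s, rotation R and translation t such that dst ~ s * R @ src + t.

    src, dst: (N, 3) arrays of common points in the two coordinate systems (N >= 3).
    Uses the SVD-based (Umeyama) least-squares solution.
    """
    src_c = src - src.mean(axis=0)          # centre both point sets
    dst_c = dst - dst.mean(axis=0)
    H = src_c.T @ dst_c                     # cross-covariance matrix
    D = np.eye(3)
    U, S, Vt = np.linalg.svd(H)
    D[2, 2] = np.sign(np.linalg.det(Vt.T @ U.T))                 # keep a proper rotation
    R = Vt.T @ D @ U.T
    s = np.trace(np.diag(S) @ D) / np.sum(src_c ** 2)            # least-squares scale
    t = dst.mean(axis=0) - s * R @ src.mean(axis=0)
    return s, R, t

# Hypothetical common control points: columns are X, Y, Z in metres.
wgs84_xyz = np.array([[4356101.2, 3219240.8, 3324568.1],
                      [4356120.9, 3219230.1, 3324590.4],
                      [4356085.4, 3219260.7, 3324575.9],
                      [4356110.3, 3219255.2, 3324560.0]])
local_xyz = np.array([[101.2, 240.8, 68.1],
                      [120.9, 230.1, 90.4],
                      [85.4, 260.7, 75.9],
                      [110.3, 255.2, 60.0]])

s, R, t = fit_3d_conformal(wgs84_xyz, local_xyz)
residuals = local_xyz - (s * (R @ wgs84_xyz.T).T + t)
print("scale:", s, "max residual (m):", np.abs(residuals).max())
```

In practice the transformation parameters would be estimated once from the surveyed control points and then applied to all measured object points before export to the modelling software.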
5.4. Problems encountered
1. Shortage of photos at some locations: (a) normal and oblique photos for the right part of the southern side; (b) normal photos for the centre part of the north side. This causes poor co-registration and fit between the southern and front projects, and lower accuracy of the 3D measurements in this part.
Thus, the registration of the texture with the geometric model will be poor. This problem could have been avoided if all images had been checked in the field, which was not done due to the large amount of data and time pressure.
2. Most edges are not sharp and clear. This reduces the accuracy of the measured lines and edges.
3. After using the control points, the residuals of the referenced points in the inner and outer room project are higher than in the other projects.
4. PhotoModeler Pro 4.0 cannot assign texture to cylinders and curved surfaces, and it is difficult to measure 3D curved surfaces.
5. Due to the hundreds of photos and thousands of tie points, the processing time is long.
6. There is a problem in merging all projects with PhotoModeler, since the program crashes when processing starts. However, since the 3D results are georeferenced accurately, we can merge the final results in 3DS Max.
7. The digital camera was heavy and cumbersome to use. For unknown reasons, it sometimes stopped working for hours. The LCD viewer had poor contrast; in some cases the images were shown with artifacts, although the saved images were correct, and in other cases the LCD showed nothing, so images were taken blindly and several times to ensure full coverage of the object.
8. The 3D measurement of surfaces and lines was entirely manual. Commercial matching software unfortunately produces very poor results with such difficult objects and close-range imagery. In the near future, we will test a high-quality matching method embedded in the software package SatPP of ETH Zurich.
6. Discussion of the importance of 3D modeling and spatial analysis

GIS systems allow the overlaying of spatial data from different sources, with different structures and resolutions, thus providing a tool for modeling spatial data. The first analysis we were able to perform was the evaluation of the urban expansion of the modern city around the ancient city. Using GIS technologies, the study was done by overlaying the layers created
from aerial photos of 1981 with the satellite images of 2004. The comparison revealed that the city is expanding in the north-west direction, while many archaeological sites in that area are still not excavated. The Department of Antiquities is aware of this expansion, but preventing it requires cooperation between several authorities. Using a GIS model and periodic satellite images is the best way to monitor and analyze the expansion of the modern city over the archaeological site; a sketch of one possible implementation of such a comparison is given below. Continuous monitoring of these phenomena is possible using our database, through our university or in collaboration with other interested international institutions. The archaeological site entrances are poorly designed to serve tourists. The created GIS, with the 3D models and the landscape, will help in planning better entrances, rest areas, and perhaps an audiovisual theatre and library to inform visitors about the site.
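As an illustration of the overlay analysis described above, the following Python sketch compares a classified built-up layer derived from the 1981 aerial photos with one derived from the 2004 satellite imagery, assuming both have already been co-registered and rasterised onto the same grid. The file names, the band meaning (1 = built-up, 0 = other) and the use of the GDAL/NumPy libraries are assumptions for the example, not a description of the software actually used in the project.

```python
import numpy as np
from osgeo import gdal

# Hypothetical co-registered binary rasters (1 = built-up, 0 = other), same grid.
builtup_1981 = gdal.Open("builtup_1981.tif").ReadAsArray().astype(bool)
builtup_2004 = gdal.Open("builtup_2004.tif").ReadAsArray().astype(bool)

new_urban = builtup_2004 & ~builtup_1981      # cells urbanised between the two dates

# Pixel area from the geotransform (assumes a projected CRS in metres).
gt = gdal.Open("builtup_2004.tif").GetGeoTransform()
pixel_area_m2 = abs(gt[1] * gt[5])
print("Newly urbanised area: %.1f ha" % (new_urban.sum() * pixel_area_m2 / 10_000))

# Rough direction of expansion: compare the centroid of the new urban cells
# with the centroid of the 1981 built-up area (row indices grow southwards).
rows_new, cols_new = np.nonzero(new_urban)
rows_old, cols_old = np.nonzero(builtup_1981)
d_row = rows_new.mean() - rows_old.mean()
d_col = cols_new.mean() - cols_old.mean()
print("Expansion tends", "north" if d_row < 0 else "south",
      "and", "east" if d_col > 0 else "west")
```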
7. Conclusions

The archaeological site of Jerash is important and rich in archaeological detail that deserves the effort invested in collecting and managing its database. GIS has already proven to be extremely helpful and effective in the field of archaeology: it allows archaeologists and technicians to analyze all existing data and to look for patterns among the different layers of spatial data. Producing digital base data using satellite images and digital photogrammetry is a powerful and fast technique, which allows the production and updating of all the needed archaeological maps. This base data can be used for classifying and analyzing spatially distributed archaeological, geomorphologic and ecological data for historic and prehistoric studies. The area of Jerash needs more attention and collaboration between the different authorities for controlling and planning the protection and conservation of the archaeological site. The produced geodatabase and the 3D model will help in performing more advanced analyses and studies.
References
Al Bayari, O., 2005. New Survey Technologies for Production of GIS Model of the Ancient Roman Jerash City in Jordan. Proc. CIPA XX International Symposium, Torino, Italy, Sept. 26-Oct. 1, 6 p. (on CD-ROM).
Baudoin, A., Schroeder, M., Valorge, C., Bernard, M., and Rudowski, V., 2003. The HRS-SAP initiative: A scientific assessment of the High Resolution
Stereoscopic instrument on board of SPOT 5 by ISPRS investigators. Proc. ISPRS Workshop "High resolution mapping from space 2003", Hannover, Germany, 6-8 Oct., on CD-ROM.
Bewley, R., and Raczkowski, W., 2002. Past achievements and prospects for the future development of aerial archaeology: an introduction. In: R. Bewley and W. Raczkowski (eds.), Aerial Archaeology, Developing Future Practice, IOS Press, Amsterdam, pp. 1-8.
Grosman, D., 2000. Two examples for using combined prospecting techniques. In: M. Pasquinucci and F. Trement (eds.), Non-destructive Techniques Applied in Landscape Archaeology. The Archaeology of Mediterranean Landscape 4, Universita di Pisa, pp. 245-255.
Grun, A., Remondino, F., Zhang, L., 2002. Reconstruction of the Great Buddha of Bamiyan, Afghanistan. International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, Vol. 34, Part 5, pp. 363-368.
Harrower, M., J. McCorriston and E.A. Oches, 2002. Mapping the roots of agriculture in Southern Arabia: the application of satellite remote sensing, global positioning system and geographic information system technologies. Archaeological Prospection 9: 35-42.
JADIS, 2005. http://archaeology.asu.edu/Jordan/JADISGIS.htm#MainJadis (accessed 10 June 2006).
Kvamme, K.L., 1999. Recent directions and development in geographical information system, Journal of Archaeological Research 7 (2): 153-201.
Lertlum, S., 2003. Remote Sensing and GIS for Archaeological Applications in Thailand: Case Studies of Royal Road from Angkor to Phimai, the Study at Sukhothai World Heritage Site, and Ayuttaya's Multi-temporal GIS Database. Proc. of Nara Digital Silk Road Symposium, Nara, Japan, Dec. 10-12.
Niknami, K.A., 2002. Landscape archaeological heritage management in the information age. Proc. of the conference "Heritage Management Mapping: GIS and Multimedia", 21-23 Oct., Alexandria, Egypt.
Vadon, H., 2003. 3D Navigation over merged panchromatic-multispectral high resolution SPOT5 images. International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Vol. 34, Part 5/W10, on CD-ROM.
Building 3D GIS Modeling Applications in Jordan: Methodology and Implementation Aspects

Nedal Al-Hanbali 1, Eyad Fadda 2, Sameeh Rawashdeh 1

1 Surveying and Geomatics Eng. Dept., Al-Balqa Applied University, P.O. Box 143025, Amman 11844, Jordan, nhanbaliriqindcx.com.jo, nedalalhanbalirgjyahoo.ca
2 Surveying and Geomatics Eng. Dept., Al-Balqa Applied University
Abstract
Demands for three-dimensional GIS modeling are increasing rapidly with the new evolving and supporting technologies for various applications, charting a new trend and direction for the geospatial community. Three-dimensional modeling is a true simulation of reality, especially if it is relatively accurate. Moreover, using 3D modeling in a GIS environment offers a flexible, interactive system that supports visual interpretation, planning and the decision-making process. Built 3D models are becoming one of the most efficient technologies for spatial data management and analysis. The objective of this work is to elaborate on the demand for and usefulness of three-dimensional modeling, and also to explore the capabilities of current technologies via several pilot projects. Through this, one can discuss the corresponding production workflow required for building a three-dimensional GIS model of a complex site of interest using digital photogrammetry techniques, other techniques and GIS modeling. Several case studies are presented: first for a university campus; second for city modeling; then for the country level; and finally for a cadastral application. Another important aspect to consider in 3D modeling is adding true texture to the model to provide a better perspective and a clearer view of the required scenario. This is very important, especially for decision makers and valuators who are not familiar with geoinformatics analysis. Two texture-mapping procedures that were followed are briefly presented.
1. Introduction
In the past twenty years, three-dimensional modeling has been the focus of much research. Nowadays, with the development of technology, fast
processors, digital devices and software, 3D modeling is becoming more feasible and requires much less time and effort when suitable procedures are used. Using digital photogrammetry techniques to build a 3D model is one of the most suitable and efficient technologies for producing fast and accurate geographic data (see Wolf 2000 and Atkinson 1996). It optimizes the time and cost of producing 3D spatial data by indirect measurement rather than field surveying, and it provides high-precision 3D spatial data over large areas, making it one of the most reliable methods for obtaining 3D data that cannot be derived from existing GIS layers. Digital photogrammetry techniques were used to build accurate 3D models of two of our university campuses: Yarmouk University (in Irbid city) and Al-Balqa Applied University (in Salt city). The two campuses were modeled using two different aerial cameras: the first uses the RC30 metric camera, and the second uses up-to-date digital camera technology (4000x4000 pixels with the Kodak ProBack Plus sensor and a Hasselblad frame body with a 38 mm Biogon lens). An assessment of the results, a discussion of the overall precision and reliability, and the orthophoto products are briefly presented. Moreover, two digital photogrammetry software packages were used. The two 3D GIS models are constructed in a two-phase procedure; these phases are summarized in two schematic diagrams that show the required steps. The first phase is the digital photogrammetry part that builds the 3D model; the second phase is the construction of the 3D GIS data model with the required relational database. The resultant model offers the possibility of flexible and interactive visual GIS analysis of the ever-changing situation on the university site. Furthermore, the work sheds light on possible three-dimensional GIS spatial analysis techniques, which provide a very useful and effective interactive tool. An example based on a 3D spatial analysis scenario is presented to illustrate a decision support system for planning purposes in a 3D visual environment for emergency, evacuation and security matters. The mechanism of using suitable software, which is important for the success of this work, is also explored. Three-dimensional modeling of cities is becoming an important trend for virtual city modeling and cadastral mapping. Along this track, a pilot project to build a 3D model of an area in Amman city, Jordan, is presented. The model uses real tracks recorded with a PDA with GPS in a 3D virtual environment setup. The focus of this study is on cadastral, navigation, address geocoding and tourism applications. It is important to automate a simple procedure using a PDA (integrated with GPS) to record a track, which is then
visualized directly in 3D GIS environment software such as ArcScene. At the country and city level, requirements, spatial resolutions and implementation in a 3D GIS environment are elaborated. In fact, when creating a virtual reality of an area it is essential to show the complete picture of the whole region model; zooming to the designated area then has a greater effect on the decision maker, who gets a better picture of a certain scenario. Finally, the paper summarizes the experience gained in texture mapping through two pilot projects using off-the-shelf software. The first is building accurate 3D texture mapping of an archaeological site in Jordan, the Artemis temple in Jerash city, using close-range photogrammetry techniques. The second is done within the GIS environment with an add-on module that reflects the true/actual scene. However, the texture is not as accurate: it cannot be used for reverse-engineering texture mapping, for example. On the other hand, it is fast, gives a true scene and delivers a solution for some applications such as 3D cadastral mapping and tourism. The implementation aspects and the methodology used are also discussed.
2. Project Planning
Three phases are required to build a true-reality 3D GIS model:
• Build a 3D model of all the needed features
• Build a 3D GIS model with all the relational spatial databases that correspond to the 3D model
• Append texture to all facets of the 3D features
The resultant 3D reality model offers a flexible and interactive visual decision support system for data management. In the following sections, the detailed methodology for each phase is discussed, with a corresponding application to illustrate the use of the suggested methodology.
3. Three-Dimensional Modeling Using Digital Photogrammetry Techniques to Build a 3D GIS Model
With the vast development in digital imaging, photogrammetry techniques and computing technology, automation of the process of building suitable 3D models is becoming more feasible, faster and less expensive. However, proper planning and knowledge of the detailed steps needed to achieve the required objective are a necessity. Furthermore, building 3D models is
important, but it is even more interesting if the model can be used in a GIS environment for 3D spatial analysis. At this stage it becomes very useful, since GIS technology is used in the vast majority of applications, especially for planning, monitoring and spatially oriented studies, and decision makers are becoming more and more familiar with GIS technology. The following discusses the important aspects encountered in several case studies towards building 3D GIS models. The aim of this discussion is not to develop new techniques, but rather to report the experience and the possibility of achieving the required objectives using off-the-shelf software packages and to determine suitable procedures.
3.1. Three-Dimensional Modeling
With the advances in digital photogrammetry and software automation techniques, 3D modeling is becoming a more feasible and much less time-consuming process, especially with the use of digital cameras. However, the technique requires the availability of digital photogrammetric software (such as SOCET SET or Z/I) and experienced human resources in this field. It is well acknowledged that photogrammetric modeling based on the collinearity equations (given below) eliminates geometric image errors most efficiently and creates the most reliable orthoimages from the raw imagery (Wolf and Dewitt 2000). In essence, photogrammetry produces accurate and precise geographic information from a wide range of photographs and images (Demers 2000). Any measurement taken on a photogrammetrically processed photograph or image reflects a measurement taken on the ground. Rather than constantly going to the field to measure distances, areas, angles, and point positions on the Earth's surface, photogrammetric tools allow for the accurate collection of information from imagery. In addition, with the fast-moving development of LIDAR in combination with photogrammetry techniques, the production of 3D models is much faster, more efficient and more accurate; however, it requires more financial and human resources and in many cases is not available. Figure 1 illustrates the workflow procedure that can be followed to build the 3D features and DTM models, and to extract 2D GIS layers from the produced orthophoto.
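For completeness, the collinearity equations referred to above relate an image point (x, y) to its ground coordinates (X, Y, Z) through the camera focal length f, the perspective centre (X_0, Y_0, Z_0) and the elements r_ij of the image rotation matrix. This is the standard textbook form (cf. Wolf and Dewitt 2000), with the principal-point offsets omitted for brevity:

```latex
x = -f\,\frac{r_{11}(X - X_0) + r_{12}(Y - Y_0) + r_{13}(Z - Z_0)}
             {r_{31}(X - X_0) + r_{32}(Y - Y_0) + r_{33}(Z - Z_0)},
\qquad
y = -f\,\frac{r_{21}(X - X_0) + r_{22}(Y - Y_0) + r_{23}(Z - Z_0)}
             {r_{31}(X - X_0) + r_{32}(Y - Y_0) + r_{33}(Z - Z_0)}
```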
[Figure 1: workflow for building the 3D model and the 3D GIS model. First stage: create project and import imagery (scanned aerial photographs), interior orientation, geo-positioning and triangulation, DTM creation with automatic terrain extraction and interactive terrain editing, terrain analysis and visualization, orthophoto and true orthophoto generation, mosaicking, image map production, feature extraction, perspective scene generation and animation, and data export. Second stage: export to GIS software and merge the spatial data with attribute data to build the 3D GIS model.]
Figures 3 and 4 illustrate the 3D building model built for the Al-Balqa Applied University site, overlaid with the DTM and the orthophoto. Both digital photogrammetry software packages were found to have similar automation procedures for producing orthophotos and DTMs and similar feature extraction capabilities. It is very hard to critique the fine details, as each has certain pros and cons in some areas, and the newer versions are improved; what is important is that both can deliver the required automation tasks.
3.2. Digital Camera vs. Aerial Camera
Figure 3: 3D building features overlaid with the DTM
Figure 4: Three-dimensional model of the Al-Balqa Applied University
Recently, Al-Balqa Applied University acquired a new digital camera that can be used for various aerial or terrestrial projects. It is the Kodak ProBack, a 4K x 4K sensor (see Figure 5) with a size of approximately 40 x 40 mm, mounted on a Hasselblad frame body with a 38 mm Biogon lens; it is equivalent to a small-format camera. Table 1a shows the specifications of this camera compared to the RC30 camera used to build the 3D model of Yarmouk University, and Table 1b shows the flight planning and project implementation specifications. When using a digital camera, there is no need for scanning or fiducial transformation. Thus, the photos are free from the noise level introduced by scanning (as in the scanned images from the RC30), which provides a much sharper image and allows the full use of the image resolution.
Table 1: (a) Camera specifications. (b) Flight planning.
(a)
Image Specs          RC30 metric camera     Kodak digital camera
Focal length         153.279 mm             38.274 mm
Dimension            23 x 23 cm             4 x 4 cm
Image resolution     12000 x 12000          4080 x 4080
Storage size         410 MB                 48 MB
Bands                3 (RGB)                3 (RGB)
Fiducials            Yes                    No
Pixel size           20 µm                  9.1 µm
GSD                  20 cm                  23 cm
Enhancement          Equalized              Equalized

(b)
Flight Planning      Al-Yarmouk             Al-Balqa
Flying height        2200 m                 1935 m
Photo scale          1:10000                1:25000
Photo coverage       4 km²                  1 km²
Overlap              +60%                   +85%
Average terrain      740 m (MSL)            960 m (MSL)
Strips               1                      1
Photos per strip     3                      4
Total photos         3                      4
The flights with the Kodak camera were carried out from a helicopter, with the operator acquiring the images directly according to a special scheme. A hand-held GPS was utilized, in addition to the helicopter navigation system, to organize the acquisition process. Although the flying height and photo scale of the two projects differ, the sharpness achieved with the Kodak photos allowed very accurate measurements for building our 3D model. However, because of the flying conditions, we chose a very high overlap percentage to manage the adjustment of the photos with good accuracy and stable results. Detailed comparisons are provided in Al-Hanbali 2005 [a,b,c,d,e], Al-Durgham and Nour 2005 and Al-Awamleh 2005, and more aspects related to the Kodak digital camera are provided in Koelbl 2005. A small worked example relating the camera and flight parameters in Table 1 to the achievable ground sample distance (GSD) is given below.
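The photo scales and GSD values in Table 1 follow directly from the flying height above ground, the focal length and the pixel size. The short Python sketch below reproduces them; it is only an arithmetic illustration of the relationships scale = f / H and GSD = pixel size x H / f, using the figures quoted in the table.

```python
def photo_scale_and_gsd(focal_mm, pixel_um, flight_msl_m, terrain_msl_m):
    """Return (scale denominator, GSD in cm) for a vertical photograph."""
    h_above_ground = flight_msl_m - terrain_msl_m           # flying height above terrain
    scale_denominator = h_above_ground / (focal_mm / 1000.0)
    gsd_cm = (pixel_um * 1e-6) * scale_denominator * 100.0  # pixel footprint on the ground
    return scale_denominator, gsd_cm

# RC30 over Yarmouk: f = 153.279 mm, 20 um pixels, 2200 m flight, 740 m terrain
print(photo_scale_and_gsd(153.279, 20.0, 2200, 740))   # ~ (9525, 19.1) -> 1:10000, ~20 cm GSD
# Kodak ProBack over Al-Balqa: f = 38.274 mm, 9.1 um pixels, 1935 m flight, 960 m terrain
print(photo_scale_and_gsd(38.274, 9.1, 1935, 960))     # ~ (25474, 23.2) -> 1:25000, ~23 cm GSD
```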
3.3. Three-Dimensional GIS Modeling
The detailed schematic diagram shown in Figure 6 illustrates the procedure followed to build the three-dimensional GIS model and the required relational database for the Yarmouk University site.
Figure 6: Schematic diagram of the pilot project implementation steps for building the 3D GIS data model.
The following data sources were used:
1. Aerial photographs of Yarmouk University, scale 1:10000, dated 2000, used to produce the orthophoto and the 3D model from the stereo models.
2. Topographic maps of the area, which can act as field control.
3. Blueprint maps from the Engineering Department of the university, used as a guide for the extracted feature layers.
4. Attribute data from the Yarmouk University administration office about all colleges and departments in the university.
Three software packages were used to build the feature class layers, geo-referenced to the Jordanian projection system as shown in Figure 7 (extracted feature layers for Yarmouk University):
I. 3D geo-referenced feature classes: the software used to extract these features is the feature collection module of the SOCET SET package, installed within the MicroStation software as an MDL module. The extracted 3D features were the buildings layer and the contour line layer, saved as 3D shapefiles (see Figure 3).
II. Geo-referenced orthophoto and true orthophoto maps of the university campus (map scale up to 1:500); the SOCET SET package was used (see Figure 7).
III. 2D geo-referenced feature class layers: MicroStation software was used to create the following layers (see Figures 6 and 7): the roads layer, the parking layer, the forest layer, the olive trees layer, the garden layer, the pavement layer, the tiles layer, and the land sport layer.
3.4. Relational Database Modeling
All layers were then exported to the ArcGIS environment to build the relational database and to work in a three-dimensional environment using the ArcScene/ArcGIS software, as shown in Figures 8 and 9 (see Al-Hanbali 2005 [a,b,c,d,e] and Al-Awamleh 2005). It is important to note that, to import 3D features from the digital photogrammetry software, they should be exported as 3D shapefiles, as both polyline and polygon feature classes. The polyline carries the attributes of each facet, which can be useful for texture mapping, while the polygon carries attributes about the whole building.
Figure 8: Relational database scheme of the 3D GIS data model.
3.5. Three-Dimensional Spatial Analyses
As an example, one scenario is illustrated in Figure 9: if a fire breaks out in the chemical lab supply building, which neighbouring buildings are affected, and which buildings and which floors of the affected buildings are safe? The result can also be illustrated with different zone colours, from deadly (red), to dangerous (yellow), to warning (gray) areas. The illustrated spatial query shows the answer in a three-dimensional environment with animated virtual reality. This made the
problem easy to visualize, which provided a very solid basis for taking immediate actions that would be almost impossible without it. The feasibility of this type of analysis opens a new era of applications. A minimal sketch of how such proximity zones could be derived follows Figure 9.
Figure 9: Three-dimensional spatial analysis with virtual reality animation results.
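The zone classification described above can be reproduced with simple buffer operations around the source building. The following Python sketch uses the Shapely library and purely hypothetical buffer distances (25 m, 60 m, 100 m) and building footprints; it illustrates the idea, rather than reproducing the actual analysis performed in ArcScene.

```python
from shapely.geometry import Polygon

# Hypothetical footprints (coordinates in metres, local grid).
source = Polygon([(0, 0), (20, 0), (20, 15), (0, 15)])        # chemical lab supply building
neighbours = {
    "Building A": Polygon([(30, 0), (50, 0), (50, 20), (30, 20)]),
    "Building B": Polygon([(80, 10), (100, 10), (100, 30), (80, 30)]),
    "Building C": Polygon([(150, 0), (170, 0), (170, 20), (150, 20)]),
}

# Assumed zone radii: deadly < 25 m, dangerous < 60 m, warning < 100 m.
zones = [("deadly (red)", source.buffer(25)),
         ("dangerous (yellow)", source.buffer(60)),
         ("warning (gray)", source.buffer(100))]

for name, footprint in neighbours.items():
    label = "safe"
    for zone_name, zone_geom in zones:          # first (innermost) intersecting zone wins
        if footprint.intersects(zone_geom):
            label = zone_name
            break
    print(f"{name}: {label}")
```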
4. Three Dimensional Modeling for City Applications
4.1. Virtual City Modeling
Municipalities are becoming more and more interested in virtual city modeling. This trend coincides with the need to better understand spatially related problems and issues visually. With the increasing types of information that can be linked spatially, and the growing population and number of buildings, better city planning that manages the third dimension is becoming a necessity. 3D virtual modeling gives the decision maker more eyes and perspectives with which to assess reality and find suitable solutions. Environmental issues, for example studying noise and pollution effects in city areas, are based on 3D rather than 2D in order to define affected areas in space and find suitable measures. The problem is that building such models using digital photogrammetry is very time-consuming and not cost-effective. On the other hand, 3D virtual city models do not have to be highly accurate, but they must represent reality well. Therefore, a simple procedure that is less expensive and less time-consuming would be very useful and preferred. The following simple procedure was followed for a pilot project area to show the effectiveness of such an approach; it builds the third dimension approximately as follows (a minimal scripting sketch is given after the list):
1. Building type: commercial, residential, public/government/ministry, hotel/hospital/mall, mosque/church, school/police centre, gas station, etc. All this information is normally available in the files and databases of the municipality; the question is whether it has been converted to a digital, spatially linked format such as shapefiles. Nowadays there is strong motivation towards building 2D GIS for cities and urban areas; in Jordan, Amman is almost completely converted to GIS formats, so this kind of information can be determined.
2. Determining the number of floors. Based on the type of building, each floor can be assigned a certain height that follows the building codes in the country; for example, in Jordan the normal floor height is about 2.75 m. To determine the number of floors, one or more of the following can be used:
a. The municipality files. However, they might not reflect the true condition, as the information is not always updated; other methods should be used for cross-checking.
b. High-resolution satellite images, which show some detail for tall buildings because of their inclination.
c. Field survey with a printed satellite image of the area of interest. This solution is reliable. It can also be conducted with a hand-held laser distance measurement device, such as the Leica models, which only requires shots at various spots representing the floor heights of the building.
3. Determining the locations of bridges/tunnels and their heights is also important. Satellite images and field survey can help here.
4. Building 2D GIS layers of the features of interest and adding this information as attributes to the 2D GIS layers.
5. Duplicating the building layers to model multi-storey buildings: two layers for two-storey buildings, three layers for three-storey buildings, etc., until all multi-storey buildings are modeled.
6. Assigning the height attribute and base height in the ArcScene/ArcMap environment, which builds up the buildings in 3D automatically.
7. Adding a DTM to these layers as the base height, which results in a 3D view of the area of interest.
8. Classifying the buildings with various colours and texture symbols according to their type and usage, which makes the model more representative, more distinguishable and closer to reality.
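Steps 2-6 above essentially amount to deriving an extrusion height from the floor count and writing it back as an attribute that the 3D viewer can use. The GeoPandas sketch below illustrates this; the input file name, the field names FLOORS and BLDG_TYPE, and the 2.75 m storey height are assumptions for the example rather than the project's actual schema.

```python
import geopandas as gpd

STOREY_HEIGHT_M = 2.75  # assumed normal storey height from the Jordanian building codes

# Hypothetical 2D building footprints with a floor-count attribute.
buildings = gpd.read_file("amman_buildings.shp")

# Extrusion height used later as the ArcScene/ArcMap "height" attribute.
buildings["HEIGHT_M"] = buildings["FLOORS"].fillna(1) * STOREY_HEIGHT_M

# Simple classification used for colouring/texturing the model (step 8).
def classify(bldg_type):
    residential = {"residential", "apartment", "villa"}
    return "residential" if str(bldg_type).lower() in residential else "other"

buildings["CLASS"] = buildings["BLDG_TYPE"].apply(classify)

buildings.to_file("amman_buildings_3d.shp")
print(buildings[["FLOORS", "HEIGHT_M", "CLASS"]].head())
```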
Figures 10 and 11 illustrate clips from a 3D virtual reality animation generated for the Al-Hussein area and Amman City, zooming from 3D Jordan down to Amman and then to the Al-Hussein area. Real tracks recorded with a PDA with GPS were imported as GIS files; adding these track files to the ArcScene environment produced a convincing 3D virtual environment of the area. See Kassab et al. 2006 for more details.
Figure 10: 3D model of Jordan and Amman City clips
4.2. 3D Country/City Modeling for Tourism and Navigation Applications
Three-dimensional modeling at the country level provides a broader perspective of the area of interest, as shown in Figures 10 and 11. It is very important visually to show the decision maker or planner the current location with respect to the city and the country.
Figure 11: 3D model of Amman City and Al-Hussein clips
Looking visually at the global picture of the country or the whole city helps to better visualize, define and understand the needed information. Examples are decisions related to city expansion, farms and rural areas, industrial and environmental aspects, sensitive projects and urban areas. Furthermore, tourism and navigation applications are expanding in our country and the demand for such visualization is increasing. However, going to 3D for a whole country requires powerful processors with a lot of memory and hard-drive space. For example, representing Jordan using Landsat imagery at 25 m resolution needed an image of about 2.2 GB, while at 12.5 m the image increased to around 8.5 GB; using SPOT at 20 m resolution resulted in an image of about 5.7 GB in colour and 1.7 GB in grayscale. Displaying and generating a 3D view would consume all the computing resources and time. Thus the best scenario is to decide on a certain grid size, say 5 x 5 km, and split the image accordingly (a minimal tiling sketch is given after Figure 12). This makes the processing much faster and easier to implement in navigation/tourism software. The software should also handle browsing the images directly at various resolution levels depending on the zoom option used; a good example is the Google Earth software, where each time you zoom in you get better resolution and further raster data is downloaded for that level. Another important aspect in processing large areas is the projection system. The best scenario is to always work in a geographic coordinate system with the WGS84 datum used by default in GPS receivers; then fewer distortion effects are encountered and better matching between satellite image edges is obtained. Furthermore, it is important to note that the freely available SRTM data (3 arc-second) from the Internet provides the DTM needed for generating orthophotos and 3D models from Landsat and SPOT images with resolutions of 10 m and coarser. In fact, our experience shows that using it for generating orthophotos from Ikonos imagery with 1 m resolution over a mountainous area provides an accuracy of about 5-7 m (see Kassab et al. 2006 for more details), which is excellent compared with the cost of a more accurate DTM. Figure 12 illustrates the orthophoto generated from Landsat at 12.5 m resolution using the DTM based on the 3 arc-second SRTM data.
Figure 12: Landsat orthophoto at 12.5 m resolution, generated using the DTM based on the 3 arc-second SRTM data
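Splitting a country-wide mosaic into fixed ground tiles, as suggested above, can be scripted with the GDAL Python bindings. The sketch below cuts an input GeoTIFF into 5 x 5 km tiles; the file name and the assumption of a projected coordinate system with metre units are illustrative, and no attempt is made to reproduce the exact software used in the project.

```python
from osgeo import gdal

TILE_SIZE_M = 5000  # 5 x 5 km tiles, assuming a projected CRS with metre units

src = gdal.Open("jordan_landsat_mosaic.tif")
gt = src.GetGeoTransform()                      # (x0, dx, 0, y0, 0, -dy)
tile_px = int(round(TILE_SIZE_M / abs(gt[1])))  # tile width in pixels
tile_py = int(round(TILE_SIZE_M / abs(gt[5])))  # tile height in pixels

for row, yoff in enumerate(range(0, src.RasterYSize, tile_py)):
    for col, xoff in enumerate(range(0, src.RasterXSize, tile_px)):
        xsize = min(tile_px, src.RasterXSize - xoff)
        ysize = min(tile_py, src.RasterYSize - yoff)
        gdal.Translate(f"tile_r{row:03d}_c{col:03d}.tif", src,
                       srcWin=[xoff, yoff, xsize, ysize])
```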
4.3. Three-Dimensional Modeling for Cadastral Applications
During the last century, population density has increased much more than normal growth, which puts more demands on managing land and cadastre. Asset management in cadastral applications is directly affected by 3D generation. Most of our residential and commercial areas utilize space in 3D, and it is therefore becoming a necessity to model this space to allow 3D spatial analysis scenarios. Asset valuation is affected by many factors, and many real-estate institutions and banks are interested in building or developing a land information system with 3D spatial analysis capabilities. However, a limiting factor is normally the 3D database, in addition to software capabilities. The following are some factors that are important for asset valuation in Jordan: supplied basic services; permitted number of floors; permitted construction area; landscape view; access to street; environment; parcel location within block; street frontage; distance from nuisances; land parcel shape; currently usable area; distance to city centre; distance from noise; soil condition; distance to educational centre; distance to health services; access to highway; distance to shopping centre; available utilities; distance to recreational areas; topography; distance to religious places, car parking areas, play gardens, fire stations and police stations; and access to waterways or railways. Many of these factors are directly or indirectly related to 3D; for example, topography and the permitted number of floors are directly related to 3D modeling, which is thus becoming a necessity. To model this, a pilot project for a highly commercial area in Amman City, called Al-Gardens, was carried out to show the real value of the spatial analyses one can conduct in a 3D environment. The software used in this
work is ArcScene/ArcMap from ESRI. The following steps were followed:
1. Define a suitable GIS data model for the area of interest, deciding on the required feature classes (similar to item 1 in Section 4.1) and attributes, such as the valuation factors listed above.
2. Define the boundary of the area of interest.
3. Digitize streets, all buildings and features of interest.
4. Get the cadastral map of this area from the authority.
5. Overlay all of these in a GIS environment.
6. Get field survey data to:
a. get building information such as names and heights (by number of floors);
b. note the buildings on street corners and on main or sub-streets;
c. identify special-case buildings such as ministries, embassies, centres and malls;
d. get the market value of each building, floor, apartment and store;
e. get the estimated value from the authorities for the same area;
Figure 13: (a) The offset height for each floor level, which is 3 m. (b) Classifying buildings and floors using a colour scheme.
f. take digital photos of the buildings;
g. build the required geodatabase and the relational classes;
h. use spatial intersection to overlay market prices against the values estimated by the authorities;
i. follow the same procedure to model the third dimension of the buildings as in Section 4.1 (Figure 13 illustrates this process and Figure 14 shows the resultant 3D model);
j. classify the buildings; some important categories for building classification are:
i. condominium apartments/flats;
ii. independent residential;
iii. adjacent villas, single villas or multiple villas;
iv. commercial galleries;
v. offices.
Figure 14: 3D View of the commercial area.
In order to reach the area of interest, a PDA with integrated GPS is used to define a track that can be exported to the GIS environment. A 3D animation can then be automatically visualized along this track, leading directly to the required space in the GIS environment. Thanks to the available database, the current market value of an asset can be extracted based on 3D information such as the floor level and/or topography. Special customization of GIS/GPS navigation software can be developed for this purpose. One example of a 3D spatial analysis is illustrated in Figure 15: the query selects all available offices on the third, fourth or fifth floor with an area of 80-100 m² and a price range of 60,000-70,000 JD. See Saleh and Al-Jabrah 2006 for more scenarios.
Figure 15: 3D spatial analysis to select all available offices on the third, fourth or fifth floor with an area of 80-100 m² and a price range of 60,000-70,000 JD.
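The query behind Figure 15 is essentially an attribute filter over the office feature class. The GeoPandas sketch below expresses it; the layer name and the field names FLOOR, AREA_M2, PRICE_JD and USE are assumptions made for the example, not the actual schema of the pilot geodatabase.

```python
import geopandas as gpd

units = gpd.read_file("algardens_units.shp")  # hypothetical 3D cadastral unit layer

offices = units[
    (units["USE"] == "office")
    & units["FLOOR"].isin([3, 4, 5])
    & units["AREA_M2"].between(80, 100)
    & units["PRICE_JD"].between(60_000, 70_000)
]

print(f"{len(offices)} offices match the query")
offices.to_file("matching_offices.shp")       # e.g. for display in ArcScene
```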
5. 3D Texture Mapping
Texture mapping is the process of adding realism to a built three-dimensional CAD model, so that humans can visualize the exact object virtually in computer-generated graphics. The basic idea is to stretch images of the building facets onto their CAD model using an affine transformation. Normally the image must correspond to a flat surface; otherwise distortions occur. Two procedures are discussed in this section. The first is the accurate generation of orthophoto images for all building facets, which are then stretched accurately onto the model; the second is less accurate and is based on directly stitching digital camera shots of the building sides onto the model without any ortho-rectification. A minimal sketch of estimating such an affine mapping is given below.
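An affine mapping between a facet image and its planar facet can be estimated from three or more corresponding points by least squares. The NumPy sketch below shows this; the pixel and facet coordinates are made-up values for illustration, and real texture pipelines (e.g. in 3DS Max or SketchUp) handle this internally.

```python
import numpy as np

def fit_affine_2d(img_xy, facet_uv):
    """Least-squares affine transform mapping image pixels to facet coordinates.

    img_xy, facet_uv: (N, 2) arrays of corresponding points, N >= 3.
    Returns the 2x3 matrix [[a, b, tx], [c, d, ty]].
    """
    n = len(img_xy)
    A = np.hstack([img_xy, np.ones((n, 1))])          # [x, y, 1] design matrix
    params, *_ = np.linalg.lstsq(A, facet_uv, rcond=None)
    return params.T                                    # shape (2, 3)

# Hypothetical correspondences: image pixel positions -> facet plane coordinates (metres).
img_xy = np.array([[12.0, 8.0], [512.0, 10.0], [510.0, 390.0], [15.0, 388.0]])
facet_uv = np.array([[0.0, 0.0], [6.2, 0.0], [6.2, 4.7], [0.0, 4.7]])

M = fit_affine_2d(img_xy, facet_uv)
mapped = (M @ np.hstack([img_xy, np.ones((4, 1))]).T).T
print("max mapping error (m):", np.abs(mapped - facet_uv).max())
```

The residual printed at the end indicates how far the facet deviates from a purely affine (flat, parallel-projected) mapping; large residuals are a sign that ortho-rectification is needed first.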
5.1. Orthophoto Texture Mapping
In the archaeological documentation of monuments, the accuracy of the exact shape and texture is very important. For example, one project was to generate true texture and a virtual reality model for the Artemis Temple in ancient Jerash city, which will be the basis for accurate documentation for monument resurrection (reverse engineering); see Al-Masri 2006 and Al-Hanbali et al. 2005a for more details. In order to do that, an accurate wire-mesh surface was generated using close-range photogrammetry techniques. Then, for each facet (a triangle), an orthophoto was generated from the resected/triangulated stereo images. The software used for this modeling was PhotoModeler. Afterwards, using 3DS Max, all images were stitched onto their surfaces, since each orthophoto is georeferenced in the same coordinate system as the wire-mesh model. The final result is illustrated in Figures 16 and 17.
Figure 16: Wire-mesh model of the Artemis Temple.
486
Nedal AI-Hanbali, Eya d Fadda, Sameeh Rawashdeh
Figure 17: Adding orthophoto images as texture to the CAD model.
5.2. Direct Photo-Shots Texture Mapping
In this case, texture mapping is performed with special software such as SketchUp, which stitches directly taken photo shots of the building sides, captured with a digital camera of suitable resolution. In order to obtain good results, however, the camera axis should be as perpendicular to the facade as possible, and there should be no obstructions between the camera and the object. The software allows you to customize and duplicate any shape or repeated pattern that can be fitted to various facets, such as similar windows, doors, openings or special decorations on the surfaces and sides of a building. This procedure is useful for creating virtual reality animations for display and tourism applications, but it is not accurate enough for archaeological documentation. The same software can operate within the ArcGIS environment and ArcScene, and thus one can create 3D GIS features with attributes for 3D spatial analysis and viewing. Figures 17 and 18 display clips taken for the Al-Balqa Applied University campus. See Khear et al. 2006 for more details.
Figure 17: Top view of the extracted 3D model of Al-Balqa Applied University overlaid with the orthophoto and the DTM of the area.
Figure 18: Comparison between the actual photo and the built-up 3D texture mapping model using the SketchUp software.
6. Results, Conclusions and Future Work
This work is the result of two years of research exploring the effective use of 3D GIS modeling in various applications. The paper illustrates the feasibility of 3D GIS modeling using off-the-shelf software. Several applications and the procedures followed are presented to explore the possibilities, suitability and advantages of each. Four applications are discussed in this work, each with a unique application area and perspective. The first project explores building 3D models from a small-format digital camera (Kodak, 4K x 4K resolution, approximately 40 mm x 40 mm sensor, 38 mm lens) compared with a metric frame camera (RC30) in aerial mapping applications. This project corresponds to the complete production workflow of the new trend in building a three-dimensional model of a complex of interest using digital photogrammetry techniques: extracting the DTM, producing the orthophoto and true orthophoto, and finally extracting the 3D features and buildings. The two cameras produced comparable results, with an overall positional accuracy of 10 cm when the extracted 3D features were compared with the blueprint drawings of the buildings. The photogrammetric results show good agreement, with an overall rigorous aerial triangulation adjustment of 0.112 m. The quality of the produced orthophoto, with a GSD (pixel-to-ground ratio) of 0.15 m, matches the observation accuracy of about 0.14 m, which shows reliable results; the results for the digital camera are similar.
The project was then extended to illustrate the possibility of performing 3D GIS spatial analysis on our university campuses. To do that, database layers for all features were created to model the real situation of these sites, so that any three-dimensional analysis includes all the related information that affects a decision. All of these were integrated into one environment to build the 3D GIS model of the university campus. Finally, 3D perspective scenes with true animation of the area were produced using the SOCET SET and ArcScene software packages.
Virtual city modeling to build three-dimensional GIS models that allow 3D spatial analysis for various applications is also presented. The approach used in this work is a simple strategy that is suitable for the required spatial applications. The procedure followed is economical, based on available databases, and can be implemented easily to deliver quick and sufficient results. For example, what is important in cadastral applications is not how accurate the model is, but how well it represents and satisfies the required conditions and spatial criteria; the same holds for tourism and navigation. Hence, the paper summarizes a suitable procedure that requires fewer resources (financial, instrumentation, software and human) to deliver the required 3D spatial analysis quickly. It is important to note that 3D modeling of large areas, such as whole cities or the country, is more feasible using low-resolution images, which deliver similar and suitable results with less computational power and without special processing speeds. Furthermore, our experience shows that 3D modeling of cities using Ikonos imagery and the freely available 3 arc-second SRTM DTM data can deliver a suitable and economic solution for producing orthophotos and 3D models of cities, with an accuracy of up to 5 m in mountainous areas and 2 m in flat areas.
To provide a more realistic view and a real virtual reality environment, two procedures for adding texture mapping to the 3D objects are discussed and compared. The first uses close-range photogrammetry software (PhotoModeler) to create the 3D model and orthophotos, and then 3DS Max to produce a very accurate 3D model with true texture (the accuracy is within 3 cm). The other application stitches directly taken photo shots of the building sides onto the 3D model using the SketchUp software to produce a 3D model with true texture; the results were pleasing, little distortion was encountered, and the approach delivers good results for many applications in virtual city modeling. The SketchUp software, which can also be launched from within the ArcGIS/ArcScene environment, proves to be very useful and sufficient for the suggested applications, which are:
• 3D cadastral applications, to build virtual reality city models in 3D for roads and buildings;
• 3D GIS models for tourism and for archaeological documentation, such as for the Jerash and Petra cities in Jordan.
Finally, more work is still required in the area of adding true texture to 3D models. It is important to define an automatic procedure for fitting available vector data, such as streets, to DTMs and orthophotos to build virtual 3D GIS models. More work and investigation is also needed on 3D virtual city models for cadastral applications, such as adding details for:
• building texture;
• building floors, using the CAD drawing details;
• attributes for these floors;
• street features based on texture captured from the orthophoto.
Acknowledgement
This work is the result of collaborative efforts and the help of many organizations that supported it directly and indirectly. The authors would like to express their appreciation to Al-Balqa Applied University, which supplied all the hardware and software used in executing the work, and to the Swiss Government, which provided funding to purchase the digital camera and the digital photogrammetry software and paid the flying costs. Special appreciation also goes to Prof. Otto Koelbl, who helped us plan and fly using our digital camera, and thanks to all the students who helped in processing the data: Bashar Awamleh, Muhannad Aldurgham, Iyad Noor, Alaa Kassab, Mohammed Ayesh, Rabee Alsmadi, Husam Almasri, Jamil Saleh, and Hossam Aljabrah. Appreciation is also extended to the Royal Jordanian Geographic Centre, which supplied aerial images to one of our students. Finally, appreciation is extended to the Engineering Divisions at Al-Balqa and Yarmouk Universities, who were very helpful in giving us all the required information and CAD models to complete our modeling.
References
Al-Hanbali, N. N., Albayari, O., Saleh, B. (2005a): "From Macro to Micro Archaeological Documentation: Building 3D GIS Data Base-Model for Jerash City and Artemis Temple", The Fourth International Conference on Sciences & Technology in Archaeology and Conservation, Amman-Zarqa, Dec 7-11.
Al-Hanbali, N. N., Sana Batarseh (2005b): "Orthophoto and Mapping Techniques and their Application at Qasr Al Bent", NAMO - Nabatean Mortars - Technology and Application Workshop (Consortium research group of: Seibersdorf
Research; Institute for Restoration and Conservation Techniques; Directorate General of Antiquities and Museums; and the Royal Scientific Society Building Research Centre), Amman, Nov 28-29.
Al-Hanbali, N. N. (2005c): "Building 3D GIS Modeling of a University Campus", GIS User Conference, Yarmouk University, Irbid, Nov 16.
Al-Hanbali, N. N. (2005d): "Building GIS Database for Banking", GeoSpatial World 2005 (Enabling the Spatial Enterprise), The Intergraph GeoSpatial Users Community, San Francisco, California, USA, April 26-28, 2005.
Al-Hanbali, N. N. (2005e): "GIS Modelling for Spatial Decision Support Systems", Kuwait 1st International Geographic Information Systems (GIS) Conference, Kuwait, Feb 5-7, pp 10.
Al-Durgham, M. and Nour, I., 2005. "Aerial Triangulation, DTM, OrthoPhoto and 3D Model Extraction", BSc. Thesis, Surveying and Geomatics Eng. Dept., Al-Balqa Applied University, Jordan.
Al-Awamleh, B., 2005. "Building 3D GIS Modeling using Photogrammetry for Yarmouk University", BSc. Thesis, Surveying and Geomatics Eng. Dept., Al-Balqa Applied University, Jordan.
Al-Masri, H., 2006. "Building 3D Model for Artemis Temple in Jerash City using Close Range Photogrammetry Techniques", BSc. Thesis, Surveying and Geomatics Eng. Dept., Al-Balqa Applied University, Jordan.
Atkinson, K.B., Close Range Photogrammetry and Machine Vision, Department of Photogrammetry and Surveying, University College London, 1996.
Kassab, A., Alsmadi, R. and Ayesh, M., 2006. "Geospatial Information System for Navigation, Tracking and Address Location Services in Jordan", BSc. Thesis, Surveying and Geomatics Eng. Dept., Al-Balqa Applied University, Jordan.
Khear, A.Q., Al-Tarazi, H., Salamah, M.H., 2006. "3D Texture Modelling for Al-Balqa Applied University", BSc. Thesis, Surveying and Geomatics Eng. Dept., Al-Balqa Applied University, Jordan.
Koelbl, O., 2005. "The Challenge of New Sensors to Create a General Basic Information System for the Various Tasks in Planning and in Infrastructure Management (Progress and Limitations)", International Conference on Advanced Remote Sensing for Earth Observation: Systems, Techniques, and Applications, Riyadh, Saudi Arabia, May 8-11.
Saleh, J. and Al-Jabrah, H., 2006. "3D Cadastral Application", BSc. Thesis, Surveying and Geomatics Eng. Dept., Al-Balqa Applied University, Jordan.
Wolf, P. R., Dewitt, A. B., Elements of Photogrammetry with Application in GIS, 3rd edition, McGraw-Hill, 2000.
Demers, M. N., Fundamentals of Geographic Information Systems, 2nd edition, John Wiley & Sons, Inc., USA, 2000.
Moving Towards 3D - From a National Mapping Agency Perspective

Dave Capstick and Guy Heathcote

Ordnance Survey, Research & Innovation, Romsey Road, Southampton, SO16 4GU, United Kingdom. (Dave.Capstick, [email protected])
Abstract
As the National Mapping Agency of Great Britain, Ordnance Survey develops high-quality, detailed topographic products with national coverage. We are always developing our product line and are currently investigating the addition of 3D data. In line with our role as a national mapping agency, the approach is to do this such that any 3D data is part of a coherent range of products that are sympathetic with the terrain and the existing 2D topographic data, has national coverage and is maintained. Such intentions create challenges beyond those faced by suppliers of existing 3D data, given that those datasets are stand-alone, of limited geographic extent and unmaintained. We are therefore faced with the following challenges:
1. Determination of a specification and an appropriate capture methodology. The first of these is a non-trivial process since it has to be a compromise between multiple end uses and the economics of data capture. Automatic data capture techniques for 3D features are in their infancy, and if a national-coverage dataset is ever to be realised, these techniques must be significantly advanced.
2. Data modelling and storage of 3D data.
3. Maintenance in line with the existing policy of continuous update.
4. Supply of products in a format that is understandable to our customers' systems and of a usable form.
This paper discusses those issues most pertinent to the development of a potential 3D product within Ordnance Survey, so that a better understanding of the complexity of implementing a 3D dataset into a national mapping agency's product portfolio can be achieved.
1. Introduction
Ordnance Survey is the National Mapping Agency (NMA) of Great Britain and as such is uniquely placed within the country's Geographic Information (GI) industry in terms of the expectations and demands placed on us by our customers. As a national mapping agency, our policy is to support products that are detailed, information-rich, mutually compatible, current (and hence maintained) and of national coverage. Most importantly, each product forms part of a national topographic referencing system. Additionally, there is an expectation that we should provide geographic information that complies with international standards. Over the last few years there has been an increased awareness of the potential that 3D GI could provide. This has been given a significant boost, in terms of public awareness, through the release of global exploration systems such as Google Earth and 3D editing tools such as SketchUp. There are now many and varied applications that require the provision of 3D information. In response to this, Ordnance Survey is performing significant research to investigate the capture and modelling of 3D data. However, due to Ordnance Survey's position within the GI industry in Great Britain, there are both significant advantages and challenges that are unique to an NMA. Specifically, these are as follows:
1. The creation of a data specification and data capture process.
2. Data modelling and the integration into existing products and systems within Ordnance Survey.
3. The maintenance of a national database.
4. Product specification and data format.
This paper will discuss the issues mentioned above and view them in the context of a national mapping agency.
2. Data specification and capture The first step in the creation of a 3D geospatial infrastructure is the definition of the data specification. The data or capture specification is a compromise between what content is required to construct a range of products designed to meet different customer needs and what can be economically and feasibly collected. Currently we supply data to a wide range of customers representing a broad range of market sectors and any 3D product(s) are likely to be similar. In order to develop the product specifications we undertook consultations with a wide range of customer groups about their specific tasks and
from this a series of real world objects (RWOs) have been identified. The users and the specific tasks that have been investigated include organisations involved in: planning and construction for urban design and planning applications; the telecommunications industry for network design and signal propagation studies; catastrophe modelling by insurance companies; and flood risk modelling. Table 1 details a small selection of the information that has been extracted and the source of the information.

Table 1. Some of the RWOs highlighted by customers/users as being significant.

Real world object (RWO)   Customers/uses
Building Storey           Local authority planning, flood risk modelling, telecommunications, urban design, risk management, landscape and architectural investigation.
Building Roof             Local authority planning, planning application consultancy, telecommunications, urban design.
Wall                      Flood risk assessment
River Bank                Flood risk assessment
Tree                      Local authority planning, telecommunications.
Road Network Heights      Public transport route planning, asset management for County Councils
Tunnel                    Telecommunications
Wall                      Flood risk assessment
Land Cover                Telecommunications
Another aspect of the capture specification definition is to consider the possibilities and limitations of the data capture methodology. Currently, the main sources of 3D data are LiDAR, photogrammetry, ground survey and satellite imagery. None of these technologies is sufficiently advanced for our purposes to enable automatic capture of features in 3D. As such, we need to consider the utilisation of manual and semi-automatic methods. The work carried out to date in our primary data capture processes has been based on photogrammetry. This is due to our extensive experience in photogrammetric data capture, since it is already utilised within the existing 2D capture flowline. Allied to this is the availability of appropriate stereo imagery and software tools. For these reasons we initially opted to use BAE's SOCET SET photogrammetric software package. We have explored the 3D capture functionality within SOCET SET and this has shown that it is capable of constructing simple 3D building features from a library of set building shapes. More complex shapes can also be built by combining several library features. However, it is
acknowledged that SOCET SET was not designed specifically for inclusion in a 3D capture flowline. This is apparent from certain limitations that we have encountered within the software. For instance, there is no ability to geometrically link 3D features with existing 2D datasets; Fig 1 shows this aspect in more detail. The ability to add attribution that may enhance the richness of the dataset is also lacking. Additionally, there is no functionality to enable the merging of the multiple primitive objects that are used when constructing complex building shapes. Finally, this is still a manual process, which is impractical on a large scale.
Fig. 1. Left: OS MasterMap, showing buildings in blue and gardens in green. Right: with 3D buildings captured via photogrammetry, using SOCET SET. Note the discrepancy between the 3D building footprint and the 2D footprint.
What this investigation has highlighted is not necessarily the best data capture method available, but that the 3D and 2D data capture processes need to be carried out either at the same time, or in such a way as to ensure synchronisation between our different products. It is the inconsistencies between different capture methods that are likely to cause the greatest problems when producing a coherent and interoperable suite of 2D and 3D products. Ordnance Survey is also investigating CyberCity Modeler for its value in capturing 3D features. It is envisaged that this software will solve many of the issues highlighted in the SOCET SET trial: OS MasterMap could be incorporated within the CC Modeler data capture process in order to locate features and to create a geometric and topological link between the 2D and 3D datasets. Both aerial and ground-based LiDAR techniques are being investigated by Ordnance Survey, although we have, as yet, few practical results available. Automatic feature extraction techniques are in their infancy for LiDAR and are currently not sufficiently advanced for our requirements; however, there has been some success in semi-automatic feature extraction [Flood 2002]. Satellite imagery can largely be discounted for our purposes, since the spatial resolution is currently considered too coarse to
meet our requirements. Ground based surveying methods are also being investigated; these involve the capture of features using traditional ground surveying equipment. This method is particularly suited to 'filling in' detail that cannot be seen from the air and for maintenance. Investigations are currently looking at the efficacy of reflectorless total stations. It is envisaged that no one data capture methodology would be used in isolation to capture 3D features. Photogrammetry alone cannot capture all feature detail due to obscuring of features. Thus these diverse datasets need to be integrated in order to produce a set of coherent features.
3. Data modelling and integration within existing systems

As most of Ordnance Survey's production processes are designed with 2D data in mind, the seamless integration of a potential 3D dataset is problematic. Future products, whether 3D or not, will have to be developed with incorporation into this environment in mind. The logical model that has been developed within Ordnance Survey is shown in Fig 2. The primary unit of information within the model is the feature, along with a managed identity. The identity is the link to a set of extensible data components that contain detailed information about that feature. In Fig 2, one component is the 2DLocationalComponent, which describes the 2D geometry of that feature. In 3D, the component nature of the logical model will allow us to extend the model to include a 3D geometry component.
Fig. 2. Ordnance Survey logical data model
The logical model will therefore allow considerable flexibility and extensibility in how we can represent geographic features. Potential problems arise when we physicalise this model. The database environments under which 3D data will need to be stored are Oracle SDO and ESRI SDE. Neither of these provides support for true 3D structures [Zlatanova et al 2002]. However, it will not be necessary to query 3D elements; only position will be required for search purposes, which can be achieved using the 2D representations. From this it follows that 3D data could be stored as a BLOB component associated with the respective feature and only interpreted for editing purposes and ultimately by the customer systems. One potentially problematic aspect of the model concerns the relationship between 2D features within MasterMap and the 3D structures. In many cases there will be a one-to-one relationship with the 3D structure, e.g. a detached house. This will not always be the case, and most typically semi-detached and terraced houses will see one 3D structure mapping to many OS MasterMap features. This can be handled by introducing into the data model composite features that map to two or more individual features and which have a single 3D structural representation. Based on the results from our user consultation exercise we have a clearer understanding of what features need to be included in our model. Allied to this, researching the generation of 3D based products has given us further practical insight into user requirements and specifications. These results have been gathered together to provide us with a general overview of the nature of the data that is required within our model. Fig 3 shows those components that are considered essential for a 3D based product supplied by Ordnance Survey. Essentially it shows that geographic features with a 3D aspect to them can broadly be broken down into four main areas: terrain, the built environment, water and vegetation. The built environment class can be further broken down into buildings and structures, where structures would include bridges, tunnels, flyovers etc.
Fig. 3. Essential features contained within a 3D model.
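As a rough illustration of the component-based logical model outlined above — a feature with a managed identity linked to extensible data components, a 2D locational component used for search, a 3D geometry held as an opaque BLOB, and composite features mapping one 3D structure to several 2D features — the following minimal Java sketch may help. All class and field names are hypothetical; they are not taken from Ordnance Survey's actual schema.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch of a feature with a managed identity and extensible components.
interface DataComponent {}

// 2D footprint used for positioning and search (simplified to a polygon of x,y pairs).
class TwoDLocationalComponent implements DataComponent {
    final double[][] footprint;                 // [[x, y], ...]
    TwoDLocationalComponent(double[][] footprint) { this.footprint = footprint; }
}

// 3D geometry stored as an opaque BLOB, interpreted only by editing/customer systems.
class ThreeDGeometryComponent implements DataComponent {
    final byte[] geometryBlob;                  // serialised 3D structure
    ThreeDGeometryComponent(byte[] geometryBlob) { this.geometryBlob = geometryBlob; }
}

class Feature {
    final long featureId;                       // managed identity
    final Map<Class<?>, DataComponent> components = new HashMap<>();
    Feature(long featureId) { this.featureId = featureId; }
    void add(DataComponent c) { components.put(c.getClass(), c); }
}

// Composite feature: one 3D structure (e.g. a terrace) mapping to several 2D features.
class CompositeFeature extends Feature {
    final List<Feature> members;
    CompositeFeature(long id, List<Feature> members) { super(id); this.members = members; }
}

public class LogicalModelSketch {
    public static void main(String[] args) {
        Feature house = new Feature(1001L);
        house.add(new TwoDLocationalComponent(new double[][]{{0, 0}, {10, 0}, {10, 8}, {0, 8}}));
        house.add(new ThreeDGeometryComponent(new byte[]{}));  // serialised solid would go here
        System.out.println("Feature " + house.featureId + " has "
                + house.components.size() + " components");
    }
}
```

The point of the sketch is only that the 2D footprint remains queryable while the 3D geometry stays opaque to the database, matching the BLOB-based storage discussed above.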
4. Maintenance and update

One of Ordnance Survey's core strategic objectives is to 'improve and maintain the definitive databases in a form that facilitates the association and integration of additional geographic data'. As such, when we contemplate how we might embrace the concept of 3D, it is only natural that the process of 3D maintenance would be considered a key element. A simplistic way of examining the 3D editing process can be visualised as a flowline linking a Job Manager, Query / Access Control, Editing Tools, a Databank / Access Control and Quality Assurance (Fig 4).

Fig. 4. Simple overview of a 3D maintenance flowline.
Within such an environment, the initiation of an update process would be event driven, as it is in our existing 2D environment. Within our existing model, such events can be called for a variety of reasons - significant change, quality improvement sweeps and the correction of reported errors being common triggers. As such, a natural question is to consider how these apply to 3D and whether additional, 3D-specific, triggers might also be required. In order to ensure the maintenance process is in line with the update of the existing database, capture from photogrammetry alone is insufficient. One alternative is ground capture. Indeed, both photogrammetry and ground survey techniques are used concurrently within Ordnance Survey to service our existing database. We might initially approach this by considering how 3D maintenance tools might work either alongside or as a part of our existing 2D tools. Whilst a single editing environment would be desirable, separate solutions are a more likely scenario. Software offering 3D modelling functionality, which is at the heart of our requirements, is not, of course, new. Modelling packages, such as Autodesk's 3D Studio Max, have been on the market for many years, steadily offering more complexity and refinement. In addition, specialist 3D tools aimed at distinct market sectors - say, engineering, architecture,
entertainment - have also established themselves within their own specific industries. Many of these tools share the same set of underlying features, tools and methodologies. As such, it is not hard to imagine a surveying/modelling/maintenance system using similar principles, but with the addition of specialist surveying functionality. Existing field survey tools are hosted on a pen-based tablet computer, which may not be suitable for 3D capture given the relatively small screen size. Also, the simultaneous selection and navigation functions that 3D work requires are likely to prove inefficient and difficult if a touch pen were the primary method of input. A possible solution would be to limit the amount of modelling work in the field, with surveyors instead required to finish the processes on more suitable equipment in an office environment. However, even this would have disadvantages, as it could require the surveyor to partially rely on photographs or memory. A 3D toolset would inevitably require that the field surveyor become familiar with a host of new tools and techniques. Although our surveyors are likely to embrace such changes with enthusiasm, a costly and lengthy training programme would still be required. In addition to the techniques already mentioned, there is also the possibility of pre-captured 3D data being sourced from third parties. These could be planning departments, architects, builders or even on-line enthusiast communities. We already use a similar approach within our 2D flowline, where architectural plans are employed as source data. As we have found within our own work, 3D capture and modelling is a complex task that can be error prone. This indicates that quality assurance is an important production issue. Problems include geometric positioning, scaling, topological inconsistency, texture coordinates, attribution and integration with other products. The relatively small scale of our test environment has meant we have been able to use visual and manual methods to identify and resolve such issues, but it is obvious that these are not scalable. Therefore, a suite of more automated test processes needs to be developed. How we might go about creating such tools is a matter for further research.
5. Product supply

This involves the construction from the database of products suited to one or more user scenarios and their packaging and delivery using appropriate formats and delivery technologies.
File formats capable of transferring 3D data are already numerous. Although each of these formats serves a similar role, the actual content within each can vary considerably, in line with the differences between their intended use. For example, some formats describe objects via indexed geometry, whilst others use explicit (and thus duplicated) vertices. If a 3D file is to be used within a GIS context, then it is desirable that the format can hold a spatial reference system, custom attribution or possibly even 3D topology. Despite the number of formats existing in the market, there are a few that stand out. For example, VRML97, which, despite its age and simplicity, is still widely supported by many commercial applications. However, GeoVRML [GeoVRML] and X3D (the XML based VRML successor) have struggled to obtain similar levels of acceptance. Another widely supported format is 3DS (the previous default format for 3D Studio Max), which is still used extensively. One problem with such formats, though, is that they have been developed for and by the mainstream CG industry and do not support the attribution and geo-referencing required by GIS applications. As a result of these deficiencies, a new format, CityGML, is currently being developed to address these issues. This project, under the control of the Geodata Infrastructure North-Rhine Westphalia (GDI NRW) in Germany, is an extension of GML3.0, with the aim of establishing the format as an OGC standard [Kolbe et al 2005]. Given Ordnance Survey's existing adoption of GML for its OS MasterMap product, it should come as no surprise that CityGML is of great interest to us and that we are participating in the project's Special Interest Group. Current Ordnance Survey policy is to release products using international standard formats rather than vendor formats, and specifically this has meant GML for vector products. We can envisage a format such as CityGML serving a similar purpose for 3D data. However, whilst CityGML may be acceptable to the GIS industry, it is harder to be sure that such a standard would also be adopted by the CG industry in general. As such, it would seem more likely that we would additionally have to supply data in one or more of the more established 3D formats.
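To make the difference between indexed and explicit geometry mentioned above concrete, the short sketch below expands an indexed triangle mesh (shared vertices referenced by index, as in X3D-style formats) into explicit, per-triangle duplicated vertices (as in simpler exchange formats). The mesh data are invented purely for illustration.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative only: expanding an indexed mesh into explicit (duplicated) vertices.
public class IndexedToExplicit {
    public static void main(String[] args) {
        // Four shared vertices (x, y, z) describing the two triangles of a quad.
        double[][] vertices = {{0, 0, 0}, {1, 0, 0}, {1, 1, 0}, {0, 1, 0}};
        int[][] triangles = {{0, 1, 2}, {0, 2, 3}};    // indexed geometry: 4 vertices stored

        // Explicit geometry: every triangle carries its own copies of the vertices.
        List<double[]> explicit = new ArrayList<>();
        for (int[] tri : triangles) {
            for (int index : tri) {
                explicit.add(vertices[index].clone()); // duplication is deliberate
            }
        }
        System.out.println("Indexed: " + vertices.length + " vertices, "
                + "explicit: " + explicit.size() + " vertices");
    }
}
```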
6. Conclusion

This paper has discussed those aspects pertinent to the introduction of a 3D based product with respect to an NMA. We have highlighted that there are four main areas of major importance. These are:
1. Specification derivation is being driven by the results from an extensive user consultation exercise. The photogrammetric work has highlighted
that the integration with existing capture flowlines and the integration of multiple data sources would be of primary concern when capturing 3D data.
2. Data modelling needs to be carried out in such a way that not only are relevant features modelled, but we also take into account the existing systems and infrastructure that model our 2D digital data.
3. Maintenance and update is an important activity within Ordnance Survey's 2D digital mapping. We aim to provide digital data that is as up to date as possible for 3D data as well.
4. Finally, product supply is an area that needs consideration. Ordnance Survey is often seen as an organisation that should develop and promote new data standards. For example, the introduction of GML for MasterMap. There are many 3D standards that are commonly used within the wider CG industry; however, few of these have a geographic aspect to the features that are being modelled. CityGML is an exception to the rule and is a possible solution to the 3D geospatial standard issue.
References

Flood, M. (2001) LiDAR activities and research priorities in the commercial sector. International Archives of Photogrammetry and Remote Sensing, vol. XXXIV-3/W4
GeoVRML, http://www.geovrml.org/
Kolbe, T.H., Groeger, G., Plumer, L. (2005) CityGML - Interoperable access to 3D city models. In: Oosterom, Zlatanova, Fendel (Eds): Proceedings of the Int. Symposium on Geo-Information for Disaster Management, March 2005, Delft, Springer Verlag
Zlatanova, S., Rahman, A.A., Pilouk, M. (2002) 3D GIS: Current status and perspectives. In: Proceedings of the Joint Conference on Geo-spatial Theory, Processing and Applications, 8-12 July 2002, Ottawa, Canada, 6 p.
An Approach for 3D Visualization of Pipelines

Y. Du and S. Zlatanova

The Urban Planning Bureau of Yibing city, Sichuan Province, P.R. China
GISt, OTB, Delft University of Technology, Delft, The Netherlands
[email protected]
Abstract

Utility networks (pipelines and cables) are usually modelled by lines and nodes. The systems used for maintenance of utilities are typically two-dimensional, allowing for creating vertical profiles in limited key places. Since the position is visualized on 2D maps, it is rather difficult to distinguish between cables and pipelines with the same x,y coordinates. Such visualization might be misunderstood by both field workers (maintaining the utilities) and specialists. Therefore the interest in 3D visualization is rapidly increasing. Generally, visualization of lines in 3D is quite challenging. In most of the cases, the best solution is to replace the lines with tiny cylinders or other 3D shapes. However, substituting simple lines with parametric shapes in the database is time consuming, expensive and economically not justified. This paper presents an approach for utility management according to which the pipelines and cables are still maintained as lines but, while visualising in 3D, the lines are substituted with cylinders created on the fly. The paper presents the system architecture used and the prototype developed. A final discussion summarises the advantages and disadvantages of the approach and outlines directions for further research.
Introduction

Visualisation of utility networks has always been problematic. Utility networks are usually represented as lines (segments of the networks) and points (connections, valves, etc.), predominantly with their x,y coordinates. Depending on the type of the utility network (water, sewage, telecom, etc.), the depth or (more rarely) the z coordinates (in given points) might be registered. The software (GIS, CAD, AM/FM) for utility maintenance is
typically 2D, i.e. the visualisation of all the elements of the networks is on 2D maps (Fig. 1). Such a visualisation usually serves the needs of a company (or state authority) that is responsible for a particular network, but can result in misinterpretation when provided to third parties. Various factors contribute to confusion and misinterpretation of the information on 2D maps. Firstly, the major trace of pipelines or cables per network is mostly the same, i.e. under the streets, which results in overlapping lines on the map. To avoid this overlap, many companies offset the multiple pipelines to increase the readability of the map. Such an approach, however, could mislead unfamiliar users. Secondly, the traces of the different utility networks also overlap. Colour and depth (depicted near a segment) of a particular pipe or cable are often the only parameters to distinguish between different networks. Integrating several networks on one map is almost an impossible task. Thirdly, a large number of important elements of the networks (such as valves, connections) are given with symbols, which might be challenging to interpret for non-specialists and even for some of the less-qualified field workers. Finally, some of the networks (e.g. sewage) contain a large number of vertical elements, whose visualisation on the 2D maps is only as points. Explanations about the vertical elements are often not included in the maps, relying instead on the on-site experience of field workers.
Fig. 1. Visualization of Utilities on a 2D map
These drawbacks of 2D visualization of utilities are not new. However, the rapidly increased utilization of underground space by utility companies requires more extended knowledge about the position of underground utility networks than ever before. The intensive expansion and modernization of cities (involving re-construction of streets, buildings etc.) needs reliable information about existing infrastructures. Recent investigations (Roberts et al 2002) have revealed an increased number of accidents of various ranges and scales. It is clear to many governments that a 'centralized management' of utilities is the only way to improve the knowledge on the un-
derground infrastructure (Chong 2006, Rei et al 2002, Penninga and Oosterom 2006). Among all the aspects related to a centralized registration, such as appropriate system architecture, type of information to be stored, legislation, etc., is also the visualization approach. 3D visualization of utilities is considered by many (Zlatanova 2004, Du 2005, Chong 2006) to be able to solve many of the drawbacks mentioned above. 3D visualization of pipelines is a necessary trend in the development of urban pipe and cable systems, because it can clearly express the position and spatial relationships of all pipelines. Arbitrary displays of pipelines from any viewpoint and/or from any place (also for profiles) can be created. In such a way, blind cutting and accidental damaging of pipelines can be considerably avoided. Particularly in the case of crisis recovery after unexpected accidents or natural disasters, e.g. fires, gas leakage, anti-terrorism operations, flooding and earthquakes, 3D visualization may provide vivid graphics for quick decision-making, in order to save precious time and to avoid loss of life and economic damage as much as possible. Research is emerging aiming at improved utility visualization. Roberts et al 2002 suggest an augmented reality system for 3D visualization of utilities (showing their position on the surface with attached depth information). Peng et al 2002 discuss profile creation from a utility model to maintain the pipes and the lines with their 3D coordinates. He et al 2002 present a formal approach for an underground utility system. This paper concentrates on 3D visualization created on the fly from utility records maintained in a DBMS. The next section discusses possibilities for realizing such a concept and gives a motivation for the selected system architecture. Sections 3 and 4 elaborate on the developments in detail. The last section provides an extended analysis of the obtained results and outlines directions for further research.
System Architecture for management and 3D visualization of utilities

Currently, there are numerous approaches to implement 3D visualization of pipelines. First, it could be done purely for the purpose of visualization, i.e. one can execute algorithms or functions of CAD (Computer-Aided Design) to draw 3D figures with OpenGL or Direct3D (e.g. Yong 2003). However, this may not fully make use of semantic (attribute) information. The second approach is based on CAD software packages for modelling, visualization and, quite often, some network analysis specific
for a particular utility network. Such systems, known as automated mapping and facility management (AM/FM), were extensively developed in the early 80's and some of them provided even 3D visualization. Many originally CAD vendors nowadays provide extensions to their software that deal with utilities. These systems provide rich tools for 3D visualization, but only for a specific utility network. The third approach is based on 2D GIS; 3D visualization packages are provided by most of the traditional GIS vendors, such as 3D Analyst for ArcGIS, GeoMedia Terrain, and so on. However, they do not provide a solution for 3D modelling of pipelines. A spatial DBMS is yet another possibility to organize information on utilities. The advantages of a central spatial DBMS are apparent: spatial and attribute data are maintained in one integrated environment (Oosterom et al 2002). Data in a spatial DBMS can currently be accessed by most CAD/GIS systems, which is yet another very promising solution for 3D visualization (Zlatanova et al 2002, Zlatanova and Stoter, 2006). Among all the approaches, this paper gives preference to the last one, motivated by the strong management characteristics of DBMS and the 3D editing/visualisation possibilities of CAD software. On the one hand, there are many convincing reasons to use a spatial DBMS for management of utility data sets: multi-user control on shared data and crash recovery, automatic locks of single objects while using database transactions, advanced database protocol mechanisms to prevent the loss of data, data security, data integrity and operations that comfortably retrieve, insert and update data. On the other hand, CAD systems provide flexible tools for 3D editing, adaptable, user-friendly graphic user interfaces, powerful means for realistic rendering and navigation through 3D models, possibilities to create animations and different views, and export of data in various formats (Breunig and Zlatanova, 2006). Therefore, in our approach a spatial DBMS is used to manage the utility networks (with their 3D coordinates but still organised as lines and points) and CAD software is used for the 3D visualisation, as the 3D shapes are created on the fly.
From 2D to 3D

In this paper we concentrate on the visualization of pipeline networks, since their complexity is somewhat higher compared to cables. However, the assumptions below can easily be extended to cable networks. Compared to other topographic data, pipelines and cables are relatively simple types of data. Pipelines are generally represented by the x,y coordinates of their central lines. Depth or z-coordinate is not a compulsory parameter for all net-
works. For example, the water company Waterleidingbedrijf Amsterdam, responsible for drinking water in Amsterdam, the Netherlands, does not maintain any of them. In contrast, the urban offices in China have records for both depth and z-coordinate. Pipelines are mostly straight lines with a relatively small number of turns. Very often the turns of the pipes are represented in 2D maps with one point. Relatively large turns are currently represented using short straight lines connecting successive interpolation points. The interpolation points of an arc are required to be within a certain precision. The accuracy of points within the pipeline networks is relatively high. The horizontal precision is around ±(5+0.08d) cm, and the vertical precision is within ±(5+0.12d) cm, where d is the depth of the centre of the pipeline. Shape and size of pipelines are relatively consistent, and materials of pipelines are rather limited. Joints of pipelines are almost the same for a single pipeline. The shape of most pipelines is a cylinder, though some of them are ditches. As already mentioned above, the vertical segments in current networks are either not represented or indicated implicitly with symbols and textual information. Accesses from the surface are also defined only by symbols. These properties of pipelines make the visualization of pipelines in 3D quite straightforward and easy. Long straight lines can be replaced with tiny cylinders considering the diameter of the pipe and consequently visualized as 3D objects. The real challenge is the missing information:
• absence of vertical segments,
• visualization of turns and intersections of cylindrical pipes,
• re-construction of rectangular pipes,
• 3D symbols to replace the 2D symbols.

Absence of vertical elements

Originally, the networks contain (in most of the cases) only the horizontal segments. In case of different depths, double depth values are recorded at the end points of the segments. Fig. 2 illustrates the creation of the vertical elements. In the given example, the original pipe consists of only four segments represented by 5 points (Fig. 2, above). These segments are further extended with two more points (6 and 7) and two more corresponding vertical segments (Fig. 2, below). Clearly, if the depth (z-coordinate) is not available it should be measured. Measuring methods to obtain the exact position of the underground utility networks are discussed in Du, 2005.
Fig. 2. Transition from a 2D data set to 3D: creation of vertical segments (above: original pipe with 5 points; below: extended pipe with 7 points)
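A minimal Java sketch of this step is given below, assuming each stored vertex may carry two depth values (the z at which the incoming segment ends and the z at which the outgoing segment starts, mirroring the double depth values mentioned above); an extra point is inserted wherever the two differ. The record names and the data layout are hypothetical and serve only to illustrate the idea.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: inserting vertical segments where consecutive horizontal
// segments meet at the same x,y but at different depths.
public class VerticalSegments {

    record Point(double x, double y, double z) {}

    // Each original vertex may carry two z values: incoming (zIn) and outgoing (zOut).
    record Vertex(double x, double y, double zIn, double zOut) {}

    static List<Point> to3D(List<Vertex> centreLine) {
        List<Point> out = new ArrayList<>();
        for (Vertex v : centreLine) {
            out.add(new Point(v.x(), v.y(), v.zIn()));
            if (v.zIn() != v.zOut()) {                // depth change: add a vertical segment
                out.add(new Point(v.x(), v.y(), v.zOut()));
            }
        }
        return out;
    }

    public static void main(String[] args) {
        // Five original points; the pipe drops 0.5 m between the 3rd and 4th point.
        List<Vertex> pipe = List.of(
                new Vertex(0, 0, -1.0, -1.0), new Vertex(5, 0, -1.0, -1.0),
                new Vertex(9, 0, -1.0, -1.5), new Vertex(14, 0, -1.5, -1.5),
                new Vertex(20, 0, -1.5, -1.5));
        System.out.println(to3D(pipe).size() + " points after adding vertical segments"); // 6
    }
}
```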
Visualization of turns and intersections of cylindrical pipelines

Reconstruction of the shape of cylindrical pipelines is relatively simple. Cylindrical pipelines are practically composed of many cylinders, taking into consideration the value of the diameter and the coordinates obtained from the centre lines. In case the pipe has a variable diameter, the shape to be created is a cone. Since these cylinders are all straight, gaps and superimposition appear at the joint between two segments (Fig. 3). It is apparent that the larger the change in the direction of the pipes, the more visible the discrepancies that appear. The size of the gap may also change with respect to the diameter of the pipe. The best way to avoid such gaps is to use a smooth transition from one pipe to another considering the pipeline radius (Fig. 4). Such a transition can be realized by introducing a new curved segment (e.g. a torus) between the two pipes. However, this approach may require quite significant changes in the original data. The torus should be created with a certain offset depending on the diameter of the pipes and the angle between the central lines. Since the complexity of the computation increases, such a solution might degrade the performance. Another option is to display a sphere at the joints of any successive segments. The radius of the sphere is determined with respect to the radius of the two connecting pipes. An advantage of this approach is that the sphere is the simplest surface and does not depend on the rotation matrix in 3D space (needed to adjust a shape in the 3D scene). Although we have not tested the influence of adding spheres at all connections for very large data sets, we expect that the performance will not suffer. The on-the-fly 3D visualization procedure includes the following steps:
• Query the lines and the points to be visualised in 3D from the DBMS (get the 3D coordinates)
• Query the diameter of the pipe segments
• Construct a cylindrical segment (or a cone element)
• Construct a sphere joint
• Display in the view windows of a CAD
Fig. 3. 3D visualization of cylindrical pipes without connections: (1) 2D planar graphics (red is the central line, green is the angle bisector); (2) 3D visualization

Fig. 4. 3D visualization of cylindrical pipes with smooth transition using a torus: (1) 2D planar graphics (red is the central line, green is the angle bisector)
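The sketch below outlines the geometric bookkeeping of this on-the-fly substitution: one cylinder per centre-line segment (a cone would be used where the diameter varies) and a sphere at every interior joint, its radius taken from the larger of the two connecting pipes. The shape records and the driver method are hypothetical stand-ins for the actual CAD elements created in the front-end; only the construction logic is shown.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the on-the-fly 3D substitution of pipeline centre lines.
public class PipesOnTheFly {

    record P3(double x, double y, double z) {}
    record Cylinder(P3 from, P3 to, double radius) {}
    record Sphere(P3 centre, double radius) {}

    static List<Object> buildShapes(List<P3> centreLine, List<Double> segmentDiameters) {
        List<Object> shapes = new ArrayList<>();
        for (int i = 0; i < centreLine.size() - 1; i++) {
            double r = segmentDiameters.get(i) / 2.0;
            shapes.add(new Cylinder(centreLine.get(i), centreLine.get(i + 1), r));
            if (i > 0) {
                // Sphere radius follows the larger of the two connecting pipes, hiding the gap.
                double rPrev = segmentDiameters.get(i - 1) / 2.0;
                shapes.add(new Sphere(centreLine.get(i), Math.max(r, rPrev)));
            }
        }
        return shapes;
    }

    public static void main(String[] args) {
        List<P3> line = List.of(new P3(0, 0, -1), new P3(10, 0, -1), new P3(14, 5, -1.2));
        List<Double> diameters = List.of(0.3, 0.3);
        System.out.println(buildShapes(line, diameters).size() + " shapes"); // 2 cylinders + 1 sphere
    }
}
```

The sphere-at-the-joint choice keeps the computation simple because, unlike a torus, the sphere needs no rotation to fit the 3D scene.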
3D visualization of rectangular pipes

Rectangular pipes are mainly ditches and those containing cables and groups of cables of telecom and electricity companies. The containers are
defined by the Minimum Bounding Cuboid (MBC) that minimally encloses the cables. 3D visualization of rectangular pipelines is carried out by simulating a hexahedron, which is constructed from eight vertices and four faces. The rectangular shape obeys two restrictions: planarity and verticality. In order to keep the visual impression undisturbed, the faces (composed of four vertices) are considered planar. Having in mind the shape of rectangular pipelines in reality, the two side faces (right- and left-hand) are restricted to be vertical. The computation of the vertices of the hexahedron is the critical step in the visualization of rectangular pipelines. This paper proposes a slightly simplified computational approach. The coordinates of the vertices are determined by the coordinates of the central line and the size of the pipeline. Firstly, a vertical section is made through the central line. In the vertical section, two lines parallel to the central line are constructed, one on each side. The distance to the central line is half of the pipeline's size, see Fig. 5. The cross points of the parallel lines with the transects at the two ends of segment AB are denoted as points a1, a2, b3 and b4. However, there may be 4 points around one central point of the pipeline, i.e. b1, b2, b3 and b4 at point B. The coordinates of a1, a2, b3 and b4 can then be easily computed (see Du, 2005).
Fig. 5. Computation of rectangular pipes
It should be noticed that only if the inclination angles of the forward and following segments are equal, point b1 and point b3, and points b2 and b4, are the same points. Otherwise there are always 4 points (only Z is different) at any middle vertex of the central line of the pipeline. This means that there is a sudden change in the heights. Because the angles are rather small, we have decided to consider these points equal. Point b1 and point b3 are moved to the location of point b5, and points b2 and b4 are moved to the location of point b6. Applying this simplification, we ignore the small gaps between b1-b3 and b2-b4 to ensure the 3D visualization. The disadvantage is that the height of the pipelines may be changed, though the change is rather small. We have investigated all possible changes in the direction of the pipes and we have concluded that there are four distinct cases possible (Fig. 6):
• Vertices of the beginning (or ending) pipeline, see location A,
• Vertical middle vertices (only Z is different), location BC,
• Middle vertices, see location D,
• Vertical ending vertices, see location EF.
It should be noticed that the direction of the consequent segment (up or down) is not of importance. Cases A and EF differ from D and BC respectively in the closing faces. Detailed formulas regarding the computations in each particular case are given in Du, 2005. Fig. 7 illustrates the obtained results.
Fig. 6. Four different cases of pipe changes
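As a plan-view illustration of the corner-point computation described above, the hypothetical sketch below offsets one centre-line segment A-B by half the ditch width on each side, giving the corner points a1, a2 (at A) and b3, b4 (at B). The vertical-section details and the merging of near-coincident points at direction changes (b5, b6) are deliberately left out; the full formulas are in Du, 2005.

```java
// Hypothetical plan-view sketch of the corner-point computation for rectangular pipes.
public class RectangularPipeCorners {

    record P2(double x, double y) {}

    static P2[] cornersOfSegment(P2 a, P2 b, double width) {
        double dx = b.x() - a.x(), dy = b.y() - a.y();
        double len = Math.hypot(dx, dy);
        // Unit normal to the segment, scaled to half of the pipeline width.
        double nx = -dy / len * width / 2.0, ny = dx / len * width / 2.0;
        return new P2[]{
                new P2(a.x() + nx, a.y() + ny),   // a1
                new P2(a.x() - nx, a.y() - ny),   // a2
                new P2(b.x() + nx, b.y() + ny),   // b3
                new P2(b.x() - nx, b.y() - ny)    // b4
        };
    }

    public static void main(String[] args) {
        P2[] corners = cornersOfSegment(new P2(0, 0), new P2(10, 0), 0.8);
        for (P2 p : corners) System.out.printf("(%.2f, %.2f)%n", p.x(), p.y());
    }
}
```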
3D symbols

Usually, 3D symbols can be easily created in any CAD program, stored in a uniform library and used when necessary. A symbol is made up of a series of graphic elements. Each symbol has an origin, which is defined when the cell is created. For this study, we have created only a few basic 3D symbols, i.e. valve, hydrant, check well, man-hole, hand-hole and control box (Fig. 8). One can update and/or append a symbol at any time. The steps to follow are almost similar for all CAD programs. Below, the procedure to create a symbol library in Microstation is listed:
• Attach a file containing a cell library (if there is no cell library, create a new one).
• Draw the contents of the cell on the desired levels.
• Select all the elements to be included in the cell with the Element Selection tool or fence tool.
• Define the cell origin with the Define Cell Origin tool. The cell origin is the point that is used to position the cell in a design. When a cell is placed in a design, the cell origin will lie on the data point entered to position it.
• Click the Create button in the Cell Library dialog box. Input the name of the cell.
• Repeat the above actions to create other cells.
Fig. 7. 3D visualization of rectangular pipes: beginning (ending), vertical ending, vertical change and other change cases
Implementation in Oracle Spatial and Microstation

The concept for 3D visualisation of pipelines is tested in Oracle Spatial and Microstation. The test is performed on two data sets (from China and the Netherlands). The data for pipelines are organised in SDO_Geometry, the object-relational model of Oracle Spatial. Discussion and examples regarding the SDO_Geometry model are available elsewhere in the literature (e.g. Zlatanova and Stoter, 2006, Zlatanova et al 2002).
The data sets used were prepared (as described in Section 3) to include the central line with its x,y,z coordinates, terrain height (or depth, zt), diameter of pipe (d), and width and height of ditch (d, h). The description of a pipe network can therefore be seen as a function of six parameters per pipe segment, i.e. P = F(x,y,z,zt,d,h)
Fig. 8. Examples of 3D symbols
SDO_Geometry supports only 4D coordinates. This means that besides the coordinates (x,y,z), only one more parameter can be included in the SDO_Geometry object. Since the terrain height is the most important value to obtain the depth, we have included it in SDO_Geometry as the fourth coordinate. It is also used for preparing vertical profiles and calculating insertion points within pipelines. The advantage of this representation, i.e. terrain height instead of depth value, is that pipes above the ground can also be incorporated in the model. For example, a pipeline is underground if (zt-z) is positive; otherwise, the pipeline is above ground. The information about the pipelines is stored in two tables, i.e. the 'line table' and the 'point table'. The content of the line table follows:
LINE TABLE
Column Name   Datatype        Description
mslink        NUMBER(8)       PRIMARY KEY
start_id      NUMBER(8)       Start Node
end_id        NUMBER(8)       End Node
MATERIAL      VARCHAR(8)      Material
diameter      NUMBER(6,3)     Diameter (width of ditch)
tall          NUMBER(6,3)     Height of ditch
Pressure      VARCHAR(8)      Pressure (gas, electricity)
Cabnum        NUMBER          Cable number (electricity)
Sumhole       NUMBER          Total holes (telecom)
Usedhole      NUMBER          Used holes (telecom)
bdate         VARCHAR(7)      Build date
lshape        SDO_GEOMETRY    Gtype = 4002 (pipeline segment, 4D)
Clearly, the data contained in the line table are sufficient to visualize the pipelines in 3D and represent their location (both with respect to other networks and the terrain). However, some points contain attribute information, which cannot be assigned to the point if it is not a separate SDO_Geometry data type. Therefore, those points that have specific attribute information are organized in a separate table. Practically, all the end points of a pipeline are represented by a 3D symbol and therefore the coordinates (x,y,z) are still stored in the point table. Additional information that helps to create the 3D symbol, i.e. height of ground, height of pipeline bottom and azimuth, is also included.
POINT TABLE
Column Name   Datatype        Description
Mslink        NUMBER          ID of point
Component     VARCHAR2(6)     Attachment or Node
Top_h         NUMBER(8,3)     Height/depth relative to ground
Bot_h         NUMBER(8,3)     Bottom depth of pipeline
Azimuth       NUMBER(8,6)     Direction of symbol
Pshape        SDO_Geometry    x,y,z coordinates
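The plain Java sketch below mirrors the storage convention just described: each pipe segment is held as a 4D line string whose ordinates interleave (x, y, z, zt), with the terrain height zt as the fourth coordinate, and the depth follows as zt - z (positive meaning underground). The record and method names are hypothetical; in the actual implementation these ordinates live in an Oracle Spatial SDO_GEOMETRY of GTYPE 4002 rather than in a Java array.

```java
// Hedged sketch of the 4D storage convention: (x, y, z, zt) per vertex, depth = zt - z.
public class PipeSegmentStorage {

    record Vertex4D(double x, double y, double z, double zt) {
        double depth()        { return zt - z; }      // positive: below the terrain surface
        boolean underground() { return depth() > 0; }
    }

    // Flatten the vertices into the interleaved ordinate array used by the 4D line string.
    static double[] toOrdinates(Vertex4D... vertices) {
        double[] ords = new double[vertices.length * 4];
        int i = 0;
        for (Vertex4D v : vertices) {
            ords[i++] = v.x(); ords[i++] = v.y(); ords[i++] = v.z(); ords[i++] = v.zt();
        }
        return ords;
    }

    public static void main(String[] args) {
        Vertex4D start = new Vertex4D(100.0, 200.0, -1.2, 0.5);
        Vertex4D end   = new Vertex4D(110.0, 200.0, -1.4, 0.6);
        System.out.println("Depth at start: " + start.depth() + " m, underground: "
                + start.underground());
        System.out.println("Ordinate values: " + toOrdinates(start, end).length); // 8
    }
}
```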
The Oracle Spatial data types can be readily accessed and visualised in Microstation; however, they would be visualised as they are stored in the database, i.e. as simple lines and points. Therefore, several new programs had to be developed within the Microstation programming environment for the visualisation on the fly. There are several development environments, such as MDL (Microstation Development Language), JMDL (Java edition of
MDL) and VBA (Visual Basic for Applications), available in Microstation. We have selected JMDL. Although JMDL is an extension of Java within Microstation, there is a slight difference between JMDL and Java. For example, JMDL supports the Boolean data type and all of the Java integral and floating-point types. In addition, JMDL supports unsigned types. Java operates on only two kinds of types: primitive types and reference types. JMDL operates on a third kind, the complex type. The JMDL complex types are similar to structures and static C arrays, both of which are JMDL extensions. The Java implementation included in JMDL is completely standard. In the initial releases of Microstation/J, JMDL can be used to run existing Java applications, while standard Java interpreters are not able to interpret JMDL byte code. This is due to the built-in extensions made to JMDL. Java can also be used to develop JMDL applications that access Microstation via the DGN Package. The DGN Package is a set of classes that allow Java/JMDL programmers to query and manipulate design (DGN) files. The methods contained in the DGN Package classes are not simply MDL functions wrapped up into JMDL methods. It is better to have classes that take advantage of the techniques that an object-oriented language provides, like inheritance and message passing between objects. These classes are in the com.bentley.dgn package, such as ConeElement, ShapeElement, CellElement, etc. Further implementation details can be found in Du, 2005.
Table 1: Number of records in the original data, the 3D representation and using SDO_Geometry

Pipeline Type    Original data (RDBMS)    Pipelines data (3D)    Oracle Spatial (SDO_Geometry)
                 Point      Segment       Point     Segment      Point     Line
Communication    788        768           968       917          422       407
Electricity      1056       1046          1095      1071         383       513
Gas              1937       1917          1939      1919         758       1293
Waste            1241       1218          1592      1413         1174      386
Water            4061       4076          4084      4089         1868      2260
Tests are performed with several pipeline networks (Table 1). The table also indicates the number of records in the original data sets (only horizontal segments), the data set extended with vertical elements, and the organisation using the SDO_Geometry data type. The numbers in the second column illustrate the increased number of records due to the newly introduced vertical segments. The third column contains the number of records after using the Oracle Spatial data type LINE. Clearly, the number of records is significantly reduced. Additionally, 3D building data and terrain data are organized in the same DBMS following the 3D data organization approaches described in Zlatanova and Stoter 2006. While the pipeline data are displayed using the developed programs, the building and surface data are extracted from the database using the standard means of Bentley GeoGraphics (e.g. Spatial Viewer). Fig. 9 portrays some of the obtained results. For clarity the terrain surface is not visualized in the snapshot.
Fig. 9. Examples of 3D visualization of pipelines
Conclusions and further research

This paper presented an approach for data organisation and 3D visualisation on the fly. All the experiments and tests have clearly shown that 3D visualization of pipelines is much more appealing compared to 2D visualization. Relationships between pipes and other objects (buildings, roads, surface data) are well visible in the 3D scene. The entire pipeline network is also much better represented and can be visually investigated. The inclusion of 3D symbols to show pipeline attachments (e.g. valves, hydrants, wells) helps to provide additional information on the particular pipelines, including their function, direction of flow, connectivity etc. Besides the improved visualisation, the reported approach has the following advantages: The existing pipeline and cable data sets do not need to be changed, apart from the creation of vertical segments (which can be automated and checked for consistency). Clearly, if existing pipeline networks do not have a z coordinate, additional measurements to determine the z-coordinate (depth) are unavoidable. Having preserved the original centre lines (stored in SDO_Geometry), editing of pipelines remains the same. If one or more segments have to be modified, the user can extract from the DBMS only the centre lines and apply the usual procedure for editing using the preferred software, i.e. CAD or GIS. The DBMS storage of pipelines allows for performing spatial analysis, within the network and between other data sets (e.g. cadastre parcels), provided other data sets are also available in the DBMS. As mentioned elsewhere, the spatial functions and operations in Oracle Spatial are currently only 2D, but the operations accept the 3D/4D coordinates. This means that a large number of queries such as 'which pipelines go under parcel 11' or 'which telecom cables are within 100 meters from my house' can still be performed. The use of the Oracle Spatial data type LINE has led to a reduction of the number of records compared to the original data sets. Although the implementation is completed for Oracle Spatial and Microstation, the developed algorithms can be easily adapted for any front-end and therefore readily used for other combinations of DBMS and CAD/GIS software. The selected system architecture, i.e. DBMS for storage of utility networks and a front-end (CAD or GIS) for visualisation and editing, can be considered by many local and national authorities as a promising option for centralised registration. The spatial schema in the DBMS can be tuned with respect to the legislation of a particular country.
Despite the promising results, various questions still have to be investigated. Some specific functions for pipelines, i.e. analysis of sections, profiles and intersections, as well as analysis of possibly malfunctioning pipes and connections, have to be developed in the software used as a front-end. Additional program developments are desirable to refine the 3D visualization, such as torus, rectangular cylinder and sphere elements. A further study is needed to investigate which functions are worth implementing at DBMS level to manage different networks efficiently and flexibly.
Acknowledgements

The authors express their gratitude to the China Scholarship Council, Bentley Inc. and GDMC, TU Delft for making these developments possible.
References

Breunig, M. and S. Zlatanova, 2006, 3D Geo-DBMS, Chapter 4 in S. Zlatanova & D. Prosperi (Eds.) Large-scale 3D Data Integration: challenges and opportunities, Taylor & Francis, A CRC Press book, pp. 88-113
Chong, S.C., 2006, Registration of Wayleave (cable and pipeline) into the Dutch cadastre, Case study report, available at http://www.gdmc.nl/publications, 56 p.
Du, Y., 2005, 3D visualization of urban pipelines, Case study report, available at http://www.gdmc.nl/publications, 44 p.
He, J., L. Li and M. Deng, 2002, The design of urban underground pipeline GIS based on UML flexibility software developing model, IAPRS, Vol. XXXIV, Part 2, Com. II, Xi'an, Aug. 20-23, China, pp. 169-170
Oosterom, P. van, J. Stoter, W. Quak and S. Zlatanova, 2002, The balance between geometry and topology, in: Advances in Spatial Data Handling, 10th International Symposium on Spatial Data Handling, D. Richardson and P. van Oosterom (Eds.), Springer-Verlag, Berlin, pp. 209-224
Penninga, F. and P. van Oosterom, 2006, Kabel en leidingnetwerken in de kadastrale registratie, GISt rapport No 42, available at http://www.gdmc.nl/publications, 36 p. (in Dutch)
Peng, W., X. She, H. Xue and W. Zhang, 2002, Integrating modeling for profile analysis of urban underground pipelines based on 3D GIS, IAPRS, Vol. XXXIV, Part 2, Com. II, Xi'an, Aug. 20-23, China, pp. 379-382
Roberts, G. et al., 2002, Look Beneath the Surface with Augmented Reality, GPS World, February, pp. 14-20
Yong, Y., 2003, Research On 3D Visualization Of Underground Pipeline, Transaction of Wuhan University, edition of science information, June 2003
Zlatanova, S., 2004, Workshop on 3D city modelling, UDMS 2004, textbook, available at http://www.gdmc.nl/publications, 52 p.
Zlatanova, S. and J. Stoter, 2006, The role of DBMS in the new generation GIS architecture, Chapter 8 in S. Rana & J. Sharma (Eds.) Frontiers of Geographic Information Technology, Springer, pp. 155-180
Zlatanova, S., A.A. Rahman and M. Pilouk, 2002, Trends in 3D GIS development, in: Journal of Geospatial Engineering, Vol. 4, No. 2, pp. 1-10
Developing Malaysian 3D Cadastre System - preliminary findings

Muhammad Imzan Hassan¹, Alias Abdul Rahman¹ and Jantien Stoter²

¹Department of Geoinformatics, Faculty of Geoinformation Science and Engineering, Universiti Teknologi Malaysia, 81310 Skudai, Johor, Malaysia. Email: {imzan.alias}@fksg.utm.my
²Department of GeoInformation Processing, International Institute for Geo-Information Science and Earth Observation (ITC), Enschede, The Netherlands. Email: [email protected]
Abstract

2D cadastre has been practised in Malaysia for decades and at the moment it provides vital land and property information, such as ownership of parcels, for most parts of the country. The current 2D cadastre system is regularly updated both in rural and in urban areas by the national survey agency (NMA), i.e. the Department of Survey and Mapping (JUPEM). The NMA is the agency responsible for survey and mapping, dealing with the technical part of producing the cadastral map. It has a very well designed system called the Cadastral Database Management System (CDMS) with a Digital Cadastral Database (DCDB). On the other hand, the ownership information (who owns what) comes from the Land and Mines Offices (PTG), and all this information is considered two-dimensional (2D) in nature. Typical information related to parcels and property ownership, like parcel numbers or IDs, parcel geometry and dimensions, etc., is digitally available. The Land Office has its own registration system called the Computerized Land Registration System (CLRS). Unfortunately, this valuable information is handled by two different agencies in two non-integrated systems. It can be realized that this information is still in a 2D system and most of the existing 2D cadastre systems are hardly able to provide more realis-
tic and meaningful information to users. With the rapid development of hardware, software and knowledge, we believe Malaysia is ready to develop a 3D cadastre system to solve the problems related to the complexity of cadastral registration of 3D property situations. For Malaysia, a hybrid 3D cadastre will be discussed, as this model was proposed by previous research work, i.e. Stoter (2004). However, the designed system should be realistic and practical to apply in the Malaysian cadastral environment, both in terms of regulation and historically. The aim of this paper is to discuss some initial literature review of the subject and the problems with respect to the current 2D cadastre system and 2D registration system in Malaysia. The early findings on the problems will serve as a starting point in developing and addressing a much bigger problem for a PhD research work.

Key words: 3D registration system, 3D database, 3D cadastre.
1 Motivation

Traditionally, cadastral registration systems are parcel-based systems, since the individualization of land started with a subdivision of land using 2D boundaries. A country is divided into parcels, where rights and limited rights on property are established and registered on 2D land parcels. This 2D cadastral concept has served its purposes for decades. 2D cadastre mapping has been practised in Malaysia and at the moment it provides vital land and property information, such as ownership of parcels, for most parts of the country. The current 2D cadastre system is regularly updated both in rural and in urban areas by the national survey agency (NMA), i.e. the Department of Survey and Mapping (JUPEM). Typical information related to parcels and property ownership, like parcel numbers or IDs, parcel geometry and dimensions, etc., is digitally available. The ownership information (who owns what) comes from the Land Offices, and all this information is considered two-dimensional (2D) in nature. Obviously, the current cadastral information serves most of the users' needs. However, in the very near future this 2D information may not be able to serve more advanced situations, for example in large city centres. One way to deal with this situation is by having a more advanced cadastral system like a 3D cadastre. This means we need to extend the 2D system into a three-dimensional (3D) one (Abdul-Rahman et al., 2005). Stoter (2004) described some of the situations where a 2D cadastre faces difficulties in presenting information for such 3D situations. These include pressure on land in urban areas and especially their business cen-
tres, which have led to overlapping and interlocking constructions. Underground infrastructure like tunnels, cables and pipelines, underground parking places and shopping malls does not only involve a single parcel but is covered by a series of parcels (on the surface). The existence of this cross-boundary infrastructure raises the question about the property rights involved and how to register these rights. Even when the creation of property rights to match these developments is available within existing legislation, describing and depicting them in the cadastral registration poses a challenge. The challenge is how to register overlapping and interlocking constructions when projected on the surface in a cadastral registration that registers information on 2D parcels. Although property has been located on top of other property for many years, it is only recently that the question has been raised as to whether cadastral registration should be extended into the third dimension. The growing interest in 3D cadastral registration is caused by a number of factors such as: a considerable increase in (private) property values; the number of tunnels, cables and pipelines (water, electricity, sewage, telephone, TV cables), underground parking places, shopping malls, buildings above roads/railways and other cases of multilevel buildings having grown considerably in the last forty years; and an upcoming 3D approach in other domains (3D GIS (Geographical Information Systems), 3D planning) which makes a 3D approach to cadastral registration technologically realizable (Pilouk, 1996; Zlatanova, 2004; Abdul-Rahman, 2000). This paper discusses some motivation factors in this section and describes current cadastre practices in Malaysia in Section 2. We discuss the possible cadastre models in Section 3 and the hybrid approach in Section 4. Finally, the remarks and further works are highlighted.
2 Current Situation in Malaysia
The cadastre system is being used in some government agencies to support the valuation and taxation system. In Malaysia, there are two organizations that are responsible for managing and maintaining the cadastre system. The Department of Survey and Mapping Malaysia (i.e. JUPEM) is responsible for preparing, producing and managing the spatial data, including the surveying and mapping of the cadastre parcels. The non-spatial data (i.e. the registration) is the responsibility of the Land Office (i.e. the PTG). Hence, the cadastre system is being managed by two different organizations.
2.1 The National Mapping Agency (NMA)

Jabatan Ukur dan Pemetaan Malaysia (JUPEM) is the National Mapping Agency (NMA) for Malaysia and the custodian of the spatial data. The increasing demand for digital cadastre data from users (government and private agencies) led JUPEM to develop this centralized database.
Digital Cadastral Database (DCDB)

The development of the DCDB by JUPEM is to cater for the needs of cadastre survey data management, and with the DCDB the cadastre data management is more effective. The data in the DCDB can be integrated with the data from the Computerized Land Registration System (CLRS) at the Land and Mines Office (PTG). Currently, the DCDB stores lot/parcel spatial data in two representations, raster and vector format. Previously, JUPEM stored the spatial cadastre data in analog format (paper format). For the DCDB the entire set of paper maps has been scanned and converted into digital format.
DCDB Data Structure

The DCDB has been developed using a relational database. It has three layers, namely:
• Cadastre Lot Layer (LOT.shp)
• Boundary Line Layer (DYH.shp)
• Boundary Stone Layer (STONE.shp)
Figure 1 shows the relationships between entities and attributes in the DCDB data model. Each Lot has its Boundaries and each Boundary is connected to the Stones. The Lot is stored as a polygon feature type, the Boundary is stored as a line feature type, while the Stone is a point feature type. The identification of each lot is determined by the Lot ID called the UPI (Unique Parcel Identifier), where the UPI is defined by the location codes such as the state code, district code, area code, section code and the lot number.
Fig. 1. The entity relationships of the Lot, Boundary, and the Stone.
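A hypothetical Java sketch of the entity relationships just described — a Lot bounded by Boundaries, each Boundary connecting two Stones, and a UPI assembled from the location codes — is given below. The field names, the code values and the separator used in the UPI are illustrative assumptions, not JUPEM's actual formats.

```java
import java.util.List;

// Hedged sketch of the DCDB entities: Lot (polygon), Boundary (line), Stone (point),
// plus a UPI built from state, district, area, section codes and the lot number.
public class DcdbSketch {

    record Stone(double x, double y) {}                       // point feature
    record Boundary(Stone from, Stone to) {}                  // line feature
    record Lot(String upi, List<Boundary> boundaries) {}      // polygon feature

    // Unique Parcel Identifier composed of the location codes (format hypothetical).
    static String upi(String state, String district, String area, String section, String lotNo) {
        return String.join("-", state, district, area, section, lotNo);
    }

    public static void main(String[] args) {
        Stone s1 = new Stone(0, 0), s2 = new Stone(50, 0), s3 = new Stone(50, 40), s4 = new Stone(0, 40);
        Lot lot = new Lot(upi("01", "05", "12", "03", "04567"),
                List.of(new Boundary(s1, s2), new Boundary(s2, s3),
                        new Boundary(s3, s4), new Boundary(s4, s1)));
        System.out.println("Lot " + lot.upi() + " has " + lot.boundaries().size() + " boundaries");
    }
}
```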
2.2 The Land and Mines Office (PTG)

In 1992, PTG introduced the Computerized Land Registration System (CLRS). The purpose of introducing the CLRS was to modernize the manual transactions into a computerized system. The introduction of the CLRS also aims to increase work productivity at the land offices.
Computerized Land Registration System (CLRS)

The CLRS is an online system that operates on a UNIX based server. The main CLRS users are the Land and Mines Office (state) and the Land Office (district). The registration of land can only be done by a registrar of the respective offices, and the land registration information can only be obtained from the local network in the responsible offices.
Fig. 2. Structure of the CLRS configuration: the computer system of the Ministry of Natural Resources and Environment linked to the CLRS at the State Land and Mines Offices and at the District Land Offices
The general structure of the CLRS configuration can be seen in Figure 2. The Ministry of Natural Resources and Environment is the agency that supervises the CLRS. Land can be registered either at the State Land and Mines Offices or at the District Land Offices. The system uses an Oracle database to organize the recorded data. The information recorded in the CLRS database is shown in Table 1.
Table 1: Information in CLRS database

No.   CLRS Information            Remarks
1.    Land title database         Data of ownership, no. of land title, location, type of use, type of ownership and period of use
2.    Land information            Lot number, area, standard sheet no. and certified plan no.
3.    Restriction of ownership    Explicit and implicit restrictions
4.    Record of ownership         Details of landlord and information on dealings
5.    Party of interest           Details of party(s) of interest
6.    Plan of land                Plan of land occupied in form
7.    Land tax                    Yearly tax paid
8.    Dealings of land            Transfer, change of restriction, subdivision, lease, easement, etc.
3 The Possible Models

3D cadastre could be implemented by using three possible approaches, i.e. the full approach, the hybrid approach, and the simple 3D tag approach, as proposed by Stoter (2004).
3.1 Full 3D Cadastre
In a full 3D cadastre, 3D space (the universe) is subdivided into volumes partitioning the 3D space. The 3D parcels form a volume partition (no overlaps or gaps in 3D). Relationships between two 3D parcels may be necessary to take care of, for example, the accessibility of a 3D parcel which is not directly connected to the surface. In a complete implementation of a full 3D cadastre, the only real estate objects that are recognized by the cadastre are 3D parcels (bounded in all dimensions) and the 3D parcels form a complete partition of space. It is no longer possible to entitle persons to infinite parcel columns, defined by surface parcel boundaries, but only to well defined (and totally bounded), surveyed volumes. It requires a total renewal of the cadastre. However, it can also be done with the conversion of the conventional representation of parcels into the third dimension: a parcel is
no longer only defined by the parcel boundary but also by an infinite (or actually indefinite) parcel column that intersects with the surface at the location of the parcel boundary. In this alternative, the cadastral registration of the whole country is converted into 3D. With this approach, a full 3D cadastre still has a strong link with the current cadastral registration: traditional 2D situations (parcels with only one person entitled to it) can be kept largely unchanged. The UML class diagram of this solution is shown in Figure 3.
3.2 Hybrid 3D Cadastre
The hybrid approach is basically a registration of 2D parcels in all cases of real property registration and additional registration of 3D legal space in the case of 3D property units. This approach means preservation of 2D cadastre and the integration of the registration of the situation in 3D by registering 3D situations integrated and being part of the 2D cadastral geographical data set. The 3D representation can be either the volume to which a person is entitled to or a physical object itself. Parcel area: float
Fig. 3. UML class diagram of a full 3D cadastre that supports both infinite parcel columns and volume properties (classes: Parcel with area: float, a GM_Solid 3D object with objectId: int, RightOrRestriction with type: enum, and Person with subjectId: int).
Registration of 3D right-volumes

A 3D right-volume is a 3D representation of the legal space related to a (limited) right (or apartment right) that is established on a parcel and concerns a 3D situation; for example, the right of superficies established for a tunnel refers to a volume below the surface. The landowner is restricted in using the whole parcel column, and the volume that is 'subtracted' from this parcel column is visualized in 3D as a 3D right-volume as part of the cadastral map in a 3D environment. The right-volume is only registered for the person who is entitled to the bounded volume, while the spatial extent of the property of the bare owner can be derived from the registered information.
Fig. 4. UML class diagram of 3D right-volumes.
Rights are still always established (and registered) on surface parcels. A 3D right-volume is extended into 3D ('extruded') by defining the upper and lower limits of the right. The upper and lower limits of 3D right-volumes are initially defined with horizontal planes. This type of registration is sufficient to warn the user that the landowner is restricted in using the whole parcel column. It also gives an indication of the space to which the limited right applies. More precise information (with juridical status) can be obtained from deeds and survey plans archived in the land registration. If it becomes possible in the future to register a real right on only part of a parcel, a 3D right-volume can be defined as a polyhedron located anywhere within a parcel. The first aims of 3D right-volumes are to warn users that something is located above or below the surface and to indicate approximately the space where this 'something' is located. Therefore the quality of the 3D representations should be sufficiently exact for practical use. All the parties involved should agree on the upper and lower levels of the 3D right-volumes. The levels should be laid down precisely in the relevant deeds and survey plans. Based on this information the 3D right-volumes can be generated and inserted in the cadastral registration. The UML class diagram of 3D right-volumes is shown in Figure 4.
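As a simplified illustration of the extrusion described above, the sketch below (hypothetical function name; it assumes, as in the text, that the upper and lower limits are horizontal planes) copies a parcel footprint onto the two limit planes to obtain the right-volume.

# Sketch: extrude a 2D parcel footprint into a 3D right-volume bounded by
# horizontal upper and lower limit planes (heights relative to a datum).
from typing import List, Tuple

Point2D = Tuple[float, float]
Point3D = Tuple[float, float, float]

def extrude_right_volume(footprint: List[Point2D],
                         lower_limit: float,
                         upper_limit: float) -> List[Point3D]:
    """Return the vertices of the right-volume: the footprint copied onto
    the lower and upper limit planes."""
    if upper_limit <= lower_limit:
        raise ValueError("upper limit must lie above lower limit")
    bottom = [(x, y, lower_limit) for x, y in footprint]
    top = [(x, y, upper_limit) for x, y in footprint]
    return bottom + top

# Example: the legal space of a tunnel below a parcel, between -25 m and -15 m.
parcel = [(0.0, 0.0), (40.0, 0.0), (40.0, 30.0), (0.0, 30.0)]
tunnel_volume = extrude_right_volume(parcel, lower_limit=-25.0, upper_limit=-15.0)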
The 3D right-volume is a 3D representation of the right, of which the geometry is maintained in the DBMS as type GM_Solid, a geometry type defined by OGC and ISO. The most basic improvement of the registration of 3D right-volumes compared to the current cadastral registration is that the 3D extent of rights can be visualized in one integrated view in the cadastral map, and not only per parcel in isolated visualizations. Furthermore, the 3D situations can be queried, since the 3D right-volumes are linked to non-spatial information in the cadastral database, in contrast to the (scanned) drawings available in a cadastre containing only tags to 3D situations.

Registration of 3D physical objects

The registration of physical objects is independent of whether rights have been established and registered on the intersecting parcels. The physical objects are added to the cadastral geographical data set for the same purpose as buildings: to link the cadastral registration with representations of reality (i.e. topography) for orientation and reference purposes.
Fig. 5. UML class diagram of 3D physical objects
A physical object is a construction above or below the surface which may cross parcel boundaries. In the case of physical objects, the objects themselves are registered and not the 3D legal space (as in the first alternative). The legal space is the space to which the holder of a physical object wants to have a right in order to secure the property of the object; it is usually larger than the physical extent of the object itself (for example, including a safety zone). The main objective of the registration of physical objects is to reflect the construction itself. A registration of 3D physical objects needs to be organized and maintained, and this registration will become a cadastral task. In the cadastral
registration, spatial as well as non-spatial information on the whole 3D physical object is maintained. A 3D physical object can be queried as a whole. For example, which parcels intersect with (the projection of) a 3D physical object (a spatial query)? Which rights are established on these parcels? Who are the associated persons? The solution of registering 3D physical objects (including geometry in 3D) meets the need of a 3D cadastre to register the constructions themselves, or at least to have the location of physical objects available in the cadastral registration (and included in the cadastral map). A 3D description of physical objects can be used if the cadastral map is available in 2.5D. A limited right still needs to be established on the intersecting parcels, referring to the physical object, to explicitly secure the legal status (the 2D parcel is still the basic entrance for establishing real rights and for the cadastral registration). Since the physical objects are integrated in the cadastral geographical data set, the real situation is reflected much better than in the current cadastral registration. For the registration of 3D physical objects the UML class diagram in Figure 5 applies. The only right that a person can get on a 3D physical object is to become the holder of this object. A 3D physical object is not a specialization of real estate objects: 3D physical objects are maintained in addition to parcels, and parcels are still the basic entity of registration. Surface parcels (defined in 2D or 2.5D) are still always needed to ensure the legal status in 3D. We have described the two possible approaches and a few models for a 3D cadastre; the approaches have positive and negative points. Malaysia could embark on this 3D cadastre system relatively straightforwardly since it has a good 2D cadastre framework. From the foregoing discussion it can be realized that the hybrid approach would certainly be a good one to start with. Experiences from researchers show that this 3D technology in the cadastre has to be embedded with the legal entities before it can be fully implemented and realized.
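The query chain mentioned above could be sketched as follows; the record layout is hypothetical, and the spatial test is reduced to a 2D bounding-box overlap purely for brevity.

# Sketch of the query chain for a registered 3D physical object (hypothetical schema).
def bbox_2d(points):
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return min(xs), min(ys), max(xs), max(ys)

def boxes_overlap(a, b):
    return a[0] <= b[2] and b[0] <= a[2] and a[1] <= b[3] and b[1] <= a[3]

def parcels_intersecting(physical_object, parcels):
    """Spatial query: parcels whose footprint overlaps the object's 2D projection."""
    obj_box = bbox_2d([(x, y) for x, y, _z in physical_object["geometry"]])
    return [p for p in parcels if boxes_overlap(obj_box, bbox_2d(p["boundary"]))]

def rights_and_holders(parcel_ids, rights):
    """Attribute query: rights established on the given parcels and their holders."""
    return [(r["parcel_id"], r["type"], r["holder"]) for r in rights
            if r["parcel_id"] in parcel_ids]

# Example data (illustrative only).
tunnel = {"id": "OBJ-1", "geometry": [(5, 5, -20), (60, 5, -20), (60, 8, -12), (5, 8, -12)]}
parcels = [{"id": "LOT-100", "boundary": [(0, 0), (40, 0), (40, 30), (0, 30)]},
           {"id": "LOT-101", "boundary": [(40, 0), (80, 0), (80, 30), (40, 30)]}]
rights = [{"parcel_id": "LOT-100", "type": "right of superficies", "holder": "Transit Authority"},
          {"parcel_id": "LOT-101", "type": "right of superficies", "holder": "Transit Authority"}]

hits = parcels_intersecting(tunnel, parcels)
print(rights_and_holders({p["id"] for p in hits}, rights))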
4 The Hybrid Approach for a Malaysian 3D Cadastre

A good cadastre system needs full institutional coordination. In the Torrens system, land registration needs a plan or map with all the lot/parcel geometry information. The registration needs the map as legal proof under the rules and regulations. On this basis, it is important to have understanding and coordination between the two existing systems, the DCDB and the CLRS. The land registration and other
cadastral works involve several processes such as ownership registration, cadastral survey and others. It would be good to have one organization that deals with the cadastre system in a country. However, in reality, e.g. in Malaysia, this idea seems difficult to realize, due to historical reasons. Nevertheless, having different organizations handling the cadastre data is not a constraint on having a full cadastre system with separate databases, where the CLRS database stores all the land registration attributes while the DCDB stores the cadastre map with the geometry information. Both agencies are the main players and the responsible parties that need to integrate and coordinate with each other to achieve an integrated, full cadastre system in Malaysia. This is where the idea of the hybrid approach for developing a 3D cadastre system in Malaysia comes in. Malaysia could embark on this 3D cadastre system relatively straightforwardly since it has a good and well-defined 2D cadastre framework and other emerging cadastre-related projects. It would be wise to incorporate the 3D aspect into the ongoing 2D cadastre process.
[Figure: proposed CLRS database and DCDB integration, showing PTG, SMA and the Land Offices at state level, JUPEM (NMA) at national level, CLRS and CDMS, and a 3D lot/parcel database at the integration level.]
sentation, so this 3D database should be managed by the NMA. From the foregoing discussion it can be realized that the hybrid approach would certainly be a good starting point for a 3D cadastre in Malaysia.
5 Remarks and Further Work
From the foregoing discussions it can be realized that most existing 2D cadastre systems are hardly able to provide more realistic information to users such as property owners. Obviously, such a 2D system inevitably needs to be upgraded to cater for the next generation of the information community, including the cadastre community in Malaysia. Attempts to address the 3D cadastre have been carried out in several countries, such as the Netherlands, Australia, Sweden, Finland, British Columbia (Canada) and Israel, but most of these works are still not operational and still need considerable research effort before they can be fully implemented and realized. As far as Malaysia is concerned, the output of this research is clearly sought after by the National Mapping Agency (NMA), i.e. JUPEM. The aspect of 3D property registration, as well as technical aspects such as 3D modelling and 3D geo-databasing, are the focus of this research. This research also attempts to address the integration issue, i.e. from the existing 2D system to a 3D system (the hybrid approach). Detailed experiments on the mentioned problems will be carried out in the very near future, and a prototype 3D cadastre that works with the existing Malaysian cadastre framework is certainly our next task.
References

Abdul-Rahman, A. and J.E. Stoter (2005). 3D Cadaster in Malaysia - how to realize it? 7th Surveyors Congress, Petaling Jaya, Selangor, Malaysia.
Abdul-Rahman, A., J.E. Stoter and A.F. Nordin (2005). Towards 3D Cadaster in Malaysia. International Symposium and Exhibition on Geoinformation (ISG 2005), Penang, Malaysia.
FIG (1995). The FIG Statement on the Cadastre. Technical Report Publication 11, Fédération Internationale des Géomètres, Commission 7, 1995.
Genggatharan, M. (2005). Conceptual Model for Integration of Cadastral Data Management System (CDMS) and Computer Land Registration System (CLRS). MSc Thesis, Universiti Teknologi Malaysia. (In Malay)
JUPEM, Manual Pengguna Sistem Pengurusan Pangkalan Data Kadaster. Department of Survey and Mapping Malaysia. (In Malay)
KPTG (1997). Manual Sistem Pendaftaran Tanah Berkomputer: Latihan kepada Pendaftar. Department of Director General of Lands and Mines. (In Malay)
KPTG, Manual Sistem Pendaftaran Tanah Berkomputer: Bidang Liputan SPTB. Department of Director General of Lands and Mines. (In Malay)
National Land Code (1965).
Omar, A.H. (2003). The Development of a Coordinated Cadastral System for Peninsular Malaysia. MSc Thesis, Universiti Teknologi Malaysia.
Stoter, J.E., M.A. Salzmann, P.J.M. van Oosterom and P. van der Molen (2002). Towards a 3D Cadastre. FIG XXII/ACSM-ASPRS, 19-26 April 2002, Washington, USA.
Stoter, J.E. (2004). 3D Cadastre. PhD Thesis, International Institute for Geo-Information Science and Earth Observation (ITC), the Netherlands.
Developing 3D Registration System for 3D Cadastre

Mohd Hasif Ahmad Nasruddin and Alias Abdul Rahman

Department of Geoinformatics, Faculty of Geoinformation Science and Engineering, Universiti Teknologi Malaysia, 81310 UTM Skudai, Johor, Malaysia. Email: {hasif, alias}@fksg.utm.my
Abstract

The Malaysian cadastral system was established to monitor land matters, especially land ownership. The system is set up to ensure consistency in cadastral information that comes from cadastral survey and cadastral land registration. In general, cadastral survey and cadastral land registration are managed by the Department of Survey and Mapping Malaysia (DSMM) and the Land Offices (LO) respectively. The system for land registration is known as the Cadastral Data Management System (CDMS), which is controlled by DSMM, whereas the Computerized Land Registration System (CLRS) is administered by the Land Office. A cadastre is normally a parcel-based and up-to-date land information system containing a record of interests in land (e.g. rights, restrictions and responsibilities). Currently, the system is based on 2D space. A cadastral object can be a complete land parcel, a parcel of storey houses (strata), e.g. apartments or residential flats, or a parcel below the surface (stratum). Demands from land owners as well as property owners show that it is insufficient to register cadastral parcels based on 2D space; real-estate property is actually in 3D. Current 3D GIS technology could be used to develop a registration system for a 3D cadastre and, eventually, a solution for complex 3D cadastre objects or situations. Therefore, the new cadastral system should reflect the existence of real-world objects - 3D physical objects - each having its own legal rights. 3D technologies make it possible to establish a 3D cadastre. This paper discusses the development of a 3D registration system for cadastre objects in 3D. The main objective is to look into the possibility of enhancing the current systems practiced by DSMM and LO towards 3D cadastre registration. In other words, this research attempts to address one of the aspects of the 3D cadastre, i.e. how to register 3D objects and properties for a 3D cadastre.
Key words: 3D properties, registration system and 3D Cadastre
1. Introduction

A cadastre is normally a parcel-based and up-to-date land information system containing a record of interests in land, e.g. rights, restrictions and responsibilities (FIG 1995). It usually includes a geometric description of land parcels linked to other records describing the nature of the interests, the ownership or control of those interests, and often the value of the parcel and its improvements (Stoter 2002). Each country has its own agency responsible for monitoring all matters pertaining to the cadastre. In Malaysia, this responsibility is held by the Department of Survey and Mapping Malaysia (DSMM) and the Land Offices (LO). The cadastral system is used to manage the land registration system related to cadastral objects, also known as real-estate objects. In Malaysia, a cadastral object can generally be a complete parcel of land, a parcel of storey houses (strata), e.g. apartments or flat houses, or a parcel below the surface (stratum). The above-mentioned objects need to be registered as legal objects. Currently, the system is based on a planar map, i.e. 2D space. In the real world today, the 2D system appears insufficient to cater for the advanced development of the country, especially information regarding 3D spatial objects. In major urban centres (and especially their business districts), land use is becoming so intense that different types of land use and properties are located in complex 3D situations. This pushes the practicality of the existing 2D cadastral parcels to the limit (Stoter 2002). Therefore, it is vital to develop and implement a 3D cadastre system to solve and address some of these problems. This research paper attempts to study one of the aspects of the 3D cadastre, that is, how to register 3D objects and properties for a 3D cadastre. The foregoing discussion shows that several aspects of the 3D cadastre problem domain need to be investigated. 3D situations need a better system, and existing work clearly reveals the drawbacks of 2D registration for such real-world situations (Abdul-Rahman and Stoter, 2005; Abdul-Rahman et al., 2005). In major urban areas the 2D concept is insufficient to solve problems with 3D situations such as:
• constructions on top of each other, e.g. highway fly-overs,
• infrastructure above and under the ground, e.g. tunnels for the LRT, PUTRA, etc., and
• the increasing number of cables and pipes, etc.
These 3D spatial objects do not, in general, correspond to legal objects that are legally registered and defined. The implementation of a 3D cadastre should be a priority so that the land information can take 3D information into account for registration. Information on the 3D objects, such as location, geometry, function and legal aspects, can then be maintained and made accessible in the proposed cadastral system. The study of the 3D cadastre in Malaysia is still at an early stage. It is important to develop a land information system that caters for rapid changes in development so that the cadastre database remains up to date. This paper describes research conducted on developing a 3D registration system for a 3D cadastre. Section 1 explains the cadastral administration in Malaysia and the need for registering 3D objects. Section 2 introduces the current registration practice; its sub-sections explain the Computerized Land Registration System (CLRS), the system architecture and the registration process, and then continue with 3D cadastre aspects. Section 3 explains the current registration with a hybrid cadastre, the conceptual and logical models of CLRS and CDMS, and the proposed concept of CLRS for 3D objects. Finally, the discussion is concluded and future work is outlined.
2. Current Registration Practice

In Malaysia, the cadastral registration system is governed under the National Land Code 1965 (NLC 1965). The system can be divided into two components, namely land registration and cadastral registration. The land registration and cadastral registration are managed by the Department of Survey and Mapping Malaysia (DSMM) and the Land Offices (LO) respectively. The system for land registration is known as the Cadastral Data Management System (CDMS), which is controlled by DSMM, whereas the Computerized Land Registration System (CLRS) is administered by the Land Office. The CDMS is a system that stores digital land registration parcels, i.e. cadastral maps, whereas the CLRS stores cadastral registration attributes, i.e. ownerships, restrictions, etc. A cadastral object can be a complete land parcel, a parcel of storey houses (strata), e.g. apartments or residential flats, or a parcel below the surface (stratum), e.g. a tunnel. Cadastral objects in Malaysia are still registered in 2D. Current 3D situations often involve residential flat houses. Here, a strata title is given to the owner based on the area they occupy. However, when it comes to complex situations, it is still doubtful how to fully show the legal rights
of the ownership. In other 3D situations, an air space permit needs to be applied for from the land office. For example, a highway fly-over that crosses local state land requires an air space permit under the jurisdiction of the NLC 1965. Here, the legal object is shown in a 2D CAD drawing overlaid on the cadastral parcel, and a permit is written out. Overall, the current registration just shows the outer boundary as a frame of reference to define the 3D legal objects. The question arises of how to determine and register complex situations. The new cadastral system should reflect the existence of the real-world objects, i.e. 3D physical objects, each with its own legal rights. Therefore, a 3D cadastre is hopefully a good way to solve the problems that occur.

2.1. Computerized Land Registration System (CLRS)
Land administration in Malaysia uses the Torrens System as its background, as portrayed in the National Land Code 1965 (NLC 1965). Land tenure under the Torrens System provides evidence that the land owner has the right to the land without any objection. The traditional land registration system was manual. In 1992, the National Land Code 1965 was amended to introduce the Computerized Land Registration System. This modernization step was taken to change all manual transactions to computerized systems. Table 14, Section 5A of the NLC 1965 gives the registrar the rights and explains the responsibilities to be upheld concerning this transformation. The introduction of CLRS is intended to enhance work productivity at the land offices. Furthermore, the land database being developed can be a useful addition for land information system users.

2.1.1. The System Architecture
State-of-the-art computer technology makes it possible to develop the Computerized Land Registration System (CLRS). CLRS is a mini online system that operates on a server in a UNIX environment. The interface of CLRS is shown in Figure 1. The main users are the Land and Mineral Offices (state) and the Land Offices (district). All of the information can only be obtained via the local network in the specific offices. The registration of land can only be done by a registrar of the respective offices. Figure 2 shows the general structure of the CLRS configuration. The system is supervised by the Ministry of Natural Resources and Environment. Land can be registered either at the State Land and Mineral Offices or at the District Land Offices. The type of registration depends on the type of land being
registered, either under the ownership of the Registrar's office or the ownership of the Land Office. There are five sub-section menus operating in CLRS. The structure of the menus and their remarks are shown in Figure 3. The system uses an Oracle database to organize the recorded data. The information recorded in the CLRS database can be seen in Table 1.
Fig. 1. Interface of CLRS (SPTB main menu, Sistem Pendaftaran Tanah Berkomputer)
Fig. 2. Structure of the CLRS configuration (Ministry of Natural Resources and Environment computer system, CLRS at the State Land and Mineral Offices, and CLRS at the District Land Offices)
Fig. 3. Sub-sections of operations in CLRS: Preliminary, Entry (specify types of dealings, prepare endorsements for applications), Result (make decisions on land registration and documentation of ownership), Utilities (updates and corrections) and Retrieve Information (search records of documents and details)
Table 1. Information in the CLRS database

No.  CLRS Information           Remarks
1.   Land title database        Data of ownership, no. of land title, location, type of use, type of ownership and period of use
2.   Land information           Lot number, area, standard sheet no. and certified plan no.
3.   Restriction of ownership   Explicit and implicit restrictions
4.   Record of ownership        Details of landlord and information of dealings
5.   Party of interest          Details of party(s) of interest
6.   Plan of land               Plan of land occupied in form
7.   Land tax                   Yearly tax paid
8.   Dealings of land           Transfer, change of restriction, subdivision, lease, easement, etc.
2.1.2. The Registration Process
Basically, the registration process follows the work flow shown in Figure 4. At the preliminary stage, the application form is submitted at the counter and the details are checked. At the second stage, the staff key all details into the CLRS system. The details in the CLRS system are sent to the registrar in Intermediate Titles File (ITF) format. The registrar proceeds and decides from the ITF file whether or not to register the document. If all of the details are correct, the application is registered; otherwise the application is suspended. The output is printed documents known as the Computer Registered Document of Title (CRDT), kept by the land office, and the Computer Issued Document of Title (CIDT), delivered to the owner.
Fig. 4. Registration process in CLRS (Preliminary, Data Entry to System, Decision of Registrar, Document Handling)
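As a rough illustration only, the four stages of the work flow might be expressed as a small sketch; the stage names follow Figure 4, while the function and field names are assumptions.

# Sketch of the CLRS registration work flow (stage names from Fig. 4; fields hypothetical).
STAGES = ["Preliminary", "Data Entry to System", "Decision of Registrar", "Document Handling"]

def register_application(application):
    """Walk an application through the four stages; return the produced documents."""
    # Preliminary: details checked at the counter.
    if not application.get("details_complete"):
        return {"status": "rejected at counter"}
    # Data entry: details keyed into CLRS and passed to the registrar as an ITF file.
    itf = {"title_no": application["title_no"], "owner": application["owner"]}
    # Decision of registrar: register or suspend.
    if not application.get("details_correct"):
        return {"status": "suspended", "itf": itf}
    # Document handling: CRDT kept by the land office, CIDT delivered to the owner.
    return {"status": "registered",
            "CRDT": f"CRDT-{itf['title_no']}",
            "CIDT": f"CIDT-{itf['title_no']}"}

print(register_application({"title_no": "GRN 12345", "owner": "A. Owner",
                            "details_complete": True, "details_correct": True}))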
3. Current Practice with a Hybrid Cadastre

In Malaysia, a solution with a hybrid cadastre is a promising one. As the research is still at an early stage, it is not appropriate to develop a full 3D cadastral registration for this early study; however, the potential of full 3D registration will be taken into account. In summary, the hybrid cadastre proposed by Stoter (2004) means preservation of the 2D cadastre and registration of the factual situation in 3D by registering 3D objects within the 2D cadastral registration. The relationship between the parcels and the 3D objects can be maintained explicitly. The hybrid solution combines legal and factual registration. The registration of a 3D object can be done by intersecting the 3D building plan with the cadastral parcels. In this way the legal rights of the property can be defined and shown in 3D. The exact legal rights and documents containing precise 3D information can be retrieved at the relevant offices. Thus, with this solution one should be aware that the registration of 3D factual objects is not identical to the definition of 3D legal objects. The hybrid solution offers a combination of 2D and 3D data in a single database. Therefore, the spatial information in terms of the vertical dimension can be retrieved and taken into account when it is needed, and spatial operations such as storage, access, query and analysis are possible. The database will also maintain administrative properties such as the rights, restrictions and limited rights of the particular objects.
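A minimal sketch of how 2D parcels and 3D objects could sit in a single database with an explicit relationship between them is shown below; the table layout is an assumption for illustration and is not the actual CLRS or CDMS schema (SQLite is used only to keep the example self-contained).

# Sketch: single database holding 2D parcels, 3D objects and their explicit links
# (hypothetical tables; geometries stored as plain text for brevity).
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE parcel   (parcel_id TEXT PRIMARY KEY, boundary_2d TEXT);
CREATE TABLE object3d (object_id TEXT PRIMARY KEY, kind TEXT, geometry_3d TEXT);
CREATE TABLE parcel_object (parcel_id TEXT REFERENCES parcel,
                            object_id TEXT REFERENCES object3d);
""")
con.execute("INSERT INTO parcel VALUES ('LOT-100', '(0 0, 40 0, 40 30, 0 30)')")
con.execute("INSERT INTO object3d VALUES ('OBJ-1', 'building', '(...3D solid...)')")
con.execute("INSERT INTO parcel_object VALUES ('LOT-100', 'OBJ-1')")

# Query: which 3D objects are registered within a given parcel?
rows = con.execute("""SELECT o.object_id, o.kind FROM object3d o
                      JOIN parcel_object po ON po.object_id = o.object_id
                      WHERE po.parcel_id = ?""", ("LOT-100",)).fetchall()
print(rows)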
3.1. Conceptual Model and Logical Model of CLRS and CDMS

The Computerized Land Registration System (CLRS) is basically concerned with land administration. The schematic diagram of the system is shown in Figure 5. Generally, the system has registration coverage, processing, functionality and output. The CLRS administrative model is bound into three components of entities, which are legal rights, cadastral objects and cadastral subjects. The relationship between the entities is shown in Figure 6. A cadastral object can be a parcel, strata or stratum. A cadastral subject is a person who has rights over the cadastral objects. A legal right is registered between cadastral objects and cadastral subjects.
Fig. 5. Schematic diagram of CLRS (showing ownership, dealings and non-dealings coverage; the Preliminary, Data Entry, Utilities and Retrieve Information operations; and outputs such as plans, certificates, verification documents, delivery records, grants and leases)
Fig. 6. Entity relationship of the CLRS administrative model (Cadastral Subject HAS Cadastral Objects, linked through Legal Rights)
The Cadastral Data Management System (CDMS) is geared more towards the land registration system. The conceptual model of CDMS is portrayed through the relationships of its data model components. The data model of CDMS covers three types
of entities, namely the cadastre lot (parcel), the boundary line and the boundary mark (stone). The relationships between the entities and their attributes are shown in Figure 7. The logical model of the system shows the relationship between the entities and the spatial objects. Each of the entities has its own attributes and is represented as a spatial object, namely a point, polyline or polygon. The data model of the CDMS is summarised in Table 2.
Fig. 7. Entity relationship and attributes in CDMS

Table 2. Data model of CDMS

Entities         Attributes                                                               Spatial object
Boundary Marks   Point_key, Apdate, Mark_desc, Serial, Coord_type, R_east, R_north,       Point
                 S_comment, Status, GID
Boundary Line    Apdate, Parcel_key, Bearing, Distance, Units, Class, Line_code,          Polyline
                 Line_type, Entry_mode, PA, Fnode, Tnode, AdjParcel, Status, GID
Cadastre Lot     Negeri, Daerah, Mukim, Seksyen, Lot, Svy_area, Area_unit, ...            Polygon
3.2 CLRS for 3D Objects

It is clear that the current Computerized Land Registration System (CLRS) does not include 3D object registration; the system is geared more towards land administration. In addition, the parcel plan (certified plan, CP) is not systematically linked within the system itself. The CP needs to be acquired from the Survey Department database, i.e. the Cadastral Data Management System (CDMS), and can later be incorporated in the registration form. It is therefore the aim of this research to develop a CLRS for 3D objects. Figure 8 shows our proposed concept for the registration of 3D objects. It includes the two components discussed in the preceding Section 2, namely cadastral registration (land administration) and land registration (3D objects).
Fig. 8. Conceptual model of CLRS for 3D objects (land registration and cadastral registration combined into one database, with 3D object visualization)
4. Concluding Remarks

There is a need to introduce a 3D registration system for a 3D cadastre, as current registration in Malaysia is still practiced in a 2D system and there is demand, especially for managing 3D physical object property. Two organizations are involved in the cadastral system: the Department of Survey and Mapping Malaysia (DSMM) and the Land Offices (LO). DSMM deals with the land registration system that produces maps, whereas the LO is responsible for cadastral registration, i.e. cadastral administrative matters.
Registration of legal cadastral objects is done through the Computerized Land Registration System (CLRS). The system keeps all information on the registration. Here, the relevant parcels for registration are retrievable externally from the CDMS database. Based on the proposed concept, we intend to develop a CLRS for 3D objects, such as buildings inside 2D parcels, where an integration of the geo-database (CDMS) and the administrative database (CLRS) will be carried out, eventually leading to a 3D registration system for a 3D cadastre. The 3D registration aspect will thus be portrayed in the proposed system.
References

Abdul-Rahman, A. and J.E. Stoter (2005). 3D Cadaster in Malaysia - how to realize it? 7th Surveyors Congress, Petaling Jaya, Selangor, Malaysia.
Abdul-Rahman, A., J.E. Stoter and A.F. Nordin (2005). Towards 3D Cadaster in Malaysia. International Symposium and Exhibition on Geoinformation (ISG 2005), Penang, Malaysia.
FIG (1995). The FIG Statement on the Cadastre. Technical Report Publication 11, Fédération Internationale des Géomètres, Commission 7, 1995.
Genggatharan, M. (2005). Conceptual Model for Integration of Cadastral Data Management System (CDMS) and Computer Land Registration System (CLRS). MSc Thesis, Universiti Teknologi Malaysia. (In Malay)
JUPEM, Manual Pengguna Sistem Pengurusan Pangkalan Data Kadaster. Department of Survey and Mapping Malaysia. (In Malay)
KPTG (1997). Manual Sistem Pendaftaran Tanah Berkomputer: Latihan kepada Pendaftar. Department of Director General of Lands and Mines. (In Malay)
KPTG, Manual Sistem Pendaftaran Tanah Berkomputer: Bidang Liputan SPTB. Department of Director General of Lands and Mines. (In Malay)
National Land Code (1965).
Omar, A.H. (2003). The Development of a Coordinated Cadastral System for Peninsular Malaysia. MSc Thesis, Universiti Teknologi Malaysia.
Stoter, J.E., M.A. Salzmann, P.J.M. van Oosterom and P. van der Molen (2002). Towards a 3D Cadastre. FIG XXII/ACSM-ASPRS, 19-26 April 2002, Washington, USA.
Stoter, J.E. (2004). 3D Cadastre. PhD Thesis, International Institute for Geo-Information Science and Earth Observation (ITC), the Netherlands.
Volumetric Spatiotemporal Data Model

Mohd Shafry Mohd Rahim 1, Abdul Rashid Mohamed Shariff 2, Shattri Mansor 2, Ahmad Rodzi Mahmud 2, Mohammad Ashari Alias 1

1 Faculty of Computer Science and Information System, Universiti Teknologi Malaysia, 81310 Skudai, Johor, Malaysia. {shafry, ashari}@fsksm.utm.my
2 Institute of Advanced Technology, Universiti Putra Malaysia, 43400 Serdang, Selangor, Malaysia. {rashid, shattri, arm}@eng.upm.edu.my
Abstract

This paper summarizes the Volumetric Spatiotemporal Data Model, which has been developed to manage surface movement in Virtual Geographical Information Systems (VGIS). A volumetric object is one type of spatial object in VGIS, used to visualize 3D information. In order to model real processes in VGIS, integrating a temporal element with 3D objects is an important issue. Here, we propose a spatiotemporal data model for managing volumetric surface movement according to the movement criteria used in VGIS. Time has been integrated into the data model, and temporal versions of volumetric surface data can be stored using the model. The model has been implemented by developing a prototype visualization system using a Triangulated Irregular Network (TIN) with time integrated into the TIN structure. Results show that the proposed data model can work with a visualization algorithm using linear interpolation and an improved TIN structure.

Keywords: Volumetric and 3D, Spatiotemporal Data Model, Database, TIN, VGIS.
1 Introduction

Virtual Geographical Information Systems (VGIS) concern the use of computer graphics technology to improve the presentation of geographical information, so that Geographical Information Systems (GIS) become more realistic, with real processes and presentation exactly as in the real world.
In VGIS or GIS, volumetric and dynamic phenomena are, by definition, multi-dimensional, which means that they are conceptually and computationally challenging. Dynamic phenomena refer to movements which contribute to changes in geographic features; some of these changes or movements may contribute to changes of the geometric object. The challenge becomes greater when we consider large-scale geographic processes. In many cases, simply introducing an additional orthogonal axis (Z) is convenient but insufficient, because important spatial and temporal characteristics and relationships may be indiscernible in this approach. Although visualization techniques for three or more dimensions have become popular in recent years, data models and formal languages have not yet been fully developed to support advanced spatial and temporal analysis in multiple dimensions (Yuan M., et al., 2000). In this research, the discussion focuses on volumetric surface movement in the real world, and we develop a spatiotemporal data model suitable for managing volumetric surface movement data, called the Volumetric Spatiotemporal Data Model. The discussion also covers the implementation of the Volumetric Spatiotemporal Data Model and the development of a prototype application to simulate volumetric surface movement.
2 Spatiotemporal Data Model and Its Visualization

In GIS, a data model is the abstraction of real-world phenomena according to a formalized, conceptual scheme, which is usually implemented using the geographical primitives of points, lines and polygons or discretized continuous fields (Nadi, S., et al. 2003). A data model should define data types, relationships, operations and rules to maintain database integrity (Nadi, S., et al. 2003). In VGIS, the data model has the same meaning, enhanced to focus on 3D data. A spatiotemporal data model in VGIS is an abstraction for managing 3D spatial data together with the temporal element. A spatiotemporal data model is very important for creating a good database system for VGIS, which deals with space and time as the main factors in the system (Rahim M.S.M, et al., 2005). Many spatiotemporal data models have been developed previously. We collected and analyzed nine data models during the study, namely GEN-STGIS (Narciso, F.E., 1999), the Cell Tuple Based Spatiotemporal Data Model (Ale, R., and Wolfgang, K., 1999), the Cube Data Model (Moris, K., et al. 2000), the Activity Based Data Model (Donggen, W. and Tao, C. 2001), the Object Based Data Model, the Data Model for Zoning (Philip, J.U., 2001), the Object Ori-
ented Spatial Temporal Data Model (Bonan, L., and Guoray, C. 2002), the Multigranular Spatiotemporal Data Model (Elena, C., et al. 2003) and the Feature-Based Temporal Data Model (Yanfen, L. 2004). Several issues have been addressed by these researchers. In our observation, spatiotemporal data modeling lacks a foundation in understanding real-world phenomena; in the future, such a foundation is very important for developing real-time processes in GIS (Rahim M.S.M, et al., 2005). An interesting spatiotemporal data model should focus on volumetric data and geographic movement behavior in order to create VGIS with realistic processes. Another issue related to the capability of a spatiotemporal data model is visualizing 3D or volumetric spatiotemporal objects, in order to increase user understanding of geographic phenomena and to make simulations or predictions about the future. Thus, the data model must be capable of supporting user queries for visualizing 3D information with movement in VGIS. This is a very challenging issue. It is also addressed by Roddick et al. (2004), concerning the development of techniques and tools that are simply unable to cope when expanded to handle additional dimensions.
3 Formalism of Volumetric Surface Movements
In the real world, all objects are volumetric. It is very difficult to manage volumetric objects directly, so a volumetric object can be represented as a 3D object. The focus of this discussion is to identify the movement criteria which can be used to develop the spatiotemporal data model. Volumetric surface movement always involves a transformation of the set of points on the surface. Figure 1 shows the process of movement of a volumetric surface.

Definition 1: A volumetric surface movement (mv) consists of two parameters, a volumetric object (v) and time (t). The volumetric surface object (v) is a set of geometric objects built up as surface → triangle → line → point. Time (t) represents the period of changes and the duration of changes.

Definition 2: Time (t) changes continuously. When the object (v) and attribute (a) change, time (t) acts as the reference of the change process. Δt represents the period of change; it has a start time ts and an end time te, representing the duration of a change.

Definition 3: When a volumetric object (v) changes, all parameters (surface → triangle → line → point), together with time as a major parameter of the
object used to identify the period and duration of change, change accordingly. Time (t) belonging to the object represents change or movement events. The basic requirement for the construction of a volumetric object is a point (P). So, for the reconstruction of a volumetric moving object, we must add a time parameter to the representation of the point (P), i.e. (x, y, z, t).
Fig. 1. Volumetric surface movement (a volumetric surface plus time; each point carries (x, y, z) and t, from a start time to an end time)
From the definitions, we can say that movement of the surface happens at the points which make up the volumetric surface. As shown in Figure 2, movement changes the location of a point to a new point and generates a new line, which contributes to the change of the volumetric surface.
Fig. 2. Transformation of a point within one line: P(x, y, z) moves to P′(x+Δx, y+Δy, z+Δz) as t becomes t+Δt
The movement process can be described as Δt = t′ − t, which represents the change in time, Δx = x′ − x, the change along the x axis, Δy = y′ − y, the change along the y axis, and Δz = z′ − z, the change along the z axis. The direction of movement can be defined based on the modulus (absolute) values of the changes along the x, y and z axes.
Table 1 shows the direction of movement when the volumetric surface object changes from one state at a certain time to another.

Table 1: Description of point transformation in volumetric surface movement

No.  State of point movement                     Direction of movement
1    (|Δx| & |Δy| & |Δz|) = 0                    (no movement)
2    (|Δx| > 0) & (|Δy| & |Δz|) = 0              x axis
3    (|Δx| & |Δz|) = 0 & (|Δy|) > 0              y axis
4    (|Δx| & |Δy|) = 0 & (|Δz|) > 0              z axis
5    (|Δx| & |Δy|) > 0 & (|Δz|) = 0              x & y axis
6    (|Δx| & |Δz|) > 0 & (|Δy|) = 0              x & z axis
7    (|Δy| & |Δz|) > 0 & (|Δx|) = 0              y & z axis
8    (|Δx| & |Δy| & |Δz|) > 0                    x & y & z axis
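The eight states of Table 1 follow directly from the absolute coordinate differences; a small sketch (with a hypothetical function name) that classifies a point's movement direction is given below.

# Sketch: classify the direction of movement of a point from its coordinate changes
# (implements the eight states of Table 1).
def movement_direction(p, p_next, eps=0.0):
    dx, dy, dz = (abs(p_next[0] - p[0]), abs(p_next[1] - p[1]), abs(p_next[2] - p[2]))
    axes = [name for name, d in (("x", dx), ("y", dy), ("z", dz)) if d > eps]
    return "no movement" if not axes else " & ".join(axes) + " axis"

print(movement_direction((1.0, 2.0, 3.0), (1.0, 2.5, 3.0)))   # 'y axis'
print(movement_direction((1.0, 2.0, 3.0), (1.5, 2.5, 3.2)))   # 'x & y & z axis'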
Movement can be visualized simply by using a linear interpolation model. Figure 3 shows the interpolation process.

Fig. 3. Linear interpolation process for simulating movement (P at Ts moves to P′ at Te)
Based on Figure 3, let P(x, y, z) be the point at the start time Ts, moving to P′(x′, y′, z′) at Te. The movement process changes along the three axes: Δx = x′ − x, Δy = y′ − y, Δz = z′ − z, and Δt = Te − Ts. The next point P′ is the previous point P with the differences along each axis added, i.e. x′ = x + Δx, y′ = y + Δy, z′ = z + Δz, and Te = Ts + Δt; intermediate positions at times between Ts and Te are obtained by scaling these differences linearly with the elapsed fraction of Δt.
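A sketch of this linear interpolation, derived directly from the Δx, Δy, Δz and Δt relations above (the function name is illustrative), is given below.

# Sketch: linear interpolation of a point between its positions at Ts and Te.
def interpolate_point(p_start, p_end, t_start, t_end, t):
    """Position of the point at time t, with t_start <= t <= t_end."""
    if t_end == t_start or not (t_start <= t <= t_end):
        raise ValueError("t must lie within a non-empty [t_start, t_end] interval")
    f = (t - t_start) / (t_end - t_start)          # fraction of the elapsed duration
    return tuple(a + f * (b - a) for a, b in zip(p_start, p_end))

# Example: halfway between the two stored versions of the point.
print(interpolate_point((0.0, 0.0, 10.0), (4.0, 2.0, 12.0), t_start=0.0, t_end=10.0, t=5.0))
# -> (2.0, 1.0, 11.0)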
4 Volumetric Spatiotemporal Data Model
Using the formalism defined in Section 3, the Volumetric Spatiotemporal Data Model is proposed and discussed. In the data model, it is important to define the geometry of the object for storage and retrieval purposes. Based on the formalism, every point must carry time as an entity, which is required to answer user queries based on time.
Usually, data are collected based on a version approach, and this needs to be managed, especially in terms of volumetric surface movement. It means that every time data are collected, the user has identified the movement or changes that happened in the space. This is the basis of the spatiotemporal data model development. This situation can be transformed into mathematical equations to define the fundamentals of the proposed data model.

Definition 4: Assume that the versions of spatial data are F(t1), F(t2), ..., F(tm).
Object at time 1: F(t1) = {<x1, y1, z1, t1>, <x2, y2, z2, t1>, ..., <xn, yn, zn, t1>}
Object at time 2: F(t2) = {<x1, y1, z1, t2>, <x2, y2, z2, t2>, ..., <xn, yn, zn, t2>}
Object at time m: F(tm) = {<x1, y1, z1, tm>, <x2, y2, z2, tm>, ..., <xn, yn, zn, tm>}

Definition 5: Let f(mv) be defined as the volumetric surface movement process. The set of volumetric movement data is then the union of all sets of version data:
f(mv) = {<x1, y1, z1, t1>, ..., <xn, yn, zn, t1>} ∪ {<x1, y1, z1, t2>, ..., <xn, yn, zn, t2>} ∪ ... ∪ {<x1, y1, z1, tm>, ..., <xn, yn, zn, tm>}

In a real process, not all points of the volumetric surface change or move. The question is whether it is necessary to store all points, which would increase the storage usage in the implementation. To avoid data redundancy, the data model must be able to identify which points have changed. To do this, the data model must be able to compare every point between versions of the data and capture the changed points, using the formalism of Section 3. The conceptual identification is: if <x1, y1, z1, tn> − <x1, y1, z1, tn+1> = 0, the data at tn+1 equal the data at tn; else, if <x1, y1, z1, tn> − <x1, y1, z1, tn+1> ≠ 0, the data for tn+1 are <x1, y1, z1, tn+1>.

The Volumetric Spatiotemporal Data Model is designed based on the common data structure of volumetric objects normally used in computer graphics. The model simply defines the surface of the volumetric object as stated in Definition 3 of Section 3. The enhancement in the model is the addition of time as an attribute of every point, and the model incorporates the structure formalized in Definitions 4 and 5. Figure 4 shows the conceptual model of the Volumetric Spatiotemporal Data Model.
Fig. 4. Volumetric Spatiotemporal Data Model
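The identification rule above (store a point for t(n+1) only when it differs from its value at t(n)) can be sketched as follows; the dictionary layout, mapping a point identifier to its coordinates, is an assumption made for illustration.

# Sketch: keep only the points that changed between two versions of the surface,
# following the identification rule of the data model.
def changed_points(version_n, version_n1, eps=1e-9):
    """version_n / version_n1 map a point id to its (x, y, z) at t(n) and t(n+1)."""
    changed = {}
    for pid, xyz_n1 in version_n1.items():
        xyz_n = version_n.get(pid)
        if xyz_n is None or any(abs(a - b) > eps for a, b in zip(xyz_n, xyz_n1)):
            changed[pid] = xyz_n1           # store only the new value
    return changed

v1 = {1: (0.0, 0.0, 10.0), 2: (5.0, 0.0, 11.0), 3: (5.0, 5.0, 12.0)}
v2 = {1: (0.0, 0.0, 10.0), 2: (5.0, 0.0, 11.4), 3: (5.0, 5.0, 12.0)}
print(changed_points(v1, v2))               # {2: (5.0, 0.0, 11.4)}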
5 Data Model Implementation and Results
In the implementation phase, the data model has been implemented using the relational database model, which is accepted by many database systems. A prototype visualization system has been developed, with its own data format, for visualizing the volumetric surface movement data. The structure of the database has been designed to store a series of points which create a surface through lines and polygons. Every point stored in the database is incorporated with a time entity. There are four major entities: point, line, polygon and surface. A simple database structure has been designed as below:

Points (Id_point, Time, x, y, z)
Line (Id_line, Point 1, Point 2)
Polygon (Id_polygon, Line 1, Line 2, Line 3)
Surface (Id_surface, Time, Polygon 1, Polygon 2, ..., Polygon n)

The polygon is based on the TIN structure, which creates triangular surfaces. Query processing has been designed to retrieve data from the database using area identification. The database has Id_surface in the surface entity together with a start time and an end time. The result of this query is a set of points which represents the volumetric surface from the start time to the end time.
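The structure above can be mirrored in a small relational sketch, together with a simplified version of the time-range query described in the text; SQLite is used only to keep the example self-contained, the column names are assumptions, and the surface-polygon-line joins are omitted for brevity.

# Sketch of the Points/Line/Polygon/Surface structure with a time attribute on points,
# and a simplified query retrieving stored point versions within a start and end time.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE points  (id_point INTEGER PRIMARY KEY, time REAL, x REAL, y REAL, z REAL);
CREATE TABLE line    (id_line INTEGER PRIMARY KEY, point1 INTEGER, point2 INTEGER);
CREATE TABLE polygon (id_polygon INTEGER PRIMARY KEY, line1 INTEGER, line2 INTEGER, line3 INTEGER);
CREATE TABLE surface (id_surface INTEGER, time REAL, id_polygon INTEGER);
""")
con.executemany("INSERT INTO points VALUES (?,?,?,?,?)",
                [(1, 0.0, 0, 0, 10), (2, 0.0, 5, 0, 11), (3, 0.0, 5, 5, 12),
                 (4, 1.0, 0, 0, 10), (5, 1.0, 5, 0, 11.4), (6, 1.0, 5, 5, 12)])

def surface_points(con, time_start, time_end):
    """All stored point versions of the volumetric surface within [time_start, time_end]."""
    return con.execute("SELECT id_point, time, x, y, z FROM points "
                       "WHERE time BETWEEN ? AND ? ORDER BY time",
                       (time_start, time_end)).fetchall()

print(surface_points(con, 0.0, 1.0))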
[Figure: visualization of the volumetric surface at the start time, an intermediate time and the end time.]
6 Conclusion and Future Works

According to Mountrakis, G., et al. (2005), a successful GIS query process should be able to support different user preferences in spatiotemporal scene queries, and should not take a fixed-metric approach in which all users are considered equal. This requires spatiotemporal databases that can integrate whole series of data. The Volumetric Spatiotemporal Data Model can be one idea for resolving this kind of issue. All changes of the volumetric surface movement are integrated into a single database system, which reduces data redundancy. This gives advantages in the design, analysis and manipulation of the spatiotemporal data based on user requirements. In the near future, the Volumetric Spatiotemporal Data Model will be tested with a real application related to the morphology of a limestone area, which is suitable for evaluating the data model as well as the prototype visualization application. In the future, the proposed model needs to be modified to take into account the uncertainties of data movement on the surface. The data model can currently handle only a fixed number of points, which may not be the case in the real dynamic world; currently, when this issue arises, we adjust the number of points to match the previous data. Besides that, improvement of the Volumetric Spatiotemporal Data Model can be made in the implementation aspect. Several issues need to be addressed, such as indexing the spatiotemporal data, which is always increasing, and formalizing spatiotemporal queries according to surface movement. In conclusion, our major contribution in this paper is the proposal of a spatiotemporal data model which has the capability to manage volumetric surface movement. In addition, the TIN structure has been used with a slight modification for the temporal element, which required the development of a visualization system.
7 Acknowledgments

This research has been sponsored by the Ministry of Science, Technology and Innovation (MOSTI), research grant 74260. Special thanks to the Institute of Advanced Technology for the advice.
References

Yuan, M., D.M. Mark, M.J. Egenhofer, and D.J. Peuquet (2004). Extensions to Geographic Representations. Chapter 5 in R.B. McMaster and E.L. Usery (eds.), A Research Agenda for Geographic Information Science, Boca Raton, Florida: CRC Press, 129-156.
Nadi, S., Delavar, M.R. (2003). Spatio-Temporal Modeling of Dynamic Phenomena in GIS. ScanGIS 2003 Proceedings, pp. 215-225.
Rahim, M.S.M., Shariff, A.R.M., Mansor, S., Mahmud, A.R. (2005). A Review on Spatiotemporal Data Model for Managing Data Movement in Geographical Information Systems (GIS). Journal of Information Technology, FSKSM, UTM, Vol. 1, Jld 9, December, pp. 21-32.
Narciso, F.E. (1999). A Spatiotemporal Data Model for Incorporating Time in Geographical Information Systems (GEN-STGIS). PhD dissertation in Computer Science and Engineering, University of South Florida, April.
Ale, R., Wolfgang, K. (1999). Cell Tuple Based Spatio-Temporal Data Model: An Object Oriented Approach. Geographic Information Systems, Association for Computing Machinery (ACM) Press, New York, NY, USA, 2-6 November, pp. 20-25.
Moris, K., Hill, D., Moore, A. (2000). Mapping the Environment through Three-Dimensional Space and Time. Pergamon, Computers, Environment and Urban Systems, Vol. 24, pp. 435-450.
Donggen, W., Tao, C. (2001). A Spatio-temporal Data Model for Activity-Based Transport Demand Modeling. International Journal of Geographical Information Science, Vol. 15, pp. 561-585.
Philip, J.U. (2001). A Spatiotemporal Data Model for Zoning. Master of Science Thesis, Department of Geography, Brigham Young University.
Bonan, L., Guoray, C. (2002). A General Object-Oriented Spatial Temporal Data Model. Symposium on Geospatial Theory, Processing and Applications, Ottawa, March.
Elena, C., Michela, B., Elisa, B., Giovanna, G. (2003). A Multigranular Spatiotemporal Data Model. GIS'03, New Orleans, Louisiana, USA, November 7-8, pp. 94-101.
Yanfen, L. (2004). A Feature-Based Temporal Representation and Its Implementation with Object-Relational Schema for Base Geographic Data in Object-Based Form. UCGIS Assembly.
Roddick, J.F., Egenhofer, M.J., Hoel, E., Papadias, D. (2004). Spatial, Temporal, and Spatiotemporal Databases. SIGMOD Record Web Edition, Vol. 33, No. 2, June 2004.
Mountrakis, G., Agouris, P., Stefanidis, A. (2005). Similarity Learning in GIS: An Overview of Definitions, Prerequisites and Challenges. Spatial Database Technologies, Techniques and Trends, Idea Group Publishing, pp. 295-321.
Use of 3D Visualization in Natural Disaster Risk Assessment for Urban Areas

S. Kemec* and H. S. Duzgun**

* METU, Graduate School of Natural and Applied Sciences, Geodetic and Geographic Information Technologies, 06531 Ankara, Turkey. [email protected]
** METU, Graduate School of Natural and Applied Sciences, Geodetic and Geographic Information Technologies, 06531 Ankara, Turkey [email protected]
Abstract

Visualization is the graphical presentation of information, with the goal of improving the viewer's understanding of the information content. As today's world is getting richer in information, visualization of that information is important for effective communication and decision making. The basic objective of this study is to create effective 3D visualization tools for the earthquake vulnerability level of each building by using the 3D visualization methodologies of Geographic Information Systems (GIS), Remote Sensing (RS) and Computer Aided Design (CAD) systems. Two separate 3D city models are created: the first is developed using GIS and RS technologies, and the other is created in a CAD environment. 1/25.000 scale digital elevation contour maps, stereo satellite images, building facade images and 2D layers of the study area are used to create these models. The study area is the Cumhuriye Quarter of Eskisehir City, Turkey, where a spatial decision support system (SDSS) for earthquake risk assessment is being developed. The process of 3D city model generation in GIS/RS is divided into two main parts: the first part is related to the processing of satellite images, and the second part consists of the vector layer operations. In the CAD environment, the 2D building footprint layer is used; after extrude operations, the extruded buildings are covered with building facade images using texture mapping tools. The developed city models form one of the components of the SDSS for Eskisehir. The results of social and building vulnerability analyses are visualized on the 3D city model to support earthquake risk assessment analyses.
KEY WORDS: Earthquake Risk Visualization, 3D City Model, GIS/RS-CAD, Decision-Making
Introduction

Natural disasters might affect all sorts of regions; the affected region could be a single building or even a whole country. Moreover, they can cause major damage to human life and urban infrastructure. Natural disaster losses can be listed as losses of life, structures, infrastructure and work, and economic losses. Disaster risk can be reduced by using effective risk management strategies. These strategies have to be associated with recent technological and scientific developments. GIS and RS can support decision-making throughout all phases of the emergency management cycle. Decision makers should be well introduced to the problem so that they can generate applicable strategies [Godschalk et al., 1998]. The basic action fields of the disaster risk reduction framework are:
• risk awareness,
• knowledge development,
• application of measures, and
• early warning systems [ISDR, 2002].
Effective visualization properties can play important roles in all fields of this framework. In this study, 3D city models are used for the visualisation of disaster risk indexes and vulnerabilities to support disaster decision processes.
Overview of 3D City Models

3D city models are generally used by government agencies for urban planning, for public safety studies such as fire propagation, and for commercial uses including phone, gas or electricity companies, etc. Most of these examples are concerned with modeling buildings, terrain and the traffic network in the city model. Models used for visualization can be grouped into three with respect to their data acquisition methods: photogrammetric, active sensors and hybrid sensors [Hu et al., 2003]. Photogrammetric methods are cost effective for large-scale 3D urban models. Terrestrial, panoramic and aerial image methods belong to this group of modeling. There are some example models that were created from panoramic
photos; an example is the MIT campus study of Teller et al. [2001]. The basic data source for photogrammetric methods is aerial images, which provide reliable footprint and roof height information for each building in the model area. Today, 3D city modeling from aerial images generally focuses on extracting accurate roof models from the images; Lin and Nevatia's model predefines L-, T- and I-shaped roofs and assigns the most suitable roof shape to the examined building's roof [Lin and Nevatia, 1998]. Stereo aerial images can also be useful tools for extracting 3D objects [Baillard and Maitre, 1999]. The lack of building facade information requires additional sensor data to be acquired for visual realism. Manual and semi-automatic methods give more mature results than fully automatic systems. Knowledge-based and machine learning algorithms are another active research area for the automation of such systems; Bellman and Shortis [2002] and Nevatia and Huertas [1998] are two example applications of this kind, both using knowledge and machine learning algorithms to improve the performance of automatic building extraction. An active sensor can be defined as a detection device that emits energy capable of being detected by the device itself. Basically, two kinds of active sensors are used for city modeling: ground-based and airborne. Ground-based systems are ideal for vertical textures, especially for historical structure facades [Frueh and Zakhor, 2001]; however, they provide less accuracy for the upper portions of tall buildings, and these systems cannot obtain roof and footprint data. Light detection and ranging (LIDAR) technology is also used for 3D city modeling studies and holds great potential for the automation of modeling tasks. Most city model applications use different data sources, which requires the hybrid usage of different sensors. Each of these data sources and the corresponding modeling techniques have advantages and disadvantages [Ribarsky et al. 2002]. Today, 2D GIS data are often available in most cities, and many model applications use these data [George, 2002; Norbert and Anders, 2002]. 2D CAD data provide accurate urban features and boundary data; boundaries in particular are very usable data sources for image segmentation. Limitations of CAD systems can be listed as data management capabilities, spatial data management and spatial modeling. The main strengths of CAD systems are clear and detailed graphical presentation, controllability, and the editing capability of graphical features. Graphical presentation is important for the purposes of this paper, and CAD properties are used for the generation of the 3D city model and the visualization of the related data.
Study Area and Data Sets

Study Area: For the 3D city model generation, a pilot area was selected from the Eskisehir metropolitan area in Turkey. This pilot model is intended to give an overview of how a complete 3D city model of the region would look. Using areas defined by legislative boundaries (a quarter in this study) is preferable to a purely physical area boundary definition when applying this kind of model. Therefore, the Cumhuriye quarter in the central business district of Eskisehir city, with a population of 4113 according to the 2000 census, was selected. There are 434 buildings within the quarter.

Data Sets: Four main data sets are used in the study:
• geometrically corrected stereoscopic Ikonos images with 1 m ground resolution,
• contour maps at 1/25.000 scale, obtained from the General Command of Mapping,
• vector layers from the Eskisehir Municipality:
  • the street layer of the city,
  • the building layer of the Cumhuriye Quarter (pilot area),
• building facade images collected by field survey, and
• auxiliary data of various vulnerability indexes (socio-economic, structural, physical accessibility and total disaster vulnerability indexes) for the buildings.
The created 3D city models form the basis for the visualization of various vulnerabilities, such as structural, socio-economic and physical accessibility, in the spatial decision support system.
Model Creation Phases
After collection of the required data and preparation of the hardware infrastructure, the study is performed in two main phases: first, creation of two discrete city models in GIS/RS and CAD environments, and then examination of the integration opportunities between these two 3D city models.
Model Creation Phases in the GIS/RS Environment
GIS intrinsically uses tabular and geographic data together. This characteristic is the main superiority of the GIS environment over CAD. Hence a 3D model in
the GIS/RS environment has effective geographical querying capabilities combined with visualization of the related data. Each building's disaster vulnerability index can be queried on the created model down to floor level.
The 3D city model in GIS/RS has two main parts. The first part is related to the raster data model: creation of a reference image map from stereo satellite images and digital elevation model (DEM) generation. The other part is related to the vector data model, with 3D layer operations.
A DEM is defined as a file or a database containing elevation points over a contiguous area [Manual of Photogrammetry, 2004]. The height data required for DEM generation can be obtained from point, line or polygonal vector height maps or from stereo satellite/aerial images. The generated DEM is used as the height base for the created 3D city model and also for the photogrammetric rectification of the satellite images. There were 21 elevation map sheets covering the whole Eskisehir city region. All these maps were combined, and the pilot area and its near surroundings were cropped for DEM generation. Then geometric correction of the satellite images and orthographic correction of the geometrically corrected images were carried out using the previously generated DEM.
Raw satellite images may have geometric, atmospheric, radiometric and angular distortions. Apart from the geometric and angular distortions, the other distortions are rectified at the satellite ground receiving stations. In order to fit the vector GIS layers to the reference image, the images have to be rectified in advance. To rectify the satellite images, an image-to-map rectification algorithm is applied. Three different mathematical transformation functions were tried: polynomial, rational and satellite model. The satellite model was found to give the most appropriate results. The vector road layer obtained from the Municipality is used at this stage: Ground Control Points (GCPs) for image registration are collected from the road layer, and the results of the registration are checked using the same layer. There are 42 GCPs collected for image registration. The overall root mean square error (RMSE) values computed for the GCPs using the satellite model are 0.47 pixels on the x axis and 0.63 pixels on the y axis.
There are two pairs of Ikonos images for the related area. After creation of the georeferenced orthophotos, the left and right stereo pairs are mosaicked to generate a single reference image; the nadir images of the left and right pairs are used for the mosaic. The vector layer operations are performed over these resultant raster products.
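As a minimal illustration of the GCP-based accuracy check described above (not the authors' actual software workflow, which compared polynomial, rational and satellite models), the sketch below fits a simple first-order polynomial (affine) image-to-map transform to a handful of hypothetical GCPs and reports the per-axis RMSE in pixels, the same kind of figure quoted for the satellite model.

```python
# Minimal sketch only: the GCP coordinates below are invented for illustration.
import numpy as np

def fit_affine(img_xy, map_xy):
    """Least-squares affine transform from image pixel coords to map coords."""
    A = np.column_stack([img_xy, np.ones(len(img_xy))])   # design matrix [x, y, 1]
    coeffs, *_ = np.linalg.lstsq(A, map_xy, rcond=None)   # 3x2 coefficient matrix
    return coeffs

def gcp_rmse(img_xy, map_xy, coeffs, pixel_size):
    """Residuals between predicted and known GCP map positions, in pixels."""
    pred = np.column_stack([img_xy, np.ones(len(img_xy))]) @ coeffs
    resid = (pred - map_xy) / pixel_size                   # metres -> pixels
    return np.sqrt(np.mean(resid ** 2, axis=0))            # RMSE per axis (x, y)

# Hypothetical GCPs: image (col, row) vs. map (E, N) picked from the road layer.
img = np.array([[120.0, 340.0], [880.0, 310.0], [450.0, 900.0], [700.0, 650.0]])
gmap = np.array([[302120.0, 4392340.0], [302880.0, 4392310.0],
                 [302452.0, 4391900.0], [302701.0, 4392151.0]])
c = fit_affine(img, gmap)
print("RMSE (pixels) x, y:", gcp_rmse(img, gmap, c, pixel_size=1.0))  # 1 m Ikonos pixel
```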
The main intent of the GIS/RS model is the thematic visualization of the disaster vulnerability condition of the buildings on the generated 3D model. The generated DEM and the ortho satellite image are used as reference layers for the 2D vector building footprints acquired from Eskisehir Municipality. The building floor is the unit object for the desired 3D urban model; the maximum floor number is 10 in the study area. To derive floor units from the building layer, the building footprints and the floor information in the attribute table of the building layer are used. Ten separate floor layers are created, one for each floor, and these ten layers are overlaid on the generated reference layers (DEM and ortho satellite image). After creation of the floor layers, they are extruded using the DEM height values, with each floor assumed to be 3 m high. An example frame from the created 3D GIS/RS model can be seen in Figure 1.
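A minimal sketch of this floor-extrusion step, assuming hypothetical attribute names: each building footprint contributes one layer per storey, extruded in 3 m steps from its DEM base elevation.

```python
# Illustrative only; "floors" and "dem_base" are hypothetical attribute names.
FLOOR_HEIGHT = 3.0  # metres per storey, as assumed in the study

buildings = [
    {"id": 1, "floors": 4, "dem_base": 792.5},   # invented example values
    {"id": 2, "floors": 10, "dem_base": 795.1},
]

def floor_layers(building):
    """Return (floor_no, z_bottom, z_top) triples for one building footprint."""
    base = building["dem_base"]
    return [(n + 1, base + n * FLOOR_HEIGHT, base + (n + 1) * FLOOR_HEIGHT)
            for n in range(building["floors"])]

for b in buildings:
    for floor_no, z0, z1 in floor_layers(b):
        print(f"building {b['id']} floor {floor_no}: {z0:.1f}-{z1:.1f} m")
```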
Fig. 1. 3D perspective view of the generated 3D GIS/RS city model (the same colour represents the same floor in each building). Ortho satellite image and other vector layers merged on the DEM.
Model Creation Phases in the CAD Environment
In the second phase of the study, another 3D city model is created in a CAD environment in order to have better visualization facilities for disaster management. This 3D city model forms the basis for simulating various disaster scenarios and visualizing them. Advanced visualization improves the decision maker's level of perception, and this property constitutes the main benefit of the CAD-based 3D city model. The 3D city model in CAD also provides a Virtual Reality (VR) environment. In order to create VR environments, digital 3D scenes first have to be constructed. For creating the digital 3D city scenes, 2D vector data (building footprints and roads) are used. Layers in the CAD environment are constructed based on the number of storeys of the buildings. The facade images, collected in the field with a digital camera, are used as texture maps for the external surfaces of the 3D solid buildings.
Different aperture values are used to obtain homogeneous exposures across all images. Several difficulties were also faced during field data collection: the difficulty of photographing buildings subject to official secrecy (i.e. police and military buildings), and dense, tall buildings on narrow roads, which cause perspective distortions in the facade images. Before mapping the field images onto the building surfaces, image rectification functions are applied to improve image quality and remove the distortions [Figure 2]. The required accuracy level of a 3D city model varies with its purpose; because this study has no photogrammetric goal, no photogrammetric network is set up for the facade images. Each image is manually rectified to serve as a facade image for the 3D city model. After the image rectification operation, the four corner points of each image are used for the coordinate system transformation of the related facade image.
Fig. 2. Before and after image rectification of an example facade image from the study area
The applied image rectification functions are cropping, contrast enhancement, rotation and perspective adjustment. The 3D solid model created from the 2D footprints obtained from Eskisehir Municipality is then prepared for facade image mapping: separate objects are created for the four facades of each building so that each facade's original image can be mapped onto it [Figure 10]. The road and river layers are also superimposed on the building layers to give a complete appearance to the 3D city model. The 3D drawing created in the CAD environment and the processed facade images are then transferred to another CAD environment where the images can be mapped onto the related facades.
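As one possible way to carry out the manual perspective adjustment described above (the paper does not name the software actually used), the sketch below maps the four picked corner points of a facade photograph to a rectangle with the facade's proportions using OpenCV; the file name and coordinates are hypothetical.

```python
import cv2
import numpy as np

img = cv2.imread("facade_raw.jpg")                      # placeholder input path
if img is None:
    raise FileNotFoundError("facade_raw.jpg is only a placeholder path")

# Four corner points picked on the photographed facade (hypothetical pixels).
src = np.float32([[110, 85], [1480, 140], [1455, 980], [95, 930]])
facade_w, facade_h = 1200, 800                          # target size matching the
dst = np.float32([[0, 0], [facade_w, 0],                # real facade proportions
                  [facade_w, facade_h], [0, facade_h]])

H = cv2.getPerspectiveTransform(src, dst)               # 3x3 homography
rectified = cv2.warpPerspective(img, H, (facade_w, facade_h))
cv2.imwrite("facade_rectified.jpg", rectified)          # ready for texture mapping
```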
Visualization/Simulation of CAD Model
In order to enhance the virtual reality of the generated environment, other complementary elements such as river and road texturing and the creation of a sky over the building model are also necessary. A simulation video was produced from captured scenes to present the constituted 3D city model [Figure 3]. The video was produced from 4000 single images, captured along three different camera paths, each at 1600x1200 pixel resolution. The creation and visualization of the 3D model is a never-ending operation in the CAD environment: every change has to be applied to the model to keep the city model fully complete. The most time-consuming and labor-intensive part of 3D model generation is the creation of the virtual reality environment. There is as yet no commercial software with fully automatic modeling functions that produces realistic buildings with their true facade textures. On the other hand, laser terrain scanning technology holds great potential for automating this task. The disaster vulnerability indexes of the buildings are visualized using index tables. The index table of each building can be seen by dropping the building facade in the created video: when passing a building, its facade image drops and the index results are shown on the index table; the facade image is raised again after passing the building [Figure 4].
Fig. 3. General view from the 3D city model
Fig. 4. Visualization of various vulnerabilities in the 3D city model
Conclusions and Future Work
Two separate models are constructed, in GIS/RS and CAD environments, and both environments have their pros and cons. GIS intrinsically uses tabular and geographic data together; this characteristic is the main superiority of the GIS environment over CAD, while RS is the main spatial data source and complements the GIS/RS environment. The main goal of these 3D city models is the visualization of different disaster vulnerability indexes (total, socio-economic and structural). Coding of small GUIs can help non-expert GIS users achieve effective visualization; the authors are working on this issue as a future advance of the study. The other environment used for model generation is CAD, which provides advanced visualization properties that improve the decision maker's level of perception. The integration of CAD and GIS/RS visualizations will be investigated in future studies by the authors.
References
C. Baillard and H. Maitre, 1999, "3D Reconstruction of Urban Scenes from Aerial Stereo Imagery: A Focusing Strategy", Computer Vision and Image Understanding, vol. 76, no. 3, pp. 244-258.
C. Frueh and A. Zakhor, 2001, "3D Model Generation for Cities Using Aerial Photographs and Ground Level Laser Scans", Proc. Conf. Computer Vision and Pattern Recognition, IEEE CS Press, vol. 2.2, pp. 31-38.
C. Lin and R. Nevatia, 1998, "Building Detection and Description from a Single Intensity Image", Computer Vision and Image Understanding (CVIU), vol. 72, no. 2, Elsevier Science, pp. 101-121.
Godschalk, David R., Edward J. Kaiser, Philip Berke, 1998, "Integrating Hazard Mitigation and Local Land-Use Planning", in Confronting Natural Hazards: Land Use Planning for Sustainable Communities, ed. Raymond J. Burby, Washington, D.C.: National Academy Press, Joseph Henry Press.
H. Norbert and K. Anders, 1997, "Acquisition of 3D Urban Models by Analysis of Aerial Images, Digital Surface Models and Existing 2D Building Information", Proc. SPIE Conf. Integrating Photogrammetric Techniques with Scene Analysis and Machine Vision III, SPIE, pp. 212-222.
Hu J., You S. and Neumann U., 2003, "Approaches to Large-Scale Urban Modelling", IEEE Computer Graphics and Applications, IEEE Computer Society, November/December 2003, pp. 62-69.
International Strategy for Disaster Reduction (ISDR), 2002, "Living with Risk: A Global Review of Disaster Reduction Initiatives", http://www.unisdr.org/eng/media-room/facts-sheets/fs-Defining-a-few-keyterms.htm (accessed 20 Apr. 2006).
J. Bellman and M.R. Shortis, 2002, "A Machine Learning Approach to Building Recognition in Aerial Photographs", Proc. Photogrammetric Computer Vision (PCV 02), part A, Int'l Soc. Photogrammetry and Remote Sensing (ISPRS), pp. 50-55.
Manual of Photogrammetry, Fifth Edition, 2004, American Society for Photogrammetry and Remote Sensing.
R. Nevatia and A. Huertas, 1998, "Knowledge-Based Building Detection and Description", Proc. DARPA 98: Image Understanding Workshop, DARPA, pp. 469-478.
T. Kersten, C. A. Pardo and M. Lindstaedt, 2004, "North German Renaissance Castles in the Computer: Architectural Photogrammetry to Virtual Reality".
W. Ribarsky, T. Wasilewski, and N. Faust, 2002, "From Urban Terrain Models to Visible Cities", IEEE Computer Graphics and Applications, vol. 22, no. 4, pp. 10-15.
V. George and D. Sander, 2001, "3D Building Model Reconstruction from Point Clouds and Ground Plans", Proc. ISPRS Workshop Land Surface Mapping and Characterization Using Laser Altimetry, ISPRS, pp. 37-43.
Development and Design of 3D Virtual Laboratory for Chemistry Subject Based on Constructivism-Cognitivism-Contextual Approach
Norasiken Bakar 1,*, Halimah Badioze Zaman 2
1 Jabatan Media Interaktif, Fakulti Teknologi Maklumat Dan Komunikasi, Kolej Universiti Teknikal Kebangsaan Malaysia, 75450 Ayer Keroh, Melaka, E-mail: [email protected].
2 Jabatan Sains Maklumat, Fakulti Teknologi Dan Sains Maklumat, Universiti Kebangsaan Malaysia, 43600 Bangi, Selangor Darul Ehsan, E-mail: [email protected].
Abstract
This paper addresses the design of a virtual laboratory for the chemistry topics of acid, base and salt (VLab-CHEM) in three dimensions (3D). The discussion covers the theoretical framework model, which is divided into analysis and design, development, and evaluation. For the second objective, the constructivism-cognitivism-contextual life cycle model used for 3D VLab-CHEM is reviewed; this model covers the education media, how to measure content, the authoring program and systematic instructional design. For the third objective, the 3D laboratory architecture and the storyboard for the early stage of VLab-CHEM development are explained. A further objective is defining the Instructional Design (ID) model for VLab-CHEM, which includes the 3D virtual laboratory modules; the ID model focuses on learning and teaching aspects of science education and the educational process using interactive multimedia modules. In designing the development of VLab-CHEM, the researchers applied learning theories, namely constructivism, cognitivism and contextual learning. Concepts such as learning-by-doing, contextual education, simulation, 3D animation and real-time animation are added in VLab-CHEM to create a 3D virtual environment based on reality.
KEY WORDS: 3D, Virtual Laboratory, Experiment, Learning by Doing, Chemistry, Education Theory.
1 Introduction
The development of computer technology, especially multimedia technology, has been a major factor in the world of education, and for that reason the use of computers in learning and teaching has increased (Ahmad Fauzi Mohd Ayub et al. 2005). Research on a virtual laboratory for the chemistry topics of acid, base and salt (VLab-CHEM) is motivated by the problems faced by students and teachers in conventional chemistry education, especially in understanding experimental processes that are not easy to demonstrate in the real world (Norasiken & Halimah 2005). VLab-CHEM is developed to add new value to chemistry education. Besides that, VLab-CHEM can attract more attention from students through this new way of learning. Activity in VLab-CHEM is based on an interactive laboratory, in order to ensure that students understand chemical processes and can carry out experiments. VLab-CHEM will thereby reduce costs such as laboratory time, materials and components used in experiments, compared with a conventional laboratory. Moreover, students and teachers can repeat an experiment as often as they want, and students avoid handling dangerous materials (Norasiken and Halimah 2005).
2 Educational Theories
2.1 Constructivism Theory
A student can be defined as an active individual in developing his or her own knowledge. Education involves creating and organising student knowledge through experience, the surrounding environment and their interaction with the software (Mohd Arif et al. 2005). According to constructivist educational theory, students hold their own conceptions, created through interaction with the environment. The concepts held by each student differ, and misconceptions occur when the concept a student has created conflicts with the concept accepted in the classroom.
Through the constructivist approach in the classroom, students are actively involved in the educational process and have the chance to create their own knowledge based on their background (Roziah Abdullah 2004). According to Jonassen (1994), constructivists hold that students should learn to construct, define and build their understanding in many ways, based on experience and their ongoing relation with the environment. Education is an activity carried out by the students themselves (Shapiro 1994). To help students integrate new knowledge with the experience they already have, the activity should be set in the context of real requirements and seen from different perspectives (Jonassen 1994; Oliver 2000).
2.2 Cognitivism Theory
Cognitivism theory refers to the thinking processes that take place in a person during learning, and it relates to short-term and long-term memory. While learning, a student creates a cognitive structure in memory. Every time students learn, they use all their prior experience and store the new experience in memory until they need it again to help them in the learning process. One branch of cognitivism is information processing in computer-based learning. This theory promotes active learning, in which students actively acquire, restructure and define knowledge in order to make learning more enjoyable, because students need to transform learning and knowledge. The theory focuses on the relation between new knowledge and prior knowledge. To help students learn, software should be designed with symbols and other channels so that learning is more accurate and easier to grasp. Based on previous research, the cognitive structure of the student should be taken into account when designing learning activities (Alessi and Trollip 2001; Simonson and Thomson 1990).
Cognitive theorists such as Bruner, Piaget and Papert focused on the following main concepts (Simonson and Thomson 1990):
(a) How knowledge can be arranged and structured.
(b) The readiness of the student to study.
(c) The value of intuition and of intellectual approaches that reach conclusions without following analytical steps.
(d) The importance of motivation to study, or of having a positive attitude towards learning.
Based on cognitive theory, some guidelines have been used in creating and evaluating computer-based learning (Simonson and Thomson 1990). The guidelines are as follows:
(a) The willingness to study is important to start, maintain and ensure the objective of learning.
(b) The structure and types of knowledge to be taught: it is based on the view that students begin to understand through concrete operations, graphic displays of reality, and then abstract expressions and number systems.
(c) The sequence of the learning material is important and should suit the way students process the information they receive.
(d) Knowledge of children's cognitive styles, through the dominant brain hemisphere and processing level, is important for knowing the learning style; the type and style given depend on time and place.
(e) Learning through exploration and discovery is an important approach; the LOGO language by Papert (1980) is one example of computer-based learning as a means of problem solving by students.

2.3 Learning Approach Based On Contextual
Sears (1999) found that the contextual teaching and learning approach is a concept that helps teachers relate the content of learning to real situations and also motivates students to relate the knowledge gained to real life. Ketter and Arnold (2003), in a case study of the contextual teaching and learning approach, found that students can relate concepts to real life much better. Hardy (2003) found that students are more successful in gaining knowledge and improving their performance. The approach also emphasises practical work, giving students more experience with materials and the hands-on concept.
3 Research Purpose
The purpose of the research can be divided into two components:
(i) Developing a virtual laboratory for the chemistry topics of acid, base and salt for Form 4 science students.
(a) Define a methodology for the virtual laboratory content for the chemistry topics of acid, base and salt.
(b) Create an Instructional Design Model (ID Model) for the virtual reality content of the chemistry subject.
(c) Develop a prototype of the virtual laboratory content based on the constructivism-cognitivism-contextual approach.
(ii) Conducting a case study on the effectiveness of the virtual laboratory among chemistry students at a school in the Alor Gajah district, Melaka.
(a) Test the capability of the virtual laboratory for the chemistry topics of acid, base and salt.
(b) Test the effect of using the virtual laboratory on the science process skills in the chemistry subject.
(c) Test the effect of the virtual laboratory on student achievement based on a pre-test and a post-test (an illustrative analysis sketch follows this list).
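As an illustration of how the pre-test/post-test and between-group comparisons could be analysed (the study itself plans to use SPSS, as noted in the Research Objective section), the sketch below applies a paired t-test within a group and an independent t-test between groups; all marks are invented.

```python
from scipy import stats

pre_exp  = [42, 55, 38, 61, 47, 50, 44, 58]   # experimental group pre-test marks (invented)
post_exp = [58, 70, 52, 75, 60, 66, 59, 73]   # experimental group post-test marks (invented)
post_ctl = [50, 62, 45, 64, 52, 57, 49, 61]   # control group post-test marks (invented)

# Within-group pre/post comparison (paired samples).
t_within, p_within = stats.ttest_rel(pre_exp, post_exp)
# Between-group achievement comparison (independent samples).
t_between, p_between = stats.ttest_ind(post_exp, post_ctl)

print(f"paired t = {t_within:.2f}, p = {p_within:.4f}")
print(f"independent t = {t_between:.2f}, p = {p_between:.4f}")
```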
4 Research Objective
The results will give input to teachers and software designers on the following aspects: suitable teaching with science software, and techniques for using software with the constructivism-cognitivism-contextual approach in learning and teaching. The results will also inform schools on implementing the virtual laboratory, in terms of teacher readiness and the willingness of students to use ICT-based knowledge. Besides that, the research will give policy makers in education a basis for legislating ICT in education and providing suitable facilities for students at the upper level, and give curriculum developers a basis for incorporating the virtual laboratory and ICT in teaching and learning in the laboratory. Overall, the research will provide the following outputs:
(a) Developing a methodology for a virtual laboratory in chemistry education.
(b) Designing an instructional model for a virtual laboratory with the constructivism-cognitivism-contextual approach in the chemistry course.
(c) Developing virtual laboratory modules for the constructivism-cognitivism-contextual approach.
(d) Developing a virtual laboratory prototype.
(e) Defining a methodology for a quasi-experiment to assess the effectiveness of using the virtual laboratory.
(f) Designing and producing a software evaluation checklist analysed using the Statistical Package for the Social Sciences (SPSS).
(g) Designing questionnaires for students and teachers of the chemistry course to determine the difficult topics in the chemistry subject.
(h) Producing data on the effectiveness of the virtual laboratory using the constructivism-cognitivism-contextual approach.
(i) Producing data on how the use of the virtual laboratory differs from the conventional chemistry laboratory.
5 Theoretical Framework Model
The theoretical framework model shows the structured profile, namely Analysis and Design (I), Development (II) and Evaluation (III), for VLab-CHEM. The development model can be seen in Figure 1.1. The framework also includes the research questions and research hypotheses. To achieve the purpose of the research, the main research questions and hypotheses are designed as below:
(a) What methodology is used in developing the virtual laboratory for the chemistry topics of acid, base and salt?
(b) What instructional design model is suitable for increasing cognitive skills using the virtual laboratory for the chemistry topics of acid, base and salt?
(c) What chemistry virtual laboratory is suitable for the instructional design?
(d) Is there any difference in cognitive skill achievement between students using the virtual laboratory and students using a conventional laboratory?
i. Null Hypothesis 1 (Ho1): There is no difference in marks between the pre-test and post-test for the experimental group in the acid, base and salt topic.
ii. Null Hypothesis 2 (Ho2): There is no difference in marks between the pre-test and post-test for the control group in the acid, base and salt topic.
iii. Null Hypothesis 3 (Ho3): There is no difference in achievement between the experimental group using the virtual laboratory based on the constructivism-cognitivism-contextual approach and the control group using the conventional laboratory.
(e) Is there any difference in overall achievement between students using the virtual laboratory for the chemistry topics of acid, base and salt and students using a conventional laboratory?
iv. Null Hypothesis 4 (Ho4): There is no significant difference between pre-test and post-test marks within the experimental group for the acid, base and salt topic.
v. Null Hypothesis 5 (Ho5): There is no significant difference between pre-test and post-test marks within the control group for the acid, base and salt topic.
vi. Null Hypothesis 6 (Ho6): There is no significant difference in overall achievement between students learning through the chemistry virtual laboratory and students learning through the conventional laboratory.
(f) Can the virtual laboratory for chemistry education (acid, base and salt) improve science process skills?
Fig. 1.1. Research theoretical framework model for the development of VLab-CHEM: Analysis and Design (I), Development (II) and Evaluation (III)
6 Constructivism-Cognitivism-Contextual Model Life Cycle (LCC3) VLab-CHEM
The development of VLab-CHEM is designed in the form of a cycle to show the whole package. It is adapted from the KHGK2 model (Roziah Abdullah), which is based on the waterfall model, and provides the inner and outer entities needed for education so that there is expansion in cognitive, affective and psychomotor terms. Five phases are involved in the model: analysis, design, development, implementation and evaluation. Therefore this model is called the constructivism-cognitivism-contextual model life cycle.
Fig. 1.2. Constructivism-Cognitivism-Contextual Model Life Cycle (LCC3) for VLab-CHEM
Fig. 1.3. Constructivism-Cognitivism-Contextual Life Cycle (LCC3) for VLab-CHEM: Analysis phase
Fig. 1.4. Constructivism-Cognitivism-Contextual Life Cycle (LCC3) for VLab-CHEM: Design phase
Fig. 1.5. Constructivism-Cognitivism-Contextual Life Cycle (LCC3) for VLab-CHEM: Development phase
Fig. 1.6. The design of the electronic experiment report
Fig. 1.7. Constructivism-Cognitivism-Contextual Life Cycle (LCC3) for VLab-CHEM: Execution phase
580
Noras iken Bakar and Halimah Badioze Zaman
~~ ~Imm------~---------I ElJolu11im I I I Remrdl. d1JU. I I I I Ihe using of I I ~~ I I I I + rb Impmti
VI..'o-C1Il!M IIJ
14oa..l
II ~. I 8 ~" ::-.,
I;;:",:, III
u,..
I
:
Fig. 1.8. Construetivism-Co gnitivis m-Contextual Life Cyele (Lee 3) for VlabChern: Eval uation Phase
: I
I
-J
7 VLAB-CHEM Model Instructional Design
The ID model for the development of VLab-CHEM for the Form 4 chemistry subject is based on technology integration. The core of the teaching and learning process is based on teaching theory and pedagogy. The purposes of the model are to increase knowledge of acid, base and salt, higher-order thinking skills and scientific skills among chemistry students. The development of the model is based on the educational environment, such as:
(a) Learning outcomes.
(b) Pure values.
(c) Scientific skills and higher-order thinking skills.
(d) Teaching material matched to achievement level.
(e) Holistic development.
(f) A science approach with suitable modules.
Fig. 1.9. VLab-CHEM ID Model Development
Fig. 1.10. VLab-CHEM modules
Fig. 1.11. Modules inside the VLab-CHEM Experiment Lab
8 VLAB-CHEM Lab Architectures
The laboratory architecture was designed in three dimensions and has three sections, as described below.
8.1 Main Hall
The user encounters the main hall first, after entering the VLab-CHEM laboratory through the main door. From this main hall section, the user can enter either the Store Room or the Experiment Room.
8.2 Store Room
The VLab-CHEM store room is divided into two sections:
a. Chemical Equipment. This section contains all the equipment needed during the practical experiments under the topic of Acid, Base and Salt. It is also equipped with relevant images, voice recordings and information about the equipment.
b. Chemical Substances. This section includes all the substances used during the experiments for the related topic, together with relevant images, voice recordings and information.
8.3 Experiment Room
This room is also divided into two sections:
a. Acid and Base Experiment. All experiments and tests for the Acid and Base topic are explained through lectures, videos, animations, graphics, text and audio.
b. Salt Experiment. All experiments conducted by the user are in interactive mode, where the user can choose any substances and equipment to conduct the experiment, and the result of the experiment can be seen realistically.
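As a minimal sketch of how this three-part architecture could be organised as a scene hierarchy, the structure below mirrors the rooms and sections just described; the names and fields are illustrative and are not taken from the actual VLab-CHEM implementation.

```python
# Illustrative scene hierarchy only; not the real VLab-CHEM data model.
VLAB_CHEM = {
    "Main Hall": {"connects_to": ["Store Room", "Experiment Room"]},
    "Store Room": {
        "Chemical Equipment": {"media": ["image", "voice", "information"]},
        "Chemical Substances": {"media": ["image", "voice", "information"]},
    },
    "Experiment Room": {
        "Acid and Base Experiment": {"mode": "lecture/video/animation/graphic/text/audio"},
        "Salt Experiment": {"mode": "interactive (user picks substances and equipment)"},
    },
}

def list_sections(lab):
    """Walk the room hierarchy and print each room with its sections."""
    for room, content in lab.items():
        sections = [k for k in content if k != "connects_to"]
        print(room, "->", sections if sections else content.get("connects_to"))

list_sections(VLAB_CHEM)
```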
Fig. 1.12. VLab-CHEM Plan
Fig. 1.13. VLab-CHEM Lab Architecture (Hall, Store and Experiment Room)
9 VLAB-CHEM Storyboard
According to Jamalludin Harun (2001), most multimedia software development processes follow the process used to develop a film or movie, including the use of a storyboard. The storyboard represents what will be displayed on the screen and how the screens are connected to each other. The storyboard for VLab-CHEM is still at an early stage, as the development so far consists only of the three main rooms and hall described above.
Fig. 1.14. VLab-CHEM Frame 1 Storyboard
Fig. 1.15. VLab-CHEM Frame 2 Storyboard
Fig. 1.16. VLab-CHEM Frame 3 Storyboard
10 Conclusion
This paper has discussed the research methodology, including the VLab-CHEM development methodology, with a detailed description of the phases involved in the development. Through the virtual laboratory methodology, a constructivism-cognitivism-contextual life cycle for the chemistry subject has been produced. For VLab-CHEM, an instructional model has been developed with the constructivism-cognitivism-contextual approach, in which the model focuses on the science approach, learning sequence, holistic development, planning of learning, and teaching and learning through interactive multimedia.
References
Ahmad Fauzi, Norhayati & Tengku Muhammad. (2005). Pembangunan Perisian Kursus Tutorial Multimedia Matematik Kalkulus Menggunakan Pendekatan Pembelajaran Masteri. Konvensyen Teknologi Pendidikan Ke-18. Inovasi Teknologi Instruksional Dalam Pengajaran Dan Pembelajaran: 179-184.
Alessi, S.M. & Trollip, R. (2001). Computer Based Instruction: Methods and Development. 3rd ed. New Jersey: Prentice Hall.
Hajah Norasiken Bakar & Halimah Badioze Zaman. (2005). Analisis Awal Makmal Maya Bagi Pengajaran Kimia (Asid, Base dan Garam). Konvensyen Teknologi Pendidikan Ke-18. Inovasi Teknologi Instruksional Dalam Pengajaran Dan Pembelajaran: 809-816.
Hardy, T. C. (2003). Contextual Teaching In Science. http://www.kennesaw.edu/english/ContextualLearning/2003fBartowrrerallardy.pdf (accessed 25 January 2006).
Jonassen, D.H. (1994). Thinking Technology. Educational Technology 34 (4): 34-37.
Ketter, C. T., & Arnold, J. (2003). Implementing Contextual Teaching and Learning: Case Study Of Nancy, A High School Science Novice Teacher. http://www.coe.uga.edu/ctl/casestudy/Arnold.pdf (accessed 20 February 2006).
Mohd. Arif Ismail, Abdullah Mohd. Sarif, Rosnaini Mahmud. (2000). Pembangunan Perisian Multimedia Interaktif Geografi. Prosiding Konvensyen Teknologi Pendidikan Ke-13. Persatuan Teknologi Pendidikan Malaysia. Edited by Yusup Hashim & Razmah Man.
Oliver, K. M. (2000). Methods for Developing Constructivist Learning On The Web. Educational Technology 40 (6): 5-18.
Papert, S. (1980). Mindstorms: Children, Computers and Powerful Ideas. New York: Basic Books.
Roziah Binti Abdullah. (2004). Pembangunan Dan Keberkesanan Pakej Multimedia Kemahiran Berfikir Bagi Mata Pelajaran Kimia. Ph.D. diss., Universiti Kebangsaan Malaysia.
Sears, S. J. (1999). What Is Contextual Teaching And Learning? http://www.contextual.org/ (accessed 15 December 2005).
Shapiro, B. (1994). What Children Bring To Light: A Constructivist Perspective On Children's Learning In Science. New York: Teachers College Press.
Simonson, M. R. and Thomson, A. (1990). Educational Computing Foundations. Ohio: Merrill Publishing Company.
The 3D Fusion and Visualization of Phototopographic Data
Yang Ming Hui 1, Ren Wei Chun 1, Guan Hong Wei 2, Wang Kai 3
1 Chinese Academy of Surveying and Mapping, China. [email protected] [email protected]
2 China Beijing Si-Wei New Century Information Technology Ltd., China. [email protected]
3 School of Remote Sensing Information Engineering of Wuhan University, China. [email protected]
Key Words: Visualization; 3D Reconstruction; Phototopographic data; Data Fusion; Image Registration
Abstract
First, the concept of the T-collinear equation is explained in this paper. Based on the T-collinear equation, the rules and mathematical model for 3D registration and 3D reconstruction are put forward, and the computation method for 3D registration and 3D reconstruction with two conjugate images is studied. The rule "Sensation-Thinking-Recognition", with which human beings recognize reality, is applied to visualization for understanding the data world. A test prototype is established to realize multiple 3D views and landscape-view-driven feature fusion for map updating and land use change interpretation. Tests in two areas are also undertaken.
1. Introduction
Along with the development of spatial technology, more and more satellite and airborne Earth observation systems are collecting diversified image data of different time phases: multispectral data from optical sensors, and all-weather, multi-waveband, multi-polarization SAR data, used for global surveying and mapping. Accelerating the
interpretation and mapping of such phototopographic data is therefore required, and operational intelligent analysis and object recognition methods are being developed. Phototopographic data means image data of different resolutions and different electromagnetic wavebands (optical and microwave), acquired by airborne or satellite sensors in different time phases, as well as data serving as control datums, such as DEM/DSM and vectorized map data. If phototopographic data sets exist that relate to the same area on the globe surface, they can be 3D registered and fused through absolute and relative orientation into a uniform geometric reference or a uniform radiometric reference. The result of the 3D registration and fusion is that physical attributes such as spectral features (colors), structure and texture are assigned to each object point P with global coordinates (L, B, h), correlative to time.
Fig. 1. The 3D registration and fusion of phototopographic data (satellite, airborne and landscape levels)
Fig. 1 shows the conception of 3D registration and fusion for phototopographic data. For phototopographic mapping purposes, image interpretation is based on the utilization of the full range of phototopographic data: the multispectral image data is rich in spectral (color) information, while the panchromatic image, with a waveband of 0.49-0.69 μm, benefits from high spatial resolution and is rich in structure and texture information from grey-level variation. Compared with optical image data, the multi-waveband, multi-polarization SAR image is rich in structure and texture caused by the roughness and physical structure of the land cover and relief that the radar wave illuminates. The rule "Sensation-Thinking-Recognition", with which human beings recognize objective reality, is applied to visualization to recognize the data world in the research on 3D fusion and visualization. The
establishment of a Visualization Interpretation Station prototype is carried out, which initially realizes multiple 3D views and 3D-view-driven feature fusion for land use change and map updating interpretation.
2. The Explanation of the T-collinear Equation
The T-collinear equation is the collinear equation correlative to time, with the following formula:
X_{ij} = X_{Si} + \Delta x_i + (h_{ij} - Z_{Si})\,\frac{U_{ij}}{W_{ij}}, \qquad Y_{ij} = Y_{Si} + \Delta y_i + (h_{ij} - Z_{Si})\,\frac{V_{ij}}{W_{ij}} \qquad (1)

\begin{pmatrix} U_{ij} \\ V_{ij} \\ W_{ij} \end{pmatrix} = R_{\varphi\omega\kappa} \begin{pmatrix} x_{ij} \\ y_{ij} \\ z_{ij} \end{pmatrix} \qquad (2)

In formula (2), the transformation matrix R_{\varphi\omega\kappa} is a function
of the platform attitude φ_i, ω_i, κ_i at time t_i. In a modern satellite or airborne phototopographic system, the sensor platform is always integrated with an accurate time measurement system and a platform positioning and attitude measurement system; the key of such modern system integration is based on the theory of the time/space relation in modern physics. It can therefore be seen that the first two relations of formula (1) are implicit functions of time t_i, and in general the third relation requires time synchronization to be realized during system integration. The T-collinear equation shows that only at the instantaneous moment t_i are the image point, the projection centre and the object point collinear.
Two images are called conjugate images when they are obtained from different positions in space and have common coverage of the Earth's surface. In this case, for 3D registration and reconstruction with conjugate images, the following rules should hold. According to formula (1):
Rule 1: the coordinates computed in 3D space from the two conjugate images should be equal for two conjugate image points:
(X_{ij}) = (X_{ij})', \qquad (Y_{ij}) = (Y_{ij})'
where (X_{ij}), (Y_{ij}) are computed from one image and (X_{ij})', (Y_{ij})' from the other.
Rule 2: if DEM data, DSM data or reference points (also called control points) exist in the common coverage, then according to formula (1) the coordinates computed in 3D space from the two conjugate images should equal the corresponding values from the DEM, DSM or reference point:
(X_{ik}) = X_k, \quad (Y_{ik}) = Y_k; \qquad (X_{ik})' = X_k, \quad (Y_{ik})' = Y_k; \qquad (Z_{ik}) = Z_k
where X_k, Y_k, Z_k are the coordinates of the DEM, DSM or reference point.
The 3D registration following Rule 1 and Rule 2 can be carried out for conjugate images collected from different sensors, with different time phases, different resolutions and different electromagnetic wavebands.
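The following numerical sketch evaluates formulas (1) and (2) for a single image measurement, projecting it to ground coordinates at object height h given the projection centre and the platform attitude at time t_i. The rotation-axis order and the use of (x, y, -f) as the image-space vector are assumptions made for this example, since the paper does not spell them out.

```python
import numpy as np

def rotation_phi_omega_kappa(phi, omega, kappa):
    """Attitude rotation matrix R_phi-omega-kappa (one common axis convention)."""
    Rp = np.array([[ np.cos(phi), 0, -np.sin(phi)],
                   [ 0,           1,  0          ],
                   [ np.sin(phi), 0,  np.cos(phi)]])
    Ro = np.array([[1, 0,              0             ],
                   [0, np.cos(omega), -np.sin(omega)],
                   [0, np.sin(omega),  np.cos(omega)]])
    Rk = np.array([[np.cos(kappa), -np.sin(kappa), 0],
                   [np.sin(kappa),  np.cos(kappa), 0],
                   [0,              0,             1]])
    return Rp @ Ro @ Rk

def ground_from_image(x, y, f, h, Xs, Ys, Zs, phi, omega, kappa, dx=0.0, dy=0.0):
    """Evaluate equations (1)-(2) for one image point; z_ij = -f is assumed."""
    U, V, W = rotation_phi_omega_kappa(phi, omega, kappa) @ np.array([x, y, -f])
    X = Xs + dx + (h - Zs) * U / W
    Y = Ys + dy + (h - Zs) * V / W
    return X, Y

# Hypothetical numbers: a nadir-looking frame, 0.15 m focal length, ~3000 m above ground.
print(ground_from_image(x=0.01, y=-0.02, f=0.15, h=850.0,
                        Xs=500000.0, Ys=4400000.0, Zs=3850.0,
                        phi=0.0, omega=0.0, kappa=0.0))
```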
3. The 3D Registration and Reconstruction of Phototopographic Data
Two conjugate phototopographic images covering the same area can be registered, and the 3D geometry and perception can be reconstructed. The computation method and results are presented below for conjugate phototopographic images acquired with different sensors and in different modes.
(1) The registration of forward/backward vision images. In the same orbit, SPOT5 HRS collects an image couple with 90 seconds of time separation in forward/backward vision mode. For 3D registration, all observation image data, satellite ephemeris and attitude data, DEM data and ground control points should be transformed to a uniform 3D spatial reference.
Fig. 2A. SPOT5 HRS forward/backward vision
Based on the T-collinear equation and Rules 1 and 2, the observation equations and the conditional or error equations are established and solved by the least squares method for the 3D registration. After 3D registration, the HRS couple is used for 3D reconstruction.
(2) 3D registration of conjugate images from SPOT5/SPOT4. The conjugate images are from SPOT4 and SPOT5, in different times and orbits: a SPOT4 HRVIR multispectral image with a lateral viewing angle of 26.4°, viewing from east to west in November 2004, and a SPOT5 HRG multispectral image with a lateral viewing angle of 15.7°, viewing the same test area from east to west in May 2004. Using a similar computation method, the conjugate image couple is 3D registered, and the registered couple is used for 3D reconstruction.
Fig. 3A. 3D registration of conjugate images from SPOT5 and SPOT4
Fig. 3B. Conjugate images after 3D registration and fusion
(3) The 3D registration of neighboring strips of airborne SAR. The conjugate images come from neighboring strips with 60% coverage. The airborne SAR flies from east to west, and the INS (Inertial Navigation System) on board keeps the SAR view directed to the north. The 3D registration can be computed with a method similar to that for satellite images, and the registered conjugate images are used for 3D reconstruction.
Fig. 4A. 3D registration of neighboring image strips of airborne SAR
Fig. 4B. Conjugate images of airborne SAR after registration
4. The 3D Fusion and Visualization of Phototopographic Data
In the procedure of 3D fusion and visualization of phototopographic data, the following effects can be expected:
- Multiple 3D views of the landscape can be shown, identified and compared by the phototopographic data interpreter.
- The main 3D view drives the feature data fusion for phototopographic data interpretation.
- The procedure visualizes the reasoning of the data fusion for phototopographic data interpretation.
The 3D fusion and visualization of phototopographic image data allows the geometric form and structure of objects on the globe surface to be observed with different projections and different viewing angles. By making use of the spectral features (colors) of multispectral images, as well as the structure and texture provided by the multi-waveband, multi-polarization SAR image, the classification of land use and vegetation coverage can be much improved; moreover, 3D fusion of multiple landscape views raises the equivalent spatial resolution and electromagnetic waveband resolution of the phototopographic data. To research how the 3D fusion and visualization of phototopographic data can be utilized in interpretation for land use change detection and map updating, a prototype Phototopography Visualization Interpretation Station is being established to realize the 3D multi-view and main-view-driven feature fusion.
Fig. 5. The prototype of the phototopography interpretation station
5. Conclusion
• In this paper the concept of the T-collinear equation is explained, and based on the T-collinear equation the rules and mathematical model are put forward for 3D registration and reconstruction.
• The 3D registration and fusion of phototopographic data is studied for different cases. The test results show that both satellite and airborne phototopographic image data can be 3D registered and 3D reconstructed. The 3D registration of the SPOT4 HRVIR multispectral image and the SPOT5 HRG multispectral image is realized, and landscape models for different seasons are reconstructed for land use change interpretation.
• A test prototype of the Phototopography Visualization Interpretation Station is established to realize multi-3D-view fusion and 3D-view-driven feature data fusion for map updating and land use change interpretation.
Integrating a Computational Fluid Dynamics Simulation and Visualization with a 3D Virtual Walkthrough - A Case Study of Putrajaya
Puteri Shireen Jahnkassim 1, Maisarah Ali 1, NoorHanita Abdul Majid 2 and Mansor Ibrahim 3
1 Department of Building Technology and Engineering, Kulliyyah of Architecture & Environmental Design, IIUM, Malaysia
2 Department of Architecture, Kulliyyah of Architecture & Env. Design, IIUM, Malaysia
3 Department of Urban and Regional Planning, Kulliyyah of Architecture and Environmental Design, IIUM, Malaysia
Abstract
This paper discusses the possibilities of applying Virtual Reality techniques in the analysis and presentation of the results and visualisation output of a computational fluid dynamics simulation study of an urban master plan proposal, with a focus on semi-outdoor spaces in selected areas. The environmental performance analysis was conducted to make an assessment in terms of air flow, with the aim of enhancing thermal comfort and predicting the outcome of extreme wind events in the locality. This includes the investigation of wind-tunneling phenomena linked to pedestrian discomfort. The numerical and visual output of the simulation was used as a framework to integrate environmental performance data with a walkthrough of pedestrian walkways in selected areas of Precinct 4 of the Putrajaya Master Plan. Hence the paper covers two aspects: the results of the Computational Fluid Dynamics (CFD) analysis, and an example of how the data or graphical output is integrated with a virtual walkthrough of the selected 'parcel' or area.
1. Introduction
Virtual Reality (VR), which has received an enormous amount of publicity over the past few years, is dynamic, interactive and experiential. Its
capabilities can not only simulate real environments with various degrees of realism, but also assist in visualizing data so that better design decisions can be made, in line with the sustainable approach. Converging developments in simple non-immersive virtual reality techniques suggest that the extensive output from state-of-the-art environmental prediction and simulation tools can be better presented and understood through a VR environment. Past studies have looked into integrating more dynamic aspects of visualization with environmental data gathered from simulations and calculations of the environmental performance of buildings and spaces. Follut and Grolea (2000) investigated how aspects of virtual reality can be used to present the environmental characteristics of an urban space, where environmental simulation results concerning light, heat and sound are integrated in a virtual form to enhance the visualization of the space and demonstrate how a human being will perceive and experience the urban design. Primikiri and Harris (2002) studied how visualized computational fluid dynamics data can be integrated in a virtual cave environment, where the dynamic experience of CFD data helps to enhance the understanding of thermal environments and can assist in design decision making. Environmental simulation tools generally generate results that are both numerical and graphical but lack the sequence in which human beings experience environments, a sequence which influences the impact of environmental parameters and their perception. Virtual reality (VR) enables the environmental data to be presented in a way where spaces can be visualised in sequence, linking the environmental data with sensations arising from the external environment. Hence the aim is not only to visualize the 'data', but also to assist in experiencing the thermal or environmental character of an urban context which the user can enter and move around freely, with environmental data visible at any particular point in urban space. Better representation of the simulation can be achieved by visualising Computational Fluid Dynamics (CFD) and environmental data in a virtual environment; the aim is to enhance a dynamic visual experience of the environmental data.
2. Aims of Study
The aims of the study are focused on:
1. to assess comfort conditions related to urban design, with a focus on semi-outdoor spaces, in terms of environmental parameters linked to human comfort such as airflow, through state-of-the-art environmental simulation tools;
2. to validate the simulation data gathered from (1) through field measurement (not within the scope of this paper);
3. to identify the limitations of environmental prediction tools such as CFD (computational fluid dynamics) in terms of terrain modelling and urban massing;
4. to integrate the environmental data in terms of an animated walkthrough sequence and, later, interactive 3D real-time visualisation.
This project aims to implement the use of virtual reality techniques, particularly 3D visualisation, in order to present the data gathered from environmental simulations and field analysis when applied to a large-scale urban design proposal. The focus is on urban design issues related to human comfort and the experience of urban spaces, particularly in hot humid climates such as Malaysia's. In such a climate, air flow, heat gain and radiation are critical issues that have to be assessed through numerical simulation and field work. Microclimate studies are commonly required for new projects, especially urban planning that involves the development of a sizeable area within a city. The focus is on the prediction of thermal comfort conditions for pedestrians at naturally ventilated plazas and outdoor spaces, where plazas are divided into typologies of open plazas, plazas surrounded by tall buildings, and covered plazas. Although the relationship between the form of a city and its climate has been intuitively understood, predictions of how specific future buildings will affect climate conditions are important, as these will affect the comfort of pedestrians on sidewalks and in public open spaces. A combination of experimental and computational techniques is necessary to make comfort predictions. The thermal conditions that affect the physiological and psychological well-being of a walking person are generally affected by six variables: solar radiation, which affects the human thermoregulatory system; wind, through which an exposed human body exchanges heat by convection; two other climate variables, humidity and ambient air temperature; and people's activity levels and their clothing. Wind is focused on here as it is a highly variable natural phenomenon. At present, one can only predict the probability of wind events from long-term records, i.e. the likelihood that a minimum wind speed from a particular direction, needed for thermal comfort, will be equaled or exceeded. Extreme wind events or 'gusts' are also of interest, as wind acceleration causes pedestrian discomfort and is a common occurrence at the base of tall buildings. In certain cases, protection from wind, if combined with shade, will be enough to ensure comfortable spaces; however, shade combined with a light breeze is ideal, particularly in hot and humid climates. Both the magnitude and
direction of wind are of interest, and these have to be associated with the magnitude of airflow within the open plazas or naturally ventilated indoor plazas.
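As noted above, long-term wind records can be used to estimate the likelihood that a minimum wind speed from a particular direction is equalled or exceeded. A minimal sketch of such an exceedance estimate is given below; the record structure, threshold and sample values are illustrative assumptions and not taken from the study.

#include <vector>
#include <string>
#include <iostream>

// One entry of a long-term wind record (illustrative structure).
struct WindRecord {
    double speed;          // wind speed in m/s
    std::string direction; // compass sector, e.g. "N", "NE", "S"
};

// Fraction of records in which the wind blows from the given sector
// at or above the given minimum speed (simple exceedance probability).
double exceedanceProbability(const std::vector<WindRecord>& records,
                             const std::string& sector, double minSpeed) {
    if (records.empty()) return 0.0;
    int hits = 0;
    for (const WindRecord& r : records)
        if (r.direction == sector && r.speed >= minSpeed) ++hits;
    return static_cast<double>(hits) / records.size();
}

int main() {
    // Hypothetical records; a real study would use decades of station data.
    std::vector<WindRecord> records = {
        {0.8, "N"}, {1.6, "N"}, {2.4, "S"}, {0.0, "N"}, {5.6, "W"}
    };
    std::cout << "P(southerly wind >= 1.32 m/s) = "
              << exceedanceProbability(records, "S", 1.32) << std::endl;
    return 0;
}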
3. Methodology
The analysis took two stages during its development: • generating data from the computational fluid dynamics analysis; • visualization in a 3D environment. The modeling and simulation of the urban microclimate was carried out using the IES (Integrated Environmental Solutions) MicroFlo CFD subprogram, which is known for the extent of its validity in reproducing real settings. CFD modeling adds to the accuracy of the simulation as it represents the fluid flow problem by mathematical equations and fundamental laws of physics. By solving these equations, it is able to predict the variation of the relevant variables within the flow field, such as velocity, pressure and temperature. Firstly, the geometry of the buildings within the space has to be abstracted and modeled. The detail and extent of the computational grid determine the simulation time. The number and size of the grid cells represent the level of resolution that the calculation can achieve, i.e. the larger the number of cells and the smaller their size, the higher the accuracy of the solution. In this initial part of the study, the location and magnitude of heat sources and their rate of heat transfer, including the incidence of solar radiation, were not taken into account in the simulation runs. The results were then post-processed: outputs in the form of vertical and horizontal slices were read in terms of magnitude and direction of airflow and later integrated into the 3D visualization. For the walkthrough, a 3D model was built in 3D modelling software, where camera points were placed to generate a 3D video. Both the 3D animation and the collected data were then arranged in a layout for the final product. This analysis was prepared as non-immersive VR, but for future studies the use of an interface with a controllable gaming engine will allow us to actually "be" inside the city. The study shows that, in addition to the 3D modeling of urban layouts, wind velocity and direction and, later, temperature can be integrated in a virtual sequence and represent useful data to describe the urban environment.
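The figures later in the paper report wind conditions both at a 20 m reference height and at pedestrian (body) level. The inlet profile actually used by MicroFlo is not documented here, but atmospheric boundary-layer inflow is commonly described by a power-law profile; the sketch below, with an assumed terrain exponent and illustrative values, shows how a reference speed can be scaled to another height.

#include <cmath>
#include <iostream>

// Power-law atmospheric boundary-layer profile:
// u(z) = uRef * (z / zRef)^alpha, where alpha depends on terrain roughness.
// The exponent 0.28 is a typical value quoted for urban terrain and is an
// assumption, not a parameter taken from the study.
double windSpeedAtHeight(double uRef, double zRef, double z, double alpha = 0.28) {
    return uRef * std::pow(z / zRef, alpha);
}

int main() {
    double uRef = 1.32;  // illustrative reference speed (m/s)
    double zRef = 20.0;  // reference height (m)
    double zBody = 1.5;  // approximate pedestrian body level (m)
    std::cout << "Estimated speed at body level: "
              << windSpeedAtHeight(uRef, zRef, zBody) << " m/s" << std::endl;
    return 0;
}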
Fig. 1. The simulation process (environmental analysis and data integration)
4. Urban context - description and features
Putrajaya, covering an area of 4,931 hectares of land, is situated 25 kilometres from the capital city of Kuala Lumpur to the north and 20 kilometres from Kuala Lumpur International Airport (KLIA) at Sepang to the south. Putrajaya is the new Administrative Centre of the Federal Government of Malaysia. Situated within the Multimedia Super Corridor (MSC), the development of Putrajaya marks a new chapter in the history of modern city planning in Malaysia. It is set to be a model garden city with a sophisticated information network based on multimedia technologies. Architecturally, Putrajaya will be an indigenous city with a modern look, and about 40% of Putrajaya is lush natural greenery enhanced by large bodies of water and wetlands. The study focuses on the urban space at Dataran Gemilang, Presint 4. Dataran Gemilang is a circular plaza which marks the end of the southern portion of Putrajaya Boulevard. The round-shaped open space
plaza will be turned into a park surrounded by high-rise buildings, with lower buildings behind them (see Figures 2-5).
Fig. 2. Location of the study
Fig. 3. Dataran Gemilang and the 4G11 building
Fig. 4. Bird's-eye view of Dataran Gemilang
Fig. 5 Selected Towers in Dataran Gemilang
4.1. Simulation parameters
The simulation parameters were based on meteorological data obtained from the Subang weather station. Figure 6 shows the percentage occurrence of wind by speed and direction. The data are averages taken over 1975-2003. From this figure it can be concluded that most of the low-velocity wind comes from the North; however, the stronger winds (between 5.5 and 7.9 m/s) come from the West and South (including SW and NW).
Figure 7 shows the percentage occurrence of wind by direction. From this figure it is found that most of the winds come from the North, followed by the North-West and South directions. However, 36.4% of the time the conditions are calm (no wind).
Fig. 7. Percentage occurrence and wind direction (annual); calm conditions = 36.4%
5. Results - Computational Fluid Dynamics Analysis
Figures 8-15 show the results of the CFD analysis. Figures 8-10 show the wind distribution at body level for winds originating from the South, while Figure 11 shows the impact of winds from the North. These figures also show the distribution of wind between the buildings surrounding the circular plaza. The 'red' spots indicate potential tunnelling effects or accelerated winds caused by the buildings along Putrajaya Boulevard. From Figure 11, for example, the results suggest some wind-tunnelling effect through the boulevard (in the case of north winds). Some 'venturi' effect can be observed at the circled area in Figure 11. Figure 12 shows the wind distribution in a vertical slice for the North wind. Figures 13-15 show close-up views of the wind and pressure distribution around a selected building, the 4G11 tower. The results show that when wind flows around a high-rise structure, three distinct regions develop: 1. a positive pressure region on the upstream face; 2. negative pressure zones at the upstream corners; 3. a negative pressure zone on the downstream face. Basically there is a positive pressure on the windward face and suction, or negative pressure, on the non-windward faces and roof. Figures 12-14 demonstrate how wind, when acting on a square or rectangular building,
results in a positive (inward-acting) load on the windward face; the other sides, particularly the leeward face, are subjected to negative (outward-acting) or suction loads. The pressure distribution can be even more critical when the wind direction is at an angle to the building face. Figure 13 shows how regions of negative pressure or suction develop on both the vertical and horizontal faces of the non-windward regions of the building. The negative pressure regions are particularly present in the higher regions of the tower face due to the vortex phenomenon (Figure 14). The results again demonstrate the negative-positive pressure difference between the windward and leeward sides of the tower and the areas of vortex created in the regions between buildings; they also indicate how the separation layers can generate vortices which are shed into the wake flow behind the 'bluff body' or building structure. Such vortices have been known to cause extremely high suctions near separation points such as corners and eaves.
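The paper reports these pressure regions qualitatively; as a point of reference (this normalisation is standard bluff-body aerodynamics rather than something taken from the study itself), surface pressures of this kind are commonly expressed as a dimensionless pressure coefficient,

$$ C_p \;=\; \frac{p - p_{\infty}}{\tfrac{1}{2}\,\rho\,U_{\infty}^{2}}, $$

where p is the local surface pressure, p_inf and U_inf are the approach-flow pressure and speed, and rho is the air density; C_p is positive on the windward face and negative in the suction regions at the corners, roof and leeward face.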
Fig. 8. South Wind at initial speed of 1.32 m/s, conditions at 20m
Fig. 9 South Wind at initial speed of 1.32 m/s, conditions at body level
Fig. 10. South Wind at initial speed of 1.32 m/s, conditions at body level
Fig. 11. North Wind at initial speed of 1.32 m/s, conditions at body level
Fig. 12. Wind from the North (vertical slicing at 4G11 tower)
Fig. 13. Wind from the North at 4G11 tower (vertical slicing)
Fig. 14. Wind from the South (vertical slicing to show wake region on the north face)
Fig. 15. Close up view of south wind hitting the building facade and canopy and entering the open indoor plaza
5.1 The VR walkthrough
It is contended that the dynamic experience of virtual reality can further enhance understanding of the environmental performance of urban spaces and improve the assessment of alternative design features and options. Modeled with 3D MAX, the buildings of Presint 4, Putrajaya, were constructed using primitive polygons. Material textures were then mapped onto the objects to get a realistic look. Lighting was set up using ray-tracing and anti-aliasing rendering functions, which took a long time even for a preview rendering. Though MAX and VIZ have the capability to apply a "sky-light", "direct-spot" and "omni" lights were used throughout the simulation, keeping in mind that, to be sufficiently detailed, the quality of the final rendered simulation must be, if not identical, at least similar to real conditions at 2 PM in the afternoon, which is assumed to be the hottest time of the day. By gathering all the necessary data from the simulation tool, an ad-hoc storyboard can be conceptualized. Due to software limitations, the walkthrough can only move along a fixed sequence, which guides the flow of the walkthrough. The synchronized data are displayed and denoted with "gauges". These gauges represent the output or data generated by the computational fluid dynamics analysis. One example is shown in Figure 16. As the 'user' walks through the spaces in the VR, the line graphs on the gauges change to indicate the variations in environmental parameters such as light, air speed and room temperature. In this paper, these variations represent a typical day in a year. They can also be viewed against a 'comfort' band to give an indication of whether or not the parameter viewed lies within the comfort range pertaining to the specific climatic context of the building. Typically these gauges can be in the form of 'bands' where an approximate comfort band is indicated (an example is shown in Figure 16); however, in the VR walkthrough these 'gauges' are integrated with the walkthrough animation in order to assist in the understanding of the thermal environment and the user's experience of comfort as they walk through the outdoor spaces. These data were integrated into the simulation with the use of video editing software, applying the data in a frame-by-frame sequence (see the animation screenshot in Figure 17).
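The paper does not describe the frame-by-frame data overlay in code, but the basic idea of synchronising a daily time series with animation frames can be sketched as follows; the frame rate, sampling interval and data values are illustrative assumptions rather than the values used in the study.

#include <vector>
#include <iostream>

// Hourly values of an environmental parameter for a typical day
// (e.g. air temperature in deg C); the numbers below are illustrative only.
const std::vector<double> hourlyValues = {
    25, 24.5, 24, 24, 24.5, 25, 26, 27, 29, 30.5, 31.5, 32.5,
    33.5, 34, 33.5, 33, 32, 31, 30, 29, 28, 27, 26, 25.5
};

// Linearly interpolate the gauge value for a given animation frame,
// assuming the animation spans the full 24-hour day.
double gaugeValue(int frame, int totalFrames) {
    double hour = 24.0 * frame / totalFrames;   // time of day for this frame
    int i = static_cast<int>(hour) % 24;        // lower hourly sample
    int j = (i + 1) % 24;                       // upper sample (wraps at midnight)
    double t = hour - static_cast<int>(hour);   // interpolation weight
    return hourlyValues[i] * (1.0 - t) + hourlyValues[j] * t;
}

int main() {
    const int totalFrames = 24 * 25;            // e.g. 25 frames per simulated hour
    for (int f = 0; f < totalFrames; f += 100)
        std::cout << "frame " << f << ": " << gaugeValue(f, totalFrames) << std::endl;
    return 0;
}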
Fig. 16. An example of a 'moving' temperature "gauge": the current temperature curve is plotted over the day against a temperature curve from another room, with a comfort band that moves up/down with regard to the targeted time of concern, i.e. 2 PM and the 28-30 °C comfort zone
Fig. 17. Animation screenshots (Putrajaya) - the walkthrough interface
6. Conclusions
It is known that conditions at ground level in cities can be highly affected by the presence of tall buildings close to each other. The shading impact of such tall buildings is relatively well known; however, the impact of wind in the tropics has been less studied. Hence, in this study, a computational fluid dynamics simulation tool is used to assess the conditions in spaces such as plazas and open areas adjacent to high-rises. Particularly on days with extreme wind events, wind will frequently accelerate between high-rise towers and is deflected downwards towards the sidewalks and open spaces. This accelerated wind speed can exert a mechanical force on pedestrians. Part of the wind is deflected downwards and forms a vortex, some sweeps the ground in a reverse flow, while another part is accelerated around building corners and forms jets that sweep the ground near facades. The prediction of wind provides one of the six variables that affect thermal comfort for people outdoors. At a particular open-space location, the occurrence of all four climate variables can be predicted from weather records, from calculations of the shade produced by buildings at specific times, and from wind studies. For example, a prediction can be made as to whether a section of sidewalk or open space is likely to be comfortable at a given day and time, from knowledge of the statistically typical temperature and humidity, whether the sidewalk is in the shade, and the extent to which winds over that section are accelerated or decelerated. VR can integrate such computation to indicate, for example, the thermal comfort of a person in a business suit leisurely strolling along a sidewalk at lunch time. The thermal comfort assessment in this simulation is more inclined towards the contribution of air velocity to the open plaza. Figure 16 showed that the temperature increases from sunrise at approximately 7:00 am and peaks at around 2:00 pm. At this time of the day the temperature is beyond the comfort zone of 28-30 °C. However, the conditions at the open plaza can be manipulated by providing shading from the tall buildings adjacent to the plaza. Previous researchers have shown that shaded areas combined with an air velocity of approximately 2 m/s will contribute towards thermal comfort at the pedestrian level of open plazas. This paper focuses on the integration of virtual reality with computer-based tools for predicting and assessing building performance with respect to environmental impact criteria for the design of green buildings. Environmental simulation tools generate a vast amount of data which are both numerical and graphical. VR enables the organization and presentation of the data based on the sequence in which human beings experience these urban
environments, a sequence which actually influences the impact of environmental parameters on human psychology and perception through time. The paper puts forward a method for integrating more dynamic aspects of visualization with environmental data gathered from simulations and calculations of the environmental performance of buildings and outdoor spaces. The study proposes the concept of a "VR tool" into which architects and designers can simply "add" the graphs derived from environmental analysis, in order to generate an interface that can enhance environmental understanding in a way that could not be realized by conventional systems. The VR tool can increase the capability of architects and planners to arrive at more environmentally responsive designs, making the conceptual design stage more technically informed and allowing ideas to be delivered more concisely, understandably and effectively. This paper thus offers a means of overcoming the 'inaccessibility' and 'incomprehensibility' of detailed environmental simulation output, so that better design decisions can be made based on the numerical data. The internal environment can be experienced against the external environments and in sequence, where features can be tested and numerical data visualized - this provides a method of visualizing indoor climate, organizing data and navigating virtually through a 3D environment. The visualization and dynamic presentation of thermal and environmental data can assist in understanding the properties of buildings as they occur in time and space. The aim is to feed the data from the simulation engine into an environment linked to tools that are increasingly being used by architects and urban planners.
References
ALI MALKAWI and ELENI PRIMIKIRI, (2002) Visualizing Building Performance in a Multi-User Virtual Environment, In Proceedings of the ACSA International Conference, August, Havana, Cuba. FOLLUT, DOMINIQUE, GROLEAU D., (2000) 'Ambianscope' - An immersion method to analyze urban environments, Proceedings of The Greenwich 2000 International Symposium on Digital Creativity, 13-15 January, The University of Greenwich, United Kingdom; JAHNKASSIM, P.S., IP, KENNETH, (2000) "Environmental and architectural impacts of bioclimatic highrises in a tropical climate" ; Proceedings of the Sustainable Building 2000 International conference. 22-25 October, Maastricht, Netherlands.
LAM, K.P., WONG, N.H. AND CHANDRA, S., (2001). The use of multiple building performance simulation tools during the design process - a case study in Singapore, Proceedings of the Seventh International IBPSA Conference, Rio de Janeiro, Brazil, August 13-15. LIM, BILL, (1994). 'Natural ventilation and air movement in tropical highrise buildings', Environmental Design Criteria of Tall Buildings, Lehigh University, Bethlehem, Pennsylvania, USA. POTVIN, ANDRE, (2000). Assessing the microclimate of urban transitional spaces, Proceedings of Architecture, City and Environment, PLEA 2000, Cambridge, UK, July, pp. 581-586. PRIMIKIRI, ELENI, HARRIS, JON, (2002). Visualising post-processed CFD data in the CAVE Automated Virtual Environment, Proceedings of the 36th Annual Australian and New Zealand Architectural Science Association Conference, November, Geelong, Australia. SAPIAN, A.R., MADROS, N.H., AND AHMAD, M.H. (2002). "Computer Simulation: An Alternative Method in Studying Natural Ventilation at High-Rise Building Due to Urban Wind Condition". Integrated Energy Building Design Seminar. Organized by Centre for Education, Universiti Teknologi Malaysia. STEEMERS, KOEN AND RATTI, CARLO, (1999). 'Informing Urban Bioclimatic Design', in Proceedings of the 17th EAAE International Conference, 'The Teaching of Architecture as a Multi-disciplinary Practice', University of Plymouth, 1999.
A Geospatial Approach to Managing Public Housing on Superlots Jack Barton and Jim Plume City Futures Research Centre, Faculty of the Built Environment, University of New South Wales, Sydney 2052, Australia
Abstract This paper outlines how an object-oriented geospatial approach to Australian public housing management may help solve a core problem currently faced by the NSW Department of Housing (DoH) - locating individual tenancies within large, unconsolidated cadastral units, or Superlots. The concentration of high-rise housing built upon these lots has exposed the limitations of the standard cadastral data structures and two-dimensional systems currently in use within the DoH. In this paper, we explore the capacity for Building Information Models (BIMs) based on Industry Foundation Classes (IFC) to create semantically rich, yet minimal, representations of individual buildings and tenancies located within the Superlot. This schema provides a methodology to move beyond the ubiquitous land parcel with a visualisation system that spans from a broad, urban scale to that of an individual building within a Superlot, and progressively into the individual tenancies. This forms the underlying structure for our Spatial Decision Support System (SDSS) to perform operations ranging from simple queries to more advanced analysis and decision support for Public Housing areas.
Introduction
This paper forms part of an on-going research project to investigate the potential of Spatial Decision Support Systems (SDSS) to assist in the management of Public Housing. The NSW Department of Housing (DoH) in Australia faces a sizable challenge in managing its physical assets in relation to their tenancies. Vast amounts of asset and client data exist within the DoH and, in particular, the geographic nature of this data lends itself to using standard geospatial tools to both manage and visualise it.
As the potential of these systems is being exploited, not only are more powerful tools being employed, but certain gaps in the data structure are also becoming apparent - in particular, the difficulty of using existing GIS tools to handle land parcels that are broken down into many tenancies, especially where those are arranged in high-rise apartments. Within the context of the DoH, these are commonly referred to as Superlots and we adopt that term for the purpose of this paper. The paper begins by reviewing the nature of this problem, starting with an outline of the context within the DoH and describing the specific problems that have been identified. We then go on to discuss a proposed approach that is to be implemented in prototype form during 2006, making use of emerging object database technology to facilitate the management of large, complex geospatial datasets.
Managing Public Housing in NSW As one of the world's largest Public Housing providers, DoH is responsible for both the tenancy management and residential property management of some $A28.5 billion worth of assets consisting of over 130,000 Public Housing tenancies in NSW. The tasks undertaken by staff within the department may generally be divided into three categories: client, asset and business management. However, the distinction between these categories is blurred as DoH seeks to match the dwellings it provides with the needs of its clients in an optimal way. Furthermore, DoH is intricately involved in all levels of the asset lifecycle, encompassing: • the development of new properties; • maintaining and redeveloping existing estates; • strategically disposing of individual lots, and; • acquiring and leasing stock from the private market. In an initial audit of existing DoH systems, it was found that the department operates with three separate and distinct major databases: an asset database recording details of all buildings and properties owned by the department; a client database with details of tenants, their current status in the system, contact details and financial standing; and drawing records of all buildings owned by the DoH. One fundamental problem that was identified is the lack of connection between the asset data and the client data, with the result that all planning tends to be based on analyses of these separate datasets rather than any attempt to correlate the results in any rigorous way.
Many complex and varied indicators are used to measure performance in and around the two areas of assets and clients. Much of the work undertaken by DoH staff is concerned with identifying and communicating these indicators and organising appropriate responses. This process is closely related to the geography of places and their people. Some basic queries DoH decision-makers might have include [Barton 2003]: • Where are the assets? What are they? How many of them are over what area (density)? • What is the social and physical environment and its context (diversity)? • How many properties is DoH developing? • Are the goals of DoH being met relative to pre-determined criteria? • How well are the assets in one area being used relative to other areas? • Should DoH redevelop an area of existing stock or sell? • Can DoH leave existing tenant groups with minor tweaking and nonphysical improvements? In considering these questions, one must assess quantitative information about the physical assets in conjunction with the relevant client data and site-specific qualitative issues. These questions revolve around the Public Housing contained within a specific area, and a common unit of measurement is the individual tenancy. An accepted approach to dealing with these types of issues is to develop a Spatial Decision Support System (SDSS) and that has been the overall goal of this current research project. An SDSS has been described as an "interactive, computer-based system designed to support a user or group of users in achieving a higher effectiveness of decision-making while solving a semi-structured spatial decision problem" [Malczewski 1997]. Characteristics of spatial decision problems include a structured part of the problem that is amenable to automated solutions by the use of a computer, and unstructured aspects to be tackled by human decision-makers. In this paper, we are looking at improving the spatial organisation of the structured elements of the problem. For an SDSS to be a functional tool that assists in solving the spatial problems DoH is facing, it must: • facilitate use of detailed datasets from across a diverse range of providers that are able to be linked and geo-referenced; • have a system for storing, retrieving, visualising and interrogating data; • have the ability to illustrate data in a three-dimensional context; • assist in the sharing of knowledge and improve continuity in projects.
The Problem of Data Structure
In developing an SDSS for the management of Public Housing, the hierarchy of data, information and knowledge serves as a convenient framework to organise the elements with which we are dealing. An SDSS is concerned with spatial knowledge: in order to support decisions, it collates many individual sources of spatial information and presents these to the decision-maker(s) for consideration. Good information relies on a solid foundation of well-structured and semantically rich (geo)spatial data, describing where individual buildings are in relation to each other and what is inside their envelopes. This provides a capacity for analysis that ranges from a broad urban scale right down to specific individual building components. This range of scales is the focus of concern for DoH decision-makers and the digital tools in use at DoH must be developed to present this information as clearly as possible. In our SDSS research and development, we have recognised that the data foundation must be as solid as possible to improve the fidelity of these instruments at both the information and knowledge levels. Whilst all the systems in use at DoH relate to specific geographic areas, datasets have typically been engineered around either assets or clients, and decision-makers must manually sift out the information in which they are interested. Asset and client databases are not directly connected, and the basic digital assets consist of disaggregated architectural drawings, geographic information and tabular datasets from legacy database systems. These all revolve around the basic tenancy and the occupant(s) within it. CAD drawings are used for detailed building management: this data generally consists of descriptive, yet isolated, site plans. Geographic Information Systems (GIS) are employed for urban to neighbourhood level analysis, but do not directly link to any CAD data, besides tabular datasets. Resolving this disconnection between the tools may also hold the answer to one of DoH's ongoing problems: that of locating properties within Superlots. DoH has been left with the legacy of Superlots from past planning practices. The GIS is constructed around the Digital Cadastral Database (DCDB), where the finest unit of resolution is the lot itself. This project treats that as an opportunity to explore ways of more accurately defining the Superlots through use of technologies that bring together both geospatial and building component data (this aspect will be elaborated further in the following section). Another problem is that most DoH datasets work on a two-dimensional representation of a three-dimensional workspace. A recent trend, particularly in city planning, is to develop accurate three-dimensional city models that collect cadastral, topographical and
CAD data and combine these with aerial imagery to accurately describe the basic shape and location of objects. These systems rationalise many different types of geospatial data and fluidly present the resulting information to experts and non-experts alike. The rich interactive and kinetic experience offered by these systems supports visual decision-making, and substantial effort is now being directed toward developing the analytical capacities of these systems. In 2000, the Centre for Advanced Spatial Analysis (CASA) conducted a comprehensive survey and reviewed over 60 such models [Batty et al 2000]. The diversity of models illustrated the plurality of techniques and approaches to three-dimensional city modelling in use but showed that, while they begin to address the fundamental need for integrated information by permitting visualisation of that descriptive data, most failed to have a significant impact on decision support because the data was purely visual. Beyond making visual judgments of the urban impact of specific interventions or developing visualisations that depict future scenarios for development, the models provide little opportunity for systematic social, economic and environmental analysis. Without this fidelity of detail, the capacity of these models to assist in complex decision-making is limited by their lack of ability to harness more meaningful information about the geographic areas they represent. This paper looks at an approach that refers explicitly to building objects as representations of the actual built environment, with associated properties that imbue its components with meaning.
The Problem of Superlots
In New South Wales, over one-third of Public Housing dwellings have been clustered into large estates known as Superlots. These estates generally represent disadvantaged and often stigmatised communities. As such, the demands for intensive community management are paramount. Inner-city estates may contain up to 4,000 dwellings, generally in the form of high-rise apartment blocks [DoH 2001]. Superlots were a result of past DoH practices where these clusters of properties were built on single land parcels. At the time of their creation, the housing authority was exempt from local government planning requirements. The tenure of these land parcels today is represented on the Cadastre by large, unconsolidated expanses. Figure 1 illustrates a typical housing estate positioned on several Superlots. The standard neighbouring suburban lots may be seen greyed at the top of the image, typically with one residence per lot.
The housing estate buildings are represented as figure-ground footprints, depicting several multi-storey blocks per lot.
Fig. 1. DoH tenancies within a Superlot.
Users of the Cadastre-based systems currently in use at DoH experience difficulties locating individual properties within these unconsolidated lots, as the finest geospatial identifier is the lot itself. Subdivision of these lots is costly and is only justified for areas intended to be sold and divided into smaller tenable lots. In a managerial and operational context, DoH must specify the explicit location of the properties so that the information is useful internally. This includes tasks where identifying an individual tenancy is necessary, such as coordinating maintenance teams or communicating with external agencies such as emergency services. Locating individual properties within Superlots also has a vertical component for high-rise estates, making two-dimensional methods of representing the tenancies somewhat problematic. We propose creating a geospatial identifier linked to each individual tenancy to serve as a node for attaching more intelligent sub-lot building information. Locating buildings within the Superlot requires that their digital representations be positioned and orientated relative to the site. To resolve these issues, DoH needs to employ tools that are more sophisticated, with geospatial locators and a three-dimensional interface, in order to view individual units within the Superlot. This will require going beyond the current cadastral level, which is a little problematic since the DoH does not control that data, but simply makes use of it.
The Australian Government has already established a robust geospatial information infrastructure at the cadastral level. The Development Assessment Forum has the aim of harmonising and streamlining Development Assessment processes in Australia. The Electronic Development Assessment (eDA) project aims to develop a National Electronic Data Exchange Standard to describe the content and structure of data transactions associated with the Development Assessment process [DAF 2002]. Billen and Zlatanova (2003) have addressed some of these issues, discussing the limitations that traditional systems encounter when three-dimensional models are interrogated and developing fundamental modelling methodologies. They propose an object-based model, conceptualising real objects into four distinct groups: juridical objects; topographic objects; fictional objects; and abstract objects. This methodology holistically encompasses the context with which decision-makers are concerned rather than just the physical built environment in isolation. It also represents a move from spatial data toward spatial information that may then be incorporated into a rule-based knowledge system to support spatial analysis, querying and decision-making. Since these elements have a topology, it is important that the relationship of one object to another and the spaces in between be considered quantitatively in order to illustrate and avoid conflicts and assist automation. Furthermore, several objects may conglomerate to form new composite objects [Louise-Smith 2002]. Our project builds on this in order to provide a much-needed link between GIS analysis at the cadastral level and the need for improved housing management at the sub-cadastral level. Our proposition is that this can be achieved through an information framework based on an object-oriented approach known as Building Information Modelling (BIM). This project extends the BIM concept into the urban domain with an innovative approach that treats buildings as singular elements, thus facilitating their analysis within their urban context, while still maintaining the depth of information available through the complete BIM model.
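To make the proposed sub-lot geospatial identifier more concrete, the sketch below shows one possible minimal 'place marker' record linking a tenancy to its Superlot, building and client records; the field names, coordinate convention and identifiers are illustrative assumptions and not the DoH's actual schema.

#include <string>
#include <vector>

// A minimal geocoded "place marker" for one tenancy inside a Superlot.
// All names and conventions here are illustrative, not DoH data structures.
struct TenancyMarker {
    std::string tenancyId;    // sub-lot geospatial identifier
    std::string superlotId;   // cadastral lot (DCDB) reference
    std::string buildingRef;  // asset database reference for the building
    std::string clientRef;    // client database reference for the occupant
    double easting;           // projected ground coordinates of the dwelling
    double northing;
    int storey;               // vertical component for high-rise estates
};

// A Superlot is then simply the set of markers sharing one cadastral lot,
// which can be joined back to CAD drawings, GIS layers and tabular data.
std::vector<TenancyMarker> markersForSuperlot(
    const std::vector<TenancyMarker>& all, const std::string& superlotId) {
    std::vector<TenancyMarker> result;
    for (const TenancyMarker& m : all)
        if (m.superlotId == superlotId) result.push_back(m);
    return result;
}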
Building Information Modelling In order to explain our approach, we briefly introduce the concept of BIM. The fundamental notion of BIM is that the building is seen as a complex assemblage of component objects that represent both the physical fabric of the structure (walls, slabs, doors, fixtures, etc) and conceptual objects such as spaces, storeys, functional zones and sites. This permits the component objects, and the essential relationships or dependencies that exist between
them, to be explicitly represented within the model, allowing an unlimited range of systematic analyses to be undertaken. In the literature, this is referred to as semantic modelling [Kalay 1998] because it allows us to capture the deeper meaning of the real-world entities that we are modelling. In order to take full advantage of BIM for design analysis at the scale of a single building, an effective method is needed to share the rich model data between the various analysis tools. The most promising current development to support interoperability between building design analysis applications is the building model schema known as Industry Foundation Classes (IFC) developed by the International Alliance for Interoperability (IAI). This is an object database schema that provides an accurate representation of a complete building project to support the design and management of that facility throughout its complete life-cycle [IAI 2005]. Specialist packages are available to perform complex analyses with regard to a whole range of concerns, including costing, environmental performance, structural performance and asset management. Applying these tools to building construction has been shown to yield increases in efficiency and improved co-ordination of the workflow over the life span of the building [Froese et al 1999]. Industry take-up has been steady since the initial implementation of IFC 1.0 in 1997. The effectiveness of that technology in supporting design collaboration has now been demonstrated on several major building projects throughout the world [e.g. Fischer and Kam 2002]. A notable application of this technology in the urban management domain is Singapore's CORENET e-PlanCheck system [Matssa et al 2002], which carries out automated building code compliance checking through a single portal integrating (IFC model format) building submissions for over 12 regulatory agencies. The building code checking system in Singapore (ePlanCheck) and a planning zoning system (Byggsok) in Norway have built Web-based industry portals to facilitate access to diverse specialist government agencies. The IAI has recently extended the IFC schema to include GIS objects so that the schema is now able to incorporate the kind of data usually manipulated by GIS analysis: Cadastre, street centrelines, contour lines and aerial photography. Figure 2 illustrates the capacity for importing a standard IFC model into a GIS environment. The resulting object data schema provides the kind of integrating technology that can be used to effectively interoperate between traditional GIS applications, the emerging city models that are used for visualisation and the full range of traditional CAD tools used by the built environment professions.
Fig. 2. An IFC BIM located in an environment of GIS data.
Research such as that reported by Hamilton et al (2005) is beginning to look at how that technology can be applied in an urban planning context. This creates an opportunity for the development of a whole suite of new computer tools that can access that meaningful model data and undertake multiple analyses at an urban level, ranging from issues of urban ecology, urban sustainability, transport planning, demographic change and economic development. The IFC schema is supported by tools already in use at DoH. It is important to note that the technology works not by usurping the role of existing software, but by providing an open standard interchange model format that all such applications can access. Further, the IFC schema can take advantage of existing robust model server technology that is currently used very widely in large-scale manufacturing sectors such as car making, shipbuilding and aeronautical engineering. The availability of the server technology leads to the possibility of establishing a shared, object-based repository of meaningful urban model data that can support the full range of Public Housing management processes, as well as provide the raw data needed by existing application tools. This object-based approach to the modelling of Public Housing on Superlots, with its strong links back to the emerging BIM tools being increasingly adopted within the building industry worldwide, provides the ideal base for an information system that can support the collection and management of rich urban data and thereby support effective decision-making and housing management.
Using BIM for Housing Management
DoH is responsible for the entire life-cycle of its assets, from inception through design, documentation and construction, then facility management and finally demolition and/or disposal. The fact that the IFC concept considers all stages of the building life-cycle is especially applicable to this situation. The legacy of high concentrations of Public Housing on isolated estates continues to present on-going management demands. It is feasible that Superlots may be retrospectively digitised according to the IFC schema. Existing databases may be joined to the tenancy with embedded metadata in the object file correlating the building reference number and client reference number. Consultations with DoH staff have suggested that helpful features in geospatial datasets would include descriptive data on the exact location of building footprints, envelopes and levels, along with demographic information pertaining to the urban context including neighbours, adjoining holdings, and the 'vertical communities' formed within high-rise areas [Barton 2003]. In order to analyse the topology of these elements, data must be organised in a logical and structured manner. One Superlot comprises many buildings. These buildings comprise storeys and, in turn, spaces aggregated together to form dwelling units. These individual units equate to individual DoH tenancies. The IFC Schema defines the sets of objects that combine to describe the fabric of buildings and the site as a whole. The schema also includes definitions for the 'spaces-in-between' the physical structure of the building, such as IfcSpace, IfcZone and IfcOccupant. These IFC elements correlate precisely to those with which DoH decision-makers are concerned: the Superlot (IfcSite), the buildings (IfcBuilding) and the individual spaces (IfcSpace). For administrative purposes, thematically similar spaces (such as those grouped to form tenancies) may be grouped by IfcZone, and the DoH client acts as the IfcOccupant. The schema supports a rich set of facility data, even down to things like tenancy agreement types and concepts of ownership. It can be seen that the hierarchical geospatial structure of the IFC Schema lends itself well to rationalising Superlots for the purposes of the DoH. However, it is important not to distort the semantic integrity of the IFC schema, while at the same time maintaining an appropriate level of detail in the structure to represent the actual data needed by the department. The aim in this project is to achieve simplicity, but not at the expense of semantic integrity. Traditionally, IFC have been used for single, complex buildings.
As the DoH has a large number of relatively simple apartments, we propose to use the IFC schema to manage these areas using a minimum set of elements. The benefits of this minimal approach are: • a common geospatial identifier is developed at a sub-cadastral level, whilst maintaining geographic and topological registration with the original GIS dataset; • these geospatial 'place markers' may be used as a common entity that links standard CAD systems with standard GIS; • the cost of modelling simple entities is lower, while the capacity is there to develop the model into more complex structures; • any unnecessary complexity and detail is avoided; • large areas, such as Superlots, can be efficiently and effectively represented; and • smaller quantities of digital data optimise transmission of information over low-bandwidth Internet connections. The minimum description of a building should explicitly illustrate where the object sits within its site. Combined with existing data already in use by DoH, these 'place markers' serve as a common identifier to organise otherwise disconnected datasets describing the social and physical environment. For this task, a single 'point' may serve the purpose of a digital place marker; however, the relative scale, position, orientation and clustering of individual real-world elements require at least a defining perimeter. The IFC schema permits the use of IfcProxy to describe any objects not defined by the schema; however, the IfcSpace entity has an IfcRepresentationIdentifier: footprint. The minimal description of the individual tenancies is essentially a new source of high-fidelity data that can smoothly augment other systems and add value to already existing datasets. A significant advantage to be derived by adopting an object model approach based on the IFC schema is the option of establishing a server-based environment, providing a rich repository of information that can service a range of applications or, more precisely, a range of views of the data to satisfy the needs of all stakeholders. In the case of the DoH, that means the data can be extracted and aggregated in different ways to meet the housing management needs of the department. A prototype model server will be developed that can support concurrent, multi-faceted, online data querying and presentation. This will provide a platform for the effective collection, storage and manipulation of the kind of accurate and geospatially located objects that will address the need for better housing management. The importance of this approach is that it is founded on an object-based view of the world that models the geospatial, functional and relational features of the building envelopes and fabric, as well as land parcels, infrastructure networks and planning context that make up the greater urban environment.
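As an illustration of the containment hierarchy described above (a Superlot as IfcSite, its buildings as IfcBuilding, storeys as IfcBuildingStorey, dwelling spaces as IfcSpace grouped into tenancy zones), a minimal in-memory sketch is shown below; it simply mirrors the structure and is not an implementation of the IFC schema or of any DoH system.

#include <string>
#include <vector>

// Minimal mirror of the containment hierarchy discussed in the text:
// IfcSite (Superlot) -> IfcBuilding -> IfcBuildingStorey -> IfcSpace,
// with IfcZone grouping spaces into tenancies and IfcOccupant as the client.
// Illustrative sketch only, not the IFC schema itself.
struct Space {                       // cf. IfcSpace: one dwelling space
    std::string spaceId;
    std::string footprint;           // e.g. a polygon footprint (assumption)
};

struct Storey {                      // cf. IfcBuildingStorey
    int level;
    std::vector<Space> spaces;
};

struct Building {                    // cf. IfcBuilding
    std::string buildingRef;
    std::vector<Storey> storeys;
};

struct Superlot {                    // cf. IfcSite: the unconsolidated lot
    std::string lotId;               // DCDB lot identifier
    std::vector<Building> buildings;
};

struct Tenancy {                     // cf. IfcZone grouping spaces
    std::string tenancyId;
    std::vector<std::string> spaceIds;
    std::string occupantRef;         // cf. IfcOccupant: client reference
};

// Count the dwelling spaces on a Superlot, a typical "how many tenancies
// over what area" style query at the sub-cadastral level.
int countSpaces(const Superlot& lot) {
    int n = 0;
    for (const Building& b : lot.buildings)
        for (const Storey& s : b.storeys)
            n += static_cast<int>(s.spaces.size());
    return n;
}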
Figure 3 shows a screenshot of a web-based VRML/X3D environment presenting DoH BIMs. This illustrates how information from one application domain could be exported to another through an interoperable exchange of data: in this case, the X3D viewer would be able to simply extract the required data from the server model in order to build the scene, rather than that process being done manually.
Fig. 3. An example of a VRML/X3D based visualisation of a Superlot, buildings and tenancies.
Decision-makers need to identify individual characteristics of each tenancy when undertaking tasks such as performance monitoring or tenancy allocation, along with identifying whether its neighbours are dispersed over a large geographic area or clustered in high-rise form on a Superlot. The ability to view IFC data using three-dimensional viewers, combined with its extensible structure (ifcXML), lends itself to visualising built forms and communicating site-specific geospatial information. There is one further reason for adopting an IFC approach to this problem. DoH discontinued forming Superlots in the 1980s, moving to its current policy that requires new assets to be created in a manner that integrates with existing communities. The result of this policy is that public housing assets and tenancies are even more dispersed, maintaining an important role for geospatial systems to assist in managing these assets. IFC offers a higher-level 'common language' for the sharing of semantically
rich objects between disciplines across the building lifecycle, and it is in a format that is usable across different systems. Furthermore, with the recent extensions of the IFC schema to incorporate GIS data, it is better positioned to handle geospatial information and therefore, the dispersed nature of the property holding of an organisation like the DoH. By rationalising the current geospatial information framework and then setting protocols in place for the creation of new assets, a new data source is created, one with a greater semantic content and capacity for detail at all scales.
Conclusion
The unique position in which the DoH finds itself is conducive to the development of BIM systems. Large collections of assets, combined with a custodianship of the entire building process, present many opportunities to increase efficiencies in the management of Public Housing areas. One single reference point, a simple geocoded marker, may act as a unique identifier reconciling CAD, GIS, building and demographic information. The hybridisation of BIM, advanced visualisation and the development of a set of web-ready tools represents an innovative convergence of emerging technologies. An important aspect of this project is its reliance on a non-proprietary approach to information modelling by using open geospatial standards and modular frameworks, establishing a 'foundation of open information'. Many proprietary application tools perform useful management functions, but rely on their own database structures and do not easily interoperate with other systems. This project will use and extend a single open data format, which can be accessed by many systems. The focus on Public Housing areas provides an opportunity to test the current IFC schema, direct the on-going development of the schema, and apply the technology to form a solid base for continuing urban research. It neatly merges research and development in this field, which is focused on urban visualisation, and adds detailed and meaningful content through the use of an entirely open information standard that integrates a broad range of current development efforts rather than simply replacing them.
Acknowledgements This research has been supported by the NSW Department of Housing and forms part of the project A Spatial Decision Support System for the Management of Public Housing. The Authors would like to acknowledge the
support of Margaret Maljkovic from DoH, Dr Bruce Judd and Dr Bruno Parolin from UNSW and John Mitchell from the International Alliance for Interoperability.
References
Barton, J. (2003) "Report 2: Survey of Department of Housing Staff", An SDSS for Public Housing, Faculty of the Built Environment, University of New South Wales.
Batty, M., Chapman, D., Evans, S., Haklay, M., Kueppers, S., Shiode, N., Smith, A., Torrens, P. (2000) Visualising the City: Communicating Urban Design to Planners and Decision-makers, Centre for Advanced Spatial Analysis (CASA), University College London.
Billen, R. and Zlatanova, S. (2003) 3D Spatial Relationships Model: A Useful Concept for 3D Cadastre? Computers, Environment and Urban Systems, Volume 27, Issue 4, July 2003, Pages 411-425.
DAF (2002) Development Assessment Forum Draft eDA Development Application Transaction V01 Rel 01, Available Online: http://locale.nsw.gov.au/documents/files/eda_transtandard.pdf
Fischer, M. and Kam, C. (2002) CIFE Technical Report Number 143: PM4D Final Report. Stanford: CIFE, Stanford University. October 2002.
Froese, T., Fischer, M., Grobler, F., Ritzenthaler, J., Yu, K., Sutherland, S., Staub, S., Akinci, B., Akbas, R., Koo, B., Barron, A., Kunz, J. (1999) Industry Foundation Classes for Project Management - A Trial Implementation, ITcon Vol. 4, pg. 17-36, Available Online: http://www.itcon.org/1999/2
Hamilton, A., Wang, H., Tanyer, A. M., Arayici, Y., Zhang, X. and Song, Y. (2005) Urban Information Model for City Planning, ITcon Vol. 10, Special Issue From 3D to nD modelling, pg. 55-67, Available Online: http://www.itcon.org/2005/6
IAI (2005) International Alliance for Interoperability Home Page, Online: http://www.iai-international.org
Kalay, Y.E. (1998) P3: Computational environment to support design collaboration, Automation in Construction, 8(1), 37-48.
Louise-Smith, S. (2002) Multi-Dimensional Modelling For The National Mapping Agency, Centre for Advanced Spatial Analysis (CASA), University College London.
Marssa, S., Frachet, J.P., Lombardo, J.C., Bourdeau, M., Soubra, S. (2002) Regulation checking in a Virtual Building, International Council for Research and Innovation in Building and Construction, CIB W78 Conference 2002, CSTB / ENSAM, 12-14 June 2002.
Malczewski, J. (1997) Spatial Decision Support Systems, NCGIA, Available Online: http://www.ncgia.ucsb.edu/giscc/units/u127/u127.html
3D Visualization and Virtual Reality for Cultural Heritage Diagnostic
L. Colizzi 1, F. De Pascalis 2, F. Fassi 3
1 Consorzio CETMA, Cittadella della Ricerca s.s. 7 Appia Km 712+300, 72100 Brindisi, Italy; 2 ENEA - UTS MAT TEC, Cittadella della Ricerca s.s. 7 Appia Km 712+300, 72100 Brindisi, Italy; 3 DIIAR - Politecnico di Milano, P.zza Leonardo da Vinci, 32, 20133 Milano, Italy
Abstract
Over the last years, many new technologies for Cultural Heritage diagnostics have been developed. In particular, laser scanner surveys integrated with digital photogrammetry, as well as multi-spectral surveys, have begun to become very useful and indispensable tools for non-invasive diagnosis. Inside the SIDART project (Integrated System for Cultural Heritage Diagnostic) we developed software to visualize and process triangulated surfaces coming from high-resolution laser scanner surveys. In this paper we present the most innovative aspect of our study, consisting in the possibility to visualize and to work either in standard mode or in immersive stereoscopy (3D mode). In this way the operator can perceive the third dimension and the "virtual investigation" of the object becomes more realistic. This allows observations to be made in a simpler, more natural and more correct way, and also reduces the possibility of making evaluation mistakes due to the false perspective of classic visualization.
Introduction
This work takes place inside the SIDART project (Integrated System for Cultural Heritage Diagnostic) [5][6], whose objective is to develop hardware and software packages for cultural heritage diagnostics. The project involves several partners, such as the ENEA Group of Frascati, CNR-INOA, and CETMA, a private research centre located in the south-east of Italy and leader of the project. The aim is to create an integrated system to acquire, to
visualize and to integrate information produced by different survey instruments (laser scanner, multi-spectral camera, calibrated metric photogrammetric camera, thermography) used in cultural heritage diagnostics. This would be a new, complete tool, useful for restoration in architectural and cultural study applications. The Laboratory of Survey, Digital Mapping and GIS of Politecnico di Milano has cooperated with CETMA to create the software able to display and model laser scanner point clouds. The analysed clouds come from a new laser scanner prototype produced by CETMA with ENEA (Frascati group) expertise [1]. This instrument has a very good accuracy (at the test phase) and assures a 10-1 spatial resolution. All tests and other survey measurements, such as the multi-spectral and thermographic surveys, are conducted by CETMA with the permanent collaboration of the Politecnico.
THE SOFTWARE: LASER CLOUDS
The language: VTK Library
The software has been developed using the VTK library. The Visualization ToolKit (VTK) [2] is an open source, freely available software system for 3D computer graphics, image processing and visualization, used by thousands of researchers and developers around the world. VTK consists of a C++ class library, and several interpreted interface layers including Tcl/Tk, Java, and Python. VTK supports a wide variety of visualization algorithms including scalar, vector, tensor, texture, and volumetric methods, and advanced modeling techniques such as implicit modeling, polygon reduction, mesh smoothing, cutting, contouring, and Delaunay triangulation.
The PLY format
The software is able to load files in PLY format. The PLY file format is a simple object description that was designed as a convenient format for researchers who work with polygonal models. Early versions of this file format were used at Stanford University and at UNC Chapel Hill. A PLY file consists of a header followed by a list of vertices and then a list of polygons. The header specifies how many vertices and polygons are in the file, and also states what properties are associated with each vertex, such as (x, y, z) coordinates, normal and colour. The polygon faces are simply
lists of indices into the vertex list, and each face begins with a count of the number of elements in each list. These files come directly from the laser instrument, which provides the triangulated surface of the acquired object and infrared colour information. The following script can be used to load and visualize a 3D scanned object in PLY format:

// Read the PLY file produced by the scanner
vtkPLYReader *reader = vtkPLYReader::New();
reader->SetFileName(file);

// Map the polygonal data to graphics primitives
vtkPolyDataMapper *mapPD = vtkPolyDataMapper::New();
mapPD->SetInput(reader->GetOutput());

// Actor, renderer and render window
vtkActor *actorPD = vtkActor::New();
actorPD->SetMapper(mapPD);
vtkRenderWindow *renWin = vtkRenderWindow::New();
vtkRenderer *renderer = vtkRenderer::New();
renWin->AddRenderer(renderer);
renderer->AddActor(actorPD);
renWin->Render();
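For reference, a minimal ASCII PLY file with the header layout described above might look as follows (the element counts, coordinates and colours are purely illustrative):

ply
format ascii 1.0
element vertex 3
property float x
property float y
property float z
property uchar red
property uchar green
property uchar blue
element face 1
property list uchar int vertex_indices
end_header
0.0 0.0 0.0 200 180 160
1.0 0.0 0.0 200 180 160
0.0 1.0 0.0 200 180 160
3 0 1 2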
Data matching
The most important application implemented in the software is the possibility to perform data matching between the triangulated surfaces and digital images. The aim is to place the colour information obtained from the images of other diagnostic instruments on the 3D laser surface model. The implemented algorithm is the DLT (Direct Linear Transformation) equation solved with least-squares methods [3],[4]. We need a set of points that are known in the two spaces; each point enables two equations to be written. As a result, by matching at least six points between the 3D object and the 2D image, we are able to know, for each set of 3D coordinates, the corresponding point on the image. From these coordinates it is then possible to extract the RGB values and to associate them with the laser cloud. The DLT method guarantees adequate precision in this type of analysis because the laser surface and the digital image have very high resolution. The presented software allows different kinds of digital images to be georeferenced. We made some tests with multispectral images (infrared, ultraviolet) and also with images from a thermocamera. The integration of different diagnostic data and techniques (laser scanner, thermographic, multispectral) is a good method for evaluating a cultural object's condition. Different colour layers can be added to the 3D object; furthermore, every layer is stored in a list, so the user can switch among different views of the object.
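For reference, the standard 11-parameter DLT relates an object point (X, Y, Z) to its image coordinates (u, v) as

$$ u = \frac{L_1 X + L_2 Y + L_3 Z + L_4}{L_9 X + L_{10} Y + L_{11} Z + 1}, \qquad v = \frac{L_5 X + L_6 Y + L_7 Z + L_8}{L_9 X + L_{10} Y + L_{11} Z + 1}, $$

where L1, ..., L11 are the transformation parameters. Each matched point contributes two such equations, so at least six 3D-2D point pairs over-determine the eleven unknowns and allow a least-squares solution, consistent with the description above; the exact notation used in [3],[4] may differ.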
A precise point matching is possible because the SIDART 100 laser can acquire both depth and intensity images (in the infrared band), so it is easy to find common features between the 3D object and the corresponding image.
Fig. 1. Our first test. From the top left: the raw scan of a niche of the S.S. Stefani crypt in Poggiardo, Vaste (LE); the same with a high-resolution digital metric image; and some examples of mapping of the surface with multispectral or processed digital images
The crypt of S.S. Stefani in Vaste (LE) is excavated in the stone and is formed by three naves separated by pillars. The crypt is decorated with frescoes, even if some of them are in a state of abandonment. The frescoes show images of Saints, Archangels, Our Lady and Christ. The original frescoes date back to the Xth century, but they were later replaced with other paintings (around the XIVth and XVth centuries). We tested our SIDART 100 laser and the software with the high-resolution photogrammetric digital camera Rolleiflex DB44 Metric.
Fig. 2. Examples of 3D surfaces scanned with the SIDART 100 laser. The left image shows the raw surface and the right image the coloured mesh. The example shows the fresco in Santa Maria Antiqua.
Other functions
With the software it is possible to apply several operations to the 3D surface: export to ASCII and 3DS format, cutting with a plane, filtering of large triangles, decimation, adjustment of the illumination, shaded-light simulation, and export of sections in ASCII or DXF format.
3D Visualization and Virtual Reality
"Virtual Cultural Heritage" is a recent area which has seen spectacular growth over the past few years. At the beginning, Virtual Reality was used in design or industrial contexts to obtain a complete and immersive representation of the designed objects, or in the entertainment field for immersive games and virtual adventures. Now VR can be considered the most powerful "knowledge tool" that technology has given us at the beginning of this millennium. To understand this we must consider what Virtual Reality is. VR reproduces an environment or an object by means of software, but this is done so that
Santa Maria Antiqua (Rome) was erected in the middle of the 6th century on the Palatine Hill; it is the most ancient and important Christian monument of the Roman Forum. All the walls are painted with frescoes. We tested our technology before the beginning of the restoration works.
it is not distinguishable from reality. So the user can move in this environment, observe and examine it, and also work inside it, as if it were a real, natural world. VR thus allows an intuitive, unconscious, almost childlike approach to a phenomenon: this is the first and most natural form of knowledge [7]. Virtual Reality (VR) technologies are therefore beginning to be a very useful technique in the cultural heritage field as well, to represent, show and explain historical and architectural objects to a large public, but they also provide a useful and powerful instrument for the operators in the restoration field. We believe that the combination of computer graphics and VR techniques could provide an important set of information to better understand decay phenomena and to choose the most suitable restoration intervention.
Virtual Reality Device
Our VR device is a Table Projector Baron (Barco) [8], with a workstation equipped with an nVidia Quadro 4500 graphics card. This system allows us to perform both passive and active stereoscopic visualization.
Fig. 3. Table Projector Baron (Barco)
During the active 3D projection the viewer wears special eyewear consisting of two IR-controlled LCD light shutters working in synchronization with the projector. When the projector displays the left-eye image, the right-eye shutter of the active stereo eyewear is closed, and vice versa. The single projector used must be capable of displaying at a refresh rate high enough that
the viewer does not perceive flicker between alternate frames. We also developed the passive (anaglyph) technology, in which the two images (the right and the left one) are processed with two filters of complementary colours. In accordance with the International Stereoscopic Union, the colours are red for the left image and cyan for the right one, and the scene can be viewed in 3D with the famous "red and green glasses". The first technology gives better results and is surely more useful than the second one. We developed the two techniques only to offer a 3D option also when it is not possible to use the active visualization, which is more expensive. We have also tested the software inside CETMA's Virtual Reality Center. This system is based on an ORAD cluster architecture [9] with twelve DVGs (digital video graphics units).
Fig. 4. ORAD cluster architecture with twelve DVG
Each DVG is composed of two render units; each unit has an ASUS PC-DL Deluxe 2 motherboard, two Intel Xeon 3.06 GHz CPUs (on each motherboard), and a BFG NVIDIA 6800 Ultra graphics card. The renderers are chained to each other by means of a 1 Gb Ethernet network, and their output is composited with a genlock. The virtual theatre is composed of three mobile screens that can assume two configurations, a traditional CAD wall or an immersive cave. The system uses six Infitec technology projectors (two per screen).
Infitec is the latest stereo technology. It delivers superior stereo separation without ghosting, with full freedom of motion, independent of head tilt. Left- and right-eye images are displayed simultaneously, while the left/right image selection is made through Infitec filter technology. This technology needs two projectors with two optical Infitec filters that split the colour spectrum into two parts: one for the left-eye and one for the right-eye information. If each of the two Infitec filters is mounted into the optical light path of its own projector, then a stereoscopic image can be made by putting the left-eye information in the image of one projector and the right-eye information in the image of the other, and by using the matching Infitec filter in the pair of Infitec glasses in front of the corresponding eye (both projectors display their information at the standard refresh rate). Compared to active stereo it is completely free of flickering, offers better performance and comes with more affordable glasses. Compared to passive stereo it achieves superior colour fidelity.
Fig. 5. The virtual theatre is composed of three mobile screens that can assume two configurations, a traditional CAD wall or an immersive cave
Stereoscopic Visualization
This software can work using classic perspective visualization or with 3D visualization. The high resolution of the laser product used in the project, plus the 3D vision, makes the identification of anomalies and detachments very easy and
natural. The 3D picking of the points for data matching is more accurate and produces a more precise colour transfer. The possibility of adding colour information obtained by diagnostic instruments (thermal camera, multi-spectral camera) to the geometry allows the causes of damage to be investigated at a deeper level.
Fig. 6. Passive 3D visualization of a frescoed wall in Santa Maria Antiqua in Rome
Fig. 7. 3D picking in passive stereo
Using VTK it is quite simple to manage the stereo capability; in fact the library provides a complete set of classes to create a render window adapted to display the two scenes (right and left) for stereoscopic visualization. Here is part of the code used to manage passive and active stereo:

renWin->StereoCapableWindowOn();
renWin->SetStereoTypeToCrystalEyes();   // active stereo
// or
renWin->SetStereoTypeToRedBlue();       // passive (anaglyph) stereo
renWin->StereoRenderOn();
References [1] Ricci R, Fantoni R, Ferri M, de Collibus, Fometti G, Guarneri M, Poggi C (2003) High-resolution laser radar for 3D imaging in artwork cataloging, reproduction, and restoration. Optical Metrology for Arts and Multimedia, Proceedings of the SPIE, ed Salimbeni Renzo, Vol 5146, pp. 62-73 [2] Schroeder W, Martin K, Lorensen W (1998) The visualization Toolkit: An Object-Oriented Approach to 3D Graphics, ed Prentice-Hall [3] Fangi G (1995) Photogrammetry Notes.
[4] Tsai, Roger Y (1986) An efficient and Accurate Camera Calibration technique for 3D Machine Vision. Proceedings of IEEE Conference on Computer Vision, pp. 364-374 [5] Brumana R, Fregonese L, Fassi F, De Pascalis F (2005) 3D Laser Scanner Points Clouds and 2D Multi-spectral images: a data matching software for cultural heritage conservation. CIPA 2005, XX International Symposium, Torino [6] Brumana R, Fassi F, Fregonese L, Monti C, Potenza A, Colizzi L, De Pascalis F (2005) SIDART-A New Integrated System for diagnostic of the cultural heritages. CIPA 2005, XX International Symposium, Torino [7] http://www.mediamente.rai.it/biblioteca/biblio.asp?id= 13&tab=int#link009 [8] www.barco.com [9]www.orad.tv
True Ortho-photo Generation from High Resolution Satellite Imagery
A.F. Habib, K.I. Bang, C.J. Kim, and S.W. Shin
Department of Geomatics Engineering, University of Calgary
[email protected], [email protected], [email protected]
Spatial Information Research Team, ETRI
[email protected]
Abstract
Ortho-photos contain highly valuable data with high potential. They are useful in various applications, such as the creation of image maps and texture information in Geographic Information Systems (GIS). The Z-buffer method has been one of the most popular methods for true ortho-photo generation. However, it has strict requirements regarding the Digital Surface Model (DSM) cell size. Furthermore, the Z-buffer method has false visibility problems with narrow vertical structures. The Z-buffer method can be modified to avoid this problem using pseudo points along vertical surfaces. However, this method still has problems with certain DSM resolutions. This study implements two technical approaches for true ortho-photo generation using high resolution satellite imagery. The first approach deals with the scan line search method, which is required since each scan line has its own perspective center. The other approach is concerned with occlusion detection, which is required for the creation of true ortho-photos. Final experimental results with real data have demonstrated the feasibility of the proposed true ortho-photo generation methodology. Key words - True Ortho-Photo, Satellite Imagery, IKONOS
1 Introduction
Ortho-photo production aims to eliminate sensor tilt and terrain relief effects from captured perspective imagery. Uniform scale and the absence of relief displacement make ortho-photos an important component of GIS
databases, in which the user can directly determine geographic locations, measure distances, compute areas, and derive other useful information about the area in question. Recently, with the increasing resolution of modern imaging satellites (e.g. half a meter ground resolution imagery will be offered by OrbView-5, to be launched in 2007), there has been a persistent need for a true ortho-photo generation methodology that is capable of dealing with the imagery acquired from such systems. Differential rectification has traditionally been used for ortho-photo generation. The performance of differential rectification procedures has been quite acceptable when dealing with medium resolution imagery over relatively smooth terrain. However, when dealing with high resolution imagery over urban areas, differential rectification produces artifacts in the form of ghost images (double-mapped areas) in the vicinity of abrupt surface changes (e.g. at building boundaries and steep cliffs). The effects of these artifacts can be mitigated by true ortho-photo generation methodologies. The term 'true ortho-photo' is generally used for an ortho-photo in which surface elements that are not included in the digital terrain model (DTM) are rectified to the orthogonal projection. These elements are usually buildings and bridges (Amhar, 1998). Kuzmin et al. (2004) proposed a polygon-based approach for the detection of these obscured areas in order to generate true ortho-photos. In this method, conventional digital differential rectification is first applied. Afterwards, hidden areas are detected through the use of polygonal surfaces generated from a Digital Building Model (DBM). With the exception of the methodology proposed by Kuzmin et al., the majority of existing true ortho-photo generation techniques are based on the Z-buffer algorithm (Catmull, 1974; Amhar et al., 1998; Rau et al., 2000; Rau et al., 2002; Sheng et al., 2003). In this paper, existing methods for ortho-photo generation are comprehensively discussed while focusing on line scanner imagery, and new alternatives and their experiments are introduced.
2 Forward and Backward Projection
There are two basic approaches for generating an ortho-photo: forward projection and backward projection (Novak, 1992). In forward projection, each pixel in the source image is projected onto the ortho-photo; the corresponding pixels in the ortho-photo are then determined via the intersection of a ray from the perspective center of the source image with the three dimensional ground surface defined by the DSM. In backward projection,
each pixel in the ortho-photo takes its pixel value from the source image using the collinearity condition and the object space coordinates X, Y, and Z of the corresponding DSM cell.
Fig. 1. Iterative ground point search method: (a) Case 1: the slope of a ray from the source image is steeper than the slope of the ground surface. (b) Case 2: the slope of the ground surface is larger than the slope of a ray from the source image.
In forward projection, even though the Exterior Orientation Parameters (EOPs) of the source image and the DSM are available, the object point corresponding to each image pixel cannot be directly found, because three dimensional ground coordinates are calculated using two dimensional image coordinates and EOPs. For this reason, an iterative process is required to find the DSM point corresponding to a pixel in the source image, and two cases are possible (Fig. 1). In the case where the slope of a ray from the source image is larger than the slope of the ground surface, convergence is guaranteed (Fig. 1(a)); however, in the case where the slope of the ground surface is larger than the slope of a ray from the source image, this iterative process may not converge (Fig. 1(b)). Even though common situations are usually similar to the first case, especially with satellite imagery, we must avoid any possible problems. Another shortcoming of
forward projection is that some missed pixels might appear in the resulting ortho-images.
Fig. 2. Ortho-photo generation using differential rectification with backward projection
Unlike forward projection, backward projection goes from the object space to the image space using the object-to-image transformation. Therefore, the ground surface slope is not a serious issue and, furthermore, for frame cameras no iterative process is required. In Fig. 2, target ortho-photos are generated through backward projection without an iterative ground point search and without missed pixels in the resulting images. Backward projection, however, does require an iterative process for line scanner imagery, and this is discussed in the next section. Backward projection is commonly known as the indirect method or differential rectification.
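To make the backward-projection step concrete, a minimal sketch of differential rectification is given below; dsmHeight, groundToImage and sampleBilinear are hypothetical helpers standing in for the DSM accessor, the sensor model and the resampler, none of which are defined in the paper.

#include <vector>

// Assumed helpers (not part of the paper): DSM height at an ortho cell, the
// object-to-image transformation, and a bilinear resampler of the source image.
double dsmHeight(int row, int col);
void groundToImage(double X, double Y, double Z, double &x, double &y);
double sampleBilinear(const std::vector<float> &img, int w, int h, double x, double y);

// Differential rectification by backward projection: every ortho-photo cell pulls
// its gray value from the source image through the object-to-image transformation.
void differentialRectification(const std::vector<float> &srcImg, int srcW, int srcH,
                               std::vector<float> &ortho, int orthoW, int orthoH,
                               double x0, double y0, double cell)
{
    for (int r = 0; r < orthoH; ++r) {
        for (int c = 0; c < orthoW; ++c) {
            double X = x0 + c * cell;              // ground coordinates of the ortho cell
            double Y = y0 - r * cell;
            double Z = dsmHeight(r, c);            // height from the DSM
            double x, y;
            groundToImage(X, Y, Z, x, y);          // project into the source image
            if (x >= 0 && x < srcW && y >= 0 && y < srcH)
                ortho[r * orthoW + c] = (float)sampleBilinear(srcImg, srcW, srcH, x, y);
        }
    }
}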
3 Z-buffer Method
The Z-buffer algorithm proposed by Amhar et al. (1998) has mainly been used for true ortho-photo generation. As can be seen in Fig. 3, double-mapped areas arise from the fact that two object space points (e.g., A and D, B and E, or C and F) are competing for the same image location (e.g., d, e, or f, respectively). The Z-buffer method resolves the ambiguity of which object point should be assigned to the image location by considering the distances between the perspective center and the object points in question. Amongst the competing object points, the closest point to the perspective center is considered visible while the other points are judged to be invisible in that image.
Fig. 3. Z-buffer Method
However, this methodology has several limitations, such as its sensitivity to the sampling interval of the DSM as it relates to the ground sampling distance (GSD) of the imaging sensor. Another significant drawback of the Z-buffer methodology is the false visibility associated with narrow vertical structures. This problem is commonly known in the photogrammetric literature as the M-portion problem, which can be resolved by introducing additional artificial points (pseudo grounds) along building facades (Rau et al., 2000; 2002). For more details regarding the drawbacks of the Z-buffer method, interested readers can refer to Habib et al. (2005).
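The distance test at the heart of the Z-buffer method can be sketched as follows. This is only an illustrative outline: groundToImage stands in for the sensor model, and the DSM is given as flat coordinate arrays, neither of which reflects the paper's actual data structures.

#include <vector>
#include <cmath>
#include <limits>

void groundToImage(double X, double Y, double Z, double &x, double &y);  // assumed sensor model

// Z-buffer visibility: among DSM cells competing for the same image pixel,
// only the cell closest to the perspective centre is kept as visible.
void zBufferVisibility(const std::vector<double> &X, const std::vector<double> &Y,
                       const std::vector<double> &Z,             // DSM cells
                       double Xc, double Yc, double Zc,          // perspective centre
                       int imgW, int imgH, std::vector<bool> &visible)
{
    std::vector<double> best(imgW * imgH, std::numeric_limits<double>::max());
    std::vector<int>    owner(imgW * imgH, -1);
    visible.assign(X.size(), false);

    for (std::size_t i = 0; i < X.size(); ++i) {
        double x, y;
        groundToImage(X[i], Y[i], Z[i], x, y);
        int px = (int)std::floor(x), py = (int)std::floor(y);
        if (px < 0 || px >= imgW || py < 0 || py >= imgH) continue;
        double d = std::sqrt((X[i]-Xc)*(X[i]-Xc) + (Y[i]-Yc)*(Y[i]-Yc) + (Z[i]-Zc)*(Z[i]-Zc));
        int idx = py * imgW + px;
        if (d < best[idx]) {                       // closer cell wins the pixel
            if (owner[idx] >= 0) visible[owner[idx]] = false;
            best[idx]  = d;
            owner[idx] = (int)i;
            visible[i] = true;
        }
    }
}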
4 Angle Based True Ortho-photo Generation
Habib et al. (2005) proposed two methods for true ortho-photo generation using imagery captured by frame cameras: 1) the Adaptive Radial Sweep Method, and 2) the Spiral Sweep Method. These methods depend on the angle between the nadir direction and the line connecting the perspective center and a given DSM cell. In an orthogonal projection, the top and bottom of a vertical structure are projected onto the same location without any relief displacement. In a perspective projection, however, the top and bottom of that structure will be projected as two points. These points will not be at the same location; they will be spatially separated by the relief displacement. This displacement takes place along a radial direction emanating from the image space nadir point (Mikhail, 2001). The radial extent of the relief
displacement is the source of occlusions in perspective imagery. The presence of occlusions can be discerned by sequentially checking the off-nadir angles to the lines of sight connecting the perspective center to the DSM points along a radial direction starting from the object space nadir point (Fig. 4).

Fig. 4. Angle based method for true ortho-photo generation
Angle based methods are feasible for frame imagery because in these images, only one nadir point is present. Therefore, the occlusion search can be done along the radial direction from the nadir point. However, in line scanner images, we must consider multiple exposure positions. In this research, we will investigate the feasibility of extending the angle-based method to imagery captured by line cameras.
5 Proposed Method
A. Removing Occlusion Using the Sorted Height Method
In backward projection, another alternative for true ortho-photo generation is a method that uses a sorted DSM. If the DSM is sorted by height values, the true ortho-photo is easily obtained and the resulting image is the same as that from the Z-buffer Method. Ortho-photo pixels are filled in the order of heights, and each pixel in a source image is accessed only once. Thus,
only visible points in the object space are assigned pixel values, and the required memory is less than with the Z-buffer Method. In Fig. 5, five DSM cells are sorted in the order of height value and backward projection is done in that order. Thus, the 5th point in Fig. 5 accesses the same image pixel which has already been accessed by the 2nd point. Therefore, the 5th point is marked as an invisible point. To keep track of projected image pixels, we can use an additional array of the same size as the source image that counts the number of object points associated with a given image pixel. Alternatively, once an image pixel has been projected, the associated gray value can be changed to black, thus eliminating the projection of the same gray value more than once. The computational load strongly depends on the sorting algorithm used, because sorting is computationally heavy and complex. Nevertheless, if we already have a sorted DSM, this method is very useful; however, this method has the M-portion problem like the Z-buffer Method, and pseudo points are required to avoid this problem.
Fig. 5. The Sorted Height Method
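A rough sketch of the Sorted Height idea follows: project the DSM cells in descending order of height and let the first (highest) cell that reaches an image pixel claim it, so that later cells mapping to the same pixel are marked invisible. The function and array layout are illustrative assumptions, not the authors' implementation.

#include <vector>
#include <algorithm>
#include <numeric>
#include <cmath>

void groundToImage(double X, double Y, double Z, double &x, double &y);  // assumed sensor model

void sortedHeightVisibility(const std::vector<double> &X, const std::vector<double> &Y,
                            const std::vector<double> &Z,
                            int imgW, int imgH, std::vector<bool> &visible)
{
    std::vector<std::size_t> order(Z.size());
    std::iota(order.begin(), order.end(), 0);
    // Highest cells first: they are projected (and claim image pixels) before lower ones.
    std::sort(order.begin(), order.end(),
              [&](std::size_t a, std::size_t b) { return Z[a] > Z[b]; });

    std::vector<bool> claimed(imgW * imgH, false);
    visible.assign(Z.size(), false);

    for (std::size_t i : order) {
        double x, y;
        groundToImage(X[i], Y[i], Z[i], x, y);
        int px = (int)std::floor(x), py = (int)std::floor(y);
        if (px < 0 || px >= imgW || py < 0 || py >= imgH) continue;
        int idx = py * imgW + px;
        if (!claimed[idx]) { claimed[idx] = true; visible[i] = true; }  // first (highest) claimant wins
    }
}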
B. The Iterative Scan Line Search Method
If the source images come from line scanners, as with IKONOS imagery, another problem should be considered when using backward projection. In line scanner images, the EOPs are not the same for the scan lines constituting the scene, because the line scanner continuously moves along the trajectory during data acquisition. In line scanner imagery, the main issue is to figure out when the sensor will image the object point. This will happen if and only if the y coordinate of the image point with respect to the image coordinate system is equal to the offset value d (Equation 1).
y = -f * [r21(t)(X - Xt) + r22(t)(Y - Yt) + r23(t)(Z - Zt)] / [r31(t)(X - Xt) + r32(t)(Y - Yt) + r33(t)(Z - Zt)] = d        (1)

Where:
(x, y) are the image coordinates of the object point,
(X, Y, Z) are the ground coordinates of the object point,
d is the offset value for the line scanner in the y-direction,
f is the focal length of the lens,
(Xt, Yt, Zt) are the ground coordinates of the perspective center at time t, and
rij(t) is an element of the rotation matrix at time t.

In order to generate ortho-photos using line scanner images, an additional iterative process is required to find the perspective center corresponding to each ground point. For this problem, Habib et al. (1997) solve the polynomials of the trajectory using the Newton-Raphson method. Kim et al. (2001) proposed a method of determining 2D image coordinates using 3D object space coordinates; this method requires solving a 2nd order polynomial to find the appropriate exposure station. In this paper, the 'Iterative Scan Line Search Method' is introduced as an alternative method of determining a scan line regardless of the trajectory model and without a numerical analysis method. Each perspective center can be found through an iterative process using the properties of a one dimensional image. First, the approximate location of the perspective center must be estimated. In the next step, estimated image coordinates are obtained using the object space coordinates and the estimated coordinates of the perspective center (Equation 1). We expect that the obtained image coordinates will have discrepancies. In particular, the y coordinate will have a value different from d, which means that the estimated perspective center coordinates are wrong (Fig. 6(a)). Therefore, the discrepancy in the y coordinate can be directly used to correct the estimated perspective center coordinates (Equation 2). This process is repeated until the discrepancy is lower than a given threshold (Fig. 6(b)). For the next object point, the previous result can be used as an initial approximation because the two points are adjacent in the object space; this can reduce the processing time.
Fig. 6. The Iterative Scan Line Search Method: (a) Correction term in each iterative process and (b) Iterative search for a corresponding perspective center
t_{n+1} = t_n + (y - d) / L        (2)

Where:
L is the scanning rate,
t_n is the approximate scanning time,
t_{n+1} is the updated scanning time, and
(y - d) is the discrepancy.
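A compact sketch of the iterative scan line search driven by Equation 2 is given below; yAtTime is a placeholder for an evaluation of Equation 1 with the EOPs interpolated at a trial scan time, and the tolerance and iteration cap are arbitrary choices, not values from the paper.

#include <cmath>

// Assumed helper: y image coordinate of ground point (X, Y, Z) computed with the
// EOPs interpolated at scan time t (i.e. Equation 1 evaluated at time t).
double yAtTime(double t, double X, double Y, double Z);

// Iteratively refine the scan time until the y coordinate matches the sensor offset d.
double findScanTime(double X, double Y, double Z,
                    double t0,        // initial approximation (e.g. previous point's result)
                    double d,         // line scanner offset in the y-direction
                    double L,         // scanning rate
                    double tol = 1e-6, int maxIter = 50)
{
    double t = t0;
    for (int i = 0; i < maxIter; ++i) {
        double dy = yAtTime(t, X, Y, Z) - d;   // discrepancy (y - d)
        if (std::fabs(dy) < tol) break;
        t = t + dy / L;                        // Equation 2: t_{n+1} = t_n + (y - d)/L
    }
    return t;
}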
C. Removing Occlusion Areas Using the Height-Based Method
The angle-based methodologies introduced by Habib et al. (2005) can, after some modification, be used for occlusion detection when using imagery captured by line cameras. As has already been mentioned, a line camera scene has multiple exposure stations. Therefore, for such a scene we have multiple nadir points. After determining the exposure station for a given object point, we can use the angle-based technique to check whether this point is visible or not. To evaluate the visibility of a given point, a search path, which is defined by the projection of the object point onto the ortho-photo plane and the respective nadir point, should be investigated. However, instead of moving from the nadir point in an outward direction, the search should start from the object point and proceed inwardly towards the respective nadir point. In this case, if the angle (α) along the search path exceeds the α angle for the point in question, then this point is
deemed invisible. For example, in Fig. 7, α1 and α2 are the angles of points 1 and 2 along the search path, and α0 is the angle of the projection ray associated with the point in question. Using these angles, we can determine the visibility of the point in question. If the angle of a point along the search path is larger than the projection ray's angle, the selected point is marked as an invisible point. However, the angle-based method is computationally expensive. Therefore, we propose an alternative methodology that is more computationally efficient. In this study, a method called the 'Height-Based Method' is proposed for removing occlusion areas. The basic principle of the Height-Based Method is that no point along the search path is higher than the projection ray if the object point in question is successfully captured by the camera or the scanner. Here, the projection ray is the ray of light from the perspective center to the ground object, and the search path is the footprint of this ray projected onto the object space (Fig. 7). To determine whether each selected pixel is visible in an ortho-photo, the height of the ray is compared to the point's height value on the search path in the ground space. The height of the ray can be simply and regularly calculated using Z0 and ΔZ, which are obtained from the projection ray's slope and the given interval (Fig. 7). If any point along the search path is higher than the projection ray, it means that the target point in the DSM is obstructed. Fig. 8 shows the flow chart of the proposed method. In this process, the horizontal positions of the points to be examined along the search path are determined by a given regular interval. This process continues until one of the following conditions is satisfied: (1) the height of the ray path is higher than Zmax, the maximum value of the given DSM, or (2) the horizontal position of a point along the search path reaches the nadir point in the object space.
Fig. 7. Height-Based Method for removing occlusion areas (showing the perspective center (Xt, Yt, Zt), the nadir point and the search path)
Unlike with the Z-buffer Method and the Modified Z-buffer Method, the source image resolution is not an issue in the Height-Based Method, because in this method only ground points and perspective centers are used when searching for the occlusion areas. Therefore, the proposed method produces relatively clear boundaries and occlusion areas, and there are few points falsely categorized as invisible in the result images when compared with the Z-buffer Method and the Modified Z-buffer Method.
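A minimal sketch of the height-based occlusion test described above: the search walks from the object point towards the nadir point at a fixed interval, raises the projection ray by ΔZ per step, and declares the point occluded as soon as the DSM rises above the ray. dsmHeightAt is an assumed interpolation helper; the step size and argument list are illustrative.

#include <cmath>

double dsmHeightAt(double X, double Y);   // assumed helper: interpolated DSM height at (X, Y)

// Height-based visibility: true if no DSM point along the search path from the
// object point towards the nadir point rises above the projection ray.
bool isVisibleHeightBased(double X, double Y, double Z0,   // object point in question
                          double Xn, double Yn,            // nadir point of its exposure station
                          double Zc,                       // perspective centre height (above the nadir point)
                          double Zmax,                     // maximum height of the given DSM
                          double step)                     // search interval on the ground
{
    double dx = Xn - X, dy = Yn - Y;
    double dist = std::sqrt(dx * dx + dy * dy);
    if (dist < step) return true;
    dx /= dist; dy /= dist;                                // unit step towards the nadir point
    double dZ = (Zc - Z0) / dist * step;                   // rise of the projection ray per ground step
    double rayZ = Z0;
    for (double s = step; s < dist; s += step) {
        rayZ += dZ;                                        // height of the projection ray at this step
        if (rayZ > Zmax) return true;                      // stop condition (1): ray above the whole DSM
        if (dsmHeightAt(X + dx * s, Y + dy * s) > rayZ)
            return false;                                  // terrain or building blocks the ray
    }
    return true;                                           // stop condition (2): reached the nadir point
}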
Fig. 8. Flow chart of the Height-Based Method
6 Results and Discussions In this study, five methods for generating true ortho-photos were tested: Conventional Digital Differential Rectification, Z-buffer Method, Modified Z-buffer Method, Sorted DSM Method, and Height-Based Method. Fig. 9 shows the source image, the DSM and the output images of each method. The input DSM derived from LIDAR data has 1.0 meter resolution, and the source image is an IKONOS image having about 1.0 meter nominal resolution. The test field is in Daejeon, the Republic of Korea. When it comes to processing time, Conventional Digital Differential Rectification is the fastest method and Modified Z-buffer Method is the slowest (see the graph in Fig. 10). Sorted Height Method is the second fastest method; however, this method requires a sorted DSM according to height. Although the Z-buffer Method and the Sorted Height Method are
very fast methods, the quality of the results is not as good as that of the Modified Z-buffer Method and the Height-Based Method. The rectangular area in Fig. 9(c) shows the double mapping problem. This result image is generated by conventional digital differential rectification, and the double-mapped areas are caused by occlusion areas. However, in Fig. 9(d), (e), (f), and (g), the double-mapped areas are removed and these areas are filled with black color instead of image textures. The ellipsoidal areas in Fig. 9(d), (e), (f) and (g) show the quality of each method for true ortho-photo generation. The quality of the Sorted DSM Method is similar to that of the Z-buffer Method. The Modified Z-buffer Method shows better quality than the Z-buffer Method in occlusion areas. The Height-Based Method proposed in this study is quite fast compared to the Modified Z-buffer Method, and the quality of its result is better than that of any other method. For example, the building boundaries are very clear compared to any other true ortho-photo method, and there are few false occlusion areas (see the ellipsoidal areas in Fig. 9).
Fig. 9. Test data and results: (a) Source image, (b) Input DSM, (c) Conventional digital differential rectification, (d) Z-buffer Method, (e) Modified Z-buffer Method, (f) Sorted DSM Method, and (g) Height-Based Method
Fig. 10. Processing time of the tested methods (performance normalized so that the Height-Based Method = 100). Testing conditions: CPU: Athlon 2.6 GHz, Memory: 1,024 MB; image resampling method: Bilinear Interpolation; data size: Image 13816 x 13824, DSM 1449 x 1692
7 Conclusions and Future Works
True ortho-photo generation faces a challenge with line scanner images that it does not face with frame images, because line scanners continuously move during image capture and each scan line has its own perspective center. For this reason, we must consider two technical challenges in generating an ortho-photo: 1) Scan line search, and 2) Occlusion removal. This study proposes the Iterative Scan Line Search Method and the Height-Based Method, and experiments show that these two methods are feasible and practical alternatives for true ortho-photo generation.
References
Amhar, F., et al., "The generation of true orthophotos using a 3D building model in conjunction with a conventional DTM", International Archives of Photogrammetry and Remote Sensing, 1998, vol. 32, pp. 16-22.
Catmull, E., "A Subdivision Algorithm for Computer Display of Curved Surfaces", PhD dissertation, Department of Computer Science, University of Utah, Salt Lake City, Utah, 1974.
Habib, A., B.T. Beshah, "Modeling Panoramic Linear Array Scanner", Technical Report of Department of Civil and Environmental Engineering and Geodetic Science, OSU, Nov 1997, Report No. 443.
Habib, A., E. Kim, and C. Kim, "New methodologies for true ortho-photo generation", Photogrammetric Engineering and Remote Sensing, accepted in 2005.
Kim, T., D. Shin, and Y. Lee, "Development of a Robust Algorithm for Transformation of a 3D Object Point onto a 2D Image Point for Linear Pushbroom Imagery", Photogrammetric Engineering and Remote Sensing, Apr. 2001, vol. 67, pp. 449-452.
Mikhail, E., J. Bethel, "Introduction to Modern Photogrammetry", Wiley, 2001, pp. 234-238.
Novak, K., "Rectification of digital imagery", Photogrammetric Engineering and Remote Sensing, Mar. 1992, vol. 62, pp. 339-344.
Rau, J., N. Chen, and L. Chen, "Hidden compensation and shadow enhancement for true orthophoto generation", Proceedings of Asian Conference on Remote Sensing 2000, 4-8 Dec. 2000, Taipei, CD-ROM.
Rau, J., N. Chen, and L. Chen, "True orthophoto generation of built-up areas using multi-view images", Photogrammetric Engineering and Remote Sensing, 2002, 68(6):581-588.
Sheng, Y., P. Gong, and G. Biging, "True orthoimage production for forested areas from large-scale aerial photographs", Photogrammetric Engineering and Remote Sensing, 2003, 69(3):259-266.
Development of Country Mosaic Using IRS-WiFS Data
Ch. Venkateswara Rao, P. Sathyanarayana, D.S. Jain, A.S. Manjunath, K.M.M. Rao
National Remote Sensing Agency, Dept. of Space, India.
E-mail: {rao_cv, satya_p, jain_ds, manjunath_as, rao_kmm}@nrsa.gov.in
Abstract
Remote sensing satellites revolve around the earth in polar orbits and acquire data over a predefined swath, which is defined by the field of view (FOV) and the orbit. Each of these satellites is equipped with imaging sensors, which are broadly categorized based on their spatial resolution, spectral resolution, radiometric resolution and temporal resolution. The Indian Remote Sensing satellites IRS-1C/1D are equipped with a Wide Field Sensor (WiFS), which has a moderate spatial resolution of 188 meters and a temporal resolution of 5 days. The remote sensing operators provide the data by defining a scene in terms of Path-Row, which cuts across the swath over a particular path. The IRS-WiFS scene covers about 810 km x 810 km on the ground. If the area of interest is not covered in one scene at a particular spatial resolution, then multiple scenes are needed for studies over an area larger than a scene. In order to study vast countries like India, extending from west to east by about 3000 km and from north to south by about 2900 km, several scenes spread over different paths are required. Since they are acquired over different paths on different dates, the radiometry of these scenes will not be the same. Sometimes a scene acquired on another date may also need to be processed if the first acquisition has partial cloud cover. Stitching (joining) a large number of such scenes alone will not be sufficient to create a seamless coverage of the entire geographical area of interest, due to the geometric and radiometric distortions that are inherently present in each of these scenes. Hence all the required scenes have to be corrected for geometric distortions and radiometric balance across the area of interest by the process called mosaicing. In this case study, a seamless country mosaic of India is created using IRS-1C/1D WiFS sensor data. Since the country
is spread largely from west to east, all the images required for the mosaic need to be projected. The LCC (Lambert Conformal Conic) projection is best suited for making proper measurements over such geographical extents. All the adjacent images need to be geometrically corrected to sub-pixel accuracy to avoid discontinuities or blurring of the features in the overlapping regions. By identifying matched Ground Control Points (GCPs) across the pairs of images, a polynomial transformation model is established to rectify the geometric distortions. Radiometric normalization is obtained by normalizing the adjacent scenes over the required geographical extent. Two adjacent images are normalized by histogram matching, either by considering the histograms of the entire images or by considering the histograms of the overlap areas only. Generally, the automatic techniques used for this do not result in the best normalization, particularly when a large area has to be mosaiced. Hence, suitable interactive techniques are adopted in this case study. The geographical extent of India is spread between paths 90 and 115, covering 50 scenes. Firstly, path-wise mosaics are created by mosaicing all the scenes in a path. Secondly, the adjacent paths are mosaiced to cover the entire extent of the Indian region. In order to achieve the desired scales of the photographic product, the source image spatial resolution of 188 m is modified to 200 m pixels by resampling the image data. To obtain precise statistics, authenticated boundary information has been overlaid and used for clipping the final mosaic of India. This paper describes the various techniques adopted in creating the mosaic of India using IRS-1C/1D WiFS sensor data.
1. Introduction
With the launch of the IRS series of satellites, the acquisition of remote sensing data is possible at different spatial resolutions and revisit capabilities, which enables natural resource management studies. Some of the important studies, like estimating agricultural crop acreage and yield, survey and management of forest resources, water bodies, disaster monitoring etc., are feasible at appropriate scales. To monitor the vegetation cover of a large geographical area covering a country like India, the WiFS sensor of IRS-1C/1D, with a spatial resolution of 188 m and a swath of 810 km, is suitable. Higher spatial resolution data would cause a manifold increase in the data volume to be handled for generation of a country mosaic. A mosaic of the required geographical extent is useful for obtaining a synoptic view, continuity of geographical features, and estimation of resources at regional level
and planning. Generation of a country level mosaic involves the handling of voluminous data. The country mosaic generation involves the following steps: 1) selection of satellite standard scenes, 2) identification of ground control points in the overlap areas for geometric registration, 3) radiometric normalization across scenes using histogram matching, normalization and feathering, 4) generation of the seamless final mosaic, 5) geocoding of the final mosaic to the required scale and projection, 6) overlaying of the administrative boundary for the extraction of the required geographical extent, and 7) as the final output is referenced to a map, annotation of the prominent places in the final mosaic.

Table 1. IRS-1C/1D WiFS Sensor Characteristics
Resolution        188 m
Swath             810 km
Repetitivity      5 days
Spectral Band 3   0.62 - 0.68 µm
Spectral Band 4   0.77 - 0.86 µm
The 5-day repetitive WiFS sensor, with 188 m spatial resolution and a swath of 810 km, provides information on the crop condition at regional level. This sensor is best suited because it is designed to acquire information in the NIR (band 4) and Red (band 3) wavelengths. The NIR band is sensitive to crop growth condition and the Red band is useful for other information. Hence, a ratio of the two is most useful in crop/vegetation condition assessment. The Indian agriculture pattern follows mainly two cropping seasons, namely Kharif (June to November) and Rabi (December to April). Periodical crop inventory studies need a country mosaic for each season. In this paper we present the methodology for the generation of the country mosaic for the Kharif season. The Kharif season falls in the monsoon period, due to which the geographical extent under study is often covered by clouds. Hence this involves the elimination of cloudy portions by using multiple data sets. The Indian land mass is fully covered by 35 WiFS scenes. However, 50 IRS WiFS scenes from different satellites (1C, 1D) had to be used for the Kharif season in order to generate a cloud-free mosaic. The same procedure can easily be followed for the Rabi season, as not many cloud cover issues exist during this season. India lies between longitudes 69° 27' to 98° 35' and latitudes 7° 12' to 35° 56' and is covered by IRS reference paths from 90 to 115 and rows from 48 to 70. The Andaman
& Nicobar and Lakshadweep islands have been mosaiced independently to form a complete mosaic of the country.
2. Geometric Correction The path row based satellite scene which is generated after applying the scene based standard corrections will have residual errors like location error, scale error, internal distortion and radiometric error. Unless these errors/mismatches are eliminated, the neighboring scenes can not be properly mosaiced. Geometric corrections are essential to rectify the location error, scale error and internal distortion in order to obtain the accurate edge matching of the features. Radiometric balancing is essential across the scenes as radiometric differences exist due to different sun illumination conditions. Due to the geometrical distortions in individual scenes the features will not have continuity across the scenes. Geometric correction is essential for addressing these distortions. Keeping one of the images as reference and by identifying the accurate common points in the overlap region of the subsequent image as Ground Control Points (GCP), a polynomial transformation relation is established. 2.1. Polynomial Transformation:
Polynomial transformations are used to relate the source image coordinates to the reference image coordinate system [1]. Depending upon the distortion in the imagery, polynomial equations of appropriate order are required to establish the needed transformation. The degree of complexity of the polynomial is expressed as the order of the polynomial. The order is the highest exponent used in the polynomial. The number of GCPs needed is decided by the order of the polynomial. The objective of deriving the coefficients of the transformation relation is to obtain the polynomial equation which best represents the transformation with the least amount of error. It is not always possible to derive coefficients that produce zero Root Mean Square (RMS) error. Every GCP influences the coefficients, even if there is not a perfect fit of each GCP to the polynomial that the coefficients represent. The distance between the GCP reference coordinates and the curve is the error, as shown in Fig. 1. The transformation matrix for a first order transformation consists of six coefficients, three for each coordinate X and Y. These are used in the following polynomial relations (see Eq. 2.1):
Xo = A1 + A2*Xi + A3*Yi
Yo = B1 + B2*Xi + B3*Yi        (2.1)

Where Xi and Yi are the source or input image coordinates, and Xo and Yo are the transformed or rectified image coordinates.
10 c
:eo o
o
><
........1 -----
GCP
Q.)
o c ~
..
2Q.)
Polynomial Curve
a:::
Source X Coordinate
Fig. 1. Polynomial Curve vs GCPs.
The first order transformation is a linear transformation and it can perform translation, scale change, skew and rotation operations on the source image. The Second order polynomial relation is as follows (see Eq. 2.2) (2.2)
Where Al to A 6 & B 1 to B6 are coefficients Second order polynomial transformation is used for, the data coverage over a larger area to account for its earth's curvature and for taking care of large FOV of the sensor.
3. Radiometric Normalization
The remote sensing satellites acquire the data over a path on a given orbit. Hence there will be no significant change in the radiometry among the adjacent scenes within a path. But to avoid the cloudy regions of a path, additional scenes may need to be collected on different dates. Under such circumstances the radiometry between scenes usually differs due to the difference in the date of acquisition and also due to the difference in the sun illumination angle. If the seasons are different, then there will be a significant change in the image contents acquired on different dates. Three different automatic radiometric normalization techniques are available. 1) Histogram matching: this technique compares both histograms and matches the second image's histogram to the reference image's histogram. 2) Histogram normalization: computes the statistics in the overlapping regions of both images and adjusts the second image's histogram based on the mean difference and the standard deviation ratio. 3) Feathering: uses a ramp function to blend both images in the overlapping regions. These three methods sometimes may not provide a satisfactory radiometric normalization. In such cases interactive radiometric adjustments are essential.
3.1. Histogram matching
The method used to generate a processed image that has a specified histogram is called histogram matching or histogram specification [1]. Let r and z be the gray levels of the input and output (processed) images respectively, and let pr(r) and pz(z) denote their corresponding probability density functions. The input image histogram is equalized by the following transformation (see Eq. 3.1):
sk = T(rk) = Σ_{j=0}^{k} pr(rj) = Σ_{j=0}^{k} nj / n,    k = 0, 1, 2, ..., L-1        (3.1)
This is a mapping from the gray levels rk in the original image into corresponding gray levels sk, based on the histogram of the original image, where n is the total number of pixels in the image, nj is the number of pixels with gray level rj, and L is the number of discrete gray levels. Similarly, histogram equalization is applied to the output image, which is computed from the pixels in the image (see Eq. 3.2):
vk = G(zk) = Σ_{i=0}^{k} pz(zi) = sk,    k = 0, 1, 2, ..., L-1        (3.2)
The following equation gives the approximation of the desired levels of the image with the histogram (pz (z) ) that corresponds to the output image (see Eq 3.3).
zk = G^-1(sk) = G^-1[T(rk)],    k = 0, 1, 2, ..., L-1        (3.3)

This approach gives an approximation of the desired gray levels of the output image.
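For 8-bit imagery, the mapping of Equations 3.1-3.3 can be sketched as a look-up table built from the two cumulative histograms; the nearest-level inversion of G and all names below are illustrative assumptions, not the implementation used for the mosaic.

#include <vector>
#include <cstdint>

// Build a look-up table that matches the histogram of 'src' to that of 'ref'
// (Equations 3.1-3.3 with L = 256 gray levels).
std::vector<uint8_t> histogramMatchLUT(const std::vector<uint8_t> &src,
                                       const std::vector<uint8_t> &ref)
{
    const int L = 256;
    std::vector<double> cdfSrc(L, 0.0), cdfRef(L, 0.0);
    for (uint8_t v : src) cdfSrc[v] += 1.0;
    for (uint8_t v : ref) cdfRef[v] += 1.0;
    for (int k = 1; k < L; ++k) { cdfSrc[k] += cdfSrc[k - 1]; cdfRef[k] += cdfRef[k - 1]; }
    for (int k = 0; k < L; ++k) { cdfSrc[k] /= src.size(); cdfRef[k] /= ref.size(); }  // s_k and v_k

    std::vector<uint8_t> lut(L);
    for (int r = 0, z = 0; r < L; ++r) {
        while (z < L - 1 && cdfRef[z] < cdfSrc[r]) ++z;   // z_k = G^-1(T(r_k)), nearest level
        lut[r] = (uint8_t)z;
    }
    return lut;
}

// Usage: for each pixel p of the image to be normalized, replace p by lut[p].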
3.2. Histogram Normalization
Assuming the image sensors are linear and time-invariant, the following methodology can be used to normalize the images. The normalized output image x' can be computed as a function of the offset and gain, which are derived from the combined statistics of both images. The original image x is modified as follows (see Eq. 3.4):
x' = f(x) = a + b*x        (3.4)
Each of the adjacent image pairs will have its own fixed values of offset, a, and gain, b. In the absence of a calibration object, one can estimate the (relative) values of gain and offset using simple statistics of the observed overlap regions. If this assumption is correct, then the gain and offset constants can be estimated from the mean and standard deviation of the sensed image gray values. Sub-images of the overlapping regions are normalized by considering the relative differences of their means and standard deviations. Hence the difference of the means between the two images gives an indication of the offset, and the ratio of the standard deviations defines the gain. The two scenes get normalized, since each sub-image in the overlapping region then has the same mean and standard deviation, and, if the assumption introduced earlier is valid, the same linear relationship can be applied to the entire image.
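A short sketch of estimating the gain and offset of Eq. 3.4 from overlap statistics, with the gain taken as the ratio of standard deviations and the offset derived from the means; clipping the normalized values to the valid gray range is assumed to happen elsewhere, and the names are illustrative.

#include <vector>
#include <cstdint>
#include <cmath>
#include <utility>

struct GainOffset { double gain, offset; };

// Estimate b (gain) and a (offset) of x' = a + b*x from the overlap regions of
// the reference image and the image to be normalized.
GainOffset estimateGainOffset(const std::vector<uint8_t> &overlapRef,
                              const std::vector<uint8_t> &overlapSrc)
{
    auto meanStd = [](const std::vector<uint8_t> &v) {
        double m = 0.0, s = 0.0;
        for (uint8_t x : v) m += x;
        m /= v.size();
        for (uint8_t x : v) s += (x - m) * (x - m);
        return std::make_pair(m, std::sqrt(s / v.size()));
    };
    auto [mRef, sRef] = meanStd(overlapRef);
    auto [mSrc, sSrc] = meanStd(overlapSrc);
    double b = sRef / sSrc;          // gain: ratio of standard deviations
    double a = mRef - b * mSrc;      // offset: difference of means after scaling
    return {b, a};
}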
3.3. Feathering
In this method, the weight of the left image is greatest at the left-most point of the overlap and decreases to zero as one moves to the right end of the overlap, for each scanline (see Eq. 3.5):
O(x) = (d - x)*l(x)/d + x*r(x)/d        (3.5)
Where d is the size of the overlap, x is the distance of the pixel from the left end of the overlap, l(x) and r(x) are the left and right image gray values at x, and O(x) is the output (mosaiced blend) gray value at x.
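A one-scanline sketch of the blend in Eq. 3.5 follows; left and right are assumed to hold the overlapping pixels of a single scanline, which is a simplification of however the mosaicing software actually stores the overlap.

#include <vector>
#include <cstdint>
#include <cstddef>

// Blend the overlap of one scanline: full weight to the left image at the left
// edge of the overlap, ramping linearly to full weight for the right image.
std::vector<uint8_t> featherScanline(const std::vector<uint8_t> &left,
                                     const std::vector<uint8_t> &right)
{
    const std::size_t d = left.size();          // overlap width
    std::vector<uint8_t> out(d);
    for (std::size_t x = 0; x < d; ++x) {
        double o = ((double)(d - x) * left[x] + (double)x * right[x]) / (double)d;  // Eq. 3.5
        out[x] = (uint8_t)(o + 0.5);
    }
    return out;
}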
4. Methodology
Every satellite follows a predefined path, depending on its orbit, during its revolution around the earth; the scheme used to identify the acquired data is called the referencing scheme [2]. It is a means of conveniently identifying geographic locations on the earth. The referencing scheme is designated by paths and rows. An orbit is the course of motion taken by the satellite in space, and the ground trace of the orbit is called the path. The continuous stream of data is segmented into a number of scenes of convenient size. Each scene, as shown in Fig. 2, is referred to by a path and row number. Due to the large coverage of the WiFS scene (810 km x 810 km) there is an overlap of approximately 85%, i.e., 692.5 km, between adjacent paths, and the overlap between adjacent rows is 676 km. This is because the basic referencing scheme was derived for the LISS-III coverage, which has a swath of 141 km. Hence choosing scenes at an interval of every 5 paths or 5 rows is sufficient to maintain the continuity of the data.
Fig. 2. WiFS scene - Path: 105, Row: 60, Date of pass: 2-Nov-2001
4.1. Intra-path Mosaic
India is spread between IRS paths 90 and 115, covering 35 scenes. Firstly, the scenes within a specified path are registered to eliminate the geometrical distortions between successive scenes. The registration process is initiated by identifying the Ground Control Points (GCPs) in the overlapping regions of successive scenes, and the same is extended to cover the complete path up to the Indian boundary. The types of features chosen for GCPs are linear feature crossings, edge crossings and simple high variance points. The pairs of GCPs are used to build a geometric model whose parameters are found by least squares estimation to transform the first image geometry to the second image. The models usually considered are polynomials up to degree 2. One of the images is taken as reference and the obtained model is applied to transform the second image. The geometrically uncorrected image is warped using Cubic Convolution (CC) interpolation. The first order transformation model is used while carrying out the geometric corrections within the path, covering all the scenes. Since only linear distortions are present, the first order transformation model is sufficient. To avoid blurring in the final mosaic, the registration error has to be kept as low as possible, i.e. the RMS error should be less than a pixel. After confirming accurate geometric registration to sub-pixel
Fig. 3. Path-wise mosaics: (a) Path 91, (b) Path 96
accuracy, all the scenes within the path are mosaiced. The path-wise mosaics are shown in Fig. 3. Generally the radiometry will be uniform, since the scenes of a path are all acquired on the same day. Even though the Indian land mass is covered in 35 scenes, we used 50 scenes to replace the cloudy regions. Under such circumstances radiometric normalization techniques need to be applied. In order to maintain the best radiometry across all the scenes, the dates of acquisition are chosen in such a way that all the scenes are covered in the same season (as close as possible within the season). The same procedure is adopted to create all the required paths.
4.2. Inter-path mosaic
Once the individual paths are available, path-wise geometric correction needs to be carried out by using a similar procedure of identifying the GCPs in the sidelap areas. Individual paths cover approximately 2500 km in length. Such a long coverage will have residual earth curvature effects even after the standard corrections applied to the individual scenes. Due to this, a second order polynomial transformation model is essential to account for such distortions. After the completion of the precise geometric correction between the paths, the images can be stitched across the paths while performing the radiometric normalization, to cover the complete geographic extent. By keeping path number 100 as reference, all other paths are geometrically rectified successively in both directions. This approach will not propagate the geometric distortions, as the land mass is present in the major portion of the image. It is essential to apply radiometric balancing across the paths, because they are acquired on different dates. Some of the paths could be normalized by adopting the above mentioned three methods: histogram matching, histogram normalization and feathering. None of these automatic methods will give the required performance if the images have a diversified landscape. Only interactive radiometric adjustment could yield satisfactory results. The individual paths and the seamless mosaic are shown in Fig. 4.
Fig. 4. Seamless Inter-path Mosaic of Paths 91 & 96
4.3. Mosaicing
Two individual mosaics are generated, covering paths 90 to 100 as the left mosaic and 105 to 115 as the right mosaic. This is mainly to facilitate control over radiometric balancing across all the paths. A large land mass is present in paths 100 and 105; hence, keeping these paths as reference,
Fig. 5. Full India Mosaic (untrimmed)
successive paths are populated on each side. This approach makes it convenient to generate the seamless left and right mosaics. Subsequently, the final mosaic is generated by carrying out interactive radiometric adjustment, as all the data is then present in just the two major halves. The final mosaiced data is shown in Fig. 5.
4.4. Geocoding
Having generated the final mosaic, this data set can be geocoded so that geographic location correspondences can be maintained. Well distributed 1:250,000 scale maps are used to provide the geographic control, similar to the Ground Control Point identification described earlier. For a country like India, sprawling predominantly from east to west over a large area, the LCC projection with two standard parallels, which preserves the true shape everywhere, is best suited. The standard parallels are chosen in such a way that the areal distortion is minimal on both sides of the parallels in the full image. The administrative boundary is used to extract the country mosaic. The final country mosaic with the boundary is shown in Fig. 6.
5. Conclusions
To develop a mosaic of larger areas, such as a country mosaic, we need to start the process with geometric correction between individual adjacent scenes of a path. Radiometric correction between the scenes of the same path is not needed for scenes acquired on the same day. However, radiometric correction is required for the scenes of a path acquired on different dates due to cloud cover. For inter-scene geometric correction within a path, the first order geometric model is adequate to address the inter-scene geometric distortions. The registration accuracy should be maintained within a pixel to have perfect continuity of features between the adjacent scenes.
Fig. 6. Full India Mosaic (trimmed with boundary)
A second order geometric model is used for inter-path geometric correction to correct the residual earth curvature effect, which cannot be ignored for larger areas. Once the individual path mosaics are created, the geometric error can be kept under control by choosing the center path as a reference for creating the complete mosaic. Paths on the left side are registered with the reference path. Similarly, the paths on the right side are registered with the reference path. These two parts are radiometrically normalized using the above mentioned methods before creating the full mosaic. If these automatic methods fail, interactive radiometric adjustments are applied. Minor radiometric adjustments are applied to the full mosaic after joining the
left and right parts. The full mosaic is geocoded by obtaining the control from the surveyed map. The full mosaic is trimmed as per the administrative boundary of the country. Prominent places are annotated on the extracted country mosaic.
6. Acknowledgements
The authors would like to thank Dr. K. Radhakrishnan, Director, National Remote Sensing Agency, for encouraging us to carry out this work.
References
[1] Rafael C. Gonzalez and Richard E. Woods, "Digital Image Processing", Pearson Education (Singapore) Pvt. Ltd., Second Edition, 2004.
[2] IRS-1C/1D data users handbook.
Digital Terrain Models Derived from SRTM Data and Kriging
T. Bernardes¹, I. Gontijo¹, H. Andrade¹, T. G. C. Vieira², H. M. R. Alves³
¹Universidade Federal de Lavras, Lavras, Minas Gerais, Brasil, [email protected]
²CTSM, Empresa de Pesquisa Agropecuária de Minas Gerais, Lavras, MG, Brasil, [email protected]
³EMBRAPA CAFÉ, Empresa Brasileira de Pesquisa Agropecuária, Brasília, DF, Brasil, [email protected]
Abstract
The objective of this work is to define procedures to improve the spatial resolution of SRTM data and to evaluate their applicability in the Serra Negra region, in the district of Patrocínio, state of Minas Gerais, Brazil. The region's structure is the result of past tectonic processes that have arched it into a dome shape. Besides the existing agricultural exploitation, Serra Negra also has strong tourism and mining potential. The Digital Elevation Model (DEM) was generated using different interpolation methods at a resolution of 30 meters, or 1 arcsec, among them kriging, which is well suited to handling random spatial variations because of its capacity for dealing with spatially variable components. The accuracy of the resulting DEM and of the modelled slope maps was evaluated based on slopes measured in the field. Correlation coefficients were determined between the field data and the values derived from the DEMs. Analyses and tests with the SRTM data released for South America are presented in order to better adapt the model to the study area. The correlation coefficients of the estimates by kriging and by the bicubic interpolator were similar, with a slight difference in favour of kriging. Therefore, kriging is an interesting alternative for elaborating Digital Elevation Models that are in keeping with the dome structure of the Serra Negra region. In order to assess operational aspects of the pre-processing methods, the study area data were prepared at a resolution of 30 meters and evaluated through statistical analysis and visualization of the DEMs and slope. The data presented strong restrictions to being used in their original form due to the low spatial resolution. However, the pre-processing allows their use at relatively detailed scales. Based on the results, a proposal for the
development of a DEM with the SRTM data for the Serra Negra region is presented.
Introduction
Many aspects of the landscape of terrestrial systems have been evaluated with a view to a disciplined management of information on natural resources. A large part of the features of the landscape derives from the shapes of the terrain, described by topographic variables, when we assume that the different geographic phenomena on the Earth's surface establish occupation patterns associated with implicit and explicit inter-relation mechanisms. Points of correspondence can be traced between a region's relief, soil and vegetation. It is known, for example, that the soil presents distribution patterns in the landscape associated with soil formation factors, as shown in Jenny's equation, S = f(Cl, O, R, P, T), where relief (R) can be modelled computationally to contribute to the elaboration of detailed maps at low cost and with relative precision. Intrinsically, climate (Cl), organisms (O) and time (T) are also incorporated into the model, as they are responsible for shaping the relief when they act upon the geologic substratum or parent material (P). Climate and vegetation, at the same time that they alter and reveal development stages of the soils in interaction with the relief, are also influenced by alterations on the Earth's surface. Precipitation data are related to relief through the relief's influence on the dynamics of the air masses that condition precipitation. Aspects related to shadow geometry in the more mountainous portions of the landscape, for example, result in a trend towards the evolution of microclimates favourable to frost.
In Brazil, there is a lack of data on the topography of the terrain because of the extent of the territory, which makes mapping by conventional planialtimetric surveys difficult. However, the development of hardware engineering, allied to powerful data processing systems and the advances of automated cartography, has modified the methodologies for detailing the Earth's surface. Added to the collection of Earth surface data by technologies such as remote sensing, an astounding amount of information is generated daily; one example is the 12 Terabytes of data collected by interferometry by the SRTM (Shuttle Radar Topography Mission) Project during a period of 11 days and 176 orbits of the Earth. Studies applied to the characterization of the landscape with morphologic variables have been aided by the development of automatic methods for the extraction of topographic variables. It is important, in this sense, to define procedures for the extraction of digital
information on topography and relief units in a computational environment.
Location of the area
The study area is situated in the southeast region of the state of Minas Gerais, in the watershed of the Alto Paranaíba. It is geographically referenced by coordinates 18° 58' 29.61" and 18° 50' 55.17" latitude S and 46° 55' 6.86" to 46° 45' 27.95" longitude W. The area can be distinguished from the regional context in orbital images, aerial photographic surveys and other cartographic documents. The total study area is 231.84 km² (16.8 km east-west by 13.8 km north-south) and stands approximately 280 meters above the local base level, with a maximum altitude of 1270 meters to the southwest, on the border of the dome structure, and 1160 meters in the centre, where the lagoon of Chapadão de Ferro is located. Access from Belo Horizonte, the state capital, is by highway BR 262 to the district of Ibiá, where highway MG 187 leads to an earth road 2 km from the town, in the direction of the district of Cruzeiro da Fortaleza, which cuts across the whole complex in the west-east direction.
Obtaining and preparation of data
The SRTM data are available on the USGS (United States Geological Survey) site, at a resolution of approximately 90 meters, and were obtained in TIFF format. Among the undesirable characteristics of the original data, only the extremely high or low points (spikes and pits) were removed, using the ENVI software (Research Systems Inc., 2002), from which the data were exported in two different formats, ASCII and TIFF. Objects on the terrain surface, such as buildings or even different land cover types, are incorporated into the model, giving a false impression of the relief. These features were not removed because the filtering techniques available cause an indiscriminate softening of the relief, leading to a loss of information. According to Valeriano (2003), unnecessary softening of the DEM hinders the performance of the slope algorithms. Moreover, interpolation by kriging has the capacity to deal with the components of spatial variability, providing an interesting way of handling such random spatial variations.
Treatment of the data
The objective was to change the resolution of the original data from 3 arcsec to 1 arcsec, or from approximately 90 to 30 meters. To this end, two interpolations were carried out: bicubic interpolation, available in the SPRING/INPE software, and kriging, following a flow of operations in different software packages. After removing the outliers from the original file, it was imported directly into SPRING/INPE in TIFF format, where a new grid with a resolution of 30 meters was generated by bicubic interpolation. For the kriging, the same file was exported from ENVI in ASCII format with 3 columns representing coordinates X, Y and Z, where Z holds the height values, to be read by the R software (geoR package) for exploratory analysis of the data. After the exploratory analysis, GS+ (Gamma Design Software, 2000) was used for the geostatistical analysis, which allowed the choice of the semivariogram model that best represented the data. The file containing all the digitized points was then imported into Surfer (Golden Software Inc., 1995), where the interpolation by kriging was carried out.
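As a minimal sketch only (not the SPRING/INPE or Surfer workflow actually used), the bicubic resampling step from 3 arcsec to 1 arcsec can be illustrated with SciPy; the input file name is hypothetical and the grid is assumed to be a plain text array of heights with no header.

import numpy as np
from scipy.ndimage import zoom

# Hypothetical 3 arcsec (~90 m) height grid stored as a plain text array
dem_90m = np.loadtxt("srtm_tile_90m.txt")

# Factor-3 upsampling with cubic spline (order=3) interpolation, ~90 m -> ~30 m
dem_30m = zoom(dem_90m, zoom=3, order=3)
print(dem_90m.shape, "->", dem_30m.shape)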
Field observations
The field work consisted of measuring the slope at 40 sample points distributed over the whole study area. The observations were carried out manually using a clinometer and the points were georeferenced with a ProMark II GPS receiver. This GPS model allowed post-processing of the data, improving their precision; the position errors were then confined to values of less than 2 meters. These errors were considered satisfactory for this work because, at all the points observed, the slope remained the same within a radius greater than these values. The georeferenced points were plotted on the slope grids derived from the DEMs for comparison. Slope was measured following Ostman (1987), as the use of digital elevation models centres especially on obtaining variables derived from altimetry (slope being the most frequent example). According to Valeriano (2004), slope is a more rigorous test because derivative calculations highlight structures that are too subtle to be perceived in the first-order variable.
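The slope maps compared below are derivatives of the elevation grid. A minimal sketch of such a slope calculation for a regular 30 m grid, using simple finite differences rather than the specific algorithms of the software employed, might look like this:

import numpy as np

def slope_percent(dem, cell_size=30.0):
    """Slope in percent from a regular elevation grid (heights and cell size in metres)."""
    dz_dy, dz_dx = np.gradient(dem, cell_size)   # height change per metre along rows/columns
    return np.hypot(dz_dx, dz_dy) * 100.0        # rise over run, expressed in percent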
Correlations with field data
The DEMs with 30 meter resolution obtained by the bicubic interpolator and by kriging were transformed into slope. The data were tabulated to obtain the correlation coefficients between the field data and the values obtained from the DEMs interpolated by the two methods. The two methods were also compared to each other to verify their similarity. For this, the SAEG statistical software was used.
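Purely as an illustration of the comparison described here (the authors used SAEG), the correlation coefficients can be computed with numpy; the file names, and the assumption that the three series are aligned point by point, are hypothetical.

import numpy as np

field = np.loadtxt("field_slope.txt")      # slope (%) measured at the 40 field points
krig  = np.loadtxt("kriging_slope.txt")    # slope (%) read from the kriged DEM at the same points
bicub = np.loadtxt("bicubic_slope.txt")    # slope (%) read from the bicubic DEM at the same points

r_krig_field  = np.corrcoef(field, krig)[0, 1]
r_bicub_field = np.corrcoef(field, bicub)[0, 1]
r_krig_bicub  = np.corrcoef(krig, bicub)[0, 1]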
Trend and semivariogram analysis
Exploratory analysis showed no trend in the data, so the spatial analysis could be carried out without altering the data. As shown in Figure 1, a spherical model was fitted, with the following parameters: nugget (Co) of 10 m², sill (Co + C) of 1480 m² and range (A) of 11500 m.
[Isotropic variogram: semivariance versus separation distance (0 to 15,000 m); fitted spherical model with Co = 10.0, Co + C = 14180.0, Ao = 11550.0, r² = 0.986, RSS = 2.112E+06]
Fig. 1. Semivariogram model fitting
The Co parameter (nugget) represents variability undetected at the distances used, and can reflect an analytical error indicative of unexplained variability. As the nugget (Co) is very low in relation to the sill (Co + C), there is strong spatial dependence in the data in question (Cambardella et al., 1994). The Co + C parameter, called the sill, is the value at which the semivariogram stabilizes. The parameter A is the range of spatial dependence and indicates the limit distance between samples that do, and those that do not, show spatial autocorrelation.
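For reference, a sketch of the spherical semivariogram model with the parameters quoted in the text (the figure reports slightly different fitted values) can be written as follows; this is an illustration, not code from the GS+ analysis.

import numpy as np

def spherical_semivariogram(h, nugget=10.0, sill=1480.0, a=11500.0):
    """Spherical model: gamma(h) = Co + C*(1.5*h/a - 0.5*(h/a)**3) for 0 < h <= a,
    and Co + C (the sill) beyond the range a."""
    h = np.asarray(h, dtype=float)
    c = sill - nugget
    gamma = np.where(h <= a, nugget + c * (1.5 * h / a - 0.5 * (h / a) ** 3), sill)
    return np.where(h == 0, 0.0, gamma)   # by convention gamma(0) = 0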
Prepared models
Both interpolation methods improved the definition of slope features in relation to the original data at 3 arcsec resolution. As shown in Figure 2, subtle variations were observed in the generated models, such as the softening of wrinkled flat areas and of artificial features in the terrain.
Fig. 2. (a) DEM at the original resolution (3 arcsec); (b) DEM by bicubic interpolation (1 arcsec); (c) DEM by kriging interpolation (1 arcsec)
In both cases, features of objects on the terrain surface, such as buildings, deforestation and irregularities in the area corresponding to the lagoon (due to aquatic macrophytes), remained in the products obtained. However, as expected, kriging was more efficient in this treatment owing to its capacity for dealing with random spatial variations such as these. Bicubic interpolation highlighted the more mountainous features of the landscape, but it also highlighted the variations provoked by the canopy of the cerrado¹ vegetation to the detriment of the topographic information. Similar results were obtained by Valeriano (2004), where high-frequency features represented by buildings in urban areas were enhanced by the triangular irregular network (TIN).
¹ Cerrado is a type of savanna vegetation of central Brazil.
When the data were transformed to slope grouped in classes, as shown in Figure 3, there were also few differences in terms of general distribution. Here too, the softening generated by the kriging model was fundamental to the performance of the slope algorithm, especially in determining flat areas (between 0 and 3%), reducing the modelling of areas with slopes between 3 and 12% scattered within flat areas.
Fig. 3. (a) Slope derived from the original DEM; (b) Slope derived from bicubic interpolation; (c) Slope derived from kriging interpolation
Correlation with field data
Table 1 shows the number of observations and the correlation coefficients (R²) between kriging and field data, bicubic interpolation and field data, and kriging and bicubic interpolation.

Table 1: Correlation coefficients between the different interpolation methods and the field data

                                         Number of observations      R²
Kriging and field data                   40                          0.9659
Bicubic interpolation and field data     40                          0.9506
Kriging and bicubic interpolation        40                          0.9827
As in the visual analysis of the generated models, the scatter plots of the field data against the data simulated by the two interpolation methods (Figures 4, 5 and 6) showed a slight improvement for the data treated with geostatistical techniques. The model generated by kriging presented a slightly higher correlation than the bicubic interpolation method, with correlation values of 0.9659 and 0.9506 at the 1% significance level. The similarity between the two methods can also be verified by the high correlation coefficient between them, 0.9827 at the 1% significance level. In fact, according to Diggle et al. (2002) and Diggle et al. (2003), when working with regular samples and with interest limited to point predictions, this similar behaviour is justified by the use of the total neighbourhood.
Fig. 4. Scatter plot for correlations between bicubic interpolation and field data
Fig. 5. Scatter plot for correlations between kriging and field data
Fig. 6. Scatter plot for correlations between bicubic and kriging interpolation
Conclusions
In their original form, the SRTM data present strong limitations for detailed terrain modelling, due to their low spatial resolution and to the incorporation of objects on the landscape surface that mask the real aspects of the relief. They must also be treated to remove very discrepant values, or outliers, that can interfere with the performance of the slope algorithms, contaminating the valid information. Interpolation by kriging and by the bicubic interpolator improved the spatial resolution of the original data from 3 arcsec to 1 arcsec. The data obtained by kriging were more efficient in softening the artificial features and other objects on the surface of the terrain, and also in generating derived products such as slope thematic maps. The slope classes derived from the DEMs were better simulated in the more mountainous areas. Considering the similarity in the performance of both interpolation methods, the decision to use one of them should be based on the presence or absence of non-relief features in the terrain, such as buildings, forest remnants within grazing lands and/or deforestation in areas occupied by high-canopy vegetation. In such cases kriging is recommended because of its capacity to soften these noisy features. When this is not a problem, the bicubic interpolator is easier to use.
References
Cambardella, C. A.; Moorman, T. B.; Novak, J. M.; Parkin, T. B.; Karlen, D. L.; Turco, R. F.; Konopka, A. E. 1994. Field-scale variability of soil properties in central Iowa soils. Soil Science Society of America Journal, Madison, Vol. 58, No. 5, pp. 1501-1511.
Diggle, P. J.; Ribeiro Jr, P. J.; Christensen, O. F. 2003. An introduction to model-based geostatistics. In: Moller, J. (Org.). Spatial statistics and computational methods. Springer Verlag, Vol. 173, pp. 43-46.
Diggle, P. J.; Ribeiro Jr, P. J. 2002. Bayesian inference in Gaussian model-based geostatistics. Geographical and Environmental Modelling, Vol. 6, No. 2, pp. 129-146.
Instituto Nacional de Pesquisas Espaciais - INPE. 2005. SPRING 4.2. São José dos Campos, CD-ROM.
Golden Software. 1995. Surfer Version 6.01 - Surface mapping system. Golden: Golden Software, Inc.
GS+ Geostatistics for the Environmental Sciences. 2000. Version 5.0.3 Beta, Professional Edition. Plainwell: Gamma Design Software.
Ostman, A. 1987. Quality control of photogrammetrically sampled digital elevation models. Photogrammetric Record, Vol. 12, No. 69, pp. 333-341.
Research Systems Inc. 2002. Environment for Visualizing Images - ENVI Version 3.6. Boulder, Colorado. 126 p.
Valeriano, M. M.; Carvalho Junior, O. A. 2003. Geoprocessamento de modelos digitais de elevação para mapeamento da curvatura horizontal em microbacias. Revista Brasileira de Geomorfologia, Vol. 4, No. 1, pp. 17-29.
Valeriano, M. M. 2004. Modelo digital de elevação com dados SRTM disponíveis para a América do Sul. INPE: Coordenação de Ensino, Documentação e Programas Especiais (INPE-10550-RPQ/756), São José dos Campos, SP. 72 p.
The St Mark's Basilica Pavement: The Digital Orthophoto 3D Realisation to the Real Scale 1:1 for the Modelling and the Conservative Restoration
L. Fregonese, C. C. Monti, G. Monti, L. Taffurelli
DIIAR - Politecnico di Milano P.za Leonardo da Vinci, 32 - Milano - Italy
Abstract
Fig. 1. A view of St Mark's Basilica
The pavement of St Mark's Basilica in Venice is famous for its historical and artistic importance, for its undulation and for the richness of its materials. The conservative restoration and the need for precise knowledge of the entire mosaic led to a digital survey at scale 1:1 of the whole mosaic surface in order to obtain a 3D orthophoto. In a first test area, and subsequently in an area of about 40 m² in the right transept, technologies, processing procedures and instruments were tested to assess the operational feasibility of the survey. After verifying the feasibility of the survey, 3000 targets were placed on the pavement (one every 80 cm, over 2000 m²);
these were topographically surveyed with a total station and levelled in height. Everything was geo-referenced point by point in the national cartographic system. With a high-resolution metric digital camera, employing a 40 mm focal length lens and nadiral shots from about 2.20 m, more than 2000 images were taken. The ground pixel measures 0.5 mm. The DSM (Digital Surface Model), with a 1 cm grid, was obtained by auto-correlation of the images using the Socet Set 5.2.0 software of BAE Systems. Starting from the DSM, we finally proceeded to the realization of the 3D orthophoto of the whole pavement.
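The quoted figures are tied together by the usual ground sample distance relation. The sketch below simply makes that arithmetic explicit; the 9 micrometre sensor pixel pitch is an assumption chosen so that the numbers are mutually consistent, not a value given in the paper.

# Ground sample distance (GSD) check for a nadiral shot
focal_length_mm  = 40.0     # lens focal length
shot_distance_mm = 2200.0   # shooting distance above the pavement (~2.20 m)
pixel_pitch_mm   = 0.009    # assumed sensor pixel size (9 micrometres)

image_scale = shot_distance_mm / focal_length_mm    # ~55, i.e. an image scale of about 1:55
gsd_mm = pixel_pitch_mm * image_scale                # ~0.5 mm on the ground
print(f"image scale 1:{image_scale:.0f}, ground pixel = {gsd_mm:.2f} mm")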
Fig. 2. The current state of the orthophoto inserted into the plan
The orthophoto gives the correct position of each pavement tessera and makes it possible to extract, fully automatically, altimetric profiles between prearranged points and thus to produce, again automatically, templates for restoring the bedding with its characteristic undulation.
First test of 3D orthophoto feasibility
The construction of a 3D orthophoto of the St Mark's Basilica pavement was certainly a unique initiative, both from the methodological point of view, for the precision to be attained, and for the importance of the documentation that this kind of survey provides for the conservative restoration of the pavement. The 3D digital model allows every piece of the mosaic pavement to be understood visually and metrically, and sections to be extracted automatically in any direction. Moreover, the realization of the digital pavement by means of the geo-referenced 3D orthophoto permits the restoration work to be organized with the help of a GIS.
Fig. 3. The 3D orthophoto of the test area
The construction of the DSM of the pavement is the basis of the orthophoto production. In this experimental phase the DSM was created in two ways: by auto-correlation of the images and with a laser scanner. The shots were taken over a limited area with a Rollei D7 metric camera and, if the experiment proved successful, the survey would be applied to the whole pavement. The photographic coverage plan provided for an overlap of 60% between frames in the strip (longitudinal) direction and 20% between adjacent strips (transversal), with a 1 mm ground pixel resolution. The laser scanner employed was a Callidus, used to produce the DSM of the area of interest; the DSM was also produced by aerial triangulation and image correlation using topographic tie points, in order to compare the results obtained with the two methodologies. The software employed included Spider Alias Wavefront for the management of the net points, ArcInfo 8.0.2 and ArcGIS 8.1 for DSM calculation and visualization, and APEX PCI 7.0 for the orthophoto production.
The 3D orthophoto is obtained from the DSM and the images. The altimetric profiles of the pavement were automatically derived from the DSM. The 3D orthophoto was plotted on a dimensionally stable transparent medium at scale 1:1. With the exception of the chromatic appearance, which can certainly be improved, the coincidence between representation and reality is practically perfect.
Fig. 4. Comparison of the orthophoto printed on transparent paper with the original pavement
This first phase, intentionally limited, did not include a numerical comparison of the results. The DSM obtained by auto-correlation of the images was practically perfect, while the laser-derived one presented some uncertainties; the small size of the sample did not allow a reliable evaluation. From a technical point of view, the verified feasibility suggested some operational and instrumental choices for the general survey of the pavement. First of all, the ground pixel dimension was reduced from 1 mm to 0.5 mm by means of a camera, available in the meantime, with a larger sensor and higher resolution (a Rollei DB44 metric). With a shot scale of approximately 1:50, this camera guarantees a 0.5 mm ground pixel.
Fig. 5. The shooting plan for the test area
A second test was made on an area of about 40 m²: the camera was mounted on a trestle, with a nadiral shot distance from the object of about 2 m, giving a ground coverage of 4 m² for each frame, for a total of 27 shots arranged in three strips, with the same transversal and longitudinal overlaps as mentioned above. On the basis of the coverage of each photographic shot and of the overlap between them, the distribution of the tie points on the ground was planned, with a procedure able to contain the position error of the points within a couple of millimetres in plan and under half a millimetre in height. Globally, 70 tie points were employed to calculate the orientation of the shots by aerial triangulation (AT) with projective bundles. At that time (May 2004) we employed the APEX PCI 7.0 software for the orthophoto and 3D model production and for visualization. The shots were imported into the APEX software in TIFF format without any compression, because maximum resolution is required for an orthophoto at 1:1 scale. The software has to manage huge data volumes: each image is approximately 48.7 MB, for a total of about 1.25 GB of information. The software used to visualize all the images uses a so-called "pyramid of images", with eight levels of progressively decreasing resolution, selected according to the active zoom level. The DSM construction was structured on a very dense net of nodes (15 mm x 15 mm): the software measures heights on the oriented images and determines the space between the individual points by interpolation,
so that the resulting surface is as continuous and as coherent with reality as possible.
Fig. 6. The serious swelling and "bubble" in the pavement
An acceptable balance between the estimated computation time and the net density is required. It is also possible to define the degree of surface smoothing and, if necessary, to insert breaklines. In this test area the mosaic is highly degraded, with swollen surfaces and abnormal fractures due to powerful tensions between the elements forming the support stratum of the tesserae. These elevations are attributable to the use, in the bedding mortar, of cement which reacted chemically with the sea salts rising to the surface from the subgrade, causing a highly degraded condition a short time after its use. As a result, the position error is quite small: the maximum differences between the topographically levelled tie points and the corresponding points measured on the model, after aerial triangulation (AT), are less than a millimetre. The DSM also remains within reasonable values in its deviations from the triangulated points (σ = ±1.5 mm).
Fig. 7. The deviation between GCP-TA, GCP-DTM and TA-DTM
The success in terms of feasibility, the importance of the orthophoto for the restorers working in the right transept area, and the possibility of automatically extracting profiles from the surveyed surface, convinced us to extend the 3D orthophoto to the whole pavement. The project involved a team of four operators, with the shots programmed over a period of about six months (2005), compatibly with the religious activities and the tourist flow. A large part of the job, obviously, had to be done during the closing hours of the Basilica. The project strips followed the indications given by the test area, both for the overlaps and for the tie points. The orthophoto obtained, with a ground pixel of about 0.5 mm, would permit the documentation of the disposition and conservation state of each mosaic tessera and, in parallel, give information about the heights of the whole pavement. The project planned to start the restitution from the North Narthex, in order to also experiment with DSM creation from laser point clouds; moreover, the North Narthex was closed to tourists. The restitution was carried out with the new Socet Set 5.2.0 software of BAE Systems, an evolution of APEX PCI.
DSM obtained from laser scanner: comparison with DSM obtained from AT by image correlation
The production of the DSM from the point clouds acquired by laser scanner is certainly faster than the auto-correlation procedure. Time, cost and precision are the primary factors, which is why a comparison test was carried out to evaluate the results of the two systems of DSM production. The test was applied to the entire North Narthex, both with the DSM obtained photogrammetrically and with a Cyrax HDS3000 laser scanner of Leica Geosystems, an instrument particularly suitable, for its characteristics, for architectural surveys. The instrument makes it possible to manage geo-referenced coordinates from the laser station, so as to obtain a point cloud directly in the working geographic system. Essentially, three laser scans of the North Narthex were carried out, stationed on specific geo-referenced targets. A comparison was then made between the laser data and the photogrammetric data, and in particular the differences between the topographic points (GCP) and the DSM obtained by AT. The maximum differences found were of a few millimetres around zero, a good result confirming the results obtained in the test area, but on a larger sample. We then evaluated the differences between the topographic points (GCP) and the DSM extracted from the laser point clouds. In that case the differences were of a few centimetres around zero. On average we can state that the result obtained with the DSM from AT is better than the one obtained from the laser, even if the latter remains within the measurement uncertainty of the laser instrument. The DSM production time, as well as that of the orthophoto, is also faster. For this kind of survey the use of the laser is not ideal, because the scans are taken on grazing surfaces; its use is very different for surveys of elevations, where the laser permits precise model construction.
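The checks described above boil down to summarising the differences between surveyed control-point heights and the heights read from each DSM at the same positions. A minimal sketch of such a summary (illustrative only; the variable names and the statistics reported are assumptions) could be:

import numpy as np

def dsm_deviation_stats(gcp_heights, dsm_heights):
    """Deviations between surveyed control-point heights and DSM heights at the same
    planimetric positions (both in metres). Returns the mean, standard deviation and
    maximum absolute value of the residuals."""
    d = np.asarray(dsm_heights, dtype=float) - np.asarray(gcp_heights, dtype=float)
    return {"mean": d.mean(), "std": d.std(ddof=1), "max_abs": np.abs(d).max()}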
Fig. 8. Laser point cloud and orthophoto inserted into the model
The 3D orthophoto for mosaic restoration
The Procuratoria of San Marco has always taken care to provide important information to optimize the conservation and the enjoyment of the monument, which presents highly complex conservation problems. The degraded state of the pavement mosaic is evident even at a superficial glance, even though it is subject to constant and repeated maintenance (restorations and partial remakings) by the skilled workers of the Procuratoria, and vast portions are protected from the aggression of the tourist flow (2 million visitors per year) by means of different kinds of physical barriers (railings, set routes, carpets).
Fig. 9. Some details from the orthophoto, in scale 1:1
In addition to these factors there are particular environmental conditions due to the lagoon environment, such as the high-water phenomenon and the presence of salt, nitrogen oxides, sulphur dioxide (due to vehicular traffic on the canals) and sulphuric acid (coming from the fermentation of algae), which make the safeguarding problem more severe and more complex. The mosaic in the second test area is an example. The general undulation of the pavement appears as a succession of rises and subsidences and is mainly due to the behaviour of the ground: partly natural (subsidence phenomena), partly due to structural changes that have modified the unit loads on the underlying sandless clays, which have subsided in differentiated ways. The mosaic restoration takes place by the frequently used "reverse" technique, which allows the tesserae forming the composition of the images to be arranged on mobile supports, so that it is possible to work with small portions of the mosaic, easily transportable and manageable. The "original" picture is surveyed with a photograph, whose negative is used, printed on a special material.
Fig. 10. Orthophoto and detachment of mosaic
The process employed by the Basilica technicians, in which the photographed pavement is rectified and reported at real scale, is not suitable for representing the characteristic undulation of the pavement, unlike the 3D digital orthophoto. The latter attains a level of metric detail that has never been reached before and provides a valuable cognitive instrument.
Fig. 11. Details of reconstruction
The digital orthophoto at 1:1 scale definitively fixes the planimetric and altimetric placement of the tesserae from the photographic images, where metric information is associated with the visual information. All this information will be introduced into the GIS.
The Application of GIS in Maritime Boundary Delimitation
I Made Andi Arsana¹, Chris Rizos², Clive Schofield³
¹Department of Geodetic Engineering, Gadjah Mada University, INDONESIA, [email protected]
²School of Surveying & SIS, University of New South Wales, AUSTRALIA, [email protected]
³Centre for Maritime Policy, University of Wollongong, AUSTRALIA, [email protected]
Abstract
The Democratic Republic of Timor-Leste (East Timor) attained independence on 20 May 2002, marking its separation from Indonesia. As a newly independent country, East Timor is faced with a number of significant international opportunities, together with some obligations that it must fulfil, including the delimitation of its international boundaries. Similarly, for Indonesia, with 10 maritime neighbours, the delimitation of maritime boundaries is a significant challenge. This paper describes a preliminary study on the delimitation of the Indonesia - East Timor maritime boundary, with a focus on technical aspects. Geospatial data has been obtained from the Indonesian government and processed with the assistance of a specialised Geographic Information Systems (GIS) application: CARIS LOTS™. The main tasks are to simulate the maritime claims of Indonesia and East Timor, to identify overlapping claims and to delimit theoretical maritime boundaries between the two states. The United Nations Convention on the Law of the Sea 1982, LOSC, provides the main legal reference point, along with relevant state practice and jurisprudence. Potential delimitation lines were calculated in a geodetically robust manner. Three major locations for maritime boundaries were identified as requiring delimitation. These are in the Ombai Strait, the Wetar Strait and in the Timor Sea. A number of alternative potential boundary alignments have also been examined and analysed for the three locations, in the context of
future maritime boundary negotiations between the two States. However, the results presented are not the final boundaries that Indonesia and East Timor are bound to accept. Ultimately it is for the governments of Indonesia and East Timor to negotiate an equitable solution that will satisfy both parties. However, it is hoped that this study will contribute towards achieving that goal.
Keywords: maritime boundaries, delimitation, geographic information system, United Nations Convention on the Law of the Sea
1 Introduction
The Democratic Republic of Timor-Leste (hereinafter referred to as East Timor) attained independence on 20 May 2002 from Indonesia. As a newly independent country East Timor is faced with a number of international opportunities and obligations to fulfil, including the delimitation of international boundaries. It is the case that most of the maritime boundaries around the world are not yet completely delimited. However, defining the boundaries is important as they provide jurisdictional clarity and certainty (Prescott and Schofield 2005: 216-218). This can have multifaceted benefits, for instance in facilitating the sustainable and effective management of the ocean environment and enhancing maritime security. Perhaps of most pressing importance for a developing country, agreement on the limits of maritime jurisdiction serves to secure coastal state rights to access and manage marine resources, both living and non-living. Three maritime areas around Timor Island are of interest with regard to potential resource management and security concerns: the Ombai Strait, the Wetar Strait and the Timor Sea. Figure 1 shows Timor Island, where the two countries, Indonesia and East Timor, are situated, together with the three maritime areas of interest.
Indonesia and East Timor are obliged to negotiate in good faith to establish their international boundaries, both on land and at sea. This means that all parties should follow the agreed procedures and conduct all phases "in a spirit of fairness and effectiveness" (United Nations, 2000: 72). The two states are not, however, obliged to reach agreement through such negotiations. Ultimately, the key objective is to achieve an equitable solution, acceptable to both parties, ideally achieved through negotiations undertaken in good faith and with mutual respect. International land boundaries between Indonesia and East Timor are being negotiated based on treaties from the colonial era, and have subsequently been subject to joint technical surveys in 2002 and 2003. One of the significant achievements with
regard to the land boundary is a provisional agreement signed by Indonesia and East Timor on 8 April 2005. However, agreement on the final delimitation and subsequent demarcation of the land boundary has not yet been wholly achieved, and it is understood that some of the most contentious parts of the boundary remain unresolved. These include the four termini of the land boundary on the coast that, crucially, will serve as the starting points of the maritime boundaries proceeding offshore. Without agreement on those termini, Indonesia and East Timor cannot start discussions on maritime boundaries.
Fig. 1. Map of Timor Island showing Indonesia and East Timor (http://www.lib.utexas.edu/maps/middle_east_and_asia/east_timor_pol_03.jpg)
With regard to maritime limits, Deeley (2001: 22) stated that no negotiations had been undertaken to establish the maritime boundary between Indonesia and East Timor or between their colonial predecessors (the Netherlands and Portugal). Consequently, no convention or treaty concerning maritime boundary delimitation had been concluded between the Netherlands and Portugal, or between Indonesia and East Timor. The present paper aims to provide a critical study of the maritime boundary delimitation issues between Indonesia and East Timor. At present, this task has not yet been the subject of negotiation and can be regarded as a current and urgent matter to be resolved between the two states. For coastal states like Indonesia and East Timor, claims to maritime zones of jurisdiction are important in terms of security, access to and management of resources, and ocean management (Schofield, 2003: 3). Once such
claims have been made, the definition of maritime boundaries becomes highly desirable. As argued by the International Boundary Research Unit (hereinafter referred to as IBRU), "governments across the world agree that clearly-defined maritime boundaries are essential for good international relations and effective ocean management". In a similar vein, boundary delimitation can provide an attractive way for a newly independent state, such as East Timor, to assert its sovereignty and sovereign rights. Delimitation also eliminates zones of overlapping maritime claims, which can cause competition and conflict between neighbouring states, thereby removing a significant potential source of friction and dispute in international relations (Schofield 2005a: 101-102). Geospatial data has been obtained from the Indonesian government and processed with the assistance of a specialised Geographic Information Systems (GIS) application: CARIS LOTS™ (Law Of The Sea). The main tasks described in this paper are the simulation of the maritime claims of Indonesia and East Timor, the identification of overlapping claims and the delimitation of the maritime boundaries between the two states. The United Nations Convention on the Law of the Sea 1982, LOSC, provides the main legal reference point, along with relevant state practice and jurisprudence. Potential delimitation lines were calculated in a geodetically robust manner.
2 Data Management and Software
This section outlines the key issues relating to the types of data and the data collection process required for the analysis undertaken in the current research. It represents precisely the kind of information gathering exercise faced by any coastal state prior to undertaking the delimitation of its maritime boundaries. The data collected for this research reflect the technical considerations states should take into account before proceeding with maritime boundary delimitation. Prescott and Schofield (2005: 305-326) describe the practical considerations in maritime boundary delimitation, including preliminary issues, preparing for maritime boundary delimitation, the negotiations themselves and the drafting of the final treaty. The core data collected and analysed include conventions, treaties and legal documents, the authoritative nautical chart of Timor Island and the
surrounding area, a topographical map of Timor Island at a sufficient scale, geological and geomorphological data, demographic data for Timor Island and other subsidiary information, physical geographic data for Timor Island and the surrounding area, and the coordinates of the existing basepoints and baselines of Timor Island. Much of the material was collected from Indonesian government institutions, including the Indonesian National Coordinating Agency for Surveys and Mapping (Badan Koordinasi Survey dan Pemetaan Nasional - Bakosurtanal) and the Hydro-Oceanographic Office of the Indonesian Navy (Dinas Hidro-Oseanografi - Dishidros). The digital nautical charts obtained from Bakosurtanal are the EEZ Charts of the Indonesian Archipelago, or Peta Zone Ekonomi Exclusif Wilayah Negara Kepulauan Indonesia. Being in Adobe Illustrator format, and therefore not recognised by the processing software CARIS LOTS™, they were not immediately ready for analysis. They were converted to a suitable format and geo-registered so that the coordinate system appropriately represents the "real world". These digital charts cover Timor Island and the surrounding area, including the Alor Archipelago, Pulau Wetar and Pulau Sermata, and small islets such as Pulau Kisar, Pulau Leti, Pulau Moa, Pulau Lakor and Pulau Meatimiarang. They are at a scale of 1:1,000,000 and are divided into two files, each covering a different area. They use WGS 1984 as the datum and are depicted in the Mercator projection. They are joint products of Bakosurtanal and Dishidros published in 1999. The collected data were processed with the assistance of Geographic Information System (GIS) technology. CARIS LOTS v4.0 was used as the facilitating GIS software to simulate maritime claims, identify overlapping claims and finally to generate potential maritime boundaries between Indonesia and East Timor. The School of Surveying and Spatial Information Systems, University of New South Wales, in Sydney, Australia, provided the software and licence. CARIS LOTS™ is one of the recent products of CARIS Canada, which has been developing geomatics tools for 26 years. Sutherland and Nichols (2002: 6-7) explain that "CARIS LOTS™ is a geographic information application, based on CARIS GIS. It is designed to aid in the delineation and delimitation of marine boundaries as required by the United Nations Convention on Law of the Sea (the LOSC)." Carleton and Schofield (2002: 40) give some additional information on CARIS LOTS™, describing it as a specially designed GIS which visualizes the interpretation of LOSC Article 76 for the definition of the outer limit of the continental shelf. The present research focuses on those particular features of CARIS LOTS™ related to the outer limits and boundaries functions for the delimitation of maritime zones. This is one of the three stand-alone products of
CARIS LOTS™, known as LOTS Limits and Boundaries. CARIS Canada claims that LOTS Limits & Boundaries includes all the geodetic tools and data necessary for specialists in law of the sea applications to work within the EEZ. Since the early 1980s, CARIS LOTS™ has been used by many organizations around the world to generate maritime boundaries. For example, the UKRO has been using CARIS LOTS™ for about six years, assessing the quality and ability of the software as a tool for assisting with the generation of maritime boundaries. According to the UKRO, CARIS LOTS™ makes this process significantly easier, providing an acceptable level of accuracy while expertly handling the complicated geodetic calculations related to marine boundary delimitation (Gent, 2001). The same author also highlights another important issue, namely how to build trust among CARIS LOTS™ users. While it is clear, according to Gent (2001), that CARIS LOTS™ is capable of assisting experts to efficiently generate maritime boundaries, users also need to be convinced of the capability of the CARIS LOTS™ software. It has been shown that using GIS in maritime boundary delimitation significantly enhances efficiency, thus saving both money and time (Gent, 2001). According to the same author, GIS can be a powerful tool as long as users understand its limitations as well as its advantages. To some extent, however, GIS is only an assisting tool, based on mathematical procedures. The results necessarily depend on the quality of the input data: incomplete or inaccurate data will never produce good results, or "garbage in, garbage out" (GIGO), as is emphasised by Gent (2001: 3). GIS is also still costly, considering both the price of the software and the expense of training personnel in its use. Regardless of how useful GIS is in assisting people to generate maritime boundaries, it will never "turn amateurs into experts" (Gent, 2001: 5). Like all other technology, it too has its limits.
3 GIS and Maritime Boundary Delimitation
The Environmental Systems Research Institute (ESRI), Redlands, California, defines a GIS as "an organized collection of computer hardware, software, geographic data, and personnel designed to efficiently capture, store, update, manipulate, analyse, and display all forms of geographically referenced information". At the time of its early development in the 1970s and 1980s, GIS largely dealt with the management of land-based geospatial data (Carleton and Schofield, 2002: 39). Since then, GIS has been widely
used in many fields, such as surveying, environmental management, tourism, urban planning, taxation, location-based services and health. GIS has now entered the field of maritime boundary delimitation and is integral to the process of delimiting and managing maritime boundaries (Hirst and Robertson, 2003). Australia has been using GIS to maintain its data on maritime boundaries for many years, in a system known as the Australian Maritime Boundaries Information System (AMBIS). This system is used to store the data rather than to actually generate the maritime boundaries. However, GIS is also important in resolving the issue of the lack of knowledge about maritime boundaries and limits by enhancing the visualization of the marine boundaries that do exist. Furthermore, GIS can be used to illustrate problem areas and offer alternative solutions (Sutherland and Nichols, 2002: 6).
GIS was initially used in the 1970s to calculate geodetically robust maritime boundary delimitation lines. Unfortunately, there were only limited advances until the mid-1980s, when software such as DELMAR (DELimitation of MARitime boundaries) was created. DELMAR was developed by the geodesist Galo Carrera of Geometrix, with funding provided by the Canadian government. According to Carleton and Schofield (2002: 40), DELMAR enables the computation of maritime areas, the determination of offshore limits and the computation of equidistant/median lines. These same authors also assert that even though the development of GIS software has produced many innovative solutions, the requirement for geodetic calculations still had to be satisfied in separate programs, with the coordinate results then imported into the GIS.
Palmer and Pruett (1999) conducted research into the application of GIS to maritime boundary delimitation. Their paper, presented at the 19th Annual ESRI International User Conference in San Diego, California, describes a GIS approach to maritime boundary management. The authors consider maritime boundaries as spatial data that can be manipulated within a GIS. The three requirements for this are spatial information (where), attribute data (what) and the spatial relationship of each feature (neighbourhood). According to the same authors, "these conditions are met by the declarations of the coastal state in applying for recognition of its jurisdiction". The basic features enabling GIS manipulation of the data are points (basepoints, geographic features serving as points such as islands, turning points of baselines, etc.), lines (baselines) and polygons (e.g. islands, mainland, etc.). These features are accompanied by their attributes, and finally the GIS relates each feature to its neighbouring features. Basic universal GIS operations have been used to define basepoints (plotting), generate baselines (drawing), define maritime zones (envelope of arcs, buffering) and delimit maritime boundaries (distance computation and plotting).
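As a toy illustration of the "envelope of arcs"/buffering operation mentioned above (not a CARIS LOTS workflow), a baseline can be buffered by 12 nautical miles to outline a territorial sea limit. shapely works on planar coordinates, so this assumes the baseline has already been projected to a metric coordinate system, and the coordinates below are invented.

from shapely.geometry import LineString

NM = 1852.0                                    # one nautical mile in metres
# Hypothetical baseline segment in projected (metric) coordinates
baseline = LineString([(0, 0), (40_000, 5_000), (90_000, -2_000)])
territorial_sea = baseline.buffer(12 * NM)     # envelope of 12 nm arcs around the baseline
print(territorial_sea.area / 1e6, "km^2 enclosed by the 12 nm envelope")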
As the need to define maritime boundaries has increased, commercial software has likewise become more readily available. For example, the Department of Geomatics at the University of Melbourne, Australia, on behalf of AUSLIG (now Geoscience Australia), has developed software for maritime boundary delimitation, commonly known as MarZone (an abbreviation of Maritime Zone boundary). Collier et al. (2002: 77) state that MarZone is maritime boundary delimitation software that can satisfy the requirements of, and meet the technical specifications for, maritime boundary delimitation. Defining arcs on the surface of the reference ellipsoid by a locus of points equidistant from the circle centre, calculating the intersection point between such arcs, offsetting lines from straight baselines defined as geodesics, intersecting geodesics with arcs, and computing geodesic azimuths and distances over very long lines (up to 350 nm) are all included. The authors confirm that the emphasis in the MarZone design is on the development of "an efficient and robust solution based on strict implementation of the relevant UNCLOS provisions and a rigorous geodetic methodology" (Collier et al., 2002: 77-78). Geoscience Australia uses this software to rigorously compute Australia's maritime boundaries, which are then integrated within the Australian Maritime Boundaries Information System (AMBIS). According to Hirst (2005, personal communication via email) of Geoscience Australia, AMBIS is going to be modified into another software/data system known as the Australian Maritime Space Information System (AMSIS), which will contain more general information pertaining to maritime matters, rather than only that relating to boundaries. Indonesia is another country now assessing the possibility of developing an integrated system for managing its land and maritime boundaries. Sobar Sutisna, Head of the Centre for Administrative Boundary Mapping of Bakosurtanal, touched upon this idea in his seminar paper presentation in Yogyakarta, Indonesia, on 3 May 2005. According to Sutisna (2005), the new system will be referred to as ILMBIS, standing for Indonesian Land and Maritime Boundary Information System. However, this is a very preliminary idea, which will need considerable further development.
4 Methodology
A literature review has been undertaken to explore cases and developments in maritime boundary delimitation. In delimiting maritime boundaries, several different methods and approaches have been adopted. Some popular methods include the equidistance line, parallels and meridians, the parallel
line, and natural boundaries (natural prolongation, and the thalweg)¹. Among the approaches demonstrated either by third parties, such as the International Court of Justice, or by state practice, there is a trend towards what is called the two-stage approach, which has been used in several recent maritime boundary delimitation cases. This approach requires the drawing of a strict or robust equidistance or median line between the states in question as the provisional line. After drawing the provisional boundary line, relevant factors are then considered to assess whether the line needs to be shifted to achieve an equitable solution.

¹ See for example Evans (1989), Legault and Hankey (1993), IHO (1993), United Nations (2000), Prescott and Schofield (2005), and IHO (2006).
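One geodetic ingredient of the two-stage approach is finding points equidistant from basepoints on the two coasts on the ellipsoid. The sketch below uses pyproj's geodesic routines on WGS84 to compute the midpoint of the geodesic between two basepoints; the coordinates are invented for illustration, and a full median line would repeat such calculations for many basepoint pairs.

from pyproj import Geod

geod = Geod(ellps="WGS84")
lon_a, lat_a = 124.05, -9.33        # assumed basepoint on one coast (illustrative)
lon_b, lat_b = 124.35, -9.18        # assumed basepoint on the opposite coast (illustrative)

# Geodesic azimuth and distance between the two basepoints
fwd_az, back_az, dist = geod.inv(lon_a, lat_a, lon_b, lat_b)

# Point halfway along that geodesic: equidistant from both basepoints along the connecting line
mid_lon, mid_lat, _ = geod.fwd(lon_a, lat_a, fwd_az, dist / 2.0)
print(f"median point: {mid_lat:.5f}, {mid_lon:.5f}  ({dist / 2:.0f} m from each basepoint)")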
5 Indonesia-East Timor Maritime Boundary Delimitation
Maritime boundary delimitation between Indonesia and East Timor involves several steps: designation of basepoints and description of baselines, assessment of the claims of each state and of potential overlapping claims, creation of robust equidistance lines, shifting of the robust equidistance lines by considering relevant factors and, finally, generation of the potential final boundaries between Indonesia and East Timor.
a) Designation of basepoints and baseline description
Defining basepoints and the baseline is essential before delimiting maritime boundaries, because the baseline is the reference from which the delimitation line is measured, especially when the equidistance method is employed. Indonesia, in this case, has defined basepoints and baselines across the Indonesian Archipelago. The coordinates of the basepoints and baselines are listed in a Government Regulation issued in 2002, PP No. 38/2002. Unfortunately, the basepoints and baselines listed in PP No. 38/2002 do not completely cover the entire Indonesian Archipelago. On the other hand, no legal instrument has been promulgated by East Timor to specify its basepoints and baselines. Therefore, new basepoints and baselines had to be defined for this study, especially in the border area between Indonesia and East Timor. Thus, the maritime boundary delimitation considers both existing basepoints and some new basepoints. To define new basepoints and baselines for Indonesia, it is appropriate to consider the basepoints and baselines that Indonesia had prior to the integration of East Timor as one of Indonesia's provinces in 1976, listed
in Law No. 4/Prp/1960. Additionally, it is worth considering some potential basepoints proposed by Bakosurtanal (2005)² and LPPM ITB (2002). Figures 2 and 3 depict the configurations of basepoints proposed by Bakosurtanal and by LPPM ITB, respectively.
Fig. 2. Basepoint configuration proposed by Bakosurtanal
Fig. 3. Basepoint configuration proposed by LPPM ITB
2 Data obtained from the Centre for Administrative Boundary Mapping, Bakosurtanal, through Mr. Eko Artanto.
b) The assessment of maritime jurisdiction and potential overlapping claims
From the geographical setting and distance measurements from the normal baseline, Indonesia and East Timor need to delimit maritime boundary lines in three different areas. These three locations are to the north of the Oecussi Enclave (Ombai Strait), north of Timor Island (Wetar Strait), and south of Timor Island (Timor Sea). The maritime zones involved at each location must be clearly defined. Three maritime zones relevant to this study have been identified: the territorial sea, the EEZ and the continental shelf. As the contiguous zone is part of the EEZ, it too has been recognized along with the EEZ. By defining the zones, it will also be clear whether each maritime zone is an adjacent or an opposite one.
For the Ombai Strait, three considerations need to be taken into account: the Enclave of Oecussi, the termini of Noel Besi and Noel Meto, and Pulau Batek. Oecussi is geographically located in the western part of Timor Island, approximately seventy km west of East Timor proper. Oecussi encompasses an area of about 2,700 sq. km (Bano and Rees, 2002). Its population is approximately 58,521, which equates to about 6.3 percent of the total population of East Timor (UNFPA Census East Timor, 2004). The existence of Oecussi, enclaved within West Timor save for its north-facing coast, means that there are two land boundary termini on the coast, located at Noel Besi to the west and Noel Meto to the east. These two termini will serve as the starting points of the maritime boundaries between Oecussi and Indonesia's West Timor. After observing the location of the two termini in the 1904 Treaty between the Netherlands and Portugal, and also considering analyses carried out by Prescott (2000), LPPM ITB (2002), and Murphy (2002), it has been identified that the location of Noel Meto is 09° 20' 09.7" S and 124° 02' 39.1" E, while Noel Besi lies at 09° 10' 27.1" S, 124° 28' 33.2" E.
Another important feature in the Ombai Strait is Batek Island (Pulau Batek), whose sovereignty was disputed by Indonesia and East Timor. Pulau Batek, located near Kupang regency in East Nusa Tenggara, became a contentious issue after the East Timor government claimed, at the beginning of 2004, that Pulau Batek was part of the Oecussi Enclave. However, after much debate, in August 2004 East Timor finally acknowledged Indonesian sovereignty over Pulau Batek, after reviewing several documents and maps proving that the island is within Indonesian territory. This does not, however, resolve the potentially contentious issue of Pulau Batek's role in the delimitation of an adjacent maritime boundary between Indonesian West Timor and East Timorese Oecussi. Figure 4 shows the Ombai Strait and its surrounding features.
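The terminus coordinates above are quoted in degrees, minutes and seconds. A small helper of the following kind (a hypothetical utility, not part of the study) converts them to the signed decimal degrees that most GIS packages expect when building basepoint layers.

```python
# Hypothetical DMS-to-decimal-degrees helper for terminus/basepoint coordinates.
def dms_to_dd(deg: float, minutes: float, seconds: float, hemisphere: str) -> float:
    """Convert degrees-minutes-seconds plus hemisphere (N/S/E/W) to signed decimal degrees."""
    dd = deg + minutes / 60.0 + seconds / 3600.0
    return -dd if hemisphere.upper() in ("S", "W") else dd

# Noel Meto terminus as quoted in the text: 09 deg 20' 09.7" S, 124 deg 02' 39.1" E.
noel_meto = (dms_to_dd(9, 20, 9.7, "S"), dms_to_dd(124, 2, 39.1, "E"))
print(noel_meto)  # approximately (-9.3360, 124.0442)
```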
Fig. 4. The Ombai Strait
Maritime boundary delimitation in the Wetar Strait involves two important considerations: the Indonesian archipelagic baseline 'cutting off' some of Pulau Atauro's entitlement, and the location of the land boundary terminus on the coast at Mota Biku (see Figure 5). Pulau Atauro is part of East Timor and is mentioned in its constitution. Pulau Atauro lies nearer to Indonesia than to East Timor proper. Its closest geodetic distance to Indonesia's Pulau Liran is approximately 7 nm (13 km), while its closest geodetic distance to the Timor mainland is about 13 nm (23 km). Pulau Atauro is recorded as being about 140 sq. km, or around 750 times larger than Pulau Batek. The population of Pulau Atauro is about 7,500 (UN, 2001), although other sources indicate that this has increased to around 8,000 (Atauro Island, 2006). Being an island that meets the qualification defined in Article 121 of the LOSC, Pulau Atauro can be incorporated as part of East Timor's baseline system. It is therefore entitled to generate its own maritime zones, including territorial sea, EEZ and continental shelf. This means that the island will contribute significantly to the maritime area that East Timor can secure. Pulau Atauro will therefore also be an influencing factor on the final maritime boundary between Indonesia and East Timor. Another important consideration is the terminus at Mota Biku, which will serve as the starting point of the maritime boundary line prolonging the land boundary that terminates at Mota Biku. After observing the location of the Mota Biku terminus in Article V of the 1904 Treaty between the Netherlands and Portugal, and also considering
the analysis done by Prescott (2000), LPPM ITB (2002), and Murphy (2002), it has been identified that the Mota Biku terminus lies at 09° 57' 23.0" S and 124° 56' 58.4" E.
Fig. 5. The Wetar Strait
Among the three boundary locations, the Timor Sea seems to be the most complicated area. This is because there are some earlier boundary agreements to take into consideration. In addition, four small islands close to Timor Island, namely Pulau Leti, Pulau Moa and Pulau Lakor (of Indonesia) and Pulau Jaco (of East Timor), also need to be taken into account. There will be two lateral boundary lines in the Timor Sea: the western segment departing from the land boundary terminus of Mota Masin, and the eastern segment departing from a point between Pulau Jaco and Pulau Leti. After observing the location of the Mota Biku terminus in Article V of the 1904 Treaty between the Netherlands and Portugal, and also considering the analysis done by Prescott (2000), LPPM ITB (2002), and Murphy (2002), it has been identified that this terminus lies at 09° 27' 41.4" S and 125° 05' 18.1" E. Figure 6 depicts the existing maritime boundaries in the Timor Sea, as well as relevant features to consider for maritime boundary delimitation between Indonesia and East Timor.
Robust equidistance line
Fig. 6. The Timor Sea
State practice, together with jurisprudence from several court judgments on maritime boundary cases, indicates that the delimitation of maritime boundaries is usually initiated by the generation of a provisional equidistance line, which is attractive largely for reasons of certainty, both in calculation and in practice. In many cases the provisional equidistance line forms the basis of the final boundary line, modified one way or another in light of relevant circumstances. The present analysis follows the same strategy, drawing robust equidistance lines at the three different locations. This applies to both the adjacent and the opposite cases. The generation of a provisional line follows the method of generation of a strict equidistance line. Any features influencing the generation are considered equal in terms of importance and value. Technically, the generation of the robust equidistance lines was achieved with the assistance of CARIS LOTS™.
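In this study the lines themselves were produced with CARIS LOTS; the sketch below is only a conceptual illustration, assuming planar (projected) geometry, the shapely and numpy libraries, and hypothetical basepoint coordinates, of what a strict equidistance construction does: it retains the points that are (near-)equidistant from the nearest basepoints of the two States.

```python
# Conceptual sketch of a strict equidistance line (planar approximation only);
# shapely/numpy and all coordinates are illustrative assumptions, not study data.
import numpy as np
from shapely.geometry import MultiPoint, Point

basepoints_a = MultiPoint([(0, 0), (10_000, 2_000), (20_000, 1_000)])          # State A (m)
basepoints_b = MultiPoint([(0, 40_000), (12_000, 38_000), (22_000, 41_000)])   # State B (m)

# Keep candidate points whose distances to the nearest basepoint of each State
# agree within a tolerance; a rigorous solution would use geodesics on the ellipsoid.
equidistant = []
for x in np.linspace(0, 22_000, 221):
    for y in np.linspace(0, 41_000, 411):
        p = Point(x, y)
        if abs(p.distance(basepoints_a) - p.distance(basepoints_b)) < 100.0:
            equidistant.append((x, y))
```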
c) Equidistance line shift and potential final boundary line
This step analyses the factors that influence the robust equidistance line as a provisional line. By critically considering the relevant factors, it is possible to shift the provisional line from its original position. The line can be shifted either closer to or further away from either Indonesia or East
Timor. Defining and analysing relevant factors will be the most important part of this step, because the final position of the maritime boundary line will depend on a number of factors and on the comprehensive analysis of those factors. Obviously, geographic aspects will be the most important to take into account. They are considered by assigning different weights/effects to special features such as small islands, and by giving weight in proportion to, for example, the length of coastlines. Other factors, such as natural resource deposits, the respective economic situations of the parties in question, and demographic conditions, will also be assessed to see whether they may significantly shift the robust equidistance line. This is, however, the most uncertain part of the research, and ultimately it is up to the authorities of the two negotiating States to make the final analysis.
6 Results and Discussion
In this paper some options have been assessed for the maritime boundary between Indonesia and East Timor. The different options were obtained by implementing different types of baselines for Indonesia (normal and straight baselines) and different weightings assigned to particular geographical features, such as small islands, that arguably have a disproportionate influence on the potential delimitation line. With regard to the type of baseline applied, Indonesia is entitled to implement archipelagic baselines as it is an archipelagic State. However, as previously mentioned, Indonesia has not fully completed the definition of its archipelagic basepoints/baselines in the area where maritime boundary delimitation between Indonesia and East Timor is required. This research assessed the possibility of using normal baselines for Indonesia, theoretical archipelagic baselines and the existing baselines listed in PP No. 38/2002, while normal baselines are employed for East Timor.
a) Options in the Ombai Strait
With regard to the Ombai Strait, the provisional line generated uses the theoretical Indonesian archipelagic baselines connecting Pulau Pantar, Pulau Batek and Noel Besi, but these are not assigned full effect. The two lateral segments, departing from the termini of Noel Besi and Noel Meto, use normal baselines for Indonesia and East Timor. Meanwhile, the opposite boundary line between Oecussi and Indonesia's Pulau Pantar, Pulau
Treweg and Pulau Alor uses the theoretical archipelagic baseline for Indonesia, connecting Pulau Pantar and Tanjung Sibelah at Pulau Alor. Unlike the options described in the report issued by LPPM ITB (2002), this analysis considers Pulau Batek a part of Indonesia. Consequently, only options treating Pulau Batek as under Indonesian sovereignty are described. As explained earlier, this is because the East Timorese authorities have already recognized Pulau Batek as part of West Timor, Indonesia. Figure 7 shows the options in the Ombai Strait.
The effect given to Pulau Batek will be the key issue regarding the final western lateral line. As Pulau Batek is a relatively small, unpopulated island, East Timor is highly likely to argue that it should not be given full effect. However, if Pulau Batek were considered a fully-fledged island rather than a 'rock' (according to Article 121 of the LOSC) it would be entitled to claim a full suite of maritime zones. This would then provide a strong argument for Indonesia that the island should be given full effect. Even if regarded as a fully-fledged island under Article 121 of the LOSC, however, modern international law, as evident in several cases, tends not to allow a small island to have a disproportionate effect that may result in inequitable maritime boundaries. As highlighted by Lowe et al. (2002), this was the situation in the Western Approaches case (1977) between France and the United Kingdom, where the United Kingdom's Scilly Isles were given half effect by the arbitral tribunal. Another instance is the Tunisia/Libya case (1982) before the ICJ, in which the Tunisian Kerkennah islands were similarly given half effect.
Line A shown in Figure 7 gives full effect to Pulau Batek, and line B gives half effect to the island. Line C is constructed by ignoring Pulau Batek except within a 12-nm territorial sea, where it is accorded full effect and thus is semi-enclaved. In terms of the maritime area Indonesia and East Timor may secure, options B and C are not significantly different. However, the shapes of the two maritime areas obviously differ, which has implications for subsequent maritime management. For option C, Pulau Batek secures a pocket of territorial sea with East Timor's EEZ extending to the northeast of the island, beyond Pulau Batek's territorial sea. On the other hand, if option B is chosen, the line will head to the north and slightly east without a semi-enclaving effect for Pulau Batek. Considering the potential complications of maritime boundary delimitation and subsequent administration caused by each option, along with the question of equitability for Indonesia and East Timor, option B, that is, affording Pulau Batek half effect, is viewed as a plausible option that Indonesia and East Timor may seriously consider.
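In practice, a half-effect line such as line B is commonly constructed by first computing the equidistance line with the feature given full effect and the line with the feature ignored, and then taking the line midway between the two. The sketch below assumes this common construction, the shapely library and hypothetical projected vertex lists; it is not the geometry actually computed in the study.

```python
# Schematic half-effect construction: split the difference between the
# full-effect and zero-effect equidistance lines (hypothetical coordinates).
from shapely.geometry import LineString

line_full_effect = LineString([(0, 0), (1_000, 5_000), (2_500, 11_000)])   # feature given full effect
line_zero_effect = LineString([(0, 0), (2_000, 5_200), (4_300, 11_500)])   # feature ignored

# Sample both lines at the same fractions of their lengths and average the positions.
half_effect_pts = []
for i in range(21):
    f = i / 20
    p_full = line_full_effect.interpolate(f, normalized=True)
    p_zero = line_zero_effect.interpolate(f, normalized=True)
    half_effect_pts.append(((p_full.x + p_zero.x) / 2.0, (p_full.y + p_zero.y) / 2.0))

line_half_effect = LineString(half_effect_pts)
```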
Fig. 7. Options in the Ombai Strait
Fig. 8. Option in the Wetar Strait
For the eastern lateral boundary and the opposite one, there is no such complication. There appears to be only one feasible option for the eastern lateral boundary, with no geographical features (such as islands, low-tide elevations (LTEs), rocks and so on) to be taken into consideration. In addition, the use of normal baselines for both Indonesia and East Timor will not result in many different options. For the opposite segment, the median line seems to be the most equitable solution for both countries. The difference between applying Indonesia's theoretical archipelagic baselines and ignoring them in favour of normal baselines appears to be negligible. Nor do there seem to be any special considerations that would justify shifting the potential delimitation line from the median line, as there are no special features relating to Indonesia (Pulau Pantar, Pulau Treweg and Pulau Alor) or East Timor (Oecussi).
b) Options in the Wetar Strait
Concerning delimitation options in the Wetar Strait, the definition of equidistance lines is influenced by the selection of basepoints for Indonesia, while East Timor uses the basepoints along its normal baseline. Figure 8 shows the options in the Wetar Strait by simulating the Indonesian archipelagic baseline with one of its segments connecting Tg. Manamoni to Pulau Reong, to the northwest of Pulau Wetar. If this baseline segment were ignored, East Timor's maritime claim might fall within Indonesian waters beyond the
baseline segment. This is undoubtedly problematic and is not an option Indonesia can accept. To deal with this, the baseline segment needs to be considered so that East Timor's outer maritime claim does not go beyond Indonesia's baseline. Line a-b is a median line between Pulau Atauro and Indonesia's baseline segment connecting Tg. Manamoni and Pulau Reong, giving half effect to Indonesia's baseline segment. Figure 8 suggests that East Timor's potential maritime area beyond line a-b (area A) is conceded to Indonesia, and that East Timor is given areas B and C as compensation. Area A measures about 300 sq. km, while areas B and C are 100 sq. km and 200 sq. km respectively. The area traded off between Indonesia and East Timor is therefore equivalent, an area-compensated solution. However, as also highlighted by LPPM ITB (2002: 7-4 to 7-5), the size of the area traded off is not the only issue. Both sides should also be seriously concerned with the economic and security factors involved in these options.
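Whether such a trade-off balances can be checked by projecting the exchanged areas to an equal-area coordinate system and comparing them. The sketch below assumes the pyproj and shapely libraries and purely hypothetical polygons; it only illustrates the bookkeeping, not the actual areas A, B and C.

```python
# Sketch of an area-compensation check with hypothetical polygons: project to an
# equal-area CRS (cylindrical equal area centred near the study area) and compare.
from pyproj import Transformer
from shapely.geometry import Polygon
from shapely.ops import transform

to_equal_area = Transformer.from_crs(
    "EPSG:4326", "+proj=cea +lon_0=125 +datum=WGS84 +units=m", always_xy=True
).transform

def area_km2(poly_lonlat: Polygon) -> float:
    """Area of a lon/lat polygon in square kilometres after equal-area projection."""
    return transform(to_equal_area, poly_lonlat).area / 1e6

conceded = Polygon([(125.00, -8.10), (125.20, -8.10), (125.20, -8.25), (125.00, -8.25)])
compensated = Polygon([(125.20, -8.10), (125.35, -8.10), (125.35, -8.28), (125.20, -8.28)])
print(area_km2(conceded), area_km2(compensated))   # roughly balanced?
```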
Fig. 9. Option in the Wetar Strait
c) Options in the Timor Sea
The western lateral boundary in the Timor Sea is relatively straightforward. The line departs southward from the Mota Masin land boundary terminus on the coast, using the normal baseline for East Timor and the archipelagic baseline for Indonesia. The equidistance line seems to be an acceptable method for producing an equitable solution for both Indonesia and East Timor. To an extent, the argument proposed by Antunes (2003), suggesting the use of a "corridor solution" to prevent East Timor from being squeezed, is considered to have some merit, but it is not considered convincing enough to justify departing from the equidistance line proposed here. While East Timor's coastal front has been calculated at around 140 nautical miles, each lateral line of equidistance narrows by only around 10 nautical miles where they intersect with a median line drawn from the opposite Australian coast in the central Timor Sea, approximately 120 nautical miles away (Prescott, 1999: 80). In addition, defining a perpendicular to the general direction of the coast is often problematic, because the exercise depends very much on the scale of the chart and the coastlines under consideration. Figure 9 depicts the western lateral boundary in the Timor Sea.
Fig. 10. Option for the eastern segment of the Timor Sea
A more serious complication occurs with regard to the eastern segment. Some small islands have the potential to significantly influence the
location of the boundary line. As stated earlier, Pulau Jaco and Pulau Leti both seem to need particular consideration. According to the Law of the Sea, relatively small islands should not be allowed to have a disproportionate and inequitable effect upon maritime boundaries (Lowe et al., 2002). It is suggested that Indonesia's Pulau Leti not be given full effect when delimiting the maritime boundary, while giving Pulau Jaco less than full effect is also worth considering. The options for boundary lines in the eastern segment of the Timor Sea will therefore depend on the effect/weight finally assigned to both Pulau Leti and Pulau Jaco. Figure 10 shows two different lines as options. Line A gives zero effect to Pulau Jaco and half effect to Pulau Leti, while line B gives half effect to Pulau Jaco and zero effect to Pulau Leti. Given that Pulau Leti is significantly larger than Pulau Jaco, which is the case, it is contended that greater weight should be assigned to Pulau Leti, thus favouring line A as a potential delimitation line in this area.
Fig. 11. Option for the western segment of the Timor Sea
After exploring several different variations, it was found that ultimately there is relatively little difference between the various lines in terms of result. It is also apparent that lines obtained by giving partial effect to Pulau Leti do not diverge greatly. Apparently, this is due to the presence of the other Indonesian islands to the east of Pulau Leti. This factor therefore needs to be anticipated by both Indonesia and East Timor. For instance, East Timor might claim that the islands to the east of Pulau Leti (i.e. Pulau Moa, Pulau Lakor and Meatimiarang) should also be discounted
in order for East Timor to avoid being 'squeezed'. In any case, although the parties involved in the negotiations may consider discounting islands to the east of Pulau Leti, this is not viewed, in this analysis, as equitable. Combining the above results, Figure 11 depicts the combined maritime boundaries in the Timor Sea.
Fig. 12. Combined option
The preferred potential delimitation option in the Ombai Strait is based on the first alternative, where the choice of boundary is very much influenced by the effect given to Pulau Batek. Giving half effect to Pulau Batek is the option recommended here. In the Wetar Strait, the boundary line to the north of Pulau Atauro is reasonably complicated because of the Indonesian archipelagic baseline segment, which potentially cuts off Pulau Atauro's territorial sea. To overcome this, Indonesia may implement archipelagic baselines with a segment connecting Pulau Reong and Tanjung Manamoni. In addition, certain maritime areas may need to be traded off. In the Timor Sea, two lateral boundaries are assessed. The western segment seems to be less complicated than the eastern one, as the existing geographical features in the area are considered irrelevant for any shift in the equidistance line. Meanwhile, some options are possible for the eastern segment. The alternatives for the eastern segment are influenced by the combination of effects given
to Pulau Jaco and Pulau Leti. Giving half effect to Pulau Leti and zero effect to Pulau Jaco is recommended in this case. It is worth stressing that in reality the final course of the maritime boundary is subject to negotiations between Indonesia and East Timor. The solution proposed here, whilst lacking official character, can hopefully be treated as an initial comparative study.
7 Conclusions and Suggestions
a) Concluding remarks
1. Three possible locations for maritime boundaries between Indonesia and East Timor have been identified. The first of these is in the Ombai Strait, between East Timor's enclave of Oecussi and Indonesia's Pulau Pantar, Pulau Treweg and half of Pulau Alor. The second is in the Wetar Strait, between the eastern half of Timor Island and Indonesia's Pulau Alor, Pulau Liran, and Pulau Wetar. The third is in the Timor Sea, where, in the maritime area to the south of Timor Island, two segments of boundary need to be delimited: the western segment, an adjacent boundary between the western half and eastern half of Timor Island, and an eastern segment, which departs southward from a point between East Timor's Pulau Jaco and Indonesia's Pulau Leti.
2. To delimit the maritime boundaries, a two-stage approach has been proposed. The first step is to generate strict equidistance lines and then shift the lines to achieve what is considered to represent a more equitable solution overall. This method has been accepted as an effective approach, and has been used by various States as well as by the ICJ to delimit maritime boundaries.
3. In the Ombai Strait, the effect given to Pulau Batek will have the most significant influence when considering options for maritime boundaries. Giving half effect to Pulau Batek seems to be a preferable solution for both Indonesia and East Timor. However, other options, such as giving Pulau Batek nil effect or full effect, have also been analysed.
4. In the Wetar Strait, the existence of East Timor's Pulau Atauro, which lies closer to Indonesia than to the mainland of East Timor, requires an 'innovative' solution, because a segment of the baseline could possibly cut off East Timor's territorial sea. One alternative is to implement an archipelagic baseline for Indonesia connecting Pulau Reong and Tanjung Manamoni, and to draw a line between the segment and
Pulau Atauro (see Figure 8). This is done by trading off maritime areas in order to achieve an area-compensated solution.
5. In the Timor Sea, the delimitation of the western segment is relatively straightforward. It is concluded in this study that there are no special circumstances that would justify any shift in the boundary line from the position of the equidistance line. By contrast, the options in the eastern segment are significantly more problematic. They are very much influenced by the effect/weight given to two small islands: East Timor's Pulau Jaco and Indonesia's Pulau Leti. The recommended alternative for the eastern segment in the Timor Sea is to give zero effect to Pulau Jaco and half effect to Pulau Leti. Greater effect is given to Pulau Leti because it is comparatively larger than Pulau Jaco.
6. It is clear that a number of potential boundary situations exist between East Timor and Indonesia. In this context, it is useful to recall the negotiating maxim that "nothing is agreed until everything is agreed" (Prescott and Schofield 2005: 325). There is therefore great scope for both flexibility and compromise in seeking a pragmatic boundary solution that fulfils each State's key objectives. Ultimately, however, it is up to the two governments concerned, and whether or not they have the political will and determination required to reach a mutually acceptable compromise position, one that each can regard as an equitable result.
b) Suggestions
1. Establishing maritime boundaries between Indonesia and East Timor needs to be given a high priority, in order to prevent boundary uncertainty.
2. Each party needs to establish a dedicated and well-coordinated task force to deal with the issue of maritime boundary delimitation, from preparation through to execution. This has already been at least partially achieved by both countries.
3. Each side should carry out studies to comprehensively analyse all scientific, legal, political and social aspects before proceeding with negotiations. Again, there are indications that these preparations are under way.
4. Maritime boundaries have to be negotiated in good faith to achieve an equitable solution for both sides.
5. Scientific analysis conducted by independent parties should be treated as positive input, although its recommendations may not necessarily be adopted. It is the responsibility of the governments of Indonesia and East Timor to agree on a final solution.
6. Ideally, both governments should conduct consultations at the local level, in order to accommodate the aspirations of their people, who will be directly affected by the maritime boundary delimitations. Any agreement reached should cause no, or minimal, harm to the needs and rights of these people, who are likely to be traditional fishers, many of whom have been living in the area for a long time, in several instances since long before the emergence of either State in question.
7. As an archipelagic State, Indonesia has potential boundaries to negotiate with ten of its neighbours. Some of these are already well established; several others remain either undelimited or only partially delimited. While there is scope for further research on Indonesia's outstanding maritime boundaries, it is essential to remember that the establishment of boundaries is not the end point of this process. Maintenance and management are also important, and ongoing, tasks to be undertaken.
8. The parties also have a responsibility under the LOSC to give "due publicity" to baseline claims (Article 16) and boundary agreements (Articles 75 and 84 for the EEZ and continental shelf respectively), and to deposit with the Secretary-General of the United Nations a copy of the relevant geographical coordinates (specifying the geodetic datum) or charts of a scale (or scales) adequate for ascertaining their position.
9. Future work also includes the integration of the existing boundaries into an information system. This information (regarding maritime boundaries, the importance of boundaries and their impact on society) should then be made accessible to the public. This is especially critical for those people living in border areas, who will need to be both better educated and more aware of the actual consequences of boundary delimitation. Finally, by signing the LOSC, a State is obliged to respect and to follow the rules related to ocean management. This requires a State to prepare its national systems, including the legal processes, ensuring that law enforcement in these matters is conducted properly and appropriately.
8 Bibliography
Antunes, N.S.M. (2003). Towards the Conceptualisation of Maritime Delimitation. Martinus Nijhoff Publishers, Leiden/Boston.
Bano, A. and Rees, E. (2002). The Oecussi-Ambeno Enclave: What does the future hold for this neglected territory? Inside Indonesia, July-September. Retrieved on 12 May 2005 from http://www.insideindonesia.org/edit71/oecussi.htm
Carleton, C. and Schofield, C. (2002). Developments in the Technical Determination of Maritime Space: Delimitation, Dispute Resolution, Geographical Information Systems and the Role of the Technical Expert. Maritime Briefing, Vol. 3, No. 4, International Boundaries Research Unit, Durham, United Kingdom.
Collier, P.A., Murphy, B.A., Leahy, F.J. and Mitchell, D.J. (2002). The Automated Delimitation of Maritime Boundaries: An Australian Perspective. International Hydrographic Review, Vol. 3, No. 1, 68-80.
Deeley, N. (2001). The International Boundary of East Timor. Boundary and Territory Briefing, Vol. 3, No. 5, International Boundaries Research Unit, Durham, United Kingdom.
Hirst, B. and Robertson, D. (2003). GIS, Charts and the LOSC: Can they live together? Proceedings of the 2003 ABLOS Tutorials & Conference "Addressing Difficult Issues in UNCLOS", 28-30 October, International Hydrographic Bureau, Monaco. Retrieved on 3 July 2004 from http://www.gmat.unsw.edu.au/ablos/ABLOS03Folder/PAPER3-3.PDF
Lowe, V., Carleton, C. and Ward, C. (2002). In the Matter of East Timor's Maritime Boundaries: Opinion. Retrieved on 30 August 2004 from http://www.petrotimor.com/lglop.html
LPPM ITB (2002). Laporan Akhir Pengkajian Awal Delimitasi Batas Laut Indonesia-Timor Leste (A Final Report of the Preliminary Study on the Maritime Boundary Delimitation between Indonesia and Timor Leste). Bandung, Indonesia.
Palmer, H.D. and Pruett, L. (1999). GIS Applications in Maritime Boundary Delimitation. Proceedings of the ESRI User Conference, 26-30 July. Retrieved on 10 October 2004 from http://gis.esri.com/library/userconf/proc99/proceed/papers/pap938/p938.htm
Prescott, J.R.V. (1999). The Question of East Timor's Maritime Boundaries. Boundary and Security Bulletin, Vol. 7, No. 4, 72-81.
Prescott, V. (2000). East Timor's Potential Maritime Boundaries. In Rothwell, D. and Tsamenyi, M. (eds), The Maritime Dimensions of Independent East Timor, Wollongong Papers on Maritime Policy No. 8, 79-105.
Prescott, V. and Schofield, C. (2005). The Maritime Political Boundaries of the World, Second Edition. Martinus Nijhoff Publishers.
Schofield, C. (2003). Maritime Zones and Jurisdiction. Proceedings of the 2003 ABLOS Tutorials & Conference "Addressing Difficult Issues in UNCLOS", 28-30 October, International Hydrographic Bureau, Monaco. Retrieved on 17 September 2004 from http://www.gmat.unsw.edu.au/ablos/ABLOS03Folder/SESSION3.PDF
Schofield, C.H. (2005). Cooperative Mechanisms in Disputed Areas. In Cozens, P. and Mossop, J. (eds), Capacity Building for Maritime Security Cooperation in the Asia-Pacific, Centre for Strategic Studies: New Zealand, Wellington.
Sutherland, M. and Nichols, S. (2002). Marine Boundary Delimitation for Ocean Governance. Proceedings of the FIG XXII International Congress, Washington, DC, USA, 19-26 April. Retrieved on 18 November 2004 from http://www.fig.net/pub/fig_2002/Js12/JS12_sutherland_nichols.pdf
United Nations (2000). Handbook on the Delimitation of Maritime Boundaries. United Nations, New York, 204 pp.
Integration of GIS and Digital Photogrammetry in Building Space Analysis
Mokhtar Azizi Mohd Din1, Mohammed Yaziz Ahmad2
1 Department of Civil Engineering, Faculty of Engineering, University of Malaya, 50603 Kuala Lumpur
2 Faculty of Architecture, Planning & Surveying, University of Technology MARA (UiTM), 40450 Shah Alam, Selangor
Abstract
A case study using a digital photogrammetry approach to collecting information on the size of buildings has been carried out on the campus of the University of Malaya. A digital map consisting of spatial data on the buildings and infrastructure of the campus was produced. The spatial data are arranged in layers using the base map at a scale of 1:3000. A GIS application called UM-INFO has been developed for analyzing the information as well as for producing reports on the distribution of building space utilization. Data such as the number of floors of each block, floor areas, numbers of students, numbers of staff, etc. were input to the UM-INFO software. The uses of building space for various purposes and other facilities on the campus are identified. Furthermore, the sizes of the building spaces are compared with the guidelines of the National Economic Planning Unit of Malaysia. The study found that all faculties and academic centres have at least 3% more than the nominal area, except the Faculty of Arts & Social Sciences (AFSSO), which falls short of the nominal area by about 50%. In conclusion, this study has demonstrated the effectiveness of digital photogrammetry and Geographical Information Systems (GIS) in producing a map showing the distribution of space usage in a higher learning institution. The research has shown that the digital photogrammetry method is able to capture digital data much faster and more accurately. The combination of digital photogrammetry and GIS technology has proven to be the best choice for optimizing space usage. The technique increases productivity at low cost and serves as a very useful tool for planning purposes.
Key words: Digital photogrammetry, GIS, spatial data, building usage, space utilization
1 Introduction
This research evaluates the photogrammetric method for producing input data for a GIS, and the use of GIS in analyzing building space utilities. The University of Malaya was taken as a case study. The first phase covers spatial data collection. All data are collected from various sources such as plans and maps, including text data. To allow the data to be analyzed by computer, the data from the map sheets must be transformed into digital format through a digitization process. Coordinate registration and scanning of the map sheets need to be done before the digitization process. A thematic digital map of the University of Malaya containing spatial information such as buildings and certain infrastructure has been arranged in layers using the base map at a scale of 1:3000. The coordinate registration process is done to standardize the coordinate system of the map layers. The GIS software MapInfo is used for the digitizing operation. Inaccurate geometry and out-of-date maps are two major problems in producing the map base; digital photogrammetry is used to overcome these problems. This method provides an aerial orthophoto as additional information in the GIS software. The combination of digital photogrammetry and GIS forms the GIS MapInfo database. Attribute information from the MapInfo data for every building is shared with the data from the RDBMS. The structure of the database is restructured to add more information, and most of the GIS procedures are reorganized to analyze and to gather extra information. One analysis that can be done using this method is to establish the space-use status of buildings, based on floor areas, in every faculty of the University of Malaya.
2 Target
The main target of this research is to develop and set up a GIS database of buildings in the University of Malaya. One of its applications is to analyze building space utilities. The aim is to establish the ability of the digital photogrammetry method to provide the main input data for GIS, and to assess GIS software for building space utility analysis.
3 Objectives
This research aims to achieve the following objectives:
a) Determine the ability of digital photogrammetry to produce spatial GIS data for building surfaces.
b) Develop a GIS database for buildings.
c) Apply the building database information to establish the efficiency of the database in analyzing space utilities using GIS methods.
4 Methodology
Inaccurate spatial information is the main problem in GIS. In GIS, spatial information can be obtained from various data sources with different precision. The first phase of the research is to analyze the precision of digital photogrammetry (the Digital Surface Model and orthophoto), and the second phase is to analyze the efficiency of GIS software in collecting building spatial information and analyzing building space utilities.
4.1 Data Process via Digital Photogrammetry Method
The digital photogrammetry method, using diapositive air photos to produce digital orthophotos (as a GIS data resource), is employed. A black-and-white diapositive air photo at a scale of 1:20000, covering the entire University of Malaya campus and dated 24/01/97, has been used in the research. The photo was scanned at resolutions of 300 dpi and 600 dpi on an A3 desktop scanner (Sharp JX-610) to produce a pair of digital diapositives (TIFF format with 256 grey levels). Using the Desktop Mapping System (DMS) software, 12 digital surface models were produced by the auto-correlation method. The models use different correlation matrix sizes, namely 7x7, 11x11 and 17x17. A 3x3 median filter is applied to every DSM model. As the geometric precision of the orthophoto depends on the accuracy of the DSM model, the best DSM model has to be chosen. Elevations of grid points sampled at a 150 m spacing from every DSM model are used to ascertain the precision. The reference elevations for the grid points were created using analytical plotting equipment (PROMAP). The statistical test, analysis of variance (ANOVA), is applied to ascertain the final precision and to compare the DSM models (see Table 1). The geometric precision of the orthophoto is tested using sample points from the GIS MapInfo software, compared against measurements from the analytical stereo model.
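A minimal sketch of the DSM post-processing and precision check described above, assuming numpy and scipy (the study itself used the DMS package and PROMAP reference measurements) and synthetic placeholder heights:

```python
# Sketch of 3x3 median filtration of a DSM and an RMS check of sampled grid
# heights against reference heights; data here are synthetic placeholders.
import numpy as np
from scipy.ndimage import median_filter

rng = np.random.default_rng(0)
dsm = rng.normal(80.0, 15.0, size=(200, 200))      # hypothetical heights (m) on a 30 m grid
dsm_filtered = median_filter(dsm, size=3)          # 3x3 median filter suppresses spikes

# Sample every 5th node (~150 m spacing on a 30 m grid) and compare with reference heights.
sample = dsm_filtered[::5, ::5]
reference = sample + rng.normal(0.0, 2.0, size=sample.shape)   # placeholder reference
rms = float(np.sqrt(np.mean((sample - reference) ** 2)))
print(f"RMS height difference: {rms:.2f} m")
```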
4.2 Building Spatial Data Extraction
Building spatial data extraction is designed using the GIS MapInfo software. The digital orthophoto in DMS format is transferred into the GIS MapInfo format. The orthophoto image is registered into the Selangor Cassini coordinate system in the MapInfo software. Image enhancement is performed to improve the view on the computer screen; a good-quality image is more user-friendly and easier to digitize. Building blocks are digitized on the monitor screen using the drawing tools on the MapInfo Drawing toolbar. The polygon data model is used in the digitization. Information concerning infrastructure is also extracted to help locate the buildings.
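Registration of the orthophoto to the Selangor Cassini grid can be thought of as estimating a transform from image to ground coordinates from control points. The sketch below assumes a simple 2-D affine model solved by least squares with numpy and entirely hypothetical control points; it is not the registration procedure built into MapInfo.

```python
# Sketch of image-to-ground registration: least-squares fit of a 2-D affine
# transform from hypothetical control points (pixels -> grid metres).
import numpy as np

img = np.array([[120, 340], [1850, 310], [1800, 2400], [150, 2450]], dtype=float)      # pixels
grd = np.array([[-34510, -28840], [-33110, -28870], [-33150, -30610], [-34490, -30650]],
               dtype=float)                                                             # metres

# Solve design @ X = grd for the 3x2 affine parameter matrix X.
design = np.hstack([img, np.ones((len(img), 1))])
X, _, _, _ = np.linalg.lstsq(design, grd, rcond=None)

registered = design @ X
rmse = np.sqrt(np.mean((registered - grd) ** 2, axis=0))
print(f"registration RMSE (x, y) in metres: {rmse}")
```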
4.3 GIS Database Structure for Buildings
For spatial analysis, building spatial information must be linked with attribute information. In the MapInfo software, the attribute information is kept in the database and is organized in tables. The building database structure contains information such as the ID, building name, building code, number of storeys and floor area.
4.4 Space Used Analysis Using GIS MapInfo Software
With GIS MapInfo, the floor area of each block can be determined automatically using the Table > Update Column menu. Every record in the floor area attribute is updated after the model is generated. The operator only needs to fill in some information about the block, such as the building ID, number of storeys, building name, etc. If the number of storeys of a building is known, then the entire floor space and building area can be determined using the Table > Update Column module again; however, the attribute information about the floor area must be filled in first. The name or the code is needed to classify the building utilities. In this research, buildings are classified according to faculty, and the building space area therefore depends on this information (Appendix, Table 3). A building thematic map with the building space information is produced. The next analysis verifies whether the building space in each faculty is sufficient. The number of students is multiplied by the floor space needed per student to calculate the required building space (depending on the faculty; UPEN, Table 4). As a result, a table of space use is produced. Information on student enrolment in each faculty is obtained from the University annual report of 1997/98. The space area obtained from GIS is compared to the
information from the above calculation (UPEN data). From this comparison, the current space use can be assessed (Appendix, Table 5).
5 Analysis and Results
5.1 Digital Stereo Model
Two digital stereo models, from air photos with resolutions of 300 dpi and 600 dpi, were produced to generate the DSMs. Both stereo models cover elevations from a minimum of 29 m to a maximum of 145 m above sea level. Both models are stereo-measured on the computer screen using anaglyph viewing. The orientation parameters for the models are determined with 13 ground control points. The calculation produced an RMS (x, y) deviation of 0.33 pixels, or 0.61 m on the ground, for the 300 dpi model; for the 600 dpi model, the RMS (x, y) deviation was 0.68 pixels, or 0.62 m on the ground. The nominal pixel resolution on the ground is 1.836 m for the 300 dpi model and 0.918 m for the 600 dpi model (see Tables 6a and 6b, Appendix).
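The pixel-level and ground-level figures above are related by the nominal pixel size of each model; a one-line check using the paper's own values is:

```python
# Relate image-space RMS (pixels) to ground units via the nominal pixel size.
def rms_on_ground(rms_pixels: float, nominal_pixel_m: float) -> float:
    return rms_pixels * nominal_pixel_m

print(f"300 dpi model: {rms_on_ground(0.33, 1.836):.2f} m")   # ~0.61 m
print(f"600 dpi model: {rms_on_ground(0.68, 0.918):.2f} m")   # ~0.62 m
```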
5.2 Digital Surface Model
For this research, DSMs have been generated by auto-correlation using the Desktop Mapping System (DMS). Twelve DSM models (DSM1 to DSM12) were produced: six based on the 300 dpi stereo model and the other six based on the 600 dpi stereo model. Correlation matrix sizes of 7x7, 11x11 and 17x17 were used to generate these DSM models. A 3x3 median filter is applied to each DSM in order to remove spike effects. Every DSM model consists of about 6,000 elevation points on a 30 m grid; however, for the purposes of this research, only 200 points are selected from each DSM model. An aggregation method is used to select these points, producing 12 new DSM models on a 150 m grid. The high-accuracy analytical plotting equipment (PROMAP) is then used to measure the heights of the same grid points for comparison. The results of the analysis showed, first, that the height accuracy increases with the size of the correlation matrix (see Figure 6, Appendix), and second, that the height accuracy is better when the DSM model is
filtered with the median filter (see Figure 7, Appendix). However, the accuracy becomes consistent as the size of the correlation matrix is increased. It can therefore be concluded that the DSM12 model is the closest to the reference digital surface model (Figure 9, Appendix). This model has thus been selected for orthophoto generation.
5.3 Digital Orthophoto
Figure 1 shows the digital orthophoto, which was developed using the DSM12 model. The orthophoto was generated by the nearest-neighbour interpolation method. The Cassini coordinate system was used for orthophoto registration in the GIS MapInfo software. The standard deviation (S.D.) of the orthophoto at the check points is 4.6 m in X and 1.8 m in Y.
Fig. 1. Orthophoto of University of Malaya
5.4 Building Mapping
Building digitization is done using the GIS MapInfo software with reference to the orthophoto image. The digitization can be done directly on the monitor. Polygon shapes are used to map all the buildings in the University of Malaya area. Figure 2 shows the building mapping overlaid on the orthophoto. The accuracy of the digitization is based on the 1:3000 base map scale. The on-screen scale must be at least four times larger than 1:3000 during digitization so that better mapping accuracy can be obtained.
Fig. 2. Map of UM's buildings, road infrastructures, and rivers
5.5 Creating Building Topology
In general, topology is known as the "spatial relationship between geographic objects". Most GIS software today requires topology to be created before line objects can be converted to polygon objects. Mistakes such as unjoined lines, overshoots and dangling lines need to be fixed before creating topology. One advantage of using a topological data model is that it prevents repeated digitization of shared polygon borders; it also allows the operator to use line objects only during digitization. Since the MapInfo software does not use a topological data model, the operator needs to digitize buildings as polygon objects. The operator can choose the polygon button and set a snap tolerance before proceeding with digitization. Building spatial information and the data platform are connected directly within the MapInfo system using an identifier. The structure of the data platform can be modified to add attributes as required, into which the necessary building information is entered.
5.6 GIS Data Platform for Buildings
The data platform for buildings needs a structure that can store the building attribute information. The attributes for a building are the building name, building code, number of floors, floor area, and building ID. Variables of integer type are assigned to the
number-of-floors and floor-area attributes, while variables of character type are assigned to the building name and code attributes.
(Fields: NamaBangunan Character(68), KodBangunan Character(5), LuasTapakBangunan Decimal(10,3), Bil_Tingkat Small Integer, Luas_lantai Decimal(20,0).)
Fig. 3. Structure of buildings data platform
Building information such as the ID, name, code and number of floors is entered into the MapInfo data platform using the Window > New Browser Window menu in MapInfo. Figure 4 shows part of the building attribute information.
Fig. 4. Relationship between attribute data and spatial data
Figure 4 also shows the NamaBangunan attributes on the left and the building map on the right. Every NamaBangunan record is related to the building map and is represented by the index box in the bangunan browser window. Figure 5 shows part of the building records that have been entered, displayed using Window > New Browser Window.
Fig. 5. Records of buildings in GIS MapInfo data platform
5.7 Calculation of Standard Space Used Based on Faculty (refer to UPEN)
UPEN has outlined nominal space requirements, which depend on building floor size, for ideal universities. The floor space area unit per student differs according to the type of faculty, as set by UPEN (see Appendix). The equation below is used to determine the nominal space of a building:
Size of nominal space = Number of students x Area unit
where Area unit = area (m²) per student for the faculty concerned (see Table 4).
Example: student enrolment in the Faculty of Engineering = 2,500; Area unit = 5 m²; Nominal space = 5 x 2,500 = 12,500 m².
5.8 Space Used Estimation by GIS MapInfo
Every building polygon in the MapInfo data platform carries standard attributes such as area, perimeter, centroid coordinates, etc. However, this kind of information is not displayed directly in the table windows unless it is requested by the operator from the Query > SQL Select menu. Based on the area information, the space size of every building can be estimated. To determine the space size of a building, the number of floors is also needed; this information can be obtained by site observation and is then entered into the building data platform. Using the Query > SQL Select menu, the area of building space can be determined with the equation below:
Area of building space = area of building polygon x number of floors
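A minimal sketch of the two calculations above, outside MapInfo and assuming the shapely library and a hypothetical footprint; the enrolment figures reuse the engineering example given earlier:

```python
# Sketch: estimated building space (footprint area x floors) versus nominal
# space (students x UPEN area unit); footprint coordinates are hypothetical.
from shapely.geometry import Polygon

footprint = Polygon([(0, 0), (60, 0), (60, 40), (0, 40)])   # hypothetical footprint (m)
floors = 3
estimated_space = footprint.area * floors                    # m2, per the equation above

students, area_unit = 2_500, 5                                # engineering example above
nominal_space = students * area_unit                          # 12,500 m2

surplus = estimated_space - nominal_space
print(f"estimated {estimated_space:.0f} m2, nominal {nominal_space} m2, "
      f"difference {100.0 * surplus / nominal_space:+.1f}%")
```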
5.9 The Need for Space Used Analysis
Table 5 shows the building space used according to faculty in the University of Malaya. Based on the space utility required, the Sports Centre (PPSUK) has 297% more space than the nominal space, the largest surplus. Meanwhile, the Faculty of Social Science has insufficient space, about 50% less than the nominal space. In terms of current space used, the Faculty of Science possesses the largest space (94,415 m²) while the Faculty of Law has the smallest (12,120 m²). It can therefore be concluded that the surplus space of the faculties ranges from about 2.6% at the smallest to 297% at the largest, except for the Faculty of Social Science, which possesses no extra space.
6.0 Conclusion
The digital photogrammetry method has the ability to update spatial data efficiently. The accuracy of a digital photogrammetry product depends on the scan quality, ground control points, air-photo quality, air-
photo resolution, DSM accuracy and the algorithm used to develop the orthophoto. On-screen digitization from the digital orthophoto is easier to operate than the digital stereo method for obtaining geographic information. GIS software based on a personal computer has the ability to perform spatial analysis, such as space utility analysis. The combination of the digital photogrammetry method and a GIS on a personal computer is very effective for analyzing space utilities compared with analysis on a networked or mainframe system. Besides high-accuracy photogrammetric scanners, non-photogrammetric scanners such as A4 desktop publishing scanners have a high potential for digitizing air photos (W.M.N. Wan Mohamad, 1999). However, pre-processing must first be done to remove the systematic sway effect of such scanners on the scanned air photo. Lastly, the integration of digital photogrammetry and GIS in analyzing space use can help university management to control space utilization and the need for new buildings more efficiently. Detailed research is therefore required to identify the causes of systematic deviation in the photogrammetric system and how to minimize them.
References
ESRI (1992). Understanding GIS: The ARC/INFO Method. ESRI, USA.
MapInfo Professional User's Guide, Fourth Edition (June 1997). MapInfo Corporation, Troy, New York.
MapInfo Professional Reference Guide, Fourth Edition (June 1997). MapInfo Corporation, Troy, New York.
Mohd Din, M.A. (1997). UM-INFO Project. Proposal to the University of Malaya.
UPEN (1989). Garis Panduan dan Peraturan-Peraturan bagi Perancangan Bangunan bagi Jawatankuasa Kecil Piawaian dan Kos bagi IPPN. Jabatan Perdana Menteri.
Wan Mohamad, W.M.N. (1999). Evaluation of a Low-cost Photogrammetry and GIS in Landfill Site Selection Study. Ph.D. Thesis, University of Edinburgh, U.K.
APPENDIX
Table 1. Results and analysis at the 0.05 significance level: ANOVA using the Dunnett method (dependent variable: DTM grid elevation; Dunnett t, 2-sided; the mean difference is significant at the .05 level; Dunnett t-tests treat one group as a control and compare all other groups against it).
Table 2. Structure of the buildings data platform (fields: NamaBangunan Character(68), KodBangunan Character(5), LuasTapakBangunan Decimal(6,0), Bil_Tingkat Small Integer, Luas_ruang_anggaran Decimal(6,0)).
Table 3. Space area of building blocks according to faculty (columns: NamaBangunan, KodBangunan, LuasTapakBangunan, Bil_Tingkat, Luas_ruang_anggaran; sample records for the AAPML, AAAJP, PBBSI, PBPPS, PDTCN and AFBHL buildings).
Table 4. Maximum area of floor space according to faculty (Source: UPEN)
Fakulti                        Kod fakulti   Unit luas maksimum
BAHASA DAN LINGUISTIK          AFBHL          9
EKONOMI DAN PERNIAGAAN         AFEKP          9
PENDIDIKAN                     AFPDK         20
PERGIGIAN                      AFPGG         40
PERUBATAN                      AFPBT         55
SAINS                          AFSNS         20
SAINS KOMPUTER DAN TEKNOL.     AFKIT         20
SAINS SOSIAL                   AFSSO          9
UNDANG-UNDANG                  AFUND          9
PERAKAUNAN                     AFPAK          9
Table 5. Current space used according to faculty
KodBangunan   Sum_Luas_ruang_anggaran   Luas_nominal   Beza_ruang   Peratus_beza
AFSNS          94,415                    42,680          51,735.0     121.2
AFPBT          86,256                    70,180          16,076.0      22.9
AFKEJ          82,151                    54,999           7,152.0      13.0
AFPAK          28,266                     8,496          19,770.0     232.7
AAPIS          28,863                    16,731          10,132.0      60.6
AFSSO          16,481                    33,102         -16,621.0     -50.2
AFPGG          15,343                    10,560           4,783.0      45.3
AFPDK          15,226                    12,915           2,311.0      17.9
AAPML          14,222                    10,638           3,584.0      33.7
AFEKP          13,799                    10,458           3,341.0      31.9
AFKIT          13,212                    12,880             332.0       2.6
AFBHL          12,684                     4,275           8,409.0     196.7
PPSUK          12,250                     3,087           9,163.0     296.8
AFUND          12,120                     5,724           6,396.0     111.7
Table 6a. Orientation parameters of the stereo model (300 dpi resolution)
Table 6b. Orientation parameters of the stereo model (600 dpi resolution)
[Table values and the accompanying chart, which compares photo resolutions of 300 dpi and 600 dpi, are not legible in this reproduction.]
Fig. 6. Accuracy of level compared to the size of matrix correlation
Fig. 7. Accuracy of level before and after filtration
Fig. 8. Accuracy of level with photo resolution of 300 and 600 dpi
Table 9. Comparison between the mean digital levels and the mean analysed levels (reference). [Table values are not legible in this reproduction.]
An Integration of Digital Photogrammetry and GIS for Historical Building Documentation Seyed Yousef Sadjadi Department of Geographical and Earth Sciences, University of Glasgow, G12 8QQ Glasgow, U.K. E-mail: [email protected]
Abstract
An assessment of the contribution of digital photogrammetry and GIS to the surveying and documentation of cultural heritage objects is the focus of this paper. The approaches include digital image enhancement, digital rectification and restitution, feature extraction for the creation of 3-D GIS models from the photogrammetric record, and the computer visualisation of cultural monuments. Can these relatively new technologies offer more than the analogue methods and manuscript archives of the past? Even in the very recent past, manual measurements and the direct copying of pictures onto transparent foil have served well. However, manual 3-D processing of terrestrial images using analogue photogrammetric procedures is slow, registers little information, has limited application and cannot be re-examined if the information desired is not directly presented; in addition, it is a very time-consuming task and requires the expertise of qualified personnel. The paper also highlights the creation and implementation of an Architectural/Archaeological Information System (A/AIS) by integrating digital terrestrial photogrammetry and CAD facilities, as applied to support the reconstruction of the Abbey of St. Avit Senieur in France, Strome Castle in Scotland, the Gilbert Scott Building of Glasgow University, the Hunter Memorial located in Glasgow University and the Anobanini Rock Project in Iran. The most recent of these projects is described here in detail.
Introduction
In projects involving surveying and documenting the archaeological and architectural heritage of a region, fast, accurate and relatively inexpensive methods of data capture, analysis and representation have to be available,
particularly as such projects often emerge unexpectedly and need to be dealt with rapidly [Robson et al., 1994; Ogleby, 1995]. The author has been privileged to collaborate with archaeologists and architects involved in the measurement and analysis of the remains of castles, churches and other major structures in France, Scotland and Iran. Hard-copy photogrammetric approaches to supporting such analyses are well established [Ogleby, 1995], and indeed the primary data capture is fast, but the requirement to interpret, classify and quantitatively process photographs can be a time-consuming task which requires highly experienced personnel. Their absence may result in data of inadequate accuracy. Are there alternatives to the hard-copy photogrammetric approaches? One alternative is soft-copy or digital photogrammetry, and the author's collaborators felt comfortable with this. The technical reasons for considering soft-copy photogrammetry include:
• convenient re-measurement;
• accuracy;
• cost; and
• support of photo-realistic modelling.
Considering convenient re-measurement: traditionally, hard-copy photogrammetry for archaeological and architectural applications has produced 2-D building elevation drawings and large-scale planimetric maps [Fraser, 1993] of predictable accuracy. Whilst these plans constitute a useful record, they have limited application and cannot be interrogated further if the information desired is not directly presented. These limitations mean that the available information relates to matters of importance at the time of measurement, but is inadequate for answering questions that may be raised later and by other workers. It then becomes necessary for the archaeologist to make a time-consuming return to the photogrammetrist for the additional information. Soft-copy photogrammetry produces digital orthophotos, which allow a 3-D spatial model of the photographed surfaces to be viewed, and from which accurate measurements can be made rapidly by a non-photogrammetrist. That the orthophotos and their stereo mates can be stored ready for immediate use represents a great time saving over hard-copy approaches. Re-measurement in the hard-copy environment involves the time-consuming re-orientation of the original photos in a stereo plotter and a much more challenging extraction of 3-D coordinates than in a digital photogrammetric workstation, where the correction for x-parallax is automatic. Thus, without inconvenience, the object can continue to be studied or re-studied, or can be reconstructed. Considering cost, Bahr and Wiesel [1991] believed that low-cost workstations and scanners would provide useful economic solutions to digital
processing and would eventually produce a product of the quality formerly expected from (hard-copy) analytical plotters. In the case of digital photogrammetry, digital scanners are used if digital frame cameras are not available. Both a low-end system (PhotoModeler software) and a high-end system (SOCET SET software) have been used in this study, but as formerly high-end systems have become less expensive, price is not considered a particular issue. Fifteen years ago a digital photogrammetric system of the type used in this study was only available in the defence sector (with corresponding budgets), but now similar systems are regularly being bought by less well-funded organisations, including academic archaeology departments [Cooper et al., 1992]. Using digital photogrammetry, the spatial model of the photographed surfaces can be viewed at any time and accurate measurements made on the photos. Computerised documentation of originals can be made with a computer-assisted interactive graphical system based on digital models. With the use of spatial graphical programmes the corresponding walls or shapes can be matched, certain elements can be defined, and the figures or writing on walls can be completed or reconstructed by using known representations and texts. By transforming the photogrammetric pictures, true-scale images can be produced. Digital photogrammetry can also supply textural information from photographs that can be used in rendering. CAD systems based on 3-D extracted features provide a first stage in the integration of photogrammetric knowledge with the interpretative process of evaluating the object and making decisions. The user is thus able to perform the whole reconstruction of the 3-D objects without manual measurements. The automatic transfer of the photogrammetrically generated digital surface model to the CAD system gives a flexible 3-D geometric object description appropriate for eventual input to the GIS environment. Thus, on completion of this work, we expect several research questions could be answered (i.e. whether the newer technologies of GIS and CAD offer more than the analogue methods of the past). Digital terrestrial photogrammetry permits rapid data recording at low cost relative to other techniques, and it offers flexibility and high measurement accuracies, particularly if the bundle adjustment method is selected. For example, in a small project like the Hunter Memorial, capturing the digital data, data processing, and recording and documenting the historical building can be done within a day. The A/AIS has been developed to the point that it can provide a precise and reliable technique for non-contact 3-D measurements. The speed of on-line data acquisition, the high degree of automation and its adaptability have made this technique a powerful measurement tool with a great number of
applications for architectural or archaeological sites. The designed tool (A/AIS) has been successful in producing the expected results in the tasks examined in this paper.
The Photogrammetric Survey of the Anobanini Project in Iran
Prior to the author's involvement in this investigation, five convergent photos (using approximately 1:50 scale photography and including visible damage, e.g. cracks) of the Anobanini Rock Sculpture were captured with a SONY DSC-F828 digital camera by IPCO (the Iranian/Persian Professional and Cultural Organisation). This represents a stage in archiving the country's cultural heritage. The author investigated the use of this photography for the 3-D modelling of part of the sculpture - a necessary step should any maintenance be required.
1. Description
The investigation of an Archaeology/Architectural Information System (A/AIS) has been carried out recently using the Anobanini Rock Sculpture
at Sar-E-Pol-E-Zahab in the North West of the Islamic Republic of Iran. An objective of this investigation was to use the archived photography to describe the sculpture at Anobanini Rock using a 3-D surface model. The comprehensive documentation of a cultural object requires that reconstructing the whole object, both geometrically and pictorially, is achievable. Digital photogrammetry techniques and digital facet modelling (involving representation of the object by surfaces instead of lines) combine the integration of the geometric and the pictorial characteristics of a building. Additional information is accessible through 'hot-linking' or an equivalent facility. The elements of the relational database can be integrated with graphical entities such as raster images and textual information for visualisation. The output of digitised points can be fed directly into a 'hub' package such as AutoCAD. Using digital terrestrial photogrammetry in this project, features of the existing sculpture can be created, whereas other data sources are required to determine the size, shape and location of missing features. The investigation involved creating the CAD model of a part of Anobanini Rock using digital photogrammetry as the basis, the documentation of the archaeological
elements in the computer, and the recording of other attributes of the Anobanini Rock. Although the acquisition of some tape measurements was a component of this task in order to make a scaled model (land surveying is an important part of any project of this type), the accuracy obtained from the photogrammetric procedures was considerably in excess of requirements (RMSE 1.00 cm). It was determined that the desired accuracy could be achieved if: 1. the acquired data (photographs) were taken with a digital camera appropriate for terrestrial photogrammetry, calibrated for lens distortion, principal point and focal length; 2. the camera location and camera angle information used during the photography were determined; 3. an experienced photographer was responsible for the photography; 4. object point locations were clear on the photography; 5. control point locations were clear on the photography; and 6. known dimensions on the object, scale information, etc. were available. Figure 1 shows a view of Anobanini Rock. A few dimensions were added to this figure when the author investigated the site.
Fig. 1. A View of Anobanini Rock in Sar-E-Pol-E-Zahab
2. Data Acquisition
For the 3-D model of Anobanini Rock, the most significant features were acquired from terrestrial photographs using a digital camera. Data extraction from terrestrial photographs to create 3-D models depends on a number of factors, such as data type, data resolution, data quality, accuracy and representation. Close-range photogrammetry was used for the documentation of the facets, to direct the archaeologist to adequately identify the conservation needs of the stonework. This must be done using approximately 1:50 scale photography for all the archaeological features. This integrated all stonework, inscriptions, epigraphs, etc., including visible damage, e.g. cracked stone.
3. Data Input
This is probably the first time that data from the Anobanini Project have been provided as input to the PhotoModeler Program (PMP). The data are then extracted from PMP and used in AutoCAD for further processing towards building the A/AIS. The PMP procedures interactively construct the geometric base for the digital records of the A/AIS. All photographs were imported into a directory (a Windows folder), examined to eliminate the poor photographs, and checked to ensure that the required coverage of the object was achieved.
4. Processing
Processing the photographs was achieved with PMP. In PMP, the procedure of creating 3-D models starts with the connection of 3-D points, edges, curves, etc. As already indicated (see section 6.8.3), there are eight steps in creating a 3-D model:
• create a camera calibration report;
• plan the measurement of the object;
• capture photographs of the object;
• import the captured photographs;
• "mark" the features on one photograph that will become 3-D vertices (either control points, feature points or tie points);
• identify the same 3-D vertices on the other photograph;
• process the data; and
• export the results in the form of 3-D coordinate data for the vertices, or orthophotos - for example to AutoCAD or ArcGIS.
The sculpture can be considered as a set of objects. For each object there were many 3-D object points to process. For example, in the left figure's
headwear object, of a pyramidal and peaked design (the 'Anobanini Hat'), about 270 points were referenced. Once the processing was completed and the photographs were oriented, the orthophotos were generated. A sample of a created orthoimage is shown in Figure 2.
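The export step can be illustrated with a small sketch. The listing below is hypothetical (it does not reproduce PhotoModeler's actual export format): it assumes the referenced 3-D vertices have been written to a plain text file with one "object label, x, y, z" record per line, and it simply groups the points by object (such as the 'Anobanini Hat') before they are passed on to a CAD or GIS package.

# Hypothetical parser for a plain-text export of referenced 3-D vertices,
# one "object_label x y z" record per line; groups the points by object.
from collections import defaultdict

def load_vertices(path):
    """Return {object_label: [(x, y, z), ...]} from a whitespace-delimited export."""
    objects = defaultdict(list)
    with open(path) as fh:
        for line in fh:
            parts = line.split()
            if len(parts) != 4:
                continue                 # skip headers or blank lines
            label = parts[0]
            x, y, z = map(float, parts[1:])
            objects[label].append((x, y, z))
    return objects

if __name__ == "__main__":
    points = load_vertices("anobanini_vertices.txt")   # hypothetical file name
    print({label: len(pts) for label, pts in points.items()})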
5. Linking: the Anobanini Project
As mentioned in section 6.8.7, text and DXF files can be used when linking different programs; for example, Object Linking and Embedding (OLE) can be used. The DXF and text file formats support Running External Programs (REP) for an embedded link between different programs. An object created by AutoCAD can be used as an OLE object. A destination application creates the compound document that accepts OLE objects created with the program. The OLE links to one or more compound documents and exports the information to other applications. Many CAD and rendering packages, and others such as AutoCAD and ArcGIS, can import text, diagrams, digital photos and DXF data files for more detailed measurements in A/AIS applications. Figure 2 shows a result for the A/AIS, an orthophoto of the head of Anobanini. The OLE capability was used to export the surface-model data from PMP to AutoCAD for further processing.
Fig. 2. Orthophoto of the 'Anobanini Hat' imported into AutoCAD
6. Triangulated Irregular Network
A Triangulated Irregular Network (TIN) modelling the surface of the object (the 'Anobanini Hat') is created from the registered data points, and fits the original data exactly. Figure 3 shows the 3-D point cloud prepared to generate the TIN for the Anobanini Rock.
Fig. 3. A 3-D Point Cloud for TIN Generation at the Anobanini Rock
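As an illustration of this step, the sketch below builds a TIN from a point cloud by Delaunay triangulation of the planimetric coordinates, so that every original vertex is kept exactly and each triangle becomes one facet. It assumes the vertices are stored as an N x 3 text file and that the surface is roughly single-valued over the chosen projection plane; the file name is hypothetical, and this is only a sketch of the general technique rather than the PMP/AutoCAD workflow used in the project.

# Sketch: TIN generation by 2-D Delaunay triangulation of the (x, y) coordinates.
import numpy as np
from scipy.spatial import Delaunay

points = np.loadtxt("anobanini_hat_xyz.txt")   # hypothetical N x 3 array of vertices
tri = Delaunay(points[:, :2])                  # triangulate on planimetric coordinates

# Each row of tri.simplices indexes the three vertices of one triangular facet.
facets = points[tri.simplices]                 # shape (n_triangles, 3, 3)
print(f"{len(points)} vertices -> {len(facets)} triangular facets")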
7. Surface Modelling
To satisfactorily visualise an archaeological object through raster graphics, a complete description of the object's surface is needed; this may require interpolation. A TIN was preferred because no further interpolation of the captured vertices is needed for further processing. AutoCAD reads all the coordinates of the vertices, which have already been stored in a text file. 3DPoly establishes the connectivity of these vertices automatically, as closed polygons, to form facets (or planes), with the final point connected back to the first one. The script file, 3dpoly.scr, contains a list of all the planes of the object. All the vertices are stored as 3-D coordinates, with each set of these vertices representing a plane of the object. The resulting 3-D surface model could then be rendered and enhanced. Figure 4
presents the 3-D points of the vertices of the 'Anobanini Hat'. The AutoCAD environment provides the surface modelling. There has been considerable recent work on developing efficient 'reverse engineering' tools which can take a point cloud and produce a surface (facet) model, but the author only investigated that provided by AutoCAD. For example, one of these tools (Cloudworx) uses the drawing tools of either AutoCAD or MicroStation to snap to the scanned (i.e. point cloud) points, making tracing accurate, and works with the CAD dimension tools to obtain exact measurements in 3-D space, which is appropriate for remediation.
Fig. 4. The Vertices for the Anobanini (hat) Surface Modelling
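The script-file approach described above can be sketched as follows. The snippet writes a hypothetical 3dpoly.scr in which each facet becomes one 3DPOLY command whose vertices are entered in order and closed back to the first point with "C"; the exact command and option sequence may need adjusting to the AutoCAD version in use, so this is an illustration rather than the project's actual script generator.

# Sketch: generate an AutoCAD script along the lines of 3dpoly.scr, one 3DPOLY
# command per facet, with the polygon closed back to its first vertex.
def write_3dpoly_script(facets, path="3dpoly.scr"):
    """facets: iterable of facet vertex lists, each vertex an (x, y, z) tuple."""
    with open(path, "w") as scr:
        for facet in facets:
            scr.write("_3DPOLY\n")
            for x, y, z in facet:
                scr.write(f"{x:.3f},{y:.3f},{z:.3f}\n")
            scr.write("C\n")          # close the polygon back to the first point

# Example: two hypothetical triangular facets of the 'Anobanini Hat' model.
write_3dpoly_script([
    [(0.0, 0.0, 0.0), (1.2, 0.1, 0.0), (0.6, 0.9, 0.4)],
    [(1.2, 0.1, 0.0), (1.8, 1.0, 0.3), (0.6, 0.9, 0.4)],
])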
8. Data Extraction
AutoCAD is able to supply the data for different layers to illustrate the archaeological elements, damaged surfaces, etc. of a cultural object. The content of the layers and their representation (e.g. colours, line types, etc.), and the operations between layers, depend on the proposed applications. The different facets of the Anobanini sculpture were archived for further development of the A/AIS. The data extracted from PMP are raster, vector or text data. Orthophoto extraction is proposed using uncompressed TIFF files. Vector data extraction employed DXF in this project. In PMP, there are eight output vector file formats available; because of time limits, extraction using the other formats was not carried out. Preparing a script file of the archived 3-D coordinate points and inserting it into AutoCAD produces a measurable visualisation of the 'Anobanini Hat'. It is expected that the same holds for the other components of the rock sculpture.
Conclusion
Photogrammetric techniques bring many benefits to architectural recording. Bryan and Clowes [1997] indicate that photogrammetry provides an excellent stereo photographic record of monuments and a homogeneous level of recording across a whole facade or structure, being largely independent of the level of detail. They suggest that results can be provided rapidly and in advance of other site works such as scaffolding. In comparison with traditional surveying and manual methods, a huge volume of primary data is captured and recorded quickly. It can be assumed that applications in the archaeological area benefit similarly. Archaeologists require the creation of plans and sections. Much of this continues to be carried out by the traditional archaeologist's methods, such as placing a metre-square grid over each area of excavation and graphically recording the detail, or taping features. However, rectified photographs are now becoming more commonly used across the archaeological field, and since the mid-nineties archaeologists have taken far more interest in digital terrestrial photogrammetry [Dallas, 1996]. Although not a common application generally, Dallas [1996] states that terrestrial photogrammetry has now become one of the best-known applications of photogrammetric science in the specific fields of architecture and archaeology.
References
Bahr, H.P. and Wiesel, J. 1991. "Cost-Benefit Analysis of Digital Orthophoto Technology", in H. Ebner, D. Fritsch and C. Heipke (Eds.), Digital Photogrammetric Systems, Wichmann Verlag, Karlsruhe, Germany, pp. 59-73.
Bryan, P. G. and Clowes, M. 1997. "Surveying Stonehenge by Photogrammetry", Photogrammetric Record, 15(89), pp. 739-751.
Cooper, M. A. R., Robson, S. and Littleworth, R. M. 1992. "The Tomb of Christ, Jerusalem: Analytical Photogrammetry and 3-D Computer Modelling for Archaeology and Restoration", International Archives of Photogrammetry and Remote Sensing, 29(5), pp. 778-785.
Dallas, R. W. A. 1996. "Architectural and Archaeological Photogrammetry", Chapter 10 in Close Range Photogrammetry and Machine Vision, K. B. Atkinson (Ed.), Whittles Publishing, pp. 283-302.
Fraser, C. S. 1993. "A Resume of Some Industrial Applications of Photogrammetry", ISPRS Journal of Photogrammetry and Remote Sensing, 48(3), pp. 12-23.
Ogleby, C. L. 1995. "Advances in the Digital Recording of Cultural Monuments", ISPRS Journal of Photogrammetry and Remote Sensing, 50(3), pp. 8-19.
Robson, S., Littleworth, R. M. and Cooper, M. A. R. 1994. "Construction of Accurate 3-D Computer Models for Archaeology, Exemplified by a Photogrammetric Survey of the Tomb of Christ in Jerusalem", ISPRS Journal of Photogrammetry and Remote Sensing, Proceedings of the Commission V Symposium "Close Range Techniques and Machine Vision", March 1994, Melbourne, J. G. Fryer and M. R. Shortis (Eds.), 30(5), pp. 338-344.
Reconstruction of Three Dimensional Ocean Bathymetry Using Polarised TOPSAR Data Maged Marghany and Mazlan Hashim Department of Remote Sensing Faculty of Geoinformation Science and Engineering Universiti Teknologi Malaysia 81310 UTM, Skudai, Johore Bahru, Malaysia Emails:[email protected]@fksg.utm.my
Abstract
This study presents work utilizing TOPSAR polarized data to generate three-dimensional (3D) coastal bathymetry. The fuzzy arithmetic algorithm was used to construct ocean bathymetry from the polarized TOPSAR data. Estimates of the sea surface current were computed by using the Volterra and fuzzy B-spline models. The continuity equation was used to estimate the water depth based on the current pattern information. A fuzzy B-spline model, which constructs a global topological structure between the data points, was used to support an approximation of the real surface. The best reconstruction of the coastal bathymetry of the test site in Kuala Terengganu, Malaysia, was obtained with polarised L and C band SAR acquired with HH and VV polarizations, respectively. With the 10 m spatial resolution of the TOPSAR data, an accuracy (root mean square) of ±7.23 m was found with the LHH band.
Introduction
Coastal bathymetry provides key parameters for coastal engineering and coastal navigation. Bathymetry information is valuable for economic activities, for security and for marine environmental protection. Shipborne echo sounders, single- or multi-beam, were the classical systems used to map the sea bottom topography (Vogelzang et al., 1992; Vogelzang et al., 1997; Hesselmans et al., 2000). Although these conventional techniques provide high-precision results, they are very costly and time consuming, especially when large areas are being mapped. Remote sensing methods in real time could be a major tool for bathymetry mapping, producing a synoptic impression over
large areas at comparatively low cost. Ocean bathymetry features can be imaged by radar in coastal waters with strong tidal currents. Several theories concerning the radar imaging mechanism of underwater bathymetry have been established, such as those of Alpers and Hennings (1984), Shuchman et al. (1985) and Vogelzang (1997). The physical theories describing the radar imaging mechanism for ocean bathymetry are well understood as three stages: (i) the modulation of the current by the underwater features, (ii) the modulation of the sea surface waves by the variable surface current, and (iii) the interaction of the microwaves with the surface waves (Alpers and Hennings, 1984). The imaging mechanism which simulates underwater topography from a given SAR image consists of three models: a flow model, a wave model and a SAR backscatter model. These theories are the basis of commercial services which generate bathymetric charts by inverting SAR images at a significantly lower cost than conventional survey techniques (Wensink and Campbell, 1997). However, the high speckle noise in SAR images poses great difficulties in inverting the SAR images to real coastal bathymetry. In order to reduce these speckle effects, appropriate filters, e.g. Lee, Gaussian, etc. (Lee et al., 2002), can be used in the pre-processing stage. The effectiveness of these speckle-reducing filters is, however, much influenced by local factors and the application. In this context, Hesselmans et al. (2000) developed the Bathymetry Assessment System, a computer program which can be used to calculate the depth from any SAR image and a limited number of sounding data. They found that the imaging model was suitable for simulating a SAR image from the depth map, showing good agreement between the backscatter in the simulated and airborne-acquired images, with a root-mean-square error of ±0.23 m within a coastal bathymetry range of 25-30 m. In this paper, we emphasize how the 3-D coastal water bathymetry can be reconstructed from single airborne SAR data (namely TOPSAR) using an integration of the Volterra kernel (Inglada and Garello, 1999) and fuzzy B-spline models (Maged et al., 2002). The four hypotheses examined are: (i) the Volterra model can be used to detect ocean surface currents from TOPSAR polarised data, (ii) there are significant differences between the different bands in detecting ocean currents, (iii) the continuity equation can be used to obtain the water depth, and (iv) fuzzy B-splines can be used to invert the water depth values obtained by the continuity equation into 3-D bathymetry.
Methodology
Data Set
The Jet Propulsion Laboratory (JPL) airborne Topographic Synthetic Aperture Radar (TOPSAR) data were acquired on December 6, 1996 over the coastline of Kuala Terengganu, Malaysia, between 103° 5'E to 103° 9'E and 5° 20'N to 5° 27'N. TOPSAR is a NASA/JPL multi-frequency radar imaging system carried aboard a DC-8 aircraft and operated by NASA's Ames Research Center at Moffett Field, USA. TOPSAR data are fully polarimetric SAR data acquired with HH-, VV-, HV- and VH-polarized signals from 5 m × 5 m pixels, recorded for three wavelengths: C band (5 cm), L band (24 cm) and P band (68 cm) at 10 m spatial resolution. A further explanation of TOPSAR data acquisition can be found in Melba et al. (1999).

3-D Coastal Water Bathymetry Model
Two models are involved in the bathymetric simulation: the Volterra model and the fuzzy B-spline model. The Volterra model is used to simulate the current velocity from the TOPSAR data. The simulated current velocity is used with the continuity equation to derive the water depth variations under different current values. The fuzzy B-spline model is then used to reconstruct the two-dimensional water depth field as a 3-D surface (Figure 1).
[Flow chart: TOPSAR data acquired in L and C bands with HH and VV polarizations → Volterra series (nonlinear) → inverse filter of the Volterra model → continuity equation → fuzzy B-splines → 3-D bathymetry reconstruction.]

Fig. 1. Flow Chart of 3-D Bathymetry Reconstruction

Volterra Model
A Volterra model can be used to express the SAR image intensity as a series of nonlinear filters on the ocean surface current. This means that the Volterra model can be used to study the image energy variation as a function of parameters such as the current direction or the current waveform. A generalized, nonparametric framework describing the input-output relation of signals x and y of a time-invariant nonlinear system is provided by Inglada and Garello (1999). In discrete form, the Volterra series for input x(n) and output y(n), as given by Inglada and Garello (1999), can be expressed as:
\[
y(n) = h_0
+ \sum_{i_1=1}^{\infty} h_1(i_1)\,x(n-i_1)
+ \sum_{i_1=1}^{\infty}\sum_{i_2=1}^{\infty} h_2(i_1,i_2)\,x(n-i_1)\,x(n-i_2)
+ \sum_{i_1=1}^{\infty}\sum_{i_2=1}^{\infty}\sum_{i_3=1}^{\infty} h_3(i_1,i_2,i_3)\,x(n-i_1)\,x(n-i_2)\,x(n-i_3)
+ \cdots
+ \sum_{i_1=1}^{\infty}\cdots\sum_{i_k=1}^{\infty} h_k(i_1,\ldots,i_k)\,x(n-i_1)\,x(n-i_2)\cdots x(n-i_k)
\tag{1}
\]
where n and i_1, i_2, ..., i_k are discrete time lags. The function h_k(i_1, i_2, ..., i_k) is the kth-order Volterra kernel characterizing the system. h_1 is the kernel of the first-order Volterra functional, which performs a linear operation on the input, while h_2, h_3, ..., h_k capture the nonlinear interactions between the input and output TOPSAR signals. The order of the non-linearity is the highest effective order of the multiple summations in the functional series.
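To make equation (1) concrete, the following sketch evaluates a truncated Volterra series of orders one and two with finite memory for a one-dimensional input signal. The kernels h0, h1 and h2 are arbitrary illustrative values, not the SAR imaging kernels derived below, and the lags are taken from 0 to M-1 for simplicity.

# Illustrative evaluation of a truncated (second-order, finite-memory) Volterra
# series; the kernels here are toy examples, not the TOPSAR kernels of eq. (2).
import numpy as np

def volterra_2nd_order(x, h0, h1, h2):
    """y[n] = h0 + sum_i h1[i] x[n-i] + sum_{i,j} h2[i,j] x[n-i] x[n-j]."""
    M = len(h1)
    y = np.full(len(x), h0, dtype=float)
    for n in range(len(x)):
        past = np.array([x[n - i] if n - i >= 0 else 0.0 for i in range(M)])
        y[n] += h1 @ past + past @ h2 @ past
    return y

x = np.sin(2 * np.pi * 0.05 * np.arange(200))              # toy input signal
h1 = np.array([0.5, 0.3, 0.1])                              # first-order kernel
h2 = 0.05 * np.outer([1.0, 0.5, 0.2], [1.0, 0.5, 0.2])      # second-order kernel
y = volterra_2nd_order(x, h0=0.0, h1=h1, h2=h2)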
Following Inglada and Garello (1999), the expression of the Volterra kernels for the current flow in the range direction can be described as:
\[
H_{1y}(\bar{u}_x, v_y) = k_y \bar{U}\;
\frac{\dfrac{\partial}{\partial u}\!\left[K^{-1}\!\left(\dfrac{\partial \zeta}{\partial t}
+ \dfrac{\partial}{\partial x}
+ 0.043\,\bar{u}_w K\,\omega_0^{-1}\right)\dfrac{\partial \psi}{\partial \omega}\right]
\left(c_g(K)\,\bar{U} + j\,0.043\,\bar{u}_w K\right)}
{\left[c_g(K)\,\bar{U}\right]^{2} + \left[0.043\,(\bar{u}_w K)^{2}\,\omega_0^{-1}\right]^{2}}
\; + \; j\,\bigl(0.6\times10^{-4}\,K^{2}\bigr)\!\left(\frac{R}{V}\right)\bar{u}_x
\tag{2}
\]
where Ū is the mean current velocity, ū_x is the current flow along the azimuth direction, ū_w is the current gradient along the azimuth direction, k_y is the spectral wave number, K is the wave number along the range direction, ω_0 is the angular wave frequency, c_g is the wave group velocity, ψ is the wave spectral energy, and R/V is the range-to-platform-velocity ratio, which in the case of TOPSAR is 32 s.
According to Vogelzang et al. (1997), the current movement along the range direction can be estimated by
(3)

where FT[X(t)] is the Fourier transform of the input TOPSAR image intensity. The inverse filter P(u_x, v_y) is used since H_{1y}(u_x, v_y) has a zero at (u_x, v_y), which indicates that the mean current velocity should have a constant offset. The inverse filter P(u_x, v_y) can be given as
(4)

Then, the continuity equation is used to estimate the water depth, as given by Vogelzang (1992):

\[
\frac{\partial \zeta}{\partial t} + \nabla \cdot \bigl\{ (d + \zeta)\,\bar{U}(u_x, v_y) \bigr\} = 0
\tag{5}
\]
where ζ is the surface elevation above the mean sea level, which is obtained from the tidal table, t is the time and d is the local water depth. The real current data were estimated from the Malaysian tidal table of December 6, 1996.
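A one-dimensional illustration of the depth inversion implied by equation (5) is sketched below: for a steady flow along a profile, (d + ζ)·u is conserved, so the depth follows from one reference sounding and the estimated current profile. The numbers used are illustrative only and are not the Terengganu data.

# Hedged 1-D sketch of the continuity-equation depth inversion: for steady flow,
# (d + zeta) * u is conserved along the profile, so one reference sounding and
# the estimated current profile give the local depth.
import numpy as np

def depth_from_continuity(u, zeta, d_ref, u_ref, zeta_ref):
    """Invert (d + zeta) * u = (d_ref + zeta_ref) * u_ref for d along a profile."""
    flux = (d_ref + zeta_ref) * u_ref           # conserved volume flux per unit width
    return flux / u - zeta

u = np.array([0.60, 0.55, 0.50, 0.65, 0.80])    # estimated surface current (m/s), illustrative
zeta = np.full_like(u, 0.4)                      # tidal elevation from the tide table (m)
depth = depth_from_continuity(u, zeta, d_ref=20.0, u_ref=0.60, zeta_ref=0.4)
print(np.round(depth, 1))                        # deeper where the current slows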
The Fuzzy B-splines Method
The fuzzy B-splines (FBS) are introduced by allowing fuzzy numbers instead of intervals in the definition of the B-splines. Typically, in computer graphics, two objective quality definitions for fuzzy B-splines are used: triangle-based criteria and edge-based criteria. A fuzzy number is defined using interval analysis. There are two basic notions that we combine together: the confidence interval and the presumption level. A confidence interval is a real-valued interval which provides the sharpest enclosing range for the current gradient values. An assumption level (μ-level) is an estimated truth value in the interval [0,1] of our knowledge of the current gradient
(Anile 1997). The value 0 corresponds to minimum knowledge of the current gradient, and 1 to the maximum. A fuzzy number is then arranged in the set of confidence intervals, each one related to an assumption level μ in [0,1]. Moreover, the following must hold for each pair of confidence intervals which define a number: μ > μ′ ⇒ h ⊂ h′. Let us consider a function f: h → h of N fuzzy variables h_1, h_2, ..., h_n, where the h_i are the global minimum and maximum water-depth values of the function on the current gradient over the space. Based on the spatial variation of the current gradient and the water depth, the algorithms are used to compute the function f. The construction begins with the same pre-processing aimed at the reduction of the measured current values onto a uniformly spaced grid of cells. As in the Volterra model, the data are derived from the TOPSAR polarised backscatter images through the application of a 2-D FFT. First of all, each estimated current value within a fixed window of 512 × 512 pixels and lines is considered as a triangular fuzzy number defined by a minimum, a maximum and a measured value. Among all the fuzzy numbers falling within a window, a fuzzy number is defined whose range is given by the minimum and maximum values of the current gradient and water depth within that window, and whose central value is chosen as the 'best choice' among all the interval extremes, considering the density of the data within the window. Then, a membership function is defined for each pixel element which incorporates the degree of certainty of the radar backscatter cross-section. In order to evaluate the simulation method quantitatively, a regression model and the root mean square error were computed for the bathymetry simulated from the TOPSAR data against bathymetry points extracted from the 1998 topographic map, sheet number 4365, at a scale of 1:25,000.
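A minimal sketch of the per-window fuzzy-number construction described above is given below. Values inside each window are reduced to a triangular fuzzy number defined by their minimum, a central value and their maximum; here the median stands in for the density-based 'best choice', and the window size is purely illustrative, both of which are simplifying assumptions.

# Sketch: reduce each win x win window of an estimated field to a triangular
# fuzzy number (minimum, central value, maximum); the median is a stand-in
# for the paper's density-based "best choice".
import numpy as np

def triangular_fuzzy_grid(field, win=64):
    """Return (lo, mid, hi) arrays, one triple per win x win window."""
    rows, cols = field.shape[0] // win, field.shape[1] // win
    lo = np.empty((rows, cols)); mid = np.empty((rows, cols)); hi = np.empty((rows, cols))
    for r in range(rows):
        for c in range(cols):
            block = field[r * win:(r + 1) * win, c * win:(c + 1) * win]
            lo[r, c], mid[r, c], hi[r, c] = block.min(), np.median(block), block.max()
    return lo, mid, hi

depth = np.random.default_rng(0).normal(15.0, 3.0, size=(512, 512))  # toy depth field
lo, mid, hi = triangular_fuzzy_grid(depth, win=64)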
Results and Discussion
Figure 2 shows the regions of interest that were used to simulate the bathymetric information from the L-band data with HH polarization. The bathymetry information has been extracted from 4 sub-images, each of 512 × 512 pixels. Figure 3 shows the signature of the underwater topography. The underwater topography is visible as frontal lines parallel to the shoreline; the signature of such a boundary is clear in the brightness of the radar return, since the backscatter tends to be proportional to wave height (Vogelzang, 1992). In the C-band with VV polarization, this feature is clearly weaker than in the L-band with HH polarization. It is possible that the character of the current gradient is such that the LHH-band
surface Bragg waves are more strongly modulated than those for the Cvv band. This may provide an explanation for the weaker bathymetric signatures in the Cvv band. The finding is similar to that of Romeiser and Alpers (1997).
Fig. 2. Selected Window Sizes of A to D, each 512 × 512 pixels
Fig. 3. Bathymetry Signature with Different Bands

Figure 4 shows the comparison between the 3-D bathymetry reconstructions from the topographic map, the LHH band data and the Cvv band data. It is obvious that the coastal water bathymetry along the Sultan Mahmud Airport has gentle slopes running parallel to the shoreline. Close to the river mouth, the bathymetry shows a sharp slope. The LHH band captured a pattern closer to the real bathymetry than the Cvv band did. This result is confirmed by the regression models in Figure 5. The scatter points in Figure 5b lie closer to the regression line for the bathymetry simulated from the LHH band, with an r² value of 0.677 and an accuracy (root mean square) of ±7.2 m, compared with an accuracy (root mean square) of ±9 m obtained using the Cvv band. This might be because HH polarization has a larger tilt modulation than VV polarization. Tilt modulation means that the Bragg scattering depends on the local incidence angle; the long wavelength of the L-band HH polarization modulates this angle and hence modifies the Bragg resonance wavelength. It might also be due to the fact that first-order Bragg scattering gives good results for long radar wavelengths (L-band), whereas for shorter radar wavelengths (C-band) the effects of waves longer than the Bragg waves must be taken into account (Shuchman et al., 1985; Romeiser and Alpers, 1997). This could also be due to the strong current flow from the river mouth at Kuala Terengganu. This study confirms the study of Maged et al. (2002).
Fig. 4. Three-Dimensional Bathymetry Reconstructions from (a) Real Topography Map, (b) LHH Band and (c) Cvv Band

[Figure 5 plots the simulated bathymetry against the real water bathymetry: (a) Cvv band, y = 0.8962x + 1.8886, R² = 0.6235, rms = ±9 m; (b) LHH band, y = 0.7509x + 3.4653, R² = 0.6775, rms = ±7.2 m.]

Fig. 5. Regression Model between Real Water Bathymetry from the Bathymetry Chart and (a) Water Bathymetry from the Cvv Band and (b) the LHH Band
Conclusion
This work has demonstrated 3-D bathymetry reconstruction from TOPSAR polarized data. The inversion of the Volterra model was used to estimate the water current movements. The fuzzy B-splines were used to reconstruct the 3-D bathymetry pattern. The LHH band provides a 3-D bathymetry closer to the real bathymetry than the Cvv band. It can be concluded that the integration of the Volterra model and the fuzzy B-splines could be an excellent tool for 3-D bathymetry reconstruction.
References
Alpers, W. and Hennings, I. (1984). A theory of the imaging mechanism of underwater bottom topography by real and synthetic aperture radar. J. Geophys. Res., 89, 10,529-10,546.
Anile, A.M., Gallo, G. and Perfilieva, I. (1997). Determination of Membership Function for Cluster of Geographical Data. Genova: Institute for Applied Mathematics, National Research Council, October 1997, 25 p., Technical Report No. 26/97.
Anile, A.M. (1997). Report on the activity of the fuzzy soft computing group. Technical Report of the Dept. of Mathematics, University of Catania, March 1997, 10 p.
Anile, A.M., Deodato, S. and Privitera, G. (1995). Implementing fuzzy arithmetic. Fuzzy Sets and Systems, 72.
Fuchs, H., Kedem, Z.M. and Uselton, S.P. (1977). Optimal Surface Reconstruction from Planar Contours. Communications of the ACM, 20(10), 693-702.
Hesselmans, G.H., Wensink, G.J., van Koppen, C.G., Vernemmen, C. and van Cauwenberghe, C. (2000). Bathymetry assessment demonstration off the Belgian coast - Babel. The Hydrographic Journal, No. 96, pp. 3-8.
Inglada, J. and Garello, R. (1999). Depth estimation and 3D topography reconstruction from SAR images showing underwater bottom topography signatures. In Proceedings of IGARSS'99.
Keppel, E. (1975). Approximating Complex Surfaces by Triangulation of Contour Lines. IBM Journal of Research and Development, 19:2.
Lee, J.S., Schuler, D., Ainsworth, T.L., Krogager, E., Kasilingam, D. and Boerner, W.M. (2002). On the estimation of radar polarization orientation shifts induced by terrain slopes. IEEE Trans. Geosci. Remote Sens., Vol. 40, No. 1, pp. 30-41.
Maged, M. (1994). Coastal Water Circulation off Kuala Terengganu, Malaysia. MSc. Thesis, Universiti Pertanian Malaysia.
Maged, M., Mohd, H.L. and Yunus, K. (2002). TOPSAR model for bathymetry pattern detection along the coastal waters of Kuala Terengganu, Malaysia. Journal of Physical Sciences, 14(3), 487-490.
Melba, M., Kumar, S., Richard, M.R., Gibeaut, J.C. and Amy, N. (1999). Fusion of airborne polarimetric and interferometric SAR for classification of coastal environments. IEEE Transactions on Geoscience and Remote Sensing, Vol. 37, pp. 1306-1315.
Romeiser, R. and Alpers, W. (1997). An improved composite surface model for the radar backscattering cross section of the ocean surface, 2, Model response to surface roughness variations and the radar imaging of underwater bottom topography. J. Geophys. Res., 102, 25,251-25,267.
Shuchman, R.A., Lyzenga, D.R. and Meadows, G.A. (1985). Synthetic aperture radar imaging of ocean-bottom topography via tidal-current interactions: theory and observations. Int. J. Rem. Sens., 6, 1179-1200.
Vogelzang, J., Wensink, G.J., de Loor, G.P., Peters, H.C. and Pouwels, H. (1992). Sea bottom topography with X band SLAR: the relation between radar imagery and bathymetry. Int. J. Rem. Sens., 13, 1943-1958.
Vogelzang, J. (1997). Mapping submarine sand waves with multiband imaging radar, 1, Model development and sensitivity analysis. J. Geophys. Res., 102, 1163-1181.
Vogelzang, J., Wensink, G.J., Calkoen, C.J. and van der Kooij, M.W.A. (1997). Mapping submarine sand waves with multiband imaging radar, 2, Experimental results and model comparison. J. Geophys. Res., 102, 1183-1192.
Wensink, H. and Campbell, G. (1997). Bathymetric map production using the ERS SAR. Backscatter, 8(1), 17-22.