OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK®
OTHER BOOKS IN THE (ISC)2® PRESS SERIES
Building and Implementing a Security Certification and Accreditation Program: Official (ISC)2® Guide to the CAPCM CBK®
Patrick D. Howard
ISBN: 0-8493-2062-3

Official (ISC)2® Guide to the SSCP® CBK®
Diana-Lynn Contesti, Douglas Andre, Eric Waxvik, Paul A. Henry, and Bonnie A. Goins
ISBN: 0-8493-2774-1

Official (ISC)2® Guide to the CISSP®-ISSEP® CBK®
Susan Hansche
ISBN: 0-8493-2341-X

Official (ISC)2® Guide to the CISSP® CBK®
Harold F. Tipton and Kevin Henry, Editors
ISBN: 0-8493-8231-9
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK®
Edited by Harold F. Tipton, CISSP-ISSAP, ISSMP, and Kevin Henry, CISSP-ISSEP, ISSMP, CAP, SSCP
Boca Raton New York
Auerbach Publications is an imprint of the Taylor & Francis Group, an informa business
This edition published in the Taylor & Francis e-Library, 2008. “To purchase your own copy of this or any of Taylor & Francis or Routledge’s collection of thousands of eBooks please go to www.eBookstore.tandf.co.uk.” Glossary © 2007 by Taylor & Francis Group, LLC.
Auerbach Publications
Taylor & Francis Group
6000 Broken Sound Parkway NW, Suite 300
Boca Raton, FL 33487-2742

© 2007 by (ISC)2®
Auerbach is an imprint of Taylor & Francis Group, an Informa business

ISBN 0-203-88893-6 (Master e-book ISBN)
International Standard Book Number-10: 0-8493-8231-9 (Hardcover)
International Standard Book Number-13: 978-0-8493-8231-4 (Hardcover)

This book contains information obtained from authentic and highly regarded sources. Reprinted material is quoted with permission, and sources are indicated. A wide variety of references are listed. Reasonable efforts have been made to publish reliable data and information, but the author and the publisher cannot assume responsibility for the validity of all materials or for the consequences of their use.

No part of this book may be reprinted, reproduced, transmitted, or utilized in any form by any electronic, mechanical, or other means, now known or hereafter invented, including photocopying, microfilming, and recording, or in any information storage or retrieval system, without written permission from the publishers.

For permission to photocopy or use material electronically from this work, please access www.copyright.com (http://www.copyright.com/) or contact the Copyright Clearance Center, Inc. (CCC), 222 Rosewood Drive, Danvers, MA 01923, 978-750-8400. CCC is a not-for-profit organization that provides licenses and registration for a variety of users. For organizations that have been granted a photocopy license by the CCC, a separate system of payment has been arranged.

Trademark Notice: Product or corporate names may be trademarks or registered trademarks, and are used only for identification and explanation without intent to infringe.

Library of Congress Cataloging-in-Publication Data

Tipton, Harold F.
Official (ISC)2® guide to the CISSP® CBK® : (ISC)2® Press / Harold F. Tipton, Kevin Henry.
p. cm.
Includes bibliographical references and index.
ISBN 0-8493-8231-9 (alk. paper)
1. Electronic data processing personnel--Certification. 2. Computer networks--Examinations--Study guides. I. Henry, Kevin. II. Title.
QA76.3.T565 2006
004.6--dc22    2006043032

Visit the Taylor & Francis Web site at http://www.taylorandfrancis.com and the Auerbach Web site at http://www.auerbach-publications.com
Foreword to CBK® Study Guide

As the networked world continues to shape and impact every aspect of our lives, threats to the global network infrastructure continue to rise in parallel. That's why there has never been a greater urgency for a global standard of excellence for those who protect the networked world. That has been the mission of the International Information Systems Security Certification Consortium (ISC)2® from its inception. Formed in 1989 by multiple professional associations to develop an accepted industry standard for the practice of information security, (ISC)2 created the information security industry's first and only CBK®, a global compendium of industry best practices. Continually updated to incorporate rapidly changing technologies and threats, the CBK continues to serve as the basis for (ISC)2's education and certification programs.

To date, (ISC)2 has certified more than 40,000 security professionals and practitioners in over 100 countries and continues to meet the growing demand for information security accreditation.

Just as technology and its impact on society have dramatically changed since (ISC)2 was first envisioned, so has the role of information security professionals. The need for highly qualified information security professionals to protect information assets has now been accepted by organizations worldwide, both private and public. In recent years, the rise of the Chief Information Security Officer position has been a watershed event in the influence and significance of the information security professional in maintaining effective IT governance and risk management.

Results from the 2005 Global Information Security Workforce Study, conducted by global analyst firm IDC and sponsored by (ISC)2, revealed that ultimate responsibility for information security moved up the management hierarchy, with more respondents identifying the board of directors and CEO, or a CISO/CSO, as being accountable for their company's information security. The study also showed that nearly 75 percent of all respondents believed their influence with executives and the board of directors would
increase in the coming year. These findings bode well for the profession and for effectively securing infrastructure.

(ISC)2 is continuing to do its part to assist all those who choose this profession and to proliferate standards for professionalism, whether by creating the first information security career guide for high school and college students to meet the growing demand for new talented entries into the field, establishing Affiliated Local Interest Groups to meet the peer networking and professional growth needs of (ISC)2 members and other information security professionals worldwide, working with top organizations such as Microsoft to require certifications of security partners, or organizing seminars around the world with the most respected thought leaders in the industry.

With the ever-growing importance of information security to organizations and society at large, (ISC)2 remains committed to ensuring that the highest standards of information security are maintained by certified professionals worldwide. Its Certified Information Systems Security Professional (CISSP) certification, considered the Gold Standard in the information security industry, continues to be an invaluable tool in independently validating a candidate's expertise in developing information security policies, standards, and procedures, as well as managing implementation across the enterprise. In addition to passing the six-hour CISSP exam, applicants must be endorsed by an existing (ISC)2 credential-holder, demonstrate sufficient professional experience in one or more of the CBK domains, and subscribe to the (ISC)2 Code of Ethics. The Code of Ethics describes the professional behavior expected of the CISSP.

A major factor that sets the CISSP apart from other security certifications is the breadth of knowledge and the experience necessary to pass the exam. CISSP candidates can't be overly specialized in just one domain — they must know and understand the full spectrum of the CBK to become certified.

In order to maintain their certification, holders of the CISSP are required to earn 120 Continuing Professional Education (CPE) credits every three years. CPE credits are earned through activities related to the information security profession including, but not limited to, the following:

• Attending educational courses or seminars
• Attending security conferences
• Being a member of an association chapter and attending meetings
• Listening to vendor presentations
• Completing university/college courses
• Providing security training
• Publishing security articles or books
• Serving on industry boards
• Self-study
• Completing volunteer work, including serving on (ISC)2 volunteer committees

Re-certification is required for information security professionals to maintain their CISSP title. In addition, the CISSP was the first information security credential to be accredited by ANSI (American National Standards Institute) under ISO/IEC Standard 17024. ISO/IEC 17024 establishes a global benchmark for the certification of personnel and is becoming increasingly important to organizations for ensuring competency in different professions.

This book is the only document that addresses all of the topics and sub-topics contained in the CISSP CBK. The authors and editors of this comprehensive textbook have provided an extensive supplement to the official CISSP CBK Review Seminars, which are designed to help candidates study for CISSP certification. The Official (ISC)2® Guide to the CISSP® CBK® is ideal not only for information security professionals attempting to achieve CISSP certification but also for those who are trying to decide which, if any, certification to pursue. Executives and organizational managers who want a more complete understanding of all the elements that are required in effectively protecting their enterprise will also find this guide extremely useful.

Sincerely,
Tony Baratta, CISSP-ISSAP, ISSMP, SSCP
Director of Professional Programs
(ISC)2®
About the Editors

Harold F. Tipton, CISSP-ISSAP, ISSMP, currently an independent consultant and past president of the International Information System Security Certification Consortium, was director of computer security for Rockwell International Corporation for 15 years. He initiated the Rockwell computer and data security program in 1977, and then continued to administer, develop, enhance, and expand the program to accommodate the control needs produced by technological advances until his retirement from Rockwell in 1994.

He has been a member of the Information Systems Security Association (ISSA) since 1982, was president of the Los Angeles chapter in 1984, and was president of the national organization of ISSA from 1987 to 1989. He was added to the ISSA Hall of Fame and the ISSA Honor Roll in 2000. He received the Computer Security Institute Lifetime Achievement Award in 1994 and the (ISC)2 Hal Tipton Award in 2001. He was a member of the National Institute of Standards and Technology (NIST) Computer and Telecommunications Security Council and the National Research Council Secure Systems Study Committee (for the National Academy of Science). He has a B.S. in engineering from the U.S. Naval Academy, an M.A. in personnel administration from George Washington University, and a certificate in computer science from the University of California at Irvine.

He has published several papers on information security issues in the Information Security Management Handbook, Data Security Management, Information Systems Security, and the National Academy of Sciences report Computers at Risk. He has been a speaker at all of the major information security conferences, including the Computer Security Institute, the ISSA Annual Working Conference, the Computer Security Workshop, MIS conferences, AIS Security for Space Operations, the DOE Computer Security Conference, the National Computer Security Conference, the IIA Security Conference, EDPAA, the UCCEL Security and Audit Users Conference, and the Industrial Security Awareness Conference. He has conducted and participated in information security seminars for (ISC)2, Frost & Sullivan, UCI, CSULB, system exchange seminars, and the Institute for International Research. He is currently serving as editor of Data Security Management and the Information Security Management Handbook.
Kevin Henry, CISSP-ISSEP, ISSMP, CAP, SSCP, is a well-known speaker and consultant in the field of information security and business continuity planning. Kevin provides educational and consulting services to organizations throughout the world and is an official instructor for (ISC)2, the world's leading certification body in information security. He is responsible for course development and delivery for several (ISC)2 programs.

Kevin has a broad range of experience in both technology and management of information technology and information security programs. He has worked for clients ranging from the largest telecommunications firms in the world to governments, military, and small home-based operations. Kevin is a highly respected presenter at conferences, seminars, and educational programs worldwide. With over 20 years of telecommunications and government experience, he brings a relevant and interesting approach to information security and provides practical and meaningful solutions to the information security challenges, threats, and regulations we face today.
Contributors

Alec Bass, CISSP, is a senior security specialist in the Boston area. During his 25-year career, Alec has developed solutions that significantly reduce risk to the digital assets of high-profile manufacturing, communications, home entertainment, financial, research, and federal organizations. He has helped enterprises enhance their network's security posture, performed penetration testing, and administered client firewalls for an application service provider. Before devoting his career to information security, Alec supported the IT infrastructure for a multinational Fortune 200 company and fixed operating system bugs for a leading computer firm.

Peter Berlich, CISSP-ISSMP, is working as an IT security manager on a large outsourcing account at IBM Integrated Technology Services, coming from a progression of IT security- and compliance-related roles in IBM. Before joining IBM, he was global information security manager at ABB, after a succession of technical and project management roles with a focus on network security management. Peter is a member of the (ISC)2 European Advisory Board and the Information Security Forum (ISF) Council. He is the author of various articles on the subject of security and privacy management in publications such as Infosecurity Today. He holds a degree in physics, and his personal motto is to give clarity and empowerment.

Todd Fitzgerald, CISSP, CISA, CISM, is the director of information systems security and a systems security officer for United Government Services, LLC (UGS), Milwaukee, WI. Todd has authored articles on information security for publications such as Information Security Magazine, The Information Security Handbook, The HIPAA Program Reference Book, and Managing an Information Security and Privacy Awareness and Training Program. Todd, a member of the editorial board for Information Systems Security: The (ISC)2 Journal, is frequently called upon to present at national and local conferences, and has received several security industry leadership awards. Todd holds a B.S. in business administration from the University of Wisconsin–LaCrosse and an M.B.A. from Oklahoma State University.
Bonnie Goins, CISSP, with over 17 years of experience in management consulting, information technology, and security, is a recognized subject matter expert in information security management. Her security and business expertise has been put to use by many organizations to enhance or develop world-class operations. Bonnie holds an M.S. in information systems and a bachelor's degree in psychology, as well as the following certifications: BS 7799 lead auditor, Certified Information Systems Security Professional (CISSP), National Security Agency information assurance methodology (NSA IAM) certified assessor, global information assurance certification (GIAC), certified information security manager (CISM), and Internet security specialist (ISS).

Paul Hansford, CISSP, Dip.Infosec, CISMP, FBCS, FCIPD, CLAS, is a principal consultant with Insight Consulting, part of Siemens Communications. He has worked in risk analysis and management, policy development, system accreditation, security training, and competency and certification issues. In 2001, he established the U.K. Government's Infosec Training Paths and Competencies scheme and, between 2004 and 2006, delivered the Infosec syllabus at the U.K. National School of Government. Paul is a member of the (ISC)2 European Advisory Board and the CBK Committee, and in 2005 was involved in the development of the new U.K. Institute for Information Security Professionals (IISP).

Kevin Henry, CISSP-ISSEP, ISSMP, CAP, SSCP, is a well-known speaker and consultant in the field of information security and business continuity planning. He provides educational and consulting services to organizations throughout the world and is an official instructor for (ISC)2, the world's leading certification body in information security. He is responsible for course development and delivery for several (ISC)2 programs. Kevin has a broad range of experience in both technology and management of information technology and information security programs. He has worked for clients ranging from the largest telecommunications firms in the world to governments, military, and small home-based operations. He is a highly respected presenter at conferences, seminars, and educational programs worldwide. With over 20 years of telecommunications and government experience, he brings a relevant and interesting approach to information security and provides practical and meaningful solutions to the information security challenges, threats, and regulations we face today.

Rebecca Herold, CISSP, CISM, CISA, FLMI, is an information privacy, security, and compliance consultant, author, and instructor with over 16 years of experience assisting organizations of all sizes in all industries throughout the world. Rebecca has written numerous books, including Managing an Information Security and Privacy Awareness and Training Program (Auerbach Publications) and The Privacy Management Toolkit (Information Shield), along with dozens of book chapters and hundreds of published articles. Rebecca speaks often at conferences, and develops and teaches workshops for the Computer Security Institute. Rebecca is resident editor for the IT Compliance Community and also an adjunct professor for the Norwich University Master of Science in Information Assurance (MSIA) program. Rebecca has a B.S. in math and computer science and an M.A. in computer science and education.

Carl B. Jackson, CISSP, is the Business Continuity Program Director for Pacific Life Insurance Company in Newport Beach, California. He brings more than 30 years of experience in the areas of continuity planning, information security, and information technology internal control and quality assurance reviews and audits. He has also served with various consultancies specializing in business continuity planning and information security, where his responsibilities included development and oversight of continuity methodologies, project management, tools acquisition, and ongoing testing, maintenance, training, and measurement of enterprisewide business continuity planning. Carl recently served as Chairman of the Information Systems Security Association (ISSA) International Board of Directors. Previously, he was a founding board member and past president of the ISSA, as well as a founding board member of the Houston, Texas, chapter of the Association of Contingency Planners (ACP). He is a past member and past Emeritus member of the Computer Security Institute (CSI) Advisory Council and is the recipient of the 1997 CSI Lifetime Achievement Award. He has also served on the editorial and advisory boards of both Contingency Planning Management (CPM) magazine and Datapro Reports on Information Security.

William Lipiczky has practiced in the information technology and security arena for over two decades, beginning his career as a mainframe operator. As information technology and security evolved, he evolved as well. His experience includes networking numerous operating systems (UNIX, NetWare, and Windows) and networking hardware platforms. He currently is a principal in a security consulting and management firm, as well as a lead CISSP instructor for the International Information System Security Certification Consortium.

Sean M. Price, CISSP, is an independent information security consultant located in the Washington, D.C. area. He provides security consulting and engineering support for commercial and government entities. His experience includes nine years as an electronics technician in metrology for the U.S. Air Force. He has completed a B.S. in accounting and an M.S. in computer information systems. Sean is continually immersed in research and development activities for secure systems.
Marcus K. Rogers, Ph.D., CISSP, CCCI, is the chair of the Cyber Forensics Program in the Department of Computer and Information Technology at Purdue University. He is an associate professor and research faculty member at CERIAS. Dr. Rogers was a senior instructor for (ISC)2, is a member of the quality assurance board for the SSCP designation, and is a member of the international CBK committee. He is a former police detective who worked in the area of fraud and computer crime investigations. He sits on the editorial board for several professional journals and is a member of various national and international committees.

Robert M. Slade, CISSP, is an information security and management consultant from Vancouver, Canada. His research into computer viral programs started when they first appeared as a major problem "in the wild"; he is best known for a series of review and tutorial articles that were eventually published as Robert Slade's Guide to Computer Viruses. As an outgrowth of the virus research, he prepared the world's first course on forensic programming, which became the first book on software forensics. As a senior instructor for (ISC)2, he is currently working on a glossary of security terms, as well as references for CISSP candidate students.

James S. Tiller, CISSP, CISA, is an accomplished executive with over 14 years of information security and information technology experience and leadership. He has provided comprehensive, forward-thinking solutions encompassing a broad spectrum of challenges and industries. Jim has spent much of his career assisting organizations throughout North America, Europe, and most recently Asia, in meeting their security goals and objectives. He is the author of The Ethical Hack: A Framework for Business Value Penetration Testing and A Technical Guide to IPsec Virtual Private Networks. Jim has been a contributing author to the Information Security Management Handbook for the last five years, in addition to several other publications. He is also the managing editor of the Information Systems Security journal. Currently, Jim is the managing vice president of security services for INS.
Introduction to the (ISC)2® CISSP® CBK® Textbook

The Official (ISC)2® Guide to the CISSP® CBK® is an important milestone in the history of (ISC)2. From its days as a small volunteer organization in 1989 to today's position as a leader in the field of information security, (ISC)2 has recognized, educated, and supported the critical role that information security professionals play in the stability of the global infrastructure. Current industry changes have caused information security professionals to reflect on the key role that each individual plays in designing, developing, implementing, and maintaining a strong information security program and in aligning personal objectives with the requirements of business, organizations, society, governments, and the military.

To write this valuable reference, skilled authors who are experts in their fields were chosen to contribute the various chapters and share their passion for their areas of expertise. This book was written as an authoritative reference that can be used not only for gaining a better understanding of the CISSP CBK, (ISC)2's global compendium of information security best practices, but also as a reference book that will hold a prominent position on every CISSP's bookshelf, to be turned to repeatedly for insight into the vast field of information security.

The (ISC)2 CISSP CBK is a taxonomy — a collection of topics relevant to information security professionals around the world. The CISSP CBK establishes a common framework of information security terms and principles that allows information security professionals worldwide to discuss, debate, and resolve matters pertaining to the profession with a common understanding. Understanding the CBK allows intelligent discussion with peers on information security issues.

The CISSP CBK is continuously evolving. Every year the (ISC)2 CBK committee reviews the content of the CBK and updates it with a consensus of
best practices from an in-depth job analysis survey of CISSPs around the world. These best practices may address implementing new technologies, dealing with new threats, incorporating new security tools, and, of course, managing the human factor of security. (ISC)2 strives to represent changes and trends in the industry through our award-winning CISSP CBK Review Seminars and other educational materials.

One of the most obvious changes in this book is the streamlining of the domains of the CISSP. While the number of domains still stands at 10, the content of some of the domains has been shifted to other domains to allow for a more appropriate placement in the flow of material. Some of the domain titles have also been revised to reflect changing terminology and emphasis in the security professional's day-to-day world. The following revised ten domains of the CISSP CBK, with brief descriptions, are listed in the order recommended for (ISC)2 review seminar instructors and in the order you will find them in this book:

• Information Security and Risk Management: Addresses the framework and policies, concepts, principles, structures, and standards used to establish criteria for the protection of information assets, to inculcate those criteria holistically, and to assess the effectiveness of that protection. It includes issues of governance, organizational behavior, ethics, and security awareness. This domain also addresses risk assessment and risk management.

• Access Control: The collection of mechanisms and procedures that permits managers of a system to exercise a directing or restraining influence over the behavior, use, and content of a system. Access control permits management to specify what users or processes can do, which resources they can access, and what operations they can perform on a system.

• Cryptography: Addresses the principles, means, and methods of disguising information to ensure its integrity, confidentiality, and authenticity in transit and in storage.

• Physical (Environmental) Security: Addresses the common physical and procedural risks that may exist in the environment in which an information system is managed. This domain also addresses physical and procedural defensive and recovery strategies, countermeasures, and resources available to the information security professional. These resources include staff, the configuration of the physical environment, security policies and procedures, and an array of physical security tools.
• Security Architecture and Design: Addresses the high-level and detailed processes, concepts, principles, structures, and standards used to define, design, implement, monitor, and secure/assure operating systems, applications, equipment, and networks. It addresses the technical security policies of the organization, as well as the implementation and enforcement of those policies. Security Architecture and Design must clearly address the design, implementation, and operation of those controls used to enforce various levels of confidentiality, integrity, and availability to ensure effective operation and compliance (with governance and other drivers).

• Business Continuity and Disaster Recovery Planning: Addresses the preparation, processes, and practices required to ensure the preservation of the business in the face of major disruptions to normal business operations. BCP and DRP involve the identification, selection, implementation, testing, and updating of processes and specific actions necessary to prudently protect critical business processes from the effects of major system and network disruptions and to ensure the timely restoration of business operations if significant disruptions occur.

• Telecommunications and Network Security: Encompasses the structures, transmission methods, transport formats, and security measures used to provide integrity, availability, authentication, and confidentiality for transmissions over private and public communications networks and media.

• Application Security: Refers to the controls that are included within and applied to system and application software. Application software includes agents, applets, operating systems, databases, data warehouses, knowledge-based systems, etc. These may be used in a distributed or centralized environment.

• Operations Security: Addresses the protection and control of data processing resources in both centralized (data center) and distributed (client/server, etc.) environments. Although Operations Security involves the confidentiality and integrity of information and processes, a major focus is on ensuring the availability of systems for business units and their end users.

• Legal, Regulations, Compliance and Investigations: Addresses general computer crime legislation and regulations, the investigative measures and techniques that can be used to determine if an incident has occurred, and the gathering, analysis, and management of evidence if it exists. The focus is on concepts and international, generally accepted methods, processes, and procedures.
This textbook has been developed to help information security professionals who want to better understand the knowledge requirements of their profession and have that knowledge validated by the CISSP certification. Since few practitioners have significant work experience in all 10 domains, the authors highly recommend that candidates attend a CBK Review Seminar to identify those areas where more concentrated study is necessary, and then read in depth the sections of this book on the domains where they feel they are most deficient. Another way to utilize this book is to test yourself first on the 200 sample CISSP exam questions that follow each domain chapter. When you find you don't know the answer to a domain question, simply read the preceding chapter.

Although this book includes a broad range of important material, the information security field is so wide that professionals are advised to review other references as well. A list of (ISC)2 recommended reading can be found at https://www.isc2.org/cgi-bin/content.cgi?category=698.

We would like to thank the authors who contributed to this book and the efforts of the many people who made an undertaking such as this successful. We trust that you will find this to be a valuable reference that will lead you to a greater appreciation of the important field of information security.
Contents

Domain 1 Information Security and Risk Management .... 1
Todd Fitzgerald, CISSP, Bonnie Goins, CISSP, and Rebecca Herold, CISSP
    Introduction .... 1
    CISSP Expectations .... 2
    The Business Case for Information Security Management .... 4
    Core Information Security Principles: Confidentiality, Availability, Integrity (CIA) .... 5
    Confidentiality .... 5
    Integrity .... 6
    Availability .... 6
    Security Management Practice .... 7
    Information Security Management Governance .... 7
    Security Governance Defined .... 8
    Security Policies, Procedures, Standards, Guidelines, and Baselines .... 9
    Security Policy Best Practices .... 10
    Types of Security Policies .... 12
    Standards .... 13
    Procedures .... 14
    Baselines .... 15
    Guidelines .... 16
    Combination of Policies, Standards, Baselines, Procedures, and Guidelines .... 16
    Policy Analogy .... 16
    Audit Frameworks for Compliance .... 17
    COSO .... 17
    ITIL .... 18
    COBIT .... 18
    ISO 17799/BS 7799 .... 18
    Organizational Behavior .... 19
    Organizational Structure Evolution .... 20
    Today's Security Organizational Structure .... 21
    Best Practices .... 22
    Job Rotation .... 23
    Separation of Duties .... 23
    Least Privilege (Need to Know) .... 25
    Mandatory Vacations .... 25
    Job Position Sensitivity .... 25
    Responsibilities of the Information Security Officer .... 26
    Communicate Risks to Executive Management .... 26
    Budget for Information Security Activities .... 27
    Ensure Development of Policies, Procedures, Baselines, Standards, and Guidelines .... 28
    Develop and Provide Security Awareness Program .... 28
    Understand Business Objectives .... 28
    Maintain Awareness of Emerging Threats and Vulnerabilities .... 29
    Evaluate Security Incidents and Response .... 29
    Develop Security Compliance Program .... 29
    Establish Security Metrics .... 29
    Participate in Management Meetings .... 30
    Ensure Compliance with Government Regulations .... 30
    Assist Internal and External Auditors .... 30
    Stay Abreast of Emerging Technologies .... 30
    Reporting Model .... 31
    Business Relationships .... 31
    Reporting to the CEO .... 31
    Reporting to the Information Technology (IT) Department .... 32
    Reporting to Corporate Security .... 32
    Reporting to the Administrative Services Department .... 33
    Reporting to the Insurance and Risk Management Department .... 33
    Reporting to the Internal Audit Department .... 33
    Reporting to the Legal Department .... 34
    Determining the Best Fit .... 34
    Enterprisewide Security Oversight Committee .... 34
    Vision Statement .... 34
    Mission Statement .... 35
    Security Planning .... 42
    Strategic Planning .... 43
    Tactical Planning .... 43
    Operational and Project Planning .... 43
    Personnel Security .... 44
    Hiring Practices .... 44
    Security Awareness, Training, and Education .... 51
    Why Conduct Formal Security Awareness Training? .... 51
    Training Topics .... 52
    What Might a Course in Security Awareness Look Like? .... 52
    Awareness Activities and Methods .... 54
    Job Training .... 55
    Professional Education .... 56
    Performance Metrics .... 56
    Risk Management .... 56
    Risk Management Concepts .... 57
    Qualitative Risk Assessments .... 58
    Quantitative Risk Assessments .... 60
    Selecting Tools and Techniques for Risk Assessment .... 62
    Risk Assessment Methodologies .... 62
    Risk Management Principles .... 64
    Risk Avoidance .... 64
    Risk Transfer .... 64
    Risk Mitigation .... 65
    Risk Acceptance .... 65
    Who Owns the Risk? .... 66
    Risk Assessment .... 66
    Identify Vulnerabilities .... 66
    Identify Threats .... 67
    Determination of Likelihood .... 67
    Determination of Impact .... 68
    Determination of Risk .... 68
    Reporting Findings .... 69
    Countermeasure Selection .... 69
    Information Valuation .... 70
    Ethics .... 71
    Regulatory Requirements for Ethics Programs .... 73
    Example Topics in Computer Ethics .... 74
    Computers in the Workplace .... 74
    Computer Crime .... 74
    Privacy and Anonymity .... 75
    Intellectual Property .... 75
    Professional Responsibility and Globalization .... 75
    Common Computer Ethics Fallacies .... 75
    The Computer Game Fallacy .... 76
    The Law-Abiding Citizen Fallacy .... 76
    The Shatterproof Fallacy .... 76
    The Candy-from-a-Baby Fallacy .... 77
    The Hacker's Fallacy .... 77
    The Free Information Fallacy .... 77
    Hacking and Hacktivism .... 77
    The Hacker Ethic .... 78
    Ethics Codes of Conduct and Resources .... 78
    The Code of Fair Information Practices .... 78
    Internet Activities Board (IAB) (now the Internet Architecture Board) and RFC 1087 .... 79
    Computer Ethics Institute (CEI) .... 79
    National Conference on Computing and Values .... 80
    The Working Group on Computer Ethics .... 80
    National Computer Ethics and Responsibilities Campaign (NCERC) .... 80
    (ISC)2 Code of Ethics .... 81
    Organizational Ethics Plan of Action .... 82
    How a Code of Ethics Applies to CISSPs .... 84
    References .... 87
    Other References .... 87
    Sample Questions .... 88

Domain 2 Access Control .... 93
James S. Tiller, CISSP
    Introduction .... 93
    CISSP® Expectations .... 93
    Confidentiality, Integrity, and Availability .... 93
    Definitions and Key Concepts .... 94
    Determining Users .... 95
    Defining Resources .... 96
    Specifying Use .... 97
    Accountability .... 97
    Access Control Principles .... 98
    Separation of Duties .... 98
    Least Privilege .... 101
    Information Classification .... 101
    Data Classification Benefits .... 102
    Establishing a Data Classification Program .... 103
    Labeling and Marking .... 107
    Data Classification Assurance .... 107
    Summary .... 108
    Access Control Categories and Types .... 108
    Control Categories .... 108
    Preventative .... 108
    Deterrent .... 109
    Detective .... 109
    Corrective .... 110
    Recovery .... 111
    Compensating .... 111
    Types of Controls .... 112
    Administrative .... 113
    Physical .... 124
    Technical .... 125
    Access Control Threats .... 130
    Denial of Service .... 130
    Buffer Overflows .... 131
    Mobile Code .... 132
    Malicious Software .... 133
    Password Crackers .... 134
    Spoofing/Masquerading .... 136
    Sniffers, Eavesdropping, and Tapping .... 137
    Emanations .... 138
    Shoulder Surfing .... 139
    Object Reuse .... 139
    Data Remanence .... 140
    Unauthorized Targeted Data Mining .... 142
    Dumpster Diving .... 143
    Backdoor/Trapdoor .... 144
    Theft .... 144
    Social Engineering .... 145
    E-mail Social Engineering .... 145
    Help Desk Fraud .... 146
    Access to Systems .... 147
    Identification and Authentication .... 147
    Types of Identification .... 148
    Types of Authentication .... 149
    Authentication Method Summary .... 167
    Identity and Access Management .... 169
    Identity Management .... 170
    Identity Management Challenges .... 172
    Identity Management Technologies .... 173
    Access Control Technologies .... 179
    Single Sign-On .... 179
    Kerberos .... 181
    Secure European System for Applications in a Multi-Vendor Environment (SESAME) .... 184
    Security Domain .... 185
    Section Summary .... 186
    Access to Data .... 186
    Discretionary and Mandatory Access Control .... 186
    Access Control Lists .... 188
    Access Control Matrix .... 188
    Rule-Based Access Control .... 188
    Role-Based Access Control .... 189
    Content-Dependent Access Control .... 191
    Constrained User Interface .... 191
    Capability Tables .... 191
    Temporal (Time-Based) Isolation .... 192
    Centralized Access Control .... 192
    Decentralized Access Control .... 192
    Section Summary .... 192
    Intrusion Detection and Prevention Systems .... 194
    Intrusion Detection Systems .... 195
    Network Intrusion Detection System .... 196
    Host-Based Intrusion Detection System .... 197
    Analysis Engine Methods .... 198
    Pattern/Stateful Matching Engine .... 199
    Anomaly-Based Engine .... 200
    Intrusion Responses .... 201
    Alarms and Signals .... 203
    IDS Management .... 204
    Access Control Assurance .... 205
    Audit Trail Monitoring .... 205
    Audit Event Types .... 205
    Auditing Issues and Concerns .... 206
    Information Security Activities .... 207
    Penetration Testing .... 208
    Types of Testing .... 213
    Summary .... 215
    References .... 215
    Sample Questions .... 215

Domain 3 Cryptography .... 219
Kevin Henry, CISSP
    Introduction .... 219
    CISSP Expectations .... 219
    Core Information Security Principles: Confidentiality, Integrity, and Availability .... 219
    Key Concepts and Definitions .... 220
    The History of Cryptography .... 222
    The Early (Manual) Era .... 222
    The Mechanical Era .... 222
    The Modern Era .... 223
    Emerging Technology .... 223
    Quantum Cryptography .... 223
    Protecting Information .... 225
    Data Storage .... 225
    Data Transmission .... 225
    Uses of Cryptography .... 226
Availability ............ 226
Confidentiality ............ 226
Integrity ............ 226
Additional Features of Cryptographic Systems ............ 226
Nonrepudiation ............ 227
Authentication ............ 227
Access Control ............ 227
Methods of Cryptography ............ 227
Stream-Based Ciphers ............ 227
Block Ciphers ............ 229
Encryption Systems ............ 229
Substitution Ciphers ............ 229
Playfair Cipher ............ 229
Transposition Ciphers ............ 230
Monoalphabetic and Polyalphabetic Ciphers ............ 231
Modular Mathematics and the Running Key Cipher ............ 233
One-Time Pads ............ 234
Steganography ............ 235
Watermarking ............ 235
Code Words ............ 235
Symmetric Ciphers ............ 236
Examples of Symmetric Algorithms ............ 237
Advantages and Disadvantages of Symmetric Algorithms ............ 252
Asymmetric Algorithms ............ 253
Confidential Messages ............ 253
Open Message ............ 254
Confidential Messages with Proof of Origin ............ 254
RSA ............ 254
Diffie–Hellman Algorithm ............ 257
El Gamal ............ 258
Elliptic Curve Cryptography ............ 258
Advantages and Disadvantages of Asymmetric Key Algorithms ............ 258
Hybrid Cryptography ............ 259
Message Integrity Controls ............ 260
Checksums ............ 260
Hash Function ............ 260
Simple Hash Functions ............ 261
MD5 Message Digest Algorithm ............ 261
Secure Hash Algorithm (SHA) and SHA-1 ............ 262
HAVAL ............ 262
RIPEMD-160 ............ 262
Attacks on Hashing Algorithms and Message Authentication Codes ............ 263
Message Authentication Code (MAC) ............ 264
HMAC ............ 264
Digital Signatures ............ 265
Digital Signature Standard (DSS) ............ 265
Uses of Digital Signatures ............ 266
Encryption Management ............ 266
Key Management ............ 266
Key Recovery ............ 267
Key Distribution Centers ............ 268
Standards for Financial Institutions ............ 268
Public Key Infrastructure (PKI) ............ 269
Revocation of a Certificate ............ 271
Cross-Certification ............ 271
Legal Issues Surrounding Cryptography ............ 271
Cryptanalysis and Attacks ............ 271
Ciphertext-Only Attack ............ 271
Known Plaintext Attack ............ 271
Chosen Plaintext Attack ............ 272
Chosen Ciphertext Attack ............ 272
Social Engineering ............ 272
Brute Force ............ 272
Differential Power Analysis ............ 273
Frequency Analysis ............ 273
Birthday Attack ............ 273
Dictionary Attack ............ 273
Replay Attack ............ 273
Factoring Attacks ............ 273
Reverse Engineering ............ 273
Attacking the Random Number Generators ............ 274
Temporary Files ............ 274
Encryption Usage ............ 274
E-mail Security Using Cryptography ............ 274
Protocols and Standards ............ 275
Pretty Good Privacy (PGP) ............ 275
Secure/Multipurpose Internet Mail Extension (S/MIME) ............ 275
Internet and Network Security ............ 275
IPSec ............ 275
SSL/TLS ............ 276
References ............ 276
Sample Questions ............ 277

Domain 4 Physical (Environmental) Security ............ 281
Paul Hansford, CISSP
Introduction ............ 281
CISSP Expectations ............ 282
Physical (Environmental) Security Challenges ............ 282
Threats and Vulnerabilities ............ 283
Threat Types ............ 283
Vulnerabilities ............ 285
Site Location ............ 285
Site Fabric and Infrastructure ............ 285
The Layered Defense Model ............ 286
Physical Considerations ............ 287
Working with Others to Achieve Physical and Procedural Security ............ 287
Physical and Procedural Security Methods, Tools, and Techniques ............ 288
Procedural Controls ............ 288
Infrastructure Support Systems ............ 290
Fire Prevention, Detection, and Suppression ............ 290
Boundary Protection ............ 292
Building Entry Points ............ 293
Keys and Locking Systems ............ 293
Walls, Doors, and Windows ............ 295
Access Controls ............ 296
Closed-Circuit Television (CCTV) ............ 296
Intrusion Detection Systems ............ 298
Portable Device Security ............ 299
Asset and Risk Registers ............ 299
Information Protection and Management Services ............ 300
Managed Services ............ 300
Audits, Drills, Exercises, and Testing ............ 300
Vulnerability and Penetration Tests ............ 301
Maintenance and Service Issues ............ 301
Education, Training, and Awareness ............ 301
Summary ............ 302
References ............ 302
Sample Questions ............ 303

Domain 5 Security Architecture and Design ............ 307
William Lipiczky, CISSP
Introduction ............ 307
CISSP® Expectations ............ 307
Security Architecture and Design Components and Principles ............ 308
Security Frameworks: ISO/IEC 17799:2005, BS 7799-2, ISO/IEC 27001 ............ 308
Design Principles ............ 309
Diskless Workstations, Thin Clients, and Thin Processing ............ 309
Operating System Protection ............ 310
Hardware ............ 311
Personal Digital Assistants (PDAs) and Smart Phones ............ 314
Central Processing Unit (CPU) ............ 315
Storage ............ 316
Input/Output Devices ............ 318
Communications Devices ............ 319
Networks and Partitioning ............ 319
Software ............ 320
Operating Systems ............ 320
Application Programs ............ 321
Processes and Threads ............ 322
Firmware ............ 323
Trusted Computing Base (TCB) ............ 323
Reference Monitor ............ 324
Security Models and Architecture Theory ............ 324
Lattice Models ............ 324
State Machine Models ............ 325
Research Models ............ 325
Noninterference Models ............ 325
Information Flow Models ............ 325
Bell–LaPadula Confidentiality Model ............ 325
Biba Integrity Model ............ 326
Clark–Wilson Integrity Model ............ 326
Access Control Matrix and Information Flow Models ............ 327
Information Flow Models ............ 328
Graham–Denning Model ............ 328
Harrison–Ruzzo–Ullman Model ............ 328
Brewer–Nash (Chinese Wall) ............ 328
Security Product Evaluation Methods and Criteria ............ 329
Rainbow Series ............ 329
Trusted Computer System Evaluation Criteria (TCSEC) ............ 329
Information Technology Security Evaluation Criteria (ITSEC) ............ 330
Common Criteria ............ 331
Software Engineering Institute's Capability Maturity Model Integration (SEI-CMMI) ............ 331
Certification and Accreditation ............ 332
Sample Questions ............ 332

Domain 6 Business Continuity and Disaster Recovery Planning ............ 337
Carl B. Jackson, CISSP
Introduction ............ 337
CISSP Expectations ............ 338
Core Information Security Principles: Availability, Integrity, Confidentiality (AIC) ............ 339
Why Continuity Planning? ............ 339
Reality of Terrorist Attack ............ 339
Natural Disasters ............ 340
Internal and External Audit Oversight ............ 340
Legislative and Regulatory Requirements ............ 340
Industry and Professional Standards ............ 341
NFPA 1600 ............ 341
ISO 17799 ............ 341
Defense Security Service (DSS) ............ 341
National Institute of Standards and Technology (NIST) ............ 341
Good Business Practice or the Standard of Due Care ............ 341
Enterprise Continuity Planning and Its Relationship to Business Continuity and Disaster Recovery Planning ............ 341
Revenue Loss ............ 342
Extra Expense ............ 343
Compromised Customer Service ............ 343
Embarrassment or Loss of Confidence Impact ............ 343
Hidden Benefits of Continuity Planning ............ 343
Organization of the BCP/DRP Domain Chapter ............ 344
Project Initiation Phase ............ 344
Current State Assessment Phase ............ 345
Design and Development Phase ............ 345
Implementation Phase ............ 345
Management Phase ............ 346
Project Initiation Phase Description ............ 346
Project Scope Development and Planning ............ 346
Executive Management Support ............ 348
BCP Project Scope and Authorization ............ 348
Executive Management Leadership and Awareness ............ 350
Continuity Planning Project Team Organization and Management ............ 351
Disaster or Disruption Avoidance and Mitigation ............ 353
Project Initiation Phase Activities and Tasks Work Plan ............ 354
Current State Assessment Phase Description ............ 354
Understanding Enterprise Strategy, Goals, and Objectives ............ 354
Enterprise Business Processes Analysis ............ 355
People and Organizations ............ 355
Time Dependencies ............ 355
Motivation, Risks, and Control Objectives ............ 355
Budgets ............ 355
Technical Issues and Constraints ............ 356
Continuity Planning Process Support Assessment ............ 356
Threat Assessment ............ 356
Risk Management ............ 358
Business Impact Assessment (BIA) ............ 359
Benchmarking and Peer Review ............ 362
Sample Current State Assessment Phase Activities and Tasks Work Plan ............ 363
Development Phase Description ............ 363
Recovery Strategy Development ............ 363
Work Plan Development ............ 366
Develop and Design Recovery Strategies ............ 366
Data and Software Backup Approaches ............ 369
DRP Recovery Strategies for IT ............ 370
BCP Recovery Strategies for Enterprise Business Processes ............ 371
Developing Continuity Plan Documents and Infrastructure Strategies ............ 373
Developing Testing/Maintenance/Training Strategies ............ 373
Plan Development Phase Description ............ 374
Building Continuity Plans ............ 375
Contrasting Crisis Management and Continuity Planning Approaches ............ 379
Building Crisis Management Plans ............ 379
Testing/Maintenance/Training Development Phase Description ............ 381
Developing Continuity and Crisis Management Process Training and Awareness Strategies ............ 386
Sample Phase Activities and Tasks Work Plan ............ 386
Implementation Phase Description ............ 386
Analyze CPPT Implementation Work Plans ............ 386
Program Short- and Long-Term Testing ............ 388
Continuity Plan Testing (Exercise) Procedure Deployment ............ 388
Program Training, Awareness, and Education ............ 391
Emergency Operations Center (EOC) ............ 392
Management Phase Description ............ 392
Program Oversight ............ 392
Continuity Planning Manager Roles and Responsibilities ............ 392
Terminology ............ 395
References ............ 398
Sample Questions ............ 398

Appendix A: Addressing Legislative Compliance within Business Continuity Plans ............ 401
Rebecca Herold, CISSP
HIPAA ............ 401
GLB ............ 402
Patriot Act ............ 402
Other Issues ............ 404
OCC Banking Circular 177 ............ 404

Domain 7 Telecommunications and Network Security ............ 407
Alec Bass, CISSP and Peter Berlich, CISSP-ISSMP
Introduction ............ 407
CISSP® Expectations ............ 408
Basic Concepts ............ 408
Network Models ............ 408
OSI Reference Model ............ 409
TCP/IP Model ............ 413
Network Security Architecture ............ 414
The Role of the Network in IT Security ............ 414
Network Security Objectives and Attack Modes ............ 416
Methodology of an Attack ............ 419
Network Security Tools ............ 421
Layer 1: Physical Layer ............ 423
Concepts and Architecture ............ 423
Communication Technology ............ 423
Network Topology ............ 424
Technology and Implementation ............ 427
Cable ............ 427
Twisted Pair ............ 428
Coaxial Cable ............ 429
Fiber Optics ............ 429
Patch Panels ............ 430
Modems ............ 430
Wireless Transmission Technologies ............ 431
Layer 2: Data-Link Layer ............ 433
Concepts and Architecture ............ 433
Architecture ............ 433
Transmission Technologies ............ 434
Technology and Implementation ............ 441
Ethernet ............ 441
Wireless Local Area Networks ............ 445
Address Resolution Protocol (ARP) ............ 450
Point-to-Point Protocol (PPP) ............ 450
Layer 3: Network Layer ............ 450
Concepts and Architecture ............ 450
Local Area Network (LAN) ............ 450
Wide Area Network (WAN) Technologies ............ 452
Metropolitan Area Network (MAN) ............ 462
Global Area Network (GAN) ............ 463
Technology and Implementation ............ 464
Routers ............ 464
Firewalls ............ 464
End Systems ............ 468
Internet Protocol (IP) ............ 471
Virtual Private Network (VPN) ............ 475
Tunneling ............ 479
Dynamic Host Configuration Protocol (DHCP) ............ 479
Internet Control Message Protocol (ICMP) ............ 480
Internet Group Management Protocol (IGMP) ............ 481
Layer 4: Transport Layer ............ 482
Concepts and Architecture ............ 482
Transmission Control Protocol (TCP) ............ 483
User Datagram Protocol (UDP) ............ 484
Technology and Implementation ............ 484
Scanning Techniques ............ 484
Denial of Service ............ 486
Layer 5: Session Layer ............ 486
Concepts and Architecture ............ 486
Technology and Implementation ............ 486
Remote Procedure Calls ............ 486
Directory Services ............ 487
Access Services ............ 493
Layer 6: Presentation Layer ............ 495
Concepts and Architecture ............ 495
Technology and Implementation ............ 496
Transport Layer Security (TLS) ............ 496
Layer 7: Application Layer ............ 497
Concepts and Architecture ............ 497
Technology and Implementation ............ 497
Asynchronous Messaging (E-mail and News) ............ 497
Instant Messaging ............ 502
Data Exchange (World Wide Web) ............ 506
Peer-to-Peer Applications and Protocols ............ 512
Administrative Services ............ 512
Remote-Access Services ............ 514
Information Services ............ 517
Voice-over-IP (VoIP) ............ 518
General References ............ 520
Sample Questions ............ 521
Endnotes ............ 525

Domain 8 Application Security ............ 537
Robert M. Slade, CISSP
Domain Description and Introduction ............ 537
Current Threats and Levels ............ 537
Application Development Security Outline ............ 538
Expectation of the CISSP in This Domain ............ 539
Applications Development and Programming Concepts and Protection ............ 540
Current Software Environment ............ 541
Open Source ............ 542
Full Disclosure ............ 543
Programming ............ 543
Process and Elements ............ 544
The Programming Procedure ............ 545
The Software Environment ............ 547
Threats in the Software Environment ............ 549
Buffer Overflow ............ 549
Citizen Programmers ............ 550
Covert Channel ............ 550
Malicious Software (Malware) ............ 551
Malformed Input Attacks ............ 551
Memory Reuse (Object Reuse) ............ 551
Executable Content/Mobile Code ............ 551
Social Engineering ............ 552
Time of Check/Time of Use (TOC/TOU) ............ 553
Trapdoor/Backdoor ............ 553
Application Development Security Protections and Controls ............ 554
System Life Cycle and Systems Development ............ 554
Systems Development Life Cycle (SDLC) ............ 555
Software Development Methods ............ 561
Java Security ............ 564
Object-Oriented Technology and Programming ............ 566
Object-Oriented Security ............ 568
Distributed Object-Oriented Systems ............ 569
Software Protection Mechanisms ............ 571
Security Kernels ............ 571
Processor Privilege States ............ 571
Security Controls for Buffer Overflows ............ 573
Controls for Incomplete Parameter Check and Enforcement ............ 573
Memory Protection ............ 574
Covert Channel Controls ............ 575
Cryptography ............ 575
Password Protection Techniques ............ 575
Inadequate Granularity of Controls ............ 576
Control and Separation of Environments ............ 576
Time of Check/Time of Use (TOC/TOU) ............ 577
Social Engineering ............ 577
Backup Controls ............ 577
Software Forensics ............ 578
Mobile Code Controls ............ 580
Programming Language Support ............ 582
Audit and Assurance Mechanisms ............ 582
Information Integrity ............ 583
Information Accuracy ............ 583
Information Auditing ............ 583
Certification and Accreditation ............ 584
Information Protection Management ............ 584
Change Management ............ 585
Configuration Management ............ 586
Malicious Software (Malware) ............ 586
Malware Types ............ 589
Viruses ............ 589
Worms ............ 592
Hoaxes ............ 593
Trojans ............ 593
Remote-Access Trojans (RATs) ............ 595
DDoS Zombies ............ 596
Logic Bombs ............ 596
Spyware and Adware ............ 597
Pranks ............ 597
Malware Protection ............ 598
Scanners ............ 599
Activity Monitors ............ 599
Change Detection ............ 599
Antimalware Policies ............ 600
Malware Assurance ............ 601
The Database and Data Warehousing Environment ............ 602
DBMS Architecture ............ 602
Hierarchical Database Management Model ............ 604
Network Database Management Model ............ 605
Relational Database Management Model ............ 605
Object-Oriented Database Model ............ 609
Database Interface Languages ............ 609
Open Database Connectivity (ODBC) ............ 609
Java Database Connectivity (JDBC) ............ 610
eXtensible Markup Language (XML) ............ 610
Object Linking and Embedding Database (OLE DB) ............ 611
Accessing Databases through the Internet ............ 612
Data Warehousing ............ 613
Metadata ............ 614
Online Analytical Processing (OLAP) ............ 616
Data Mining ............ 616
Database Vulnerabilities and Threats ............ 617
DBMS Controls ............ 620
Lock Controls ............ 621
Other DBMS Access Controls ............ 622
View-Based Access Controls ............ 622
Grant and Revoke Access Controls ............ 622
Security for Object-Oriented (OO) Databases ............ 623
Metadata Controls ............ 623
Data Contamination Controls ............ 623
Online Transaction Processing (OLTP) ............ 623
Knowledge Management ............ 624
Web Application Environment ............ 626
Web Application Threats and Protection ............ 627
Summary ............ 628
References ............ 629
Sample Questions ............ 629

Domain 9 Operations Security ............ 633
Sean M. Price, CISSP
Introduction ............ 633
Privileged Entity Controls ............ 633
Operators ............ 633
Ordinary Users ............ 634
System Administrators ............ 635
Security Administrators ............ 637
File Sensitivity Labels ............ 637
System Security Characteristics ............ 637
Clearances ............ 637
Passwords ............ 637
Account Characteristics ............ 638
Security Profiles ............ 638
Audit Data Analysis and Management ............ 639
System Accounts ............ 640
Account Management ............ 640
Resource Protection ............ 642
Facilities ............ 642
Hardware ............ 642
Software ............ 644
Documentation ............ 644
Threats to Operations ............ 645
Disclosure ............ 645
Destruction ............ 645
Interruption and Nonavailability ............ 645
Corruption and Modification ............ 645
Theft ............ 645
Espionage ............ 646
Hackers and Crackers ............ 646
Malicious Code ............ 646
Control Types ............ 646
Preventative Controls ............ 646
Detective Controls ............ 646
Corrective Controls ............ 647
Directive Controls ............ 647
Recovery Controls ............ 647
Deterrent Controls ............ 647
Compensating Controls ............ 647
Control Methods ............ 648
Separation of Responsibilities ............ 648
Least Privilege ............ 648
Job Rotation ............ 648
Need to Know ............ 648
Security Audits and Reviews ............ 649
Supervision ............ 649
Input/Output Controls ............ 650
Antivirus Management ............ 650
Media Types and Protection Methods ............ 650
Object Reuse ............ 651
Sensitive Media Handling ............ 653
Marking ............ 653
Handling ............ 653
Storing ............ 653
Destruction ............ 653
Declassification ............ 654
Misuse Prevention ............ 654
Record Retention ............ 655
Continuity of Operations ............ 655
Fault Tolerance ............ 656
Data Protection ............ 657
Software ............ 659
Hardware ............ 660
Communications ............ 660
Facilities ............ 661
Problem Management ............ 663
System Component Failure ............ 664
Power Failure ............ 664
Telecommunications Failure ............ 664
Physical Break-In ............ 664
Tampering ............ 664
Production Delay ............ 665
Input/Output Errors ............ 665
System Recovery ............ 667
Intrusion Detection System ............ 668
Vulnerability Scanning ............ 668
Business Continuity Planning ............ 669
Change Control Management ............ 669
Configuration Management ............ 670
Production Software ............ 671
Software Access Control ............ 671
Change Control Process ............ 672
Requests ............ 672
Impact Assessment ............ 672
Approval/Disapproval ............ 672
Build and Test ............ 672
Notification ............ 673
Implementation ............ 673
Validation ............ 673
Documentation ............ 673
Library Maintenance ............ 673
Patch Management ............ 673
Summary ............ 677
References ............ 677
Sample Questions ............ 678

Domain 10 Legal, Regulations, Compliance and Investigations ............ 683
Marcus K. Rogers, Ph.D., CISSP
Introduction ............ 683
CISSP® Expectations ............ 684
Major Legal Systems ............ 685
Common Law ............ 686
Criminal Law ............ 687
Tort Law ............ 687
Administrative Law ............ 687
Civil Law ............ 688
Customary Law ............ 688
Religious Law ............ 689
Mixed Law ............ 689
Information Technology Laws and Regulations ............ 690
Intellectual Property Laws ............ 690
Patent ............ 690
Trademark ............ 690
Copyright ............ 691
Trade Secret ............ 691
Licensing Issues ............ 691
Privacy ............ 692
Liability ............ 694
Computer Crime ............ 695
International Cooperation ............ 697
Incident Response ............ 698
Response Capability ............ 699
Incident Response and Handling ............ 700
Triage ............ 700
Investigative Phase ............ 701
Containment ............ 701
Analysis and Tracking ............ 702
Recovery Phase ............ 703
Recovery and Repair ............ 704
Debriefing/Feedback ............ 704
Computer Forensics ............ 705
Crime Scene ............ 707
Digital/Electronic Evidence ............ 708
General Guidelines ............ 709
Conclusions ............ 710
References ............ 712
Sample Questions ............ 715

Appendix A Answers to Sample Questions ............ 719
Domain 1: Information Security and Risk Management ............ 719
Domain 2: Access Control ............ 724
Domain 3: Cryptography ............ 728
Domain 4: Physical (Environmental) Security ............ 731
Domain 5: Security Architecture and Design ............ 734
Domain 6: Business Continuity and Disaster Recovery Planning ............ 737
Domain 7: Telecommunications and Network Security ............ 740
Domain 8: Application Security ............ 746
Domain 9: Operations Security ............ 748
Domain 10: Legal, Regulations, Compliance and Investigations ............ 752
Appendix B
Certified Information Systems Security Professional (CISSP®)
Candidate Information Bulletin ................................................................ 757
1 — Information Security and Risk Management ................................... 758
Overview .................................................................................................. 758
Key Areas of Knowledge ........................................................................ 759
2 — Access Control .................................................................................... 759
Overview .................................................................................................. 759
Key Areas of Knowledge ........................................................................ 760
3 — Cryptography ...................................................................................... 760
Overview .................................................................................................. 760
Key Areas of Knowledge ........................................................................ 760
4 — Physical (Environmental) Security ................................................... 760
Overview .................................................................................................. 760
Key Areas of Knowledge ........................................................................ 761
5 — Security Architecture and Design ..................................................... 761
Overview .................................................................................................. 761
Key Areas of Knowledge ........................................................................ 761
6 — Business Continuity and Disaster Recovery Planning .................... 762
Overview .................................................................................................. 762
Key Areas of Knowledge ........................................................................ 762
7 — Telecommunications and Network Security .................................... 763
Overview .................................................................................................. 763
Key Areas of Knowledge ........................................................................ 763
8 — Application Security ........................................................................... 764
Overview .................................................................................................. 764
Key Areas of Knowledge ........................................................................ 764
9 — Operations Security ............................................................................ 764
Overview .................................................................................................. 764
Key Areas of Knowledge ........................................................................ 764
10 — Legal, Regulations, Compliance and Investigations ...................... 765
Overview .................................................................................................. 765
Key Areas of Knowledge ........................................................................ 765
References ................................................................................................... 766
General Examination Information ............................................................ 770
Appendix C
Glossary ....................................................................................................... 775
Index ........................................................................................................... 1023
Domain 1
Information Security and Risk Management
Todd Fitzgerald, CISSP, Bonnie Goins, CISSP, and Rebecca Herold, CISSP
Introduction
The information security and risk management domain of the Certified Information Systems Security Professional (CISSP)® Common Body of Knowledge (CBK)® addresses the framework and policies, concepts, principles, structures, and standards used to establish criteria for the protection of information assets, to instill those criteria holistically throughout the organization, and to assess the effectiveness of that protection. It includes issues of governance, organizational behavior, and security awareness.
Information security management establishes the foundation for a comprehensive security program to ensure the protection of an organization’s information assets. Today’s environment of highly interconnected, interdependent systems makes it necessary to understand the linkage between information technology and meeting the business objectives set forth by management. Information security management communicates the risks accepted by the organization due to the currently implemented security controls, and continually works to cost-effectively enhance the controls to minimize the risk to the company. Security management encompasses the administrative, technical, and physical controls necessary to adequately protect the confidentiality, integrity, and availability of the information assets. The controls are manifested through the implementation of policies, procedures, standards, baselines, and guidelines.
Management practices utilized to reduce the risk of loss of data, unavailability and inaccessibility of information, intentional and unintentional data destruction or modification, loss of company reputation, and disclosure of information include such tools as risk assessment, risk analysis,
data classification, and security awareness training. Information assets are classified, and through risk analysis, the threats and vulnerabilities related to these assets are identified, along with the appropriate safeguards that can limit the risk of compromise of the asset.
Risk management minimizes loss to information assets through the identification, measurement, and control of loss events. It encompasses the overall security review, risk analysis, selection and evaluation of safeguards, cost–benefit analysis, management decision, safeguard implementation, and ongoing effectiveness review. Risk management provides the organization with the mechanism to ensure that executive management knows the current risks, and that decisions are made either to accept those risks or to implement safeguards that minimize them and leave a lower residual risk to be accepted.
Security management is concerned with regulatory, customer, employee, and business partner requirements for management of the data as it flows between these parties to support the processing and business use of the information. Confidentiality, integrity, and availability of the information must be maintained throughout the process.
CISSP Expectations
The Certified Information Systems Security Professional (CISSP) is expected to fully understand:
• The planning, organization, and roles of individuals in identifying and securing an organization’s information assets
• The development of effective employment agreements; employee hiring practices, including background checks and job descriptions; security clearances; separation of duties and responsibilities; job rotation; and hiring and termination practices
• The development and use of policies stating management’s views and positions on particular topics and the use of guidelines, standards, baselines, and procedures to support those policies
• The differences between policies, standards, baselines, and procedures in terms of their application to security administration
• The importance of security awareness training to make employees aware of the need for information security, its significance, and the specific security-related requirements relative to the employees’ positions
• The importance of data classification, including sensitive, confidential, proprietary, private, and critical information
• The application of security policies, standards, baselines, and procedures to ensure the privacy, confidentiality, integrity, and availability of information
• The importance of risk management practices and tools to identify, rate, and reduce the risk to specific information assets
• Asset identification and evaluation
• Threat identification and assessment
• Vulnerability and exposure identification and assessment
• Calculation of single occurrence loss (single loss expectancy) and annual loss expectancy
• Safeguard and countermeasure identification and evaluation, including risk management practices and tools to identify, rate, and reduce the risk to specific information assets
• Calculation of the resulting annual loss expectancy and residual risk
• Communication of the residual risk to be assigned (i.e., insured against) or accepted by management
• The regulatory and ethical requirements to protect individuals from substantial harm, embarrassment, or inconvenience due to the inappropriate collection, storage, or dissemination of personal information
• The principles and controls that protect data against compromise or inadvertent disclosure
• The principles and controls that ensure the logical correctness of an information system; the consistency of data structures; and the accuracy, precision, and completeness of the data stored
• The principles and controls that ensure that a computer resource will be available to authorized users when they need it
• The purpose of and processes used for reviewing system records, event logs, and activities
• The importance of managing change and the change control process, and certification and accreditation
• The application of commonly accepted best practices for system security administration, including the concepts of least privilege, separation of duties, job rotation, monitoring, and incident response
Internal control standards reduce risk. Internal control standards are required to satisfy obligations with respect to the law, safeguard the organization’s assets, and provide for accurate revenue and expense tracking. There are three categories of internal control standards: general standards, specific standards, and audit resolution standards:
• General standards must provide reasonable assurance, support the internal controls, provide for competent personnel, and assist in establishing control objectives and techniques.
• Specific standards must be documented, clear, and available to the personnel. They allow for the prompt recording of transactions and the prompt execution of authorized transactions. Specific
standards establish separation of duties, qualified supervision, and accountability.
• Audit resolution standards require that managers promptly resolve audit findings. Managers must evaluate, determine the corrective action required, and take that action.
The Business Case for Information Security Management
Information security practices protect the assets of the organization through the implementation of managerial, technical, and operational controls. Information assets must be managed appropriately to reduce the risk of financial loss, just as financial assets are managed through finance departments and human assets (people) are managed and cared for by the human resources department and the associated code of conduct and employment policies and practices. Failure to protect the information assets from loss, destruction, or unexpected alteration can result in significant losses of productivity, reputation, or finances. Information is an asset that must be protected, as are the software and hardware that support its storage and retrieval.
Security management ensures that the appropriate policies, procedures, standards, and guidelines are implemented to provide the proper balance of security controls with business operations. Security exists to support the vision, mission, and business objectives of the organization. Effective security management requires judgment based upon the risk tolerance of the organization, the costs to implement the security controls, and the benefit to the business. Although attaining 100 percent security of access to the information systems may appear to be an admirable goal, in practice this would be unrealistic. Even if this goal could be attained through the training of all users with access to the systems, timely installation of software patches, rigorous internal controls, real-time monitoring and response to intrusions, and a budget that would support each of these activities, on “day 2,” when a new vulnerability or exploit became known or a new user was added to the system, risks would again be introduced. Furthermore, the cost of providing this level of security would be excessive for most organizations, as these dollars would most likely come at the expense of new product initiatives, customer service expenditures, or profits for the organization. Because most organizations are in a competitive environment that requires continuous product innovation and reduction of administrative costs, funding information security at the 100 percent level would be cost-prohibitive. Therefore, effective security management requires understanding the business objectives of the organization, management’s tolerance for risk, and the costs of the various security alternatives, and subsequently utilizing judgment to match the appropriate security controls to the business initiatives.
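The single loss expectancy and annualized loss expectancy calculations referenced in the expectations above are the usual quantitative inputs to this judgment. The following is a minimal sketch of that arithmetic; the function name and all dollar figures are hypothetical illustrations, not values taken from the CBK text.

```python
# Standard quantitative risk formulas (illustrative figures only):
#   SLE (single loss expectancy)     = asset value x exposure factor
#   ALE (annualized loss expectancy) = SLE x annualized rate of occurrence (ARO)
#   Net value of a safeguard         = ALE before - ALE after - annual safeguard cost

def annualized_loss_expectancy(asset_value: float, exposure_factor: float, aro: float) -> float:
    """ALE for one asset/threat pair."""
    sle = asset_value * exposure_factor          # single loss expectancy
    return sle * aro

# Hypothetical example: a $500,000 facility, a fire expected to destroy 40 percent of it,
# occurring roughly once every ten years (ARO = 0.1).
ale_before = annualized_loss_expectancy(500_000, 0.40, 0.10)   # $20,000 per year

# A sprinkler system costing $4,000 per year is assumed to cut the exposure factor to 10 percent.
ale_after = annualized_loss_expectancy(500_000, 0.10, 0.10)    # $5,000 per year
net_value = ale_before - ale_after - 4_000                     # $11,000 per year

print(f"ALE before: ${ale_before:,.0f}  ALE after: ${ale_after:,.0f}  "
      f"net annual value of safeguard: ${net_value:,.0f}")
```

A positive net value argues for the expenditure; a negative one is one reason management might instead accept the residual risk, as in the scenarios that follow.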
The security professionals leading the information security program are relied upon for their knowledge of the managerial, operational, and technical safeguards that can be implemented to reduce the risks of loss, waste, damage to reputation, and business disruption. Senior management ultimately makes the final decision on the level of security expenditures and the risk they are willing to take. Security professionals should view their role as risk advisors to the organization, as they are not the decision makers. There may be situations where the risk is viewed as infrequent and very unlikely to occur, and therefore management is willing to take the risk for reasons that the security professional may not know. For example, the decision to accept operating in a regional office without a sprinkler system may be appropriate if the company has been operating in that office for ten years without a fire and management has undisclosed plans to relocate the office within the next six months. Or, the company may be under intense pressure from Wall Street to make the quarterly forecasts, which will affect its ability to provide longer-term investments. Alternatively, there may be government mandates to comply with new regulations or audit findings that have a higher priority. Senior management must weigh all of the risks to the business, and security represents one of those risks that must be considered. This is why the security professional must exercise good judgment in communicating the risks and possible security solutions. There will always be residual risk that is accepted by the organization, and effective security management will minimize this risk to a level that is not “betting the business,” while simultaneously contributing to the organizational mission and the bottom line.
Core Information Security Principles: Confidentiality, Availability, Integrity (CIA)
The information security program must ensure that the core concepts of availability, integrity, and confidentiality are understood and supported through the implementation of security controls designed to mitigate or reduce the risks of loss, disruption, or corruption of information.
Confidentiality. Confidentiality is the principle that only authorized individuals, processes, or systems should have access to information on a need-to-know basis. In recent years, much press has been dedicated to the privacy of information and the need to protect it from individuals who may be able to commit crimes by viewing the information. Identity theft is the act of assuming another person’s identity through knowledge of confidential information obtained from various sources. Information must be classified to determine the level of confidentiality required, or who should have access to the information (public, internal use only, or confidential). Identification, authentication, and authorization through access controls are
practices that support maintaining the confidentiality of information. Encrypting information also supports confidentiality by rendering the information unusable if it is viewed while still encrypted. Unauthorized users should be prevented from accessing the information, and monitoring controls should be implemented to detect and respond to unauthorized attempts in accordance with organizational policies. Authorized users of information also represent a risk, as they may access the information with ill intent, whether for personal knowledge, personal monetary gain, or to support improper disclosure.
Integrity. Integrity is the principle that information should be protected from intentional, unauthorized, or accidental changes. Information stored within files, databases, systems, and networks must be able to be relied upon to accurately process transactions and provide accurate information for business decision making. Controls are put in place to ensure that information is modified only through accepted practices. Management controls such as the segregation of duties, specification of the systems development life cycle with approval checkpoints, and implementation of testing practices assist in providing information integrity. Well-formed transactions and security of the update programs provide consistent methods of applying changes to systems. Limiting update access to those individuals with a need to make changes limits the exposure to intentional and unintentional modification.
Availability. Availability is the principle that information is accessible by users when needed. The two primary areas affecting the availability of systems are (1) denial of service due to a lack of adequate security controls and (2) loss of service due to a disaster, such as an earthquake, tornado, blackout, hurricane, fire, flood, and so forth. In either case, the end user does not have access to information needed to perform his or her job duties. The criticality of the system to the user and its importance to the survival of the organization will determine how significant the impact of the extended downtime becomes.
A lack of security controls increases the risk of viruses, destruction of data, external penetrations, or denial-of-service (DoS) attacks. DoS attacks prevent the system from being used by normal users by sending large amounts of traffic or exploit code to a particular device, leaving the system unable to respond to legitimate access requests. Without the proper virus protection controls, viruses or worms can swamp a network with traffic, slowing the response time and making the system inaccessible. Business continuity planning ensures that the department can function without the computer system for a defined period using alternate processes. Disaster recovery planning ensures the recovery of the information technology processing capability at a permanent site to an acceptable operational state. These work together to recover from system unavailability.
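In day-to-day controls, the integrity principle often takes the form of comparing a cryptographic digest of stored information against a baseline recorded through an approved change process. Below is a minimal sketch using Python's standard hashlib; the record contents are purely hypothetical.

```python
# Minimal integrity-check sketch: detect unauthorized or accidental modification of a
# stored record by comparing its SHA-256 digest against a previously recorded baseline.
import hashlib

def sha256_of(data: bytes) -> str:
    """Digest recorded as the integrity baseline for a piece of stored information."""
    return hashlib.sha256(data).hexdigest()

record = b"2006-10-19,accounts-payable,check 4417,$12,400"
baseline = sha256_of(record)                    # captured when the record was known good

# Later, before the record is relied upon, it is re-hashed and compared to the baseline.
tampered = record.replace(b"12,400", b"92,400")
assert sha256_of(record) == baseline            # unchanged data still matches
assert sha256_of(tampered) != baseline          # any modification changes the digest
print("Integrity verified against the recorded baseline.")
```

The same comparison underlies file-integrity monitoring of system binaries and configuration files.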
Figure 1.1. Security management relationships (central management coordinating the cycle: assess risk and determine needs; implement policies and controls; promote awareness; monitor and evaluate).
When considering the design and implementation of an application or management process, the impact to confidentiality, integrity, and availability should be evaluated and understood. Will it enhance any of these core security principles? Different security controls apply to different core security principles. For example, selection of a backup tape procedure, software, and hardware to perform backups would be most oriented toward the availability aspect of information security, whereas the selection of a security token utilizing strong, two-factor authentication would be most related to the enhancement of the confidentiality of information through the improvement of the authentication function.
Security Management Practice
Security management is the glue that ensures the risks are identified and an adequate control environment is established to mitigate the risks. Security management maintains the interrelationships among assessing risk, implementing policies and controls in response to the risks, promoting awareness of the expectations, monitoring the effectiveness of the controls, and using this knowledge as input to the next risk assessment. These relationships are shown in Figure 1.1.
Information Security Management Governance
The increased corporate governance requirements have caused companies to examine their internal control structures more closely to ensure that controls are in place and operating effectively. Organizations are
increasingly competing in the global marketplace, which is governed by multiple laws and supported by various best practices (e.g., ITIL, ISO 17799, COSO, COBIT). Appropriate information technology investment decisions must be made that are in alignment with the mission of the business. Information technology is no longer a back-office accounting function in most businesses, but rather a core operational necessity, and it must have proper visibility to the board of directors and the attention of management. This dependence on information technology mandates ensuring the proper alignment and understanding of the risk to the business. Substantial investments are made in these technologies (which must be appropriately managed), company reputations are at risk from insecure systems, and trust in the systems needs to be demonstrated to all parties involved, including the shareholders, employees, business partners, and consumers. Information security governance provides the mechanisms for the board of directors and management to have the proper oversight to manage the risk to the enterprise to an acceptable level.
Security Governance Defined
Although there is no universally accepted definition for security governance at this juncture, the intent of governance is to ensure that the appropriate information security activities are being performed so that risks are appropriately reduced, information security investments are appropriately directed, and executive management has visibility into the program and is asking the appropriate questions to determine the effectiveness of the program.
The IT Governance Institute (ITGI) defines IT governance as “a structure of relationships and processes to direct and control the enterprise in order to achieve the enterprise’s goals by adding value while balancing risk versus return over IT and its processes.” The ITGI proposes that information security governance should be considered part of IT governance, and that the board of directors become informed about information security, set direction to drive policy and strategy, provide resources to security efforts, assign management responsibilities, set priorities, support changes required, define cultural values related to risk assessment, obtain assurance from internal or external auditors, and insist that security investments are made measurable and reported on for program effectiveness. Additionally, the ITGI suggests that management write security policies with business input and ensure that roles and responsibilities are defined and clearly understood, threats and vulnerabilities are identified, security infrastructures are implemented, control frameworks (standards, measures, practices, and procedures) are implemented after the policy is approved by the governing body, priorities are implemented in a timely manner, breaches are monitored, periodic reviews and tests are conducted, awareness education is viewed as critical and
delivered, and security is built into the systems development life cycle. These concepts are further delineated in this section.
Security Policies, Procedures, Standards, Guidelines, and Baselines
Imagine the day-to-day operation of an organization without any policies. Individuals would have to make decisions about what is right or wrong for the company based upon their personal values or their own past experience, and there could be as many sets of values as there are people in the organization. Management would also be failing to demonstrate due diligence by not putting practices in place to protect the investors and manage the employees of the organization. Policies are the glue that ensures everyone operates under a common set of expectations, and they communicate management’s goals and objectives.
Procedures, standards, guidelines, and baselines (illustrated in Figure 1.2) are different components that support the implementation of the security policy. A policy without mechanisms for its implementation is analogous to an organization having a business strategy without action plans to execute the strategy. In this situation, there would be limited chance of success, as expectations to achieve the higher-level business strategy would not be clear to the workforce. Similarly, policies communicate management’s expectations, which are fulfilled through the execution of procedures and adherence to standards, baselines, and guidelines.
Figure 1.2. Relationships among policies, standards, procedures, baselines, and guidelines.
Security officers and their teams have typically been charged with the responsibility of creating the security policies. The policies must be written and communicated at a level that is understood by the end users of the organization if there is to be any chance of compliance. If the policies are poorly written, or written at too high an education level (common industry practice is to focus the content for general users at the sixth- to eighth-grade reading level), they will not be understood. Although security officers may be charged with the development of the policies, the effort is typically collaborative to ensure that the business issues are addressed. Utilization of an executive oversight committee, or a subgroup of that committee depending upon the policy being drafted, is an approach that considers the business impacts of security policy decisions. Developing the policies solely within the IT department and then distributing them without business input is likely to miss important business considerations. As always, deciding on the appropriate security controls is a risk decision for the organization, which ultimately should be made by the business leaders. The organization is also more likely to accept security policies that have been approved and endorsed by the business leaders rather than by the security officer or the IT department.
Once these different documents have been created, they form the basis for organizational compliance with the security policies. The most current version of the documents needs to be readily accessible by those who are expected to follow them. Many organizations have placed these documents electronically on their intranets or shared file folders to facilitate their communication. Such placement of these documents, plus checklists, forms, and sample documents, can save time for the individual and be an added value provided by the security department.
Policies define what the organization needs to accomplish at a high level and serve as management’s intentions to control the operation of the organization to meet business objectives. The why should be stated in the form of a policy summary statement or purpose. If end users understand the why, they are more apt to follow the policy. As children, we were told what to do by our parents and we just did it. As we grew older, we challenged those beliefs (as four- and five-year-olds and again as teenagers) and needed to understand the reasoning; the rules had to make sense to us. Today’s organizations are no different; people need to understand the why before they can really commit.
Security Policy Best Practices. Someone once said, “Writing security policies is like making sausage; you don’t want to know what goes into it, but what comes out is pretty good.” Writing policies does not have to be a
mystery, and there are several guidelines practiced in the industry for creating good security policies:
• Clearly define the policy creation practice. A clearly defined process for initiating, creating, reviewing, recommending, approving, and distributing the policies communicates the responsibilities of all parties and the time expectations of their participation. This can be accomplished by process flows, swim lanes, flowcharts, or written documentation.
• Write policies that will survive at least two or three years. Policies are high-level statements of the objectives of the organization. The underlying methods and technologies to implement the controls to support the policies may change. By including these in the other related documents (procedures, standards, guidelines, and baselines), the policy statements will need less frequent change. This avoids frequent updates and subsequent distribution to the organization.
• Use directive wording. Policies represent expectations to be complied with. As such, statements including such words as must, will, and shall communicate this requirement, versus using weaker directives such as should, may, or can. This latter type of language is better reserved for guidelines or areas where there are options.
• Avoid technical implementation details. Policies should be written to be technology independent, as the implemented technology may change over time.
• Keep length to a minimum. Policies published online should be limited in length to two or three pages maximum per policy. The intent of the policies is for the end user to understand them, not to create long documents for the sake of documentation.
• Provide navigation from the policy to the supporting documents. If the implementation of the policy is placed online, then hyperlinking the procedures, standards, guidelines, and baselines can be an effective method to ensure that the appropriate procedures are followed. Some of the internal security procedures would not be appropriate for general knowledge, such as the procedure for monitoring intrusions or reviewing log files, and these need to be accessible by the security department and properly secured from general distribution.
• Thoroughly review before publishing. Proofreading of policies by multiple individuals allows errors to be caught that may not be readily seen by the author.
• Conduct management review and sign-off. Senior management must endorse the policies if they are to be effectively accepted by all management levels, and subsequently the end users of the organization.
• Avoid techno-speak. Policies are oriented to communicate to nontechnical users. Technical jargon is acceptable in technical documentation, but not in high-level security policies.
• Review incidents and adjust policies. Review of the security incidents that have occurred may indicate the need for a new policy, a revision to an existing policy, or the need to redistribute the current policy to reinforce compliance.
• Periodically review policies. A formalized review process provides a mechanism to ensure that the security policies are still in alignment with the business objectives.
• Develop sanctions for noncompliance. Effective policies have consistent sanctions as deterrents and enable action when the policies are not followed. These sanctions may include “disciplinary action up to and including termination.” Stronger language can also be added to warn of prosecution for serious offenses.
Policies provide the foundation for a comprehensive and effective security program. They protect the company from surprises and give the necessary authority to the security activities of the organization. By communicating the company policies as directives, accountability and personal responsibility for adhering to the security practices are established. The policies are utilized in determining or interpreting any conflicts that may arise, and they also define the elements, scope, and functions of security management.
Types of Security Policies. Security policies may consist of different
types, depending upon the specific need for the policy. The different security policies work together to meet the objectives of a comprehensive security program. Different policy types include:
• Organizational or program policy: This policy is issued by a senior management individual and establishes the authority and scope for the security program. The purpose of the program is described, and responsibility is assigned for carrying out the information security mission. The goals of confidentiality, integrity, and availability are addressed in the policy. Specific areas of security focus may be stressed, such as the protection of confidential information for a credit card company or health insurance company, or the availability focus for a company maintaining mission-critical, high-availability systems. The policy should be clear as to the facilities, hardware, software, information, and personnel that are in scope for the security program. In most cases, the scope will be the entire organization. In larger organizations, however, the security program may be limited in scope to a division or geographic location. The organizational policy sets out the high-level authority to define the appropriate sanctions for failure to comply with the policy.
• Functional, issue-specific policies: While the organizational security policies are broad in scope, the functional or issue-specific policies address areas of particular security concern requiring clarification.
The issue-specific policies may be focused on the different domains of security and address areas such as access control, contingency planning, segregation of duties, principles, and so forth. They may also address specific technical areas of existing and emerging technologies, such as use of the Internet, e-mail and corporate communication systems, wireless access, or remote system access. For example, an acceptable use policy may define the responsibilities of the end user for using the corporate computer systems for business purposes only, or it may allow the person some incidental personal use, provided that such usage is free of viruses and spyware and does not involve downloading inappropriate pictures or software or sending chain letters through e-mail. These policies will depend upon the business needs and the tolerance for risk. They contain the statement of the issue, the statement of the organization’s position on the issue, the applicability of the issue, compliance requirements, and sanctions for not following the policy.
• System-specific policies: Areas where it is desired to have clearer direction or greater control for a specific technical or operational area may have more detailed policies. These policies may be targeted for a specific application or platform. For example, a system-specific policy may address which departments are permitted to input or modify information in the check-writing application for the disbursement of accounts payable payments.
The more detailed and issue-specific the written policy is, the more likely it is that the policy will require frequent changes. Typically, high-level organizational security policies will survive for several years, while those focused on the use of technology will change much more frequently as technology matures and new technology is added to the environment. Even if an organization is not currently utilizing a technology, policies can explicitly strengthen the message that the technology is not to be used and is prohibited. For example, a policy regarding removable media such as USB drives, or one regarding the use of wireless devices or camera phones in the workplace, would reinforce management’s intentions around the acceptance or nonacceptance of these devices.
Standards. Whereas policies define what an organization needs, standards take this a step further and define the requirements. Standards provide the agreements that enable interoperability within the organization through the use of common protocols.
Standards are the hardware and software security mechanisms selected as the organization’s method of controlling security risks. Standards are prevalent in many facets of our daily lives, such as the size of the tires on automobiles, specifications of the height, color, and format of the STOP sign,
and the RJ11 plug on the end of the phone jack cable. Standards provide consistency in implementation as well as permit interoperability with reduced confusion.
There are many security standards that could be chosen to implement a particular solution. For example, when selecting a control for remote-access identification and authentication, an organization could decide to utilize log-in IDs and passwords, strong authentication through a security token over dial-up, or a virtual private network (VPN) solution over the Internet.
Standards simplify the operation of the security controls within the company and increase efficiency. It is more costly to support multiple software packages that do essentially the same activity. Imagine if each user were told to go to the local computer store and purchase the antivirus product that he or she liked best. Some users would ask the salesperson’s opinion, some would buy the least expensive, and others might get the most expensive, assuming this would provide the greatest protection. Without a consistent standard for antivirus products, the organization would be unsure as to the level of protection provided. Additionally, each of these different products would have different installation, update, and licensing considerations, contributing to complex management. It makes sense to have consistent products chosen for the organization versus leaving the product choice to every individual.
Determination of which standards meet the organization’s needs must be driven by the security policies agreed upon by management. The standards provide the specification of the technology to effectively enable the organization to meet the requirements of the policy. If, in the example of remote access, the organization restricted transmission of its information over the Internet or had many users in rural areas with limited Internet access, then the VPN standard over the Internet may not be a plausible solution. Conversely, for end users transmitting large amounts of information, the dial-up solution may be impractical. The policy defines the boundaries that the standards must support.
Standards may also refer to those guidelines established by a standards organization and accepted by management. Standards creators include organizations such as the National Institute of Standards and Technology (NIST), International Organization for Standardization (ISO), Institute of Electrical and Electronics Engineers (IEEE), American National Standards Institute (ANSI), National Security Agency (NSA), and others.
Procedures. Procedures are step-by-step instructions in support of the policies, standards, guidelines, and baselines. The procedure indicates how the policy will be implemented and who does what to accomplish the tasks. The procedure provides clarity and a common understanding of the operation required to effectively support the policy on a consistent basis. Procedures are best developed when the input of each of the interfacing
areas is included in their development. This reduces the risk that important steps, communication, or required deliverables are left out.
Companies must be able to provide assurance that they have exercised due diligence in the support and enforcement of company policies. This means that the company has made an effort to be in compliance with the policies and has communicated the expectations to the workforce. Documenting procedures communicated to the users, business partners, and anyone utilizing the systems, as appropriate, minimizes the legal liability of the corporation.
Creating documented procedures is more than an exercise for the sake of documentation. The process itself creates a common understanding among the developers of the procedure of the methods used to accomplish the task. Individuals from different organizational units may be very familiar with their own work area, but not as familiar with the impact of a procedure on another department. This is the “beach ball effect,” where organizations sometimes appear as a large beach ball, and the individuals working in different departments can only see their side of the ball and may not understand the other parts of the organization. The exercise of writing down a single, consistent procedure has the added effect of establishing agreement among the parties. Many times at the beginning of the process, individuals will think they understand the process, only to realize that people were really executing different, individual processes to accomplish the task.
Consistent documentation of the procedures also makes it possible to improve upon them. Once everyone understands the initial procedure, enhancements can be applied and communicated to everyone. This provides a method to incorporate the best thinking into the single procedure versus having multiple procedures for the same operation with a mixture of good and bad practices.
Baselines. Baselines provide descriptions of how to implement security
packages to ensure that these implementations are consistent throughout the organization. Different software packages, hardware platforms, and networks have different methods of ensuring security. There are many different options and settings that must be determined to provide the desired protection. An analysis of the available configuration settings, and of the settings desired, forms the basis for future, consistent implementation of the standard. For example, turning off the Telnet service may be specified in the hardening baseline document for the network servers. A procedure for exceptions to the baseline, including the business justification, would need to be followed in the event that the baseline could not be adhered to for a particular device. The baselines are the specific rules necessary to implement the security controls in support of the policy and standards that have been developed.
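To make the Telnet example concrete, a documented baseline item such as “the Telnet service is disabled on network servers” can be spot-checked programmatically. The sketch below is a simplified illustration; the host addresses are hypothetical, and a real check would feed deviations into the exception procedure described above.

```python
# Crude baseline verification sketch: confirm that Telnet (TCP port 23) is not
# listening on the servers covered by the hardening baseline.
import socket

TELNET_PORT = 23                               # baseline item: Telnet service disabled
servers = ["10.0.5.11", "10.0.5.12"]           # hypothetical in-scope servers

def telnet_disabled(host: str, timeout: float = 2.0) -> bool:
    """True if nothing accepts a TCP connection on port 23 within the timeout."""
    try:
        with socket.create_connection((host, TELNET_PORT), timeout=timeout):
            return False                       # connection succeeded: Telnet is listening
    except OSError:
        return True                            # refused or timed out: treated as disabled

for host in servers:
    status = "compliant" if telnet_disabled(host) else "DEVIATION: follow exception procedure"
    print(f"{host}: {status}")
```

Such automated checks complement, rather than replace, the documented baseline and its exception records.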
Periodic testing of the implemented security controls ensures that systems are configured according to the documented baselines. The baselines themselves should be reviewed periodically to make sure that they are sufficient to address emerging threats and vulnerabilities. In large environments with multiple individuals performing systems administration and responding to urgent requests, there is an increased risk that one of the baseline configurations may not be implemented properly. Internal testing identifies these vulnerabilities and provides a mechanism to review why the control was or was not properly implemented. Failures in training, adherence to baselines and associated procedures, change control, documentation, or the skills of the individual performing the changes may be identified through the testing.
Guidelines. Guidelines are discretionary or optional controls used to enable individuals to make judgments with respect to security actions. A good exercise is to replace the word guideline with the word optional. If the statements in the resulting “optional” document describe what is desired to happen at the user’s discretion, then it is an appropriate guideline. If, on the other hand, the statements are considered to be required to adequately protect the security of the organization, then this should be defined as part of a policy, standard, or baseline.
Guidelines also include the recommendations, best practices, and templates provided by external sources, such as the Control Objectives for Information and related Technology (COBIT), the Capability Maturity Model (CMM), ISO 17799, and British Standard 7799; security configuration recommendations such as those from the National Institute of Standards and Technology (NIST) or the National Security Agency (NSA); and organizational or other governmental guidelines.
Combination of Policies, Standards, Baselines, Procedures, and Guidelines. Each of these documents is closely related to the others and may be
developed as the result of new regulations, external industry standards, new threats and vulnerabilities, emerging technologies, upgraded hardware and software platforms, or risk assessment changes. Sometimes these different areas are combined into single documents for ease of management. Keeping policies separate from the implementation components (standards, baselines, and procedures) increases flexibility and reduces the cost of maintenance, as the policies typically change less frequently than the supporting processes used to achieve compliance with the policy.
Policy Analogy. A useful analogy to remember the differences between policies, standards, guidelines, and procedures is to think of a company that builds cabinets and has a “hammer” policy. The different components may be as follows:
• Policy: “All boards must be nailed together using company-issued hammers to ensure end-product consistency and worker safety.” Notice the flexibility provided to permit the company to redefine the hammer type as technology or safety issues change. The purpose is also communicated to the employees.
• Standard: “Eleven-inch fiberglass hammers will be used. Only hardened-steel nails will be used with the hammers. Automatic hammers are to be used for repetitive jobs that are >1 hour.” Technical specifics are provided to clarify the expectations that make sense for the current environment and represent management’s decision.
• Guideline: “To avoid splitting the wood, a pilot hole should be drilled first.” The guideline is a suggestion and may not apply in all cases or with all types of wood. It does not represent a requirement, but rather a suggested practice.
• Procedure: “(1) Position nail in upright position on board. (2) Strike nail with full swing of hammer. (3) Repeat until nail is flush with board. (4) If thumb is caught between nail and board, see Nail First-Aid Procedure.” The procedure spells out the process of using the hammer and nail so that what is expected for success is clear. Following this procedure, with the appropriate standard hammers, and applying guidelines where appropriate, will fulfill the policy.
Audit Frameworks for Compliance
Multiple frameworks have been created to support the auditing of the implemented security controls. These resources are valuable in the design of a security program, as they define the necessary controls for providing secure information systems. The following frameworks have each gained a degree of acceptance within the auditing or information security community and add value to the information security investment. Although several of these frameworks and best practices were not originally designed to support information security, many of the processes within them support different aspects of confidentiality, integrity, and availability.
COSO. The Committee of Sponsoring Organizations of the Treadway Commission (COSO) was formed in 1985 to sponsor the National Commission on Fraudulent Financial Reporting, which studied factors that lead to fraudulent financial reporting and produced recommendations for public companies, their auditors, the Securities and Exchange Commission, and other regulators. COSO identifies five areas of internal control necessary to meet the financial reporting and disclosure objectives: (1) control environment, (2) risk assessment, (3) control activities, (4) information and communication, and (5) monitoring. The COSO internal control model has been adopted as a framework by some organizations working toward Sarbanes–Oxley Section 404 compliance.
ITIL. The IT Infrastructure Library (ITIL) is a set of 34 books published by the British government’s Stationery Office between 1989 and 1992 to improve IT service management. The framework contains a set of best practices for core IT operational processes such as change, release, and configuration management; incident and problem management; capacity and availability management; and IT financial management. ITIL’s primary contribution is showing how the controls can be implemented for the service management IT processes. These practices are useful as a starting point for tailoring to the specific needs of the organization, and the success of the practices depends upon the degree to which they are kept up to date and implemented on a daily basis. Achievement of these standards is an ongoing process, whereby the implementations need to be planned, supported by management, prioritized, and implemented in a phased approach.
COBIT. Control Objectives for Information and related Technology (COBIT) is published by the IT Governance Institute and contains a set of 34 high-level processes, one for each of the IT processes, such as define a strategic IT plan, define the information architecture, manage the configuration, manage facilities, and ensure systems security. A total of 214 control objectives are provided to support these processes. Ensure systems security is further broken down into control objectives such as manage security measures, identification, authentication and access, user account management, data classification, firewall architectures, and so forth. The COBIT framework examines the effectiveness, efficiency, confidentiality, integrity, availability, compliance, and reliability aspects of the high-level control objectives. The model defines four domains for governance: planning and organization, acquisition and implementation, delivery and support, and monitoring. Processes and IT activities and tasks are then defined within these domains. The framework provides an overall structure for information technology control and includes control objectives that can be utilized to determine effective security control objectives driven by the business needs.
ISO 17799/BS 7799. The BS 7799/ISO 17799 standards can be used as a basis for developing security standards and security management practices within an organization. The U.K. Department of Trade and Industry (DTI) Code of Practice (CoP) for information security, which was developed with the support of industry in 1993, became British Standard 7799 in 1995. BS 7799 was subsequently revised in 1999 to add certification and accreditation components, which became Part 2 of BS 7799. Part 1 of BS 7799 became ISO 17799 and was published as ISO 17799:2000, the first international information security management standard, by the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC).
The ISO 17799 standard was modified in June 2005 as ISO/IEC 17799:2005 and contains 134 detailed information security controls based upon the following 11 areas:
• Information security policy
• Organizing information security
• Asset management
• Human resources security
• Physical and environmental security
• Communications and operations management
• Access control
• Information systems acquisition, development, and maintenance
• Information security incident management
• Business continuity management
• Compliance
The ISO standards are grouped together by topic areas, and the ISO/IEC 27000 series has been designated as the information security management series. For example, the 27002 Code of Practice will replace the current ISO/IEC 17799:2005, “Information Technology — Security Techniques — Code of Practice for Information Security Management.” This is consistent with how ISO has named other topic areas, such as the ISO 9000 series for quality management. ISO/IEC 27001:2005 was released in October 2005 and specifies the requirements for establishing, implementing, operating, monitoring, reviewing, maintaining, and improving a documented information security management system, taking into consideration the company’s business risks. This management standard was based on BS 7799, Part 2 and provides information on building information security management systems as well as guidelines for auditing those systems.
Organizational Behavior
Organizations exist as systems of coordinated activities to accomplish organizational objectives. The larger the organization, the greater the need for formalized mechanisms to ensure the stability of the operations. Formalized, written policies, standards, procedures, and guidelines are created to provide for the long-term stability of the organization, regardless of the incumbent occupying a given position. Over time, those in leadership positions will change, as will the individuals within the workforce being managed. Organizational business processes are rationalized and logically grouped to efficiently and effectively perform the necessary work. Mergers and acquisitions frequently change the dynamics of the current operating organization, providing new opportunities to achieve synergies.
Work is typically broken down into subtasks, which are then assigned to individuals through specialization. When tasks such as systems security, database administration, or systems administration activities are grouped together, they can be performed by one or more individuals who can focus on that particular skill set. This process of specialization creates greater efficiency within the organization, as it permits individuals to become very knowledgeable in a particular discipline and to produce results faster than if the work were combined with other responsibilities. Organizations are also managed in a hierarchical manner, with the lower levels of the organization having more defined, repetitive tasks and less discretion over the allocation of human and physical assets. In the higher levels of the organization, through the definition of the chain of command, there are higher levels of authority and greater capability to reassign resources as necessary to accomplish higher-priority tasks.
Organizational Structure Evolution
The security organization has evolved over the past several decades under several names, for example, data security, systems security, security administration, information security, and information protection. These naming conventions reflect the expanding scope of information security departments. Earlier naming conventions, such as data security, indicated the primary focus of the information security profession, which was to protect the information contained within the mainframe during the data-center era. As the technology evolved into distributed computing and information progressively moved outward from the data-center “glass house” protections, the scope of information security increased to include these platforms. The focus in the 1970s was on the security between computers and the mainframe infrastructure; this evolved into data security and information security in the 1980s, which recognized the importance of protecting the access to and integrity of the information contained within the systems. In the 1990s, as information technology was viewed as more fundamental to business success than ever before, and consumers became more aware of privacy issues regarding the protection and use of their information, the concept of enterprise security protection emerged.
Whatever naming convention is used within the organization, the primary focus of the information security organization is to ensure the confidentiality, availability, and integrity of the information. The size of the security organization and the types of individuals necessary to staff it will depend upon the size of the overall organization, geographic dispersion, centralized or decentralized systems processing, the risk profile of the company, and the budget available for security. Each organization will be slightly different, as each operates within a different industry with a different threat profile. Some organizations may be unwilling to take even the
Information Security and Risk Management slightest risk if the information that needs to be protected, if disclosed, would be devastating to the long-term viability of the business. Organizations such as the defense industry, financial institutions, and technical research facilities needing to protect trade secrets may fall into this category. Until recently, the healthcare and insurance industries have spent a small portion of the available funds on information security, as the primary expenditures were allocated to helping patients and providing systems that increased care versus protecting the information. In fact, in some hospital environments, making information “harder to retrieve quickly” was viewed as detrimental to effective, timely care. In the early-centralized mainframe computing environments, a data security officer, who was primarily responsible for account and password administration, granting access privileges to files, and possibly the disaster recovery function, administered the security function. The assets that the security officer were protecting were primarily IT assets contained in the mainframe computer systems, and did not include hard-copy documents, people, facilities, and other company assets. The responsibility for the position resided within the IT department, and as such, the focus was on IT assets and limited in scope. The security officer was typically trained in how to work with security mechanisms such as RACF, ACF2, TopSecret in CICS/MVS environments, reflecting a narrow and well-defined scope of the responsibilities. As distributed, decentralized computing environments evolved to include internetworking between local area networks (LANs) and wide area networks (WANs), e-mail systems, data warehouses, and remote-access capabilities, the scope of the responsibilities became larger, making it more difficult to sustain these skills within one individual. Complicating the environment further was the integration of multiple disparate software applications and multiple-vendor database management system environments, such as the DB2 database management system, Oracle, Teradata, and Structure Query Language (SQL) Server running on different operating systems such as MVS, Windows, or multiple flavors of UNIX. In addition, each application has individual user access security controls that need to be managed. It would not be realistic to concentrate the technical capability for each of these platforms within one individual, or a small set of individuals trained on all of the platforms. Hence, this provided the impetus for specialization of these skills to ensure that the appropriate training and expertise were present to adequately protect the environment. Hence, firewall/router administrators need the appropriate technical training for the devices they are supporting, while a different individual or group may need to work with the Oracle database administrators to provide appropriate database management system (DBMS) access controls and logging and monitoring capabilities. Today’s Security Organizational Structure. There is no “one size fits all” for the information security department or the scope of the responsibili21
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® ties. The location of where the security organization should report has also been evolving. In many organizations, the information security officer (ISSO) or chief information security officer (CISO) still reports to the chief information officer (CIO) or the individual responsible for the information technology activities of the organization. This is due to the fact that many organizations still view the information security function as an information technology problem and not a core business issue. Alternatively, the rationale for this may be due to the necessity to communicate in a technical language, which is understood by information technology professionals and is not typically well understood by the business. Regardless of the rationale for the placement, placing the individual responsible for information security within the information technology organization could represent a conflict of interest, as the IT department is motivated to deliver projects on time, within budget, and of high quality. Shortcuts may be taken on the security requirements to meet these constraints if the security function is reporting to the individual making these decisions. The benefit of having the security function report to the CIO is that the security department is more likely to be engaged in the activities of the IT department and aware of the upcoming initiatives and security challenges. A growing trend is for the security function to be treated as a risk management function and, as such, be located outside of the IT organization. This provides a greater degree of independence as well as the focus on risk management versus management of user IDs, password resets, and access authorization. Having the reporting relationship outside of the IT organization also introduces a different set of checks and balances on the security activities that are expected to be performed. The security function may report to the chief operating officer, chief executive officer, general counsel, internal audit, legal, compliance, administrative services, or some other function outside of information technology. The function should report as high in the organization as possible, preferably to an executivelevel individual. This ensures that the proper message is conveyed to senior management, the company employees view the appropriate authority of the department, and funding decisions can be made while considering the needs across the company. Best Practices Most individuals within an organization come to work every day to perform their jobs to the best of their ability. Most individuals have the appropriate intentions and seek out information on the best ways to perform their jobs, the training required, and what the expectations of their jobs are. The media places much attention on the external threat by hackers; however, there is also the threat internally of erroneous or fraudulent transactions, which could cause information assets to be damaged or 22
Information Security and Risk Management destroyed. Job controls such as the segregation of duties, job description documentation, mandatory vacations, job and shift rotation, and need to know (least privilege) are implemented to minimize the risk of loss. Individuals must be qualified with the appropriate level of training, with the job responsibilities clearly defined so that the interaction among departments can properly function. Job Rotation. Job rotations reduce the risk of collusion of activities between individuals. Companies with individuals working with sensitive information or systems where there may be the opportunity for personal gain through collusion can benefit by integrating a job rotation with a segregation of duties. Rotating the position may uncover activities that the individual is performing outside of the normal operating procedures, highlighting errors or fraudulent behavior. It may be difficult to implement in small organizations due to the particular skill set required for the replaced position, and thus security controls and supervisory control will need to be relied upon. Rotating individuals in and out of jobs provides the ability to give backup coverage, succession planning, and job enrichment opportunities for those involved. Separation of Duties. One individual should not have the capability to execute all of the steps of a particular process. This is especially important in the information systems departments, where individuals typically have greater access and capability to modify, delete, or add data to the system. Failure to separate the duties could result in individuals embezzling money from the company without the involvement of others. Duties are typically subdivided or split between different individuals or organizational groups to achieve separation. This separation reduces the chances of errors or fraudulent acts, as each group serves as a balancing check on the others. Management is responsible for ensuring that the duties are well defined and separated within their business processes. Failure to do so can result in unintended consequences; for example:
• An individual in the finance department with the ability to add vendors to the vendor database, issue purchase orders, record receipt of shipment, and authorize payment could issue payments to made-up vendors without detection.
• An individual in the payroll department with the ability to authorize, process, and review payroll transactions could increase the salaries of coworkers without detection.
• A computer programmer with the ability to change production code could change the code to move money to a personal bank account and then conceal his or her actions by replacing the production code.
• A programmer with the authority to write the code, move it to production, and run the production job, skipping internal systems development procedures, could implement erroneous code either inadvertently or deliberately.
Some organizations utilize a two-dimensional segregation of duties matrix to determine which positions should be separated within a department. Each position is written along the axes of the matrix, with an x placed where the two responsibilities should not reside with the same individual. This x indicates where the job duties should be subdivided among different individuals; a brief sketch of this idea follows the list below. It is critical to separate the duties between the IS department and the business units, as well as between those areas within the IS organization. For example, the management of the user departments is responsible for authorizing the systems access rights of their employees. The information systems department, more specifically the area responsible for security administration, is responsible for granting the access. On a periodic basis, this access is also reviewed and confirmed by the business management. Within the IT department, the security administrator would be separated from the business analyst, computer programmer, computer operator, and so forth. These duties, which should not be combined within one person or group, are referred to as incompatible duties. Incompatible duties may vary from one organization to another. However, the same individual should not typically perform the following functions:
• Systems administration
• Network management
• Data entry
• Computer operations
• Security administration
• Systems development and maintenance
• Security auditing
• Information systems management
• Change management
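The incompatibility-matrix idea can be sketched in a few lines of code. The duty names, the incompatible pairs, and the staff assignments below are illustrative assumptions only; organizations define their own matrix, often in a simple spreadsheet rather than code.

# Minimal sketch of a segregation-of-duties check against an incompatibility
# matrix. The duty names and incompatible pairs are illustrative assumptions;
# each organization defines its own matrix.

INCOMPATIBLE_DUTIES = {
    frozenset({"security administration", "systems development"}),
    frozenset({"data entry", "security auditing"}),
    frozenset({"computer operations", "change management"}),
}

def sod_violations(assignments):
    """Return (person, duty_a, duty_b) for every incompatible pairing found."""
    violations = []
    for person, duties in assignments.items():
        for pair in INCOMPATIBLE_DUTIES:
            if pair <= duties:  # both duties of the pair are held by one person
                duty_a, duty_b = sorted(pair)
                violations.append((person, duty_a, duty_b))
    return violations

staff = {
    "alice": {"security administration", "systems development"},
    "bob": {"computer operations"},
}
for person, a, b in sod_violations(staff):
    print(f"Review needed: {person} holds both '{a}' and '{b}'")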
In smaller organizations, it may be difficult to separate the activities, as there may be limited staff available to perform these functions. These organizations may have to rely on compensating controls, such as supervisory review, to mitigate the risk. Audit logging and after-the-fact review by a third party can also provide an effective control in lieu of the ability to separate the job functions. Larger organizations need to ensure that appropriate separation and supervisory review and development of formalized operational procedures are in place. The separated functions should be documented fully and communicated to the staff to ensure that only the assigned individuals will execute tasks associated with these functions. These actions can help prevent or detect erroneous work performed by the user. Larger-dollar-amount transactions should have more extensive 24
Information Security and Risk Management supervisory review controls (i.e., director/vice president/president formal sign-off) before processing is permitted. Individuals in the information systems department must be prohibited from entering data into the systems, data entry personnel must not be the same individuals verifying the data, and reconciliation of the information should not be performed by the individual entering the information. Separation of these duties introduces checks and balances on the transactions. As new applications are developed, mergers and acquisitions occur, and systems are replaced, care must be taken to ensure that the segregation of duties is maintained. Periodic management review ensures that the transaction processing environment continues to operate with the designed separation principles. Least Privilege (Need to Know). Least privilege refers to granting users only the accesses that are required to perform their job functions. Some employees will require greater access than others based upon their job functions. For example, an individual performing data entry on a mainframe system may have no need for Internet access or the ability to run off reports of the information that they are entering into the system. Conversely, a supervisor may have the need to run reports, but should not be provided the capability to change information in the database. Well-formed transactions ensure that the users update the information in systems consistently and through the developed procedures. Information is typically logged from the well-formed transactions, which can serve as a preventive control (because the user knows the information is being logged) and a detective control (to discover how information was modified after the fact). Security controls around these transactions are necessary to ensure that only authorized changes are made to the programs applying the transaction. Access privileges need to be defined at the appropriate level that provides a balance between supporting the business operational flexibility and adequate security. Defining these parameters requires the input of the business application owner to be effective. Mandatory Vacations. Requiring mandatory vacations of a specified consecutive-day period provides similar benefits as the job rotations. If work is reassigned during the vacation period, irregularities may surface through the transaction flow, communications with outside individuals, or requests to process information without following normal procedures. Some organizations remove access to the remote systems during this period as well to ensure that the employee that is temporarily replaced is not performing work. Job Position Sensitivity. The access and duties of an individual for a particular department should be assessed to determine the sensitivity of the job position. The degree of harm that the individual can cause through 25
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® misuse of the computer system, through disclosing information, disrupting data processing, sharing internal secrets, modifying crucial information, or committing computer fraud, should be input to the classification. Rolebased access establishes roles for a job or class of jobs, indicating the type of information the individual is permitted to access. Job sensitivity may also be used to require more stringent policies related to mandatory vacations, job rotations, and access control policies. Excess controls for the sensitivity level of the position waste resources through the added expense, while fewer controls cause unacceptable risks. Responsibilities of the Information Security Officer The information security officer is responsible for ensuring the protection of all of the business information assets from intentional and unintentional loss, disclosure, alteration, destruction, and unavailability. The security officer typically does not have the resources available to perform all of these functions and must depend upon other individuals within the organization to implement and execute the policies, procedures, standards, and guidelines to ensure the protection of information. In this capacity, the information security officer acts as the facilitator of information security for the organization. Communicate Risks to Executive Management. The information security officer is responsible for understanding the business objectives of the organization, ensuring that a risk assessment is performed, taking into consideration the threats and vulnerabilities impacting the particular organization, and subsequently communicating the risks to executive management. The makeup of the executive management team will vary based on type of industry or government entity, but typically includes individuals with C-level titles, such as the chief executive officer (CEO), chief operating officer (COO), chief financial officer (CFO), and chief information officer (CIO). The executive team also includes the first level reporting to the CEO, such as the VP of sales and marketing, VP of administration, general counsel, and the VP of human resources.
The executive team is interested in maintaining the appropriate balance between acceptable risk and ensuring that business operations are meeting the mission of the organization. In this context, executive management is not concerned with the technical details of the implementations, but rather with what is the cost/benefit of the solution and what is the residual risk that will remain after the safeguards are implemented. For example, the configuration parameters of installing a particular vendor’s router are not as important as: (1) What is the real perceived threat (problem to be solved)? (2) What is the risk (impact and probability) to our business operations? (3) What is the cost of the safeguard? (4) What will be the residual risk (risk remaining after the safeguard is properly implemented and sus26
Information Security and Risk Management tained)? (5) How long will the project take? Each of these must be evaluated along with the other items competing for resources (time, money, people, and systems). The security officer has a responsibility to ensure that the information presented to executive management is based upon a real business need and the facts are represented clearly. Ultimately, it is the executive management of the organization that is responsible for information security. Presentations should be geared at a high level to convey the purpose of the technical safeguard, and not be a rigorous detailed presentation of the underlying technology unless requested. Budget for Information Security Activities. The information security officer prepares a budget to manage the information security program and ensures that security is included in the various other departmental budgets, such as the help desk, applications development, and the computing infrastructure. Security is much less expensive when it is built in to the application design versus added as an afterthought. Estimates range widely over the costs of adding security later in the life cycle; however, it is generally believed that it is at least a factor of 10 times to add security in the implementation phase versus addressing it early in the analysis phases. The security officer must work with the application development managers to ensure that security is considered in the project cost during each phase of development (analysis, design, development, testing, implementation, and postimplementation). Systems security certification, or minimally holding walk-throughs to review the security, ensures that the deliverables are met.
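One way to make these cost/benefit and residual-risk questions concrete is the widely used annualized loss expectancy calculation. The sketch below is purely illustrative; the threat scenario, dollar figures, and assumed safeguard effectiveness are hypothetical and not values prescribed by this text.

# Illustrative cost/benefit arithmetic using the common annualized loss
# expectancy formulation: ALE = single loss expectancy (SLE) x annual rate of
# occurrence (ARO). All figures are hypothetical assumptions.

def ale(sle, aro):
    """Annualized loss expectancy in dollars per year."""
    return sle * aro

# Hypothetical threat: loss of an unencrypted laptop containing customer data.
ale_before = ale(sle=200_000, aro=0.5)   # $100,000 per year before the safeguard

# Hypothetical safeguard: disk encryption at $40,000 per year, assumed to
# reduce the rate of a damaging loss by 90 percent.
safeguard_cost = 40_000
ale_after = ale(sle=200_000, aro=0.05)   # $10,000 per year of residual risk

net_annual_benefit = ale_before - ale_after - safeguard_cost
print(f"ALE before: ${ale_before:,.0f}; ALE after: ${ale_after:,.0f}")
print(f"Net annual benefit: ${net_annual_benefit:,.0f}; residual risk: ${ale_after:,.0f}/yr")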
In addition to ensuring that new project development activities appropriately address security, ongoing functions such as security administration, intrusion detection, incident handling, policy development, standards compliance, support of external auditors, and evaluations of emerging technology need to be appropriately funded. The security officer will rarely receive all the funding necessary to complete all of the projects for which he and his team have envisioned, and must usually plan these activities over a multiyear period. The budgeting process requires examination of the current risks and ensuring that activities with the largest cost/benefit to the organization are implemented. Projects greater than 12 to 18 months are generally considered to be long term and strategic in nature and typically require more funding and resources or are more complex in their implementation. In the event these efforts require a longer timeframe, pilot projects to demonstrate near-term results on a smaller scale are preferable. Organizations lose patience with funding long-term efforts, as the initial management supporters may change, as well as some of the team members implementing the change. The longer the payback period, the higher the rate of return (ROR) expected by executive manage27
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® ment. This is due primarily to the higher risk level associated with longerterm efforts. The number of staff, level of security protection required, tasks to be performed, regulations to be met, staff qualification level, training required, and degree of metrics tracking will also be parameters that drive the amount of funding required. For example, if the organization is requested through government regulation to increase the number of individuals with security certifications, whether individual product vendor or industry standard certifications such as the CISSP, then the organization may feel an obligation to fund training seminars to prepare the individuals, and this will need to be factored into the budget process. This may also be utilized to attract and retain security professionals to the organization through increased learning opportunities. As another example, the time required in complying with government mandates and laws may necessitate increased staffing to provide the appropriate ongoing tracking and responses to audit issues. Ensure Development of Policies, Procedures, Baselines, Standards, and Guidelines. The security officer and his team are responsible for ensuring
that the security policies, procedures, baselines, standards, and guidelines are written to address the information security needs of the organization. However, this does not mean that the security department must write all the policies by themselves. Nor should the policies be written solely by the security department without the input and participation of the other departments within the organization, such as legal, human resources, information technology, compliance, physical security, the business units, and others that have to implement the policies. Develop and Provide Security Awareness Program. The security officer provides the leadership for the information security awareness program by ensuring that the programs are delivered in a meaningful, understandable way to the intended audience. The program should be developed to grab the attention of the participant to convey general awareness of the security issues and what reporting actions are expected when the end user notices security violations. Without promoting awareness, the policies remain as shelf-ware with less assurance that they will actually be practiced within the company. Understand Business Objectives. Central to the security officer’s success within the organization is to understand the vision, mission, objectives/goals, and plans of the organization. This understanding increases the chances of success, as security can be introduced at the correct times during the project life cycle and can enable the organization to carry out the mission. The security officer needs to understand the competitive pressures facing the organization, the strengths, weaknesses, threats, and 28
Information Security and Risk Management opportunities, and the regulatory environment within which the organization operates. This increases the likelihood that the appropriate security controls will be applied to those areas with the greatest risk, thus resulting in an optimal allocation of the scarce security funding. Maintain Awareness of Emerging Threats and Vulnerabilities. The threat environment is constantly changing and, as such, it is incumbent upon the security officer to keep up with the changes. It is difficult for any organization to anticipate new threats, some of which come from the external environment and some from new technological changes. Prior to the September 11, 2001, terrorist attack in the United States, few individuals perceived that sort of attack as very likely. However, since then, many organizations have revisited their access control policies, physical security, and business continuity plans. New technologies, such as wireless and low-cost removable media (writeable CDs/DVDs and USB drives), have created new threats to confidentiality and disclosure of information, which need to be addressed. Although the organization tries to write policies to last for two or three years without change, depending upon the industry and the rate of change, these may need to be revisited more frequently. Evaluate Security Incidents and Response. Computer incident response teams (CIRTs) are groups of individuals with the necessary skills, including management, technical staff, infrastructure, and communications staff, for evaluating the incident, evaluating the damage caused by the incident, and providing the correct response to repair the system and collect evidence for potential prosecution or sanctions. CIRTs are activated depending upon the nature of the incident and the culture of the organization. Security incidents need to be investigated and followed up promptly, as this is a key mechanism in ensuring compliance with the security policies. Sanctions of employees with the appropriate disciplinary action, up to and including termination, must be specified and employed for the policies to be effective. The security officer and the security department ensure that these incidents are followed up in a timely manner. Develop Security Compliance Program. Compliance is the process of ensuring that security policies are adhered to. A policy and procedure regarding the hardening of the company’s firewalls are not very useful if the activity is not being performed. Periodic compliance, whether though internal or external inspection, ensures that the procedures, checklists, and baselines are documented and followed in practice. Compliance of the end users is also necessary to ensure that end users and technical staff are trained and have read the security policies. Establish Security Metrics. Measurements are collected to provide information on long-term trends and the day-to-day workload and to demonstrate the effect of noncompliance. Measurement of processes provides 29
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® the ability to improve the process. For example, measuring the number of tickets for password resets can be translated into workload hours and may provide justification for the implementation of new technologies for the end user to self-administer the reset process. Or, capturing the viruses found or reported may indicate a need for further education or improvement of the antivirus management process. Many decisions need to be made when collecting metrics, such as who will collect the metrics, what statistics will be collected, when they will be collected, and what are the thresholds where variations are out of bounds and should be acted upon. Participate in Management Meetings. Security officers must be involved in the management teams and planning meetings of the organization to be fully effective. Project directions and decisions are made during these meetings, as well as the establishment of buy-in for the security initiatives. These meetings will include board of director meetings (periodic updates), IT steering committees, manager meetings, and departmental meetings. Ensure Compliance with Government Regulations. G o v e r n m e n t s a re continuously passing new laws, rules, and regulations, with which the enterprise must be in compliance. Although many of the laws are overlapping in the security requirements, frequently the new laws provide a more stringent requirement on a particular aspect of information security. Timeframes to be in compliance with the law may not always come at the best time for the organization, nor may they line up with the budget funding cycles. The security officer must stay abreast of emerging regulatory developments to enable response in a timely manner. Assist Internal and External Auditors. Auditors provide an essential role for information security by providing an independent view of the design, effectiveness, and implementation of the security control. The results of these audits generate findings that require corrective action plans to resolve the issue and mitigate the risk. Auditors request information prior to the start of the audit to facilitate the review. Some audits are performed at a high level without substantive testing, while others performing this testing pull samples to determine if the control was correctly executed. The security department cooperates with the internal and external auditors to ensure that the control environment is adequate and functional. Stay Abreast of Emerging Technologies. The security officer must stay abreast of emerging technologies to ensure that the appropriate solutions are in place for the company based upon its appetite for risk, culture, resources available, and desire to be an innovator, leader, or follower (mature product implementation) of security products and practices. Failure to do so could increase the costs to the organization by maintaining 30
older, less effective products. Approaches to satisfying this requirement may range from active involvement in security industry associations to interaction with vendors to subscribing to industry research groups to reviewing printed material.

Reporting Model

The security officer and the information security organization should report as high in the organization as possible to (1) maintain visibility of the importance of information security and (2) limit the distortion or inaccurate translation of messages that can occur due to hierarchical, deep organizations. The higher up in the organization, the greater the ability to gain other senior management’s attention to security and the greater the capability to compete for the appropriate budget and resources. Where the information security officer reports in the organization has been the subject of debate for several years and depends upon the culture of the organization. There is no one best model that fits all organizations, but rather pros and cons associated with each placement choice. Whatever the chosen reporting model, there should be an individual chosen with the responsibility for ensuring information security at the enterprisewide level to establish accountability for resolving security issues. The discussion in the next few sections should provide the perspective for making the appropriate choice for the target organization.

Business Relationships. Wherever the information security officer reports, it is imperative that he or she establishes credible and good working relationships with executive management, middle management, and the end users that will be following the security policy. Information gathered and acted upon by executive management is obtained through their daily interactions with many individuals, not just other executives. Winning their support may be the result of influencing a respected individual within the organization, possibly several management layers below the executive. Similarly, the relationship between the senior executives and the information security officer is important if the security strategies are to carry through to implementation. Establishing a track record of delivery and demonstrating the value of the protection to the business will build this relationship. If done properly, the security function becomes viewed as an enabler of the business rather than a control point that slows innovation, provides roadblocks to implementation, and represents an overhead cost function. Reporting to an executive who understands the need for information security and is willing to work to obtain funding is preferable.

Reporting to the CEO. Reporting directly to the CEO greatly reduces the message filtering of reporting further down the hierarchy and improves communication, as well as demonstrating to the organization the importance of information security. Firms that have high security needs, such as
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® credit card companies, technology companies, and companies whose revenue stream depends highly upon Internet Web site purchases, such as eBay or Amazon, might utilize such a model. The downside to this model is that the CEO may be preoccupied with many other business issues and may not have the interest, time, or enough technical understanding to devote to information security issues. Reporting to the Information Technology (IT) Department. In this model, the information security officer reports directly to the chief information officer (CIO), director of information systems, the vice president of systems, or whatever the title of the head of the IT department is. Most organizations are utilizing this relationship, as this was historically where the data security function was placed in many companies. This is due to the history of security being viewed as only an information technology problem, which it is not. The advantage of this model is that the individual to which the security officer is reporting has the understanding of the technical issues and typically has the clout with senior management to make the desired changes. It is also beneficial because the information security officer and his department must spend a good deal of time interacting with the rest of the information systems department, which builds the appropriate awareness of project activities and issues and builds business relationships. The downside of the reporting structure is the conflict of interest. When the CIO must make decisions with respect to time to market, resource allocations, cost minimization, application usability, and project priorities, the ability exists to slight the information security function. The typical CIO’s goals are more oriented toward delivery of application products to support the business in a timely manner. If the perception is that implementation of the security controls may take more time or money to implement, the security considerations may not be provided equal weight. Reporting to a lower level within the CIO organization should be avoided, as noted earlier; the more levels between the CEO and the information security officer, the more challenges that must be overcome. Levels further down in the organization also have their own domains of expertise that they are focusing on, such as computer operations, applications programming, or computing infrastructure. Reporting to Corporate Security. Corporate security is focused on the physical security of the enterprise, and most often the individuals in this environment have backgrounds as former police officers, military, or were associated in some other manner with the criminal justice system. This alternative may appear logical; however, the individuals from these organizations come from two different backgrounds. Physical security is focused on criminal justice, protection, and investigation services, while information security professionals usually have different training in business and information technology. The language of these disciplines intersects in 32
Information Security and Risk Management some areas, but is vastly different in others. Another downside may be the association with the physical security group may evoke a police-type mentality, making it difficult to build business relationships with business users. Establishing relationships with the end users increases their willingness to listen and comply with the security controls, as well as to provide knowledge to the security department of potential violations. Reporting to the Administrative Services Department. The information security officer may report to the vice president of administrative services, which may also include the physical security, employee safety, and HR departments. As in reporting to the CIO, there is only one level between the CEO and the information security department. The model may also be viewed as an enterprise function due to the association with the human resources department. It is attractive because of the focus on security for all forms of information (paper, oral, electronic) versus residing in the technology department, where the focus may tend to be more on electronic information. The downside is that the leaders of this area may be limited in their knowledge of information technology and the ability to communicate with the CEO on technical issues. Reporting to the Insurance and Risk Management Department. Information-intensive organizations such as banks, stock brokerages, and research companies may benefit from this model. The chief risk officer is already concerned with the risks to the organization and the methods to control those risks through mitigation, acceptance, insurance, etc. The downside is that the risk officer may not be conversant in the information systems technology, and the strategic focus of this function may give less attention to day-to-day operational security projects. Reporting to the Internal Audit Department. This reporting relationship can create a conflict of interest, as the internal audit department is responsible for evaluating the effectiveness and implementation of the organization’s control structure, including those of the information security department. It would be difficult for the internal audit to provide an independent viewpoint, if the attainment of meeting the security department’s objectives is also viewed as part of its responsibility. The internal audit department may have adversarial relationships with other portions of the company due to the nature of its role (to uncover deficiencies in departmental processes), and through association, the security department may develop similar relationships. It is advisable that the security department establishes close working relationships with the internal audit department to facilitate the control environment. The internal audit manager most likely has a background in financial, operational, and general controls and may have difficulty understanding the technical activities of the information security department. On the positive side, both 33
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® areas are focused on improving the controls of the company. The internal audit department does have a preferable reporting relationship for audit issues through a dotted-line relationship with the company’s audit committee on the board of directors. It is advisable for the information security function to have a path to report security issues to the board of directors as well, either in conjunction with the internal audit department or through its own. Reporting to the Legal Department. Attorneys are concerned with compliance with regulations, laws, and ethical standards, performing due diligence, and establishing policies and procedures that are consistent with many of the information security objectives. The company’s general counsel also typically has the respect or ear of the CEO. In regulated industries, this may be a very good fit. On the downside, due to the emphasis on compliance activities, the information security department may end up performing more compliance-checking activities (versus security consulting and support), which are typically the domain of internal audit. An advantage is that the distance between the CEO and the information security officer is one level. Determining the Best Fit. As indicated earlier, each organization must view the pros and cons of each of these types of relationships and develop the appropriate relationship based upon the company culture, type of industry, and what will provide the greatest benefit to the company. Conflicts of interest should be minimized, visibility increased, funding appropriately allocated, and communication effective when the optimal reporting relationship is decided for the placement of the information security department.
Enterprisewide Security Oversight Committee

The enterprisewide security oversight committee, sometimes referred to as a security council, serves as an oversight committee to the information security program. The vision of the security council must be clearly defined and understood by all members of the council.

Vision Statement. A clear security vision statement should exist that is in alignment with and supports the organizational vision. Typically, these statements draw upon the security concepts of confidentiality, integrity, and availability to support the business objectives. The vision statements are not technical and focus on the advantages to the business. People from management and technical areas will be involved in the council and have limited time to participate, so the vision statement must be something that is viewed as worthwhile to sustain their continued involvement. The vision statement is a high-level set of statements: brief, to the point, and achievable.
The Information Security Council provides management direction and a sounding board for the ACME Company’s information security efforts to ensure that these efforts are:
• Appropriately prioritized
• Supported by each organizational unit
• Appropriately funded
• Realistic given ACME’s information security needs
• Balancing security needs against cost, response time, ease of use, flexibility, and time to market

The Information Security Council takes an active role in enhancing our security profile and increasing the protection of our assets through:
• Approval of organizationwide information security initiatives
• Coordination of various workgroups so that security goals can be achieved
• Promoting awareness of initiatives within their organizations
• Discussion of security ideas, policies, and procedures and their impact on the organization
• Recommendation of policies to the ACME Company IT Steering Committee
• Increased understanding of the threats, vulnerabilities, and safeguards facing our organization
• Active participation in policy, procedure, and standard review

The ACME Company Information Technology Steering Committee supports the Information Security Council by:
• Developing the strategic vision for the deployment of Information Technology
• Establishing priorities, arranging resources in concert with the vision
• Approval of the recommended policies, standards, and guidelines
• Approving major capital expenditures
Figure 1.3. Sample Security Council mission statement.

Mission Statement. Mission statements are objectives that support the overall vision. These become the road map to achieving the vision and help the council clearly view the purpose for its involvement. Some individuals may choose nomenclature such as goals, objectives, initiatives, etc. A sample mission statement is shown in Figure 1.3.
Effective mission statements do not need to be lengthy, as the primary concern is to communicate the goals so both technical and nontechnical individuals readily understand them. The primary mission of the security council will vary by organization, but can include statements that address the following. Security Program Oversight. By establishing this goal in the beginning, the members of the council begin to feel that they have some input and influence over the direction of the security program. This is key, as many security decisions will impact their areas of operation. This also is the beginning of management’s commitment at the committee level, as the 35
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® deliverables produced through the information security program now become recommended or approved by the security council versus the information security department. DECIDE ON PROJECT INITIATIVES. Each organization has limited resources (time, money, people) to allocate across projects to advance the business. The primary objective of information security projects is to reduce the organizational business risk through the implementation of reasonable controls. The council should take an active role in understanding the initiatives and the resulting business impact. PRIORITIZE INFORMATION SECURITY EFFORTS. Once the security council understands the proposed project initiatives and the associated positive impact to the business, its members can be involved with the prioritization of the projects. This may be in the form of a formal annual process or through the discussion and expressed support for individual initiatives. REVIEW AND RECOMMEND SECURITY POLICIES. Review of the security policies should include a line-by-line review of the policies, a cursory review of the procedures to support the policies, and a review of the implementation and subsequent enforcement of the policies. Through this activity, three key concepts are implemented that are important to sustaining commitment: (1) understanding of the policy is enhanced, (2) practical ability of the organization to support the policy is discussed, and (3) buy-in is established to subsequent support of implementation activities. CHAMPION ORGANIZATIONAL SECURITY EFFORTS. Once the council understands and accepts the policies, it serves as the organizational champion behind the policies, because it was involved in the creation of the policies. Council members may have started by reviewing a draft of the policy created by the information systems security department, but the resulting product was only accomplished through their review, input, and participation in the process. Their involvement creates ownership of the deliverable and a desire to see the security policy or project succeed within the company. RECOMMEND AREAS REQUIRING INVESTMENT. Members of the council have the opportunity to provide input from the perspective of their individual business units. The council serves as a mechanism for establishing broad support for security investments from this perspective. Resources within any organization are limited and allocated to the business units with the greatest need and the greatest perceived return on investment. Establishing this support enhances the budgetary understanding of the other business managers, as well as the chief financial officer, which is often essential to obtain the appropriate funding.
A mission statement that incorporates the previous concepts will help focus the council and also provide the sustaining purpose for its involvement.
The vision and mission statements should also be reviewed on an annual basis to ensure that the council is still functioning according to the values expressed in the mission statement, as well as to ensure that new and replacement members are in alignment with the objectives of the council.

Oversight Committee Representation. The oversight committee is made up of representatives from multiple organizational units that are necessary to support the policies in the long term. The HR department is essential to provide knowledge of the existing code of conduct, employment and labor relations, termination and disciplinary action policies, and practices that are in place. The legal department is needed to ensure that the language of the policies states what is intended, and that applicable local, state, and federal laws are appropriately followed. The IT department provides technical input and information on current initiatives and the development of procedures and technical implementations to support the policies. The individual business unit representation is essential to understand how practical the policies may be in carrying out the mission of the business. Compliance department representation provides insight on ethics, contractual obligations, and investigations that may require policy creation. And finally, the security officer, who typically chairs the council, should represent the information security department and members of the security team for specialized technical expertise.
The oversight committee is a management committee and, as such, is populated primarily with management-level employees. It is difficult to obtain the time commitment required to review policies at a detailed level by senior management. Reviewing the policies at this level is a necessary step to achieve buy-in within management. However, it would not be a good use of the senior management level in the early stages of development. Line management is very focused on their individual areas and may not have the organizational perspective necessary (beyond their individual departments) to evaluate security policies and project initiatives. Middle management appears to be in the best position to appropriately evaluate what is best for the organization, as well as possessing the ability to influence senior and line management to accept the policies. Where middle management does not exist, it is appropriate to include line management, as they are typically filling both of these roles (middle and line functions) when operating in these positions. Many issues may be addressed in a single security council meeting, which necessitates having someone record the minutes of the meeting. The chairperson’s role in the meeting is to facilitate the discussion, ensure that all viewpoints are heard, and drive the discussions to decisions where necessary. It is difficult to perform that function at the same time as taking notes. Recording the meeting is also helpful to capture key points that may have been missed in the notes, so that accurate minutes can be produced. 37
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® The relationship between the security department and the security oversight committee is a dotted-line relationship that may or may not be reflected on the organization chart. The value of the committee is in providing the business direction and increasing the awareness of the security activities that are impacting the organization on a continuous basis. How frequently the committee meets will depend upon the organizational culture (i.e., Are monthly or quarterly oversight meetings held on other initiatives?), the number of security initiatives, and the urgency of decisions that need the input of the business units. Roles and Responsibilities. Many different individuals within an organization contribute to successful information protection. Security is the responsibility of everyone within the company. Every end user is responsible for understanding the policies and procedures that are applicable to their particular job function and adhering to the security control expectations. Users must have knowledge of the responsibilities and be trained to a level that is adequate to reduce the risk of loss. Although the exact titles and scope of responsibility of the individuals may vary from organization to organization, the following roles support the implementation of security controls. An individual may be performing multiple roles when the processes are defined for the organization, depending upon the constraints and organizational structure. It is important to provide clear assignment and accountability to designated employees for the various security functions to ensure that the tasks are performed. Communication of the responsibilities for each function, through distribution of policies, job descriptions, training, and management direction, provides the foundation for execution of security controls by the workforce. END USER. The end user is responsible for protecting the information assets on a daily basis through adherence to the security policies that have been communicated. The end users represent many “windows” to the organization, and through their practices the security can be either strengthened through compliance or compromised through their actions. For example, downloading unauthorized software, opening attachments from unknown senders, or visiting malicious Web sites could introduce backdoors or Trojans into the environment. End users can also be the front-line eyes and ears of the organization and report security incidents for investigation. Creating this culture requires that this role and responsibility is clearly communicated and understood by all. EXECUTIVE MANAGEMENT. Executive management maintains the overall responsibility for protection of the information assets. The business operations are dependent upon information being available, accurate, and protected from individuals without a need to know. Financial losses can occur if the confidentiality, integrity, or availability of the information is compromised. They must be aware of the risks that they are accepting for the orga38
Information Security and Risk Management nization, through either explicit decision making or failure to make decisions or understand the nature of the risks inherent in the existing operation of the information systems. SECURITY OFFICER. As noted in the governance sections, the security officer directs, coordinates, plans, and organizes information security activities throughout the organization. The security officer works with many different individuals, such as executive management, management of the business units, technical staff, business partners, and third parties such as auditors and external consultants. The security officer and his or her team are responsible for the design, implementation, management, and review of the organization’s security policies, standards, procedures, baselines, and guidelines. INFORMATION SYSTEMS SECURITY PROFESSIONAL. Development of the security policies and the supporting procedures, standards, baselines, and guidelines, and subsequent implementation and review are performed through these individuals. Guidance is provided for technical security issues, and emerging threats are considered for the adoption of new policies. Interpretation of government regulations and industry trends and determination of the placement of vendor solutions in the security architecture to advance the security of the organization are performed. DATA/INFORMATION/BUSINESS OWNERS. A business executive or manager is responsible for an information asset. These are the individuals that assign the appropriate classification to the asset and ensure that the business information is protected with the appropriate controls. Periodically, the data owners need to review the classification and access rights associated with the information asset. Depending upon the formalization of the process within the organization, the data owners or their delegates may be required to approve access to the information from other business units. Data owners also need to determine the criticality, sensitivity, retention, backups, and safeguards for the information. Data owners or their delegates are responsible for understanding the policies and procedures used to appropriately classify the information. DATA CUSTODIAN. A data custodian is an individual or function that takes care of the information on behalf of the data owner. These individuals ensure that the information is available to the end users and is backed up to enable recovery in the event of data loss or corruption. Information may be stored in files, databases, or systems whose technical infrastructure must be managed, typically by systems administrators or operations. INFORMATION SYSTEMS AUDITOR. IT auditors determine whether systems are in compliance with the security policies, procedures, standards, baselines, designs, architectures, management direction, and other requirements. The auditors provide independent assurance to management on the appro39
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® priateness of the security objectives. The auditor examines the information systems and determines whether they are designed, configured, implemented, operated, and managed in a way that the organizational objectives are being achieved. The auditors provide top company management with an independent view of the controls that have been designed and their effectiveness. Samples are extracted to test the existence and effectiveness of the controls. BUSINESS CONTINUITY PLANNER. Business continuity planners develop contingency plans to prepare for the occurrence of a major threat with the ability to impact the company’s objectives negatively. Threats may include earthquakes, tornadoes, hurricanes, blackouts, changes in the economic/political climate, terrorist activities, fire, or other major actions potentially causing significant harm. The business continuity planner ensures that business processes can continue through the disaster and coordinates those activities with the information technology personnel responsible for disaster recovery on specific platforms. INFORMATION SYSTEMS/INFORMATION TECHNOLOGY PROFESSIONALS. These personnel are responsible for designing security controls into information systems, testing the controls, and implementing the systems in production environments through agreed upon operating policies and procedures. The information systems professionals work with the business owners and the security professionals to ensure that the designed solution provides security controls commensurate with the acceptable criticality, sensitivity, and availability requirements of the application. SECURITY ADMINISTRATOR. A security administrator manages the user access request process and ensures that privileges are provided to those individuals that have been authorized for access by the proper management. This individual has elevated privileges and creates and deletes accounts and access permissions. The security administrator also terminates access privileges when individuals leave their jobs or transfer to company divisions. The security administrator maintains records of approvals as part of the control environment and produces these records to the information systems auditor to demonstrate compliance with the policies. SYSTEMS ADMINISTRATOR. A systems administrator (sysadmin) configures the hardware and operating systems to ensure that the information can be available and accessible. The administrator runs software distribution systems to install updates and test patches on the company computers. The administrator tests and implements system upgrades to ensure the continued reliability of the servers and network devices. Periodic usage of vulnerability testing tools, through either purchased software or open-source tools tested in a separate environment, identifies areas needing system upgrades or patches to fix the vulnerability. 40
PHYSICAL SECURITY. The individuals assigned to the physical security role establish relationships with external law enforcement, such as the local police agencies, state police, or the Federal Bureau of Investigation (FBI), to assist in investigations. Physical security personnel manage the installation, maintenance, and ongoing operation of the closed circuit television (CCTV) surveillance systems, burglar alarm systems, and card reader access control systems. Guards are placed where necessary as a deterrent to unauthorized access and to provide safety for the company employees. Physical security personnel interface with systems security, human resources, facilities, and legal and business areas to ensure that the practices are integrated. ADMINISTRATIVE ASSISTANTS/SECRETARIES. This role can be very important to information security; in many smaller companies, this may be the individual who greets visitors, signs packages in and out, recognizes individuals that desire to enter the offices, and serves as the phone screener for executives. These individuals may be subject to social engineering attacks, whereby the potential intruder attempts to solicit confidential information that may be used for a subsequent attack. Social engineers prey on the goodwill and good graces of the helpful individual to gain entry. A properly trained assistant will minimize the risk of divulging useful company information or providing unauthorized entry. HELP DESK ADMINISTRATOR. As the name implies, the help desk is there to field questions from users that report system problems through a ticketing system. Problems may include poor response time, potential virus infections, unauthorized access, inability to access system resources, or questions on the use of a program. The help desk individual would contact the computer incident response team (CIRT) when a situation meets the criteria developed by the team. The help desk resets passwords, resynchronizes/reinitializes tokens and smart cards, and resolves other problems with access control. These functions may alternatively be performed through self-service by the end user (i.e., an intranet-based solution that establishes the identity of the end user and resets the password) or by another area, such as security administration or the systems administrator, depending upon the organizational structure and separation of duties principles.
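As an illustration of the separation just described, the following minimal sketch shows how a help desk routine might verify identity before resetting credentials and escalate to the CIRT when a ticket meets the team's criteria. It is written in Python purely for illustration; the ticket categories, object names, and methods (verify_identity, reset_password, and so on) are hypothetical assumptions, not drawn from the CBK or any particular product.

```python
# Hypothetical help desk flow -- names and methods are illustrative only.
INCIDENT_CRITERIA = {"suspected_virus", "unauthorized_access"}  # triggers defined by the CIRT

def handle_ticket(ticket, directory, cirt):
    """Verify the requester, resolve routine access problems, escalate incidents."""
    if not directory.verify_identity(ticket.user, ticket.challenge_answers):
        return "identity not verified; no action taken"
    if ticket.category in INCIDENT_CRITERIA:
        cirt.escalate(ticket)                      # situation meets the criteria developed by the team
        return "escalated to CIRT"
    if ticket.category == "password_reset":
        directory.reset_password(ticket.user)      # could equally be an intranet self-service step
    elif ticket.category == "token_resync":
        directory.resync_token(ticket.user)
    ticket.log("resolved by help desk")            # record kept for later review
    return "resolved"
```

Whether such logic resides with the help desk, an end-user self-service portal, or security administration is exactly the separation-of-duties decision described above.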
An organization may include other roles related to information security to meet its particular needs. Individuals within the different roles will require different levels of training. The end user may require only security awareness training, including the activities that are acceptable, how to recognize that there may be a problem, and what the mechanism is for reporting the problem to the appropriate security personnel for resolution. The security administrator will need more in-depth training on the access control packages to manage the log-on IDs, accounts, and log file reviews. The systems/network administrator will need technical security training for
the specific operating system (Windows, UNIX, Linux, etc.) to competently set the security controls. ESTABLISHING UNAMBIGUOUS ROLES. Establishing clear, unambiguous security roles has many benefits to the organization beyond providing information as to the responsibilities to be performed and who needs to perform them. The benefits may also include:
• Demonstrable executive management support for information security
• Increased employee efficiency by reducing confusion on who is expected to perform which tasks
• Team coordination to protect information as it moves from department to department
• Lower risk of damage to the company's reputation from security problems
• Capability to manage complex information systems and networks
• Personal accountability for information security
• Reduction of turf battles between departments
• Security balanced with business objectives
• Support of disciplinary actions for security violations up to and including termination
• Facilitation of increased communication for resolution of security incidents
• Demonstrable compliance with applicable laws and regulations
• Shielding of management from liability and negligence claims
• Road map for auditors to determine whether necessary work is performed effectively and efficiently
• Continuous improvement efforts (e.g., ISO 9000)
• Provision of a foundation for determining the security and awareness training required
Information security is a team effort requiring the skill sets and cooperation of many different individuals. Executive management may have overall responsibility, and the security officer/director/manager may be assigned the day-to-day task of ensuring the organization is complying with the defined security practices. However, every person in the organization has one or more roles to ensure appropriate protection of the information assets. Security Planning Strategic, tactical, and operational plans are interrelated, and each provides a different focus toward enhancing the security of the organization. Planning reduces the likelihood that the organization will be merely reactive to security needs. With appropriate planning, decisions on projects can be made with respect to whether they support the long- or
short-term goals and have the priority that warrants the allocation of more security resources. Strategic Planning. Strategic plans are aligned with the strategic business and information technology goals. These plans have a longer-term horizon (three to five years or more) to guide the long-term view of the security activities. The process of developing a strategic plan emphasizes thinking of the company environment and the technical environment a few years into the future. High-level goals are stated to provide the vision for projects to achieve the business objectives. These plans should be reviewed at least annually or whenever major changes to the business occur, such as a merger, acquisition, establishment of outsourcing relationships, major changes in the business climate, introduction of new competitors, and so forth. Technological changes will be frequent during a five-year period, so the plan should be adjusted. The high-level plan provides organizational guidance to ensure that lower-level decisions are consistent with executive management's intentions for the future of the company. For example, strategic goals may consist of:
• Establish security policies and procedures • Effectively deploy servers, workstations, and network devices to reduce downtime • Ensure that all users understand the security responsibilities and reward excellent performance • Establish a security organization to manage security entitywide • Ensure that risks are effectively understood and controlled Tactical Planning. Tactical plans provide the broad initiatives to support and achieve the goals specified in the strategic plan. These initiatives may include deployments such as establishing an electronic policy development and distribution process, implementing robust change control for the server environment, reducing the likelihood of vulnerabilities residing on the servers, implementing a “hot site” disaster recovery program, or implementing an identity management solution. These plans are more specific and may consist of multiple projects to complete the effort. Tactical plans are shorter in length, such as 6 to 18 months to achieve a specific security goal of the company. Operational and Project Planning. Specific plans with milestones, dates, and accountabilities provide the communication and direction to ensure that the individual projects are completed. For example, establishing a policy development and communication process may involve multiple projects with many tasks:
• Conduct security risk assessment • Develop security policies and approval processes 43
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® • Develop technical infrastructure to deploy policies and track compliance • Train end users on policies • Monitor compliance Depending upon the size and scope of the efforts, these initiatives may be steps of tasks as part of a single plan, or they may be multiple plans managed through several projects. The duration of these efforts is short term to provide discrete functionality at the completion of the effort. Traditional “waterfall” methods of implementing projects spend a large amount of time detailing the specific steps required to implement the complete project. Executives today are more focused on achieving some shortterm, or at least interim, results to demonstrate the value of the investment along the way. Such demonstration of value maintains organizational interest and visibility to the effort, increasing the chances of sustaining longerterm funding. The executive management may grow impatient without realizing these early benefits. Personnel Security Hiring qualified and trustworthy individuals depends upon implementing and adhering to personnel policies that screen those individuals whose past actions may indicate undesirable behavior, as well as to ensure that continued employment is supervised. Lower employee morale may result in reduced compliance with controls and lower levels of staff expertise over time. Termination policies and procedures are necessary to ensure that terminated employees no longer have access to the system, and therefore do not have the opportunity to damage the files or systems or disrupt company operations. Although most individuals are hardworking, competent individuals with no intentions of wrongdoing, there are a few individuals with less than desirable intentions. With the potential impact to the information and systems and the negative publicity that results today from these events, it is imperative to implement the appropriate personnel security controls. Hiring Practices. Various activities should be performed prior to the individual starting the position, such as developing job descriptions, contacting references, screening/investigating background, developing confidentiality agreements, and determining policies on vendor, contractor, consultant, and temporary staff access. Job Descriptions. Job descriptions should contain the responsibilities of the position and the education, experience, and expertise required to satisfactorily perform the job function. A well-written job description provides not only the basis for conversation with the applicant to determine if the skills are a good match, but also the barometer by which the ongoing 44
Information Security and Risk Management performance reviews can be measured. Individual job goals stated within the performance reviews should mirror the job description. Failure to align the correct job description with the individual will give a false sense of security, as the individual may be lacking the job requirements. To ensure that individuals possess the security skills on an ongoing basis, the job skills must be periodically reassessed. Requirements for annual training, especially for those individuals requiring specialized security training, will ensure that the skills remain relevant and current. The employee training and participation in professional activities should be monitored and encouraged. All job descriptions of the organization should have some reference to information security responsibilities, as these responsibilities are shared across the organization. Specific technology, platform requirements, and certifications required for security staff can be noted within the job posting. Employment Agreements. Employment agreements are usually signed by the employee before he or she starts the new job or during the first day, while visiting the human resources department. These agreements will vary from organization to organization as to the form and content, but their purpose is to protect the organization while the individual is employed, as well as after the employee has left employment by the organization. For example, nondisclosure agreements contain clauses to protect the company’s rights to retain trade secrets or intellectual property that the employee may have had access to well after the employee’s departure from the organization. Code of conduct, conflict of interest, gift-handling policies, and ethics agreements may be required to ensure that the employee handles the continued employment in a manner that will be in the best interests of the organization and reduce the liability of the organization to lawsuits for unethical behavior by its employees. Reference Checks. During the interviewing and hiring process, individuals attempt to determine the past work history of the applicant and their competencies, such as teamwork, leadership abilities, perseverance, ethics, customer service orientation, management skills, planning, and specific technical and analytical capabilities. Much of the information provided is obtained by observing the individual in the interview process or from the information he or she has provided through the targeted questions. It is not always possible to determine the true work orientation of the prospective employee without other collaborating information.
Personal reference checks involve contacting those individuals supplied by the prospective employee. Many employers are reluctant to provide personal references for fear of future litigation. After all, providing a reference is placing a stamp of approval on the performance or character of the employee, even though the person providing the reference really has no control over the future work performance of the employee. Many 45
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® individuals will provide references to place them in the best possible light and may place individuals such as presidents, vice presidents, doctors, lawyers, ministers, and so forth, on the list to create the appearance of greater integrity. The astute employer will ask questions that ascertain the capabilities of the employee, such as leadership ability, oral and written communication skills, decision-making skills, ability to work with others, respect from peers, how the individual acted under stress, and managerial ability (budgeting, attracting talent, delivering projects). Multiple reference checks provide multiple perspectives and provide for corroboration of the desired behaviors. Employers need to balance the response of references with the knowledge that the references were provided by the applicant and may be biased in their opinions. Failure of a prospective employee to provide references may be an indicator of a spotty work record or the possibility of prior personnel actions/sanctions against the individual. Background Investigations. Just as the personal reference checks provide the opportunity to obtain corroborating information on whether the applicant will potentially be a good addition to the company, background checks can uncover more information related to the ability of the organization to trust the individual. Organizations want to be sure of the individuals that they are hiring and minimize future lawsuits. Statistics have shown that resumes are filled with errors, accidental mistakes, or blatant lies to provide a perceived advantage to the applicant. Common falsifications include embellishment of skill levels, job responsibilities and accomplishments, certifications held, and the length of employment. The background checks can greatly assist the hiring manager in determining whether he or she has an accurate representation of the skills, experience, and work accomplishments of the individual. Commercial businesses typically do not have the time and money to conduct meaningful, thorough investigations on their own and hire outside firms that specialize in the various background checks. Background checks can uncover:
• Gaps in employment
• Misrepresentation of job titles
• Job duties
• Salary
• Reasons for leaving a job
• Validity and status of professional certification
• Education verification and degrees obtained
• Credit history
• Driving records
• Criminal history
• Personal references
• Social security number verification
BENEFITS OF BACKGROUND CHECKS. The benefits of background checks in protecting the company are self-evident; however, the following benefits also accrue to the employer:
• Risk mitigation
• Increased confidence that the most qualified candidate was hired versus the one who interviewed the best
• Lower hiring cost
• Reduced turnover
• Protection of assets
• Protection of the company's brand reputation
• Shielding of employees, customers, and the public from theft, violence, drugs, and harassment
• Insulation from negligent hiring and retention lawsuits
• Safer workplace by avoiding hiring employees with a history of violence
• Discouraging of applicants with something to hide
• Revealing of criminal activity (rarely put on job applications)
TIMING OF CHECKS. An effective background check program requires that all individuals involved in the hiring process support the program prior to the candidate being selected for hire. This requires that the human resources department, legal, hiring supervisors, and recruiters understand and execute the screening process. Once the individual is hired into the organization, it is much harder to obtain the information without having a specific cause for performing the investigation. Employees should also be periodically reinvestigated consistent with the sensitivity of their positions. TYPES OF BACKGROUND CHECKS. Many different types of background checks can be performed depending upon the position that the individual may be hired for. A best practice would be to perform background checks on all of the company's employees and to require external agencies through contract agreements to perform background checks on the contractors, vendors, and anyone coming in contact with the company assets. If this is cost-prohibitive, the organization must decide for which groups of employees it is most critical to conduct background checks. The types of checks range from minimal checks to full background investigations. The types of individuals upon which an organization may focus the checks or decide to provide more extensive checks include:
• Individuals involved in technology
• Individuals with access to confidential or sensitive information
• Employees with access to company proprietary or competitive data
• Positions working with accounts payable, receivables, or payroll
• Positions dealing directly with the public
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® • Employees working for healthcare industry-based organizations or organizations dealing with financial information • Positions involving driving a motor vehicle • Employees who will come in contact with children There is a broad range of possible background checks available. The following are the most common background checks performed. CREDIT HISTORY. Credit history is the primary vehicle used by financial institutions to ensure the repayment of consumer loans, credit cards, mortgages, and other types of financial obligations. Credit histories are used to screen for high default risks and to discourage default. Financial services firms use credit histories as primary leverage, providing a threat to place delinquent information on the individual’s credit reports should he or she fall behind in payments. In the past, managers would run a credit report only on those individuals that were directly handling money; however, this has changed due to the interconnection of computers and the potential access to high-risk applications. Basic credit reports verify the name, address, social security number, and prior addresses of the applicant. These can be used to provide more extensive criminal searches or uncover gaps in employment. Detailed credit histories provide the employer with liens, judgments, and payment obligations that may give an indication as to the individual’s ability to handle his or her financial obligations. However, these items must be evaluated in context, as the individual may have previously slipped into financial trouble and then reorganized his or her financial life, so this would not present a risk to the prospective employer. Sometimes credit reports have limited or no information, which may be representative of a prospect’s age (has not yet established a credit history), cash paid for purchases, assumption of a false identity, or a prospects’ residence (lives in a low-income area that relies on fringe lenders, which typically do not report to credit bureaus).
Employers need to ensure that they are using the information appropriately, according to their country’s laws. In the United States, the Fair Credit Reporting Act (FCRA) and laws under the Equal Employment Opportunity Commission (EEOC), and some state laws, will govern the actions by the organization. Legal counsel and human resources should be involved in the development of any policies and procedures related to the screening process. CRIMINAL HISTORY. Criminal records are more difficult to obtain than credit histories, as credit histories are exchanged through a system among banks, retail establishments, financial services firms, and credit-reporting bureaus. With more than 3000 legal jurisdictions in the United States, it is not feasible to search each jurisdiction. Starting with the county of residence and searching in other prior addresses will provide a reasonable 48
Information Security and Risk Management background check for the applicant. Most background checks examine felonies and overlook the misdemeanors (less serious crimes). Under the FCRA, employers can request full criminal records for the past seven years, unless the applicant earns more than $75,000 annually, in which case there are no time restrictions. Important information to be searched includes state and county criminal records, sex and violent offender records, and prison parole and release records. DRIVING RECORDS. Driving records should be checked for those employees that will be operating a motor vehicle on their job. These records can also reveal information about applicants that will not be driving vehicles as part of their employment, such as verification of the applicant’s name, address, and social security number, and will include information on traffic citations, accidents, driving-under-the-influence arrests, convictions, suspensions, revocations, and cancellations. These may be indicators of a possible alcohol or drug addiction or a lack of responsibility. DRUG AND SUBSTANCE TESTING. The use of illicit drugs is tested by most organizations, as drug use may result in lost productivity, absenteeism, accidents, employee turnover, violence in the workplace, and computer crimes. Individuals using drugs avoid applying or following through the process with companies that perform drug testing. There are many different screening tests available, such as screens for amphetamines, cocaine and PCP, opiates (codeine, morphine, etc.), marijuana (THC), phencyclidine, and alcohol. Independent labs are frequently employed by employers to ensure that proper testing is performed, as businesses are not in the drug testing business. Labs employ safeguards to reduce the likelihood of false-positives, or making a wrongful determination of drug use. In the United States, laws such as the Americans with Disabilities Act (ADA) may provide protections for individuals undergoing rehabilitation. PRIOR EMPLOYMENT. Verifying employment information such as dates employed, job title, job performance, reason for leaving, and if the individual is eligible for rehire can provide information as to the accuracy of the information provided by the applicant. This is not an easy process, as many companies have policies to not comment on employee performance and will only confirm dates of employment. EDUCATION, LICENSING, AND CERTIFICATION VERIFICATION. Diploma and degree credentials listed on the resume can be verified with the institution of higher learning. Degrees can be purchased through the Internet for a fee, without attendance in any classes, so care should be taken to ensure that the degree is from an accredited institution. Certifications in the technology field, such as the CISSP, Microsoft Certified Systems Engineer (MCSE), or industry- or vendor-specific certifications, can be verified by contacting 49
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® the issuing agency. State licensing agencies maintain records of stateissued licenses, complaints, and revocations of licenses. SOCIAL SECURITY NUMBER VERIFICATION AND VALIDATION. That a number is indeed a social security number can be verified through a mathematical calculation, along with the state and year that the number may have been issued. Verification that the number was issued by the Social Security Administration, was not misused, was issued to a person who is not deceased, or that the inquiry address is not associated with a mail receiving service, hotel or motel, state or federal prison, campground, or detention facility can be done through an inquiry to the Social Security Administration. SUSPECTED TERRORIST WATCH LIST. Various services search the federal and international databases of suspected terrorists. Although the construction of these databases and the methods for identifying the terrorists are relatively new and evolving, industries of higher risk, such as the defense, biotech, aviation, and pharmaceutical industries, or those that conduct business with companies associated with known terrorist activities, would benefit from checking these databases. Ongoing Supervision. Ongoing supervision and periodic performance reviews ensure that the individuals are evaluated on their current qualifications and attainment of security goals. Performance ratings for all employees should cover the compliance with security policies and procedures. Compensation and recognition of achievements should be appropriate to maintain high morale of the department. Monitoring the ongoing skill capabilities and training and experience requirements reduces the risk that inappropriate controls are being applied to information security. Employee Terminations. Employees join and leave organizations every day as a common occurrence in performing business. The reasons vary widely, due to retirement, reduction in force, layoffs, termination with or without cause, relocation to another city, career opportunities with other employers, or involuntary transfers. Terminations may be friendly or unfriendly and will need different levels of care. FRIENDLY TERMINATIONS. Regular termination is when there is little or no evidence or reason to believe that the termination is not agreeable to both the company and the employee. A standard set of procedures, typically maintained by the human resources department, governs the dismissal of the terminated employee to ensure that the company property is returned and all access is removed. These procedures may include exit interviews and return of keys, identification cards, badges, tokens, and cryptographic keys. Other property, such as laptops, cable locks, credit cards, and phone cards, are also collected. The user manager notifies the security department of the termination to ensure that access is removed to all platforms and facilities. Some facilities choose to immediately delete the accounts, 50
Information Security and Risk Management while others choose to disable the accounts for a short period, say 30 days, to account for changes or extensions in the final termination date. The termination process includes a conversation with the departing associate about his continued responsibility for confidentiality of information. UNFRIENDLY TERMINATIONS. Unfriendly terminations may occur when the individual is fired, involuntarily transferred, laid off, or when the organization has reason to believe that the individual has the means and intention to potentially cause harm to the system. Individuals with technical skills and higher levels of access, such as the systems administrators, computer programmers, database administrators, or any individual with elevated privileges, may present higher risk to the environment. These individuals could alter files, plant logic bombs to create system file damage at a future date, or remove sensitive information. Other disgruntled users could enter erroneous data into the system that may not be discovered for several months. In these situations, immediate termination of systems access is warranted at the time of termination or prior to notifying the employee of the termination.
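To make the difference between the two termination paths concrete, here is a minimal sketch of a deprovisioning routine. It assumes a hypothetical identity_store interface (disable, revoke_all_sessions, delete, schedule_deletion) and uses the 30-day disable window mentioned above; none of the names come from a specific product, and an organization's own procedures govern the actual steps.

```python
from datetime import date, timedelta

DISABLE_WINDOW_DAYS = 30   # grace period some organizations use before final deletion

def terminate_access(identity_store, user_id, unfriendly=False, today=None):
    """Disable or remove a departing employee's access; interface is hypothetical."""
    today = today or date.today()
    if unfriendly:
        # Revoke everything at, or before, notification to limit the opportunity for damage.
        identity_store.disable(user_id)
        identity_store.revoke_all_sessions(user_id)
        identity_store.delete(user_id)
        return "access removed immediately"
    # Friendly termination: disable now, delete after the grace period.
    deletion_date = today + timedelta(days=DISABLE_WINDOW_DAYS)
    identity_store.disable(user_id)
    identity_store.schedule_deletion(user_id, deletion_date)
    return f"account disabled; deletion scheduled for {deletion_date.isoformat()}"
```

The disable-then-delete choice trades immediate cleanup against the chance that a termination date changes or that files owned by the departing account must still be recovered.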
Managing the people aspect of security, from preemployment to postemployment, is critical to ensure that trustworthy, competent resources are employed to further the business objectives that will protect the company information. Each of these actions contributes to preventive, detective, or corrective personnel controls. Security Awareness, Training, and Education Security awareness can be defined as the understanding of the importance of security within an organization. Given today’s complex business environments, most organizations perceive value in promoting an awareness of security within their environments. There are many methods by which an organization can educate its members regarding security. These methods, as well as observations about their implementation, follow. Why Conduct Formal Security Awareness Training? Security awareness training is a method by which organizations can inform employees about their roles, and expectations surrounding their roles, in the observance of information security requirements. Additionally, training provides guidance surrounding the performance of particular security or risk management functions, as well as providing information surrounding the security and risk management functions in general. Finally, educated users aid the organization in the fulfillment of its security program objectives, which may also include audit objectives for organizations that are bound by regulatory compliance (such as HIPAA, the Sarbanes–Oxley Act, the Gramm–Leach–Bliley Act, or any other type of regulation). 51
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® Training Topics. Security is a broad discipline, and as such, there are many topics that could be covered by security awareness training. Topics that can be investigated within the security awareness curriculum include:
• Corporate security policies
• The organization's security program
• Regulatory compliance requirements for the organization
• Social engineering
• Business continuity
• Disaster recovery
• Emergency management, to include hazardous materials, biohazards, and so on
• Security incident response
• Data classification
• Information labeling and handling
• Personnel security, safety, and soundness
• Physical security
• Appropriate computing resource use
• Proper care and handling of security credentials, such as passwords
• Risk assessment
• Accidents, errors, or omissions
A well-rounded security curriculum will include specialty classes and awareness aids for individuals performing specialized roles within the organization, such as those in IT, accounting, and others. The organization must also keep in mind that special attention should be paid to aligning training with security risk management activities. In doing so, the training may result in partial or complete offset of the risk within the organization. What Might a Course in Security Awareness Look Like? Let us create an outline for a security awareness course surrounding a corporate security policy. Assuming that this is the first formal course the organization has conducted, it is likely that personnel have not been formally introduced to the policy. This introduction would be an appropriate place to begin. You might expect a curriculum to proceed as follows. What Is a Corporate Security Policy? This item allows the organization to explain, in detail, a security measure it is undertaking to protect its environment. Why Is Having a Corporate Security Policy Important? This item provides the opportunity to share with employees that it is everyone’s responsibility to protect the organization, its people, and its assets. This is also an appropriate place for senior management to voice their support of the corporate security policy, and the security management effort in general.
Information Security and Risk Management How Does This Policy Fit into My Role at the Organization? Many employees are concerned about the effect that security may have on them. Some fear that they will not be able to accomplish tasks on time; others fear that their role may change. This is the right time to indicate to employees that although security considerations may add a bit to job performance, it is more than likely that they are already performing many of the security responsibilities set forth in the security policy. The policy adds formalization to the ad hoc security functions in practice; that is, these ad hoc practices are now documented and may be enhanced as well. What about People Who Say They Do Not Have Any Security Functions Present in Their Current Role? It is important to point out that these functions may be
present in an ad hoc fashion, but that any process performed over time becomes at least partly automatic. This leads to decreased time to performance, in reality, over time. The instructor may ask the student whether there was a time in recent memory when he or she was asked to perform a new function as part of his or her job. The instructor can then point out that this is a similar situation. Do I Have to Comply? It is crucial for an organization to agree that all employees, including senior management, must comply with corporate security policies. If there are exceptions to the rule, then the policy may become unenforceable. This puts the organization in the position of having wasted dollars, time, and resources in the crafting of a policy with no “teeth.” What Are the Penalties for Noncompliance? It is equally critical that an organization spell out in common and easily understood terms what the penalty is for noncompliance with a corporate security policy. Most policies indicate in the body of the policy that all personnel, contractors, and business associates are expected to adhere to the policies. Typically, failure to do so results in disciplinary action, up to and including termination or prosecution.
At this point, there are likely to be questions about what may happen in the event of an accidental violation. It is important to reiterate to the students that security violations (or incidents) should be reported immediately, so that the impact to the organization can be minimized. What Is the Effect of This Corporate Policy on My Work? (Will It Make Things Harder?). This item was discussed in detail above; the instructor may tie this
back to impact on the individual’s role. What Type of Things Should I Be Looking For? At this point, the employee’s questions have been answered, relative to their responsibility to comply with the corporate security policy. This would be an appropriate time to discuss the policy’s contents with the students. This can be done as a lecture, by example, or in a “spot the security problem” format.
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® When teaching a course of this type, the instructor should be sure to address topics that apply to all staff, including senior management, line management, business unit users, temporary or seasonal staff, contractors, business associates, and so on. Awareness Activities and Methods There are a variety of methods that can be used to promote security awareness. Some of the more common methods include: • Formalized courses, as mentioned above, delivered either in a classroom fashion using slides, handouts, or books, or online through training Web sites suited to this purpose. • Use of posters that call attention to aspects of security awareness, such as password protection, physical security, personnel security, and others. • Business unit walk-throughs, to aid workers in identification of practices that should be avoided (such as posting passwords on Post-It notes in a conspicuous place on the desktop) and practices that should be continued (such as maintaining a clean desk or using a locked screen saver when away from the computer). • Use of the organization’s intranet to post security reminders or to host a weekly or monthly column about information security happenings within the organization. • Appointment of a business unit security awareness mentor to aid with questions, concerns, or comments surrounding the implementation of security within the environment; these individuals would interact together and with the organization’s security officer. These mentors could also interact with the organization’s internal audit, legal, information technology, and corporate business units on a periodic (monthly or quarterly) basis. • Sponsor an organizationwide security awareness day, complete with security activities, prizes, and recognition of the winners. • Sponsor an event with an external partner, such as Information Systems Security Association (ISSA), Information Systems Audit and Control Association (ISACA), SysAdmin, Audit, Network, Security (SANS) Institute, International Information Systems Security Certification Consortium ((ISC)2), or others; allow time for staff members to fully participate in the event. • Provide trinkets for the users within the organization that support security management principles. • Provide security management videos, books, Web sites, and collateral for employees to use for reference. It is important to note that activities should be interesting and rewarding for the organization’s people. To facilitate this interest, the program 54
Information Security and Risk Management should be adaptable, and the content and format of the awareness materials should be subject to change on a periodic basis. Job Training Unlike general security awareness training, security training assists personnel with the development of their skills sets relative to performance of security functions within their roles. A typical security curriculum in a mature organization will include specialty classes for individuals performing specialized roles within the organization, such as those in IT, accounting, and others. Even within these business units, specialized training will occur. For example, in the IT area, it would be advisable for network staff responsible for maintenance and monitoring of the firewalls, intrusion detection/prevention systems, and syslog servers to be sufficiently trained to perform these duties. Say senior management determined there were no funds available for training. What would be the result? Typically, motivated staff receive some on-the-job learning; however, it may not be sufficient to perform the job duties adequately. As a result, the organization is breached and sensitive information is stolen. Who would be at fault in this case? Senior management is always ultimately responsible in the organization for information security objectives. Senior management failed, in this case, to adequately protect the environment by refusing to properly train staff in their respective security duties. Any legal ramifications would fall squarely upon management’s shoulders. Let us examine the previous situation in another way. Assume that the personnel in question indicated to management that although no paid training was available, they felt comfortable that they could perform the security functions for which they were responsible. To demonstrate, they performed the requisite functions for IT management to demonstrate capability. All is well until the organization is breached some months later and confidential information stolen. Senior management returns to information systems management and asks the director to investigate. During her investigation, she discovers that patching has not occurred for the past three months. When staff were asked about the incident, no satisfactory answer could be given. Who would be responsible for the breach in that event? Again, senior management is always ultimately responsible for information security within the organization; however, senior management held the network team accountable for failing to maintain patching levels and promptly fired them from their positions. Ensuring that a resource is properly trained can assist an organization in assigning accountability for the satisfactory completion of security tasks for which they are responsible. The organization must also keep in mind that training should be closely aligned with security risk management activities. In doing so, the training may result in partial or complete offset of the risk within the organization. 55
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® Professional Education Security education (specifically, in this case, information security) revolves around the education of a potential or experienced security professional along career development lines and, as the SSCP administrative domain course points out, “provides decision-making and security management skills that are important for the success of an organization’s security program.” Security certifications, versus vendor certifications, may fit into this category. Certifications such as the Systems Security Certified Practitioner (SSCP), CISSP, Certified Information Systems Auditor (CISA), Certified Information Security Manager (CISM), Global Information Assurance Certification (GIAC), and others are related to the discipline of security for the practitioner. The benefits of this training have already been presented. Costs of the training, relative to the benefits received by the personnel and organization, must be evaluated pretraining. Equally important are curricula that have been introduced into the universities through the federal government and other benefactors, implemented as bachelor’s, master’s, and Ph.D. programs. Many of these programs present both theory and hands-on course work to the student. Topics covered in these programs may include policy and procedures design and development, security assessment techniques, technical and application security assessment techniques, social engineering, malicious software identification and eradication, incident response, disaster recovery, security program development, and others. The benefit derived from this education is self-evident: a practitioner versus a technician is created. It is important to note, however, that education of this type is typically two to six years in duration and takes significant time for resources to successfully complete. An alternative may be to train professionals on a university course-by-course basis in information security. This may be a practical alternative, given the need within the organization. Performance Metrics It is important for the organization to track performance relative to security, for the purposes of both enforcement and enhancement of security initiatives under way. It is also important for the organization to ensure that users acknowledge their security responsibilities by signing off after each class that they have heard and understand the material and will agree to be bound by the organization’s security program, policies, procedures, plans, and initiatives. Measurement can include periodic walk-throughs of business unit organizations, periodic quizzes to keep staff up to date, and so on. Risk Management Risk, as defined in the American Heritage Dictionary, is “the possibility of loss.” Random House Dictionary defines risk management as “the technique 56
Information Security and Risk Management or profession of assessing, minimizing, and preventing accidental loss to a business, as through the use of insurance, safety measures, etc.” (ISC)2 defines risk management as “a discipline for living with the possibility that future events may cause harm.” Further, (ISC)2 states that “risk management reduces risks by defining and controlling threats and vulnerabilities.” We will discuss both the topics of risk assessment and the principles behind the management of risk. To achieve this, it is important to understand the principles and concepts underlying the risk management process. These principles and concepts will be discussed in this chapter. Risk Management Concepts An organization will conduct a risk assessment (the term risk analysis is sometimes interchanged with risk assessment) to evaluate: • Threats to its assets • Vulnerabilities present in the environment • The likelihood that a threat will “make good” by taking advantage of an exposure (or probability and frequency when dealing with quantitative assessment) • The impact that the exposure being realized has on the organization • Countermeasures available that can reduce the threat’s ability to exploit the exposure or that can lessen the impact to the organization when a threat is able to exploit a vulnerability • The residual risk (that is, the amount of risk that is left over when appropriate controls are properly applied to lessen or remove the vulnerability) An organization may also wish to document evidence of the countermeasure in a deliverable called an exhibit. An exhibit can be used to provide an audit trail for the organization and, likewise, evidence for any internal or external auditors that may have questions about the organization’s current state of risk. Why undertake such an endeavor? Without knowing what assets are critical and which are most at risk within an organization, it is not possible to protect those assets appropriately. For example, if an organization is bound by HIPAA regulations, but does not know how electronic personally identifiable information may be at risk, the organization may make significant mistakes in securing that information, such as neglecting to protect against certain risks or applying too much protection against low-level risks. Risk assessment also takes into account special circumstances under which assets may require additional protection, such as with regulatory compliance. Many times, these regulatory requirements are the means to completion of an appropriate risk assessment for the organization, as meeting compliance objectives requires the risk assessment to be done. 57
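As a concrete illustration of how the elements a risk assessment evaluates (threat, vulnerability, likelihood, impact, countermeasures, residual risk, and the supporting exhibit) might be recorded, the following sketch defines a simple risk register entry. The field names, 1-to-5 ordinal scales, and sample values are assumptions made for illustration only; they are not part of the CBK or of any standard.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class RiskEntry:
    asset: str
    threat: str
    vulnerability: str
    likelihood: int                 # assumed ordinal scale, 1 (rare) to 5 (almost certain)
    impact: int                     # assumed ordinal scale, 1 (negligible) to 5 (severe)
    countermeasures: List[str] = field(default_factory=list)
    residual_likelihood: int = 0    # re-rated after countermeasures are applied
    residual_impact: int = 0
    exhibit: str = ""               # pointer to evidence of the countermeasure, for auditors

    @property
    def inherent_risk(self) -> int:
        return self.likelihood * self.impact

    @property
    def residual_risk(self) -> int:
        return self.residual_likelihood * self.residual_impact

entry = RiskEntry(
    asset="patient billing database",
    threat="external attacker exploiting the network",
    vulnerability="unpatched database listener",
    likelihood=4, impact=5,
    countermeasures=["patch management", "network segmentation"],
    residual_likelihood=2, residual_impact=5,
    exhibit="patch compliance report, third quarter",
)
print(entry.inherent_risk, entry.residual_risk)   # 20 10
```

Rating likelihood and impact on ordinal scales anticipates the qualitative analysis described later in this section, where the product of likelihood and impact produces the level of risk.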
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® Because no organization has limitless dollars, resources, and time, it is sometimes difficult to persuade senior executives to undertake risk assessment, even in the face of regulatory requirements. How, then, might they be persuaded? One of the principle outcomes of risk assessment is the definition and identification of threats, vulnerabilities, and countermeasures present (or desired) within the organization. It would then be useful to “reuse” the data gathered during the risk assessment for other security initiatives, such as business continuity, security incident response, disaster recovery, and others. This reuse saves the organization dollars, time, and resources and can be demonstrated to senior management. Unlike a risk assessment, vulnerability assessments tend to focus on technology aspects of an organization, such as the network or applications. Data gathering for vulnerability assessments typically includes the use of software tools, which provide volumes of raw data for the organization and the assessor. This raw data includes information on the type of vulnerability, its location, its severity (typically based on an ordinal scale of high, medium, and low), and sometimes a discussion of the findings. Assessors who conduct vulnerability assessments must be expert in properly reading, understanding, digesting, and presenting the information obtained from a vulnerability assessment to a multidisciplinary, sometimes nontechnical audience. Why? Data that is obtained from the scanning may not truly be a vulnerability. False-positives are findings that are reported when no vulnerability truly exists in the organization (that is, something that is occurring in the environment has been flagged as an exposure when it really is not); likewise, false-negatives are vulnerabilities that should have been reported and are not. This sometimes occurs when tools are inadequately “tuned” to the task, or the vulnerability in question exists outside the scope of the assessment. Some findings are correct and appropriate, but require significant interpretation for the organization to make sense of what has been discovered and how to proceed in remediation (that is, fixing the problem). This task is typically suited for an experienced assessor or a team whose members have real-world experience with the tool in question. Qualitative Risk Assessments. Organizations have the option of performing a risk assessment in one of two ways: qualitatively or quantitatively. Qualitative risk assessments produce valid results that are descriptive versus measurable. A qualitative risk assessment is typically conducted when:
• The risk assessors available for the organization have limited expertise in quantitative risk assessment; that is, assessors typically do not require as much experience in risk assessment when conducting a qualitative assessment.
Information Security and Risk Management • The timeframe to complete the risk assessment is short. • The organization does not have a significant amount of data readily available that can assist with the risk assessment and, as a result, descriptions, estimates, and ordinal scales (such as high, medium, and low) must be used to express risk. The following methods are typically used during a qualitative risk assessment: • Management approval to conduct the assessment must be obtained prior to assigning a team and conducting the work. Management is kept apprised during the process to continue to promote support for the effort. • Once management approval has been obtained, a risk assessment team can be formed. Members may include staff from senior management, information security, legal or compliance, internal audit, HR, facilities/safety coordination, IT, and business unit owners, as appropriate. • The assessment team requests documentation, which may include, dependent upon scope: – Information security program strategy and documentation – Information security policies, procedures, guidelines, and baselines – Information security assessments and audits – Technical documentation, to include network diagrams, network device configurations and rule sets, hardening procedures, patching and configuration management plans and procedures, test plans, vulnerability assessment findings, change control and compliance information, and other documentation as needed – Applications documentation, to include software development life cycle, change control and compliance information, secure coding standards, code promotion procedures, test plans, and other documentation as needed – Business continuity and disaster recovery plans and corresponding documents, such as business impact analysis surveys – Security incident response plan and corresponding documentation – Data classification schemes and information handling and disposal policies and procedures – Business unit procedures, as appropriate – Executive mandates, as appropriate – Other documentation, as needed • The team sets up interviews with organizational members, for the purposes of identifying vulnerabilities, threats, and countermeasures within the environment. All levels of staff should be represented, to include: – Senior management 59
– Line management
– Business unit owners
– Temporary or casual staff (that is, interns)
– Business partners, as appropriate
– Remote workers, as appropriate
– Any other staff deemed appropriate to task
It is important to note that staff across all business units within scope for the risk assessment should be interviewed. It is not necessary to interview every staff person within a unit; a representative sample is usually sufficient. Once interviews are completed, the analysis of the data gathered can be completed. This can include matching the threat to a vulnerability, matching threats to assets, determining how likely the threat is to exploit the vulnerability, and determining the impact to the organization in the event an exploit is successful. Analysis also includes a matching of current and planned countermeasures (that is, protection) to the threat–vulnerability pair. When the matching is completed, risk can be calculated. In a qualitative analysis, the product of likelihood and impact produces the level of risk. The higher the risk level, the more immediate is the need for the organization to address the issue, to protect the organization from harm. Once risk has been determined, additional countermeasures can be recommended to minimize, transfer, or avoid the risk. When this is completed, the risk that is left over — after countermeasures have been applied to protect against the risk — is also calculated. This is the residual risk, or risk left over after countermeasure application. Quantitative Risk Assessments. As an organization becomes more sophisticated in its data collection and retention, and staff become more experienced in conducting risk assessments, an organization may find itself moving more toward quantitative risk assessment. The hallmark of a quantitative assessment is the numeric nature of the analysis. Frequency, probability, impact, countermeasure effectiveness, and other aspects of the risk assessment have a discrete mathematical value in a pure quantitative analysis.
Often, the risk assessment an organization conducts is a combination of qualitative and quantitative methods. A fully quantitative risk assessment may not be possible, because there is always some subjective input present, such as the value of information. It is easy to see both the benefits and the pitfalls of performing a purely quantitative analysis. Quantitative analysis allows the assessor to determine whether the cost of the risk outweighs the cost of the countermeasure in mathematical rather than descriptive terms. Purely quantitative
analysis, however, requires an enormous amount of time and must be performed by assessors with a significant amount of experience. Additionally, subjectivity is introduced because the metrics may also need to be applied to qualitative measures. If the organization has the time and manpower to complete a lengthy and complex accounting evaluation, this data may be used to assist with a quantitative analysis; however, most organizations are not in a position to authorize this work. Three steps are undertaken in a quantitative risk assessment: initial management approval, construction of a risk assessment team, and the review of information currently available within the organization. Single loss expectancy (SLE) must be calculated to provide an estimate of loss. Single loss expectancy is defined as the difference between the original value and the remaining value of an asset after a single exploit. The formula for calculating SLE is as follows:

SLE = asset value (in $) × exposure factor (loss in successful threat exploit, as %)

Losses can include lack of availability of data assets due to data loss, theft, alteration, or denial of service (perhaps due to business continuity or security issues). Next, the organization would calculate the annualized rate of occurrence (ARO). This is done to provide an accurate calculation of annualized loss expectancy (ALE). ARO is an estimate of how often a threat will be successful in exploiting a vulnerability over the period of a year. When this is completed, the organization calculates the annualized loss expectancy (ALE). The ALE is the product of the yearly estimate for the exploit (ARO) and the loss in value of an asset after a single exploitation (SLE). The calculation follows:

ALE = ARO × SLE

Note that this calculation can be adjusted for geographical distances using the local annual frequency estimate (LAFE) or the standard annual frequency estimate (SAFE). Given that there is now a value for SLE, it is possible to determine what the organization should spend, if anything, to apply a countermeasure for the risk in question. Remember that no countermeasure should be greater in cost than the risk it mitigates, transfers, or avoids. Countermeasure cost per year is straightforward to calculate: it is simply the cost of the countermeasure divided by the years of its life (that is, use within the organization). Finally, the organization is able to compare the cost of the risk versus the cost of the countermeasure and make some objective decisions regarding its countermeasure selection.
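To make the arithmetic concrete, the following minimal Python sketch works through the SLE, ARO, ALE, and countermeasure cost-per-year calculations just described. The asset value, exposure factor, occurrence rate, and countermeasure figures are invented for illustration only; an organization would substitute the values produced by its own assessment.

```python
# Hypothetical figures for illustration only -- not drawn from any real assessment.
asset_value = 250_000.00   # value of the asset in dollars
exposure_factor = 0.40     # fraction of value lost in a single successful exploit
aro = 0.5                  # annualized rate of occurrence (about once every two years)

# Single loss expectancy: loss from one successful exploit
sle = asset_value * exposure_factor          # $100,000

# Annualized loss expectancy: expected yearly loss from this threat
ale = aro * sle                              # $50,000 per year

# Countermeasure cost per year: purchase cost spread over its useful life
countermeasure_cost = 120_000.00
useful_life_years = 4
annual_countermeasure_cost = countermeasure_cost / useful_life_years   # $30,000 per year

# A countermeasure should not cost more per year than the loss it addresses.
if annual_countermeasure_cost < ale:
    print(f"Countermeasure justified: ${annual_countermeasure_cost:,.0f}/yr vs. ALE ${ale:,.0f}/yr")
else:
    print("Countermeasure costs more per year than the expected loss it addresses")
```

In this sketch the countermeasure costs $30,000 per year against an expected annual loss of $50,000, so on a purely quantitative basis it would be justified; a real decision would also weigh the qualitative factors discussed earlier.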
Selecting Tools and Techniques for Risk Assessment. It is expected that an organization will select the risk assessment methodology, tools, and resources (including people) that best fit its culture, personnel capabilities, budget, and timeline. Many automated tools, including proprietary tools, exist in the field. Although automation can make the data analysis, dissemination, and storage of results easier, it is not a required part of risk assessment. If an organization is planning to purchase or build automated tools for this purpose, it is highly recommended that this decision be based on an appropriate timeline and resource skill sets for creation, implementation, maintenance, and monitoring of the tool(s) and data stored within, long term. Risk Assessment Methodologies. NIST SP 800-30 and 800-66. These methodologies are qualitative methods established for use by the general public, but are particularly used by regulated industries, such as healthcare. The procedures for using the methodologies in the field are essentially the same; SP 800-66 is written specifically with HIPAA clients in mind (though it is possible to use this document for other regulated industries as well). The methodology process follows:
• System characterization
• Vulnerability identification
• Threat identification
• Countermeasure identification
• Likelihood determination
• Impact determination
• Risk determination
• Additional countermeasure recommendations
• Document results
OCTAVE. As defined by its creator, Carnegie Mellon University’s Software Engineering Institute, OCTAVE “is a self-directed information security risk evaluation.” OCTAVE is defined as a situation where people from an organization manage and direct an information security risk evaluation for their organization. The organization’s people direct risk evaluation activities and are responsible for making decisions about the organization’s efforts to improve information security. In OCTAVE, an interdisciplinary team, called the analysis team, leads the evaluation.
The OCTAVE criteria are a set of principles, attributes, and outputs. Principles are the fundamental concepts driving the nature of the evaluation. They define the philosophy that shapes the evaluation process. For example, self-direction is one of the principles of OCTAVE. The concept of self-direction means that people inside the organization are in the best position to lead the evaluation and make decisions.
Information Security and Risk Management The requirements of the evaluation are embodied in the attributes and outputs. Attributes are the distinctive qualities, or characteristics, of the evaluation. They are the requirements that define the basic elements of the OCTAVE approach and what is necessary to make the evaluation a success from both the process and organizational perspectives. Attributes are derived from the OCTAVE principles. For example, one of the attributes of OCTAVE is that an interdisciplinary team (the analysis team) staffed by personnel from the organization lead the evaluation. The principle behind the creation of an analysis team is self-direction. Finally, outputs are the required results of each phase of the evaluation. They define the outcomes that an analysis team must achieve during each phase. We recognize that there is more than one set of activities that can produce the outputs of OCTAVE. It is for this reason that we do not specify one set of required activities. FRAP. In Information Security Risk Analysis, Second Edition (Auerbach Publications, 2005), Tom Peltier writes, “the Facilitated Risk Analysis Process (FRAP) examines the qualitative risk assessment process and then provides tested variations on the methodology. The FRAP process can be used by … any organization that needs to determine what direction the organization must take on a specific issue.”
The process allows organizations to prescreen applications, systems, or other subjects to determine if a risk analysis is needed. By establishing a unique prescreening process, organizations will be able to concentrate on subjects that truly need a formal risk analysis. The process has no outlay of capital and can be conducted by anyone with good facilitation skills. CRAMM. As described on the CRAMM (CCTA Risk Analysis and Management Method) pages hosted on Siemens Insight Consulting's Web site, "CRAMM provides a staged and disciplined approach embracing both technical (e.g., IT hardware and software) and nontechnical (e.g., physical and human) aspects of security. To assess these components, CRAMM is divided into three stages: asset identification and valuation, threat and vulnerability assessment, and countermeasure selection and recommendation." The implementation of this methodology is much like that of the other methods listed above. More information on information valuation and on threat, vulnerability, and countermeasure identification and selection is presented later in this work. Spanning Tree Analysis. As stated in the (ISC)2 Information Systems Security Engineering Professional (ISSEP) course material, spanning tree analysis "creates a 'tree' of all possible threats to or faults of the system. 'Branches' are general categories such as network threats, physical
threats, component failures, etc." When conducting the risk assessment, organizations "prune 'branches' that do not apply." Failure Modes and Effect Analysis. As stated in the (ISC)2 ISSEP course material, failure modes and effect analysis was born in hardware analysis, but can be used for software and system analysis. It examines potential failures of each part or module, and examines the effects of failure at three levels:
• Immediate level (part or module)
• Intermediate level (process or package)
• Systemwide
The organization would then "collect total impact for failure of given modules to determine whether modules should be strengthened or further supported."
Risk Management Principles
Risk Avoidance. Risk avoidance is the practice of coming up with alternatives so that the risk in question is not realized. For example, have you ever heard a friend, or the parents of a friend, complain about the costs of insuring an underage driver? How about the risks that many of these children face as they become mobile? Some of these families will decide that the child in question will not be allowed to drive the family car, but will rather wait until he or she is of legal age (that is, 18 years of age) before committing to owning, insuring, and driving a motor vehicle.
In this case, the family has chosen to avoid the risks associated with an underage driver, such as poor driving performance or the cost of insurance for the child. Although this choice may be available for some situations, it is not available for all. Imagine, if you will, a global retailer who, knowing the risks associated with doing business on the Internet, decides to avoid the practice. This decision will likely cost the company a significant amount of its revenue (if, indeed, the company has products or services that consumers wish to purchase). In addition, the decision may require the company to build or lease a site in each of the locations, globally, for which it wishes to continue business. This could have a catastrophic effect on the company’s ability to continue business operations. Risk Transfer. Risk transfer is the practice of passing on the risk in question to another entity, such as an insurance company. Let us look at one of the examples that was presented above in a different way. The family is evaluating whether to permit an underage driver to use the family car. The family decides that it is important for the child to be mobile, so it transfers the risk of a child being in an accident to the insurance company, which provides the family with auto insurance. 64
Information Security and Risk Management It is important to note that the transfer of risk may be accompanied by a cost. This is certainly true for the example presented above and can be seen in other insurance instances, such as liability insurance for a vendor or the insurance taken out by companies to protect against hardware and software theft or destruction. Risk Mitigation. Risk mitigation is the practice of the elimination of or the significant decrease in the level of risk presented. Examples of risk mitigation can be seen in everyday life and are readily apparent in the information technology world. For example, to lessen the risk of exposing personal and financial information that is highly sensitive and confidential, organizations put countermeasures in place, such as firewalls, intrusion detection/prevention systems, and other mechanisms, to deter malicious outsiders from accessing this highly sensitive information. In the underage driver example, risk mitigation could take the form of driver education for the child. Risk Acceptance. In some cases, it may be prudent for an organization to simply accept the risk that is presented in certain scenarios. Risk acceptance is the practice of accepting certain risk(s), typically based on a business decision that may also weigh the cost versus the benefit of dealing with the risk in another way.
For example, an executive may be confronted with risks identified during the course of a risk assessment for her organization. These risks have been prioritized by high, medium, and low impact to the organization. The executive notes that to mitigate or transfer the low-level risks, significant costs could be involved. Mitigation might involve the hiring of additional highly skilled personnel and the purchase of new hardware, software, and office equipment, while transference of the risk to an insurance company would require premium payments. She further notes that an insignificant impact to her organization would occur if any of the reported low-level threats were realized. Therefore, she (rightly) concludes that it is wiser for the organization to forego the costs and accept the risk. In the child driver example, risk acceptance could be based on the observation that the child has demonstrated the responsibility and maturity to warrant the parent’s trust in his or her judgment. The decision to accept risk should not be taken lightly, nor without appropriate information to justify the decision. The cost versus benefit, the organization’s willingness to monitor the risk long term, and the impact it has on the outside world’s view of the organization must all be taken into account when deciding to accept risk. It is important to note that there are organizations who may also track containment of risk. Containment lessens the impact to an organization if or when an exposure is exploited through distribution of critical assets (that is, people, processes, data, technologies, and facilities). 65
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® Who Owns the Risk? This is a very serious question, with an intriguing answer: it depends. Ultimately, the organization (that is, senior management) owns the risks that are present during operation of the company; however, senior management may rely on business unit (or data) owners or custodians to assist in identification of risks so that they can be mitigated, transferred, or avoided. The organization also likely expects that the owners and custodians will minimize or mitigate risk as they work, based upon policies, procedures, and regulations present in the environment. If expectations are not met, consequences such as disciplinary action, termination, or prosecution will usually result.
Here is an example: A claims processor is working with a medical healthcare claim submitted to his organization for completion. The claim contains electronic personally identifiable healthcare information for a person the claims processor knows. Although he has acknowledged his responsibilities for the protection of the data, he calls his mother, who is a good friend of the individual who filed the claim. His mother in turn calls multiple people, who in turn contact the person who filed the claim. The claimant contacts an attorney, and the employee and company are sued for the intentional breach of information. Several things are immediately apparent from this example. The employee is held immediately accountable for his action in intentionally exploiting a vulnerability (that is, sensitive information was inappropriately released, according to federal law — Health Insurance Portability and Accountability Act of 1996 (HIPAA)). While he was custodian of the data (and a co-owner of the risk), the court also determined that the company was co-owner of the risk, and hence bore the responsibility for compensating the victim (in this example, the claimant). Risk Assessment Identify Vulnerabilities. The National Institute of Standards and Technology (NIST), in Special Publication 800-30 Rev. A, defines a vulnerability as “a flaw or weakness in system security procedures, design, implementation, or internal controls that could be exercised (accidentally triggered or intentionally exploited) and result in a security breach or a violation of the system’s security policy.” Random House Dictionary defines being vulnerable as “capable of or susceptible to being wounded or hurt, as by a weapon” or “(of a place) open to assault; difficult to defend.”
Note that the latter definition is not strictly related to information systems, but rather can be attributed to physical, human, or other factors. In the field, it is common to identify vulnerabilities as they are related to people, processes, data, technology, and facilities. Examples of vulnerabilities could include the absence of a receptionist, mantrap, or other physical security mechanism upon entrance to a facility; inadequate integrity 66
Information Security and Risk Management checking in financial transaction software; neglecting to require users to sign an acknowledgment of their responsibilities with regard to security, as well as an acknowledgment that they have read, understand, and agree to abide by the organization’s security policies; or patching and configuration of the organization’s information systems are done on an ad hoc basis, and therefore are neither documented nor up to date. Identify Threats. The National Institute of Standards and Technology (NIST), in Special Publication 800-30 Rev. A, defines a threat as “the potential for a particular threat-source to successfully exercise a particular vulnerability.” In the OCTAVE framework, threats are identified as the source from which assets in the organization are secured (or protected).
NIST defines a threat source as "either (1) intent and method targeted at the intentional exploitation of a vulnerability or (2) a situation and method that may accidentally trigger a vulnerability." Threat sources have been identified by a number of groups, but can be grouped into a few categories. Each category can be expanded with specific threats, as follows:
• Human: Malicious outsider, malicious insider, (bio)terrorist, saboteur, spy, political or competitive operative, loss of key personnel, errors made by human intervention, cultural issues
• Natural: Fire, flood, tornado, hurricane, snow storm, earthquake
• Technical: Hardware failure, software failure, malicious code, unauthorized use, use of emerging services such as wireless, new technologies
• Physical: Closed-circuit TV failure, perimeter defense failure
• Environmental: Hazardous waste, biological agent, utility failure
• Operational: A process (manual or automated) that affects confidentiality, integrity, or availability
Many specific threats exist within each category; the organization will identify those sources as the assessment progresses, utilizing information available from groups such as (ISC)2 and SANS, and from government agencies such as the National Institute of Standards and Technology (NIST), the Federal Financial Institutions Examination Council (FFIEC), the Department of Health and Human Services (HHS), and others. Determination of Likelihood. It is important to note that likelihood is a component of a qualitative risk assessment. For more information on the components of a quantitative risk assessment, see the "Quantitative Risk Assessments" section earlier in this chapter.
Likelihood, along with impact, determines risk. Likelihood, as stated by NIST and others, can be measured by the capabilities of the threat and the presence or absence of countermeasures. Initially, organizations that do not have trending data available may use an ordinal scale, labeled high, medium, and low, to score likelihood rankings. Another method is presented in Figure 1.4.

Figure 1.4. Rating likelihood and consequences.

Once a selection on the ordinal scale has been made, the selection can be mapped to a numeric value for computation of risk. For example, the selection of high can be mapped to the value of 1. Medium can likewise be mapped to 0.5, and low can be mapped to 0.1. As the scale expands, the numeric assignments will become more targeted. Determination of Impact. Impact can be ranked much the same way as likelihood. The main difference is that the impact scale is expanded and depends upon definitions, rather than ordinal selections. Definitions of impact to an organization often include loss of life, loss of dollars, loss of prestige, loss of market share, and other facets. It is highly recommended that the organization take sufficient time to define and assign impact definitions for high, medium, low, or any other scale terms that are chosen.
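As a minimal illustration of this qualitative scoring, the short Python sketch below maps ordinal likelihood selections to the numeric values mentioned above (high = 1, medium = 0.5, low = 0.1), pairs them with impact scores on an illustrative 0 to 100 scale (like the impact example described in the paragraphs that follow), and computes risk as the product of likelihood and impact. The threat–vulnerability pairs and their scores are hypothetical; in practice they would come from the organization's own interviews and analysis.

```python
# Hypothetical qualitative scoring -- the ordinal mappings and findings are illustrative only.
LIKELIHOOD_SCALE = {"high": 1.0, "medium": 0.5, "low": 0.1}   # mapping described above
IMPACT_SCALE = {"high": 100, "medium": 50, "low": 10}          # top of each illustrative impact band

# Example threat-vulnerability pairs as they might come out of the interviews and analysis.
findings = [
    {"pair": "malicious outsider / unpatched Web server", "likelihood": "high", "impact": "high"},
    {"pair": "flood / data center on ground floor", "likelihood": "low", "impact": "high"},
    {"pair": "staff error / missing input validation", "likelihood": "medium", "impact": "medium"},
]

# Risk is the product of likelihood and impact; sort so the highest risks surface first.
for f in findings:
    f["risk"] = LIKELIHOOD_SCALE[f["likelihood"]] * IMPACT_SCALE[f["impact"]]

for f in sorted(findings, key=lambda f: f["risk"], reverse=True):
    print(f'{f["pair"]}: risk = {f["risk"]:.0f}')
```

Pairs with the highest products (such as the high-likelihood, high-impact pair scoring 100) would be addressed first, consistent with the risk determination discussion below.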
Once the terms are defined, impact can be calculated. If an exploit results in loss of life (such as a bombing or bioterrorist attack), the ranking will always be high. In general, groups such as the National Security Agency view loss of life as the highest-priority risk in any organization. As such, it may be assigned the top value in the impact scale. An example: 51 to 100 = high; 11 to 50 = medium; 0 to 10 = low. Determination of Risk. Risk is determined by the product of likelihood and impact. For example, if an exploit has a likelihood of 1 (high) and an impact of 100 (high), the risk would be 100. Using the same mapping scale as for impact, this would be the highest exploit ranking available. A ranking like this might translate to a tragedy, such as the London bombings. These scenarios (high likelihood, high impact) merit immediate attention from the organization. As the risk calculations are completed, they can be prioritized for attention, as required. Note that not all risks will receive the same level of attention, based on the organization's risk tolerance and its strategy for mitigation, transfer, or avoidance of risk. Figure 1.5 shows another view of risk.

Likelihood \ Consequence   Insignificant (1)   Minor (2)   Moderate (3)   Major (4)   Catastrophic (5)
A (almost certain)                H                H            E             E              E
B (likely)                        M                H            H             E              E
C (possible)                      L                M            H             E              E
D (unlikely)                      L                L            M             H              E
E (rare)                          L                L            M             H              H

E = Extreme Risk: Immediate action required to mitigate the risk or decide to not proceed
H = High Risk: Action should be taken to compensate for the risk
M = Moderate Risk: Action should be taken to monitor the risk
L = Low Risk: Routine acceptance of the risk

Figure 1.5. ANZ 4360 risk levels.

Reporting Findings. Once the findings from the assessment have been consolidated and the calculations have been completed, it is time to present a finalized report to senior management. This can be done in a written report, by presentation or outbrief, or by both means. Written reports should include an acknowledgment to the participants, a summary of the approach taken, findings in detail (in either tabulated or graphical form), recommendations for remediation of the findings, and a summary. Organizations are encouraged to develop their own formats, to make the most of the activity, as well as the information collected and analyzed. Countermeasure Selection. It is important for the organization to appropriately select countermeasures to apply to risks in the environment. Many aspects of the countermeasure must be considered to ensure it is the proper fit to the task. Considerations for countermeasures include:
• Accountability
• Auditability (Can it be tested?)
• Publicly available, simple design (the construction and the nature of the countermeasure are not secret)
• Trusted source
• Independence
• Consistently applied
• Cost-effective
• Reliable
• Distinct from other countermeasures (no overlap)
• Ease of use
• Minimum manual intervention
• Sustainable
• Secure
• Protects confidentiality, integrity, and availability of assets
• Can be "backed out" in event of issue
• Creates no additional issues during operation
• Leaves no residual data from its function
Although this list appears rather lengthy, it is clear that countermeasures must be above reproach when in use for protection of an organization's assets. It is important to note that once risk assessment is completed and there is a list of remediation activities to be undertaken, an organization must ensure that it has personnel with appropriate capabilities to implement the remediation activities, as well as to maintain and support them. This may require the organization to provide additional training opportunities to personnel involved in the design, deployment, maintenance, and support of security mechanisms within the environment. In addition, it is crucial that appropriate policies, with detailed procedures that correspond to each policy item, be created, implemented, maintained, monitored, and enforced in the environment. It is highly recommended that the organization assign accountable resources to each task and track tasks over time, reporting progress to senior management and allowing time for appropriate approvals during this process. Information Valuation. All information has value. Value is typically represented by information's cost and its perceived value internally and externally to an organization. It is important to remember that over time, however, information may lose its value. Additionally, information may lose value if it is modified, improperly disclosed, or if its proper value has never been calculated. It is of utmost importance, then, to periodically attempt to properly value information assets. How, then, is information value determined? Similarly to risk analysis, information valuation methods may be descriptive (subjective) or metric
How, then, is information value determined? Similarly to risk analysis, information valuation methods may be descriptive (subjective) or metric 70
Information Security and Risk Management (objective). Subjective methods include the creation, dissemination, and data collection from checklists or surveys. An organization’s policies or the regulatory compliance requirements that it must follow may also determine information’s worth. Metric, or statistical, measures may provide a more objective view of information valuation. Each of these methods has its uses within an organization. One of the methods that uses consensus relative to valuation of information is the consensus/modified Delphi method. Participants in the valuation exercise are asked to comment anonymously on the task being discussed. This information is collected and disseminated to a participant other than the original author. This participant comments upon the observations of the original author. The information gathered is discussed in a public forum and the best course is agreed upon by the group (consensus). Ethics The consideration of computer ethics fundamentally emerged with the birth of computers. There was concern right away that computers would be used inappropriately to the detriment of society, or that they would replace humans in many jobs, resulting in widespread job loss. To fully grasp the issues involved with computer ethics, it is important to consider the history. The following provides a brief overview of some significant events.1 Consideration of computer ethics is recognized to have begun with the work of MIT professor Norbert Wiener during World War II in the early 1940s, when he helped to develop an antiaircraft cannon capable of shooting down fast warplanes. This work resulted in Wiener and his colleagues creating a new field of research that Wiener called cybernetics, the science of information feedback systems. The concepts of cybernetics, combined with the developing computer technologies, led Wiener to make some ethical conclusions about the technology called information and communication technology (ICT), in which Wiener predicted social and ethical consequences. Wiener published the book The Human Use of Human Beings in 1950, which described a comprehensive foundation that is still the basis for computer ethics research and analysis. In the mid-1960s, Donn B. Parker, at the time with SRI International in Menlo Park, CA, began examining unethical and illegal uses of computers and documenting examples of computer crime and other unethical computerized activities. He published “Rules of Ethics in Information Processing” in Communications of the ACM in 1968, and headed the development of the first Code of Professional Conduct for the Association for Computing Machinery, which was adopted by the ACM in 1973. During the late 1960s, Joseph Weizenbaum, a computer scientist at MIT in Boston, created a computer program that he called ELIZA that he 71
scripted to provide a crude imitation of "a Rogerian psychotherapist engaged in an initial interview with a patient." People had strong reactions to his program, some psychiatrists fearing it showed that computers would perform automated psychotherapy. Weizenbaum wrote Computer Power and Human Reason in 1976, in which he expressed his concerns about the growing tendency to see humans as mere machines. His book, MIT courses, and many speeches inspired many computer ethics thoughts and projects. Walter Maner is credited with coining the phrase "computer ethics" in the mid-1970s when discussing the ethical problems and issues created by computer technology, and taught a course on the subject at Old Dominion University. From the late 1970s into the mid-1980s, Maner's work created much interest in university-level computer ethics courses. In 1978, Maner published the Starter Kit in Computer Ethics, which contained curriculum materials and advice for developing computer ethics courses. Many university courses were put in place because of Maner's work. In the 1980s, social and ethical consequences of information technology, such as computer-enabled crime, computer failure disasters, privacy invasion using computer databases, and software ownership lawsuits, were being widely discussed in America and Europe. James Moor of Dartmouth College published "What Is Computer Ethics?" in Computers and Ethics, and Deborah Johnson of Rensselaer Polytechnic Institute published Computer Ethics, the first textbook in the field, in the mid-1980s. Other significant books about computer ethics were published within the psychology and sociology field, such as Sherry Turkle's The Second Self, about the impact of computing on the human psyche, and Judith Perrolle's Computers and Social Change: Information, Property and Power, about a sociological approach to computing and human values. Walter Maner and Terrell Bynum held the first international multidisciplinary conference on computer ethics in 1991. For the first time, philosophers, computer professionals, sociologists, psychologists, lawyers, business leaders, news reporters, and government officials assembled to discuss computer ethics. During the 1990s, new university courses, research centers, conferences, journals, articles, and textbooks appeared, and organizations like Computer Professionals for Social Responsibility, the Electronic Frontier Foundation, and the Association for Computing Machinery-Special Interest Group on Computers and Society (ACM-SIGCAS) launched projects addressing computing and professional responsibility. Developments in Europe and Australia included new computer ethics research centers in England, Poland, Holland, and Italy. In the U.K., Simon Rogerson, of De Montfort University, led the ETHICOMP series of conferences and established the Centre for Computing and Social Responsibility.
Information Security and Risk Management Regulatory Requirements for Ethics Programs When creating an ethics strategy, it is important to look at the regulatory requirements for ethics programs. These provide the basis for a minimal ethical standard upon which an organization can expand to fit its own unique organizational environment and requirements. An increasing number of regulatory requirements related to ethics programs and training now exist. The 1991 U.S. Federal Sentencing Guidelines for Organizations (FSGO) outline minimal ethical requirements and provide for substantially reduced penalties in criminal cases when federal laws are violated if ethics programs are in place. Reduced penalties provide strong motivation to establish an ethics program. Effective November 1, 2004, the FSGO was updated with additional requirements: • In general, board members and senior executives must assume more specific responsibilities for a program to be found effective: – Organizational leaders must be knowledgeable about the content and operation of the compliance and ethics program, perform their assigned duties exercising due diligence, and promote an organizational culture that encourages ethical conduct and a commitment to compliance with the law. – The commission’s definition of an effective compliance and ethics program now has three subsections: • Subsection (a) — the purpose of a compliance and ethics program • Subsection (b) — seven minimum requirements of such a program • Subsection (c) — the requirement to periodically assess the risk of criminal conduct and design, implement, or modify the seven program elements, as needed, to reduce the risk of criminal conduct • The purpose of an effective compliance and ethics program is “to exercise due diligence to prevent and detect criminal conduct and otherwise promote an organizational culture that encourages ethical conduct and a commitment to compliance with the law.” The new requirement significantly expands the scope of an effective ethics program and requires the organization to report an offense to the appropriate governmental authorities without unreasonable delay. The Sarbanes–Oxley Act of 2002 introduced accounting reform and requires attestation to the accuracy of financial reporting documents: • Section 103, “Auditing, Quality Control, and Independence Standards and Rules,” requires the board to: – Register public accounting firms 73
–
Establish, or adopt, by rule, “auditing, quality control, ethics, independence, and other standards relating to the preparation of audit reports for issuers” • New Item 406(a) of Regulation S-K requires companies to disclose: – Whether they have a written code of ethics that applies to their senior officers – Any waivers of the code of ethics for these individuals – Any changes to the code of ethics • If companies do not have a code of ethics, they must explain why they have not adopted one. • The U.S. Securities and Exchange Commission approved a new governance structure for the New York Stock Exchange (NYSE) in December 2003. It includes a requirement for companies to adopt and disclose a code of business conduct and ethics for directors, officers, and employees, and promptly disclose any waivers of the code for directors or executive officers. The NYSE regulations require all listed companies to possess and communicate, both internally and externally, a code of conduct or face delisting. In addition to these, organizations must monitor new and revised regulations from U.S. regulatory agencies, such as the Food and Drug Administration (FDA), Federal Trade Commission (FTC), Bureau of Alcohol, Tobacco, and Firearms (BATF), Internal Revenue Service (IRS), and Department of Labor (DoL), and many others throughout the world. Ethics plans and programs need to be established within the organization to ensure that the organization is in compliance with all such regulatory requirements. Example Topics in Computer Ethics When establishing a computer ethics program and accompanying training and awareness programs, it is important to consider the topics that have been addressed and researched. The following topics, identified by Terrell Bynum,1 are good to use as a basis. Computers in the Workplace. Computers can pose a threat to jobs as people feel they may be replaced by them. However, the computer industry already has generated a wide variety of new jobs. When computers do not eliminate a job, they can radically alter it. In addition to job security concerns, another workplace concern is health and safety. It is a computer ethics issue to consider how computers impact health and job satisfaction when information technology is introduced into a workplace. Computer Crime. With the proliferation of computer viruses, spyware, phishing and fraud schemes, and hacking activity from every location in the world, computer crime and security are certainly topics of concern when discussing computer ethics. Besides outsiders, or hackers, many 74
Information Security and Risk Management computer crimes, such as embezzlement or planting of logic bombs, are committed by trusted personnel who have authorization to use company computer systems. Privacy and Anonymity. One of the earliest computer ethics topics to arouse public interest was privacy. The ease and efficiency with which computers and networks can be used to gather, store, search, compare, retrieve, and share personal information make computer technology especially threatening to anyone who wishes to keep personal information out of the public domain or out of the hands of those who are perceived as potential threats. The variety of privacy-related issues generated by computer technology has led to reexamination of the concept of privacy itself. Intellectual Property. One of the more controversial areas of computer ethics concerns the intellectual property rights connected with software ownership. Some people, like Richard Stallman, who started the Free Software Foundation, believe that software ownership should not be allowed at all. He claims that all information should be free, and all programs should be available for copying, studying, and modifying by anyone who wishes to do so.2 Others, such as Deborah Johnson, argue that software companies or programmers would not invest weeks and months of work and significant funds in the development of software if they could not get the investment back in the form of license fees or sales.3 Professional Responsibility and Globalization. Global networks such as the Internet and conglomerates of business-to-business network connections are connecting people and information worldwide. Such globalization issues that include ethics considerations include:
• Global laws
• Global business
• Global education
• Global information flows
• Information-rich and information-poor nations
• Information interpretation
The gap between rich and poor nations, and between rich and poor citizens in industrialized countries, is very wide. As educational opportunities, business and employment opportunities, medical services, and many other necessities of life move more and more into cyberspace, gaps between the rich and the poor may become even worse, leading to new ethical considerations. Common Computer Ethics Fallacies Although computer education is starting to be incorporated in lower grades in elementary schools, the lack of early computer education for 75
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® most current adults led to several documented generally accepted fallacies that apply to nearly all computer users. As technology advances, these fallacies will change; new ones will arise, and some of the original fallacies will no longer exist as children learn at an earlier age about computer use, risks, security, and other associated information. There are more than described here, but Peter S. Tippett identified the following computer ethics fallacies, which have been widely discussed and generally accepted as being representative of the most common.4 The Computer Game Fallacy. Computer users tend to think that computers will generally prevent them from cheating and doing wrong. Programmers particularly believe that an error in programming syntax will prevent it from working, so that if a software program does indeed work, then it must be working correctly and preventing bad things or mistakes from happening. Even computer users in general have gotten the message that computers work with exacting accuracy and will not allow actions that should not occur. Of course, what computer users often do not consider is that although the computer operates under very strict rules, the software programs are written by humans and are just as susceptible to allowing bad things to happen as people often are in their own lives. Along with this, there is also the perception that a person can do something with a computer without being caught, so that if what is being done is not permissible, the computer should somehow prevent them from doing it. The Law-Abiding Citizen Fallacy. Laws provide guidance for many things, including computer use. Sometimes users confuse what is legal with regard to computer use with what is reasonable behavior for using computers. Laws basically define the minimum standard about which actions can be reasonably judged, but such laws also call for individual judgment. Computer users often do not realize they also have a responsibility to consider the ramifications of their actions and to behave accordingly. The Shatterproof Fallacy. Many, if not most, computer users believe that they can do little harm accidentally with a computer beyond perhaps erasing or messing up a file. However, computers are tools that can harm, even if computer users are unaware of the fact that their computer actions have actually hurt someone else in some way. For example, sending an email flame to a large group of recipients is the same as publicly humiliating them. Most people realize that they could be sued for libel for making such statements in a physical public forum, but may not realize they are also responsible for what they communicate and for their words and accusations on the Internet. As another example, forwarding e-mail without permission of the author can lead to harm or embarrassment if the original sender was communicating privately without expectation of his or her message being seen by any others. Also, using e-mail to stalk someone, to 76
Information Security and Risk Management send spam, and to harass or offend the recipient in some way also are harmful uses of computers. Software piracy is yet another example of using computers to, in effect, hurt others. Generally, the shatterproof fallacy is the belief that what a person does with a computer can do minimal harm, and only affects perhaps a few files on the computer itself; it is not considering the impact of actions before doing them. The Candy-from-a-Baby Fallacy. Illegal and unethical activity, such as software piracy and plagiarism, are very easy to do with a computer. However, just because it is easy does not mean that it is right. Because of the ease with which computers can make copies, it is likely almost every computer user has committed software piracy of one form or another. The Software Publisher’s Association (SPA) and Business Software Alliance (BSA) studies reveal software piracy costs companies multibillions of dollars. Copying a retail software package without paying for it is theft. Just because doing something wrong with a computer is easy does not mean it is ethical, legal, or acceptable. The Hacker’s Fallacy. Numerous reports and publications of the commonly accepted hacker belief is that it is acceptable to do anything with a computer as long as the motivation is to learn and not to gain or make a profit from such activities. This so-called hacker ethic is explored in more depth in the following section. The Free Information Fallacy. A somewhat curious opinion of many is the notion that information “wants to be free,” as mentioned earlier. It is suggested that this fallacy emerged from the fact that it is so easy to copy digital information and to distribute it widely. However, this line of thinking completely ignores the fact the copying and distribution of data is completely under the control and whim of the people who do it, and to a great extent, the people who allow it to happen.
Hacking and Hacktivism Hacking is an ambivalent term, most commonly perceived as being part of criminal activities. However, hacking has been used to describe the work of individuals who have been associated with the open-source movement. Many of the developments in information technology have resulted from what has typically been considered as hacking activities. Manuel Castells considers hacker culture as the “informationalism” that incubates technological breakthrough, identifying hackers as “the actors in the transition from an academically and institutionally constructed milieu of innovation to the emergence of self-organizing networks transcending organizational control” (p. 276).5 77
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® A hacker was originally a person who sought to understand computers as thoroughly as possible. Soon hacking came to be associated with phreaking, breaking into phone networks to make free phone calls, which is clearly illegal. The Hacker Ethic. The idea of a hacker ethic originates in the activities of the original hackers at MIT and Stanford in the 1950s and 1960s. Stephen Levy outlined the so-called hacker ethic6 as follows:
1. Access to computers should be unlimited and total.
2. All information should be free.
3. Authority should be mistrusted and decentralization promoted.
4. Hackers should be judged solely by their skills at hacking, rather than by race, class, age, gender, or position.
5. Computers can be used to create art and beauty.
6. Computers can change your life for the better.
The hacker ethic has three main functions (p. 279):5
1. It promotes the belief of individual activity over any form of corporate authority or system of ideals.
2. It supports a completely free-market approach to the exchange of and access to information.
3. It promotes the belief that computers can have a beneficial and life-changing effect.
Such ideas are in conflict with a wide range of computer professionals' various codes of ethics.
Ethics Codes of Conduct and Resources
Several organizations and groups have defined the computer ethics their members should observe and practice. In fact, most professional organizations have adopted a code of ethics, a large percentage of which address how to handle information. To provide the ethics of all professional organizations related to computer use would fill a large book. The following are provided to give you an opportunity to compare similarities between the codes and, most interestingly, to note the differences (and sometimes contradictions) in the codes followed by the various diverse groups. The Code of Fair Information Practices. In 1973 the Secretary's Advisory Committee on Automated Personal Data Systems for the U.S. Department of Health, Education and Welfare recommended the adoption of the following Code of Fair Information Practices7 to secure the privacy and rights of citizens:
1. There must be no personal data record-keeping systems whose very existence is secret;
Information Security and Risk Management 2. There must be a way for an individual to find out what information is in his or her file and how the information is being used; 3. There must be a way for an individual to correct information in his or her records; 4. Any organization creating, maintaining, using, or disseminating records of personally identifiable information must assure the reliability of the data for its intended use and must take precautions to prevent misuse; and 5. There must be a way for an individual to prevent personal information obtained for one purpose from being used for another purpose without his or her consent. Internet Activities Board (IAB) (now the Internet Architecture Board) and RFC 1087. RFC 10878 is a statement of policy by the Internet Activities
Board (IAB) posted in 1989 concerning the ethical and proper use of the resources of the Internet. The IAB “strongly endorses the view of the Division Advisory Panel of the National Science Foundation Division of Network, Communications Research and Infrastructure,” which characterized as unethical and unacceptable any activity that purposely: 1. Seeks to gain unauthorized access to the resources of the Internet, 2. Disrupts the intended use of the Internet, 3. Wastes resources (people, capacity, computer) through such actions, 4. Destroys the integrity of computer-based information, or 5. Compromises the privacy of users. Computer Ethics Institute (CEI). In 1991 the Computer Ethics Institute held its first National Computer Ethics Conference in Washington, D.C. The Ten Commandments of Computer Ethics were first presented in Dr. Ramon C. Barquin’s paper prepared for the conference, ”In Pursuit of a ‘Ten Commandments’ for Computer Ethics.” The Computer Ethics Institute published them as follows in 1992:9
1. Thou Shalt Not Use a Computer to Harm Other People.
2. Thou Shalt Not Interfere with Other People's Computer Work.
3. Thou Shalt Not Snoop around in Other People's Computer Files.
4. Thou Shalt Not Use a Computer to Steal.
5. Thou Shalt Not Use a Computer to Bear False Witness.
6. Thou Shalt Not Copy or Use Proprietary Software for Which You Have Not Paid.
7. Thou Shalt Not Use Other People's Computer Resources without Authorization or Proper Compensation.
8. Thou Shalt Not Appropriate Other People's Intellectual Output.
9. Thou Shalt Think about the Social Consequences of the Program You Are Writing or the System You Are Designing.
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® 10. Thou Shalt Always Use a Computer in Ways That Insure Consideration and Respect for Your Fellow Humans. National Conference on Computing and Values. The National Conference on Computing and Values (NCCV) was held on the campus of Southern Connecticut State University in August 1991. It proposed the following four primary values for computing, originally intended to serve as the ethical foundation and guidance for computer security:4
1. Preserve the public trust and confidence in computers.
2. Enforce fair information practices.
3. Protect the legitimate interests of the constituents of the system.
4. Resist fraud, waste, and abuse.
The Working Group on Computer Ethics. In 1991, the Working Group on Computer Ethics created the following End User’s Basic Tenets of Responsible Computing:10
1. I understand that just because something is legal, it isn’t necessarily moral or right. 2. I understand that people are always the ones ultimately harmed when computers are used unethically. The fact that computers, software, or a communications medium exists between me and those harmed does not in any way change moral responsibility toward my fellow humans. 3. I will respect the rights of authors, including authors and publishers of software as well as authors and owners of information. I understand that just because copying programs and data is easy, it is not necessarily right. 4. I will not break into or use other people’s computers or read or use their information without their consent. 5. I will not write or knowingly acquire, distribute, or allow intentional distribution of harmful software like bombs, worms, and computer viruses. National Computer Ethics and Responsibilities Campaign (NCERC).
In 1994, a National Computer Ethics and Responsibilities Campaign (NCERC) was launched11 to create an “electronic repository of information resources, training materials and sample ethics codes” that would be available on the Internet for IS managers and educators. The National Computer Security Association (NCSA) and the Computer Ethics Institute cosponsored NCERC. The NCERC Guide to Computer Ethics was developed to support the campaign. The goal of NCERC is to foster computer ethics awareness and education.4 The campaign does this by making tools and other resources available for people who want to hold events, campaigns, awareness programs, 80
Information Security and Risk Management seminars, and conferences or to write or communicate about computer ethics. NCERC is a nonpartisan initiative intended to increase understanding of the ethical and moral issues unique to the use, and sometimes abuse, of information technologies. (ISC)2 Code of Ethics. The following is an excerpt from the (ISC)2 Code of
Ethics12 preamble and canons, by which all CISSPs and SSCPs must abide. Compliance with the preamble and canons is mandatory to maintain certification. Computer professionals could resolve conflicts between the canons in the order of the canons. The canons are not equal and conflicts between them are not intended to create ethical binds. Code of Ethics Preamble.
• Safety of the commonwealth, duty to our principals, and to each other requires that we adhere, and be seen to adhere, to the highest ethical standards of behavior. • Therefore, strict adherence to this Code is a condition of certification. Code of Ethics Canons. Protect society, the commonwealth, and the infra-
structure • Promote and preserve public trust and confidence in information and systems. • Promote the understanding and acceptance of prudent information security measures • Preserve and strengthen the integrity of the public infrastructure. • Discourage unsafe practice. Act honorably, honestly, justly, responsibly, and legally • Tell the truth; make all stakeholders aware of your actions on a timely basis. • Observe all contracts and agreements, express or implied. • Treat all constituents fairly. In resolving conflicts, consider public safety and duties to principals, individuals, and the profession in that order. • Give prudent advice; avoid raising unnecessary alarm or giving unwarranted comfort. Take care to be truthful, objective, cautious, and within your competence. • When resolving differing laws in different jurisdictions, give preference to the laws of the jurisdiction in which you render your service. Provide diligent and competent service to principals • Preserve the value of their systems, applications, and information. • Respect their trust and the privileges that they grant you. • Avoid conflicts of interest or the appearance thereof. 81
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® • Render only those services for which you are fully competent and qualified. Advance and protect the profession • Sponsor for professional advancement those best qualified. All other things equal, prefer those who are certified and who adhere to these canons. Avoid professional association with those whose practices or reputation might diminish the profession. • Take care not to injure the reputation of other professionals through malice or indifference. • Maintain your competence; keep your skills and knowledge current. Give generously of your time and knowledge in training others. Organizational Ethics Plan of Action Peter S. Tippett has written extensively on computer ethics. He provided the following action plan4 to help corporate information security leaders to instill a culture of ethical computer use within organizations: 1. Develop a corporate guide to computer ethics for the organization. 2. Develop a computer ethics policy to supplement the computer security policy. 3. Add information about computer ethics to the employee handbook. 4. Find out whether the organization has a business ethics policy, and expand it to include computer ethics. 5. Learn more about computer ethics and spreading what is learned. 6. Help to foster awareness of computer ethics by participating in the computer ethics campaign. 7. Make sure the organization has an E-mail privacy policy. 8. Make sure employees know what the E-mail policy is. Fritz H. Grupe, Timothy Garcia-Jay, and William Kuechler identified the following selected ethical bases for IT decision making:13 Golden rule: Treat others as you wish to be treated. Do not implement systems that you would not wish to be subjected to yourself. Is your company using unlicensed software although your company itself sells software? Kant’s categorical imperative: If an action is not right for everyone, it is not right for anyone. Does management monitor call center employees’ seat time, but not its own? Descartes’ rule of change (also called the slippery slope): If an action is not repeatable at all times, it is not right at any time. Should your Web site link to another site, “framing” the page, so users think it was created and belongs to you? 82
Utilitarian principle (also called universalism): Take the action that achieves the most good. Put a value on outcomes and strive to achieve the best results. This principle seeks to analyze and maximize the overall good of the covered population within acknowledged resource constraints. Should customers using your Web site be asked to opt in or opt out of the possible sale of their personal data to other companies?

Risk aversion principle: Incur least harm or cost. Given alternatives that have varying degrees of harm and gain, choose the one that causes the least damage. If a manager reports that a subordinate criticized him in an e-mail to other employees, who would do the search and see the results of the search?

Avoid harm: Avoid malfeasance or "do no harm." This basis implies a proactive obligation of companies to protect their customers and clients from systems with known harm. Does your company have a privacy policy that protects, rather than exploits, customers?

No free lunch rule: Assume that all property and information belong to someone. This principle is primarily applicable to intellectual property that should not be taken without just compensation. Has your company used unlicensed software? Or hired a group of IT workers from a competitor?

Legalism: Is it against the law? Moral actions may not be legal, and vice versa. Might your Web advertising exaggerate the features and benefits of your products? Are you collecting information illegally on minors?

Professionalism: Is an action contrary to codes of ethics? Do the professional codes cover a case and do they suggest the path to follow? When you present technological alternatives to managers who do not know the right questions to ask, do you tell them all they need to know to make informed choices?

Evidentiary guidance: Is there hard data to support or deny the value of taking an action? This is not a traditional "ethics" value but one that is a significant factor related to IT's policy decisions about the impact of systems on individuals and groups. This value involves probabilistic reasoning where outcomes can be predicted based on hard evidence from research. Do you assume that you know PC users are satisfied with IT's service, or has data been collected to determine what they really think?

Client/customer/patient choice: Let the people affected decide. In some circumstances, employees and customers have a right to self-determination through the informed consent process. This principle acknowledges a right to self-determination in deciding what is "harmful" or "beneficial" for their personal circumstances. Are your workers subjected to monitoring in places where they assume that they have privacy?
Equity: Will the costs and benefits be equitably distributed? Adherence to this principle obligates a company to provide similarly situated persons with the same access to data and systems. This can imply a proactive duty to inform and make services, data, and systems available to all those who share a similar circumstance. Has IT made intentionally inaccurate projections as to project costs?

Competition: This principle derives from the marketplace where consumers and institutions can select among competing companies, based on all considerations such as degree of privacy, cost, and quality. It recognizes that to be financially viable in the market, one must have data about what competitors are doing and understand and acknowledge the competitive implications of IT decisions. When you present a build or buy proposition to management, is it fully aware of the risk involved?

Compassion/last chance: Religious and philosophical traditions promote the need to find ways to assist the most vulnerable parties. Refusing to take unfair advantage of users or others who do not have technical knowledge is recognized in several professional codes of ethics. Do all workers have an equal opportunity to benefit from the organization's investment in IT?

Impartiality/objectivity: Are decisions biased in favor of one group or another? Is there an even playing field? IT personnel should avoid potential or apparent conflicts of interest. Do you or any of your IT employees have a vested interest in the companies that you deal with?

Openness/full disclosure: Are persons affected by this system aware of its existence, aware of what data are being collected, and knowledgeable about how it will be used? Do they have access to the same information? Is it possible for a Web site visitor to determine what cookies are used and what is done with any information they might collect?

Confidentiality: IT is obligated to determine whether data it collects on individuals can be adequately protected to avoid disclosure to parties whose need to know is not proven. Have you reduced security features to hold expenses to a minimum?

Trustworthiness and honesty: Does IT stand behind ethical principles to the point where it is accountable for the actions it takes? Has IT management ever posted or circulated a professional code of ethics with an expression of support for seeing that its employees act professionally?

How a Code of Ethics Applies to CISSPs

In 1998, Michael Davis described a professional ethics code as a "contract between professionals."14 According to this explanation, a profession is a
group of persons who want to cooperate in serving the same ideal better than they could if they did not cooperate. Information security professionals, for example, are typically thought to serve the ideal of ensuring the confidentiality, integrity, and availability of information and the security of the technology that supports the information use. A code of ethics would then specify how professionals should pursue their common ideals so that each may do his or her best to reach the goals at a minimum cost while appropriately addressing the issues involved. The code helps to protect professionals from certain stresses and pressures (such as the pressure to cut corners with information security to save money) by making it reasonably likely that most other members of the profession will not take advantage of those who resist such pressures. An ethics code also protects members of a profession from certain consequences of competition, and encourages cooperation and support among the professionals.

Considering this, an occupation does not need society's recognition to be a profession. Indeed, it needs only members who act and cooperate to serve a certain ideal. Once an occupation becomes recognized as a profession, society historically has found reason to give the occupation special privileges (for example, the sole right to do certain kinds of work) to support serving the ideal in question (in this case, information security) in the way the profession serves society.

Understanding a code of ethics as a contract between professionals explains why each information security professional should not depend upon only his or her private conscience when determining how to practice the profession, and why he or she must take into account what a community of information security professionals has to say about what other information security professionals should do. What others expect of information security professionals is part of what each should take into account in choosing what to do within professional activities, especially if the expectation is reasonable. The ethics code provides a guide to what information security professionals may reasonably expect of one another, basically setting forth the rules of the game. Just as athletes need to know the rules of football to know what to do to score, computer professionals also need to know computer ethics to know, for example, whether they should choose information security and risk reduction actions based completely and solely upon the wishes of an employer, or, instead, also consider information security leading practices and legal requirements when making recommendations and decisions. A code of ethics should also provide a guide to what computer professionals may expect other members of our profession to help each other do.

Keep in mind that people are not merely members of this or that profession. Each individual has responsibilities beyond the profession and, as such, must face his or her own conscience, along with the criticism, blame,
and punishment of others, as a result of actions. These issues cannot be escaped just by making a decision because the profession said to do so. Information security professionals must take their professional code of ethics and apply it appropriately to their own unique environments. To assist with this, Donn B. Parker describes the following five ethical principles15 that apply to processing information in the workplace, and also provides examples of how they would be applied.

1. Informed consent. Try to make sure that the people affected by a decision are aware of your planned actions and that they either agree with your decision, or disagree but understand your intentions. Example: An employee gives a copy of a program that she wrote for her employer to a friend, and does not tell her employer about it.

2. Higher ethic in the worst case. Think carefully about your possible alternative actions and select the beneficial necessary ones that will cause the least, or no, harm under the worst circumstances. Example: A manager secretly monitors an employee's email, which may violate his privacy, but the manager has reason to believe that the employee may be involved in a serious theft of trade secrets.

3. Change of scale test. Consider that an action you may take on a small scale, or by you alone, could result in significant harm if carried out on a larger scale or by many others. Examples: A teacher lets a friend try out, just once, a database that he bought to see if the friend wants to buy a copy, too. The teacher does not let an entire classroom of his students use the database for a class assignment without first getting permission from the vendor. A computer user thinks it's okay to use a small amount of her employer's computer services for personal business, since the others' use is unaffected.

4. Owners' conservation of ownership. As a person who owns or is responsible for information, always make sure that the information is reasonably protected and that ownership of it, and rights to it, are clear to users. Example: A vendor who sells a commercial electronic bulletin board service with no proprietary notice at logon loses control of the service to a group of hackers who take it over, misuse it, and offend customers.

5. Users' conservation of ownership. As a person who uses information, always assume others own it and their interests must be protected unless you explicitly know that you are free to use it in any way that you wish. Example: A hacker discovers a commercial electronic bulletin board with no proprietary notice at logon, and informs his friends, who take control of it, misuse it, and offend other customers.
References

1. Terrell Bynum, Stanford Encyclopedia of Philosophy, 2001, available at http://plato.stanford.edu/entries/ethics-computer/.
2. Richard Stallman, Why software should be free, in Software Ownership and Intellectual Property Rights, Terrell Ward Bynum, Walter Maner, and John L. Fodor, Eds., Research Center on Computing & Society, Southern Connecticut State University, 1992, pp. 35–52.
3. Deborah G. Johnson, Proprietary rights in computer software: individual and policy issues, in Software Ownership and Intellectual Property Rights, Terrell Bynum, Computer ethics basic concepts and historical overview, Stanford Encyclopedia of Philosophy, Winter, 2001, Edward N. Zalta, Ed., available at http://plato.stanford.edu/archives/win2001/entries-computer/.
4. Peter S. Tippett, Computer ethics, in Information Security Management Handbook, Harold F. Tipton and Micki Krause, Eds., Auerbach Publications, Boca Raton, FL, 2002.
5. Jason Whittaker, The Cyberspace Handbook, Routledge, London, 2004, p. 276.
6. Stephen Levy, Hackers: Heroes of the Computer Revolution, Penguin Putnam, New York, 1984, p. 26.
7. U.S. Department of Health, Education and Welfare, Secretary's Advisory Committee on Automated Personal Data Systems, Records, Computers, and the Rights of Citizens viii, 1973.
8. http://www.faqs.org/rfcs/rfc1087.html.
9. Computer Ethics Institute, http://www.brook.edu/its/cei/overview/Ten_Commandments_of_Computer_Ethics.htm.
10. http://www.security.state.az.us/responsible_computing.htm.
11. M. Betts, Campaign addresses computer ethics void, Computerworld, Vol. 29, No. 24, June 13, 1994, p. 33.
12. https://www.isc2.org/cgi-bin/content.cgi?category=12.
13. Fritz H. Grupe, Timothy Garcia-Jay, and William Kuechler, "Is it time for an IT ethics program?" in Information Security Management Handbook, Harold F. Tipton and Micki Krause, Eds., Auerbach Publications, Boca Raton, FL, 2002.
14. Michael Davis, Thinking Like an Engineer: Studies in the Ethics of a Profession, Oxford University Press, New York, 1998, pp. 50–51.
15. Donn B. Parker, Fighting Computer Crime: A New Framework for Protecting Computer Information, John Wiley & Sons, New York, 1998, pp. 423–424.
Other References

Todd Fitzgerald, Building management commitment through security councils, Information Systems Security, May/June 2005.
Todd Fitzgerald, Ten steps to effective web-based security policy development and distribution, in Information Security Handbook, Harold Tipton and Micki Krause, Eds., Auerbach Publications, Boca Raton, FL, 2005.
Rebecca Herold, Managing an Information Security and Privacy Awareness and Training Program, Auerbach Publications, New York, 2005.
National Institute of Standards and Technology, An Introduction to Computer Security: The NIST Handbook, Special Publication 800-12.
Thomas R. Peltier, Information Security Policies and Procedures: A Practitioner's Reference, 2nd ed., Auerbach Publications, New York, 2004.
Thomas R. Peltier, Information Security Risk Analysis, 2nd ed., Auerbach Publications, New York, 2005.
Ben Rothke, Personnel security screening, in Information Security Handbook, Harold F. Tipton and Micki Krause, Eds., Auerbach Publications, Boca Raton, FL, 2005.
Harold F. Tipton and Micki Krause, Eds., Information Security Management Handbook, 5th ed., Vols. 1–3, Auerbach Publications, New York, 2005–2007.
U.S. General Accounting Office, Executive Guide Security Management: Learning from Leading Organizations, May 1998.
U.S. General Accounting Office, Federal Information System Controls Audit Manual, January 1999.
Charles C. Wood, Information Security Roles and Responsibilities Made Easy, Version 1, Pentasafe Security Technologies, Houston, TX, 2001.
Sample Questions

1. Consideration of computer ethics is recognized to have begun with the work of which of the following?
a. Joseph Weizenbaum
b. Donn B. Parker
c. Norbert Wiener
d. Walter Maner
2. Which of the following U.S. laws, regulations, and guidelines does not have a requirement for organizations to provide ethics training?
a. Federal sentencing guidelines for organizations
b. Health Insurance Portability and Accountability Act
c. Sarbanes–Oxley Act
d. New York Stock Exchange governance structure
3. According to Peter S. Tippett, which of the following common ethics fallacies is demonstrated by the belief that if a computer application allows an action to occur, the action is allowable because if it was not, the application would have prevented it?
a. The computer game fallacy
b. The shatterproof fallacy
c. The hacker's fallacy
d. The law-abiding citizen fallacy
4. According to Stephen Levy, which of the following is one of the six beliefs he described within the hacker ethic?
a. There must be a way for an individual to correct information in his or her records.
b. Thou shalt not interfere with other people's computer work.
c. Preserve the value of their systems, applications, and information.
d. Computers can change your life for the better.
5. According to Fritz H. Grupe, Timothy Garcia-Jay, and William Kuechler, which of the following represents the concept behind the "no free lunch" rule ethical basis for IT decision making?
a. If an action is not repeatable at all times, it is not right at any time.
b. Assume that all property and information belong to someone.
c. To be financially viable in the market, one must have data about what competitors are doing and understand and acknowledge the competitive implications of IT decisions.
d. IT personnel should avoid potential or apparent conflicts of interest.
6. The concept of risk management is best described as the following:
a. Risk management reduces risks by defining and controlling threats and vulnerabilities.
b. Risk management identifies risks and calculates their impacts on the organization.
c. Risk management determines organizational assets and their subsequent values.
d. All of the above.
7. Qualitative risk assessment is earmarked by which of the following?
a. Ease of implementation
b. Detailed metrics used for calculation of risk
c. Can be completed by personnel with a limited understanding of the risk assessment process
d. a and c only
8. Single loss expectancy (SLE) is calculated by using:
a. Asset value and annualized rate of occurrence (ARO)
b. Asset value, local annual frequency estimate (LAFE), and standard annual frequency estimate (SAFE)
c. Asset value and exposure factor
d. All of the above
9. Consideration for which type of risk assessment to perform includes all of the following except:
a. Culture of the organization
b. Budget
c. Capabilities of resources
d. Likelihood of exposure
10. Security awareness training includes:
a. Legislated security compliance objectives
b. Security roles and responsibilities for staff
c. The high-level outcome of vulnerability assessments
d. None of the above
11. A signed user acknowledgment of the corporate security policy:
a. Ensures that users have read the policy
b. Ensures that users understand the policy, as well as the consequences for not following the policy
c. Can be waived if the organization is satisfied that users have an adequate understanding of the policy
d. Helps to protect the organization if a user's behavior violates the policy
12. Effective security management:
a. Achieves security at the lowest cost
b. Reduces risk to an acceptable level
c. Prioritizes security for new products
d. Installs patches in a timely manner
13. Identity theft is best mitigated by:
a. Encrypting information in transit to prevent readability of information
b. Implementing authentication controls
c. Determining location of sensitive information
d. Publishing privacy notices
14. Availability makes information accessible by protecting from each of the following except:
a. Denial of services
b. Fires, floods, and hurricanes
c. Unreadable backup tapes
d. Unauthorized transactions
15. The security officer could report to any of the following except:
a. CEO
b. Chief information officer
c. Risk manager
d. Application development
16. Tactical security plans:
a. Establish high-level security policies
b. Enable entitywide security management
c. Reduce downtime
d. Deploy new security technology
17. Who is accountable for information security?
a. Everyone
b. Senior management
c. Security officer
d. Data owners
18. Security is most expensive when addressed in which phase?
a. Design
b. Rapid prototyping
c. Testing
d. Implementation
19. Information systems auditors help the organization:
a. Mitigate compliance issues
b. Establish an effective control environment
c. Identify control gaps
d. Address information technology for financial statements
20. Long-duration security projects:
a. Provide greater organizational value
b. Increase return on investment (ROI)
c. Minimize risk
d. Increase completion risk
21. Setting clear security roles has the following benefits except:
a. Establishes personal accountability
b. Enables continuous improvement
c. Reduces cross-training requirements
d. Reduces departmental turf battles
22. Well-written security program policies should be reviewed:
a. Annually
b. After major project implementations
c. When applications or operating systems are updated
d. When procedures need to be modified
23. Orally obtaining a password from an employee is the result of:
a. Social engineering
b. Weak authentication controls
c. Ticket-granting server authorization
d. Voice recognition software
24. A security policy that will stand the test of time includes the following except:
a. Directive words such as shall, must, or will
b. Defined policy development process
c. Short in length
d. Technical specifications
25. Consistency in security implementation is achieved through:
a. Policies
b. Standards and baselines
c. Procedures
d. SSL encryption
26. The ability of one person in the finance department to add vendors to the vendor database and subsequently pay the vendor illustrates which concept?
a. A well-formed transaction
b. Separation of duties
c. Job rotation
d. Data sensitivity level
27. Which function would be most compatible with the security function?
a. Data entry
b. Database administration
c. Change management
d. Network management
28. Collusion is best mitigated by:
a. Job rotation
b. Data classification
c. Defining job sensitivity level
d. Least privilege
29. False-positives are primarily a concern during:
a. Drug and substance abuse testing
b. Credit and background checks
c. Reference checks
d. Forensic data analysis
30. Data access decisions are best made by:
a. User managers
b. Data owners
c. Senior management
d. Application developer
31. Company directory phone listings would typically be classified as:
a. Public
b. Classified
c. Sensitive information
d. Internal use only
Domain 2
Access Control James S. Tiller, CISSP
Introduction

Controlling access to systems, services, resources, and data is critical to any security program. Without a comprehensive approach to controlling access, there are few options for managing the security posture of an organization. The ability to clearly identify, authenticate, authorize, and monitor who or what is accessing the assets of an organization is essential to protecting the environment from threats and vulnerabilities.1

Access controls are a collection of mechanisms that work together to protect the assets of the enterprise. By aligning people, process, and technology, organizations can establish clear oversight, allowing them to reduce exposures, build efficiencies, and have confidence in their control environment.

Access control is the backbone of information security. It is the enforcer of policy, offers assurance in enterprise operations, supports business objectives, and assists in forensics investigations. The importance of access controls cannot be overstated, and access control is pervasive in the discussion of information security. Although access control is represented as a unique domain within the Common Body of Knowledge (CBK)®, readers will see threads of access control in all of the other domains, as well as attributes of those domains in the access control domain.

CISSP® Expectations

The objectives of this section are to ensure that a CISSP is able to:

• Describe the access control concepts and methodologies
• Identify access control security tools and technologies
• Describe the auditing mechanisms for analyzing behavior, use, and content of the information system

Confidentiality, Integrity, and Availability

The common thread among good information security objectives is that they address all three core security principles. The goals of information security are to ensure confidentiality, integrity, and availability (CIA).
• Confidentiality prevents unauthorized disclosure of systems and information.
• Integrity prevents unauthorized modification of systems and information.
• Availability prevents disruption of service and productivity.

Clearly, access controls play a key role in ensuring the confidentiality of systems and information. Managing access to organizational assets is fundamental to preventing exposure of data. Managing an entity's admittance and rights to specific enterprise resources ensures that valuable data and services are not abused, misappropriated, or stolen.

The act of controlling access inherently provides features and benefits that speak to the integrity of business assets. By preventing unauthorized access, organizations can achieve greater confidence in data and system integrity. Without controls to manage who has access to specific resources, and what actions they are permitted to perform, there are few compensating controls that make certain information and systems are not modified by unwanted influences. Moreover, access controls offer greater visibility into determining who or what may have altered data or system information, potentially affecting the integrity of those assets. Access controls can be used to couple an entity with the actions taken against valuable assets, allowing organizations to have greater command over the environment and a better understanding of the state of their security posture.

Access controls encompass the operational relationships at all layers within a computing environment. For example, firewalls employ rules that permit, limit, or deny access to various services on systems. By reducing the exposure to threats, controls protect potentially vulnerable system services, ensuring that system's availability. By reducing exposure to unwanted and unauthorized entities, organizations can limit the number of threats that can affect the availability of systems, services, and data.

There is a plethora of examples of how access control is essential to the confidentiality, integrity, and availability of enterprise assets. Throughout this chapter we will discuss all the characteristics of access control and its applicability to CIA and the security posture.

Definitions and Key Concepts

Access controls are a collection of mechanisms that work together to protect the assets of the enterprise. They help protect against threats and vulnerabilities by reducing exposure to unauthorized activities and providing access to information and systems to only those who have been approved.
Introduced above, although access control is a domain within the CBK, it is the most pervasive and omnipresent aspect of information security. Therefore, access controls essentially encompass all aspects and levels of an organization:

• Facilities
• Support systems
• Information systems
• Personnel — management, users, customers, business partners, etc.

Additionally, all entry points to the organization need some type of access control. Given the pervasive nature and importance of access controls throughout the security posture, it is necessary to isolate the four key attributes that enable security management. Access controls enable management to (see the sketch after this list):

• Specify which users can access the system
• Specify what resources they can access
• Specify what operations they can perform
• Provide individual accountability
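As a rough illustration of how these four capabilities come together, the sketch below models a single access decision plus an audit trail. It is a minimal, hypothetical example: the user names, resource names, and permission table are invented for illustration and do not represent any particular product or a prescribed (ISC)2 approach.

    # Minimal sketch: one permission table drives who (user), what (resource),
    # and which operations are allowed, while every decision is logged to
    # support individual accountability. All names and values are illustrative.

    from datetime import datetime, timezone

    # Which users may access which resources, and what they may do there.
    PERMISSIONS = {
        ("alice", "billing_system"): {"read", "execute"},
        ("bob", "file_server"): {"read"},
    }

    AUDIT_LOG = []  # in a real system this would be tamper-resistant storage


    def check_access(user: str, resource: str, operation: str) -> bool:
        """Return True if the operation is permitted; log the decision either way."""
        allowed = operation in PERMISSIONS.get((user, resource), set())
        AUDIT_LOG.append({
            "time": datetime.now(timezone.utc).isoformat(),
            "user": user,
            "resource": resource,
            "operation": operation,
            "result": "permitted" if allowed else "denied",
        })
        return allowed


    if __name__ == "__main__":
        print(check_access("alice", "billing_system", "execute"))  # True
        print(check_access("alice", "billing_system", "write"))    # False, but still logged
        for entry in AUDIT_LOG:
            print(entry)

Note that the denied attempt is recorded as well; it is this record of both successful and unsuccessful actions that gives the log its value in a forensics investigation.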
Each of these four areas, although related, represents an established approach to defining an access control strategy.

Determining Users

The first step in managing resources is defining who can access a given system or information. The concept of identifying who is permitted access and providing credentials necessary for their role is fundamental to security and ancient in practice. In early cultures, a rite of passage was required to obtain a garment, marking, or even a scar signifying you were approved for various activities within the tribe, which translated to access. As populations grew and became more sophisticated, new methods were developed to provide access to an approved community. Over 4000 years ago, the Egyptians developed the first lock-and-key combinations. Wooden locks were operated by a wooden key that controlled pins that would disengage a bolt permitting access. The key would be provided to those who had been identified as needing access. In today's digital environment, users may be employees, contractors, consultants, partners, clients, or even competitors that organizations need to identify as requiring access. The act of specifying which users can have access to a system is typically driven by an operational demand, such as providing access to an account system so that bills can be paid by users in the financial department.
The most significant aspect of determining which users will be provided access is clearly understanding the needs of the user and the level of trust given to that person or entity. An identification process must exist that takes into consideration the relevance of that user in the light of business needs, organizational policy, information sensitivity, and security risk. It is important to understand that with each new user or community, the threat profile of an organization is increased. For example, an organization may determine that one of its partners and all its employees need access to a given system. Upon providing that access, the potential threats to the organization now include that partner organization. Not only must the relationship be founded on trust, established by legal or other mechanisms between the two entities, but it also must consider the increase in the number of users, representing a broader spectrum of threat.

Usually, making these determinations is bound to operational needs and the capabilities of the access control system. The more sophisticated the access control system, the greater the number of options to support a demand in a secure fashion. It is not uncommon for organizations to have several different access control strategies to accommodate various needs, resulting in the provision of unique access solutions. However, this is not a security best practice, and the objective is to have a consistent access control strategy to avoid complexity, which can lead to unwanted exposures.

Defining Resources

Typically, determining what resources users or a community is permitted to access is based on the need that identified them as users and the role they represent to the organization. Specifying what resources are permitted for a user to access is critical to the validity and capability of the access control system. The strongest access control system can be rendered useless in the broader implications of security if specific resources are not identified with regard to a user's given role. For example, if a user is identified as needing access, but specific resources are not qualified, he or she will effectively have unfettered access to all systems and data, greatly increasing the risk to confidentiality, integrity, and availability. It is essential to bind a user, group, or entity to the systems and data they are accessing. Resources can be information, applications, services, servers, storage, processes, printers, or anything that represents an asset to the organization that can be utilized by a user. Every resource, no matter how mundane, is an asset that must be afforded protection (based on a cost–benefit analysis) from unwanted influences and unauthorized use. The user's role will help identify what resources are necessary for him or her to perform the required function. Once the required resources are allocated, then controls can be administered to specify the level of use.
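One way to picture the binding between a role and its qualified resources is a simple, default-deny mapping. The sketch below is hypothetical; the role, resource, and operation names are invented, and it is meant only to show that anything not explicitly qualified for a role remains unreachable.

    # Minimal sketch of qualifying resources per role with a default-deny stance.
    # Role names, resources, and operations are illustrative assumptions.

    ROLE_RESOURCES = {
        "accounts_payable": {"billing_system": {"read", "execute"}},
        "backup_operator": {"file_server": {"backup"}},
    }


    def qualified_operations(role: str, resource: str) -> set:
        """Operations a role may perform on a resource; an empty set means no access."""
        return ROLE_RESOURCES.get(role, {}).get(resource, set())


    print(qualified_operations("accounts_payable", "billing_system"))  # {'read', 'execute'}
    print(qualified_operations("accounts_payable", "hr_database"))     # set() -- default deny

The default-deny stance addresses exactly the "unfettered access" problem described above: a user whose resources are never qualified should end up with access to nothing, not everything.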
Specifying Use

Access controls move well beyond simply defining resources users can access. In addition to these requirements, access control is the mechanism used to specify the level of use of a given resource and the actions permitted by a user. The most fundamental representation of controlling the level of use can be associated with a file system and data. Most file systems provide multiple levels of permissions, such as read, write, and execute. Depending on the file system used to store data, there may be methods of permitting much more granular controls. These may include the ability to provide access to information to a specific user, but only permitting him or her to perform a certain task. For example, a user with the role of data backup will be allowed to perform administrative functions, but not to access or alter the information. (Access permissions will be covered in greater detail below.)

A user may have the need to access an application, and therefore be provided execute privileges. However, he or she may not have read or write privileges, to ensure that he or she cannot obtain or modify the application. The same philosophy can be applied to any resource. For example, you may wish to provide a user with access to a printer, but his role requires another level of approval for printing to avoid abuse. In this case, you may provide access to a printer spool, but not permissions that allow unapproved printing. Once the user is identified and authenticated, an access control system must be sensitive to the level of authorization for that user to utilize the identified resources. Therefore, it is simply not enough to identify and authenticate a user to offer access to resources. It is necessary to control what actions are permitted for a specified resource and the user's role.

Accountability

Individual accountability is the ability for the organization to hold people responsible for their actions. As mentioned above, access controls can provide ample material in forensics investigations, providing evidence to prove or disprove a user's involvement in a given event. A comprehensive access control strategy will include the monitoring and secure logging of identification, authentication, and authorization processes. It should also include a log of actions taken on behalf of the user, with all the appropriate and pertinent information associated with the transaction. Moreover, a properly configured system will also log attempted actions by an authenticated user who does not have the necessary privileges for the requested task. Therefore, when properly
employed, an access control system can provide substantiation of user activities, linking a user to a transaction or an attempt to access, modify, or delete information.

Access Control Principles

The first element of access control is to establish an access control policy. As with other characteristics of information security, the security policy is essential in stating expectations, defining standards and requirements, and specifying responsibilities. An access control policy is a document that specifies how users are identified and authenticated and the level of access granted to resources. The existence of an access control policy ensures that decisions governing the access to enterprise assets are based on a formalized organizational statute. In order for users to be provided access to resources, which privileges they will be given must be clearly documented in an access control policy. The absence of a policy will result in inconsistencies in provisioning, management, and administration of controls. The policy will provide for a centralized and managed representation of necessary procedures, guidelines, standards, and best practices concerning the oversight of access management. For example, to limit access to the network, there must first be a written policy implemented by a set of procedures that specify who will be given access to the network and what type of access will be given.

The access control policy is usually based on two standards of practice: separation of duties and least privilege. In addition to these, policies are based on the sensitivity of the data that is processed and stored.

Separation of Duties. The primary objective of separation of duties is the prevention of fraud and errors. This objective is achieved by disseminating the tasks and associated privileges for a specific process among multiple users. It acts as a deterrent to fraud or concealment because collusion with another individual is required to complete the fraudulent act. The concept is founded on the management of tasks that can be distributed across multiple users, ensuring that no individual acting alone can compromise the security of a system or gain unauthorized access to data.
The first action to employing separation of duties is defining elements of a process or work function. Processes or work functions may be a collection of tasks that must be performed to achieve an objective. A work function may simply be an administrative duty, such as performing a backup, copying files, or cycling log files. Work functions can also encompass highly complex and potentially vital business elements that should not be in the control of any one person. Nevertheless, any job can be represented as a series of elements, or tasks, that are performed by one or more users.
Access Control Once the various elements are identified and quantified, it is necessary to divide the elements among different users or roles within a function. For example, a specific process may require seven elements to be performed. The first three may be preparing data for input, the fourth executes an application, and the last three are to process the output. Given this scenario, it may be wise to separate the three groups of elements among three different users, each with specific privileges for performing his or her assigned elements within a function. Utilizing this method, one user is tasked with preparing data, a second user with performing the process, and a third with managing the output. Although the entire function requires three uniquely privileged users, it ensures that no one person can manipulate the system. Static or Dynamic. Separation of duties can be either static or dynamic. Compliance with static separation requirements can be determined simply by the assignment of individuals to roles and allocation of specific elements to roles for which users are assigned. The more difficult scenario is dynamic separation of duties, where compliance with requirements can only be determined during system operation. The objective behind dynamic separation of duties is to allow more flexibility in operations.
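As a rough sketch of how such rules might be enforced in software, the example below checks a static rule (a user must hold the role an element requires) and a dynamic rule (whoever initiated a transaction may not also authorize it). The function, role, and transaction names are hypothetical, and the payment scenario it anticipates is discussed next.

    # Hypothetical sketch of static vs. dynamic separation-of-duties checks.
    # User names, roles, and the transaction structure are illustrative only.

    USER_ROLES = {
        "carol": {"payment_initiator", "payment_authorizer"},
        "erin": {"payment_authorizer"},
    }


    def static_ok(user: str, required_role: str) -> bool:
        """Static check: the user must be assigned the role the element requires."""
        return required_role in USER_ROLES.get(user, set())


    def dynamic_ok(user: str, transaction: dict) -> bool:
        """Dynamic check: a user may authorize a payment only if someone else initiated it."""
        return static_ok(user, "payment_authorizer") and transaction["initiated_by"] != user


    payment = {"id": 42, "initiated_by": "carol"}
    print(dynamic_ok("carol", payment))  # False: carol initiated this payment herself
    print(dynamic_ok("erin", payment))   # True: a different, authorized user may approve it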
Consider the case of initiating and authorizing payments. A static policy could require that no individual who can serve as payment initiator can also serve as payment authorizer. Such a policy may be too rigid for commercial use, making the cost of security greater than the loss that might be expected without the security. More flexibility could be allowed by a dynamic policy that allows the same individual to take on both initiator and authorizer roles, with the exception that no one could authorize payments that he had initiated. In a dynamic scenario, the user is permitted to perform seemingly conflicting elements, but process management allows for this to occur, offering flexibility without compromising security. Applicability. To determine the applicability of separation of duties, two distinct factors must be addressed: sensitivity of the function and the available processes that lend themselves to distribution. SENSITIVITY OF FUNCTION. Sensitivity of the function must take into consideration the criticality of the job performed and exposure to fraud, misuse, or negligence. It will be necessary to evaluate the importance of a given transaction and its relationship to enterprise security risk, operations, and, of course, CIA of assets. It is important to be aware that seemingly mundane tasks may require separation of duties practices in the larger scope of security, risk, and CIA. For example, a single user performing backup and restore procedures would have the ability to manipulate system data to
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® cover unauthorized activity, change information, or destroy valuable resources undetected. There are other activities within an organization that are not only important when considering separation of duties, but their technical and procedural architecture assist in establishing these controls. An example is application development. Typically there is a development platform, testing facility, rollout procedures, validation, and publication. In application development, the integrity of libraries used for the development is critical. It is also important that live systems and proprietary information are not used within the testing environment, or there may be a risk of exposing sensitive information. Therefore, the development architecture lends itself to establishing separation of duties throughout the process in addition to functions within a given domain. ELEMENT DISTRIBUTION. The second factor when determining the applicability of separation of duties is understanding what elements within a function are prone to abuse, which ones are easily segmented without significantly disrupting operations, and what available skills or pool of users can perform the different elements of the function. These can be summarized as:
• Element identification, importance, and criticality • Operational considerations • User skills and availability Each function will have one or more elements that must be performed to complete the transaction. It is necessary to evaluate each element and its relevance to abuse. In other words, some elements within a function may represent milestones within the function that offer opportunities for fraud or abuse that will require a different user with unique privileges to complete the function. It is possible to collect different elements into groups that together represent several steps that lead to a milestone element. The key is to evaluate each element and the role it plays in performing the function. Once each element is assessed against the potential for abuse, they can begin to be distributed to various users. In the event a collection of elements within a function do not offer a clear point of segmentation, it may be necessary to incorporate a new milestone element as a validation and approval point within the function. For example, it may take several elements to perform a function, but given complexity, automated tasks, or myriad reasons, none of them can be clearly segmented. However, an approval element or a transference point can be incorporated. These can materialize as process approval interruptions; e.g., at a specific point, a manager sends an e-mail, applies a digital signature, or adds a validation mark to the data that must be present for the primary user to continue the remaining processes.
Access Control One of the key attributes of a successful security program is operating effectively within a larger environment. When considering separation of duties, the impact to the function and its role in the business is essential to overall success. Without taking into account the larger implications, security-related activities can hinder the process and make it prone to circumvention. In the establishment of separation of duties practices for a function, the impact to the operations must be taken into account. Therefore, one must consider the impact to the timing of the function as well as meaningful options in the event there is a system failure or outage. It is important not to sacrifice security, but rather to have alternate compensating controls that can meet the objectives for security. Clearly, the separation of duties requires different users, and each of those users must have the appropriate skills and training to perform the specific element of a given function. Additionally, and somewhat more obvious, there must be enough users to perform all the elements that have been distributed. Finally, there should be more than one user assigned to a given element in the event that some leave the organization or are not available to perform the action. Of course, one must be careful not to have too many users assigned to a task, potentially increasing the possibility of collusion. Least Privilege. The principle of least privilege has been described as
one of the more important characteristics of access control for meeting security objectives. The principle of least privilege requires that a user or process be given no more privilege than necessary to perform a job. The objective is to limit users and processes to access only resources necessary to perform assigned functions. Ensuring least privilege requires identifying what the user's job is, determining the minimum set of privileges required to perform that job, and restricting the user to a domain with those privileges and nothing more. By denying users privileges that are not necessary for the performance of their duties, management ensures that those denied privileges cannot be used to circumvent the organizational security policy.

Information Classification

Information classification is the practice of evaluating the risk level of the organization's information to ensure that the information receives the appropriate level of protection. The application of security controls to information has a cost of time, people, hardware, software, and ongoing maintenance resources that must be considered. Applying the same level of control to all of the company's assets would result in wasting resources by overprotecting some information while underprotecting other information. This situation results because security dollars are spent uniformly to protect all assets at the same level, when the budget could be better
allocated to provide increased protection to those assets considered to be of greater value or higher sensitivity. By applying protection controls to the information based upon the classification, the organization gains efficiencies and thus reduces the cost of information security.

Many organizations have thousands of data files representing many different data types. Information is created in great volumes on a daily basis from a variety of transactional systems, as well as aggregated into databases and data warehouses to provide support for decision making. The information is stored on backup tapes, copied to floppy drives, and burned to CDs and DVDs for portability. Information is stored on portable computers and network drives, and in duplicate copies stored within the e-mail systems to support the sharing of information. The same information is printed and filed, and stored off site for business continuity and disaster recovery purposes. The same data typically has multiple versions, is stored in multiple locations, and is capable of being accessed by different individuals in each of these locations. The duplication of storage of information highlights the challenge of the problem. Where is the organization's information? How should the information be handled and protected? Who should have access to it? Who owns the information? Who makes the decisions around these parameters? These questions form the impetus for implementing a data classification strategy.

Data Classification Benefits. The benefits of classifying the information may be self-evident by now; however, there are many benefits to classifying the information in addition to applying the appropriate security controls to the resource, such as:
• Creating a greater organizational awareness of the need to protect company information. • Critical information is identified that supports business recovery scenarios in focusing the recovery efforts on the most critical information. • Greater understanding of the value of the information to be protected. • Clearer direction for the handling of sensitive information. • Increased confidentiality, integrity, and availability of information by focusing limited funds on the resources requiring the highest level of protection and providing minimal controls for the information with less risk of loss. • Establishing ownership for the information, which increases the likelihood that the information will be used in the proper context and those accessing the information will be properly authorized. • Periodic review and approval processes maintain awareness of the importance of protecting information. 102
Access Control • Placement of information with higher levels of classification on more reliable storage and backup mechanisms. • Understanding of information and its location identifies areas that may need higher levels of protection from vendors, consultants, and temporary employees. • Reducing expense of inefficient storage of nonsensitive documents in physical file cabinets. Establishing a Data Classification Program. Establishment of the data classification strategy and plans for implementation may seem a bit onerous; however, once the work of defining the classification levels and determining the classification of the individual information has been completed, the value to the organization is well worth the effort. The following sections walk through the steps of establishing and sustaining a data classification process. Although the results of the data classification programs may differ, depending upon the nature of the information processed, the steps identified below are a representation of the concepts introduced by programs of this type. The sequence or the steps included may vary or be combined, depending upon the size, complexity, and culture of the organization.
1. Determine data classification project objectives.
2. Establish organizational support.
3. Develop data classification policy.
4. Develop data classification standard.
5. Develop data classification process flow and procedure.
6. Develop tools to support process.
7. Identify application owners.
8. Identify data owners and data owner delegates.
9. Distribute standard templates.
10. Classify information and applications.
11. Develop auditing procedures.
12. Load information into central repository.
13. Train users.
14. Periodically review and update data classifications.
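Several of the steps above (developing tools, distributing templates, classifying, and loading a central repository) revolve around capturing a consistent record for each data type. The sketch below shows one hypothetical shape such a record might take; the field names, values, and classification levels are illustrative assumptions, not a prescribed schema.

    # Hypothetical classification record, as a data classification template or
    # central repository entry might capture it. Field names are illustrative.

    from dataclasses import dataclass

    CLASSIFICATION_LEVELS = ("public", "internal use only", "confidential")


    @dataclass
    class ClassificationRecord:
        data_type: str          # e.g., "vendor pricing list"
        application: str        # system where the data lives
        data_owner: str         # business owner who decides usage
        classification: str     # one of CLASSIFICATION_LEVELS
        last_reviewed: str      # date of the most recent periodic review

        def __post_init__(self):
            if self.classification not in CLASSIFICATION_LEVELS:
                raise ValueError(f"unknown classification: {self.classification}")


    record = ClassificationRecord(
        data_type="vendor pricing list",
        application="procurement system",
        data_owner="purchasing manager",
        classification="internal use only",
        last_reviewed="2006-10-19",
    )
    print(record)

Collecting the same fields for every data type is what later makes it possible to report by owner, by application, or by classification level from the central repository.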
Determine Data Classification Project Objectives. Although there are many benefits, as previously articulated, it is helpful to document the specific project objective to contain the scope of the effort and to determine when early deliverables are completed, so that these accomplishments may be celebrated to sustain those involved in later phases of the project. The purpose is important for establishing the support of those needed to carry out the project. Although security professionals understand the importance of data classification, without the proper positioning of the effort, the end users may perceive the effort as another project requiring their resources, with little payback to their individual departments. Objectives should be
stated using the benefits, as well as exploring enhancements to the effectiveness of the organization through:

• Ensuring privacy of information
• Improved workflow
• Increased accuracy of information (integrity)
• Increased availability of information (availability)
• Avoiding adverse opinion by reducing disclosure (confidentiality)
• Reduction in costs of overprotection
• Ability for managers to enforce accountability
• Protection of departmental intellectual property or trade secrets
Establish Organizational Support. Senior management support is essential to the continued operation of the data classification program. There may be a perception within management that all information should be treated as confidential and is secured by the physical barriers to the facility and the firewalls and other security controls protecting the environment. This view promotes the concept that all information should be protected equally. Information should be provided, even if utilizing only a single department to illustrate the point, of the costs of protecting all of the information at the same level.

Develop Data Classification Policy. The data classification policy communicates the requirement to the organization to classify the information assets. The policy also communicates the primary purpose of data classification, which is to ensure that the appropriate protections are applied according to the level of classification. The policy statement is a high-level description of the requirement, including the scope of users and systems it applies to, and what is expected. Policies may indicate the responsibilities of users and management in handling the information.

Develop Data Classification Standard. Policies are generally written at a high level, with the details reserved for supporting standards. The standards communicate how to determine the classification of a particular information item, as well as how that information should be subsequently handled. Involvement with business or application owners, as well as the IT department, is important in determining the matrices. This establishes the business owner buy-in to the standards, as well as provides the business operation's perspective for determining how the information should be handled.

Develop Data Classification Process Flow and Procedures. Documented procedures assist in the ongoing operation of classifying information. Although the start-up efforts will require a large amount of resources, as the organization may have many files that have not been looked at through the data classification lenses, ongoing efforts will require that the data is reviewed and updated on a periodic basis. The initial gathering of the data
classifications and the process utilized are driven by a documented procedure so that all of the parties involved in the effort are aware of their individual roles and requirements to complete the classification. Flowcharts can be helpful in addition to the documented written processes, as individuals receive and retain information through different means.

Develop Tools to Support Process. Various tools, such as word processing documents, spreadsheets, databases, and presentations, support the collection process. Many different data types can be obtained through the process. Standardized templates facilitate the collection process, as well as the ability to generate reports by data type to ensure that the designations are consistent and are following the prescribed data classification policy and associated standard. Once the spreadsheets are distributed to the individuals completing the information, the results can be exported to a database for consolidation and review.

Identify Application Owners. The business owner of the application provides the functional requirements view of what is necessary for the business. He or she will have the most intimate knowledge of the data and how it is used. The systems or information technology owner acts primarily as a data custodian of the information, and may understand the processing and technical storage and security protections of the information. Both individuals contribute to an understanding of the classification requirements of the information and should be identified.

Identify Data Owners and Data Owner Delegates. Data owners are those business users that understand the information and make decisions about the usage of the information. They may have designees or data delegates that they have empowered to make the day-to-day operational decisions as to who is allowed to read, modify, or delete the business information. These individuals are the primary individuals that are involved in the classification process, as they have the greatest business knowledge of the information.

Distribute Standard Templates. The data classification templates created in the earlier steps are distributed to collect the classification information. This places the templates in the hands of the data owners or their delegates to provide information on the information owned by their departments.

Classify Information and Applications. Once the templates have been distributed, the data owners or their delegates use the data classification standard to classify the information. Typically there are three to four levels of data classification used by most organizations, as follows:
• Public: Information that may be disclosed to the general public without concern for harming the company, the employees, or business partners.
No special protections are required, and these are sometimes referred to as unclassified. For example, information that may be posted to a company's public Internet site, public announcements (after public release), marketing materials, cafeteria menus, and any internal documents that would not present harm to the company if they were disclosed would be considered public.
• Internal use only: Information that could be disclosed within the company, but could harm the company if disclosed externally. Information such as customer lists, vendor pricing, organizational policies, standards and procedures, and internal organization announcements would need baseline security protections, but does not rise to the level of protection required for confidential information.
• Confidential: Information that, if released outside of the organization, would create severe problems for the organization, e.g., information that provides a competitive advantage, is important to the technical or financial success of the organization (i.e., trade secrets, intellectual property, research designs), or compromises the privacy of individuals. Information may include payroll information, health records, credit information, formulas, technical designs, restricted regulatory information, senior management internal correspondence, or business strategies or plans. These may also be called top secret, privileged, personal, sensitive, or secret and highly confidential.

Sometimes a fourth level is added to further define information that has a very high level of sensitivity and requires security controls beyond those defined for confidential information. Unless there is a clear distinction in how the information should be handled between the three levels and the additional fourth level, the additional classification level is not added.
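To make the relationship between classification levels and handling requirements concrete, the minimal Python sketch below captures a three-level scheme as a simple lookup table. The level names and handling attributes are illustrative assumptions, not part of the CBK or of any particular organization's standard.

# Illustrative only: the levels and handling attributes are assumptions.
HANDLING_RULES = {
    "public":       {"encrypt_at_rest": False, "shred_on_disposal": False, "external_release": True},
    "internal":     {"encrypt_at_rest": False, "shred_on_disposal": True,  "external_release": False},
    "confidential": {"encrypt_at_rest": True,  "shred_on_disposal": True,  "external_release": False},
}

def handling_for(level):
    # Unknown levels default to the most restrictive handling (a fail-safe posture).
    return HANDLING_RULES.get(level.lower(), HANDLING_RULES["confidential"])

print(handling_for("Internal"))

Defaulting unrecognized levels to the most restrictive handling reflects the common practice of failing safe when a classification is unclear.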
Develop Auditing Procedures. Once information is classified, classifications rarely change, unless the information has been misclassified, the previously protected information is now public knowledge, or time has made public knowledge of the information less harmful to the organization. In most cases, if there was a reason to protect the information to a certain level, this information continues to be protected at this same level through the useful life of the information. New data types are always generated and need to be classified. Existing data types should be periodically reviewed to ensure that the classifications are correct. Data classification auditing is the process of reviewing the classifications to ensure the accuracy of the information, thus ensuring that the appropriate security protections continue to be applied.

Load Information into Central Repository. The data classification information obtained through the data classification procedures and templates is loaded into a central repository to support the analysis of the collections, as well as to serve as the initial database for the ongoing updating of the classifications. A database tool provides the ability to examine the information from multiple perspectives, such as listing all of the data types owned by a data owner, all of the data types of a particular classification level, or which data types are associated with which applications.

Train Users. Once all of the information is collected in a repository, the information can be made available to the end users to determine how information should be handled. This functionality provides the end user with access to the organizational data types. Reports could also be generated for the end users to provide the same information. If the end user does not understand what information is public, internal use only, or confidential, or if the end user does not understand the differences in how to handle each type of information, then the investment in data classification will have limited results.

Periodically Review and Update Data Classifications. Scheduling the auditing procedures developed on a periodic basis increases the quality of the information. Providing the capability to update and add new data classifications outside of the normal cycle also increases the ongoing data quality of the information.

Labeling and Marking. The labeling and marking of media with classification levels provides the ability to treat the information contained within the media with the handling instructions appropriate to the respective level. For example, a backup tape may be labeled with a serial number and "Company Confidential" to indicate how the information should be treated. Organizations may decide not to label individual information, but rather control the information within the organization according to restrictions based upon the type of information. Shredding confidential information or locking away information at the end of the day through a "clean desk policy" can address a class of confidential information.

Data Classification Assurance. Periodically testing the data classifications provides assurance that the activities are being performed. The audit procedures previously noted uncover those data types that need to be added or reclassified. Random audits of user areas, such as checking desktops for confidential documents not returned to file drawers, information left overnight in open shredding bins, files in electronic file folders accessible by anyone within the organization, confidential information posted to a public Web site, or information left on copiers, printers, and fax machines, can provide information regarding organizational compliance. Encouraging end users to report security incidents related to mishandling of classified information can also be a source of assurance.
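As a minimal sketch of the kind of multi-perspective queries described above, the following Python fragment filters a hypothetical repository of classification records by owner and by level. The field names and sample records are invented for illustration only.

# Hypothetical repository records; field names and values are invented.
repository = [
    {"data_type": "payroll records", "owner": "HR director", "classification": "confidential", "application": "payroll"},
    {"data_type": "cafeteria menu", "owner": "facilities", "classification": "public", "application": "intranet"},
    {"data_type": "customer list", "owner": "sales director", "classification": "internal", "application": "CRM"},
]

def data_types_owned_by(owner):
    return [r["data_type"] for r in repository if r["owner"] == owner]

def data_types_at_level(level):
    return [r["data_type"] for r in repository if r["classification"] == level]

print(data_types_at_level("confidential"))  # ['payroll records']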
Summary. Data classification provides an organizational understanding of the information to be protected and creates the opportunity to allocate information protection resources in a more effective manner, by allocating more protection to the information that has the most need of security controls. End users gain a greater awareness of the differences in the information that is handled and the need to secure the information.
Access Control Categories and Types

In the development of an access control architecture, it is necessary to fully understand the different categories and types of controls. This will establish a foundation for later discussion on access control technology, practices, and processes. The objectives of this section are to:

• Define access controls
• Describe the access control categories
• Describe the types of access controls

Control Categories

Categories are characteristics of an access control system that can be mapped against any control type. Access control categories are very general in nature to offer administrators the opportunity to organize their access architecture and quickly determine gaps. Elements of an access control solution or device can be represented in the form of categories to ensure that all aspects that define the capabilities of access control are met. For example, a particular path of access, such as remote access to an application, may incorporate identification and authentication mechanisms, filters, rules, rights, logging and monitoring, and policy. These elements can be organized into categories to ensure that the overall capability of the control environment is meeting expectations and best practices. There are seven main categories of access control:

• Preventive — avoid incident
• Deterrent — discourage incident
• Detective — identify incident
• Corrective — remedy circumstance/mitigate damage and restore controls
• Directive — mandate or specify required behavior
• Recovery — restore conditions to normal
• Compensating — alternative control (e.g., supervision)

Preventive. The existence of access controls prevents the potential for incidents by applying restrictions to user activity. Although obtaining access is initially provided by an authorization process, the provisioning of privileges based on the authorization prevents exposure to various threats.
By applying limits and restrictions to a given role to which a user is assigned, an organization can avoid exposing assets to unwanted activities. The initial authorization of users establishes a relationship ultimately granting access. Based on that predefined relationship, limits can be applied that only permit a user to perform a given function. The practice of managing privileges related to access, role, and asset sensitivity is a control that prevents exposure to various incidents. Within this context, it is important to note that privileges are provided upon successful identification and authentication. Therefore, the preventive nature of access controls is founded on known and authenticated entities. If a user authenticates and is permitted to perform any function on a given system, the potential for intentional or accidental harm is significant. However, by allowing only those privileges that are necessary to perform the function, the organization can avoid harmful actions.

Deterrent. Access controls act as a deterrent to threat agents by the
simple fact that the process of controlling access is founded on a challenge. By forcing the identification and authentication of a user, service, or application, and all that it implies, the potential for incidents associated with the system is significantly reduced. To demonstrate, if there are no controls for a given access path, the number of incidents and the level of impact become infinite. Controls inherently reduce exposure by applying a methodical oversight, which acts as a deterrent, curbing a threat agent's appetite in the face of probable repercussions.

The discouraging of incidents is best seen with employees and their propensity to intentionally perform an unauthorized function, which can lead to an unwanted event. When users understand that by authenticating into a system to perform a function their activities are logged and monitored, it reduces the likelihood they will attempt such an action. Threats operate based on anonymity, and any potential for identification and association with their act is avoided at all costs. It is for this fundamental reason that access controls are a key point of circumvention by attackers. Deterrents also take the form of potential punishment if users do something unauthorized. For example, if the policy specifies that an unauthorized person using a sniffer will be fired, that will deter that act.

Detective. Clearly, access controls are a deterrent to threats and can be aggressively utilized to prevent harmful incidents through the application of least privileges. However, there is also a detective nature of access controls
that can provide significant visibility into the access environment and help organizations manage their access strategy and related security risk. As mentioned above, privileges offer the ability to reduce the exposure of enterprise assets by limiting the capabilities of an authenticated user. However, there are few options that control what a user can perform once privileges are provided. For example, if a user is provided write access to a file and that file is damaged, altered, or negatively impacted purposefully or unintentionally, the access controls applied will offer visibility into the transaction. The control environment can be implemented to log activity regarding the identification, authentication, authorization, and use of privileges. This can be used to identify when errors occur or attempts are made to perform an unauthorized action, or to validate when provided credentials were exercised. The control system provides evidence of attempted actions and tasks that were executed by an authorized user.

Detection aspects of access control can range from the evidentiary, such as postmortem investigations, to real-time alerting of inappropriate activities. This philosophy can be applied to many different characteristics of the security environment. Access awareness can be related to intrusion detection systems (IDSs), virus controls, applications, Web filtering, network operations, administration, logs and audit trails, and security management. Visibility into the environment is key to ensuring a comprehensive security posture. An access control architecture offers significant visibility into the activities within the environment, such as what assets are accessed and by whom, when transactions took place, and what data may be affected.

Corrective. When an event occurs, such as a security incident, or something more encompassing, such as a corporate merger, elements within the access control architecture may require corrective actions.
In the case of a security incident, it may be learned that the established controls were not effective in thwarting an attack, one that may have circumvented conventional controls. An access control solution must have the ability to enact corrective measures to reduce, avoid, or simply eliminate the threat. On the other end of the spectrum, corporate mergers and acquisitions can play havoc with established control practices, processes, and policy. The simple act of connecting two or more disparate networks in an effort to support a newly formed entity can represent an enormous threat to assets. This is especially true given that business owners will seek to openly share resources as soon as possible, while on the other hand there may be numerous disgruntled employees considering retribution.
Dynamics within the business and technical environment will typically have an impact on the control architecture, forcing change to ensure that the acceptable level of risk and desired security posture are maintained. The controls environment is a collection of various mechanisms to manage exposure of assets. Within this complex web there must exist an overall management capability that can comprehensively supervise the system in accordance with established policies while coping with environmental changes.

The number of possible corrective actions makes them difficult to quantify. They can range from rule set changes on firewalls, access control list updates on routers, or policy changes on system platforms to introducing certificates for 802.1x authentication in wireless networks, moving from single-factor to two-factor authentication for remote access, or introducing smart cards. The difficulty in quantification is founded on the fact that access controls are universal throughout the environment. Nevertheless, it is important that a consistent and comprehensive management capability exists that can coordinate and employ changes throughout the enterprise to enable policy compliance.

Recovery. Any changes to the access control environment, whether in the face of a security incident or event or to offer temporary compensating controls, need to be accurately reinstated and returned to normal operations.
There are several situations that may affect access controls, their applicability, status, or management. Events can include system outages, attacks, project changes, technical demands, administrative gaps, and employee status. For example, if an application is not correctly installed or deployed, it may adversely affect controls placed on system files or even have default administrative accounts unknowingly implemented upon install. An employee may be transferred, quit, or be on temporary leave that may affect policy requirements regarding separation of duties. An attack on systems may have resulted in the implantation of a Trojan, potentially exposing private user information, such as credentials. In any of these cases, an undesirable situation must be rectified as quickly as possible and controls returned to normal operations. Compensating. Compensating controls are introduced when the existing capabilities of a system do not support the requirement of a policy. Compensating controls can be technical, procedural, or managerial.
Although an existing system may not support the required controls, there may exist other technology that can supplement the existing environment, closing the gap in controls and meeting policy requirements. For example, the access control policy may state that Web-related
authentication must be encrypted over the Internet. Adjusting an application to support encryption for authentication purposes may be too costly. Secure Sockets Layer (SSL), an encryption protocol, can be employed and layered on top of the authentication process to support the policy statement. From a procedural perspective, separation of duties offers the capability to isolate certain tasks to compensate for technical limitations in the system to ensure the security of transactions. Management processes, such as authorization, supervision, and administration, can be used to compensate for gaps in the access control environment.

Finally, compensating controls can be temporary solutions to accommodate a short-term change, or to support the evolution of a new application, business development, or major project. Changes and temporary additions to access controls may be necessary for application testing, data center consolidation efforts, or even to support a brief business relationship with another company. The critical points to consider when addressing compensating controls are:

• Do not compromise stated policy requirements.
• Ensure that the controls do not adversely affect risk or increase exposure to threats.
• Controls are managed in accordance with established practices and policies.
• If temporary, controls are removed after they have served their purpose.

Types of Controls

There are three types of access control:

• Administrative: Defines the roles, responsibilities, policies, and administrative functions needed to manage the control environment.
• Physical: The nontechnical environment, such as locks, fire management, gates, and guards.
• Technical: Electronic controls, the personification of the control environment, where controls are applied and validated and information on the state of the environment is produced.

The categories can be mapped against the three types to demonstrate various control examples and options, as shown in Figure 2.1. As discussed above, elements of an access control solution or device can be represented in the form of categories to ensure that all aspects that define the capabilities of access control are met. The value of this matrix is that it can be applied to the entire organization, a specific domain within the environment, or a single access path, such as employee remote access.
Figure 2.1. Control examples for types and categories.
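A sketch of how such a category-by-type matrix might be represented for reporting or gap analysis is shown below in Python. The example controls are common illustrations and are not a reproduction of Figure 2.1.

# A sparse category-by-type matrix; the entries are illustrative examples only.
CONTROL_MATRIX = {
    ("preventive", "administrative"): "pre-employment screening",
    ("preventive", "technical"): "username/password login",
    ("preventive", "physical"): "badge-controlled door",
    ("detective", "administrative"): "log review procedure",
    ("detective", "technical"): "intrusion detection system",
    ("detective", "physical"): "CCTV monitoring",
}

def example_control(category, control_type):
    # Missing cells point to gaps that the access architecture should address.
    return CONTROL_MATRIX.get((category, control_type), "no example defined")

print(example_control("detective", "physical"))  # CCTV monitoring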
Administrative. Administrative controls represent all the actions, policies, and management of the control system. These represent any aspect of the environment that is necessary to oversee the confidentiality, availability, and integrity of the access controls, manage the people that use the system, set policy on use, and define standards for operations.
The aspects of administrative controls can be broad and vary depending on organizational needs, industry, and legal implications. Nevertheless, they can be broken into six major groups:

• Operational policies and procedures
• Personnel security, evaluation, and clearances
• Security policies
• Monitoring
• User management
• Privilege management
Operational Policies and Procedures. One of the less discussed aspects of administrative oversight is operations management of the controls environment and how it should be managed within the broader scope of enterprise architecture. Access control is realized by the alignment of capabilities of many systems and processes, collaborating to ensure that threats are reduced and incidents prevented. Therefore, other operational elements of the environment must be addressed in some fashion within the access control strategy.
There are several overarching operational processes that access controls must be represented in, or there may be a risk for failure at any point within the system. Following is a list of typical IT and security operational processes where access controls need representation:
• Change control
• Business continuity and disaster recovery
• Performance
• Configuration management
• Vulnerability management
• Product life-cycle management
• Network management
CHANGE CONTROL. When changes to the environment are required to accommodate a need, they must be defined, approved, tested, applied, verified, deployed, audited, and documented. Changes can be minor, such as a static route being added to a system or, more significant, the redesign of a storage solution. Every organization must have a change control process to ensure that there is a formalized process for making and documenting changes to the environment. Given the scope of access control, it is important that the change control process include aspects of the access strategy and policy. In some cases, this is obvious, such as adding a new virtual private network (VPN) gateway for remote access. Clearly this will affect the control environment. Some changes are much less obvious, but can have significant impacts to access controls, such as network redesign, which can affect various established paths. Security must be incorporated into the change control process to ensure system integrity; this is most prevalent with access controls given the pervasiveness and sensitivity of controls. BUSINESS CONTINUITY AND DISASTER RECOVERY (BCDR). Many organizations have BCDR plans to ensure the organization can maintain critical operations in case of a catastrophic event or failure. BCDR plans can be simplistic, such as ensuring there are regular backups performed, or highly complex solutions incorporating multiple data centers. The scope and complexity of the BCDR plan is typically defined by the business environment, risks, and system criticality.
Regardless of the type of BCDR plan, the availability of access controls during an event is essential and must be incorporated into the plan. For example, if a system failure was to occur and an alternate system was to be temporarily employed without the expected, original controls, the exposure to critical data can be significant. All too often, security is secondary to maintaining operations. However, the irony is that critical systems are the most important in the light of BCDR. Therefore, one could rightly assume that a system included in the BCDR plan is important and the information on that system is valuable. Unfortunately, for some, security is not included in the plan. If an event were to occur, a company could have its most valuable assets completely exposed.
One of the first steps to ensuring security is incorporated into the BCDR plan is defining the access controls for the temporary systems, services, and applications. It should be noted that this includes the access control system itself; for example, a RADIUS server may seem unimportant on the surface, but its absence in a disaster could be detrimental to security.

PERFORMANCE. Traditional networks and applications are typically engineered to provide meaningful performance to users, systems, and services. The network is the cardiovascular system for most companies, and if performance is low, the productivity of the organization may suffer.
The same holds true for the controls environment. If it takes a user an excessive amount of time to log on, this could have an unfavorable impact to operations. To reduce time associated with access controls, the performance optimization processes for the networking and system environment should include the performance of controls overseeing authentication and access. CONFIGURATION MANAGEMENT. Very much related to change controls, configuration management represents the administrative tasks performed on a system or device to ensure optimal operations. Configurations can be temporary or permanent to address a multitude of needs.
Configuration management of devices, systems, services, and applications can greatly affect the controls environment. These can be obvious changes that affect user access directly or less direct, such as modifying a routing protocol. Within the bigger picture of access controls, changes to a system’s configuration must take into account what, if any, impacts to access may occur after the configuration is modified. Given the typical separation of the security group from the IT group, it is not uncommon for the network group to make an innocuous modification to a system and impact the controls associated with security. Therefore, it is important to ensure that the resources responsible for configuration management, such as network administrators, system owners, and application developers, are aware of the control environment and the importance of their domain of influence on the security of the organization. VULNERABILITY MANAGEMENT. Vulnerability management will typically include activities such as implementing system patches. It may be necessary to apply patches to accommodate a security issue, update a system service, or represent the addition of features to the system or application. When patches are installed, there may be key system modifications that can negatively affect the security of the system.
Patches must be applied through the change control system to provide a comprehensive record of system modifications and accurate documentation.
Ensuring the current state of a system is well documented allows organizations to gain more visibility into the status of their environment in the event a new vulnerability is published. This promotes rapid assessments to evaluate potential risks in the face of an attack or vulnerability. It should also be noted that the change control system utilized during the application of patches offers evidence that can be consulted prior to applying new patches or other system changes, such as installing new software. There are some scenarios where the installation of a patch may adversely affect the security posture of a system, service, or application. A change control system, documenting each change and gaining approval for system modifications, offers a historical record that can be leveraged during future changes, establishing a foundation for creating best practices that are unique to the organization.

A key attribute of vulnerability management is the importance of reducing the time to deploy patches or other system updates to mitigate a vulnerability. Vulnerabilities surface in a multitude of ways. For example, a vulnerability may be published by a vendor that has discovered a security issue and provides a patch. Usually, at this point, both attackers and organizations are made aware of the vulnerability. While companies are exercising due diligence in applying fixes, attackers are developing tools or worms to exploit the vulnerability. In contrast, an incident may have occurred that exposed a vulnerability in a system that represents an immediate threat. The latter example is exacerbated by day 0 (zero-day) attacks, where attackers identify and exploit a vulnerability on a massive scale, understanding that time is on their side. It is very common for attackers to discover a vulnerability, develop tools and tactics to exploit it, plan, and then execute. Vulnerable organizations must find alternative measures to compensate for the threat while vendors rush to produce a patch, each consuming time as the attacks expand.

Given the taxonomy of each basic scenario, time becomes a critical element in protecting assets. The ability to use time effectively to deploy a patch or employ compensating controls until a patch is published directly corresponds to the level of risk and the overall security posture of the organization. Emphasis on efficient testing and deployment of system patches or compensating controls should be the core of any vulnerability management program. However, time must be balanced against effective deployment. Initially, the evidence provided by the change control process can be investigated to determine which systems are vulnerable and represent the greatest risk, and these can then be prioritized accordingly. As the process continues, other affected systems are addressed by a manual or automated (or combined) patch management process used to deploy the update. It is at this point that the vulnerability management program must verify that the patch was in fact
implemented as expected. Although this may seem inherent to the objective, it cannot be assumed, regardless of whether it is a manual or automated process. In the case of manual deployment, users and system owners may not respond accordingly or in a timely fashion. This is somewhat compensated for in automated deployment; nevertheless, both scenarios require a validation of an effective installation. Of course, this is supported and driven by a change control process.

As expressed above, the introduction of a patch or control does not in and of itself represent the complete mitigation of the identified vulnerability. Many systems are unique to a specific environment, representing the potential for a change to mitigate one vulnerability and unintentionally introduce another. Or, in some cases, it is assumed that the implementation of a patch or control eliminated the vulnerability altogether. Therefore, a vulnerability management system must not only address the time to test and deploy patches and verify the patch was implemented as expected, but also include testing to ensure the target vulnerability was mitigated and new problems were not introduced by the process. Vulnerability management is a comprehensive and integral process that every security program must develop, maintain, and test regularly.

PRODUCT LIFE-CYCLE MANAGEMENT. In every organization that utilizes technology to support business operations there comes a time to upgrade or replace devices and systems. Standards must be established within the access control policy that define the minimum requirements for a system to ensure the expected level of controls is realized. By doing so, the organization has a clear foundation by which to evaluate products for implementation without sacrificing security or expected controls.

NETWORK MANAGEMENT. Many networks are supported by a separate telemetry network that allows network administrators to manage devices without concern for affecting the production environment. Given the ability to change aspects of the network environment, it is necessary to have network management access controls established to reduce exposure of systems and network devices.

Personnel Security, Evaluation, and Clearances. Surprisingly, one of the more overlooked aspects of access control is the people acquiring access. Prior to granting access of any kind, management should assess the person using the supplied credentials. This does not imply that every user needs to have a background check prior to checking their e-mail. Clearly, the level of validation of an individual should be directly proportional to the level of sensitivity of the assets offered to the user. Nevertheless, within an organizational environment, it is critical that processes exist to evaluate users and ensure they are worthy of the level of trust that is representative of the type of access granted.
First and foremost, security requirements, at some level, should be included in all defined job roles and responsibilities. Job roles defined by the organization should align with defined policies and be documented appropriately. They should include any general responsibilities for adhering to security policies, as well as any specific responsibilities concerning the protection of particular assets related to the given role. Once the security requirements for a role are defined and clearly documented, the validation of individuals to obtain credentials for a job role can be defined and exercised.

The definition of a screening process is typically related to the sensitivity of the assets being accessed. However, there may be contractual demands, regulatory compliance issues, and industry standards that define how a person is screened to reach a certain level of access. This is mostly seen in the military and the allocation of clearances. Depending on the clearance level requested, a person may be subjected to intense background checks, friend and family interviews, credit checks, employment history, medical history, and a plethora of other potentially unpleasant probing. Of course, once attained, the clearance translates to a level of trustworthiness and, therefore, access.

A typical organization will need only a standard process and some additional factors in the light of applicable legal requirements or regulations. These may include a credit check and background checks that simply assure management that you have not falsified information during the application process. Typical aspects of staff verification may include:

• Satisfactory character references
• Confirmation of claimed academic and professional qualifications
• Independent validation, such as the existence of a passport
• A credit check for those requiring access to financial systems
• Federal, state, and local law enforcement records check
The relevance of credit checks and other personal history can be valuable in determining a person's propensity for unlawful acts. Personal or financial problems, changes in behavior or lifestyle, recurring absences, and evidence of stress or depression might lead to fraud, theft, error, or other security implications. In the event the user is temporary, the access provided must take into consideration the potential exposure of proprietary information given the transient position. It should also be noted that staffing agencies should perform employee validation and provide a report to their clients.
Management should evaluate the supervision and provisioning of access to new or inexperienced staff. It may not be necessary to provide the keys to the kingdom until a new employee has satisfied a probationary period. Of course, employees should be periodically reevaluated to ensure significant changes to key elements about them have not occurred. Also, all information collected about an individual is private and confidential and should be afforded security controls like any other sensitive material. Finally, confidentiality or nondisclosure agreements should be read and signed by the user to ensure there is no doubt on the part of the user that the information is confidential or secret and valuable to the organization.

Security Policies. The organization's requirements for access control should be defined and documented. Access rules and rights for each user or group of users should be clearly stated in an access policy statement and then referred back to any specific roles identified within the organization.
The access control policy should consider:

• The security requirements of individual enterprise applications, systems, and services
• Statements of information dissemination and authorization, such as least privilege, data classification, and specified controls for access
• Consistency between the access control and information classification policies of different systems and networks
• Contractual obligations or regulatory compliance regarding protection of assets
• Standards defining user access profiles for organizational roles
• Details regarding the management of the access control system

Monitoring. It should be readily apparent that the ability to monitor the access control environment effectively is essential to the overall success and management of the solution. It is one thing to apply controls; it is another to validate their effectiveness and status. The capacity for ensuring that controls are properly employed and working effectively, and for being aware of unauthorized activity, is governed by the existence of monitoring and logging of the environment. This is not unique to access controls, security, or even IT. It is an essential aspect of business to monitor activity.
Systems should be monitored to detect any deviation from established access control policies and record authentication processes, authentication attempts, credential management, user management, rights usage, and rights and access denial. The monitoring procedures and technology should also monitor the status of controls to ensure conformity to policies and expectations. This last point is typically overlooked and represents a significant potential for unauthorized activities and the ability to mask or hide their existence. For example, if the control activities are monitored,
yet the status of controls is not, attackers can change various controls permitting access. The logging and monitoring of the activities may not raise suspicion because they are now valid operations thanks to the attacker.

LOGGING EVENTS. Logs and what is being logged are important to security management and to maintaining an effective access control solution. A log can include:
• User IDs used on systems, services, or applications.
• Dates and times for log-on and log-off.
• End system identity, such as IP address, host name, or media access control (MAC) address. It may also be possible to determine the location through virtual local area network (VLAN) logging, wireless access point identification, or remote-access system identification, if applicable.
• Logging of successful and rejected authentication and access attempts.

Knowing when and where people are utilizing their rights can be very helpful in determining whether those rights are necessary for a job role or function. It is also helpful to know where access rights are denied to have a better understanding of what a user is trying to do. This can help determine if you have an employee trying to access the payroll system or a user that does not have adequate rights to perform his or her job.

Audit logs should be retained for a specified period. In some cases, as with regulations, this is preordained and not open to interpretation. However, there are cases where no legal or regulatory demands exist. If this is the case, the retention time will probably be defined by management perception and the size of the logs compared to available storage and resource commitments.

The security of the logs is critical. If a log can be altered to erase unauthorized activity, there is little chance for discovery, and if discovered, there may be no evidence. Another important aspect of log security comes in the form of litigation. If logs are not secure and can be proven as such at the time of an event, the logs may not be accepted as valid evidence due to potential tampering. Logs are sensitive to reading, too, not just writing, as they can contain sensitive information such as passwords (when users accidentally type the password into a user ID prompt). Therefore, logs must be accurate, secured, and maintained for the defined retention period.
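A minimal sketch of a structured log record covering the fields listed above is shown below in Python; the field names and formats are assumptions for illustration, not a logging standard.

import json
import time

def access_log_entry(user_id, source_ip, action, success):
    # One structured access-control log record; fields mirror the list above.
    return {
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "user_id": user_id,
        "source_ip": source_ip,
        "action": action,    # e.g., "logon", "logoff", "file_read"
        "success": success,  # rejected attempts are recorded as well as successes
    }

print(json.dumps(access_log_entry("jsmith", "10.0.4.17", "logon", False)))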
REVIEWING EVENTS. Once the events are properly logged, it is necessary to review the logs to evaluate the impact of a given event. Typically, system logs are voluminous, making it difficult to isolate a given event for investigation, much less identify the event. In many cases, organizations will make a copy of the log (preserving the original) and use suitable utilities and tools to perform automated interrogation of the logs. There are several tools available that can be very helpful in analyzing a log file to assist administrators in identifying and isolating activity. Once again, separation of duties plays an important role in reviewing logs. Otherwise, it may be possible for a user to perform an unauthorized task and manipulate the logs. Therefore, it is necessary to separate those being monitored from those performing the review.

User Management. A formal procedure should be established to control the allocation of credentials and access rights to information systems and services. The procedure should cover all stages in the life cycle of user access, from the initial registration of new users to the final decommissioning of accounts that are no longer required.
To provide access to resources, one must first establish a process for creating, changing, and removing users from systems and applications. These activities should be controlled through a formal process, based on policy, which defines the administrative requirements for managing user accounts. The process should define expectations, tasks, and standards concerning user management. For example, elements of the process should include:

• Approval of the user. This can include information from human resources, the user's manager, or the business unit that has approved the creation of the user account. The owner of the system that is providing information or services should concur with the approval process. Approval processes also should speak to the modification of user accounts and their removal.
• A standard defining unique user IDs, their format, and any specifics that may be application sensitive. Additionally, information about the user should be included in the credential management system to ensure the person is clearly bound to the user defined within the system.
• A process for checking that the level of access provided is appropriate to the role and job purpose within the organization and does not compromise segregation of duties. This is especially important when a user's role and job function change. There must exist a process to evaluate existing privileges compared to the new role of the user and ensure changes are made accordingly.
• Defining and requiring users to sign a written statement indicating that they understand the conditions associated with being granted access and any associated liabilities or responsibilities. It is important to understand that user confirmation should occur whenever there is a change in rights and privileges, not simply upon creation of the account.
• A documentation process to capture system changes and act as a record of the transaction. Keeping a log of the administrative process and relevant technical information is essential to an effective access control system. The information will be used in assessments, audits, change requests, and as evidence for investigative purposes.
• Status and audit processes for the control environment. This will ensure that users who have left or changed job roles have their unused or no longer required rights immediately removed, to ensure elimination of duplications and removal of dormant accounts.
• Specific actions that may be taken by management if unauthorized access is attempted by a user or other forms of access abuse are identified. Clearly, this must be approved by the organization's human resources and legal departments.
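As an illustration of capturing approval and review information at registration time, the short sketch below builds a simple user record in Python. The fields and workflow are assumptions for illustration, not a prescribed process.

from datetime import date

def register_user(user_id, role, approved_by):
    # A simple registration record; rights would be provisioned only after the
    # signed statement is recorded and the approval is validated.
    return {
        "user_id": user_id,
        "role": role,
        "approved_by": approved_by,   # e.g., manager or business unit owner
        "created": date.today().isoformat(),
        "agreement_signed": False,
        "next_review": None,          # set by the periodic audit process
    }

account = register_user("jsmith", "accounts_payable_clerk", "j.doe")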
In addition to overall user management, it is necessary to define policies, procedures, and controls regarding passwords. Passwords are a common practice for validating a user's identity during the authentication process. Given that in most traditional authentication solutions the password is the only secret in the transaction (i.e., username and password combinations are prone to brute-force attacks where the username is known and the password is repeatedly guessed until access is acquired), great care should be taken in how passwords are created and managed by users and systems. A process governing user passwords should consider the following:

• Users should be required to sign a statement agreeing to keep their passwords safe and confidential and to not share, distribute, or document their passwords.
• All temporary passwords should be permitted to be used once — to reset the user's password to something that only he or she knows.
• Passwords should never be stored unprotected and in clear text.
• Passwords should have a minimum and maximum length and include various characters and formats.
• Passwords should be changed regularly.
• A history of passwords should be maintained to avoid repeating old passwords as they are changed.
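The length, character-mix, and history rules above can be expressed as a simple check, sketched in Python below. The thresholds are examples only, and a production system would use a salted, purpose-built password-hashing scheme rather than a bare digest.

import hashlib
import re

MIN_LEN, MAX_LEN, HISTORY_DEPTH = 8, 64, 12   # example thresholds, not policy values

def password_acceptable(candidate, previous_hashes):
    # Length, character-mix, and history checks; a real system would use a
    # salted, slow password-hashing scheme rather than plain SHA-256.
    if not (MIN_LEN <= len(candidate) <= MAX_LEN):
        return False
    if not (re.search(r"[A-Za-z]", candidate) and re.search(r"[0-9]", candidate)):
        return False
    digest = hashlib.sha256(candidate.encode()).hexdigest()
    return digest not in previous_hashes[-HISTORY_DEPTH:]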
There is some debate over the security realized by a username and password combination used for authentication. For example, depending on the system, a longer password can actually make it more prone to attack if it results in the user writing it down. The potential for exposure of passwords, poor password selection by users, and the sheer number of passwords many users need to keep track of lay the foundation for potential compromises. Nevertheless, username and password combinations are today's standard. The best approach to ensuring consistency and control is:

• Clearly defined policies
• Well-implemented system controls
• Understanding of the technical considerations
• Comprehensive user training
• Continual auditing
Privilege Management. The importance of privileges demands that their allocation, administration, and use should have specific processes and considerations. It is privilege management that has represented core failures in an otherwise sophisticated access control system. Many organizations will focus on identification, authentication, and modes of access. Although all these are critical and important to deterring threats and preventing incidents, the provisioning of rights within the system is the last layer of control. The typical reason for problems in the allocation of rights is mostly due to the vast number of options available to administrators and managers. It is this aspect of privilege management that demands the existence of processes and documentation that clearly defines and guides the allocation of system rights.
In the development of procedures for privilege management, the following should be considered:

• Privileges associated with each system, service, or application, and the defined roles within the organization to which they apply, should be identified and clearly documented. This speaks to identifying and understanding the available rights that can be allocated within a system, aligning those to functions within the system, and defining roles that require the use of the functions. Finally, the roles need to be associated with job requirements. Therefore, a user may have several job requirements, forcing the assignment of several roles, which may translate to a collection of rights within the system.
• Privileges should be managed based on least privilege. As covered earlier, only rights required to perform a job should be provided to a user, group, or role.
• An authorization process and a record of all privileges allocated should be maintained. Privileges should not be granted until the authorization process is complete and validated.
• Any significant privileges that are needed for intermittent job functions should be assigned to a different user account as opposed to the one used for normal system activity related to the job function. This practice is most often seen on UNIX systems. An administrator of a UNIX system may have three or more accounts: one for daily routines, another for specific job requirements, and "root" for rare occurrences where complete system access must be utilized. This is done to reduce the consequences of errors.
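A minimal sketch of least-privilege allocation through roles is shown below in Python; the role and right names are invented for illustration.

# Role-to-rights mapping under least privilege; names are invented examples.
ROLE_RIGHTS = {
    "ap_clerk": {"invoice:read", "invoice:create"},
    "ap_supervisor": {"invoice:read", "invoice:approve"},
    "unix_admin": {"service:restart"},   # "root" remains a separate account
}

def effective_rights(roles):
    # A user's rights are the union of the rights of all assigned roles.
    rights = set()
    for role in roles:
        rights |= ROLE_RIGHTS.get(role, set())
    return rights

def is_permitted(roles, requested_right):
    return requested_right in effective_rights(roles)

print(is_permitted(["ap_clerk"], "invoice:approve"))  # False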
Physical. Physical security covers a broad spectrum of controls, ranging from doors, locks, and windows to environmental controls, construction standards, and guards. Typically, physical security is based on zones: concentric areas within a facility that require increasing levels of security as an individual passes from one zone into the next.
Physical entry controls should meet a level of security that is relative to the zone (or area) accessed and credentials provided to those that require access. For example, an employee may work in the data center of a large financial institution — a very sensitive area. The employee may have a special badge to access the parking lot and the main entrance where guards are posted and recording access. To access the specific office area, he or she may need a different badge and PIN to disengage the door lock. Finally, to enter the data center, the card and PIN are combined with a biometric device that must be employed to gain access. The most prevalent aspect of physical security is the perimeter of a facility. A typical perimeter should be sound and without gaps or areas that can be easily broken into. The perimeter starts with the surrounding grounds. Hills, ditches, retention walls, fences, concrete posts, and high curbs can act as deterrents to attack. Depending on the sensitivity of the facility, guards, dogs, and other aggressive measures can be applied. The construction of the facility may include special walls, reinforced barriers, and even certain foliage strategically placed near doors, windows, and utilities. Of course, all this can be augmented by cameras, alarms, locks, and other essential controls. Finally, the oversight of physical controls must adhere to the same basic principles of other forms of controls: separation of duties and least privilege. For example, it may be necessary to segment the job role of guards to ensure that no single point of failure or collusion potentially allows threat agents to enter unchecked. Physical Entry. Secure areas should be protected by appropriate entry controls to ensure that only authorized personnel are allowed access. The provisioning of credentials must take into consideration the needs of the individual, his or her job function, and the zone accessed. As discussed above, the person requiring access must successfully pass an investigative process prior to being provided access.
In defining physical entry controls, the following should be considered:

• Visitors should be appropriately cleared and supervised prior to entry. Moreover, the date, time, and escort should be recorded and
validated with a signature. Visitors should only be provided access to the areas that allow visitors and should be provided with instructions concerning security actions and emergency procedures.
• Access to controlled areas, such as information processing centers and where sensitive data may reside, should be restricted to authorized persons only. Authentication controls, such as badges, swipe cards, smart cards, proximity cards, PINs, and potentially biometric devices, should be employed to restrict access.
• Everyone within the controlled perimeter must wear some form of identification and should be encouraged to challenge others not wearing visible identification. Moreover, different styles of identification should be employed to allow others to quickly ascertain the role of the individual. For example, a red ID badge may signify access to the fourth floor of an office building. If someone were wearing a blue badge on the fourth floor, others would be able to determine the necessary actions. Actions may include verifying they are escorted, notifying security, or escorting them to the nearest exit.
• All access rights and privileges should be regularly reviewed and audited. This should include random checks on seemingly authorized users, control devices, approval processes, and training of employees responsible for physical security.

Securing Zones. A physical security strategy may include the defining of multiple zones within the facility. Typically, this is associated with rooms, offices, floors, or smaller elements, such as a cabinet or storage locker. The design of the physical security controls within the perimeter must take into account the protection of the asset as well as the individuals working in that area. One must consider fires, floods, explosions, civil unrest, or other man-made or natural disasters. Emergency strategies must be included in the physical controls to accommodate the safe exiting of personnel and adherence to safety standards or regulations. Human safety is the priority in all decisions of physical security.

Technical. Technical controls are those mechanisms employed within the digital infrastructure that enforce policy. Given the pervasive nature of technology, access controls can be materialized as almost anything and at any layer within the system. Technology controls can include elements such as firewalls, filters, operating systems, applications, and even routing protocols. In an effort to better communicate the various types, the following list can be used:
• User controls
• Network access
• Remote access
• System access
• Application access
• Malware control
• Encryption

User Controls. There are controls associated directly with the user; some have been introduced above and will be detailed in later sections. Technical controls related to users can be best demonstrated as user authentication factors that can be translated into technical representations.
User authentication factors are represented by something you know (e.g., a password), something you have (e.g., a token or smart card), and something you are or do (e.g., biometrics). Single-factor authentication is the employment of one of these factors, two-factor authentication is using two of the three factors, and three-factor authentication is the combination of all three factors.

Single-factor authentication is usually associated with a username and password combination. This is unfortunate because the usual reusable (static) password is easily compromised, and therefore provides very limited security. There is a plethora of technical solutions that provide the framework to identify and authenticate users based on this concept. Given the broad use of username and password combinations, most technical solutions will provide this service.

Two-factor authentication usually introduces an additional level of technical controls in the form of a physical or biometric device. Typically, this is a token, fob, or smart device that substantiates the user's identity by being incorporated into the authentication process, such as a username and password. Two-factor authentication can be the combination of any two of the three basic forms of factor authentication. The incorporation can include one-time passwords, such as a time-sensitive number generation, or the existence of a certificate and private key or a biometric.

Three-factor authentication will include elements of all three factors and, as such, will include something about the user, such as a biometric feature. Biometrics cover a number of options, such as fingerprints, retina scanning, hand geometry, facial features, and even temperature. The technical personification of this is a device that is used to interface with a person during the authentication process.
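The "time-sensitive number generation" mentioned above is commonly implemented as a time-based one-time password. The Python sketch below shows the general idea using only the standard library; the 30-second interval and six-digit length are common defaults rather than requirements, and combining the code check with a password check is one simple way to realize two factors.

import hashlib
import hmac
import struct
import time

def totp(secret, interval=30, digits=6):
    # Time-based one-time password in the style of RFC 6238; "secret" is the
    # shared token seed as bytes (the "something you have").
    counter = int(time.time()) // interval
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

def two_factor_ok(password_ok, submitted_code, secret):
    # Something you know (password) and something you have (token) must both pass.
    return password_ok and hmac.compare_digest(submitted_code, totp(secret))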
protocols and protocol features are permitted from a given source to a defined destination. However, there are network layer controls that can be used to employ security services that increase the level of access management in the environment. The most common example is proxy systems, those devices that employ controls and act on behalf of the user or application. Proxy systems can apply specific logic in managing service-level communications within the network. For example, a proxy system may control access to Web-based services via the Hypertext Transfer Protocol (HTTP). Just as a firewall would block specific ports, a proxy system would block or control certain aspects of HTTP to limit exposure. Many proxy systems are used to authenticate sessions for internal users attempting to access the Internet and potentially filter out unwanted Web site activity, such as Java applets, Active Server Pages (ASP) code, or plug-ins. VLANs can be utilized to segment traffic and limit the interaction from one network to another. Wireless networks can employ several access control mechanisms, such as MAC filtering, several forms of authentication, and encryption, and apply limitations on access. One of the more interesting aspects of network access control, and one being adopted more readily, is controlling the access of remote or internal systems based on policy. Prior to allowing a system to join the network fabric, the network and supporting services query the system to ensure the workstation is adhering to established policies. Policies can be as simple as ensuring an antivirus package is present on the system and as complex as validating the system is up to date on security patches. In the event the system does not meet security policy, it may be denied access or redirected to a secure area of the network for further testing or to allow the user to implement the necessary changes to access the network.
Remote Access. In today's environment, roaming users make up a significant portion of the user community. Remote-access solutions offer services to remote users requiring access to systems and data. One of the more commonly utilized technical solutions is virtual private networks (VPNs), where the user is authenticated and provided secure communications to predefined resources. Typically, a VPN device is placed on the Internet or behind a firewall to allow remote users to access the system, authenticate, and establish a protected session with various internal systems.
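Whether enforced by a firewall, a proxy, or a VPN policy, the network access controls described above ultimately reduce to evaluating a request against an ordered rule set. The following minimal sketch illustrates the idea; the networks, ports, and default-deny behavior are invented for illustration and do not represent any particular product.

    import ipaddress

    # Illustrative rule set: (source network, destination port, action)
    RULES = [
        (ipaddress.ip_network("10.10.0.0/16"), 443, "allow"),  # internal clients to HTTPS
        (ipaddress.ip_network("0.0.0.0/0"), 23, "deny"),       # telnet blocked from anywhere
    ]

    def evaluate(src_ip: str, dst_port: int) -> str:
        """Return the action of the first matching rule; unmatched traffic is denied."""
        src = ipaddress.ip_address(src_ip)
        for network, port, action in RULES:
            if src in network and dst_port == port:
                return action
        return "deny"

    print(evaluate("10.10.4.7", 443))   # allow
    print(evaluate("203.0.113.9", 23))  # deny
    print(evaluate("203.0.113.9", 80))  # deny (no matching rule, default deny)

A real rule base would also consider destination addresses, protocols, and session state, but the pattern of first match wins with a default deny is the same.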
Access controls are typically associated with VPNs by use of authentication mechanisms in combination with encryption methods. These combine with protocols to instill a level of control over the communication. For example, a VPN solution can be configured to permit access by users with the appropriate client package or version of a browser, limit access to certain
portions of the network, limit the types of services permissible, and control session time windows.
System Access. A system can be a single server or a collection of servers that provide a service or perform a function. Common attributes of a system are the server and the operating system that supports upper-level services and functions. Operating systems provide access controls in the form of users, file systems, and processes. They also seek to segment key functions from user or application access to limit the system's exposure to various operational threats.
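To make the file system aspect concrete, the short sketch below inspects the POSIX permission bits on a file and tightens them if they grant access to users outside the owner. It is a simplified illustration assuming a UNIX-style system; the file name is hypothetical.

    import os
    import stat

    path = "payroll.db"  # hypothetical sensitive file

    mode = os.stat(path).st_mode
    if mode & stat.S_IROTH or mode & stat.S_IWOTH:
        # World-readable or world-writable: restrict to owner read/write only.
        os.chmod(path, stat.S_IRUSR | stat.S_IWUSR)
        print("Permissions tightened to owner-only access.")
    else:
        print("File is not accessible to 'other' users.")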
The first iteration of access control arguably occurred on a single system or host. Applications were developed, data structures were created, and users were provided access based on some form of authentication. Early access controls were relegated to the capabilities of the operating system and its oversight of the file system, data store, application provisioning, and session management. All these attributes and more exist in commonly used operating systems today. The most prevalent access control on systems is the file system. The file system is the method utilized by a computer to store, access, and secure data on a device. Many file systems will incorporate access controls to limit usage of files, such as permitting a given user to only read a file. Other access controls will manage the interaction between layers in the system, such as hardware access, memory access, services, and kernel modules. Therefore, applications can run on systems where the operating system can manage resources for the application and administrators can implement controls over that environment. Weaknesses in application input controls can lead to buffer overflows that potentially allow malicious activity by circumventing system-level access controls to resources.
Application Access. Applications will usually employ user and system access controls to deter threats and reduce exposure to security incidents. However, applications can incorporate several mechanisms to supplement other controls to ensure secure operations. For example, applications can establish user sessions, apply time-outs, validate data entry, and limit access to specific services or modules based on user rights and needs. Moreover, the application itself can be designed and developed to reduce exposure to buffer overflows, race conditions, and loss of system integrity.
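The input validation mentioned above is, in essence, a bounds check performed before data reaches a fixed-size buffer. The sketch below illustrates the principle using an explicit fixed-length buffer; the buffer size and function name are arbitrary choices for the example, not a prescribed mitigation.

    import ctypes

    BUFFER_SIZE = 64
    buffer = ctypes.create_string_buffer(BUFFER_SIZE)

    def store_input(data: bytes) -> None:
        """Reject input that would exceed the fixed-size buffer instead of copying it blindly."""
        if len(data) >= BUFFER_SIZE:  # leave room for the terminating null byte
            raise ValueError("input exceeds buffer capacity")
        ctypes.memmove(buffer, data, len(data))

    store_input(b"well-formed input")      # accepted
    try:
        store_input(b"A" * 200)            # rejected before it can overrun the buffer
    except ValueError as exc:
        print("rejected:", exc)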
The architecture of an application plays a significant role in its ability to thwart attack. Object-oriented programming, multitiered implementations, and even database security are important to controlling what services are provided to users and what tasks can be performed. Access controls associated with all aspects of an application are important to sound security. Many applications are complicated and offer a wide range of services and
access to potentially sensitive information. Additionally, applications may be critical to the operational needs of the business; therefore, their sensitivity to disruption must be considered. Much like other technical controls, applications can employ several layers of security within the application framework. Just as applications may have layers, they can also have layers of controls, and these controls can be associated with user activity, object security, internal services, and data services. For example, an application may encompass a presentation module, object services module, data aggregation module, and output services that interact with the inner workings of other applications. Based on the services offered to a given user, it is necessary to consider not only the application's oversight of the user, but also the security afforded to other activities. If this aspect of application security and access management within the application is overlooked, it may be possible for low-level users to perform privileged actions. This can occur when there is no association between the user's role and the actions of a given object.
Malware Control. Viruses, worms, Trojans, and even spam represent a threat to the enterprise. Technical controls can be applied to reduce the likelihood of impact from unwanted influences. The most prevalent of these controls are antivirus solutions, which can be employed on the perimeter, servers, and end systems to detect and potentially eliminate the threat of viruses, worms, or other malicious programs. Other technical solutions include file integrity checks and intrusion prevention systems that can detect when a system service or file is modified, representing a risk to the environment.
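A file integrity check of the kind mentioned above can be as simple as comparing current cryptographic hashes of monitored files against a previously recorded baseline. The following minimal sketch assumes the monitored paths and baseline file name are chosen by the administrator; it is illustrative rather than a replacement for a hardened integrity-monitoring product.

    import hashlib
    import json
    from pathlib import Path

    def sha256(path: Path) -> str:
        return hashlib.sha256(path.read_bytes()).hexdigest()

    def build_baseline(paths, out="baseline.json"):
        """Record a trusted hash for each monitored file."""
        baseline = {str(p): sha256(Path(p)) for p in paths}
        Path(out).write_text(json.dumps(baseline, indent=2))

    def check(baseline_file="baseline.json"):
        """Report any monitored file whose current hash no longer matches the baseline."""
        baseline = json.loads(Path(baseline_file).read_text())
        for path, expected in baseline.items():
            if sha256(Path(path)) != expected:
                print(f"MODIFIED: {path}")

    # build_baseline(["/etc/hosts", "/usr/local/bin/backup.sh"])  # hypothetical monitored files
    # check()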
In today's environment, malicious code represents a significant threat to operations. Weaknesses in systems, applications, and services offer opportunities for worms and viruses to infiltrate an organization, causing outages or damage to critical systems and information. Technical access controls can materialize as add-on applications, such as antivirus software, or tools that can be used to filter spam, find spyware on systems, or detect Trojan activity. A more detailed discussion of application security issues is found in the Application Security chapter, Domain 8.
Cryptography. Although covered in greater detail in the cryptography domain in the CBK, encryption has an important role in access controls. Encryption can be used to ensure the confidentiality of information or to authenticate information, ensuring its integrity. These two characteristics are highly leveraged in the identification and authentication processes associated with access control. Authentication protocols will employ encryption to protect the session from exposure to intruders, passwords are typically hashed to protect them from disclosure, and session information may be
encrypted to support the continued association of the user to the system and services used. Encryption can be used to validate a session. For example, if session information is not encrypted, the resulting communication is denied. The most predominant aspect of cryptography in access control is the employment of cryptographic mechanisms to ensure the integrity of authentication protocols and processes.
Access Control Threats
Access control threats are directly related to the confidentiality, integrity, and availability of enterprise assets. Any threat or threat agent that represents a risk to the CIA of enterprise assets is a threat to the access control environment. The objective for this section is to explain the threats to access controls. There are a plethora of threats to access controls and the CIA of enterprise assets. These can be best represented within the CBK as:
• Denial of service
• Buffer overflows
• Mobile code
• Malicious software
• Password crackers
• Spoofing/masquerading
• Sniffers
• Eavesdropping
• Emanations
• Shoulder surfing
• Tapping
• Object reuse
• Data remnants
• Unauthorized targeted data mining
• Dumpster diving
• Backdoor/trapdoor
• Theft
• Intruders
• Social engineering
Denial of Service
A threat to operations is the denial-of-service (DoS) attack. DoS attacks can range from the consumption of specific resources, rendering a system service or application unusable by authorized users, to the complete outage of a system. In the early 1990s, DoS attacks were mostly relegated to protocol manipulation within the Transmission Control Protocol/Internet Protocol (TCP/IP). In attacks known as SYN floods, attackers would make an overwhelming number
of open-ended session requests to a service, effectively making it impossible for a valid user or application to gain access to the service. Soon, attackers began to identify weaknesses in various system services, allowing them to make specially formatted requests that would force an error, resulting in the complete loss of the service. Early on, there was an attack called Teardrop, which employed a tool to manufacture overlapping fragmented service request packets that would exploit a flaw in the system, shutting it down. In the late 1990s and into the 21st century, distributed denial of service (DDoS) became a significant threat to operations. Attackers would build vast networks of commandeered systems (i.e., zombies) that would be instructed to simultaneously make numerous requests to a single system or application, such as a Web site. Hundreds or potentially thousands of systems would make millions of requests to a Web site, completely flooding the system to the point where others could not access it or until it simply failed and went offline. DoS attacks are one of the oldest tools used by hackers and are still used quite often today. In early spoofing attacks performed by pioneers such as Kevin Mitnick, the system whose identity was leveraged was "DoS'ed" to allow the attacker to assume the role of the now unavailable system. Today, DoS attacks are used in a multitude of ways to manipulate system interactions to acquire access or redirect communications. The applicability to access control is relative to availability in the CIA triad. If a service is not available, access to valid users or systems is denied.
Buffer Overflows
A buffer is a portion of system memory that is utilized to temporarily store information for processing. Buffers are essential to computer operations to manage data input and output at all levels of system interaction. A buffer overflow occurs when a threat manipulates the system's ability to manage the buffer, causing a system failure. A system failure can include an outage, the failure to control an application state, or the inability to control the code, information, or material prepared for processing. Buffer overflows can be used in a DoS attack or to inject malicious software for processing on behalf of the attacker. Memory can be used in network interfaces, video systems, traditional RAM, or virtual memory on hard disks. Any instance of memory is potentially vulnerable to a buffer overflow. Buffer overflows are also used to gain unauthorized access or to escalate privileges from an authorized account. Buffer overflows are typically the result of poor system memory access control and management. This can be related to poor coding of the application, services, or operating system managing the memory allocation. It
can also include errors in the system BIOS that is used to oversee the use of memory. The lack of adequate application testing in the development process results in buffer overflow exposure. The applicability to access control is directly related to the controls implemented within the system to manage memory and the code that is employed. A buffer overflow is essentially the exploitation of a vulnerability in the memory access management system responsible for system operations. Of course, the broader implications are the result of the buffer overflow, which could lead to the injection of malicious software, further exacerbating the exposure to threats and the demise of the access control environment.
Mobile Code
Mobile code is software that is transmitted across a network from a remote source to a local system and is then executed on that local system, often without explicit action on the part of the user. The local system is often a personal computer, but can also be a smart device, such as a PDA, mobile phone, Internet appliance, etc. Mobile code differs from traditional software in that it need not be installed or executed explicitly by the user. Examples of mobile code include ActiveX controls, Java applets, scripts run within the browser, and HTML e-mail. Mobile code is also known as downloadable code and active content, especially in the context of Web sites and e-mail systems. The security implications of mobile code are significant because of the inherent dynamics of distribution capabilities, limited user awareness, and potential for harm. It is not uncommon for mobile code to be provided to a browser to perform a function for the user. If a system is not configured properly, it can be fooled into running mobile code designed to alter, infect, or otherwise manipulate the system. Mobile code, if left unchecked, can be used to track user activity, access vital information, or be leveraged to install other applications without alerting the user. The issue is exacerbated if the user is remotely accessing corporate resources. Data can be copied to the local system, which may be exposed to unwanted threats, access credentials may be captured and later used by an attacker, or the communication can be used to inject malicious software into the organization. Mobile code can range from a system nuisance, such as Web sites tracking Internet activity, to highly problematic situations, such as spyware that can be used in identity theft. The role of mobile code in the light of access control is relative to the exposure of information, or confidentiality. Controls are implemented to protect information and systems from unwanted exposure. Moreover, the existence of mobile code represents a failure of the application controls implemented to thwart such an event. Of course, this is assuming that the mobile code is malicious. There are businesses that will require the provi-
Access Control sioning, or permission, to introduce code to perform a function. A good example is online seminars that require ActiveX applications to produce sound and present information. However, it is usually possible to add granularity to the access control environment by configuring systems to only accept mobile code that has been signed by a trusted entity. Again, this speaks to the level of sophistication of the control strategy and technology. Malicious Software Malicious software is a term used in many ways to describe software, applications, applets, scripts, or any digital material that performs undesirable functions. There was a time when the term was limited to what the industry terms as Trojans or spyware. However, it is typically used to cover just about anything that can be run on a computer system that represents a threat to the system, information, or other applications. To better demonstrate, the following are commonly accepted descriptions of different types of threats that fall under the general definition of malware: • Virus: Parasitic code that requires human transferral or insertion, or attaches itself to another program to facilitate replication and distribution. Other programs can range from e-mail, documents, and macros to boot sectors, partitions, and memory fobs. Viruses were the first iteration of malware and were typically transferred by floppy disks and injected into memory when the disk was accessed or attached to files that were transferred. • Worm: Self-propagating code that exploits system or application vulnerabilities to replicate. Once on a system, they may execute embedded attributes to wreak havoc, much like a virus, and move on to the next system. A worm is effectively a virus that does not require human or other programs to infect systems. • Trojan horse: A very general term referring to programs that appear desirable, but actually contain something harmful. A Trojan horse is software that purports to do one thing that the user wants while secretly performing other, potentially malicious actions. For example, a user may download a game, install, and begin playing. However, unbeknownst to the user, the application installed a virus, launched a worm, or has installed a utility allowing a hacker to gain unauthorized access to the system remotely, all without the user’s knowledge. • Spyware: Prior to their use in malicious activity, spyware was typically a hidden application injected through poor browser security by companies seeking to gain more information about a user’s Internet activity. Today, those methods are used to deploy other malware, collect private data, or monitor system input, such as keystrokes. 133
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® In reality, the line between the different types of malicious code is growing thin. Many threats have surfaced that combine the capabilities of disparate malicious software to make for significant security challenges. Mobile code, spam, and key loggers, along with the list above, are an attempt of the security industry to assign definition to a highly dynamic threat. Hackers do not operate within the same construct. Spam can lead to spyware, which can help the proliferation of worms exploiting vulnerabilities to implant their payload of Trojan devices. Malicious software is an approach to classify software that surreptitiously inflicts unwanted exposure or harm. The threat to access controls that malicious software represents is significant and cannot be overstated. Clearly, the exposure of information, loss of system integrity, and potential exposure of user credentials to unknown entities undermine the control environment. Given the pervasive nature of access controls and their number, diversity, and types, a complicated web interaction that can be quickly unraveled by malicious software is established. Password Crackers Passwords are essentially a group of secret characters that users can enter to prove they are who they claim. Assuming the password is not shared or known by another, username and password combinations offer basic authentication. However, by their very definition, they are prone to discovery, and thereby the strategy of username and password combinations is weakened. Given the fact that the typical password will range from 5 to 15 characters on average, one could rightly assume that there is a limit to the combination of characters. Of course, the diversity of characters increases the number of possibilities. Passwords are typically stored by means of a one-way hash. An algorithm produces a unique representation of the password. When a password is received by a system during authentication, it hashes it using the same algorithm employed during the creation of the password and compares it to the hash on file. If they match, there is a high degree of certainty the password provided is the same as the one on file. In most cases, the password itself is never stored or saved. When a user sets a new password, it is immediately hashed and the hash is saved. Upon authentication, the initial hashing process is repeated and the result compared to complete the transaction. The key factor, and where password crackers come into play, is the saving of the hashed password. If a file containing the hashed password is made available, attackers can perform brute-force attacks by comparing thousands of possible password combinations against the hash. Password crackers are tools that will use a list (or create one dynamically) of possi134
Access Control ble combinations, hash them, and compare the hash to the one stored in the file. In reality, it is only a matter of time before all possible character combinations are tested and the password is exposed. Of course, depending on the length and complexity of the password, the process could take minutes or years. However, most users use passwords that are easily remembered, and therefore easily guessed by the system, reducing the required time for the tool to run. Password crackers are easy to come by on the Internet, and many that are in use today were created in the late 1980s and early 1990s. John the Ripper is still very popular, and L0pht (or L0phtcrack), once a hacker tool, is now regularly used by Microsoft system administrators for password auditing and sold as such. Password crackers are one of the few tools that are equally effective for hackers and security administrators. If the tool can discover the password, the hacker can use it, or the administrator can issue a password change request to the user. Although there is much debate over the security of usernames and passwords, the role of the password cracker as a tool for the security administrator is well established as a mechanism to increase the integrity of the system. Putting aside the administrative use of password crackers as a tool, the threat they represent to access controls is very high. Simply put, passwords, for some organizations, are the keys to the kingdom. If the access control strategy is founded on passwords, their exposure would mean the demise of the control environment, rendering it useless. Moreover, the monitoring aspects of the control environment will not be very helpful in exposing the activity because, if performed discreetly, the attacker will be using the privileges assigned to the stolen accounts. Early in the evolution of password crackers, the process of creating hashes of a list of passwords and then comparing them to the stolen collection of hashed passwords was very time consuming. In 1980, Martin Hellman, best known for his work with Whitfield Diffie for the development of public key cryptography, described a cryptanalytic time–memory tradeoff that reduces the time of cryptanalysis by using precalculated data stored in memory. The goal was to save time by performing an exhaustive search and loading the results into memory, taking advantage of the close interaction between memory and the processor. This is possible because many popular operating systems generate password hashes by encrypting a fixed plaintext with the user’s password as the key and storing the result as the password hash. If the password hashing scheme is poorly designed, the plaintext and the encryption method will be the same for all passwords. In that case, the password 135
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® hashes can be calculated in advance and can be subjected to a time–memory trade-off. This technique was improved by Rivest before 1982 with the introduction of distinguished points that drastically reduce the number of memory lookups during cryptanalysis. Hellman’s time–memory trade-off was based on enciphering the plaintext with all possible keys. The results were organized in chains, with only the first and last elements loaded into memory. The trade-off is based on saving memory at the cost of processing time. Unfortunately, as the number of stored chains increased, so did the probability of collisions — or generating the same results with different keys. This is aggravated by the fact that chains will merge, making them useless for downstream cryptanalysis. Rivest introduced defining distinguished points as the ends of the chains. This was based on the conclusion that the first ten bits of the key were all zeros, and therefore a chain can be pulled from memory based only on the end of the chain when a plausible match is identified. The results were a significant reduction in the time of processing passwords through optimizing the precalculated chains in memory. However, in 2003 Philippe Oechslin developed a faster time–memory trade-off. The main issue with Hellman’s chaining process, which was enhanced by Rivest, was the collisions of chains and ultimately mergers within memory. Oechslin postulated a modified approach to the creation of chains in tables that limit collision rates and isolates mergers within the table, greatly reducing memory requirements while significantly optimizing the organization of data. The new chain structure is called rainbow chains. They employ successive reduction of points within the chain, utilizing Rivest’s distinguished points along with a reduction process. This reduces the likelihood of mergers within the table in memory, allowing more useful data to be stored for cryptanalysis. To demonstrate the significance of rainbow table attacks, Oechslin successfully cracked 99.9 percent of 1.4 GB of alphanumeric password hashes in 13.6 s, whereas previous time–memory trade-off processes would do the same in 101 s. The rainbow table attack has revolutionized password cracking and is being rapidly adopted by tool creators. Spoofing/Masquerading In the 1980s, Kevin Mitnick popularized IP spoofing, originally identified by Steve Bellovin several years prior as an attack method that used weaknesses within Internet protocols to gain access to systems that were based on IP addresses and inherent trust relationships. Through IP spoofing, one could appear to come from a trusted source but was, in fact, well outside of the trusted environment. Mitnick used this technique, along with social engineering, to access systems to obtain various application source codes 136
Access Control for other hacking purposes. Specifically, he wanted the source codes for cell phones (the operating system of most cell phones at the time) that would allow him to manipulate phones to access other conversations and greater system access. By the simplest definition, spoofing is appearing to a system as if coming from a known and trusted source. As stated above, early versions of spoofing were performed at the protocol layer. Attackers would send packets to a server with the source address of a known system in the packet header. This would fool filtering devices that were configured to permit activity to and from the server for specific, trusted addresses and networks. The difficult part was for the attacker to predict the response to the server that would normally come from the host whose identity is being utilized. Session identifiers, such as TCP sequence numbers, would have to be predetermined by the hacker to avoid detection by the server. Quickly, systems and firewalls compensated for such an attack, making it much less used in today’s computing environment. However, there are a multitude of other methods to manipulate systems and users, allowing hackers to appear as valid participants within the network, all falling under the definition of spoofing or masquerading. For example, phishing is a process of fooling people to believe that an email requesting a password change on an online banking application is the bank and not the attacker, essentially masquerading as a trusted source. There are examples of hackers appearing as Domain Name Servers (DNSs) to redirect Internet users to malicious sites engineered to inject malware or obtain information. Spoofing is also an effective technique used in part for man-in-the-middle attacks. A user may believe he or she is interacting with the expected destination, when in fact his or her communications are being redirected through an intermediary, collecting information from both sides of the communication. The impact of spoofing or masquerading on access controls is the affect it has on modes of communication. If a communication was manipulated and, depending on the stage of the communication the hacker employs for the attack, the affects could be likened to obtaining the necessary credentials, such as with password crackers. The threat to the access control environment is significant. Attackers could gain the necessary access to systems and information by incorporating themselves in a manner that circumvents established controls. Although many situations are realized by poor user awareness, there still exist many technical scenarios that allow attackers to bypass controls. Sniffers, Eavesdropping, and Tapping Communications, at some point, will typically become aggregated into a physical medium. These can materialize as wired or even wireless networks 137
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® that represent the potential for exposure. In the event an attacker were to incorporate into the communication medium, he could potentially visualize and collect all layers of the communication. Interestingly, the same capability is utilized by some primary security systems, such as IDS. IDSs will monitor the communication in an effort to detect unwanted activities. Sniffers are typically devices that can collect information from a communication medium, such as a network. These devices can range from specialized equipment to basic workstations. It is important to note that a sniffer can collect information about most, if not all, attributes of the communication. Emanations Emanation is the proliferation or propagation of a signal. This is most evident in wireless networks. A wireless network antenna may radiate the signal to areas beyond the desired scope, such as out of a building and into the parking lot. If this were to occur, an attacker could drive to the location and attempt to access the network from the privacy of his vehicle, potentially undetected. However, there are several other examples. Given the electromagnetic properties of computing devices, there is the potential to acquire data from great distances. For example, sophisticated electromagnetic loops can be generated near communication lines to eavesdrop on communications without physically interacting with the wire. It is well known that various governments used this method to tap transoceanic communication lines during the cold war. Other examples include utilizing sensitive devices to acquire signals propagating from computer monitors from other buildings or from the streets below to see what a user is doing. There are stories (potentially urban legend) of acquiring communication signals from underground pipes that pass close to the communication line. Other examples include interacting with Bluetooth-enabled devices or cars from great distances, heat and energy distribution detection, sound propagation, and mobile phones. The threat to access controls is very similar to those realized with sniffers and taps. The attacker can obtain private information or gain better awareness of the communication architecture for further attacks. In addition to the employment of encryption, there are mechanisms to reduce the emanation of signals, such as TEMPEST. Wireless antennae come in many formats that have different irradiation patterns that can be utilized in different ways to reduce signal propagation. For the purposes of this book, there are three basic types: omnidirectional, semidirectional, and highly directional. Within these three basic groups there are several different antenna subtypes, such as mast, pillar, ground plate, patch, panel, sectorized, yagi, parabolic, and grid. Each type and subtype represents 138
Access Control options for the designers of wireless networks to reduce exposure by focusing the signal. Finally, there are materials that can be used on windows or other building attributes to further disrupt the emanation of electromagnetic signals. When considering the risk of potential threats exploiting signal propagation, the level and types of controls employed should be relative to the level of impact and cost if an attacker were to acquire sensitive information. Shoulder Surfing Shoulder surfing is the act of surreptitiously gathering information from a user by means of direct observation. A good example is watching someone type in their password while talking about what they did over the weekend. This obviously requires close interaction and proximity to the user and exposes the attacker to being identified. There are many themes to this type of attack, ranging from watching people perform tasks to listening in on conversations. Essentially, this is social awareness and seeking the opportunity to gain information through observation. The impact to access controls is simply the exposure of potentially sensitive information. If someone were to see a user enter his password, it would be trivial for that person to return to her desk and log in as that user. The only plausible defense against this threat is awareness training or use of one-time passwords. However, in some circumstances, organizations have employed multifactor authentication, making it much more difficult to acquire passwords. To avoid someone reading your screen over your shoulder, you can install screen filters that require you to either look directly into the monitor to see the contents or wear special glasses to depolarize the video stream. There are numerous logical and physical controls that can be employed — at a cost — but many organizations will seek to train their employees in an effort to thwart such an attack. Object Reuse Object reuse refers to the allocation or reallocation of system resources to an unauthorized user or, more appropriately, to an application or process. Applications and services on a computer system may create or utilize objects in memory or in storage to perform a function. In some cases, it is necessary to share these resources between various system applications to perform actions. However, some objects are employed by an application to perform privileged tasks on behalf of an authorized user or upstream application. If object usage is not controlled or objects are not purged from the system after use, they may become available to unauthorized use. There are two aspects of application object reuse: the direct employment of the object or the use of data input or output from the object. In the case with object use, it is necessary for the system providing the object to 139
verify the requesting entity. In the event of object data use, in most, if not all cases, the system should clear all residual data prior to assigning the object to another process, ensuring that no process intentionally or unintentionally inherits or reads the data of another process. More specifically, the requirements for security imply the controlled sharing of these resources. For example, the printer process needs to be controlled so that it prints only one user's output at a time. It would be a security violation if one user's output were mixed in with that of another and printed on the same physical sheet of paper. Fortunately, keeping print jobs separate is simple for a printing process. In a similar way, it is easy to prevent more than one user from using a terminal at any given time. Although there is nothing to prevent two people from sharing the same log-in session, internal mechanisms for terminal handling ensure that a terminal is flushed of residual data after a log-in session before another user can log on to it. However, the controlled sharing of memory is more difficult to manage. Many systems allow several processes to execute in memory simultaneously. Sections of memory may be allocated to one process for a while, then deallocated, then reallocated to another process. The constant reallocation of memory is a potential security vulnerability, because residual information may remain when a section of memory is reassigned to a new process after a previous process is completed. It is necessary for the operating system to zero out the memory upon release and before it is allocated to another process. Object reuse is also applicable to system media, such as hard drives, magnetic media, or other forms of data storage. It is not uncommon for media to be reassigned to a new system, application, or user group. When media is reused, it is best practice to clean all data from the device prior to assignment. Removing all data from the storage device reduces the likelihood of exposing proprietary or confidential information. Degaussing and overwriting media are examples of standard methods for handling object reuse to prevent unauthorized access to sensitive data when media is reassigned. The threat to access controls is significant. Very similar to buffer overflows, processes can be employed by an attacker to obtain nonzeroed, unallocated memory objects, effectively exposing information or allowing the assumption of the privileges of the previous process. The threats associated with media are easily understandable. In the event devices are not degaussed or overwritten to eliminate the data, there is a significant risk of exposing sensitive information.
Data Remanence
It is becoming increasingly commonplace for people to buy used computer equipment, such as a hard drive, and find what was thought to be deleted
Access Control information. Data remanence is the remains of partial or even the entire data set of digital information. Normally, this refers to the data that remains after the media is written over or degaussed. Information can be stored, processed, or transmitted, and remnants of data are most common in storage systems, but as we discussed above, they can occur in memory. Hard drives are typically made up of platters organized into segments and clusters. When a file is written to a hard drive, the file system will place the file in one or more clusters in series or spread throughout the disk based on the availability of clusters on the hard drive. The file allocation table maintains the physical location information for a given file for later retrieval. Clusters are fixed allocated space on a disk that is used for file storage. There are several scenarios that may occur that can lead to data exposure. Deleting a file does not remove it from the system. The process simply removes the information from the file allocation table, signifying to the system that those portions (clusters) are now available for use. The data remains until a new set of data is written to that space. The same basic principles apply during a disk format. The format process effectively clears the file allocation table, but again, not the data. As data is written to a disk, it will be spread out to several clusters. Each cluster is reserved for a given file. Even if the actual data stored requires less storage than the cluster size, an entire cluster is allocated for the file. The unused space is called the slack space. In early computer systems this spaced was filled with random portions of data pulled from memory. Soon, many came to realize that confidential information, including passwords stored in memory, could be found on the hard drive, even after being formatted. Although this does not typically occur today, problems surface when sensitive data is deleted and a new file is partially written to the reallocated cluster. A portion of the original file will remain in the slack space until the cluster is completely overwritten. Slack space can also be used by an attacker. Several tools are available that can write data striped across multiple clusters using only the slack space. The data is completely hidden from the user and system unless forensics tools are used to identify the information. Attackers will use this capability to store information, tools, or malicious code on a victim’s system. Depending on the frequency of data writes and deletes to the hard drive, the unwanted information may remain on the system for extended periods. There are utilities that can be used to wipe the data from the hard drive by overwriting the file information with bytes of 1s or 0s, or a random combination. However, some of these tools will not overwrite the file header, allowing someone to see the size, date, and location information of a file. Statistical analysis can be used to determine what the files may have been, 141
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® and in aggressive investigations, electromagnetic shadows of the original data can be extrapolated from the disk. The most effective mechanism to destroy data, either a file or an entire disk — short of grinding the disk into little pieces, which is still no guarantee — is to overwrite the data several times. This accomplishes two things: (1) it ensures enough randomization to avoid statistical analysis of the data, and (2) each write works to further mask the remnants of any electromagnetic representation of the original information. Unfortunately, no plan is perfect. A file may be stored in swap space on the disk, may appear in other areas, such as the sent mail folder of an email program, or previously deleted versions remain in slack space throughout the disk. To reduce exposure, the hard drive should be installed in a different system and cleaned by tools running on the host operating system. The threat to access controls is related to confidentiality and data integrity. Clearly, the exposure of data, even small parts, can represent an enormous risk to the organization. The integrity of information can be affected by the manipulation of data physically on the disk. It is important to realize that the security controls applied to a file are several layers above disk operations. Tools can be easily employed to circumvent these controls at the hardware layer to change, obtain, or remove sensitive data. Unauthorized Targeted Data Mining Data mining is the act of collecting large quantities of information to perform predictions of use or type. Data mining is typically used by large organizations to find hidden patterns in data. Retail and credit companies will use data mining to identify buying patterns or trends in geographies, age groups, products, or services. Data mining is essentially the logical determination of information in lieu of specific data. Hackers will typically perform reconnaissance against their target in an effort to collect as much information as possible to draw conclusions on operations, practices, technical architecture, and business cycles. On the surface, this information is assumed to be worthless, and it is for this reason that it is easy to collect over the Internet or the phone. However, when combined in different ways and analyzed by a determined individual, vulnerabilities may surface that can be exploited or assist in other attack strategies. The role of access control is to limit the exposure of potentially harmful information. The most nebulous version of access control is the security group within an organization working closely with the marketing group to ensure public information placed on Web sites cannot be used against the company. In the early days of the Internet, organizations rushed to put as much information about the company as possible on their Web site to 142
Access Control attract potential customers. This included information about the executive staff, manufacturing practices, financial data, locations, technical solutions, and other data that could be used to guide hackers. Although this is much less prominent in today’s communication practices, highly capable search engines, such as Google, can be used to gather significant information about a target. The role of access controls is to impose varying levels of access privileges to information based on need and sensitivity. When the level of sensitivity becomes lower and lower, the level of controls is reduced, and in some cases eliminated. Therefore, prior to changing the classification level of data, effectively reducing or eliminating controls over the distribution and access to information, organizations must understand the potential impacts associated with data mining by unethically inclined entities. Dumpster Diving In the old days, dumpster diving was the primary tactic used by thieves to get credit card numbers and other personal information that can be gleaned from what people and companies throw away. Dumpster diving is simply taking what people assume is trash and using that information, sometimes in combination with other data, to formulate conclusions or refine strategies for an attack. This is especially sensitive for companies who may throw away copies of proprietary data or seemingly benign data that, when in the hands of a hacker, can provide substantial information. Simple, but useful information ranges from phone numbers and e-mail lists to communication bills that have the service provider name and account details. A bill receipt containing account information can be used to help authenticate a hacker calling the service provider to access design features or IP addresses for locating logical areas where the exact target may reside. Even with sophisticated word processors and a computer on everyone’s desk, people still print volumes of documentation, sometime several times, to share with others or read later, only to throw it away without concern for the sensitivity of the data. It is not uncommon to find network designs, equipment purchase receipts, phone bills, phone books, human resource information, internal communications, configuration documentation, software documentation, project plans, and project proposals in a trash can. Most hackers do not want physical contact and the implied exposure of going through trash. The ability to go to the location, at the right time, and get information from the garbage insinuates a certain type of threat with specific motivations. Many companies destroy documentation to mitigate the risk of exposing information. Conversely, many companies assume there is little risk of dis143
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® closing proprietary information, increasing the exposure to a very basic and easily exploited vulnerability. Backdoor/Trapdoor During the development of an application, the creator has the ability to include special access capabilities hidden within the application. Referred to as backdoors, applications may have hard-coded characteristics that allow complete and unfettered access to those who know the existence of the backdoor. Often these are installed to enable the programmer to access the system by bypassing the front-end security controls in cases where the system fails, locking everyone out. Most of the examples seen are the use of hidden accounts built within the application. These accounts can be used to gain authorized access without the knowledge of the system owner. Sometimes this occurs because the developer is attempting to support broad system functions and may not realize that the account can be used to gain access. In 2003, Oracle released a new version of its software that had at least five privileged accounts created upon install with the administrator none the wiser. There are other cases where system integrators will create special rules or credentials that allow them to gain complete access to systems they have installed to support their customer. Unfortunately, these practices typically mean that the same methods and username and password combinations are used for each customer. If the information were to be exposed or an employee were to leave with the information, every customer that has the backdoor implemented is exposed to a plethora of threats. However, as one would correctly assume, backdoors mostly occur in independently developed applications and not from established vendors. The threat to access controls is based on the existence of unknown credentials or configurations that will allow someone to circumvent established controls. Theft Theft is a simple concept anyone can grasp. However, as the digital interaction between people and businesses expands, the exposure of valuable information continues to exceed the traditional physicality typically associated with the term theft. Physical theft includes anything of value an unauthorized entity can remove. Computers, documents, books, phones, keys, or other materials that can be moved can be stolen. It can also include theft of a service by physical means, such as power, cable, or phone service. Digital theft is typically easier because of the virtualization of the information and the proliferation of avenues of access. 144
Personal and private information about individuals and companies is shared, sold, transferred, and collected by other people and organizations for legitimate and illegitimate activities. Regardless of intent, as information is collected, the security of that data will usually grow weaker. This is due to the characteristics of vast amounts of information: either it is simply too much to oversee and manage, or it is assumed that voluminous amounts of data are unattractive. Another aspect of data collection is organizations not performing due diligence in the protection of private customer information because it is seen as not core to their business objectives. It is becoming increasingly common for hackers to gain access to an E-commerce site not to steal products or services, but rather the customers' credit card information.
Social Engineering
Social engineering is the oldest form of attack used to obtain data. It is the practice of coercion and misdirection to obtain information. Social engineering can take many forms, ranging from telephone calls to e-mail to face-to-face interaction. Additionally, the degree of interaction is a variable common among all forms of the attack. For example, a determined hacker may apply for a job that allows access to the establishment for on-site reconnaissance. Hackers may assume the identity of employees or their colleagues to lure others into providing information. On the lighter side, a hacker may simply send an e-mail hoping for a response. E-mail is a potent medium that can be used to extract information. It is easy to obtain names of certain employees and deduce an e-mail address. With very little research on the Internet, you can find subjects that interest a certain individual and establish communication on a common theme. An example is finding a network administrator and his conversations on various news groups to determine his psychological profile and willingness to share information. Through e-mail interaction, you may be able to gain insightful characteristics about the internal network and related security. A more prevalent approach used by hackers, and thankfully growing more difficult due to security awareness, is calling a help desk and asking for a password reset on an account. However, even with good security practices, such as asking for a human resource (HR) ID or your mother's maiden name, it remains a simple barrier for a minimally skilled attacker to overcome.
E-mail Social Engineering. E-mail can be a powerful persuasion device for hackers and con artists alike. E-mail has become a basic element in society and is considered crucial for many companies to run a successful business. People have grown so accustomed to e-mail that they rarely question the integrity of the content or source. To add to the malaise, many people
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® do not understand how e-mail is routed from one desktop to another, and eventually the technology and science take a back seat to magic, leaving people to assume if the sender is
[email protected], it must be from Dad. Given the general public is trusting of their e-mail and the direct access to people the e-mail service provides, e-mail is used over and over again to spread worms, viruses, and just bad information. It is a trivial task to make an e-mail appear as though it came from a known source. This can be especially powerful when sending an e-mail to someone from his or her management requesting the updated design for an executive presentation about the changes to security controls that are in progress. There is an endless array of e-mails that can be sent to trick people into offering information. These can range from obtaining remote-access phone numbers or information on applications in use to collecting data on security management protocol, such as getting passwords updated. Help Desk Fraud. One of the more common types of social engineering is calling the help desk as an employee in need of help. The traditional subject for help is with passwords and getting new ones. The only problem with this tactic is that help desk employees are usually trained to follow a protocol for providing passwords, and many do not include furnishing them over the phone.
A communication protocol is essentially a predefined list of questions and actions executed by the help desk attendant and the caller to ensure authentication. In many cases, there are several options to the help desk employee to deal with different scenarios. For example, if the caller cannot retrieve e-mail to get the updated password, the help desk may be directed to use voice mail. However, nothing ventured, nothing gained, and many social engineering attacks still include calls to the help desk seeking to obtain unauthorized information, and they still get results. Either someone does not follow protocol, or is simply fooled into thinking he or she has the necessary information to prove the identity of the caller. In some cases, the success was based on misdirection and controlled confusion in the conversation, such as introducing elements that were not considered in the protocol, forcing the help desk employee to make a decision based solely on his or her opinion and assumptions. Beyond trying to get passwords, which can be difficult, obtaining remote-access phone numbers or IP addresses of VPN devices can be helpful as well, and many help desk employees do not see the need to authenticate the caller for seemingly useless information.
Nevertheless, help desks are typically prepared for controlling the provisioning of information and applications, but it is for this very reason that they can be a lucrative target for social engineering attacks. They get calls asking for similar information all day long and are expected to provide answers using the protocol, which can be weak. Additionally, for large help desks or companies that provide help desk services for other companies, there is usually a high degree of rotation of employees, resulting in unfamiliarity with the protocol, introducing even more opportunities to glean information. In some scenarios, the help desk employee may grow nonchalant about giving out passwords and simply give them to the attacker on the phone.
Access to Systems
To this point we have discussed access control principles and the threats to the control environment. This section covers details regarding access controls and essential control strategies. Areas include:
• Identification and authentication
• Access control services
• Identity management
• Access control technologies
The objectives for this section are: • • • • •
List the types of authentication Describe smart cards Define access control services Describe identity management Describe the key access control technologies
Identification and Authentication

Identification is the assurance that the entity (e.g., user) requesting access is accurately associated with the role defined within the system. It is the assertion of a unique user identity and all that it implies within the system or systems accessed. Identification is a critical first step in applying access controls. It is necessary to identify the user in the light of downstream activities and controls. Downstream controls include accountability, with a protected audit trail, and the ability to trace activities to individuals. They also include the provisioning of rights and privileges, system profiles, and availability of system information, applications, and services. The objective is to bind a user to the level of controls based on that unique user instance. For example, once the user is validated through authentication, her identity within the infrastructure will be used to allocate resources based on predefined privileges.
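To make that binding concrete, a minimal sketch follows. The identity, resource, and privilege values are made up for illustration, but the pattern of allocating predefined privileges and writing an audit trail against the authenticated identity is the one described above.

```python
# Predefined privileges bound to each unique identity (illustrative values only).
PRIVILEGES = {"mary_t": {"file_share": "read", "payroll_app": "none"}}
AUDIT_LOG = []  # downstream accountability: every decision is traceable to an identity

def authorize(identity: str, resource: str, action: str) -> bool:
    """Allocate access based on the privileges provisioned for this identity."""
    allowed = PRIVILEGES.get(identity, {}).get(resource) == action
    AUDIT_LOG.append((identity, resource, action, allowed))  # protected audit trail
    return allowed

authorize("mary_t", "file_share", "read")   # True: this privilege was provisioned
authorize("mary_t", "payroll_app", "read")  # False: no such privilege for this user
```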
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® Authentication is verifying the identity of the user. Upon requesting access, a user will present her unique user identification and provide another set of private data that establishes trust between the user and the system for the allocation of privileges. The combination of the identity and data or information only known by or only in the possession of the user acts to verify that the user identity is being used by the expected and assigned entity (e.g., person). Types of Identification. The most common form of identification is a username, user ID, account number, or personal identification number (PIN). These can be used as a point of assignment and association to a user entity within the system. However, access is not limited to users and includes software and hardware services that may require access. Software may need to access objects, modules, databases, or other applications to provide the full suite of services offered. In an effort to ensure the authorized application is making the requests to potentially sensitive resources, they can use digital identification, such as a certificate or onetime session identifier. Other types of identification include tokens, smart cards or smart devices, biometric devices, and even badges for visual and physical access. User Identification Guidelines. There are three essential security practices
regarding identities: • Uniqueness • Nondescriptive • Issuance First and foremost, user identification must be unique so that each person can be positively identified. Although it is possible for a user to have many unique identifiers, each must be distinctive within the access control environment. In short, any function or requirement that will be granted access to the system requires a unique identifier. In the event there are several disparate access control environments that do not interact, share information, or provide access to the same resources, it is possible for duplication. For example, a user’s ID at work is “mary_t,” allowing her to be identified and authenticated within the corporate infrastructure. She may also have a Yahoo e-mail account with the user ID of “mary_t.” This is possible because the corporate access control environment does not interact with services or resources offered by Yahoo’s public services. However, one would rightly conclude this is not a good security practice. Users are prone to duplicating certain attributes, such as passwords, to minimize their effort. Therefore, any duplication, although plausible in certain circumstances, represents a fundamental risk to the enterprise. 148
Access Control User identification should not expose the associated role or job function of the user. User ID’s are typically easy to discover, and the process is frankly trivial; it is the password (in a single-factor authentication scheme) that remains the most important attribute. If a user were to be called “cfo,” an attacker would be able to focus energy on that user alone based on the assumption that he is the CFO of the company and would probably have privileged access to critical systems. However, this is practiced quite often. It is very common to have user IDs of “admin,” “finance,” “shipment,” “Web master,” or other representations of highly descriptive IDs. The most predominant is the username “root.” Everyone, including hackers, knows what the username root represents in a system. It is for this very reason that attaining root’s password is so desirable. Unfortunately, in most UNIX systems (systems that typically employ the root user) changing the user or masking that role is impossible. In Microsoft systems it is possible to change the username of the default “administrator” (nearly the equivalent of root in UNIX) to some other nondescriptive name. Clearly, any highly privileged system account, such as root and administrator, represents a target for attackers, and it can be difficult to mask their role. However, traditional users, who may have a broad set of privileges throughout the enterprise, can be more difficult for attackers to isolate as a target. Therefore, establishing a username that is independent of job function or role will act to mask the true privileges of the user. Finally, the process of issuing identifiers must be secure and documented. The quality of the identifier is in part based on the quality of how it is issued. Regardless of security controls, practices, and management, if an identity can be inappropriately issued, the entire system begins to break down. The identifier is the first, and arguably the most important, step in acquiring access. The issuing of IDs must conform to an established and secure process that clearly defines the requirements, such as approval, notification, administration, and allocation of various contracts, that must be met prior to issuance. Moreover, the entire process must be logged and documented accordingly to ensure the process can be verified and audited. Types of Authentication. There are three types of authentication:
• Authentication by knowledge — what a person knows • Authentication by ownership — what a person has • Authentication by characteristic — what a person is or does As you can see, these philosophies correspond to the multiple factors introduced in the “User Controls” section. User authentication factors are represented by something you know (e.g., a password), something you 149
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® have (e.g., a token or smart card), and something you are or do (e.g., biometrics). Single-factor authentication is the employment of one of these factors, two-factor authentication is using two of the three factors, and three-factor authentication is the combination of all three factors. Single-factor authentication is usually associated with a username and password combination. This is unfortunate because the usual reusable (static) password is easily compromised, and therefore provides very limited security. There are a plethora of technical solutions that provide the framework to identify and authenticate users based on this concept. Given the broad use of username and password combinations, most technical solutions will provide this service. Two-factor authentication usually introduces an additional level of technical controls in the form of a physical or biometric device. Typically, this is a token, fob, or smart device that substantiates the user’s identity by being incorporated into the authentication process, such as a username and password. Nevertheless, two-factor authentication can be the combination of any two of the three basic forms of factor authentication. The incorporation can include one-time passwords, such as a time-sensitive number generation, or the existence of a certificate and private key or a biometric. Three-factor authentication will include elements of all three factors and, as such, will include something about the user, such as a biometric feature. Biometrics cover a number of options, such as fingerprints, retina scanning, hand geometry, facial features, and even temperature. The technical personification of this is a device that is used to interface with a person during the authentication process. What a Person Knows. A representation of single-factor authentication, what a person knows, can be associated with the user ID or other unique identifier. As discussed above, this is predominantly a password. A password is typically a short (5 to 15 characters) string of characters that the user must remember to authenticate against their unique identifier. Passwords can be simple words or a combination of two easily remembered words, include numbers or special characters, and range in length and complexity. The more diverse (e.g., complicated) the password is, the more difficult it will be to guess or crack the password. Given that password crackers will typically start with a dictionary, or a collection of words, the use of common words has grown significantly less secure. What was once feasible as a password is now considered a vulnerability.
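As a rough sketch of why dictionary words fail (and before looking at examples of each password category below), the fragment here hashes candidate words against a stolen password hash. The salt, word list, and example password are invented purely for illustration.

```python
import hashlib

def hash_password(password: str, salt: str = "s9f2") -> str:
    """Store only a one-way hash of the password; the salt is a fixed, made-up value."""
    return hashlib.sha256((salt + password).encode()).hexdigest()

stolen_hash = hash_password("Phoenix")   # what an attacker recovers from a password file

# Offline dictionary attack: hash candidate words until one matches the stolen hash.
dictionary = ["airplane", "BoB", "Phoenix", "doGWalk"]
cracked = next((word for word in dictionary if hash_password(word) == stolen_hash), None)
print(cracked)  # "Phoenix" falls immediately; a long, complex password would not be in the list
```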
• Standard words: BoB, Phoenix, airplane, doGWalk are examples of basic words, with some capitalization, that are not considered secure by today’s standards. Unfortunately, this represents the most common passwords used. But system administrators are incorporating password requirements so that the system will not accept 150
Access Control such simple passwords. However, applying this type of administrative control is not always possible. • Combination passwords: Air0ZiPPEr, Bean77Kelp, and OcEaNTaBlE12 are examples of mixing dictionary words to help the user remember the password. The inclusion of numbers can help add some complexity to the password. This example represents what a user will create to meet the system requirements for passwords, but these are still somewhat easy for a password cracker to discover. • Complex passwords: Z(1@vi|2, Al!e&N-H9z, and W@!k|nGD2w*^ are examples of passwords that include many different types of characters to introduce a significant level of complexity. Unfortunately, these examples can be difficult for the average user to remember, potentially forcing them to write the password down where it can be discovered. There are several practices to help users produce strong passwords that can be easily remembered. It can be as simple as remembering a keyboard pattern or replacing common letters with numbers and special characters, or characters that represent words that start with the letter being entered, for example, “password” can be P@zS|W0r$. Although not recommended, this password does meet most system requirements; it is over eight characters and uses lower- and uppercase letters, numbers, and special characters. An alternative to passwords is a passphrase. They are longer to enter and typically harder to attack. A passphrase will support all types of characters and spaces, allowing the user to utilize an easier to remember password without sacrificing the integrity. Examples are: • List of names: “Bobby CarolAnn Stuart Martha Mark” is an example of a 33-character passphrase that is very easy for the user to remember, but can be difficult for a password cracker to discover. • Song or phrase: “A long time ago in a galaxy far, far away …” Of course, as a phrase is identified, it is simple to incorporate special characters to replace letters, furthering the complexity. Given the singularity of the password, in that all you need is a username to associate, the confidentiality of the password or passphrase is critical. Passwords must never be passed over a network or stored in cleartext. Unfortunately, old services such as File Transfer Protocol (FTP) do not protect the password from exposure during authentication. It is important to ensure that applications and network-enabled services apply various controls to ensure passwords are not exposed. Additionally, the storage of passwords must be protected. As discussed above, password crackers can be used to attack a password file offline and 151
unbeknownst to the system owner. Therefore, the security of the system maintaining the user credentials is paramount. To protect passwords from being exposed, they are typically hashed. A hash function will produce a unique, fixed-length representation of any amount of data. The important aspect is that the hash function is inherently one-way. This means that a hash result cannot be deciphered to produce the original data. Although there are several types of attacks that can be performed to take advantage of weaknesses in the hash algorithm or how it is employed, in nearly all cases the hash cannot be directly manipulated to produce the original data. Of course, as we learned above, password crackers can hash different passwords until a match with the original password hash in a file is found. This is not a weakness of the philosophical attributes of hashing data. It is simply performing an act that a normal system operation would perform, just thousands or potentially millions of times. However, the time to discovery of a password by means of a password cracker is directly related to the complexity of the user's password and the employment of the hashing algorithm. What a Person Has. In addition to a unique identity, a user may be provided a token or some form of physical device that can be used in lieu of a traditional password or in addition to having a password. The objective is to add another layer of confidence that the user is who he or she claims by the assurance a physical device offers. There are two basic two-factor methods: asynchronous and synchronous. ASYNCHRONOUS. An asynchronous token device is essentially a challenge-response technology. Dialogue is required between the authentication service and the remote entity trying to authenticate. Basically, the process is this: The authentication server will provide a challenge to the remote entity that can only be answered by the token that the individual holds in her hands. The token will give the correct response, which is then provided to the authentication server. Without the asynchronous token device, a correct answer to the challenge cannot be generated.
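A minimal sketch of such a challenge-response exchange is shown below. It assumes a shared secret embedded in the token and known to the authentication server; the key value, function names, and response length are illustrative, not any particular vendor's scheme.

```python
import hashlib
import hmac
import secrets

# Shared secret provisioned into the token and known to the authentication server
# (the value is purely illustrative).
TOKEN_SECRET = b"example-embedded-token-key"

def issue_challenge() -> str:
    """Authentication server: generate an unpredictable challenge to display to the user."""
    return secrets.token_hex(4)

def token_response(challenge: str, pin: str) -> str:
    """Token: combine the challenge and the user's PIN with the embedded key."""
    message = (challenge + pin).encode()
    return hmac.new(TOKEN_SECRET, message, hashlib.sha256).hexdigest()[:8]

def server_verify(challenge: str, pin: str, response: str) -> bool:
    """Server: recompute the expected response and compare in constant time."""
    return hmac.compare_digest(token_response(challenge, pin), response)

challenge = issue_challenge()                    # challenge shown to the user
answer = token_response(challenge, "1234")       # user keys the challenge and PIN into the token
assert server_verify(challenge, "1234", answer)  # server validates the returned response
```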
As demonstrated in Figure 2.2, the user makes a request and is challenged. The user will utilize his token to calculate the requested response. For example, the user may enter his PIN in the device, which mathematically produces a unique number. The number is then entered to authenticate. The authenticating system has the ability to perform the same mathematical function to ensure the one-time password is correct. SYNCHRONOUS. Although the scenario is very similar, synchronous token authentication is based on an event, location, or time-based synchronization between the requestor and authenticator. The most common and widely adopted version is time based, although smart cards and smart
Figure 2.2. Asynchronous token process (challenge-response scheme based on a one-time pad). Steps shown in the figure: (1) a request is sent to the authentication server; (2) a challenge number is displayed on the computer; (3) the user enters the challenge number and PIN on the handheld token; (4) the user reads the response on the handheld; (5) the user enters the response on the computer; (6) the response is sent to the authentication server.
tokens can store special credentials that promote event and location authentication schemes (covered in more detail later). In a time-based model, a user will be issued a token or smart device that utilizes an embedded key to produce a unique number or string of characters in a given timeframe, such as every 60 s. When the user makes a request for authentication, she is challenged to enter her user ID and the number that is currently displayed on the token. The authenticating system will know which token is issued to the user and, based on the timing of the authentication request, will know the number that should appear on the device. In addition to the relationship established between user and authenticator based on user ID, the physical possession of the token, and the number generated based on an embedded key and time, the system may request a PIN or password. In many cases, an added entry location will be provided to the user to enter her password. However, some solutions will simply request the user to append her PIN to the end of the token-generated number, furthering the level of confidence that the user identified is who she claims to be. Authentication Devices. There are physical devices, in addition to traditional number generation tokens, that contain credentials for authentication. The credentials, which can only exist on the device, act as a representation of physicality and possession of the device.
There are effectively two forms of authentication devices:

• Memory cards
• Smart cards

The main difference between memory cards and smart cards is the processing power. A memory card holds information, but does not process information. A smart card has the necessary hardware and logic to actually
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® process information. A memory card holds a user’s authentication information, so this user only needs to type in a user ID or PIN and present the memory card; if the two match and are approved by an authentication service, the user is successfully authenticated. An example of a memory card is a swipe card that must be used for an individual to be able to enter a building. The user enters a PIN and swipes the memory card through a card reader. If this is the correct combination, the reader flashes green and the individual can open the door and enter the building. Another example is the ATM (automatic teller machine) card. Memory cards can be used with computers, but they require a reader to process the information. The reader adds cost to the process, especially when one is needed per computer, and the overhead of PIN and card generation adds additional cost and effort to the whole authentication process. A memory card provides a more secure authentication method than using a password because the attacker would need to obtain the card and know the correct PIN. Administrators and management need to weigh the costs and benefits of a memory card implementation to determine if it is the right authentication mechanism for their environment. One of the most prevalent weaknesses of memory cards is that data stored on the card is not protected. Unencrypted data on the card (or stored on the magnetic strip) can be extracted or copied. Unlike a smart card, where security controls and logic are embedded in the integrated circuit, memory cards do not employ an inherent mechanism to protect the data from exposure. Therefore, very little trust can be associated with confidentiality and integrity of information on the memory cards. A smart card is a credit card-size plastic card that, unlike a credit card, has an embedded semiconductor chip that accepts, stores, and sends information. It can hold more data than magnetic-stripe cards. The semiconductor chip can be either a memory chip with nonprogrammable logic or a microprocessor with internal memory. Information on a smart card can be divided into several sections: • • • •
• Information that is read only
• Information that is added only
• Information that is updated only
• Information with no access available
There are different types of security mechanisms used in smart cards. Access to the information contained in a smart card can be controlled by:

• Who can access the information (everybody, the cardholder, or a specific third party)
• How the information can be accessed (read only, added to, modified, or erased)
Access Control SMART CARDS. The term smart card is somewhat ambiguous and can be used in a multitude of ways. The International Organization for Standardization (ISO) uses the term integrated circuit card (ICC) to encompass all those devices where an integrated circuit (IC) is contained within an ISO 1 identification card piece of plastic. The card is 85.6 × 53.98 × 0.76 mm and is essentially the same as a bank or credit card.
The IC embedded is, in part, a memory chip that stores data and provides a mechanism to write and retrieve data. Moreover, small applications can be incorporated into the memory to provide various functions. There are several memory types, some of which can be implemented into a smart card, for example: • ROM (read-only memory): ROM, or, better yet, the data contained within ROM, is predetermined by the manufacturer and is unchangeable. Although ROM was used early in the evolution of smart cards, it is far too restrictive for today’s requirements. • PROM (programmable read-only memory): This type of memory can be modified, but requires the application of high voltages to enact fusible links in the IC. The requirements for high voltage for programming made it unusable for ICC, but many have tried. • EPROM (erasable programmable read-only memory): EPROM was widely used in early smart cards, but the architecture of the IC operates in a one-time programmable mode (OTP), restricting the services offered by the ICC. Moreover, it required ultraviolet light for erasing the memory, making it difficult for the typical organization to manage cards. • EEPROM (electrically erasable programmable read-only memory): EEPROM is the IC of choice because it provides user access and the ability to be rewritten, in some cases, up to a million times. Clearly this provides the services smart cards need to be usable in today’s environment. Typically, the amount of memory will range from 8 to 256 KB. • RAM (random-access memory): Up until this point, all the examples were nonvolatile, meaning that when power is removed, the data remains intact. RAM does not have this feature, and all data is lost when not powered. For some smart cards that have their own power source, RAM may be used to offer greater storage and speed. However, at some point the data will be lost — this can be an advantage or disadvantage, depending on your perspective. However, memory alone does not make a card “smart.” In the implementation of an IC, a microcontroller (or central processing unit) is integrated into the chip, effectively managing the data in memory. Control logic is embedded into the memory controller providing various services, least of which is security. Therefore, one of the most interesting aspects for smart 155
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® cards — and their use in security-related applications — is founded on the fact that controls associated with the data are intrinsic to the construction of the IC. To demonstrate, when power is applied to the smart card, the processor can apply logic in an effort to perform services and control access to the EEPROM. The logic controlling access to the memory is a significant attribute in ensuring that data, such as a private key, is not exposed. Smart cards can be configured to only permit certain types of data, such as public key certificates and private keys, to be stored on the device, but never accessed directly by external applications. For example, a user may request a certificate and a corresponding public and private key pair. Once the keys and certificate are provided to the user, the user may have the opportunity to store the certificate and private key on the smart card. In these cases, some organizations will establish a certificate policy that only allows the private key to be exported once. Therefore, the private key is stored permanently on the smart card. However, smart cards will typically contain all the logic necessary to generate the keys during a certificate request process. In this scenario, the user initiates a certificate request and chooses the smart card as the cryptographic service provider. Once the client system negotiates with the certificate authority, the private key is generated and stored on the smart card in a secure fashion. When organizations select this strategy, the keys are nonexportable and tied to the owner of the card. A very similar process is utilized to leverage the private key material for private key processes, such as digitally signing documents. Data is prepared by the system and issued to the smart card for processing. This allows the system and user to leverage the key material without exposing the sensitive information, permitting it to remain in a protective state on the smart card. To allow these functions and ensure the protection of the data, programs are embedded in portions of the memory that the processor utilizes to offer advanced services. We will discuss these in more detail later. Nevertheless, simply put, a smart card has a processor and nonvolatile memory, allowing it to be smart as well as secure. To bring all this together, the following are examples of smart card features that are typically found on a common smart card today: • 64-KB EEPROM: This is the typical amount of memory found on contemporary cards. • 8-bit CPU microcontroller: This is the small controller where several forms of logic can be implemented. For example, it is not uncommon for a processor to perform cryptographic functions for DES, 3DES, RSA 1024 bit, and SHA-1, to name a few. 156
Access Control • Variable power, 2.7 to 5.5 V: Given advances in today’s IC substrate, many cards will operate below 3 V, offering longer life and greater efficiencies. Alternatively, they can also operate up to 5.5 V to accommodate old card readers and systems. • Clock frequency, 1 to 7.5 MHz: In the early developments of smart card technology, the clock was either 3.57 or 4.92 MHz, mostly because of the inexpensive and prolific crystals that were available. In contrast, today’s IC can operate at multiple speeds to accommodate various applications and power levels. • Endurance: Endurance refers to the number of write/erase cycles. Obviously, this is important when considering smart card usage. Typically, most smart cards will offer between 250,000 and 500,000 cycles. Considering the primary use of a smart card in a security scenario is permitting the access to read data on the card, it is highly unlikely that someone would reach the limits of the IC. However, as more complex applications, such as Java, are integrated into the IC, there will be more management of the data, forcing more cycles upon each use. • Data retention: User data and application data contained within the memory have a shelf-life. Moreover, that life span is directly related to the temperature the smart card is exposed to. Finally, the proximity to some materials or radiation will affect the life of the card’s data. Most cards offer a range between 7 and 10 years of data retention. It is important to understand that a smart card is effectively a computer with much of the same operational challenges. There is the IC, incorporating the processor and memory; the logic embedded in the processor that supports various services; applications built into the processor and housed on the EEPROM for on-demand use; protocol management, how it is supposed to interface with other systems; and managing the data. All these and more exist in a very small substrate hidden in the card and will only become more complex as technology advances. At the most basic level, there are two types of smart cards, and the difference is founded on how they interact with other systems — contact cards, which use physical contact to communicate with systems, or contactless cards, which interface using proximity technology (Figure 2.3). Contact cards are fairly self-explanatory. Based on ISO 7816-2, a contact ICC provides for eight electrical contacts (only six are used) to interact with other systems or devices. The contacts on a smart card, as shown in Figure 2.4, provide access to different elements of the embedded IC. The contact designation (Cn) starts with C1, Vcc, and continues counterclockwise around the plate (see Table 2.1). Contactless cards, those founded on proximity communications, are growing in demand and in use. They are increasing in adoption because of 157
Figure 2.3. Smart cards.
Figure 2.4. Contact plate.
durability, applications in use, speed, and convenience. They eliminate the physicality of interacting with disparate systems, reducing the exposure to damage that contact cards have with the plate or magnetic strip. Finally, a contactless card offers a multitude of uses and opportunity for integration, such as with cell phones or PDAs. Typically, the power and data interchange is provided by an inductive loop using low-frequency electronic magnetic radiation. ISO 14443 defines the physical characteristics, radio frequency power and signal interface, initialization and anticollision, and transmission protocol for contactless cards. The proximity coupling device (PCD) provides all the power and signaling control for communications with the card, in this case referred to as a proximity integrated circuit card (PICC). The PCD produces a radio frequency (RF) field that activates the card once it is within the electrometric field loop. The frequency of the RF operating field is 13.56 MHz ± 7 kHz and operates constantly within a minimum
Table 2.1. Contact Descriptions

C1 (Vcc): Power connection through which operating power is supplied to the microprocessor chip in the card.
C2 (RST): Reset line through which the interface device (IFD) can signal to the smart card's microprocessor chip to initiate its reset sequence of instructions.
C3 (CLK): Clock signal line through which a clock signal can be provided to the microprocessor chip; this line controls the operation speed and provides a common framework for data communication between the IFD and the ICC.
C4 (RFU): Reserved for future use.
C5 (GND): Ground line providing common electrical ground between the IFD and the ICC.
C6 (Vpp): Programming power connection used to program the EEPROM of first-generation ICCs.
C7 (I/O): Input/output line that provides a half-duplex communication channel between the reader and the smart card.
C8 (RFU): Reserved for future use.
and maximum power range. When a PICC is incorporated into the loop, the PCD begins the communication setup process. There are two types of modulation (or signal types), type A and type B, that the PCD alternates until a PICC is incorporated and interacts on a given interface. The important point is that both types support 106 kbps in bidirectional communications. This can be best described as selecting what is the equivalent to layers 1 and 2 of the OSI model for computer networking. Many of the PICC and PCD solutions today are provided by or founded on Mifare and host ID (HID) products and solutions, the de facto in proximity solutions. A major advantage to smart cards is that the log-on process is done at the reader instead of at the host. Therefore, the identifier and password are not exposed to hackers while in transit to the host. In an increasingly online environment, where critical information and transactions are exchanged over open networks, security is key. New technologies involve smart cards, readers, and tools incorporating public key infrastructure (PKI) technologies to provide everything from authentication to network and information security functions on a single card platform. Features include: • Secure log-on • Secure e-mail/digital signatures 159
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® • Secure Web access/remote access • Virtual private networks (VPNs) • Hard disk encryption For organizations considering smart cards, there are the physical uses of the card, the security uses, and even application extension uses. Although each is helpful in its own right, the value is in the singularity of the solution — a card. Therefore, if any of the uses below are seen as meaningful options, nearly by default all others are plausible and offer potential for significant returns on related investments. Typically, the way smart cards are used for accessing computer and network resources is that a user inserts the smart card into a reader and then enters the PIN associated with that card to unlock the services of the card. In such a scenario, the first factor of security is providing something you have — a smart card. The second factor in this case is providing something you know — the PIN. The data on the card, validated by the existence of the card in combination with the access authenticated by the PIN, offers integrity to the two-factor authentication process. With a smart card, there is a level of added integrity. The PIN provides access to the information on the card (not simply displayed as with a token), and the key on the device is used in the authentication process. When used in combination with certificates, the system participates in a larger infrastructure to provide integrity of the authentication process. Below is a summary of smart card capabilities: • • • • • • •
• Can store personal information
• Offer a high degree of security and portability
• Provide tamper-resistant storage
• Can isolate security-critical computations within the card
• Offer secure enterprisewide authentication
• Can be used in encryption systems to store keys
• Can perform encryption algorithms on the card
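A rough sketch of that two-factor flow is below. The SmartCard class and its methods are purely hypothetical stand-ins for a real reader and card API (for example, a PKCS #11 interface) and exist only to show where the PIN check and the on-card signing occur.

```python
from dataclasses import dataclass

@dataclass
class SmartCard:
    """Hypothetical stand-in for a card API; the private key never leaves the card."""
    _pin: str
    _unlocked: bool = False

    def verify_pin(self, pin: str) -> bool:
        # Something you know: the PIN unlocks the card's on-card services.
        self._unlocked = (pin == self._pin)
        return self._unlocked

    def sign(self, data: bytes) -> bytes:
        # Something you have: signing happens inside the card with the stored key.
        if not self._unlocked:
            raise PermissionError("card is locked")
        return b"on-card-signature-over-" + data  # a real card runs RSA/ECDSA in the IC

card = SmartCard(_pin="4321")                           # issued with a non-exportable private key
if card.verify_pin("4321"):                             # factor 1: PIN entered at the reader
    signature = card.sign(b"authentication challenge")  # factor 2: possession of the card
```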
What a Person Is. The ability to authenticate a user based on physical attributes about his person or other characteristics unique to him is possible due to biometric technology. Biometrics is the use of sophisticated technology to determine specific biological indicators of the human body or behavioral characteristics that can be used to calculate uniqueness.
There are two types of biometrics: • Physiological • Behavioral PHYSIOLOGICAL. A physiological biometric is representative of acquiring information about unique, physical attributes, such as a fingerprint, of the 160
Access Control user. The user interfaces with the device to provide the information; the device performs a measurement (potentially several times), makes a final determination, and compares the results with the information stored in the control system. Following is a discussion about some of the commonly used physiological biometric systems. Fingerprints are unique to each person and have been used for years to identify people. One of the earlier forms of easily accessible biometric systems was based on fingerprint analysis. Today, fingerprint biometric PCMCIA (Personal Computer Memory Card International Association) thumb reader cards can be purchased and quickly utilized. Moreover, the technology has become so commonplace that USB memory fobs have incorporated fingerprint readers into the datastick to authenticate its use. Hand geometry is a technique that will attempt to discern several attributes about a person’s hand to gather enough information to draw conclusions on identity. The system may measure tension in the tendons in the hand, temperature, finger and bone length, and hand width, among several others. Palm or hand scans can be best described as a combination between certain capabilities of hand geometry and fingerprint analysis. The user’s hand is typically placed flat on a special surface where, again, several points of information are collected and combined to make a determination on the user’s identity. Up until this point, the human elements discussed have been relegated to the hand. However, the face and eyes offer another aspect of individuality that can be harnessed by a biometric device to determine and authenticate identity. There are two primary aspects of the human eye that can be investigated: the retina and iris. Each is unique to an individual to each eye. The retina is effectively the back of the eye. There are blood vessels that can be scanned to identify distinctive patterns. The iris is the colored material surrounding the pupil that governs the amount of light permitted to enter the eye. Again, each one has granularity characteristics that will identify the individual. Voice patterns and recognition are regularly used for identification and authentication. In recent history, the advancement in speech recognition has added to the technology’s viability. Voice pattern matching investigates how a user speaks. Typically, the user will associate with the system by saying several different words, including her name. When authentication occurs, the user typically says her name and the system matches the results with the expected pattern, supported by metrics gathered during the use of other words spoken during the association. The system is attempting to determine the unique sounds produced by the human during speech. With the addition of speech recognition, the system can identify 161
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® what is said as well as how it is spoken. Therefore, a passphrase can be used in combination with a person’s name to authenticate on multiple levels. Again, as with other forms of multifactor authentication, other elements are used, such as a PIN, smart card, swipe card, or even another biometric system, such as fingerprint input. Finally, the entire face can be used by a system to visually verify geometry and heat signatures that correspond to skull structure and tissue density. BEHAVIORAL. The act of determining unique characteristics about a person from patterns in their actions has emerged as a viable approach to biometrics. The most prevalent of these methods is keystroke dynamics. As mentioned above, in many cases with multifactor authentication, the password (passphrase, PIN, or something else the users knows) is incorporated into the process. Keystroke pattern analysis is adding a biometric measurement to the user’s authentication process based on the fact that people enter the same information differently. For example, a user is prompted for a traditional username and password. He enters the information, but the system measures how he entered his password. If the password is correct, but the biometric pattern is wrong, he will be denied access.
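A toy illustration of the keystroke-dynamics idea follows, assuming the inter-key timings have already been captured. Real products model far more features, and the tolerance value here is an arbitrary illustrative threshold.

```python
def keystroke_matches(enrolled, attempt, tolerance=0.08):
    """Compare inter-keystroke intervals (in seconds) against the enrolled rhythm."""
    if len(enrolled) != len(attempt):
        return False
    # Mean absolute deviation between the stored rhythm and this attempt.
    deviation = sum(abs(e - a) for e, a in zip(enrolled, attempt)) / len(enrolled)
    return deviation <= tolerance

enrolled_rhythm = [0.21, 0.35, 0.18, 0.40, 0.22]  # captured while enrolling the password
this_attempt = [0.23, 0.33, 0.20, 0.41, 0.25]     # captured while typing it just now

# Even with the correct password, a markedly different rhythm is rejected.
print(keystroke_matches(enrolled_rhythm, this_attempt))                     # True
print(keystroke_matches(enrolled_rhythm, [0.60, 0.75, 0.55, 0.80, 0.62]))   # False
```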
In addition to keystroke authentication, an emerging solution is signature dynamics. Much like voice patterns, a user can sign her name on a digital interface, allowing the system to not only infer the context of the signature, but also measure stroke speed, acceleration and deceleration, and pressure. Law enforcement and forensics have been leveraging the science of handwriting dynamics for many years. They inspect the physical nature of the signature to identify known characteristics that indicate forgeries. For example, if the paper (or material) shows signs that intense pressure was applied, it could indicate a copy or intensity, representative of someone uncomfortable during the signing — a rarity for normal signing processes. Moreover, pauses or slow movement in specific areas typically associated with fast or light areas of the signature can be identified by an investigator. Again, signs of a forgery. These same principles are utilized by a pressure-sensitive digital interface to detect a divergence from expected signing practices. SUMMARY. As far as “what a person is” in the realm of biometrics, the truly unique biometric characteristics are fingerprint, retina, and iris. These are the three biometric options that represent the greatest diversity with the lowest range of measurement. In other words, the patterns found in the eye and on the finger have been proven to be unique. Moreover, the technical and mathematical processes of measuring these human attributes are proven technology. Unlike other physiological and behavioral biometric possibilities, the eyes and fingers represent the best combination of accurate measurement of a proven human characteristic. 162
Important Elements of Biometrics. Biometrics represents a measurement of key physical attributes of a given person. Unlike passwords, tokens, or smart devices, which offer a high degree of accuracy and confidence in the transaction based on limited possibilities in measurement, biometrics, by their very nature, are prone to failure.
A password can be hashed and verified. The measurement process is static: the algorithm. Tokens, especially synchronous tokens, are founded on a strict measurement framework, such as time. When an error occurs in token-based authentication, it is typically because the time window between authentication and number creation on the token has shifted, causing an error. This is a common and sometimes expected error, and the authentication failure is in the direction of denial of access. In the case with smart cards, there is no measurement dynamic and the card adheres to established technical and physical standards. In stark contrast, biometrics is effectively a technical and mathematical guess. How a biometric device responds can be affected by hundreds, if not thousands, of environmental variables. Temperature, humidity, pressure, and even the medical condition of the individual represent changes the measurement process must try to cope with. If a person is ill or a woman is pregnant, iris scans, facial recognition, and hand geometry may fail. (Note: It is for this reason that biometrics has not been widely deployed for public use. Privacy issues are numerous.) Therefore, the accuracy, or the ability to separate authentic users from imposters, is essential to the calculated risk of use. There are three categories of biometric accuracy measurement (all represented as percentages): • False reject rate (type I error): When authorized users are rejected as unidentified or unverified. • False accept rate (type II error): When unauthorized persons or imposters are accepted as authentic. • Crossover error rate (CER): The point at which the false rejection rates and the false acceptance rates are equal. The smaller the value of the CER, the more accurate the system. As demonstrated in Figure 2.5, the level of sensitivity translates into varying levels of false rejections and false acceptance. The lower the sensitivity, the more prone the system is to false acceptance. With low sensitivity, the system may offer a broad margin of error or disparity in the determination of the metrics measured. It can also be represented as not acquiring enough meaningful data to discern the authorized from the unauthorized. If the system is overly sensitive, it may apply too much granularity to the process, resulting in a level of investigation that is prone to environmental changes or minor changes in the person. In both cases, there is a point of infinity and neutrality. Not sensitive enough, and anyone 163
Figure 2.5. Crossover error rate.
is authorized; too sensitive, and no one gets through. Neutrality is achieved by tuning the system so the rates intersect at a midpoint. The lower the intersection point between the two rates, the more accurate the system is overall. It is important to understand the relationships and measurements, because the trends in either direction are not mathematically consistent. Moreover, the level of intersection will expose hardware issues. For example, if the lowest attainable intersection is at 20 percent, regardless of sensitivity adjustments, the system is clearly failing consistently at an unacceptable level. Also, the key is to attain the lowest possible intersection rate at the greatest possible sensitivity. Therefore, the optimal CER location is in the bottom right of the graph (Figure 2.6). In this light, although the lowest CER rate is desired, the less the sensitivity applied to push the intersection down, the greater the potential for failure. For example, while the intersection is the lowest with less sensitivity, the system is susceptible to changes in the environment promoting
Figure 2.6. Optimal crossover error rate.
Access Control false acceptance. Therefore, it may be necessary to sacrifice a slightly higher intersection to accommodate unforeseen dynamics. Finally, the practice of tuning to the side of false rejections offers confidence that there is less potential for unauthorized access, albeit at the risk of introducing some added false rejections. The key is in the delta between the two and the relative sensitivity gained. As shown in Figure 2.6, the level of sensitivity is increased by nearly a third, while only increasing the potential for false rejections by a nominal percentage. As one would rightly conclude, the ability to tune the system and make determinations on what is optimal for a specific solution is directly relative to the level of risk and the importance of the controls. In short, the crossover error rate must be appropriate to the application. Biometric Considerations. In addition to the access control elements of a biometric system, there are several other considerations that are important to the integrity of the control environment. These are:
• Resistance to counterfeiting
• Data storage requirements
• User acceptance
• Reliability and accuracy
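Before examining each of these, the accuracy measures introduced above (false rejects, false accepts, and their crossover) can be made concrete with a small sketch. The match scores and the threshold sweep below are invented purely for illustration.

```python
def error_rates(genuine, impostor, threshold):
    """FRR: genuine users scoring below the threshold; FAR: impostors scoring at or above it."""
    frr = sum(score < threshold for score in genuine) / len(genuine)
    far = sum(score >= threshold for score in impostor) / len(impostor)
    return frr, far

# Hypothetical match scores (0 = no match at all, 1 = perfect match).
genuine_scores = [0.91, 0.85, 0.78, 0.88, 0.66, 0.95, 0.81, 0.73]
impostor_scores = [0.12, 0.33, 0.58, 0.27, 0.49, 0.71, 0.22, 0.40]

# Sweep the sensitivity threshold; the crossover error rate is where FRR and FAR meet.
best = min(
    (abs(frr - far), threshold, frr, far)
    for threshold in (t / 100 for t in range(101))
    for frr, far in [error_rates(genuine_scores, impostor_scores, threshold)]
)
_, threshold, frr, far = best
print(f"CER near threshold {threshold:.2f}: FRR = {frr:.2f}, FAR = {far:.2f}")
```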
First and foremost is resistance to counterfeiting. Portrayed in movies, a person gets his finger cut off or eye removed by the villain to be used to gain unauthorized access to a super-secure room. In nearly all cases, a biometric system will employ simple metrics, such as heat, blood pressure, or heart rate, to add further confidence in the process. A very popular activity is the mimicking of a human thumb to fool a fingerprint reader. The perpetrator places a modified, thin layer of gummy bear jelly on his finger to gain unauthorized access. The thin coating, with the fingerprint of the authorized user incorporated, allows heat, pressure, and other simple metrics to be fooled because there is a real finger behind the coating. If the imposter knows the user’s PIN and has her physical credentials, the attack is probable, although unlikely. Nonetheless, a highly determined and sophisticated attacker could identify biometric weaknesses and take advantage of them by counterfeiting what is measured. Although arguably very complicated, it is feasible and therefore must be considered. Beyond the tuning of the system to the upper, optimal range and adding other identification and authentication requirements, there is little one can do; he or she is at the mercy of the product vendor to incorporate added investigative controls. A biometric system has to be trained for a given user during association. For example, a user may have to talk into a microphone, look into a scanner, or provide a fingerprint several times before the system can obtain enough data to make future decisions. During this process, the system is 165
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® gathering highly sensitive and private information about the physical attributes of the user. The security of the data storage is paramount. Not unlike securing the hashed passwords on a system so that it does not fall into the hands of an attacker with a password cracker, the information about a user’s physical attributes can be used against the system or even the user. Great care and consideration must be applied to the security of the system’s authentication data to protect the system, the organization, and the user’s privacy. User acceptance can be a significant barrier to meaningful adoption of biometrics. People have understandable concerns about lasers scanning the insides of their eyes and the potential exposure of very private information, such as health status. For example, a biometric system may be able to detect a drug addiction, disease, or pregnancy before the woman knows she is pregnant. Clearly, information of this type represents a concern to users, and understandably so. In some scenarios, these concerns are nullified by the simple fact that the process is required by a job position or role within an organization — a necessary evil. A good example is the Pentagon, the central military facility in the United States, which employs a wide array of biometric systems that you must use and abide by as a member of the government. In contrast, there have been several attempts in the private sector to incorporate biometric systems to simplify customer interaction, such as with automatic teller machines (ATMs). Unfortunately, these endeavors failed due to poor user adoption, probably because of the aforementioned concerns. Finally, user acceptance may be hindered by the intrusiveness of the system. Some people may not find it intrusive to place their thumb on a reader, while others may be uncomfortable with perceptions the of sanitary condition of the device. Placing your hand on a device or looking into a system requires close personal interaction that may exceed an individual’s threshold of personal space. In addition to accuracy, detailed above, the reliability of the system is important. For example, passport control points in airports, theme parks, or other areas that can utilize biometrics for the public are typically used a lot and may be in a location that exposes the system to the elements. Sea World in Florida uses hand geometry biometrics for annual members. Members receive a picture identification card that is swiped at the entry point, and they insert their hand into the reader. The system will be used by thousands of people in a given year, and these devices are outside, exposed to the elements. The reliability of the system to perform, and perform at an acceptable level of accuracy, must be considered. Beyond physical dependability, and related more so to accuracy, is the reliability of the solution. The standard enrollment process should take roughly two minutes per person. A system should be able to reliably obtain 166
enough information for future authentication requirements. In the event the system takes longer, it may affect user acceptance, raise questions of capability, and reduce confidence in the system. Moreover, the average authentication rate (speed and throughput) is six to ten seconds. A reliable system should be able to attain these levels of performance. When considering the role of biometrics, its close interactions with people, and the privacy and sensitivity of the information collected, some disadvantages begin to arise. The most significant is the inability to revoke the physical attribute of the credential. In contrast, a token, fob, or smart card can be confiscated. This limitation requires significant trust in the biometric system. If a user's biometric information were to be captured in transit or counterfeited via a falsified reader, there are few options available to the organization to (1) detect that the attack has occurred and (2) revoke the physical credential. The binding of the authentication process to the physical characteristics of the user can complicate the revocation or decommissioning processes. Finally, biometric technology excels at performing authentication. Biometrics collect specific information in a relatively controlled, localized environment and perform what amounts to a comparative calculation against expected data. In stark contrast, biometrics do not perform well for identification purposes, such as face scanning for terrorists in a crowd. Authentication Method Summary. We have covered a large amount of information about identification and authentication techniques, technology, and processes. However, how do these solutions compare? As demonstrated in Figure 2.7, the level of capability and confidence increases as more factors, and stronger types of factors, are included in the identification and authentication process.
The strength of authentication methods will vary, but generally, biometrics tends to provide higher security than any of the others. The way to interpret Figure 2.7 is to simply understand that as we move from left to right, we are increasing the strength provided by the authentication method. Something you are provides more security than something you know. One would rightly conclude that greater security controls will translate into increased implementation and support costs. Unmistakably, deploying a username and password access control system to all your users costs significantly less than deploying a token-based solution. As shown in Figure 2.8, as the complexity of the solution increases, so does the cost. The value to the business is what is being measured and demonstrated, not the strength of a given solution. For example, it may cost less to deploy fingerprint readers than it would to do the same for tokens, whereas a fingerprint solution has the potential to be a much stronger authentication solution. 167
Figure 2.7. Authentication method comparison.
Figure 2.8. Authentication method cost versus business value.
Figure 2.9. Cost versus risk.
The value, or perception of value to the business, is directly related to risk and the determined level of controls for a collection of assets. As with anything in the realm of security, the cost of controls is related to the value of what is protected. Moreover, controls should be balanced. An iris scanned to gain access to a secured floor in a building does not mean much when people can walk out with sensitive materials or e-mail data to an anonymous account. As shown in Figure 2.9, risk analysis is key, and a balanced, pragmatic approach will help determine the level of authentication for an organization. Therefore, strength can relate, to some degree, to cost and ultimately risk. Identity and Access Management Access control services represent the systems and system architecture of the access control environment. The services that are offered speak directly to the core attributes of a control solution; they are: • Identification: Asserts user identity. • Authentication: Verifies who the user is and whether access is allowed. • Authorization: What the user is allowed to do. • Accountability: Tracks what the user did and when it was done. A typical control architecture is comprised of three systems: • Host • Requestor • Authenticator
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® The host is the system, user, application, or service providing an identification and authentication interface. The requestor, also referred to as the network access server (NAS), is the system providing a challenge to the host. The authenticator is the system performing the validation of the user’s credentials. A simple example is a user logging into a remote-access solution, such as a VPN. The user attempts to connect from the host to the VPN device (i.e., NAS, requestor), which in turn issues a challenge to the host for authentication. To verify the host’s credentials, the requestor interacts with the authenticator, such as a Microsoft active directory, to validate the user. Identity Management Identity management is a much used term that refers to a set of technologies intended to offer greater efficiency in the management of a diverse user and technical environment. Identity management is intended to solve the difficulties of managing the identity of employees, contractors, customers, partners, and vendors in a highly complex organization. Identity management systems are IT infrastructure designed to centralize and streamline the management of user identity, authentication, and authorization data. As demonstrated in Figure 2.10, identity management addresses all aspects of controlling access, with a focus on centralized management. Given the complexity of modern organizations and the diversity of business requirements, many access control infrastructures grow convoluted and difficult to manage. Enterprises will operate a vast array of IT infrastructure, including: • Network operating systems • A variety of servers
Figure 2.10. Identity management overview.
Figure 2.11. Common complex environments.
• User directories
• Human resources, payroll, and contract management systems
• A variety of line-of-business applications
• Customer relationship management (CRM) systems
• Electronic commerce applications
• Enterprise resource planning (ERP) systems
Maintaining and supporting such a diverse environment for a broad spectrum of business services creates the need to provide access to many different collections of users, such as:
• Employees
• Contractors
• Partners
• Vendors
• Customers
As shown in Figure 2.11, almost every system must track valid users and control their permissions for a given system. The diversity of these systems — each with its own administration software, people, and processes — and the fact that users typically access multiple systems make managing this data about users difficult at best, and an obstacle to doing business at worst. Identity management technologies attempt to simplify the administration of this distributed, overlapping, and sometimes contradictory data about the users of an organization's information technology systems.
Figure 2.12. Inefficiencies in provisioning.

Identity Management Challenges. One of the most significant business problem areas that identity management seeks to solve is the provisioning of users, along with the associated processes, oversight, and management.
Typically, when an employee is hired, a new user profile is created and stored in an HR database and a request for access is created. Then, the user profile is compared to the company's role–authorization policies. The request is then routed for the necessary approvals and, if approval is granted, sent to the IT department. Finally, the IT department submits the approved request to the various system administrators to provision user access rights. The user is provisioned, and the approval is recorded in a history file (Figure 2.12); a simple sketch of this workflow appears after the problem list below. The problems that arise from using these methods include:
• Requests for access rights are backlogged, jamming user productivity.
• Requests are delayed.
• Cumbersome policies cause errors.
• Request forms are not fully completed.
• The number of resources across the enterprise keeps growing.
• Precise audit reports are rarely maintained.
• Many profiles and users are dormant or associated with departed employees, making them invalid.
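The provisioning flow described above can be pictured as a short sequence of steps. The following is a minimal, illustrative sketch only; the role policy, approver, and system names are hypothetical and are not drawn from any particular product:

    # Hypothetical sketch of the provisioning workflow described above.
    from dataclasses import dataclass, field

    # Assumed role-authorization policy: role -> systems that role may access.
    ROLE_POLICY = {"accounting-clerk": {"erp", "email"}, "engineer": {"email", "source-control"}}

    @dataclass
    class AccessRequest:
        user: str
        role: str
        systems: set
        approved: bool = False
        history: list = field(default_factory=list)

    def request_access(user, role, systems):
        req = AccessRequest(user, role, set(systems))
        req.history.append("request created from HR profile")
        return req

    def review(req, approver):
        # Compare the request against the role-authorization policy before routing for approval.
        allowed = ROLE_POLICY.get(req.role, set())
        req.approved = req.systems <= allowed
        req.history.append(f"reviewed by {approver}: {'approved' if req.approved else 'rejected'}")
        return req

    def provision(req):
        if not req.approved:
            raise PermissionError("request was not approved")
        for system in req.systems:
            req.history.append(f"account created on {system}")  # IT/administrators act here
        return req

    req = provision(review(request_access("jsmith", "accounting-clerk", ["erp", "email"]), approver="manager"))
    print(req.history)

Even in this toy form, the sketch shows why each manual hand-off (review, approval, per-system provisioning) is a place where backlogs and errors can accumulate.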
The need for identity management is mostly founded on the limitations of existing capacity, increasing cost, and burgeoning inefficiencies that stem from growing business demands. The potential negative impacts are:
• Loss of business productivity
• Long deployment cycles adversely affecting business productivity
• An increasing number of access points creating more potential breach points
• An increasing number of single points of failure

Key management challenges regarding identity management solutions are:
• Consistency: User profile data entered into different systems should be consistent. This includes name, log-in ID, contact information, termination date, etc. The fact that each system has its own user profile management makes this difficult.
• Efficiency: Setting up a user to access multiple systems is repetitive, and doing so with the tools provided with each system is needlessly costly.
• Usability: When users access multiple systems, they may be presented with multiple log-in IDs, multiple passwords, and multiple sign-on screens. This complexity is burdensome to users, who consequently have problems accessing systems and incur productivity and support costs.
• Reliability: User profile data should be reliable, especially if it is used to control access to sensitive data or resources. That means the process used to update user information on every system must produce data that is complete, timely, and accurate.
• Scalability: Enterprises manage user profile data for large numbers of people. There are typically tens of thousands of internal users and hundreds or thousands of partners or clients. Any identity management system used in this environment must scale to support the data volumes and peak transaction rates produced by large user populations.

Identity Management Technologies. The foundation of a comprehensive identity management solution is the set of systems focused on streamlining the management process and managing data consistently across multiple systems. A typical enterprise will have many users with various access requirements for a diverse collection of data and application services. To bind the user to established policies, processes, and privileges throughout the infrastructure, several types of technologies are utilized to ensure consistency and oversight.
Technologies utilized in identity management solutions include:
• Directories
• Web access management
• Password management
• Legacy single sign-on
• Account management
• Profile update
Directories. A corporate directory is a comprehensive system designed to centralize the management of data about an assortment of company entities. A directory will contain a hierarchy of objects storing information about users, groups, systems, servers, printers, etc. The directory is stored on one or more servers that may replicate the data stored, in part or in whole, to other directory servers. The objective, as with many multisystem, distributed environments, is to ensure scalability and availability. Client and other applications will normally access data stored in a directory by means of a standard protocol, such as the Lightweight Directory Access Protocol (LDAP).
Directories offer a valuable service to the enterprise that can be used in a multitude of ways to support a vast array of people, processes, and technology. Most importantly, given the nature of a centralized collection of user data, directories are used by many applications to avoid replication of information and simplify the architecture. Using directories, it is possible to configure several applications to share data about users, rather than having each system manage its own list of users, authentication data, etc. A key limitation of directories and their role in simplifying identity management is integration with legacy systems. Mainframes, old applications, and other outdated systems simply do not support the use of an external system to manage their own users. A short LDAP query sketch appears after the vendor list below. Example directory vendors include:
• Critical Path
• IBM/Tivoli
• Microsoft
• Novell
• Oracle
• Siemens
• Sun/iPlanet
There are, however, several free versions of directory software available that can be helpful for learning the idiosyncrasies of directories and their role in enterprise data management.
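Because LDAP is the usual protocol for reaching a directory, a short query can make the idea concrete. The sketch below is illustrative only: the server name, bind DN, base DN, and password are hypothetical, and it assumes the third-party ldap3 package is installed:

    # Hypothetical example of querying a corporate directory over LDAP with the ldap3 package.
    from ldap3 import Server, Connection, ALL

    server = Server("ldap.example.com", get_info=ALL)        # assumed directory host
    conn = Connection(server,
                      user="cn=admin,dc=example,dc=com",     # assumed bind DN
                      password="change-me",                  # placeholder credential
                      auto_bind=True)

    # Look up a single user entry and pull a few common attributes.
    conn.search(search_base="dc=example,dc=com",
                search_filter="(uid=jsmith)",
                attributes=["cn", "mail", "memberOf"])

    for entry in conn.entries:
        print(entry.entry_dn, entry.cn, entry.mail)

    conn.unbind()

Any application able to issue a query like this can rely on the directory for user data instead of keeping its own copy, which is exactly the duplication the directory is meant to remove.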
Web Access Management. Once a directory is in place, it is possible to quickly leverage that data to manage user identity, authentication, and authorization on multiple Web-based applications using a Web access management (WAM) solution. These solutions replace the sign-on process on the various Web applications, typically using a plug-in on the front-end Web server. When a user authenticates for the first time into the Web application environment, these solutions maintain that user's authentication state as the user navigates between applications. Moreover, these systems normally also define user groups and attach users to privileges on the managed systems. They provide effective user management and single sign-on in Web environments, although they do not, in general, support comprehensive management of the entire access control environment or of legacy systems. Nevertheless, WAM has offered a meaningful solution for Web environments to help organizations manage multiple Internet users accessing a collection of Web-based applications. For this reason, Web access management tools have been rapidly adopted by organizations seeking more efficient methods for managing a large number of users for a select group of applications. A rough sketch of such a gateway check follows the vendor list below. Vendors with access management products include:
• Baltimore
• Entegrity
• Entrust
• IBM
• Microsoft
• Netegrity
• Novell
• Oblix
• Open Network Technologies
• RSA
• Wipro
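As a rough illustration of the plug-in behavior described above, the following sketch only forwards a request when a valid session token is present and redirects to a central sign-on page otherwise. The function and cookie names are invented for illustration; real WAM products differ considerably in the details:

    # Illustrative sketch of a WAM-style gateway check in front of several Web applications.
    VALID_SESSIONS = {}  # session token -> user name, normally held by the WAM policy server

    def handle_request(request):
        """Allow the request through only if it carries a known session token."""
        token = request.get("cookies", {}).get("wam_session")  # hypothetical cookie name
        user = VALID_SESSIONS.get(token)
        if user is None:
            # No valid single sign-on state: send the browser to the central log-in page.
            return {"status": 302, "location": "https://sso.example.com/login"}
        # Authenticated once; the same token is honored by every protected application.
        return {"status": 200, "body": f"application content for {user}"}

    VALID_SESSIONS["abc123"] = "jsmith"
    print(handle_request({"cookies": {"wam_session": "abc123"}}))   # 200: request passes through
    print(handle_request({"cookies": {}}))                          # 302: redirected to sign-on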
Password Management. Traditional users will log into systems with a username and password combination. Because passwords may be compromised over time, it is prudent to periodically change passwords. Most modern systems can be configured easily to require users to change their password at predefined intervals, in addition to offering tools and simplified processes for users. Most enterprise organizations enforce a password change interval ranging from 30 to 90 days.
When users have multiple passwords, on multiple disparate systems, that expire on different dates, they tend to write them down, store them insecurely (i.e., password.txt on the desktop), or replicate the password across multiple systems. For example, in the absence of a password management system incorporated into a larger identity management solution, a user may have the same password for several systems and simply rotate
a set of three or four, making them very easy to remember, but equally easy for an attacker. A password management system is designed to manage passwords consistently across multiple platforms. This is usually achieved by a central tool synchronizing passwords across multiple systems. Other features include assisting users and management with mundane tasks. For example, users who forget their password or trigger a lockout from too many failed attempts may be offered alternative authentication mechanisms that give them access to utilities to reset their password and clear the lockout. It is not uncommon for an organization to issue multifactor authentication tokens that are used, in part, to provide utilities so users can self-manage their accounts and passwords on other, potentially older or nonintegrated systems. In the event that an alternative authentication mechanism does not exist, password management will allow administrators or support staff to reset forgotten or disabled passwords. Another feature of a password management system, regularly employed on large Internet sites, is a registration process that incorporates questions whose answers are private to that user, allowing him to manage his account, such as resetting the password. A short sketch of an expiration check follows the vendor list below. Vendors with password management products include:
• Blockade
• Courion
• M-Tech
• Net Magic
• Proginet
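To make the change-interval idea concrete, here is a small, illustrative check of whether a password has outlived a policy window. The account data and the 60-day value are assumptions chosen from within the 30-to-90-day range noted above:

    # Minimal sketch: flag accounts whose passwords have outlived the policy interval.
    from datetime import datetime, timedelta

    PASSWORD_MAX_AGE = timedelta(days=60)   # assumed policy value

    accounts = {
        "jsmith": datetime(2007, 1, 15),    # date of last password change (illustrative)
        "mjones": datetime(2007, 5, 20),
    }

    def needs_password_change(user, now=None):
        now = now or datetime.now()
        return (now - accounts[user]) > PASSWORD_MAX_AGE

    for user in accounts:
        if needs_password_change(user, now=datetime(2007, 6, 2)):
            print(f"{user}: password expired, force a change at next log-on")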
Legacy Single Sign-On. Single sign-on (SSO) is a term used to describe the user experience when accessing one or more systems. Users who log into many systems will prefer to sign into one master system, and thereafter be able to access other systems without being repeatedly prompted to identify and authenticate themselves.
There are numerous technical solutions that offer SSO to users, but many are associated with the centralization of user data, such as a directory. As demonstrated above, many legacy systems do not support an external means to identify and authenticate users. Therefore, it is possible to store the credentials outside of the various applications and have them automatically entered on behalf of the user when an application is launched. Legacy single sign-on systems provide a central repository of a user’s credentials, such as user IDs and passwords associated with the suite of applications. Users launch various applications through the SSO client software, which opens the appropriate client program and sends keystrokes to that program, simulating the user typing his own log-in ID and password.
Today, many of these solutions are based on the possession of a smart card, secured by a PIN, that stores the user's array of credentials in the memory of the card. In addition to a smart device loaded with credentials, software is loaded onto the system that is designed to detect when the user is prompted for authentication. Upon discovery, the user is typically asked whether to learn the new application or ignore it in the future. If the system is told to learn it, it collects the identification and authentication information from the user, stores it securely on the system, and populates the fields on behalf of the user. From that point forward, the user must only remember the passphrase to unlock the smart device, so that the system can gain access to the collection of identification and authorization materials. There are also solutions that store the user's credentials on a central system. Once authenticated to the primary SSO system, the user credentials are provided to the end system for downstream use.

However, there are some limitations and challenges presented by the use of a legacy SSO solution. First, given that the applications are completely unaware of the sleight of hand performed by the system, when a user must change his or her password within the application, it must also be changed in the system providing the SSO services. For example, very much like the smart card used in the above example, SSO software may be loaded on the client system that collects, stores, and encrypts the user's credentials. In both scenarios, the user must change the stored password to reflect the new password in the application. Because the systems are not integrated, any change in either system will require replication, or the process will fail.

Next is cost. The price of smart devices, or simply the SSO software, can become cost-prohibitive for a large environment. If the solution is based on a centralized SSO system that users log into to collect their IDs and passwords, there are additional costs to ensure availability of the system. If the entire user population utilizes the SSO system to gain access to enterprise applications and it were to fail (a classic single point of failure), activity would come to a rapid halt.

One of the more prevalent concerns is the fact that all of a user's credentials are stored in a single instance. For example, if someone were to crack the SSO password, they would effectively have all the keys to the kingdom. A brief sketch of an encrypted credential store follows the vendor list below. Some vendors with SSO products include:
• Computer Associates
• IBM/Tivoli
• Novell
• RSA
• Passlogix
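The credential-replay idea behind legacy SSO can be sketched as a small encrypted store that releases an application password only after a master secret unlocks it. This is purely illustrative: it assumes the third-party cryptography package, and the application names are invented:

    # Illustrative sketch of a legacy SSO credential store: one master secret unlocks
    # the per-application credentials that the SSO client replays on the user's behalf.
    from cryptography.fernet import Fernet

    master_key = Fernet.generate_key()   # in practice derived from the user's passphrase or smart card
    vault = Fernet(master_key)

    # Store credentials for each application, encrypted at rest.
    stored = {
        "mainframe-app": vault.encrypt(b"jsmith:OldGreenScreenPw1"),
        "erp-client":    vault.encrypt(b"jsmith:ErpPassw0rd"),
    }

    def replay_credentials(app_name):
        """Decrypt and return the credentials the SSO client would type for the user."""
        return vault.decrypt(stored[app_name]).decode()

    print(replay_credentials("erp-client"))

The sketch also makes the main risk obvious: whoever obtains the master key obtains every stored credential at once.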
Account Management. One of the most costly, time-consuming, and potentially risk-laden aspects of access control is the creation, modification, and decommissioning of users. Many organizations consume inordinate amounts of resources ensuring the timely creation of new system access, adjustments of user privileges to reflect changes in responsibilities, and termination of access once a user leaves.
Although Web access management addresses this problem for a Web-based environment, most enterprises are heterogeneous, with multiple types and versions of systems and applications, each with potentially different account management strategies and capabilities. For example, ERP systems, operating systems, network device systems, mainframes, database servers, and more typically have difficulty interacting with a centralized directory. Moreover, for those that can obtain data from a directory, there may be limitations to the degree of control available to the system. As a result, user management processes must be performed on each system directly.

Account management systems attempt to streamline the administration of user identity across multiple systems. They normally include one or more of the following features to ensure a central, cross-platform security administration capability (a small sketch of the policy-triggered behavior follows the vendor list below):
• A central facility for managing user access to multiple systems at once.
• A workflow system where users can submit requests for new, changed, or terminated system access, and these requests are automatically routed to the appropriate people for approval. Approved requests trigger creation of accounts and allocation of other resources.
• Automatic replication of data, particularly user records, between multiple systems and directories.
• A facility for loading batch changes to user directories.
• Automatic creation, change, or removal of access to system resources based on policies, and triggered by changes to information elsewhere (for example, in an HR system or corporate directory).

Account management systems focus on insiders, because outsiders are already well served by Web access management systems, which typically manage this data in a single directory (e.g., using LDAP). Account management systems sometimes also include a simple password management capability. As with Web access management systems, this capability is usually limited. The major drawback of account management systems is deployment time and cost; some systems can take literally years to deploy. Some vendors with account management products include:
• Access360
• BMC
• Business Layers
• Computer Associates
• IBM/Tivoli
• M-Tech
• Waveset
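A tiny sketch of the policy-triggered create/remove behavior described above: when a worker's HR status changes, access on each connected system is adjusted. The system names and the HR event format are made up for the example:

    # Illustrative sketch: adjust accounts on connected systems when an HR status change arrives.
    CONNECTED_SYSTEMS = ["active-directory", "erp", "email"]   # hypothetical managed systems
    accounts = {system: set() for system in CONNECTED_SYSTEMS}

    def handle_hr_event(user, status):
        """Provision on hire, deprovision on termination, across every connected system."""
        for system in CONNECTED_SYSTEMS:
            if status == "hired":
                accounts[system].add(user)
            elif status == "terminated":
                accounts[system].discard(user)

    handle_hr_event("jsmith", "hired")
    handle_hr_event("jsmith", "terminated")
    print(accounts)   # jsmith no longer appears on any connected system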
Profile Update. Profiles are a collection of information associated with the identifying entity. A user identity normally includes personal information, such as name, telephone number, e-mail address, home address, date of birth, etc. However, a profile can contain information related to privileges and rights on specific systems.
As one would rightly conclude, any information specific to an entity is going to change over time. When a change is required in the profile, it should be easy to manage and be automatically reflected in key systems, such as the corporate directory and the individual systems the users log into. Most customer relationship management (CRM) systems include some facility to manage user profiles either administratively or using a self-service method. This capability is also available in some Web access management systems, account management systems, and password management systems. It is helpful to allow users to enter and manage those parts of their own profiles where new data is either not sensitive or does not have to be validated.

Access Control Technologies

There are several access control technologies that can be employed in various ways to support the multiple layers required for managing access:
• Single sign-on (SSO)
• Kerberos
• Secure European System for Applications in a Multi-Vendor Environment (SESAME)
• Security domains

Single Sign-On. Introduced earlier, single sign-on for legacy systems provides an intermediary management system to support client requests. However, there are applications in use today that do not require client-side software, allowing users to gain access via an authentication gateway.
Some enterprise network systems provide users with access to many different computer systems or applications for their daily work. This wide range of access may require the user to have a user ID and password for each available resource. Instead of logging on to each individual system or program, a solution is to implement SSO standards that enable a user to log on once to the enterprise and access all additional authorized network resources (Figure 2.13).
Figure 2.13. Common single sign-on architecture.
Single sign-on is often referred to as reduced sign-on or federated ID management. By implementing a SSO solution that incorporates a single instance of user credentials leveraged by multiple systems, an organization's employees as well as partners can be offered access with limited management overhead. There are advantages and disadvantages to SSO solutions:
• Advantages:
 – Efficient log-on process: Users have fewer passwords to remember and are prompted less often while performing their job functions.
 – Users may create stronger passwords: With the reduced number of passwords to remember, users can remember a single, very strong password that can also be changed often.
 – No need for multiple passwords: The introduction of a SSO system translates into a single credential for users.
 – Time-out and attempt thresholds enforced across the entire platform: Time-outs protect against a user being away from his workstation but still logged on for an extended period, thereby leaving the session available to an intruder who could continue with it; the workstation is disconnected after the selected period of inactivity. The attempt threshold protects against an intruder trying to obtain an authentic user ID and password combination by brute force (trying all combinations).
 – Centralized administration: Given the singularity of the access control mechanism, administrators are offered a central administrative interface to support the enterprise.
• Disadvantages:
 – A compromised password allows intruders into all authorized resources: If the credential used for total access is compromised, the attacker would then have the privileges and capacity assigned to the original user.
Although the risks are similar in nature to those of a typical username and password combination, SSO introduces complete access via a single account. An attacker would only be limited by the enterprisewide assigned privileges, as opposed to the privileges assigned for a given system.
 – Inclusion of unique platforms may be challenging: As introduced earlier, SSO is complex and requires significant integration to be effective. It is not uncommon for a large enterprise to utilize hundreds, if not thousands, of applications running on a wide variety of operating systems, each with its own approach to user management. Therefore, significant planning and analysis should be performed prior to embarking on a SSO solution.
Kerberos. The name Kerberos comes from Greek mythology; it is the three-headed dog that guarded the entrance to Hades. In Latin it is spelled and commonly seen as Cerberus, leading many to pronounce the name as Serberos. However, in Latin the letter C is always hard. So, Cerberus is pronounced “Ker-ber-ous.” In Latin the letter K is not normally used, and in Roman times, C always represented the K sound — hence the Greek spelling and pronunciation we use today. Also, -os is a Greek suffix (nominative masculine singular) whose nearest equivalent in Latin is the suffix -us (very familiar in Latin names). That is why the name goes into Latin as Cerberus and in Greek as Kerberos.
Kerberos, developed under Project Athena at MIT, guards a network with three elements: authentication, accounting, and auditing. Kerberos is a network authentication protocol designed to provide strong authentication for client/server applications by using secret key cryptography. Kerberos is an effective authentication mechanism in open, distributed environments where users must have a unique ID for each application on a network. Kerberos verifies that users are who they claim to be and that the network services they use are contained within their permission profiles. It meets four basic requirements for access control:
• Security: A network eavesdropper should not be able to obtain the information needed to impersonate a user.
• Reliability: Available for users when needed.
• Transparency: The user is not aware of the authentication process.
• Scalability: Must support small or large numbers of clients and servers.

Kerberos Process. Kerberos is aptly named after a three-headed dog, as the authentication process is based on interaction among three systems: the requesting system (or principal), the endpoint destination server, and the Kerberos server. This key distribution center (KDC) will serve
two functions during the authentication transaction — as an authentication server (AS) and as a ticket-granting server (TGS). A principal is any entity that interacts with the Kerberos server, such as a user workstation, an application, or a service. Given that Kerberos is based on symmetric encryption and a shared secret key, the KDC maintains a database of the secret keys of all the principals on the network. While acting as the AS, it will authenticate a principal via a preexchanged secret key. Once a principal is authenticated, the KDC operates as a TGS, providing a ticket to the principal to establish a trusted relationship between multiple principals. For example, a KDC maintains the secret keys for a server and a workstation (both principals), each trusting the KDC. A user on the workstation authenticates to the AS and receives a ticket that is accepted by the server based on the trust associated with the KDC.

Principals are preregistered with a secret key in the KDC. This is typically achieved through a user or system registration process. When a user or system is added to the Kerberos realm, it is provided the realm key, a common key used for initial trusted communications. During the introduction into the realm, a unique key is created to support future communications with the KDC. For example, when a Windows workstation joins a domain (i.e., realm) or a user joins the domain, a unique key is created and shared via the realm's key, which is managed by the KDC. In the case of a user, it is common for Kerberos to utilize the password hash as the unique user key. Once the user is incorporated into the Kerberos realm, he can then be authenticated by the AS. At this point, the system authenticates the user (or system) and the TGS provides him with a ticket-granting ticket (TGT). The TGT allows the client to request service tickets (STs) and is analogous to a passport. TGTs are valid for a certain period, typically between eight and ten hours, after which they expire and the user must reauthenticate to the KDC. However, once the TGT has been issued, there is no further use of passwords or other log-on factors when interacting with other systems within the Kerberos realm.

As demonstrated in Figure 2.14, upon authentication to the AS, a user workstation (P1) will be given a TGT by the TGS. Later, P1 may request access to an application server (P2) by requesting another ticket from the KDC — hence the term ticket-granting ticket. Upon validating the TGT, the KDC will generate a unique session key (SK1) to be used between P1 and P2, and will encrypt SK1 with P1's secret key (P1Key) and with P2's secret key (P2Key). The KDC will pack up the data in a service ticket, which it sends to P1. If P1 is who it claims to be, P1 will be able to decrypt its copy of SK1 and send the copy encrypted under P2Key to P2, the application server.
Figure 2.14. Kerberos interaction.
The application server will receive the ticket (SK1 encrypted with P2Key) from P1 and be able to decrypt SK1. By decrypting SK1, the user and the application server are authenticated and now share a session key that can be used for encrypted communications between P1 and P2. Given that P1 and P2 have established secret keys with the KDC, it can generate unique session keys and encrypt them with the stored secret keys of the systems requesting secure interactions. The client (in this case P1) is sent the service ticket first to avoid denial-of-service attacks against the application server; otherwise, the server could be overloaded with encrypted session requests. The session key is effectively encrypted twice, once with P1's secret key and once with P2's secret key. This forces both systems to authenticate themselves (by possession of the correct secret key) to obtain the unique session key. Once each has the session key, each now holds matching key material that can be used in follow-on symmetric encrypted communications.

A few key points to remember about Kerberos tickets are:
• When the user is authenticated to the AS, it simply provides a TGT. This in and of itself does not permit access. Therefore, upon log-on the user obtains a TGT that can be used to request access to resources.
• The TGT allows the user to request a service ticket from the TGS, authenticating the user through encryption processes and building a ST for the user to present to the target resource system.
• The possession of the ST signifies that the user has been authenticated and can be provided access (assuming the user passes the application server's authorization criteria).
• The user is authenticated once via a traditional log-on process and verified by means of message encryption to request and acquire service tickets. The user does not have to reauthenticate as long as the TGT is valid.
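The double encryption of the session key can be made concrete with a few lines of symmetric cryptography. This is a toy model of the exchange described above, not real Kerberos: it assumes the third-party cryptography package and ignores tickets, timestamps, lifetimes, and authenticators:

    # Toy model of the KDC issuing a session key encrypted under both principals' secret keys.
    from cryptography.fernet import Fernet

    # Long-term secret keys shared in advance between the KDC and each principal.
    p1_key = Fernet.generate_key()   # workstation's secret key
    p2_key = Fernet.generate_key()   # application server's secret key

    # KDC side: create a fresh session key and encrypt one copy for each principal.
    session_key = Fernet.generate_key()
    ticket_for_p1 = Fernet(p1_key).encrypt(session_key)
    ticket_for_p2 = Fernet(p2_key).encrypt(session_key)   # forwarded to P2 by P1

    # Each principal proves itself by decrypting its own copy with its secret key.
    sk_at_p1 = Fernet(p1_key).decrypt(ticket_for_p1)
    sk_at_p2 = Fernet(p2_key).decrypt(ticket_for_p2)
    assert sk_at_p1 == sk_at_p2 == session_key

    # Both sides now hold matching key material for follow-on encrypted traffic.
    print(Fernet(sk_at_p1).decrypt(Fernet(sk_at_p2).encrypt(b"request protected by the session key")))

Only a holder of the correct long-term key can recover the session key, which is precisely how possession of the secret key authenticates each principal.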
The primary goal of Kerberos is to ensure private communications between systems over a network. However, in providing the key material, it acts to authenticate each of the principals in the communication based on possession of the secret key, which allows access to the session key. Kerberos is an elegant solution and is used on many platforms as the basis for broad authentication processes. However, no solution is perfect, and there are some issues related to the use of Kerberos that are shared by many other solutions. For example, the security depends on careful implementation: enforcing limited lifetimes for authentication credentials minimizes the threat of replayed credentials, the KDC must be physically secured, and it should be hardened, not permitting any non-Kerberos activity. More importantly, the KDC can be a single point of failure, and therefore should be supported by backup and continuity plans. It is not uncommon for there to be several KDCs in a Kerberos architecture, each sharing principal information, such as keys, to support the infrastructure if one of the systems were to fail. Finally, the length of the keys (secret and session) is very important. If the key is too short, it is vulnerable to brute-force attacks; if it is too long, systems can be overloaded with encrypting and decrypting tickets and network data. The Achilles' heel of Kerberos is the fact that the encryption processes are ultimately based on passwords. Therefore, it can fall victim to traditional password-guessing attacks.

Secure European System for Applications in a Multi-Vendor Environment (SESAME). The Secure European System for Applications in a Multi-Vendor
Environment (SESAME) is a European research and development project, funded by the European Commission and developed to address some of the weaknesses found in Kerberos. It is also the name of the technology that came from the project. It offers SSO with added distributed access controls using symmetric and asymmetric cryptographic techniques for protection of interchanged data. It is actually an extension of Kerberos, offering public key cryptography and role-based access control capabilities. Attributes of SESAME include:
• Offers SSO with added distributed access controls using symmetric and asymmetric cryptographic techniques for protecting interchanged data
• Offers role-based access control
• Uses a privileged attribute certificate (PAC), similar to a Kerberos ticket
• Components can be accessible through the Kerberos v5 protocol
• Uses public key cryptography for the distribution of secret keys
Figure 2.15. Domain hierarchical relationship.

Security Domain. A security domain is based on trust between resources or services in realms that share a single security policy and single management. The trust is the unique context in which a program is operating. The security policy must define the set of objects that each user has the ability to access. Think of a security domain as a concept in which the principle of separation protects each resource, and each domain is encapsulated into a distinct address space.
Security domains support a hierarchical relationship in which subjects can access objects in equal or lower domains; therefore, domains with higher privileges are protected from domains with lesser privileges. Characteristics of security domains include:
• A server can host more than one domain.
• A subject's domain is the set of objects to which it has access.
• In Figure 2.15, two distinct and separate security domains exist on the server, and only those individuals or subjects authorized can have access to the information in a particular domain.

A subject's domain, which contains all of the objects that the subject can access, is kept isolated. Shared objects may be accessed by more than one subject, and this is what allows the concept to work (Figure 2.16). For example, if a hundred subjects have access to the same object, that object has to appear in a hundred different domains to allow this isolation.
Figure 2.16. Objects in domains.
Section Summary.
• Authentication factors are authentication by knowledge, authentication by ownership, and authentication by characteristic.
• An identity management system is an IT infrastructure designed to centralize and streamline the management of user identity, authentication, and authorization data.
• Single sign-on, Kerberos, and security domains are examples of key access control technologies.

Access to Data

To this point, the discussion of access controls has focused on the processes and technology used to identify, authenticate, and authorize users and applications. However, all of this must ultimately translate into controls on the data itself. The objectives for this section are to:
• List the types of access controls
• Describe access control lists (ACLs)
• Describe rule-based and role-based access controls
• Define capability tables
Subtopics of this section include:
• Discretionary access control
• Mandatory access control
• Access control lists
• Access control matrix
• Rule-based access control
• Role-based access control
• Content-dependent access control
• Constrained user interface
• Capability tables
• Temporal (time-based) isolation
• Centralized access control
• Decentralized access control
Discretionary and Mandatory Access Control

Discretionary access controls (DACs) are those controls placed on data by the owner of the data. The owner determines who has access and what privileges they have. Discretionary controls represent a very early form of access control and were widely employed on VAX/VMS, UNIX, and other minicomputer systems in universities and other organizations prior to the evolution of personal computers. Today, DACs are widely used to allow users to manage their own data and the security of that information,
and nearly every mainstream operating system, from Microsoft Windows and Mac OS to Solaris and Linux — including the aforementioned — supports DAC.

Mandatory access controls (MACs) are those controls determined by the owner and the system. The system applies controls based on the privilege (or clearance) of a subject (or user) and the sensitivity (or classification) of an object (or data). With DACs, the user is free to apply controls at the discretion of the owner, unrelated to the overall value or classification of the data. In contrast, MACs allow the system to become involved in the assignment of controls, which can be managed in accordance with broader security policies. MACs are typically used for systems and data that are highly sensitive. Associating the security controls of an object with its classification and the clearance of subjects provides for a secure system that allows for multilayered processing of information.

The system's decision controls access; the owner provides the need-to-know control. Not everyone who is cleared should have access, only those cleared and with a need to know. Even if the owner says a user has the need to know, the system must ascertain that the user is cleared, or no access is allowed. To accomplish this, data needs to be labeled, allowing specific controls to be applied to each instance. Moreover, the controls the system employs are ultimately governed by an administrator following the organization's security policy. This demands placing limitations on authorizers, or those responsible for centrally managing the security policies applied to a given system. As demonstrated in Figure 2.17, access permissions can be applied to an object based on the level of clearance given to a subject.
No access/Null: No access permission granted
Read (R): Read but make no changes
Write (W): Write to file; includes change capability
Execute (X): Execute a program
Delete (D): Delete file
Change (C): Read, write, execute, and delete; may not change file permissions
Full Control: All abilities, including changing access control permissions

Figure 2.17. Example access permissions.
Moreover, multiple access permissions can be applied to a single object, such as owner, group, public, system, or administrator, to mention a few. The example provided represents only a few of the possible permissions that can be assigned to an object. For example, "list" is a permission seen in common operating systems that permits users with that assigned permission only to list the files in a directory, not to read, delete, modify, or execute them.

Access Control Lists. The term access control list (ACL) is used in many forms to communicate how a collection of controls is assigned based on a set of parameters. For example, ACLs are seen in routers, where they can limit or permit traffic based on an interface or session attribute, such as an IP address. Once there is a match, the list is applied. Moreover, when a list is identified and applied, it is parsed until an action can be determined.
In the broader security context, ACLs are used in the provisioning of permissions within a given system based on policy. In most cases, ACLs within the system are maintained on behalf of the user or administrator. For example, an administrator may create a set of users, assign them to a group, and apply a set of permissions for files and directories to that group. Within the system, that information is translated into an ACL that is then employed (Figure 2.18). The most common implementation of ACLs is in the DAC model. An ACL specifies the list of users who are allowed access to each object and is often implemented with access control matrices (ACMs). Of course, any and all ACLs must be properly secured from unauthorized modification.

Access Control Matrix. An access control matrix (ACM) is simply a table structure of an ACL. Subjects are identified, as are the objects, and permissions are incorporated into the matrix.
As shown in Figure 2.19, an ACM can be used to quickly summarize what permissions a subject has for various objects. Although this is a simple example, and in large environments an ACM can become cumbersome, it can be helpful in system or application design to ensure that security is incorporated early in the process.

Rule-Based Access Control. In a rule-based system, access is based on a list of rules that determine what accesses should be granted. The rules, created or authorized by system owners, specify the privileges granted to users (e.g., read, write, execute). This is an example of DAC, because the owner writes the rules.
A mediation mechanism enforces the rules to ensure only authorized access by intercepting every request, comparing it to user authorizations, and making a decision based on the rule.
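As a rough illustration of such a mediation mechanism, the following sketch intercepts a request and checks it against a small access control matrix before granting access. The subjects and rights loosely mirror the Figure 2.18 example shown below; everything else is invented:

    # Illustrative mediation check against a simple access control matrix (subject x object -> rights).
    ACM = {
        "Mary":  {"UserMary Directory": {"full control"}, "UserBob Directory": {"write"}, "Printer 001": {"execute"}},
        "Bob":   {"UserMary Directory": {"read"}, "UserBob Directory": {"full control"}, "Printer 001": {"execute"}},
        "Sally": {},   # no access granted anywhere
    }

    def mediate(subject, obj, requested_right):
        """Intercept the request, compare it to the subject's authorizations, and decide."""
        allowed = ACM.get(subject, {}).get(obj, set())
        return requested_right in allowed

    print(mediate("Bob", "UserMary Directory", "read"))     # True
    print(mediate("Bob", "UserMary Directory", "write"))    # False
    print(mediate("Sally", "Printer 001", "execute"))       # False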
ACCESS CONTROL LIST
Mary:
 UserMary Directory - Full Control
 UserBob Directory - Write
 UserBruce Directory - Write
 Printer 001 - Execute
Bob:
 UserMary Directory - Read
 UserBob Directory - Full Control
 UserBruce Directory - Write
 Printer 001 - Execute
Bruce:
 UserMary Directory - No Access
 UserBob Directory - Write
 UserBruce Directory - Full Control
 Printer 001 - Execute
Sally:
 UserMary Directory - No Access
 UserBob Directory - No Access
 UserBruce Directory - No Access
 Printer 001 - No Access
Figure 2.18. Example access control list.
Figure 2.19. Example access control matrix.

Role-Based Access Control. A role-based access control (RBAC) policy bases the access control authorizations on the functions that the user is allowed to perform within an organization. The determination of which roles have access to a file can be governed by the owner of the data, as with DACs, or applied based on policy. RBAC is a form of DAC because the roles
are determined by managers or HR and the access is determined by owners in accordance with company policy. These are all discretionary actions. Access control decisions are based on job function, previously defined and governed by policy, and each role (job function) has its own access capabilities. User objects associated with a role will inherit the privileges assigned to that job function. This is also true for groups of users, allowing administrators to simplify access control strategies by assigning users to groups and groups to job functions.

Figure 2.20. Approaches to RBAC.

RBAC Approach. There are several approaches to RBAC. As with many system controls, there are variances in how they can be applied within a computer system. As demonstrated in Figure 2.20, there are four basic RBAC architectures:
• Non-RBAC: Non-RBAC simply means users are granted access to an application or data through traditional mappings, such as ACLs.
• Limited RBAC: Limited RBAC is achieved when users are mapped to roles within an application. This also includes users that are directly mapped to applications or data. For example, a user may be assigned to multiple roles within several applications and also mapped directly to another application or system. The key attribute is that
the role is defined within the application and not necessarily tied to the user's overall job function.
• Hybrid RBAC: Hybrid RBAC introduces the use of a role that is applied to one or more applications or systems. A user (or subject) is assigned to a role that reflects his or her specific function within the organization, and that role is then applied to applications or systems. However, as the term hybrid suggests, there are instances where the subject is also assigned to roles defined within specific applications, devoid of the larger, more encompassing organizational role used by other systems.
• Full RBAC: Full RBAC is when access is controlled by roles defined in policy and then applied to applications and systems. The applications, systems, and associated data apply permissions based on the definition of a role that is representative of a job function, not one defined by a specific application or system. (A brief code sketch of role-based checking follows this list.)
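A compact sketch of role-based checking: users map to roles, roles map to permissions, and the access decision consults only the role. The role and permission names are invented for illustration:

    # Illustrative role-based access control: permissions attach to roles, users inherit them via role membership.
    ROLE_PERMISSIONS = {
        "payroll-clerk": {"payroll:read"},
        "payroll-manager": {"payroll:read", "payroll:approve"},
    }
    USER_ROLES = {
        "jsmith": {"payroll-clerk"},
        "mjones": {"payroll-manager"},
    }

    def is_allowed(user, permission):
        """Grant access if any of the user's roles carries the requested permission."""
        return any(permission in ROLE_PERMISSIONS.get(role, set()) for role in USER_ROLES.get(user, set()))

    print(is_allowed("jsmith", "payroll:approve"))   # False: the clerk role cannot approve
    print(is_allowed("mjones", "payroll:approve"))   # True: granted through the manager role

Because permissions are attached to the role rather than the person, moving a user to a new job function only requires changing the role assignment.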
Content-Dependent Access Control. Content-dependent access control is based on the actual content of the data. It requires the access control mechanism (the arbiter program) to inspect the data itself in order to make the decision. For example, consider a payroll database to which managers have access. Each manager would only have access to the records that pertain to his or her own employees, not to others. This provides more granularity, but it creates and requires more overhead (Figure 2.21).

Figure 2.21. Content access control.

Constrained User Interface. Another method for controlling access is restricting users to specific functions by not allowing them to request functions or services beyond their privileges or role. This is typically seen in limited menus, restricted data views, encryption, or physically constrained user interfaces, such as an automatic teller machine (ATM).

Capability Tables. Capability tables are used to track, manage, and apply controls based on the object and the rights, or capabilities, of a subject. For example, a table identifies the object, specifies the access rights allowed for a subject, and permits access based on the subject's possession of a capability (or ticket) for the object.
Subject       Procedure A    File X    File Y
Process A     -              Read      Read/Write
Joe           Execute        Write     -

Figure 2.22. Example capability table.
As shown in Figure 2.22, each row defines the capabilities that a subject has with respect to all objects in the table. For example, Process A (a subject) has read access to File X and read/write capability to File Y. The first column is a control list defining the subjects, and each subject's corresponding capabilities relative to a specific object are shown across its row. In the example chart, File X can be read by Process A and written to by Joe. The headings of the other columns contain the titles of the objects.
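Viewed from code, a capability table is simply the row-wise (subject-centered) counterpart of the access control matrix sketched earlier. The short example below mirrors the Figure 2.22 data:

    # Illustrative capability table: each subject holds its own set of (object, rights) capabilities.
    CAPABILITIES = {
        "Process A": {"File X": {"read"}, "File Y": {"read", "write"}},
        "Joe":       {"Procedure A": {"execute"}, "File X": {"write"}},
    }

    def has_capability(subject, obj, right):
        """Access is granted only if the subject presents a capability for the object carrying the right."""
        return right in CAPABILITIES.get(subject, {}).get(obj, set())

    print(has_capability("Process A", "File Y", "write"))   # True
    print(has_capability("Joe", "File X", "read"))          # False: Joe may only write to File X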
Temporal (Time-Based) Isolation. Time-based access controls are those employed at a given time for a predetermined duration. If a request is made for access or privileged use of information or services outside the defined time window, the request is denied. For example, confidential data may be processed only in the morning and secret data only in the afternoon. Examples include batch processing in a mainframe environment.

Centralized Access Control. Centralized access control is when one entity (an individual, a department, or a device) makes the access decisions for the organization. Once the decision for centralized access control is made, owners or systems can specify which subjects have access to applications or data. Examples of centralized access control systems include RADIUS, TACACS+, and DIAMETER (Figure 2.23). Centralized access control is thoroughly discussed in the Telecommunications and Network Security chapter, Domain 7.

Decentralized Access Control. Control is given to the people closer to the resource, such as department managers and sometimes users. Access requests are not processed by one centralized entity, potentially leading to nonstandardization and overlapping rights, which may cause gaps in security controls (Figure 2.24).

Figure 2.23. Example of centralized control.

Figure 2.24. Example of decentralized control.

Section Summary.

• With rule-based access control, access is based on a list of rules that determine authorization. Role-based access control decisions are based on job function.
• Access control lists specify a list of users who are allowed access to each object.
• A capability table is an authorization table (matrix) that relates subjects, objects, and rights.

Intrusion Detection and Prevention Systems

A complete and competent access control environment employs a web of technology, process, and policy working together to ensure that the desired security posture is maintained. Although firewalls, remote-access devices, applications, and myriad other technical solutions play an integral role in access control, intrusion detection and prevention systems take part in a manner that deserves greater explanation.

An intrusion detection system (IDS) is a technology that employs various techniques to alert organizations to adverse or unwanted activity. An IDS can be implemented as a network device, such as a router, switch, firewall, or dedicated device monitoring traffic, typically referred to as a network IDS (NIDS). The technology can also be incorporated into a host system (HIDS) to monitor a single system for undesirable activities. An intrusion prevention system (IPS) is a technology that defines an acceptable operational envelope. For example, an IPS permits a set of functions and actions allowed on a network or system; anything that is not permitted is considered an unwanted process and is blocked. Early versions of IPS were limited to a host system; however, network layer solutions have emerged.

The objectives for this section are to:
• Describe intrusion prevention systems and intrusion detection systems
• Describe the benefits of audit trail monitoring
• List examples of audit event types
• List types of information security activities

As demonstrated in Figure 2.25, an IDS attempts to detect activities on the network that are evidence of an attack and warn of the discovery. The automated response capabilities of an IDS are somewhat limited by its placement in the infrastructure and the existence and integration of other access control technologies. IDS is informative by nature and provides real-time information when suspicious activities are suspected. It is primarily a detective device and, acting in this traditional role, is not used to directly prevent the suspected attack.
Figure 2.25. IDS and IPS overview.
In contrast, and as the name of this technology suggests, IPS is engineered specifically to respond in real-time to an event at the system or network layer. By enforcing policy, IPS can thwart not only attackers, but also privileged users attempting to perform an action that is not within policy. Fundamentally, IPS is considered an access control and policy enforcement technology, whereas IDS is considered a network monitoring and audit technology.

It is important to understand that the line between IDS and IPS is growing thinner. For example, some IDS solutions are adopting preventative capabilities that allow them to act more effectively in the light of policy infringement, and IPS systems are incorporating detection techniques to augment their policy enforcement capabilities. The evolution is arguably more evident within the realm of IDS. This is because IPS inherently has many of the same qualities and capabilities as IDS, such as detection, logging, alerting, and audit. Moreover, fundamentally, IPS is very simple: if a process or action performed by a user, service, or application — including the operating system — is not permitted based on policy, it is blocked. Of course, defining the policy in technical terms that permit valid system functions, updating the policy, and sharing it among other devices is where IPS can become complicated. Nevertheless, when compared to the numerous techniques IDS uses to detect unwanted activities, IPS is much more straightforward. These two technologies — more so IDS — and their role within access control will be discussed throughout this section.

Intrusion Detection Systems

Introduced above, an IDS is a technical solution whose role is to detect suspicious activity and report on the findings. Early in the development of IDSs, the act of implementing compensating controls to thwart an attack was relegated to the security group or administrator. In short, IDS is a reactive warning system that provides the information necessary to guide administrators in responding to an attack. As the technology evolved, it began to offer limited capabilities to autonomously respond to events. However, the detective characteristics of the technology worked to reduce the viability of automation.
It can be argued that IDS is the opposite of IPS. IDS attempts to detect the existence of suspicious activity, and the permutations of potential and probable attack vectors are infinite. In contrast, IPS is only concerned with what is permitted, as defined by policy, and considers everything else detrimental. This basic characteristic has become the impetus for the development of sophisticated detection techniques to help the system determine whether the monitored activities are friend or foe. A by-product is the need to tune the IDS to the unique traffic generated by the organization. For example, the activity associated with a company's custom-developed application may appear to an IDS as unwanted activity, forcing the generation of multiple alerts, sometimes in the form of a flood. Equally problematic is an IDS that is not tuned to notice the slight nuances between a custom application's activities and those of a real attack, producing no alerts at all. Tuning an IDS is an art form and can become a significant gap in security if not performed correctly, potentially rendering the system worthless: it becomes a noisy box people begin to ignore, or it sits quietly as networks and systems are attacked. Given the complexity tuning represents and the potential for false-positives and false-negatives, automated responses generated from IDS alerts represent a risk too great for many organizations.

Network Intrusion Detection System. A network intrusion detection system (NIDS) is a network device, or a dedicated system attached to the network, that monitors traffic traversing the network segment into which it is integrated. A NIDS is usually incorporated into the network in a passive architecture.
A passive NIDS takes advantage of promiscuous mode access to the network, allowing it to gain visibility into every packet traversing the network segment. This allows the system to inspect packets and monitor sessions without impacting the network, its performance, or the systems and applications utilizing the network. Typically, a passive NIDS is implemented by installing a network tap, attaching it to a hub, or mirroring ports on a switch to a dedicated NIDS port. Given that the NIDS is simply monitoring traffic, the risk of not detecting an event starts with the potential for dropped packets. If a 100 Mbps, 10-port switch is used and all the ports are mirrored to a single gigabit port for the NIDS, the NIDS will require the capacity to monitor and investigate that volume of traffic without dropping packets.

Another potential failure in monitoring traffic occurs when the information is encrypted. The same encryption employed to ensure communication confidentiality greatly reduces the ability of the IDS to inspect the packet. The amount and granularity of information that can be investigated from
an encrypted packet is related to the implementation of the encryption technology. In most cases, the data portion of the packet is encrypted while the other packet headers remain in the clear. Therefore, the IDS can gain some visibility into the communication participants, session information, protocol, ports, and other basic attributes. However, as the IDS digs deeper and deeper into the packet to perform analysis, it will eventually fail due to the encryption.

A NIDS consumes a copy of each packet off the network to analyze the contents and its role in a session. By doing so, the IDS does not interfere with existing communications and can perform various investigative functions against the collected data. On those occasions when an IDS detects an unwanted communication stream, and is enabled to perform automated responses, it can attempt to terminate the connection. This can be accomplished in a multitude of ways. For example, it can start by simply utilizing TCP and injecting reset packets into the network, forcing one or more identified systems to cancel the communications. In lieu of directly terminating the session, many IDS solutions can be integrated with firewalls, routers, and switches to facilitate dynamic rule changes that block specific protocols, ports, or IP addresses associated with the unwanted communications.

In summary, NIDS has the following essential characteristics:
• Monitors network packets and traffic on transmission links in real-time
• Analyzes protocols and other relevant packet information
• Can send alerts or terminate an offending connection
• Can integrate with a firewall and define new rules
• Encryption interferes with monitoring data packets

Host-Based Intrusion Detection System. As the name suggests, HIDS is the implementation of IDS capabilities at the host level. Its most significant difference from NIDS is that intrusion detection analysis and related processes are limited to the boundaries of the host. However, this presents advantages in effectively detecting objectionable activities. Some of these efficiencies are gained from the fact that the IDS process is running on the system, which offers unfettered access to system logs, processes, system information, and device information, and virtually eliminates the limits associated with encryption and automated response. The level of integration represented by HIDS inherently increases the level of visibility and control at the disposal of the HIDS application.
There are also multihost IDSs that audit, and act on, data gathered from multiple hosts. When configured in this manner, the multihost HIDS architecture allows systems to share policy information and real-time
attack data. For example, if a system were to experience an attack, the signature of the attack, along with remediation actions, can be shared with other systems automatically in an attempt to establish a defensive posture.
There are some detractors to implementing HIDS, many of which are common among system agents and not necessarily unique to HIDS. HIDS can consume inordinate amounts of CPU and memory to function effectively, especially during an event. Although today's server platforms are powerful, diminishing some of the performance issues, workstations and laptops, which are good candidates for HIDS, may suffer from the overhead of performing analysis on all system activities. In summary, HIDS has the following essential characteristics:
• Agent residing on the host detects apparent intrusions.
• Scrutinizes event logs, critical system files, and other auditable system resources.
• Looks for unauthorized changes or suspicious patterns of behavior or activity.
• Can send alerts when unusual events are discovered.
• Multihost HIDS gets audit data from multiple hosts.
Analysis Engine Methods
The term analysis has been used frequently to explain the role of IDSs. However, several analysis methods can be employed by an IDS, and the existence of different detection methodologies does not mean that all of them are used in a given solution. There are two basic analysis methods: pattern matching and anomaly detection. Pattern matching is used when an attack vector is known and the system produces an alert if the pattern is detected (if configured to do so). Anomaly detection is somewhat more complicated in that it uses different tactics to draw conclusions on whether the traffic represents a risk to the network or host. Anomalies may include:
• Multiple failed log-on attempts
• Users logging in at strange hours
• Unexplained changes to system clocks
• Unusual error messages
• Unexplained system shutdowns or restarts
An anomaly-based IDS tends to produce more data because anything outside of the expected behavior is reported. Thus, it tends to report more false-positives as expected behavior patterns change. An advantage is that it is able to detect new attacks that may be overlooked by a signature-based system (i.e., pattern matching). Some anomaly-based systems are also rule based or statistical based.
The rest of the section covers:
• Pattern/stateful matching engine
  – Pattern matching intrusion detection
  – Stateful matching intrusion detection
• Anomaly-based engine
  – Statistical anomaly-based intrusion detection
  – Protocol anomaly-based intrusion detection
  – Traffic anomaly-based intrusion detection
Pattern/Stateful Matching Engine. Pattern Matching Intrusion Detection.
Some of the first IDS products used pattern matching technology, typically referred to as signatures. Signatures are a collection of byte sequences that represent a mode of attack. For example, a hacker manipulating an FTP server may use a tool that sends a specially constructed packet. If the attack vector is known, it can be represented in the form of a signature that the IDS can then compare to incoming packets. A pattern-based IDS will have a database of hundreds, if not thousands, of signatures that are compared to traffic streams. As new attack signatures are produced, the system is updated, much like antivirus solutions.
Several conclusions can be made based on this type of detection. Most importantly, signatures can only exist if the attack is known. If a new or different attack vector is used, it will not match a pattern and will slip past the IDS. Additionally, if an attacker knows the IDS is present, he may modify his method to avoid detection. For example, the attacker may send a packet that is slightly modified to avoid detection, but still have the desired effect. Of course, as with some antivirus systems, the IDS is only as good as the latest signature database on the system. Therefore, regular updates are required to ensure the IDS has the most recent signatures. This is especially critical for newly discovered attacks. Attributes of a pattern matching IDS include:
• Identifies known attacks
• Provides specific information for analysis and response
• May trigger false-positives
• Requires frequent updates of signatures
• Attacks can be modified to avoid detection
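As a simple illustration of the idea, and not of any particular vendor's detection engine, the following Python sketch searches a packet payload for known byte sequences; the signature names and values are invented for the example.

# Minimal signature (pattern) matching sketch -- illustrative only.
# The signatures below are invented examples, not real attack patterns.
SIGNATURES = {
    "example-ftp-overflow": b"\x90\x90\x90\x90/bin/sh",
    "example-web-traversal": b"../../etc/passwd",
}

def match_signatures(payload: bytes):
    """Return the names of any known signatures found in a packet payload."""
    return [name for name, pattern in SIGNATURES.items() if pattern in payload]

packet = b"GET /../../etc/passwd HTTP/1.0\r\n"
for hit in match_signatures(packet):
    print(f"ALERT: possible attack detected ({hit})")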
Stateful Matching Intrusion Detection. Very much related to signature-based detection, stateful matching takes pattern matching to the next level. It scans for attack signatures in the context of a traffic stream rather than in individual packets. A hacker may use a tool that sends a volley of individually valid packets to a targeted system. Given that each packet is valid, per-packet pattern matching is nearly useless. However, the combination of the packets represents a known attack pattern. To compensate, the attacker may send packets from
different locations with long wait periods between each to either confuse the detection system or exhaust its session timing window. Because this method also uses signatures, it too must be updated regularly and has some of the same limitations as pattern matching. Attributes of a stateful matching IDS include:
• Identifies known attacks
• Detects signatures spread across multiple packets
• Provides specific information for analysis and response
• May trigger false-positives
• Requires frequent updates of signatures
• Attacks can be modified to avoid detection
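To show why stream context matters, here is a minimal sketch, using an invented example signature, in which per-packet matching misses a pattern split across packets but matching against the reassembled stream catches it.

# Stateful matching sketch: match against the reassembled stream, not single packets.
SIGNATURE = b"USER root\r\nPASS toor"   # invented example signature

packets = [b"USER ro", b"ot\r\nPASS t", b"oor\r\n"]  # signature split across packets

per_packet_hit = any(SIGNATURE in p for p in packets)
stream = b"".join(packets)               # naive reassembly of one TCP stream
stream_hit = SIGNATURE in stream

print("per-packet match:", per_packet_hit)   # False -- simple pattern matching misses it
print("stream match:", stream_hit)           # True  -- stateful matching catches it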
Anomaly-Based Engine. Statistical Anomaly-Based Intrusion Detection. The statistical-based IDS analyzes audit trail data by comparing it to typical or predicted profiles in an effort to find potential security breaches. Thus, it attempts to identify suspicious behavior by analyzing audit data that deviates from a predicted norm.
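The following sketch illustrates the baseline-and-deviation idea in its simplest statistical form, a mean and standard deviation over a single traffic metric; the numbers are invented, and real products build far richer profiles.

# Statistical anomaly sketch: flag values far from a learned baseline.
import statistics

baseline = [120, 135, 128, 140, 132, 125, 138, 130]  # e.g., connections per minute (training data)
mean = statistics.mean(baseline)
stdev = statistics.stdev(baseline)

def is_anomalous(observed, k=3.0):
    """Flag an observation more than k standard deviations from the baseline mean."""
    return abs(observed - mean) > k * stdev

for value in (133, 129, 480):
    if is_anomalous(value):
        print(f"ALERT: {value} deviates sharply from the baseline ({mean:.0f} +/- {stdev:.0f})")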
This type of detection method can be very effective, and at a very high level, it begins to take on the characteristics of IPS by establishing an expected base and acting on divergence from that base. However, there are some potential issues that may surface depending on the environment. Tuning the IDS can be challenging, and if not performed regularly, the system is prone to false-positives. Also, the definition of normal traffic can be open to interpretation and does not preclude an attacker using normal activities to penetrate systems. The value of statistical analysis is that the system has the potential to detect unknown attacks. This is a huge departure from being limited to matching techniques. Therefore, when combined with matching technology, the IDS can be very effective. Attributes of a statistical anomaly-based IDS include: • Develops baselines of normal traffic activity and throughput, and alerts on deviations from these baselines • Can identify unknown attacks and DoS floods • Can be difficult to tune properly • Must have a clear understanding of normal traffic environment Protocol Anomaly-Based Intrusion Detection. A protocol anomaly-based IDS identifies any unacceptable deviation from expected behavior based on known protocols and signals an alert. For example, if a typical HTTP packet is received and contains attributes that deviate from established protocol standards, it may represent a specifically constructed packet used to penetrate a firewall or exploit a vulnerability. 200
The value of this method is directly related to the existence or use of well-defined protocols. In the face of custom or complex protocols, the system will have more difficulty determining the proper packet format, or may be unable to do so at all. Interestingly, this type of method is prone to the same challenges faced by signature-based IDSs. For example, protocol analysis engines may have to be added or customized to deal with unique protocols or with how common protocols are manipulated and leveraged within an organization. Nevertheless, having an IDS that is intimately aware of valid protocol use can be very powerful when an organization employs standard implementations of common protocols. Attributes of a protocol anomaly-based IDS include:
• Looks for deviations from standards set forth in requests for comment (RFCs).
• Can identify attacks without a signature.
• Reduces false-positives with well-understood protocols.
• May lead to false-positives and false-negatives with poorly understood or complex protocols.
• Protocol analysis modules take longer to deploy to customers than signatures.
Traffic Anomaly-Based Intrusion Detection. A traffic anomaly-based IDS identifies any unacceptable deviation from expected behavior based on traffic structure. When a session is established between systems, there are multiple layers to the packets representing layers in the communication. The traffic a communication produces can be compared to expected behavior based on an understanding of typical system interaction.
Attributes of a traffic anomaly-based IDS include:
• Watches for unusual traffic activities, such as a flood of User Datagram Protocol (UDP) packets, or a new service appearing on the network
• Can identify unknown attacks and DoS floods
• Can be difficult to tune properly
• Must have a clear understanding of normal traffic environment
Intrusion Responses
As alluded to above, an IDS can be integrated into an infrastructure, allowing it to perform automated changes to other systems in an effort to thwart the attack or minimize the effects of undesirable traffic. The level of capability to perform automated acts is directly related to the level of integration with other systems and the amount of trust an organization has in the system to draw the correct conclusions. Upon detection of an adverse event or suspicious activity, the IDS can begin — if permitted to and configured accordingly — to interact with the
systems participating in the communication, and other systems that can be used to restrict or block traffic, and to collaborate with other IDS devices or logical access control systems, such as a directory.
One early form of IDS integration for automated intrusion responses was to tie the IDS to the firewall, allowing it to instruct the firewall to implement specific rules targeted at the questionable traffic. This practice is still employed today and is used when the attack can be clearly quantified and the proposed rules do not conflict with normal business operations. On the surface, injecting a rule in a firewall to stop an attack seems logical. However, firewalls may have hundreds of rules, and the level at which the new rule is inserted can have an impact on normal, mission-critical communications. Moreover, some firewall platforms will share all or portions of their rules with other firewalls in the organization. Therefore, an attack affecting North American Internet connections may be blocked without affecting local traffic, but when that change is replicated to firewalls in Germany, the results can be catastrophic.
Much like firewall rule set modification, the IDS can inject new access control lists in routers, VPN gateways, or other layer 3 devices, such as switches supporting VLANs, to block or restrict traffic. Again, the placement of the filter in the system's existing ACL can have repercussions for other communications. Nevertheless, in some cases these concerns can be quelled by tuning the interaction between the IDS and other filtering devices and predefining acceptable rules and default placement. Of course, some attacks are damaging enough that the temporary loss of other communications is acceptable in the face of an aggressive attack.
In some configurations, multiple IDS devices can collaborate and employ additional controls based on information gathered from activities detected on a remote system. This occurs most often with HIDS and IPS devices. Other detection methods and types of IDS may require human interaction to employ proactive changes. Finally, in some cases the IDS can be configured or used in combination with custom applications to enact changes in systems logically distant from the attack. An example is a script that is activated by an alert from an IDS and temporarily disables a user account, increases the level of auditing on certain systems, or suspends an application from accepting new connections. These capabilities can be summarized as follows:
• Dropping suspicious data packets at the firewall
• Denying access to a user displaying suspicious activity
• Reporting the activity to other hosts on site
• Updating configurations within the IDS
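As a hedged illustration of the first capability, the following sketch shows an alert handler that shells out to a packet filter to drop traffic from an offending address. The iptables rule is a common Linux example and would differ for other firewalls, and the alert structure and address are invented for the example; as the text above notes, rule placement and rollback need careful planning.

# Automated response sketch: block an offending source address at a host firewall.
import subprocess

def block_source(ip_address: str, dry_run: bool = True):
    """Insert a drop rule for the given source IP (Linux iptables example)."""
    command = ["iptables", "-I", "INPUT", "-s", ip_address, "-j", "DROP"]
    if dry_run:
        print("Would run:", " ".join(command))
    else:
        subprocess.run(command, check=True)

# Example: an alert from the detection engine triggers the response.
alert = {"signature": "example-ftp-overflow", "source_ip": "203.0.113.45"}
block_source(alert["source_ip"])          # dry_run=True just prints the proposed rule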
Alarms and Signals. The impetus for the development of IDS was to gain visibility into the activities on the network and alert administrators to potentially harmful behavior. The core capability of IDS is to produce alarms and signals that notify people and systems of adverse events.
There are three fundamental components of alarms: • Sensor • Control/communication • Alert/enunciator/actuator A sensor is the detection system that identifies the event and produces the appropriate notification. The notification can be informational, warning, or critical, or whatever the system administrator has configured. Control or communication refers to the mechanism of distribution. For example, an alert may materialize as an e-mail, instant message, pager (i.e., mobile device) message, or even an audible message to a phone or voice mail. The enunciator is essentially a relay system. For example, it may be necessary to notify local resources immediately and remote resources later. Also, the enunciator is the system that can employ business logic as well as construct the message to accommodate different delivery mechanisms. For example, it may have to truncate the message to send to a pager, format to support a type of e-mail system, or compile a special file format, a text-to-voice, or fax system. Finally, business logic can be employed to determine who gets the message, when, and how. It is worth noting that establishing who receives an alert, when, and how is critical. Once the appropriate people are identified, determining what types of alerts they should receive and the level of urgency, the “how” needs to be addressed. For example, if an alert occurs and a message containing sensitive material needs to be delivered to the CSO, the security of that information must be considered. Therefore, the type of technology used for delivery can have a bearing on the amount and type of data sent. To further exacerbate the issue of secure delivery, when one communication transaction fails and a secondary is attempted, the message format as well as the information it contains may need to be changed. For example, the CSO may determine that the most secure mode of notification during working hours is her private fax machine in her office. If acknowledgment is not received in a predetermined timeframe, the next mode of communication is a message to her cell phone, a much less secure device. Given this, the format must be adjusted to accommodate not only the receiving device, but the content as well. All three of these components exist in off-the-shelf IDS products with varying degrees of capability. 203
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® IDS Management As IDS became an accepted enterprise security technology and adopted more readily, it began to suffer from ill-founded perceptions. Many organizations felt that it was simply a product investment — fire and forget; nothing could be further from the truth. Arguably, IDS is more about management than the box that is bolted into a rack. First and foremost, someone has to implement, tune, and monitor the system. IDS is designed to alert people in the event something is detected. It is worthless if it is not implemented properly, tuned accordingly, and there is someone to act on the information it provides. Unfortunately, what appeared to many as simply a product investment quickly turned into the inclusion of a full-time IDS administrator. Soon after, investments were needed for managing the data output from the systems, such as policies, procedures, and technology for storage, retention, and security. Finally, with added awareness of what is occurring on the network, many organizations were motivated to acquire additional technology, such as more IDSs, correlation engines, and other security controls to address the onslaught. In short, IDS requires expert administration and overall management of the solution, and the technology’s success within the enterprise is directly proportional to the integrity of management. Upon implementing IDS, organizations must ensure they have a knowledgeable resource to select, install, configure, operate, and maintain the system. Management processes and procedures must be developed and employed to ensure that the system is regularly updated with signatures and evaluated for suspicious activities, and the IDS itself is not vulnerable to direct attack. One of the more important, but overlooked, aspects of IDS is dealing with the results. Many organizations employ IDS to gain a better understanding of what may be occurring on their networks and systems. Once the IDS detects an event, it is necessary for the organization to have an incident response process. Although an IDS can be configured to perform some automated functions to thwart an attack, complete reliance on the system is not realistic. Essentially, IDS alerts you to the existence of suspicious activity and can help — to a point. Incident response is detection, identification, isolation, eradication, recovery, and learning from the incident. IDS is designed to detect and, if possible, identify the attack. It is up to people to work within their environment to follow through with the process of managing an incident. Therefore, IDS, although an effective technology, is simply the tip of the security management spear. Physically implementing IDS is the first step in orchestrating a comprehensive security management infrastructure designed to deal with potentially harmful events. In many organizations, IDS is a good candidate for outsourcing. In summary, the following are needed to ensure an effective IDS: 204
• Employ a technically knowledgeable person to select, install, configure, operate, and maintain the IDS.
• Update the system with new attack signatures and also reevaluate expected behavior profiles.
• Be aware that the IDS may be vulnerable to attacks.
• Intruders may try to disable the IDS with false information or overload the system.
• Attacks on the IDS may be a distraction.
Access Control Assurance
Audit Trail Monitoring
An audit trail is the data collected from various systems logging activity. It is a record of system activities that can be investigated to determine if network devices, systems, applications, services, or any computer system that produces a log is operating within expected parameters. Virtually any system can be configured to produce a log of system activities. Of course, not all possible events that can be logged are applicable to security. However, there are many system attributes that can be very helpful for collecting information and gaining awareness of the system and overall infrastructure status.
The function of audit trails is to alert staff to suspicious activity for further investigation. For example, an administrator may see evidence of a user logging into a mission-critical system after hours, something unexpected. Upon further investigation, by checking the logs from other devices, the administrator determines that the access was sourced from a VPN device. This collection of information can be used to validate that the access was approved or expected. Moreover, the investigation can look at other logs produced from the system in question to determine if other questionable actions were performed by the user.
The audit trail can provide details on the extent of intruder activity. During a typical attack, the adversary will usually have to traverse several systems, such as routers, firewalls, and applications, potentially leaving behind a record of activity. This information can be used to reconstruct the avenue of attack, what tools may have been used, the actions of the attacker, and what the effects were. If the information is properly collected and secured, and a chain of custody of said data can be accurately represented, the information can be used in legal proceedings to prove guilt or innocence.
Audit Event Types. There are several event types that can be audited given the diversity and broad spectrum of capabilities of technology employed in a typical organization. However, in the light of information security and access controls, there are five key types that are the foundation
of security auditing: network, system, application, user, and keystroke activity. Network. The network plays a critical role in attacks. As useful as the network is to organizations in performing business processes, it is equally valuable to threat agents; it is the common field of operations. Devices responsible for supporting communications can provide ample information about events or activities on the network.
Network layer information can be helpful in determining and isolating threat activity, such as a worm or a DoS attack. It can also be helpful in detecting whether users are using software or services not permitted by policy, such as instant messaging programs. System. System events are important to audit to have a clear understanding of the system activity. These can include logging when files are deleted, changed, or added, software is installed, or privileges are changed. System information can help determine if a worm or virus is present by evidence of unexpected activity on the system. Application. Application events encompass a broad range of possibilities for monitoring activity. Many of these are dependent on the services offered by the application, in addition to the functions it performs. For example, it may be possible to determine if a Web server is being attacked by evidence of URL manipulation. Given the diversity of applications and possible attributes that can be audited and logged, the objective is to audit activities that help isolate key functions to at least gain an initial perspective of application activity. User. User events are very important to understand who is accessing what and when. Log-on and log-off times, privilege use, and data access are the essential basics of user monitoring. Keystroke. Logging the keystrokes a user enters on a system can be extraordinarily helpful in investigating suspicious activity, but can act as evidence for remediation. A simple example is the .histor y or .bash_history file in the user’s home directory on a UNIX system. Although far from a perfect example, .history files do log all the commands a user enters during a logged-on session. Auditing Issues and Concerns. An issue with audit trails is the volume of data that is collected. The audit logs may well exceed the administrative time and skills necessary to review and investigate events that seem suspicious. Therefore, it may be necessary to use some type of event filtering or “clipping level” to properly determine the amount of log detail captured.
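As a minimal sketch of the clipping-level idea, with the threshold, window, and sample records invented for the example, the following filter raises an event only after repeated failures from the same account within a time window, rather than reporting every single failure.

# Clipping-level sketch: report only when failed log-ons exceed a threshold in a window.
from collections import defaultdict

CLIP_LEVEL = 3          # failures tolerated before an event is raised
WINDOW_SECONDS = 300    # five-minute window

failed_logons = [        # (timestamp_seconds, user) -- invented sample audit records
    (100, "jsmith"), (160, "jsmith"), (220, "jsmith"), (230, "jsmith"), (900, "mlee"),
]

recent = defaultdict(list)
for timestamp, user in failed_logons:
    # keep only failures for this user that fall inside the sliding window
    recent[user] = [t for t in recent[user] if timestamp - t <= WINDOW_SECONDS]
    recent[user].append(timestamp)
    if len(recent[user]) > CLIP_LEVEL:
        print(f"AUDIT EVENT: {user} exceeded {CLIP_LEVEL} failed log-ons in {WINDOW_SECONDS}s")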
There are several auditing tools available that can be used to process the information. Many of these also perform correlations that help determine
relationships between multiple systems to better determine exactly what was performed during an attack.
The security of logs is important to ensure the integrity of the information; this is especially true if the information is going to be used for forensics investigations or legal proceedings. Logs have to be protected against unauthorized access and changes. Therefore, the security of storage and archive systems is critical to the integrity of the information collected. Following are best practices in addressing audit issues and concerns:
• Control the volume of data.
• Event filtering or clipping level determines the amount of log detail captured.
• Auditing tools can reduce log size.
• Establish procedures in advance.
• Train personnel in pertinent log review.
• Protect and ensure against unauthorized access.
• Guard against the disabling of auditing and the deletion or clearing of logs.
• Protect the audit logs from unauthorized changes.
• Store/archive audit logs securely.
Information Security Activities
There are several essential information security activities, some covered above. These are:
• Security awareness and training: Ensuring users are fully aware of and understand corporate security policies and sound security practices, as well as how to identify potential threats and what to do if they suspect an adverse event. Educated employees can be the strongest aspect of an information security program.
• Separation/rotation of duties: As discussed above, separating people and job functions so that no one person can perform a function that could result in a security event.
• Least privilege: Ensuring that a user, process, application, or service is only provided the permissions and privileges required to perform its role within the organization.
• Recruiting and termination procedures: The hiring and removal of personnel from an organization represent a potential threat to a company and its assets. Care must be taken when employees are added or removed in the light of information security.
• Security reviews/audits: Performing regular assessments and audits ensures that established policies are followed and expectations are met.
• Vulnerability/network assessment tools: Performing a vulnerability assessment assists organizations in determining the state of the environment and the level of exposure to threats.
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® • Penetration testing: The next level in vulnerability assessments; seeks to exploit vulnerabilities to determine the true nature and impact of a given vulnerability. Penetration Testing. Penetration testing goes by many names, such as ethical hacking, tiger team, and vulnerability testing. It is the employment of exploitive techniques to determine the level of risk associated with a vulnerability or collection of vulnerabilities. The primary goal is to simulate an attack on a system or network at the request of the owner to evaluate the risk characteristics of an environment. Characteristics can include understanding the level of skill required, time to exploit a given vulnerability, and the level of impact, such as depth of access and attainable privileges.
Penetration testing can be employed against anything. However, most companies seek penetration testing to focus on Internet systems and services, remote-access solutions, and applications. The key to successful and valuable penetration testing is clearly defined objectives, scope, stated goals, agreed-upon limitations, and acceptable activities. For example, it may be acceptable to attack an FTP server, but not to the point where the system is rendered useless or data is damaged. Having a clear framework and management oversight during a test is essential to ensuring the test does not have adverse effects on the target company and that the most value is gained from the test. Types of Penetration Testing. One of the first steps in establishing the rules of engagement is considering what information about the target should be provided to the tester. No matter the scope or scale of a test, how information flows initially will set in motion other attributes of planning, ultimately defining factors by which the value of the test will be measured.
Usually some form of information is provided by the target, and only in the most extreme cases is absolutely no information offered. Some cannot be avoided, such as the name of the company, while others can be easily kept from the testers without totally impeding the mechanics of the test. Following are some basic definitions of information provisioning: • Zero knowledge: Zero knowledge is just that; the tester is provided nothing about the target’s network or environment. The tester is simply left to her ability to discover information about the company and use it to gain some form of access. This is also called black box or closed, depending on who is scoping the test. This is particularly appropriate when testing for external penetrations. • Partial knowledge: Something growing in popularity with companies seeking penetration testing is providing just enough information to get started. In some cases, information may include phone numbers to be 208
tested, IP addresses, domain information, applications, and other data that would take some time to collect and would not present any difficulty to a hacker, but would be rather time-consuming for the tester. The interesting aspect of getting some information and not all is the assumption of scope. Organizations tend to use limited information to define boundaries of the test, as opposed to providing initial data to support the test. For example, there is a difference in exposing whether a company has IDS, as opposed to providing a list of phone numbers. The former is an obvious attempt to limit the information provided to the tester, whereas the latter is influencing the scope of the test. • Full knowledge: Full knowledge is when every possible piece of information about the environment is provided to the tester. This type of test is typically employed when there is greater focus on what can be done, as opposed to what can be discovered. This is also known as crystal box, white box, or open, again depending on who is planning the test. This is particularly appropriate when testing for internal penetrations. Methodology. A methodology is an established collection of processes that are performed in a predetermined order to ensure that the job, function, or, in this case, test is accurately executed. There are several methods for performing a penetration test. However, there is a basic and logical methodology that has become best practice for performing tests:
• Reconnaissance/discovery: Identify and document information about target. • Enumeration: Gain more information with intrusive methods. • Vulnerability analysis: Map environment profile to known vulnerabilities. • Exploitation: Attempt to gain user and privileged access. RECONNAISSANCE/DISCOVERY. Reconnaissance is the search for freely available information to assist in the test. The search can be quick ping sweeps to see what IP addresses on a network will respond, scouring news groups on the Internet in search of misguided employees divulging useful information, or rummaging through the trash to find receipts for telecommunication services.
Reconnaissance can include theft, lying to people, tapping phones and networks, impersonations, or even leveraging falsified friendships to collect data about a target. The search for information is only limited by the extremes to which a company and the tester are willing to go. Reconnaissance offers a plethora of options, each related to one another. However, unlike other phases within the test’s framework, each option can be controlled, moderated, and measured to a surprisingly high level of granularity when properly planned and managed. 209
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® ENUMERATION. Enumeration (also known as network or vulnerability discovery) is essentially obtaining readily available — and sometimes provided — information directly from the target’s systems, applications, and networks. An interesting point to make very early is that the enumeration phase represents a point within the project where the line between a passive attack and an active attack begins to blur. Without setting the appropriate expectations, this phase can have results ranging from “Oops” to “Do you swear to tell the truth and nothing but the truth?”
To build a picture of a company’s environment, there are several tools and techniques available to compile a list of information obtained from the systems. Most notably, port scanning is the “block and tackle” of the enumeration, and NMap is today’s most valuable player. The simplest explanation of a port scan is the manipulation of the basic communication setup between two networked systems using TCP/IP as a communication protocol. TCP/IP uses a basic session setup to determine with what application ports a system is willing to establish communications. Obviously, collecting information about systems is the first step in formulating an attack plan. However, information collected during the reconnaissance phase can be added to help build a picture of the target’s systems and networks. It is one thing to collect information, and it is another to determine its value, and the perceived value in the hands of a hacker. On the surface, enumeration is simple; take the collected data and evaluate it collectively to establish a plan for more reconnaissance or building a matrix for the next phase, vulnerability analysis. However, this is the phase where the tester’s ability to make logical deductions plays an enormous role. VULNERABILITY ANALYSIS. There is a logical and pragmatic approach to analyzing data. During the enumeration phase, we try to perform an interpretation of the information collected, looking for relationships that may lead to exposures that can be exploited. The vulnerability analysis phase is a practical process of comparing the information collected with known vulnerabilities.
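Returning to the port scanning described in the enumeration discussion above, the following is a minimal sketch of a TCP connect scan against a handful of ports. The target address is a placeholder from the documentation range, and a real engagement would only scan systems that are explicitly in scope.

# Minimal TCP connect scan sketch -- only run against systems you are authorized to test.
import socket

TARGET = "192.0.2.10"                  # placeholder address (documentation range)
PORTS = [21, 22, 25, 80, 443]

for port in PORTS:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(1.0)
        # connect_ex returns 0 when the three-way handshake completes (port open)
        if s.connect_ex((TARGET, port)) == 0:
            print(f"{TARGET}:{port} open")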
Most information can be collected from the Internet or other sources, such as news groups or mailing lists, which can be used to compare information about the target to seek options for exploitation. However, information provided by vendors, and even data collected from the target, can be used to formulate a successful attack. Information collected during the reconnaissance phase from the company can provide knowledge about vulnerabilities unique to its environment. Data obtained directly from the company can actually support the discovery of vulnerabilities that cannot be located anywhere else. As mentioned above, information on the Internet is very helpful. Known vulnerabilities, incidents, service packs, updates, and even available 210
hacker tools help in identifying a point of attack. The Internet provides a plethora of insightful information that can easily be associated with the architecture of the target. EXPLOITATION. A great deal of planning and evaluation are performed during the earlier phases to ensure that a business-centric foundation of value is established for the test. Of course, all of this planning must lead to some form of attack. Exploiting systems and applications can be as easy as running a tool or as intricate as executing fine-tuned steps in a specific way to get in. No matter the level of difficulty, good testers follow a pattern during the exploitation phase of a test.
During a penetration test, the details considered in the planning come into full view and affect the outcome of every action taken by the tester. A sound course of action is needed to translate the planning into an attack to meet the objectives within the specified period and within the defined scope. The attack process is broken up into threads and groups, and each appears in sets of security. A thread is a collection of tasks that must be performed in a specific order to achieve a goal. Threads can be one step or many in a series used to gain access. Every thread is different, but may have similarities that can be useful. Therefore, threads can be combined into groups to create a collection of access strategies. Groups are then reviewed and compared to support comprehensive attacks using very different threads in a structured manner. Each test is evaluated at every point within the operation to ensure the expected outcome is met. Each divergence from plan is appraised to make two fundamental determinations: • Expectations: Are the expectations of the thread or group not being met or are the test’s results conflicting with the company’s assumptions and stated goals? The objective is to ensure each test is within the bounds of what was established and agreed upon. On the other hand, if the test begins to produce results that were not considered during the planning, enumeration, and vulnerability analysis phases, the engagement needs to be reconsidered, or at minimum, the planning phase needs to be revisited. Meeting expectations is everything, and in the world of ethical hacking, it can represent a fundamental challenge when not planned properly or not executed to the plan. • Technical: Is a system reacting in an unexpected manner, which is having an impact on the test? Much more granular in theory than general expectations of the test, technical gaps are literally the response of a system during a test. Keeping your eyes open for unexpected responses from systems ensures that you have not negatively affected the target or gone beyond the set scope of the test. 211
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® Document Findings. The goal of penetration testing is to gain awareness and detailed understanding of the state of the security environment. Information is collected and acted upon during the test, producing more information that can be used to draw conclusions and articulate findings.
The goal of the document is to clearly present the findings, tactics used, and tools employed, and to offer the information collected from the test. Finally, the ultimate objective is to offer perspectives and recommendations for remediation. In summary, the test results can help with identifying:
• Vulnerabilities of the system
• Gaps in security measures
• IDS and intrusion response capability
• Whether anyone is monitoring audit logs
• How suspicious activity is reported
• Suggested countermeasures
Testing Strategies. Strategies for penetration testing, based on specific objectives to be achieved, are a combination of the source of the test, how the company’s assets are targeted, and the information, or lack thereof, provided to the tester. EXTERNAL VERSUS INTERNAL TESTING. External testing refers to attacks on the organization’s network perimeter using procedures performed from outside the organization’s systems, that is, from the Internet or extranet. To conduct the test, the testing team begins by targeting the company’s externally visible servers or devices, such as the Domain Name Server (DNS), email server, Web server, or firewall.
Internal testing is performed from within the organization’s technology environment. The focus is to understand what could happen if the network perimeter were successfully penetrated, or what an authorized user could do to penetrate specific information resources within the organization’s network. BLIND AND DOUBLE-BLIND VERSUS TARGETED TESTING. In a blind testing strategy, the testing team is provided with only limited information concerning the organization’s information systems configuration. The penetration testing team must use publicly available information (such as company Web site, domain name registry, and Internet discussion board) to gather information about the target and conduct its penetration tests.
Blind testing can provide information about the organization that may have been otherwise unknown, but it can also be more time-consuming and expensive than other types of penetration testing (such as targeted 212
Access Control testing) because of the effort required by the penetration testing team to research the target. Double-blind testing extends the blind testing strategy in that the organization’s IT and security staff are not notified or informed beforehand and are “blind” to the planned testing activities. Double-blind testing can test the organization’s security monitoring and incident identification, escalation, and response procedures. Normally, in double-blind testing engagements, very few people within the organization are made aware of the testing, perhaps only the project sponsor. Double-blind penetration testing requires careful monitoring by the project sponsor to ensure that the testing procedures and the organization’s incident response procedures can be terminated when the objectives of the test have been achieved. Targeted testing (often referred to as the “lights turned on” approach) involves both the organization’s IT team and the penetration testing team being aware of the testing activities and being provided information concerning the target and the network design. A targeted testing approach may be more efficient and cost-effective when the objective of the test is focused more on the technical setting, or on the design of the network, than on the organization’s incident response and other operational procedures. A targeted test typically takes less time and effort to complete than blind testing, but may not provide as complete a picture of an organization’s security vulnerabilities and response capabilities. Types of Testing. In addition to the penetration testing strategies to be used, consideration should be given to the types of testing the testing team is to carry out. These could include:
• Application testing: Evaluate controls over the application and its process flow. • Denial-of-service testing: Evaluate a system’s susceptibility to attacks that will render it inoperable. • War dialing: Identify, analyze, and exploit modems, remote-access devices, and maintenance connections. • Wireless network testing: Identify security gaps or flaws in design, implementation, and operation of wireless technologies. • Social engineering: Use social interaction to gather information and penetrate organization’s systems. • Private branch exchange (PBX) and IP telephony testing: Evaluate controls dealing with telephony technologies. Application Testing. Many organizations offer access to core business functionality through Web-based applications. This type of access introduces new security vulnerabilities because, even with a firewall and other monitoring systems, security can be compromised, because traffic must be 213
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® allowed to pass through the firewall. The objective of application security testing is to evaluate the controls over the application and its process flow. Topics to be evaluated may include the application’s usage of encryption to protect the confidentiality and integrity of information, how users are authenticated, the integrity of the Internet user’s session with the host application, and the use of cookies — a block of data stored on a customer’s computer that is used by the Web server application. Denial-of-Service (DoS) Testing. The goal of DoS testing is to evaluate the system’s susceptibility to attacks that will render it inoperable so that it will deny service, that is, drop or deny legitimate access attempts. Decisions regarding the extent of denial-of-service testing to be incorporated into a penetration testing exercise will depend on the relative importance of ongoing, continued availability of the information systems and related processing activities. War Dialing. War dialing is a technique for systematically calling a range of telephone numbers in an attempt to identify modems, remote-access devices, and maintenance connections of computers that may exist on an organization’s network. Well-meaning users can inadvertently expose the organization to significant vulnerability by connecting a modem to the organization’s information systems. Once a modem or other access device has been identified, analysis and exploitation techniques are performed to assess whether this connection can be used to penetrate the organization’s information systems network. Wireless Network Testing. The introduction of wireless networks, whether through formal, approved network configuration management or the inadvertent actions of well-meaning users, creates additional security exposures. Sometimes referred to as war driving, hackers have become proficient in identifying wireless network’s access points simply by driving or walking around office buildings with their wireless network equipment. The goal of wireless network testing is to identify security gaps or flaws in the design, implementation, or operation of the organization’s wireless network. Social Engineering. Often used in conjunction with blind and double-blind testing, this refers to techniques using social interaction, typically with the organization’s employees, suppliers, and contractors, to gather information and penetrate the organization’s systems. Such techniques could include posing as a representative of the IT department’s help desk and asking users to divulge their user account and password information; posing as an employee and gaining physical access to restricted areas that may house sensitive information; or intercepting mail, courier packages, or even searching through trash for sensitive information on printed materials. Social engineering activities can test a less technical, but equally 214
Access Control important, security component: the ability of the organization’s people to contribute to — or prevent — unauthorized access to information and information systems. PBX and IP Telephony Testing. Beyond war dialing, phone systems and those incorporated with the network and network services represent a potential to access corporate resources. It is not uncommon for security services to use voice mail systems to provide information to users relying on the authentication of the user to gain access to their voice mail. Hackers can gain access to voice mail to gather information and monitor activity. Moreover, phone systems can be manipulated to permit a hacker to make long-distance calls for free and undetected, potentially furthering his attack on other organizations.
IP telephony or Voice-over-IP (VoIP) is the use of traditional data networks for phone conversations. It can also include the integration of phone systems with network applications, databases, and other services, such as e-mail or collaboration systems. Tests can be performed against these technologies to gain a better understanding of the risks associated with combining voice and data on a single network. The potential threat profile essentially doubles by assuming the threats associated with IP networks and those of telephone systems. Summary.
• An intrusion prevention system provides the ability to block attacks in real-time, while an intrusion detection system attempts to identify and report attacks. • Penetration testing is a series of activities undertaken to identify and exploit security vulnerabilities.
References Thomas R. Peltier, Justin Peltier, and John A Blackley. Managing a Network Vulnerability Assessment. New York: Auerbach Publications, 2003. James S. Tiller. The Ethical Hack: A Framework for Business Value Penetration Testing. New York: Auerbach Publications, 2004. Susan Young and Dave Aitel. The Hacker’s Handbook: The Strategy behind Breaking into and Defending Networks. New York: Auerbach Publications, 2003.
Sample Questions 1. A preliminary step in managing resources is: a. Conducting a risk analysis b. Defining who can access a given system or information c. Performing a business impact analysis d. Obtaining top management support 215
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® 2. Which best describes access controls? a. Access controls are a collection of technical controls that permit access to authorized users, systems, and applications. b. Access controls help protect against threats and vulnerabilities by reducing exposure to unauthorized activities and providing access to information and systems to only those who have been approved. c. Access control is the employment of encryption solutions to protect authentication information during log-on. d. Access controls help protect against vulnerabilities by controlling unauthorized access to systems and information by employees, partners, and customers. 3. _______ requires that a user or process be granted access to only those resources necessary to perform assigned functions. a. Discretionary access control b. Separation of duties c. Least privilege d. Rotation of duties 4. What are the six main categories of access control? a. Detective, corrective, monitoring, logging, recovery, and classification b. Deterrent, preventative, detective, corrective, compensating, and recovery c. Authorization, identification, factor, corrective, privilege, and detective d. Identification, authentication, authorization, detective, corrective, and recovery 5. What are the three types of access control? a. Administrative, physical, and technical b. Identification, authentication, and authorization c. Mandatory, discretionary, and least privilege d. Access, management, and monitoring 6. Which approach revolutionized the process of cracking passwords? a. Brute force b. Rainbow table attack c. Memory tabling d. One-time hashing 7. What best describes two-factor authentication? a. Something you know b. Something you have c. Something you are d. A combination of two listed above 8. A potential vulnerability of the Kerberos authentication server is: a. Single point of failure b. Asymmetric key compromise 216
c. Use of dynamic passwords
d. Limited lifetimes for authentication credentials
9. In mandatory access control, the system controls access and the owner determines:
a. Validation
b. Need to know
c. Consensus
d. Verification
10. Which is the least significant issue when considering biometrics?
a. Resistance to counterfeiting
b. Technology type
c. User acceptance
d. Reliability and accuracy
11. Which is a fundamental disadvantage of biometrics?
a. Revoking credentials
b. Encryption
c. Communications
d. Placement
12. Role-based access control _______:
a. Is unique to mandatory access control
b. Is independent of owner input
c. Is based on user job functions
d. Can be compromised by inheritance
13. Identity management is:
a. Another name for access controls
b. A set of technologies and processes intended to offer greater efficiency in the management of a diverse user and technical environment
c. A set of technologies and processes focused on the provisioning and decommissioning of user credentials
d. A set of technologies and processes used to establish trust relationships with disparate systems
14. A disadvantage of single sign-on is:
a. Consistent time-out enforcement across platforms
b. A compromised password exposes all authorized resources
c. Use of multiple passwords to remember
d. Password change control
15. Which of the following is incorrect when considering privilege management?
a. Privileges associated with each system, service, or application, and the defined roles within the organization to which they are needed, should be identified and clearly documented.
b. Privileges should be managed based on least privilege. Only rights required to perform a job should be provided to a user, group, or role.
c. An authorization process and a record of all privileges allocated should be maintained. Privileges should not be granted until the authorization process is complete and validated.
d. Any privileges that are needed for intermittent job functions should be assigned to multiple user accounts, as opposed to those for normal system activity related to the job function.
16. Capability lists are related to the subject, whereas access control lists (ACLs) are related to the object, and therefore:
a. Under capability lists, attacker subjects can simply refuse to submit their lists and act with no restrictions.
b. Under access control lists, a user can invoke a program to access objects normally restricted.
c. Capability lists can only manage subject-to-subject access, and thus cannot be part of the reference monitor.
d. Only access control lists can be used in object-oriented programming.
Domain 3
Cryptography Kevin Henry, CISSP
Introduction Cryptography is perhaps the most fascinating domain in the CISSP® CBK®. No other domain has the history, challenge, and technological advancements that cryptography enjoys. Throughout history, cryptography has been a crucial factor in military victories or failures, treason, espionage, and business advantage. Cryptography is both an art and a science — the use of deception and mathematics, to hide data, as in steganography, to render data unintelligible through the transformation of data into an unreadable state, and to ensure that a message has not been altered in transit. Another feature of some cryptographic systems is the ability to provide assurance of who sent the message, authentication of source, and proof of delivery. CISSP Expectations The CISSP should fully understand the basic concepts within cryptography, including public and private key algorithms in terms of their applications and uses. Cryptography algorithm construction, key distribution, key management, and methods of attack are also important for the successful candidate to understand. The applications, construction, and use of digital signatures are discussed and compared to the elements of cryptography. The principles of authenticity of electronic transactions and nonrepudiation are also included in this domain. Core Information Security Principles: Confidentiality, Integrity, and Availability The cryptography domain addresses the principles, means, and methods of disguising information to ensure its integrity, confidentiality, and authenticity. Unlike the other domains, cryptography does not support the standard of availability.
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® Key Concepts and Definitions Plaintext or cleartext is the message in its natural format. Plaintext would be readable to an attacker. Ciphertext or cryptogram is the message once altered, so as to be unreadable for anyone except the intended recipients. An attacker seeing ciphertext would be unable to read the message or to determine its content. The cryptosystem represents the entire cryptographic operation. This includes the algorithm, the key, and key management functions. Encryption is the process of converting the message from its plaintext to ciphertext. It is also referred to as enciphering. The two terms are used interchangeably in literature and have similar meanings. Decryption is the reverse process from encryption. It is the process of converting a ciphertext message into plaintext through the use of the cryptographic algorithm and key that was used to do the original encryption. This term is also used interchangeably with the term decipher. The key or cryptovariable is the sequence that controls the operation of the cryptographic algorithm. It determines the behavior of the algorithm and permits the reliable encryption and decryption of the message. There are both secret and public keys used in cryptographic algorithms, as will be seen later in this chapter. Nonrepudiation is a security service by which evidence is maintained so that the sender and recipient of data cannot deny having participated in the communication. Individually, it is referred to as nonrepudiation of origin and nonrepudiation of receipt. An algorithm is a mathematical function that is used in the encryption and decryption process. It may be quite simple or extremely complex. Cryptanalysis is the study of techniques for attempting to defeat cryptographic techniques and, more generally, information security services. Cryptology is the science that deals with hidden, disguised, or encrypted communications. It embraces communications security and communications intelligence. Collision occurs when a hash function generates the same output for different inputs. Key space represents the total number of possible values of keys in a cryptographic algorithm or other security measure, such as a password. For example, a 20-bit key would have a key space of 1,048,576. Work factor represents the time and effort required to break a protective measure. 220
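To make the key space definition concrete, a short sketch computing the size of a few key spaces; the 20-bit figure matches the example above, and the other lengths are simply common key sizes chosen for illustration.

# Key space grows exponentially with key length: an n-bit key has 2**n possible values.
for bits in (20, 56, 128):
    print(f"{bits}-bit key space: {2 ** bits:,} possible keys")
# 20-bit key space: 1,048,576 possible keys
# 56-bit key space: 72,057,594,037,927,936 possible keys
# 128-bit key space: 340,282,366,920,938,463,463,374,607,431,768,211,456 possible keys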
An initialization vector is a nonsecret binary vector used as the initializing input to the algorithm for the encryption of a plaintext block sequence to increase security by introducing additional cryptographic variance and to synchronize cryptographic equipment. Encoding is the action of changing a message into another format through the use of a code. This is often done by taking a plaintext message and converting it into a format that can be transmitted via radio or some other medium, and is usually used for message integrity instead of secrecy. An example would be to convert a message to Morse code. Decoding is the reverse process from encoding — converting the encoded message back into its plaintext format. Transposition or permutation is the process of reordering the plaintext to hide the message. Transposition may look like this:

Plaintext: HIDE
Transposition algorithm: reorder in the sequence 2143
Ciphertext: IHED
Substitution is the process of exchanging one letter or byte for another. This operation may look like this:

Plaintext: HIDE
Substitution process: shift alphabet three places
Ciphertext: KLGH
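The shift-by-three substitution just shown is easy to express in code. The following is a minimal Python sketch (illustrative only); it reproduces HIDE becoming KLGH.

import string

ALPHABET = string.ascii_uppercase

def shift_substitute(plaintext: str, shift: int = 3) -> str:
    # Replace each letter with the letter `shift` places later, wrapping from Z back to A.
    return "".join(ALPHABET[(ALPHABET.index(c) + shift) % 26] for c in plaintext)

print(shift_substitute("HIDE"))  # KLGH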
The SP-network is the process described by Claude Shannon used in most block ciphers to increase their strength. SP stands for substitution and permutation (transposition), and most block ciphers do a series of repeated substitutions and permutations to add confusion and diffusion to the encryption process. An SP-network uses a series of S-boxes to handle the substitutions of the blocks of data. Breaking a plaintext block into a subset of smaller S-boxes makes it easier to handle the computations. Confusion is provided by mixing (changing) the key values used during the repeated rounds of encryption. When the key is modified for each round, it provides added complexity that the attacker would encounter. Diffusion is provided by mixing up the location of the plaintext throughout the ciphertext. Through transposition, the location of the first character of the plaintext may change several times during the encryption process, and this makes the cryptanalysis process much more difficult. The avalanche effect, an important consideration in all cryptography, is to design algorithms where a minor change in either the key or the plaintext will have a significant change in the resulting ciphertext. This is also a feature of a strong hashing algorithm.
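The avalanche effect is easy to observe with any strong hash function. The short Python sketch below (an illustration added here, not part of the text) hashes two messages that differ in a single character and counts how many of the output bits differ; with a well-designed function, roughly half of the bits change.

import hashlib

def sha256_as_int(message: bytes) -> int:
    return int.from_bytes(hashlib.sha256(message).digest(), "big")

a = sha256_as_int(b"HIDE THE GOLD")
b = sha256_as_int(b"HIDE THE GOLE")   # one character changed

differing_bits = bin(a ^ b).count("1")
print(differing_bits, "of 256 output bits differ")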
Figure 3.1. The cryptographic process.
The History of Cryptography Cryptography has been around for many years, and yet the basic principles of cryptography have not changed. The core principle of most cryptographic systems is that they take a plaintext message and, through a series of transpositions or substitutions, convert it to ciphertext, as shown in Figure 3.1. The Early (Manual) Era. There is evidence of cryptographic-type operations going back thousands of years. In one case, there is an example in Egypt of one set of hieroglyphics that were encrypted with a simple substitution algorithm.
The Spartans were known for the Spartan scytale — a method of transmitting a message by wrapping a leather belt around a tapered dowel. Written across the dowel, the message would be undecipherable once it was unwrapped from the dowel. The belt could then be carried to the recipient, who would be able to read the message as long as he had a dowel of the same diameter and taper. There are further examples of the use and development of cryptographic methods throughout the centuries. Julius Caesar used the Caesar cipher — a simple substitution cipher that shifted the alphabet three positions. Developments in cryptographic science continued throughout the middle ages with the work of Leon Battista Alberti, who invented the idea of a cryptographic key in 1466, and the enhanced use of polyalphabetic ciphers by Blaise de Vigenère. We will look at their work in more detail when we review the methods of cryptography. The Mechanical Era. From a paper-and-pencil world, cryptography developed into the mechanical era with the advent of CipherDisks and rotors to simplify the manual processes of cryptography. Devices developed during this era were in regular use well into the 20th century. These include the German Enigma machine, the Confederate Army’s CipherDisk, and the Japanese Red and Purple machines.
During this era, tools and machines were developed that greatly increased the complexity of cryptographic operations, as well as enabling the use of much more robust algorithms. Many of these devices introduced a form of randomization to the cryptographic operations and made the use of cryptographic devices available to nontechnical people. One core concept developed in this era was the performance of the algorithm on the numerical value of a letter, rather than the letter itself. This was a natural transition into the electronic era, where cryptographic operations are normally performed on binary or hex values of letters, rather than on the written letter. For example, we could write the alphabet as follows: A = 0, B = 1, C = 2 … Z = 25. This was especially integral to the one-time pad and other cipher methods that were developed during this era. The Modern Era. Today’s cryptography is far more advanced than the cryptosystems of yesterday. We are able to both encrypt and break ciphers that could not even have been imagined before we had the power of computers.
Today’s cryptosystems operate in a manner so that anyone with a computer can use cryptography without even understanding cryptographic operations, algorithms, and advanced mathematics. However, as we will review later, it is still important to implement a cryptosystem in a secure manner. In fact, we could state that most attacks against cryptosystems are not the result of weaknesses in cryptographic algorithms, but rather poor or mismanaged implementations. As we look at the cryptographic algorithms in use today, we will see many of the advances of yesterday built into the functions of today — randomization, transposition, and cryptographic keys. Emerging Technology Quantum Cryptography.* A fundamental difference between traditional cryptography and quantum cryptography is that traditional cryptography primarily uses difficult mathematical techniques as its fundamental mechanism. Quantum cryptography, on the other hand, uses physics to secure data. Whereas traditional cryptography stands firm due to strong math, quantum cryptography has a radically different premise in that the secu-
*Ben Rothke, An overview of quantum cryptography, in Information Security Management Handbook, 3rd ed., Vol. 3, Tipton, Harold F. and Krause, Micki, Eds., Auerbach Publications, New York, 2006, pp. 380–381.
rity should be based on known physical laws rather than on mathematical difficulties. Quantum cryptography (also known as quantum key distribution, or QKD) is built on quantum physics. Perhaps the best-known aspect of quantum physics is the uncertainty principle of Werner Heisenberg. His basic claim is that we cannot know both a particle’s position and momentum with unlimited accuracy at the same time. Specifically, quantum cryptography is a set of protocols, systems, and procedures by which it is possible to create and distribute secret keys. Quantum cryptography can be used to generate and distribute secret keys, which can then be used together with traditional crypto algorithms and protocols to encrypt and transfer data. It is important to note that quantum cryptography is not used to encrypt data, transfer encrypted data, or store encrypted data. As noted earlier, the need for asymmetric key systems arose from the issue of key distribution. The quagmire is that you needed a secure channel to set up a secure channel. Quantum cryptography solves the key distribution problem by allowing the exchange of a cryptographic key between two remote parties with complete security, as dictated by the laws of physics. Once the key exchange takes place, conventional cryptographic algorithms are used. For that reason, many prefer the term quantum key distribution to quantum cryptography. When used in a commercial setting, the following is a basic and overly simplistic setup of how quantum cryptography can be used:
1. Two remote parties need to exchange data electronically in a highly secure manner.
2. They choose standard crypto algorithms, protocols, systems, and transport technologies to exchange the data in an encrypted form.
3. They use a quantum cryptography channel to generate and exchange the secret keys needed by the algorithms.
4. They use the secret keys generated with quantum cryptography and the classical algorithms to encrypt the data.
5. They exchange the encrypted data using the chosen classical protocols and transfer technologies.
Within quantum cryptography, there are two unique channels. One is used for the transmission of the quantum key material via single-photon light pulses. The other channel carries all message traffic, including the cryptographic protocols, encrypted user traffic, and more. Within the laws of quantum physics, once a photon has been observed, its state is changed. This makes quantum cryptography perfect for security because any time that someone tries to eavesdrop on a secure channel,
this will cause a disturbance to the flow of the photons. This can easily be identified to provide extra security. Quantum algorithms are orders of magnitude better than current systems. It is estimated that quantum factorization can factor a number a million times longer than that used for RSA in a millionth of the time. In addition, it can crack a DES cipher in less than four minutes. Quantum computing’s speed increase comes from forming a superposition of numbers. Quantum computers are able to perform calculations on various superpositions simultaneously, which creates the effect of a massive parallel computation. Protecting Information Data Storage. Protection of stored data is a key requirement. Backup tapes, off-site storage, password files, and many other types of confidential information need to be protected from disclosure or undetected alteration. This is done through the use of cryptographic algorithms that limit access to the data to those that hold the proper encryption (and decryption) keys. (Note: Because password files are hashed instead of encrypted, there are no keys to decrypt them.) Some modern cryptographic tools also permit the condensing of messages, saving both transmission and storage space. Data Transmission. One of the primary purposes of cryptography throughout history has been to move messages across various types of media. The intent was to prevent the contents of the message from being revealed even if the message itself was intercepted in transit. Whether the message is sent manually, over a voice network, or via the Internet, modern cryptography provides secure and confidential methods to transmit data and allows the verification of the integrity of the message, so that any changes to the message itself can be detected. Advances in quantum cryptography also allow the detection of whether a message has even been read in transit. Link Encryption. Data is encrypted on a network using either link or end-to-end encryption. In general, link encryption is performed by service providers, such as a data communications provider on a Frame Relay network. Link encryption encrypts all of the data along a communications path (e.g., a satellite link, telephone circuit, or T-1 line). Because link encryption also encrypts routing data, communications nodes need to decrypt the data to continue routing. The data packet is decrypted and reencrypted at each point in the communications channel. It is theoretically possible that an attacker compromising a node in the network may see the message in the clear. Because link encryption also encrypts the routing information, it provides traffic confidentiality better than end-to-end encryption. Traffic confidentiality hides the addressing information from an observer, preventing an inference attack based on the existence of traffic between two parties.
Figure 3.2. Comparison of link and end-to-end encryption. End-to-End Encryption. End-to-end encryption is generally performed by the end-user organization. The data is encrypted at the start of the communications channel and remains encrypted until it is decrypted at the remote end. Although data remains encrypted when passed through a network, routing information remains visible. It is possible to combine both types of encryption (NIST SP 800-12). See Figure 3.2.
Uses of Cryptography Availability. Cryptography supports all three of the core principles of information security. Many access control systems use cryptography to limit access to systems through the use of passwords. Many token-based authentication systems use cryptographic-based hash algorithms to compute one-time passwords. Denying unauthorized access prevents an attacker from entering and damaging the system or network and thereby denying access to authorized users. Confidentiality. Cryptography provides confidentiality through altering or hiding a message so that it cannot be understood by anyone except the intended recipient. Integrity. Cryptographic tools provide integrity checks that allow a recipient to verify that a message has not been altered. Cryptographic tools cannot prevent a message from being altered, but they are effective in detecting either intentional or accidental modification of the message.
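As a concrete illustration of such an integrity check, the sketch below uses Python's standard hmac module (message authentication codes are discussed in more detail later in this chapter); the key and message values here are purely illustrative, and the key is assumed to have been shared out of band.

import hmac
import hashlib

def mac(key: bytes, message: bytes) -> bytes:
    # Compute a message authentication code over the message.
    return hmac.new(key, message, hashlib.sha256).digest()

key = b"shared secret key"                 # assumed to be distributed out of band
message = b"transfer 100 to account 42"
tag = mac(key, message)                    # sent along with the message

# Recipient side: recompute the code and compare in constant time.
received = b"transfer 100 to account 42"
print(hmac.compare_digest(tag, mac(key, received)))  # True only if the message is unaltered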
Additional Features of Cryptographic Systems In addition to the three core principles of information security listed above, cryptographic tools provide several more benefits.
Cryptography Nonrepudiation. In a trusted environment, authentication of the origin can be provided through the simple control of the keys. The receiver has a level of assurance that the message was encrypted by the sender, and the sender has trust that the message was not altered once it was received. However, in a more stringent, less trustworthy environment, it may be necessary to provide assurance via a third party of who sent a message and that the message was indeed delivered to the right recipient. This is accomplished through the use of digital signatures and public key encryption. As shown later in this chapter, the use of these tools provides a level of nonrepudiation of origin that can be verified by a third party.
Once a message has been received, what is to prevent the recipient from changing the message and contesting that the altered message was the one sent by the sender? Nonrepudiation of delivery prevents a recipient from changing the message and falsely claiming that the message is in its original state. This is also accomplished through the use of public key cryptography and digital signatures and is verifiable by a trusted third party. Authentication. Authentication is the ability to determine who has sent a message. This is primarily done through the control of the keys, because only those with access to the key are able to encrypt a message. This is not as strong as nonrepudiation of origin, which will be reviewed shortly.
Cryptographic functions use several methods to ensure that a message has not been changed or altered. These include hash functions, digital signatures, and message authentication codes (MACs). All of these will be reviewed in more detail through this chapter. The main thing is that the recipient is able to detect any change that has been made to a message, whether accidentally or intentionally. Access Control. Through the use of cryptographic tools, many forms of access control are supported — from log-ins via passwords and passphrases to the prevention of access to confidential files or messages. In all cases, access would only be possible for those individuals that had access to the correct cryptographic keys.
Methods of Cryptography Stream-Based Ciphers. There are two primary methods of encrypting data: the stream and block methods. When a cryptosystem performs its encryption on a bit-by-bit basis, it is called a stream-based cipher.
This is the method most commonly associated with streaming applications, such as voice or video transmission. The cryptographic operation for a stream-based cipher is to mix the plaintext with a keystream that is generated by the cryptosystem. The mixing operation is usually an exclusive-or (XOR) operation — a very fast mathematical operation. 227
Figure 3.3. Cryptographic operation for a stream-based cipher.
As seen in Figure 3.3, the plaintext is XORed with a seemingly random keystream to generate ciphertext. We refer to it as seemingly random because the generation of the keystream is usually controlled by the key. If the key could not produce the same keystream for the purposes of decryption of the ciphertext, then it would be impossible to ever decrypt the message. The exclusive-or process is a key part of many cryptographic algorithms. It is a simple binary operation that adds two values together. If the two values are the same, 0 + 0 or 1 + 1, then the output is always a 0; however, if the two values are different, 1 + 0 or 0 + 1, then the output is a 1. From the example above we can see this in operation:

Input plaintext: 0101 0001
Keystream:       0111 0011
Output of XOR:   0010 0010
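The keystream mixing just described can be sketched in a few lines of Python. The keystream generator below is only a stand-in built from a hash function (it is not a cryptographically strong stream cipher); the point is that XORing the same keystream a second time recovers the plaintext, so the same operation serves for encryption and decryption.

import hashlib

def keystream(key: bytes, length: int) -> bytes:
    # Toy keystream: hash the key together with a counter until enough bytes exist.
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def xor_with_keystream(key: bytes, data: bytes) -> bytes:
    return bytes(d ^ k for d, k in zip(data, keystream(key, len(data))))

key = b"example key"
ciphertext = xor_with_keystream(key, b"ATTACK AT DAWN")
print(xor_with_keystream(key, ciphertext))  # b'ATTACK AT DAWN' -- the same operation decrypts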
A stream-based cipher relies primarily on substitution — the substitution of one character or bit for another in a manner governed by the cryptosystem and controlled by the cipher key. For a stream-based cipher to operate securely, it is necessary to follow certain rules for the operation and implementation of the cryptographic tool. The keystream must be strong enough to not be easily guessed or predictable. In time, the keystream will repeat, and that period (or length of the repeating segment of the keystream) must be long enough to be difficult to calculate. If a keystream is too short, then it is susceptible to frequency analysis or other language-specific attacks. The implementation of the stream-based cipher is probably the most important factor in the strength of the cipher — this applies to nearly every crypto product and, in fact, to security overall. Some important factors in the implementation are to ensure that the key management processes are secure and cannot be readily compromised or intercepted by an attacker. 228
Cryptography Block Ciphers. A block cipher operates on blocks or chunks of text. As plaintext is fed into the cryptosystem, it is divided into blocks of a preset size — often a multiple of the ASCII character size — 64, 128, 192 bits, etc.
Most block ciphers use a combination of substitution and transposition to perform their operations. This makes a block cipher relatively stronger than most stream-based ciphers, but more computationally intensive and usually more expensive to implement. This is also why many stream-based ciphers are implemented in hardware, whereas a block-based cipher is implemented in software. Encryption Systems Substitution Ciphers The substitution cipher is something many of us have used. It involves the simple process of substituting one letter for another based upon a cryptovariable. Typically, substitution involves shifting positions in the alphabet of a defined number of characters. Many old ciphers were based on substitution, including the Caesar cipher and ROT-13.* Playfair Cipher. The playfair cipher was used well into the 20th century and was a key element of the cryptographic systems used by the Allies in the Second World War.
The sender and receiver agreed on a key word, for example, Triumph. A table was then constructed using that word and then the rest of the alphabet — skipping over the letters already appearing in the key, and using I and J as the same letter. The key word occupies the first cells of the table, reading across the rows:

T  R  I/J  U  M
P  H  A    B  C
D  E  F    G  K
L  N  O    Q  S
V  W  X    Y  Z
If the sender wanted to encrypt the message “Do not accept offer,” it would be encrypted by first grouping the plaintext in two letter blocks and spacing the repeated letters in the plaintext with a filler letter, e.g., X. The plaintext would then be: DO NO TA CX CX EP TO FX FX ER. The table is read by looking at where the two letters of the block intersect. For example, if we took the first block, DO, and made it a rectangle,

*Chris Hare, Cryptography 101, Data Security Management, 2002.
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® the letters at the other two corners of the rectangle would be FL, that is, the ciphertext for the block DO. The box created by the letters DO is in a border for clarity. The next plaintext block is NO, and because both of those letters are on the same row, we would use the ciphertext of the next letters — in this case, NO would be encrypted as OQ. If the input block had been OS, then the row would wrap and the output ciphertext would be QL, using the next letter after the O, and the next letter after the S being the L from the beginning of the row. The letters FX fall in the same column, and we would do the same as for letters that fall in the same row — use the next lower letter and wrap to the top of the column if necessary. The block FX would be encrypted as either OI or OJ. Transposition Ciphers. All of the above cryptosystems are based on the principle of substitution, that is, to substitute or exchange one value or letter for another. Now we will look at the cryptosystems that use transposition or permutation.
These systems rely on concealing the message through the transposing of or interchanging the order of the letters. The Rail Fence. In the simple transposition cipher known as the rail fence, the message is written and read in two or more lines. Let us say that we want to send the message “Purchase gold and oil stocks.”
We could write the message in alternating diagonal rows as shown: P
R U
H C
S A
G E
L O
A D
D N
I O
S L
O T
K C
S
The ciphertext would read as follows: PRHSGLADIGOKUCAEODNOLTCS The problem with such a system is that because the letters are the same as the plaintext, no substitution has taken place, just a reordering of the letters; the ciphertext is still susceptible to frequency analysis and other cryptographic attacks Rectangular Substitution Tables. The use of rectangular substitution tables was an early form of cryptography. The sender and receiver decided on the size and structure of a table to hold the message, and then the order in which to read the message.
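A two-row rail fence is simple to automate. The following Python sketch (illustrative, and limited to the simple alternating two-rail form used in this example) reproduces the ciphertext above.

def rail_fence_encrypt(plaintext: str, rails: int = 2) -> str:
    # Write the letters alternately across the rows, then read the rows in order.
    letters = [c for c in plaintext.upper() if c.isalpha()]
    rows = [[] for _ in range(rails)]
    for i, letter in enumerate(letters):
        rows[i % rails].append(letter)
    return "".join("".join(row) for row in rows)

print(rail_fence_encrypt("Purchase gold and oil stocks"))
# PRHSGLADISOKUCAEODNOLTCS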
Rectangular Substitution Tables. The use of rectangular substitution tables was an early form of cryptography. The sender and receiver decided on the size and structure of a table to hold the message, and then the order in which to read the message. Let us use the same plaintext as the previous example (“Purchase gold and oil stocks”), but place it in a rectangular substitution block.
Cryptography P A L O O
U S D I C
R E A L K
C G N S S
H O D T
Reading the table in a top-down manner would produce the following ciphertext: PALOOUSDICREALKCGNSSHODT. Of course, the sender and receiver could agree on reading the table any way — bottom up, diagonally — that suited them. Monoalphabetic and Polyalphabetic Ciphers. We looked at the Caesar cipher earlier — a simple substitution algorithm that merely shifted the plaintext over three places to create the ciphertext. This was a monoalphabetic system — the substitution was one alphabet letter for another. In the case of the Caesar cipher, the replacement alphabet was offset by three places:

Plaintext:  A B C D E F G H I J K … Z
Ciphertext: D E F G H I J K L M N … C

There is also the scrambled alphabet. In this case, the substitution alphabet is a scrambled version of the alphabet. It could look like this, for example:

Plaintext:    A B C D E F G H I J K … Z
Substitution: M G P U W I R L O V D … K
Using the scrambled alphabet above, the plaintext of BAKE would be substituted as GMDW. The problem with monoalphabetic ciphers, however, is that they are still subject to the characteristics of the plaintext language — an E, for example, would be substituted as a W throughout the ciphertext. That would mean the letter W in the ciphertext would appear as frequently as an E in plaintext. That makes a cryptanalytic attack of a monoalphabetic system fairly simple. The use of several alphabets for substituting the plaintext is called polyalphabetic ciphers. It is designed to make the breaking of a cipher by frequency analysis more difficult. Instead of substituting one alphabet for another, the ciphertext is generated from several possible substitution alphabets. We show an example here: 231
Plaintext:      A B C D E F G H I J K … Z
Substitution 1: M G P U W I R L O V D … K
Substitution 2: V K P O I U Y T J H S … A
Substitution 3: L P O I J M K H G T U … F
Substitution 4: N B V C X Z A S D E Y … W
Using this table, we could substitute the plaintext FEED as IIJC, if we used the substitution alphabets in sequence. We see the power of using multiple alphabets in this example, as we see that the repeated E in the plaintext is substituted for different results in the ciphertext. We also note that the ciphertext has a repeated I, and yet that is for different plaintext values. Blaise de Vigenère. Blaise de Vigenère, a Frenchman, developed the polyalphabetic cipher using a key word and 26 alphabets, each one offset by one place.* We can show this in the following table. The top row of the table would be the plaintext values and the first column of the table the substitution alphabets.
A B C D E F G H I J K L … Z
A A B C D E F G H I J K L … Z
B B C D E F G H I J K L M … A
C C D E F G H I J K L M N … B
D D E F G H I J K L M N O … C
E E F G H I J K L M N O P … D
F F G H I J K L M N O P Q … E
G G H I J K L M N O P Q R … F
H H I J K L M N O P Q R S … G
I I J K L M N O P Q R S T … H
J J K L M N O P Q R S T U … I
K K L M N O P Q R S T U V … J
L L M N O P Q R S T U V W … K
… … … … … … … … … … … … … … …
Z Z A B C D E F G H I J K … Y
The sender and receiver of the message would agree on a key to use for the message — in this case we could use the word FICKLE, as the key. Just as in the running cipher shown below, we would repeat the key for the length of the plaintext. If we wanted to encrypt the message “HIKE BACK,” it would be constructed as follows:
*Please note that the author has agreed with the assertion of William Stallings in regard to Vigenère. Ross Anderson alleges that Vigenère’s work was based on modular mathematics, not on substitution tables.
Plaintext:  H I K E B A C K
Key:        F I C K L E F I
Ciphertext: M Q M O M E H S
The ciphertext is found by locating the cell where the plaintext letter’s column (the top row of the table) intersects the row for the key letter. For the first letter, the column for H intersects the row for F at M. Again, we see the power of a polyalphabetic system, where repeated values in the plaintext do not necessarily give the same ciphertext values, and repeated ciphertext values correspond to different plaintext inputs. Modular Mathematics and the Running Key Cipher. The use of modular mathematics and the representation of each letter by its numerical place in the alphabet are the key to many modern ciphers:

Letter: A B C D E F G H I J K  L  M  N  O  P  Q  … Z
Value:  0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 … 25
The English alphabet would be calculated as mod 26 because there are 26 letters in the English alphabet. The use of mod 26 means that whenever the result of a mathematical operation is greater than 26, we subtract the 26 from the total as often as we need to until it is less than 26. Using the above values, the cryptographic operation operates as follows: Ciphertext = plaintext + key (mod 26). This is written as C = P + K (mod 26). Ciphertext is the value of the plaintext + the value of the key (mod 26). For example, the plaintext letter N has a value of 13 (it is the 13th letter in the alphabet using the table above). If the key to be used to encrypt the plaintext is a Q with a value of 16, the ciphertext would be 13 + 16, or the 29th letter of the alphabet. Because there is no 29th letter in the English alphabet, we subtract 26 (hence the term mod 26) and the ciphertext becomes the letter corresponding to the number 3, a D. Running Key Cipher. In the example below, we demonstrate the use of a running key cipher. In a running key cipher the key is repeated (or runs) for the same length as the plaintext input. Here we selected the key of FEED to encrypt the plaintext CHEEK. We repeat the key as long as necessary to match the length of the plaintext input.
Let us demonstrate the encryption of the word CHEEK using the table above and the key of FEED. Remember, the numbers under the letters represent the value or position of that letter in the alphabet.
Plaintext: CHEEK
  C  H  E  E  K
  2  7  4  4  10

Key: FEED (repeated to the length of the plaintext)
  F  E  E  D  F
  5  4  4  3  5
The key is repeated for the length of the plaintext. The ciphertext would be computed as follows:

Plaintext:          C  H  E  E  K
Key:                F  E  E  D  F
Value of plaintext: 2  7  4  4  10
Value of key:       5  4  4  3  5
Ciphertext value:   7  11 8  7  15
Ciphertext:         H  L  I  H  P
One-Time Pads. Now we will look at the only cipher system that we can assert is unbreakable. That is, as long as it is implemented properly. These are often referred to as Vernam ciphers after the work of Gilbert Vernam, who proposed the use of a key that could only be used once and that must be as long as the plaintext but never repeats.
The one-time pad uses the principles of the running key cipher, using the numerical values of the letters and adding those to the value of the key; however, the key is a string of random values the same length as the plaintext. It never repeats, compared to the running key that may repeat several times. This means that a one-time pad is not breakable by frequency analysis or many other cryptographic attacks. The sender and receiver must first exchange the key material. This is often done using cryptographic pads and sending them to both the sender and receiver through a secure exchange mechanism, such as a diplomatic pouch or trusted courier. Assume that we develop a keystream for the one-time pad that looks like this:

ksosdfsherfn avaishdas vdsfvksdklvsidfva sckapocs asknvaklvnhainfwivhpreovia

It is just a randomly chosen set of values — maybe generated through monitoring some seemingly random occurrence, such as atmospheric noise or the readings on a Geiger counter. In this case, we chose a series of alphabetic values, but we could just as easily choose mathematical values and then add them to the plaintext using mod 26. We know from our earlier example the values of each letter:

Letter: A B C D E F G H I J K  L  M  N  O  P  Q  R  S  T  … Z
Value:  0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 … 25
Let us send the message “Do not negotiate less than 5%.” We can then format our encryption as follows:

Plaintext:         D  O  N  O  T  N  E  G
Key (from above):  K  S  O  S  D  F  S  H
Plaintext values:  3  14 13 14 19 13 4  6
Key values:        10 18 14 18 3  5  18 7
Ciphertext values: 13 6  1  6  22 18 22 13   (32 and 27 are reduced mod 26 to 6 and 1)
Ciphertext:        N  G  B  G  W  S  W  N
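A one-time pad is straightforward to sketch in Python (an illustration only); the key is generated with the standard secrets module, is as long as the message, and must never be reused. Decryption subtracts the key values mod 26.

import secrets

def random_key(length: int) -> str:
    # Truly random letters, one per plaintext letter; never reuse a pad.
    return "".join(chr(secrets.randbelow(26) + ord("A")) for _ in range(length))

def one_time_pad(text: str, key: str, decrypt: bool = False) -> str:
    sign = -1 if decrypt else 1
    return "".join(
        chr(((ord(t) - ord("A")) + sign * (ord(k) - ord("A"))) % 26 + ord("A"))
        for t, k in zip(text, key)
    )

message = "DONOTNEGOTIATE"
pad = random_key(len(message))
ciphertext = one_time_pad(message, pad)
print(one_time_pad(ciphertext, pad, decrypt=True))  # DONOTNEGOTIATE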
Steganography. Steganography is the hiding of a message inside of another medium, such as a photograph, music, or other item. Steganography comes from the Greek expression of “hidden writing.”
Steganography is not a new art — history tells of slaves having a message tattooed onto their head to carry it through enemy lines. The message itself was not encrypted, but its existence was hidden so that only the recipient would know to reveal the message through a radical haircut. Steganography was also used through the centuries through microdots, invisible ink, and regular letters between friends containing hidden meanings. Modern stego tools will bury a message inside of a jpeg or other graphic image by “stealing” the least significant bit of every byte and using it to carry the secret message. This will not noticeably change the image, and a casual observer will not realize that a message is hidden inside the image, only to be revealed to the person with the correct stego tool. Most unfortunately, this has become a method of industrial espionage, where a mole (spy) within a company will send confidential information to a competitor via a generic e-mail address, hiding confidential corporate information within the message that would not be noticed even if the e-mail was being monitored. Watermarking. Watermarking is the addition of identifiable information into a file or document. This is often done to detect the improper copying or theft of information. The watermark may or may not be visible and may affect the quality of the original file. Code Words. Later in the chapter we will look more closely at modern methods of message integrity using message authentication codes and digital signatures; however, we will briefly look at one way that encoding a message can provide a level of message integrity. For many years now, radio transmissions have been an important part of air traffic control; however, we do not all speak with the same accent, and the radios themselves are often subject to noise, distortion, and interference. For the sake of understanding, it is often better to use a code word instead of a letter — we may not understand whether the person speaking said a B or a D, but if they say “Bravo” or “Delta,” we can easily distinguish a B from the word Bravo or D from Delta. Symmetric Ciphers. Now that we have looked at some of the history of cryptography and some of the methods of cryptography, let us look at how cryptographic principles are actually used today in real implementations.
There are two primary forms of cryptography in use today, symmetric and asymmetric cryptography. We will look at each one in some detail. Symmetric algorithms operate with a single cryptographic key that is used for both encryption and decryption of the message. For this reason, it is often called single, same, or shared key encryption. It can also be called secret or private key encryption because the key factor in secure use of a symmetric algorithm is to keep the cryptographic key secret. Some of the most difficult challenges of symmetric key ciphers are the problems of key management. Because the encryption and decryption processes both require the same key, the secure distribution of the key to both the sender (or encryptor) of the message and the receiver (or decryptor) is a key factor in the secure implementation of a symmetric key system. The cryptographic key cannot be sent in the same channel (or transmission medium) as the data, so out-of-band distribution must be considered. Out of band means using a different channel to transmit the keys, such as courier, fax, phone, or some other method (Figure 3.4).
Figure 3.4. Out-of-band key distribution.
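To show what "one shared key for both encryption and decryption" looks like in practice, here is a short, hedged Python sketch; it assumes the third-party cryptography package is installed and uses its Fernet construction simply as a readily available symmetric cipher, not as anything named in the text. The key itself still has to reach the receiver over a separate, trusted channel.

# Requires the third-party "cryptography" package (pip install cryptography).
from cryptography.fernet import Fernet

key = Fernet.generate_key()            # the single shared secret; distribute it out of band

sender = Fernet(key)
token = sender.encrypt(b"meet at the usual place")

receiver = Fernet(key)                 # the receiver holds the same key
print(receiver.decrypt(token))         # b'meet at the usual place'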
Cryptography The advantages of symmetric key algorithms are that they are usually very fast, secure, and cheap. There are several products available on the Internet at no cost to the user that use symmetric algorithms. The disadvantages include the problems of key management, as mentioned earlier, but also the limitation that a symmetric algorithm does not provide many benefits beyond encryption, unlike most asymmetric algorithms, which also provide the ability to establish nonrepudiation, message integrity, and access control. We can best describe this limitation by using a physical security example. If ten people have a copy of the key to the server room, it can be difficult to know who entered that room at 10 P.M. yesterday. We have limited access control in that only those people with a key are able to enter; however, we do not know which one of those ten actually entered. The same with a symmetric algorithm; if the key to a secret file is shared between two or more people, then we do not have a way to know who was the last person to access the encrypted file. It would also be possible for a person to change the file and allege that it was changed by someone else. This would be most critical when the cryptosystem is used for important documents such as electronic contracts. If a person that receives a file can change the document and allege that that was the true copy he had received, we are faced with problems of repudiation. Examples of Symmetric Algorithms. We have looked at several symmetric algorithms already — the Caesar cipher, the Spartan scytale, and the Enigma machine were all symmetric algorithms. The receiver needed to use the same key to perform the decryption process as he had used during the encryption process. Now we will look at some of the modern symmetric algorithms. The Data Encryption Standard (DES). The Data Encryption Standard was based on the work of Harst Feistal.* Harst Feistal had developed a family of algorithms that had a core principle of taking the input block of plaintext and dividing it in half. Then each half was used several times through an exclusive-or operation to alter the other half — providing a type of permutation as well as substitution.
DES became the standard in 1977 when it was adopted by several agencies of the U.S. federal government for deployment across all U.S. government departments for nonclassified but sensitive information. DES is used extensively even today in many financial, virtual private network (VPN), and online encryption systems. DES has been replaced as the standard by the Advanced Encryption Standard (AES), which is based on the Rijndael algorithm.
*The author has chosen to use the spelling of Harst Feistal as described by Anderson. It is also spelled Horst in other publications. The work of Feistal was the implementation of the research done by Claude Shannon in 1945.
The origin of DES was the Lucifer algorithm developed by Feistal; however, Lucifer had a 128-bit key. The algorithm was modified to make it more resistant to cryptanalysis, and the key length was reduced to 56 bits so that it could be fit onto a single chip. DES operates on 64-bit input blocks (which, because it is a Feistal cipher, are then broken into 32-bit subblocks) and a 56-bit key. The output is also 64-bit blocks. When looking at a DES key, it is 64 bits in length; however, every eighth bit (used for parity) is ignored. Therefore, the effective length of the DES key is 56 bits. Because every bit has a possible value of either 1 or 0, we can state that the effective key space for the DES key is 2^56. This gives a total number of keys for DES of approximately 7.2 × 10^16. Because DES has become the template for so many later encryption technologies, we will look at it in more detail.* Originally there were four modes of DES accepted for use by the U.S. federal government (NIST); in later years, the CTR mode was also accepted (Table 3.1). THE BLOCK MODES OF DES. The following modes of DES operate in a block structure, on 64-bit input blocks.
Electronic Codebook Mode (ECB) — Electronic codebook is the most basic mode of DES (Figure 3.5). It is called codebook because it is similar to having a large codebook containing every piece of 64-bit plaintext input and all possible 64-bit ciphertext outputs. In a manual sense, it would be the same as looking up the input in a book and finding what the output would be depending on which key was used. When a plaintext input is received by ECB, it operates on that block independently and produces the ciphertext output. If the input was more than 64 bits long and each 64-bit block was the same, then the output blocks would also be the same. Such regularity would make cryptanalysis fairly simple. For that reason, as we see in Table 3.1, ECB is only used for very short messages (less than 64 bits in length), such as transmission of a DES key. As with all Feistal ciphers, the decryption process is the reverse of the encryption process. Cipher Block Chaining Mode (CBC) — Cipher block chaining mode is stronger than ECB in that each input block will produce a different output — even if the input blocks are identical.

*For more information on DES, see FIPS 81 and NIST SP 800-38A.
Table 3.1. Modes of DES

Electronic Code Book (ECB) — How it works: each block is encrypted independently, allowing randomly accessed files to be encrypted and still accessed without having to process the file in a linear encryption fashion. Usage: very short messages (less than 64 bits in length), such as transmission of a DES key.
Cipher Block Chaining (CBC) — How it works: the result of encrypting one block of data is fed back into the process to encrypt the next block of data. Usage: authentication.
Cipher Feedback (CFB) — How it works: each bit produced in the keystream is the result of a predetermined number of fixed ciphertext bits. Usage: authentication.
Output Feedback (OFB) — How it works: the keystream is generated independently of the message. Usage: authentication.
Counter Mode (CTR) — How it works: a counter — a 64-bit random data block — is used as the first initialization vector. Usage: high-speed applications such as IPSec and Asynchronous Transfer Mode (ATM).
Source: James S. Tiller, Message Authentication, in Harold F. Tipton and Micki Krause (eds.), Information Security Management Handbook, 5th ed., New York: Auerbach Publications, 2004.
This is accomplished by introducing two new factors in the encryption process — an initialization vector (IV) and a chaining function that XORs each input with the previous ciphertext. (Note: Without the IV, the chaining process applied to the same messages would create the same ciphertext.) The initialization vector is a randomly chosen value that is mixed with the first block of plaintext. This acts just like a seed in a stream-based cipher. The sender and receiver must know the IV so that the message can be decrypted later. It is important to protect the IV to the same level as the key. Usually the IV is encrypted with ECB and sent to the receiver. We can see the function of CBC in Figure 3.6.
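The effect of the IV and the chaining function can be seen in a toy Python sketch (added here for illustration only). The block operation below is a hash-based stand-in rather than real DES and is not reversible; the point is simply that, with chaining, two identical plaintext blocks no longer produce identical ciphertext blocks.

import hashlib

BLOCK = 8  # bytes, mirroring DES's 64-bit block size

def toy_block_encrypt(key: bytes, block: bytes) -> bytes:
    # Stand-in for the DES block operation (NOT a real cipher, used only to show chaining).
    return hashlib.sha256(key + block).digest()[:BLOCK]

def toy_cbc_encrypt(key: bytes, iv: bytes, blocks: list[bytes]) -> list[bytes]:
    out, previous = [], iv
    for block in blocks:
        chained = bytes(b ^ p for b, p in zip(block, previous))  # XOR with prior ciphertext (or the IV)
        previous = toy_block_encrypt(key, chained)
        out.append(previous)
    return out

key, iv = b"k" * 8, b"\x01" * 8
c1, c2 = toy_cbc_encrypt(key, iv, [b"SAMEDATA", b"SAMEDATA"])
print(c1 != c2)  # True: identical plaintext blocks give different ciphertext blocks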
Figure 3.5. Electronic codebook is the most basic mode of DES.
Figure 3.6. Cipher block chaining mode.
Cryptography The initial input block is XORed with the IV, and the result of that process is encrypted to produce the first block of ciphertext. This first ciphertext block is then XORed with the next input plaintext block. This is the chaining process, which ensures that even if the input blocks are the same, the resulting outputs will be different. The Stream Modes of DES — The following modes of DES operate as a stream; even though DES is a block mode cipher, these modes attempt to make DES operate as if it were a stream mode algorithm. A block-based cipher is subject to the problems of latency or delay in processing. This makes them unsuitable for many applications where simultaneous transmission of the data is desired. In these modes, DES tries to simulate a stream to be more versatile and provide support for stream-based applications. Cipher Feedback Mode (CFB) — In the cipher feedback mode of DES, the input is separated into individual segments — usually of 8 bits, because that is the size of one character (Figure 3.7). When the encryption process starts, the initialization vector is chosen and loaded into a shift register. It is then run through the encryption algorithm. The first 8 bits that come from the algorithm are then XORed with the first 8 bits of the plaintext (the first segment). Each 8-bit segment is then transmitted to the receiver and also fed back into the shift register. The shift register contents are then encrypted again to generate the keystream to be XORed with the next plaintext segment. This process continues until the end of the input. One of the drawbacks of this, however, is that if a bit is corrupted or altered, all of the data from that point onward will be damaged. It is interesting to note that because of the nature of the operation in CFB, the decryption process uses the encryption operation rather than operate in reverse like CBC. Output Feedback Mode (OFB) — Output feedback mode is very similar in operation to cipher feedback except that instead of using the ciphertext result of the XOR operation to feed back into the shift register for the ongoing keystream, it feeds the encrypted keystream itself back into the shift register to create the next portion of the keystream (Figure 3.8). Because the keystream and message data are completely independent (the keystream itself is chained, but there is no chaining of the ciphertext), it is now possible to generate the entire keystream in advance and store it for later use. However, this does pose some storage complications, especially if it were to be used in a high-speed link. Counter Mode (CTR) — Counter mode is used in high-speed applications such as IPSec and Asynchronous Transfer Mode (ATM) (Figure 3.9). In this mode, a counter — a 64-bit random data block— is used as the first initialization vector. A requirement of counter mode is that the counter must be different for every block of plaintext, so for each subsequent block, the 241
Figure 3.7. Cipher feedback mode of DES.
counter is incremented by 1. The counter is then encrypted just as in OFB, and the result is used as a keystream and XORed with the plaintext. Because the keystream is independent from the message, it is possible to even process several blocks of data at the same time, thus speeding up the throughput of the algorithm. Again, because of the characteristics of the algorithm, the encryption process is used at both ends of the process — there is no need to install the decryption process.
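A toy sketch of the counter idea follows (an illustration only; a hash stands in for encrypting the counter, so this is not real DES-CTR). Because each keystream block depends only on the key, the nonce, and the block index, blocks can be produced independently — which is what allows the parallel processing mentioned above — and the same code runs at both ends.

import hashlib

BLOCK = 8  # bytes of keystream per counter value

def keystream_block(key: bytes, nonce: bytes, index: int) -> bytes:
    # "Encrypt" (here: hash, as a stand-in) the counter value for block `index`.
    return hashlib.sha256(key + nonce + index.to_bytes(8, "big")).digest()[:BLOCK]

def ctr_xor(key: bytes, nonce: bytes, data: bytes) -> bytes:
    out = bytearray()
    for i in range(0, len(data), BLOCK):
        ks = keystream_block(key, nonce, i // BLOCK)   # each block is independent of the others
        chunk = data[i:i + BLOCK]
        out.extend(d ^ k for d, k in zip(chunk, ks))
    return bytes(out)

key, nonce = b"shared key", b"never-reused-nonce"
ct = ctr_xor(key, nonce, b"counter mode keeps no chaining state")
print(ctr_xor(key, nonce, ct))  # the original bytes come back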
Figure 3.8. Output feedback mode of DES.
However, DES is susceptible to a brute-force attack. Because the key is only 56 bits long, the key may be determined by trying all possible keys against the ciphertext until the true plaintext is recovered. The Electronic Frontier Foundation (www.eff.org) demonstrated this several years ago. However, it should be noted that they did the simplest form of attack — a known plaintext attack; they tried all possible keys against a ciphertext knowing what they were looking for (they knew the plaintext). If they did not know the plaintext (did not know what they were looking for), the attack would have been significantly more difficult. Regardless, DES can be deciphered using today’s computing power and enough stubborn persistence. There have also been criticisms of the structure of the DES algorithm. The design criteria for the S-boxes used in the encryption and decryption operations were never published,
Figure 3.9. Counter mode is used in high-speed applications such as IPSec and ATM.
and this always leads to claims that they may contain hidden code or untried operations. DOUBLE DES. The primary complaint about DES was that the key was too short. This made a known plaintext brute-force attack possible. One of the first alternatives considered to create a stronger version of DES was to double the encryption process — just do the encryption process twice using two different keys, as shown in Figure 3.10.
As we can see, the first DES operation created an intermediate ciphertext, which we will call m for discussion purposes. This intermediate ciphertext, m, was then reencrypted using a second 56-bit DES key for greater cryptographic strength. Initially there was a lot of discussion as to whether the ciphertext created by the second DES operation would be the same as the ciphertext that would have been created by a third DES key. In other words, if we consider that double DES looks like this:

C = EK2(EK1(P))

The ciphertext created by double DES is the result of the plaintext encrypted with the first 56-bit DES key and then reencrypted with the second 56-bit DES key.
Figure 3.10. Operations within double DES cryptosystems.
Is that equivalent to this?

C = EK3(P)

Would the result of two operations be the same as the result of one operation using a different key? It has since been proved that this is not the case; however, more serious vulnerabilities in double DES emerged. The intention of double DES was to create an algorithm that would be equivalent in strength to a 112-bit key (two 56-bit keys). If we say that the strength of single DES is approximately 2^55 (it is actually 1 less than 2^56), would double DES be approximately 2^111? Unfortunately, this was not the case because of the “meet in the middle” attack (described below), which is why the lifespan of double DES was very short. MEET IN THE MIDDLE. The most effective attack against double DES was just like the successful attacks on single DES, based on doing a brute-force attack against known plaintext* (Figure 3.11). The attacker would encrypt the plaintext using all possible keys and create a table containing all possible results. We call this intermediate cipher m. This would mean encrypting
*In a known plaintext attack, the attacker has both the plaintext and the ciphertext, but he does not have the key, and the brute-force attack, as we saw earlier, was an attack trying all possible keys. See the “Attacks on Hashing Algorithms and Message Authentication Codes” section for more details on this.
Figure 3.11. Meet-in-the-middle attack on 2DES.
using all 2^56 possible keys. The table would then be sorted according to the values of m. The attacker would then decrypt the ciphertext using all possible keys until he found a match with the value of m. This would result in a true strength of double DES of approximately 2^56 (twice the strength of DES, but not strong enough to be considered effective), instead of the 2^112 originally hoped.* TRIPLE DES (3DES). The defeat of double DES resulted in the adoption of triple DES as the next solution to overcome the weaknesses of single DES. Triple DES was designed to operate at a relative strength of 2^112 using two different keys to perform the encryption.
The triple DES operation using two keys is shown below:

C = EK1(EK2(EK1(P)))

The ciphertext is created by encrypting the plaintext with key 1, reencrypting with key 2, and then encrypting again with key 1.

*Note that most cryptographers consider the strength of single DES to be 2^55, not 2^56 as might be expected. Because double DES is approximately twice the strength of DES, it would be considered to be 2^56.
This would have a relative strength of 2^112 and be infeasible to attack using either the known plaintext or differential cryptanalysis attacks. This mode of 3DES would be referred to as EEE2 (encrypt, encrypt, encrypt using two keys). The preferred method of using triple DES was to use a decrypt step for the intermediate operation, as shown below:

C = EK1(DK2(EK1(P)))

The plaintext was encrypted using key 1, then decrypted using key 2, and then encrypted using key 1. Doing the decrypt operation for the intermediate step does not make a difference in the strength of the cryptographic operation, but it does allow backward compatibility through permitting a user of triple DES to also access files encrypted with single DES. This mode of triple DES is referred to as EDE2. Originally the use of triple DES was primarily done using two keys as shown above, and this was compliant with ISO 8732 and ANS X9.17; however, some users, such as PGP and S/MIME, are moving toward the adoption of triple DES using three separate keys. This would be shown as follows:

C = EK3(EK2(EK1(P))) for the EEE3 mode

or

C = EK3(DK2(EK1(P))) for the EDE3 mode

There are seven different modes of triple DES, but the four explained above are all that we are concerned with at this time. Advanced Encryption Standard (AES). Chronologically, this is not the next step in the evolution of symmetric algorithms; however, we will look at the Advanced Encryption Standard (AES) now and come back to other algorithms later.
In 1997, the National Institute of Standards and Technology (NIST) in the United States issued a call for a product to replace DES and 3DES. The requirements were that the new algorithm would be at least as strong as DES, have a larger block size (because a larger block size would be more efficient and more secure), and overcome the problems of performance with DES. DES was developed for hardware implementations and is too slow in software. 3DES is even slower, and thus creates a serious latency in encryption as well as significant processing overhead. After considerable research, the product chosen to be the new Advanced Encryption Standard was the Rijndael algorithm, created by Dr. Joan 247
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® Daemon and Dr. Vincent Rijmen of Belgium. The name Rijndael was merely a contraction of their surnames. Rijndael beat out the other finalists: Serpent, of which Ross Anderson was an author; MARS, the IBM product; RC6, from Ron Rivest and RSA; and TwoFish, developed by Bruce Schneier. The AES algorithm was obliged to meet many criteria, including the need to be flexible and implementable on many types of platforms and free of royalties. RIJNDAEL. The Rijndael algorithm can be used with block sizes of 128, 192, or 256 bits. The key can also be 128, 192, or 256 bits, with a variable number of rounds of operation depending on the key size. Using AES with a 128-bit key would do ten rounds, whereas a 192-bit key would do 12 and a 256-bit key would do 14. Although Rijndael supports multiple block sizes, AES only supports one block size (subset of Rijndael).
We will look at AES in the 128-bit block format. The AES operation works on the entire 128-bit block of input data by first copying it into a square table (or array) that it calls state. The inputs are placed into the array by column so that the first four bytes of the input would fill the first column of the array. Following is the input plaintext when placed into a 128-bit state array:

1st byte   5th byte    9th byte   13th byte
2nd byte   6th byte   10th byte   14th byte
3rd byte   7th byte   11th byte   15th byte
4th byte   8th byte   12th byte   16th byte
The key is also placed into a similar square table or matrix. The Rijndael operation consists of four major operations. 1. Substitute bytes: Use of an S-box to do a byte-by-byte substitution of the entire block. 2. Shift rows: Transposition or permutation through offsetting each row in the table. 3. Mix columns: A substitution of each value in a column based on a function of the values of the data in the column. 4. Add round key: XOR each byte with the key for that round; the key is modified for each round of operation. Substitute Bytes — The substitute bytes operation uses an S-box that looks up the value of each byte in the input and substitutes it with the value in the table. The S-box table contains all possible 256 8-bit word values and a simple cross-reference is done to find the substitute value using the first half of the byte (4-bit word) in the input table on the x-axis and the second 248
half of the byte on the y-axis. Hexadecimal values are used in both the input and S-box tables.

Shift Row Transformation — The shift row transformation step provides blockwide transposition of the input data by shifting the rows of data as follows. If we start with the input table we described earlier, we can see the effect of the shift row operation. Please note that by this point the table would already have been subjected to the substitute bytes operation, so it would no longer look like this, but we will use this table for the sake of clarity (rows run across the state; columns run down it).

1st byte    5th byte    9th byte     13th byte
2nd byte    6th byte    10th byte    14th byte
3rd byte    7th byte    11th byte    15th byte
4th byte    8th byte    12th byte    16th byte

The first row is not shifted:
1st byte    5th byte    9th byte    13th byte

The second row of the table is shifted one place to the left:
6th byte    10th byte    14th byte    2nd byte

The third row of the table is shifted two places to the left:
11th byte    15th byte    3rd byte    7th byte

The fourth row of the table is shifted three places to the left:
16th byte    4th byte    8th byte    12th byte

The final result of the shift rows step would look as follows (showing byte positions only):

1     5     9     13
6     10    14    2
11    15    3     7
16    4     8     12
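The row rotations just described are easy to reproduce in a few lines of Python, which print the same table of byte positions shown above:

    # Byte positions 1-16 are loaded into the state column by column,
    # then row r is rotated r places to the left.
    state = [[1, 5, 9, 13],
             [2, 6, 10, 14],
             [3, 7, 11, 15],
             [4, 8, 12, 16]]

    shifted = [row[r:] + row[:r] for r, row in enumerate(state)]
    for row in shifted:
        print(row)
    # [1, 5, 9, 13]
    # [6, 10, 14, 2]
    # [11, 15, 3, 7]
    # [16, 4, 8, 12]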
Mix Column Transformation — The mix column transformation is performed by multiplying each byte in a column by a fixed value and XORing the results together, according to the table in Figure 3.12.
Figure 3.12. Mix column transformation.
The state table shown in Figure 3.12 is the result of the previous step; when we work on the first column of the state (shaded in the figure), we work with the first row of the mix column table (also shaded), combining multiplication and XOR. The computation of the mix columns step for the first column would be

(1*02) ⊕ (6*03) ⊕ (11*01) ⊕ (16*01)

The second byte in the column would be calculated using the second row in the mix column table as

(1*01) ⊕ (6*02) ⊕ (11*03) ⊕ (16*01)

Add Round Key — The key is modified for each round by first dividing the 128-bit key into four 32-bit words and then expanding these into 44 32-bit words (176 bytes) of key material. During the expansion, a word is periodically subjected to a rotation (shifting the first byte to the end, so that 1, 2, 3, 4 becomes 2, 3, 4, 1) and then a byte-by-byte substitution using an S-box. The result of these first two operations is then XORed with a round constant to create the key material for that round. The round constant changes for each round, and its values are predefined. Each of the above steps is performed for ten rounds (except for the mix columns step, which is omitted in the final round) to produce the ciphertext. AES is a strong algorithm that is not considered breakable at any time in the near future and is easy to deploy on many platforms with excellent throughput.

International Data Encryption Algorithm (IDEA). IDEA was developed as a replacement for DES by Xuejia Lai and James Massey in 1991. IDEA uses a 128-bit key and operates on 64-bit blocks. IDEA does eight rounds of transposition and substitution using modular addition and multiplication, and bitwise exclusive-or (XOR). The patents on IDEA will expire in 2010–2011, but it is available for free for noncommercial use.

CAST. CAST was developed in 1996 by Carlisle Adams and Stafford Tavares. CAST-128 can use keys between 40 and 128 bits in length and will
do between 12 and 16 rounds of operation, depending on key length. CAST-128 is a Feistel-type block cipher with 64-bit blocks. CAST-256 was submitted as an unsuccessful candidate for the new AES. CAST-256 operates on 128-bit blocks with keys of 128, 160, 192, 224, or 256 bits. It performs 48 rounds and is described in RFC 2612.

Secure and Fast Encryption Routine (SAFER). All of the algorithms in SAFER are patent-free. The algorithms were developed by James Massey and work on either 64-bit input blocks (SAFER-SK64) or 128-bit blocks (SAFER-SK128). A variation of SAFER is used as a block cipher in Bluetooth.

Blowfish. Blowfish is a symmetric algorithm developed by Bruce Schneier. It is an extremely fast cipher and can be implemented in as little as 5K of memory. It is a Feistel-type cipher in that it divides the input blocks into two halves and then uses them in XORs against each other. However, it varies from the traditional Feistel cipher in that Blowfish does work against both halves, not just one. The Blowfish algorithm operates with variable key sizes, from 32 up to 448 bits, on 64-bit input and output blocks.
One of the characteristics of Blowfish is that the S-boxes are created from the key and are stored for later use. Because of the processing time taken to change keys and recompute the S-boxes, Blowfish is unsuitable for applications where the key is changed frequently or in applications on smart cards or with limited processing power. Blowfish is currently considered unbreakable (using today’s technology), and in fact, because the key is used to generate the S-boxes, it takes over 500 rounds of the Blowfish algorithm to test any single key. Twofish. Twofish was one of the finalists for the AES. It is an adapted version of Blowfish developed by a team of cryptographers led by Bruce Schneier. It can operate with keys of 128, 192, or 256 bits on blocks of 128 bits. It performs 16 rounds during the encryption/decryption process. RC5. RC5 was developed by Ron Rivest of RSA and is deployed in many of RSA’s products. It is a very adaptable product useful for many applications, ranging from software to hardware implementations. The key for RC5 can vary from 0 to 2040 bits, the number of rounds it executes can be adjusted from 0 to 255, and the length of the input words can also be chosen from 16-, 32-, and 64-bit lengths. The algorithm operates on two words at a time in a fast and secure manner.
RC5 is defined in RFC 2040 for four different modes of operation:
• RC5 block cipher is similar to DES ECB, producing a ciphertext block of the same length as the input.
• RC5-CBC is a cipher block chaining form of RC5, using chaining to ensure that repeated input blocks do not generate the same output.
• RC5-CBC-Pad combines chaining with the ability to handle input plaintext of any length. The ciphertext will be longer than the plaintext by at most one block.
• RC5-CTS is called ciphertext stealing and will generate a ciphertext equal in length to a plaintext of any length.

RC4. RC4, a stream-based cipher, was developed in 1987 by Ron Rivest for RSA Data Security and has become the most widely used stream cipher, being deployed, for example, in WEP and SSL/TLS.
RC4 uses a variable-length key ranging from 8 to 2048 bits (1 to 256 bytes) and has a period of greater than 10^100; in other words, the keystream should not repeat for at least that length. The key is used to initialize a state vector that is 256 bytes in length and contains all possible values of 8-bit numbers from 0 through 255. This state is used to generate the keystream that is XORed with the plaintext. The key is only used to initialize the state and is not used thereafter. If RC4 is used with a key length of at least 128 bits, there are currently no practical ways to attack it; the published successful attacks against the use of RC4 in WEP applications are related to problems with the implementation of the algorithm, not the algorithm itself. More details on this can be found in the Telecommunications and Network Security chapter, Domain 7.

Advantages and Disadvantages of Symmetric Algorithms. Symmetric algorithms are very fast and secure methods of providing confidentiality and some integrity and authentication for messages being stored or transmitted.
Many algorithms can be implemented in either hardware or software and are available at no cost to the user. There are serious disadvantages to symmetric algorithms — key management is very difficult, especially in large organizations. The number of keys needed grows rapidly with every new user, according to the formula n(n – 1)/2, where n is the number of users. An organization with only 10 users, all wanting to communicate securely with one another, requires 45 keys (10*9/2). If the organization grows to 1000 employees, the need for key management expands to nearly half a million keys. Symmetric algorithms also are not able to provide nonrepudiation of origin, access control, and digital signatures, except in a very limited way. If two or more people share a symmetric key, it is impossible to prove which of them altered a file protected with that key. Selecting keys is an important part of key management. There needs to be a process in place that ensures that a key is selected randomly from the entire keyspace and that there is some way to recover a lost or forgotten key.
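The growth described by the n(n – 1)/2 formula is easy to check with a couple of lines of Python (the function name is ours):

    def symmetric_keys_needed(n):
        # Each of the n users needs a unique key with each of the other n - 1 users.
        return n * (n - 1) // 2

    print(symmetric_keys_needed(10))    # 45
    print(symmetric_keys_needed(1000))  # 499500 -- nearly half a million keys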
Because symmetric algorithms require both users (the sender and the receiver) to share the same key, there can be challenges with secure key distribution. Often the users must use an out-of-band channel such as mail, fax, telephone, or courier to exchange secret keys. Use of an out-of-band channel should make it difficult for an attacker to seize both the encrypted data and the key. The other method of exchanging the symmetric key is to use an asymmetric algorithm.

Asymmetric Algorithms

Whereas symmetric algorithms have been in existence for several millennia, the use of asymmetric (or public key) algorithms is relatively new. These algorithms became commonly known when Drs. Whit Diffie and Martin Hellman released a paper in 1976 called "New Directions in Cryptography."* This paper described the concept of using two different keys (a key pair) to perform the cryptographic operations. The two keys would be linked mathematically, but would be mutually exclusive. For most asymmetric algorithms, if one half of this key pair is used for encryption, the other half of the key pair is required to decrypt the message.

*Whit Diffie and Martin Hellman, New directions in cryptography, IEEE Transactions on Information Theory, IT-22, 1976.

We will look at several different asymmetric algorithms later in this chapter; however, for now, we will look at some of the general concepts behind public key cryptography. When a person wishes to communicate using an asymmetric algorithm, she would first generate a key pair. Usually this is done by the cryptographic application or the public key infrastructure without user involvement, to ensure the strength of the key generation process. One half of the key pair is kept secret, and only the key holder knows that key. For this reason, it is often called the private key. The other half of the key pair can be given freely to anyone who wants a copy. In many companies, it may be available through the corporate Web site or a key server. That is why this half of the key pair is often referred to as the public key. Asymmetric algorithms are one-way functions, that is, processes that are much simpler to go in one direction (forward) than in the other direction (backward or reverse engineering). The process to generate the public key (forward) is fairly simple, and providing the public key to anyone who wants it does not compromise the private key, because the process to go from the public key to the private key is computationally infeasible.

Confidential Messages. Because the keys are mutually exclusive, any message that is encrypted with a public key can only be decrypted with the
corresponding other half of the key pair — the private key. Therefore, as long as the key holder keeps her private key secure, we have a method of transmitting a message confidentially. The sender would encrypt the message with the public key of the receiver. Only the receiver, holding the private key, would be able to open or read the message, providing confidentiality. See Figure 3.13.

Figure 3.13. Using public key cryptography to send a confidential message.

Open Message. Conversely, when a message is encrypted with the private key of a sender, it can be opened or read by anyone who possesses the corresponding public key. When a person needs to send a message and provide proof of origin (nonrepudiation), he can do so by encrypting it with his own private key. The recipient then has some assurance that, because she opened it with the public key from the sender, the message did, in fact, originate with the sender. See Figure 3.14.

Confidential Messages with Proof of Origin. By encrypting a message with the private key of the sender and the public key of the receiver, we have the ability to send a message that is both confidential and has proof of origin. See Figure 3.15.

RSA. RSA was developed in 1978 by Ron Rivest, Adi Shamir, and Len Adleman when they were at MIT. RSA is based on the mathematical challenge of factoring the product of two large prime numbers.
Figure 3.14. Using public key cryptography to send a message with proof of origin.
Figure 3.15. Using public key cryptography to send a message that is confidential and has a proof of origin.
A prime number can only be divided by 1 and itself. Some prime numbers are 2, 3, 5, 7, 11, 13, and so on. Factoring is taking a number and finding the numbers that can be multiplied together to produce that number. For example, if a*b = c, then c can be factored into a and b; 12 can be factored into 3 and 4, 2 and 6, or 1 and 12. The RSA algorithm uses large prime numbers that, when multiplied together, produce a number that is incredibly difficult to factor. Successful factoring attacks have been executed against 512-bit numbers (at a cost of approximately 8000 MIPS years), but current computational capability makes the factoring of a 1024-bit number impractical. RSA is the most widely used public key algorithm and operates on blocks of text according to the following formula:

C = P^e mod n

The ciphertext C is computed by raising the plaintext P to the exponent e, modulo n.

How to Generate RSA Key Pairs. Select p and q, where both are prime numbers and p ≠ q:

p = 17 and q = 11

Calculate n = pq: n = 17 * 11 = 187

Calculate φ(n) = (p – 1)(q – 1): φ(n) = (17 – 1)(11 – 1) = 16 * 10 = 160

Select an integer e that is relatively prime to φ(n) = 160 and less than φ(n). Choose e = 7.

Determine d such that de ≡ 1 mod 160 and d < 160: d = 23 because 23 * 7 = 161 = 1 * 160 + 1. (d is calculated using Euclid's algorithm.)

Public key = {e, n} = {7, 187}
Private key = {d, n} = {23, 187}

To encrypt a plaintext of 88 using the public key (confidentiality), we would do the following mathematics:

C = P^e mod n = 88^7 mod 187

For the sake of simplicity, we will break the modular arithmetic into smaller pieces:

88^7 mod 187 = [(88^4 mod 187) * (88^2 mod 187) * (88^1 mod 187)] mod 187
88^1 mod 187 = 88
88^2 mod 187 = 7744 mod 187 = 77
88^4 mod 187 = 59,969,536 mod 187 = 132
88^7 mod 187 = (88 * 77 * 132) mod 187 = 894,432 mod 187 = 11
C = 11

To decrypt the ciphertext of 11, we would use the formula P = C^d mod n = 11^23 mod 187 = 88, which recovers the original plaintext.

Attacking RSA. The three primary approaches to attack the RSA algorithm are to use brute force, trying all possible private keys; mathematical attacks, factoring the product of the two prime numbers; and timing attacks, measuring the running time of the decryption algorithm.
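The whole worked example can be checked with Python's built-in pow(), which performs modular exponentiation directly and, from Python 3.8 onward, can also compute the modular inverse used for d:

    p, q, e = 17, 11, 7
    n = p * q                      # 187
    phi = (p - 1) * (q - 1)        # 160
    d = pow(e, -1, phi)            # modular inverse of e: 23 (requires Python 3.8+)
    assert (d * e) % phi == 1

    plaintext = 88
    ciphertext = pow(plaintext, e, n)   # 88^7 mod 187 = 11
    recovered = pow(ciphertext, d, n)   # 11^23 mod 187 = 88
    print(ciphertext, recovered)        # 11 88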
Diffie–Hellman Algorithm. Diffie–Hellman is a key exchange algorithm. It is used to enable two users to exchange or negotiate a secret symmetric key that will be used subsequently for message encryption. The Diffie–Hellman algorithm does not itself provide message confidentiality.

Diffie–Hellman is based on discrete logarithms. This is a mathematical function based first on finding a primitive root of a prime number. Using the primitive root, we can put together a formula as follows:

b ≡ a^i mod p
0 ≤ i ≤ (p – 1)
where i is the discrete logarithm (or index) of b for the base a, mod p.

Key Exchange Using Diffie–Hellman. The prime number (p) and the primitive root (g) used in Diffie–Hellman are public values common to both users. We will use p = 353 and g = 3 for our example.
Each user, A and B, chooses a random secret key X that must be less than the prime number. If A chose the secret key 97, we would write this as XA = 97. The public key YA for user A is calculated as YA = g^XA mod p; therefore, A would calculate YA = 3^97 mod 353 = 40. If B chose the secret key 233, the public key YB for user B is calculated as YB = g^XB mod p; therefore, B would calculate YB = 3^233 mod 353 = 248. A and B then exchange the public keys they have calculated. Using the following formulas, they each compute the common session key:

A computes the common key, K, as K = (YB)^XA mod 353 = 248^97 mod 353 = 160

B computes the common key as K = (YA)^XB mod 353 = 40^233 mod 353 = 160

The two parties A and B can now encrypt their data using the symmetric key of 160. This would be an example of a hybrid system, which we will describe later in the chapter.
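Again, Python's built-in pow() reproduces the exchange:

    p, g = 353, 3
    xa, xb = 97, 233               # the secret values chosen by A and B

    ya = pow(g, xa, p)             # A's public value: 40
    yb = pow(g, xb, p)             # B's public value: 248

    key_a = pow(yb, xa, p)         # A computes 248^97 mod 353
    key_b = pow(ya, xb, p)         # B computes 40^233 mod 353
    print(ya, yb, key_a, key_b)    # 40 248 160 160
    assert key_a == key_b == 160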
El Gamal. The El Gamal cryptographic algorithm is based on the work of Diffie–Hellman, but it adds the ability to provide message confidentiality and digital signature services, not just session key exchange. The El Gamal algorithm is based on the same mathematical functions of discrete logarithms.

Elliptic Curve Cryptography. One branch of discrete logarithm algorithms is based on the complex mathematics of elliptic curves. These algorithms, which are too complex to explain fully in this context, are advantageous for their speed and strength. The elliptic curve algorithms have the highest strength per bit of key length of any of the asymmetric algorithms. The ability to use much shorter keys for ECC implementations provides savings on computational power and bandwidth. This makes ECC especially beneficial for implementation in smart cards, wireless devices, and other similar application areas.
Elliptic curve algorithms provide confidentiality, digital signatures, and message authentication services.

Advantages and Disadvantages of Asymmetric Key Algorithms. The development of asymmetric key cryptography revolutionized the cryptographic community. It became possible to send a message across an untrusted medium in a secure manner without the overhead of prior key exchange or key material distribution. It also allowed several other features not readily available in symmetric cryptography, such as nonrepudiation of origin, access control, and nonrepudiation of delivery.
The problem is that asymmetric cryptography is extremely slow compared to its symmetric counterpart. In terms of speed and performance it is a large step backward, and it is impractical for everyday use in encrypting large amounts of data or for frequent transactions. This is because asymmetric cryptography handles much larger keys and more complex computations, making even a fast computer work harder than if it were only handling small keys and simpler algebraic calculations. Also, the ciphertext output from asymmetric algorithms may be much larger than the plaintext. This means that for large messages they are not effective for secrecy; however, they are effective for message integrity, authentication, and nonrepudiation.
Hybrid Cryptography. The solution to many of these problems lies in developing a hybrid technique of cryptography that combines the strengths of both symmetric cryptography, with its great speed and secure algorithms, and asymmetric cryptography, with its ability to securely exchange session keys and provide message authentication and nonrepudiation.
Symmetric cryptography is best for encrypting large files. It can handle the encryption and decryption process with little impact on delivery times or computational performance. Asymmetric cryptography can handle the initial setup of the communications session through the exchange or negotiation of the symmetric keys to be used for this session. In many cases, the symmetric key is only needed for the length of this communication and can be discarded following the completion of the transaction, so we will refer to the symmetric key in this case as a session key. A hybrid system operates as shown in Figure 3.16. The message itself is encrypted with a symmetric key, SK, and is sent to the recipient.
Figure 3.16. Hybrid system using a symmetric algorithm for bulk data encryption and an asymmetric algorithm for distribution of the symmetric key.
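The flow in Figure 3.16 can be sketched in a few lines of Python using the third-party pyca/cryptography package. The particular choices here (RSA with OAEP padding for the key exchange and Fernet, an AES-based recipe, for the bulk data) are illustrative assumptions, not requirements of a hybrid system.

    from cryptography.hazmat.primitives.asymmetric import rsa, padding
    from cryptography.hazmat.primitives import hashes
    from cryptography.fernet import Fernet

    oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)

    # Receiver generates an asymmetric key pair (done once, ahead of time).
    receiver_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    receiver_public = receiver_private.public_key()

    # Sender: encrypt the bulk message with a fresh symmetric session key...
    session_key = Fernet.generate_key()
    encrypted_message = Fernet(session_key).encrypt(b"a large confidential message")
    # ...and encrypt the session key with the receiver's public key.
    encrypted_key = receiver_public.encrypt(session_key, oaep)

    # Receiver: recover the session key with the private key, then the message.
    recovered_key = receiver_private.decrypt(encrypted_key, oaep)
    plaintext = Fernet(recovered_key).decrypt(encrypted_message)
    assert plaintext == b"a large confidential message"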
The symmetric key is encrypted with the public key of the recipient and sent to the recipient. The symmetric key is decrypted with the private key of the recipient. This discloses the symmetric key to the recipient. The symmetric key can then be used to decrypt the message.

Message Integrity Controls

An important part of electronic commerce and computerized transactions today is the assurance that a message has not been modified, that it is indeed from the person the sender claims to be, and that the message was received by the correct party. This is accomplished through cryptographic functions that perform in several manners, depending on the business needs and level of trust between the parties and systems. Traditional cryptography, such as symmetric algorithms, does produce a level of message authentication. If two parties share a symmetric key, and they have been careful not to disclose that key to anyone else, then when they transmit a message from one to another, they have assurance that the message is indeed from their trusted partner. In many cases, they would also have some degree of confidence in the integrity of the message, because any errors or modification of the message in transit would render the message undecipherable. With chaining-type algorithms, any error is likely to destroy the remainder of the message. Asymmetric algorithms also provide message authentication. Some, such as RSA, El Gamal, and ECC, have message authentication and digital signature functionality built into the implementation. These work as we saw earlier, in the sections on open messages and confidential messages with proof of origin using asymmetric key cryptography.

Checksums

The use of a simple error-detecting code, checksum, or frame check sequence is often deployed along with symmetric key cryptography for message integrity. We can see this in Figure 3.17: the checksum is created and then appended to the message. The entire message may then be encrypted and transmitted to the receiver. The receiver must decrypt the message and generate her own checksum to verify the integrity of the message.

Figure 3.17. Combining checksum with encryption for message integrity.

Hash Function

The hash function accepts an input message of any length and generates, through a one-way operation, a fixed-length output. This output is referred
to as a hash code or sometimes a message digest. It uses a hashing algorithm to generate the hash, but does not use a secret key. There are several ways to use message digests in communications, depending on the need for confidentiality of the message, authentication of the source, speed of processing, and choice of encryption algorithms. The requirements for a hash function are that it must provide some assurance that the message cannot be changed without detection and that it must be impractical to find any two messages with the same hash value.

Simple Hash Functions. A hash operates on an input of any length (there are some limits, but they are extremely large) and generates a fixed-length output. The simplest hash merely divides the input message into fixed-size blocks and then XORs every block together. The hash would therefore be the same size as a block.
Hash = block 1 ⊕ block 2 ⊕ block 3 ⊕ … ⊕ final block

MD5 Message Digest Algorithm. MD5 was developed by Ron Rivest at MIT in 1992. It is the most widely used hashing algorithm and is described in RFC 1321. MD5 generates a 128-bit digest from a message of any length. It processes the message in 512-bit blocks and does four rounds of processing, each round containing 16 steps. The difficulty of finding any two messages with the same hash code is estimated to be 2^64 operations, and the difficulty of
finding a message with a given digest is estimated to be 2^128 operations. One common use of MD5 is to verify the integrity of digital evidence used in forensic investigations and to ensure that the original media has not been altered since seizure. In the past two years, several attacks have been developed against MD5 that make it possible to find collisions through analysis. This is leading many professionals to recommend abandoning MD5 for use in secure communications, such as digital signatures. MD4 was developed in 1990 and revised in 1992. It only does three rounds of processing and fewer mathematical operations per round, and it is not considered strong enough for most applications today. It also generates a 128-bit output.

Secure Hash Algorithm (SHA) and SHA-1. The original Secure Hash Algorithm was developed by the National Institute of Standards and Technology (NIST) in the United States in 1993 and issued as Federal Information Processing Standard (FIPS) 180. A revised version (FIPS 180-1) was issued in 1995 as SHA-1 (RFC 3174).
SHA was based on the MD4 algorithm, whereas SHA-1 follows the logic of MD5. SHA-1 operates on 512-bit blocks and can handle any message up to 2^64 bits in length. The output hash is 160 bits in length. The processing includes four rounds of operations of 20 steps each. Recently there have been several attacks described against the SHA-1 algorithm, despite it being considerably stronger than MD5. NIST has issued FIPS 180-2, which recognizes SHA-256, SHA-384, and SHA-512 as part of the Secure Hash Standard. The output lengths of the digests of these are 256, 384, and 512 bits, respectively.

HAVAL. HAVAL was developed at the University of Wollongong in Australia. It combines a variable-length output with a variable number of rounds of operation on 1024-bit input blocks. The output may be 128, 160, 192, 224, or 256 bits, and the number of rounds may vary from three to five. That gives 15 possible combinations of operation.
HAVAL operates 60 percent faster than MD5 when only three rounds are used and is just as fast as MD5 when it does five rounds of operation. RIPEMD-160. The European RACE Integrity Primitives Evaluation project developed the RIPEMD-160 algorithm in response to the vulnerabilities it found in MD4 and MD5. The original algorithm (RIPEMD-128) has the same vulnerabilities as MD4 and MD5 and led to the improved RIPEMD-160 version. The output for RIPEMD-160 is 160 bits, and it also operates similarly to MD5 on 512-bit blocks. It does twice the processing of SHA-1, performing five paired rounds of 16 steps each for a total of 160 operations.
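The digest lengths quoted above are easy to confirm with Python's standard hashlib module:

    import hashlib

    message = b"The quick brown fox jumps over the lazy dog"
    for name in ("md5", "sha1", "sha256", "sha384", "sha512"):
        digest = hashlib.new(name, message).digest()
        print(name, len(digest) * 8, "bits")   # 128, 160, 256, 384, and 512 bits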
Attacks on Hashing Algorithms and Message Authentication Codes
There are two primary ways to attack hash functions: through brute-force attacks and through cryptanalysis. Over the past few years, a lot of research has been done on attacks against various hashing algorithms, such as MD5 and SHA-1; in both cases, these have been found to be susceptible, at least in theory, to cryptanalytic attacks. A brute-force attack does not rely on any weakness in the hashing algorithm; the attacker simply tries inputs until he is able to reconstruct a message that produces a given hash value (defeating the one-way property of the hash function), find another message with the same hash value as a given message, or find any pair of messages with the same hash value (defeating what is called collision resistance). van Oorschot and Wiener described a machine that could find a collision on a 128-bit hash in about 24 days. Therefore, a 128-bit value (MD5 and HAVAL-128) is inadequate for a digital signature. Using the same machine, it would take 4000 years to find a match on a 160-bit hash.* The past year, however, has seen a paper describing an attack on a 160-bit hash that would be feasible with current computing power and attack methodology.

The Birthday Paradox. The birthday paradox has been described in textbooks on probability for many years. It is a surprising mathematical result that shows how easy it is to find two people with the same birthday in a group of people. If we consider that there are 365 possible birthdays (ignoring leap years and assuming that birthdays are spread evenly across all possible dates), then we might expect to need roughly 183 people together to have a 50 percent probability that two of those people share the same birthday. In fact, once there are more than 23 people together, there is a greater than 50 percent probability that two of them share the same birthday. We are not going to explain the mathematics of this here, but it is correct. You can understand it once you consider that in a group of 23 people there are 253 different pairings (n(n – 1)/2). In fact, once you have 100 people together, the chance of two of them having the same birthday is greater than 99.99 percent.
So why is a discussion about birthdays important in the middle of a discussion of hashing attacks? Because the likelihood of finding a collision between two messages and their hash values may be a lot greater than we might have thought, for much the same statistical reason that it is easy to find two people with the same birthday. One of the considerations for evaluating the strength of a hash algorithm must therefore be its resistance to collisions. The work required to find a collision for a 160-bit hash can be estimated at either 2^160 operations (to find a message matching a given hash) or 2^80 operations (that is, 2^(160/2), to find any two messages with the same hash), depending on the level of collision resistance needed.

*As quoted in William Stallings, Cryptography and Network Security: Principles and Practice. However, the cost of this machine would be about $10 million.
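The birthday figures quoted above can be reproduced with a short calculation (the function name is ours):

    def birthday_collision_probability(n):
        # Probability that at least two of n people share a birthday,
        # assuming 365 equally likely birthdays.
        p_no_collision = 1.0
        for i in range(n):
            p_no_collision *= (365 - i) / 365
        return 1 - p_no_collision

    print(round(birthday_collision_probability(23), 3))   # 0.507 -- already better than even odds
    print(birthday_collision_probability(100) > 0.9999)   # True -- greater than 99.99 percent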
This approach is relevant because a hash is a representation of the message and not the message itself. Obviously, the attacker does not want to find an identical message; he wants to find out how to (1) change the message contents to what he wants it to read or (2) cast some doubt on the authenticity of the original message by demonstrating that another message has the same hash value as the original. The hashing algorithm must therefore be resistant to a birthday-type attack that would allow the attacker to feasibly accomplish his goals.

Message Authentication Code (MAC)

A message authentication code (also known as a cryptographic checksum) is a small block of data that is generated using a secret key and then appended to the message. When the message is received, the recipient can generate her own MAC using the secret key and thereby know that the message has not changed either accidentally or intentionally in transit. Of course, this assurance is only as strong as the trust that the two parties have that no one else has access to the secret key. In the case of DES-CBC, a MAC is generated using the DES algorithm in cipher block chaining mode, and the secret DES key is shared by the sender and receiver. The MAC is actually just the last block of ciphertext generated by the algorithm. This block of data (64 bits) is attached to the unencrypted message and transmitted to the far end. All previous blocks of encrypted data are discarded to prevent any attack on the MAC itself. The receiver generates his own MAC using the shared secret DES key to ensure message integrity and authentication. He knows that the message has not changed because the chaining function of CBC would significantly alter the last block of data if any bit had changed anywhere in the message. He knows the source of the message (authentication) because only one other person holds the secret key. And furthermore, if the message contains a sequence number (such as a TCP header or X.25 packet), he knows that all messages have been received and none duplicated or missed. A MAC is a small representation of a message and has the following characteristics:
• A MAC is (typically) much smaller than the message that generated it.
• Given a MAC, it is impractical to compute the message that generated it.
• Given a MAC and the message that generated it, it is impractical to find another message that generates the same MAC.

HMAC. A MAC based on DES is one of the most common methods of creating a MAC; however, it is slow in operation compared to a hash function. A hash function such as MD5 does not have a secret key, so by itself it cannot be used as a MAC. Therefore, RFC 2104 was issued to provide a hashed MAC (HMAC) construction that has become the process now used in IPSec and many other secure Internet protocols, such as SSL/TLS.
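Python's standard library exposes this keyed construction directly through its hmac module; the key and message below are only illustrative values:

    import hashlib
    import hmac

    secret_key = b"shared-secret-key"          # known only to sender and receiver
    message = b"transfer 100 units to account 42"

    tag = hmac.new(secret_key, message, hashlib.sha256).hexdigest()

    # The receiver recomputes the tag with the shared key and compares in constant time.
    expected = hmac.new(secret_key, message, hashlib.sha256).hexdigest()
    print(hmac.compare_digest(tag, expected))  # True -- integrity and authenticity confirmed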
Hashed MACing implements a freely available hash algorithm as a component (black box) within the HMAC implementation. This allows the hashing module to be replaced easily if a new hash function becomes necessary. The use of proven cryptographic hash algorithms also provides assurance of the security of HMAC implementations. The HMAC operation provides cryptographic strength similar to a hashing algorithm, except that it now has the additional protection of a secret key, and it still operates nearly as rapidly as a standard hash operation.

Digital Signatures

A digital signature is comparable to a handwritten signature on an important document such as a contract. It verifies to all parties to the contract that each signatory has read, agreed with, and will comply with the conditions of the contract. It is legally binding and enforceable in most courts of law. The purpose of a digital signature is to provide the same level of accountability for electronic transactions, where a handwritten signature is not possible. A digital signature provides assurance that the message does indeed come from the person who claims to have sent it, that it has not been altered, that both parties have a copy of the same document, and that the person sending the document cannot claim that he did not send it. A digital signature will usually include a date and time of the signature, as well as a method for a third party to verify the signature. What is a digital signature? It is a block of data (a pattern of bits, usually a hash) that is generated based on the contents of the message sent and encrypted with the sender's private key. It must contain some unique value that links it with the sender of the message and that can be verified easily by the receiver and by a third party, and it must be difficult to forge the digital signature or create a new message with the same signature.
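A minimal sketch of that sign-and-verify flow, using the third-party pyca/cryptography package (the RSA key size, padding scheme, and hash algorithm here are illustrative assumptions):

    from cryptography.hazmat.primitives.asymmetric import rsa, padding
    from cryptography.hazmat.primitives import hashes

    sender_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    message = b"I agree to the terms of the contract."

    # Sender: the library hashes the message and encrypts the hash with the private key.
    signature = sender_private.sign(message, padding.PKCS1v15(), hashes.SHA256())

    # Receiver: verification with the sender's public key raises InvalidSignature
    # if either the message or the signature has been altered.
    sender_private.public_key().verify(signature, message,
                                       padding.PKCS1v15(), hashes.SHA256())
    print("signature verified")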
Digital Signature Standard (DSS)

The DSS was proposed in 1991 as FIPS 186, using the Secure Hash Algorithm (SHA). It has since been updated several times, most recently in 2000, when it was issued as FIPS 186-2 and expanded to include digital signature algorithms based on RSA and elliptic curve cryptography. In contrast to RSA, the DSS is based on a public key (asymmetric) algorithm but does not provide confidentiality of the message through encryption and is not used for key exchange. The Digital Signature Standard allows two methods of creating the signature: the RSA method and the DSS approach. In both cases, the operation starts with the creation of a hash of the message. The RSA approach then encrypts the hash with the private key of the sender, thus creating the signature. The DSS approach is to sign the hash using the Digital Signature Algorithm (DSA). The DSA is based on the discrete logarithm techniques used in El Gamal and Schnorr. The DSA chooses a random number to create a private and public key pair and encrypts the hash value with the private key and a universal key to create a two-part signature. A digital signature can be created by encrypting the entire message with the private key of the sender; however, in most cases this is not practical because of the computational impact of encrypting a message using asymmetric algorithms. Therefore, in most cases the digital signature is created by encrypting a hash of the message with the sender's private key. If confidentiality is also needed, then the message can be encrypted with a symmetric algorithm; however, it is best to create the signature before encrypting the message — then the signature authenticates the message itself and not the ciphertext of the message. Once a digital signature is created, it is appended to the message and sent to the receiver. The receiver decrypts the signature with the public key of the sender, can verify that the message has not been altered, and can establish nonrepudiation of origin of the signature.

Uses of Digital Signatures

Digital signatures have become invaluable in protecting the integrity of financial transactions, E-commerce, and e-mail. They are also used by software vendors to ensure that software has not been compromised through the introduction of viruses or other manipulation. This is especially important when downloading a patch via the Internet, both to ensure that the patch is from a legitimate site and to ensure the integrity of the download. In many parts of the world, digital signatures have become recognized by governments and courts as a verifiable form of authentication.

Encryption Management

Key Management

Perhaps the most important part of any cryptographic implementation is key management. Control over the issuance, revocation, recovery, distribution, and history of cryptographic keys is of utmost importance to any organization relying on cryptography for secure communications and data protection. It is good to review the importance of Kerckhoffs' law. Auguste Kerckhoffs wrote that "a cryptosystem should be secure even if everything about the
system, except the key, is public knowledge."* The key therefore is the true strength of the cryptosystem. The size of the key and the secrecy of the key are perhaps the two most important elements in a crypto implementation. Claude Shannon, the famous 20th-century cryptographer, wrote that "the enemy knows the system." We cannot count on the secrecy of the algorithm, the deftness of the cryptographic operations, or the superiority of our technology to protect our data and systems. We must always consider that the enemy knows the algorithms and methods we use and act accordingly. As we saw earlier, a symmetric algorithm shares the same key between the sender and receiver. This often requires out-of-band transmission of the keys — distribution through a different channel, separate from the data. Key management also looks at the replacement of keys and at ensuring that new keys are strong enough to provide for secure use of the algorithm. Just as we have seen over the years with passwords, users will often choose weak or predictable passwords and store them in an insecure manner. This same tendency would affect the creation of cryptographic keys if key creation were left to the user community. People also forget passwords, necessitating the resetting of access to the network or a workstation; in the cryptographic world, however, the loss of a key means the loss of the data itself. Without some form of key recovery, it would be impossible to recover the data that was encrypted with a lost key.

Key Recovery. A lost key may mean a crisis for an organization. The loss of critical data or backups may cause widespread damage to operations and even financial ruin or penalties. There are several methods of key recovery, such as common trusted directories or a policy that requires all cryptographic keys to be registered with the security department. Some people have even been using steganography to bury their passwords in pictures or other locations on their machine to prevent someone from finding their password file. Others use password wallets or other tools to hold all of their passwords.
One method is multiparty key recovery. A user would write her private key on a piece of paper, and then divide the key into two or more parts. Each part would be sealed in an envelope. The user would give one envelope each to trusted people with instructions that the envelope was only to be opened in an emergency where the organization needed access to the user’s system or files (disability or death of the user). In case of an emergency, the holders of the envelopes would report to human resources, where the envelopes could be opened and the key reconstructed.
*Kerckhoff’s Law, http://underbelly.blog-topia.com/2005/01/kerckhoffs-law.html.
The user would usually give the envelopes to trusted people at different management levels and in different parts of the company to reduce the risk of collusion.

Key Distribution Centers. Recall the formula used earlier to calculate the number of symmetric keys needed for n users: n(n – 1)/2. This necessitates the setup of directories, public key infrastructures, or key distribution centers.
The use of a key distribution center (KDC) for key management requires the creation of two types of keys. The first are master keys, which are secret keys shared by each user and the KDC. Each user has his own master key, and it is used to encrypt the traffic between the user and the KDC. The second type of key is a session key, created when needed, used for the duration of the communications session, and then discarded once the session is complete. When a user wants to communicate with another user or an application, the KDC sets up the session key and distributes it to each user. An implementation of this is found in Kerberos, covered in the Access Control chapter, Domain 2. A large organization may even have several KDCs, and they can be arranged so that global KDCs coordinate the traffic between the local KDCs. Because master keys are integral to the trust and security relationship between the users and hosts, such keys should never be used in compromised situations or where they may become exposed. For encrypting files or communications, separate nonmaster keys should be used. Ideally, a master key is never visible in the clear; it is buried within the equipment itself and is not accessible to the user.

Standards for Financial Institutions

ANSI X9.17 was developed to address the need of financial institutions to transmit securities and funds securely using an electronic medium. Specifically, it describes the means to ensure the secrecy of keys. The ANSI X9.17 approach is based on a hierarchy of keys. At the bottom of the hierarchy are data keys (DKs). Data keys are used to encrypt and decrypt messages. They are given short lifespans, such as one message or one connection. At the top of the hierarchy are master key-encrypting keys (KKMs). KKMs, which must be distributed manually, are afforded longer lifespans than data keys. In the two-tier model, the KKMs are used to encrypt the data keys, and the data keys are then distributed electronically to encrypt and decrypt messages. The two-tier model may be enhanced by adding another layer to the hierarchy. In the three-tier model, the KKMs are not used to encrypt data keys directly, but to encrypt other key-encrypting keys (KKs). The KKs, which are exchanged electronically, are used to encrypt the data keys.
Public Key Infrastructure (PKI)

The use of public key (asymmetric) cryptography has enabled more effective use of symmetric cryptography as well as several other important features, such as greater access control, nonrepudiation, and digital signatures. Often the biggest question is, whom can you trust? How do we know that the public key we are using to verify Jim's digital signature truly belongs to Jim, or that the public key we are using to send a confidential message to Valerie is truly Valerie's and not that of an attacker who has set himself up in the middle of the communications channel? Public keys are by their very nature public. Many people include them on signature lines in e-mails, or organizations post them on their Web servers so that customers can establish confidential communications with the employees of the organization, whom they may never even meet. How do we know an imposter or attacker has not set up a rogue Web server and is attracting communications that should have been confidential to his site instead of the real one, as in a phishing attack?

Setting up a trusted public directory of keys is one option. Each user must register with the directory service, and a secure manner of communications between the user and the directory would be set up. This would allow the user to change keys — or the directory to force the change of keys. The directory would publish and maintain the list of all active keys and also delete or revoke keys that are no longer trusted. This may happen if a person believes that her private key has been compromised, or if she leaves the employ of the organization. Any person wanting to communicate with a registered user of the directory could request the public key of the registered user from the directory.

An even higher level of trust is provided through the use of public key certificates. This can be done directly, in that Jim would send his certificate to Valerie, or through a certificate authority, which would act as a trusted third party and issue a certificate to both Jim and Valerie containing the public key of the other party. This certificate is signed with the digital signature of the certificate authority and can be verified by the recipients. A certificate authority will adhere to the X.509 standards, which are part of the overall X.500 family of standards applying to directories. X.509 was issued in 1988 and has been updated twice since; we currently use version 3 of the standard, which was issued in 1995 and revised in 2000. The fields of an X.509 certificate are as follows:

Field: Description of contents
- Version: Currently version 3
- Certificate serial number: Unique identifier for this certificate
- Algorithm used for the signature: Algorithm used to sign the certificate
- Issuer name: X.500 name of the CA
- Period of validity (start date/end date): Dates between which the certificate is valid
- Subject's name: Owner of the public key
- Subject's public key information (algorithm, parameters, key): Public key and the algorithm used to create it
- Issuer unique identifier: Optional field used in case the CA has used more than one X.500 name
- Subject's unique identifier: Optional field used in case the public key owner has more than one X.500 name
- Extensions: Optional extension fields
- Digital signature of certificate authority: Hash of the certificate encrypted with the private key of the CA
Figure 3.18 shows an example of a certificate issued by Verisign.
Figure 3.18. An X.509 certificate issued by Verisign.
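The fields listed in the table above can also be read programmatically. The sketch below uses the third-party pyca/cryptography package and assumes a locally stored PEM-encoded certificate file (the file name is ours, and attribute names may vary slightly between library versions):

    from cryptography import x509

    with open("example_cert.pem", "rb") as f:
        cert = x509.load_pem_x509_certificate(f.read())

    print(cert.version)                   # certificate version
    print(cert.serial_number)             # unique serial number
    print(cert.issuer.rfc4514_string())   # X.500 name of the issuing CA
    print(cert.subject.rfc4514_string())  # owner of the public key
    print(cert.not_valid_before, cert.not_valid_after)  # period of validity
    print(cert.signature_algorithm_oid)   # algorithm used to sign the certificate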
Revocation of a Certificate. When a certificate needs to be revoked, the CA will keep a list of all revoked certificates. It does not keep a record of expired certificates, because it is obvious that they are no longer valid. It is the responsibility of the user to check whether a certificate has been revoked; unfortunately, very few users ever check a certificate revocation list.

Cross-Certification. Users will often need to communicate with other users that are registered with a different CA. Especially in a large organization, it may not be practical to use one CA for all users. Therefore, the CAs must also have a method of cross-certifying one another, so that a public key certificate from one CA is recognized by users of a different CA.
Legal Issues Surrounding Cryptography

Most countries have some regulations regarding the use or distribution of cryptographic systems. Usually, this is to maintain the ability of law enforcement to do its job and to keep strong cryptographic tools out of the hands of criminals. Cryptography is considered in most countries to be a munition, a weapon of war, and is managed through laws written to control the distribution of military equipment. Some countries do not allow any cryptographic tools to be used by their citizens; others have laws that control the use of cryptography, usually based on key length, because key length is one of the most understandable methods of gauging the strength of a cryptosystem. In some countries, the laws require all organizations and individuals to provide law enforcement with their cryptographic keys on demand.

Cryptanalysis and Attacks

Throughout this chapter we have looked at the strengths of the cryptographic algorithms and their uses. However, we must always be aware that any security system or product is subject to compromise or attack. We will now look at many of the methods of cryptanalysis that are used.

Ciphertext-Only Attack
The ciphertext-only attack is one of the most difficult, because the attacker has so little information to start with. All he has is some unintelligible data that he suspects may be an important encrypted message. The attack becomes simpler when the attacker is able to gather several pieces of ciphertext and thereby look for trends or statistical data that would help in the attack.

Known Plaintext Attack
For a known plaintext attack, the attacker has access to both the ciphertext and the plaintext versions of the same message. The goal of this type
of attack is to find the link — the cryptographic key that was used to encrypt the message. Once the key has been found, the attacker would then be able to decrypt all messages that had been encrypted using that key. In some cases, the attacker may not have an exact copy of the message — if the message was known to be an E-commerce transaction, the attacker knows the format of such transactions even though he does not know the actual values in the transaction.

Chosen Plaintext Attack
To execute the chosen attacks, the attacker knows the algorithm used for the encrypting, or even better, he may have access to the machine used to do the encryption and is trying to determine the key. This may happen if a workstation used for encrypting messages is left unattended. Now the attacker can run chosen pieces of plaintext through the algorithm and see what the result is. This may assist in a known plaintext attack. An adaptive chosen plaintext attack is one where the attacker can modify the chosen input files to see what effect that would have on the resulting ciphertext.

Chosen Ciphertext Attack
This is similar to the chosen plaintext attack in that the attacker has access to the decryption device or software and is attempting to defeat the cryptographic protection by decrypting chosen pieces of ciphertext to discover the key. An adaptive chosen ciphertext attack is the same, except that the attacker can modify the ciphertext prior to putting it through the algorithm.

Social Engineering
This is the most common type of attack and usually the most successful. All cryptography relies to some extent on humans to implement and operate it. Unfortunately, this is one of the greatest vulnerabilities and has led to some of the greatest compromises of a nation's or organization's secrets or intellectual property. Through coercion, bribery, or befriending people in positions of responsibility, spies or competitors are able to gain access to systems without having any technical expertise.

Brute Force
There is little that is scientific or glamorous about this attack. Brute force is trying all possible keys until one is found that decrypts the ciphertext. This is why key length is such an important factor in determining the strength of a cryptosystem. With DES having only a 56-bit key, in time attackers were able to discover the key and decrypt a DES message. This is also why SHA-1 is considered stronger than MD5: the output hash is longer and therefore more resistant to a brute-force attack.
Differential Power Analysis
Also called a side channel attack, this more complex attack is carried out by measuring the exact execution times and power required by the crypto device to perform the encryption or decryption. By measuring these, it is possible to determine the length of the key and the algorithm used.

Frequency Analysis
This attack works closely with several other types of attacks. It is especially useful when attacking a substitution cipher where the statistics of the plaintext language are known. In English, for example, we know that some letters appear far more often than others, allowing an attacker to assume that the most frequent ciphertext symbols may represent letters such as E or S.

Birthday Attack
We looked at this attack earlier when we discussed hash algorithms. Because a hash is a short representation of a message, we know that, given enough time and resources, another message could be found that gives the same hash value. As we saw from the statistics of the birthday paradox, this may be easier than originally thought. However, hashing algorithms have been developed with this in mind, so that they can resist a simple birthday attack.

Dictionary Attack
The dictionary attack is used most commonly against password files. It exploits the poor habits of users who choose simple passwords based on natural words. The dictionary attack merely hashes or encrypts all of the words in a dictionary and then checks whether the resulting value matches an encrypted password stored in the SAM file or another password file.

Replay Attack
This attack is meant to disrupt and damage processing by the attacker sending repeated files to the host. If there are no checks or sequence verification codes in the receiving software, the system might process duplicate files.

Factoring Attacks
This attack is aimed at the RSA algorithm. Because that algorithm uses the product of large prime numbers to generate the public and private keys, this attack attempts to find the keys by factoring that product.

Reverse Engineering
This attack is one of the most common. A competing firm buys a crypto product from another firm and then tries to reverse engineer the product.
Through reverse engineering, it may be able to find weaknesses in the system or gain crucial information about the operation of the algorithm.

Attacking the Random Number Generators
This attack was successful against the SSL implementation in Netscape several years ago. Because the random number generator was too predictable, it gave attackers the ability to guess the random numbers that are so critical in setting up initialization vectors or a nonce. With this information in hand, the attacker is much more likely to run a successful attack.

Temporary Files
Most cryptosystems will use temporary files to perform their calculations. If these files are not deleted and overwritten, they may be compromised and lead an attacker to the message in plaintext.

Encryption Usage

Selecting the correct encryption product is much more than just adopting a technology. A CISSP is expected to understand the many other issues related to implementing a technology — the correct processes, procedures, training of the users, and maintenance of the products chosen. Some considerations must be the security of the cryptographic keys and the ability to recover lost keys or data that was encrypted by ex-employees. Every organization must have policies related to the use of cryptographic tools. These policies should require the use of organizationally supported standard encryption products, the correct processes for key management (key exchange and recovery), and proper storage and transmission of classified data.

E-mail Security Using Cryptography

Cryptography has many uses in today's business environment, but perhaps one of the most visible is to protect e-mail. E-mail is the most common form of business communications for most organizations today, far outranking voice or personal contact in its importance for commerce. There are several reasons why an organization may need to protect e-mail: to protect the confidentiality of message content and preserve intellectual property, to verify the source of e-mails and ensure that the sender is who he claims to be, to provide access control, and to prevent the distribution or copying of e-mail message content. As we know, the ability to forge e-mails, alter attachments, and masquerade as another user is a fairly simple process, and this underlines the requirement for secure e-mail.
Protocols and Standards
Pretty Good Privacy (PGP)
PGP was developed by Phil Zimmermann as a free product for noncommercial use that would enable all people to have access to state-of-the-art cryptographic algorithms to protect their privacy. PGP is also available as a commercial product that has received widespread acceptance by many organizations looking for a user-friendly, simple system of encryption of files, documents, and e-mail and the ability to wipe out old files through a process of overwriting them to protect old data from recovery. PGP also compresses data to save on bandwidth and storage needs. PGP uses several of the most common algorithms — symmetric algorithms such as CAST-128 and 3DES for encryption of bulk data, RSA for key management and distribution of hash values, and SHA-1 to compute hash values for message integrity. It gives the user the option to choose which algorithm she wishes to use, including others not mentioned here. When sending e-mail, PGP will ensure compatibility with most e-mail systems, converting binary bits to ASCII characters, breaking large messages into smaller pieces, and encrypting each message with a session (symmetric) key that is only used once. A user needing a new keyring will select a passphrase (the advantages of passphrases over passwords were described in the “Access Control” section) and PGP will generate the key pair to be used. A user will establish trust in another user’s public key through a web of trust relationship. Rather than establishing trust in a hierarchical format, where a root CA is trusted by everyone and everyone below that level trusts the higher authority, PGP establishes trust based on relationships, and one user can sign another user’s key for a third party based on the level of trust that the third party has in the key signer.
Secure/Multipurpose Internet Mail Extension (S/MIME)
S/MIME is the security enhancement for the MIME Internet e-mail standard format. S/MIME provides several features, including signed and encrypted mail messages. It uses SHA-1 and DSS for digital signature services and Diffie–Hellman and RSA for encryption of the session key. Encryption of the message itself is accomplished through 3DES and RC2/40. S/MIME also adjusts the encryption method used depending on the algorithms used by the receiver of the message.
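Both PGP and S/MIME rely on the same hybrid pattern: a one-time session key encrypts the bulk message, and only that session key is encrypted with the recipient's public key. The sketch below shows the pattern using the third-party pyca/cryptography package (an assumption), with AES-GCM and RSA-OAEP standing in for the algorithms and packet formats these products actually use.

    # Hybrid encryption sketch: a fresh symmetric session key protects the bulk
    # message, and only that session key is wrapped with the recipient's public key.
    import os
    from cryptography.hazmat.primitives.asymmetric import rsa, padding
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    recipient_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    recipient_public = recipient_private.public_key()

    message = b"Quarterly results attached - confidential"
    oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)

    # Sender side: encrypt the message with a one-time session key, then wrap the key.
    session_key = AESGCM.generate_key(bit_length=128)
    nonce = os.urandom(12)
    ciphertext = AESGCM(session_key).encrypt(nonce, message, None)
    wrapped_key = recipient_public.encrypt(session_key, oaep)

    # Recipient side: unwrap the session key with the private key, then decrypt.
    recovered_key = recipient_private.decrypt(wrapped_key, oaep)
    plaintext = AESGCM(recovered_key).decrypt(nonce, ciphertext, None)
    assert plaintext == message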
Internet and Network Security
IPSec. IPSec was developed to provide security over Internet connections and prevent IP spoofing, eavesdropping, and misuse of IP-based authentication. It operates with both IPv4 and IPv6. We study IPSec in more detail in the telecommunications domain, but we will look at it here for its utilization of encryption and compression services to provide secure communications. IPSec is documented in RFCs 2401, 2412, 2406, and 2408.
IPSec uses HMAC-MD5-96 or HMAC-SHA-1-96 to provide an integrity check value for the message and part of the headers. This prevents the spoofing of the address portion of the headers. The reason they are called “-96” is that although the full MD5 or SHA-1 is used to calculate the integrity check value, only the first 96 bits (of 128 or 160, respectively) are used. For the Encapsulating Security Payload (ESP) mode of operations, IPSec uses three-key 3DES, RC5, IDEA, three-key triple IDEA, CAST, or Blowfish to encrypt the payload (message) and, depending on whether it is tunnel or transport mode, part of the header. Key management is an important part of IPSec, and it uses Oakley/ISAKMP for key exchange. Oakley uses a form of Diffie–Hellman with some added security to prevent clogging of the key exchange process. This is done by forcing the sender to include a random number (nonce) in the original packet to the receiver. The receiver will then respond back to the address of the sender in the packet and not begin the decryption process until it receives an acknowledgment that the reply was received by the sender. If an attacker spoofed the sender’s address, the sender would never get the response from the receiver, and this would prevent the receiver from doing unnecessary work and clogging up his system with spoofed requests. ISAKMP specifies the format of the process to negotiate security associations.
SSL/TLS. SSL is one of the most common protocols we use to protect Internet traffic. It encrypts the messages using symmetric algorithms, such as IDEA, DES, 3DES, and Fortezza, and also calculates the MAC for the message using MD5 or SHA-1. The MAC is appended to the message and encrypted along with the message data. Exchange of the symmetric keys is accomplished through various versions of Diffie–Hellman or RSA.
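The "-96" truncation used in IPSec integrity checking, described above, is easy to demonstrate with Python's standard hmac and hashlib modules; the key and packet contents below are made up for illustration only.

    # Sketch of the "-96" truncation: compute a full HMAC-SHA-1 (160 bits),
    # then keep only the first 96 bits (12 bytes) as the integrity check value.
    import hmac
    import hashlib

    key = b"example-shared-secret"      # hypothetical key for illustration
    packet = b"header-fields|payload"   # stand-in for the authenticated data

    full_tag = hmac.new(key, packet, hashlib.sha1).digest()   # 20 bytes = 160 bits
    truncated_tag = full_tag[:12]                              # 12 bytes = 96 bits

    print(len(full_tag) * 8, "bit tag truncated to", len(truncated_tag) * 8, "bits")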
TLS is the Internet standard based on SSLv3. TLSv1 is backwards compatible with SSLv3. It uses the same algorithms as SSLv3; however, it computes the MAC in a slightly different manner.

References
Henri Cohen et al. Handbook of Elliptic and Hyperelliptic Curve Cryptography. Boca Raton, FL: CRC Press, 2005.
Gregory Kipper. Investigator’s Guide to Steganography. New York: Auerbach Publications, 2003.
Richard A. Mollin. Codes: The Guide to Secrecy from Ancient to Modern Times. Boca Raton, FL: CRC Press, 2005.
Richard A. Mollin. RSA and Public-Key Cryptography. Boca Raton, FL: CRC Press, 2002.
William Stallings. Cryptography and Network Security: Principles and Practice. Englewood Cliffs, NJ: Prentice Hall, 2002.
Douglas R. Stinson. Cryptography: Theory and Practice, 3rd ed. Boca Raton, FL: CRC Press, 2005.
James S. Tiller. A Technical Guide to IPSec Virtual Private Networks. New York: Auerbach Publications, 2000.
Harold F. Tipton and Micki Krause (Eds.). Information Security Management Handbook, 5th ed., Vols. 1–3. New York: Auerbach Publications, 2005–2007.
John R. Vacca. Public Key Infrastructure: Building Trusted Applications and Web Services. New York: Auerbach Publications, 2004.
Lawrence C. Washington. Elliptic Curves: Number Theory and Cryptography. Boca Raton, FL: CRC Press, 2004.
Sample Questions
1. Asymmetric key cryptography is used for all of the following except:
   a. Encryption of data
   b. Access control
   c. Nonrepudiation
   d. Steganography
2. The most common forms of asymmetric key cryptography include:
   a. Diffie–Hellman
   b. Rijndael
   c. Blowfish
   d. SHA-256
3. One of the most important principles in the secure use of a public key algorithm is:
   a. Protection of the private key
   b. Distribution of the shared key
   c. Integrity of the message
   d. History of session keys
4. Secure distribution of a confidential message can be performed by:
   a. Encrypting the message with the receiver’s public key
   b. Encrypting a hash of the message
   c. Having the message authenticated by a certificate authority
   d. Using a password-protected file format
5. What are the disadvantages of using a public key algorithm compared to a symmetric algorithm?
   a. A symmetric algorithm provides better access control.
   b. A symmetric algorithm is a faster process.
   c. A symmetric algorithm provides nonrepudiation of delivery.
   d. A symmetric algorithm is more difficult to implement.
6. When a user needs to provide message integrity, what options may be best?
   a. Send a digital signature of the message to the recipient
   b. Encrypt the message with a symmetric algorithm and send it
   c. Encrypt the message with a private key so the recipient can decrypt with the corresponding public key
   d. Send an encrypted hash of the message along with the message to the recipient
7. A certification authority provides which benefits to a user?
   a. Protection of public keys of all users
   b. History of symmetric keys
   c. Proof of nonrepudiation of origin
   d. Validation that a public key is associated with a particular user
8. What is the output length of a RIPEMD-160 hash?
   a. 160 bits
   b. 150 bits
   c. 128 bits
   d. 104 bits
9. What is the primary risk of using cryptographic protection for systems or data?
   a. Loss of the system may mean loss of all data.
   b. A hardware failure may lead to lost data or system integrity.
   c. A disgruntled user may lead to denial of service.
   d. An employee may hide his activities from the security department.
10. ANSI X9.17 is concerned primarily with:
   a. Protection and secrecy of keys
   b. Financial records and retention of encrypted data
   c. Formalizing a key hierarchy
   d. The lifespan of key-encrypting keys (KKMs)
11. When a certificate is revoked, what is the proper procedure?
   a. Setting new key expiry dates
   b. Updating the certificate revocation list
   c. Removal of the private key from all directories
   d. Notification to all employees of revoked keys
12. What is not true about link encryption?
   a. Link encryption encrypts routing information.
   b. Link encryption is often used for Frame Relay or satellite links.
   c. Link encryption is suitable for high-risk environments.
   d. Link encryption provides better traffic flow confidentiality.
13. A _______________ is the sequence that controls the operation of the cryptographic algorithm.
   a. Encoder
   b. Decoder wheel
   c. Cryptovariable
   d. Cryptographic routine
14. The process used in most block ciphers to increase their strength is:
   a. Diffusion
   b. Confusion
   c. Step function
   d. SP-network
15. The two methods of encrypting data are:
   a. Substitution and transposition
   b. Block and stream
   c. Symmetric and asymmetric
   d. DES and AES
16. Cryptography supports all of the core principles of information security except:
   a. Availability
   b. Confidentiality
   c. Integrity
   d. BCP
17. A way to defeat frequency analysis as a method to determine the key is to use:
   a. Substitution ciphers
   b. Transposition ciphers
   c. Polyalphabetic ciphers
   d. Inversion ciphers
18. The running key cipher is based on:
   a. Modular arithmetic
   b. XOR mathematics
   c. Factoring
   d. Exponentiation
19. The only cipher system said to be unbreakable by brute force is:
   a. AES
   b. DES
   c. One-time pad
   d. Triple DES
20. Messages protected by steganography can be transmitted in:
   a. Picture files
   b. Music files
   c. Video files
   d. All of the above
Domain 4
Physical (Environmental) Security
Paul Hansford, CISSP
Introduction
The physical (environmental) security domain addresses the common physical, environmental, and procedural risks that may exist in the environment in which the information system is managed. It also addresses physical and procedural defensive and recovery strategies, countermeasures, and resources, including the corporate physical infrastructure, security policies and procedures, physical security tools, and the organization’s staff.
It is commonly accepted that the greatest potential source of threats to systems is “the insider” — staff, contractors, and anyone else who has logical (technical) or physical access to the system. It is also commonly accepted that the majority of the security breaches caused by insiders are not malicious, but the result of ignorance, i.e., a lack of training or awareness, or vulnerabilities arising from inadequate physical security or lapsed working procedures.
In establishing and maintaining the security of a system, the Certified Information Systems Security Professional (CISSP®) will need to address its physical environment. In this, the CISSP must have an understanding of the strengths, weaknesses, and applicability of the likes of vehicle gates, doors, and windows, which may initially appear to have little direct bearing on information security. But of course they do. The Common Body of Knowledge (CBK®) reflects this in its domain on physical (environmental) security; in terms of the examination, you, the candidate, should be able to:
• Describe the common threats to — and vulnerabilities found in — the environments of system and mobile devices, including working procedures, site facilities, equipment and media, and environmental support systems
• Explain the principle of defense in depth, expressed as “a layered combination of complementary countermeasures,” and the importance of providing both preventive and recovery measures
• Identify the range of countermeasures available for the environmental protection of information assets, including security procedures, physical barriers, physical intrusion detection and monitoring systems, technical identification and authentication tools (e.g., smart cards, biometric devices), and environmental controls (e.g., of power, water, light, and heat).

CISSP Expectations
The CBK domain on physical (environmental) security is listed below to assist in determining topics for further review. Although this exam guide is detailed, it is not possible to thoroughly discuss every CBK topic here. Therefore, additional study of some items may be advisable. The CISSP should fully understand:
• Threat types, including physical attack, arson, accidental damage, and burglary; environmental damage, including water, dust, heat, and power irregularities and disruptions
• Threat sources, including pressure and terrorist groups, criminals, staff and contractors, and the general public
• Vulnerabilities, including inadequate or lapsed procedures and weak or inappropriate physical security measures
• The organization of perimeter, site zones, and building security
• The benefits of including preexisting physical and procedural measures in system security strategies, and of blending physical and procedural measures with technical measures in a defense-in-depth strategy
• The benefits of coordinating with physical security and facilities management staff, and training staff, to utilize physical security measures and promote procedural security through training and awareness
• Physical security procedures
• Environmental controls, aligned with relevant health and safety legislation, including fire, flood, and similar safety requirements
• Physical barriers, including identification and authentication, and access and intrusion detection controls

Physical (Environmental) Security Challenges
Physical, environmental, and procedural security measures perhaps offer the greatest potential for cost-effective defense in depth, particularly where measures already exist to address other security needs and are appropriate to the security of the system. However, it is important when
selecting these to confirm that they remain effective in practice as well as on paper — procedures can become relaxed over time and physical measures neglected. The increasing tendency toward home working and mobile computing, with diminished control over the physical environment, makes the need for procedural security more important than ever before. Security procedures must be clearly understood by those required to carry them out, and fit with working practices: experience shows that staff who do not appreciate the need for procedures will, sooner or later, find ways around them.
At a strategic (corporate) level, the challenges for the information security professional include understanding the discipline of physical security and building effective working relationships with those who manage environmental and physical security. This may also include negotiating with contracted security services providers. To ensure staff honor their security obligations, it may be necessary to carry out security education, training, and awareness to promote the concept that everyone in the organization has a responsibility to address the physical and procedural aspects of information security in their daily work.

Threats and Vulnerabilities
As with other information security goals, the objective of physical, environmental, and procedural security is to ensure the system is available when needed and maintains data integrity and the confidentiality of the information it manages. The physical and procedural threats to systems include both malicious and accidental actions, plus environmental conditions that may damage computer system hardware and media. The physical components of systems are particularly at risk during installation, and where building maintenance or geographical relocation is under way. Mobile systems, including laptops, mobile phones, and personal digital assistants (PDAs), are particularly at risk in public transit and when left unattended in “foreign” environments, such as hotel rooms.
Threat Types. There are three basic threat types: natural and environmental, threats from utility systems, and man-made and political threats. This section examines each of these threats in relation to sources, vulnerabilities, and countermeasures.
Environmental Threats. A range of threats is presented by the physical environment itself. These include water leakage and humidity, ingress of dust and materials, excessively high and low temperature levels within and around the system, and power fluctuations and loss. Complete loss of power from its commercial source is sometimes called a blackout, whereas
a reduction of the power level is termed a brownout. Power fluctuations can be caused within the organization’s infrastructure by, for example, switching on a motor or infrastructure component that requires a high level of power to start up. This may cause a momentary sag or dip in voltage. Similarly, switching off a large system or failing to regulate voltage from an internal power source may deliver a momentary power surge or spike of higher voltage to the system, burning out circuitry. Power surges can also be caused by environmental changes, such as a local lightning storm. Most computer rooms need to be temperature controlled: failure to maintain this may require the system to be shut down.
Securing the system’s environment must be achieved in compliance with statutory health and safety regulations: clearly, the protection of human life overrides the protection of computers. The position of a fire exit may not fit with security plans, but that must be accommodated. And although staff are normally required, for example, to purge printers, faxes, and photocopiers and lock away media before leaving their offices, these procedures clearly do not apply in the event of a fire.
Malicious Threats. These include physical attack, sabotage, vandalism, arson, and theft. All of these may disrupt the business longer than may first seem likely. It is rarely a simple matter of replacing hardware or installing a software backup. A physical attack may be targeted on the building or site, rather than the system itself; therefore, damage to the infrastructure may need to be repaired before the computer system can be restored. Arson may entail recovering from the wider effects of smoke and water damage.
It is difficult to generalize on the possible sources of such physical attacks, but the information security professional should consider the capability, opportunities, and motivation of, for example, any relevant political or other pressure groups, competitors, and even former employees. These days, the majority of thefts tend to be either of laptops and similar small devices or of peripherals such as keyboards, mouses, monitors, and printers. In all these cases, the target is the hardware, but information is also lost and confidentiality compromised. That said, there are still examples of more organized, larger-scale burglaries of computer systems and software from premises; the risk of such an attack cannot therefore be discounted. Finally, information itself has a value: commercial plans, research and development data, staff directories, and other such information are of use to competitors, investigative journalists, hackers, and others. Many technical hacking attacks begin with information gathering and social engineering, for example, the collection of paper waste that has not been shredded or otherwise rendered unusable.
Physical (Environmental) Security Accidental Threats. It is commonly accepted that around 75 percent of all attacks perpetrated by insiders are in fact simply accidents, or the consequence of ignorance of security obligations or how to operate the system securely. This type of physical threat may range from the minor and ephemeral — for example, the often-quoted “spilling coffee on the keyboard” — to a more serious disruption to service. Examples of building contractors accidentally cutting through cables during site excavations are often heard. Experience has shown that many potential accidental threats can be avoided by education and good procedural security measures. Vulnerabilities. Most physical, environmental, and procedural vulnerabilities arise from inadequate or lapsed security working practices and weak or inappropriate physical security measures. They include failure to test and review the integration of procedures with working practices, or to monitor and maintain physical security measures over time — against changes in the threat, work practices, and the physical infrastructure of the organization itself.
Site Location The location of the site has implications for the need for physical security for the system. An out-of-town location may provide complete control of the outermost perimeter by means of fencing, guard patrols, closed-circuit television (CCTV), and other intrusion sensors. In an urban area, however, that perimeter area may be as shallow as the building’s walls or the floors the organization occupies. Where the organization leases part of a shared building, control over external security may be difficult to achieve: although it can protect its own perimeters within the building, the organization may not be able to legislate against attacks on the building itself, nor any shared infrastructure services, such as telecommunications. The geographical location of the site may affect the security requirement if it is vulnerable to natural disasters, such as lying in a flood plain; is vulnerable to civil disobedience, demonstrations, or terrorist attacks; experiences crime, including burglary, vandalism, street crime, and arson; or lacks adequate access for the logistical support of emergency services. Site Fabric and Infrastructure The layout of and materials used in buildings have implications for security, as does the provision of infrastructure: water, light, heat and ventilation, and power systems. In this area, security arrangements must comply with statutory health and safety requirements.
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® In modern buildings, it may be possible to change the layout to accommodate security requirements, for example, to place cabling in secure ducting under floors or in ceiling cavities. In older buildings, particularly where these are subject to conservation orders, this may be impossible and security must be addressed within given architectural constraints. When a new site is built, or the organization relocates to modern premises, there is an opportunity to influence infrastructure and physical security arrangements to best protect the systems. To achieve this, security will need to be considered at all stages of design and build. Although this may involve additional complexity in planning, it will result in overall better and more cost-effective security than if this is considered after the event. Site fabric and infrastructure issues that may affect the security requirement include the location of doors and windows in the building, particularly on the ground floor; entry points open to the public, reception areas, and delivery and other trade entrances; the location of fire escapes, including external and internal stairways; the layout of offices, whether open plan, partitioned, or with solid or hollow walls; and door construction. The Layered Defense Model The main aim of physical and procedural security should be to integrate with technical system security measures in providing defense in depth. This is defined as “a layered combination of complementary countermeasures” or the layered defense model (Figure 4.1). The number and nature of defensive layers will depend on the configurations of the sites in which the system is managed. Typically, they will include the outermost physical perimeter, inner perimeters, and security zones specific to the system. In this chapter, we use the term outermost perimeter to define the furthest physical extent that the organization can control; inner perimeter to describe areas within this, that themselves require some additional form of protection; and restricted areas to describe more specific areas, such as suites or individual rooms. But there is no unified definition of these terms, nor will they necessarily be defined as such in the CISSP examination. In an open or rural environment, the outermost perimeter may comprise the fences, landscaping, and parking areas surrounding the buildings of the site; inner perimeters may comprise individual buildings within this overall perimeter; and security zones may be needed for server rooms and specific data processing areas. In an urban environment, the outermost perimeter may comprise the building or a suite of floors in a shared building that belong to the organization. Inner perimeters may comprise specific floors or suites of rooms, 286
Figure 4.1. The basic layered defense model: concentric layers running from the outermost perimeter through the grounds, the building, entrance/public areas, and general offices to the ICT suite and other rooms, with communications channels crossing all layers.
and security zones may again comprise those rooms that specifically house the system itself. These inner perimeters logically extend to communications systems and ducting that carries wire or fiber-optic cables between secure zones. In considering communications security, note that physical security measures cannot protect wireless communications and — unless specific shielding measures are taken — the unintended emission and interception of signals from system components.

Physical Considerations
Working with Others to Achieve Physical and Procedural Security. Where the system is installed within a secure site, its security strategy can benefit from physical and procedural measures that already exist for other reasons, such as in business continuity planning (BCP). The benefits of utilizing existing measures in a new security strategy include cost-effectiveness, because those measures are already in place and budgeted; any problems should have been identified and resolved; and they provide a visible deterrent to potential attackers.
In adopting existing measures into the security strategy, there needs to be proof that these: 287
• Actually provide security, and procedures are actually carried out as specified
• Continue to address the risks, and that these risks are relevant to the system
• Neither duplicate security provision nor leave security gaps in the defense-in-depth strategy

Physical and Procedural Security Methods, Tools, and Techniques. In common with technical aspects of information security, the defense-in-depth strategy applied to the environment in which the system is managed addresses four primary protection aims:
• Identifying and authenticating those individuals with physical access to the environment • Authorizing (i.e., defining and implementing access control for) those individuals • Monitoring and accounting for actions within the environment • Providing a contingency (to support business continuity) capability in the environment In this section, we will define a simple model of a site, comprising four protective layers, and discuss the procedural, environmental, and physical controls appropriate to these environments: • The external perimeter, comprising green space and car parking areas • Buildings within the site, housing various company activities • Restricted areas within each building: whole floors, suites, or individual rooms in which the IT system is sited and central resources, server rooms, and offices containing networked workstations • Communications channels that run between secure zones and outside the site perimeter Procedural Controls. At the outermost perimeter, procedural controls are designed to manage the environment boundaries and open areas. This may include landscaped areas, car parking, and shipping and receiving areas. Procedural controls may consist of the following. Guard Post. A guard post may include vehicle gates control, patrols of perimeters, staff and visitor car parks, buildings, vehicle inspections, and monitoring of CCTV. Many organizations outsource guard services to contractor companies. This can create a vulnerability to the information security regime due to high turnover and poor training. These vulnerabilities can be mitigated though good contract management. Reasonable wages and specific post orders detailing what is expected of contract security staff, along with specific performance metrics built into the contract, can go a long way in minimizing turnover and ensuring high-quality service. 288
Security guard shifts should overlap and not allow periods where boundaries are unattended. Guards should remain aware of the security requirements for which they are responsible, and their employer should be contractually bound to enforce these. Computer rooms should be secured at all times, require an appropriate form of access control (for example, card access), and have the capability to monitor and account for traffic moving in and out of them. Staff and visitor car parking should be segregated, and passes used to identify staff vehicles. In particularly sensitive situations, guards may be required to search vehicles on entry and exit to the premises. If so, it may be necessary to consult with legal experts on the legal aspects of such procedures.
Checking and Escorting Visitors on Site. Because hackers have used social engineering to enter sites and gain information prior to launching a technical attack, and in extreme cases, commercial competitors and others have similarly used covert access to company premises to directly or indirectly gain access to information or IT services, the management of visitors on company premises has a direct bearing on information security. The ability to identify and account for visitors on site is a health and safety requirement, and security requirements can therefore be integrated with existing procedures in this respect.
Managing Deliveries to the Site. Individual buildings on the site may need differing levels of security. Some buildings may be open to visitors and others not. The central suite, communications center, and server rooms will require special attention. Procedural controls may include:
• Reception areas, including visitor registration and pass management
• Deck-to-deck walls
• Cameras recording all access points to the server room
• Management of entry and exit points, including fire exits and delivery points
• Control on site of devices belonging to staff and visitors, such as laptops, mobile phones, and personal digital assistants (PDAs)

Buildings may themselves be organized in various levels of secure zones. Typical examples include the control center, server rooms, and offices containing workstations. Procedural controls may include:
• Clear desk procedures
• Purging of storage media on fax, photocopier, and telephone voice mail facilities
• Security checks on leaving rooms and at the end of the day

Communications channels present a separate security environment, because they may cross boundaries between secure zones within a site,
and indeed across the organizational boundary to the outside world. In this section, we refer particularly to data links between components of the corporate system. But they may also include telephone and similar channels acting as carriers for computer, fax, teleconferencing, and other forms of transmission. Procedural controls may include:
• Inspections of internal cable ducting, cross-site underground cable runs, and telephone exchanges to guard against breakages, water and other damage, and tapping attacks
• Inspections of physical controls (locks and similar security devices) on cabinets and ducting that house communications equipment
• Testing for unintended emissions from cables and computer system peripherals (including monitor screens and keyboards)
• Testing for the range of transmission from wireless communications

Infrastructure Support Systems
Environmental controls may already be in place to comply with health and safety legislation, including fire, flood, and other threats. The information security strategy must work within health and safety requirements; for example, a fire exit in or near a computer room may present a security vulnerability, but that vulnerability must be managed without compromising the purpose of the exit. In addition, computer systems themselves need protection in terms of power management, heating, ventilation, and air conditioning (HVAC), and refrigeration issues.
At the external perimeter of our model, there are unlikely to be environmental controls other than, perhaps, parks maintenance to maintain clear line of sight for guard patrols and CCTV to all parts of the perimeter boundary. In urban settings, where the external perimeter may be that of the office block itself, environmental controls may be required to prevent parking close to the block.
Within buildings, and within restricted areas dedicated to systems, there may be particular needs. Depending on the architecture and fabric of the building, these include HVAC, power surge suppressors and uninterruptible power supplies (UPSs), alternate power sources and grids, and electromagnetic and radio frequency interference (RFI). Temperature, humidity, and air quality sensors are tied into building alarm systems, which include emergency shutdown, for example, emergency power off (EPO) switches and protection and recovery measures against water leakage and flooding.
Fire Prevention, Detection, and Suppression. Clearly, measures for preventing, detecting, and dealing with fires should be in place for health and safety and infrastructure protection reasons. Some of these measures —
Physical (Environmental) Security for example, the placement and operation of fire exits — may affect the security strategy, which must consider the protection of human life as paramount. But there are also specific considerations for protecting systems, and these are discussed below. Training and education have a part to play here too, because a timely and appropriate response to a fire outbreak may prevent it becoming a critical incident. Most environments include false floors or ceilings to carry cabling. These areas can act as tunnels for flame and smoke. Therefore, the materials used in these must be nonflammable, and consideration should be given to compartmenting computer suites using floor-to-ceiling barriers to prevent the spread of flames and smoke in the event of fire. Noting that media such as magnetic tapes can produce poisonous gases when ignited, and that smoke can do as much damage as the fire itself, combustible materials should not be stored in central computing facility rooms. Media containing critical data and system software should be stored in fireproof containers, and backups should be held off-site. Fire and Smoke Detection Systems. Detection and suppression systems are likely to be installed throughout the infrastructure to protect staff and the building in which they work. Those units operating in data centers should be calibrated to accommodate the temperature, humidity, and other requirements for operating the hardware. Common types of fire and smoke detection systems include:
• Ionization: Reacts to the charged particles in smoke.
• Photoelectric: Reacts to changes in or blockage of light caused by smoke.
• Heat: Reacts to significant changes in temperature caused by fire.

Fire Suppression Devices. Statutory health and safety requirements specify that portable fire extinguishers be available near electrical equipment, and of course this includes hardware. These must remain accessible, all relevant staff should know how and when to use them, and the correct class of extinguisher should be available and clearly labeled. These classes (and their applications) are:
• Class A: For common combustibles such as paper, wood, and laminates. Uses water or soda acid to control the fire.
• Class B: For liquids such as petrol or coolants. Uses gas (Halon, a substitute, or carbon dioxide) or soda acid to control the fire.
• Class C: For electrical equipment including wiring. Uses gas (Halon, a substitute, or carbon dioxide) or soda acid to control the fire.
• Class D: For combustible metals. Uses dry powder to control the fire.
• Class K: For commercial kitchens. Uses wet chemicals for fire suppression.
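As a study aid only, the extinguisher classes above can be restated as a small lookup table; the entries below simply repeat the list and add nothing to it.

    # Quick-reference lookup built from the extinguisher class list above.
    EXTINGUISHER_CLASSES = {
        "A": ("Common combustibles (paper, wood, laminates)", "Water or soda acid"),
        "B": ("Flammable liquids (petrol, coolants)", "Gas (Halon substitute, CO2) or soda acid"),
        "C": ("Electrical equipment including wiring", "Gas (Halon substitute, CO2) or soda acid"),
        "D": ("Combustible metals", "Dry powder"),
        "K": ("Commercial kitchens", "Wet chemicals"),
    }

    def describe(cls: str) -> str:
        fuel, agent = EXTINGUISHER_CLASSES[cls.upper()]
        return f"Class {cls.upper()}: {fuel}; suppressed with {agent}"

    print(describe("C"))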
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® Fire extinguishers can deal with small outbreaks, but larger sprinkler systems are needed to quell larger fires. Gas systems (including Halon) are sometimes used, but water systems are more common. Article 2B of “The Montreal Protocol on Substances That Deplete the Ozone Layer” has been agreed to by 183 nations under the United Nations Environment Programme, and amended most recently in Beijing (1999). Among other requirements to limit CFC emissions, this protocol controls the use of Halon 1211, 1301, and 2402. It also asserts that all new installations must use alternative means of fire suppression. Clearly, water will damage system hardware and, as a conductor of electrical charge, may cause significant damage through shorting out electrically live system components. This situation may to some extent be mitigated by first ensuring that fire and smoke detectors focus on specific zones within the suite (and perhaps that the sprinkler is activated only when a combination of these sensors detect heat or smoke) and then building into the suppression system a slight delay before water is introduced into the environment. This arrangement is called a dry pipe system: a valve is activated by the smoke or fire sensor and essentially chokes the water supply for a short time — to allow for, for example, evacuation or emergency system shutdown procedures to take place — before the sprinkler system is started. Sprinkler systems without this type of delay feature (that start the sprinkler as soon as the sensor detects smoke or fire) are called wet pipe systems. As stated previously, the security strategy must recognize that a fire of any magnitude in or near the data center is likely to cause significant damage — if not through fire itself, then through the results of smoke damage or, ironically, the consequence of fire suppression actions. Clearly, the response to this type of event will likely be based on a business continuity plan. Boundary Protection. Physical barriers, including identification and authentication, access control, and intrusion detection, are required to protect all layers of the environment of the system. In established sites, many of these measures may already be in place for other reasons. Providing they are effective and relevant to the security requirement for the system, these can offer a cost-effective basis for a defense-in-depth security strategy for the system.
At the external perimeter of our model, physical controls may include those listed below. In an urban environment, where, for example, the external perimeter is that of a shared office block, some perimeter controls may be jointly owned with other organizations or provided as a managed service. In this case, security arrangements may need to be negotiated and internal measures raised within those parts of the block where the organization has control, to allow for any shortfalls. 292
Physical (Environmental) Security Perimeter Walls, Fences, and Similar Barriers. Walls and fences provide a clear statement of the physical boundaries of the organization and should act as a deterrent. Considerations for implementing these include:
• The ability to see all parts of such barriers, using guard patrols, CCTV, or other means to detect any attempts to break over or through them. • Ensuring that landscaping and architectural features do not impede line of sight. • Ensuring that chain-link fencing is adequately sited, for example, in concrete bases, and taut. Chain fencing is liable to corrosion and other wear-and-tear deterioration; checks must be made periodically for signs of weakness in the structure. • Using a combination of barrier types, such as a barbed-wire top guard above the fence or wall. Vehicle and Personnel Entry and Exit Gateways. These gateways include automated barriers. Automated gates and pole barriers that are raised and lowered to allow or prevent vehicular access may need to be manned or to incorporate some form of surveillance, for example, CCTV, to detect instances of “tailgating,” where the barrier is open long enough for a trespasser to follow immediately behind an authorized visitor.
Personnel barriers include turnstiles and mantraps. The latter comprise a double-door facility so that the entrant is momentarily “trapped” between the closed door and the other about to open. Personnel barriers may be operated by guards or by the presentation of a smart card or other token, a PIN (personal identification number), or possibly a biometric reading. In selecting which type of barrier to implement, consideration must be given to the number of staff who will need to pass through these at peak office times, and their ability to permit a swift exit in the event of a fire or similar emergency. Revolving doors and turnstiles can be set for freewheeling egress when interfaced with building fire systems. Building Entry Points Keys and Locking Systems. Locking systems, using keys or other devices, remain among the most common form of access control to premises. It is worth defining the various types available and their uses. Key and Deadbolt Locks. Key locks require a physical key to open the lock. Deadbolt locks additionally comprise a bolt or bolts that are “thrown” from the door into the door frame when the key is turned, providing additional strength against a physical attack. Commonly used on standard room and cabinet doors, these are perhaps the most vulnerable types of locking system, in the sense that a lost or stolen key offers ready access and locks are relatively easy to pick. Therefore, accounting of all keys and spares must 293
be made, and audits conducted periodically to ensure none are missing. To mitigate this risk further, ensure that restricted areas with access-controlled doors do not also have keys issued to staff. Keys must be available in emergency situations if the access control system fails, but this can be handled in a number of ways. More expensive locking hardware can also be added to the door to increase the difficulty of picking the lock. Although there is no such thing as a pick-proof lock, a lock can be made difficult enough to frustrate someone’s attempt. Some organizations operate a master key system, where the master can be used to access all doors, though each also has its own unique key. In this case, the master key and its duplicate are critical and must be held securely against loss, theft, or copying. Restricted key systems offer a method for eliminating the possibility of having keys copied. Even with a restricted key system, some keys need to be issued and could be misused. Therefore, it is good practice to make sure that the perimeter doors are keyed to a separate system from the internal keys, that a minimum number of perimeter doors (one or two) be keyed alike, and that only a few of those keys be issued in case of access control failure. That way, if a key is compromised, only a maximum of two doors need to be rekeyed.
Combination Locks. Combination locks comprise a numbered tumbler or dial that must be turned clockwise and anticlockwise a number of times to preset numbers to open the device. This system overcomes the issue of physical key management and effectively allows the key (the combination numbers) to be changed periodically. It is commonly (not exclusively) associated with security furniture and requires knowledge both of the numbers and of the rotation sequence to gain access. The principal vulnerabilities of this system are where the combination is written down and found, or is not changed frequently, and when staff leave. A register of tumbler combinations, and those who hold them, needs to be managed centrally.
Keypad or Pushbutton Locks. Keypad or pushbutton locks simply require a combination of numbers to be learned and secured. When the number is input to the keypad, access is given. The combination relates to the barrier rather than to the individual, and may therefore be shared by all those requiring access — hence accountability is lost. Further, there is the danger that the combination number is observed as it is input, or that it can be discovered by trial and error.
Smart Locks. Smart locks and associated smart cards can be used to permit only authorized individuals to gain access and can be programmed, for example, to limit access to certain times or for a prescribed period, after which the card will not work. These are similar to access-controlled doors,
Physical (Environmental) Security but are not linked to a central database. Physically going to each door to update access tables is required, which makes this solution impractical for large-scale deployment. Associated with this limitation is the vulnerability of not being able to disable access quickly when a person leaves the company but keeps the smart card. Walls, Doors, and Windows. Door Design and Materials. Having discussed locks, it is worth considering the value or otherwise of door construction. These may be solid or contain a pane of glass to ensure those approaching the door can be seen. They may be hollow, comprising two thin boards on a frame. The solid door is more secure than the hollow construction, which can simply be kicked in or cut through. Where doors to rooms that contain equipment have glass elements, care should be taken to ensure that desks and monitor screens cannot be observed from outside the room.
In addition, it is often overlooked that a determined attacker may remove the door from its frame to gain access. Fixing of the frame securely to the wall and ensuring the door hinges are hidden, rather than external, when the door is closed add to the security of the doorway. In cases where external hinges are necessary, the hinge pins should be welded or pinned to keep them from being removed by an intruder. Finally, attention should be paid to fire doors, roof access points and fire escapes, ventilation openings, and crawl spaces below raised floors and above false ceilings that may allow access between rooms. Window Glass and Types. In secure areas, the type and construction of windows — particularly external windows — can either aid security or present vulnerabilities. Standard plate glass is relatively brittle and breaks into shards, whereas tempered glass is several times stronger and breaks into small fragments. Reflective or shatter-proof security film (sometimes referred to as bomb blast film) can be applied to such existing window panes to prevent outsiders seeing into rooms or to further reinforce the window against shattering. Wire mesh or a polycarbonate sheet can be embedded between two sheets of glass to create a laminated window pane; the wire mesh format is often required to comply with fire regulations. Finally, acrylics such as Lexan (this is both a product name and a common way of referring to this type of very strong, clear acrylic sheeting) can be used in place of glass. A measure of the strength of Lexan is seen in its use in aircraft windows.
As with doors, it is important that the window frames are fully secured to the walls, that the windows can be locked, and that the glass is fixed within the frame such that it cannot be removed from outside. Security sensors can be applied to window frames to detect noise or vibration that indicates an attack on the window. 295
Most of the measures we have discussed so far provide access control. We now discuss some identification and authentication measures, recognizing that these are often closely integrated with access control measures and procedures.
Access Controls. Card, Badge, and Pass Identifiers. Color-coded badges and passes can be worn to indicate the authorization of the individual to enter secure zones. This measure must be supported by the procedural requirement that badges and passes are worn at all times and that staff challenge those who do not wear them or are in an unauthorized area.
As discussed earlier, programmable swipe cards and smart cards with chip technology can be used to provide two-factor authentication: something the individual has (the card) and something she knows (a PIN or password she supplies to the control, with the card). Smart cards can further offer encryption-based identification in concert with the access control device. Contact card systems require physical contact between the card and the reader to share authentication information. Proximity cards are typically 13.56-MHz contactless radio frequency identification (RFID) cards, also known as contactless smart cards. Proximity cards are defined by the ISO 14443 (proximity card) standard.
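The two-factor principle just described can be sketched as a simple access decision: the card identifier must be enrolled and the PIN must match before the door is released. The card numbers, PINs, and door names below are invented for illustration, and a production system would use salted password hashing and a central access control server rather than an in-memory table.

    # Illustrative two-factor door check: the card (something you have) must be
    # enrolled, and the PIN (something you know) must match a stored hash.
    import hashlib

    ACCESS_TABLE = {
        # card_id: (set of doors permitted, SHA-256 hash of the holder's PIN)
        "CARD-0001": ({"server-room", "lobby"}, hashlib.sha256(b"4912").hexdigest()),
        "CARD-0002": ({"lobby"}, hashlib.sha256(b"7038").hexdigest()),
    }

    def grant_access(card_id: str, pin: str, door: str) -> bool:
        record = ACCESS_TABLE.get(card_id)
        if record is None:
            return False                      # unknown card
        doors, pin_hash = record
        if hashlib.sha256(pin.encode()).hexdigest() != pin_hash:
            return False                      # wrong PIN: second factor fails
        return door in doors                  # is the card authorized for this door?

    print(grant_access("CARD-0001", "4912", "server-room"))  # True
    print(grant_access("CARD-0002", "7038", "server-room"))  # False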
For more information about cards and biometrics, see the chapter on Access Control, Domain 2. We now turn to consider monitoring and intrusion detection systems. Closed-Circuit Television (CCTV). Although CCTV provides a limited, visible deterrent to potential intruders, its real benefit is the ability to detect and identify an intrusion as it happens. However, CCTV requires human intervention and, like any other monitoring capability, is only as good as those who operate it. An effective CCTV system includes the following features:
• Positioning of cameras at an adequate height to avoid physical attack. • Appropriately distributed to exclude any blind areas between each camera range. 296
Physical (Environmental) Security • Adequate lighting in all conditions, to obtain good pictures. • Appropriate lenses (fixed or zoom) and the ability to pan, tilt, and zoom as required. • Ability to be recorded. Premise liability exists when a camera system is in place that is not recorded. Tapes have been used for years, but most current systems involve digital recorders using some form of video compression algorithm, either locally on a digital video recorder (DVR) or centrally on a server across the network in a network video recorder (NVR) system. A software interface allows people to view live and archived video. • Having the camera system tied into the alarm system. This way, a pan, tilt, and zoom (PTZ) can be programmed to automatically focus in on a preset location, like a door or window, when an alarm occurs. Other options for both PTZ and fixed cameras are to have the number and quality of video frames increased during an alarm event. • Regular servicing of moving parts, and ensuring that lenses remain clean. • Human intervention: literally, someone to watch the pictures. It is ineffective for a person to attempt to monitor more than a few cameras over a long period, even if that is their only function. Motion detection and pixel analysis software, which are an integral part of most DVR solutions, allow hundreds of cameras to be monitored by one person, who then only needs to pull them up when they go into an alarm state. Various choices can be made about the type of CCTV coverage; clearly, more sensitive areas will require more constant, real-time, and live coverage. Cameras can be programmed to sweep a range or remain static; video monitoring can be programmed to swap pictures on a single screen or provide multiple screens to view all camera pictures all the time. Besides the DVR and NVR solutions, current advancements, including smart video and IP cameras, have a storage capability of up to a week. This makes an NVR solution more practical, but all of these advancements also raise new issues around bandwidth and network reliability. Although MPEG-4 compression helps to minimize some bandwidth issues, an assessment of your existing network should be done to determine which solution is right for you. The use of CCTV has both legal and practical implications for the organization: • Whether digital or video, the amount of visual data recorded and kept will have a storage implication. This may be considerable if data is kept over time for possible use in legal prosecution. • Video tapes must be stored in a way that mitigates against their physical deterioration. Digital records must be kept in a way that asserts their integrity if brought as evidence in a legal prosecution. 297
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® • Recording the actions of individuals may present statutory human rights and privacy implications. For example, it may be necessary to gain the consent of staff to record them, and to place signage around the site that warns visitors that CCTV is in operation. • In using CCTV records as evidence in a prosecution, privacy legislation may require that details of individuals other than the accused be blurred or pixelated. This may include the faces, car number plates, and so on: this requires the technical facility to edit visual records and the often significant human effort to accomplish it. This account of CCTV has so far focused on the type of units we see installed around site perimeters and on buildings. Nowadays, cameras come in all shapes and sizes — some are very small, often intended for covert use. Although these devices have an application (for example, in specific investigations), their deployment must be considered carefully against any infringement of human rights and privacy legislation. It is important to differentiate between overt and covert video recording. It is against the law in some countries to covertly monitor employees. There is also an expectation of privacy in, for example, bathrooms and locker rooms, which make placing even overt cameras a legally indefensible action. Intrusion Detection Systems. While CCTV provides an overall detection
capability, other intrusion detection devices offer protection for specific areas within buildings, for example, doors, windows, and false ceilings. Electrical Circuit. This uses foil or wire contacts placed across door or window frames, carrying a low level of current. When the door or window is opened, the circuit is broken and the alarm triggered. Light Beam. This uses a photoelectric cell that receives a small light source across the secure boundary. When the path of the light to the cell is interrupted, the alarm is triggered. This method has particular application to open boundaries that, for whatever reason, cannot implement a physical barrier, such as a gate or doorway. However, dust or other small materials may trigger false alarms, and attackers may be able to avoid disturbing the light beam as they cross the boundary line. Passive Infrared Detector (PIR). Among the most common intrusion detection devices, PIRs measure light energy within a physical range. When this energy level is changed by the presence of an intruder, the alarm is triggered. PIR systems are effective in detecting heat and movement, but they need to be calibrated carefully to ensure against false alarms. Microwave and Ultrasonic Systems. These systems comprise transceivers and control units. The transceiver units produce and monitor either a measurement of distance (microwave) or an acoustic energy pattern (ultrasonic) across the room and trigger an alarm when this changes. Ultrasonic 298
Physical (Environmental) Security systems have the advantage of invisibility, but can be prone to raising false alarms when significant external sounds impact on the acoustic pattern. Regardless of what intrusion detection systems you deploy, ensure that they are tied into an alarm system that is monitored somewhere and that you have a response procedure in place. CCTV can minimize, but not eliminate, false alarm calls. Portable Device Security. The proliferation of portable computing, remote computing, and telecommuting has raised the issue of physical and procedural protection for these devices, associated media, and the information they manage. This issue extends beyond laptops and PDAs to increasingly integrated devices, such as camera phones, personal dictation machines, and media, including USB sticks.
Theft of such devices is usually to gain the hardware. But of course information — too often sensitive and of value to the individual and the organization — is lost with it. However, the majority of security incidents relate to simply losing equipment, leaving it in taxis, on public transport, in hotel rooms, and so on. Procedural and physical security measures for laptops may include: • Carrying them in unmarked bags or briefcases (as opposed to manufacturers’ bags, which draw attention to their contents) • Transporting the hard disk separately from the laptop (though this is not always convenient) • Using tamper detection measures, tracing software, or invisible marking systems • Protecting against illicit access with tokens, such as smart cards Elsewhere, protection against compromised confidentiality for portable devices may be achieved by disk encryption, password protection, and similar technical measures. As stated in the introduction to this chapter, the inherent inability to provide a physically secure environment places greater emphasis than ever on good procedural security measures, plus education, training, and awareness to promote them to users. Asset and Risk Registers. Registers of physical IT assets can help audit against theft and the acquisition, movement, and decommissioning of systems and security equipment. This may become more important as wireless technology allows systems infrastructures to become more dynamic and mobile.
Records can be made of those members of staff who, for example, hold combination and PIN numbers to access secure zones and security furniture; these can be used to ensure that these items are changed at a pre299
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® scribed frequency and on a timely basis as staff join, move around, or leave the organization. Risk registers covering physical sources and types of threats and vulnerabilities can be used in reviewing and auditing the effectiveness of physical, environmental, and procedural security against changes in the risk, practices, and physical infrastructure of the organization. Information Protection and Management Services Managed Services Organizations that contract out physical security services, such as guarding, building control, and courier services, may have little control over the vetting process and contracts of staff employed by the managed service provider. In these situations, the following issues must be addressed: • The contractor understands and is contractually bound to meet the organization’s physical and procedural security requirements. • The contracting organization has the ability to audit or test the security services provided. • There is a channel of communication between the contracting authority and the contractor to affect changes to procedural and physical security measures as they are needed. Physical security can be compromised during temporary situations, such as office relocations and times when scaffolding is placed against buildings for refurbishment or repair work. Security of hardware assets may be weak when in transit and in temporary storage. And although good physical security may be evident on ground floors, the windows, fire exits, and external stairwells on upper stories may be less well protected. Audits, Drills, Exercises, and Testing Among the most commonly experienced vulnerabilities are failing to implement physical security measures properly and allowing procedures to relax. Security audits report that although physical, environmental, and procedural security may be organized in the security strategy, it is not applied as prescribed in day-to-day working practices. Education, training, and awareness can help resolve this discrepancy (see below), as can auditing those measures and conducting exercises and tests. An audit of compliance with the security strategy should be undertaken at an appropriate frequency — perhaps annually. Audits should also be conducted when any significant breach of security or change in the risk, working practices, or nature of the physical infrastructure indicates this may be needed. Audits may be paper based and review the practical effectiveness of physical and procedural measures against prescribed security operating 300
Physical (Environmental) Security procedures and security manuals, training and awareness materials, and incident reporting statistics. They may also involve interviews with staff and observation of behaviors in the workplace. In these methods, however, staff may feel criticized and exhibit behaviors expected of them, rather than their normal practice. Vulnerability and Penetration Tests Vulnerability and penetration tests may be conducted as part of initial procurement evaluation and routinely as part of an audit exercise. Vulnerability tests may include technical testing of biometric devices against false-negative and false-positive results, and attempting brute-force and lock-picking attacks on security furniture. Penetration tests may include attempts to enter premises and test, or attempt to subvert, secure working practices by, for example: • Social engineering attempts to gain entry while posing as a legitimate member of staff • Testing procedures, such as challenging, by not wearing an appropriate security pass • Testing physical security by, for example, attempting to tailgate staff through entry points Such exercises may be either advertised or conducted under cover, to test how alert staff really are to such intrusions. The results of such exercises can be used to justify the effort required to improve security and to raise awareness and encourage staff to address any vulnerabilities that are discovered. Maintenance and Service Issues Because many physical security mechanisms involve moving parts, and some are open to the corrosive elements of the weather and other agents, security maintenance will need to take into account testing, maintaining, and servicing such items. And contingency measures should be available if these break down for any reason. Regular checks of static elements, such as fencing, walls, and similar barriers, are needed to ensure that any malicious damage or general wear and tear has not diminished their effectiveness. Education, Training, and Awareness As for other aspects of information security, education, training, and awareness can encourage staff to address physical and procedural security requirements. This is particularly important in situations where the organization has diminished control over the physical environment, such as home- or telecommuting scenarios and where the workforce is predominantly mobile. 301
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® Emphasis can be given to those areas of procedural security in particular that are shown in audits and incident reporting statistics to be poorly supported. Analogies can be drawn with health and safety requirements, which may already be understood. Physical and procedural security measures can often be demonstrated to ensure that staff literally see their value to the business of the organization and the protection of their jobs. Summary The need for physical and particularly procedural security has grown alongside the development of wireless technology and an increasingly mobile workforce. Although systems may not always be the target of a malicious physical attack on an organization, they may suffer as much as any other corporate resource as a result. Physical and procedural countermeasures should aim to provide identification and authentication, authorization (access control), and accountability for all individuals who may have physical access to the system. A fourth element is the provision of physical contingency resources and alternative procedures should any aspect of the systems and the services it provides be mitigated. Physical security should be organized in a defense-in-depth strategy: a “layered combination of complementary countermeasures” that together present a variety of protective measures against deliberate and accidental interventions, as well as commonplace environmental threats. This can be provided in a cost-effective way by adopting measures that are already in place to address other security requirements. If so, these measures must be checked to ensure that they remain effective against the risk and that they fit with the security strategy for the system. The effectiveness of procedural security relies on the knowledge, skills, and awareness of staff to enable them to comply with procedures consistently and completely. Analogies can be drawn with health and safety requirements. Physical and procedural security are integral parts of information security and rely on integrating computer security with facilities and other forms of security. This is an area where the CISSP must work effectively with other security specialists to achieve the best security solution for the system. References Mary Lynn Garcia. The Design and Evaluation of Physical Protection Systems. London: Butterworth Heinemann, 2001. Joseph F. Gustin. Facility Manager’s Handbook. New York: Marcel Dekker, 2002.
Physical (Environmental) Security Richard Lack. Safety, Health, and Asset Protection: Management Essentials, 2nd ed. Boca Raton, FL: CRC Press, 2001. James P. Muuss and David Rabern. The Complete Guide for CPP Examination Preparation. New York: Auerbach Publications, 2006. POA Publishing. Asset Protection and Security Management Handbook. New York: Auerbach Publications, 2002. Louis A. Tyska and Lawrence J. Fennelly. Physical Security: 150 Things You Should Know. Amsterdam: Elsevier, 2000.
Sample Questions 1. Which of these statements best describes the concept of defense in depth or the layered defense model? a. A combination of complementary countermeasures b. Replicated defensive techniques, such as double firewalling c. Perimeter fencing and guarding d. Contingency measures for recovery after, e.g., system failure 2. Sprinkler systems to defeat a fire outbreak may include either a dry pipe or wet pipe mechanism. Which of these statements is not true of a dry pipe mechanism? a. It delays briefly before providing water to the fire. b. It uses gas or powder, rather than a fluid, to choke the fire. c. It offers a brief opportunity for emergency shutdown procedures. d. It offers a brief opportunity to evacuate staff from the affected rooms. 3. The geographical location of the site may affect the security requirement if it: a. May be vulnerable to natural disaster (e.g., a floodplain) b. Lacks adequate access for, or the logistical support of, emergency services c. Experiences crime, including burglary, vandalism, street crime, and arson d. All of the above 4. Which of these infrastructure features would most likely present a physical vulnerability for an information system? a. Fire escapes, including external and internal stairways b. The information security architecture c. The corporate compliance policy d. The internal telephone network 5. Which one of these would be the principal practical benefit of utilizing existing physical or procedural measures in an information system’s security strategy? a. They offer duplication of, e.g., access controls. b. They are already tried, tested, and accepted by staff. c. They are managed by facilities staff. d. They are written into corporate procedures. 303
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® 6. Which one of these is least likely to provide a physical security barrier for a system? a. External site perimeter b. Protected zones (e.g., a floor or suite of rooms) within a building c. Communications channels d. Office layout 7. Which of these is a procedural (rather than an administrative or technical) control? a. System logging b. Purging storage media on, e.g., fax, photocopier, or voice mail facilities c. Developing a system security policy d. Configuring a firewall rule base 8. Which of these is not a common type of fire/smoke detection system? a. Ionization b. Photoelectric c. Heat d. Movement 9. Which one of these fire extinguisher classes is most appropriate for controlling fires in electrical equipment or wiring? a. Class A b. Class B c. Class C d. Class D 10. Which one of these is the strongest form of protective window glass? a. Standard plate b. Tempered c. Embedded polycarbonate sheeting d. Embedded wire mesh 11. Which one of these physical intruder detection systems reacts to fluctuations of ambient light energy within its range? a. Electrical circuit b. Light beam c. Passive infrared detector (PIR) d. Microwave system 12. Which one of these physical locking devices requires the knowledge of a set of numbers and a rotation sequence to achieve access? a. Deadbolt lock b. Combination lock c. Keypad d. Smart lock 13. Which one of these is the most critical aspect of ensuring the effectiveness of a CCTV system? a. Positioning cameras at a height that prevents physical attack b. Adequate lighting and positioning to address blind spots 304
Physical (Environmental) Security c. Monitoring of and reaction to camera feeds d. Safe storage of footage 14. In terms of physical security, which one of these is the best measure to prevent loss of data in a mobile computing scenario? a. Carry the laptop in an unmarked bag or briefcase. b. Carry the laptop’s hard disk separately from the laptop. c. Use tamper detection measures or tracing software. d. Restrict access via tokens, such as smart cards. 15. Procedural security measures often fail because staff fail to appreciate why they should use them. Which one of these measures may best address this? a. Security operating procedures b. Security training and awareness c. Disciplinary procedures d. Dissemination of the corporate security policy
Domain 5
Security Architecture and Design
William Lipiczky, CISSP
Introduction
The security architecture and design domain addresses the high-level and detailed processes, concepts, principles, structures, and standards used to define, design, implement, monitor, and secure/ensure operating systems, applications, equipment, and networks. It addresses the technical security policies of the organization as well as the implementation and enforcement of those policies. The security architecture and design must clearly address the design, implementation, and operation of the controls used to enforce various levels of availability, integrity, and confidentiality to ensure effective operation and compliance (with governance and other drivers).
This domain presents the key principles and concepts that are critical to consider when designing security architecture. The information contained in the other domains, when used in accordance with the principles and concepts discussed in this domain, provides a solid basis for evaluating an existing architecture and then building a truly strong enterprise security architecture.
CISSP® Expectations
Designing the security architecture of an information system is crucial to implementing an organization's information security policy. A Certified Information Systems Security Professional (CISSP) must be able to address the following areas competently:
1. Identify the physical components of IT architecture.
2. Discuss the relationship between the various uses of software.
3. Understand the design principles as they relate to the enterprise architecture.
4. Describe how to secure an enterprise architecture.
5. Identify the difference between trusted and nontrusted components of an enterprise.
6. Discuss security models and architecture theory.
7. Identify appropriate protection mechanisms.
8. Discuss evaluation methods and criteria.
9. Understand the role of assurance evaluations.
10. Explain the terms certification and accreditation.
11. Identify the techniques used to provide system security.
Security Architecture and Design Components and Principles
There are three basic components to system architecture: the central processing unit (CPU), storage devices, and peripherals (input/output devices). Each has a specialized role in the architecture. The CPU, or microprocessor, is the brains of a computer system; it performs calculations as it solves problems and performs system tasks. Storage devices provide both long- and short-term storage of information that the CPU either has processed or may process. Peripherals (scanners, printers, modems, etc.) are devices that either input data to or receive the data output by the CPU.
Security Frameworks: ISO/IEC 17799:2005, BS 7799-2, ISO 27001
ISO/IEC 17799:2005, the "Code of Practice for Information Security Management," is an internationally recognized set of controls that focuses on best practices for information security. BS 7799-2:2002 provides instructions on how to apply ISO/IEC 17799 and how to construct, run, sustain, and advance an information security management system. ISO/IEC 17799:2005 addresses 11 categories:
1. Business continuity management mitigates an incident's impact on critical business systems.
2. Access control provides only authorized access to data, mobile communications, telecommunications, and network services, and detects unauthorized activities.
3. System development, acquisition, and maintenance implements security controls into operational and development systems to ensure the security of application systems software and data.
4. Physical and environmental security prevents unauthorized access, damage, and interference to facilities and data.
5. Compliance ensures adherence to criminal and civil laws and statutory, regulatory, or contractual obligations; complies with organizational security policies and standards; and provides for a comprehensive audit process.
6. Human resources security minimizes the risks of human error, theft, and misuse of resources; makes users aware of information security threats and concerns; and disseminates the information needed to support the corporate security policy.
7. Information security organization provides a formal data security mechanism within an organization that includes information processing facilities and information assets accessed or maintained by third parties.
8. Communications and operations management ensures the proper and secure operation of data processing facilities by protecting software, communications, data, and the supporting infrastructure, as well as ensuring proper data exchange between organizations.
9. Asset management protects corporate assets by ensuring data assets receive appropriate protection.
10. Security policy provides management guidance and support for information security.
11. Information security incident management implements procedures to detect and respond to information security incidents.
ISO 27001 is “Information Security Management: Specification with Guidance for Use.” Once it is formally released, it will directly replace BS 77992:2002. ISO 27001 defines an information security management system and creates a framework for the design, implementation, management, and maintenance of IS processes throughout an organization. As with BS 7799, ISO 27001 will continue to complement ISO 17799. They are two distinct documents, but are designed to support each other. Where ISO 17799 is a code of practice, detailing individual controls for potential implementation, ISO 27001 defines the information management system itself, which encompasses the former. Currently, certification is granted against BS 7799. Upon its release, future certifications will be against ISO 27001. Design Principles Diskless Workstations, Thin Clients, and Thin Processing. A diskless workstation is a computer without a hard drive in it, and sometimes without a CD-ROM drive or a floppy drive. The workstation has a network card and video card, and may also have other expansion cards, such as a sound card. A diskless workstation relies on services provided by network servers for most of its operation, such as booting and running applications. Usually the workstation operates as a thin client connecting to a server that provides access to files and runs applications. These thin-client systems replace individual desktop PCs with a centralized server that provides the applications and data users need at individual workstations. Thin clients were used during the mainframe-dominated era, but now PC usage is pervasive. This is both beneficial and problematic. The problem with using PCs is that users get themselves — and the company — in trouble through unauthorized applications, downloads, and errors. By returning control of the applications to centralized servers maintained by knowl309
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® edgeable IT staff, thin-client computing may be successful in addressing these and other security concerns. Diskless workstations are usually described as workstations without any secondary storage. Because there is no storage capability, users do not need to have download capabilities. Other by-products are that this reduces virus vectors, lessens removable media concerns, and improves availability compared to dealing with disk failures on PCs. On the downside, there may be lower availability under network outage or power degradation conditions. Thin Storage. Management of file and database storage is getting out of control and using more and more system administrators’ time. Networkattached storage or CD-ROM libraries are ways to add shared storage capacity to networks. Then, a thin client containing an operating system installed in flash memory is plugged into the network and an IP address assigned. A variety of systems and protocols are supported, such as Common Internet File System/Server Message Block for Microsoft Windows NT, NetWare Core Protocol for Novell NetWare, Network File System for UNIX, Dynamic Host Configuration Protocol for automatic IP addressing, and Simple Network Management Protocol for network management.
The thin-client application normally uses an Internet browser to connect to a central server. This allows for almost universal access to thin-client applications. There are two types of thin-client applications used to access storage management software: application service provider (ASP) and Web-based data warehousing. With ASP, data is stored in a remote server farm that the software provider maintains. This provides a centralized storage area that can be accessed from any Internet connection. However, ASP requires a constant connection to the Internet. If the connection is unavailable to the user or the host, then the software is unable to operate. Web-based data warehousing provides redundancy and allows access even if the Internet connection is disrupted. A transmitter applet transmits data based on user-defined intervals. If the Internet connection is lost, it can be automatically reestablished and any storage changes are automatically sent to the server. The advantage of thin-client applications is that usually they outperform other server connection methods. An added benefit is that to access data, all that is needed is Internet connectivity. In addition, by choosing an application that provides data redundancy, even if the Internet connection is disrupted, access to data is still possible. Operating System Protection. Privilege Levels and Ring Protection. Privilege level controls prevent memory access (programs or data) from less privileged to more privileged levels. Memory access from more privileged to less privileged levels is permit310
Security Architecture and Design ted. Mechanisms called control gates manage transfers from a less privileged level to a more privileged level. The highest level, 0, is used by the operating system; the lowest level, 3, is used by the applications. These privilege levels, which identify what actions can be done by specific processes, are supported by all modern processors and operating systems to some degree. OSs running in ring 0 potentially have unrestricted access to the hardware, and limiting this ring to use by a single OS enables the OS to have complete knowledge of the state of the hardware. An architecture where there are more than two execution domains (or privilege levels) is called a ring architecture because it is normally pictured as a set of concentric rings with the most privileged ring in the center. Each ring has access to its own resources and to the resources available to the rings outside it, but no access to the resources of the more privileged rings inside it. Layering. This is assigning each layer a specific process. Communication between the layers is through well-defined interfaces. This helps ensure that volatile areas of the system are protected from unauthorized access or change. Data Hiding. Data hiding maintains activities at different security levels to separate these levels from each other. This assists in preventing data at one security level from being seen by processes operating at other security levels. Abstraction. Abstraction negates the need for users to know the particulars of how an object functions. They only need to be familiar with the correct syntax for using an object and the nature of the information that will be presented as a result.
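To make the ring model concrete, the following minimal Python sketch models the access rule just described: a subject may use resources in its own ring or in the less privileged rings outside it, but a transfer into a more privileged ring must go through a control gate. This is a conceptual illustration only; real processors enforce these rules in hardware, and the ring assignments and resource names here are invented for the example.

```python
# Conceptual sketch (not a real OS mechanism): numerically lower rings are
# more privileged. A subject may access resources in its own ring or in any
# less privileged (higher-numbered) ring, but never in a more privileged one.

RESOURCES = {
    "kernel_memory": 0,   # ring 0 - operating system
    "device_driver": 1,   # illustrative intermediate ring
    "user_heap": 3,       # ring 3 - applications
}

def can_access(subject_ring: int, resource: str) -> bool:
    """Return True if the subject's ring is privileged enough for the resource."""
    return subject_ring <= RESOURCES[resource]

# An application in ring 3 cannot reach ring 0 memory directly...
assert can_access(3, "kernel_memory") is False
# ...but the operating system in ring 0 can reach everything.
assert can_access(0, "user_heap") is True

def control_gate(subject_ring: int, service: str) -> str:
    """Model of a control gate: the sanctioned way to cross into ring 0."""
    # A real gate would validate arguments and switch privilege levels here.
    return f"ring {subject_ring} entered ring 0 via gate for '{service}'"

print(control_gate(3, "read_file"))
```

In practice most mainstream operating systems use only two of the four x86 rings (ring 0 for the kernel and ring 3 for applications), but the access rule is the same.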
Hardware The term mainframe originally referred to the very large computer systems housed in very large steel-framed boxes and was used to differentiate them from the smaller mini- or microcomputers. A large company such as IBM usually manufactured these systems. These mainframes were used in Fortune 1000 companies to process commercial applications and were also employed by federal, state, and local governments. The term has been used in numerous ways over the years, but most often it describes the successive families of IBM computer systems starting with System/360. This term also applies to comparable systems built by other companies. Historically, a mainframe has been associated with centralized rather than distributed computing. There appears to be a prevailing belief that the computing environment is still very much a distributed computing environment on PCs. This is not 311
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® necessarily true. The business world has been using mainframes for over 30 years. This long history, combined with the advances sure to take place over the next two to three decades, will keep the mainframe as probably the most reliable platform. By consolidating multiple vendor platforms and providing scalability, mainframes are an effective way to minimize costs. Multiple operating systems can be running on a mainframe, most notably numerous instances of Linux. Other uses include data warehousing systems, Web applications, financial applications and middleware. Mainframes provide reliability, scalability, and maintainability, with the lower total cost of ownership and credible disaster recovery. This section looks at the main components of desktop architectures, applicability issues related to varying users, and various aspects of element design and scaling. The desktop environment consists of numerous parts: client devices, applications, services and servers, and OS software. A wide range of client devices has become available over the years. This document addresses the choices of desktop or mobile users and looks at both thick- and thin-client device options. One widespread solution is the deployment by many organizations of traditional desktop systems. These systems were usually based on the Intel x86 architecture and Microsoft operating software. They provided autonomous processing power to the user and the capacity to use a varied range of personal productivity and line-of-business applications. A drawback of these systems is that the hardware and operating software may need to be refreshed on a frequent basis. Patch management and hardware upgrades need to be addressed on a continuous basis. Because the open-source movement has gained support, there has been an increasing deployment of Linux desktop systems. Some popular Linux distributions now contain a number of packages that supply the needs of the desktop user. It has now become feasible to duplicate a lot of the functionality of the traditional desktop clients on a Linux system. There has also been significant growth of interest in support for the Apple Macintosh. This is particularly true for users of specialized graphical and publishing applications that are developed specifically for Macintosh. These systems are integrated into an overall desktop architecture through the use of Mac OS 9 and OS X clients, e-mail and browser clients, and office productivity products. Because there is such a wide variety of clients available, many organizations look for a solution that will enable support for the widest variety of client devices. Some of the architecture decisions being addressed follow. How well do the clients integrate with office productivity suites? Currently, Microsoft Office 11 and the Star Office 7 software are compatible with some, but not all, PDA office personal productivity applications. How easy is it to perform e-mail and data synchronization? There are emerging stan312
Security Architecture and Design dards, such as SyncML, that may simplify synchronization between handheld devices and enterprise mail message stores. Looking at a broader picture, there is a need to choose integration software that delivers server-based applications. These can be found in a typical client/server or distributed environment or in a thin-client architecture. Let us not forget UNIX workstations. A large installed base of technical workstations based on Reduced Instruction Set Computer (RISC) processors and various flavors of UNIX still exists. Many UNIX users must interoperate with and use office productivity applications, Internet browsers, and other applications. A popular solution is to install open-source equivalents, as most of these have been ported to the popular variations of UNIX. Another solution to deal with a heterogeneous environment is the application integration solution first seen as Windows Terminal Services (WTS). WTS delivers a presentation layer service to client devices through Microsoft’s Remote Desktop Protocol (RDP). A suitably equipped client could be a PC running Windows ME or XP, or it could be a system that uses RDP. A variation of this solution is to install Citrix MetaFrame on the Windows server to deliver applications to devices, such as a Windows-based terminal or a traditional desktop PC with Citrix client software. A significant feature of an organization’s IT environment is whether it is centralized or distributed. Having a centralized system means that users access a single source where they log in, manage their accounts, and collect the data that results from user requests. In distributed environments, users log into their own computer and data is saved locally or remotely at various sites. There is no central authority that administers user authentications and accounts or manages data storage. No central server is necessary, although servers may have an assortment of roles in such systems. Distributed environments support a wide range of diverse software applications, real-time data access, and varied media formats and data storage. In addition, distributed systems support diverse devices, such as desktops and laptop computers, thin clients, cell phones, or other kinds of handheld devices. Finally, because there may be a wide range of resources accessed, updated, and stored, there needs to be a way to track user interactions with the system. A distributed file-sharing network has a common or universal file format (e.g., Network File System [NFS]) to allow an unknown array of files to be stored, recognized, and exchanged by any authorized user on the network. For more functional software, such gaming or instant messaging, all involved users must have a common software application. This software is obtained in diverse ways, including propagating software around the network from one user to another. 313
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® One challenge of distributed systems is the need to have a central naming repository, which generates universally unique identifiers (UUIDs). When a user requests a resource, a search is done within a potentially large network to find a particular resource, thus requiring a precise specification of the resource. However, authorization may be the biggest challenge, as there is no central authority to trust. In a fully distributed system, trust is typically done through cryptographic digital signatures via a public key cryptography system. There is another kind of distributed system that has evolved because of networks supporting peer-to-peer exchanges of data and software. Napster was one of the first prominent examples of a swapping community where there was minimal involvement of a centralized authority. Rather, each individual, or peer, logs on and is connected to all other peers in a network. This permits the viewing and exchanging of files with any other peer. Although Napster used a central server to set up the interconnected network of users, current peer-to-peer implementations use discovery. All entities connecting to the network use the same kind of software. This enables them to discover all other peers connected to the system and running the same software. This collection of interconnected users provides a new type of functionality that does not need a central authority to negotiate transactions or store data. Personal Digital Assistants (PDAs) and Smart Phones. Apple Newtons, Palm Pilots, and Microsoft Handheld PC devices were the original personal digital assistants (PDAs). They were designed to be an electronic organizer or portable day planner that was easy to use and capable of sharing information with PCs. They were not developed as a replacement for a PC, but rather as an extension of the PC.
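Returning briefly to the distributed-systems discussion above, the following short sketch illustrates the two mechanisms mentioned there: naming a resource with a universally unique identifier and establishing trust through a public key digital signature. It assumes the third-party Python cryptography package and uses an Ed25519 key purely as an example; real peer-to-peer systems differ widely in how keys are generated, distributed, and trusted.

```python
# Sketch of UUID naming plus signature-based trust for a distributed system.
# Requires the third-party "cryptography" package.
import uuid

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# 1. Name a resource with a UUID so any peer can refer to it unambiguously.
resource_id = uuid.uuid4()
print("resource:", resource_id)

# 2. The publishing peer signs the resource name with its private key...
private_key = Ed25519PrivateKey.generate()
signature = private_key.sign(resource_id.bytes)

# ...and any peer holding the matching public key can verify the signature.
public_key = private_key.public_key()
public_key.verify(signature, resource_id.bytes)  # raises InvalidSignature if tampered
print("signature verified")
```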
The number of PDA and mobile devices has grown considerably in the past four or five years. Products vary from sophisticated mobile phones, such as third-generation (3G) handsets, to full-featured PDAs. These PDAs are basically small-footprint PCs that contain an operating system, productivity and business software, and a browser. At least three technologies have taken hold in this arena. One of these is Java technology-based solutions. These are mobile phones, PDAs, and some gaming devices that function with Java Virtual Machine (JVM) software. These devices generally have standard business productivity functionality as well as use Java applications, such as games, on a pay-peruse basis. Another PDA technology is based on variations of the PALM operating system, such as Palm, Handspring, and Sony. These devices tend to have more capabilities than phone handsets and provide a larger range of builtin applications. 314
Security Architecture and Design A third PDA technology is the use of Win CE or the PocketPC. In essence, these devices use subsets of the Microsoft Windows operating system, thus presenting a potential platform for porting Windows-based applications. PDAs can now manage personal information, such as contacts, appointments, and to-do lists. Current PDAs connect to the Internet, function as global positioning system (GPS) devices, and run multimedia software. They can also support Bluetooth technology and wireless wide area networks (WANs). They have memory card slots that accept flash media that can serve as additional storage for files and applications. Some PDAs provide audio and video support, incorporating MP3 players, a microphone, a speaker, and headphone jacks along with a built-in digital camera. Integrated security features such as a biometric fingerprint reader can also be included. A smart phone can be described as either a cell phone with PDA capabilities or a traditional PDA with added cell phone capabilities, depending on the style and manufacturer. Features of these devices include various combinations of cell phone and PDA functionality, a cellular service, Internet access through cellular data networks, and an operating system. Central Processing Unit (CPU). Processing occurs inside the computer in an area called the central processing unit (CPU). Processing is the conversion of inputted raw data into useful information called output. The processor manages all of the system’s devices as well as doing the actual data processing. From a physical perspective in today’s terminology, the CPU typically is one or more microprocessor chips and is located on the computer’s motherboard. The CPU and memory operate together, with the memory holding data and the next set of program instructions as the CPU uses its current instructions to perform calculations on the data. When the CPU requires data, it retrieves it from memory. Multitasking, Multiprocessing, and Multithreading. As mentioned previously, the CPU is the vital component of a computer because it executes programs and controls hardware operations. To take advantage of the speed of a processor, programmers split programs into multiple, cooperating processes. A multitasking operating system switches from one process to another quickly to speed up processing. To the user, it appears to be simultaneous execution even though only one process is running at any given time on the CPU. However, there needs to be a mechanism in place that will allow the OS to start running an application with the probability, but not the certainty, that the application will sooner or later return control to the operating system.
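As a rough illustration of the time slicing just described, the toy Python scheduler below gives each process a short quantum on a single simulated CPU and returns unfinished processes to the back of the ready queue. The process names and work units are invented; a real operating system scheduler is driven by timer interrupts and far richer policies.

```python
# Toy round-robin scheduler: each process gets a short turn, so the work
# appears simultaneous to the user even though only one runs at a time.
from collections import deque

def round_robin(processes, quantum=2):
    """processes maps a process name to its remaining units of work."""
    ready_queue = deque(processes.items())
    while ready_queue:
        name, remaining = ready_queue.popleft()
        ran = min(quantum, remaining)
        remaining -= ran
        print(f"{name} ran for {ran} unit(s); {remaining} remaining")
        if remaining > 0:
            # Time slice expired: the process goes to the back of the queue.
            ready_queue.append((name, remaining))

round_robin({"editor": 3, "mail": 5, "browser": 2})
```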
Higher performance can be achieved by increasing the number of processors in a system. Powerful computers such as servers typically have several processors handling different tasks, although there must be one 315
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® processor to control the flow of instructions and data through the supplementary processors. We call this type of system a multiprocessing system. As a program is executing, it runs each line of code in sequence. However, there are times when a subsequent step is not dependent upon the completion of a previous step. If the programmer requests a new thread to be generated for the later step, the CPU can be asked to do something else at the same time the application continues doing its current task. An example might be a spreadsheet calculation running at the same time that the main application asks a user for input. Multithreading, then, is the concept whereby the operating system time slices the threads and gives one thread some time on the CPU, then switches to another thread and lets it run for a while. This routine continues until the first thread has its turn again. In essence, the threads are split up and given to the CPU in an interleaved manner. Each thread operates as though it has exclusive access to the CPU, even though it runs only for a short time and then stops until it runs again in a short time. The ability of a system to do multitasking, multiprocessing, and multithreading can lead to some potential security vulnerabilities. One challenge is how to protect the multiple processes/tasks/threads from the other processes/tasks/threads that may contain bugs or exhibit unfriendly actions. Techniques need to be implemented to measure and control resource usage. For example, when a system is running many different tasks, being able to measure each task’s total resource usage is a desired piece of managing security in such a system. This information needs to be gathered without incurring a significant performance penalty and without changing the manner that tasks are written and executed. If this type of functionality is not available, a mischievous task might assign and seize enough memory to effect a denial-of-service attack, a crash, or a system slowdown. The bottom line, even though there are many advantages to implementing multiprocessing, multitasking, and multithreading, is that the more subtasks a system creates, the more things that can go awry. Storage. Primary Storage. As data waits for processing by the CPU, it sits in a staging area called primary storage. Whether implemented as memory, cache, or registers (part of the CPU), and regardless of its location, primary storage stores data that has a high probability of being requested by the CPU, so it is usually faster than long-term, secondary storage. The location where data is stored is denoted by its physical memory address. This memory register identifier remains constant and is independent of the value stored there. Some examples of primary storage devices include randomaccess memory (RAM), synchronous dynamic random-access memory (SDRAM), and read-only memory (ROM). Random-access memory is volatile, that is, when the system shuts down, it flushes the data in RAM. Con316
Security Architecture and Design trast this to read-only memory (ROM), which is nonvolatile storage that retains data even when electrical power is shut off. The closer data is to the CPU, the faster it can be retrieved and thus processed. Data is sent to the CPU through various input devices (keyboards, modems, etc.), cache, main memory, and disk storage devices. As the data travels to the CPU, it typically moves from storage devices (disks, tapes, etc.) to main memory (RAM) to cache memory, finally arriving at the CPU for processing. The further data is from the CPU, the longer the trip takes. In fact, if one were to compare speed of access to data, retrieving data from disk storage takes the longest, retrieval from random-access memory (RAM) is faster than disk storage, and cache memory retrieval takes the least amount of time. Cache memory can be described as high-speed RAM. Optimally designed cache can reduce the memory access time because data moves from the slower RAM to the faster cache then to the CPU. This process speeds up the CPU’s access to the data and thus improves the performance of program execution. Secondary Storage. Secondary storage holds data not currently being used by the CPU. In addition to being nonvolatile, it has higher capacity than primary storage. Computer systems use multiple media types for storing information as both raw data and programs. This media differs in storage capacity, speed of access, permanency of storage, and mode of access. Fixed disks may store up to hundreds of gigabytes in personal computers and up to hundreds of terabytes in large systems. Fixed-disk data access is typically done randomly and is slower than RAM access. However, data stored on fixed disks is permanent in that it does not disappear when power is turned off, although data can be erased and modified. Dismountable media devices can be removed for storage or shipping and include floppy diskettes, which are randomly accessed; magnetic tapes, with gigabytes of storage and either sequential or random access (DLT, SDLT, 8-mm DAT); optical compact disks (CDs), with 650 to 870 MB of storage per CD; and high-capacity DVDs, with 50 to 100 GB of storage. Both CDs and DVDs use random access. Virtual Memory. Most operating systems have the ability to simulate having more main memory than is physically available as main memory. This is done by storing part of the data on secondary storage, such as a disk. This can be considered a virtual page. If the data requested by the system is not currently in main memory, a page fault is taken. This condition triggers the operating system handler. If the virtual address is a valid one, the operating system will locate the physical page, put the right information in that page, update the translation table, and then try the request again. Some other page might be swapped out to make room. Each process may have its own separate virtual address space along with its own mappings and protections. 317
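The page-fault sequence described above can be sketched in a few lines of Python. This is a conceptual model only (the page contents, table size, and eviction rule are invented); it simply shows a translation-table miss triggering a load from the backing store, an eviction when the simulated physical memory is full, and a successful retry of the access.

```python
# Conceptual page-fault sketch: look up a virtual page in the translation
# table; on a miss, load it from disk, evicting another page if needed.
DISK = {0: "code", 1: "heap", 2: "stack", 3: "rarely-used data"}  # backing store
page_table = {}                                                   # virtual page -> contents
MAX_RESIDENT = 2                                                  # tiny "physical memory"

def access(virtual_page):
    if virtual_page not in page_table:                 # page fault
        print(f"page fault on page {virtual_page}")
        if len(page_table) >= MAX_RESIDENT:            # no free frame: swap one out
            victim = next(iter(page_table))
            print(f"  evicting page {victim}")
            del page_table[victim]
        page_table[virtual_page] = DISK[virtual_page]  # update the translation table
    return page_table[virtual_page]                    # the retried access now succeeds

for page in (0, 1, 0, 2, 3):
    access(page)
```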
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® One of the reasons that virtual memory was developed is that computer systems have a limited amount of physical memory, and often that amount of RAM is insufficient to run simultaneously all of the programs that users want to use. For example, with the Windows operating system loaded and an e-mail program, along with a Web browser and word processor, physical memory may be insufficient to hold all of the data. If there were no such entity as virtual memory, the computer would not be able to load any more applications. With virtual memory, the operating system looks for data in RAM that has not been accessed recently and copies it onto the hard disk. The cleared space is now available to load additional applications (but within the same physical memory constraints). This process occurs automatically, and the computer functions as though it has almost unlimited RAM available. Because hard disks are cheaper than RAM chips, virtual memory provides a good cost-effective solution. There are potential downsides to using virtual memory, especially if it is not configured correctly. To take advantage of virtual memory, the system must be configured with a swap file. This swap or page file is the hard disk area that stores the data contained in the RAM. These pages of RAM, called page frames, are used by the operating system to move data back and forth between the page file and RAM. When it comes to accessing data, the read and write speeds of a hard drive are a lot slower than RAM access. In addition, because hard drives are not designed to constantly access tiny bits of data, if a system relies too much on virtual memory, there may be a sizable negative impact on performance. One solution is to install sufficient RAM to run all tasks simultaneously. Even with sufficient physical memory, the system may experience a small hesitation as tasks are changed. However, with the appropriate amount of RAM, virtual memory functions well. On the other hand, with an insufficient amount of RAM, the operating system continuously has to swap data between the hard disk and RAM. This thrashing of data to and fro between the disk and RAM will also slow down a computer system. Input/Output Devices. As was described earlier, data needs to be inputted and processed and output generated. The data is transferred between numerous locations — from disk to CPU or from the CPU to memory or from memory to the display adapter. It would be unrealistic to have discrete circuits between every pair of entities. For instance, throughput would be too slow. However, when a bus concept is implemented, a shared set of wires connects all the computer devices and chips. Certain wires transmit data; others send control and clocking signals. Addresses identifying specific devices or memory locations are transmitted and, when a device’s address is transmitted, the corresponding device then transfers data across the wires to the CPU, RAM, display adapter, etc.
Security Architecture and Design Data is the raw information fed to the computer, and programs are the collection of instructions that provide directions to the computer. To tell a system what tasks to perform, commands are entered into the system by the user. For ease of use, input takes various forms. Commands and responses can be entered locally via a keyboard or mouse, with menus and icons, or remotely from another system or peripheral. The result of computer processing is considered output. This output is in binary or hexadecimal numbers, but for users to understand the output, it takes the form of alphanumeric characters and words that are interpreted by humans as video, audio, or printed text. Thus, output devices may be computer displays, speaker systems, laser printers, and all-in-one devices. Inputs are the signals received through an interface, and outputs are the signals sent from the interfaces. A person (or another computer system) communicates with the computer by using these interfaces (I/O devices.) In summary, the CPU and main memory, working in tandem, are the core processes of a computer, and the transfer of information from or to that duo, for example, retrieving from and storing data to a disk drive, is considered to be I/O. Communications Devices. Software programs called drivers control the input and output devices and the communication channels that are used for system I/O. Drivers enable the OS to control and communicate with hardware. Different signals require different interfaces that differ according to the communications channel of the I/O device. A Universal Serial Bus (USB) device communicates through a USB cable attached to a USB port. The high-speed 2.0 standard supports data transfer rates of 480 Mbps and connects up to 127 peripheral devices, such as removable disk drives, mouses, printers, and keyboards. Serial and parallel devices require their appropriate interfaces. Networks and Partitioning. Computer networks are a crucial ingredient of most organizations. Local area networks bind together computer equipment within a building or over a whole site. This allows users with common goals and interests to share resources such as disks, printers, or applications. Wide area networks are used to exchange data as well as a means to access remote resources. Some wide area networks are regional networks that join systems throughout an organization, regardless of location — be it within a city, state, or region. Wide area networks may also be national, international, or global in scope. The Internet is a loose association of wide area networks. A workstation is on the Internet if it is part of a local area network that is, in turn, connected to a regional or national network that is also on the Internet.
As this discussion shows, there is an underlying theme when defining networks. Organizations can develop policies for each type of network 319
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® (and its associated resources) under its control and define who, what, when, and how resources — computers, data, applications, etc. — may be accessed. In fact, security policies can be designed and implemented to be unique to each network partition. These network partitions are typically trusted areas that are separated from untrusted areas by an imaginary boundary, sometimes referred to as the security perimeter. This separation is possible because organizations can implement a combination of hardware, software, and other controls to enforce their security policy. Further control is available by implementing a system that validates all accesses to every resource. This reference monitor intercepts every request of a subject to an object and verifies that the credentials of the subject meet the criteria for object access. Software Software is a succession of related instructions, performed one step at a time by the CPU, to achieve a specific task. Software establishes how computers respond to input, what processing will occur, what data will be displayed, and the output. There are at least three types of programs: operating systems, programming languages, and middleware and applications. Operating Systems. The operating system (OS) is the software that controls the operation of the computer from the moment it is turned on or booted. The OS controls all input and output to and from the peripherals, as well as the operation of other programs, and allows the user to work with and manage files without knowing specifically how the data is stored and retrieved. In multiuser systems, operating systems manage user access to the processor and peripherals and schedule jobs.
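The reference monitor concept introduced above under Networks and Partitioning can be sketched in a few lines: every request by a subject for an object is intercepted and checked against an access rule base before the object is touched. The subjects, objects, and rules below are invented for illustration; a real reference monitor sits inside the trusted computing base rather than in application code.

```python
# Conceptual reference monitor: mediate every subject-to-object request.
ACCESS_RULES = {
    ("alice", "payroll.db"): {"read"},
    ("backup_job", "payroll.db"): {"read", "write"},
}

def reference_monitor(subject, obj, action):
    """Grant the request only if the rule base allows it; otherwise refuse."""
    allowed = ACCESS_RULES.get((subject, obj), set())
    if action not in allowed:
        raise PermissionError(f"{subject} may not {action} {obj}")
    print(f"granted: {subject} -> {action} -> {obj}")

reference_monitor("alice", "payroll.db", "read")        # permitted by the rules
try:
    reference_monitor("alice", "payroll.db", "write")   # not permitted
except PermissionError as denied:
    print("denied:", denied)
```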
Examples of operating systems are Microsoft Windows, Apple’s MacOS X, various versions of UNIX and Linux, and mainframe systems commonly using proprietary operating systems, such as IBM’s MVS, developed by their manufacturers. Although functions performed by operating systems are similar, it can be very difficult to move files or software from one to another; many software packages run under only one operating system or have substantially different versions for different operating systems. System Kernel. The kernel is the core of an operating system, and one of its main functions is to provide access to system resources, which includes the system’s hardware and processes. The kernel supplies the vital services that make up the heart of computer systems; it loads and runs binary programs, schedules the task swapping, which allows computer systems to do more than one thing at a time, allocates memory, and tracks the physical location of files on the computer’s hard disks. The kernel provides these services by acting as an interface between other programs operating under its control and the physical hardware of the computer; this insulates 320
Security Architecture and Design programs running on the system from the complexities of the computer. For example, when a running program needs access to a file, it does not simply open the file. Instead, it issues a system call asking the kernel to open the file. The kernel takes over and fulfills the request, then notifies the program of the success or failure of the request. To read data from the file requires another system call. If the kernel determines the request is valid, it reads the requested block of data and passes it back to the program. The security kernel is the hardware, firmware, and software elements of a trusted computing base that implement the reference monitor concept. It must mediate all accesses, be protected from modification, and be verifiable as correct. System States. The CPU has two states: a supervisor state and a problem state. In supervisor state (privilege or kernel mode), the CPU is operating at the highest privilege level on the system, and this allows it to access any system resource (data and hardware) and execute both privileged and nonprivileged instructions. This enforces the separation of applications from the operating system. Thus, applications run in the problem state (nonprivileged or user mode) and have limited access to system data and hardware. When operating in the problem or user state, only nonprivileged instructions can be run, such as instructions or commands that applications execute.
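As a small illustration of the system-call pattern described under System Kernel, the Python fragment below asks the kernel to open a file, issues a separate call to read from it, and then releases the descriptor. Python's os functions are thin wrappers over the underlying system calls on most platforms; the file path is illustrative and may not exist on every system.

```python
# The program never touches the disk itself: each step is a request to the
# kernel, which performs the privileged work and returns the result.
import os

fd = os.open("/etc/hostname", os.O_RDONLY)   # system call 1: open -> file descriptor
data = os.read(fd, 4096)                     # system call 2: read a block of data
os.close(fd)                                 # system call 3: release the descriptor
print(data.decode(errors="replace"))
```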
When a processor is executing in supervisor mode, some of the privileged instructions it can execute are the ability to control I/O operations and to give instructions to change processor mode. This mode can also access privileged address spaces, such as the data structures within the operating system and other processes’ address spaces, as well as change and create address spaces. The operating system changes to user mode to run applications. This mode provides the ability to use the operating system standard instructions, such as load and store general registers to and from memory. User mode can access only subsets of memory and cannot perform privileged instructions. This helps in guaranteeing that only individuals (programs, processes, etc.) that have supervisory privileges may access the additional functionality needed to perform such functions as system maintenance. System compromises have occurred, where an intruder has entered a system via an application running in user mode. Once on the system, vulnerabilities are then exploited that permit the intruder to escalate to the privilege mode and have unrestricted access to the system. Application Programs. Applications are programs used for all purposes other than performing operating system chores or writing other programs (programming languages). Applications include word processors, spread321
Applications include word processors, spreadsheets, database management systems, airline reservation systems, and payroll systems. Word processors are applications that modify or edit the contents of files. Word processors, as well as other applications, have printer drivers that link them to a printer so that a user can obtain a hard copy of the results of the processing. Databases are designed to create, edit, manipulate, and analyze data. Many databases use the same language, Structured Query Language (SQL), for formulating queries. Spreadsheets allow users to work with numerical data in tabular form. Processes and Threads. A program is a set of instructions, along with the information necessary to process those instructions. When a program executes, it spawns a process, an instance of that program. The process requests resources and refers to them through handles or descriptors.
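For example, a program can ask the operating system to spawn another process and is handed back an identifier with which to track it; a minimal Python sketch (the command shown is illustrative only):

import subprocess

# Spawning a child process: the operating system allocates resources for it
# and returns a handle (here, a Popen object wrapping the process ID).
child = subprocess.Popen(["echo", "hello"])
print("child process id:", child.pid)

# Wait for the child to exit; its status is recorded in the process table.
child.wait()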
The operating system allocates the required resources, such as memory, to run the program. A process progresses through phases from its initial entry into the system until it completes or exits. From the process’ point of view, it is either running or not, and the status of each process is maintained in a process table. One of the challenges from a security perspective is to make sure that only authorized processes are running. When a process requests resources, it creates one or more independent threads. There is not a parent/child relationship between threads as there is for processes, because threads may be created and joined by many different threads in the process. Threads can be created by any thread, joined by any other, and have different attributes and options. A thread can be considered a lightweight process. Upon creation, a process is allocated a virtual address space as well as control of a resource (a file, I/O device, etc.). This process (or task) has protected access to processors, other processes, files, and I/O resources. As it is executing, it becomes a lightweight process or thread. This thread is either running or ready to run. If it is not running, its context is saved. When it is executing, a thread has access to the memory space and resources of its process. Thus, it takes less time to create a new thread than a process, because the newly created thread uses the current process’ address space. Communication overhead between threads is minimized because the threads share everything. Because the address space is shared, data produced by one thread is immediately available to all other threads. Just as multiple processes can run on some systems, there can also be multiple threads running (multithreading). There are two major disadvantages to using threads: deadlocks and blocking. Deadlocks occur when two or more threads stop executing while waiting for the same resource. This happens because each thread is holding a resource, or the lock that controls it, that the other needs.
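A minimal Python sketch of this situation, with two threads that each acquire a pair of locks in the opposite order (the names and timing are illustrative only):

import threading, time

lock_a = threading.Lock()   # guards resource A
lock_b = threading.Lock()   # guards resource B

def worker_one():
    with lock_a:
        time.sleep(0.1)       # give the other thread time to take lock_b
        with lock_b:          # waits forever: worker_two already holds lock_b
            pass

def worker_two():
    with lock_b:
        time.sleep(0.1)
        with lock_a:          # waits forever: worker_one already holds lock_a
            pass

t1 = threading.Thread(target=worker_one)
t2 = threading.Thread(target=worker_two)
t1.start(); t2.start()        # the program now hangs until it is killed

Acquiring locks in a single agreed-upon order is the usual way to avoid this trap.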
Because each thread is waiting for the other to finish, completion of the entire process is stalled until the process is killed or restarted. The second disadvantage, blocking, occurs when a thread makes certain system calls, such as an I/O request. The call will not return until it has completed or the system call is interrupted by a signal to the process. If a fault occurs during the call, it may take an extended time for the call to come back (if it ever does). For this period, the thread cannot execute any other instruction. Middleware. Middleware is connectivity software that enables multiple processes running on one or more machines to interact. These services are collections of distributed software that sit between the application running on the operating system and the network services, which reside on a network node. The main purpose of middleware services is to help solve many application connectivity and interoperability problems.
In essence, middleware is a distributed software layer that hides the intricacies of a heterogeneous distributed environment consisting of numerous network technologies, computer architectures, operating systems, and programming languages. Some of the services provided are directory services, transaction tracking, data replication, and time synchronization, all of which improve the distributed environment. Some examples are workflow, messaging applications, Internet news channels, and customer ordering through delivery. Firmware Firmware is the storage of programs or instructions in read-only memory (ROM). Typically, this software is embedded into hardware and is used to control that hardware. Because ROM is nonvolatile, these programs and instructions will not change if power is shut off, but instead become a permanent part of the system. User manipulation of the firmware should not be permitted. Usually, firmware is upgradeable and is stored in electrically erasable programmable read-only memory (EEPROM). This is handy in those instances where the firmware has bugs and an upgrade will fix the problems. The hardware itself is not upgradeable without substituting portions of it. Therefore, vendors attempt to store as many important controls as possible in the firmware in case changes need to be made. From the vendor’s perspective, if a bug is discovered, it is preferable to notify the affected clients to upgrade the firmware than to replace the product. Examples of devices with firmware are computer systems, peripherals, and accessories such as USB flash drives, memory cards, and mobile phones. Trusted Computing Base (TCB) The Orange Book (Department of Defense Trusted Computer System Evaluation Criteria) defines the trusted computing base.
The TCB is the combination of all hardware, firmware, and software responsible for enforcing the security policy. The ability of a trusted computing base to correctly enforce a security policy depends solely on the mechanisms within the TCB and on the correct input by system administrative personnel of parameters (e.g., a user’s clearance) related to the security policy. Reference Monitor The reference monitor is also an Orange Book concept and refers to an abstract machine that mediates all accesses to objects by subjects. Because the reference monitor is an access control mechanism, it must be auditable to ensure it is performing its role effectively and that it is always invoked. Security Models and Architecture Theory A security model formally describes a security policy. The role of a security policy is to document the security requirements of an organization. The security policy will then become the guidance document that will be followed to help an organization defend itself against security threats and mitigate risks. Security researchers create access control models as a way of formalizing security policies. Whereas security policies describe the rules about who is allowed to do what to a system or network, access control models designate the rules and explain how the system makes authorization choices. Each access control model provides an element or concept that is different from the others. Some models are founded on mathematical equations that provide proof of the model’s security to a system, whereas others are based solely on a recognized necessity. In terms of data sensitivity and data integrity, two models stand out: the Bell–LaPadula and the Clark–Wilson. The Bell–LaPadula model concentrates its efforts on providing security while maintaining data sensitivity, and the Clark–Wilson model focuses on data integrity. Both models attack the concerns of data security from differing points of view, each with pros and cons.* Lattice Models Lattice-based access control is a mechanism for enforcing one-way information flow, which can be applied to confidentiality or integrity security. Users are assigned security clearances and data is classified. The system evaluates the clearance of the user against the classification of the data to determine access. Security labels are attached to all objects.
*Patricia Ferson, I’ll take an order of data sensitivity with some integrity on the side: finding a balance within access control models, Information Systems Security, 13, 22, 2004.
Lattice-based access control is one of the essential ingredients of computer security, and models (e.g., Bell–LaPadula [BLP], Biba, Chinese Wall, etc.) are designed to deal with this information flow. The basic principle behind information flow is to control access by assigning every object a security class. (Note: The Orange Book definition of a lattice is “a partially ordered set for which every pair of elements has a greatest lower bound and a least upper bound.”) State Machine Models State machine models capture the current security posture of an automated information system (AIS). According to its rule set, which is determined by a security policy, an AIS’s secure state can only change at distinct points in time, such as when an event occurs or a clock triggers it. Thus, upon its initial start-up, the AIS checks to determine if it is in a secure state. Once the AIS is determined to be in a secure state, the state machine model will ensure that every time the AIS is accessed, it will be accessed only in accordance with the security policy rules. This process will guarantee that the AIS will transition only from one secure state to another secure state. Research Models Noninterference Models. The goal of a noninterference model is to help ensure that high-level actions (inputs) do not determine what low-level users can see (outputs). Most of the security models presented are secured by permitting restricted flows between high- and low-level users. The noninterference model maintains activities at different security levels to separate these levels from each other. In this way, it minimizes leakages that may happen through covert channels, because there is complete separation (noninterference) between security levels. Because a user at a higher security level has no way to interfere with the activities at a lower level, the lower-level user cannot get any information from the higher level. Information Flow Models. Information flow models have a similar framework as BLP in that objects are labeled with security classes in the form of a lattice and the information the object represents may flow from one data set to another without concern for direction. Logically, it could flow upward or at the same level, if allowed.
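To make the lattice idea concrete, the Python sketch below models a security label as a sensitivity level plus a set of categories; the levels, categories, and labels are invented for illustration. One label dominates another when its level is at least as high and its category set contains the other’s, and the least upper bound of two labels is the lowest label that dominates both:

LEVELS = {"UNCLASSIFIED": 0, "CONFIDENTIAL": 1, "SECRET": 2, "TOP SECRET": 3}

def dominates(label_a, label_b):
    # True if label_a dominates label_b: higher or equal level and a
    # superset of the categories.
    level_a, cats_a = label_a
    level_b, cats_b = label_b
    return LEVELS[level_a] >= LEVELS[level_b] and cats_a >= cats_b

def least_upper_bound(label_a, label_b):
    # The smallest label that dominates both inputs.
    level = max(label_a[0], label_b[0], key=LEVELS.get)
    return (level, label_a[1] | label_b[1])

# One-way information flow: a subject may read an object only if the
# subject's clearance dominates the object's classification.
clearance = ("SECRET", {"CRYPTO"})              # hypothetical user clearance
classification = ("CONFIDENTIAL", {"CRYPTO"})   # hypothetical data label
print(dominates(clearance, classification))     # True, so the read may be allowed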
Bell–LaPadula Confidentiality Model The BLP model (the confidentiality model) is perhaps the best known and most significant security model, and it is described in the Orange Book, the Trusted Computer System Evaluation Criteria (TCSEC). Bell–LaPadula is a state machine model that helps ensure the confidentiality of an automated information system (AIS). This is accomplished by using mandatory access control. Mandatory access control is based on labeling both objects
(with their classifications) and subjects (with their clearances). The system (reference monitor) compares the classification with the clearance and allows access only if the clearance is equal to or higher than the classification. Because not all subjects that have an appropriate clearance need access, the system owner must also allow access by providing a need-to-know decision. However, the owner may not allow access to a subject that does not have an appropriate clearance. Bell–LaPadula security rules prevent information from being moved from a higher security level to a lower one. Access modes can be one of two types: simple security and the * (star) property. Simple security (the read property) states that a subject of lower clearance cannot read an object of higher classification, but a subject with a higher clearance level can read down. The * property (the write property), on the other hand, states that a high-level subject cannot send messages to a lower-level object. In short, subjects can read down and can write or append up. BLP thus uses access permission matrices and a security lattice of security levels for access control. Biba Integrity Model The Biba model ensures integrity and is a complement to BLP because this model is based on the premise that higher levels of integrity are more trusted than lower ones. Access is controlled to ensure that objects or subjects cannot have less integrity as a result of read/write operations. The Biba model assigns integrity levels to subjects and objects. Then, depending on what is requested, one of two properties is followed: the simple integrity (read) property or the integrity * property (write). The simple integrity property states that a subject may have read access to an object only if the integrity level of the subject is lower than or equal to the level of the object. The integrity * property states that a subject may have write access to an object only if the integrity level of the subject is equal to or higher than that of the object. Basically, the model ensures that no information from a subject can be passed on to an object at a higher integrity level. This prevents contaminating data of higher integrity with data of lower integrity. Clark–Wilson Integrity Model The Clark–Wilson model moves from the area of integrity levels to the area of change controls that are suitable for transaction systems. This model was designed specifically for the commercial environment and addresses the three goals of integrity: no changes by unauthorized subjects, no unauthorized changes by authorized subjects, and the maintenance of internal and external consistency. This model establishes a system of subject–program–object bindings such that the subject no longer has direct access to the object.
Instead, this is done through a program (called a well-formed transaction) with access to the object. The following controls are considered in the Clark–Wilson model: subject authentication and identification, access to objects only through a set of programs, and subjects able to execute only a restricted set of programs. These controls lend themselves to guaranteeing the Clark–Wilson model goal of internal and external consistency. Internal consistency relates to the system doing what it is expected to do every time without exception. External consistency relates to the data in the system being consistent with similar data in the outside world. This goal is achieved through the use of well-formed transactions and the separation of duties. The Clark–Wilson model defines each data item and allows changes by only a limited set of programs. The items defined are:
• Constrained data item (CDI): The integrity of this data item is protected.
• Unconstrained data item: Data not controlled by Clark–Wilson; nonvalidated input or any output.
• Integrity verification procedure (IVP): A procedure that scans data items and confirms their integrity.
• Transformation procedures (TPs): The only procedures allowed to change a constrained data item.
Once data items have been defined, the next step is to label subjects and objects with sets of TPs. The TPs operate as the intermediate layer between subjects and objects. Each data item has a set of access operations that can be performed on it. Each subject is given a set of access operations that it can perform. The system then compares these two parameters and either permits or denies access by the subject to the object. This restricted access to CDIs only through TPs is the core of the Clark–Wilson integrity model. Access Control Matrix and Information Flow Models An access control matrix lists the users, groups, and roles down the left-hand side, and all the resources and functions across the top. The matrix is a concise way to represent rules, and once it is finished, the rules are implemented according to their description in the matrix. Subjects are listed in rows and objects are listed in columns. The exercise of working out the users, roles, assets, and capabilities, and how to choose the rows and columns, can get complicated. However, if you cannot describe the access control rules, it is very unlikely that the access control matrix will be implemented correctly. Typically, access control is based on a user’s role or group membership. It may be advantageous to identify groups for some attributes to make access easier to manage. Sometimes it is beneficial to specify how the access will be performed. Perhaps some users are allowed read only, while others can read and write.
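A tiny access control matrix can be pictured in Python as a nested table, one row per subject and one column per object, where each cell holds the set of permitted access methods; the subjects, objects, and rights below are purely illustrative:

# Rows are subjects, columns are objects, and each cell is a set of rights.
matrix = {
    "alice": {"payroll.db": {"read", "write"}, "audit.log": {"read"}},
    "bob":   {"payroll.db": {"read"},          "audit.log": set()},
}

def check_access(subject, obj, method):
    # A reference-monitor-style check against the matrix.
    return method in matrix.get(subject, {}).get(obj, set())

print(check_access("alice", "payroll.db", "write"))   # True
print(check_access("bob", "payroll.db", "write"))     # False: bob may only read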
The list of access methods will be whatever is appropriate to your organization. Typical access methods for content are read, write, edit, and delete. Recording this type of information requires extending the access control matrix to include the appropriate permissions in each cell. It is important to note that this model does not describe the relationships between subjects, such as whether one subject created another or gave another subject access rights. Information Flow Models. Information flow models have a similar framework as Bell–LaPadula in that objects are labeled with security classes in the form of a lattice, and the information the object represents may flow from one data set to another without concern for direction. Logically, it could flow upward or at the same level, if allowed. Graham–Denning Model. The Graham–Denning access control model has three parts: a set of objects, a set of subjects, and a set of rights. The subjects are composed of two things: a process and a domain. The domain is the set of constraints controlling how subjects may access objects. Subjects may also be objects at specific times. The set of rights governs how subjects may manipulate the passive objects. This model describes eight primitive protection rights, called commands, that subjects can execute to have an effect on other subjects or objects (two of these commands are sketched in code after the list below). The eight primitive protection rights are:
1. Create object
2. Create subject
3. Delete object
4. Delete subject
5. Read access right
6. Grant access right
7. Delete access right
8. Transfer access right
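Reusing the matrix from the Python sketch above, two of these commands might be pictured as the only operations permitted to alter that table; this is a simplified illustration rather than the formal model:

def create_object(matrix, owner, obj):
    # Add a new column; the creating subject receives full rights over it.
    for rights in matrix.values():
        rights.setdefault(obj, set())
    matrix[owner][obj] = {"read", "write", "own"}

def grant_access_right(matrix, granter, subject, obj, right):
    # A subject may grant a right over an object only if it owns that object.
    if "own" in matrix[granter].get(obj, set()):
        matrix[subject].setdefault(obj, set()).add(right)

create_object(matrix, "alice", "budget.xls")
grant_access_right(matrix, "alice", "bob", "budget.xls", "read")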
Harrison–Ruzzo–Ullman Model. The Bell–LaPadula model does not state policies for changing access rights or for creating and deleting subjects and objects. The Harrison–Ruzzo–Ullman model describes authorization systems that deal with these issues. This model is very similar to the Graham–Denning model, and it is composed of a set of generic rights and a finite set of commands. By implementing this set of rights and commands and restricting each command to a single operation, it is possible to determine whether a specific subject can ever obtain a particular right to an object. Brewer–Nash (Chinese Wall). This model focuses on preventing conflict of interest. The principle is that users should not access the confidential information of both a client organization and one or more of its competitors.
The process is that users have no wall initially. Once any given file is accessed, files containing competitor information become inaccessible. Unlike other models, the access control rules change with user behavior: the rights a user holds depend on what that user has already accessed. Security Product Evaluation Methods and Criteria Rainbow Series Trusted Computer System Evaluation Criteria (TCSEC). The criteria, which address only confidentiality, were published in the Orange Book and have since been superseded by the Common Criteria. TCSEC defines four main levels (A, B, C, D):
A   Verified protection
  A1   Verified design
B   Mandatory protection
  B3   Security domains
  B2   Structured protection
  B1   Labeled security
C   Discretionary protection
  C2   Controlled access
  C1   Discretionary protection
D   Minimal security
Process Isolation. Process isolation is addressed in the Orange Book when the specific functionalities of the TCB are presented. The Orange Book states that the TCB will provide distinct address spaces to maintain process isolation. Furthermore, to achieve a B1 rating, the TCB must isolate the resources that are protected so that they are subject to the access control and auditing requirements. The end result will provide a process that will run without being altered or interfered with by other programs or the user. Layering and Data Hiding. The Orange Book specifies controls that focus on ensuring confidentiality of both data and processes, and layering is one control required at B3 or above. The design of the TCB is such that although the layers of the TCB know about interfaces and depend on the services of layers below, they know nothing about and do not depend on the correct functioning of the layers above. This ensures that each layer of the TCB is protected from tampering by the layers above, and that layers cannot violate the portions of the security policy enforced by the layers below. This is also a required design technique when developing the TCB to achieve B3 or above. Layering not only facilitates the verification of the correctness of the TCB by allowing examination of one layer at a time, but it also allows higher layers to be modified or replaced without the need to redesign or reverify the lower layers.
Data hiding is an Orange Book requirement stating that the TCB shall incorporate significant use of layering, abstraction, and data hiding. It must
be implemented correctly to minimize any negative effect it may have in compromising the ability to examine the TCB and verify that it correctly enforces the security policy. The TCB at B2 or above must be divided into well-defined modules, with data hiding being a required criterion of module development for systems at B3 or above. Although layering, abstraction, and data hiding are not required until B3, they are good design techniques for system development. Because these architectural features help in understanding and maintaining the system, using them may facilitate the evaluation of a TCB at any class. Data hiding is also a characteristic of object-oriented programming. Because an object can only be associated with data in predefined classes or templates, the object can only “know” about the data it needs to know about. This is needed so that there is no way for someone maintaining the code to inadvertently point to or unintentionally access the wrong data. Thus, all data not required by an object can be said to be hidden. Such data might include text, output of commands, executables, programs, games, and rootkits. This concept of hiding data within a class and providing it only through the class’s methods is known as encapsulation, because it seals the data (and internal methods) safely inside the capsule of the class, where it can be accessed only by trusted users (i.e., by the methods of the class). Information Technology Security Evaluation Criteria (ITSEC) ITSEC addresses confidentiality, integrity, and availability and is primarily used in Europe. The product or system being evaluated is called the target of evaluation (TOE). There are two ratings: a functionality rating (F1 to F10) and an assurance rating (E0 to E6). ITSEC was created as a harmonization criterion so that vendor products evaluated in one country could be marketed in all other countries without being reevaluated; it, too, has been superseded by the Common Criteria. A rough mapping between ITSEC and TCSEC follows:
ITSEC        TCSEC
E0           D
F1 + E1      C1
F2 + E2      C2
F3 + E3      B1
F4 + E4      B2
F5 + E5      B3
F6 + E6      A1
F6: systems that provide high integrity
F7: systems that provide high availability
F8: systems that provide data integrity during communications
F9: systems that provide high confidentiality (such as cryptographic devices)
F10: networks with high demands on confidentiality and integrity
Security Architecture and Design Common Criteria The Common Criteria is an ISO standard product evaluation criterion that supersedes several different criteria, including TCSEC and ITSEC. Participating governments recognize Common Criteria certifications awarded by other nations. The Common Criteria defines a scale for measuring the criteria for the evaluation of protection profiles (PPs) and security targets. There are seven evaluation assurance levels (EAL 1 to 7) in a uniformly increasing scale of assurance. The terminology used includes the terms protection profile, security target, target of evaluation, evaluation assurance level, and security functional requirements. The evaluation assurance levels are as follows: EAL 1: The product is functionally tested; this is sought when some assurance in accurate operation is necessary, but the threats to security are not seen as serious. EAL 2: Structurally tested; this is sought when developers or users need a low to moderate level of independently guaranteed security. EAL 3: Methodically tested and checked; this is sought when there is a need for a moderate level of independently ensured security. EAL 4: Methodically designed, tested, and reviewed; this is sought when developers or users require a moderate to high level of independently ensured security. EAL 5: Semiformally designed and tested; this is sought when the requirement is for a high level of independently ensured security. EAL 6: Semiformally verified, designed, and tested; this is sought when developing specialized targets of evaluation (TOEs) for high-risk situations. EAL 7: Formally verified, designed, and tested; this is sought when developing a security TOE for application in extremely high risk situations. Software Engineering Institute’s Capability Maturity Model Integration (SEI-CMMI) The SEI is a research and development center contracted to advance software engineering practices. The CMMI ratings help customers determine trustworthy and low-risk vendors of software products and services. CMMI level 5 means an organization can prove successful application of government and industry best practices to a company’s management and engineering operations. Process management is the focal point in this model and is based on the idea that the quality of a system is highly influenced by the quality of the process used to acquire, develop, and maintain it. By providing a structured collection of elements that describe the characteristics of effective processes, there is a benchmark for assessing different organizations for equivalent comparison. 331
Certification and Accreditation A way to determine how well a system meets its security requirements is to perform a formal evaluation. This evaluation is a two-step process: first a certification and then an accreditation. Basically, the objective is to determine how well a system measures up to a preferred level of security. The methodology used needs to consider, from a security perspective, the entire system, the network, and the application life cycle. An audit of policies, procedures, controls, and continuity planning will be performed. At the beginning of the process, the evaluation criteria must be chosen. With the criteria known, the certification process will test the system’s hardware, software, and configuration. Once the entire system has been evaluated, the next step is to evaluate the method by which the secure system was connected to a network and the physical security of that system. This will create a baseline for the specific design and implementation, which will be compared against the set of specific security requirements. Keep in mind that the certification is applicable only for a particular system in a specific environment and configuration. With the results of the certification available, management evaluates the capacity of the system to meet the needs of the organization. If management determines that the system satisfies the security needs of the organization, it will formally accept the evaluated system for a stated period. If the configuration is changed, the new configuration must be certified. Recertification must normally be performed either when the time period elapses or when configuration changes are made.
Sample Questions
1. What is the name for an operating system that switches from one process to another process quickly to speed up processing?
   a. Multiprocessing
   b. Multitasking
   c. Multithreading
   d. Multidimensional
2. In what mode do applications run to limit their access to system data and hardware?
   a. Supervisor mode
   b. User mode
   c. Tunnel mode
   d. Interprocess mode
3. Which of the following is not true of the reference monitor?
   a. It must mediate all accesses.
   b. It must be protected from modification.
   c. It must be verifiable as correct.
   d. It must provide continuous monitoring of file privileges.
4. In the Bell–LaPadula model, the simple security property addresses which of the following?
   a. Reads
   b. Writes
   c. Executes
   d. Read/writes
5. Which of the following does not provide a certification process?
   a. ISO/IEC 17799:2005
   b. BS 7799:2
   c. ISO 27001
   d. ISO 15408
6. Data hiding is a required TCSEC criterion of module development for systems beginning at what criterion level?
   a. A1
   b. B3
   c. B2
   d. C3
7. Which of the following security models addresses three goals of integrity?
   a. Biba
   b. Bell–LaPadula
   c. Clark–Wilson
   d. Brewer–Nash
8. ITSEC added which of the following requirements that TCSEC did not address?
   a. Confidentiality and availability
   b. Integrity and confidentiality
   c. Availability and integrity
   d. Nonrepudiation and integrity
9. Which of the following is not a usual integrity goal?
   a. Prevent unauthorized users from making modifications
   b. Prevent authorized users from making improper modifications
   c. Maintain conflict-of-interest protections
   d. Maintain internal and external consistency
10. Which model establishes a system of subject–program–object bindings such that the subject no longer has direct access to the object, but instead this is done through a program?
   a. Biba
   b. Bell–LaPadula
   c. Clark–Wilson
   d. Brewer–Nash
11. The Biba integrity * (star) property ensures:
   a. No write up
   b. No write down
   c. No read up
   d. No read down
12. Which model fails to address the fact that, because all subjects that have an appropriate clearance may not need access, the system owner must still allow access by providing the need-to-know decision?
   a. Biba
   b. Bell–LaPadula
   c. Clark–Wilson
   d. Brewer–Nash
13. Which model helps ensure that high-level actions (inputs) do not determine what low-level users can see (outputs)?
   a. Noninterference model
   b. Lattice model
   c. Information flow model
   d. Graham–Denning model
14. Which access control model has three parts (a set of objects, a set of subjects, and a set of rights) and defines eight primitive rights?
   a. Access control matrix
   b. Lattice model
   c. Information flow model
   d. Graham–Denning model
15. What is the name for the collections of distributed software that are present between the application running on the operating system and the network services that reside on a network node?
   a. Applications
   b. Middleware
   c. Trusted computer base (TCB)
   d. System kernel
16. Which model assigns access rights to subjects for their accesses to objects?
   a. Jueneman model
   b. Access control matrix
   c. Information flow model
   d. Noninterference model
17. Which model describes a partially ordered set for which every pair of elements has a greatest lower bound and a least upper bound?
   a. Lattice-based model
   b. Access control matrix
   c. Information flow model
   d. Noninterference model
18. What are typically trusted areas that are separated from untrusted areas by an imaginary boundary sometimes referred to as the security perimeter?
   a. Mainframes and centralized computing environments
   b. PCs and distributed computing environments
   c. Network partitions
   d. Chinese wall
19. The Common Criteria uses which designations for evaluation?
   a. D1, C1, C2, B1, B2, B3, A1
   b. E0, E1, E2, E3, E4, E5, E6, E7
   c. EAL1, EAL2, EAL3, EAL4, EAL5, EAL6, EAL7
   d. F0, F1, F2, F3, F4, F5, F6, F7
Domain 6
Business Continuity and Disaster Recovery Planning Carl B. Jackson, CISSP
Introduction The business continuity planning (BCP) and disaster recovery planning (DRP) domain addresses the preparation, processes, and practices required to ensure the preservation of the business in the face of major disruptions to normal business operations. BCP and DRP involve the identification, selection, implementation, testing, and updating of processes and specific actions necessary to prudently protect critical business processes from the effects of major system and network disruptions and to ensure the timely restoration of business operations if significant disruptions occur. The objective of this chapter is to provide the reader with a step-by-step route map (Figure 6.1) on how to view, develop, implement, maintain, and measure an appropriate continuity planning process for his or her organization. The continuity planning methodology, processes, and techniques suggested here are tried and true, and equally applicable to public and private organizations. These approaches have a broad base of support and are intended for the information security professional preparing to sit for the Certified Information Systems Security Professional (CISSP®) examination. The expectation is that after studying this chapter, the CISSP candidate should have knowledge of the entire continuity planning process and its constituent BCP and DRP components. CISSP candidates should understand that these endeavors are the discipline of anticipating, preplanning, deploying, and managing a set of activities and tasks to assist in the survival of the organization following a disaster, by significantly mitigating its overall adverse impact.
Figure 6.1. A route map that represents the megaprocesses of the continuity planning implementation method. This chapter is organized along these lines and provides the reader with more detailed discussion on each of the megaprocesses.
CISSP Expectations The CISSP candidate must possess a fundamental understanding of the two disciplines of business continuity planning (BCP) and disaster recovery planning (DRP). These disciplines are just two parts of a larger endeavor, which we refer to as the continuity planning process (CPP). The continuity planning process encapsulates all continuity planning disciplines, including BCP and DRP. The concept of the continuity planning process is not to be confused with actual enterprise business processes that continuity plans are designed to protect. Enterprise business processes will also be covered in detail within this chapter. In the past, continuity planning was frequently thought of as the recovery of computer or information technology systems and nothing more. This discipline is often referred to as disaster recovery planning. Experience in the field of continuity planning has shown that the recovery of IT functions alone does not ensure survival of the enterprise following a serious disruption or disaster. Complete recovery requires the thorough knowledge of all aspects of the enterprise, as will be explained later in the chapter. The most efficient approach to ensuring the continuity of the enterprise is to anticipate events and prepare continuity plans that focus attention on the organization’s time-critical1 business processes and the resources, including IT resources, that support those time-critical processes. 338
Business Continuity and Disaster Recovery Planning Core Information Security Principles: Availability, Integrity, Confidentiality (AIC) As we know, the cornerstone of information security is the protection of the confidentiality, integrity, and availability of enterprise information resources. The term availability implies that continuity or recovery planning activities are the responsibility of the information security group. Many organizations specifically assign continuity planning to the information security folks, while others designate a separate continuity planning department to address those issues. Either way, the information security manager or information security officer (ISO) is a stakeholder in continuity planning processes. Even if not directly accountable, ISOs must work to make certain that a proper degree of enterprise continuity planning is taking place. Why Continuity Planning? The effects of Hurricane Katrina and the events of September 11, 2001, in conjunction with the recent corporate corruption cases of WorldCom, Enron, HealthSouth, etc., cause concern to organizations about continuity and, in fact, about enterprise survivability. All organizations bear responsibility for the life safety of their employees and others on their premises. They also have financial and moral obligations to their shareholders, customers, employees, employees’ families, and other key stakeholders. Terrorism, government regulation, internal and external audit, industry standards and guidelines, good business practice (standard of due care), and actual disaster events are all excellent reasons to prepare, in advance, for organizational survival. These obligations and responsibilities mandate the creation and maintenance of an effective enterprisewide continuity planning program. To some CISSPs, the need for continuity planning may seem intuitive. Others (outside the profession) fail to share this viewpoint. It is not usually easy to vie for scarce internal resources, gain support from management, or communicate the importance of continuity planning, given competing priorities. Reality of Terrorist Attack. Let us start with the obvious. Subsequent to the September 11 attacks on the World Trade Center and the Pentagon, the U.S. attorney general advised and encouraged American companies to immediately evaluate and strengthen their security programs. This raised a number of questions regarding the condition and practicality of existing life safety emergency response plans and continuity plans within public and private organizations around the world. The recommended approaches to continuity planning emphasize support for first responders, understanding and addressing risks, and making preparations to withstand 339
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® such events. All of these recommendations were at the heart of this advice from the attorney general in the weeks following these horrifying events. Natural Disasters. Hurricane Katrina is a dramatic example of a natural disaster that was long anticipated. Her consequences were dreadfully underestimated and caused tragic devastation and loss of life to the U.S. Gulf Coast area. The hurricane’s devastation touched the Bahamas and most of eastern and southeastern North America. The highest wind gusts were up to 175 miles per hour. The destruction in the Florida Panhandle, Alabama, Mississippi, and Louisiana, most especially New Orleans, caused federal disaster declarations over a 90,000 square mile area. In Louisiana, after the hurricane moved north, the New Orleans levies were breached and the major flooding started. It was not until then that the city suffered dramatic increases in the loss of life and damage already wrought by the storm. This one-two punch caught many by surprise and slowed the overall disaster response at all levels of community and government. As of this writing, the toll on human death, property damage, jobs lost, businesses destroyed, etc., has yet to be clearly understood or quantified. Suffice to say, this dramatic regional catastrophe is the most terrible in U.S. history, with total damage estimates at or exceeding $100 billion. This event illustrates compellingly why crisis management and continuity planning are of increasing concern to public and private sector executive decision makers. Internal and External Audit Oversight. Adverse audit or regulator y comments, more often than not, drive continuity planning initiatives. For audit purposes, the life cycle of the continuity planning process, and the point at which each organization has evolved within that life cycle, offers ample opportunity for audit scrutiny and deficiency comments. It is common knowledge that the audit function usually initiates audits based upon a review of organizational policies, standards, and procedures, and then measures compliance with those directives, similar to the continuity planning policy example (see sample continuity planning policy insert). Industry surveys also continue to show that an overwhelming number of organizations undertake continuity planning as a direct result of adverse audit/regulator comments. Legislative and Regulatory Requirements. There are other reasons that organization management is concerned about continuity planning issues. In regulated industries, for example, state and federal regulations and rulings mandate degrees of continuity planning. We reference the reader to an article written by Rebecca Herold in Addressing Legislative Compliance within Business Continuity Plans.2 This article, in Appendix A, does an outstanding job of listing various regulatory initiatives requiring continuity planning on the part of constituent organizations that specifically address HIPAA, SOX, GLB, the Patriot Act, and others. 340
Business Continuity and Disaster Recovery Planning Industry and Professional Standards NFPA 1600. The National Standard on Preparedness, also known as NFPA 1600 (National Fire Protection Association), is a benchmark that continuity planners might use as a source of guidance on methodological development, risk identification, or planning guidelines. ISO 17799. ISO 17799 is “a comprehensive set of controls comprising best practices in information security.” It is essentially an internationally recognized generic information security standard. ISO 17799 comprises ten prime sections: security policy, system access control, computer and operations management system development and maintenance, physical and environmental security, compliance, personnel security, security organization, asset classification and control, and business continuity management (BCM). Defense Security Service (DSS). The Defense Security Service (DSS), formerly known as the Defense Investigative Service (DIS) is agency of the United States Department of Defense (DoD). DSS makes its contribution to the National Security Community by conducting personnel security investigations and providing industrial security products and services, as well as offering comprehensive security education and training to DoD and other government entities. To complement its three primary missions: the Personnel Security Investigations Program (PSI); the Industrial Security Program (ISP); and the Security Education, Training and Awareness Program, DSS offers the unique advantage of integrating counterintelligence into its core security disciplines through training programs, policy development, and operational support to its field elements. (http://en.wikipedia.org/wiki/Defense_Security_Service) National Institute of Standards and Technology (NIST). NIST provides a number of requirements for federal contingency planning. Good Business Practice or the Standard of Due Care. The Free Dictionary defines the standard of due care as “the care that a reasonable man would exercise under the circumstances; the standard for determining legal duty” (http://www.thefreedictionary.com/due+care). The standard of due care applies to the enterprise exercising good corporate citizenship and proactively planning to abide by good business practices in the protection of its shareholders, customers, employees, etc.
Enterprise Continuity Planning and Its Relationship to Business Continuity and Disaster Recovery Planning At the end of the day, even with a comprehensive and well-managed continuity planning infrastructure, an enterprise can be surprised. By continuity planning infrastructure, we mean a complete continuity planning business process that includes all the components reflected in Figure 6.2: all of 341
Figure 6.2. A view of the components of an enterprisewide continuity planning business process.3
the continuity and recovery teams, their reporting structure, IT disaster recovery planning, business resumption planning for business processes, crisis management planning, and high availability. Organizations should plan for worst-case events, rather than for specific scenarios or types of disasters. The theory, proven many times, is that the organization that is ready to respond to the worst-case disaster will also be able to handle lesser disruptions. The purpose of continuity planning, which includes the disciplines of BCP and DRP, is to reduce the impact of an incident or disaster. Will a reasonably well capitalized organization survive a serious disruption without a business continuity plan or disaster recovery plan? In many cases, for larger organizations, the answer is yes. Chances of survival are increased, however, if the organization is well prepared to significantly reduce the possible impact of a disruptive event, and the resulting qualitative and quantitative losses are significantly reduced. What are the potential loss categories? Revenue Loss. Permanent loss of revenue or a temporary interruption (cash flow) is a significant causative factor for organizations to undertake 342
Business Continuity and Disaster Recovery Planning a continuity planning program. By extrapolating the potential material revenue loss for the first few hours and days of a disaster, management can create a picture of what it stands to lose long term, should an event occur. Extra Expense. Extraordinary expenses are those that would not have normally been incurred had the organization not suffered the disaster. This category includes overtime, rents and leases for temporary space and equipment, activating recovery capabilities, interest paid on receivables, the loss of interest income, legal or regulatory fines or penalties, and the like. Compromised Customer Service. Interestingly enough, an interruption in service, whether to an external or internal customer, or business partner, drives short recovery windows in almost all cases, even more so than an interruption in revenue. We must be able to measure the impact of a disaster to the organization from the moment of the event. Although we cannot usually determine the precise financial cost of customer inconvenience, we certainly can estimate the level of their inconvenience as it builds over time. We also know that eventually this customer inconvenience will most likely result in revenue loss. Thus, customer service level is the only truly reflective short-term measure over the first few seconds, minutes, hours, and days of a disaster, and its economic result can be described in metrics of high impact, medium impact, or low impact. Embarrassment or Loss of Confidence Impact. The impact to track and measure here is the embarrassment or loss of confidence suffered by important external entities that rely upon, or have an interest in, the enterprise. Examples of such external entities include key customers, suppliers, business partners, regulatory agencies, and auditors. Similar to the impact of compromised customer service, the metric for loss of confidence cannot be easily expressed in terms of financial loss. However, it can be stated in qualitative levels of impact, such as high, medium, or low, and can be tracked over a period of time following a disruption.
As seen in Figure 6.2, there is a tremendous interdependency among the various enterprisewide continuity planning process subcomponents. The continuity planning process is comprised of business continuity planning, disaster recovery planning (for technology of the enterprise), and the crisis management plan structure. Crisis management is the subcomponent that focuses on communications and management of the enterprise through the course of the disaster. A good understanding of the concepts and component interrelationships presented in Figure 6.2 will serve the CISSP candidate well in implementing and managing continuity planning within an information security program. Hidden Benefits of Continuity Planning. The primary goal of a continuity planning process is to ensure the organization’s survival in the event of 343
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® a disaster. There are other, less obvious benefits of continuity planning. In developing a comprehensive continuity planning infrastructure, the continuity planner must understand the business processes of his enterprise, and how information, goods, and services move within the organization. Equally important is knowing how information, goods, services, and cash flow in and out of the enterprise. The collection and analysis of this knowledge could identify potential cost reductions by improving or creating operating efficiencies. The planner may also find opportunities for cost savings in business interruption insurance and directors’ and officers’ coverage. These examples show that continuity planning could provide an advantage over competitors. As the importance of continuity planning becomes more well known, the lack of planning could even disqualify a company from consideration for new business. The continuity planning process also forces a review of various other components of the organization’s infrastructure. Vital records management, data backup and storage, and physical, environmental, and information security controls must also be scrutinized when addressing continuity planning, and efficiencies may be discovered during the process. Organization of the BCP/DRP Domain Chapter There are many ways to present CISSP examination review subject matter. Although there are others, this chapter presents the project management plan approach to BCP/DRP. This approach walks through the process from initiation to final implementation, maintenance, and management of a continuity planning initiative. Although this is a from-scratch approach, the application of this same project plan method is also relevant for those CISSPs who are working in the field with organizations that already have some level of continuity planning in place. Project Initiation Phase Preplanning activities recommended in the project initiation phase will set the tempo for each succeeding phase. Clearly articulated management intentions and commitment will contribute to the success of later continuity planning phases. It is in this phase where all the project preplanning is performed, including: • Establish the organization’s continuity planning scope and objectives criteria • Gain and demonstrate management support • Form the BCP project implementation team, referred to hereafter as the continuity planning project team (CPPT), and define their roles and responsibilities • Define and obtain continuity project resource requirements 344
• Understand and leverage current and anticipated disaster avoidance preparations
Current State Assessment Phase The current state assessment phase is composed of several discrete sets of activities that will provide enterprise management with the practical information it must have to make informed decisions concerning business continuity planning. When the activities within this phase of the methodology are completed, you will have gained an understanding of the strategies, goals, and objectives of the enterprise, and you will have completed:
• A threat analysis
• A business impact assessment (BIA)
• A continuity planning process current state assessment
• Possibly a benchmark or peer review
Design and Development Phase Given the baseline information gathered in the current state assessment phase, the CPPT is in a position to devise preliminary recommendations and action plans regarding suitable next steps. This phase is where the organization, with the assistance of the CPPT, formulates the most efficient and effective recovery strategies to address the threats and recovery priorities identified. The primary activities that take place during this phase of the methodology are: • Develop and design the most appropriate continuity strategies • Develop the crisis management plan (CMP) and continuity planning (BCP and DRP) structures • Develop continuity and crisis management plan infrastructure testing and maintenance activities • Design initial acceptance testing of the plans • Plan for recovery resource acquisition Implementation Phase During this phase, CPPT professionals work with business process owners or representatives to deploy: • Continuity plans (business continuity plans and disaster recovery plans) as well as the enterprise crisis management plan • Program short-term and long-term testing • Program short-term and long-term maintenance strategies • Program training, awareness, and education processes • Program management process 345
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® Management Phase The management phase of the methodology is where the day-to-day management of continuity planning is organized, executed, and sustained. During this ongoing process, the CPPT works with the business process owner or representatives to address overall continuity planning issues, which include program oversight and continuity planning manager roles and responsibilities. The ideas presented in this brief introduction will now be explored in depth. Project Initiation Phase Description The project initiation phase sets the tone for the entire continuity planning project. All the project preplanning is performed in this phase, including: • Establish the organization’s continuity planning scope and objectives criteria • Gain and demonstrate management support • Form the continuity planning project team (CPPT) and define their roles and responsibilities • Define and obtain continuity project resource requirements • Understand and leverage current and anticipated disaster avoidance preparations Project Scope Development and Planning. The continuity planning process minimizes the impact on an organization’s time-critical business processes, given a significant disruptive event such as power outages, a natural disaster, accident, act of sabotage, or other such occurrence. The continuity planning process is intended to help management develop costeffective approaches to ensure continuity during and after an interruption of time-critical processes, supporting systems, or resources. The components of the enterprise continuity planning process, shown in Figure 6.2, are as follows. Disaster Recovery Planning (DRP). Traditional DRP addresses the recovery planning needs of the organization’s IT infrastructures, including centralized and decentralized IT capabilities, as well as voice and data communications network support services. As mentioned in the chapter introduction, traditional continuity planning was concerned only with the recovery of computer or IT systems. We still commonly refer to this as disaster recovery planning. As the field of continuity planning has matured, it has become apparent that recovery of IT alone does not ensure survival of the enterprise following a serious disruption or disaster. Speedy recovery of all the components of IT is useful only if the organization’s business units are able to continue functioning, at some level, throughout the event. They must be in a position to communicate with customers, organizations or 346
patients, key business partners, vendors, employees and employees' families, and the like. They must also be able to receive and enter orders, produce goods, provide services, collect and book revenue, account for assets, etc.

Business Continuity Planning (BCP). Traditional BCP addresses recovery of
an organization's business operations (i.e., accounting, purchasing, etc.) should it lose access to its supporting resources (i.e., IT, communications network, facilities, business partner relationships, etc.). As the disciplines of the continuity planning profession matured, it became apparent that continuity planning is a business issue, not a technical issue. The focus shifted from IT and related areas to the actual business processes of the enterprise, where it should have been all along. Each business process must be examined and prioritized, paying particular attention to its time criticality, to determine its recovery time objective (RTO). This approach will ultimately provide management and the CPPT with empirical information for decision making and for implementing the most effective continuity plans.

Crisis Management Planning. Another component of the enterprise continuity planning process approach is the crisis management plan (CMP). Many organizations utilize the incident command system (ICS)4 to organize their crisis management planning efforts, and in these cases, the CMP may be referred to as the emergency operations center (EOC). There is more discussion on the ICS later in the chapter. Either way, those responsible for CMP must provide leadership to facilitate an effective and efficient enterprisewide emergency/disaster response capability. This response capability includes forming appropriate management teams and training their members in how to react to serious emergency situations like hurricanes, earthquakes, flood, fire, and serious hacker or virus damage.

Continuous Availability. Continuous availability (CA) is a building-block approach to constructing resilient and robust technological infrastructures that support high availability requirements.5 Not solely a continuity planning responsibility, continuous availability is a concern of the CPPT should any given business process have a zero or near-zero-time RTO. The CA concept joins a mix of disciplines, focusing on enterprise high availability needs in a 24 hours a day, 7 days a week (24/7) environment. The CPPT can assist with planning, designing, implementing, and measuring IT infrastructure capabilities for organizations with 24/7 application and network uptime requirements once there is agreement on the RTO timeframe. It is primarily the responsibility of the IT systems and IT infrastructure groups to build and operate a CA environment, but the CPPT can and should work closely with them to ensure that the RTOs can be met.

Incident Command System. The ICS is a standardized response management system described as an all hazard–all risk approach to managing
crisis response operations and noncrisis events. A group of local, state, and federal agencies with wildland fire protection responsibilities initially designed it to improve the ability of fire forces to respond to any type of emergency. The ICS is a structure of management-level positions suitable for managing any incident. It is described as being organizationally flexible and capable of expanding and contracting to accommodate responses or events of varying size or complexity. For more information on ICS, see the ICS-related Web site URL at the end of this chapter, which is just one of many Internet resources relating to ICS.

Executive Management Support. From the very start of any continuity planning project, executive management's understanding and support are an unqualified must. The continuity planning endeavor touches every corner of the enterprise, encompassing business processes, information technology (IT), communications infrastructures, facilities, virtual organization business partners, personnel, and other mission-critical business process support services. The business continuity planning manager and the CPPT must enjoy clearly articulated top-down management support. This support must include suitable resource commitments to the continuity planning process and CPPT. Supporting the continuity planning effort means that management must make suitable resource commitments in terms of staffing the CPPT, establishing a budget, adopting senior-level policies and standards, approving scope, etc.
The project effort may include one, two, or all three of the individual components of the enterprisewide approach to continuity planning. That is, the continuity planning process effort may be for IT continuity planning (DRP) only, or it could include any combination of IT continuity planning, business continuity planning, or crisis management planning. Table 6.1 suggests techniques the CPPT might use to assist them in gaining support.

BCP Project Scope and Authorization. For organizations without a continuity planning process, the scope will be full, enterprisewide, and end to end. Of course, the individual components of the project can be broken down into bite-sized chunks to avoid the appearance that the CPPT is attempting to boil the ocean.
Once the continuity planning process is in place and fully tested, the work is not over. Organizations are constantly in a state of change, so the continuity planning must constantly be tweaked. Changes occur in personnel, organizational structure, product development activities, business partner relationships, and dozens of other areas. These changes all alter the organization slightly, or sometimes significantly, at any given time. Should the enterprise have an active, or at least recent, continuity plan, the continuity planner should prepare to adjust the scope of the project to address current needs (i.e., update stale plans, plan for and conduct tests, readdress or update aged business impact assessments, etc.).
Table 6.1. Techniques for Gaining Management Support
The table addresses the three components of the effort and the executive sponsor to engage for each: for IT continuity planning (DRP), the CIO or IT director; for business continuity planning (BCP) and for crisis management planning (CMP), the CEO, CFO, and other executive management representatives. For each component, the suggested techniques are:
• Identify the executive sponsor for the strategy consulting CPP effort
• Propose and agree with the sponsor on an interview methodology, or a workshop with key stakeholders, with the purpose of mapping business and process requirements; agree on the preferred approach for conducting the business impact assessment as well as other current state assessments as needed by the business process owner/representative
• Identify business strategy stakeholders with the sponsor, as appropriate for this component of the enterprisewide availability infrastructure
• Design interview scripts and agree on them with the sponsor (see the plan phase samples below for the most appropriate interview scripts)
• Interview key stakeholders and set expectations for the workshop, as needed to support the scope of the CPP effort
• Prepare business process owner/representative-specific presentation and discussion points for workshops, as needed to support the scope of the CPP effort
• Conduct the workshop/interview, as needed to support the scope of the CPP effort
• Analyze, collate, and assess business process owner/representative requirements, as needed to support the scope of the CPP effort
• Create a summary working document of interview and workshop notes (not for presentation to the business process owner/representative), as needed to support the scope of the CPP effort
Executive Management Leadership and Awareness. Executive management can express its support in a number of ways, including the following.

Formalizing Continuity Planning Policy. As mentioned above, an executive-level organizational policy regarding requirements, roles, and responsibilities for continuity planning is a must. Continuity planning policy sets the stage for further development of appropriate standards and procedures that will assist organizational units in addressing continuity planning issues prudently, and provides a benchmark for compliance with policy. It is vital to the continuity planning endeavor that executive management (including the board of directors, where applicable) makes a strong commitment that is clearly and succinctly articulated. Middle management should develop appropriate support standards and procedures based on the policy direction set by executive management. Appropriate supporting standards should outline and provide for a continuity planning infrastructure that is relevant and supports executive management's vision. This will demonstrate to all those who have hands-on continuity planning responsibility the value that an appropriate degree of continuity planning will add to the organization (see sample continuity planning policy statement).

Establishing and Managing a Continuity Planning Budget. Allocation of funds for continuity planning projects visibly demonstrates top-down commitment. Of particular concern will be how continuity planning-related costs and expenses will be capitalized or allocated within the organization. This is an important topic to discuss and decide upon with the executive sponsors prior to the project kickoff. Associated continuity planning budget categories are as follows:
• Expenditures used for acquisition, implementation, and maintenance of preventative controls that are designed for physical, environmental, or information security.
• Expenditures utilized for purchasing alternative recovery resources like facilities, equipment, supplies, hardware, software, and telecommunication infrastructure facilities.
• Personnel expenses; consideration should be given to the annual salary requirements of continuity planning personnel and the annual salary and benefits of the incremental business unit personnel costs for each of the business function representatives involved in the effort.
• External consultants or vendor organization personnel that might be utilized, including the consulting fees and travel expenses associated with their participation.
• Day-to-day management expenditures of the continuity planning process, including testing, maintenance, and training.
With regard to testing, there will likely be costs associated with travel, lodging, meals, and ground transportation if management has elected to use a remote location recovery strategy.

Defining Continuity Planning Metrics. Continuity planning-specific metrics developed in support of the organization's strategic vision and values will help to eliminate the on-again, off-again continuity planning efforts that have plagued many organizations. A clear set of both qualitative and quantitative continuity planning metrics will illustrate what management considers important. After all, we get what we measure.

Articulating Continuity Planning Communications. The above-mentioned techniques notwithstanding, clearly articulated executive management communications of support will open a lot of otherwise closed doors to the continuity planning effort.

Continuity Planning Project Team Organization and Management. The continuity planning project team (CPPT) should be made up of continuity planning leadership and selected technical and business experts who have business process knowledge and a stake in the continuity planning process. A CPPT project lead should be appointed as soon as is practical. The most senior and knowledgeable staff should always be involved; assigning lower-level staff to this team will do little to ensure success. As a rule, the team members should be the more seasoned managers who understand the need for continuity planning, the goals of the enterprise, and the intricacies of the business processes.
Once the CPPT has been named, their first duty is to scope the continuity planning process and define the project management strategies they will utilize to accomplish their goals, in line with management direction and the allocated funding. Of particular concern to the CPPT is whether to use a PMO approach, what project management tools they will be depending upon, and what their timelines are for the project.

Project Management Office Techniques. In larger organizations, it may be useful to organize the continuity planning process under the control of a project management office (PMO). A PMO approach to the project will provide strategic support to business units and management. This can be especially helpful for continuity planning processes, as they are cross-organizational in scope and approach. The continuity planning endeavor encompasses the entire enterprise and all its business processes, which are mapped to their supporting resources. As such, the CPPT must be able to interact with many levels of management and organizational structures. They also must have a thorough understanding of the associated business functions throughout the enterprise. The PMO manages and supports from the top down. Trained project managers utilize project management standards, methods, tools, and education to their best effect.
Project Management Tools. A practical approach to developing a continuity planning process and enterprise implementation is through the use of project management methodologies. With these methods, the PMO or the CPPT will likely use software and Internet/intranet space to compile, organize, secure, and store relevant internal and external information. These and other tools are helpful in the initial planning and implementation of the continuity planning process. They are even more useful in the long term (testing and maintenance stages), where the real expense of continuity planning is incurred. In the book The Complete Project Management Office Handbook,6 Gerard Hill presents an overview of PMO functions.

Continuity Planning Project Timelines. When establishing schedules, deadlines, and milestones for a continuity planning process, the nature or culture of each unique organization must be taken into consideration. Success is more likely if the components of the project are broken up into more manageable (bite-size) units, like days and weeks, rather than months. This will prevent the perception of boiling the ocean and the fatigue that goes along with it. Keeping the continuity planning process timeframes relatively short will help to avoid project participant burnout as well.

Conduct Continuity Planning Project Kickoff Meeting. In some organizational cultures, holding a formal continuity planning kickoff meeting is an important first step in the continuity planning process. The kickoff meeting is a formal introduction of the business process owners or representatives to the CPPT and should help them get comfortable with the effort. The objectives of a continuity planning kickoff meeting might be to:
• Allow the executive sponsor to introduce the continuity planning project and describe its value to the enterprise
• Introduce the CPPT
• Provide an overview of the continuity planning process
• Present an overview of the continuity planning methodology
• Detail the project approach and scope
• Present the project objectives
• Review the project schedule
• Discuss project staffing
• Describe the project deliverables
• Review the preliminary work plan
• Identify key business process owners or representative contacts outside the project team
• Obtain time commitments from business process owner or representative team members
• Answer questions and address concerns

As the continuity planning process evolves, it is crucial that all of the business process owners or representatives involved understand:
• The continuity planning process as it applies to their organization
• The project scope, deliverables, schedule, and approach
• Their responsibilities and time commitments for the continuity planning process effort

In addition, it is important that the kickoff meeting provide an opportunity for participants to raise issues and concerns related to continuity planning or the project. The meeting should include a question-and-answer period at the end so the audience can ask questions, make suggestions, or discuss challenges they may have. Any issues that are not answered immediately should be discussed with the CPPT project lead as soon as possible. Swift resolution of these issues is required to ensure that the project can continue as scheduled.

It is important to stress to the business process owners or representatives on the CPPT that the type of disaster (e.g., fire, brownout, flood) is less important than the fact that the disaster could occur. A disaster is any event that takes time-critical processes down for longer than the RTO that is defined in the business impact analysis (BIA). When participants focus on particular types of disasters, rather than more generalized disaster scenarios, they are likely to miss the point of the overall information-gathering exercise by answering questions too specifically. From a continuity planning perspective, the type of disaster or interruption is relatively unimportant. Destruction of a facility by reason of fire versus flood is not significant; the impact of the destruction and the continuity of critical operations are the important issues.

Disaster or Disruption Avoidance and Mitigation. It is during the project initiation phase that the CPPT should consider the extent and status of existing physical, environmental, and information security-related controls that might mitigate the effects of an event. Although a formalized risk assessment is part of the current state assessment phase, a fundamental understanding of the vulnerabilities of the organization will help to sell the need for continuity planning to management.
By definition, continuity planning tends to focus on preparation of the mechanisms necessary to react to an adverse event, more than proactively planning to mitigate or prevent the event in the first place. Activities designed to identify potential threats and vulnerabilities are incorporated into an enlightened approach to continuity planning, so that appropriate mitigating controls can be considered. The physical, environmental, and information security of business processes are reviewed, and the resulting deliverable contains observations and recommendations regarding the most appropriate next steps in putting effective and efficient control mechanisms in place. This is the risk assessment, sometimes known as the risk management review (RMR), and is fully described under the current state
Table 6.2. Project Initiation Phase Activities and Tasks Work Plan
Activities/tasks and associated deliverables (the work plan also carries a milestone definitions column):
• Prepare project charter and obtain management approval (deliverable: project charter)
• Prepare and finalize project plan, including work steps, deliverables, and milestones (deliverable: project work plan)
• Prepare and finalize project budget (deliverable: budget)
• Management presentation and approval to move to next phase
assessment section of this chapter. This risk/vulnerability mitigation process would also include consideration of the overall control infrastructure of the business process and could suggest other organizational controls. Physical or information security representatives can assist in the development of appropriate mitigating controls, standards, and procedures, or any other mechanism required to implement and manage an effective control infrastructure. The resulting infrastructure would be designed to mitigate or avoid threats and vulnerabilities to the most practical degree. A continuous availability (CA) recommendation from the CPPT to the business unit may be an appropriate course of action if that unit’s business processes have an RTO of zero or close to zero, and where its network infrastructure mandates uptimes of more than 99 percent. Project Initiation Phase Activities and Tasks Work Plan. The work plan in Table 6.2 presents a sample of the high-level activities and tasks suggested as a starting point for planning and executing this phase.
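The continuous availability discussion above refers to uptime requirements of more than 99 percent. As a rough, purely illustrative calculation (the targets shown are common examples, not figures prescribed by this chapter), the following Python sketch translates availability percentages into the downtime they permit over a year:

    # Illustrative only: translate an availability target into allowed annual downtime.
    HOURS_PER_YEAR = 365 * 24

    for availability in (0.99, 0.999, 0.9999):
        allowed_downtime = HOURS_PER_YEAR * (1 - availability)
        print(f"{availability:.2%} availability permits about "
              f"{allowed_downtime:.1f} hours of downtime per year")

    # 99.00% -> about 87.6 hours; 99.90% -> about 8.8 hours; 99.99% -> about 0.9 hours

Even the lowest of these targets leaves little room for a recovery strategy measured in days, which is why processes with near-zero RTOs push the discussion toward continuous availability rather than conventional recovery.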
Current State Assessment Phase Description
The current state assessment phase is composed of several discrete sets of activities that will provide enterprise management with the practical information they must have to make informed decisions concerning business continuity planning activities. The primary activities that take place during this phase of the methodology are:
• Understand enterprise strategies, goals, and objectives
• Conduct a threat analysis
• Business impact assessment (BIA)
• Continuity planning process (CPP) current state assessment
• Possibly benchmarking or peer review
Understanding Enterprise Strategy, Goals, and Objectives. The first step in understanding the enterprise is to request, gather, and analyze
information such as annual reports, organizational charts, strategic planning documentation, existing continuity plans, audit reports, and third-party service level assurance reports. In addition, online searches for financial information and other competitive intelligence can be valuable in understanding the organization's true business environment.

Enterprise Business Processes Analysis. The continuity planning process should support the overall objectives of the enterprise, be measurable and maintainable, and add value. To ensure that this process meets these criteria, it is critical that the CPPT request or develop business process information for the enterprise, including business process maps. High-level business process maps that identify required components like facilities, business units, IT systems and infrastructure, and critical business partnerships illuminate the overall process flow of the organization for the CPPT. They should be broken down into mega, major, and sub-business processes. This information will be relied upon heavily during the business impact assessment undertaken later in the methodology.

People and Organizations. Organizational charts, telephone directories (online and hard copy), and other inventory lists will be needed to execute the continuity planning project effort. One of the goals of the BIA process will be to map time-critical business processes to people, locations, external communities, technologies and networks, etc. Therefore, a thorough understanding of each of the physical and geographical locations, and the numbers of employees who work in those locations, is mandatory.

Time Dependencies. At this juncture, time-critical dependencies of the business processes should be identified and documented. A more detailed analysis and justification of time-critical business processes and their supporting resources will be done during the BIA phase of the continuity planning project.

Motivation, Risks, and Control Objectives. Successful implementation of a continuity planning process involves many people issues, sometimes addressed in terms of organizational change management. Because the continuity plans may be ready for the organization before the organization is ready for the continuity plans, it is important to understand the people-related barriers, enablers, and rewards. Also of importance are the technology and process barriers, enablers, and rewards. People, process, and technology issues have to be considered when attempting to understand how an organization will react to the implementation and will eventually become a predictor for the success of the continuity planning process effort.

Budgets. Like any part of the enterprise, financial considerations are important to the CPPT. They must understand the budgetary processes and the resources allocated to this continuity planning process effort.
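Referring back to the business process maps and inventories described above, the following is a minimal, hypothetical Python sketch of how the CPPT might record the mapping of processes to their supporting resources; every process, facility, system, and partner name here is invented purely for illustration:

    # Hypothetical illustration of a high-level business process map entry.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class BusinessProcess:
        name: str                 # e.g., a sub-process under a major process
        business_unit: str
        facilities: List[str] = field(default_factory=list)
        it_systems: List[str] = field(default_factory=list)
        business_partners: List[str] = field(default_factory=list)
        headcount: int = 0        # employees who perform the process

    process_map = [
        BusinessProcess("Order entry", "Sales",
                        facilities=["Headquarters, Building A"],
                        it_systems=["CRM application", "Order database"],
                        business_partners=["Payment processor"],
                        headcount=40),
        BusinessProcess("Payroll", "Finance",
                        facilities=["Headquarters, Building B"],
                        it_systems=["ERP system"],
                        business_partners=["Payroll bureau"],
                        headcount=6),
    ]
    # The BIA later attaches an RTO and impact ratings to each entry in this map.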
Technical Issues and Constraints. Technical enablers and barriers must be understood from the beginning of the continuity planning process effort. The CPPT should devote time to consider and document the technological environments (IT, networks — voice and data) of the enterprise and closely associated communities (i.e., virtual business partners, outsourced service providers, vendors, customers/patients, etc.). It is also important that the CPPT know about any current or future strategies the organization is considering in terms of linking business and technology plans.
Continuity Planning Process Support Assessment
The current state assessment phase examines the health and vitality of an enterprise's continuity planning infrastructure and determines if the components are up to date. Continuity planning is not an event; it is a process like other major business processes. It can be judged by how intelligently the relationships of the people, processes, technologies, and missions of the enterprise are understood and documented. Two broad types of information flow from the current state assessment phase. The first type consists of the threat assessment. A collection of potential quick hits from the threat assessment is listed in Table 6.5. The second type of information is the business impact assessment, which details the core or key business processes for which comprehensive continuity plans must be developed.

Threat Assessment. One objective of the threat assessment is to evaluate the existing organizational controls and procedures that could reduce the likelihood of a potential interruption of services. Another objective is to ensure that, should an interruption take place, its impact is minimized and the organization's assets are safeguarded. Steps must be taken to identify significant exposures that could place an organization at risk for an interruption and to determine solutions for the identified risks in advance.
During the threat analysis, potential risks and vulnerabilities are assessed and strategies and programs are developed to mitigate or eliminate those risks. The CPPT's understanding of existing risk factors is an integral part of preparing to recover before the disruption. Prior to developing mitigating controls to prevent or lessen the impact of a disaster or disruption, the CPPT must understand the underlying risks. This type of threat assessment differs somewhat in scope from a traditional information security threat assessment. The BCP project team is concerned specifically with threats as they relate to information and resources that are necessary to support critical business processes. In general, a threat assessment is broken down into three types:
• Physical and personnel security assessment
• Environmental security assessment
• Information security assessment

Physical and Personnel Security Assessment. The physical and personnel security assessment includes:
• Loss of key personnel, temporary or permanent for any reason (even retirement)
• Physical access control weaknesses
• Health or accident
• Supply chain failure
• Vendor business interruption
• War/terrorism
• Shortage of raw materials
• Surveillance
• Business interruption and extra expense insurance
• Emergency response plan assessment, including a review of the enterprise crisis management plans, as well as other emergency response teams, to ensure there are recommendations to achieve the following:
– Identification of affected areas
– Business processes affected
– Infrastructure, buildings, and equipment conditions
– Users' life safety
– Consideration of impact on customers, shareholders, community, etc.
– Condition of utilities and communications
– Notification and alerting procedures to crisis managers
– Providing for safety and security of personnel
– Personnel notification as necessary
– Executive succession planning
– Role of executives in crisis management
– Roles of BCP coordinator and team members
– Notification lists
– Role of public relations toward the media, customers, local officials, and employees
– Backups and off-site storage
– Data, applications, and disaster recovery plans
– Premises accessibility
– Security
– Environmental security
– Communications status
– Emergency systems: phones, cellular phones, radios, pagers
– Communications networks
– Emergency response procedures
– Mitigating the damage
– Declaring a disaster
– Recovery team structure roles and responsibilities
Environmental Security Assessment. The environmental security assessment includes:
• Fire detection and suppression
• Protection from water damage
• Utility failure
• Gas leaks
• Electrical disruptions and controls
• HVAC controls
• General utilities review at both the primary and secondary operations locations, including ensuring that electrical power is sufficient at alternate sites
• Telecommunications availability

Information Security Assessment. The information security assessment includes:
• Off-site data storage deficiencies
• Logical access control weaknesses
• Continuity planning — existing strategies for recoverability of time-critical processes and support resources
• Change or problem management
• Identification of single points of failure

Reducing the identified exposures, such as access control weaknesses, will also result in more efficient or stable ongoing operations. Proactive steps to reduce an organization's overall level of exposure are the most cost-effective elements of a comprehensive continuity planning program.

Risk Management. Risk management is about arriving at an understanding of which threats are most likely to affect the organization, and some BCP professionals consider the risk assessment and BIA processes to be very closely related. In his book Business Continuity: Best Practices,7 Andrew Hiles says:

Risk management includes identification of risks; appreciation of their impact on the business and the likely frequency of occurrence; and implementation of steps to reduce that frequency to an acceptable level. Although risk assessment and business impact analysis are often treated as separate activities, for all practical purposes they are part of the overall process of risk management.
Table 6.3 can be utilized to assist the CISSP candidate in understanding the type of information that is collected and useful during the threat analysis phase.
Table 6.3. Information Collected and Useful during the Threat Analysis Phase (current state assessment component: information requested)
• Physical security: Facilities diagrams and supporting documentation
• Environmental security: Facilities diagrams and supporting documentation
• Information security: Information security policies, procedures, standards, etc.
• Business impact assessment: Existing business impact assessment reports or documents, audit reports, etc.
• Emergency response procedures: Written emergency response procedures documentation
• Existing continuity plans: Written or automated continuity plans, audit reports
• Insurance coverage: Insurance documentation
• Off-site backup site inventory/backup processes: Backup media inventory information, backup process operational information
• Continuity planning business process: Organizational charts, telephone books, continuity planning policies, standards, procedures, etc.
Interview Key Infrastructure and Business Managers. Table 6.4 lists people to interview as part of the current state assessment phase.
The threat assessment should be a high-level-only review that serves to identify major exposures. If a more detailed review is required, it should be completed as a separate project and the appropriate CPPT specialists should be contacted. Certain areas of evaluation may require expertise or certification not possessed by the project team, particularly when evaluating insurance provisions or certain engineering issues. In these cases, appropriate assistance should be requested. Once the threat analysis footwork has been completed, the CPPT should analyze and prepare recommendations for next steps.

Mitigation of Risk Factors. Table 6.5 provides some very general examples of quick hits that may be identified as a result of this type of current state assessment.

Business Impact Assessment (BIA). The goal of the BIA is to provide enterprise management with a prioritized list of time-critical business processes, and to estimate a recovery time objective (RTO) for each of the time-critical processes and the components of the enterprise that support those processes.
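As a loose illustration of the BIA's output (all process names, hours, and ratings below are invented, and a real BIA captures far more detail), the prioritized list and the working definition of a disaster given earlier might be represented in Python like this:

    # Hypothetical BIA output: time-critical processes with estimated RTOs and impact ratings.
    bia_results = [
        {"process": "Order entry",       "rto_hours": 4,   "customer_impact": "high", "revenue_impact": "high"},
        {"process": "Payroll",           "rto_hours": 72,  "customer_impact": "low",  "revenue_impact": "medium"},
        {"process": "Monthly reporting", "rto_hours": 168, "customer_impact": "low",  "revenue_impact": "low"},
    ]

    # Prioritized recovery order: the shortest RTO (most time-critical) comes first.
    for entry in sorted(bia_results, key=lambda e: e["rto_hours"]):
        print(f"{entry['process']}: recover within {entry['rto_hours']} hours")

    def is_disaster(outage_hours: float, rto_hours: float) -> bool:
        # Per the working definition earlier in this chapter: an event is a disaster
        # when a time-critical process is down longer than its BIA-defined RTO.
        return outage_hours > rto_hours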
Table 6.4. People to Interview as Part of the Current State Assessment Phase (current state assessment component: positions to interview)
• Physical security: Facilities management, data center management, risk management, physical security management
• Environmental security: Facilities management, data center management, risk management, physical security management
• Information security: Information security management, data center management
• Business impact assessment: Continuity planning management
• Emergency response procedures: Facilities management, data center management, risk management, physical security management, key business unit management representatives
• Existing continuity plans: Continuity planning management, data center management, crisis management, risk management
• Insurance coverage: Risk management
• Off-site backup site inventory/backup processes: Data center management, media storage management
• Continuity planning business process: Continuity planning management, senior management representatives, data center management, risk management
Executive management must understand the potential losses or impacts to the organization as precisely as possible to allocate resources to the continuity planning process. It is vital that they thoroughly understand the time-critical business processes. The BIA is where this information is gathered, analyzed, consolidated, and presented with recommendations (including next steps and rough order of magnitude cost). Another important outcome of the BIA is the mapping of time-critical processes to their constituent support resources (i.e., IT servers and applications, infrastructure and networks, facilities space requirements, business partner connectivity, etc.). The initial step in the BIA is to adopt an efficient method (such as a questionnaire) for gathering information relative to enterprise business processes as follows.

Business Interruption. Business interruption is also known as customer service interruption or loss. Experience has shown that a severe interruption in customer service capabilities will, in almost all cases, drive shorter recovery time windows than financial impacts. A customer service interruption, whether to an external or internal customer, or business partner, must be measured from the moment of the event.
Table 6.5. Examples of Quick Hits That May Be Identified as a Result of This Type of Current State Assessment (current state assessment component: example quick-hit opportunities)
• Physical security: Develop physical security policies and procedures; implement physical security controls
• Environmental security: Develop environmental security policies and procedures; implement environmental security controls
• Information security: Implement various information security controls; develop information security policies and procedures; conduct risk analysis, etc.
• Business impact assessment: Business process analysis can reveal various quick-hit opportunities for continuity planning as well as other noncontinuity-planning-related projects
• Emergency response procedures: Development of emergency response procedures; development of crisis management plans; testing assistance
• Existing continuity plans: Testing assistance; enhancement of outdated plans, etc.
• Insurance coverage: Reduction in premium studies
• Off-site backup site, inventory/backup processes: Expanded continuity planning infrastructure; implementation of specialized automated backup systems; regular audits of off-site backup
• Continuity planning business process: Reengineering the continuity planning process; defining appropriate continuity planning matrix, etc.
Although we cannot usually determine the precise financial costs associated with customer inconvenience, at least in the short term, we certainly can estimate the level of that inconvenience as it builds over time. We also know that eventually this customer inconvenience will most likely result in revenue loss. So, customer service level is the only truly reflective short-term measure over the first few seconds, minutes, hours, and days of a disaster, and its economic result can be described in metrics of high impact, medium impact, or low impact.

Embarrassment or Loss of Confidence Impact. The impact to track and measure here is the embarrassment or loss of confidence suffered by important external entities that rely upon, or have an interest in, the enterprise. Examples of such external entities include key customers, suppliers, business partners, regulatory agencies, and auditors. Similar to the impact of compromised customer service, the metric for loss of confidence cannot be easily expressed in terms of financial loss. It can, however, be stated in qualitative levels of impact, such as high, medium, or low, and can be tracked over a period of time following a disruption.
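To make the idea of impact that builds over time concrete, here is a small, purely hypothetical Python sketch; the breakpoints are invented, and in practice each business process owner sets them during the BIA:

    # Hypothetical escalation of a qualitative impact rating as an outage lengthens.
    def customer_service_impact(hours_down: float) -> str:
        if hours_down < 2:
            return "low"
        if hours_down < 24:
            return "medium"
        return "high"

    for elapsed in (1, 8, 48):
        print(f"After {elapsed} hours of interruption: {customer_service_impact(elapsed)} impact")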
Revenue Loss. Permanent loss of revenue or a temporary interruption (cash flow) is a significant factor motivating organizations to undertake a continuity planning program. By extrapolating the potential material revenue loss for the first few hours and days of a disaster, management can create a picture of what it stands to lose long term, should an event occur.

Extra Expense. Extraordinary expenses include those that would not have normally been incurred had the organization not suffered the disaster. This category includes overtime, rents and leases for temporary space and equipment, activating recovery capabilities, interest paid on receivables, the loss of interest income, legal or regulatory fines or penalties, and the like.

ESTABLISHING ACCURATE FINANCIAL MATERIALITY. Executive management, including the CFO, if possible, should be enlisted to help set the level of materiality of significant financial impacts. Setting revenue loss and extra expense ranges at either too high or too low a level when customizing the BIA questionnaire will adversely impact the BIA's findings and resulting recommendations. This is especially so if the BIA is being used to cost-justify the acquisition of expensive alternative recovery resources like computers, network circuits, alternative workspaces, etc.

PREPARING AND PRESENTING BIA INFORMATION. We are not going to go into the myriad available methods to consolidate, analyze, and present BIA information in this chapter. Suffice it to say that without a well-executed BIA, executive management can only guess at what the business priorities are, and when and in what order they should be recovered. Because the true cost of continuity planning is not in the initial analysis, but in the long-term testing, maintenance, and training of recovery team personnel, it is incumbent on the organization to ensure that it has done due diligence during the BIA to be able to justify further continuity planning investments.

Benchmarking and Peer Review. The CPPT can be aided in their efforts if they take advantage of peer or benchmarking techniques. Although completely optional, this component encompasses performance of industry or peer benchmarking studies to determine leading practices. These leading practices can then be used to establish the most appropriate future state vision for the continuity planning infrastructure.
Benchmarking is a powerful technique that enables organizations to identify goals based on potential future performance rather than on the outcomes of previous years. Benchmarking allows organizations to look beyond rigid functional or industry boundaries when attempting to improve or innovate processes and practices. These benefits help to produce world-class organizations that are equipped to rapidly respond to, if not anticipate and prepare for, sudden changes in their environments. Benchmarking can:
• Provide opportunities to leverage best-practice measurements into opportunities for substantial performance improvement
• Help identify processes and practices that serve as models for streamlining, redesigning, or reengineering within an organization
• Help establish strategic plans based on maximum organizational potential
• Allow realistic, yet aggressive, goal setting for action plans and agendas
• Provide an effective context for developing metrics and measures that help executive management identify improvement opportunities and successes
• Help establish or spread a continuous improvement philosophy throughout an organization
• Increase the level of employee involvement in performance improvement
• Focus growing numbers of personnel on the search for and assimilation of best practices
• Help identify new products and services

Sample Current State Assessment Phase Activities and Tasks Work Plan.
The work plan in Table 6.6 presents a sample of the high-level activities and tasks suggested as a starting point for planning for and executing this phase.

Design and Development Phase Description
Once all the current state activities are completed, the CPPT has a well-grounded understanding of the current state status and the future state expectations of executive management. The resulting gap can now be translated into actionable development and design activities. This phase takes the next step in providing the CPPT with the occasion to thoughtfully consider and design the most suitable continuity planning process strategies, programs, plans, and short- and long-term testing, maintenance, training, and measurement processes.

Recovery Strategy Development. Building on the baseline information gathered in the current state assessment phase, the CPPT is in a position to design and determine the most appropriate recovery alternatives. The recovery alternatives selected must be in line with the RTOs established in the BIA. The primary activities that take place during this phase of the methodology are to:
• Develop the most suitable recovery strategies
• Develop continuity and crisis management planning formats and implementation infrastructure
• Develop strategies for continuity and crisis management plan testing, maintenance, and awareness and training
• Plan for recovery resource acquisition
Table 6.6. Sample of the High-Level Activities and Tasks Suggested as a Starting Point for Planning for and Executing the Current State Assessment Phase
Activity/task groups, with the principal deliverables of each group noted in parentheses; the work plan also carries a milestone definitions column.

CSA phase initiation activities (deliverable: project plan)
• Identify planning software tool or internal process (BIA and plan development)
• Schedule and conduct kickoff meeting
• Define business processes affected

Develop high-level business process model (deliverable: process model)
• Obtain process model for this industry
• Gather enterprise business process information
• Customize process model to environment
• Map major processes to functional business units
• Map major processes to information systems
• Map major processes to physical facilities
• Map major processes to virtual business partners
• Assemble high-level process model

Perform threat assessment (deliverable: risk management/threat analysis report)
• Identify concerns and risks from audit reports
• Customize threat analysis questionnaire
• Perform risk assessment
• Assess physical security
• Assess environmental security
• Assess information security
• Assess emergency response procedures (ERPs)
• Obtain, review, and evaluate emergency response procedures
• Prepare recommendations for updating ERPs/restart procedures
• Assess application/systems restart procedures
• Obtain, review, and evaluate application/system restart/recovery procedures
• Assess site management training and awareness
• Review site management training and awareness effectiveness
• Assess system backup strategies
• Review existing system and data backup strategies
• Document threat analysis results

Business impact assessment (BIA) (deliverables: final questionnaire, draft BIA report, final BIA report)
• Obtain BIA data-gathering tool and install on local system
• Prepare/customize BIA questionnaire
• Distribute questionnaires via memo to appropriate management
• Contact management and set up BIA interviews
• Conduct BIA management interviews
• Document BIA management interviews using BIA summary template
• Enter BIA information into BIA data-gathering tool (if applicable)
• Enter BIA inventory information into access database
• Prepare/send confirming memo/BIA summary template to interviewees
• Consolidate BIA interview information
• Map consolidated BIA information to continuity process matrices
• Analyze BIA interview information
• Document business impact assessment report
• Review results with project sponsor/team
• Assemble draft business impact assessment report
• Obtain executive management approval
• Publish business impact assessment report

Benchmark/peer review (deliverables: benchmarking approach and questions, final benchmark report)
• Scope benchmark review
• Conduct benchmark review
• Prepare benchmark report
Work Plan Development. For each of the recovery strategies and next-step activities developed or facilitated by the CPPT, the team should prepare a high-level project plan that includes components like timing, resource commitments, etc. This project plan should serve as an outline for the implementation, testing, maintenance, and management steps for IT and business owners or representatives to use as they progress through implementation and other phases of the methodology.

Develop and Design Recovery Strategies. The objective of the recovery alternative strategy is to successfully translate enterprise business requirements into recovery resource requirements that make sense and that meet the RTOs defined in the BIA. The purpose here is to determine, at a reasonable level of detail, the technical and business process requirements needed to meet the various recovery tiers for both IT and business process recovery. Knowledgeable technical and business owners or representatives are equipped to determine the resource requirements, and the CPPT must enlist their help with many components of this phase.
Recovery strategy can be developed in different ways. Here we will divide strategy development into three parts:
• IT and IT infrastructure (DRP) strategy development
• Business processes (or functions or units) strategy development (BCP)
• Facilities strategy development

DRP Recovery Strategies for IT. DRP recovery strategies must be developed to address IT resource requirements. Because the organization's time-critical processes have been mapped to their constituent components during the BIA, the CPPT knows the applications, systems, and supporting equipment required to support those processes. This procedure must also take into consideration the recovery of data.
The CPPT must work with IT to define and agree upon functional and technical requirements for IT recovery strategies. All likely recovery alternatives should be considered when determining and recommending continuity strategies. Analysis and availability of critical recovery resources is an important element in developing ongoing recovery cost estimates. After system recovery requirements are determined, there are a number of areas for consideration, including:
• Systems hardware resources (also includes midrange)
• Systems data storage requirements
• Unique (i.e., nonstandard) hardware resources
• Distributed systems (e.g., workstations, intranets, extranets, etc.)
DRP strategy development can be organized along the lines of the organization's IT workload requirements, segmented as follows:
• Nondeferrable
• Deferrable
• Development
• Nontime-critical production
• Time-critical production
The Concept of IT Full Production Backup. Determination of critical production IT workloads is very important because it is the key criterion in developing recovery strategies. If this workload cannot be identified and isolated in terms of hardware resources and data storage requirements, then the alternative is to recover the entire IT production workload in the shortest acceptable recovery timeframe (tier I), or a very large portion of it (see Table 6.7). This could have a significant effect on cost and recovery timeframes.
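The following hypothetical Python sketch shows why isolating the time-critical production workload matters; the workload names and resource figures are invented. If the time-critical portion cannot be identified, the recovery tier must be sized for the full production footprint:

    # Hypothetical comparison of recovery capacity: isolated time-critical workload
    # versus full production backup. All names and figures are invented.
    workloads = [
        {"name": "Order processing",    "segment": "time-critical production",    "cpu_units": 8,  "storage_tb": 2.0},
        {"name": "Data warehouse load", "segment": "nontime-critical production", "cpu_units": 12, "storage_tb": 6.0},
        {"name": "Development/test",    "segment": "development",                 "cpu_units": 10, "storage_tb": 3.0},
    ]

    time_critical = [w for w in workloads if w["segment"] == "time-critical production"]
    print("Tier I capacity if isolated:",
          sum(w["cpu_units"] for w in time_critical), "CPU units,",
          sum(w["storage_tb"] for w in time_critical), "TB")
    print("Capacity if full production must be recovered:",
          sum(w["cpu_units"] for w in workloads), "CPU units,",
          sum(w["storage_tb"] for w in workloads), "TB")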
Recovery planning in a distributed systems environment requires thoughtful consideration of the business processes supported by IT. Careful analysis and classification of the required components can narrow the scope of the recovery planning efforts. Distributed IT systems environments are highly integrated and interconnected. Because of interdependencies, recovery of one distributed system environment component will generally require the recovery of most, if not all, the other distributed systems environment components. The time has long since passed when the IT groups of a major organization could simply select batch processes to be easily transported to a secondary computer system for recovery. The real issue is not the computer itself, or even the computer's location; it is the communication system or infrastructure that connects the IT systems to end-user workstations. Implementation of full production backup capabilities requires a thorough examination of the distributed systems environments from all points of view (i.e., hardware, software, applications, database, and communications levels). This will add to the complexity of identifying the recovery alternatives required for the distributed systems environment, but is the only accurate method of aligning IT recovery capabilities with business process RTO requirements.

Other Recovery Alternative Considerations. For both IT and IT infrastructure and for business process (or functions or units) recovery strategy purposes, there are various recovery alternatives that can be considered. Along with a brief description of the alternatives below, we have attempted to estimate the RTO support capabilities of each. It should be remembered that these RTO estimates are very broad, and there are often exceptions. These recovery alternatives are discussed below.
Cold Sites. A cold site is an IT location that is capable of supporting IT functionality, but is not already equipped with IT and supporting equipment. A cold site is usually simply a shell location that must be built out to serve as an alternative IT center. This is also one of the most cost-effective alternate site strategies for RTOs of one week or longer. Cold sites should not be used when RTOs are shorter than one or even two weeks.

Warm Sites. Similar to a cold site, a warm site is capable of hosting IT operations and contains some level of IT equipment on-site that may or may not be operationally capable, although not currently being used for the purposes for which it was designed. It is appropriate to use a warm site when the RTO is five days or more.

Hotsites. As opposed to the descriptions of the cold and warm sites above, a hotsite is an IT location that has "hot" capability. That is, the site has the equipment, software, and communications capabilities to facilitate a recovery within a few minutes or hours following the notification of a disaster at the organization's primary site. Relative to IT continuity planning strategies, most major commercial recovery site vendors (HP, IBM, SunGard, etc.) provide more than one potential recovery site location. Because it is likely that there will be contention for scarce resources in the event of an areawide or multiple disaster situation, multiple recovery site capabilities should be a prime consideration in the selection of a recovery vendor.

Mobile Sites. Some commercial vendors offer mobile facilities. These facilities are usually mounted on large truck trailers and can be brought to whatever location is designated by the organization for operations recovery. The support for this type of recovery alternative is normally for RTOs of three to five days.

Multiple Processing Sites. Some organizations use multiple internal processing locations to support backup and recovery. Should one site go down, the other sites could pick up the production load. These support RTOs as low as minutes and hours. The challenge with this type of strategy is maintaining the configuration management (hardware, software, processes, etc.) so that the multiple sites stay synchronized from a technical capability standpoint.

Workspace and Facilities. Many commercial recovery site vendors and some enterprises also maintain alternative locations for business operations and users that may or may not be equipped with workstations and communications capabilities. There are myriad alternative implementations that can be selected. This strategy can be used to support RTOs in a very broad range, from hours to days.

Virtual Business Partners. Similar to the multiple processing site alternatives described above, this strategy relies upon another outside business
partner's IT or workspace capabilities to support recovery. As with the multiple processing site alternatives, configuration management is a must. Although there may well be exceptions to this assumption, the RTO support provided by this strategy, in most cases, is in a days-to-weeks timeframe.
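As a rough consolidation of the broad RTO estimates given for the alternatives above (these thresholds are simplifications of the ranges in the text, and a real selection also weighs cost, geography, and contention for shared vendor resources), a first-pass screening of candidate strategies might look like this Python sketch:

    # Rough, illustrative screening of alternate-site strategies against an RTO in hours.
    def candidate_site_strategies(rto_hours: float) -> list:
        # Typical recovery windows (hours), loosely based on the ranges described above.
        typical_recovery_hours = {
            "hotsite": 4,
            "multiple processing sites": 4,
            "mobile site": 96,     # about three to five days
            "warm site": 120,      # about five days or more
            "cold site": 168,      # about one week or longer
        }
        fits = [name for name, hours in typical_recovery_hours.items() if hours <= rto_hours]
        return fits or ["near-zero RTO: consider continuous availability options"]

    print(candidate_site_strategies(4))    # very short RTO: hotsite-class options only
    print(candidate_site_strategies(240))  # ten-day RTO: all five alternatives fit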
Reciprocal or mutual aid agreements are formalized agreements between two different organizations, or even multiple locations within one organization, that support similar or dissimilar IT processing objectives. The reciprocal agreement philosophy is sometimes plagued with challenges, such as proactive configuration management coordination (operating system releases, hardware compatibility, operator training, telecommunications similarities, etc.) between the two organizations' IT sites, which must agree to support each other. One additional concern is that the two participants may or may not have similar processing configuration environments; moreover, if one of the two declares a disaster, it is, in effect, declaring a disaster at both sites, as both sets of users will be adversely affected by the disaster. Commercially available service bureaus have characteristics similar to the reciprocal agreements discussed above. A service bureau normally services multiple customers and agrees to provide recovery capability should a disaster occur at one of the subscriber locations. The challenges of this type of recovery alternative are very nearly the same as described above for the reciprocal/mutual aid agreement alternatives. Data and Software Backup Approaches. Electronic vaulting is a type of data backup strategy that utilizes the IT infrastructure (communications network) to send data and software backups directly to a facility, such as a hotsite, to ensure that the data is on-site and available should a disaster occur at the primary site. The data can be stored on disk or tape media, depending upon the RTO requirements of the organization. Electronic vaulting significantly improves RTOs in organizations that use magnetic tape as a backup medium stored at an off-site location.
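The practical differences among these data and software backup approaches show up in how much work could be lost (the recovery point) and how soon restoration can begin. The sketch below is a minimal illustration of that comparison; the intervals, delays, and figures used here are assumed, illustrative values rather than numbers taken from this chapter or from any vendor.

```python
from dataclasses import dataclass

@dataclass
class BackupApproach:
    name: str
    backup_interval_hours: float   # how often a recoverable copy is produced
    transport_delay_hours: float   # time until that copy is usable at the recovery site

    def worst_case_data_loss_hours(self) -> float:
        # Worst case: the disaster strikes just before the next copy is taken,
        # so everything since the last completed copy is lost (a rough RPO proxy).
        return self.backup_interval_hours

    def earliest_restore_start_hours(self) -> float:
        # Restoration cannot begin before the most recent copy reaches the
        # recovery site, so transport or transmission delay affects the RTO.
        return self.transport_delay_hours

# Purely illustrative figures.
approaches = [
    BackupApproach("Off-site tape rotation", backup_interval_hours=24,   transport_delay_hours=8),
    BackupApproach("Electronic vaulting",    backup_interval_hours=4,    transport_delay_hours=0.5),
    BackupApproach("Remote journaling",      backup_interval_hours=0.05, transport_delay_hours=0.05),
]

for a in approaches:
    print(f"{a.name:22s}  worst-case data loss ~{a.worst_case_data_loss_hours():5.2f} h   "
          f"restore can begin after ~{a.earliest_restore_start_hours():5.2f} h")
```

A planner would compare figures of this kind against the RPO and RTO targets established in the BIA before settling on a backup strategy.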
Remote journaling is a process that involves replicating data transactions, or other categories of data, in a real-time or near-real-time manner at a secondary processing site. Presumably, this secondary site would be used as a hot recovery alternative for IT systems and infrastructure. The RTO support provided by this strategy is measured in terms of seconds and minutes in most cases. Off-Site Storage. Traditional IT data and software backup techniques require making regular backups and removing and storing them at a secondary secure off-site location. Should a disaster occur, the tapes would 369
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® be transported to the secondary site for use in recovering IT capabilities. The RTO support from this type of strategy is usually measured in three to five days. Storage Area Networks. A storage area network (SAN) is a data storage capability characterized by using high-speed subnetworking of shared disk storage devices. In this case, the data is not stored directly on the organization’s network servers so that server power can be used for IT production work. Database Shadowing and Mirroring. Database shadowing and mirroring involves the writing of data simultaneously to multiple disks (redundant array of inexpensive disks (RAID)) to provide redundant data locations in case of single-drive failure situations. Therefore, mirroring provides rapid recovery of data should a single drive fail. If a multiple-drive loss of the environment occurs, this technique can be ineffective. DRP Recovery Strategies for IT. Voice and data infrastructure communications are every bit as important as IT systems. Network recovery strategies are unique to the specific enterprise recovery requirement and, consequently, will require that adequate technical staff expertise be available to the CPPT to devise appropriate recovery strategies, usually with long lead times. DRP Recovery Strategies for IT Infrastructure. For IT infrastructure recovery alternative strategy development there will be IT infrastructure resource requirements to operate time-critical network infrastructures. These most likely will include both voice and data communication requirements and any other equipment needed to support the various recovery tiers. As with IT DRP strategy development, knowledgeable technical and business experts are the best equipped to determine recovery resource requirements.
Sometimes, voice communication recovery is even more time critical than data network recovery. Provisions must be made, in advance, for the capability to divert inbound voice communication to another location. An estimate of capacity required to support time-critical processes must be made to provide cost-effective continuity for voice communication. Allowances for outbound voice communication must also be addressed. Similar to voice communication, estimates of data communication bandwidth and other technical requirements are required. The recovery location, determined by the IT systems, application system, and data center requirements, must be known to determine the exact data network recovery requirements. Department computing locations and resulting local data networking requirements also must be determined and agreed upon. Data communication should be restored at the chosen alternate comput-
ing facility (hotsite, etc.), as well as at the locations that personnel/users may be operating from, if outside their primary business location.

BCP Recovery Strategies for Enterprise Business Processes. Similar to recovery strategy development techniques used for IT and IT infrastructure, the development of business continuity plans for business processes (or functions or units) follows a parallel path. The CPPT should use the enterprise business process maps of time-critical business processes matched to business departments or divisions, etc., as a guide. The CPPT can then utilize that information to meet with business experts to develop the most appropriate recovery strategies.
The CPPT and the business unit participants must consider many factors in determining the most appropriate recovery strategies. For instance, they must consider:

• Business process/function/unit priorities
• Time-critical process descriptions
• IT infrastructure needs
• IT systems needs
• Recovery time objectives (see definitions section)
• Recovery point objectives (see definitions section)
• Cost/benefit analysis for each potential recovery alternative, including manual workaround procedures (a brief comparison sketch follows below)
• Recovery alternatives, such as:
  – Workspace/facilities (see discussion above)
  – Virtual business partners (see discussion above)
  – Logistics and supplies
  – Transportation of supplies and employees
  – Workspace at alternate site for equipment and employees
  – Emergency funds availability to speed decisions and acquisitions

Other inventory information will be fully discussed in the next section of the chapter.

Developing Facilities Recovery Strategies. The requirements for the above resources must also be translated into physical facility requirements. The combinations of options are too numerous and complex to be described here. If an organization's key business functions are not centralized, the communications and facilities recovery resources provided by external vendors may meet their needs. In other cases, an alternate location, controlled by the business unit, may be required. The BIA will indicate which business units need to resume critical processes and when, according to their tier level. It will also indicate the number of employees required for each tier, and their equipment and supply needs.
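Referring to the cost/benefit factor above, the comparison can be made concrete by weighing each alternative's carrying cost against the downtime impact it leaves unmitigated. The sketch below shows the shape of such an analysis; all of the costs, recovery times, and impact rates are hypothetical placeholders, and real values would come from the BIA and from vendor quotations.

```python
# Hypothetical figures for illustration only.
alternatives = {
    # name: (annual_cost_usd, expected_recovery_time_hours)
    "Cold site": (50_000, 168),
    "Warm site": (150_000, 96),
    "Hotsite":   (400_000, 8),
}

downtime_cost_per_hour = 5_000   # assumed business impact rate from the BIA
rto_hours = 72                   # assumed recovery time objective for the process

for name, (annual_cost, recovery_hours) in alternatives.items():
    single_event_impact = recovery_hours * downtime_cost_per_hour
    meets_rto = recovery_hours <= rto_hours
    print(f"{name:9s}  annual cost ${annual_cost:>9,}  "
          f"single-event downtime impact ${single_event_impact:>9,}  "
          f"meets {rto_hours}h RTO: {meets_rto}")
```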
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® An overall facility recovery strategy must also be defined. The logical process is similar to developing data center alternatives. Business experts should be involved as much as possible in considering suggested recommendations and should be allowed, with management, to review and approve them. Integration of Disaster Recovery Plans and Business Continuity Plans into the Crisis Management Process. DRP and BCP strategy development
considerations do not stand alone in times of emergency or disaster. Common mitigation and response strategies must also be linked to the enterprise crisis management planning scheme. Many organizations, especially since 9/11, have created a crisis management team infrastructure that is responsible for monitoring events and reacting to a disaster that affects the enterprise. Development of crisis management plans will be discussed later in the chapter, but for now it is important to keep in mind that the alternative recovery strategies designed for IT and the business processes will link to the overall crisis management strategy of the enterprise. (See Figure 6.2 for how crisis management interfaces with the BCP and DRP structure.) Recovery Strategy Development Techniques. One way for the CPPT to gather this information and gain consensus is to facilitate meetings, designed to determine and document the most appropriate recovery alternatives to continuity planning challenges, as below:
• Determine if a hotsite or any other recovery resource is needed for IT recovery purposes • Determine if additional communications circuits should be installed in a networking environment • Determine if additional workspace is needed in a business operations environment, etc., using the information derived from the risk assessments After these facilitated meetings, CPPT professionals work with the business process owners or representatives, as well as the technical teams, to create business process documentation with the current state information and proposed recovery alternative recommendations for management approval. Identifying Recovery Alternatives. This step involves identification of recovery alternatives available for each of the tiers of recovery (unique to each organization). The recovery strategy is based on the required recovery timeframe.
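One way to turn the required recovery timeframe into a working prioritization is to map each process's RTO onto a small set of recovery tiers and candidate alternatives. The sketch below is a minimal illustration; the tier boundaries are modeled on Table 6.7, the candidate alternatives reflect the broad RTO ranges suggested earlier for each site type, and the sample processes and RTO values are invented. An actual mapping would be driven by the organization's own BIA and strategy decisions.

```python
def recovery_tier(rto_hours: float) -> str:
    # Tier boundaries modeled on Table 6.7 (0-24 hours, 1-3 days, 3-5 days, other).
    if rto_hours <= 24:
        return "I"
    if rto_hours <= 72:
        return "II"
    if rto_hours <= 120:
        return "III"
    return "IV"

def candidate_alternatives(rto_hours: float) -> list[str]:
    # Very broad associations drawn from the RTO ranges discussed above;
    # real selections also depend on cost, location, and risk assessment results.
    if rto_hours <= 24:
        return ["hotsite", "multiple processing sites"]
    if rto_hours <= 120:
        return ["warm site", "mobile site"]
    return ["cold site", "reciprocal agreement"]

# Invented example processes and RTOs.
for process, rto in [("Order entry", 8), ("Payroll", 96), ("Archive reporting", 336)]:
    print(f"{process:18s} RTO={rto:>4}h  tier {recovery_tier(rto):>3}  "
          f"candidates: {', '.join(candidate_alternatives(rto))}")
```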
Table 6.7 illustrates only one method an organization may want to use to prioritize business processes or IT infrastructure components.

Table 6.7. Method to Prioritize Business Processes or IT Infrastructure Components

Recovery Tier I (Recovery Timeframe: 0–24 hours): Resources must be available in advance and implemented first
Recovery Tier II (Recovery Timeframe: 1–3 days): Resources must be available in advance
Recovery Tier III (Recovery Timeframe: 3–5 days): Resources must be identified and quickly available
Recovery Tier IV (Recovery Timeframe: Other): Resources must be identified

Conducting the Recovery Alternative Meetings. Recovery alternative meetings are an important step in the continuity planning process. In the
meetings, key business representatives and IT technicians brainstorm and agree on the most appropriate recovery strategies. The agenda might include: • Introduce the participants • Provide an overview of current state (BIA, benchmarks, risk assessment, etc.) • Present an overview of the meeting rules and expectations • Discuss recovery alternatives and reach consensus • Identify recovery alternative resource providers • Determine strategies for getting management approval • Identify next steps • Obtain time commitments from team members for next steps • Answer questions and address concerns Developing Continuity Plan Documents and Infrastructure Strategies.
As stated, the results of the BIA are used to develop continuity strategies that are documented in the recovery alternative strategy deliverable. The required resources, recovery timeframes (based on the business process), and recovery alternatives are identified, and management approval of the recommended alternative is obtained. Once the strategic alternatives have been identified and agreed upon, they must be documented in a deliverable, the strategy report, which might be reviewed by senior management. Because expenditures for recoverability can be substantial, the approval level required is almost certainly at the officer level, possibly as high as the executive committee or board of directors. For this reason, the CPPT alternative recovery report must be written in business terms with a minimal amount of jargon. In addition to a description of the alternatives, each alternative must be evaluated for cost and noncost factors and presented to management for decision. Developing Testing/Maintenance/Training Strategies. The real cost of continuity planning is in the long-term testing, maintenance, and training activities required to keep the continuity planning process vital and oper373
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® ational. It is the responsibility of the CPPT, on behalf of the enterprise, to create and document the strategies necessary to successfully manage the following ongoing processes. Testing. In the next section of this chapter, development and implementation of continuity test plans will be discussed fully. It is important, however, at this stage in the methodology that the organization formalize short- and long-term testing philosophies to include policies and standards that can be applied to the testing process following implementation. Maintenance. Maintenance procedures are also described in the next section. As above, it is important at this stage in the methodology that the CPPT formalizes short- and long-term maintenance policies and standards for the maintenance process. Training. Prior to 9/11, awareness and training of contingency or recovery team members was usually considered a secondary task. This training was done infrequently, for a number of reasons. Following 9/11, however, people are far more aware of the need to prepare responsible personnel for crisis and recovery activities, and provide regular, ongoing training and frequent testing. This awareness has caused some to favor increased testing and practice over exhaustive documentation of some types of continuity plans. Plan Development Phase Description. Once the recovery strategy phase is complete, the CPPT can move into the plan documentation phase of the project. This phase is where the organization, with the assistance of the CPPT, actually develops the crisis and continuity plans that will help effect the most efficient and effective recovery. The team will design and prepare migration plans in line with the preliminary recommendations and action plans. The primary activities that take place during this phase of the methodology are:
• Determine the practicality of using continuity planning software • Create continuity plan structure and gather required information • Acquire appropriate backup resources Use of Continuity Planning Software and Tools. Traditionally, organizations have documented their continuity plans using paper-based technologies (i.e., typewritten, word processing based, etc.). Continuity plans that are on paper-based forms are designed to facilitate the rapid recovery of business processes and supporting resource operations. They are cumbersome, however, and are often merely developed and put on the shelf. The goal is to develop the plan, test it appropriately, then continue testing and maintaining it. The real cost of continuity planning is not in the original development of the plans; it is in the ongoing testing and maintenance of the plans and the entire continuity planning process. Any tool that enhances an organization’s capability to effectively document, test, and maintain continuity plans is advantageous. 374
Many organizations have acquired and implemented continuity planning software tools (a large number of which are commercially available). Continuity planning software helps planners construct recovery plans by automating the planning and documentation task. Use of continuity planning software should save time for the organization, improve the overall effectiveness of the recovery plan documentation process, and facilitate maintenance and execution of the plans. The continuity planning software solution or mechanism should be easy to use and maintain, and should be based on a sound business continuity planning methodology. It must be compatible with existing organization systems. The tool should also allow for an appropriate degree of plan roll-up for validation and review purposes, and should have the ability to demonstrate to management the robust state of the continuity planning implementation. Continuity planning software tools also enhance overall awareness and training of those individuals responsible for development, implementation, testing, and maintenance of continuity plans, if the data-gathering technique or method is standardized. As the plan writers will be the same people who implement and execute the plans should disaster strike, value is added to the organization, because everyone is on the same page. Some of the advantages of using continuity planning software are:

• Standardized tools
• Centralized development
• Oversight, audit, and management
• Improved testing and maintenance
• Facilitated plan implementation
• Ability to prepare manual plans
• Ability to be used online during a crisis or disaster
Future Trends in Continuity Planning Automation. Another trend in the industry is the use of existing enterprise in-house technologies to build platforms for maintaining continuity plans in an automated environment. This has many of the software-related advantages mentioned above. It also allows the organization to Web enable the continuity plan development and maintenance procedure. While providing for multiple copies to be maintained, Web-enabled plans are easier to use during an emergency. Examples of these types of technologies include Lotus Notes, Sharepoint, and numerous database applications. This approach can sometimes be much more cost-effective than purchasing and maintaining externally developed continuity planning software products. Building Continuity Plans. Continuity Plan Development. During plan development, the CPPT assists the IT and BCP teams in documenting their plans. Using the recovery strategies determined previously, the CPPT can initiate or at least facilitate development of enterprisewide business continuity plans. 375
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® Business Continuity Plans for Enterprise Business Operations. The information used to populate business continuity plans is based on the current state assessment phase and should include the strategies as determined in the development phase. The resulting business continuity plans will comprise the activities and tasks required by business process owners for recovery of the process. Disaster Recovery Plans for IT and IT Infrastructures. The IT disaster recovery plans will address the technical recovery priorities as determined during the strategy phase and will include the procedures, team structure, and inventory information required by IT.
For all intents and purposes, the basic structure and the categories of information required for both the IT DRP and the BCP are very similar. The following is a description of the types of information that should be organized and formalized into the continuity plans. CONTINUITY PLAN CONTENTS. The purpose of a business continuity plan is to significantly reduce the impact that will be suffered by the organization if a disaster strikes. To accomplish this, it is important that the continuity plan and planner address most of the contingencies that may arise ahead of time. It will be too late to disseminate the specific guidance on recovery decisions to recovery personnel following a disaster. In times of crises, people are placed under a great deal of stress and can be subject to unpredictable decision making. This can result in mistakes that will slow the recovery. The plan should anticipate appropriate measures that expedite recovery and provide direction to both experienced and inexperienced personnel on how best to proceed. Although it is impractical and impossible to prepare an exhaustive list of all contingencies and directions on how to address them, it is very possible to document the plan with precise recovery guidelines, and assign tasks to specific recovery team members.
Continuity plans are composed of three broad categories of information: • Scope, objectives, and assumptions (testing and maintenance responsibilities) • Execution and logistical information (i.e., team structure, recovery tasks, etc.) • Inventory information (i.e., space, people, software, networks, etc.) DEFINE CONTINUITY PLAN SCOPE, OBJECTIVES, AND ASSUMPTIONS. Introductory information contained in all continuity plans is relatively standard and includes the following:
• Introductory information and a description of the purpose of the continuity plan (i.e., table of contents, background, scope, objectives, assumptions, etc.) 376
Business Continuity and Disaster Recovery Planning • Plan maintenance responsibilities (who specifically is assigned maintenance responsibilities and what are their timeframes) • Plan testing responsibilities (who specifically is assigned testing responsibilities and their timeframes) Identify the Recovery Team Structure. The CPPT should assist in the identification of those individuals who will be assigned recovery team responsibilities. Each team should be comprised of individuals who have the appropriate expertise to recover and add value to the CPP effort. Of course, there is no hard and fast guideline on the number of individuals that should be assigned to each of these teams. This decision depends upon the organization. The following recovery team functions are suggested as a minimum.
The recovery management team (RMT) is responsible for leading the recovery effort. All other recovery teams report to the RMT, and all decisions flow from this team. The RMT is responsible for actually declaring a disaster (or degree of disaster) and communicating it accordingly. The RMT should have team members with the authority to make quick, considered decisions, and be able to authorize recovery expenses. This team should also have representatives of various departments such as human resources, so that notifications and communications with employees and their families can be properly facilitated. There should be a security presence on this team to help prevent fraud, looting, vandalism, etc., if possible. Legal and life safety issues must also be considered when naming team members. The damage assessment team has the important responsibility to quickly assess the current situation and ascertain whether the event will render the IT or business processes unavailable for longer than the RTO. Whatever their decision, the team lead should quickly provide his assessment to the RMT leader along with the recommendations on next steps. If a disaster has been formally declared, the backup activation team initiates the recovery procedures as outlined in the continuity plans, including but not limited to moving to alternate sites; scheduling and executing the transfer of people, equipment, and other resources; and recovering the most time critical processes and systems first. Once the time-critical systems or business operations have been restored, the backup operations team takes over the more routine operations of the processes while restoration procedures are initiated. The restoration team is responsible for further diagnosis of the damage and for the restoration of the systems or business operations capabilities to their original or desired condition. These responsibilities include, but are not limited to, cleaning, salvaging, repair, procurement, and replacement activities.
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® Once the restoration team has completed its duties, the primary site/service reactivation team is responsible for preparing the primary site or capability for reactivation. This must include a full test of the newly renovated systems or business operations capabilities, ensuring that they are fully functional, so as not to cause yet another disaster by moving away from the backup site too soon. Recovery Plan Logistical Information. Once the CPPT has identified and assigned the recovery team’s responsibilities, it is time to document those activities and tasks associated with the recovery of time-critical systems and business processes. This portion of the continuity plan should really be considered the heart of the plan. The suggested order of tasks is as follows:
1. Detail recovery procedures, checklists, etc., that outline the precise steps necessary to recover time-critical applications, systems, networks etc., depending upon the scope and objective of the recovery plan. 2. Assign recovery team personnel who are responsible for executing the specific recovery procedures. 3. Assign a location where the recovery activities are to take place (EOC, etc.). 4. Assign the presumed timeframe for the recovery activities. 5. Identify to whom the recovery teams are to report, what they should report, and when they report (in what timeframe). Continuity Plan Inventory Information. Because the recovery teams will require resources to execute their assigned recovery duties, resource inventory information should be gathered and documented prior to the disaster for ease of access.
To provide ready access to this inventory information, these inventory lists should be supplemented, expanded, or centralized into appropriate appendices within the continuity plan, rather than included in the text of the continuity plan itself. The inventory information consists of all the time-critical inventory information, and only the time-critical inventory information, required to successfully execute the recovery effort. This inventory information includes detailed listings of people, equipment, documentation, supplies, hardware, software, vendors, other suppliers, critical applications, required data processing reports, networks/communications capabilities, vital records, transportation, data backup, backup facilities, backup site directions and amenities, civil authorities, customers (internal and external), recovery site personnel (third-party vendor), off-site storage personnel (third-party vendor), location of emergency funds, etc.
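Whether the appendices are kept on paper or in continuity planning software, the inventory records benefit from a consistent structure so that recovery teams can filter quickly for what they need. The sketch below shows one hypothetical way to organize such an appendix; the field names, categories, and sample entries are invented for illustration and simply mirror the kinds of items listed above.

```python
from dataclasses import dataclass, field

@dataclass
class InventoryItem:
    category: str          # e.g., people, hardware, vendors, vital records
    description: str
    quantity: int = 1
    location: str = ""
    contact: str = ""      # who can supply or authorize the resource
    time_critical: bool = True

@dataclass
class PlanAppendix:
    plan_name: str
    items: list[InventoryItem] = field(default_factory=list)

    def time_critical_items(self, category: str) -> list[InventoryItem]:
        # During execution, recovery teams only need the time-critical subset.
        return [i for i in self.items
                if i.time_critical and i.category == category]

# Invented sample entries.
appendix = PlanAppendix("Accounts receivable BCP", [
    InventoryItem("hardware", "Workstations with standard image", 12, "Alternate workspace"),
    InventoryItem("vendors", "Commercial recovery site contract", 1, contact="Vendor hotline"),
    InventoryItem("vital records", "Customer master file backup", 1, "Off-site storage"),
])

for item in appendix.time_critical_items("hardware"):
    print(item.description, item.quantity, item.location)
```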
Business Continuity and Disaster Recovery Planning Facilitating Continuity Plan Development by the CPPT. Table 6.8 is provided as a guide for how CPPT members may want to approach the documentation of continuity plans from a procedural standpoint. This table contains suggested approaches only; they are certainly subject to interpretation and customization depending upon their use. The Continuity Plan Contents. The continuity plan contents are:
• Plan overview and assumptions
• Responsibilities for development, testing, and maintaining the plans
• Continuity team structure and reporting requirements
• Detailed procedures for recovery of time-critical processes, applications, networks, systems, facilities, etc.
• Recovery locations and emergency operations centers
• Emergency operations communications channels
• Recovery timeframes
• Supporting inventory information (hardware, software, networks, data, people, space, transportation, external agents, documentation and data, etc.)
Contrasting Crisis Management and Continuity Planning Approaches.
Crisis management planning is defined in the American Heritage Dictionary as “special measures taken to solve problems caused by a crisis.” Crisis management planning is a term used to describe a methodology used by executives to respond to and manage a crisis. The objective is to gain control of the situation quickly so an organization can manage the crisis efficiently and minimize the negative impacts. Should a crisis strike, the organization crisis management team will be activated to manage the crisis until conclusion. The crisis management team is also utilized during the recovery effort. The continuity plans identify how the business operations (units) should receive support from members of the executive management and crisis management team. Typically, this support would be in the form of facilitated communications, resource allocation and access, and any other support required by the business and IT recovery teams to facilitate rapid continuation of time-critical functionality. Differences in Scope. The continuity plans deal with incidents that cause physical or logical damage to enterprise assets and resources. The crisis management plan deals with those types of incidents, as well as incidents that do not cause physical damage to assets of the organization, such as financial emergency, kidnapping, executive death, or injury. This is the main difference between the two types of plans. Building Crisis Management Plans. Crisis management planning is an integral part of the continuity planning process for the enterprise. Crisis management plans should be activated in a disaster or preemergency situ379
ation and be tightly interwoven with the continuity planning structure of the organization. The continuity plans identify how the business processes affected by the disaster are recovered. During that time, the business units receive support from members of executive management and the crisis management team. The role of the crisis management team is to manage the enterprise through the disaster situation and facilitate communication with other teams and parties.

Table 6.8. Continuity Plan Development Guidelines

The CPPT follows a parallel set of facilitation steps for the IT continuity plans (DRP), the business operations continuity plans (BCP), and the crisis management plans (CMP).

Business Operations Continuity Plans (BCP):
• Meet with business unit management and facilitate development of continuity team structures for each BU involved in the effort
• Facilitate development of activities and tasks to recover time-critical BU resources (workstations, facilities, space, vital records, people, telephones, etc.)
• Assign activities and tasks to continuity team members
• Identify and establish BU emergency operations center location(s) for each BU involved
• Establish communications processes and reporting timeframes with client crisis management and IT continuity planning teams
• Gather and document all inventory information for those resources that support time-critical resources

Crisis Management Plans (CMP):
• Meet with client executive management and facilitate development of crisis management team structures
• Facilitate development of activities and tasks to manage the organization through an emergency/crisis event
• Assign activities and tasks to continuity team members
• Identify and establish crisis management emergency operations center location(s)
• Establish communications processes and reporting timeframes with IT and business operations continuity planning teams, as well as with external communities (i.e., shareholders, civil authorities, customers/clients, employee families, press, etc.)
• Gather and document all inventory information for those resources that support time-critical resources

Emergency Operations Center (EOC) Designation. These potential scenarios call for at least two, if not three, identified and outfitted EOCs. The first EOC should be in the primary building location of the organization, possibly a large conference room that has been specifically identified as an EOC. The second location should be in a space that is relatively close to the primary location for ease of access by employees, should the disaster be localized in nature. This type of space could typically be another building owned and occupied by the organization, rented space, or a commercially available off-site workspace location. In some instances, where regional disasters are considered a possibility, a tertiary EOC should be established at some distance from the primary location, such as another close-by city or even in another state if necessary. Outfitting the EOC is a matter to be decided by organizational crisis management and continuity planning personnel. An efficiently equipped EOC might include a large number of workstation connections, telephones, tables and chairs, white boards, office supplies, communications equipment (radios, cell phones, satellite phones, etc.), cable television access for monitoring news channels, coffee-making equipment, and appropriate water and food, as desired.

Acquisition of Backup or Recovery Resources. It is during this phase that the CPPT should be working with IT technicians and business operations and facilities representatives to acquire and install all those backup and recovery resources identified in the strategy development phase. Examples of the types of resources that should be considered for acquisition and implementation include:
• Commercial recovery site contracts for IT or business process support
• IT infrastructure circuits and supporting telecommunications equipment
• Other IT backup and recovery equipment, such as disk drives, workstations, servers, software, and duplicate software licenses or permissions
• Equipment such as server racks, mail room equipment, telephones and fax machines, BlackBerry devices, cell phones, emergency-use-only radios, etc.

Testing/Maintenance/Training Development Phase Description. This phase of the methodology directs the CPPT to design the following:
• Testing and maintenance strategies for short- and long-term continuity and crisis management
• Continuity planning process training and awareness programs

Developing Plan Testing Strategies. During this phase, the CPPT should work with the IT and business operations representatives to design appropriate continuity testing approaches and guidelines. The CPPT, however, should begin with the next step, which is to require all those IT and business operations units that created plans to perform the very first walk-through test.

Initial Testing of the Plans. The CPPT should strongly advise that immediately upon completion of the first continuity plan drafts (before they can possibly be called complete), they must undergo a very fundamental tabletop examination. It really could be considered the first test of the continuity plan, and is simply a walk-through/read-through test.

Walk-Through Test Process Description. The objective of the initial continuity plan walk-through process is to bring the contingency organization, which is the same as the recovery team designated in the plan, together to review the entire continuity plan to ensure the accuracy of all information it contains, including:
• Plan objectives
• Scope and assumptions
• Plan testing
• Maintenance
• Training requirements
• Contingency organizational structure
• Interim and alternate procedures
• Action plan checklists
• Adequacy of plan appendices
This process will serve to ensure that the plan accurately reflects the continuity strategy, will raise awareness of the continuity team members, and will train each of them in his or her particular continuity responsibilities. The approach to this test is as follows: • Call a meeting of the contingency organization for an appropriate length of time (usually one to two hours). The contingency organization is composed of all those individuals who make up the recovery teams as defined in the plan. There may be others who wish to attend, such as the internal auditor or department managers. • Prepare and distribute a copy of the continuity plan to each participant at the meeting. IT or business operations management, or the continuity management team leader, should lead the discussion by asking all participants to review the plan document, from beginning to end, in an organized manner. For instance, the manager may orally 382
Business Continuity and Disaster Recovery Planning review the table of contents while the contingency team members follow along. Test participants should be instructed that they are to discuss the document and suggest changes to ensure that it reflects the true continuity needs of the department. Each chapter of the plan should then be reviewed in sequence. Particular attention should be given to the following areas: – Vital records identification and off-site storage arrangements (departmental plans only) – Plan scope and assumptions – Interim and alternate procedure steps – Team structure appropriateness • Action plan organization, activities, and tasks • Appendixes’ adequacy, completeness, and currency updated accordingly During the walk-through, one individual should be assigned the responsibility of taking notes on any suggested changes to the plan, and then arrange to update the plans accordingly. Once the plans have been updated, the older versions of the plan should be replaced with this new, clearly dated version, and distributed accordingly. Every effort should be made to replace each copy of the obsolete version. Prepare a report that addresses the results of the test, and communicate it to the appropriate people. This would be all levels of management concerned with all aspects of business continuity planning and recovery in the organization. This process should be repeated periodically, depending upon the number of changes that take place within the department, such as changes to the department procedures, organization, personnel, and location. The results of the walk-through test must be documented and the continuity plan updated, as soon as possible following the test. Once the plan has been updated, the initial version of the continuity plan can be considered completed, and the plan can then go into maintenance mode. It is important that members of the CPPT participate in as many of the walkthrough tests as possible. It is wise to divide the team and assign individual members to different departments’ walk-throughs for efficiency. Utilizing the internal/external audit staff in this capacity can also provide for an adequate number of third-party observers so that several departments can conduct their walk-through at the same time, thereby shortening the testing phase window. Developing Long-Term Testing Strategies. Regular and ongoing tests of continuity plans demonstrate their effectiveness, train personnel in continuity 383
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® operations, and serve as a vehicle for updating the plan. The plan should have information describing the status of plan testing or, at a minimum, direction for the reader as to the location of plan test documentation and updated documentation. The importance of ongoing testing of the continuity plan cannot be overemphasized. Specific responsibilities, including test coordination, must be assigned to individuals and activities monitored. The functional titles and names of those responsible should be clearly delineated in the plan. Additionally, more detailed instruction or guidance should be given to those responsible for plan testing. The following are some suggested areas that may be included in test preparation, implementation, and follow-up documentation. TEST OBJECTIVES. What is the purpose and objective of the test? An example would be to test the plan with the objective of determining that the plan’s continuity team structure is adequate and will respond in a timely manner to a test or actual alert. MEASUREMENT CRITERIA. How will the test management team determine if the objectives of the test were achieved? Describe, in detail, the test evaluation criteria, and provide the test team with the materials and guidance required to properly document and evaluate the test’s effectiveness. TEST SCHEDULE. When will the test occur? The test should be scheduled so it does not impact regular production work. The test’s effectiveness will increase with the magnitude and complexity of the test subject matter, reflecting the fact that a disaster would not likely be restricted to one facet of the organization. TEST TIMEFRAMES. How long should the test take? Precise test timeframes should be defined and adhered to. Should the actual testing activity take longer than the defined timeframe, the reasons for it should be thoroughly examined and understood. PARTICIPANTS. Who will participate in the test? The selection of test participants is an important issue. Because one of the primary benefits of testing is the increased awareness and training of the contingency organization, test participants should be selected carefully, with frequent rotation of participants to achieve the maximum possible training benefit for the most participants. TEST SCRIPT. What are the instructions to the test participants? Depending upon the complexity of the test, preparation of the testing script will require more or less effort. Remember that the results of the test will be evaluated using the original test script as a guideline. 384
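The test-preparation questions above (objective, measurement criteria, schedule, timeframes, participants, and script) lend themselves to a simple, repeatable template, so that results can later be evaluated against the original script. The sketch below is one hypothetical way to capture a test definition; the field names, dates, and sample values are invented for illustration.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ContinuityTest:
    objective: str
    measurement_criteria: list[str]
    scheduled_date: date
    timeframe_hours: float
    participants: list[str]
    test_script: list[str]            # ordered instructions given to participants
    results: dict = field(default_factory=dict)

    def record_result(self, criterion: str, met: bool, notes: str = "") -> None:
        # Results are documented against the original criteria and script.
        self.results[criterion] = {"met": met, "notes": notes}

# Invented example.
test = ContinuityTest(
    objective="Confirm the continuity team structure responds to an alert in a timely manner",
    measurement_criteria=["All team leads acknowledge the alert within one hour"],
    scheduled_date=date(2006, 11, 15),
    timeframe_hours=2.0,
    participants=["RMT lead", "Damage assessment lead", "Backup activation lead"],
    test_script=["Issue test alert", "Record acknowledgment times", "Convene conference bridge"],
)
test.record_result("All team leads acknowledge the alert within one hour", met=True)
```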
The various types of testing (checklist, structured walk-through, simulation, parallel, and full interruption) are described in the implementation phase.

Recovery Plan Updating Instructions and Responsibilities. Once completed, the continuity planning documentation or continuity planning software should be updated. What are the instructions that detail the maximum timeframes for updating the continuity media? Who has responsibility for updating the plans? Development of other follow-up measures may also be necessary. Each of these questions should be answered and documented.

Notification of Interested Parties. Almost as important as the test itself is the notification of interested parties, including senior management. This is an essential component in the overall testing strategy of the organization. Timely notification of test results serves to keep senior management attuned to the status of continuity capabilities, and should also demonstrate the need to continue ongoing, regular continuity planning tests.

Developing Plan Maintenance Strategies. Similar to the philosophy for long-term continuity planning testing described above, the CPPT should address the long-term strategies for maintaining the continuity and crisis management structure. The considerations to be addressed include the following.

REGULAR REVIEWS AND UPDATES. Internal and external audit and internal compliance functions can be used to ensure that the plans are regularly reviewed and updated by the IT and business operations components that originally developed them. Regular or spot inspections by members of the CPPT also may prove useful.

VERSION CONTROL. This topic is critically important. Obviously, organizations change on a daily basis: employees, employee assignments, departmental shifts, IT and infrastructure changes, etc. As a result, continuity plans, especially hard-copy plans, can get out of date very quickly. Out-of-date contact lists of key players can result in real execution problems later on, so keeping them updated, as simple as it may seem, is vital. As updates are made to the changing plans, a failure of version control could seriously and adversely impact the enterprise. This would happen if an event occurred and the continuity and crisis management teams referred to different versions of the plans. Retrieval and destruction of outdated plans is an important issue, so a process designed to facilitate this is also important. It should go without saying that version control is an absolute must.

DISTRIBUTION OF UPDATED PLANS. Along the same lines as version control, distribution control of the plans can be challenging for a couple of reasons.
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® The first, mentioned above, concerns version control. The second, however, also has serious ramifications because the information that plans contain is confidential, such as personnel data, confidential company process, and locations. The CPPT must ensure that plan distribution controls are adequate to protect this resource against disclosure to those who do not have the need to know. An automated system for documenting and storing continuity plans can be particularly useful for both version control and distribution challenges. Developing Continuity and Crisis Management Process Training and Awareness Strategies. A training and awareness program is key because it
is the organization’s people who will be designing, documenting, testing, and maintaining the continuity plans that may actually use the plans in an emergency situation. The CPPT should, as part of this phase, understand what internal organization resources are available to spread the word regarding employee awareness of continuity and crisis management activities. Internal groups such as human resources, training, risk management, and others may well already have mechanisms for spreading awareness and training programs available that cover many other topics. Taking advantage of existing organization channels for distribution of this type of information can be very effectively employed by the CPPT. Whether there are internal resources to increase awareness and training, the CPPT must take this reasonability very seriously and, if necessary, create an appropriate program to ensure its implementation. Sample Phase Activities and Tasks Work Plan. The work plan in Table 6.9 presents a sample of the high-level activities and tasks suggested as a starting point for planning for and executing all of the development phase.
Implementation Phase Description Much of the continuity planning process analysis, strategy, and design have been done at this point. The implementation phase is when the agreed upon strategies and action plans are deployed. The objective of this phase is to: • Analyze and validate CPPT implementation plans that were prepared earlier and ensure that initial walk-through tests have been completed • Meet with each of the specific organizational units that will be impacted by the implementation • Monitor recovery resource acquisition and plan implementation efforts Analyze CPPT Implementation Work Plans. The CPPT should consolidate and validate each of the IT and business operations unit implementa386
Table 6.9. Sample Phase Activities and Tasks Work Plan (columns: Development Phase Activity/Task; Deliverables; Milestone Definitions)

Stage 1: Phase Start-Up and Preparation
• Define project charter
• Define project scope
• Define project management approach
• Define project organization
• Develop project plan
• Develop project work plan
• Develop project budget
• Develop high-level project budget
• Develop value scorecard
• Develop value delivery process
• Kickoff project
• Prepare for kickoff meeting
• Run meeting

Stage 2: Prepare Continuity Strategy Design
• Collect benchmarking information
• Identify potential people enablers
• Identify potential technology enablers
• Identify potential physical facilities enablers
• Plan for continuity strategy design meeting

Stage 3: Continuity Strategy Design Session
• Conduct continuity strategy design meeting
• Establish success criteria
• Review leading practices/benchmarking info./enablers from stage 2
• Generate improvement alternatives
• Develop and document high-level recovery strategy (deliverable: recovery alternative recommendations)
• Generate rough order of magnitude alternative cost comparison
• Obtain management sign-off and approvals (deliverable: management presentation)

Stage 4: Design Detailed Continuity Strategy
• Initiate detailed design of continuity strategy
• Design people, technology, and physical facilities continuity strategies
• Design people, technology, and physical facility enablers
• Obtain detailed quotations/issue vendor RFQs (deliverable: RFQ)
• Submit detailed continuity strategy to management for approvals (deliverable: management presentation)
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® tion work plans to ensure that coordination issues have been addressed, then stage the most appropriate deployment schedule, and then establish an appropriate get-together with each of the organizational entities to consider and validate implementation work plan strategies. Organizational Unit Plan Deployment. The CPPT should schedule deployment meetings with each of the IT and business units involved in the effort and address the following:
• The initial versions of the continuity plans or crisis management plans have been tested • The recovery team structures or the plans have been validated • That there have not been significant changes to the environment since initial planning efforts • The identification of the persons who are assigned continuity planning testing, maintenance, and training responsibilities for the unit • Present long-term testing and maintenance strategies validate understanding • Regular and ongoing communications have been established between the unit and the CPPT or organizational unit that will be taking over the continuity planning process management effort Monitor Implementation. As part of the established regular and ongoing communication paths determined during the deployment meetings, the CPPT must monitor IT and business operations implementation efforts and support those efforts as required. Delays in implementation should be monitored and managed accordingly. Program Short- and Long-Term Testing. This implementation phase includes the deployment and implementation of the long-term testing and maintenance strategies described below. Why Test? Regular ongoing testing, or exercising of continuity plans, demonstrates their effectiveness, trains personnel in recovery operations, and informs the planner of needed updates. Continuity plans should have information describing the status of plan tests or, at a minimum, direction for the reader as to the location of plan testing and updated documentation. Additionally, a walk-through of this plan should be accomplished as soon as possible. As mentioned previously, ongoing testing is vital. Specific responsibilities must be assigned and activities monitored. To achieve this goal, specific individuals must be appointed responsibility for BCP testing and test coordination. The functional titles and names of those responsible should be clearly delineated in the plan. Continuity Plan Testing (Exercise) Procedure Deployment. During plan testing, the CPPT works with business unit leaders to simulate potential disasters and test continuity plans for effectiveness. Any necessary adjust388
ments and modifications are incorporated into the plans, so strategies remain flexible over time. There are some very basic tenets of continuity planning testing that should be observed. Objective of Continuity and Crisis Management Plan Tests (Exercises).
There is more than one reason to test. Obviously, the enterprise wants to test its recovery strategies and business continuity plans to see whether they work. Perhaps an even more important reason for the tests is that they serve as a training tool. Tests usually place the participants in a roleplaying situation. By thinking and reacting during the test, they mentally place themselves in a recovery situation and gain an awareness and understanding of what it might take to help the enterprise survive. Another reason for a test is to inventory and enhance the business continuity plans. In other words, a continuity planning test is a way to maintain the business continuity plan and be sure that the included business processes are updated. Tests also demonstrate the capability to recover rather than just demonstrate the existence of a plan to recover. They are also utilized by compliance entities such as internal and external auditors to verify recovery capabilities. Types of Tests. There are myriad ways that an organization can set up and conduct a test. Types of tests include the following. CHECKLIST. A checklist test is one in which the continuity planner or members of the recovery team validate the inventory checklists in their continuity plans by physically walking through the checklist and verifying that each of the inventory items is available and viable. As mentioned earlier, these inventory items include hardware, software, telecommunications, people, equipment, documentation, data, space, transportation, procedures, etc. Areas that may be included in test preparation, implementation, and follow-up documentation are set out in the development phase section of this chapter. TABLETOP WALK-THROUGH. A tabletop walk-through test consists of convening the recovery/continuity team members named in each specific continuity planning document to achieve two objectives. The first objective is for the team members to thoroughly study and discuss all aspects of the planning document or output and to challenge every assertion, assumption, activity, task, action, etc., called for within the plan. This challenge involves open discussions about the practicality, correctness, viability, and suitability of every aspect of the plan. Recovery team members should be encouraged to openly question any aspect of the planning strategies, etc., to work through areas of the plan that may well need to be changed. The second purpose for a tabletop walk-through is to serve as a training and awareness tool. It is intended to help the continuity planner to familiarize the recovery team members with their specific roles and responsibil389
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® ities, and also begin to indoctrinate them into thinking like a recovery team — a practice event, if you will, that allows the team members to participate and begin to get comfortable with acting in a recovery team environment. SIMULATION. A simulation test is one where the organization actually conducts some level of simulation of an emergency/disaster event. The breadth and scope of this type of simulation exercise can vary significantly from a very small localized departmental simulation to an all-out enterprisewide simulation, and all events in between. PARALLEL. A parallel test is often used in a transaction-oriented business environment supported by IT. An organization may decide to retrieve yesterday’s backup data and apply today’s transactions against that data in a parallel way to compare the results to today’s actual processing files to ensure that they are exactly alike. This process will highlight any faults in the data reconstruction process. FULL INTERRUPTION. Not usually recommended as an appropriate testing approach because it requires interruption of actual production activities on a real-time basis, a full interruption test is an all-encompassing continuity planning test. Extreme care should be taken when conducting this type of test so as not to actually disrupt production or even cause a real disaster. Under certain circumstances, this may be a useful method, but care is advised, as it involves a deliberate interruption and then recovery of business processes.
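The parallel test described above is essentially a reconciliation exercise: yesterday's backup data plus today's transactions should reproduce today's actual processing results. The sketch below illustrates that comparison in the simplest terms; the account data, transaction log, and reconciliation rule are hypothetical stand-ins for whatever batch outputs the organization actually compares.

```python
def apply_transactions(balances: dict[str, float],
                       transactions: list[tuple[str, float]]) -> dict[str, float]:
    # Re-apply today's transactions to the data restored from yesterday's backup.
    result = dict(balances)
    for account, amount in transactions:
        result[account] = result.get(account, 0.0) + amount
    return result

# Hypothetical restored backup, transaction log, and production output.
restored_backup = {"ACCT-1": 100.0, "ACCT-2": 250.0}
todays_transactions = [("ACCT-1", -40.0), ("ACCT-2", 15.0), ("ACCT-3", 75.0)]
production_output = {"ACCT-1": 60.0, "ACCT-2": 265.0, "ACCT-3": 75.0}

reconstructed = apply_transactions(restored_backup, todays_transactions)
mismatches = {k: (reconstructed.get(k), production_output.get(k))
              for k in set(reconstructed) | set(production_output)
              if reconstructed.get(k) != production_output.get(k)}

print("Parallel test passed" if not mismatches else f"Faults found: {mismatches}")
```

Any mismatch highlights a fault in the data reconstruction process, which is exactly what this type of test is meant to expose.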
Areas that may be included in test preparation, implementation, and follow-up are set out in the development phase section of this chapter. In addition, Geoffrey Wold in his book Business Continuity Preparedness10 presents a well-written discussion on testing the plan (Chapter 15, p. 15-1), including a detailed discussion on the advantages and disadvantages of test types.

Program Short- and Long-Term Maintenance Strategies. Short- and long-term maintenance strategies were designed and documented during the previous phase. It is here where the final approved strategies are deployed and introduced into systems and business process documentation and standard operating procedures. Each continuity plan should contain directives relative to short- and long-term maintenance responsibilities. This is also the time to ensure that any metrics that have been developed for the continuity planning process are updated with the short- and long-term maintenance strategy expectations.

Regular Review and Updates. Depending upon the continuity plan documentation method (i.e., hard copy, internally automated, Web based), the continuity plan infrastructure should be subjected regularly to review for currency and viability. Given the significant investment in time and
Business Continuity and Disaster Recovery Planning resources required to develop a viable continuity planning structure, the use of tools and techniques that facilitate a regular and ongoing review of the continuity plans is vital to protect that investment. In addition, keeping the plans current will help ensure that they are capable of doing what they were designed for: to drive down the impact of a disaster by facilitating rapid recovery actions and activities. Version Control. As mentioned previously, version control of continuity plans is a critical issue when the plans themselves are hard copy or paper based. Enterprise distribution and tracking of the plans is essential, so the continuity planner can efficiently facilitate their updating as circumstances change within the organization. It is very easy to understand that in times of emergency, everyone should be working from the same version. Even in an internal automated environment where hard-copy plans are merely stored on automated media, version control is critical. If the plans are Web based, with appropriate database capabilities, version control and change management issues are less challenging. This is one big reason why automation of continuity planning structures is desirable. Retrieval and Destruction of Outdated Plans. As mentioned above, if the continuity planning structure is paper based, document version control, storage, retrieval, and destruction processes must be developed and rigidly adhered to. Unlike other, more normal types of organizational information, continuity plans are often designed with the intention that they will be taken off-site and stored elsewhere, including employees’ homes, automobile trunks, etc. It is important that a continuity planner develop and oversee an effective plan inventory process. Updating Contact List of Key Stakeholders. Along with version control and inventory processes comes the necessity for a continuity planner to ensure that recovery team and key contact calling trees or contact lists are kept current. This is more easily accomplished in an automated environment than when paper-based plans are used. There is often a high degree of sensitivity also associated with this process. Key organizational stakeholder detailed contact information must be maintained under an appropriate level of control for obvious privacy and security purposes. Program Training, Awareness, and Education. Continuity planning, like the information security function, is much more a people issue than a technical issue. Why are the training, awareness, and education processes so important? Because of the following reasons:
• It is the organization’s people who know the business processes.
• It is the people who must document the planned recovery processes.
• It is the people who will test and maintain the plans.
• It is the people who will be impacted by the event.
• It is these same people who will have to recover the organization.
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® Therefore, a high degree of importance must be placed by the organization on the people’s training, awareness, and education. A lesson learned from the 9/11 event was that although heroic efforts were put forth by many brave people, more focused attention on training the recovery teams would have been much more useful before the event, rather than devoting the thousands of hours by many organizations in documenting, to the nth degree, a comprehensive continuity plan that turned out to have little relevance to the situation at hand. In acknowledging the critical importance of this activity, it is often useful for a continuity planner to seek out internal or external training/education/awareness subject matter experts to assist in developing and deploying the most appropriate processes for achieving this vital objective. This would include focused attention on all three components of the process: training, awareness, and education of the organization’s people. Emergency Operations Center (EOC). When implementing the emergency operations center, two fundamental recovery scenarios must be considered:
• The disaster has occurred, but it has not affected the physical location, and therefore the organization’s people can stay at work and continue working, only without one or more of their supporting resources (IT, for instance). • The disaster has occurred, and it does impact the organization’s physical location or premise; thus, the people have to relocate to secondary locations to carry out time-critical process recovery. See the implementation phase for further guidance on the EOC. Management Phase Description The management phase of the methodology focuses the CISSP candidate upon those activities, tasks, and responsibilities associated with organizing and executing the day-to-day management of the continuity planning process on a go-forward basis. During this process, the CPPT works with the business process owner or representative to focus attention on program oversight and continuity planning manager roles and responsibilities. Program Oversight. Completely aside from the many phased tasks and activities presented in this chapter (most of what the continuity planning manager or function should be responsible for), there are a number of ongoing oversight duties that are required to ensure the health and viability of the continuity planning process (CPP). Continuity Planning Manager Roles and Responsibilities. Following is a list of some of the oversight duties that an enterprise continuity manager might be responsible for: 392
Business Continuity and Disaster Recovery Planning • Continue to serve as the primary CPPT contact, even if the initial CPPT membership is pared down after implementation of the continuity planning process • Serve as the primary contact for the enterprise CPP; present information on such topics as status, requirements, and environmental changes that drive CPP changes, industry leading practices, threat potential, and the changing threat environment • Develop and maintain continuity planning policies, standards, procedures, and practices • Plan, lead, or otherwise observe the execution of major continuity planning exercises and tests • Serve as a communication clearinghouse for continuity planning subprocesses such as the IT disaster recovery planners, business continuity planners, facilities management, and members of the crisis management team • Serve on the crisis management team as the continuity planning expert • Ensure that continuity planning efforts are included when negotiating or otherwise dealing with significant external trading partners, business partners, and all other virtual organizational dependencies • Serve in a continuity planning consulting role to all disaster recovery, continuity planning, and crisis management planning functions • Interface with external continuity planning vendors, commercial recovery site representatives, relevant software vendors, consultants, and other CPP-related external entities • Advise and assist organization or business units with the need to verify or otherwise validate the continuity planning capabilities of an external entity, if their time-critical business processes rely on them • Develop, maintain, and participate in continuity planning-related training, education, and awareness programs • Interface with internal departments responsible for physical, environmental, and information security to ensure that coordination of activities is maintained • Ensure that the connections between formalized emergency response programs and written procedures include hooks into the crisis management and continuity plan structures • Work as a liaison with the IT disaster recovery lead to ensure that IT department and business operations RTO expectations are aligned and adjusted as needed, including consideration of off-site data backup practices, alternative recovery site status, RTO and RPO changes, etc. • Prepare short- and long-term CPP budgets and plan for future enhancements to the process • Participate with management in developing, maintaining, and abiding by a well-developed set of qualitative and quantitative CPP metrics 393
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® • Lead periodic refreshers of the BIA, especially as enterprise organization conditions change • Work with internal and external audit and regulatory entities to ensure understanding and compliance with enterprise continuity planning policies and standards • Work with internal and external regulatory entities on follow-up of exceptions or deficiencies noted in past audits and help resolve issues • Periodically refresh the risk assessment to ensure awareness levels of the changing threat environment We started the chapter with the objective to provide the CISSP candidate with a step-by-step map on how to view, develop, implement, maintain, and measure an appropriate continuity planning process for his or her organization. We have provided a logical continuity planning methodology with the expectation that when finished studying this chapter, the CISSP candidate should have well-founded knowledge of the continuity planning process. We covered: Project initiation phase: This phase sets the tempo for each succeeding complementary phase. Clearly articulated or demonstrated management intentions and commitment will contribute to the success of later continuity planning phases. Current state assessment phase: This phase provides management with the practical information it must have to make informed decisions concerning business continuity planning activities. Design and development phase: This phase provides the continuity planning project team with the information needed to design an efficient and effective recovery. Implementation phase: In this phase, the CPPT formalizes and deploys all the plans, testing, maintenance, training, and measurement processes created in the development phase. Management phase: Finally, this phase focuses the CISSP candidate on those activities, tasks, and responsibilities required to successfully execute and manage the continuity planning process day to day. CISSP candidates must understand that continuity planning is truly a business process, rather than just an event or a plan to recover. It emphasizes the importance of time-critical business processes to the enterprise. Organizational survival is the primary rationale for planning, whether the organization is a business, government, educational, public, or private entity. Short- and long-term support of time-critical business (or organization) processes is the important factor. Avoidance of financial loss is an obvious need for planning. The possibility of customer service interruption or failure to fulfill the ethical and legal obligations of the organization to employees and shareholders also demonstrates the need for planning. Without effective planning, the organization is forced to attempt a 394
Business Continuity and Disaster Recovery Planning response to a disaster without an understanding of its recovery priorities, the time and resources needed to reestablish time-critical business processes, and sources of services and products needed during recovery. The delays caused by such lack of planning can be fiscally lethal, depending upon the structure and purpose of the organization. Once developed and implemented, the individual components of the continuity planning process must be tested. What is more important is that the people who will participate in the recovery of the organization must be trained and made aware of their roles and responsibilities. Failure to do this properly was probably one of the largest lessons learned from the September 11 attacks. Continuity planning is all about people. An appropriate measurement system is also crucial to success. Organizations must measure not only the financial metrics, but also how the continuity planning process adds value to the organization’s people, processes, technologies, and mission. These metrics must be both quantitative and qualitative. Terminology The following list of terms represent more than just a definitions list; it also contains some explanation as to the application for the term discussed. Business continuity planning (BCP): Typically refers to the process or business function of continuity planning activities, also known as business resumption planning. Business impact analysis (BIA): The process of identifying, compiling, documenting, and analyzing the business requirements for continuity. The BIA is the process by which organizations can quantify, qualify, and validate the impact of potential threats on economic and operational capabilities; determine recovery timeframe requirements of essential business functions and supporting applications and IT infrastructure; establish recovery priorities; and recommend recovery priorities and strategies. Contingency organization: Simply another name for the recovery team structure that has been formed and named in the various continuity plans. The concept of a contingency organization is that it collapses the normal state structure to a more streamlined reporting structure that is required to execute a rapid recovery following an event. Once the event is over, the organization reverts to its traditional organization chart and decision processes. Continuity planning (CP): The ultimate purpose of continuity planning is to reduce the impact of an adverse event. The term as it will be used within the context of this chapter encompasses all aspects of an enterprisewide continuity planning infrastructure, otherwise referred to as the continuity planning process. Continuity planning 395
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® is a process that includes all aspects of preparing continuity plans and ensuring that the continuity planning process is healthy and viable. It also acknowledges that the continuity planning process adds value to the people, processes, technologies, and mission of the enterprise. Crisis management plan (CMP): This plan focuses on developing an effective and efficient enterprisewide emergency/disaster response capability. This response capability includes forming appropriate management teams and training their members how to react to serious company emergency situations (i.e., hurricane, earthquake, flood, fire, serious hacker, or virus damage). CMP also encompasses response to life safety issues for personnel during a crisis or response to disaster. Defining disaster: It is impractical to attempt to anticipate all of the potential events that might be considered disasters. However, within the context of this chapter, a disaster is defined as any incident that results in the loss of support for time-critical business processes for longer than its predetermined recovery time objective (RTO). The RTO is normally determined during the business impact assessment (BIA) phase of the continuity planning development methodology. Disasters are commonly thought of as resulting from a fire, hurricane, earthquake, or flood — catastrophic acts of nature. This is not surprising, given the unprecedented number of disasters the world has suffered in the past decade. Given the tragic events of September 11, 2001, however, we know that disasters can be the result of manmade events. Events such as the introduction of a computer virus or worm into a computer system, a programming error, or a distributed denial-of-service attack (DDOS), for instance, are also considered disasters. Disasters can even occur as the result of a serious financial fluctuation, executive management disruption, or a simple power outage. Despite the widespread association of disasters with natural disasters, most continuity planning professionals have expanded the definition of disaster to include any event that disrupts business operations. Given the variability in causes of disaster, continuity planning specialists should not attempt to focus on specific types of disasters; rather, they should broaden their view to include any type of event that might disrupt time-critical organizational activity. Disaster recovery planning (DRP): This has traditionally been used to describe information systems or technology (IT)-related continuity efforts. DRP is sometimes referred to as IT continuity planning or, in some government terminologies, contingency planning. These technology-based continuity plans include attention to systems hardware, software (application and systems), and telecommunications infrastructure (both voice and data). Know, however, 396
Business Continuity and Disaster Recovery Planning that not all organizations abide by these naming conventions, so it is the wise CISSP who understands and adapts to cultural differences in terminology. Emergency response planning (ERP): CISSP candidates should also be familiar with emergency response planning. The purpose of emergency response plans is to aid the enterprise in rapid identification of events that could cause significant disruption, to describe procedures designed to mitigate the disruption in the first place, and to subsequently reduce the severity of the impact. ERPs are designed to document rapid response actions that reduce the impact of events, while the continuity plans detail actions that the enterprise will utilize following the event to recover capabilities. Information resource continuity plans: These are detailed plans describing the recovery team structure, identifying emergency operations locations, and listing activities and tasks associated with recovery of time-critical systems. Also included are the listings of the inventory information (i.e., data, software, hardware, people, communications, documentation, transportation, off-site facilities, etc.) that the recovery teams must successfully activate. Information resource continuity plans serve to identify the resources and specify actions required to minimize losses that might otherwise result from a business interruption — no matter the cause — and to ensure the timely and orderly restoration of IT functionality in support of the organization’s business activities. Recovery point objective (RPO): Focuses on how much data loss an organization can tolerate without significant financial or operational impact to the business. RPO is defined as the most recent point in time to which data must be synchronized without adversely affecting the organization (financial or operational impacts). The RPO must be reflected in the timeliness of the data stored off-site. Shorter RTOs and RPOs generally result in more complex, technological, and expensive recovery requirements. Recovery time objective (RTO): The period of time within which systems, applications, or functions must be recovered after an outage (e.g., one business day). An RTO is often used as the basis for the development of recovery strategies and as a determinant as to whether to implement the recovery strategies during a disaster situation. Time-critical application: An application that is essential to the organization’s ability to perform necessary business functions. Loss of the time-critical applications would have a negative impact on the business, as well as legal or regulatory impacts. Time-critical processes: Of interest is the difference between the terms critical and time critical when describing a timely reliance on critical processes. Experience has shown that time-critical definitions are translated into what will be viewed as the RTO for those processes 397
and resources requiring the shortest recovery windows, typically 8 to 72 hours. No fixed RTO should be assumed; the RTO instead depends upon a timely business impact assessment. The time criticality defined for each supporting resource is typically determined during the business impact assessment process.
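As a concrete, hedged illustration of the RPO and RTO definitions above, the following short Python sketch checks a recovery scenario against example objectives. The 24-hour RTO, the 4-hour RPO, and all timestamps are hypothetical values chosen for illustration, not recommendations; real targets come out of the business impact assessment.

```python
from datetime import datetime, timedelta

# Hypothetical targets for one time-critical process, as they might be
# set during a business impact assessment (not prescribed values).
RTO = timedelta(hours=24)   # function must be restored within one business day
RPO = timedelta(hours=4)    # at most four hours of data may be lost

def within_rto(outage_start: datetime, service_restored: datetime) -> bool:
    """True if the function was restored inside its recovery time objective."""
    return service_restored - outage_start <= RTO

def within_rpo(last_backup: datetime, outage_start: datetime) -> bool:
    """True if the most recent synchronized data point meets the RPO."""
    return outage_start - last_backup <= RPO

# Example scenario (hypothetical timestamps).
outage = datetime(2006, 10, 19, 9, 0)
restored = datetime(2006, 10, 19, 20, 30)
backup = datetime(2006, 10, 19, 6, 0)

print("RTO met:", within_rto(outage, restored))   # True: 11.5 h <= 24 h
print("RPO met:", within_rpo(backup, outage))     # True: 3 h <= 4 h
```

The same two comparisons are what drive strategy selection: shorter RTOs and RPOs push the organization toward more complex and more expensive recovery arrangements, as noted in the definitions above.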
References
1. U.S. Securities and Exchange Commission. Comments on Proposed Rule: Draft Interagency White Paper on Sound Practices to Strengthen the Resilience of the U.S. Financial System. [Release No. 34-46432; File No. S7-32-02]. http://www.sec.gov/rules/concept/s73202.shtml.
2. Herold, Rebecca. Addressing legislative compliance within business continuity plans, in Official (ISC)2® Guide to the CISSP® CBK®, Harold F. Tipton and Kevin Henry (Eds.), Boca Raton, FL: CRC Press, 2007.
3. Jackson, Carl B. The role of continuity planning in the enterprise risk management structure, in Information Security Management Handbook, Harold F. Tipton and Micki Krause (Eds.), Boca Raton, FL: Auerbach Publications, 2004.
4. National Interagency Incident Management System, http://www.fs.fed.us/fire/operations/niims.shtml.
5. Jackson, Carl B. The changing face of continuity planning, in Information Security Management Handbook, Harold F. Tipton and Micki Krause (Eds.), Boca Raton, FL: Auerbach Publications, 2003.
6. Hill, Gerard. The Complete Project Management Office Handbook, New York: Auerbach Publications, 2004.
7. Hiles, Andrew. Business Continuity: Best Practices, World-Class Business Continuity Management, 2nd ed., Brookfield, CT: Rothstein Associates, Inc., 2004.
8. Wallace, Michael and Webber, Lawrence. The Disaster Recovery Handbook: A Step-by-Step Plan to Ensure Business Continuity and Protect Vital Operations, Facilities, and Assets, New York: AMACOM, 2004.
9. Harvard Business Essentials. Crisis Management: Mastering the Skills to Prevent Disasters, Boston, MA: Harvard Business School Press, 2004.
10. Wold, Geoffrey. Business Continuity Preparedness, Austin, TX: ALEX eSolutions, Inc., 2005.
Sample Questions 1. Which of the following is considered the most important component of the enterprisewide continuity planning program? a. Business impact assessment b. Formalized continuity plans c. Executive management support d. Hotsite arrangements 2. During the threat analysis phase of the continuity planning methodology, which of the following threats should be addressed? a. Physical security b. Environmental security c. Information security d. All of the above 398
3. The major objective of the business impact assessment process is to:
   a. Prioritize time-critical business processes
   b. Determine the most appropriate recovery time objective for business processes
   c. Assist in prioritization of IT applications and networks
   d. All of the above
4. Continuity of IT technologies or IT network infrastructure capabilities is addressed in what type of continuity plan?
   a. Disaster recovery plans
   b. Emergency response/crisis management plans
   c. Business continuity plans
   d. Continuous availability plans
5. Crisis management planning focuses management attention on the following:
   a. Preplanning that will enable management to anticipate and react in the event of emergency
   b. Reacting to a natural disaster such as a hurricane or earthquake
   c. Anticipating adverse financial events
   d. IT systems’ restart and recovery activities
6. Performing benchmarking and peer review relative to enterprise continuity planning business processes is a valuable method to do all of the following except:
   a. Help identify leading business continuity planning processes and practices
   b. Allow realistic goal setting for action plans and agendas
   c. Provide a method for developing metrics and measures for the continuity planning process
   d. Compare continuity planning personnel salary levels
7. An effective continuity plan will contain all of the following types of information except for:
   a. Prioritized list of business processes or IT systems to be recovered
   b. The business impact assessment report
   c. Recovery team structures and assignments
   d. The primary and secondary location where backup and recovery activities will take place
8. All but one of the following are advantages of automating or utilizing continuity planning software:
   a. It standardizes training approaches.
   b. It provides a platform for management and audit oversight.
   c. It eases long-term continuity plan maintenance.
   d. It provides business partners with an enterprisewide view of the continuity planning infrastructure.
9. Which is the least important reason for developing business continuity and disaster recovery plans?
   a. Disasters really do occur
   b. Budgeting IT expenditures
   c. Good business practice and standard of due care
   d. Legal or regulatory compliance
10. When conducting the business impact assessment, business processes are examined relative to all but one of the following criteria:
    a. Customer interruption impacts
    b. Embarrassment or loss of confidence impacts
    c. Executive management disruption impacts
    d. Revenue loss potential impacts
11. The primary purpose of formalized continuity planning test plans is to accomplish all except:
    a. Define test scope and objectives
    b. Define test timeframes
    c. Define test costs
    d. Define the test script
12. The primary reason for conducting continuity planning tests is to:
    a. Provide employees’ families with a method for contacting management
    b. Ensure that continuity plans are current and viable
    c. Prepare third parties to react to an emergency within the enterprise
    d. Identify which employees can go home following a disaster
13. During development of alternative recovery strategies, all of the following activities should be performed except:
    a. Use the prioritized business process maps developed during the BIA to map time-critical supporting resources
    b. Develop short- and long-term testing and maintenance strategies
    c. Prepare cost estimates for acquisition of continuity support resources
    d. Provide executive management with recommendations on acquiring appropriate continuity resources
14. The primary phases of the enterprise continuity planning implementation methodology include all of the following except:
    a. Current state assessment phase
    b. Execution phase
    c. Design and development phase
    d. Management phase
15. Which of the following statements most appropriately describes the timeliness of processes and supporting resources prioritization and recovery?
    a. The processes are mission critical
    b. The processes are critical
    c. The processes are time critical
    d. All of the above
Appendix A: Addressing Legislative Compliance within Business Continuity Plans Rebecca Herold, CISSP
When creating business continuity plans, one topic that is often overlooked, or perhaps skipped over because it is viewed as opening another can of worms you just do not want to deal with, is the issue of addressing legal and legislative compliance requirements. There are dozens of laws and legislative requirements in place today that include directives for security and privacy. Additionally, you may face legal jeopardy as a result of your backup and storage practices. This appendix will briefly touch on a few of the major legislative issues to give you a high-level awareness of each, and to guide you to issues that are most likely to impact your own organization. Additionally, it provides a listing of other related legal and legislative issues that you need to think about and research.
HIPAA
The Health Insurance Portability and Accountability Act (HIPAA) of 1996 was passed to provide insurance portability, fraud enforcement, and administrative simplification for the healthcare industry. The act touches upon BCP in several sections, including the need to destroy documents following legal proceedings, and similar directives. However, HIPAA provides some explicit requirements for document retention and BCP. The most significant BCP requirements are provided in §164.530, “Administrative Requirements: Policies and Procedures.” Covered entities must have policies and procedures in place to preserve documentation. These policies and procedures must ensure:
• Documented policies and procedures are maintained and readily available to appropriate authorized individuals in written or electronic form
• Any communications that are required in writing must be maintained in the original form or as an electronic copy as documentation
• All actions, activities, or designations required by the regulation to be documented must be maintained as a written or electronic record of such actions, activities, or designations
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® Additionally, covered entities must retain the documentation listed above for at least six years from the date of creation or the date when it was last updated or in effect, whichever is later. GLB The Gramm–Leach–Bliley (GLB) Act was passed in 1999. Title V of GLB contains privacy provisions relating to consumers’ financial information. In general, these provisions require financial institutions to have restrictions on when they may disclose a consumer’s personal financial information to nonaffiliated third parties, in addition to providing their customers with documentation outlining the financial institution’s information collection and sharing practices. Within Title V are some brief, but significant, directives that will impact a financial institution’s BCP. Section 6801(b)(2) of GLB requires covered entities to provide protection for customer information against any anticipated threats or hazards, and to take appropriate actions to ensure the security of customer records and preserve the record integrity. Such protections must include policies, procedures, and measures to protect against destruction, damage, and unauthorized modification or loss of customer information as a result of conceivable environmental hazards (for example, fire or water damage) or technological failures. Financial organizations must review their business continuity plans to ensure they address these issues. Additionally, covered organizations must implement adequate and appropriate policies and procedures to ensure that customer information files are routinely backed up and stored at secure and off-site facilities that are a reasonable distance from the data center so that one incident cannot affect both facilities and destroy data files in both locations. Patriot Act The Uniting and Strengthening America by Providing Appropriate Tools Required to Intercept and Obstruct Terrorism Act of 2001, otherwise known as the USAPA and Patriot Act, was signed into law on October 26, 2001. The Patriot Act makes it much easier for the government to obtain information from organizations through several requirements and changes to past laws, such as those governing wiretaps and obtaining subpoenas. At a high level, the sections of the regulation that will impact organizational business continuity plans include: Section 209: Allows the government to obtain access to stored voice mail messages with a simple search warrant. Section 210: Expands the types of information that the government can subpoena from companies. Such information includes logs regarding user sessions, temporary network addresses, and session connection durations. Additionally, the government can request informa402
Business Continuity and Disaster Recovery Planning tion about a customer’s method of payment for his Internet service, including bank accounts, credit card information, or other means of payment for the service. Section 211: Cable companies can now provide customer information to the government without notifying the customer, emphasizing the importance of a company’s documentation and retention procedures. Section 212: Requires emergency disclosure of electronic communications if the government determines there is a reasonable threat. Such information must be provided to the government “without delay,” emphasizing the importance for companies to keep comprehensive records, complete with date, time, and similar log information. Section 214: Increases the type of information that can be retrieved via pen registers and trap-and-trace devices. Companies need to be familiar with the legal consequences of such data-collecting methods. Section 215: Increases the types of information that can be subpoenaed by the government for investigating possible terrorist activities. The information includes all tangible items, such as papers, books, manuals, records, hard drives, diskettes, CDs, tapes, any type of computer peripheral used to store data, and any other type of item that can store information. This will require companies to keep comprehensive and accurate inventories of their storage equipment, along with directories of the information on each. Section 216: Specifies that the government can do information gathering of any type, include Internet traffic. As part of this requirement, company personnel may be required to assist the government in its investigation. This directive may very well increase the government’s requests for Internet traffic monitoring. Section 217: Protects the government from lawsuits resulting from what can be determined as warrantless searches. This also defines a computer trespasser as anyone who obtains unauthorized access of a “protected computer.” The government has the ability to intercept electronic communications of computer trespassers without a warrant. Section 218: This section is significant for companies that provide employees with Internet access. The company will be expected to provide logs of electronic communications if the government suspects that the information may be involved in some sort of foreign terrorist activities. This regulation is likely to increase the frequency of the government’s requests for such electronic information. Sections 219 and 220: These sections provide that a warrant issued in one jurisdictional area will be applicable to any other area where U.S. law is enforced, negating previous requirements that the government obtain warrants for separate jurisdictional areas in which terrorism is suspected. Because of the increased ease by which the government can obtain warrants and collect information, companies 403
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® can expect a likely increase in requests for information, personnel, or facilities from the government. Section 222: This makes it clear that companies cannot be forced to purchase additional equipment or other resources to comply with the Patriot Act requirements. So, the bulk of the work for companies will be in updating their policies, procedures, and business continuity plans. Of course, you need to be aware of all the sections of the Patriot Act. However, the focus of the previous list was the sections most closely related to BCP. The Patriot Act creates expectations for companies to be thorough and comprehensive in their record-keeping procedures and to be able to provide the government information at any time at its request. Other Issues The previous sections outlined just three of the many laws and regulations that may very well require your organization to take specific BCP actions and make updates. You need someone to identify the other laws and regulations with which your company must comply. The following lists just some of the other laws and regulations you need to review closely, and will help you get started on your important research: • European Union Data Protection Directive • Canada’s Personal Information Protection and Electronic Documents Act • Children’s Online Privacy Protection Act (COPPA) • Electronic Communications Privacy Act (ECPA) • Electronic Signatures in Global and National Commerce Act (E-SIGN) • 21 CFR Part 11 • Software license infringement • Copyright infringement • Civil liability issues • Duty • Breach of duty • Damage • Proximate cause • Negligence • Other legislation • Other laws (federal, state, multinational) OCC Banking Circular 177. Originally issued by the U.S. Treasury Office of the Comptroller of the Currency in the 1980s, BC 177 addresses the need for appropriate continuity planning for federally regulated financial institutions. BC 177 has been adopted by the Federal Financial Institutions Examination Council (FFIEC). The FFIEC 404
is a formal interagency body empowered to prescribe uniform principles, standards, and report forms for the federal examination of financial institutions by the Board of Governors of the Federal Reserve System (FRB), the Federal Deposit Insurance Corporation (FDIC), the National Credit Union Administration (NCUA), the Office of the Comptroller of the Currency (OCC), and the Office of Thrift Supervision (OTS) and to make recommendations to promote uniformity in the supervision of financial institutions.
Domain 7
Telecommunications and Network Security
Alec Bass, CISSP, and Peter Berlich, CISSP-ISSMP
Introduction The telecommunications and network security domain encompasses the structures, transmission methods, transport formats, and security measures used to provide integrity, availability, authentication, and confidentiality for transmissions over private and public communications networks and media. Network security is often described as the cornerstone of IT security. The network is a central asset, if not the most central, in most IT environments. Loss of network connectivity on any level can have devastating consequences, while control of the network provides an easy and consistent venue of attack. Conversely, a well-architected, -protected, and -guarded network provides powerful protection and will stop many attacks in their tracks. Hitherto, most attention has been paid to perimeter defense through firewalls and similar tools. As disappearance of network boundaries becomes a business requirement facilitated through hastened introduction of new technologies and a constant struggle between ease of use and security, it is widely recognized that the inside of a network must be as resilient as its perimeter, that tools alone are ineffective if not combined with proper process, and that the availability of a network is often its key value in business terms. In this chapter, we are going to focus heavily on the Open System Interconnect (OSI) model as a point of reference and Transmission Control Protocol/Internet Protocol (TCP/IP) as the most commonly used protocol stack. We will touch upon other protocol stacks as needed.1 Excellent books and Internet resources exist to learn the basics of networking, and we are only going to cover the basic network concepts insofar as they are required for the self-sufficiency of this book and useful for obtaining an understanding of network security concepts. 407
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® It is not possible to give a complete and comprehensive overview of all possible attack scenarios. For the purposes of this chapter, we will focus on the most important security risks and those that will be instructive to the readers to gain an understanding of network security concepts and enable them to enhance their understanding and gain in-depth knowledge in self-study. CISSP® Expectations The professional should fully understand the seven-layer OSI reference model. Related terms include WWW, nonce, TACACS+, Kerberos, worm, radius, Domain Name Server (DNS), firewalls, proxy, and WEP. Additionally, the professional should fully understand: • Communications and network security as it relates to voice, data, multimedia, and facsimile transmissions in terms of local area, wide area, and remote access • Internet/intranet/extranet in terms of firewalls, routers, switches, gateways, and various protocols • Communications security management and techniques to prevent, detect, and correct errors so that integrity, availability, and confidentiality of transactions over networks may be maintained • Security boundaries and how to translate security policy to controls • How to detect intrusions and collect and preserve evidence • Methods of attack (worms, flooding, eavesdropping, sniffers, spamming, war driving) Basic Concepts Network Models Network communication is usually described in terms of layers. Several layering models exist; the most commonly used are: • OSI reference model, structured into seven layers (physical layer, data-link layer, network layer, transport layer, session layer, presentation layer, application layer) • TCP/IP or Department of Defense (DoD) model (not to be confused with the TCP/IP protocols), structured into four layers (link layer, network layer, transport layer, application layer) One feature that is common to both models and highly relevant from a security perspective is encapsulation. This means that not only do the different layers operate independently from each other, but they are also isolated on a technical level. Short of technical failures, the contents of any lower- or higher-layer protocol are inaccessible from any particular layer.2 Without restricting the generality of the foregoing, we are going to use the OSI model as a general point of reference herein. 408
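Encapsulation can be illustrated with a small, purely conceptual Python sketch: each layer prepends its own header to an opaque payload received from the layer above, which is why the contents of one layer are not visible from another. The header strings and field names below are invented for illustration only and are not real protocol formats.

```python
# Conceptual encapsulation demo: each "layer" prepends its own header to an
# opaque payload received from the layer above. Header strings are invented
# for illustration only; they are not real protocol headers.

def application_layer(data: str) -> bytes:
    return data.encode("utf-8")

def transport_layer(segment: bytes, dst_port: int) -> bytes:
    return f"TRANSPORT dport={dst_port}|".encode() + segment

def network_layer(packet: bytes, dst_addr: str) -> bytes:
    return f"NETWORK dst={dst_addr}|".encode() + packet

def data_link_layer(frame: bytes, dst_mac: str) -> bytes:
    return f"LINK dst={dst_mac}|".encode() + frame

# Moving "down the stack": each layer only ever sees an opaque payload.
payload = application_layer("GET / HTTP/1.1")
segment = transport_layer(payload, dst_port=80)
packet = network_layer(segment, dst_addr="192.0.2.10")
frame = data_link_layer(packet, dst_mac="00:11:22:33:44:55")

print(frame)
# b'LINK dst=00:11:22:33:44:55|NETWORK dst=192.0.2.10|TRANSPORT dport=80|GET / HTTP/1.1'
```

The receiving host reverses the process, with each layer removing only its own header before handing the remainder up the stack; neither side needs to interpret another layer's fields.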
OSI Reference Model. The seven-layer3 OSI (Open System Interconnect)
model was defined in 1984 and published as an international standard (ISO/IEC 7498-1). The last revision to this standard was in 1994.4 Although sometimes considered complex, it has provided a practical and widely accepted way to describe and engineer networking. In practice, some layers have proven to be less crucial to the concept (such as the presentation layer), while others (such as the network layer) have required more specific structure, and applications overlapping and transgressing layer boundaries exist.
• Layer 1, the physical layer, describes the networking hardware, such as network interfaces and cabling, and the electrical signals, bits, and bytes they carry.
• Layer 2, the data-link layer, describes data transfer between machines, for instance, by an Ethernet.
• Layer 3, the network layer, describes data transfer between networks, for instance, by the Internet Protocol (IP).
• Layer 4, the transport layer, describes data transfer between applications, flow control, and error detection and correction, for instance, by TCP/User Datagram Protocol (UDP).
• Layer 5, the session layer, describes the handshake between applications, for instance, authentication processes.
• Layer 6, the presentation layer, describes the presentation of information, such as ASCII syntax.
• Layer 7, the application layer, describes the structure, interpretation, and handling of information. In security terms, it is relevant because it relies on all underlying layers. From the point of view of the (ISC)2 Common Body of Knowledge, the application layer is covered in the “Operations” section.
Each layer is defined so that it interacts with its corresponding layer on the remote host without concern for how the remote host’s layer is implemented. In addition, each layer processes messages in a modular fashion, without concern for how the other layers on the same host process the message. For example, the layer that interacts directly with applications (layer 7) can communicate with its remote peer without knowing how the data is routed over the network (layer 3) or the hardware that is required (layers 1 and 2).
When an application transmits data over a network, the data enters the top layer and moves to each successive lower level (moving down the stack) until it is transmitted over the network at layer 1. The remote host receives the data at layer 1 and moves to successive higher layers (moves up the stack) until it reaches layer 7 and then to the host’s application.
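For readers who want to see this stack traversal on a real frame, the third-party scapy library (an assumption of this sketch, not a tool required by the CBK; the addresses shown are documentation/example values) builds a frame layer by layer in the same order, each header wrapping the payload handed down from above.

```python
# Requires the third-party scapy package (pip install scapy); addresses are
# example values. Building the frame mirrors moving down the stack: link,
# network, and transport headers wrap the application payload.
from scapy.all import Ether, IP, TCP, Raw

frame = (
    Ether(dst="00:11:22:33:44:55")           # layer 2: data-link header
    / IP(dst="192.0.2.10")                   # layer 3: network header
    / TCP(dport=80, flags="S")               # layer 4: transport header
    / Raw(load=b"application payload")       # layers 5-7: opaque payload
)

frame.show()          # prints each layer and its fields, outermost first
print(len(bytes(frame)), "bytes on the wire")
```

Calling frame.show() lists the layers outermost first, which mirrors the way the receiving host peels the headers off while moving up the stack.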
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® Layer 1: Physical Layer. At the physical layer, bits from the data-link layer are converted into electrical signals and transmitted on a physical circuit. Physical topologies are defined at this layer. Because the required signals depend on the transmitting media (e.g., required modem signals are not the same as ones for an Ethernet network interface card), the signals are generated at the physical layer.
Not all hardware consists of layer 1 devices. Even though many types of hardware, such as cables, connectors, and modems operate at the physical layer, some operate at different layers. Routers and switches, for example, operate at the network and data-link layers, respectively. Layer 2: Data-Link Layer. The data-link layer prepares the packet that it receives from the network layer to be transmitted as frames on the network. This layer ensures that the information that it exchanges with its peers is error-free. If the data-link layer detects an error in a frame, it will request that its peer resend that frame.
The data-link layer converts information from the higher layers into bits in the format that is expected for each networking technology, such as Ethernet, Token Ring, etc. Using hardware addresses, this layer transmits frames to devices that are physically connected only. As an analogy, consider the path between the end nodes on the network as a chain, and each link as a device in the path. The data-link layer is concerned with sending frames to the next link. The Institute of Electrical and Electronics Engineers (IEEE) data-link layer is divided into two sublayers: Logical link control (LLC): Manages connections between two peers. It provides error and flow control and control bit sequencing. Media access control (MAC): Transmits and receives frames between peers. Logical topologies and hardware addresses are defined at this sublayer. An Ethernet’s 48-bit hardware address is often called a MAC address as a reference to the name of the sublayer. Link layer encryption can be implemented at this layer. Be forewarned that link layer encryption only protects the information between two connected devices. To encrypt information between end nodes using link layer encryption, the information must be decrypted and reencrypted at each device along the path. The quality of the encryption of many of the devices may be out of the control of the owners of the two end nodes. Layer 3: Network Layer. It is important to clearly distinguish the function of the network and data-link layers. The network layer moves information between two hosts that are not physically connected. On the other hand, the data-link layer is concerned with moving data to the next physically connected device. Also, whereas the data-link layer relies on hardware 410
Telecommunications and Network Security addressing, the network layer uses logical addressing that is created when hosts are configured. Internet Protocol (IP) from the TCP/IP suite is the most important network layer protocol. IP has two functions: Addressing: IP uses the destination IP address to transmit packets through networks until the packets’ destination is reached. Fragmentation: IP will subdivide a packet if its size is greater than the maximum size allowed on a local network. IP is a connectionless protocol that does not guarantee error-free delivery. Layer 3 devices, such as routers, read the destination layer 3 address (e.g., destination IP address) in received packets and use their routing table to determine the next device on the network (the next hop) to send the packet. If the destination address is not on a network that is directly connected to the router, it will send the packet to another router. Routing tables are built either statically or dynamically. Static routing tables are configured manually and change only when updated by a human. Dynamic routing tables are built automatically as routers periodically share information that reflect their view of the network, which changes as routers go on- and offline, traffic congestion develops, etc. This allows the routers to effectively route packets as network conditions change. The Certified Information Systems Security Professional (CISSP) candidate is not required to be familiar with the details of dynamic routing. However, dynamic routing protocols include: • Routing Information Protocol (RIP) versions 1 and 2 • Open Shortest Path First (OSPF) • Border Gateway Protocol (BGP) Internet Control Message Protocol (ICMP) is a network layer protocol that is used for diagnostics and error correction. As we will see, this humble protocol has caused more than its share of heartaches by malicious users. With Internet Protocol Security (IPSec), the network layer can provide end-to-end encryption. Layer 4: Transport Layer. The transport layer creates an end-to-end transport between peer hosts. User Datagram Protocol (UDP) and Transmission Control Protocol (TCP) are important transport layer protocols in the TCP/IP suite. UDP does not ensure that transmissions are received without errors, and therefore is classified as a connectionless unreliable protocol. This does not mean that UDP is poorly designed. Rather, the application will perform the error checking, instead of the protocol.
Connection-oriented reliable protocols, such as TCP, ensure integrity by providing error-free transmission. They divide information from multiple 411
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® applications on the same host into segments to be transmitted on a network. Because it is not guaranteed that the peer transport layer receives segments in the order that they were sent, reliable protocols reassemble received segments into the correct order. When the peer layer receives a segment, it responds with an acknowledgment. If an acknowledgment is not received, the segment is retransmitted. Lastly, reliable protocols ensure that each host does not receive more data than it can process without loss of data. Layer 5: Session Layer. This layer provides a logical persistent connection between peer hosts. A session is analogous to a conversation that is necessary for applications to exchange information. The session layer is responsible for creating, maintaining, and tearing down the session.
Three modes are offered: (Full) duplex: Both hosts can exchange information simultaneously, independent of each other. Half duplex: Hosts can exchange information, but only one host at a time. Simplex: Only one host can send information to its peer. Information travels in one direction only. Session layer protocols include Network File System (NFS), Structured Query Language (SQL), and Remote Procedure Call. Layer 6: Presentation Layer. The applications that are communicating over a network may represent information differently, such as using incompatible character sets. This layer provides services to ensure that the peer applications use a common format to represent data. For example, if a presentation layer wants to ensure that Unicode-encoded data can be read by an application that understands the ASCII character set only, it could translate the data from Unicode to a standard format. The peer presentation layer could translate the data from the standard format into the ASCII character set.
This layer also provides services for encryption and compression of network data. However, other layers or the application often perform these services, instead of the presentation layer. Layer 7: Application Layer. This layer is the application’s portal to network-based services, such as determining the identity and availability of remote applications. When an application or the operating system transmits or receives data over a network, it uses the services from this layer.
Many well-known protocols, such as Hypertext Transfer Protocol (HTTP), File Transfer Protocol (FTP), and Simple Mail Transfer Protocol (SMTP), operate at this layer. 412
Telecommunications and Network Security It is important to remember that the application layer is not the application, especially when an application has the same name as a layer 7 protocol. For example, the FTP command on many operating systems initiates an application called FTP, which eventually uses the FTP protocol to transfer files between hosts. TCP/IP Model. The U.S. Department of Defense developed the TCP/IP model, which is very similar to the OSI model, but with fewer layers.
• The link layer provides physical communication and routing within a network. It corresponds to everything required to implement an Ethernet. It is sometimes described as two layers, a physical layer and a link layer. In terms of the OSI model, it covers layers 1 and 2. • The network layer includes everything that is required to move data between networks. It corresponds to the IP protocol, but also Internet Control Message Protocol (ICMP) and Internet Group Management Protocol (IGMP). In terms of the OSI model, it corresponds to layer 3. • The transport layer includes everything required to move data between applications. It corresponds to TCP and UDP. In terms of the OSI model, it corresponds to layer 4. • The application layer covers everything specific to a session or application, in other words, everything relating to the data payload. In terms of the OSI model, it corresponds to layers 5 through 7. Owing to its coarse structure, it is not well suited to describe application-level information exchange. As with the OSI model, data that is transmitted on the network enters the top of the stack, and each of the layers, with the exception of the physical layer, encapsulates information for its peer at the beginning and sometimes the end of the message that it receives from the next highest layer. On the remote host, each layer removes the information that its peer encapsulated before the remote layer passes the message to the next higher layer. Also, each layer processes messages in a modular fashion, without concern for how the other layers on the same host process the message. Layer 1: Link Layer. The data-link layer prepares the packets that it receives from the network layer to be transmitted as frames on the network. This layer ensures that the information that it exchanges with its peers is error-free. If the data-link layer detects an error in a frame, it will request that its peer resend that frame.
The IEEE data-link layer is divided into two sublayers: Logical link control (LLC): Manages connections between two peers. It provides error-flow control and control bit sequencing. 413
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® Media access control (MAC): Transmits and receives frames between peers. Logical topologies and hardware addresses are defined at this sublayer. An Ethernet’s 48-bit hardware address is often called a MAC address as a reference to the name of the sublayer. Layer 2: Network Layer. Like the OSI model’s network layer, this layer
transmits packets between two end-node hosts. Layer 3 devices, such as routers, examine the destination IP address of incoming packets and use routing tables to determine to which host to forward the packet. Because IP is an unreliable protocol, no error checking is performed. Internet Control Message Protocol (ICMP) is a network layer protocol that is used for diagnostics and error correction. Multicasts are sent as identical streams to multiple hosts in a multicast group simultaneously. Typically, multicasts are used by applications that require much bandwidth, such as videoconferencing. Hosts use the Internet Group Management Protocol (IGMP) to join and manage their membership in multicast groups. Layer 3: Transport Layer. The transport layer constructs, maintains, and tears down a transport between peer hosts. As with the OSI model, the transport layer supports two types of transports: reliable connection oriented and unreliable connectionless. Reliable connection-oriented transports are implemented with the TCP, which transmits streams and ensures that the peer layer receives all of the data and retransmits any data that the peer does not acknowledge. Because segments of the stream may arrive at the remote host in any order, the peer transport layer is responsible for reordering the segments into the correct order. TCP uses flow control so that hosts do not receive more data than they can process without data loss.
UDP is used for connectionless, unreliable transports. The transport layer transmits datagrams without error checking; UDP assumes that the application itself will perform whatever error detection and recovery it needs. Layer 4: Application Layer. This layer performs the same functions as the OSI model's application, presentation, and session layers. In addition, the application is included in this layer. The application layer converts the data into a format that can be processed by its peer layer and sends it to the transport layer.
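The difference between the two transports can be illustrated through the operating system's socket interface. The following minimal sketch uses Python's standard socket module; the host names, addresses, and ports are placeholders, not values from the text.

# A minimal sketch contrasting the two transports (Python standard library).
import socket

# Reliable, connection-oriented transport: TCP (SOCK_STREAM)
tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tcp.connect(("example.org", 80))         # handshake first, then an ordered byte stream
tcp.sendall(b"GET / HTTP/1.0\r\nHost: example.org\r\n\r\n")
reply = tcp.recv(4096)                   # delivery and ordering handled by TCP
tcp.close()

# Unreliable, connectionless transport: UDP (SOCK_DGRAM)
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp.sendto(b"status?", ("192.0.2.50", 5000))   # a single datagram, no handshake
udp.settimeout(2.0)                            # the application must handle loss itself
try:
    data, addr = udp.recvfrom(4096)
except socket.timeout:
    data = None                                # the datagram or its reply may simply be lost
udp.close()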
Network Security Architecture The Role of the Network in IT Security. The Network as a Target of Attack. Attacks can be directed at the network itself, i.e., at the network's availability or one of its other services — especially its security services. Even so, the network itself is usually not the final goal of such an attack. Normally, an
attacker will be focused on any IT service that the network is an agent of; therefore, crippling, controlling, or simply using a network will be an intermediate step in his or her strategy. Common attacks falling in this category would include denial-of-service attacks, attempts to breach a firewall, or attempts to breach a router. The Network as an Enabler or Channel of Attack. We have to distinguish between two subtly different situations here: where an attacker uses certain network characteristics to support his attack, for instance, by intelligence gathering, and where an attack is borne across the network. We will focus on the second one, as it is the main concern in IT security.
Once an attacker avails himself of the use of a network — not necessarily at the cost of a full loss of control on the defender's side — it can be used as a channel. Use of a network is not necessarily based on a breach of the network itself; for instance, in the case of a virus infection, the breach may have occurred on a user's laptop connected to the Internet. Although it is true that in such a case a deficiency in the network's architecture was exploited, its own infrastructure and the security services that were designed into it have not technically been breached. Incidentally, the use of networks as a channel for an attack could be considered the security situation occurring most frequently, considering that the Internet infrastructure as a whole has become an enabler for certain types of attacks (zero-day exploits would be impossible without it), while as far as is known, no attacker has yet managed to take control of any significant portion of it. The Network as a Bastion of Defense. As described in the two previous sections, the network is a key, if not the most valuable, strategic asset in IT security.5 It is therefore paramount to implement strong and coherent network security architecture across an entire organization.
The security controls deployed in a network are fairly conventional: what sets a network apart is its paramount importance as an asset, not the way in which its IT security is managed. As described elsewhere (see the information security and risk management chapter, Domain 1, and the operations security chapter, Domain 9), such measures will typically be built around a complete security quality cycle of social, organizational, procedural, and technical activities. Measures will be based on the organization's security policy and typically include configuration and change management, monitoring and log reviews, vulnerability and compliance testing and scanning (including detection scans on the network), security reviews and audits, backup and
recovery, as well as awareness and training measures. They need to be balanced, appropriate, and affordable to the organization and commensurate with its business objectives and level of risk. Key concepts include:
• Definition of security domains: This could be defined by level of risk or by organizational control. A prime example is the tendency of decentralized organizations to manage their IT — and thereby also their network security — locally, and to different degrees of success.6
• Segregation of security domains, control of traffic flows according to risk/benefit assessment, and taking into account formal models, such as the Bell–La Padula model, the Biba integrity model, or the Clark–Wilson model (see the security architecture and design chapter, Domain 5).
• Incident response capability (see the legal, regulations, compliance, and investigations chapter, Domain 10), including but not limited to an inventory of business-critical traffic (to be allowed if possible; this could, for instance, be e-mail or Lotus Notes traffic, but also DNS), noncritical traffic (such as HTTP or FTP), a way to quickly contain breaches (for instance, by shutting off parts of the network or blocking certain types of traffic), and a process for managing the reaction.
• Protecting network assets, such as firewalls and routers, in all ways as would be normal for an IT system (reference chapter Operations Security), but likely in a more stringent manner and to a far higher degree of security.
• Contingency or network backup in case of network overload or failure.
Network Security Objectives and Attack Modes. Although security objectives are specific to each business, we can distinguish a number of key themes, which will be prioritized by each business differently, but often the order in which they are listed here is the one chosen.
A number of secondary objectives, such as interoperability and, in particular, ease of use, are undercurrents to these themes. A user expects the network (in particular, network security) to be fully transparent and will not easily accept restrictions.7 Conversely, it is a common perception that network security makes all other security measures unnecessary, i.e., that firewalls alone will protect a corporation. It is intuitively clear that this is a flawed perception; in effect, a working perimeter defense, while reducing the overall number of successful attacks, will also shift the balance toward insider attacks.8 Access Control. The network, being a primary entry point into the corporation, provides access control in both directions — from the outside in and from the inside out.9
• Attacks against outside-in access controls can be executed by stealing credentials and circumventing controls (e.g., by looking for open network ports or modems).
• Another such method is breaking into a server in a demilitarized zone (DMZ) and using it as a stepping stone, either inbound or outbound. Proper countermeasures include the usual hardening, configuration, and monitoring measures, as well as proper firewall configuration that under no circumstances allows this server any type of inbound connection.
• The proverbial laptop catching a virus infection on the Internet and spreading it on a corporate network is another way of circumventing inbound controls. The infection can happen because the corporate network places transitive trust on the laptop to remain protected while it is not connected to the network. This implicit (and possibly undocumented) bidirectional trust relationship between network and client violates the principle of multilateral security10 in favor of an (arguably) simplified and cheaper security model.11
• Attacks against inside-out access controls include protocol tunneling12 (carrying a lower-layer network service through a higher layer) and sidestepping the restriction through use of unauthorized gateways (or unauthorized use of authorized gateway components), such as modems or wireless cards. Interestingly, this method can also be used to enable inbound connections, if the client is able to poll for waiting inbound connections.13
Conversely, not only does a user have to authenticate to the network, but ideally the network should also authenticate itself to the user to prevent, for instance, certain types of man-in-the-middle attacks.14 It is instructive to note that none of the aforementioned methods involves hacking, i.e., an actual breach of a perimeter gateway. They are all based on workarounds and creative use of existing protocols and components. Availability. In corporate networks, availability of the service is commonly
the key business requirement. Conversely, and for this very reason, network availability has also become a prime target for attackers and a key business risk. DENIAL-OF-SERVICE ATTACK. The easiest attack to carry out against a network, or so it may seem, is to overload it through excessive traffic. Although this may appear a simple concept, it is not: the cost for the attacker would normally be exactly the same as for the target, which would make this type of attack very difficult to carry out.
Countermeasures would include a redundant network, load balancing, reserved bandwidth (quality of service, which would at least protect systems not directly targeted), and blocking traffic from an attacker on a firewall
or upstream router. A target can also shift its IP address or DNS name to sidestep the attack.15 It is instructive to note that many protocols contain basic protection from message loss that would at least mitigate the effects of denial-of-service attacks. This starts with the TCP managing packet loss within certain limits, and ends with higher-level protocols, such as SMTP, that will provide robustness against temporary connection outages (store and forward). Fortunately for the attacker, there are a number of ways to execute denial-of-service attacks while minimizing his own cost:
• Distributed denial-of-service attack: Using a network of remote-controlled hosts (typically workstations that have been backdoored or time-triggered through a specific virus infection), the target is subjected to traffic from a wide range of sources that are very hard to block. The downside of this type of attack to both the attacker and network service provider is that the attack may already throttle upstream network channels, taking out more than just its intended target.
• Distributed denial-of-service attacks have been used as a means of extortion,16 but also as a political instrument (referred to by their initiators as online demonstration), where activists would instigate users to frequently reload a certain Web site at a certain point in time.17
• Countermeasures are similar to those of conventional denial-of-service attacks, but simple IP or port filtering might not work.
• Utilizing weaknesses in the design of TCP/IP to clog the network stack on the target system instead of the network itself, for instance, through SYN flood attacks. Such attacks can be executed over minimal bandwidth and can be hard to trace by their very nature (no complete transaction channel is ever established, and consequently, no logging occurs).
• Countermeasures include protecting the operating system through securing its network stack. This is not normally something the user or owner of a system has any degree of control over; it is a task for the vendor.
Finally, the network needs to be included in a corporation's disaster recovery and business contingency plans. For local area networks, one may set high recovery objectives and provide appropriate contingency, based upon the fact that any recovery of services is likely to be useless without at least a working local area network (LAN) infrastructure. As wide area networks are usually outsourced, contingency measures might include acquisition of backup lines from a different provider, procurement of telephone or Integrated Services Digital Network (ISDN) lines, etc. (see business continuity and disaster recovery chapter, Domain 6).
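As a simple illustration of the traffic-blocking countermeasure mentioned above, a monitoring point might count packets per source address over a short window and flag sources that exceed a threshold before blocking them at a firewall or upstream router. The sketch below (Python) is illustrative only; the threshold and addresses are invented.

# Illustrative only: flag source addresses whose packet rate exceeds a threshold.
# The threshold and the sample data are arbitrary placeholder values.
from collections import Counter

THRESHOLD = 1000          # packets per observation window (assumed value)

def suspicious_sources(observed_packets):
    """observed_packets: iterable of source IP strings seen in one window."""
    counts = Counter(observed_packets)
    return {ip for ip, n in counts.items() if n > THRESHOLD}

window = ["203.0.113.7"] * 5000 + ["198.51.100.20"] * 40
print(suspicious_sources(window))   # {'203.0.113.7'}

As the text notes, this kind of filtering is far less effective against distributed attacks, where no single source stands out.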
Confidentiality. The network, as the carrier of almost all digital information within a corporation, provides an attractive target to bypass access control measures on IT systems and access information while it is in transit. Among the information that can be acquired is not just the payload information, but also credentials, such as passwords. Conversely, an attacker might not even be interested in the information transmitted, but simply in the fact that communication has occurred. EAVESDROPPING (SNIFFING). To access information from the network, an attacker must have access to the network itself in the first place (see "Methodology of an Attack" section). An eavesdropping computer can be a legitimate client to the network or an unauthorized one. It is not necessary for the eavesdropper to become a part of the network (for instance, having an IP address); it is often far more advantageous for an attacker to remain invisible (and unaddressable) on the network. This is particularly easy in wireless LANs, where no physical connection is necessary.19
Countermeasures to eavesdropping include encryption of network traffic on a network or application level, traffic padding to prevent identification of times when communication happens, and rerouting of information to anonymize its origins and potentially split different parts of a message. Integrity. A network needs to support the integrity of its traffic. In many ways, the provisions taken for protection against interception, to protect confidentiality, will also protect the integrity of a message.
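As an illustration of such protection at the application level, a sender and receiver that share a secret key can attach a keyed digest (HMAC) to each message so that modification in transit becomes detectable. The sketch below uses Python's standard hmac and hashlib modules; the key and message are placeholders, and a real deployment would normally also encrypt the payload to protect confidentiality.

# A minimal sketch of protecting message integrity with a keyed digest (HMAC-SHA256).
# The shared key and message are placeholders; they would be agreed out of band.
import hmac, hashlib

key = b"shared-secret-key"
message = b"transfer 100 to account 42"

tag = hmac.new(key, message, hashlib.sha256).hexdigest()   # sender appends this tag

def verify(received_message, received_tag):
    expected = hmac.new(key, received_message, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, received_tag)      # constant-time comparison

print(verify(message, tag))                        # True
print(verify(b"transfer 999 to account 13", tag))  # False: modification detected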
Although the modification of messages will often happen at the higher network layers (i.e., within applications), networks can be set up to provide robustness or resilience against interception and change of a message (man-in-the-middle attack) or replay attacks. Ways to accomplish this can be based on encryption or checksums on messages, as well as on access control measures for clients that would prevent an attacker from gaining the necessary access to send a modified message into the network. Conversely, many protocols, such as SMTP, HTTP, or even DNS, do not provide any degree of authentication (see below), so that it becomes relatively easy to inject messages with fake sender information into a network from the outside through an existing gateway. The fact that no application can rely on the security or authenticity of underlying protocols has become a common design factor in networking. Methodology of an Attack. Security attacks have been described formally as attack tree models.20 Attack trees are based upon the goal of the attacker, the risk of the defender, and the vulnerabilities of the defense systems. They form a specialized form of decision tree that can be used to formally evaluate system security.
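An attack tree can be represented quite literally as nested AND/OR nodes and evaluated against an assumed set of attacker capabilities. The following toy sketch (Python) is illustrative only; the node structure and leaf names are invented, not taken from the referenced models.

# Illustrative only: an attack tree as nested AND/OR nodes with invented leaves.
def evaluate(node, capabilities):
    kind = node[0]
    if kind == "LEAF":
        return node[1] in capabilities
    children = node[1]
    if kind == "OR":                       # any child path suffices
        return any(evaluate(c, capabilities) for c in children)
    if kind == "AND":                      # all child steps are required
        return all(evaluate(c, capabilities) for c in children)

tree = ("OR", [
    ("LEAF", "steal credentials"),
    ("AND", [("LEAF", "find open modem"), ("LEAF", "guess weak password")]),
])

print(evaluate(tree, {"find open modem", "guess weak password"}))  # True: goal reachable
print(evaluate(tree, {"find open modem"}))                         # False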
The following methodology describes not the attack tree itself (which is a defender's view), but the steps that an attacker would undergo to successfully traverse the tree toward his or her target.21 Target Acquisition. In network security, an attack usually starts by intelligence gathering to obtain a collection of possible targets, for instance, by evaluating directory services and by network scanning.
If the attacker is after a specific target (as in industrial espionage settings), he will use a different approach than if he were after any susceptible target (for instance, if a successful security breach would be a kind of trophy for the attacker among his peers). In the second case, the attacker is far more likely to use a form of discovery scanning before performing his target analysis. It is therefore important to limit information on a network and make intelligence gathering as difficult as possible. This would include installation of split DNS zones (internal nodes are only visible on the inside of a network), network address translation, limiting access to directories of persons and assets, using hidden paths, nonstandard privileged usernames, etc. Importantly, all of these obscurity measures do not have an inherent security value — they serve to slow the attacker down but will not in themselves provide any protection beyond this point.22,23 Target Analysis. In a second step, the identified target is analyzed for security weaknesses that would allow the attacker to obtain access. Depending on the type of attack, the discovery scan has already taken this into account, e.g., by scanning for servers susceptible to a certain kind of buffer overflow attack. Tools available for the target acquisition phase are generally capable of automatically performing an initial target analysis.
The most effective protection is to minimize security vulnerabilities, for instance, by applying software patches at the earliest possible opportunity and using effective configuration management. In addition, target analysis should be made more difficult for the attacker. For example, system administrators should minimize the system information (e.g., system type, build, and release) that an attacker could glean, making it more difficult to attack the system. Target Access. In the next step, an attacker will obtain some form of access to the system. This can be access as a normal user or as a guest. The attacker could exploit known vulnerabilities or use common tools for this, or bypass technical security controls altogether through social engineering attacks.
To mitigate the risk of unauthorized access, existing user privileges need to be well managed, access profiles need to be up to date, and unused
accounts should be blocked or removed. Access should be monitored, and monitoring logs need to be regularly analyzed.24 Target Appropriation. As the second-to-last step of an attack, the attacker can then escalate his or her privileges on the system to gain system-level access. Again, exploitation of known vulnerabilities through existing or bespoke tools and techniques is the main technical attack vector; however, other attack vectors, such as (again and always) social engineering, need to be taken into account.
Countermeasures against privilege escalation, by nature, are similar to the ones for gaining access. However, because an attacker can gain full control of a system through privilege escalation, secondary controls on the system itself (such as detecting unusual activity in log files) are less effective and reliable.25 Network (router, firewall, and intrusion detection system) logs can therefore prove invaluable. Last but not least, the attacker may look to sustain control of the system to regain access at a later time or to use it for other purposes, such as sending spam or as a stepping stone for other attacks. To this end, the attacker could avail himself of prefabricated rootkits to sustain control. Such a rootkit will not only allow access, but also hide its own existence from cursory inspection. To detect the presence of unauthorized changes, which could indicate access from an attacker or backdoors into the system, the use of host-based intrusion detection systems26 can provide detection that cannot easily be bypassed. Output from the host-based IDS (such as regular snapshots or file hashes) needs to be stored in such a way that it cannot be overwritten from the source system. Network Security Tools. Tools make a security practitioner's job easier. Whether they are aids for collecting input for risk analysis or scanners to assess how well a server is configured, tools automate processes, which saves time and reduces error.27
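The host-based integrity snapshots mentioned above can be as simple as a periodic set of file hashes compared against a known-good baseline. The following minimal sketch (Python, standard library) illustrates the idea; the directory is a placeholder, and a real tool would store the baseline where the monitored host cannot overwrite it.

# A minimal sketch of a host-based integrity snapshot: hash files under a directory
# and compare against a stored baseline. The path is a placeholder.
import hashlib, os

def snapshot(root):
    digests = {}
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            with open(path, "rb") as f:
                digests[path] = hashlib.sha256(f.read()).hexdigest()
    return digests

baseline = snapshot("/etc")            # taken while the system is known to be good
# ...later...
current = snapshot("/etc")
changed = [p for p, d in current.items() if baseline.get(p) != d]
print(changed)                         # unexpected changes may indicate a compromise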
Do not fall into the trap of reducing network security to collecting and using tools. Your ability as a practitioner has nothing to do with how cool your tools are. The only tool that really matters is the one that is between your ears. Use that one well, and you will go far. Intrusion Detection Systems. Intrusion detection systems (IDSs) monitor activity and send alerts when they detect suspicious traffic. There are two broad classifications of IDSs: host-based IDSs, which monitor activity on servers and workstations, and network-based IDSs, which monitor network activity. We will discuss network-based IDSs.
Currently, there are two approaches to IDSs. An appliance on the network can monitor traffic for attacks based on a set of signatures (analogous to antivirus
software), or the appliance can watch the network's traffic for a while, learn what traffic patterns are normal, and send an alert when it detects an anomaly. Of course, an IDS can use a hybrid of the two approaches. Independent of the IDS's approach, how an organization uses its IDS determines whether the tool is effective. Despite its name, an IDS should not be used to detect intrusions. Instead, it should send an alert when it detects interesting, abnormal traffic that could be a prelude to an attack. For example, someone in the engineering department trying to access payroll information over the network at 3 A.M. is probably very interesting and not normal. Or, perhaps a sudden rise in network utilization should be noted. The above implies that an organization understands the normal characteristics of its network. Considering modern networks' complexity and how much they change, that task is much easier said than done. The reader will want to familiarize himself with Snort,28 a free and open-source intrusion detection system. In addition, a large number of commercial tools are available. The reader is advised to consult the respective vendors' Web sites. Scanners. A network scanner can be used in several ways:
• Discovery of devices and services on a network, for instance, to establish whether new or unauthorized devices have been connected. Conversely, this type of scan can be used for intelligence gathering on potentially vulnerable services.
• Test of compliance with a given policy, for instance, to ensure certain configurations (deactivation of services) have been applied.
• Test for vulnerabilities, for instance, as part of a penetration test, but also in preparation for an attack.
DISCOVERY SCANNING. A discovery scan can be performed with very simple methods, for example, by sending a ping packet (ping scanning) to every address in a subnet. More sophisticated methods will also discover the operating system and services of a responding device. COMPLIANCE SCANNING. A compliance scan can be performed either from the network or on the device (for instance, as a security health check). If performed on the network, it will usually include testing for open ports and services on the device. VULNERABILITY SCANNING. A vulnerability scan can either test for vulnerability conditions or try an active exploitation of the vulnerability. A vulnerability scan can be performed in a nondisruptive manner or under acceptance of the fact that even a test for certain vulnerabilities might affect the target's availability or performance.
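A minimal discovery or compliance probe can be as simple as attempting TCP connections to a short list of ports. The sketch below (Python, standard library) is illustrative only; the address and port list are placeholders, and such probes should only be run against systems one is authorized to test.

# Illustrative only: a very small TCP connect probe of the kind a discovery or
# compliance scan performs. Run only against systems you are authorized to test.
import socket

def open_ports(host, ports, timeout=0.5):
    found = []
    for port in ports:
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.settimeout(timeout)
        try:
            if s.connect_ex((host, port)) == 0:   # 0 means the connection succeeded
                found.append(port)
        finally:
            s.close()
    return found

print(open_ports("192.0.2.10", [22, 25, 80, 443]))   # placeholder address and ports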
When new vulnerabilities have been published or are exploited, targeted scanner tools often become available from software vendors, antivirus vendors, independent vendors, or the open-source community. SCANNING TOOLS. This is not the place to describe the functionality of the many available scanning tools in detail. However, readers will want to familiarize themselves with the following tools:
• Nessus,29 a vulnerability scanner
• Nmap,30 a discovery scanner that will allow determining not only services running on a machine, but also other host characteristics, such as a machine's operating system
In addition, a large number of commercial tools exist. Readers are advised to consult the respective vendor's information on the Web. Layer 1: Physical Layer Concepts and Architecture A network's physical topology relates to how network components are connected with each other. The appropriate topology for a network can be determined by assessing the available protocols, how end nodes will be used, available equipment, financial constraints, and the importance of fault tolerance. Communication Technology. Analog Communication. Analog signals use electronic properties, such as frequency and amplitude, to represent information. Analog recordings are a classic example: A person speaks into a microphone, which converts the vibration from acoustical energy to an electrical equivalent. The louder the person speaks, the greater the electrical signal's amplitude. Likewise, the higher the pitch of the person's voice, the higher the frequency of the electrical signal.
Analog signals are transmitted on wires, such as twisted pair, or with a wireless device. In radio communication, for example, the electrical representation of the person’s voice would be modulated with a carrier signal and broadcasted. Digital Communication. Whereas analog communication uses complex waveforms to represent information, digital communication uses two electronic states (on and off). By convention, 1 is assigned to the on state and 0 to off. Electrical signals that consist of these two states can be transmitted over cable, converted to light and transmitted over fiber optics, and broadcasted with a wireless device. In all of the above media, the signal would be a series of one of two states: on and off.
It is easier to ensure the integrity of digital communication because the two states of the signal are sufficiently distinct. When a device receives a digital
Figure 7.1. Network with a bus topology.
transmission, it can easily determine which digits are 0s and which are 1s (if it cannot, then the device knows the signal is erroneous). On the other hand, analog's complex waveforms make ensuring integrity very difficult. Network Topology. Bus. A bus is a LAN with a central cable (bus) to which all nodes (devices) connect. All nodes transmit directly on the central bus. Each node listens to all of the traffic on the bus and processes only the traffic that is destined for it. This topology relies on the data-link layer to determine when a node can transmit a frame on the bus without colliding with another frame on the bus.
A LAN with a bus topology is shown in Figure 7.1. Advantages of buses include: • Adding a node to the bus is easy. • A node failure will not likely affect the rest of the network. Disadvantages of buses include: • Because there is only one central bus, a bus failure will leave the entire network inoperable. Tree. Tree topology is similar to a bus. Instead of all of the nodes connecting to a central bus, the devices connect to a branching cable. Like a bus, every node receives all of the transmitted traffic and processes only the traffic that is destined for it. Furthermore, the data-link layer must transmit a frame only when there is not a frame on the wire. A network with a tree topology is shown in Figure 7.2.
Advantages of a tree include: • Adding a node to the tree is easy. • A node failure will not likely affect the rest of the network.
Figure 7.2. Network with a tree topology.
Disadvantages of a tree include: • A cable failure could leave the entire network inoperable. Ring. Ring is a closed-loop topology. Data is transmitted in one direction. Each device receives data from its upstream neighbor only and transmits data to its downstream neighbor only. Typically rings use coaxial cables or fiber optics. A Token Ring network is shown in Figure 7.3.
Advantages of rings include: • Because rings use tokens, one can predict the maximum time that a node must wait before it can transmit (i.e., the network is deterministic). • Rings can be used as a LAN or network backbone. Disadvantages of rings include: • Simple rings have a single point of failure. If one node fails, the entire ring fails. Some rings, such as fiber distributed data interface (FDDI), use dual rings for failover. Mesh. In a mesh network, all nodes are connected to every node on the network. A full mesh network is usually too expensive because it requires many connections. As an alternative, a partial mesh can be employed in which only selected nodes (typically the most critical) are connected in a full mesh and the remaining nodes are connected to a few devices. As an
Figure 7.3. Network with a ring topology.
Figure 7.4. Network with a mesh topology.
example, core switches, firewalls, and routers and their hot standbys are often all connected to ensure as much availability as possible. A full mesh network is shown in Figure 7.4. Advantages of a mesh include: • Mesh networks provide a high level of redundancy. Disadvantages of a mesh include: • Mesh networks are very expensive because of the sheer number of cables that are required.
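The cabling cost of a full mesh grows quickly because n nodes require n(n-1)/2 links. A tiny sketch (Python) makes the growth obvious:

# A full mesh of n nodes needs n*(n-1)//2 links, which is why large full meshes are costly.
def full_mesh_links(n):
    return n * (n - 1) // 2

for n in (4, 10, 50):
    print(n, "nodes ->", full_mesh_links(n), "links")   # 6, 45, 1225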
Figure 7.5. Network with a star topology. Star. All nodes in a star network are connected to a central device, such as a hub, switch, or router. Modern LANs usually employ a star topology. A star network is shown in Figure 7.5.
Advantages of a star include: • Star networks require fewer cables than full or partial mesh. • Star networks are easy to deploy, and nodes can be easily added or removed. Disadvantages of a star include: • The hub is a single point of failure. If the hub is not functional, all of the connected nodes lose network connectivity. Technology and Implementation Cable. Networks have very impressive devices, including computers with dozens of CPUs that act as many virtual servers and phone booth-size routers used by ISPs. It is tempting to underestimate the importance of cables in a network. Yet, without the cables, there would not be a network, just stand-alone components. One can think of cables as the glue that holds a network together.
Selecting proper cables in a network design is imperative. If inappropriate ones are used, the results can be as disastrous as using the wrong server or router. Cables have to withstand much that threatens the confidentiality, integrity, and availability of the information on the network. Consider the risk of someone tapping into a cable to intercept its signal, or electromagnetic interference from nearby devices, or simply the dangers of a cable breaking. These risks, considered together with the technical parameters of cables, show that the correct cable must be used for each application. Here are some parameters that should be considered when selecting cables:
Throughput: The rate that data will be transmitted. Certain cables, such as fiber optic, are designed for hauling an incredible amount of data at once.
Distance between devices: The degradation or loss of a signal (attenuation) in long runs of cable is a perennial problem, especially if the signal is at a high frequency. Also, the time required for a signal to travel (propagation delay) may be a factor. A bus topology that uses collision detection may not operate correctly if the cable is too long.
Data sensitivity: What is the risk of someone intercepting the data in the cables? Fiber optics, for example, makes data interception very difficult.
Environment: It is a cable-unfriendly world. Cables may have to be bent when installing. The amount of electromagnetic interference is a factor because cables in an environment with a lot of interference may have to be shielded.
Twisted Pair. Unshielded Twisted Pair. Pairs of copper wires are twisted together to reduce electromagnetic interference and cross talk. Each wire is insulated with a fire-resistant material, such as Teflon. The twisted pairs are surrounded by an outer jacket that physically protects the wires. The quality of cable, and therefore its appropriate application, is determined by the number of twists per inch, the type of insulation, and conductive material.
To help determine which cables are appropriate for an application or environment, cables are assigned into categories (Table 7.1). Unshielded twisted pair (UTP) has several drawbacks. Because UTP does not have shielding like shielded twisted-pair cables, UTP is susceptible to interference from external electrical sources, which could reduce the integrity of the signal. Also, to intercept transmitted data, an intruder can install a tap on the cable or monitor the radiation from the wire. Thus, UTP may not be a good choice when transmitting very sensitive data or when installed in an environment with much electromagnetic interference (EMI) or radio frequency interference (RFI).
Table 7.1. Cable Categories
Category 1: less than 1 Mbps; analog voice and basic rate interface (BRI) in Integrated Services Digital Network (ISDN)
Category 2: <4 Mbps; 4-Mbps IBM Token Ring LAN
Category 3: 16 Mbps; 10 Base-T Ethernet
Category 4: 20 Mbps; 16-Mbps Token Ring
Category 5: 100 Mbps; 100 Base-TX and Asynchronous Transfer Mode (ATM)
Category 5e: 1000 Mbps; 1000 Base-T Ethernet
Category 6: 1000 Mbps; 1000 Base-T Ethernet
Despite its drawbacks, UTP is the most common cable type. In fact, many grocery stores carry CAT-5 or CAT-5e cables. UTP is inexpensive, can be easily bent during installation, and, in most cases, the risk from the above drawbacks is not enough to justify more expensive cables. Shielded Twisted Pair (STP). Shielded twisted pair is similar to UTP. Pairs of insulated twisted copper are enclosed in a protective jacket. However, STP uses an electronically grounded shield to protect the signal. The shield surrounds each of the twisted pairs in the cable, surrounds the bundle of twisted pairs, or both. The shield protects the electronic signals from outside interference and from eavesdropping by intruders.
Although the shielding protects the signal, STP has disadvantages compared to UTP. STP is more expensive, bulkier, and harder to bend during installation. Coaxial Cable. Instead of a pair of wires twisted together, coaxial cable (or simply, coax) uses one thick conductor that is surrounded by a grounding braid of wire. A nonconducting layer is placed between the two layers to insulate them. The entire cable is placed within a protective sheath.
The conducting wire is much thicker than twisted pair, and therefore can support greater bandwidth and longer cable lengths. The superior insulation protects coaxial cable from electronic interference, such as EMI and RFI. Likewise, the shielding makes it harder for an intruder to monitor the signal with antennae or install a tap. Coaxial cable has some disadvantages. The cable is expensive and is difficult to bend during installation. For this reason, coaxial cable is used in specialized applications, such as cable TV. Fiber Optics. Fiber optics takes a very different approach to cabling. Instead of using a metal conductor to transmit and receive electrical signals, fiber optics uses glass or plastic to transmit light. Fiber optics consists of three components: a light source, the optical cable, and a light detector.
The light source transmits the optical signal on the fiber cable. There are two types of light sources: Light-emitting diodes (LEDs): Sophisticated cousins to the ubiquitous LEDs found in consumer electronics. LEDs are less expensive than diode lasers, but offer less bandwidth over a shorter distance. Typically LEDs are used in LANs. Diode lasers: A much more expensive alternative, especially because they require more expensive fiber cables and light detectors. This optical source is used by carriers on their backbone. Optical fiber is made from a very narrow glass or plastic fiber that is surrounded by a cladding that is designed to reflect transmitted light back into
the fiber. The cladding in turn is covered by a protective sheath. There are two types of optical fiber: Multimode fiber: Light is transmitted in slightly different modes (paths) in fibers that are about 50 to 100 microns in diameter. Due to the relatively large diameter, the light disperses too much when using medium and long cable lengths. Single-mode fiber: Single-mode fiber is about 10 microns in diameter. As the name implies, the transmitted light will take a direct path down the center of the fiber. This allows for greater bandwidth and longer cable lengths. Single-mode fiber is suitable for carrier backbones. Light detectors convert transmitted optical signals back into electrical energy. Fiber optics has clear advantages over copper cables. Fiber optics can support 40 gigabits or more per second, which far exceeds coaxial cable or twisted pair, and it supports longer cable distances without amplification. From a security perspective, fiber optics' immunity to electromagnetic interference (EMI) and radio frequency interference (RFI) is important. Because fiber optics does not emit energy from the cable, data cannot be remotely intercepted.31 Fiber optics has disadvantages, though. It is relatively expensive compared to UTP, cumbersome to install, and relatively difficult to acquire. Therefore, fiber optics is generally reserved for high-bandwidth applications. Patch Panels. Even moderate-size data centers have many interconnected devices, such as switches, routers, servers, workstations, and even test equipment. It is a challenge for network administrators to organize the cables that connect these devices, and to easily modify how they are connected.
As an alternative to directly connecting devices, devices are connected to the patch panel. Then, a network administrator can connect two of these devices by attaching a small cable, called a patch cord, to two jacks in the panel. To change how these devices are connected, network administrators only have to reconnect patch cords. Modems. Modems (modulator/demodulator) allow users remote access to a network via analog phone lines. Essentially, modems convert digital signals to analog and vice versa. A modem that is connected to the user's computer converts a digital signal to analog to be transmitted over a phone line. On the receiving end, a modem converts the user's analog signal to digital and sends it to the connected device, such as a server. Of course, the process is reversed when the server replies. The server's reply is converted from digital to analog and transmitted over the phone line, and so on.
To address the problem of bandwidth reduction caused by errors in converting analog signals to digital, a new modem standard was created. Because most organizations that support modem access can do so without analog lines, V.90, and later V.92, were created. The new standards assume that a remote user's connection is the only one that is analog. With only one analog connection, the above conversion errors are reduced, which yields greater bandwidth. Because the upstream transmission (i.e., from the user) is converted from analog to digital, the transfer speed is equivalent to that of a traditional modem. However, the downstream transmission is not converted from analog to digital, and therefore has the potential of speeds approaching the maximum 56 Kbps. The slower upstream speed is often not noticeable because much less data is transferred in that direction. For instance, consider a Web-browsing session. The upstream data is typically generated by mouse clicks and keystrokes, which does not need a lot of bandwidth. It is the download of information from the Web server that requires and receives that extra bandwidth.32 V.92 offers improved performance over V.90, including reducing the time required for connection, the ability not to disconnect when the user's line receives a call-waiting signal, and increased bandwidth in both directions.33 Modems allow remote users to access a network from almost any analog phone line worldwide. While this provides easy access to telecommuters, road warriors, etc., it also provides easy access for intruders, who know they can sneak in through an organization's back door while the security staff protects the Internet gateway. In fact, many organizations have implemented policies that forbid modems on the network. Wireless Transmission Technologies. Direct-Sequence Spread Spectrum (DSSS). Direct-sequence spread spectrum is a wireless technology that
spreads a transmission over a much larger frequency band, and with correspondingly smaller amplitude. By spreading the signal over a wider band, the signal is less susceptible to interference at a specific frequency. In other words, the interference affects a smaller percentage of the signal. During transmission, a pseudorandom noise code (PN code) is modulated with the signal. The sender and receiver's PN code generators are synchronized, so that when the signal is received, the PN code can be filtered out. Frequency-Hopping Spread Spectrum (FHSS). This wireless technology spreads its signal over rapidly changing frequencies. Each available frequency band is subdivided into subfrequencies. Signals rapidly change (hop) among these subfrequencies in an order that is agreed upon between the sender and receiver.
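The agreed hop order can be illustrated by having both ends derive the same pseudorandom sequence from a shared seed. The sketch below (Python) is a toy model only; the channel numbers and seed are invented and do not reflect any real FHSS parameter set.

# Illustrative only: sender and receiver derive the same pseudorandom hop order from a
# shared seed, so both sides hop among subfrequencies in the agreed sequence.
import random

SUBCHANNELS = list(range(1, 80))     # invented channel numbers
SHARED_SEED = 0xC0FFEE               # agreed upon out of band

def hop_sequence(seed, length):
    rng = random.Random(seed)        # same seed -> same sequence on both ends
    return [rng.choice(SUBCHANNELS) for _ in range(length)]

sender_hops   = hop_sequence(SHARED_SEED, 10)
receiver_hops = hop_sequence(SHARED_SEED, 10)
print(sender_hops == receiver_hops)  # True: both sides follow the same hop order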
The benefit of FHSS is that interference at a specific frequency will affect the signal only during a short interval. Conversely, FHSS can cause interference with adjacent DSSS systems. Orthogonal Frequency Division Multiplexing (OFDM). A signal is subdivided into subfrequency bands, and each of these bands is manipulated so that they can be broadcasted together without interfering with each other. Frequency Division Multiple Access (FDMA). Frequency division multiple access is used in analog cellular only. It subdivides a frequency band into subbands and assigns an analog conversation to each subband. Time Division Multiple Access (TDMA). Time division multiple access multiplexes several digital calls (voice or data) at each subband by devoting a small time slice in a round-robin to each call in the band. Two subbands are required for each call: one in each direction between sender and receiver. Code Division Multiple Access (CDMA), CDMA 2000, Wideband CDMA. CDMA is a spread-spectrum wireless technology that is mostly used for cellular technology. Like DSSS, it spreads each call over a large frequency band and tags it with a pseudorandom noise code to differentiate between the calls. Qualcomm is a driver of this technology and is able to multiplex approximately three times as many calls as other technologies.
CDMA 200034 offers an improved capability of ten times the number of calls and transmission rates of 153.6 Kbps.35 Wideband CDMA (or W-CDMA) uses a wider band than CDMA, which increases the throughput of the carrier. Mobile Telephony. GLOBAL SYSTEM FOR MOBILE COMMUNICATIONS (GSM). GSM is the most popular cellular technology in the world. A frequency band is subdivided into simplex channels; each can support as many as eight callers using time division multiplexing (see Tanenbaum's Computer Networks in "General References").
Mobile subscribers are associated and identify themselves with their so-called International Mobile Subscriber Identity (IMSI), a (usually) 15-digit number coded into the user's Subscriber Identity Module (SIM) card and containing information about home country and network. Because the user (or mobile phone) authenticates with the network, but not the network with the mobile phone, a man-in-the-middle attack can be performed using a device called an IMSI catcher, which the attacker can use to masquerade as a base station. Such devices are regularly used by law enforcement agencies. The IMSI catcher can use a control command in GSM to deactivate encryption for a call from the targeted device. (Currently, commercially available
mobile phones do not display whether encryption is activated for a connection.) Thus, mobile phone calls can be intercepted with relatively low effort. The drawback, at least from a law enforcement perspective, is that the whereabouts of the target needs to be known to successfully deploy an attack. Layer 2: Data-Link Layer Concepts and Architecture Architecture. In addition to providing throughput, a network's architecture should also help protect its assets. Listed below are the key concepts concerning isolating networks in different domains of trust. Security Perimeter. The security perimeter is the first line of defense between trusted and untrusted networks. In general, it includes a firewall and router that helps filter traffic. Security perimeters may also include proxies and devices, such as an intrusion detection system (IDS), to warn of suspicious traffic.
It is important to note that while the security perimeter is the first line of defense, it must not be the only one. If there are not sufficient defenses within the trusted network, then a misconfigured or compromised device could allow an attacker to enter the trusted network. Network Partitioning. Segmenting networks into domains of trust is an effective way to help enforce security policies. Controlling which traffic is forwarded between segments will go a long way to protecting an organization’s critical digital assets from malicious and unintentional harm. Boundary Routers. Boundary routers primarily advertise routes that external hosts can use to reach internal ones. However, they should also be part of an organization’s security perimeter by filtering external traffic that should never enter the internal network. For example, boundary routers may prevent external packets from the Finger service from entering the internal network because that service is used to gather information about hosts.
A key function of boundary routers is the prevention of inbound or outbound IP spoofing attacks. In using a boundary router, spoofed IP addresses would not be routable across the network perimeter. DUAL-HOMED HOST. A dual-homed host has two network interface cards (NICs), each on a separate network. Provided that the host controls or prevents the forwarding of traffic between NICs, it can be an effective measure to isolate a network. BASTION HOST. Bastion hosts serve as a gateway between a trusted and untrusted network that gives limited, authorized access to untrusted hosts.
For instance, a bastion host at an Internet gateway could allow external users to transfer files to it via FTP. This permits files to be exchanged with external hosts without granting them access to the internal network. If an organization has a network segment that has sensitive data, it can control access to that network segment by requiring that all access must be from the bastion host. In addition to isolating the network segment, users will have to authenticate to the bastion host, which will help audit access to the sensitive network segment. For example, if a firewall limits access to the sensitive network segment, allowing access to the segment from only the bastion host will eliminate the need for allowing many hosts access to that segment. DEMILITARIZED ZONE (DMZ). A demilitarized zone (DMZ), also known as a screened subnet, allows an organization to give external hosts limited access to public resources, such as a company Web site, without granting them access to the internal network. Typically, the DMZ is an isolated subnet attached to a firewall (when the firewall has three interfaces — internal, external, and DMZ — this configuration is sometimes called a three-legged firewall). Because external hosts by design have access to the DMZ (albeit controlled by the firewall), organizations should only place in the DMZ hosts and information that are not sensitive. Transmission Technologies. There are many points that must be considered about transmitting information from sender to receiver. For example, will the information be expressed as an analog or digital wave? How many recipients will there be? If the transmission media will be shared with others, how can one ensure that the signals will not interfere with each other? Synchronous Communications. Synchronous communication uses a timing mechanism to synchronize the transmission of data. The communicating devices can use a clocking mechanism, or the transmitting device can include timing information in the stream.
When devices communicate synchronously, they transmit large frames of data surrounded by synchronizing bit patterns; this is much more efficient than the 3 bits of overhead for every byte in asynchronous transmissions. Error checking in synchronous communication is more robust than in asynchronous communication. For instance, the transmitting device can apply a cyclic redundancy checking (CRC) polynomial to a frame and include the resulting value in the frame. CRC error checking will detect close to all erroneous transmissions. Because of its minimal use of overhead and superior error checking, synchronous communication is much more practical for high-speed, high-volume data transfer than asynchronous.
Asynchronous Communications. In asynchronous communications, a clocking mechanism is not used. Instead, the sending device surrounds each byte with bits that mark the beginning and end of transmission. In addition, one bit is sent for error control. For each byte, a start bit is sent to signal the start of a transmission. Next, the data byte is sent, followed by a parity bit (for error control), and then a stop bit to signal the end of transmission. As you can see, each byte of data requires 3 bits of overhead.
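The framing just described can be sketched in a few lines. The following toy example (Python) frames one byte with a start bit, an even parity bit, and a stop bit; it is illustrative only and does not model any particular serial standard.

# Illustrative only: framing a single byte for asynchronous transmission with a start
# bit, a parity bit (even parity here), and a stop bit, i.e., 3 bits of overhead per byte.
def frame_byte(value):
    bits = [(value >> i) & 1 for i in range(8)]      # data bits, least significant first
    parity = sum(bits) % 2                           # even parity: make the total count of 1s even
    return [0] + bits + [parity] + [1]               # start bit = 0, stop bit = 1

print(frame_byte(ord("A")))   # 11 bits on the wire for 8 bits of data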
Needless to say, the receiving device strips off the overhead bits before sending the data up the TCP/IP stack. Modems and dumb terminals are examples of devices that use asynchronous communication. Unicast, Multicast, and Broadcast Transmissions. Most communication, especially that directly initiated by a user, is from one host to another. For example, when a person uses a browser to send a request to a Web server, he or she sends a packet to the Web server. A transmission with one receiving host is called Unicast.
A host can send a broadcast to everyone on its network or subnetwork. Depending on the network topology, the broadcast could have anywhere from one to tens of thousands of recipients. Like a person standing on a soapbox, this is a noisy method of communication. Typically, only one or two destination hosts are interested in the broadcast; the other recipients waste resources to process the transmission. However, there are productive uses for broadcasts. Consider a router that knows a device's IP address but must determine the device's MAC address. The router will broadcast an Address Resolution Protocol (ARP) request asking for the device's MAC address. Notice how one broadcast could result in hundreds or even thousands of packets on the network. Intruders often leverage this fact in denial-of-service attacks. Public and private networks are used more often than ever for streaming transmissions, such as movies, videoconferences, and music. Given the intense bandwidth required to transmit these streams, and the fact that the sender and recipients are not necessarily on the same network, how does one transmit the stream to only the interested hosts? The sender could send a copy of the stream via Unicast to each receiver. Unless there is a very small audience, Unicast delivery is not practical because the multiple simultaneous copies of the large stream on the network at the same time could cause congestion. Delivery with broadcasts is another possibility, but every host would receive the transmission, even if they were not interested in the stream. Multicast was designed to deliver a stream to only interested hosts. Radio broadcasting is a typical analogy for multicasting. To select a specific
Figure 7.6. Multicast transmission.
radio show, you tune a radio to the broadcasting station. Likewise, to receive a desired multicast, you join the corresponding multicast group. Multicast agents are used to route multicast traffic over networks and administer multicast groups. Each network and subnetwork that supports multicasting must have at least one multicast agent. Hosts use IGMP to tell a local multicast agent that they want to join a specific multicast group. Multicast agents also route multicasts to local hosts that are members of the multicast's group and relay multicasts to neighboring agents. When a host wants to leave a multicast group, it sends an IGMP message to a local multicast agent. Multicasts do not use reliable sessions. Therefore, the multicasts are transmitted as best effort, with no guarantee that datagrams are received. As an example, consider a server multicasting a videoconference to desktops that are members of the same multicast group as the server (Figure 7.6). The server transmits to a local multicast agent. Next, the multicast agent relays the stream to other agents. All of the multicast agents transmit the stream to local hosts that are members of the same multicast group as the server. Circuit-Switched Networks. Circuit-switched networks establish a dedicated circuit between endpoints. These circuits consist of dedicated switch connections. Neither endpoint starts communicating until the circuit is completely established. The endpoints have exclusive use of the circuit and its bandwidth. Carriers base the cost of using a circuit-switched network on the duration of the connection, which makes this
type of network only cost-effective for a steady communication stream between the endpoints. Examples of circuit-switched networks are the plain old telephone service (POTS), Integrated Services Digital Network (ISDN), and Point-to-Point Protocol (PPP). Packet-Switched Networks. Packet-switched networks do not use a dedicated connection between endpoints. Instead, data is divided into packets and transmitted on a shared network. Each packet contains meta-information so that it can be independently routed on the network. Networking devices will attempt to find the best path for each packet to its destination. Because network conditions could change while the partners are communicating, packets could take different paths as they traverse the network and arrive in any order. It is the responsibility of the destination endpoint to ensure that the received packets are in the correct order before sending them up the stack.
Because carriers base the cost of using a packet-switched network on the amount of data that is transmitted, such networks are appropriate for transmissions with significant idle time (i.e., bursty transmissions). Switched Virtual Circuits (SVCs), Permanent Virtual Circuits (PVCs). Virtual circuits provide a connection between endpoints over high-bandwidth, multiuser cable or fiber that behaves as if the circuit were a dedicated physical circuit. There are two types of virtual circuits, based on when the routes in the circuit are established. In a permanent virtual circuit, the carrier configures the circuit's routes when the circuit is purchased. Unless the carrier changes the routes to tune the network, respond to an outage, etc., the routes do not change. On the other hand, the routes of a switched virtual circuit are configured dynamically by the routers each time the circuit is used. Carrier Sense Multiple Access. As the name implies, Carrier Sense Multiple Access (CSMA) is an access protocol that uses the absence/presence of a signal on the medium that it wants to transmit on as permission to speak. Only one device may transmit at a time; otherwise, the transmitted frames will be unreadable. Because there is not an inherent mechanism that determines which device may transmit, all of the devices must compete for available bandwidth. For this reason, CSMA is referred to as a contention-based protocol. Also, because it is impossible to predict when a device may transmit, CSMA is also nondeterministic.
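The contention behavior described here, and the collision recovery elaborated in the following paragraphs, is commonly handled with a random back-off: after a collision, each station waits a random number of time slots before retrying. The sketch below (Python) illustrates binary exponential back-off; the slot time is a placeholder value, not taken from the standard.

# Illustrative only: the random-wait idea behind CSMA/CD retransmission
# (binary exponential back-off). The slot time is an invented constant.
import random

SLOT_TIME = 0.0000512   # seconds; placeholder value

def backoff_delay(attempt):
    """After the Nth collision, wait a random number of slots in [0, 2^N - 1]."""
    slots = random.randint(0, (1 << min(attempt, 10)) - 1)
    return slots * SLOT_TIME

for attempt in range(1, 5):
    print(attempt, backoff_delay(attempt))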
There are two variations of CSMA based on how collisions are handled. LANs using Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA) require devices to announce their intention to transmit by broadcasting a jamming signal. When devices detect the jamming signal,
they know not to transmit; otherwise, there will be a collision. After sending the jamming signal, the device waits to ensure that all devices have received that signal, and then broadcasts the frames on the media. CSMA/CA is used in the IEEE 802.11 wireless standard.36 Devices on a LAN using Carrier Sense Multiple Access with Collision Detection (CSMA/CD) listen for a carrier before transmitting data. If another transmission is not detected, the data will be transmitted. It is possible that a station will transmit before another station’s transmission has had enough time to propagate. If this happens, two frames will be transmitted simultaneously, and a collision will occur. Instead of all stations simply retransmitting their data, which will likely cause more collisions, each station will wait a randomly generated interval before retransmitting. CSMA/CD is part of the IEEE 802.3 standard.37 Polling. A network that employs polling avoids contention by allowing a device (a slave) to transmit on the network only when it is asked by a master device. Polling is used mostly in mainframe protocols, such as Synchronous Data Link Control (SDLC). The point coordination function, an optional function of the IEEE 802.11 standard, uses polling as well.38 Token Passing. Token passing takes a more orderly approach to media access. With this access method, only one device may transmit on the LAN at a time, thus avoiding retransmissions.
A special frame, known as a token, circulates through the ring. When a device wishes to transmit on the network, it must possess the token. The device replaces the token with a frame containing the message to be transmitted and sends the frame to its neighbor. When each device receives the frame, it relays it to its neighbor if it is not the recipient. The process continues until the recipient possesses the frame. That device will copy the message, modify the frame to signify that the message was received, and transmit the frame on the network. When the modified frame makes a trip back to the sending device, the sending device knows that the message was received. Token passing is used in Token Ring and FDDI networks. An example of a LAN using token passing is in Figure 7.7.
Figure 7.7. LAN token passing.
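To make the token-circulation logic above concrete, the following minimal Python sketch simulates a ring of stations passing a single token. The station names and the frame fields are illustrative only; they are not part of any Token Ring or FDDI specification.

    # Minimal simulation of token passing on a logical ring (illustrative only).
    # Stations are held in ring order; only the token holder may transmit.

    def simulate_token_passing(stations, sender, recipient, message, max_hops=100):
        """Walk the token around the ring until `sender` gets it, then carry
        a frame to `recipient` and back to `sender`, which removes it."""
        ring = list(stations)
        pos = 0                      # index of the station currently holding the token
        for _ in range(max_hops):
            holder = ring[pos]
            if holder == sender:
                # Replace the token with a data frame and circulate it.
                frame = {"src": sender, "dst": recipient, "data": message, "ack": False}
                hop = (pos + 1) % len(ring)
                while ring[hop] != sender:         # relay until the frame returns
                    if ring[hop] == frame["dst"]:
                        frame["ack"] = True        # recipient copies data, marks frame as read
                    hop = (hop + 1) % len(ring)
                return frame["ack"]                # sender strips the frame; token is free again
            pos = (pos + 1) % len(ring)            # pass the token downstream
        return False

    if __name__ == "__main__":
        print(simulate_token_passing(["A", "B", "C", "D"], "B", "D", "hello"))  # True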
Ethernet (IEEE 802.3). Ethernet, which is defined in IEEE 802.3, played a major role in the rapid proliferation of LANs in the 1980s. The architecture was flexible and relatively inexpensive, and it was easy to add and remove devices from the LAN. Even today, for the same reasons, Ethernet is the most popular LAN architecture. The physical topologies supported by Ethernet are bus, star, and point to point, but the logical topology is the bus. With the exception of full-duplex Ethernet (which does not have the issues of collisions), the architecture uses CSMA/CD. This protocol allows devices to transmit data with a minimum of overhead (compared to Token Ring), resulting in an efficient use of bandwidth. However, because devices must retransmit when more than one device attempts to send data on the medium, too many retransmissions due to collisions can cause serious throughput degradation. The Ethernet standard supports coaxial cable, unshielded twisted pair, and fiber optics. Ethernet was originally rated at 10 Mbps, but like 10-megabyte disk drives, users quickly figured out how to use and exceed its capacity and needed faster LANs. To meet the growing demand for more bandwidth, 100 Base-TX (100 Mbps over twisted pair) and 100 Base-FX (100 Mbps over multimode fiber optics) were defined. When the demand grew for even more bandwidth over unshielded twisted pair, 1000 Base-T was defined, and 1000 Base-SX and 1000 Base-LX were defined for fiber optics. These standards support 1000 Mbps.
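The retransmission behavior described for CSMA/CD is commonly implemented as truncated binary exponential backoff: after the nth collision a station waits a random number of slot times between 0 and 2 to the power of min(n, 10), minus 1, and gives up after 16 attempts. The Python sketch below is a simplified model of that rule, not an implementation of the IEEE 802.3 MAC; the collision probability is an arbitrary stand-in for a real shared medium.

    import random

    SLOT_TIME_US = 51.2   # slot time for classic 10-Mbps Ethernet, in microseconds

    def backoff_slots(collision_count):
        """Truncated binary exponential backoff: choose a random slot count."""
        exponent = min(collision_count, 10)          # cap the window at 2**10 - 1 slots
        return random.randint(0, 2 ** exponent - 1)

    def send_frame(collision_probability=0.3, max_attempts=16):
        """Try to transmit, backing off after each simulated collision."""
        for attempt in range(1, max_attempts + 1):
            if random.random() > collision_probability:    # carrier free, no collision
                return attempt                              # success on this attempt
            wait = backoff_slots(attempt) * SLOT_TIME_US
            print(f"collision {attempt}: waiting {wait:.1f} microseconds")
        return None                                         # excessive collisions, give up

    if __name__ == "__main__":
        print("delivered on attempt:", send_frame())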
Token Ring (IEEE 802.5). Originally designed by IBM, Token Ring was adapted with some modification by the IEEE as IEEE 802.5. Despite the architecture’s name, Token Ring uses a physical star topology. The logical topology, however, is a ring. Each device receives data from its upstream neighbor and transmits to its downstream neighbor. Token Ring uses token passing to mediate which device may transmit. As mentioned in the section on token passing, a special frame, called a token, is passed on the LAN. To transmit, a device must possess the token. To transmit on the LAN, the device appends data to the token and sends it to its next downstream neighbor. Devices relay frames whenever they are not the intended recipient. When the destination device receives the frame, it copies the data, marks the frame as read, and sends it to its downstream neighbor. When the frame returns to the source device, the source confirms that the frame has been read. It then removes the frame from the ring. What happens if a device goes offline immediately after it transmits the data? No device will be able to remove the frame from the ring, and it will continue to circulate forever, preventing any further transmissions. To recover from errors and perform ring maintenance, a device on the ring is designated as the active monitor. In the above scenario, the active monitor would purge the ring the second time the frame was received. Because Token Rings are deterministic, the active monitor can calculate the maximum time interval required for a token to circulate around the ring. If the active monitor does not receive a token within that interval, then it assumes the token is lost and transmits a new one. All other devices that are not the active monitor are considered standby monitors. If the active monitor goes offline, then one of the standby monitors will be promoted to active monitor. Token Rings use a beaconing process to perform automatic error correction. For example, if a device detects a hardware fault, it will send a beacon frame, which contains the address of the device’s nearest upstream neighbor. When the neighbor receives the frame, it will perform self-diagnostics to determine if it is the cause of the fault (Figure 7.8). Token Ring allows devices on the network to receive a higher priority to transmit. Each device has an assigned priority number. When a frame with information is circulating around the ring, a device can set the priority of the frame so that the next generated token will only be able to be possessed by a device with that priority number or higher. Token Ring supports speeds of 4, 16, and 100 Mbps. Fiber Distributed Data Interface (FDDI). FDDI is a token-passing architecture that uses two rings. Because FDDI employs fiber optics, FDDI was designed to be a 100-Mbps network backbone. Only one ring (the primary) is used; the other one (secondary) is used as a backup. Information in the rings flows in opposite directions from each other. Hence, the rings are referred to as counterrotating.
Figure 7.8. Token Ring beacon.
Devices that are on the FDDI ring fall into four categories: Single attachment station (SAS): A device that connects to the primary ring only. These are typically devices that are not expected to remain online at all times. Single attachment concentrator (SAC): Used as a concentrator to connect SAS devices to the primary ring. Dual attachment station (DAS): A device that connects to both rings. Dual attachment concentrator (DAC): A device that attaches devices to both rings. SAS, SAC, and DAS devices connect to this device. Devices that are attached to both rings should not be powered off or disconnected; otherwise, a fault condition will occur, and the two rings will fail over into a single ring. Copper distributed data interface (CDDI) allows FDDI to run on copper wire, instead of fiber optics. Although new developments in copper wire support faster transmission speed, copper cable segments cannot be as long as fiber. For this reason, CDDI is best suited for LANs. Technology and Implementation Ethernet. Ethernet is an important architecture. Due to its relatively low cost and ease of administration, it is one of the most popular local area network technologies. Concentrators. Concentrators multiplex connected devices into one signal to be transmitted on a network. For instance, an FDDI concentrator multiplexes transmissions from connected devices to an FDDI ring.
Front-End Processors. Input and output involve moving parts, such as fingers typing and disks spinning, which are quite slow compared to the speed of CPUs (central processing units). Servicing input and output therefore reduces a computer’s throughput. Some hardware architectures employ a hardware front-end processor that sits between the input/output devices and the main computer. By servicing input/output on behalf of the main computer, front-end processors reduce the main computer’s overhead. Multiplexers. A multiplexer combines multiple signals into one signal for transmission. Using a multiplexer is much more efficient than transmitting the same signals separately. Multiplexers are used in devices from simple hubs (see below) to very sophisticated dense wavelength division multiplexers (DWDMs) that combine multiple optical signals on one optical fiber. Hubs and Repeaters. Hubs are used to implement a physical star topology. All of the devices in the star connect to the hub. Essentially, hubs retransmit signals from each port to all other ports. Although hubs can be an economical method to connect devices, there are several important disadvantages:
• All connected devices will receive each other’s broadcasts, potentially wasting valuable resources processing irrelevant traffic. • All devices can read and potentially modify the traffic of other devices. • If the hub becomes inoperable, then the connected devices will not have access to the network. • As the distance between the sender and receiver increases, the signal’s quality can degrade due to attenuation. To allow longer distances while preserving signal quality, repeaters are used to reamplify signals. For example, a repeater can be used to increase the length of an Ethernet bus to accommodate a physically larger network. Switches and Bridges. As LANs grow in number of users, bandwidth utilization, and physical dimensions, they can reach thresholds that prevent the LAN from expanding. Bandwidth is exceeded, cable lengths cannot increase because of signal attenuation, and the LAN can be too large to manage.
On the other side of the coin, how would one interconnect LANs without reconfiguring the networks so that they can communicate? One possible solution to both issues is to use bridges. Bridges are layer 2 devices that filter traffic between segments based on MAC addresses. In addition, they amplify signals to facilitate physically larger networks. A basic bridge filters out frames that are not destined to another segment. (Consider the network shown in Figure 7.9.) When a client PC on segment A transmits to a server on segment A, the bridge will read the destination’s MAC address and not forward the traffic to segments B and C, relieving them of the burden of traffic that is not
Figure 7.9. Network segments connected with a bridge.
destined to a device on these segments. In this simple example, this might not seem important, but if the segments had hundreds of devices on long network segments, the bridge would greatly reduce unnecessary traffic and allow the network to physically grow without signal attenuation. Bridges can connect LANs with unlike media types, such as connecting a UTP segment with a segment that uses coaxial cable. Bridges do not reformat frames, such as converting a Token Ring frame to Ethernet. This means that only identical layer 2 architectures can be connected with a simple bridge (e.g., Ethernet to Ethernet, etc.). Network
Figure 7.10. Simple switched network.
administrators can use encapsulating bridges to connect dissimilar layer 2 architectures, such as Ethernet to Token Ring. These bridges encapsulate incoming frames into frames of the destination’s architecture. Other specialized bridges filter outgoing traffic based on the destination MAC address. In the network in Figure 7.10, suppose the bridge were a filtering bridge. When a user on segment A sends traffic to a server on segment B, the bridge will forward the transmission to segment B only, reducing unnecessary traffic on segment C. Again, in the network in Figure 7.10, if a server on segment A sends out a broadcast on the wire, would segments B and C receive the broadcast? Because broadcasts are for all devices, the bridge will forward the broadcast. This is an important point to keep in mind about bridges: they do not filter broadcasts. Bridges do not prevent an intruder from intercepting traffic on the local segment. Switches solve the same issues posed at the beginning of this section, except the solutions are more sophisticated and more expensive. Essentially, a basic switch is a multiport device to which LAN hosts connect. Switches forward frames only to the device specified in the frame’s destination MAC address, which greatly reduces unnecessary traffic. (To illustrate, see Figure 7.10.) In this very simple LAN, client A transmits traffic to the server. When the switch receives the traffic, it relays it out of the port to which the server is connected. Client B does not receive any of the traffic. On the other hand,
if the switch were a hub, client B would receive the traffic transmitted between client A and the server. Because client B does not receive the traffic between the other client and server, the likelihood of client B intercepting the traffic is reduced (there are sophisticated attacks that could trick a switch, especially a poorly configured one, into sending traffic to client B). Switches can perform more sophisticated functions to increase network bandwidth. Due to the increased processing speed of switches, models exist that can make forwarding decisions based on IP address and prioritization of types of network traffic. Like hubs and bridges, switches forward broadcasts. Wireless Local Area Networks. Wireless networks allow users to be mobile while remaining connected to a LAN. Unfortunately, this allows unauthorized users greater access to the LAN as well. In fact, many wireless LANs can be accessed from outside the organization’s property by anyone with a wireless card in a laptop, which effectively extends the LAN to areas where there are no physical controls. Below are some important wireless security issues and controls. General. AUTHENTICATION. As we will see, it is impractical for an organization to hide its access point’s signal from unauthorized users. Instead, authentication should be the first real line of defense. Unfortunately, many wireless authentication mechanisms are weak because they do not provide sufficient assurance that the client and access point are who they claim to be.
Open System Authentication — Open system authentication is the most basic form of wireless authentication. The wireless client is permitted to join the network if its SSID matches the wireless network’s. The wireless client sends a frame containing its SSID to the access point. Upon successful authentication (matching SSIDs), the client is associated with the wireless LAN. Shared-Key Authentication — In shared-key authentication, WEP is used to encrypt a shared secret between the access point and wireless client. After the client identifies its MAC address to the access point, the access point responds with a frame containing a randomly generated challenge using the WEP key stream generator. The client responds with the challenge encrypted with WEP. If the access point decrypts the challenge and it is the same as the one that it sent to the client, the client is authenticated. There is a serious flaw with shared-key authentication. Because exclusive ORs (XORs) are used in the authentication, an attacker can intercept
the challenge and response and recover the key stream. Because of this, shared-key authentication is not considered effective. MAC Address Tables — Authenticating based on MAC address is not very effective. Because it is very easy to spoof a MAC address, there is little assurance that the client is an authorized workstation, instead of an intruder who is advertising a bogus MAC address. SSID. Service Set Identifier (SSID) Broadcasting — The service set identifier is defined in IEEE 802.11 as a name for the wireless LAN, not as an authentication mechanism. Specifying the SSID of the wireless LAN to which you want to connect ensures that you connect to the correct one when several are in the same vicinity. That is the primary reason not to use the default SSID. If neighboring LANs use access points from the same manufacturer and both use the default SSID, then users could easily attempt to connect to the wrong wireless LAN.
Wireless client cards can broadcast a probe to ask wireless LANs that receive the probe to respond with their SSID. Network administrators often configure wireless LANs to block these requests or remove the SSID from the probe response. Because a patient attacker can still access these LANs, not broadcasting the SSID is a minor security control at best. SSID Naming Conventions — Although SSIDs are not for authentication, an organization should not advertise that it is the owner of the wireless LAN. In general, organizations should not use SSIDs that can be easily identified with them, such as company name, product, mascot, etc. PLACEMENT AND CONFIGURATION OF ACCESS POINTS. We previously discussed organizations broadcasting wireless signals past the confines of their property. Wireless access points should be placed and their power levels set to make it more difficult for unauthorized clients to receive a signal (e.g., do not place an access point by a window). However, organizations should not count on hiding their access point’s signal, especially from an intruder with a unidirectional antenna. Besides being unrealistic, that would be an example of security by obscurity — a fundamental fallacy.
Access point placement should focus on ensuring that authorized users can receive a strong signal from an access point. Keep access points away from electrically noisy machines that could interfere with an access point’s transmission. Microwave ovens, for example, often interfere with access points. Encryption. WIRED EQUIVALENT PRIVACY (WEP). Wired equivalent privacy uses a shared secret between the client and access point. Before each packet is transmitted, a CRC-32 checksum is appended to it and both are encrypted using RC4 with the shared secret and an initialization vector. The encrypted packet
with the initialization vector is transmitted. The recipient reverses the process. If the client has the same shared secret, then the packet will be decrypted. Due to flaws in the RC4 implementation in WEP, and the reuse of initialization vector values, WEP transmissions can be decrypted by an attacker in a very short time. WI-FI PROTECTED ACCESS (WPA). After it was announced that WEP could not effectively protect traffic from eavesdropping, the Wi-Fi Alliance had to replace WEP, but IEEE 802.11i was not ready. As a stopgap measure, it released WPA. This system uses an improved implementation of RC4 with 128-bit keys. The initialization vector was expanded from 24 to 48 bits, which supports many more shared secrets. To make an intruder’s task of cracking the encryption of intercepted traffic much more difficult, WPA uses the Temporal Key Integrity Protocol (TKIP). Instead of using the same key for the entire session, TKIP uses a different key for each packet. WEP’s CRC-32 checksum was replaced with a message integrity check, dubbed Michael. Michael protects the packet’s header and data and uses a frame counter to thwart replay attacks.
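The danger of reusing an RC4 key stream (the effect of WEP’s small, repeating initialization vectors) can be illustrated with a few lines of Python. The “key stream” below is a stand-in, not real RC4 output; the point is only that XORing two ciphertexts produced with the same key stream cancels the key stream and leaks the XOR of the two plaintexts, which motivated the move to WPA.

    def xor_bytes(a, b):
        """XOR two equal-length byte strings."""
        return bytes(x ^ y for x, y in zip(a, b))

    # Two plaintexts encrypted with the SAME key stream (as happens when a WEP IV repeats).
    keystream = bytes(range(32))                     # placeholder; real WEP uses RC4 output
    p1 = b"transfer $100 to account 1234"
    p2 = b"transfer $999 to account 5678"
    c1 = xor_bytes(p1, keystream[:len(p1)])
    c2 = xor_bytes(p2, keystream[:len(p2)])

    # An eavesdropper who captures c1 and c2 never sees the key stream, yet:
    leaked = xor_bytes(c1, c2)
    assert leaked == xor_bytes(p1, p2)               # key stream cancels out entirely
    print(leaked)                                    # differences between the messages show through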
Another advantage of WPA is that both the client and network authenticate each other (mutual authentication). In addition to the network verifying that the client is authorized for network access, the client can be assured that he or she is not communicating with an imposter posing as the network. When IEEE 802.11i was completed, WPA2 was certified. Although WPA did not have any significant security flaws, RC4 was replaced by the Advanced Encryption Standard (AES), a stronger encryption algorithm. Also, TKIP and Michael were replaced by the Counter Mode/CBC-MAC Protocol (CCMP), which manages encryption keys and message integrity. WPA2 supports IEEE 802.1X authentication, which is based on the Extensible Authentication Protocol (EAP) framework. The framework allows the authenticating partners to negotiate the authentication method during the authentication phase. EAP authentication methods include: EAP-TLS (EAP, transport layer security): Both the client and authentication server mutually authenticate over a TLS session with digital certificates. In addition to ensuring that the client is authorized to access the network, the client can be confident that he or she is communicating with the desired network, not an imposter. This method is the most secure because an attacker must steal both a digital certificate and its password. However, organizations that have many clients may find there is too much overhead in administering the client certificates for EAP-TLS to be feasible.
EAP-TTLS (EAP, tunneled TLS): Like EAP-TLS, digital certificates are used. However, to establish an encrypted tunnel, the authentication server only presents a certificate to the client. Once the tunnel is established, the client authenticates to the authentication server using an EAP or legacy mechanism that is easier to administer than client-side digital certificates. The lack of client-side certificates makes EAP-TTLS easier to administer than EAP-TLS, but the less robust client authentication makes EAP-TTLS less secure than EAP-TLS. Of course, the extent of how much less secure depends on the strength of the client authentication. EAP-PEAP (EAP, protected EAP): EAP-PEAP is very similar to EAP-TTLS. To establish an encrypted tunnel, the authentication server authenticates to the client with a digital certificate, and the client employs a mechanism other than a digital certificate to authenticate to the server. However, EAP-PEAP requires that the client authenticates with an EAP method. As with EAP-TTLS, EAP-PEAP is easier to administer than EAP-TLS, but the lack of a client-side certificate makes this method less secure than EAP-TLS. During 802.1X authentication, a client (called a supplicant) contacts an authenticator (an access point, for wireless authentication). The authenticator blocks all traffic from the client to the network, except for what is required for authentication. The authenticator sends an EAP challenge to the client and forwards the challenge and client’s EAP response to an authentication server on the wired network. The authentication server (typically a RADIUS server) establishes the appropriate authentication method and sends the corresponding challenge to the client via the access point. The client sends the response back to the authentication server via the access point. If the authentication is successful, a session key is generated and the authenticator removes the network traffic restriction. Instead of EAP, homes or organizations without RADIUS can use preshared 32-byte keys between the access point and clients. In low-risk environments, the use of preshared keys is cost-effective. However, due to the relatively weak authentication and lack of accountability (all clients use the same key), businesses should avoid this option, if feasible. WI-FI PROTECTED ACCESS (WPA2). WPA2, which is based on the work of the IEEE 802.11i task group, addresses many of the problems with wireless security. It supersedes the weak WEP and the stopgap WPA. It uses 802.1X access control to start an EAP authentication method and the Counter Mode/CBC-MAC Protocol (CCMP) for encryption.
WPA2 implements the final IEEE 802.11i amendment to the 802.11 standard (as opposed to WPA, which only implements a subset of IEEE 802.11i) and is compliant with FIPS 140-2.
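For the preshared-key mode mentioned above, the 32-byte pairwise master key is derived from the passphrase and the SSID with PBKDF2 (HMAC-SHA1, 4,096 iterations), which is why a long, random passphrase matters: a short one can be attacked offline once a handshake is captured. A minimal Python sketch, using an invented SSID and passphrase:

    import hashlib

    def wpa_psk(passphrase: str, ssid: str) -> bytes:
        """Derive the 256-bit WPA/WPA2 pairwise master key from a passphrase."""
        return hashlib.pbkdf2_hmac("sha1",
                                   passphrase.encode("utf-8"),
                                   ssid.encode("utf-8"),
                                   4096,      # iteration count fixed by the standard
                                   32)        # 32 bytes = 256 bits

    # Example values only; any real deployment should use a long random passphrase.
    print(wpa_psk("correct horse battery staple", "ExampleWLAN").hex())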
IEEE 802.11b. This amendment to IEEE 802.11 was ratified in 1999. It continued the usage of CSMA/CA and direct-sequence spread spectrum (DSSS). The data rate was increased to 5.5 and 11 Mbps, and support was continued for the original 1- and 2-Mbps rates. The slower rates can be used when signal quality is poor. Use of DSSS ensured that transmissions were resilient to interference due to the number of chips used per transmitted bit.
There are 14 transmission channels in the 2.4-GHz band, spaced 5 MHz apart. Channel 14 is used in Japan only. IEEE 802.11a. IEEE 802.11a was ratified in 1999. Because the 2.4-GHz band specified in IEEE 802.11b was overused, which increased the potential for interference problems, this amendment used the 5-GHz band. The higher frequency has the disadvantage that more access points are required because the receiver and sender must be nearly within line of sight of each other.
This amendment employs orthogonal frequency division multiplexing (OFDM), using 20-MHz channels subdivided into 52 subcarriers (48 are used for data). The maximum transmission speed is 54 Mbps. IEEE 802.11a is not compatible with IEEE 802.11b. IEEE 802.11g. This standard, which was ratified in 2003, combines the frequency band of 802.11b (2.4 GHz) with the increased speed of 802.11a (54 Mbps). IEEE 802.11g is fully compatible with IEEE 802.11b. Cards are available to compensate for the incompatibility of some of the standards.
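The 54-Mbps figure follows from the OFDM parameters: 48 data subcarriers, each carrying 6 coded bits per symbol (64-QAM) with a rate-3/4 convolutional code, and one symbol every 4 microseconds. Those modulation and coding details are not spelled out in the text above, so treat the short calculation below as a supplementary check rather than part of the standard itself.

    # Peak IEEE 802.11a/g data rate from the OFDM parameters.
    data_subcarriers = 48          # of the 52 subcarriers, 4 are pilots
    coded_bits_per_subcarrier = 6  # 64-QAM
    coding_rate = 3 / 4            # convolutional code at the highest rate
    symbol_duration_s = 4e-6       # 4-microsecond OFDM symbol (including guard interval)

    bits_per_symbol = data_subcarriers * coded_bits_per_subcarrier * coding_rate
    rate_mbps = bits_per_symbol / symbol_duration_s / 1e6
    print(rate_mbps)               # 54.0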
Both IEEE 802.11g and IEEE 802.11b are very popular. Bluetooth. Bluetooth allows Bluetooth-enabled devices to communicate over a very short distance without wires. The convenience of wireless exchange of information, cell phone earpieces without cables, etc., has become very popular. Today, Bluetooth technology can be found in electronics such as cell phones, PDAs, and laptops. In fact, the IEEE based its IEEE 802.15 standard on Bluetooth.
Unfortunately, the technology’s security features are inadequate. To make matters worse, far too many owners of Bluetooth devices are not aware of the technology’s vulnerabilities. Some of the vulnerabilities include: Bluejacking: Allows an anonymous message to be displayed on the victim’s device. Buffer overflow: An attacker can remotely exploit bugs in software on Bluetooth-enabled devices. BlueBug attack: An attacker can use the AT commands on a victim’s cell phone to initiate calls, send SMS messages, etc.
Bluetooth devices are designed to communicate over a distance of 10 m (30 ft), but longer ranges are possible. Class 1 Bluetooth devices contain a 100-mW transmitter and are known to bridge distances of up to 1 km. Devices transmit at the 2.4-GHz band using frequency-hopping spread spectrum (FHSS). Address Resolution Protocol (ARP). Given the layer 3 IP address of a device, ARP determines the device’s layer 2 MAC address. ARP tracks IP addresses and their corresponding MAC addresses in a dynamic table called the ARP cache. To determine a device’s address, ARP first looks in its cache to see if the MAC address is already known. If it is not, ARP sends out a broadcast asking all devices to return the MAC address, if they know it. The returned MAC address is added to the cache.
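The cache-then-broadcast behavior described above can be sketched in a few lines of Python. The resolver below is illustrative only; it stands in for the broadcast query that a real IP stack would issue, and the addresses are made up.

    # Toy model of ARP resolution: consult the cache first, broadcast only on a miss.
    arp_cache = {}     # IP address -> MAC address

    def broadcast_who_has(ip):
        """Stand-in for an ARP request broadcast; a real stack asks every host on the segment."""
        known_hosts = {"192.0.2.10": "00:11:22:33:44:55"}   # example data only
        return known_hosts.get(ip)

    def resolve(ip):
        if ip in arp_cache:                      # cache hit: no traffic generated
            return arp_cache[ip]
        mac = broadcast_who_has(ip)              # cache miss: ask the segment
        if mac is not None:
            arp_cache[ip] = mac                  # learned entries are cached without verification...
        return mac                               # ...which is exactly what ARP poisoning abuses

    print(resolve("192.0.2.10"))                 # broadcast, then cached
    print(resolve("192.0.2.10"))                 # answered from the cache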
Because ARP does not require authentication, an attacker could place bogus entries in the ARP cache to carry out other attacks, such as a man-in-the-middle attack. Adding bogus entries to the ARP cache is called ARP poisoning. Point-to-Point Protocol (PPP). PPP is used to connect a device to a network over a serial line. Generally, this protocol is used to connect a remote workstation over a phone line. For example, ISPs use PPP to allow dial-up users access to the Internet.
PPP supports authentication, including the following protocols: Password Authentication Protocol (PAP): A simple, insecure protocol that transmits credentials in plaintext. Challenge Handshake Authentication Protocol (CHAP): A protocol that uses a three-way handshake. The server sends the client a challenge, which includes a random value (a nonce) to thwart replay attacks. The client responds with an MD5 hash of the nonce and the password. The authentication is successful if the client’s response is the one that the server expected. Extensible Authentication Protocol (EAP): An authentication framework whereby the authentication partners establish the authentication method during the authentication phase. See the discussion of WPA in the Encryption section above for details. PPP replaced the Serial Line Internet Protocol (SLIP) in many uses.
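CHAP’s three-way handshake keeps the password itself off the wire: the client returns MD5(identifier || shared secret || challenge), and the server, which knows the same secret, recomputes the hash and compares. A minimal sketch of that computation follows; the identifier, secret, and challenge values are invented for the example.

    import hashlib
    import os

    def chap_response(identifier: int, secret: bytes, challenge: bytes) -> bytes:
        """RFC 1994 response: MD5 over identifier, shared secret, and challenge."""
        return hashlib.md5(bytes([identifier]) + secret + challenge).digest()

    secret = b"example-shared-secret"     # known to both client and server, never transmitted
    challenge = os.urandom(16)            # server-generated nonce; a fresh value thwarts replay
    identifier = 1

    client_reply = chap_response(identifier, secret, challenge)
    server_check = chap_response(identifier, secret, challenge)
    print(client_reply == server_check)   # True: authentication succeeds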
Layer 3: Network Layer Concepts and Architecture Local Area Network (LAN). Local area networks service a relatively small area, such as a home, office building, or office campus. In general, LANs service the computing needs of their local users. LANs consist of most modern computing devices, such as workstations, servers, and peripherals connected in a star topology or internetworked stars. Ethernet is the most popular LAN architecture because it is inexpensive and very flexible. Most LANs have connectivity to other networks, such as dial-up or dedicated lines to the Internet, access to other LANs via wide area networks (WANs), and so on. Virtual Local Area Networks (VLANs). For IT to be successful, it must provide effective computing resources for the business. However, a business rarely aligns itself to make IT’s job easier, which includes physically locating users with similar computing requirements in the same area. For instance, engineers who mostly access the same servers could be scattered throughout a campus on different subnetworks.
Virtual local area networks (VLANs) allow network administrators to use switches to create software-based LAN segments that can be defined based on factors other than physical location. Devices that share a VLAN communicate through switches, without being routed to other subnetworks, which reduces overhead due to router latency (as routers become faster, this is less of an advantage). Furthermore, broadcasts are not forwarded outside of a VLAN, which reduces congestion due to broadcasts. Placing devices that often communicate with each other in the same VLAN allows them to do so more efficiently. For example, the engineers in the above scenario can be placed in a dedicated VLAN with their servers. The throughput between the devices in the VLAN is increased because the traffic is not routed. Also, the broadcasts from the devices are isolated to the VLAN, which improves throughput for the entire LAN. Because VLANs are not restricted to the physical location of devices, they help make networks easier to manage. When a user or group of users change their physical location, network administrators can simply change the membership of ports within a VLAN. Likewise, when additional devices must communicate with members of a VLAN, it is easy to add new ports to a VLAN. VLANs can be configured based on switch port, IP subnet, MAC address, and protocols. It is important to remember that VLANs do not guarantee a network’s security. At first glance, it may seem that traffic cannot be intercepted because communication within a VLAN is restricted to member devices. However, there are attacks that allow a malicious user to see traffic from other VLANs (so-called VLAN hopping). Therefore, a VLAN can be created so that engineers can efficiently share confidential documents, but the VLAN does not significantly protect the documents from unauthorized access. 451
Wide Area Network (WAN) Technologies. Modems and Public Switched Telephone Networks (PSTN). The PSTN is a circuit-switched network that was originally designed for analog voice communication. When a person places a call, a dedicated circuit between the two phones is created. Although it appears to the callers that they are using a dedicated line, they are actually communicating through a complex network. As with all circuit-switched technology, the path through the network is established before communication between the two endpoints begins, and barring an unusual event, such as a network failure, the path remains constant during the call. Phones connect to the PSTN with CAT 3 copper wires to a central office (CO), which services an area of about 1 to 10 km (see Tanenbaum’s Computer Networks in “General References”).
The central offices are connected to a hierarchy of tandem offices (for local calls) and toll offices (toll calls), with each higher level of the hierarchy covering a larger area. Including the COs, the PSTN has five levels of offices. When both endpoints of a call are connected to the same CO, the traffic is switched within the CO. Otherwise, the call must be switched between a toll center and a tandem office. The greater the distance between the callers, the higher in the hierarchy the calls are switched. For example, in Figure 7.11, a call between callers 1 and 2 is switched within their central office. However, a call between callers 1 and 3 must be switched within the leftmost primary toll office. To accommodate the high volume of traffic, toll centers communicate with each other over fiber-optic cables. Today, the PSTN is also used for data communication over wide area networks. Users can use a modem to access a network over a phone line (see “Modems” in the “Layer 1” section of this chapter). Or, a digital subscriber line (DSL) can be used to access the Internet over a phone line. To meet the demands for improved reliability necessary for data communication, phone companies have converted most of the communication within the PSTN to digital. The most notable exception is the connection between the user and the CO, which is still analog. The PSTN is vulnerable to attacks. There is a subculture of phone hackers (phreaks) that attempt to make toll calls for free, manipulate public and private phone switches, gain unauthorized access to voice mail systems, etc. For example, in the 1960s, phone hackers discovered that AT&T signaled a 2600-Hz tone on all idle toll lines and devised methods of reproducing that tone to make free long-distance calls. Although modems allow remote access to networks from almost anywhere, they could be used as a portal into the network by an attacker. Using automated dialing software, he or she can dial the entire range of phone numbers used by the company to identify modems. If the host, to
Figure 7.11. PSTN.
which the modem is attached, has a weak password, then the attacker can easily gain access to the network. Worse yet, if voice and data share the same network, then both voice and data could be compromised. The best defense to this attack is not to use modems. However, if modems are necessary, then organizations must ensure that the passwords protecting the attached host are strong, preferably with the help of authentication mechanisms, such as RADIUS, one-time passwords, etc. Also, shutting modems off when not in use will reduce the risk posed by the modem. Users with modems that are connected to a network, especially the Internet, could be an easy target of attack. Sensitive information on the connected workstation could be stolen, or the workstation could become infected by malicious software, such as viruses, worms, or programs that allow an attacker to remotely control the workstation. To greatly reduce the risk of accessing a network remotely, workstations should be protected with personal firewalls, and antivirus software with current virus signatures. Security patches should also be installed as soon as possible. More importantly, users should learn and practice safe habits while connected to a network. For instance, do not open suspicious e-mail attachments, download software from trusted Web sites only, etc. 453
Figure 7.12. ISDN.
Integrated Services Digital Network (ISDN). Before the days of DSL and cable modems, users wanted remote access with higher bandwidth than dial-up. ISDN provides such bandwidth by using a set of protocols and specialized equipment (see Figure 7.12). ISDN uses two types of channels: the B channel (bearer) is used for voice and data (at 64 kbps) and the D channel (delta) is used for signaling (at 16 kbps) and can also be used for data. The D channels are used to establish, maintain, and tear down connections with a remote site. Voice and data traffic are sent on the B channel. Each B channel can support a separate call or can be multiplexed (B channel bonding) to combine the bandwidth into a single channel.
ISDN comes in two varieties, basic rate interface (BRI) and primary rate interface (PRI). BRI supports two B channels and one D channel. Each B channel will support separate 64-kbps sessions or can be multiplexed into one 128-kbps session. PRI is ISDN’s high-end offering. When all of the B channels are bonded, the ISDN connection provides the bandwidth of a leased line. In North America, PRI supports 23 B channels and 1 D channel that can support as many as 23 sessions or, combined, one 1.544-Mbps session (a full T1). In Europe and Australia, PRI supports 30 B channels and 1 D channel. The B channels can support 30 sessions or, bonded, one 2.048-Mbps channel (a full E1). The following are common ISDN devices: Terminal equipment 1 (TE1): A computer, fax machine, etc., that is ISDN ready. Terminal equipment 2 (TE2): A computer, fax machine, etc., that cannot be directly attached to an ISDN network. To connect to ISDN, a TE2 must connect to a terminal adapter.
Figure 7.13. Structure of a T1 frame.
Network termination 1 (NT1): Marks the end of the phone company’s network and the beginning of the customer’s network. Network termination 2 (NT2): Acts as a concentrator. NT2 devices are typically not used in the home. Terminal adapter (TA): Interfaces with TE2 devices to allow them to access an ISDN network. Some organizations use PRI ISDN as a low-cost backup for a leased line. Point-to-Point Lines. A point-to-point line connects two endpoints, most often over a WAN. In a wired WAN, point to point uses high-bandwidth fiber cable, but unlike FDDI, the traffic is dedicated to the endpoints. Point-to-point lines are an expensive option. T1, T3, etc. The T1 carrier is a popular WAN method in North America and Japan. Using time division multiplexing, T1 multiplexes 24 channels over copper cable. In a 193-bit frame, each of the channels transmits 8 bits (seven data bits and one control bit) in round-robin fashion. One bit for synchronization is appended to the beginning of the frame. A T1 frame is shown in Figure 7.13.
Eight thousand T1 frames are transmitted every second. Therefore, the transmission rate is 1.544 Mbps (8000 frames/sec × 193 bits/frame). Fractional T1 is available for organizations with a modest T1 budget. Customers may purchase fewer than 24 channels, which could be much less expensive than a full T1. Channels that are not purchased do not carry data. To meet the demand for more WAN bandwidth, multiple T1 channels are multiplexed into technologies with more throughput. In general, customers use T1 and T3. A summary of T channels is shown in Table 7.2. Fractional T3 is available at a reduced cost for organizations that do not need all T3 channels. As with fractional T1, fractional T3 channels that are not purchased do not carry data. E1, E3, etc. E-carrier, used in Europe, employs a concept similar to T-carrier. Using time division multiplexing, 32 channels take their turn transmitting 8 bits of data in a frame. E1 transmits 8000 frames per second (the same rate as T1). The throughput for E1 is therefore 2.048 Mbps.
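The T1 and E1 figures can be checked with simple arithmetic; the short calculation below simply restates the frame sizes and the 8000-frames-per-second rate given above.

    # T1: 24 channels x 8 bits + 1 framing bit = 193 bits per frame, 8000 frames per second.
    t1_bits_per_frame = 24 * 8 + 1
    t1_rate = t1_bits_per_frame * 8000          # 1,544,000 bits/s = 1.544 Mbps

    # E1: 32 channels x 8 bits = 256 bits per frame, also 8000 frames per second.
    e1_bits_per_frame = 32 * 8
    e1_rate = e1_bits_per_frame * 8000          # 2,048,000 bits/s = 2.048 Mbps

    print(t1_rate / 1e6, "Mbps")                # 1.544
    print(e1_rate / 1e6, "Mbps")                # 2.048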
Table 7.2. T-Carrier Bandwidth
Channel    Multiplex Ratio    Bandwidth (Mbps)
T1         1 T1               1.544
T2         4 × T1             6.312
T3         7 × T2             44.736
T4         6 × T3             274.176
Table 7.3. E-Carrier Bandwidth
Channel    Bandwidth (Mbps)
E1         2.048
E2         8.448
E3         34.368
E4         139.264
As with T-carrier, E1 channels are multiplexed into E-carrier technology with more bandwidth. Each successive E-carrier level contains four times as many channels as the previous one. Table 7.3 shows the bandwidth of E1 to E4. Customers typically use E1 and E3. In addition, fractional E-carrier lines are available for organizations that do not require the entire capacity of an E1 or E3 line.
SONET employs a master clock to tightly control the timing of transmissions and is designed to tolerate timing differences in network devices. User data can be inserted almost anywhere in the frame. Furthermore, frames are continually transmitted. When there is no data to transmit, SONET will transmit frames with dummy data. Like T- and E-carriers, 8000 frames are transmitted per second. However, a SONET frame is larger and more complex, as shown in Figure 7.14. A SONET frame is a 90 × 9 byte matrix (810 bytes) with overhead in the first three columns. The overhead includes information for network management and the pointer to the start of user data. The rest of the frame is 456
Figure 7.14. SONET frame.
Table 7.4. OC Bandwidth
OC Level    Bandwidth (Mbps)
OC-1        51.84
OC-3        155.52
OC-9        466.56
OC-12       622.08
OC-18       933.12
OC-24       1244.16
OC-36       1866.24
OC-48       2488.32
OC-192      9953.28
Digital Subscriber Lines (DSL). To meet the ever-growing demand by home users for more affordable bandwidth, telephone companies offer digital subscriber lines (DSLs) that use CAT-3 cables and the local loop. Fortunately for telephone companies, this technology requires relatively little change to their equipment.
The local loop, the weakest link of the PSTN, can support a relatively high transmission rate. Traditionally, all frequencies above 4 kHz are filtered to optimize the network for human speech. When the filter is removed from the line, the line has the capacity for frequencies as high as 1.1 MHz, which is ample for the desired throughput of DSL.
Figure 7.15. ADSL network.
There are several methods of implementing DSL, including: Asymmetric digital subscriber line (ADSL): Downstream transmission rates are much greater than upstream ones, typically 256 to 512 kbps downstream and 64 kbps upstream. Rate-adaptive DSL (RADSL): The upstream transmission rate is automatically tuned based on the quality of the line. Symmetric digital subscriber line (SDSL): Uses the same rates for upstream and downstream transmissions. Very high bit rate DSL (VDSL): Supports much higher transmission rates than other DSL technologies, such as 13 Mbps downstream and 2 Mbps upstream. Due to ADSL’s popularity, we will focus on it here. ADSL, like V.90 modems, dedicates more bandwidth for downstream transmission (from CO to user) than upstream, which matches the bandwidth requirements of most home users. For example, a user’s mail client requires very little bandwidth to ask a mail server if the user has new mail. On the other hand, downloading mail could require many more network resources. A typical ADSL network is shown in Figure 7.15. The network interface device (NID), which is usually installed on a customer’s outside wall, marks the end of the telephone company’s responsibility. The phone line is attached to the NID. Splitters are installed at both ends of the phone line to separate voice and data traffic. How does a home computer transmit at high speeds on a creaky old local loop over CAT-3 cable? The leading method is to use discrete multitone (DMT). The 1.1-MHz bandwidth of the local loop is subdivided into 256 channels. Voice is transmitted over one channel; 250 channels are allocated for ADSL, and 5 unused channels help isolate ADSL and voice. Because downstream transmission is favored over upstream, many more channels are allocated for downstream. All channels are modulated by an ADSL modem before transmitting over the local loop.
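The discrete multitone channel budget described above is easy to restate numerically. The downstream/upstream split shown below is an assumption made for illustration (the text only says that downstream gets many more channels than upstream); the rest of the numbers come straight from the paragraph.

    # DMT divides the ~1.1-MHz local loop into 256 narrow channels.
    total_bandwidth_hz = 1.1e6
    channels = 256
    channel_width_khz = total_bandwidth_hz / channels / 1e3
    print(round(channel_width_khz, 2), "kHz per channel")        # ~4.3 kHz

    voice_channels = 1
    guard_channels = 5            # unused channels isolating voice from data
    data_channels = channels - voice_channels - guard_channels   # 250 for ADSL

    # Hypothetical asymmetric split; actual allocations are provider and standard dependent.
    upstream_channels = 32
    downstream_channels = data_channels - upstream_channels
    print(data_channels, "data channels:", downstream_channels, "down /", upstream_channels, "up")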
At the CO, the ADSL channels are forwarded to a digital subscriber line access multiplexer (DSLAM), which demodulates the signal and sends the resulting bits to an ISP. There are two significant issues with all variations of DSL: • There is a limit to the length of the phone line between the CO and the customer. The precise limit depends on several factors, including the quality of the cable and transmission rates. In other words, the customer cannot be too far from the CO. • DSL allows users to be connected to the Internet for much longer time intervals. Certainly, this is very convenient for the user, but extended time exposed to the hostile Internet greatly increases the risk of being attacked. To mitigate this serious risk, it is imperative that the host has a firewall, vendor security patches are installed, and dangerous and unused protocols are disabled. Cable Modem. As with DSL, cable modems allow home users to enjoy high-speed Internet connectivity. Instead of sending data through the phone company, cable modems use their cable provider as an ISP. The user connects the PC’s Ethernet NIC to a cable modem, which is connected via coaxial cable to the cable provider’s network.
Most major cable providers supply cable modems that comply with Data-Over-Cable Service Interface Specifications (DOCSIS), which helps ensure compatibility. At a high level, when a cable modem is powered on, it is assigned upstream and downstream channels. Next, it establishes timing parameters by determining how far it is from the head end (the core of the cable network). The cable modem makes a Dynamic Host Configuration Protocol (DHCP) request to obtain an IP address. To help protect the cable provider from piracy and its users from their data being intercepted by other cable users, the modem and head end exchange cryptography keys. From that point forward, all traffic between the two ends is encrypted.39 Like DSL, cable modems make it practical for home users to remain connected to the Internet for an extended time, which exposes cable modem users to the same risks as DSL users. Cable modem users must take the same precautions as DSL users: ensure that PCs on the home network have a personal firewall, install vendor security patches, and disable dangerous and unused protocols. X.25. X.25 is a protocol from a very different era of networking. In the 1970s, when it was developed, users had dumb terminals (essentially a cathode ray tube monitor and keyboard) that were connected to a large computer. Also, networks were very unreliable, and a lot of resources had to be invested in error checking and correction. 459
Figure 7.16. Frame Relay network.
X.25 allows users and hosts to connect through a modem to remote hosts via a packet-switched network. As with all packet-switched networks, the user’s stream of data is subdivided into packets and forwarded through the X.25 network to the destination host. Although it may seem as if the user has a dedicated circuit over the WAN, actually, packets could take different paths along the way. Because networks were very unreliable when X.25 was developed, packets go through rigorous error checking, which adds much overhead — too much by today’s standards. Most organizations now opt for Frame Relay and ATM, instead of X.25, for packet switching. Frame Relay. Frame Relay is an economical alternative to circuit-switched networks and dedicated lines between networks that have significant idle time. Because Frame Relay uses packet-switching technology, organizations are charged for the bandwidth they use, which is less expensive than maintaining a dedicated line or the cost of a circuit that is based on the duration of the connection.
A Frame Relay network is shown in Figure 7.16. The heart of a Frame Relay network is the Frame Relay cloud of switches on the provider’s premises. All Frame Relay customers share the resources in the cloud, which are assumed to be reliable, and do not require the intense error checking and correcting of X.25. This significantly increases the throughput over X.25. Devices within the cloud are considered data circuit-terminating equipment (DCE).
Devices that connect to the Frame Relay cloud, which are generally customer owned and on the customer’s premises, are considered data terminal equipment (DTE). Communication between endpoints is connection oriented over permanent virtual circuits or switched virtual circuits. Organizations use a permanent virtual circuit when the connection between the DTEs will be active most of the time. For occasional connections, a switched virtual circuit is more cost-effective because the circuit is torn down when the communication is completed. Frame Relay provides a mechanism that guarantees a customer-specified throughput through the cloud, called the committed information rate (CIR). For example, if 10 kbps is specified, the provider will ensure that 10 kbps will be available. In addition, the provider will permit bursts of higher throughput if the resources are available in the cloud. Naturally, higher CIRs are more expensive.
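A simplified way to picture how a provider enforces a CIR is per-interval byte accounting: traffic up to CIR × interval is committed, a limited amount beyond that may be carried but marked discard eligible (DE), and anything further is dropped. The Python sketch below is a toy model of that idea, not the actual Frame Relay traffic-policing algorithm, and the committed and excess figures are invented for the example.

    def police_interval(frames, cir_bps=64_000, excess_bps=32_000, interval_s=1.0):
        """Classify each frame (size in bytes) within one measurement interval."""
        committed_budget = cir_bps * interval_s / 8      # bytes the provider guarantees
        excess_budget = excess_bps * interval_s / 8      # burst bytes carried if capacity allows
        sent = 0
        results = []
        for size in frames:
            sent += size
            if sent <= committed_budget:
                results.append("forward")                # within the CIR
            elif sent <= committed_budget + excess_budget:
                results.append("forward, DE set")        # burst: eligible for discard
            else:
                results.append("drop")                   # beyond the permitted burst
        return results

    print(police_interval([4000, 4000, 4000, 4000]))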
Asynchronous Transfer Mode (ATM). Asynchronous Transfer Mode (ATM) is a connection-oriented protocol designed to transmit data, voice, and video over the same network at very high speeds, such as 155 Mbps. This is facilitated by using small, fixed-length 53-byte cells for all ATM traffic. Another hallmark of ATM is the use of virtual circuits. Cells transferred between circuit endpoints use the same path. To initiate a circuit, a cell is sent to the destination. As this cell traverses the network, all devices in the cell’s path allocate necessary resources to prepare for the eventual transfer of data. As with IP, ATM does not guarantee the delivery of cells. Virtual circuits can be either permanent or switched. Switched virtual circuits are torn down after the connection is terminated. Permanent virtual circuits, on the other hand, remain active. Traffic engineering is an aspect of ATM. All virtual circuits are classified in one of the following categories: Constant bit rate (CBR): The circuit’s cells are transmitted at a constant rate. Variable bit rate (VBR): The circuit’s cells are transmitted within a specified range. This is often used for bursty traffic. Unspecified bit rate (UBR): The circuit’s cells receive bandwidth that has not been allocated by circuits in other categories. This is ideal for applications that are not interactive, such as file transfers. Available bit rate (ABR): The circuit’s throughput is adjusted based on feedback from monitoring the available network bandwidth.
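The fixed 53-byte cell format is what lets ATM switches run at high, predictable speeds: every cell is a 5-byte header plus exactly 48 bytes of payload, so a larger message is simply chopped into 48-byte pieces, with the last piece padded. The segmentation sketch below illustrates that; the header contents are a placeholder rather than real VPI/VCI fields.

    CELL_PAYLOAD = 48        # bytes of user data per ATM cell
    HEADER_LEN = 5           # bytes of header per cell (VPI/VCI, etc., not modeled here)

    def segment_into_cells(payload: bytes, header: bytes = b"\x00" * HEADER_LEN):
        """Split a payload into fixed 53-byte cells, padding the final cell."""
        cells = []
        for offset in range(0, len(payload), CELL_PAYLOAD):
            chunk = payload[offset:offset + CELL_PAYLOAD]
            chunk = chunk.ljust(CELL_PAYLOAD, b"\x00")      # pad the last, partial chunk
            cells.append(header + chunk)                    # every cell is exactly 53 bytes
        return cells

    cells = segment_into_cells(b"x" * 100)
    print(len(cells), "cells,", len(cells[0]), "bytes each")   # 3 cells, 53 bytes each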
Broadband Wireless. As mentioned in the DSL section, a customer cannot subscribe to DSL if his or her premises are too far from the CO. If using a cable modem or satellite is not an option, then these users cannot enjoy high-speed Internet access. To solve this and other problems, the IEEE has defined the IEEE 802.16 standard. The standard covers technology that allows users to connect to wireless base stations (access points) miles from where they are located and obtain access to a metropolitan area network (MAN). Base stations can be easily deployed on top of a building, which helps ensure complete coverage. The IEEE amended the standard with IEEE 802.16a, which addresses issues such as improved access when a base station and user are not in line of sight. The new technology promises many features. The channel sizes are flexible, which allows providers to comply with local broadcast regulations. Broadband wireless will support protocols and services such as ATM, voice, video, IP, etc. Finally, security is part of the standard, using AES to protect the integrity and confidentiality of wireless data and the Extensible Authentication Protocol for authentication.40
• The bandwidth limitation of a CAT-3 cable, designed for analog voice, is replaced by high-speed wireless. • The limit on the length of the CAT-3 cable, especially for provisioning DSL, is replaced with the ability to connect to a wireless base station from many miles away. Wireless local loops will also allow underdeveloped and rural areas to use broadband without the limitations of an analog voice telephone system. Wireless Optics. As an alternative to running fiber cables between buildings that are in line of sight, an organization can use lasers to transmit data. In principle, two laser transceivers are aimed at each other to communicate at speeds comparable to SONET. Wireless optics has advantages over microwave because wireless optics transmissions are harder to intercept, and a license is not required for deployment.
Unfortunately, wireless optics can be unreliable during inclement weather, especially heavy fog, which can outweigh its benefits until the technology is improved. Metropolitan Area Network (MAN). Metropolitan area networks encompass the area of a large city. For example, a business may connect offices scattered throughout a city with FDDI or SONET.
Telecommunications and Network Security Global Area Network (GAN). Internet. The Internet, a global network of interconnected networks, is changing life on Earth. People from anywhere on the globe can share information almost instantaneously. With a few keystrokes, a document can be sent from one continent to another just as easily and fast as sending it next door. Regrettably, there are people and governments that abuse the Internet for destructive purposes, such as identity theft, espionage, fraud, etc. Governments, organizations, and people must understand the risk and threats of sharing information on the Internet and take appropriate measures to reduce such risks to an acceptable level. Intranet. An intranet is a network of interconnected networks within an organization, which allows information to be shared to make the organization more effective. Hosts can replicate information so that such information is easier to access. During a project, staff in a global company can exchange documents, thereby working together almost as if they were in the same office. As with the Internet, the ease with which information can be shared comes with the responsibility to protect it from harm. As law enforcement organizations often state, the threat from within a company is greater than from outside. Extranet. To be more competitive, companies share information with their business partners. This exchange allows companies to make better decisions. For example, to tightly manage inventory, it is now common practice for manufacturing companies to share information with their suppliers and with the companies to which they supply.
It is not sufficient for executives of partner companies to chat during a round of golf. Companies must share large quantities of information, often in an automated fashion. An effective way to accomplish this is to implement an extranet between the two companies. Typically, one company will grant the other controlled access to an isolated segment of its network to exchange information. Granting an external organization access to a network comes with significant risk. Both companies have to be certain that the controls, both technical and nontechnical (e.g., contracts), effectively minimize the risk of unauthorized access to information. This includes protecting a business partner from accessing a third-party’s information. Companies that share an extranet do not have control of each other’s security posture. Who knows what can happen to a company if it shares an extranet with an organization whose network has been compromised, has poor change control processes, etc. To mitigate this, some companies demand that certain security controls are in place before granting access to an extranet.
Technology and Implementation Routers. Routers forward packets to other networks. They read the destination layer 3 address (e.g., the destination IP address) in each received packet and, based on the router's view of the network, determine the next device on the network (the next hop) to which the packet should be sent. If the destination address is not on a network that is directly connected to the router, it will send the packet to another router.
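To make the next-hop decision concrete, the fragment below is a minimal, illustrative sketch (not production routing code) of a longest-prefix-match lookup, written in Python with its standard ipaddress module; the table entries, addresses, and function name are invented for the example.

    import ipaddress

    # Hypothetical routing table: (destination network, next hop).
    # The 0.0.0.0/0 entry is the default route, used when nothing more specific matches.
    ROUTING_TABLE = [
        (ipaddress.ip_network("192.168.10.0/24"), "192.168.10.1"),   # directly connected LAN
        (ipaddress.ip_network("10.0.0.0/8"), "192.168.1.254"),       # internal networks
        (ipaddress.ip_network("0.0.0.0/0"), "203.0.113.1"),          # default route toward the ISP
    ]

    def next_hop(destination: str) -> str:
        """Return the next hop for a destination address; the longest matching prefix wins."""
        address = ipaddress.ip_address(destination)
        matches = [(network, hop) for network, hop in ROUTING_TABLE if address in network]
        best_network, best_hop = max(matches, key=lambda entry: entry[0].prefixlen)
        return best_hop

    print(next_hop("10.20.30.40"))    # 192.168.1.254
    print(next_hop("198.51.100.7"))   # 203.0.113.1 (falls through to the default route)

Real routers perform this lookup in hardware or with specialized data structures, but the selection rule is the same: the most specific matching route determines the next hop.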
Routers can be used to interconnect different technologies. For example, connecting Token Ring and Ethernet networks to the same router would allow IP packets from the Ethernet network to be forwarded to the Token Ring network. Firewalls. Firewalls are devices that enforce administrative security policies by filtering incoming traffic based on a set of rules. Often firewalls are thought of only as protectors of an Internet gateway. While a firewall should be placed at Internet gateways, the considerations for where one might be appropriate are more general: firewalls should be placed between entities that have different trust domains. For instance, if an engineering department LAN segment is on the same network as general LAN users, there would be two trust domains: the general LAN users and the engineers with the organization's intellectual property. Installing a firewall where the two trust domains meet would help protect the intellectual property from the general LAN user population, as shown in Figure 7.17.
Firewalls will not be effective right out of the box. Firewall rules must be defined correctly so that they do not inadvertently grant unauthorized access. As with all hosts on a network, administrators must install patches on the firewall and disable all unnecessary services. Also, firewalls offer limited protection against vulnerabilities caused by software on other hosts. For example, a firewall will not prevent an attacker from manipulating a database to disclose confidential information. Filtering. As mentioned previously, firewalls filter traffic based on a rule set. Each rule instructs the firewall to block or forward a packet based on one or more conditions. For each incoming packet, the firewall will look through its rule set for a rule whose conditions apply to that packet, and block or forward the packet as specified in that rule. Below are two important conditions used to determine if a packet should be filtered. BY ADDRESS. Firewalls will often use the packet's source or destination address, or both, to determine if the packet should be filtered. For example, in the case shown in Figure 7.17, to grant a trusted user access to the engineering LAN segment, a rule can be defined to forward a packet whose source address is that of a trusted user's host on the general LAN. BY SERVICE. Packets can also be filtered by service. The firewall inspects the service the packet is using (if the packet is part of a TCP or UDP exchange, the
Figure 7.17. Firewall between two domains of trust. (The figure shows a firewall separating the engineering LAN, the engineering department's domain of trust, from the general LAN domain of trust.)
service is the destination port number) to determine if the packet should be filtered. For example, firewalls will often have a rule to filter the Finger service to prevent an attacker from using it to gather information about a host. Address and service are often combined in rules. If the engineering department wanted to grant anyone on the LAN access to its Web server, a rule could be defined to forward packets whose destination address is the Web server's and the service is HTTP (TCP port 80). Network Address Translation (NAT). Firewalls can change the source address of each outgoing (from trusted to untrusted network) packet to a different address. This has several applications, most notably to allow hosts with RFC 1918 addresses41 access to the Internet by changing their nonroutable address to one that is routable on the Internet.
Anonymity is another reason to use NAT. Many organizations do not want to advertise their IP addresses to an untrusted host and thereby unnecessarily give away information about the network. They would rather hide the entire network behind translated addresses. Port Address Translation (PAT). An extension to NAT is to translate all addresses to one routable IP address and translate the source port number in each packet to a unique value. The port translation allows the firewall to keep track of the multiple sessions that are using PAT.
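A rough way to picture PAT is as a table that maps each internal (address, port) pair to a unique source port on the single public address. The Python sketch below is a simplified illustration only, with invented addresses and names; real implementations also track the protocol, timeouts, and connection state.

    import itertools

    PUBLIC_IP = "203.0.113.10"                 # the one routable address (example value)
    _port_allocator = itertools.count(20000)   # naive allocator for translated source ports

    nat_table = {}                             # (private_ip, private_port) -> translated public port

    def translate_outbound(private_ip: str, private_port: int):
        """Rewrite the source of an outgoing packet and remember the mapping."""
        key = (private_ip, private_port)
        if key not in nat_table:
            nat_table[key] = next(_port_allocator)
        return PUBLIC_IP, nat_table[key]

    def translate_inbound(public_port: int):
        """Map a returning packet back to the internal host that owns the session."""
        for (private_ip, private_port), port in nat_table.items():
            if port == public_port:
                return private_ip, private_port
        raise LookupError("no active translation for this port")

    print(translate_outbound("192.168.1.20", 51515))   # ('203.0.113.10', 20000)
    print(translate_outbound("192.168.1.21", 51515))   # ('203.0.113.10', 20001)

Because the translated port is unique per session, the firewall can unambiguously return each reply to the correct internal host even though every session shares one public address.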
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® Static Packet Filtering. When a firewall uses static packet filtering, it examines each packet without regard to the packet’s context in a session. Packets are examined against static criteria, for example, blocking all packets with a port number of 79 (finger). Because of its simplicity, static packet filtering requires very little overhead, but has a significant disadvantage. Static rules cannot be temporarily changed by the firewall to accommodate legitimate traffic. If a protocol requires a port to be temporarily opened, administrators have to choose between permanently opening the port and disallowing the protocol. Stateful Inspection or Dynamic Packet Filtering. Stateful inspection examines each packet in the context of a session, which allows it to make dynamic adjustments to the rules to accommodate legitimate traffic and block malicious traffic that would appear benign to a static filter. Consider FTP. A user connects to an FTP server on TCP port 21 and then tells the FTP server on which port to transfer files. The port can be any TCP port above 1023. So, if the FTP client tells the server to transfer files on TCP port 1067, the server will attempt to open a connection to the client on that port. A stateful inspection firewall would watch the interaction between the two hosts, and even though the required connection is not permitted in the rule set, it would allow the connection to occur because it is part of FTP.
Static packet filtering, in contrast, would block the FTP server's attempt to connect to the client on TCP port 1067 unless a static rule was already in place. In fact, because the client could instruct the FTP server to transfer files on any port above 1023, static rules would have to be in place to permit access to more than 64,000 ports (see Section Transmission Control Protocol). Proxies. A proxy firewall communicates with untrusted hosts on behalf of the hosts that it protects. It forwards traffic from trusted hosts, creating the illusion to the untrusted server that the traffic originated from the proxy firewall, thus hiding the trusted hosts from potential attackers. A typical interaction with a server through a proxy is shown in Figure 7.18.
To the user, it appears that he or she is communicating directly with the untrusted server. Proxy servers are often placed at Internet gateways to hide the internal network behind one IP address and to prevent direct communication between internal and external hosts. CIRCUIT-LEVEL PROXY. A circuit-level proxy creates a conduit through which a trusted host can communicate with an untrusted one. This type of proxy does not inspect any of the traffic that it forwards, so it adds very little overhead to the communication between the user and the untrusted server. The lack of application awareness also allows circuit-level proxies to forward
Figure 7.18. Accessing a server through a proxy. The figure shows a user, a proxy server, and an untrusted server, with four numbered steps: (1) the user's request goes to the proxy server; (2) the proxy server forwards the request to the untrusted host (to the untrusted host, the request appears to originate from the proxy server); (3) the untrusted host responds to the proxy server; (4) the proxy server forwards the response to the user.
any TCP or UDP port. The disadvantage is that traffic will not be analyzed for malicious content. APPLICATION-LEVEL PROXY. An application-level proxy relays the traffic from a trusted host running a specific application to an untrusted server. Each application-level proxy can support only one application, such as FTP, HTTP, etc. Therefore, a separate proxy is required for each application. The most significant advantage of application-level proxies is that they analyze the traffic they forward for malicious packets and can restrict some of the application's functionality. Application-level proxies add overhead to using the application because they scrutinize the traffic they forward.
Web proxy servers are a very popular example of application-level proxies. Many organizations place one at their Internet gateway and configure their users’ Web browsers to use the Web proxy whenever they browse an external Web server (other controls are implemented to prevent users from bypassing the proxy server). The proxies typically include required user authentication, inspection of URLs to ensure that users do not browse inappropriate sites, logging, and caching of popular Web pages. Personal Firewalls. The firewalls that we have discussed so far protect a network or a segment of one. But what protects users from hosts that are behind a firewall? For example, the firewall in Figure 7.17 does not protect a user on the engineering LAN segment from someone on the same segment.
Following the principle of security in depth, personal firewalls are installed on workstations, which protect the user from all hosts on the network. It is critical for home users with DSL or cable modem access to the Internet to have a personal firewall installed on every PC, especially if they do not have a firewall protecting their network.
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® Because personal firewalls are employed by general users, they are easy to install and configure. Firewall rules are created with a nontechnical interface that does not require expertise in networking or security. Although they do not provide the flexibility of the best enterprise firewalls, they provide all of the essential functions of a firewall, such as stateful inspection, logging, etc. End Systems. In a way, this section should be called beginning systems because most of the topics here are the reason networks exist — to allow these systems to exchange information. Because the information on these devices is the crown jewel of the network, there are many opportunities for harm to come to them. Much could be written about protecting some of the following, but we will present the highlights. Operating Systems. Have you ever thought about what is required to execute several applications simultaneously, or how data is physically located on disks, or how to share memory among programs when the total demand for memory is greater than the memory on the computer? Probably not — nor should you. These are a sample of the behind-the-scenes details that are tended to by operating systems.
An operating system is the layer of software that interfaces between programs42 and the hardware. It services requests for peripherals on behalf of the users and interfaces with the central processing unit. The operating system responds to numerous potentially simultaneous events, such as a mouse click, input from the network, and a user authenticating, and juggles the demands for the computer's resources made by applications and by its own processes. Operating systems also ensure that everything on the computer runs smoothly. Anything that threatens the integrity of the system or a process is identified and handled, which could include routinely asking a remote system to retransmit a packet with an error or attempting to gracefully shut down after a catastrophic event, such as a fatal software error or loss of power. Security is an integral and critical function of the operating system. Operating systems implement access control lists to help ensure that users only access resources to which they are authorized. Users have to prove their identity to the operating system through authentication. Operating systems also provide mechanisms to improve the availability of information, such as redundant array of independent disks (RAID) and computer clusters. Regardless of the applications that are running on a computer, the operating system will always be a target of attack because a compromised operating system will allow an intruder to control the computer. Therefore,
owners should remove unnecessary targets of attack, such as dangerous or unused services, and install security patches as soon as feasible. Servers and Mainframes. Servers and mainframes are repositories of information, much of which is critical to an organization's mission, its employees, and its clients. These computers contain information that supports critical business processes, customer databases, and intellectual property. Further, information on these hosts is protected by an ever-growing number of international, national, and local security and privacy regulations.
There is much risk posed to the information due to its importance and the accessibility of servers and mainframes on a network. Often people think of an intruder as a nameless and faceless person breaking into a server or mainframe from an external machine. Yet, it is internal personnel, many of whom abuse the trust of their employers, who most often gain unauthorized access. Also, many do not realize that human error can be just as devastating as malicious activity. Due to the importance of the information on servers and mainframes, it is vital that organizations minimize the risk posed to the information. Much in this book covers this, but here are a few important controls. Owners should grant only the access that personnel require to perform their duties, log such access sufficiently to support forensic investigation, and regularly review the logs. Because a failure of one application could increase the risk to other applications on the same host, multiple applications should not run on the same physical or virtual machine. For instance, a compromise of a low-risk application (which may not be rigorously protected) could allow an intruder access to a high-risk application on the same server. Or, a failure of a development instance of an application could corrupt production, especially if the applications share the same database. Also, sensitive data should be encrypted with strong encryption algorithms. Likewise, risks from remote access should be reduced. Use controls discussed previously, such as firewalls, to help ensure that only authorized clients may access the servers and mainframes. Verify, perhaps with digital certificates, that clients are authorized to access the server and that a middleware server is not an attacker's machine masquerading as middleware. Finally, when appropriate, encrypt network traffic between clients and servers/mainframes. Workstations. Workstations are computers that users physically log into, and they are typically used as clients. As with all network devices, they are potential targets of attack. Some of the vulnerabilities are similar to those of servers, for example, unpatched operating systems and unnecessary network services. In fact, some workstation operating systems provide server functionality, such as a lightweight FTP or Web server. This allows workstation
users, who probably are not trained in system administration, to install server functionality that is poorly configured and never patched. Workstations are less expensive than servers. While this allows employees to have PCs at their desks, the low cost has a downside. Business units can purchase new workstations and attach them to the network without IT's knowledge or approval, and without installing security patches, personal firewalls, and antivirus software. As a result, these poorly installed hosts threaten the entire network. Workstation user behavior can also threaten a network. Some users forget that they do not own their workstation and freely engage in dangerous behavior, such as downloading from untrusted sites, chatting over instant messaging, or leaving their office with their workstation unprotected. It is also common for users to underestimate the sensitivity of the information on their workstation, and therefore the importance of protecting it. Many users do not realize that if their workstation is compromised, it can be used to attack the network, and it would appear as if they were the perpetrator. To mitigate these risks, IT should always administer workstations, instead of the business units, because IT is familiar with current leading practices and vulnerabilities. Organizations should consider not granting privileged rights to workstation users. As with other end systems, software updates must be installed as soon as possible, unnecessary and dangerous services must be disabled, and antivirus software and personal firewalls should be installed. Organizations can remedy behavioral problems with policies and security training and awareness. Managers and users should be shown the dangers of administering their own workstations and attaching them to the network, and how inappropriate behavior, such as using instant messaging, can threaten the network. In addition, appropriate behavior must be enforced by policies that are visibly supported by executives. Notebooks. Notebooks (also known as laptops) share the same problems as workstations. However, there are additional issues because of the notebook's portability. From a physical standpoint, the notebook can be easily stolen or lost, potentially disclosing sensitive information that is on the computer. Furthermore, the notebook is not protected by the organization's firewall when a user attaches it to another network, such as a home network or a wireless hot spot. There is a significant danger that the user could unintentionally download malicious code to his or her notebook when attached to another network, and unleash it on the organization when the notebook returns to the office.
The above issues can be partly mitigated with technical controls, such as ensuring that all notebooks have personal firewalls and antivirus software
with current signatures. Also, sensitive files and disks should be encrypted in case the notebook is lost or stolen. Notebook users must be educated about the additional responsibilities that come with using a notebook, and must understand how to use their notebook on a different network and how to physically protect it from thieves. Tablet PCs, notebooks that use a specialized stylus for handwritten input, have the same issues as notebooks. Because Tablet PCs are often used to take meeting notes, a lost or stolen Tablet PC could cause very sensitive information to be disclosed. Personal Digital Assistants. Personal digital assistants can store confidential and private information. Yet many models do not include sufficient controls, such as strong encryption, to protect their contents in the event they are stolen or lost.
A thief can easily steal information by downloading it to a PDA from an unprotected computer. Also, a PDA with a camera attachment can be used to take unauthorized photographs. Encrypting data on a PDA will help protect the information if the device is lost or stolen. To help reduce the risk of confidential information walking out of the office on a PDA, computer ports, where PDAs are attached to computers to download information, should be physically and logically protected. The most effective defense is for users, through awareness, to understand the threat of PDAs and how to stop it. Smart Phones. Smart phones are a combination of PDAs and cell phones. Although they offer the convenience of both devices, they also have the security issues of both. In addition to the issues discussed in the previous section, a smart phone’s Subscriber Identity Module (SIM) can be cloned and used to steal personal information that is on the card. A thief can install the cloned SIM in a phone and make calls that will be charged to the owner of the original SIM.
A thief can use the cellular capabilities of a smart phone to e-mail stolen information while still on his or her victim's premises. As with PDAs, the best defenses against the above issues are encrypting sensitive information on the smart phone, protecting computer ports that can be used to download information, and educating users to recognize and stop theft carried out with a smart phone. Internet Protocol (IP). Internet protocol (IP) is responsible for sending packets from the source host to the destination host. Because it is an unreliable protocol, it does not guarantee that packets arrive error-free or in the correct order; that task is left to protocols at higher layers.
Table 7.5. Network Classes

Class   Range of First Octet   Number of Octets for Network Number   Number of Hosts in Network
A       1-127                  1                                     16,777,216
B       128-191                2                                     65,536
C       192-223                3                                     256
D       224-239                -                                     Multicast
E       240-255                -                                     Reserved
IP will subdivide packets into fragments when a packet is too large for a network. Hosts are distinguished by the IP addresses of their network interfaces. The address is expressed as four octets separated by a dot (.), for example, 216.12.146.140. Each octet may have a value between 0 and 255. However, 0 and 255 are not used for hosts. The latter is used for broadcast addresses, and the former's meaning depends on the context in which it is used. Each address is subdivided into two parts: the network number and the host. The network number, assigned by an external organization, such as the Internet Corporation for Assigned Names and Numbers (ICANN), represents the organization's network. The host part identifies the network interface within that network. Originally, the part of the address that represented the network number depended on the network's class. As shown in Table 7.5, a Class A network used the leftmost octet as the network number, Class B used the leftmost two octets, etc. The part of the address that is not used as the network number is used to specify the host. For example, the address 216.12.146.140 is a Class C address. Therefore, the network is 216.12.146 and the host within the network is 140. The Class A network 127.0.0.0 is reserved for a computer's loopback address; usually the address 127.0.0.1 is used. The explosion of Internet utilization in the 1990s caused a shortage of unallocated IP addresses. To help remedy the problem, classless interdomain routing (CIDR) was implemented. CIDR does not require that new addresses be allocated according to the size of a network class. Instead, addresses are allocated in contiguous blocks from the pool of unused addresses. To ease network administration, networks are typically subdivided into subnets. Because subnets cannot be distinguished with the addressing scheme discussed so far, a separate mechanism, the subnet mask, is used
to define the part of the address that is used for the subnet. Bits in the subnet mask are 1 when the corresponding bits in the address are used for the subnet. The remaining bits in the mask are 0. For example, if the leftmost three octets (24 bits) are used to distinguish subnets, the subnet mask is 11111111 11111111 11111111 00000000. A string of 32 1s and 0s is very unwieldy, so the mask is usually converted to dotted decimal: 255.255.255.0. Alternatively, the mask is expressed as a slash (/) followed by the number of 1s in the mask. The above mask would be /24.
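The network/host split and the /24 mask notation described above can be checked with a few lines of Python's standard ipaddress module; the sketch below simply restates the example address from the text along with an invented CIDR block.

    import ipaddress

    # The example address from the text, interpreted with a /24 (255.255.255.0) mask.
    iface = ipaddress.ip_interface("216.12.146.140/24")

    print(iface.network)             # 216.12.146.0/24 -> the network portion
    print(iface.network.netmask)     # 255.255.255.0   -> the same mask in dotted decimal
    print(int(iface.ip) & 0xFF)      # 140             -> the host portion within the /24

    # CIDR allocates contiguous blocks without regard to the old class boundaries;
    # for example, a /22 spans the equivalent of four former Class C networks.
    block = ipaddress.ip_network("198.51.100.0/22")
    print(block.num_addresses)       # 1024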
IPv6. After the explosion of Internet usage in the mid-1990s, IP began to experience serious growing pains. It was obvious that the phenomenal usage of the Internet was stretching the protocol to its limit. The most obvious problems were a shortage of unallocated IP addresses and serious shortcomings in security.
IPv6 is a modernization of the current IP, IPv4 (version 5 was an experimental real-time streaming protocol), that includes:
• A much larger address field: IPv6 addresses are 128 bits, which supports 2^128 hosts. Suffice it to say that we will not run out of addresses. Computing how many hosts will be supported by IPv6 and comparing the result to some other large constant will be left as an exercise for the student.
• Improved security: As we will discuss below, IPSec must be implemented in IPv6. This will help ensure the integrity and confidentiality of IP packets, and allow communicating partners to authenticate with each other.
• A more concise IP packet header: Hosts will require less time to process each packet, which will result in increased throughput.
• Improved quality of service: This will help services obtain an appropriate share of a network's bandwidth.
The slow process of converting to IPv6 has already begun. Public IPv6 networks, such as 6Net43 and 6Bone,44 are accepting additional networks that wish to connect to their IPv6 network. Because there are always stragglers, it will probably take a long time for every network to convert to the new protocol. But if recent history is an indicator, the vast majority of networks will convert within a relatively small interval of time. Risks and Attacks. IP FRAGMENTATION ATTACKS. Teardrop — In this attack, IP packet fragments are constructed so that the target host calculates a negative fragment length when it attempts to reconstruct the packet. If the target host's IP stack does not ensure that fragment lengths are reasonable, the host could crash or become unstable.
This problem is fixed with a vendor patch.
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® Overlapping Fragment Attack — Overlapping fragment attacks are used to subvert packet filters that only inspect the first fragment of a fragmented packet. The technique involves sending a harmless first fragment, which will satisfy the packet filter. Other packets follow that overwrite the first fragment with malicious data, thus resulting in a harmful packet bypassing the packet filter and being accepted by the victim host. A solution to this problem is for TCP/IP stacks not to allow fragments to overwrite each other. IP ADDRESS SPOOFING. Packets are sent with a bogus source address so that the victim will send a response to a different host. Spoofed addresses can be used to abuse the three-way handshake that is required to start a TCP session. Under normal circumstances, a host offers to initiate a session with a remote host by sending a packet with the SYN option. The remote host responds with a packet with the SYN and ACK options. The handshake is completed when the initiating host responds with a packet with the ACK option.
An attacker can launch a denial-of-service attack by sending the initial packet with the SYN option with a source address of a host that does not exist. The victim will respond to the forged source address by sending a packet with the SYN and ACK options, and then wait for the final packet to complete the handshake. Of course, that packet will never arrive because the victim sent the packet to a host that does not exist. If the attacker sends a storm of packets with spoofed addresses, the victim may reach the limit of uncompleted (half-open) three-way handshakes and refuse other legitimate network connections. The above scenario takes advantage of a protocol flaw. To mitigate the risk of a successful attack, vendors have released patches that reduce the likelihood of the limit of uncompleted handshakes being reached. In addition, security devices, such as firewalls, can block packets that arrive from an external interface with a source address from an internal network. SOURCE ROUTING EXPLOITATION. Instead of only permitting routers to determine the path a packet takes to its destination, IP allows the sender to explicitly specify the path. An attacker can abuse source routing so that the packet will be forwarded between network interfaces on a multihomed computer that is configured not to forward packets. This could allow an external attacker access to an internal network.
Source routing is specified by the sender of an IP datagram, whereas the routing path would normally be left to the router to decide. The best solution is to disable source routing on hosts and to block source-routed packets.
SMURF AND FRAGGLE ATTACKS. Both attacks use broadcasts to create denial-
of-service attacks. • A Smurf attack misuses the ICMP echo request to create denial-ofservice attacks. In a Smurf attack, the intruder sends an ICMP echo request with a spoofed source address of the victim. The packet is sent to a network’s broadcast address, which forwards the packet to every host on the network. Because the ICMP packet contains the victim’s host as the source address, the victim will be overwhelmed by the ICMP echo replies, causing a denial-of-service attack. • The Fraggle attack uses UDP instead of ICMP. The attacker sends a UDP packet on port 7 with a spoofed source address of the victim. Like the Smurf attack, the packet is sent to a network’s broadcast address, which will forward the packet to all of the hosts on the network. The victim host will be overwhelmed by the responses from the network. Virtual Private Network (VPN). A VPN is an encrypted tunnel between two hosts that allows them to securely communicate over an untrusted network (e.g., the Internet). Remote users employ VPNs to access their organization’s network, and depending on the VPN’s implementation, they may have most of the same resources available to them as if they were physically at the office. As an alternative to expensive dedicated point-topoint connections, organizations use gateway-to-gateway VPNs to securely transmit information over the Internet between sites or even with business partners. IPSec Authentication and Confidentiality for VPNs. IP Security (IPSec) is a suite of protocols for communicating securely with IP by providing mechanisms for authenticating and encryption. Implementation of IPSec is mandatory in IPv6, and many organizations are using it over IPv4. Further, IPSec can be implemented in two modes, one that is appropriate for end-toend protection and one that safeguards traffic between networks.
Standard IPSec authenticates hosts to each other, not individual users. If an organization requires users to authenticate, it must employ a nonstandard proprietary IPSec implementation or use IPSec over L2TP (Layer 2 Tunneling Protocol). The latter approach uses L2TP to authenticate the users and encapsulates the IPSec packets within L2TP. Because IPSec interprets the change of IP address within packet headers as an attack, NAT does not work well with IPSec. To resolve the incompatibility of the two protocols, NAT-Traversal (a.k.a. NAT-T) encapsulates IPSec within UDP datagrams on port 4500 (see RFC 3948 for details).45 AUTHENTICATION HEADER (AH). The authentication header is used to prove the identity of the sender and ensure that the transmitted data has not
been tampered with. Before each packet (headers + data) is transmitted, a hash value of the packet's contents (except for the fields that are expected to change while the packet is routed), computed with a shared secret, is inserted in the last field of the AH. The endpoints negotiate which hashing algorithm to use and the shared secret when they establish their security association. To help thwart replay attacks (in which a legitimate session is retransmitted to gain unauthorized access), each packet that is transmitted during a security association has a sequence number, which is stored in the AH. In transport mode, the AH is shimmed between the packet's IP and TCP headers. The AH helps ensure integrity, not confidentiality. Encryption is implemented with the encapsulating security payload. ENCAPSULATING SECURITY PAYLOAD (ESP). The encapsulating security payload encrypts IP packets and ensures their integrity. Both services are optional; however, at least one must be used. ESP contains four sections:
ESP header: Contains information showing which security association to use and the packet sequence number. Like the AH, the ESP sequences every packet to thwart replay attacks. ESP payload: The payload contains the encrypted part of the packet. If the encryption algorithm requires an initialization vector (IV), it is included with the payload. The endpoints negotiate which encryption to use when the security association is established. Because packets must be encrypted with as little overhead as possible, ESP typically uses a symmetric encryption algorithm.46 ESP trailer: May include padding (filler bytes) if required by the encryption algorithm or to align fields. Authentication: If authentication is used, this field contains the integrity check value (hash) of the ESP packet. As with the AH, the authentication algorithm is negotiated when the endpoints establish their security association. SECURITY ASSOCIATIONS. A security association (SA) defines the mechanisms that an endpoint will use to communicate with its partner. All SAs cover transmissions in one direction only. A second SA must be defined for twoway communication. Mechanisms that are defined in the SA include the encryption and authentication algorithms, and whether to use the AH or ESP protocol.
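Conceptually, the AH integrity check described above is a keyed hash over the unchanging parts of the packet plus the anti-replay sequence number. The fragment below is only a rough illustration of that idea using HMAC-SHA-256; it is not the IPSec wire format, and the key, packet bytes, and function name are invented for the example.

    import hashlib
    import hmac

    SHARED_SECRET = b"negotiated-when-the-SA-is-established"   # placeholder value

    def ah_style_icv(immutable_packet_bytes: bytes, sequence_number: int) -> bytes:
        """Integrity check value over the packet contents and the anti-replay sequence number."""
        data = sequence_number.to_bytes(4, "big") + immutable_packet_bytes
        return hmac.new(SHARED_SECRET, data, hashlib.sha256).digest()

    packet = b"IP header (mutable fields zeroed) + TCP header + data"
    icv = ah_style_icv(packet, sequence_number=42)

    # The receiver recomputes the value with the shared secret and compares the two.
    assert hmac.compare_digest(icv, ah_style_icv(packet, 42))

If an attacker alters the packet or replays it with an old sequence number, the recomputed value no longer matches and the packet is discarded.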
Deferring the mechanisms to the SA, as opposed to specifying them in the protocol, allows the communicating partners to use the appropriate mechanisms based on risk. TRANSPORT MODE AND TUNNEL MODE. Endpoints communicate with IPSec using either transport mode or tunnel mode.
In transport mode, the IP payload is protected. This mode is mostly used for end-to-end protection, for example, between client and server. In tunnel mode, the IP payload and its IP header are protected. The entire protected IP packet becomes the payload of a new IP packet with a new header. Tunnel mode is often used between networks, such as with firewall-to-firewall VPNs. INTERNET KEY EXCHANGE (IKE). Internet key exchange allows communicating partners to prove their identity to each other and establish a secure communication channel. IKE uses two phases:
Phase 1: In this phase, the partners authenticate with each other, using one of the following: Shared secret: A key that is exchanged by humans via telephone, fax, encrypted e-mail, etc. Public key encryption: Digital certificates are exchanged. Revised mode of public key encryption: To reduce the overhead of public key encryption, a nonce is encrypted with the communicating partner’s public key, and the peer’s identity is encrypted with symmetric encryption using the nonce as the key. Next, IKE establishes a temporary security association and secure tunnel to protect the rest of the key exchange. Phase 2: The peers’ security associations are established, using the secure tunnel and temporary SA created at the end of phase 1. Secure Shell (SSH). Users often want to log on to a remote computer. Unfortunately, most early implementations to meet that need were designed for a trusted network. Protocols/programs, such as TELNET, RSH, and rlogin, transmit unencrypted over the network, which allows traffic to be easily intercepted.
Secure shell (SSH) was designed as an alternative to the above insecure protocols and allows users to securely access resources on remote computers over an encrypted tunnel. SSH's services include remote log-on, file transfer, and command execution. It also supports port forwarding, which redirects other protocols through an encrypted SSH tunnel. Many users protect the traffic of less secure protocols, such as X Windows and VNC (virtual network computing), by forwarding it through an SSH tunnel. The SSH tunnel protects the integrity of communication, preventing session hijacking and other man-in-the-middle attacks. Another advantage of SSH over its predecessors is that it supports strong authentication. There are several alternatives for SSH clients to authenticate to an SSH server, including passwords and digital certificates. Keep in mind that authenticating with a password is still a significant improvement over the other protocols because the password is transmitted encrypted.
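As a small illustration of SSH's remote command execution, the sketch below uses the third-party paramiko library (an assumption; it is not part of the Python standard library), with an invented host name, user, and key path. OpenSSH users achieve the port forwarding mentioned above with the client's -L option, which forwards a local port through the encrypted tunnel to a port on or behind the remote host.

    import paramiko   # third-party SSH implementation; assumed to be installed

    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())   # demo only; verify host keys in practice

    # Authenticate with a private key rather than a password where possible.
    client.connect("host.example.com", username="alice", key_filename="/home/alice/.ssh/id_rsa")

    stdin, stdout, stderr = client.exec_command("uname -a")        # runs over the encrypted channel
    print(stdout.read().decode())
    client.close()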
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® There are two incompatible versions of the protocol, SSH-1 and SSH-2, though many servers support both. SSH-2 has improved integrity checks (SSH-1 is vulnerable to an insertion attack due to weak CRC-32 integrity checking) and supports local extensions and additional types of digital certificates, such as Open PGP.47 SSH was originally designed for UNIX, but there are now implementations for other operating systems, including Windows, Macintosh, and OpenVMS. SOCKS. SOCKS is a popular circuit proxy server with several commercial and freeware implementations. The heart of SOCKSv5 (the current version) is RFC 1928, which does not require that developers include encryption of traffic in their implementations.
Users employ the SOCKS client to access a remote server. The client initiates a connection to the SOCKS proxy server, which accesses the remote server on behalf of the user. If the implementation supports encryption, then the server can act as a VPN, protecting the confidentiality of the traffic between the SOCKS and remote servers. Because SOCKS is concerned with maintaining a circuit, it can be used with almost any application. A key advantage of SOCKS and SSL VPNs is the possibility to use proxy servers. This is a feature most other VPNs are lacking. A SOCKS server may require that a user authenticates before providing services. SSL/TLS VPNs. SSL VPNs are another approach to remote access that is gaining momentum. Instead of building a VPN around the IPSec and the network layer, SSL VPNs leverage SSL/TLS (see Section Transport Layer Security) to create a tunnel back to the home office. Remote users employ a Web browser to access applications that are in the organization’s network. Even though users employ a Web browser, SSP VPNs are not restricted to applications that use HTTP. With the aid of plug-ins, such as Java, users can have access to back-end databases, and other non-Web-based applications.
SSL VPNs have several advantages over IPSec. They are easier to deploy on client workstations than IPSec, because they require a Web browser only, and almost all networks permit outgoing HTTP. SSL VPNs can be operated through a proxy server. In addition, applications can restrict users’ access, based on criteria, such as the network that the user is on, which is useful for building extranets with several organizations. IPSec VPNs, on the other hand, grant access directly to a network. A user is usually given access to applications and devices as if he or she were located at the office. Of course, this is a double-edged sword. Just as an authorized user has access to many devices on the internal net478
network, so will an intruder who can steal IPSec VPN access. Currently, SSL VPNs do not support network-to-network tunnels. A significant disadvantage of IPSec VPNs is that a VPN client must be installed and updated on every workstation. Tunneling. Point-to-Point Tunneling Protocol (PPTP). Point-to-Point Tunneling Protocol (PPTP) is a VPN protocol that runs over other protocols. PPTP relies on generic routing encapsulation (GRE) to build the tunnel between the endpoints. After the user authenticates, typically with Microsoft Challenge Handshake Authentication Protocol version 2 (MSCHAPv2), a Point-to-Point Protocol (PPP) session creates a tunnel using GRE.
PPTP came under much fire in the 1990s. Cryptographers announced weaknesses in the protocol, including flaws with MSCHAPv1 (the authentication protocol) and the encryption implementation, and the use of user passwords as keys. Microsoft released PPTPv2, which addressed many of its predecessor's weaknesses, such as using an improved version of MSCHAP for authentication, but PPTPv2 is still vulnerable to offline password-guessing attacks. A key weakness of PPTP is the fact that it derives its encryption key from the user's password. This violates the cryptographic principle of randomness and can provide a basis for attacks. Password-based VPN authentication in general violates the recommendation to use two-factor authentication for remote access. Layer 2 Tunneling Protocol (L2TP). Layer 2 Tunneling Protocol (L2TP) is a hybrid of Cisco's Layer 2 Forwarding (L2F) and Microsoft's PPTP. It allows callers over a serial line using PPP to connect over the Internet to a remote network. A dial-up user connects to his or her ISP's L2TP access concentrator (LAC) with a PPP connection. The LAC encapsulates the PPP packets into L2TP and forwards them to the remote network's layer 2 network server (LNS). At this point, the LNS authenticates the dial-up user. If authentication is successful, the dial-up user will have access to the remote network.
LAC and LNS may authenticate each other with a shared secret, but as RFC 2661 states, this authentication is effective only while the tunnel between the LAC and LNS is being created.48 L2TP does not provide encryption and relies on other protocols, such as tunnel mode IPSec, for confidentiality. Dynamic Host Configuration Protocol (DHCP). System and network administrators are busy people and hardly have the time to assign IP addresses to hosts and track which addresses are allocated. To relieve administrators of the burden of manually assigning addresses, many organizations use Dynamic Host Configuration Protocol (DHCP) to
automatically assign IP addresses to workstations (servers and network devices usually are assigned static addresses). Dynamically assigning a host's configuration is fairly simple. When a workstation boots, it broadcasts a DHCPDISCOVER request on the local LAN, which could be forwarded by routers. DHCP servers will respond with a DHCPOFFER packet, which contains a proposed configuration, including an IP address. The client selects a configuration from the received DHCPOFFER packets and replies with a DHCPREQUEST. The DHCP server replies with a DHCPACK (DHCP acknowledgment), and the workstation adopts the configuration. Receiving a DHCP-assigned IP address is referred to as receiving a lease. A client does not request a new lease every time it boots. Part of the negotiation of IP addresses includes establishing a time interval for which the lease is valid and timers that reflect when the client must attempt to renew the lease. As long as the timers have not expired, the client is not required to ask for a new lease. Within the DHCP servers, administrators define the ranges of IP addresses from which addresses are dynamically assigned. In addition, they can assign specific hosts static (i.e., permanent) addresses. Because the DHCP server and client do not authenticate with each other, neither host can be sure that the other is legitimate. For example, in a DHCP network, an attacker can plug his or her workstation into a jack and receive an IP address without having to obtain one by guessing or social engineering. Also, a client cannot be certain that a DHCPOFFER packet is from a DHCP server rather than from an intruder masquerading as a server. Although these vulnerabilities are not trivial, the ease of administration of IP addresses usually makes the risk from the vulnerabilities acceptable, except in very high security environments. Internet Control Message Protocol (ICMP). The Internet Control Message Protocol (ICMP) is used for the exchange of control messages between hosts and gateways and for diagnostic tools, such as ping and traceroute.
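Ping itself is simple: send an ICMP echo request and wait for the reply. The sketch below shells out to the system ping command (Linux-style options assumed) rather than crafting raw ICMP packets, which would require elevated privileges; the swept addresses come from a documentation range and are only examples. The same loop, automated over large ranges, is the ping scanning technique discussed below.

    import subprocess

    def is_up(address: str, timeout_seconds: int = 1) -> bool:
        """Send one ICMP echo request using the system ping command (Linux-style flags)."""
        result = subprocess.run(
            ["ping", "-c", "1", "-W", str(timeout_seconds), address],
            stdout=subprocess.DEVNULL,
            stderr=subprocess.DEVNULL,
        )
        return result.returncode == 0   # 0 means an echo reply came back

    for host in range(1, 6):            # sweep 192.0.2.1 through 192.0.2.5
        address = f"192.0.2.{host}"
        print(address, "is up" if is_up(address) else "no reply")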
ICMP can be leveraged for malicious behavior, including man-in-the-middle and denial-of-service attacks. Ping of Death. Ping is a diagnostic program used to determine whether a specified host is on the network and can be reached from the pinging host. It sends an ICMP echo packet to the target host and waits for the target to return an ICMP echo reply. Amazingly, an enormous number of operating systems would crash or become unstable upon receiving an ICMP echo request larger than the legal IP packet limit of 65,535 bytes.
Before the ping of death became famous, the source of the attack was difficult to find because many system administrators would ignore a harmless-looking ping in their logs. ICMP Redirect Attacks. A router may send an ICMP redirect to a host to tell it to use a different, more efficient route. However, an attacker can send an ICMP redirect to a host telling it to use the attacker's machine as a default route. The attacker will forward all of the redirected traffic to a router so that the victim will not know that his or her traffic has been intercepted. This is a good example of a man-in-the-middle attack.
Some operating systems would crash if they received a storm of ICMP redirects. Ping Scanning. Ping scanning is a basic network mapping technique that helps narrow the scope of an attack. An attacker can use one of many tools that can be downloaded from the Internet to ping all of the addresses in a range. If a host replies to a ping, then the attacker knows that a host exists at that address. Traceroute Exploitation. Traceroute is a diagnostic tool that displays the path a packet traverses between the source and destination hosts. Traceroute can be used maliciously to map a victim's network and learn about its routing. In addition, there are tools, such as Firewalk, that use techniques similar to those of traceroute to enumerate a firewall rule set.49 Internet Group Management Protocol (IGMP). IGMP is used to manage multicasting groups, which are sets of hosts anywhere on a network that are interested in a particular multicast. Recall from the discussion of multicasting that multicast agents administer multicast groups. Hosts send IGMP messages to local agents to join and leave groups. There are three versions of the protocol, whose highlights follow:
• Version 1: Multicast agents periodically send queries to the hosts on their network to update their database of multicast group membership. Hosts stagger their replies to prevent a storm of traffic to the agent. When replies no longer come from a group, agents will stop forwarding multicasts to that group.
• Version 2: This version extends the functionality of version 1. It defines two types of queries: a general query to determine the membership of all groups and a group-specific query to determine the membership of a particular group. In addition, a member can notify all multicast routers that it wishes to leave a group.
• Version 3: This version further enhances IGMP by allowing hosts to specify from which sources they want to receive multicasts.
Routing Information Protocol (RIP). Routing Information Protocol is a dynamic routing protocol that is designed for small networks. Routers in a
RIP network regularly merge their view of the network by exchanging their routing table with their neighbors. The number of hops is used to determine the best path for packets. However, RIP cannot route to a network or host that is more than 15 hops away. RIP has several shortcomings, including:
• Routers exchange their entire route table every 30 seconds (by default), which can cause network congestion.
• RIP only works in classful networks. In other words, RIP cannot be used in networks with different subnet masks.
• There is no way for a router to verify the trustworthiness of a route update from its neighbors, which could allow an attacker to manipulate route tables with bogus route updates.
RIP version 2 (RIPv2) was implemented to address some of RIP's limitations. For example, RIPv2 can be used in a network with different subnet masks. Also, routers authenticate with each other, originally with a plaintext password and later, per RFC 2082, with keyed MD5 authentication. Virtual Router Redundancy Protocol (VRRP). Organizations that demand 99.999 percent availability from their network cannot tolerate critical routers being single points of failure. The most acceptable option is for a secondary router to automatically take the place of another router when it fails (i.e., failover). VRRP is a protocol that supports automatic failover. A virtual router is configured that appears to the rest of the network as a physical router.
Associated with the virtual router are physical routers: a primary router and at least one secondary. The primary router performs all of the routing on behalf of the virtual router. If the primary router fails, one of the secondary routers will automatically perform the routing for the virtual router. As long as hosts forward packets to the virtual router, it will not matter which physical router is doing the work. Primary and secondary routers are often placed in separate data centers to improve resilience against disasters (see the business continuity and disaster recovery chapter, Domain 6). Layer 4: Transport Layer The transport layer provides data communication between hosts. It is concerned with the information payload. It relies on the correct addressing (routing) of information happening on layer 3 while deferring any process-related handling, starting from authentication, to layers 5 and above. Concepts and Architecture Layer 4 offers two principal types of communication:
• A connection mode, where delivery of information is guaranteed and flow control and error recovery are provided
Table 7.6. TCP Quick Reference
Ports: 0/TCP … 65535/TCP
Definition: RFC 793
• A connectionless mode, where no such guarantee is required, for instance, due to performance considerations
Attacks on layer 4 seek to manipulate, disclose, or prevent delivery of the payload as a whole. This can, for instance, happen by reading the payload (as would happen in a sniffer attack, for example) or by changing it (as could happen in a man-in-the-middle attack). While disruptions of service can be carried out on other layers as well, the transport layer has become a common attack ground via ICMP (Section Internet Control Message Protocol). Commonly, the transport layer is implemented as part of an integrated network stack together with layer 3 (the TCP/IP stack). Transmission Control Protocol (TCP). The Transmission Control Protocol provides connection-oriented data management and reliable data transfer.
TCP, as well as UDP (see Section User Datagram Protocol), maps data connections using so-called port numbers. TCP and UDP port numbers are managed by the Internet Assigned Numbers Authority (IANA).51 A total of 65,536 (2^16) ports exist (see Table 7.6). These are structured into three ranges:52
• Well-known ports: Ports 0 through 1023 are well known. Ports in this range are assigned by IANA53 and, on most systems, can only be used by privileged processes and users. This makes use of these ports contingent on privilege escalation. However, because no ad hoc assumptions should be made about the trustworthiness of any system (i.e., any unknown system on the Internet is considered untrusted by default) and anyone can establish a system on which he or she has system privileges, this should not be seen as adding any degree of security.
• Registered ports: Ports 1024 through 49151 can be registered with IANA by application developers but are not assigned by IANA. One reason for choosing a registered port instead of a well-known port can be that, on most systems, the user may not have the privileges to run an application on a well-known port.54
• Dynamic or private ports: Ports 49152 through 65535 can be freely used by applications; one typical use for these ports is the initiation of return connections.
Attacks on TCP include sequence number attacks, session hijacking, and SYN floods.
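The three port ranges above are fixed by convention, so classifying a port is a matter of simple comparisons; the helper below is an illustrative sketch, and the service-name lookup at the end depends on the contents of the local services database.

    import socket

    def port_range(port: int) -> str:
        """Classify a TCP/UDP port into the three IANA-defined ranges."""
        if 0 <= port <= 1023:
            return "well-known"
        if 1024 <= port <= 49151:
            return "registered"
        if 49152 <= port <= 65535:
            return "dynamic/private"
        raise ValueError("not a valid port number")

    print(port_range(80))                           # well-known
    print(port_range(51515))                        # dynamic/private
    print(socket.getservbyname("http", "tcp"))      # 80, from the local services database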
Table 7.7. UDP Quick Reference
Ports: 0/UDP … 65535/UDP
Definition: RFC 768
User Datagram Protocol (UDP). The User Datagram Protocol provides a lightweight service for connectionless data transfer without error detection and correction.
For UDP, the same considerations for port numbers as described for TCP in Section Transmission Control Protocol apply. Responding to technical development, a number of protocols within the transport layer have been defined on top of UDP, thereby effectively splitting the transport layer into two. Protocols stacked between layers 4 and 5 include Real-time Protocol (RTP) and Real-time Control Protocol (RTCP) as defined in RFC 3550,56 MBone, a multicasting protocol, Reliable UDP (RUDP), and Stream Control Transmission Protocol (SCTP) as defined in RFC 2960.57 As a connectionless protocol, UDP services are easy prey to spoofing attacks. Technology and Implementation Scanning Techniques. Port Scanning. Port scanning is the act of probing for TCP services on a machine. It is performed by establishing the initial handshake for a connection. Although not in itself an attack, it allows an attacker to test for the presence of potentially vulnerable services on a target system.
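The most basic probe is a full TCP connect scan: attempt the complete handshake and see whether anything answers. The sketch below, aimed only at the local host, illustrates the idea; probing systems you are not authorized to test is, in most jurisdictions, illegal.

    import socket

    def probe(host: str, port: int, timeout: float = 0.5) -> bool:
        """Attempt a full TCP handshake; success means a service is listening."""
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            return s.connect_ex((host, port)) == 0   # 0 means the connection was established

    target = "127.0.0.1"                             # probe only hosts you are authorized to test
    for port in (22, 25, 80, 443):
        if probe(target, port):
            print(f"{target}:{port} appears open")

Stealthier variants, described next, deliberately avoid completing the handshake so that the probe is less likely to be logged.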
Port scanning can also be used for fingerprinting an operating system by evaluating its response characteristics, such as timing of a response, details of the handshake, etc. Protection from port scanning includes restriction of network connections, e.g., by means of a host-based or network-based firewall or by defining a list of valid source addresses on an application level. FIN, NULL, AND XMAS SCANNING. In FIN scanning, a stealth scanning method, a request to close a connection is sent to the target machine. If no application is listening on that port, a TCP RST or an ICMP packet will be sent.
This scan commonly works only against UNIX machines: Windows machines deviate from RFC 793 and respond to a FIN packet with an RST regardless of port state, which makes open ports indistinguishable from closed ones and renders them unsusceptible to the scan.
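A sketch of such a FIN probe using the third-party Scapy packet library (Scapy 2.x assumed; addresses are placeholders, and sending raw packets normally requires root privileges):

    from scapy.all import IP, TCP, sr1   # third-party package: scapy

    target, port = "192.0.2.10", 80       # placeholder target
    # Send a lone FIN segment; per RFC 793 a closed port should answer with RST,
    # while an open port should stay silent.
    reply = sr1(IP(dst=target)/TCP(dport=port, flags="F"), timeout=2, verbose=0)
    if reply is None:
        print("no response: port open or filtered")
    elif reply.haslayer(TCP) and (reply[TCP].flags & 0x04):   # RST bit set
        print("RST received: port closed")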
Firewalls putting a system into stealth mode (i.e., suppressing responses to FIN packets) are available.

In NULL scanning, no flags are set on the initiating TCP packet; in XMAS scanning, all TCP flags are set (or "lit," as in a Christmas tree). Otherwise, these scans work in the same manner as the FIN scan.

SYN SCANNING. As traditional TCP scans became widely recognized and were blocked, various stealth scanning techniques were developed. In TCP half scanning (also known as TCP SYN scanning), no complete connection is opened; instead, only the initial steps of the handshake are performed.
This makes the scan harder to recognize (for instance, it would not show up in application log files). However, it is possible to recognize and block TCP SYN scans with an appropriately equipped firewall. TCP Sequence Number Attacks. To detect and correct loss of data packets, TCP numbers transmitted data packets. If a transmission is not reported back as successful, a packet will be retransmitted.
By eavesdropping traffic, these sequence numbers can be predicted and fake packets with the correct sequence number can be introduced into the data stream by a third party. This class of attacks can, for instance, be used for session hijacking. Protection mechanisms against TCP sequence number attacks have been proposed based on better randomization of sequence numbers as described in RFC 1948.58 Session Hijacking. Session hijacking is the act of unauthorized insertion of packets into a data stream. It is normally based on sequence number attacks, where sequence numbers are either guessed or intercepted.
Different types of session hijacking exist:
• IP spoofing: Based on a TCP sequence number attack, the attacker would insert packets with a faked sender IP address and a guessed sequence number into the stream. The attacker would not be able to see the response to any commands inserted.
• Man-in-the-middle attack: The attacker would sniff or intercept packets, removing legitimate packets from the data stream and replacing them with his own. In fact, both sides of a communication would then communicate with the attacker instead of each other.

Countermeasures against IP spoofing can be executed on layer 3 (see "IP Address Spoofing" section). As TCP sessions only perform an initial authentication, application layer encryption can be used to protect against man-in-the-middle attacks.
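Both classes of attack presuppose predictable sequence numbers. A rough sketch of the RFC 1948 idea referenced above — deriving the initial sequence number from a clock plus a keyed hash of the connection identifiers — might look as follows (illustrative only; real stacks implement this inside the kernel with their own hash and clock granularity):

    import hashlib, os, time

    SECRET = os.urandom(16)   # per-boot secret key

    def initial_sequence_number(src_ip, src_port, dst_ip, dst_port):
        # 4-microsecond clock component, as in the classic BSD scheme
        clock = int(time.time() * 250000) & 0xFFFFFFFF
        # Keyed hash over the connection 4-tuple (RFC 1948 uses MD5; any strong hash illustrates the point)
        material = f"{src_ip},{src_port},{dst_ip},{dst_port}".encode() + SECRET
        offset = int.from_bytes(hashlib.sha256(material).digest()[:4], "big")
        return (clock + offset) & 0xFFFFFFFF

    print(initial_sequence_number("192.0.2.1", 12345, "198.51.100.2", 80))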
Denial of Service. SYN Flooding. A SYN flood attack is a denial-of-service attack against the initial handshake in a TCP connection. Many connection attempts from faked, random IP addresses are initiated in short order, overloading the target's table of half-open connections.
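Where the platform supports them, the mitigations described in the next paragraph — a larger backlog and SYN cookies — are typically switched on through kernel parameters. A rough, Linux-specific sketch (parameter names as found on current Linux kernels; the values are illustrative only):

    # /etc/sysctl.d/90-syn-flood.conf  (illustrative)
    # Enable SYN cookies when the SYN backlog overflows
    net.ipv4.tcp_syncookies = 1
    # Enlarge the table of half-open (SYN_RECV) connections
    net.ipv4.tcp_max_syn_backlog = 4096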
Countermeasures include tuning of operating system parameters (concretely, the size of the backlog table) according to vendor specifications. Another solution, which requires modification to the TCP/IP stack, is SYN cookies — generating initial TCP sequence numbers in a way that makes faked packets immediately recognizable.59

Layer 5: Session Layer
Concepts and Architecture
As mentioned, the session layer provides a logical persistent connection between peer hosts. This includes several different types of services, which can be dependent upon one another. Arguably one of the most basic functions is directory services, allowing identification of objects between hosts. Another one is remote procedure calls, allowing execution of objects between hosts.60 Equally important to the higher layers are access services, such as NFS, which allow accessing and manipulating objects on another host.

Technology and Implementation
Remote Procedure Calls. Remote procedure calls (RPCs) are a general concept of executing objects across hosts.
Generically, several (mutually incompatible) services in this category exist, such as distributed computing environment RPC (DCE RPC) and open network computing RPC (ONC RPC, also referred to as SunRPC or simply RPC). Common Object Request Broker Architecture (CORBA) and Microsoft Distributed Component Object Model (DCOM) can be viewed as RPC-type protocols. Generically, RPC services are not limited to layer 5. In the context of this book, we will look at ONC RPC, as it is representative for other implementations. Open Network Computing Remote Procedure Call (ONC RPC).
ONC RPC or SunRPC was codeveloped with NFS, whose foundation it forms. It is important to note that RPC does not in fact provide any services on its own; instead, it provides a brokering service, by providing (basic) authentication and a way to address the actual service.
Table 7.8. ONC RPC Quick Reference
Ports: 111/TCP, 111/UDP
Definition: RFC 1050 [61], RFC 1057 [62], RFC 1831 [63], RFC 1833 [64], RFC 2695 [65]
The core service of RPC is a so-called port mapper. Any client wishing to avail himself of services provided by an RPC server will obtain access through the port mapper. The client will request a service number, obtained from a map of services available through the client (preferably as a local file). The request will be sent to the port mapper daemon on the server, which is commonly listening on TCP or UDP port 111. The port mapper will then redirect the client to the TCP port on which the actual application providing the service is running. Security problems with RPC include its weak authentication mechanism and any implementation problem, which, due to the fact that the port mapper will normally run under administrative privileges, can be quickly leveraged for privilege escalation by an attacker. Directory Services. Domain Name Service (DNS). Of the network services below the application layer, the Domain Name System is arguably one of the most prominent and the most visible to the end user. The reason for this is DNS’s role in e-mail and WWW addresses (Uniform Resource Locators (URLs)), which have become a ubiquitous element of everyday life, be it in advertising or private communication.
By virtue of this fact, DNS has become a prominent target of attack, aggravating preexistent weaknesses in the protocol. By manipulating DNS, it is certainly possible to divert, intercept, or prevent the vast majority of end-user communications without having to resort to attacking any end-user devices.

The Domain Name System as a whole is a distributed, hierarchical database. Through its caching architecture, it possesses a remarkable degree of robustness, flexibility, and scalability. Conversely, DNS does not enforce data consistency and integrity, its built-in authentication mechanisms are weak,66 and management of the global DNS infrastructure has become a subject of political and economical controversy, while the objects it manages — domain names — are often the subject of local and global trademark disputes.

DNS's central element is a set of hierarchical name (domain) trees, starting from a so-called top-level domain (TLD). A number of so-called root
Table 7.9. DNS Quick Reference
Ports: 53/TCP, 53/UDP
Definition: RFC 882 [69], RFC 1034 [70], RFC 1035 [71]
servers manage the authoritative list of TLD servers. To resolve any domain name, each Domain Name Server in the world must hold a list of these root servers.67 Various extensions to DNS have been proposed, to enhance its functionality and security, for instance, by introducing authentication (DNSSEC; see below), multicasting, or service discovery.68

DNS SPOOFING. To resolve a domain name query, such as mapping a Web server address to an IP address, the user's workstation will in turn have to undertake a series of queries through the Domain Name Server hierarchy. Such queries can be either recursive (a name server receiving a request will forward it and return the resolution) or iterative (a name server receiving a request will respond with a reference).
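The difference can be made visible with the third-party dnspython package (version 2.x assumed; the root-server address and domain are illustrative, and the exact answers depend on the resolvers queried):

    import dns.flags, dns.message, dns.query, dns.resolver   # third-party package: dnspython

    # Recursive lookup: the locally configured resolver does all the work and returns the answer.
    answer = dns.resolver.resolve("www.example.com", "A")
    print([rr.to_text() for rr in answer])

    # Iterative step: ask a root server directly with recursion disabled;
    # the reply contains a referral toward the .com TLD servers, not the final answer.
    query = dns.message.make_query("www.example.com", "A")
    query.flags &= ~dns.flags.RD
    referral = dns.query.udp(query, "198.41.0.4", timeout=3)   # a.root-servers.net
    print(referral.authority)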
An attacker aiming to poison a DNS server's (name server's) cache by injecting fake records, and thereby falsifying responses to client requests, will need to send a query to this very name server. The attacker then knows that the name server will shortly send out a query for resolution.
• In the first case, the attacker has sent a query for a domain whose primary name server he controls. The response to this query will contain additional information that was not originally requested, but which the target server will now cache.
• The second case uses a different method, one that can also be used with iterative queries. Using IP spoofing, the attacker will send a response to his own query before the authoritative (correct) name server has a chance to respond.

In both cases, the attacker has used an otherwise legitimate exchange to inject false information into the name server's cache. Not only will this name server now use the cached information, but the false information will also propagate to other servers making inquiries to this one. We note that due to the caching nature of DNS, attacks on DNS servers as well as countermeasures always have a certain latency, determined by the configuration of a (domain) zone.

There are two principal vulnerabilities here, both inherent in the design of the DNS protocol: it is possible for a DNS server to respond to a recursive query with information that was not requested, and the DNS server will not authenticate the information it receives.
Approaches to address or mitigate this threat have only been partly successful.
• Later versions of DNS server software are programmed to ignore responses that do not correspond to a query.
• However, approaches to introduce generally stronger authentication into DNS (for instance, through the use of DNSSEC [72]) have not found wider resonance, and authentication has largely been delegated upward to higher protocol layers. In other words, applications that need to guarantee authenticity cannot rely on DNS to provide it but will have to implement a solution themselves.73

MANIPULATION OF DNS QUERIES. Technically, the following two techniques are only indirectly related to DNS weaknesses. However, it is worth mentioning them in the context of DNS because they seek to manipulate name resolution in other ways.
• Manipulation of the hosts file is a technique employed, for instance, by certain viruses. The hosts file (/etc/hosts on many UNIX machines, %SystemRoot%\system32\drivers\etc\hosts on Windows XP) is the resource queried first, before a DNS request is issued. It will always contain the mapping of the host name localhost to the IP address 127.0.0.1 (loopback interface, as defined in RFC 3330 [74]) and potentially other hosts.75 A virus may map the addresses of antivirus software vendors to invalid IP addresses in the hosts file to prevent download of virus pattern files.
• Social engineering techniques will not try to manipulate a query on a technical level, but can trick the user into misinterpreting a DNS address that is displayed to him in an e-mail or in his Web browser address bar. One way to achieve this in e-mail or Hypertext Markup Language (HTML) documents is to display a link whose actual target address is different from the text displayed. Another is the use of non-ASCII character sets (for instance, Unicode — ISO/IEC 10646 — characters) that closely resemble ASCII (i.e., Latin) characters to the user. This may become a popular technique with the popularization of internationalized domain names.

INFORMATION DISCLOSURE. Smaller corporate networks often do not split naming zones, i.e., names of hosts that are accessible only from an intranet are visible from the Internet.
Although knowing a server name will not enable anyone to access it,76 this knowledge can aid and facilitate preparation of a planned attack as it provides an attacker with valuable information on existing hosts (at least with regard to servers), network structure, and, for instance, details such
as organizational structure or server operating systems (if the OS is part of the host name, etc.). A business should therefore operate split DNS zones wherever possible and refrain from using telling naming conventions for its machines.77 In addition, a domain registrar's database of administrative and billing domain contacts (the Whois database) can be an attractive target for information and e-mail harvesting.

NAMESPACE-RELATED RISKS. Besides the technical risks described, a number of other risks exist that, although not strictly security related, can lead to equivalent exposure.
• Domain litigation: Domain names are subject to trademark risks, related to a risk of temporary unavailability or permanent loss of an established domain name. For the business in question, the consequences can be equivalent to the loss of its whole Internet presence in an IT-related disaster. Businesses should therefore put in place contingency plans if they are concerned with trademark disputes of any kind over a domain name used as their main Web and e-mail address. Such contingency plans might include setting up a second domain unrelated to the trademark in question (based, for instance, on the trademark of a parent company) that can be advertised on short notice, if necessary. • Cyber squatting and illegitimate use of similar domains, containing common misspellings or representing the same second-level domain under a different top-level domain.78 The only way to protect a business from this kind of fraud is the registration of the most prominent adjacent domains or by means of trademark litigation. A residual risk will always remain, relating not only to public misrepresentation, but also to potential loss or disclosure of e-mail. Lightweight Directory Access Protocol (LDAP). LDAP is a client/server-based directory query protocol loosely based upon X.500,79 commonly used for managing user information. As opposed to DNS, for instance, LDAP is a front end and not used to manage or synchronize data per se.
Back ends to LDAP can be directory services, such as NIS (cf. “Network Information Service (NIS), NIS+” section), Lotus Notes, Microsoft Exchange, etc.
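By way of illustration, a simple directory query over an SSL-protected connection (LDAPS, TCP port 636, as listed in Table 7.10) might look as follows in Python, using the third-party ldap3 package; the server name, bind DN, password, and search base are placeholders:

    import ssl
    from ldap3 import Server, Connection, Tls   # third-party package: ldap3

    tls = Tls(validate=ssl.CERT_REQUIRED)                       # verify the server certificate
    server = Server("ldap.example.com", port=636, use_ssl=True, tls=tls)
    conn = Connection(server,
                      user="uid=reader,ou=people,dc=example,dc=com",
                      password="placeholder-secret",
                      auto_bind=True)                           # simple bind inside the TLS channel
    conn.search("dc=example,dc=com", "(uid=jdoe)", attributes=["cn", "mail"])
    print(conn.entries)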
Table 7.10. LDAP Quick Reference
Ports: 389/TCP, 389/UDP
Definition: RFC 1777 [80]
Table 7.11. NetBIOS Quick Reference
Ports: 135/TCP, 135/UDP (RPC); 137/UDP (name service); 138/UDP (datagram service); 139/TCP (session service)
Definition: RFC 1001 [84], RFC 1002 [85]
LDAP provides only weak authentication based on host name resolution. It would therefore be easy to subvert LDAP security by breaking DNS (cf. Section Domain Name Service). LDAP communication is transferred in cleartext and therefore is trivial to intercept. One way to address the issues of weak authentication and cleartext communication is deployment of LDAP over SSL, providing authentication, integrity, and confidentiality. Various other extensions to LDAP have been proposed to address these shortcomings; however, they have not widely gained ground.81 Conceivably, a reason for this could be that LDAP was meant to be simple and that building, for instance, a strong authentication and encryption framework around it could at least to a certain extent defeat that purpose. Note, however, that Microsoft Active Directory (see next section) does address these through its use of Kerberos. MICROSOFT ACTIVE DIRECTORY. LDAP is also the basis of Microsoft’s Active Directory Service (ADS). Applications such as Microsoft NetMeeting (cf. real-time communication in Section Instant Messaging) are making heavy use of LDAP for this reason.
As opposed to its predecessor, NetBIOS, ADS is fully TCP/IP and DNS based. ADS authentication is based on Kerberos (see Chapter 3). Network Basic Input Output System (NetBIOS). The NetBIOS application programming interface (API) was developed in 1983 by IBM. NetBIOS was later ported to TCP/IP (NetBIOS over TCP/IP, also known as NetBT); however, implementations running on top of NetBEUI82 or IPX83 are still in use.
Under TCP/IP, the NetBIOS name service runs on port 137, the datagram service on UDP port 138, and the session service on TCP port 139. In addition, port 135 is used for remote procedure calls (see Section Remote Procedure Calls). NetBIOS is susceptible to a number of attacks.
• Exploiting the fact that its credentials are static, a user can be made to deliver his credentials by tricking his host into setting up a NetBIOS connection with a host under an attacker's control.
• NetBIOS services can be used for information collection (they will disclose information on users, hosts, and domains).
• NetBIOS ports have become popular targets of attacks for Internet worms. Circulating exploits rely on weaknesses in the implementation of NetBIOS, not in the protocol itself.

Network Information Service (NIS), NIS+. NIS and NIS+ are directory services developed by Sun Microsystems, which are mostly used in UNIX environments. They are commonly used for managing user credentials across a group of machines, for instance, a UNIX workstation cluster or client/server environment, but can be used for other types of directories.

NIS. NIS (see Table 7.12) uses a flat namespace in so-called domains. It is based on RPC and manages all entities on a server (NIS server). NIS servers can be set up redundantly through the use of so-called slave servers.
NIS is known for a number of security weaknesses.86 • The fact that NIS does not authenticate individual RPC requests can be used to spoof responses to NIS requests from a client. This would, for instance, enable an attacker to inject fake credentials and thereby obtain or escalate privileges on the target machine. • Retrieval of directory information is possible if the name of a NIS domain has become known or is guessable, as any client can associate themselves with a NIS domain. Conversely, the fact that a NIS server is an attractive target of attacks cannot be considered a weakness of NIS as such; it is, in fact, an architectural issue with all client/server platforms. A number of guides have been published on how to secure NIS servers. The basic steps here are to secure the platform a NIS server is running on, to isolate the NIS server from traffic outside of a LAN, and to configure it in a way that limits the probability for disclosure of authentication credentials, especially system privileged ones.87 NIS+. NIS+ (see Table 7.13) is using a hierarchical namespace. It is based on Secure RPC (see Section Remote Procedure Calls).
Table 7.12. NIS Quick Reference
Ports: See RPC (Section Remote Procedure Calls)
Definition: See RPC (Section Remote Procedure Calls)
Table 7.13. NIS+ Quick Reference
Ports: See RPC (Section Remote Procedure Calls)
Definition: See RPC (Section Remote Procedure Calls)
Table 7.14. CIFS/SMB Quick Reference
Ports: 445/TCP; see also NetBIOS (Section Network Basic Input Output System)
Definition: Proprietary [90]
Authentication and authorization concepts in NIS+ are more mature; they require authentication for each access of a directory object. However, NIS+ authentication in itself will only be as strong as authentication to one of the clients in a NIS+ environment, as NIS+ builds on a trust relationship between different hosts. The most relevant attacks against a correctly configured NIS+ network come from attacks against its cryptographic security. NIS+ can be run at different security levels; however, most levels available are irrelevant for an operational network. Access Services. Common Internet File System (CIFS)/Server Message Block (SMB). CIFS/SMB88 (see Table 7.14) is a file-sharing protocol prevalent on
Windows systems. A UNIX/Linux implementation exists in the free Samba project.89 SMB was originally designed to run on top of the NetBIOS (see Section Network Basic Input Output System) protocol; it can, however, be run directly over TCP/IP.

CIFS is capable of supporting user-level and tree/object-level (share-level) security. Authentication can be performed via challenge/response authentication as well as by transmission of credentials in cleartext. This second provision has been added largely for backward compatibility in legacy Windows environments. The main attacks against CIFS are based upon obtaining credentials, be it by sniffing for cleartext authentication or by cryptographic attacks.

Network File System (NFS). Network File System is a client/server file-sharing system common to the UNIX platform. It was originally developed by Sun Microsystems, but implementations exist on all common UNIX platforms, including Linux, as well as Microsoft Windows. NFS (see Table 7.15) has been revised several times.

NFS VERSIONS 2 AND 3. NFS version 2 was based on UDP, and version 3 introduced TCP support. Both are implemented on top of RPC (see Section
Table 7.15. NFS Quick Reference
Ports: See RPC (Section 10.6.2.1)
Definition: RFC 1094 [91], RFC 1813 [92], RFC 3010 [93], RFC 3530 [94]
Remote Procedure Calls). NFS versions 2 and 3 are stateless protocols, mainly due to performance considerations. As a consequence, the server must manage file locking separately. NFS versions 2 and 3 have several drawbacks from a security perspective, due to their rather basic authentication mechanisms and to the fact that a file system protocol must possess some form of state management, for which workarounds have to be introduced on top of the stateless protocol to enable, for instance, file locking.

An attacker would have several opportunities to attack NFS, be it from a client, a server, or a network perspective. The first step in setting up an NFS connection will be the publication (exporting) of file system trees from the server. These trees can be arbitrarily chosen by the administrator. Access privileges are granted based upon the client IP address and directory tree. Within the tree, the privileges of the server file system will be mapped to client users. Several points of risk exist:95
• Export of parts of the file system that were not intended for publication or with inappropriate privileges, for instance, by accident or through the existence of UNIX file system hard links (which can be generated by the user). This is of particular concern if parts of the server root file system are made accessible. One can easily imagine scenarios where a password file can be accessed and the encrypted passwords contained therein subsequently broken by an off-the-shelf tool. Regular review of exported file system trees is an appropriate mitigation.
• Using an unauthorized client. Because NFS identifies the client by its IP address or (indirectly) a host name, it is relatively easy to use a different client than the authorized one, by means of IP spoofing or DNS spoofing.96 At the very least, resolution of server host names should therefore happen by a file (/etc/hosts on UNIX), not by DNS.
• Incorrect mapping of user IDs between server and client. In fact, any machine not controlled by the server administrator can be used for an attack, as NFS relies on user IDs as the only form of credential. An attacker, having availed himself of administrative access to a client, could generate arbitrary user IDs to match those on the
server. It is paramount that user IDs on server and client are synchronized, e.g., through the use of NIS/NIS+ (see Section Network Information Service).
• Sniffing and access request spoofing. Because NFS traffic, by default, is not encrypted, it is possible to intercept it, either by means of network sniffing or by a man-in-the-middle attack. Because NFS does not authenticate each RPC call, it is possible to access files if the appropriate access token (file handle) has been obtained, for instance, by sniffing. NFS itself does not offer appropriate mitigation; secure NFS (see below) does.
• SetUID files. The directories accessed via NFS are used in the same way local directories are. On UNIX systems, files with the SUID bit set can therefore be used for privilege escalation on the client. NFS should therefore be configured in such a way as to not respect SUID bits.

SECURE NFS (SNFS). NFS offers secure authentication and encryption on the basis of secure RPC (Section Remote Procedure Calls), based on Data Encryption Standard (DES) encryption. In contrast to standard NFS, secure NFS (or rather secure RPC) will authenticate each RPC request.
This will increase latency for each request while the authentication is performed and introduces a slight performance premium, mainly paid in terms of computing capacity. Secure NFS uses DES-encrypted time stamps as authentication tokens. If server and client do not have access to the same time server (see "Network Time Protocol (NTP)" section), this can lead to short-term interruptions until server and client have resynchronized themselves.97

NFS VERSION 4. The latest installment, NFS version 4,98 is a stateful protocol that addresses a number of the security issues described above at the protocol's core. NFS version 4 uses TCP port 2049. UDP support (and dependency) has been discontinued.
NFS version 4 implements its own encryption and authentication on the basis of Kerberos and no longer depends on the RPC port mapper and its dynamically assigned ancillary ports. As a result, additional ports are no longer dynamically assigned, which enables use of NFS through firewalls. This should not be interpreted as mitigation of the risks of operating NFS beyond a network perimeter, as described in the previous section. Operating NFS through firewalls remains bad practice.

Layer 6: Presentation Layer
Concepts and Architecture
The presentation layer as a conceptual node of information translation may appear less relevant from a network security perspective.
Table 7.16. TLS Quick Reference
Ports: Depending on the protocol using SSL/TLS; commonly used ports include:
  443/TCP, 443/UDP: HTTPS (Section 10.8.2.5.3)
  563/TCP, 563/UDP: NNTPS (Section 10.8.2.3.1)
  636/TCP, 636/UDP: LDAPS (Section 10.6.2.2.2)
  989/TCP, 989/UDP, 990/TCP, 990/UDP: FTPS (Section 10.8.2.5.1)
  992/TCP, 992/UDP: TELNETS (Section 10.8.2.8.1)
  993/TCP, 993/UDP: IMAPS (used to be port 585 for IMAP4-SSL) (Section 10.8.2.3)
  995/TCP, 995/UDP: POP3S (Section 10.8.2.2)
  5061/TCP, 5061/UDP: SIP-TLS (Section 10.8.2.10.1)
Definition: RFC 2246 [100]
However, the presentation layer is also pertinent to compression and encryption. Although a separate chapter is dedicated to cryptography (see Chapter 3), we will at least examine the concepts of Transport Layer Security (TLS), which — despite its name — has its place on layer 6.99 In the future, layer 6 may also be considered the layer for digital rights management.

Technology and Implementation
Transport Layer Security (TLS). Several methods of providing a secure and authenticated channel between hosts on the Internet above the transport layer have been proposed. In the end, the SSL protocol became the basis for TLS (see Table 7.16). From a model perspective, both would be considered transport layer elements, but, in implementation, they form a software tier above the transport layer and below the TCP/IP application layer.
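The handshake and server authentication described in the following paragraphs can be exercised in a few lines with Python's standard ssl module (a sketch; the host name is a placeholder, and certificate validation relies on the CA store installed on the client):

    import socket, ssl

    hostname = "www.example.com"
    context = ssl.create_default_context()   # loads trusted root CAs, enables hostname checking
    with socket.create_connection((hostname, 443)) as raw:
        with context.wrap_socket(raw, server_hostname=hostname) as tls:
            # At this point the handshake has negotiated keys and verified the server certificate chain.
            print("protocol:", tls.version())
            print("cipher:  ", tls.cipher())
            print("subject: ", tls.getpeercert().get("subject"))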
SSL was developed by Netscape Communications Corporation to improve security and privacy of HTTP connections. SSL is application independent and capable of negotiating encryption keys as well as server authentication. The TLS protocol is the successor to the earlier Secure Sockets Layer (SSL) protocol, providing:
• Mutual authentication of server and client (although client authentication is hardly used in practice)
• Encrypted connections, extensibly based on algorithms implemented on both client and server

TLS is composed of two protocols:
• The TLS Handshake Protocol, establishing an opaque connection by exchanging asymmetric encryption keys and then creating a channel (of better performance) based on symmetric encryption
• The TLS Record Protocol, maintaining the integrity of communications through an appropriate hash function

The client can authenticate the server by verifying the signature of the server's certificate using a hard-coded root CA public key, which is typically preinstalled on the client as part of its Web browser installation. Authentication based on symmetric keys can be provided via a Kerberos over TLS implementation (RFC 2712 [101]). An open-source implementation of TLS is the OpenSSL [102] project.

Layer 7: Application Layer
Concepts and Architecture
The application layer consists of protocols used directly by applications — it therefore does not describe applications, but the interfaces open to them. The application layer is sometimes used to compensate for deficiencies on the lower layers. One example of this is S-HTTP. Conversely, deficiencies on the application layer, e.g., lack of authentication mechanisms, can defeat security measures on the lower layers.

Technology and Implementation
Asynchronous Messaging (E-mail and News). Simple Mail Transfer Protocol (SMTP) and Enhanced Simple Mail Transfer Protocol (ESMTP) (see Tables 7.17 and 7.18). SMTP is a protocol to route e-mail on the
Internet. SMTP is pervasive and used for practically all mail routing outside of closed application networks (such as Lotus Notes). SMTP is a client/server protocol, using port 25/TCP; information on mail servers for Internet domains is managed through DNS (in so-called mail exchange (MX) records). Although SMTP takes a fairly simple approach to authentication, it is fairly robust in the way it deals with unavailability; an SMTP server will try to deliver e-mail over a configurable period.

From a protocol perspective, SMTP's main shortcomings are its nonexistent authentication and its lack of encryption. Identification is performed by the sender's e-mail address. A mail server will be able to restrict sending access to certain hosts (which should be on the same network as the mail server), as well as set conditions on the sender's e-mail address (which should be in one of the domains served by this particular mail server). Otherwise, the mail server may be configured as an open relay (see below).

From an implementation perspective, a number of mail agents have become notorious for their security holes. Not a few of them aggravate the
Table 7.17. SMTP Quick Reference
Ports: 25/TCP
Definition: RFC 821 [104], RFC 822 [105]
Table 7.18. ESMTP Quick Reference
Ports: 25/TCP
Definition: RFC 2821 [106]
problem due to the fact that they are running with system privileges on the server.103

To address the weaknesses identified in SMTP, an enhanced version of the protocol, ESMTP, was defined. ESMTP is modular in that client and server can negotiate the enhancements used. ESMTP does offer authentication, among other things, and allows for different authentication mechanisms, including basic and several secure authentication mechanisms.

E-MAIL SPOOFING. As SMTP does not possess an adequate authentication mechanism, e-mail spoofing is extremely simple. The most effective protection against this is a social one, whereby the recipient can confirm or simply ignore implausible e-mail.
Spoofing e-mail sender addresses is extremely simple, and it can be done with a simple TELNET command to port 25 of a mail server and by issuing a number of SMTP commands. E-mail spoofing is frequently used as a means to obfuscate the identity of a sender in spamming (see below), whereby the purported sender of a spam e-mail is in fact another victim of spam, whose e-mail address has been harvested by or sold to a spammer.

OPEN MAIL RELAY SERVERS. An open mail relay server is an SMTP server that accepts and forwards e-mail for domains it does not serve (i.e., for which it does not possess a DNS MX record). An open mail relay is generally considered a sign of bad system administration.107
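To illustrate how little is required, a hand-typed session of this kind might look as follows (all names use reserved example domains; lines beginning with a three-digit code are server responses, and a well-run server may reject some of these steps — whether it accepts a recipient in a foreign domain is precisely what distinguishes an open relay):

    $ telnet mx.victim.example 25
    220 mx.victim.example ESMTP ready
    HELO attacker.example.org
    250 mx.victim.example
    MAIL FROM:<ceo@victim.example>
    250 2.1.0 Ok
    RCPT TO:<employee@victim.example>
    250 2.1.5 Ok
    DATA
    354 End data with <CR><LF>.<CR><LF>
    From: "The CEO" <ceo@victim.example>
    Subject: Urgent
    Please wire the funds today.
    .
    250 2.0.0 Ok: queued
    QUIT
    221 2.0.0 Bye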
Open mail relays are a principal tool for distribution of spam, as they allow an attacker to hide his identity. A number of blacklists of open mail relay servers exist and can be used to block them (i.e., a legitimate mail server would not accept any e-mail from a listed host because it has a high likelihood of being spam).108 Although using blacklists as one indicator in spam filtering has its merits, it is risky to use them as an exclusive indicator. Generally, they are run
by private organizations and individuals according to their own rules, they are able to change their policies on a whim, they can vanish overnight for any reason, and they can rarely be held accountable for the way they operate their lists.

SPAM. We touched on the subject of spam (or unsolicited commercial e-mail (UCE)) in the previous section.109 Spam benefits from the low cost of e-mail, as opposed to phone calls or letters. It can be sent in massive amounts with little additional cost and a low risk of retribution.
Over the years, sending spam has become a professional and highly profitable business. This means that spammers are highly organized and structured. In general, spam is not limited to e-mail; it can also occur in newsgroups, Web logs (blogs), or instant messaging (see "Spam over Instant Messaging (SPIM)" section).
• Spam often promotes illegitimate or fraudulent businesses and shady Web sites.
• It is often crafted in such a way as to trick the user into thinking that either he has been addressed personally (for instance, by inclusion of personal names or e-mail addresses) or that he has accidentally received an important e-mail intended for someone else.

Spam per se is less of a security problem than a problem of acceptable use. However, spam relies on illegitimate (and security breaching) means for its distribution. Because spam is nowhere welcome and the majority of providers have acceptable use policies in place that disallow the sending of UCE, spammers have to resort to (illegitimate) distribution and redistribution of their e-mail and to obfuscation of its origin.
• Spam is almost always sent with invalid (faked) sender addresses.
• Spam can be sent through open mail relays. The legitimacy of this tactic is at least questionable. While, in our opinion, due care is certainly missing on behalf of the system administrator of the open mail relay, an intent to allow relaying can hardly be inferred.
• Spam appears to be increasingly sent via virus-infected, backdoored hosts (zombie networks). Where this is the case, a security breach is exploited and the spammer may be a party in performing it.

The average amount of spam a user receives can easily outnumber his normal e-mail. It has therefore become common practice to implement spam filters in e-mail gateways to protect network and server capacity, save working time on behalf of the recipient, and reduce the risk of actual e-mail accidentally being discarded.
• By far the most common way of suppressing spam is e-mail filtering on an e-mail gateway. A large variety of commercial products exist, based on a variety of algorithms. Filtering based on simple
keywords can be regarded as technically obsolete: (1) the method is prone to generate false-positives,110 and (2) spammers have an easy time working around this type of filter. More sophisticated filters, based, for instance, upon statistical analysis or analysis of e-mail traffic patterns, have come to market. Filtering can happen on an e-mail server (mail transfer agent (MTA)) or in the client (mail user agent (MUA)).
• The administrator of a mail server can configure it to limit or slow down an excessive number of connections (tar pit).111
• A mail server can be configured to honor blacklists (of spam sources) either as a direct blocking list or as one of several indicators for spam.

Organizations need to take precautions against becoming a spam haven; i.e., their mail servers and hosts need to be secured to avoid becoming a relay point for spam. Organizations sending spam — whether deliberately or involuntarily — may face dire consequences and retribution, starting with being cut off from their own mail and Internet access partly or in its entirety.

Post Office Protocol (POP). Although SMTP addresses the task of sending and receiving e-mail on a server, POP (see Table 7.19) solves the problem of accessing e-mail on a server from a client. Widely implemented in its current (and probably last) version (version 3 — POP3), POP only offers basic functionality, such as username/password authentication and unencrypted transmission.
Modern e-mail clients therefore rely on encryption through TLS to at least provide secure transmission to protect the confidentiality and integrity of a message. Once downloaded onto a client, e-mail will be protected (only) by the client’s operating system security. Internet Message Access Protocol (IMAP). IMAP (see Table 7.20) is one of two dominant protocols for accessing e-mail on a server (the other one being POP; Section Post Office Protocol). IMAP offers a number of notable functional security enhancements over POP’s simple authentication and e-mail management.
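For example, retrieving mail over a TLS-protected POP3 connection ("POP3S," port 995) with Python's standard poplib module — a sketch; host and credentials are placeholders:

    import poplib

    # POP3 over TLS on port 995; the plaintext protocol runs inside the encrypted channel.
    box = poplib.POP3_SSL("mail.example.com", 995)
    box.user("alice")
    box.pass_("placeholder-secret")
    count, size = box.stat()
    print(f"{count} message(s), {size} bytes on the server")
    box.quit()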
Table 7.19. POP Quick Reference Ports Definition
500
110/TCP RFC 1734112 RFC 1939113
AU8231_C007.fm Page 501 Wednesday, May 23, 2007 9:05 AM
Telecommunications and Network Security Table 7.20. IMAP Quick Reference Ports Definition
143/TCP RFC 1730114 RFC 3501115
Functional enhancements, which are remotely relevant from an availability and integrity perspective, are concurrent access from different clients and to different mailboxes and the ability to synchronize e-mail to a client from a server, as opposed to POP’s download-and-delete approach, whereby the availability of messages was delegated to the client. More importantly, IMAP offers native support for encrypted authentication as well as encrypted data transfer. IMAP supports plaintext transmission if forced by the server. Network News Transfer Protocol (NNTP). Network news was one of the first discussion systems on the Internet, predating Web-based discussion forums116 and loosely modeled after former dial-up mailbox electronic discussion systems. Internet newsgroups — in their entirety also known as Usenet, even though based on a social, rather than physical or logical, network — are distributed through a protocol called NNTP (see Table 7.21), which from the client perspective matches SMTP in its simplicity, but from a server perspective offers a vastly different set of functionality, as the basic requirement here is not the routing of e-mail but the hierarchical distribution of information to temporary caches, as well as the administration of the server and the newsgroup hierarchy.
From a security perspective, the main shortcoming in NNTP is again authentication. Confidentiality of the message is much less of a concern, as the information is indeed intended for publication; however, the proper identification and authentication of the sender remains a strong concern. One of the earlier solutions users found to this problem was signing messages with PGP. However, this did not prevent impersonation or faked identities, as digital signatures were not a requirement, and indeed would be unsuitable for the repudiation problem implied. To make matters worse, NNTP offers a cancellation mechanism to withdraw articles already published. Naturally, the same authentication weakness applies to the control messages used for these cancellations, allowing users with even moderate skills to delete messages at will. On a related note, network news have been plagued with spam for more than a decade (in essence, since Usenet spam became economically viable). A number of mechanisms to deal with this problem have evolved. It can be safely said that all technical measures are just add-ons to what is mostly a social self-regulation mechanism.117 501
Table 7.21. NNTP Quick Reference
Ports: 119/TCP
Definition: RFC 1036 [120]
• The original Usenet way of dealing with unwanted information was maintenance of client-based blacklists by the user, so-called kill files.118 • Some newsgroups have been set up as moderated to prevent misuse, mostly by partisan participants, but naturally the mechanism also works against spam, even though it comes at an increased workload to the moderator of a newsgroup. • Over time, a convention evolved, after which messages classified as spam by well-defined criteria (excessive repetitions or cross-posting of identical or highly similar messages) were legitimate targets for cancellations. The problem of authentication has never been adequately addressed in NNTP, and it might even be undesirable to do so: in a certain way, Usenet as a social construct may well depend on the ability to post anonymously or under pseudonyms.119 Instant Messaging. Applications and Protocols. Instant messaging systems can generally be categorized in three classes: peer-to-peer networks, brokered communication, and server-oriented networks.
Most chat applications do offer additional services beyond their text messaging capability, for instance, screen sharing, remote control, exchange of files, and voice and video conversation. Some applications allow command scripting. We can expect convergence with Voice-over-IP services in the future (Section Voice-over-IP). It should be noted that many of the risks mentioned here apply also to online games, which today offer instant communication between participants. For instance, multiplayer role-playing games, such as multiuser domains (MUDs), rely heavily on instant messaging that is similar in nature to Internet Relay Chat (IRC), even though it is technically based on a variant of the TELNET protocol. A large collection of real-time communication protocols and applications exists. We will focus on applications based on open protocols.121 OPEN PROTOCOLS, APPLICATIONS, AND SERVICES. Extensible Messaging and Presence Protocol (XMPP) (see Table 7.22) and Jabber — Jabber122 is an open instant messaging protocol for which a variety of open-source clients exist. A number of commercial services based on Jabber exist.
Table 7.22. XMPP Quick Reference
Ports: 5222/TCP, 5222/UDP
Definition: RFC 3920 [125], RFC 3921 [126]
Jabber has been formalized as an Internet standard under the name Extensible Messaging and Presence Protocol (XMPP), as defined in RFC 3920 [123] and RFC 3921 [124]. Jabber is a server-based application. Its servers are designed to interact with other instant messaging applications. As with IRC, anybody can offer a Jabber server. The Jabber server network can therefore not be considered trusted. Although Jabber traffic can be encrypted via TLS, this does not prevent eavesdropping on the part of server operators. However, Jabber does provide an API to encrypt the actual payload data.

Jabber itself offers a variety of authentication methods, including cleartext and challenge/response authentication. To implement interoperability with other instant messaging systems from the server, however, the server will have to cache the user's credentials for the target network, enabling a number of attacks, mainly on behalf of the server operator, but also for anyone able to break into a server.

Internet Relay Chat (IRC) — Of the widely deployed chat systems on the Internet, IRC (see Table 7.23) was arguably the first. IRC is still popular in academia, but has lost its dominant position to commercial services. Communication is organized in public discussion groups (channels) and private messaging between individual users. IRC is a client/server-based network.

IRC is unencrypted, and therefore an easy target for sniffing attacks. The basic architecture of IRC, based on trust between servers, enables special forms of denial-of-service attacks, whereby a malicious user can hijack a channel while a server or group of servers has been disconnected from the rest (net split). IRC is also a common platform for social engineering attacks, aimed at inexperienced or technically unskilled users. Although original clients were UNIX based, IRC clients are now available for many platforms, including Windows, Apple Macintosh, and Linux.

PROPRIETARY APPLICATIONS AND SERVICES. Readers will want to familiarize themselves [128] with proprietary applications such as IBM Lotus Instant
Table 7.23. IRC Quick Reference
Ports: 194/TCP, 194/UDP
Definition: RFC 1459 [127]
Messaging and Web Conferencing (Sametime), as well as commercial services such as:
• AOL Instant Messaging and ICQ, based on the proprietary Open System for Communication in Real-Time (OSCAR) protocol
• Google Talk, based on open Jabber/XMPP
• Microsoft MSN Messenger/Windows Messenger, based on the proprietary Mobile Status Notification Protocol (MSNP)
• Yahoo! Messenger, based on a proprietary protocol

All of these applications and services are server based. Interoperability between these services can be achieved through a server-based approach a la XMPP or through multiple protocol clients. As usual, security of all of these applications rests in the strength of the protocol, quality of the implementation, trustworthiness of the operator, and behavior of the user. If these applications are to be used in a business context, stringent architectural and policy measures need to be put in place to prevent security gaps. This is all the more important as many instant messaging applications by design support a variety of communication channels, offer the ability to tunnel through HTTP, and offer online awareness services that can be misused for technical or social attacks.

Risks. AUTHENTICITY. User identification can be easily faked in chat applications
by:
• Choosing a misleading identity upon registration or changing one's nickname while online
• Manipulating the directory service (if the application requires one)
• Manipulating either the attacker's or the target's client to send or display a wrong identity

Although these risks are inherent to all kinds of communication networks (and are also common in e-mail), they present an increased risk in real-time communication, where a user potentially has less time to analyze the communication presented to him or her.

CONFIDENTIALITY. Many chat systems transmit their information in cleartext. Similar to unencrypted e-mail, information can be disclosed by sniffing on the network.
A different form of confidentiality breach may occur based upon the fact that chat applications can generate an illusion and expectation of privacy, e.g., by establishing "closed rooms." Depending on the kind of infrastructure used, all messages can however be read in cleartext by privileged users such as the chat system's operators. File transfer mechanisms embedded in instant messaging clients can be considered an uncontrolled channel for information — especially file — leakage. Due to the large number of other, similarly uncontrollable channels, the resulting additional risk should not be overestimated, while of course the overall risk may still be high.

SCRIPTING. Certain chat clients, such as IRC clients, can execute scripts that are intended to simplify administration tasks, such as joining a chat channel. Because these scripts are executed with the user's privileges with relatively unsophisticated (no sandbox) or nonexistent protection, they are an attractive target for social engineering or other attacks. Once the victim has been tricked into executing commands, he can leave his computer wide open for other attacks.

SOCIAL ENGINEERING. Similar to e-mail, but in a different social setting, attackers can exploit the lack of authenticity to evoke an impression of legitimacy, for instance, by claiming to belong to a certain company or social group.
As the social setting is informal and community oriented, there might even be social pressure to behave in an insecure manner, for instance, to demonstrate trust. The lack of authenticity (and subsequently of nonrepudiation) should be a concern, especially in business situations where chat systems can be used to give online support or enable other forms of customer interaction. SPAM OVER INSTANT MESSAGING (SPIM). With the proliferation of Windows clients with Windows Messenger Service enabled, a particular form of SPIM through pop-up windows ran rampant for a while. The easiest countermeasure is to disable the service. TUNNELING FIREWALLS AND OTHER RESTRICTIONS. Similar to streaming audio and video applications, corporate firewalls were perceived as an obstacle in establishing direct contact with Internet peers. The easy, but arguably illegitimate, solution for developers was to enable tunneling through the protocol that would always be available: HTTP.
Depending on the client, it can even be possible to enable incoming connections by polling an external server. (This technique has been widely exploited in another type of application, a certain kind of remote-access software.)
Control of HTTP tunneling can happen on the firewall or the proxy server. It should, however, be considered that in the case of peer-to-peer protocols, this would require a "deny by default" policy, and blocking instant messaging without providing a legitimate alternative is not likely to foster user acceptance and might give users incentive to utilize even more dangerous workarounds.

Although we have discussed confidentiality risks in the information security and risk management chapter, Domain 1, it should be noted that inbound file transfers can also result in circumvention of policy or restrictions in place, in particular for the spreading of viruses. An effective countermeasure can be found in on-access antivirus scanning on the client (which probably should be enabled anyway).

Data Exchange (World Wide Web). File Transfer Protocol (FTP). Before the advent of the World Wide Web and proliferation of HTTP (which is built on some of its features), FTP (see Table 7.24) was the protocol for publishing or disseminating data over the Internet.
In its early days, the usual way to use FTP was in a nonfirewalled environment from a UNIX command shell. The protocol reflects some of the early design decisions made to support this environment, even though it is typically used via dedicated FTP clients or Web browsers. As opposed to, for instance, HTTP, FTP is a stateful protocol. FTP requires two communication channels, one control channel on port 21 under TCP, over which state information is exchanged, and a data channel on port 20, through which payload information is transmitted. In its original form, FTP authentication is simple username/password authentication, and credentials as well as all data are transmitted in cleartext. This makes the protocol subject to guessing or stealing of credentials, man-in-the-middle attacks, and sniffing. Although this can be addressed by use of encryption on underlying layers, it is a severe drawback, as additional effort is required for secure configuration and additional requirements to support encryption have to be met by the client. FTP offers two principal modes of data transfer: ASCII and binary. In ASCII mode, a conversion of layer 6 representation, depending on the target platform, is performed, whereas this conversion is omitted in binary mode. Table 7.24. FTP Quick Reference Ports Definition
Ports: 20/TCP (data stream), 21/TCP (control stream)
Definition: RFC 959 [129]
TRANSFER MODES. Although the control channel is always opened by the client, there are two different modes for the data channel:130
• Active mode (PORT mode), where the server initiates the data connection to the client. This mode has obvious drawbacks in firewalled environments, as incoming connections to the client can (and should) be blocked. A number of firewall products still support active mode.
• Passive mode (PASV mode), where the client initiates the data connection to the server. Even though RFC 959 does not mandate implementation of passive FTP, the majority of FTP servers in the market offer this type of connection.

ANONYMOUS FTP. Before the advent and proliferation of HTTP, publication of information to an unspecific user group was fulfilled by FTP services offering guest authentication to anyone who so desired. These services were called anonymous FTP due to the fact that the guest user would be mapped to the FTP log-in ID "anonymous," whereby the user would pseudoauthenticate (and thereby identify) himself with his e-mail address.
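A sketch of such a guest session using Python's standard ftplib module (the host is a placeholder; the e-mail-style password is pure convention):

    from ftplib import FTP

    ftp = FTP("ftp.example.com", timeout=10)
    # "anonymous" log-in; the password is, by convention, the user's e-mail address.
    ftp.login(user="anonymous", passwd="guest@example.org")
    ftp.cwd("/pub")
    print(ftp.nlst())          # list the published directory
    ftp.quit()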
In practice, the user could have been using any password or e-mail address, whereas using one’s true e-mail address would still have been considered common courtesy. Although Web browsers still support the (social) protocol as such, the use of anonymous FTP has widely fallen by the wayside. There are probably three main reasons why: • With HTTP, anonymous publication of information can be handled in a much more efficient and seamless manner. • Disclosure of e-mail addresses on the part of the user (or a requirement to do so) is widely regarded as an unsafe and privacy-violating practice that will expose the user to address harvesting by spammers. • Guest access can expose the FTP server to security risks. Although these can be properly addressed by configuration measures, and a misconfigured HTTP server will expose quite a similar set of vulnerabilities (such as disclosure of credentials and directory traversal), the fact that the default configuration for FTP servers is authenticated communication, while for HTTP servers the default configuration will not require the user to authenticate, makes HTTP the preferred choice for server administrators. TRIVIAL FILE TRANSFER PROTOCOL (TFTP). TFTP (see Table 7.25) is a simplified version of FTP, which is used when authentication is not needed and quality of service is not an issue. TFTP runs on port 69/UDP. It should therefore only be used in trusted networks of low latency.
Table 7.25. TFTP Quick Reference
Ports: 69/UDP
Definition: RFC 1350 [131]
In practice, TFTP is used mostly in LANs for the purpose of pulling packages, for instance, in booting up a diskless client. Hypertext Transfer Protocol (HTTP). HTTP (see Table 7.26), originally conceived as a stateless, stripped-down version of FTP, was developed at the European Organization for Nuclear Research (CERN132) to support the exchange of information in Hypertext Markup Language (HTML).
It is probably not an overstatement to say that it changed the world — the protocol has proven to be uniquely suited to publish information anywhere, anytime, it has driven the proliferation of the Internet as a ubiquitous commodity, and it has probably generated more security headaches than any other protocol. This last view is not due to the design of the protocol, which is fairly lightweight, but to its popularity.
• On one hand, HTTP's popularity caused the deployment of an unprecedented number of Internet-facing servers, not a small number of which were deployed with out-of-the-box, vendor preset configurations — most of which at the time were geared at convenience, rather than security.
• If this were not bad enough, a whole number of previously closed applications were suddenly marketed as "Web enabled." By implication, not much time was spent on developing the Web interface in a secure manner, and authentication was simplified to become a browser-based style.
• On the other hand, HTTP will work from within most networks, shielded or not, and thereby lends itself to tunneling an impressive number of other protocols, even though HTTP supports neither quality of service nor bidirectional communication — both of which workarounds were quickly developed for.

HTTP does not support encryption and has a fairly simple authentication mechanism based on domains, which in turn are normally mapped to directories on a Web server. Although in principle HTTP authentication is extensible, it is most often used in the classic username/password style. As we already said, nothing of this is, strictly speaking, HTTP's fault. It was never really designed for its own success, by virtue of which security has been delegated in most cases to lower levels (TLS) or had to be coded into the application altogether.
Table 7.26. HTTP Quick Reference
Ports: 80/TCP; other ports are in use, especially for proxy services [133]
Definition: RFC 1945 [134], RFC 2109 [135], RFC 2616 [136]
HTTP PROXYING. Anonymizing Proxies — Because HTTP transmits data in cleartext and generates a slew of logging information on Web servers and proxy servers along the way, the resulting information can readily be used for competitive intelligence and for illegitimate activities such as industrial espionage, or simply to satisfy a Web master's curiosity.
To address this significant concern, a number of commercial and free services are available that allow anonymization of HTTP requests. These services are mainly geared toward the privacy market. A relatively popular free service is JAP137; a number of commercial services exist that interested readers may want to familiarize themselves with. Open Proxy Servers — Like open mail relays, open proxy servers accept and forward requests (such as GET) from arbitrary clients on the Internet. They can therefore be used as stepping stones or simply to obscure the origin of illegitimate requests. More importantly, an open proxy server bears an inherent risk of opening access to protected or intranet pages from the Internet. (A misconfigured firewall allowing inbound HTTP requests would need to be present on top of the open proxy to allow this to happen.) As a general rule, HTTP proxy servers should not allow queries from the Internet. It is best practice to separate application gateways (sometimes implemented as reverse proxies) from the proxy used for Web browsing, as the two have very different security levels and business importance. (It would be even better to implement the application gateway as an application proxy and not an HTTP proxy, but this is not always possible.) Content Filtering — In many organizations, the HTTP proxy is used as a means to implement content filtering, for instance, by logging or blocking traffic that has been defined or is assumed to be nonbusiness related. Although filtering on a proxy server or firewall as part of a layered defense can be quite effective in preventing, for instance, virus infections (though it should never be the only protection against viruses), it will be only moderately effective in preventing access to unauthorized services (such as certain remote-access services or file sharing) or the download of unwanted content.138
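Returning to open proxies for a moment: a proxy is "open" if it forwards a request for an arbitrary external URL arriving from an arbitrary source. The sketch below sends such a probe; the proxy address and test URL are assumptions for illustration, and checks of this kind should only be run against systems one is authorized to assess.

    import socket
    from urllib.parse import urlparse

    def looks_like_open_proxy(proxy_host: str, proxy_port: int = 8080,
                              test_url: str = "http://example.com/") -> bool:
        """Return True if the proxy relays a request for an arbitrary external URL."""
        target = urlparse(test_url)
        request = (f"GET {test_url} HTTP/1.1\r\n"
                   f"Host: {target.hostname}\r\n"
                   "Connection: close\r\n\r\n")
        with socket.create_connection((proxy_host, proxy_port), timeout=5) as sock:
            sock.sendall(request.encode("ascii"))
            status_line = sock.recv(4096).decode("ascii", errors="replace").split("\r\n", 1)[0]
        # A 2xx or 3xx answer for a foreign URL suggests the proxy forwards for anyone.
        return " 200 " in status_line or " 30" in status_line

    # print(looks_like_open_proxy("192.0.2.20"))   # hypothetical proxy under one's own control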
Table 7.27. HTTP over TLS Quick Reference
Ports: 443/TCP
Definition: RFC 2818 [140]
HTTP TUNNELING. HTTP tunneling is technically a misuse of the protocol (on the part of the designer of such tunneling applications). It became a popular feature with the rise of the first streaming video and audio applications and has been implemented in many applications that have a market need to bypass user policy restrictions.139
Usually, HTTP tunneling is based on encapsulating outgoing traffic in an HTTP request and incoming traffic in a response. Often, this will require the client to poll for inbound connections from time to time. Suitable countermeasures include filtering on a firewall or proxy server and assessing clients for installations of unauthorized software. However, a security officer will have to balance the business value and effectiveness of these countermeasures against the incentive for circumvention that a restriction of popular protocols will create. HTTP OVER TLS (HTTPS — SEE TABLE 7.27). It is important to note that for most applications, the security offered (and touted) through the use of HTTP over TLS is limited to confidentiality, i.e., HTTP over TLS offers protection against eavesdropping, depending on the strength of encryption jointly supported by both client and server. Common Web browsers, which make up the main share of clients, support cipher suites ranging from 56-bit DES up to 128-bit ciphers; the protection actually achieved depends on the suite negotiated and is not necessarily very high (see Chapter 3).
HTTP over TLS is broadly supported, is recognized even by the general public as a secure solution, and has become a de facto standard for online retailers of all kinds, all of which offer encrypted connections for their ordering and credit card billing systems. On the other hand, the very popularity of HTTPS can lull the user into a false sense of security. • The security offered by TLS is mostly used to protect against eavesdropping, while authentication is still based on username/password credentials. This opens up the possibility of man-in-the-middle attacks, which can be executed, for instance, by DNS spoofing. • It is a safe guess that many users will not hesitate to accept any server certificate presented to them and will ignore warnings about a mismatch between the fully qualified domain name (FQDN) of a site in DNS and that in the certificate; this is routinely exploited in phishing attacks (the checks a careful client performs are sketched below). • Even confidentiality may not be protected if encryption does not cover all parts of the site. This might occur if transactions are served through a server farm of different applications, and can be avoided by using a reverse proxy as a front end.141
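To illustrate the first two points, the following sketch connects the way a careful client should: it validates the server certificate against the platform trust store, enforces a hostname match, and reports the negotiated cipher suite. A man-in-the-middle or phishing setup will typically fail exactly these checks. The host name is a placeholder.

    import socket
    import ssl

    def inspect_tls_endpoint(host: str, port: int = 443) -> None:
        """Connect with certificate and hostname verification enabled, then report session parameters."""
        context = ssl.create_default_context()   # verifies the chain against the system trust store
        context.check_hostname = True             # reject certificates whose name does not match 'host'
        with socket.create_connection((host, port), timeout=5) as raw:
            with context.wrap_socket(raw, server_hostname=host) as tls:
                cipher_name, tls_version, secret_bits = tls.cipher()
                print(f"Negotiated {cipher_name} ({secret_bits} bits) over {tls_version}")
                print("Certificate subject:", tls.getpeercert().get("subject"))

    # inspect_tls_endpoint("www.example.com")   # placeholder host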
Table 7.28. S-HTTP Quick Reference
Ports: 80/TCP
Definition: RFC 2660 [142]
SECURE HYPERTEXT TRANSFER PROTOCOL (S-HTTP). As opposed to the HTTP over TLS protocol, which relies on an underlying TLS or SSL tunnel to protect its connection, S-HTTP (see Table 7.28) is an enhancement to HTTP 1.1 and aims to manage encryption entirely at the application layer.
S-HTTP is designed to coexist with HTTP and can use the same port. A server will distinguish an S-HTTP request from an HTTP request by header information. S-HTTP goes hand in hand with security extensions to HTML, as defined in RFC 2659.143 S-HTTP is a highly flexible protocol that allows negotiation and renegotiation of encryption mechanisms and security policies. Through its integration into the client/server requests, S-HTTP is more resilient than HTTP over TLS and, in particular, less susceptible to man-in-the-middle attacks and known plaintext attacks. An application can be selective about which parts of a request to encrypt and thereby enhance performance. Although the entire approach gives the application greater control, it also gives rise to an increase in complexity for the server administrator, as well as for the page designer/application programmer. Consequently, S-HTTP has not gained the same uniform acceptance and implementation support as HTTPS. RFC 2660 is explicitly marked as describing an “experimental protocol,” and no successor has been published. Passive and Active Content (HTML, ActiveX, Java, JavaScript). The subject of content structure and its potential side effects is not, strictly speaking, a network security topic but one of client security. It is mentioned here because security attacks carried through active content are executed in a network client, the Web browser.
Different types of active content come with different forms of protection: • Microsoft's ActiveX security model is based upon security zones that will determine the actions active content is allowed to perform on a given client. Microsoft's Internet Explorer browser (the only one in the market able to execute ActiveX controls) has four types of zones.144 • In Sun's Java, applets are executed in a sandbox, a protected environment isolated from the operating system.
• In Netscape's JavaScript, restrictions are built into the language, i.e., a number of dangerous actions are impossible to express to start with, while others will be executed with the privileges of the user. The WWW client (browser) being the point of attack, it is also the first bastion of defense. Network and system administrators need to ensure that security zones are configured properly in all browsers on a network and that clients are regularly updated through patches to close security holes. Peer-to-Peer Applications and Protocols. Peer-to-peer applications have gained recent popularity — or notoriety, depending on one's point of view — due to their controversial role in the sharing of intellectual property, mainly multimedia files.
Although bandwidth consumption, acceptable use conduct, and legal implications (see the legal, regulations, compliance and investigations chapter, Domain 10) may be sufficient to warrant attention to, policing of, and auditing of peer-to-peer applications in a business environment, we focus here on the security risks associated with peer-to-peer use. It is in the nature of the market that peer-to-peer (P2P) applications move in that they quickly gain popularity, and sometimes vanish just as quickly.145 Arguably, the first popular P2P application was Napster, whose demise was brought about by legal disputes that the company lost, based among other things on the fact that it was operating a set of servers through which intellectual property violations had been committed.146 The security risks of P2P applications stem from the nature of these applications, in particular the business model of their suppliers, as well as from the content distributed over them. • P2P applications are often designed to open an uncontrolled channel through network boundaries (normally through tunneling). They therefore provide a way for dangerous content, for instance, spyware applications or viruses, to enter an otherwise protected network. • The applications themselves may contain spyware or other undesired functionality. • Legal risks due to the nature of content that is often found in P2P networks can affect an organization even if it did not approve the use of P2P applications. Administrative Services. Remote Authentication Dial-in User Service (RADIUS). RADIUS (see Table 7.29) is an authentication protocol used mainly in networked environments, such as Internet service providers (ISPs) or similar services requiring single sign-on for layer 3 network access, where it provides scalable authentication combined with an acceptable degree of security. On top of this, RADIUS provides support for consumption measurement, such as connection time. RADIUS authentication is based on provision of simple username/password credentials. These credentials are encrypted by the client using a shared secret with the RADIUS server (sketched below). RADIUS is vulnerable to a number of cryptographic attacks and can be successfully attacked with a replay attack.148 RADIUS also suffers from a lack of integrity protection and from the fact that only specific fields are transmitted encrypted. Nonetheless, within its usual scope of deployment, RADIUS is generally considered to be sufficiently secure. An ISP, in particular, will want to balance the risk of unauthorized access (and theft of bandwidth) against deployment cost. As RADIUS is relatively easy to deploy and supported by a large number of devices in the market, the resulting cost reduction will offset the ISP's risk. Conversely, RADIUS may not be sufficiently secure for higher security requirements, such as access to a corporate network. In these cases, the added security offered by VPNs or IPSec is clearly desirable.
Table 7.29. RADIUS Quick Reference
Ports: 1812/TCP, 1812/UDP; 1813/TCP, 1813/UDP
Definition: RFC 2865 [147]
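The way the credentials are protected gives a feel for why RADIUS is considered only moderately strong: the User-Password attribute is XORed against an MD5 digest of the shared secret and the request authenticator (RFC 2865), so anyone who captures the packet and knows or guesses the shared secret can recover the password. A minimal sketch of the hiding step, with made-up values:

    import hashlib
    import os

    def hide_user_password(password: bytes, shared_secret: bytes, request_authenticator: bytes) -> bytes:
        """Obfuscate the User-Password attribute as described in RFC 2865."""
        # Pad the password to a multiple of 16 octets with zero bytes.
        padded = password + b"\x00" * (-len(password) % 16)
        result = b""
        previous = request_authenticator          # 16 random octets from the Access-Request header
        for i in range(0, len(padded), 16):
            digest = hashlib.md5(shared_secret + previous).digest()
            block = bytes(p ^ d for p, d in zip(padded[i:i + 16], digest))
            result += block
            previous = block                      # chaining: the next block keys off the previous ciphertext
        return result

    # Illustrative use with made-up values; reversing it only requires the same inputs.
    authenticator = os.urandom(16)
    hidden = hide_user_password(b"secret", b"shared-secret", authenticator)
    print(hidden.hex())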
Simple Network Management Protocol (SNMP). SNMP (see Table 7.30) is designed to manage network infrastructure. While its basic architecture is a fairly simple client/server architecture with a relatively limited set of commands, managing a network via SNMP is anything but simple, so within the scope of this book, we can only begin to highlight security issues with SNMP.
The SNMP architecture consists of a management server (called the manager in SNMP terminology) and a client (called the agent), usually installed on network devices such as routers and switches. SNMP allows the manager to retrieve (“get”) values of variables from the agent, as well as “set” them. Such variables could be routing tables or performance-monitoring information.151
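The get exchange just described is easy to picture on the wire. The following sketch hand-assembles an SNMPv1 GetRequest for sysDescr.0 using the default community string public (the BER length bytes are hard-coded for this fixed packet); note that the community string travels in cleartext, which underlies several of the weaknesses discussed below. The agent address is a placeholder.

    import socket

    def snmpv1_get_sysdescr(agent: str, community: bytes = b"public") -> bytes:
        """Send a minimal SNMPv1 GetRequest for sysDescr.0 (1.3.6.1.2.1.1.1.0) and return the raw reply."""
        oid = bytes([0x06, 0x08, 0x2B, 0x06, 0x01, 0x02, 0x01, 0x01, 0x01, 0x00])   # OBJECT IDENTIFIER
        varbind = bytes([0x30, 0x0C]) + oid + bytes([0x05, 0x00])                    # SEQUENCE { oid, NULL }
        varbind_list = bytes([0x30, len(varbind)]) + varbind
        pdu_body = (bytes([0x02, 0x01, 0x01]) +       # request-id = 1
                    bytes([0x02, 0x01, 0x00]) +       # error-status = 0
                    bytes([0x02, 0x01, 0x00]) +       # error-index = 0
                    varbind_list)
        pdu = bytes([0xA0, len(pdu_body)]) + pdu_body                                 # GetRequest-PDU
        community_tlv = bytes([0x04, len(community)]) + community                     # community string, in the clear
        msg_body = bytes([0x02, 0x01, 0x00]) + community_tlv + pdu                    # version = 0 (SNMPv1)
        message = bytes([0x30, len(msg_body)]) + msg_body
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.settimeout(5)
        sock.sendto(message, (agent, 161))
        reply, _ = sock.recvfrom(4096)
        sock.close()
        return reply

    # print(snmpv1_get_sysdescr("192.0.2.30"))   # hypothetical agent one is authorized to query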
Table 7.30. SNMP Quick Reference
Ports [149]: 161/TCP, 161/UDP; 162/TCP, 162/UDP
Definition: RFC 1157 [150]
Although SNMP has proven to be remarkably robust and scalable, and its near omnipresence suggests a high degree of resilience against common attacks, it does have a number of clear weaknesses. Some of them are by design; others are subject to configuration parameters. • Probably the most easily exploited SNMP vulnerability is a brute-force attack on default or easily guessable passwords. This may sound like a moot point, but given the scale of deployment, sometimes combined with the relative inexperience of network administrators, it is certainly a realistic scenario, and a potentially severe but easily mitigated risk at that. • Up to and including version 2, SNMP does not provide any real authentication or transmission security (these were only introduced with SNMPv3). Authentication consists of an identifier called a community string, by which a manager identifies itself to an agent (this string is configured into the agent), and a password sent with a command. In other words, passwords can be easily intercepted and commands sniffed or forged. Remote-Access Services. The services described under this section (TELNET, rlogin, and the X Window System, X11) are present in many UNIX installations and, when combined with NFS and NIS, provide the user with seamless remote working capabilities; they do, however, form a risky combination if not administered properly.
Conceptually, because they are built on mutual trust, they can be misused to obtain access and to horizontally and vertically escalate privileges in an attack. Their authentication and transmission capabilities are insecure by design; they therefore had to be retrofitted (as X11) or replaced altogether (TELNET and rlogin by SSH). TCP/IP Terminal Emulation Protocol (TELNET). TELNET (see Table 7.31) is a command line protocol designed to give command line access to another host. Although implementations for Windows exist, TELNET’s original domain was the UNIX server world, and in fact, a TELNET server is standard equipment for any UNIX server. (Whether it should be enabled is another question entirely, but in small LAN environments, TELNET is still widely used.)
Because TELNET is a fairly thin layer over TCP, a TELNET client can be used to emulate other protocols, as shown below in an HTTP dialog conducted from a TELNET session:

    Arthur$ ping elvis
    elvis is alive
    Arthur$ telnet elvis 80
    GET /
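The same dialog can be scripted with any raw TCP client, which underscores that TELNET is essentially unadorned, cleartext TCP; a minimal sketch (the host name is the same placeholder as above):

    import socket

    # Minimal re-creation of the dialog above with a plain TCP socket.
    with socket.create_connection(("elvis", 80), timeout=5) as sock:
        sock.sendall(b"GET / HTTP/1.0\r\n\r\n")
        print(sock.recv(4096).decode("latin-1", errors="replace"))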
Table 7.31. TELNET Quick Reference
Ports: 23/TCP
Definition: RFC 854 [152], RFC 855 [153]
TELNET offers little security, and indeed, its use poses serious security risks in untrusted environments: • TELNET is limited to username/password authentication. • TELNET does not offer encryption. • Once an attacker has obtained even a normal user's credentials, he has an easy road toward privilege escalation, as he can not only transfer data to and from a machine, but also execute commands. • As the TELNET server runs under system privileges, it is an attractive target of attack in itself; exploits in TELNET servers pave the way to system privileges for an attacker. It is therefore reasonable to discontinue use of TELNET over the Internet and on Internet facing machines. In fact, the standard hardening procedure for any Internet facing server should include disabling its TELNET service (which under UNIX systems would normally run under the name of telnetd). Remote Log-in (rlogin), Remote Shell (rsh), Remote Copy (rcp). In its most generic form, rlogin is a protocol used for granting remote access to a machine, normally a UNIX server. Similarly, rsh grants direct remote command execution, while rcp copies data from or to a remote machine (see Table 7.32).
If an rlogin daemon (rlogind) is running on a machine, rlogin access can be granted in two ways: by a central configuration file (/etc/hosts.equiv) or by a per-user configuration file (.rhosts in the user's home directory). By the latter, a user may grant access that was not permitted by the system administrator. The same mechanism applies to rsh and rcp, although they rely on a different daemon (rshd). Authentication can be considered host/IP address based. Although rlogin grants access based on user ID, the ID is not verified; i.e., the ID a remote client claims to possess is taken for granted if the request comes from a trusted host. The rlogin protocol transmits data without encryption and is hence subject to eavesdropping and interception. The rlogin protocol is of limited value; its main benefit, remote access without supplying a password, can also be considered its main drawback. It should only be used in trusted networks, if at all. A drastically more secure replacement is available in the form of SSH for rlogin, rsh, and rcp.
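Because a single user-created .rhosts file can silently extend trust that the administrator never granted, a routine hardening check is to look for such files (and for an overly permissive /etc/hosts.equiv). A minimal audit sketch, assuming home directories live under /home:

    import os

    def find_rhosts(home_root: str = "/home") -> list[str]:
        """Report per-user .rhosts files, each of which grants password-less rlogin/rsh access."""
        findings = []
        for entry in os.scandir(home_root):
            candidate = os.path.join(entry.path, ".rhosts")
            if entry.is_dir() and os.path.exists(candidate):
                findings.append(candidate)
        if os.path.exists("/etc/hosts.equiv"):
            findings.append("/etc/hosts.equiv")   # review its contents; a lone '+' trusts every host
        return findings

    print("\n".join(find_rhosts()))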
Table 7.32. rlogin, rsh, rcp Quick Reference
Ports: 513/TCP
Definition: RFC 1258
X Window System (X11). The X Window System (see Table 7.33) is a comprehensive environment for remote control and display of applications. Although its original realm is the world of UNIX workstations, implementations for other operating systems, such as Windows and Mac OS X, exist.
X Window is composed of a server (which is running on the user’s client (!)) used to display graphics and send local events such as mouse clicks back to the client (the remote machine (!)).154 The X Window System’s core functionality is also its key risk from a security perspective. X Window allows remote administration and remote display of graphics. If the server is not adequately configured, any client on the Internet can use it to display graphics on an attached console.155 This may sound humorous at first (and in fact has been the subject of many lab pranks). However, it would be equally possible to use an open X Window Server for eavesdropping, screen shots, and key logging. The X Window System is built on unencrypted communication. This can be addressed by using lower-layer encryption or by tunneling the X Window System, for instance, through SSH. Based on its simple security model, which can be used to subvert and compromise other, stronger authentication mechanisms, the X Window System should only be used in trusted environments, for instance, in a LAN-based UNIX cluster. Running X11 servers on the Internet bears serious risks, and Internet facing servers should never have X11 running. IDENTIFICATION, AUTHENTICATION, AND AUTHORIZATION. Identification and authorization under the X Window System is based on a host’s IP address or DNS name. A machine running an X Window Server can be dynamically configured to accept commands from a client (using the xhost command under UNIX).
Practically all of the approaches described earlier for subverting such an authorization scheme are feasible under X11. On a related note, an attacker who is able to log in on a given machine running an X Window Server will be able to access said server. To address this significant shortcoming, X11 has been fitted with a mechanism called xauth. Xauth is based on authentication strings called magic cookies, which are kept in a configuration file in the user's home directory under UNIX. It should be noted that this mechanism is fairly complex to handle and that using the traditional “opening of a display” on an X Window Server with the xhost command will not only override, but compromise, the xauth authorization scheme.
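Whether an X server is reachable from the network at all is easy to determine, since display :n listens on TCP port 6000 + n. The following sketch probes a host one is authorized to test (the address below is a placeholder); any display port found open deserves a close look at its xhost and xauth configuration.

    import socket

    def exposed_x_displays(host: str, max_display: int = 4) -> list[int]:
        """Return the display numbers whose X server port (6000 + n) accepts TCP connections."""
        open_displays = []
        for display in range(max_display + 1):
            try:
                with socket.create_connection((host, 6000 + display), timeout=2):
                    open_displays.append(display)
            except OSError:
                pass
        return open_displays

    # print(exposed_x_displays("192.0.2.40"))   # hypothetical lab host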
Table 7.33. X Window Services Quick Reference
Ports: 6000 and higher/TCP
Definition: RFC 1013 [156]
Table 7.34. Finger Quick Reference
Ports: 79/TCP
Definition: RFC 742 [158], RFC 1288 [159]
Information Services. Finger User Information Protocol. Finger (see Table 7.34) is an identification service that allows a user to obtain information about another user's last log-in time and whether that user is currently logged into a system. The “fingered” user can have information from two files in his home directory displayed (the .project and .plan files).
Developed as early as 1971, Finger is implemented as a UNIX daemon, fingerd. Finger has become less popular for several reasons: • Finger has been the subject of a number of security exploits.157 • Finger raises privacy and security concerns; it can easily be abused for social engineering attacks. • The user's self-presentation (an important social aspect in early UNIX networks) happens on Web pages today. For all practical purposes, the Finger protocol has become obsolete. Its use should be restricted to situations where no alternatives are available. Network Time Protocol (NTP). NTP (see Table 7.35) synchronizes computer clocks in a network.160 This can be extremely important for operational stability (for instance, under NIS), but also for maintaining consistency and coherence of audit trails, such as in log files.
Table 7.35. NTP Quick Reference
Ports: 123/TCP, 123/UDP
Definition: RFC 778 [161], RFC 891 [162], RFC 956 [163], RFC 958 [164], RFC 1305 [165]
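The NTP wire format is simple enough that a rudimentary client fits in a few lines: a request is a 48-byte UDP datagram whose first byte encodes leap indicator, version, and mode, and the reply carries the server's transmit timestamp as seconds since 1900. A minimal sketch (the server below is a public pool used purely as an example):

    import socket
    import struct

    NTP_TO_UNIX_EPOCH = 2208988800   # seconds between 1900-01-01 and 1970-01-01

    def query_sntp(server: str) -> float:
        """Ask an (S)NTP server for the current time and return it as a UNIX timestamp."""
        request = b"\x1b" + 47 * b"\x00"              # LI = 0, version = 3, mode = 3 (client)
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.settimeout(5)
        sock.sendto(request, (server, 123))
        reply, _ = sock.recvfrom(512)
        sock.close()
        transmit_seconds = struct.unpack("!I", reply[40:44])[0]   # transmit timestamp, integer part
        return transmit_seconds - NTP_TO_UNIX_EPOCH

    # print(query_sntp("pool.ntp.org"))   # a production network should rely on its own trusted servers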
A variant of NTP exists in the Simple Network Time Protocol (SNTP), offering a less resource-consuming but also less exact form of synchronization. From a security perspective, our main objective with NTP is to prevent an attacker from changing time information on a client or a whole network by manipulating its local time server. NTP can be configured to restrict access based upon IP address. From NTP version 3 onward, cryptographic authentication has been available, based upon symmetric encryption, to be supplemented by public key cryptography in NTP version 4. To make a network robust against accidental or deliberate timing inaccuracies, a network should have its own time server and possibly a dedicated, highly accurate clock. As a standard precaution, a network should never depend on one external time server alone, but synchronize with several trusted time sources. Thus, manipulation of a single source will have no immediate effect. To detect desynchronization, standard logging mechanisms can be used with NTP to ensure synchronicity of time stamping. Voice-over-IP (VoIP). Although the possibility of transmitting voice over Internet connections has existed for a long time, only in recent years has the widespread acceptance of broadband home access created a market for Voice-over-IP solutions. In essence, the Internet and telephony are switching roles — while previously the telephone network was a ubiquitous commodity that would carry Internet dial-up traffic,166 the Internet is now taking over the role of the principal commodity.167
Increasingly, VoIP is replacing corporate telephony networks. While the benefits, such as negligible connection cost at a comparable initial investment and a larger degree of configurability, are obvious, VoIP networks are affected by security risks in ways that would have left traditional telephony systems untouched, such as being assailable by viruses and hacking and being dependent on electric power at all communication endpoints. In addition, VoIP systems are significantly more complex and require greater expertise to operate. For public services, questions of interconnectivity and interoperability come into focus. From a legal perspective, the situation is still unclear as to whether VoIP networks should be regulated in the same way as the public switched telephone network (PSTN). One common requirement is the availability of gateways to public emergency services. Another is access for lawful interception, which, while legitimate from a public policy perspective, raises concerns from a security perspective because of the potential design of backdoors into existing systems, which could then be exploited by third parties.168
Table 7.36. SIP Quick Reference
Ports: 5060/TCP, 5060/UDP
Definition [171]: RFC 2543 [172], RFC 3261 [173], RFC 3325 [174]
Deployment of VoIP services may raise security concerns for its carrier network, for instance, with regard to enabling interconnectivity with other VoIP applications in a secure manner. Last but not least, a form of backup communication channel should be available with any VoIP installation, to have independent communication channels available in case of a disaster or network outage. Readers may also want to familiarize themselves with protocols such as H.323.169 Session Initiation Protocol (SIP). As its name implies, SIP170 (see Table 7.36)
is designed to manage multimedia connections. It is not a comprehensive protocol suite and leaves much of the actual payload data transfer to other protocols, for instance, the Real-Time Transport Protocol (RTP). A number of phone companies have begun offering SIP services to end users. SIP has been included in applications such as Microsoft Windows Messenger, and open-source clients have been developed. SIP is designed to support digest authentication structured by realms, similar to HTTP (basic username/password authentication has been removed from the protocol as of RFC 3261). In addition, SIP provides integrity protection through MD5 hash functions. SIP supports a variety of encryption mechanisms, such as TLS.175,176 Privacy extensions to SIP, including encryption and caller ID suppression, have been defined in extensions to the original Session Initiation Protocol (RFC 3325). Although SIP, which has been closely modeled after HTTP (see Section Hypertext Transfer Protocol), is a peer-to-peer application by design, it is possible to proxy SIP and thereby build a scalable and manageable public infrastructure. Conversely, SIP does not readily work across network address translation (NAT), as it may be impossible for at least one client to address the other. This results in a conflict of objectives between network security and VoIP operation that must be resolved in a secure manner, for instance, by building a gateway in the form of a session controller. This controller can act as a proxy for SIP sessions (though not necessarily for the streaming protocols carrying the actual voice information). On a related note, a SIP client is also a server that can receive requests from another machine. This may be considered a general risk for the
machine the software is deployed to, because, as with any server software, there is a risk of security gaps such as buffer overflows that can be exploited over the network. Proprietary Applications and Services. Readers will want to familiarize themselves with proprietary applications, such as Skype.177
Most commercial chat services mentioned in Section Instant Messaging are also equipped with Voice-over-IP capabilities. General References For the reader’s reference and further study, we are listing a number of books that have been useful in preparation of this chapter. R. Bace, Intrusion Detection, Indianapolis, IN: MacMillan Technical Publishing, 2000. R. Bates, D. Gregory, and J. Ranade, Voice and Data Communications Handbook, New York: McGraw-Hill, 1998. T. Bautts et al., Hack Proofing Your Wireless Network, Rockland, MA: Syngress Publishing, 2002. C. Brenton, Mastering Network Security, Alameda, CA: Network Press, 1999. J. Cooper, Computer and Communications Security: Strategies for the 1990s, New York: McGrawHill, 1989. C. Davis, IPSec Securing VPNs, New York: Mc-Graw-Hill, 2001. N. Doraswamy and D. Harkins, IPSec: The New Security Standard for the Internet, Intranets and Virtual Private Networks, Upper Saddle River, NJ: Prentice Hall, 1999. Aaron E. Earle, Wireless Security Handbook, Boca Raton: Auerbach Publications, 2005. T. Escamilla, Intrusion Detection, Network Security beyond the Firewall, New York: John Wiley & Sons, 1998. B. Furht and M. Ilyas, Wireless Internet Handbook: Technologies, Standards, and Applications, New York: Auerbach Publications, 2003. S. Garfinkel and G. Spafford, Practical Unix and Internet Security, Sebastapol, CA: O’Reilly and Associates, 1996. S. Garfinkel and G. Spafford, Web Security and Commerce, Sebastapol, CA: O’Reilly and Associates, 1997. D. Harley, R. Slade, and U. Gattiker, Viruses Revealed: Understand and Counter Malicious Software, New York: McGraw-Hill, 2001. G. Held, The ABCs of IP Addressing, New York: Auerbach Publications, 2001. G. Held, The ABCs of TCP/IP, New York: Auerbach Publications, 2002. G. Held, Understanding Data Communications, Indianapolis: Sams Publishing, 1994. M. Ilyas and S. Ahson, Handbook of Wireless Local Area Networks: Applications, Technology, Security, and Standards, New York: Auerbach Publications, 2005. S. Jones and R. Kovac, Introduction to Communications Technologies: A Guide for Non-Engineers, New York: Auerbach Publications, 2002. M. Kaeo, Designing Network Security: A Practical Guide to Creating a Secure Network Infrastructure, Indianapolis: Cisco Press, 1999. V. Kasacavage, Complete Book of Remote Access: Connectivity and Security, New York: Auerbach Publications, 2002. L. Klander, Hacker Proof: The Ultimate Guide to Network Security, Las Vegas: Jamsa Press, 1997. M. Köhntopp, M. Seeger, and L. Gundermann, Firewalls, Munich: Computerwoche, 1998. M.K. Littman, Building Broadband Networks, New York: Auerbach Publications, 2002. M. Maxim and D. Plooino, Wireless Security, New York: McGraw-Hill, 2002. S. McClure, J. Scambray, and G. Kurtz, Hacking Exposed: Network Security Secrets and Solutions, San Francisco: Osborne/McGraw-Hill, 1999.
R. Nichols, D. Ryan, and J. Ryan, Defending Your Digital Assets against Hackers, Crackers, Spies and Thieves, New York: McGraw-Hill, 2000. T.C. Piliouras, Network Design: Management and Technical Perspectives, 2nd ed., New York: Auerbach Publications, 2004. A. Rubin, D. Geer, and M. Ranum, Web Security Sourcebook: A Complete Guide to Web Security Threats and Solutions, New York: John Wiley & Sons, 1997. B. Schneier, E-mail Security: How to Keep Your Electronic Messages Private, New York: John Wiley & Sons, 1995. C. Scott, P. Wolfe, and M. Erwin, Virtual Private Networks, Sebastopol, CA: O'Reilly and Associates, 1999. A. Tanenbaum, Computer Networks, Upper Saddle River, NJ: Prentice Hall, 1998. J.S. Tiller, A Technical Guide to IPSec Virtual Private Networks, New York: Auerbach Publications, 2000. H.F. Tipton and M. Krause, Information Security Management Handbook, Part I, Boca Raton, FL: Auerbach, 1999. A. Tiwana, Web Security, London: Butterworth-Heinemann, 1999. J.R. Vacca, Public Key Infrastructure: Building Trusted Applications and Web Services, New York: Auerbach Publications, 2004. T. Wadlow, The Process of Network Security, Reading, MA: Addison-Wesley, 2000. S. Young and D. Aitel, The Hacker's Handbook: The Strategy behind Breaking into and Defending Networks, New York: Auerbach Publications, 2003.
Endnotes 1. Because protocol design does not always happen according to the OSI model, there may be a certain ambiguity as to which layer a protocol can best be mapped. This is especially true on OSI layers 5 and 6. 2. In technical terms, this is accomplished by defining different sections of defined length or structure within a data packet. 3. As a mundane joke, the system user is sometimes referred to as layer 8. 4. ISO/IEC 7498-1:1994, http://isotc.iso.org/livelink/livelink/fetch/2000/2489/Ittf_Home/ PubliclyAvailableStandards.htm, current as of August 22, 2005. 5. Not unrelated, it was not uncommon in the early days of the field for an organizational integration of IT security into network security, or slightly more explicit: the person who managed the firewall is also — formally or informally — considered the security person for everything. This is still a normal situation in smaller IT organizations. 6. It should be intuitively clear that one gap in a corporation’s perimeter defense can invalidate the entire investment made elsewhere. 7. Although it is not uncommon in all areas of security for users to work around security measures perceived as an obstacle to achieving their primary objectives, network security measures (such as filtering) can become especially annoying (and correspondingly provide a high incentive for circumvention), as they are external to the user’s activity and can come as a surprise to the user, i.e., they are not taken into account in setting objectives. 8. The oft-quoted “80 percent of all attacks come from the inside” highlights that perimeter defense has become successful and commoditized, but not put an end to security breaches per se. 9. An example for the former would be controlling access for inbound VPN connection, where the network perimeter becomes the primary controlling instance for enabling access to a large number of other services. An example for the latter would be blocking of certain applications, such as peer-to-peer applications on a firewall. 10. A. Pfitzmann, Technologies for multilateral security, in Multilateral Security in Communications, G. Müller and K. Rannenberg, Eds., Addison-Wesley, Reading, MA, 1999.
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® 11. The reason this is especially bad is that the trusted component is under limited physical control and open to manipulation, even by a legitimate user. As laptop users may have or avail themselves of system-level access to their machines, laptops remain an extremely weak link in the network security chain. 12. Tunneling through HTTP has in fact become a standard feature in streaming video and audio clients, real-time communication (chat) clients, even from major vendors, and, of course, peer-to-peer applications. It does defeat the purpose of layering entirely (and is discouraged), but it does quite effectively solve a user’s (or even administrator’s) convenience problem at the cost of security. 13. Several such “poor man’s VPN” services exist, offering remote administration capabilities through HTTP, thereby enabling access to a PC from the outside through a firewall and from a Web browser in an Internet café. The PC needs to stay switched on and will poll every so often for inbound connection requests. These services are described as secure by their vendors because the connection is encrypted. However, it is clear that they can easily be misused to circumvent corporate policy. 14. A trivial example is IMSI catchers (see Section Mobile Telephony) used to intercept GSM phone calls. As the network (or rather the GSM base station) does not authenticate against the mobile phone, this can be used to — lawfully or illegitimately — intercept and eavesdrop mobile phone calls. Have you ever wondered about the authenticity of a wireless LAN in a hotel, station, or airport when you were asked your credit card number for an hour of Internet access? 15. This was applied, for instance, by the administrators of the server www.whitehouse.gov in the W32/Nimda worm outbreak. 16. For an account of one such case, see S. Berinato, How a Bookmaker and a Whiz Kid Took on an Extortionist — and Won, http://www.csoonline.com/read/050105/extortion.html, current as of August 9, 2005. 17. For an account of one such case, see J. Libbenga, Lufthansa Online Activist Found Guilty, http://www.theregister.co.uk/2005/07/05/lufthansa_demo/, current as of August 9, 2005. 18. A conceivable and simple example would be logging of telephone or fax calls or registering e-mail traffic with another company. An unusual traffic pattern might already allow conclusions on business activities, such as an acquisition. 19. It has become more common for wireless sniffing to become targeted and prosecuted for eavesdropping, but also for stealing bandwidth. This so far has not prevented the detection and hijacking of weakly secured wireless networks to become a sport, based among other things on the fact that the targets of such attacks may rarely become aware of it, while others may not even care. As always in security, there is a fine line between making someone aware of an existing risk and outright trespassing. Finding the secure speed requires more than just technical skills, but we are optimistic. 20. B. Schneier, Attack Trees, http://www.schneier.com/paper-attacktrees-ddj-ft.html, current as of August 9, 2005, and Dr. Dobb’s Journal, December 1999. 21. It is important to note that depending on the type of an attack, the attacker might stop at any given level, as he may have already achieved his goal. 
For instance, gathering intelligence of a DNS hierarchy could be the final goal of someone preparing a social engineering attack on a company (so this is where it would end from a network perspective), and escalation of privileges would be unnecessary in a denial-of-service attack. 22. As a real-world example, a rudimentary form of content security on the Web involves using a hidden path on a Web server (i.e., a path that no link is published to). As a number of businesses or educational institutions found out to their detriment, paths can be guessed (i.e., by varying names, dates, etc.), discovered (e.g., by search engines), or tried using brute-force attacks. From a security perspective, the defender would bear some responsibility for the lack of protection, which might even weaken their legal position, depending, of course, on local legislation.
Telecommunications and Network Security 23. The Internet Storm Center (http://isc.sans.org/index.php?on=port) reports ongoing scans on the Internet. It is educational to take a look at the history of these scans, which, for instance, clearly mark major virus outbreaks or the discovery of new security exploits. 24. A very common way of obtaining access to a system is through a Web server that may be running on the system. If the Web server is improperly configured, it may open access to files or allow execution of code through its common gateway interface (CGI). Similarly, an anonymous FTP server (allowing guest access to everybody) is an easy target for attacks for this very reason. Countermeasures include limiting access to a directory subtree and providing the FTP server with its own password directory. 25. Buffer overflows provide a common venue for privilege escalation. If an attacker has gained access to a system allowing him or her to execute nonprivileged commands, this access can then be escalated by exploiting an application with missing or defective boundaries checking on input parameters to overwrite other storage areas in a way that gives the attacker access to privileged commands. Buffer overflows are a common method to attack Web servers, for instance, this was one attack built into the 2001 W32/Nimda worm (for details cf. CERT® Advisory CA-2001-26, Nimda Worm, http://www.cert.org/advisories/CA-2001-26.html). The worm would generate a very long HTTP request to an IIS Web server, which would then execute code (to download the worm) with the privileges of the Web server, which was running under administrator privileges. 26. The Host Based Intrusion Detection FAQ (http://www.sans.org/resources/idfaq/ host_based.php, current as of August 29, 2005) provides a brief overview of existing offerings in this market. 27. A list of common network security tools, including a variety of commercial ones, is available on http://www.insecure.org/tools.html, current as of August 29, 2005. 28. The Snort home page is available at http://www.snort.org/, current as of August 29, 2005. 29. The Nessus Open Source Vulnerability Scanner Project home page can be found at http://www.nessus.org/, current as of August 29, 2005. 30. Nmap home page, http://www.insecure.org/nmap/, current as of August 29, 2005. 31. It should be noted that fiber-optics communications is not immune to interception per se. 32. V90 home page, http://www.v90.com/, current as of August 22, 2005. 33. CDMA2000 is a registered trademark of the Telecommunications Industry Association (TIA-USA) in the United States. 34. See also http://www.cellular.co.za/technologies/cdma/cdma2000.htm, current as of August 25, 2005. See also http://www.cellular.co.za/technologies/cdma/ cdma2000.htm. 35. V92 home page, http://www.v92.com/, current as of August 22, 2005. 36. M.S. Gast, 802.11 Wireless Networks: The Definitive Guide, 2nd ed., O’Reilly Media, Sebastopol, CA, 2005, p. 33. 37. 802.3 IEEE Standard for Information Technology, Institute of Electrical and Electronics Engineers, New York, 2002, pp. 44–47. 38. M.S. Gast, 802.11 Wireless Networks: The Definitive Guide, 2nd ed., O’Reilly Media, Sebastopol, CA, 2005, p. 216. 39. See also http://www.nextgendc.com/, current as of August 28, 2005. 40. See also http://wimax.com/education/faq/faq29. 41. Y. Rekhter, B. Moskowitz, D. Karrenberg, G.J. de Groot, and E. Lear, RFC 1918: Address Allocation for Private Internets, February 1996, http://www.ietf.org/rfc/rfc1918.txt, current as of October 2, 2005. 
42. We are including software, such as browsers, e-mail clients, compilers, word processors, etc.
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® 43. 6Net home page, http://www.6net.org/, current as of August 22, 2005. 44. 6Bone home page, http://www.6bone.net/, current as of August 22, 2005. 45. A. Huttunen et al., RFC 3948: UDP Encapsulation of IPsec ESP Packets, January 2005, http://www.ietf.org/rfc/rfc3948.txt, current as of October 3, 2005. 46. S. Kent and R. Atkinson, RFC 2406: IP Encapsulating Security Payload (ESP), November 1998, http://www.ietf.org/rfc/rfc2406.txt, current as of August 22, 2005. 47. See also http://www.snailbook.com/faq/ssh-1-vs-2.auto.html, current as of August 22, 2005. 48. D. Harrington, R. Presuhn, and B. Wijnen, RFC 2661: Layer Two Tunneling Protocol L2TP, January 1998, http://www.ietf.org/rfc/rfc2661.txt, current as of August 22, 2005. 49. See also http://www.packetfactory.net/firewalk/firewalk-final.html, current as of August 22, 2005. 50. J. Postel, Ed., RFC 793: Transmission Control Protocol, September 1981, http://www.ietf.org/rfc/rfc793.txt, current as of October 2, 2005. 51. Internet Assigned Numbers Authority (IANA) is a collection of registration services provided by the Internet Corporation for Assigned Names and Numbers (ICANN, http://www.icann.org/, current as of October 2, 2005). The home page of IANA can be found at http://www.iana.org/, current as of October 2, 2005. A memorandum of understanding about the respective roles of the Internet Engineering Task Force (IETF, http://www.ietf.org/, current as of October 2, 2005), ICANN, and IANA that may be instructive to the reader can be found at http://www.icann.org/general/ietf-icann-mou01mar00.htm, current as of October 2, 2005. 52. An up-to-date list of port number assignments can be found at http://www.iana.org/assignments/port-numbers, current as of October 2, 2005. Interestingly, of the 1024 wellknown ports, roughly two thirds have been assigned or reserved, and of the 48,128 registered ports, less than one tenth have been used. 53. It is important to note that technically any application can use any port without IANA registration. A common example is the occasional use of port 81 for HTTP proxy services. This is not a problem as long as the administrator is aware of possible side effects, such as effectively inhibiting use of the registered application for this port, but it is generally considered bad practice. 54. See, for instance, SIP (Section Voice-over-IP). 55. J. Postel, RFC 768: User Datagram Protocol, August 28, 1980, http://www.ietf.org/rfc/rfc768.txt, current as of October 2, 2005. 56. H. Schulzrinne, S. Casner, R. Frederick, and V. Jacobson, RFC 3550: RTP: A Transport Protocol for Real-Time Applications, July 2003, http://www.apps.ietf.org/rfc/rfc3550.html, current as of October 2, 2005. 57. R. Stewart, Q. Xie, K. Morneault, C. Sharp, H. Schwarzbauer, T. Taylor, I. Rytina, M. Kalla, L. Zhang, and V. Paxson, RFC 2960: Stream Control Transmission Protocol, October 2000, http://www.ietf.org/rfc/rfc2960.txt, current as of October 2, 2005. 58. S. Bellovin, RFC 1948: Defending against Sequence Number Attacks, May 1996, http://www.ietf.org/rfc/rfc1948.txt, current as of October 2, 2005. 59. Description of SYN cookies: http://cr.yp.to/syncookies.html. 60. Interestingly, remote procedure calls can depend on directory services and vice versa. 61. Sun Microsystems, RFC 1050: Remote Procedure Call Protocol Specification, April 1988, http://www.ietf.org/rfc/rfc1050.txt, current as of October 2, 2005. 62. 
Sun Microsystems, RFC 1057: RPC: Remote Procedure Call Protocol Specification Version 2, June 1988, http://www.ietf.org/rfc/rfc1057.txt, current as of October 2, 2005. 63. R. Srinivasan, RFC 1831: RPC: Remote Procedure Call Protocol Specification Version 2, August 1995, http://www.ietf.org/rfc/rfc1831.txt, current as of October 2, 2005. 64. R. Srinivasan, RFC 1833: Binding Protocols for ONC RPC Version 2, August 1995, http://www.ietf.org/rfc/rfc1833.txt, current as of October 2, 2005. 65. A. Chiu, RFC 2695: Authentication Mechanisms for ONC RPC, September 1999, http://www.ietf.org/rfc/rfc2695.txt, current as of October 2, 2005.
Telecommunications and Network Security 66. An overview of weaknesses in the Domain Name System can be found in C. Schuba, Addressing Weaknesses in the Domain Name System Protocol, M.Sc. thesis, Purdue University, August 1993, http://ftp.cerias.purdue.edu/pub/papers/christoph-schuba/schuba-DNS-msthesis.pdf, current as of August 21, 2005. 67. DNS does not, in fact, enforce uniformity, and in principle, it is possible to set up TLDs or even root servers that are running outside of the canonic Domain Name System. Attempts to set up competing root servers (for instance, Open Root Server Confederation, http://www.open-rsc.org/, and OpenNIC, http://www.opennic.unrated.net/, current as of August 21, 2005), however, have not met with a significant degree of commercial or political success, wherefore this option is considered academic as far as the Internet is concerned. In corporate networks, it is likewise possible to use private TLDs as a means of preventing information disclosure (cf. Section Domain Name Services). 68. Through so-called MX (mail exchanger) records, DNS provides specific support for SMTP (cf. Section Post Office Protocol), which is necessary to support SMTP routing. In principle, similar elements for other protocols would be trivial to implement. One such measure has received recent attention: http://antispam.yahoo.com/domainkeys. 69. P. Mockapetris, RFC 882: Domain Names — Concepts and Facilities, November 1983, http://www.ietf.org/rfc/rfc0882.txt, current as of August 21, 2005. 70. P. Mockapetris, RFC 1034: Domain Names — Concepts and Facilities, November 1987, http://www.ietf.org/rfc/rfc1034.txt, current as of August 21, 2005. 71. P. Mockapetris, RFC 1035: Domain Names — Implementation and Specification, November 1987, http://www.ietf.org/rfc/rfc1035.txt, current as of August 21, 2005. 72. DNSSEC home page, http://www.dnssec.net/, current as of August 21, 2005. 73. This has become especially important in the area of e-mail spam, where fake e-mail sender addresses are the rule and a mail server currently has limited possibility to ensure its counterpart is the server it claims to be. 74. IANA, RFC 3330: Special-Use IPv4 Addresses, September 2002, http://www.ietf.org/rfc/rfc3330.txt, current as of August 21, 2005. 75. For instance, to block banner ads, cf. http://www.everythingisnt.com/hosts.html, current as of August 21, 2005. 76. Users unfamiliar with the workings of the WWW are sometimes surprised to find that knowing a pointer (an address) will not enable one to access the target. A closely related, all too common misconception is that keeping addresses secret will prevent illegitimate access — this of course is not true. 77. A convention of a01.business.*, a02.business.*, a03.business.*, … will obviously be a lot less useful for an attacker than payroll.hr.business.*, presentations.auditorium.business.*, and accesslog.reception.business.*. 78. Easily one of the most prominent cases of this kind is the White House. Although its proper domain is whitehouse.gov (with http://www.whitehouse.gov/ being its Web address), very different Web services were offered under the equivalent .com address. It has become a common pastime to register domain names similar to politicians’ names and use them to voice opposing positions. 79. X.500 is a standard for hierarchical directories developed by the International Telecommunications Union and designed to work with X.400 e-mail services. 
X.500 defines a Directory Access Protocol (DAP) for queries that is fairly complex, thereby giving an incentive for the development of a lightweight DAP. While X.400-based e-mail services are limited to certain niches, X.500-based directory services that were originally designed to support them have gained much wider popularity, through the adoption of X.509 (see Chapter 3) for management of PKI certificates. For the interested reader, a description of X.500 is available at http://sec.cs.kent.ac.uk/x500book/, current as of August 21, 2005. 80. W. Yeong, T. Howes, and S. Kille, RFC 1777: Lightweight Directory Access Protocol, March 1995, http://www.ietf.org/rfc/rfc1777.txt, current as of August 21, 2005.
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® 81. See, for instance, G. Barr, Security Issues with LDAP Connections (http:// search.cpan.org/~gbarr/perl-ldap/lib/Net/LDAP/Security.pod, current as of August 21, 2005) for a discussion of alternative and supplementary security solutions for LDAP. 82. NetBIOS Extended User Interface (NetBEUI) is a connection-oriented layer 4 protocol. It is nonroutable, which makes it unsuited for larger LANs. 83. Internetwork packet exchange (IPX) is a connectionless layer 3 protocol developed for Novell NetWare, supplemented by sequence packet exchange (SPX) on layer 4. IPX/SPX has lost its former market share almost entirely to TCP/IP. 84. NetBIOS Working Group, RFC 1001: Protocol Standard for a NetBIOS Service on a T C P / U D P Tr a n s p o r t : C o n c e p t s a n d M e t h o d s , M a r c h 1 9 8 7 , h t tp://www.ietf.org/rfc/rfc1001.txt, current as of October 2, 2005. 85. NetBIOS Working Group, RFC 1002: Protocol Standard for a NetBIOS Service on a T C P / U D P Tr a n s p o r t : D e t a i l e d S p e c i fi c a t i o n s , M a r c h 1 9 8 7 , h t tp://www.ietf.org/rfc/rfc1002.txt, current as of October 2, 2005. 86. An overview of NIS security weaknesses is available in D. Hess, D. Safford, and U. Pooch, A Unix Network Protocol Security Study: Network Information Service, Texas A&M University, http://www.deter.com/unix/papers/nis_paper.pdf, current as of August 22, 2005. 87. A starting point for obtaining more in depth information is K. Westphal, NFS and NIS Security, January 22, 2001, http://www.securityfocus.com/infocus/1387, current as of August 22, 2005. 88. Microsoft CIFS home page: ftp://ftp.microsoft.com/developr/drg/CIFS/cifs.html, current as of August 15, 2005. 89. Samba project home page: http://us1.samba.org/samba/, current as of August 15, 2005. 90. However, see the description of the CIFS protocol by Microsoft on http://msdn.microsoft.com/library/en-us/cifs/protocol/cifs.asp, current as of October 2, 2005. 91. Sun Microsystems, RFC 1094: NFS: Network File System Protocol Specification, March 1989, http://www.ietf.org/rfc/rfc1094.txt, current as of August 15, 2005. 92. B. Callaghan, B. Pawlowski, and P. Staubach, RFC 1813: NFS Version 3 Protocol Specification, June 1995, http://www.ietf.org/rfc/rfc1813.txt, current as of August 15, 2005. 93. S. Shepler et al., RFC 3010: NFS Version 4 Protocol, December 2000, http://www.ietf.org/rfc/rfc3010.txt, current as of August 15, 2005. 94. S. Shepler et al., RFC 3530: Network File System (NFS) Version 4 Protocol, http://www.ietf.org/rfc/rfc3530.txt, current as of August 15, 2005. 95. Vendors have published their own instructions on how to securely configure NFS. An overview of NFS security configuration questions for Linux systems, which is also instructive for administrators of other UNIX platforms, is available at http://nfs.sourceforge.net/nfs-howto/security.html, current as of August 16, 2005. 96. It is possible to do the same in reverse, i.e., trick a client into assessing a server that is not the correct one. 97. An instructive configuration overview of secure NFS is available at http://www.unet. univie.ac.at/aix/aixbman/commadmn/nfs_secure.htm, current as of August 16, 2005. 98. NFS version 4 home page: http://playground.sun.com/pub/nfsv4/webpage/, current as of August 16, 2005. 99. 
The sorting of “transport” layer security under the presentation layer may be surprising; we consider it as a protocol that adds security to the transport layer (that the transport layer fails to provide), not on the transport layer. 100. T. Dierks and C. Allen, RFC 2246: The TLS Protocol, January 1999, http://www.ietf.org/rfc/rfc2246.txt, current as of October 2, 2005. 101. A. Medvinsky and M. Hur, RFC 2712: Addition of Kerberos Cipher Suites to Transport Layer Security (TLS), October 1999, http://www.ietf.org/rfc/rfc2712.txt, current as of October 3, 2005. 102. OpenSSL home page: http://www.openssl.org/, current as of August 25, 2005.
103. Of the free MTAs, sendmail has been plagued for a while by frequent security holes. In the meantime, sendmail has become more stable and secure; however, it is still a fairly complex piece of software to configure, which bears risks of misrouting or loss of e-mail. The user will want to consult the vendor's Web site as to the specific security gaps of the software used in specific installations.
104. J. Postel, RFC 821: Simple Mail Transfer Protocol, August 1982, http://www.ietf.org/rfc/rfc0821.txt, current as of October 2, 2005.
105. D. Crocker, RFC 822: Standard for ARPA Internet Text Messages, August 13, 1982, http://www.ietf.org/rfc/rfc0822.txt, current as of October 2, 2005.
106. J. Klensin, Ed., RFC 2821: Simple Mail Transfer Protocol, April 2001, http://www.ietf.org/rfc/rfc2821.txt, current as of October 2, 2005.
107. The liabilities associated with operating an open mail relay server should encourage any system administrator to ensure their mail servers are properly configured.
108. Examples of such blacklist services are at http://www.rahul.net/falk/index.html#blacklists, current as of August 28, 2005.
109. The term spam is derived from a Monty Python sketch, in which a horde of Vikings recite the word spam ad nauseam, rendering any conversation impossible.
110. Did you know that the word sex means the number 6 in Swedish? Ambiguities of this kind exist galore, and a simple keyword filter will never be able to properly analyze the (omni-important) context in which a word was used.
111. A general set of provisions aimed at the sendmail MTA but useful for other mail servers is available at http://www.sendmail.org/antispam.html, current as of August 28, 2005.
112. J. Myers, RFC 1734: POP3 AUTHentication command, December 1994, http://www.ietf.org/rfc/rfc1734.txt, current as of October 2, 2005.
113. J. Myers and M. Rose, RFC 1939: Post Office Protocol — Version 3, May 1996, http://www.ietf.org/rfc/rfc1939.txt, current as of October 2, 2005.
114. M. Crispin, RFC 1730: Internet Message Access Protocol — Version 4, December 1994, http://www.ietf.org/rfc/rfc1730.txt, current as of October 2, 2005.
115. M. Crispin, RFC 3501: Internet Message Access Protocol — Version 4rev1, March 2003, http://www.ietf.org/rfc/rfc3501.txt, current as of October 2, 2005.
116. Yes, these times existed and glorious indeed they were. Even today, the more useful information can be retrieved from newsgroups.
117. It is instructive from a social perspective (less from a technical one) that the newsgroup alt.hackers introduced a self-moderation mechanism aimed at requiring the would-be poster to possess certain technical skills to manipulate his posting in such a way that it would bypass a simple filter. We will not give away the trick, of course.
118. A standing Usenet joke is the word *plonk*, which supposedly reflects the sound of a new entry hitting the bottom of the (already crowded but infinitely large) kill file.
119. Although commercial or free posting services existed (and still exist) that allow this, legal precedents forced the disclosure of the users' true identities.
120. M. Horton and R. Adams, RFC 1036: Standard for Interchange of Usenet Messages, December 1987, http://www.ietf.org/rfc/rfc1036.txt, current as of October 2, 2005.
121. Because many of these applications have become multipurpose software, including the ability to share files, messages, and voice messages, we have grouped them by their main objective. A comprehensive overview of chat software is available, for instance, on Wikipedia, e.g., List of Instant Messengers (http://en.wikipedia.org/wiki/List_of_instant_messengers, current as of August 7, 2005) and List of Instant Messaging Protocols (http://en.wikipedia.org/wiki/List_of_instant_messaging_protocols, current as of August 7, 2005).
122. Home page of the Jabber Software Foundation: http://www.jabber.org/, current as of August 24, 2005.
123. RFC 3920: Extensible Messaging and Presence Protocol (XMPP): Core, http://www.ietf.org/rfc/rfc3920.txt, current as of August 24, 2005.
124. RFC 3921: Extensible Messaging and Presence Protocol (XMPP): Instant Messaging and Presence, http://www.ietf.org/rfc/rfc3921.txt, current as of August 24, 2005.
125. P. Saint-Andre, Ed., RFC 3920: Extensible Messaging and Presence Protocol (XMPP): Core, October 2004, http://www.ietf.org/rfc/rfc3920.txt, current as of August 24, 2005.
126. P. Saint-Andre, Ed., RFC 3921: Extensible Messaging and Presence Protocol (XMPP): Instant Messaging and Presence, October 2004, http://www.ietf.org/rfc/rfc3921.txt, current as of August 24, 2005.
127. J. Oikarinen and D. Reed, RFC 1459: Internet Relay Chat Protocol, May 1993, http://www.ietf.org/rfc/rfc1459.txt, current as of August 7, 2005.
128. A starting point for a comparison of these and other applications and protocols is, for instance, http://en.wikipedia.org/wiki/Comparison_of_instant_messengers, current as of August 24, 2005.
129. J. Postel and J. Reynolds, RFC 959: File Transfer Protocol (FTP), 1985, http://www.ietf.org/rfc/rfc0959.txt, current as of August 28, 2005.
130. An overview of some of the nonsecurity-related issues regarding active and passive mode is given at http://www.ncftp.com/ncftpd/doc/misc/ftp_and_firewalls.html, current as of August 28, 2005.
131. K. Sollins, RFC 1350: The TFTP Protocol (Revision 2), http://www.ietf.org/rfc/rfc1350.txt, current as of August 28, 2005.
132. CERN home page: http://www.cern.ch/, current as of August 29, 2005.
133. It is not uncommon that alternative ports are variations on the number 80, e.g., 81, 8080, etc. The use of port numbers below 1024 (i.e., in the well-known range) is of course not advisable, as these should be kept free for new services.
134. T. Berners-Lee, R. Fielding, and H. Frystyk, RFC 1945: Hypertext Transfer Protocol — HTTP/1.0, May 1996, http://www.ietf.org/rfc/rfc1945.txt, current as of October 2, 2005.
135. D. Kristol and L. Montulli, RFC 2109: HTTP State Management Mechanism, February 1997, http://www.ietf.org/rfc/rfc2109.txt, current as of October 2, 2005.
136. R. Fielding et al., RFC 2616: Hypertext Transfer Protocol — HTTP/1.1, June 1999, http://www.ietf.org/rfc/rfc2616.txt, current as of October 2, 2005.
137. JAP home page: http://anon.inf.tu-dresden.de/index_en.html, current as of August 29, 2005.
138. In the authors' opinion, although network controls can be used as a bottleneck to control user behavior, they arguably should not be interpreted as a security measure as such, because the risk they are trying to mitigate is unrelated to IT operations. It is an unfortunate fact that organizations mix acceptable use policy and security policy, generating a perception that they are one and the same, when in actuality business ownership resides in quite different parts of the organization, namely, line management and IT operations.
139. While it is practically possible to tunnel any protocol through any other (at least in the TCP world), HTTP access is ubiquitous and cannot be easily restrained.
140. E. Rescorla, RFC 2818: HTTP over TLS, May 2000, http://www.ietf.org/rfc/rfc2818.txt, current as of October 3, 2005.
141. The reader is invited to look out for sites, especially online retailers, that do deploy encryption inconsistently. Household fallacies include offering secure authentication but no secure transmission of information (which opens up the possibility of man-in-the-middle attacks). This is especially dangerous when an authentication page is transmitted securely — evoking the impression of a secure page — but the HTML form submission (i.e., the transmission of actual authentication information back to the server) is not. It is important to note that in such a case, the user might start from an encrypted page and land on an encrypted page, but that the HTML forms transmission in between is unencrypted.
142. E. Rescorla and A. Schiffman, RFC 2660: The Secure HyperText Transfer Protocol, August 1999, http://www.ietf.org/rfc/rfc2660.txt, current as of October 3, 2005.
143. E. Rescorla and A. Schiffman, RFC 2659: Security Extensions for HTML, August 1999, http://www.ietf.org/rfc/rfc2659.txt, current as of October 3, 2005.
144. In practice, permissions, even in the untrusted Internet zone, will need to be set to a fairly permissive degree due to the fact that many sites rely on active content for their normal functioning. Managing these permissions on a per-site basis will be impractical for a nonsavvy user, especially as quite a number of sites traverse domains.
145. The dynamic development of the field makes it difficult to even start to list the most important P2P applications. A useful starting point is http://en.wikipedia.org/wiki/P2P, current as of August 27, 2005.
146. Successors of Napster tried to reduce or remove their dependency on centralized servers. In some architectures, centralized servers or services (which can be Web sites) still exist as a directory, while others have fully moved to a decentralized, client-based model.
147. C. Rigney, S. Willens, A. Rubens, and W. Simpson, RFC 2865: Remote Authentication Dial in User Service (RADIUS), June 2000, http://www.ietf.org/rfc/rfc2865.txt, current as of October 2, 2005.
148. A detailed description of attacks on RADIUS is, for instance, available at http://www.untruth.org/~josh/security/radius/radius-auth.html, current as of August 28, 2005.
149. Certain vendor implementations may be using additional or different ports.
150. J. Case, M. Fedor, M. Schoffstall, and J. Davin, RFC 1157: A Simple Network Management Protocol (SNMP), May 1990, http://www.ietf.org/rfc/rfc1157.txt, current as of October 2, 2005.
151. The information exchanged between agent and manager is defined in and structured by a management information base (MIB). An MIB is a table with clearly defined entries residing on an agent.
152. J. Postel and J. Reynolds, RFC 854: TELNET Protocol Specification, 1983, http://www.ietf.org/rfc/rfc854.txt, current as of August 28, 2005.
153. J. Postel and J. Reynolds, RFC 855: TELNET Option Specifications, 1983, http://www.ietf.org/rfc/rfc855.txt, current as of August 28, 2005.
154. The terminology used by the X Window System (which host it regards as the server and which host it regards as the client) may be slightly surprising and confusing at first. The reader should think of the X Window Server as the element one needs to address to trigger windowing events. Conversely, the X Window System does not have an expectation as to how it is used for remote controlling. In fact, a machine running an X Window Server will practically always also run an X Window Client.
155. Note that an X Window Server can have more than one display; it is possible to export one's entire display to another machine, which of course has to be an X Window Server.
156. R. Scheifler, RFC 1013: X Window System Protocol, Version 11, 1987, http://www.ietf.org/rfc/rfc1013.txt, current as of August 28, 2005.
157. fingerd played a role in one of the Internet's first, most well-known, and devastating attacks, the Morris Worm. The Morris Worm exploited a buffer overflow condition in fingerd (as well as a number of other vulnerabilities, namely, in sendmail and rsh). The Morris Worm could thereby very well be considered a predecessor of modern blended threats. A description of the Morris Worm's functionality is available in E. Spafford, The Internet Worm Program: An Analysis (http://www.textfiles.com/100/tr823.txt, current as of August 24, 2005) and B. Page, A Report on the Internet Worm (http://www.ee.ryerson.ca:8080/~elf/hack/iworm.html, current as of August 24, 2005).
158. K. Harrenstien, RFC 742: Name/Finger, December 20, 1977, http://www.ietf.org/rfc/rfc742.txt, current as of August 24, 2005.
159. D. Zimmerman, RFC 1288: The Finger User Information Protocol, December 1991, http://www.ietf.org/rfc/rfc1288.txt, current as of August 24, 2005.
160. Network Time Synchronization Project home page: http://www.eecis.udel.edu/~mills/ntp.html, current as of August 24, 2005.
161. D. Mills, RFC 778: DCNET Internet Clock Service, April 18, 1981, http://www.ietf.org/rfc/rfc778.txt, current as of August 24, 2005.
162. D. Mills, RFC 891: DCN Local-Network Protocols, December 1983, http://www.ietf.org/rfc/rfc891.txt, current as of August 24, 2005.
163. D. Mills, RFC 956: Algorithms for Synchronizing Network Clocks, September 1985, http://www.ietf.org/rfc/rfc956.txt, current as of August 24, 2005.
164. D. Mills, RFC 958: Network Time Protocol (NTP), September 1985, http://www.ietf.org/rfc/rfc958.txt, current as of August 24, 2005.
165. D. Mills, RFC 1305: Network Time Protocol (Version 3) Specification, Implementation and Analysis, March 1992, http://www.ietf.org/rfc/rfc1305.txt, current as of August 24, 2005.
166. It is interesting to note that until not too long ago, regulations existed in some countries on the use of modems that would at least in theory have prohibited their use altogether. A combination of theoretical illegitimacy and practical tolerance helped grow the then dial-up mailbox community that later developed into an Internet scene. Up to now, a number of countries and not a few telephony providers restrict the use of VoIP to protect existing telephony monopolies or market share.
167. As the authors' personal opinion, it is probably fair to say that the analog PSTN will face an imminent decline in market share due to VoIP, and that digital mobile telephony may come next. It is not unreasonable to assume that after the convergence of Internet and PSTN, we will next see convergence of phones and workstations.
168. It should be noted that the design of interfaces for the public executive is not a new requirement; it has been a legal requirement for telecommunication services in general in various legislations for some time. Apart from the political debate on how much surveillance is affordable and legitimate, a common question in this context is who should cover the resulting cost, i.e., whether it would be considered a requirement imposed by the state (and therefore be funded from tax money) or a cost of doing business for providers.
169. An introduction to H.323 is available at http://en.wikipedia.org/wiki/H.323, current as of August 24, 2005.
170. A large number of Web sites giving a technical overview of SIP are available, for instance, http://www.voip-info.org/wiki-SIP or http://en.wikipedia.org/wiki/Session_Initiation_Protocol, both current as of August 25, 2005.
171. A comprehensive overview of SIP RFCs is available at http://www.networksorcery.com/enp/protocol/sip.htm, current as of August 24, 2005.
172. M. Handley et al., RFC 2543: SIP: Session Initiation Protocol, March 1999, http://www.ietf.org/rfc/rfc2543.txt, current as of August 24, 2005.
173. J. Rosenberg et al., RFC 3261: SIP: Session Initiation Protocol, June 2002, http://www.ietf.org/rfc/rfc3261.txt, current as of August 24, 2005.
174. C. Jennings, J. Peterson, and M. Watson, RFC 3325: Private Extensions to the Session Initiation Protocol (SIP) for Asserted Identity within Trusted Networks, November 2002, http://www.ietf.org/rfc/rfc3325.txt, current as of August 24, 2005.
175. An overview of SIP security is available, for instance, in A. Steffen, D. Kaufmann, and A. Stricker, SIP Security, http://security.zhwin.ch/DFN_SIP.pdf, current as of August 24, 2005.
176. SIP over TLS is known as the Secure Session Initiation Protocol (SIPS).
177. Skype (http://www.skype.com/, current as of October 2, 2005) is an online telephony/Voice-over-IP application that offers clearing points with the public switched telephony network (PSTN). Although the protocol is proprietary, its basic architecture has been published by the vendor, as well as independent analysis. From a security perspective, Skype's peer-to-peer architecture is its most important feature. Any Skype client can turn into a so-called super node; i.e., it will serve as a gateway for communication from other clients.
Sample Questions

1. In the OSI reference model, on which layer can a telephone number be described?
   a. Layer 1, because a telephone number represents a series of electrical impulses
   b. Layer 3, because a telephone number describes communication between different networks
   c. This depends on the nature of the telephony system (for instance, Voice-over-IP versus public switched telephony network (PSTN))
   d. None, as the telephone system is a circuit-based network and the OSI system only describes packet-switched networks
2. Which transmission modes exist on OSI layer 5?
   a. Simplex, all other modes can be described as a series of simplex connections
   b. Simplex, duplex, triplex
   c. Simplex, half duplex, duplex
   d. Duplex, as the other modes are only maintained for legacy and not part of modern standards
3. In which of the following situations is the network itself not a target of attack?
   a. A denial-of-service attack on servers on a network
   b. Hacking into a router
   c. A virus outbreak saturating network capacity
   d. A man-in-the-middle attack
4. Which of the following are effective protective or countermeasures against a distributed denial-of-service attack?
   a = Redundant network layout; b = Secret fully qualified domain names (FQDNs); c = Reserved bandwidth; d = Traffic filtering; e = Network Address Translation (NAT).
   a. b and e
   b. b, d, and e
   c. a and c
   d. a, c, and d
5. What is the optimal placement for network-based intrusion detection systems (NIDSs)?
   a. On the network perimeter, to alert the network administrator of all attack attempts
   b. On network segments with business-critical systems (e.g., demilitarized zones (DMZs) and on certain intranet segments)
   c. At the network operations center (NOC)
   d. At an external service provider
6. Which of the following are meaningful uses for network-based scans?
   a = Discovery of devices and services on a network; b = Test of compliance with the security policy; c = Detection of attackers in a network, for instance, sniffers; d = Test for vulnerabilities and backdoors, for instance, as part of a penetration test or to detect PCs infected by Trojans.
   a. a, b, and c
   b. a, b, and d
   c. a, c, and d
   d. b, c, and d
7. Which of the following is an advantage of fiber-optic over copper cables from a security perspective?
   a. Fiber optics provides higher bandwidth.
   b. Fiber optics are more difficult to wiretap.
   c. Fiber optics are immune to wiretap.
   d. None — the two are equivalent; network security is independent from the physical layer.
8. Which of the following devices should not be part of a network's perimeter defense?
   a. A screening router
   b. A firewall
   c. A proxy server
   d. None of the above — all are needed to protect the network behind the perimeter
9. Which of the following is a principal security risk of wireless LANs?
   a. Lack of physical access control
   b. Demonstrably insecure standards
   c. Implementation weaknesses
   d. War driving
10. Which of the following configurations of a WLAN's SSID offers adequate security protection?
   a. Using an obscure SSID to confuse and distract an attacker
   b. Not using any SSID at all to prevent an attacker from connecting to the network
   c. Not broadcasting an SSID to make it harder to detect the WLAN
   d. None of the above
11. Which of the following is the principal security risk of broadband Internet access proliferation for home users?
   a. Users using peer-to-peer file-sharing networks for breaches of intellectual property.
   b. PCs connected permanently to the Internet are prone to receive more spam mails, thereby increasing the risk for the user to become infected with viruses and Trojans.
   c. PCs will become infected with dialers on DSL lines (run over telephony lines), thereby exposing the user to almost limitless financial risk.
   d. Home computers that are not securely configured or maintained and are permanently connected to the Internet become easy prey for attackers.
12. Who should be allowed to change rules on a firewall and for which reason?
   a. The network administrator, for testing and troubleshooting purposes
   b. The firewall administrator, on request of users after having assessed the validity of the business reason
   c. The firewall administrator in compliance with a change process that will, in particular, validate the request against the organization's security policy and provide proper authorization for the request
   d. The security manager, who will, in particular, validate the request against the organization's security policy and provide proper authorization for the request
13. Which of the following is the principal benefit of a personal firewall?
   a. They provide a PC on a public network with a reasonable degree of protection; if the PC connects to a trusted network later on (for instance, an Intranet), it will prevent the PC from becoming an agent of attack (e.g., by spreading viruses).
   b. They offer an additional degree of protection on intranets to the PC because, due to the trend of incremental weakening of the network boundary, these networks can no longer be considered trusted.
   c. They protect networks the PC connects to from threats, such as virus infections, that the PC could become an agent to.
   d. They prevent attacks on individual PCs. If everybody would use them, the Internet would be safe from virus attacks.
14. Which of the following are true statements about IPSec?
   a = IPSec provides mechanisms for authentication and encryption. b = IPSec provides mechanisms for nonrepudiation. c = IPSec will only be deployed with IPv6. d = IPSec authenticates hosts against each other. e = IPSec only authenticates clients against a server. f = IPSec is implemented in SSH and TLS.
   a. a and d
   b. a, b, and e
   c. a, b, c, d, and f
   d. a, b, c, e, and f
15. Which of the following statements about well-known ports (0 through 1023) on layer 4 is true?
   a. Well-known ports all run TCP, as UDP was considered not secure enough.
   b. Well-known ports historically were the ones defined in early (10-bit address) implementations of TCP. When address space was extended to 16 bits, registered and dynamic ports were added.
   c. On most operating systems, use of well-known ports requires system-level (administrative, superuser) access.
   d. The distinction between well-known, registered, and dynamic ports will become obsolete in IPv6, as the service used becomes part of the IP address.
16. Which of the following is the enabler for TCP sequence number attacks, and which mitigation exists?
   a. The fact that packets can arrive in random order. Mitigation is offered by better control of the carrier medium, as described in RFC 2549.
   b. The fact that sequence numbers can be assumed to monotonically increase, which enables guessing of a valid (but random) higher sequence number. Mitigation is offered by switching to UDP, as described in RFC 768.
   c. The fact that sequence numbers can be predicted, enabling insertion of illegitimate packets into the data stream. Mitigation is offered by better randomization, as described in RFC 1948.
   d. TCP sequence number attacks are based on a brute-force attack of guessing valid sequence numbers. No mitigation is possible.
17. Which of the following is the principal weakness of DNS (Domain Name System)?
   a. Lack of authentication of servers, and thereby authenticity of records
   b. Its latency, which enables insertion of records between the time when a record has expired and when it is refreshed
   c. The fact that it is a simple, distributed, hierarchical database instead of a singular, relational one, thereby giving rise to the possibility of inconsistencies going undetected for a certain amount of time
   d. The fact that addresses in e-mail can be spoofed without checking their validity in DNS, caused by the fact that DNS addresses are not digitally signed
18. Which of the following statements about open e-mail relays is incorrect?
   a. An open e-mail relay is a server that forwards e-mail from domains other than the ones it serves.
   b. Open e-mail relays are a principal tool for distribution of spam.
   c. Using a blacklist of open e-mail relays provides a secure way for an e-mail administrator to identify open mail relays and filter spam.
   d. An open e-mail relay is widely considered a sign of bad system administration.
19. A cookie is a way to:
   a. Track a user's e-mail
   b. Add statefulness to the (originally stateless) HTTP
   c. Disclose a user's identity
   d. Add history information to the (originally stateless) HTTP
20. From a disaster recovery perspective, which of the following is the principal concern associated with Voice-over-IP services?
   a. They can make the IP network of an organization a single point of failure for communication.
   b. They will increase the chance of a network outage due to capacity saturation.
   c. They will make the overall IT environment more complex, thereby increasing cost for the recovery site, dependency on external suppliers, and time needed to make it operational.
   d. None of the above — the choice of telephony technology is immaterial to business continuity planning.
21. Why is public key encryption unsuitable for multicast applications?
   a. The processing overhead is too high.
   b. The system is susceptible to man-in-the-middle attacks.
   c. All data is going to all members of the multicast group.
   d. Distribution of too many public keys allows them to be broken.
Domain 8
Application Security
Robert M. Slade, CISSP*
Domain Description and Introduction

Application security involves processes and activities regarding the planning, programming, and management of software and systems. Somewhat recursively, the field also deals with those controls that may be installed within software systems to ensure the confidentiality, integrity, and availability of either software or data under processing. In addition, this domain concentrates on concepts involved in databases and database management and Web applications, because database applications are a major and unique field of applications and systems, and the World Wide Web is a ubiquitous and widely used interface to all manner of systems. As well as discussing the proper and secure means of designing and controlling applications, we also review maliciously created software, or malware.

Current Threats and Levels

Although information security has traditionally emphasized system-level access controls, recent history has focused attention on applications. A great many information security incidents now involve software vulnerabilities in one form or another. Evidence is increasing that malware is much more than a mere nuisance: it is now a major security risk. Major consultancies and information technology publications and groups are noting that software security is a major problem. Development of in-house systems, commercial and off-the-shelf software, and controls on the choice, maintenance, and configuration of applications must be given greater attention than has been the case in the past.

Unfortunately, too few security professionals have a significant programming or systems development background. At the same time, training in programming and development tends to emphasize speed and productivity over quality, let alone considerations of security. From the perspective of many developers, security is an impediment and roadblock. This perception is changing, but slowly, and in the current development environment, the security professional needs to take care not to be seen as a problem to be avoided.

*© 2007 by Robert M. Slade. Used by permission.
The CISSP® candidate who looks back over the past few years will recall many security incidents. When examined, most major problems will be found to involve software vulnerabilities in some way. Software is increasingly large and complex, offering many opportunities for difficulty simply on the basis of random chance alone. In addition, applications software is becoming standardized, both in terms of the programs and code used and in regard to the protocols and interfaces involved. Although this provides benefits in training and productivity, it also means that a troublesome characteristic may affect the computing and business environment quite broadly. Also, we are finding that legacy code, as well as design decisions taken decades ago, is still involved in current systems and interacts with new technologies and operations in ways that may open additional vulnerabilities.

A recent FBI computer crime survey examined the costs of various categories of events. It found not only that malware presented a significant cost to business, but also that malware accounted for fully a third of the total cost to business of all reported incidents. This was in spite of the fact that antivirus and antispyware protection was used almost universally throughout the surveyed companies.

Application Development Security Outline

This chapter addresses the important security concepts that apply during the software development, operation, and maintenance processes. Software includes both operating system software and application software.

The computing environment is layered. The foundation is the hardware of the computer system and the functions that are built into that hardware. In some cases, a layer of microcode or firmware is implemented, to generate or ease the use of certain common operations. The operating system provides management of the computer hardware resources. The applications sit on top of the operating system and associated utilities. The user interacts with data and the network resources through applications. In some cases, there are additional layers, very often in terms of the interface either with the user or between systems.

In considering applications security, one must remember the applications that the users use to do their jobs and interact with the operating system. However, also be aware that the fundamental concepts of application development also apply to operating system software development, even though most users purchase an existing operating system. Thus, although most enterprises do not develop operating system code, they do design, develop, operate, and maintain proprietary applications relevant to their business needs.

Throughout this chapter, examples are given regarding the security-related problems that can and do occur during the operations of a computer system.
The environment where software is designed and developed is also critical in providing security for the system. Therefore, this chapter can never be exhaustive, and as with the material on security management, we encourage readers to thoughtfully apply these concepts to the specifics of their own company or situation.

Operating system and application software consist of increasingly complex computer programs. Without this software, it would be impossible to operate the computer for the purposes we currently require of it. In the early days of computers, users had to write code for each activity to be undertaken using a language native to the specific machine. To improve productivity, sets or libraries of code were developed that would implement many of the more common instructions. These standard files of functions, along with utilities to ease their use, became the forerunners of what we now know as programming languages.

The development of programming languages has been referred to in terms of generations. There are specific concerns at each level of this progression, but particularly in more recent environments, where the tendency has been to have functions and operations masked from the user and handled by the system in the background. Reliance on the programming environment and code libraries may prevent the developer from fully understanding the dependencies and vulnerabilities included in the final structure.

Expectation of the CISSP in This Domain

The information system security professional should fully understand:
• The principles related to designing secure information system software
• Security and controls of the systems development process, application controls, change controls, data warehousing, data mining, knowledge-based systems, program interfaces, and concepts used to ensure data and application integrity, confidentiality, and availability
• The knowledge, practices, and principles for securing systems and applications during the processes known as life-cycle management
• The security and controls that should be included within systems and application software
• The steps and security controls in the software life cycle and change control process
• Concepts used to ensure data and software integrity, confidentiality, and availability
• Malicious code and software, such as computer viruses
• How malicious code can be introduced into the computing environment
• Mechanisms that can be used to prevent, detect, and correct malicious code and their attacks
• That security is a system-level attribute

Please note that this list is only a rough outline. Refer to the Common Body of Knowledge listing at the end of this chapter for full details.

It should be noted that the CISSP is not required to be an expert programmer or know the inner workings of developing application software code, like the Fortran programming language, or how to develop Web code using Java. It is not even necessary that the CISSP know detailed security-specific coding practices, such as the reason for preferring str(n)cpy to strcpy in the C language (although all such knowledge is, of course, helpful). Because the CISSP may be the person responsible for ensuring that security is included in such developments, the CISSP should know the basic procedures and concepts involved during the design and development of software programming. That is, in order for the CISSP to manage the software development process and verify that security is included, the CISSP must understand the fundamental concepts of programming developments and the security strengths and weaknesses of various application development processes.
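For readers curious about the str(n)cpy remark above, the following minimal C sketch (the function, buffer, and input are invented purely for illustration) shows the difference: strcpy copies until it reaches a terminating null, no matter how small the destination is, while the bounded strncpy lets the programmer cap the copy at the size of the destination buffer.

#include <stdio.h>
#include <string.h>

void greet(const char *name)
{
    char buf[16];

    /* Unsafe: strcpy() copies until it reaches the null terminator, so any
     * name longer than 15 characters overruns buf. */
    /* strcpy(buf, name); */

    /* Bounded: never write more than sizeof(buf) - 1 characters, and
     * terminate the string explicitly. */
    strncpy(buf, name, sizeof(buf) - 1);
    buf[sizeof(buf) - 1] = '\0';

    printf("Hello, %s\n", buf);
}

int main(void)
{
    greet("an input far longer than the sixteen bytes reserved for it");
    return 0;
}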
This chapter could be arranged in many ways, such as by specific threat or the different types of applications. We will begin with the general process of development: outlining the needs for the protection of software and systems; the programming and development process itself, and the role software plays in the functioning of the computer; some of the protection mechanisms that can be used to overcome the threats and vulnerabilities to software, such as using a system development life-cycle (SDLC) methodology; and methods of assurance that the functional security requirements are properly implemented and operated. System assurance is an indispensable, and all too often disregarded, part of information systems security, and thus will be underscored in a major section of its own. In a similar fashion, we will then examine issues specific to the topic of malware, and the countermeasures against it. Database concepts, management systems, safeguards, and the particular aspects of data warehousing and mining make up a third major section. Due to its increasing importance in the current application environment, the security of Web applications will be given a section as well.

Applications Development and Programming Concepts and Protection

The security of data and information is one of the most important elements of information system security. It is through software mechanisms that we process and access the data on the system. In addition, almost all technical controls are implemented in software. The objective of information security is to make sure that the system and its resources are available when needed, that the integrity of the processing of the data and the data itself is ensured, and that the confidentiality of the data is protected. All of these purposes rely upon secure and properly operating software.

Application development procedures are absolutely vital to the integrity of systems. If applications are not developed properly, data may be processed in such a way that the integrity of either the original data or the processed results is corrupted. In addition, the integrity of both application and operating system software itself must be maintained, in terms of both change control and attack from malicious software such as viruses. If the data controlled by a system has special protection requirements (such as confidentiality), protective mechanisms and safeguards (like encryption) should be designed and built into the system and code from the beginning, and not added on as an afterthought. Because operating system software is also responsible for many of the controls on access to data and systems, it is vital that these areas of programming be tightly protected.

Current Software Environment

Information systems are becoming more distributed, with a substantial increase in open protocols, interfaces, and source code, as well as sharing of resources. Increased sharing requires that all resources be protected against unauthorized access. Many of these safeguards are provided through software controls, especially operating system mechanisms. The operating system must offer controls that protect the computer's resources. In addition, the relationship between applications and the operating system is also important. Controls must be included in operating systems so that applications cannot damage or circumvent the operating system controls. A lack of software protection mechanisms can leave the operating system and critical computer resources open to corruption and attack.

Some of the main security requirements for applications and databases are to ensure that only valid, authorized, and authenticated users can access the data; permissions related to use of the data can be controlled; the software provides some type of granularity for controlling the permissions; encryption is available for protecting sensitive information such as password storage; and audit trails can be implemented and reviewed.

It is becoming increasingly evident that many problems in access control, networking, and operations security are related to the development of software and systems. Whether caused by improper system development, sloppy programming practices, or a lack of rigorous testing, it is clear that a number of vulnerabilities are present, and continue to be created, in the software that is in widespread use. An expanding number of titles in the security literature address this topic.
Essentially, security in operating systems, applications, and databases focuses on the ability of the software to enforce controls over the storage and transfer of information in and between objects. Remember that the underlying foundation of the software security controls is the organization's security policy. The security policy reflects the security requirements of the organization. Therefore, if the security policy requires that only one set of users can access information, the software must have the capability to limit access to that specific group of users. Keep in mind that the ability to refer to a system as secure is based upon the reliable enforcement of the organization's security policy.

Open Source

The term open source has a number of competing definitions. However, most advocates would agree to the basic condition that the vendor releases the software source code so that users may modify the software either to suit their own situation or for further development. When the source is open, this also means that others can comment on or assist in debugging the code. Traditionally, vendors have relied on the secrecy of their proprietary code to protect the intellectual property of their product: hiding the source code and releasing only an executable version in machine or object code. There is a trend toward open-source codes in commercial software houses, and many successful business models support this activity, but most software companies still keep their source code secret, relying on proprietary code to prevent others from producing competing products.

Advocates of open-source software believe that security can be improved when source code is available to the public. This is expressed in Linus's law: With sufficiently many eyeballs looking at the code, all bugs will become apparent. Let other developers and programmers review the code and help to find the security vulnerabilities. The idea is that this openness will lead to quick identification and repair of any issues, including those involved with security.

Other developers disagree. Will other programmers be able to find all of the security vulnerabilities? Just releasing the source code does not ensure that all security bugs will be found, and the automatic assumption of reliability can lead to a false sense of security. Devotees of proprietary systems note that dishonest programmers may find security vulnerabilities but not disclose the problem, or at least not until they have exploited it. There have been instances where those in the black hat community tried to blackmail software vendors when they found problems.

A final determination on this issue has not yet been made. However, in general, it is known that "security by obscurity" — the idea that if a system is little known, there is less likelihood that someone will find out how to break into it — does not work.
Whether programs are available in source or only executable versions, observation, reverse engineering, disassembly, trial and error, and random chance may be able to find security vulnerabilities. (Turning to another field of security, in cryptography this abjuration against obscurity is formalized in Kerckhoffs' law, which states that reliance upon secrecy of the cryptographic algorithm used is false security.)
Full Disclosure. A related issue, frequently tied to the idea of the open-source model, is full disclosure. Full disclosure means that individuals who find security vulnerabilities will publicly disseminate the information, possibly including code fragments or programs that might exploit the problem. Many models of partial disclosure exist, such as first contacting the vendor of the software and asking that the vulnerability and a subsequent fix be released to the public, or the release only of information about the vulnerability and possible workaround solutions.

Rather than making policy regarding the purchase of open-source or proprietary software, for security purposes it may be better to look at how the software was designed. Was security included as an initial consideration when decisions were made about such issues as programming languages, features, programming style, and tests and evaluations?

Programming

In the development phase, programmers have the option of writing code in several different programming languages. A programming language is a set of rules telling the computer what operations to perform. Programming languages have evolved in generations, and each language is characterized into one of the generations. Those in the lower level are closer in form to the binary language of the computer. Both machine and assembly languages are considered low-level languages. As the languages become easier and more similar to the language people use to communicate, they become higher level. High-level languages are easier to use than lower-level languages and can be used to produce programs more quickly. In addition, high-level languages are beneficial because they enforce coding standards and can provide more security.

Programming languages are frequently referred to by generations. The first generation is generally held to be the machine language, opcodes (operating codes), and object code used by the computer itself. These are very simple instructions that can be executed directly by the CPU of a computer. Each type of computer has its own machine language. However, the blizzard of hexadecimal or binary code is difficult for people to understand, and so a second generation of assembly language was created, which uses symbols as abbreviations for major instructions.
The third generation, usually known as high-level language, uses meaningful words (generally English) as the commands. COBOL, FORTRAN, BASIC, and C are examples of this type. Above this point there may be disagreement on definitions. Fourth-generation languages, sometimes known as very high level languages, are represented by query languages, report generators, and application generators. Fifth-generation languages, or natural language interfaces, require expert systems and artificial intelligence. The intent is to eliminate the need for programmers to learn a specific vocabulary, grammar, or syntax. The text of a natural language statement very closely resembles human speech.
Process and Elements. Many of those working in the information systems security profession are not experienced programmers. Therefore, the following is a very quick and simplistic explanation of the concepts and processes of different types of programming. It is provided purely for background understanding for the other material in this domain. Those who have experience with programming can skip this section.

I need to start out by making a point that will be completely and simplistically obvious to those familiar with machine language programming — and totally bizarre to those who are not. Machine language does not consist of the type of commands we see in higher-level languages. Higher-level languages use words from normal human languages, and so, while a given program probably looks odd to the nonprogrammer, nevertheless we see recognizable words such as print, if, load, case, and so forth, which give us some indication of what might be going on in the program. This is not true of machine language.

Machine language is, as we frequently say of other aspects of computing and data communications, all just ones and zeroes. The patterns of ones and zeroes are directions to the computer. The directive patterns, called opcodes, are the actual commands that the computer uses. Opcodes are very short — in most desktop microcomputers generally only a single byte (8 bits) in length, or possibly two. Opcodes may also have a byte or two of data associated with them, but the entire string of command and argument is usually no more than 4 bytes, or 32 bits, altogether. This is the equivalent of a word of no more than four letters.

Almost all computers in use today are based on what is termed von Neumann architecture (named after John von Neumann). One of the fundamental aspects of von Neumann architecture is that there is no inherent difference between data and programming in the memory of the computer. Therefore, in isolation, we cannot tell whether the pattern 4Eh (01001110) is the letter N or a decrement opcode. Similarly, the pattern 72h (01110010) may be the letter r or the first byte of the "jump if below" opcode. Therefore, when we look at the contents of a program file, as we do in Figure 8.1, we will be faced with an initially confusing agglomeration of random letters and symbols and incomprehensible garbage.
-d ds:100 11f
B8 19 06 BA CF 03 05 FA-0A 3B 06 02 00 72 1B B4   .........;...r..
09 BA 18 01 CD 21 CD 20-4E 6F 74 20 65 6E 6F 75   .....!. Not enou
-u ds:100 11f
0AEA:0100 B81906    MOV AX,0619
0AEA:0103 BACF03    MOV DX,03CF
0AEA:0106 05FA0A    ADD AX,0AFA
0AEA:0109 3B060200  CMP AX,[0002]
0AEA:010D 721B      JB 012A
0AEA:010F B409      MOV AH,09
0AEA:0111 BA1801    MOV DX,0118
0AEA:0114 CD21      INT 21
0AEA:0116 CD20      INT 20
0AEA:0118 4E        DEC SI
0AEA:0119 6F        DB 6F
0AEA:011A 7420      JZ 013C
0AEA:011C 65        DB 65
0AEA:011D 6E        DB 6E
0AEA:011E 6F        DB 6F
0AEA:011F 7567      JNZ 0188
Figure 8.1. Display of the same section of a program file, first as data and then as an assembly language listing.
Ultimately, understanding this chaotic blizzard of symbols is going to be of the greatest use to machine language programmers or software forensic specialists. Source code may be available, particularly in cases where we are dealing with script, macro, or other interpreted programming. To explain some of those objects, we need to examine the process of programming itself.

The Programming Procedure. In the beginning, programmers created object (machine or binary) files directly. The operating instructions (opcodes) for the computer and any necessary arguments or data were presented to the machine in the form that was needed to get it to process properly. Assembly language was produced to help with this process: although there is a fairly direct correspondence between the assembly mnemonics and specific opcodes, at least the assembly files are formatted in a way that is relatively easy for humans to read, rather than being strings of hexadecimal or binary numbers. You will notice in the second part of Figure 8.1 a column of codes that might almost be words: MOV (move), CMP (compare), DEC (decrement), and ADD (I hope I do not have to expand on that one).
OPEN INPUT RESPONSE-FILE
     OUTPUT REPORT-FILE
INITIALIZE SURVEY-RESPONSES
PERFORM UNTIL NO-MORE-RECORDS
    READ RESPONSE-FILE
        AT END
            SET NO-MORE-RECORDS TO TRUE
        NOT AT END
            PERFORM 100-PROCESS-SURVEY
    END-READ
END-PERFORM

begin.
    display "My parents went to Cape Cod and all they got"
    display "for me was this crummy COBOL program!".
Figure 8.2. Two sections of code from different COBOL programs. Note that the intention of the program is reasonably clear, as opposed to Figure 8.1.
Assembly language added these mnemonics because "MOV to register AX" makes more sense to a programmer than simply B8h or 10111000. An assembler program also takes care of details regarding addressing in memory so that every time a minor change is made to a program, all the memory references and locations do not have to be manually changed.

With the advent of high-level (or at least higher-level) languages (the so-called third generation), programming language systems split into two types. High-level languages are those where the source code is somewhat more comprehensible to people. Those who work with C or APL may dispute this assertion, of course. The much maligned COBOL is possibly the best example: as you can see in Figure 8.2, the general structure of a COBOL program should be evident from the source code, even for those not trained in the language.

Compiled languages involve two separate processes before a program is ready for execution. The application must be programmed in the source (the text or human-readable) code, and then the source must be compiled into object code that the computer can understand: the strings of opcodes. Those who actually do programming will know that I am radically simplifying a process that generally involves linkers and a number of other utilities, but the point is that the source code for languages like Fortran and Modula cannot be run directly: it must be compiled first.

Interpreted languages shorten the process. Once the source code for the program has been written, it can be run, with the help of the interpreter. The interpreter translates the source code into object code "on the fly," rendering it into a form that the computer can use.
There is a cost in performance and speed for this convenience: compiled programs are native, or natural, for the CPU to use directly (with some mediation from the operating system) and so run considerably faster. In addition, compilers tend to perform some level of optimization on the programs, choosing the best set of functions for a given situation. However, interpreted languages have an additional advantage: because the language is translated on the machine where the program is run, a given interpreted program can be run on a variety of different computers, as long as an interpreter for that language is available. Scripting languages, used on a variety of platforms, are of this type. JavaScript applets, such as the example in Figure 8.3, may be embedded in Web pages and then run in browsers that support the language regardless of the underlying computer architecture or operating system. (JavaScript is probably a bad example to use when talking about cross-platform operation, because a given JavaScript program may not even run on a new version of the same software company's browser, let alone one from another vendor or for another platform. But, it is supposed to work across platforms.)

As with most other technologies where two options are present, there are hybrid systems that attempt to provide the best of both worlds. Java, for example, compiles source code into a sort of pseudo-object code called bytecode. The bytecode is then processed by the interpreter (called the Java Virtual Machine, or JVM) for the CPU to run. Because the bytecode is already fairly close to object code, the interpretation process is much faster than for other interpreted languages. And because bytecode is still undergoing an interpretation, a given Java program will run on any machine that has a JVM. (Java does have a provision for direct compilation into object code, as do a number of implementations for interpreted languages, such as BASIC.)*

*As a side note, because we have mentioned both, JavaScript is a language most commonly used in Web pages. It is not Java and has no relation to Java: it was originally named LiveScript and was renamed as a marketing strategy. It is interpreted by the user's Web browser and allows control over most of the features of the Web browser. It has access to most of the contents of the Hypertext Markup Language (HTML) document and has full interaction with the displayed content. Depending upon the browser, it may have significant access to the system itself. Its security management is minimal — it is either enabled or disabled.

The Software Environment

The situation in which software operates is fundamental to computer operations. This environment begins with the standard model of hardware resources, with items such as the central processing unit (CPU), memory, input/output (I/O) requests, and storage devices.
Adding input <script> document.write ("Hello, "); document.write ("class.
This line "); document.write ("is written by the JavaScript in the header."); document.write ("
but appears in the body of the page,");
This line is the first line that is written by HTML itself.
Notice that this is the last line that appears until after the new input is obtained.
<script>
// Note that within scripts we use C++ style comments
// We declare a variable, studentName
var studentName;
// Then we get some input
studentName = window.prompt ("What is your name?", "student name");
/* Although we can use C style multi-line comments */
<script>
document.write ("Thank you for your input, " + studentName);
Figure 8.3. A JavaScript applet that will work in all browsers. Note that this script uses much more internal commenting than is usually the case.
The operating system is responsible for controlling these resources and providing security mechanisms to protect them. The applications employed by the end users make requests or calls to the operating system, or sometimes directly to devices, to provide the required computer services. In some applications, security features are built into the software that allow the users more control over their information, such as access controls or auditing capabilities. Vulnerabilities can be introduced in the application, such as when a buffer overflow attack takes advantage of improper parameter checking within the application.
Note that because of layering in the software, protections imposed at one level may be bypassed by functions at another. In addition, many applications now include some form of distributed computing. There are many varieties, levels, and forms of distribution you may encounter, ranging from simple cooperation of programs to standard interfacing, message passing (in object environments), layering (as noted above, in more extensive forms), middleware (particularly in database applications), clustering, or virtual machines. We will examine some of these models later in this chapter. Details of others may be found in the chapter on security architecture and design, Domain 5. Distributed applications provide a particular challenge in terms of security due to the complexity of the information flow model.

Threats in the Software Environment

There are many threats to software during design, development, and operation. Most of these fall into standard patterns, and we will briefly examine the most common ones here.

Buffer Overflow. The buffer overflow problem is one of the oldest and most common problems in software development and programming. It can result when a program fills up its buffer of memory with more data than its buffer can hold. When the program begins to write beyond the end of the buffer, the program's execution path can be changed, or data can be written into areas used by the operating system itself. This can lead to the insertion of malicious code that can be used to gain administrative privileges on the program or system.
Buffer overflows can be created or exploited in a wide variety of ways, but the following is a general example of how a buffer overflow works. A program that is the target of an attack is provided with more data than the application was intended or expected to handle. This can be done by diverse means, such as entering too much text into a dialog box or submitting a Web address that is far too long. The attacked program (target) overruns the memory allocated for input data and writes the excess data into the system memory. The excess data can contain machine language instructions so that when the next step is executed, the attack code, like a Trojan horse or other type of malicious code, is run. (Frequently, the early part of the excess data contains characters that are read by the CPU as "perform no operation," forming a "no-op sled." The malicious code is usually at the end of the excess data.)

An actual attack method is far more detailed and is highly dependent on the target operating system and hardware architecture. The desired result is to put into memory the attack instructions. These instructions usually do something such as patch the kernel in such a way as to execute another program at an elevated privilege level. Sometimes the malicious code will call other programs, even downloading them over the network.
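To make the mechanics concrete, here is a minimal, hypothetical C illustration (the structure, field names, and input are invented for the example). Two adjacent fields stand in for "the buffer" and "the memory beyond it"; a real exploit would typically overwrite stack control data such as a saved return address rather than a neighboring flag, but the root cause is the same: attacker-controlled input copied with no length check.

#include <stdio.h>
#include <string.h>

struct login_request {
    char name[16];          /* intended destination of the copy       */
    int  is_authenticated;  /* adjacent data clobbered by the overrun */
};

int main(void)
{
    struct login_request req;
    req.is_authenticated = 0;

    /* 24 characters plus the null terminator are copied into a 16-byte
     * field: strcpy() keeps writing past the end of name[] and spills
     * into is_authenticated (undefined behavior, shown here only to
     * illustrate the effect). */
    strcpy(req.name, "AAAAAAAAAAAAAAAAAAAAAAAA");

    /* On a typical layout this now prints a nonzero value such as
     * 0x41414141: the attacker's input has overwritten program state. */
    printf("is_authenticated = 0x%x\n", req.is_authenticated);
    return 0;
}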
Citizen Programmers. Because desktop and personal computers (and even applications, now) come equipped with scripting and programming tools, allowing all computer users to create their own utilities is a common practice that can have extremely harmful consequences and may violate the principle of separation of duties. If this type of unsupervised programming is allowed, then a single user may have complete control over an application or process. Visual Basic, included in the Microsoft Office suite, is often used by citizen programmers to develop their applications or extend existing ones. Citizen, or casual, programmers are unlikely to be trained in, or bound by, system development practices that involve proper application design, change control, and support for the application. Therefore, application development in such a manner is likely to be chaotic and lack any form of assurance in regard to security. It is discussed in more detail in the Information Security and Risk Management chapter.

Covert Channel. A covert channel or confinement problem is an information flow issue. It is a communication channel allowing two cooperating processes to transfer information in such a way that it violates the system's security policy. Even though there are protection mechanisms in place, if unauthorized information can be transferred using a signaling mechanism via entities or objects not normally considered to be able to communicate, then a covert channel may exist. In simplified terms, it is any flow of information, intentional or inadvertent, that enables an observer, not authorized to have the information, to infer what it is or that it exists. This is primarily a concern in systems containing highly sensitive information.
There are two commonly defined types of covert channels: storage and timing. A covert storage channel involves the direct or indirect writing of a storage location by one process and the direct or indirect reading of the same storage location by another process. Typically, a covert storage channel involves a finite resource, such as a memory location or sector on a disk, that is shared by two subjects at different security levels. A covert timing channel depends upon being able to influence the rate at which some other process is able to acquire resources, such as the CPU, memory, or I/O devices. The variation in rate may be used to pass signals. Essentially, the process signals information to another by modulating its own use of system resources in such a way that this manipulation affects the real response time observed by the second process. Timing channels are normally considerably less efficient than storage channels, because they have a reduced bandwidth and are usually harder to control.
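To make the timing-channel idea concrete, the following is a purely conceptual sketch; the class name, slot length, and threshold are invented, and a real channel depends heavily on scheduler and hardware behavior. One thread modulates its CPU consumption to encode a bit, and another infers the bit from how long a fixed piece of work takes.

// Conceptual sketch only: a covert timing channel built from CPU contention.
// The sender "transmits" a bit by either busy-looping (1) or sleeping (0) for a
// fixed slot; the receiver infers the bit from how long its own workload takes.
public class TimingChannelSketch {

    static void sendBit(boolean bit, long slotMillis) throws InterruptedException {
        long end = System.currentTimeMillis() + slotMillis;
        if (bit) {
            while (System.currentTimeMillis() < end) { /* consume CPU */ }
        } else {
            Thread.sleep(slotMillis);                  /* stay idle */
        }
    }

    static boolean receiveBit() {
        long start = System.nanoTime();
        long sink = 0;
        for (int i = 0; i < 5_000_000; i++) { sink += i; }     // fixed unit of work
        long elapsedMillis = (System.nanoTime() - start) / 1_000_000;
        return elapsedMillis > 20;    // illustrative threshold: slower implies the sender was busy
    }

    public static void main(String[] args) throws Exception {
        Thread sender = new Thread(() -> {
            try { sendBit(true, 200); } catch (InterruptedException ignored) { }
        });
        sender.start();
        boolean observed = receiveBit();
        sender.join();
        System.out.println("Receiver inferred bit: " + (observed ? 1 : 0));
    }
}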
Malicious Software (Malware). Malware comes in many varieties and is written for different operating systems and applications, as well as for different machines. It will be examined in more detail in the next major section. Malformed Input Attacks. A number of attacks employing input from the user are currently known, and various systems detect and protect against such attacks. Therefore, a number of new attacks rely on configuring that input in unusual ways. For example, an attack that redirected a Web browser to an alternate site might be caught by a firewall, by detecting the Uniform Resource Locator (URL) of an inappropriate site. If, however, the URL was expressed in a Unicode format, rather than ASCII, the firewall would likely fail to recognize the content, whereas the Web browser would convert the information without difficulty. In another case, many Web sites allow query access to databases, but place filters on the requests to control access. When requests using the Structured Query Language (SQL) are allowed, the use of certain syntactical structures in the query can fool the filters into seeing the query as a comment, whereupon the query may be submitted to the database engine and retrieve more information than the owners intended (a short illustrative sketch follows below). Memory Reuse (Object Reuse). Memory management involves sections of memory allocated to one process for a while, then deallocated, then reallocated to another process. Because residual information may remain when a section of memory is reassigned to a new process after a previous process is finished with it, a security violation may occur. When memory is reallocated, the operating system should ensure that memory is zeroed out completely or overwritten completely before it can be accessed by a new process. Thus, there is no residual information in memory carrying over from one process to another. While memory locations are of primary concern in this regard, developers should also be careful with the reuse of other resources that can contain information, such as disk space. The paging or swap file on the disk is frequently left unprotected and may contain an enormous amount of sensitive information if care is not taken to prevent this occurrence. Executable Content/Mobile Code. Executable content, or mobile code, is software that is transmitted across a network from a remote source to a local system and is then executed on that local system. The code is transferred by user actions and, in some cases, without the explicit action of the user. The code can arrive at the local system as attachments to e-mail messages or through Web pages.
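Returning briefly to the malformed input example above before continuing with mobile code: the following is a minimal JDBC sketch, with table and column names invented for illustration, contrasting a query assembled by string concatenation, which crafted input can subvert, with a parameterized query that forces the input to be treated strictly as data.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class QueryExamples {

    // Vulnerable: user input is spliced directly into the SQL text, so input
    // such as  ' OR '1'='1' --  changes the meaning of the query.
    static ResultSet findUserUnsafe(Connection conn, String userName) throws SQLException {
        Statement stmt = conn.createStatement();
        return stmt.executeQuery(
            "SELECT id, name FROM users WHERE name = '" + userName + "'");
    }

    // Safer: the parameter is bound separately, so the database treats the
    // input strictly as a value, never as SQL syntax.
    static ResultSet findUserSafe(Connection conn, String userName) throws SQLException {
        PreparedStatement stmt = conn.prepareStatement(
            "SELECT id, name FROM users WHERE name = ?");
        stmt.setString(1, userName);
        return stmt.executeQuery();
    }
}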
The concepts of mobile code have been called many names: mobile agents, mobile code, downloadable code, executable content, active capsules, remote code, etc. Even though the terms seem the same, there are slight differences. For example, mobile agents are programs that can 551
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® migrate from host to host in a network, at times and to places of their own choosing. They have a high degree of autonomy, rather than being directly controlled from a central point. Mobile agents differ from applets, which are programs downloaded as the result of a user action, then executed from beginning to end on one host. Examples include ActiveX controls, Java applets, and scripts run within the browser. All of these deal with the local execution of remotely sourced code. One way of looking at mobile code is in terms of current security architectures. Typically, security in the operating system could answer the question “Can subject X use object Y?” The challenge with mobile code is how to resolve when one subject may be acting on behalf of another, or may be acting on its own behalf. Thus, security mechanisms must be put into place that resolve whether these requests should be allowed or denied. Many of the issues of mobile code are tightly connected to problems of malware, and therefore, we will leave the details of this discussion for that section. Social Engineering. One method of compromising a system is to befriend users to gain information; especially vulnerable are individuals with system administrator access. Social engineering is the art of getting people to divulge sensitive information to others either in a friendly manner, as an attempt to be “helpful,” or through intimidation. It is sometimes referred to as people hacking because it relies on vulnerabilities in people rather than those found in software or hardware.
Social engineering comes in many forms, but they are all based on the principle of representing oneself as someone who needs or deserves the information to gain access to the system. For example, one method is for attackers to pretend they are new to the system and need assistance with gaining access. Another method is when attackers pretend to be a system staff member and try to gain information by helping to fix a computer problem, even though there is not a problem. Typically, therefore, social engineering is not considered to be a concern of software development and management. However, there are two major areas where social engineering should be considered in system development and management. The first is in regard to the user interface and human factors engineering. It has frequently, and sadly, been the case where users have misunderstood the intent of the programmer with regard to the operation of certain commands or buttons, and sometimes the misunderstanding has had fatal results. (In one famous case, a correction to dosage levels on the input screen of a medical radiation treatment machine did not change the radia-
tion-level settings, and dozens of patients suffered fatal overdoses before the problem was found and rectified.) The second issue of social engineering is in regard to its use in malicious software. Most malware will have some kind of fraudulent component, in an attempt to get the user to run the program, so that the malicious payload can perform undetected. Time of Check/Time of Use (TOC/TOU). This is a common type of attack that occurs when some control changes between the time that the system security functions check the contents of variables and the time the variables actually are used during operations. For instance, a user logs on to a system in the morning and later is fired. As a result of the termination, the security administrator removes the user from the user database. Because the user did not log off, he or she still has access to the system and might try to get even.
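The same check-then-use gap appears at the level of individual resources. The following is a hedged sketch, with the file name and logic purely illustrative, showing a condition that is checked and then relied upon later, together with a version that narrows the window by acting on a handle it already holds.

import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class TocTouSketch {

    // Vulnerable check-then-use pattern: between the isReadable() check and the
    // later read, the file can be swapped (for example, replaced by a link to a
    // sensitive file) or the authorization behind the check can be revoked.
    static byte[] readIfAllowedUnsafe(Path path) throws IOException {
        if (Files.isReadable(path)) {
            // ...time passes; the condition checked above may no longer hold...
            return Files.readAllBytes(path);
        }
        throw new SecurityException("Access denied: " + path);
    }

    // Narrower race window: open the stream immediately and rely on the access
    // check performed at open time, then read from the handle already held.
    static byte[] readIfAllowedSafer(Path path) throws IOException {
        try (InputStream in = Files.newInputStream(path)) {
            return in.readAllBytes();
        }
    }

    public static void main(String[] args) {
        Path p = Paths.get("config.txt");    // illustrative path only
        try {
            System.out.println(readIfAllowedSafer(p).length + " bytes read");
        } catch (IOException e) {
            System.out.println("Could not read file: " + e.getMessage());
        }
    }
}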
In another situation, a connection between two machines may drop. If an attacker manages to attach to one of the ports used for this link before the failure is detected, the invader can hijack the session by pretending to be the trusted machine. (A way to prevent this is to have some form of authentication performed constantly on the line.) Between-the-Lines Attack. Another similar attack is a between-the-lines entry. This occurs when the telecommunication lines used by an authorized user are tapped into and data falsely inserted. To avoid this, the telecommunication lines should be physically secured and users should not leave telecommunication lines open when they are not being used. Trapdoor/Backdoor. A trapdoor or backdoor is a hidden mechanism that bypasses access control measures. It is an entry point into a program that is inserted in software by programmers during the program’s development to provide a method of gaining access into the program for modification if the access control mechanism malfunctions and locks them out. (In this situation, it may also be called a maintenance hook.) They can be useful for error correction, but they are dangerous opportunities for unauthorized access if left in a production system. A programmer or someone who knows about the backdoor can exploit the trapdoor as a covert means of access after the program has been implemented in the system. An unauthorized user may also discover the entry point while trying to penetrate the system.
This list of software threats is to be used as a reminder of the types of threats that developers and managers of software development should be aware. It is not intended to be an inclusive list, as there are new threats developed every day. 553
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® Application Development Security Protections and Controls Having noted the environment in regard to applications development, and the security requirements (and threats) in regard to it, we will now look at various protection mechanisms that can be used to overcome the threats and vulnerabilities to software, such as using a system development lifecycle (SDLC) methodology. System Life Cycle and Systems Development. Software development and maintenance is the dominant expenditure in information systems. At the Third World Congress on Computers and Education, the keynote speaker foresaw a future in which only 1 percent of the population actually created computers: the rest of the population would be needed for programming. Because of the expenses associated with software development, industry research began to provide the best methods of reducing costs, which subsequently led to the discipline of software engineering. Software engineering simply stated that software products had to be planned, designed, constructed, and released according to engineering principles. It included software metrics, modeling, methods, and techniques associated with the designing of the system before it was developed, tracking project progress through the entire development process.
Software development faces numerous problems that could result in higher costs and lower quality. Budget and schedule overruns are two of the largest problems for software development. Remember that Windows 95 was released about 18 months late, and it is estimated that the budget was exceeded by 25 percent. Software projects continue to escalate. Subsequent to Windows 95, Windows NT required 4 million lines of code, whereas Windows XP is estimated at 70 million.* On the other side, if software development is rushed and software developers are expected to complete projects within a shortened timeframe, the quality of the software product could be reduced. In its 4 million lines of code, Windows NT was estimated to contain approximately 64,000 bugs, many of which would have security implications. Recently, industry analysts have fastened on software vulnerabilities as the greatest current issue to be addressed in the whole field of information security. Software development is a project. In many cases, it is a very large project. Like any other large project, software development benefits from a formal project management structure: a life cycle of systems development. *When a large software project is not proceeding as fast as one would like, the obvious thought is to hire more programmers and throw them at the task. In his wonderful book The Mythical Man-Month, Fred Brooks notes that there is an inherent complexity to the programming process, and that productivity is not a linear function of the number of people involved. In fact, because of the requirements for increasing communications between all those involved, there comes a point at which, to use Brooks’s immortal phrase, “adding manpower to a late software project makes it later.”
Application Security A great many such structures have been proposed. No single management structure will equally benefit all possible programming projects, but the common elements of organization, design, communications, assessment, and testing will aid any project. The Software Engineering Institute released the Capability Maturity Model for Software (CMM or SW-CMM) in 1991. The SW-CMM focuses on quality management processes and has five maturity levels that contain several key practices within each maturity level. The five levels describe an evolutionary path from chaotic processes to mature, disciplined software processes. The result of using SW-CMM is intended to be higher-quality software products produced by more competitive companies. The SW-CMM framework establishes a basis for evaluation of the reliability of the development environment. At an initial level, it is assumed that good practices can be repeated. If an activity is not repeated, there is no reason to improve it. Organizations must commit to having policies, procedures, and practices and to using them so that the organization can perform in a consistent manner. Next, it is hoped that best practices can be rapidly transferred across groups. Practices need to be defined in such a manner as to allow for transfer across project boundaries. This can provide for standardization for the entire organization. At the penultimate level, quantitative objectives are established for tasks. Measures are established, done, and maintained to form a baseline from which an assessment is possible. This can ensure that the best practices are followed and deviations are reduced. At the final level, practices are continuously improved to enhance capability (optimizing). The similarity to the levels of the Capability Maturity Model should be evident. The CMM has been used to address a variety of fields, including security and systems integration. When followed, the SW-CMM provides key practices to improve the ability of organizations to meet goals for cost, schedule, functionality, and product quality. The model establishes a yardstick against which it is possible to judge, in a repeatable way, the maturity of an organization’s software process and also compare it to the state of the practice of the industry. The model can also be used by an organization to plan improvements to its software process. The International Standards Organization (ISO) has included software development in its ISO 9000 quality standards. Both the ISO and SEI efforts are intended to reduce software development failures, improve cost estimates, meet schedules, and produce a higher-quality product. Systems Development Life Cycle (SDLC). A project management tool used to plan, execute, and control a software development project is the 555
systems development life cycle (SDLC). The SDLC is a process that includes systems analysts, software engineers, programmers, and end users in the project design and development. Because there is no industrywide SDLC, an organization can use any one or a combination of SDLC methods. The SDLC simply provides a framework for the phases of a software development project from defining the functional requirements to implementation. Regardless of the method used, the SDLC outlines the essential phases, which can be shown together or as separate elements. The model chosen should be based on the project. For example, some models work better with long-term, complex projects, while others are more suited for short-term projects. The key element is that a formalized SDLC is utilized. The number of phases can range from three basic phases (concept, design, and implement) on up. The basic phases of SDLC are:
• Project initiation and planning
• Functional requirements definition
• System design specifications
• Build (develop) and document
• Acceptance
• Transition to production (installation)
The system life cycle extends beyond the SDLC to include two additional phases:
• Operations and maintenance support (postinstallation)
• Revisions and system replacement
Project Initiation and Planning. In the beginning comes the idea. This may address particular business needs (functional requirements), along with a proposed technical solution. This information is contained in a document that outlines the project's objectives, scope, strategies, and other factors, such as an estimate of cost or schedule. Management approval for the project is based on this project plan document.
During this phase, security should also be considered. Note that security activities should be done in parallel with project initiation activities and, indeed, with every task throughout the project. The security professional’s mental checklist during the project initiation phase should include topics such as: • Does particular information have special value or require special protection? • Has the system owner determined the information’s value? What are the assigned classifications? • Will application operation risk exposure of sensitive information? 556
Application Security • Will control of output displays or reports require special measures? • Will data be generated in public or semipublic places? Are controlled areas required for operation? Functional Requirements Definition. The project management and systems development teams will conduct a comprehensive analysis of current and possible future functional requirements to ensure that the new system will meet end-user needs. The teams also review the documents from the project initiation phase and make any revisions or updates as needed. For smaller projects, this phase is often subsumed in the project initiation phase. System Design Specifications. This phase includes all activities related to designing the system and software. In this phase, the system architecture, system outputs, and system interfaces are designed. Data input, data flow, and output requirements are established and security features are designed. Development and Implementation. During this phase, the source code is generated, test scenarios and test cases are developed, unit and integration testing is conducted, and the program and system are documented for maintenance and for turnover to acceptance testing and production. Documentation and Common Program Controls. These are controls used when editing the data within the program, the types of logging the program should be doing, and how the program versions should be stored. A large number of such controls may be needed, including tests and integrity checks for:
• Program/application
• Operating instructions/procedures
• Utilities
• Privileged functions
• Job and system documentation
• Components — hardware, software, files, databases, reports, users
• Restart and recovery procedures
• Common program controls
• Edits, such as syntax, reasonableness (sanity), range checks, and check digits
• Logs (who, what, when)
• Time stamps
• Before and after images
• Counts — useful for process integrity checks; includes total transactions, batch totals, hash totals, and balances
• Internal checks — checks for data integrity within the program from when it gets the data to when it is done with the data
• Parameter ranges and data types
• Valid and legal address references
• Completion codes
• Peer review — the process of having peers of the programmer review the code
• Program or data library when developing software applications
  – Automated control system
  – Current versions — programs and documentation
  – Record of changes made
    • By whom, when authorized by, what changed
  – Test data verifying changes
  – User sign-offs indicating correct testing
• A librarian ensures program or data library is controlled in accordance with policy and procedures
  – Controls all copies of data dictionaries, programs, load modules, and documentation and can provide version controls
• Change control/management — ensures no programs added unless properly tested and authorized
• Erroneous/invalid transactions detected are written to a report and reviewed by developers and management
Acceptance. In the acceptance phase, an independent group develops test data and tests the code to ensure that it will function within the organization's environment and that it meets all the functional and security requirements. It is essential that an independent group test the code during all applicable stages of development; this preserves separation of duties, because developers should not be the sole testers of their own code. The goal of security testing is to ensure that the application meets its security requirements and specifications.
The security testing should uncover all design and implementation flaws that would allow a user to violate the software security policy and requirements. To ensure test validity, the application should be tested in an environment that simulates the production environment. This should include a security certification package and any user documentation. This is the first phase of what is commonly referred to as the certification and accreditation process (C&A), which we will detail shortly. Testing and Evaluation Controls. During the test and evaluation phase, the following guidelines can be included as appropriate to the environment:
Test data: Should include data at the ends of the acceptable data ranges, various points in between, and data beyond the expected and allowable data points. Some data should be chosen randomly, to uncover off-the-wall problems. However, some data should specifically be chosen on a fuzzy basis (that is, close to expected proper or problem values) to concentrate on particular areas. 558
Application Security Test with known good data. Data validation: Before and after each test, review the data to ensure that data has not been modified inadvertently. Bounds checking: Field size, time, date, etc. Bounds checking prevents buffer overflows. Sanitize test data to ensure that sensitive production data is not exposed through the test process. Test data should not be production data until preparing for final user acceptance tests, at which point special precautions should be taken to ensure that actions are not taken as a result of test runs. When designing testing controls, make sure to test all changes. The program or media librarian should maintain implementation test data used to test modifications, and should retain copies that are used for particular investigations. Testing done in parallel with production requires that a separate copy of production data be utilized for the assessment. Use copies of master files, not production versions, and ensure either that the data has been sanitized or that the output of the test cannot generate production transactions. Management should be informed of, and acknowledge, the results of the test. Certification and Accreditation. Certification is the process of evaluating the security stance of the software or system against a predetermined set of security standards or policies. Certification also involves how well the system performs its intended functional requirements. The certification or evaluation document should contain an analysis of the technical and nontechnical security features and countermeasures and the extent to which the software or system meets the security requirements for its mission and operational environment. A certifying officer then verifies that the software has been tested and meets all applicable policies, regulations, and standards for securing information systems. Any exceptions are noted for the accreditation officer.
Security activities verify that the data conversion and data entry are controlled, and only those who need to have access are allowed on the system. Also, an acceptable level of risk is determined. Additionally, appropriate controls must be in place to reconcile and validate the accuracy of information after it is entered into the system. It should also test the ability to substantiate processing. The acceptance of risk is based on the identified risks and operational needs of the application to meet the organization’s mission. Management, after reviewing the certification, authorizes the software or system to be implemented in a production status, in a specific environment, for a specific period. There are two types of accreditation: provisional and full. Provisional accreditation is for a specific period and out559
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® lines required changes to the applications, system, or accreditation documentation. Full accreditation implies that no changes are required for making the accreditation decision. Note that management may choose to accredit a system that has failed certification, or may refuse to accredit a system even if it has been certified correct. Certification and accreditation are related, but not simply two steps in a single process. Transition to Production (Implementation). During this phase, the new system is transitioned from the acceptance phase into the live production environment. Activities during this phase include obtaining security accreditation (if not included in the system accreditation process); training the new users according to the implementation and training schedules; implementing the system, including installation and data conversions; and, if necessary, conducting any parallel operations. Operations and Maintenance Support (Postinstallation). During this phase, the system is in general use throughout the organization. The activities involve monitoring the performance of the system and ensuring continuity of operations. This includes detecting defects or weaknesses, managing and preventing system problems, recovering from system problems, and implementing system changes.
The operating security activities during this phase include testing backup and recovery procedures, ensuring proper controls for data and report handling, and ensuring the effectiveness of security processes. During the maintenance phase, periodic risk analysis and recertification of sensitive applications are required when significant changes occur. Significant changes include a change in data sensitivity or criticality, relocation or major change to the physical environment, new equipment, new external interfaces, new operating system software, or new application software. Throughout the operation and maintenance phase, it is important to verify that any changes to procedures or functionality do not disable or circumvent the security features. Also, someone should be assigned the task of verifying compliance with applicable service level agreements according to the initial operational and security baselines. Revisions and System Replacement. As systems are in production mode, the hardware and software baselines should be subject to periodic evaluations and audits. In some instances, problems with the application may not be defects or flaws, but rather additional functions not currently developed in the application. Any changes to the application must follow the same SDLC and be recorded in a change management system.
Revision reviews should include security planning and procedures to avoid future problems. Periodic application audits should be conducted 560
Application Security and include documenting security incidents when problems occur. Documenting system failures is a valuable resource for justifying future system enhancements. Software Development Methods. Several software development methods have evolved to satisfy different requirements. Waterfall. The traditional waterfall life-cycle method is the oldest method for developing software systems. It was developed in the early 1970s and provided a sense of order to the process. Each phase — concept, requirements definition, design, etc. — contains a list of activities that must be performed and documented before the next phase begins. The disadvantage of the waterfall model is that it demands a heavy overhead in planning and administration, and requires patience in the early stages of a project. Also, because each phase must be completed before the next, it can inhibit a development team from pursuing concurrent phases or activities. Usually, this method is not good for projects that must be developed in quick turnaround time periods (generally less than six months). The waterfall model is considered to be the paradigm for the following styles, known as noniterative models. From the perspective of security, noniterative models are preferred for systems development. STRUCTURED PROGRAMMING DEVELOPMENT. This is a method that programmers use to write programs allowing considerable influence on the quality of the finished products in terms of coherence, comprehensibility, freedom from faults, and security. It is one of the most widely known programming development models, and versions are taught in almost all academic systems development courses. The methodology promotes discipline, allows introspection, and provides controlled flexibility. It requires defined processes and modular development, and each phase is subject to reviews and approvals. It also allows for security to be added in a formalized, structured approach. SPIRAL METHOD. The spiral model is a sort of nested version of the waterfall method. The development of each phase is carefully designed using the waterfall model. A distinguishing feature of the spiral model is that in each phase there are four substages, in particular, a risk assessment review. Estimated costs to complete and schedules are revised each time the risk assessment is performed. Based on the results of the risk assessment, a decision is made to continue or cancel the project. CLEANROOM. Cleanroom was developed in the 1990s as an engineering process for the development of high-quality software. It is named after the process of cleaning electronic wafers in a wafer fabrication plant. (Instead of testing for and cleaning contaminants from the wafer after it has been made, the objective is to prevent pollutants from getting into the fabrication envi561
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® ronment.) In software application development, it is a method of controlling defects (bugs) in the software. The goal is to write code correctly the first time, rather than trying to find the problems once they are there. Essentially, cleanroom software development focuses on defect prevention rather than defect removal. To achieve this, more time is spent in the early phases, relying on the assumption that the time spent in other phases, such as testing, is reduced. (Quality is achieved through design, rather than testing and remediation.) Because testing can often consume the majority of a project timeline, time saved during the testing phase can be substantial. In terms of security, if risk considerations are addressed up front, security becomes an integral part of the system, rather than an add-on. Iterative Development. The pure waterfall model is highly structured and does not allow for changes once the project is started, or revisiting a stage in light of discoveries made in a later phase. Iterative models allow for successive refinements of requirements, design, and coding.
Allowing refinements during the process requires that a change control mechanism be implemented. Also, the scope of the project may be exceeded if clients change requirements after each point of development. Iterative models also make it very difficult to ensure that security provisions are still valid in a changing environment. PROTOTYPING. The prototyping method was formally introduced in the early 1980s to combat the weaknesses of the waterfall model. The objective is to build a simplified version (prototype) of the application, release it for review, and use the feedback from the users’ review (or clients) to build a second, better version. This is repeated until the users are satisfied with the product. It is a four-step process: initial concept, design and implement initial prototype, refine prototype until acceptable, and complete and release final version. MODIFIED PROTOTYPE MODEL (MPM). This is a form of prototyping that is ideal for Web application development. It allows for the basic functionality of a desired system or component to be formally deployed in a quick timeframe. The maintenance phase is set to begin after the deployment. The goal is to have the process be flexible enough so the application is not based on the state of the organization at any given time. As the organization grows and the environment changes, the application evolves with it, rather than being frozen in time. RAPID APPLICATION DEVELOPMENT (RAD). RAD is a form of rapid prototyping that requires strict time limits on each phase and relies on tools that enable quick development. This may be a disadvantage if decisions are made so rapidly that it leads to poor design.
Application Security JOINT ANALYSIS DEVELOPMENT (JAD). JAD was originally invented to enhance the development of large mainframe systems. Recently, JAD facilitation techniques have become an integral part of rapid application development (RAD), Web development, and other methods. It is a management process that helps developers to work directly with users to develop a working application. The success of JAD is based on having key players communicating at critical phases of the project. The focus is on having the people who actually perform the job (they usually have the best knowledge of the job) work together with those who have the best understanding of the technologies available to design a solution. JAD facilitation techniques bring together a team of users, expert systems developers, and technical experts throughout the development life cycle. EXPLORATORY MODEL. This is a set of requirements built with what is currently available. Assumptions are made as to how the system might work, and further insights and suggestions are combined to create a usable system. Other Methods and Models. There are other software development methods that do not rely on the iterate/do not iterate division, such as the following. COMPUTER-AIDED SOFTWARE ENGINEERING (CASE). This is the technique of using computers and computer utilities to help with the systematic analysis, design, development, implementation, and maintenance of software. It was designed in the 1970s, but has evolved to include visual programming tools and object-oriented programming. It is most often used on large, complex projects involving multiple software components and many people. It provides a mechanism for planners, designers, code writers, testers, and managers to share a common view of where a software project is at each phase of the life-cycle process. By having an organized approach, code and design can be reused, which can reduce costs and improve quality. The CASE approach requires building and maintaining software tools and training for the developers who will use them. COMPONENT-BASED DEVELOPMENT. This is the process of using standardized building blocks to assemble, rather than develop, an application. The components are encapsulated sets of standardized data and standardized methods of processing data, together offering economic and scheduling benefits to the development process. From a security perspective, the advantage is (or can be) that components have previously been tested for security. This is similar to object-oriented programming (OOP). REUSE MODEL. In this model, an application is built from existing components. The reuse model is best suited for projects using object-oriented development because objects can be exported, reused, or modified. 563
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® EXTREME PROGRAMMING. This is a discipline of software development that is based on values of simplicity, communication, and feedback. Despite the name, extreme programming is a fairly structured approach, relying on subprojects of limited and defined scope and programmers working in pairs. The team produces the software in a series of small, fully integrated releases that fulfill the customer-defined needs for the software. Those who have worked with the method say that it works best with small teams: around a dozen programmers in total. Model Choice Considerations and Combinations. Depending on the application project and the organization, models can be combined to fit the specific design and development process. For example, an application may need a certain set of activities to take place to achieve success, or the organization may require certain standards or processes to meet industry or government requirements.
When deciding on the programming model, security must be a consideration. Many developers focus on functionality and not security; thus, it is important to educate those individuals responsible for the development and the managers who oversee the projects. If developers are brought into the project knowing there is a focus on security, they may better understand the importance of coding both functionality and security. Java Security. The Java programming language implements some specific security provisions. Some of these have been added to subsequent programming languages.
The three parts (sometimes referred to as layers) of the Java security approach are:
1. Verifier (or interpreter), which helps to ensure type safety. It is primarily responsible for memory and bounds checking.
2. Class loader, which loads and unloads classes dynamically from the Java runtime environment.
3. Security manager, which acts as a security gatekeeper protecting against rogue functionality.
The verifier is responsible for scrutinizing the bytecode (regardless of how it was created) before it can run on a local Java VM. Because many programs written in Java are intended to be downloaded from the network, the Java verifier acts as a buffer between the computer and the downloaded program. Because the computer is actually running the verifier, which is executing the downloaded program, the verifier can protect the computer from dangerous actions that can be caused by the downloaded program. The verifier is built into the Java VM and by design cannot be accessed by programmers or users.
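As a concrete illustration of the third layer, the following is a minimal, hypothetical security manager; the class name and its single veto are invented for this sketch, and recent Java releases have deprecated the API, but it reflects the model described here. Guarded operations call the corresponding check method, which can veto the operation by throwing a SecurityException.

import java.io.File;
import java.security.Permission;

// A minimal, hypothetical security manager: it vetoes file deletion and, for the
// sake of the sketch, permits everything else. A real manager would defer to the
// installed policy (java.policy grants) instead of allowing all other actions.
public class NoDeleteSecurityManager extends SecurityManager {

    @Override
    public void checkDelete(String file) {
        throw new SecurityException("File deletion is not permitted: " + file);
    }

    @Override
    public void checkPermission(Permission perm) {
        // allow all other guarded operations in this sketch
    }

    public static void main(String[] args) {
        System.setSecurityManager(new NoDeleteSecurityManager());
        try {
            new File("example.tmp").delete();   // triggers checkDelete()
        } catch (SecurityException e) {
            System.out.println("Vetoed: " + e.getMessage());
        }
    }
}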
Application Security The verifier can check bytecode at a number of different levels. The simplest check ensures that the format of a code fragment is correct. The verifier also applies a built-in theorem prover to each code fragment. The theorem prover can ensure that the bytecode does not have rogue code, such as the ability to forge pointers, violate access restrictions, or access objects using incorrect type information. If the verifier discovers rogue code within a class file, it executes an exception and the class file is not executed. A criticism of the Java verifier is the length of time it takes to verify the bytecodes. Although the delay time is minimal, Web business owners thought that any delay, such as 10 to 20 s, would prevent customers from using their sites. This could be viewed as an example of a technology that is not quite ready for the argument (trade-off) between functionality and security. In most Java implementations, when the bytecode arrives at the Java VM, the class loader forms it into a class, which the verifier automatically examines. The class loader is responsible for loading the mobile code and determining when and how classes can be added to a running Java environment. For security purposes, the class loaders ensure that important parts of the Java runtime environment are not replaced by impostor code (known as class spoofing). Also for security purposes, class loaders typically divide classes into distinct namespaces according to origin. This is an important security element — to keep local classes distinct from external classes. However, a weakness was discovered in the class loader — in some instances, it was possible for the namespaces to overlap. This has subsequently been protected with an additional security class loader. The third part of the model is the security manager, which is responsible for restricting the ways an applet uses visible interfaces (Java API calls). It is a single Java object that performs runtime checks on dangerous operations. Essentially, code in the Java library consults the security manager whenever a potentially dangerous operation is attempted. The security manager has veto authority and can generate a security exception. A standard browser security manager will disallow most operations when they are requested by untrusted code, and will allow trusted code to perform all of its operations. It is the responsibility of the security manager to make all final decisions as to whether a particular operation is permitted or rejected. Java was originally designed for a distributed application environment, and so the security model implemented a sandbox that imposed strict controls on what distributed Java programs can and cannot do. An alternative to the sandbox approach of handling mobile code is to run only code that is trusted. For example, ActiveX controls should be run only when you completely trust the entity that signed the control. Unfortunately, there 565
have been problems with both the design and implementation of the ActiveX system. In the Java sandbox model, the Web browser defines and implements a security policy for running downloaded Java code, such as an applet. A Java-enabled Web browser includes a Java verifier and runtime library along with classes (in Java, all objects belong to classes) to implement a security manager. The security manager controls the access to critical system resources and ensures that the Web browser's version of the security manager is implemented correctly. In the extreme, if a Java-enabled Web browser did not install a system security manager, an applet would have the same access as a local Java application. The sandbox is not the only example of the operation of the security manager. Any Java application or environment can implement, and tune, a specific security manager and particular restrictions. A weakness of the three-part model is that if any of the three parts fail to operate, the security model may be completely compromised. Since Java's introduction, several additional security features have been released, including the java.security package. This package is an application programming interface (API) that includes both a cryptographic provider interface and APIs for common cryptographic algorithms. It provides the ability to implement cryptography and manage/modify default security protections for a specific application. Other new Java releases focusing on security include:
• Java Certification Path API for building and validating certification paths and managing certificate revocation lists.
• Java GSS-API for securely exchanging messages between communicating applications using Kerberos. Support for single sign-on using Kerberos is also included.
• Java Authentication and Authorization Service (JAAS), which enables services to authenticate and enforce access controls upon users.
• Java Cryptography Extension (JCE), which provides a framework and implementation for encryption, key generation and key agreement, and message authentication code (MAC) algorithms.
• Java Secure Socket Extension (JSSE), which enables secure Internet connections. It implements a Java version of the Secure Sockets Layer (SSL) and Transport Layer Security (TLS) protocols and includes functionality for data encryption, server authentication, message integrity, and optional client authentication.
Object-Oriented Technology and Programming. Object-oriented programming (OOP) is considered by some to be a revolutionary concept
that changed the rules in computer program development. It is organized around objects rather than linear procedures. OOP is a programming method that makes a self-sufficient object. The object is a block of preassembled programming code in a self-contained module. The module encapsulates both data and the processing instructions that may be called to process the data. Once a block of programming code is written, it can be reused in any number of programs. Examples of object-oriented languages are Eiffel, Smalltalk (one of the first), Ruby, Java (one of the most popular today), C++ (also one of the most popular today), Python, Perl, and Visual Basic. When defining an object-oriented language, the following are some of the key characteristics. Encapsulation (Also Known as Data Hiding). A class defines only the data it needs to be concerned with. When an instance of that class (i.e., an object) is run, the code will not be able to accidentally access other data, which is generally seen as positive in terms of security. Polymorphism. Objects may be processed differently depending on their data type. Unfortunately, this has implications for security that must be carefully assessed. Inheritance. The concept of a data class makes it possible to define subclasses of data objects that share some or all of the main (or super) class characteristics. If security is properly implemented in the high-level class, then subclasses should inherit that security. Polyinstantiation. Specific objects, instantiated from a higher class, may vary their behavior depending upon the data they contain. Therefore, it may be difficult to verify that inherited security properties are valid for all objects. However, polyinstantiation can also be used to prevent inference attacks against databases, because it allows different versions of the same information to exist at different classification levels.
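A compact sketch may help tie the first three characteristics together; all class, field, and method names are invented for illustration.

// A sketch of encapsulation, inheritance, and polymorphism.
class Account {
    private double balance;                     // encapsulation: state is hidden

    Account(double openingBalance) { this.balance = openingBalance; }

    double getBalance() { return balance; }     // access only through methods

    void describe() { System.out.println("Generic account, balance " + balance); }
}

class SavingsAccount extends Account {          // inheritance: reuses Account
    SavingsAccount(double openingBalance) { super(openingBalance); }

    @Override
    void describe() {                           // polymorphism: subtype behavior
        System.out.println("Savings account, balance " + getBalance());
    }
}

public class OopDemo {
    public static void main(String[] args) {
        Account account = new SavingsAccount(100.0);   // declared vs. runtime type
        account.describe();    // dispatches to SavingsAccount.describe() at runtime
        // account.balance = 0;   // would not compile: the field is private
    }
}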
Within an OOP environment, all predefined types are objects. A data type in a programming language is a set of data with values having predefined characteristics, such as integer, character, string, and pointer. In most programming languages, a limited number of such data types are built into the language. The programming language usually specifies the range of values for a given data type, how the values are processed by the computer, and how they are stored. In OOP, all user-defined types are also objects. The first step in OOP is to identify all the objects you want to manipulate and how they relate to each other; this is often known as data modeling. Once the object is identified, it is generalized as a class of objects and defined as the kind of data it contains and as any logic sequences that can 567
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® manipulate it. Each distinct logic sequence is known as a method. A real instance of a class is called an object or an instance of a class, and this is what is run in the computer. The object’s methods provide computer instructions, and the class object characteristics provide relevant data. Communication with objects, and objects communication with each other, is established through interfaces called messages. When building traditional programs, the programmers must write every line of code from the beginning. With OOP, programmers can use the predetermined blocks of code (objects). Consequently, an object can be used repeatedly in different applications and by different programmers. This reuse reduces development time and thus reduces programming costs. Object-Oriented Security. In object-oriented systems, objects are encapsulated. Encapsulation protects the object by denying access to view what is located inside the object — it is not possible to see what is contained in the object because it is encapsulated. Encapsulation of the object does provide protection of private data from outside access. For security purposes, no object should be able to access another object’s internal data. On the other hand, it could be difficult for system administrators to apply the proper policies to an object if they cannot identify what the object contains.
Some of the security issues can be found in the use of polyinstantiation, polymorphism, and inheritance. Polyinstantiation allows for iteratively producing a more defined version of an object by replacing variables with values (or other variables). Thus, multiple distinct instances of the same object can be created, which prevents lower-level objects from inferring information held at a higher level of security. It is also the technique used to avoid covert channels based on inference by causing the same information to exist at different classification levels. Therefore, users at a lower classification level do not know of the existence of a higher classification level. Covert channels are further explained in the chapter on security architecture and design, Domain 5. In object-oriented programming, polymorphism refers to a programming language's ability to process objects differently depending on their data type. The term is sometimes used to describe a variable that may refer to objects whose class is not known at compile time, but will respond at runtime according to the actual class of the object to which they refer. Even though polymorphism seems straightforward, if used incorrectly, it can lead to security problems. One of the basic activities of an object-oriented design is establishing relationships between classes. One fundamental way to relate classes is through inheritance. This is when a class of objects is defined; any subclass that is defined can inherit the definitions of the general (or super)
Application Security class. Inheritance allows a programmer to build a new class similar to an existing class without duplicating all the code. The new class inherits the old class’s definitions and adds to them. Essentially, for the programmer, an object in a subclass need not have its own definitions of data and methods that are generic to the class it is a part of. This can help decrease program development time — what works for the superclass will also work for the subclass. Multiple inheritances can introduce complexity and may result in security breaches for object accesses. Issues such as name clashes and ambiguities must be resolved by the programming language to avoid a subclass inheriting inappropriate privileges from a superclass. Distributed Object-Oriented Systems. As the age of mainframe-based applications began to wane, the new era of distributed computing emerged. Distributed development architectures allow applications to be divided into pieces that are called components, and each component can exist in different locations. This development paradigm allows programs to download code from remote machines onto a user’s local host in a seamless manner to the user.
Applications today are constructed with software systems that are based on distributed objects, such as the Common Object Request Broker Architecture (CORBA), Java Remote Method Invocation (JRMI), Enterprise JavaBean (EJB), and Distributed Component Object Model (DCOM). A distributed object-oriented system allows parts of the system to be located on separate computers within an enterprise network. The object system itself is a compilation of reusable self-contained objects of code designed to perform specific business functions. How objects communicate with one another is complex, especially because objects may not reside on the same machine, but may be located across machines on the network. To standardize this process, the Object Management Group (OMG) created a standard for finding objects, initiating objects, and sending requests to the objects. The standard is the Object Request Broker (ORB), which is part of the Common Object Request Broker Architecture (CORBA). Common Object Request Broker Architecture (CORBA). CORBA is a set of standards that address the need for interoperability between hardware and software products. CORBA allows applications to communicate with one another regardless of where they are stored. The Object Request Broker (ORB) is the middleware that establishes a client/server relationship between objects. Using an ORB, a client can transparently locate and activate a method on a server object either on the same machine or across a network. The ORB operates regardless of processor type or programming language. 569
Not only does the ORB handle all the requests on the system, but it also enforces the system's security policy. The policy describes what the users (and the system) are allowed to do and also what user (or system) actions will be restricted. The security provided by the ORB should be transparent to the user's applications. The CORBA security service supports four types of policies: access control, data protection, nonrepudiation, and auditing. When a client application (through an object) sends a request (message) to the target object:
1. The message is sent through the ORB security system. Inside the ORB security system is the policy enforcement code, which contains the organization's policy regarding objects.
2. If the policy allows the requester to access the targeted object, the request is then forwarded to the target object for processing.
When reviewing CORBA implementations, consider the following:
• The specific CORBA security features that are supported
• The implementation of CORBA security building blocks, such as cryptography blocks or support for Kerberos systems
• The ease with which system administrators can use the CORBA interfaces to set up the organization's security policies
• Types of access control mechanisms that are supported
• Types, granularity, and tools for capturing and reviewing audit logs
• Any technical evaluations (i.e., Common Criteria)
CORBA is not the only method for securing distributed application environments. Java's Remote Method Invocation (RMI) and Enterprise JavaBean (EJB) are similar. EJB is a Sun Microsystems model providing an API specification for building scalable, distributed, multitier, component-based applications. EJB uses Java's RMI implementations for communications. The EJB server provides a standard set of services for transactions, security, and resource sharing. One of the security advantages is that EJB allows the person assembling the components to control access. Instead of a component developer hard coding the security policies, the end user (i.e., system administrator or security officer) can specify the policy. Other security features are also available to the end user. A vulnerability of EJB is the noted weakness of RMI. For example, RMI is typically configured to allow clients to download code automatically from the server when it is not present. Thus, before the client can make a secure connection, it can still download code, or a malicious attacker could masquerade as the client to the server and download code. Although improvements have been made to increase the security of RMI, all implementations must be reviewed for security features.
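To illustrate how the assembler or deployer, rather than the component developer, can specify the access policy, the following is a hedged sketch using the declarative security annotations of later EJB versions; the bean, method, and role names are invented, earlier EJB versions express the same idea in the ejb-jar.xml deployment descriptor, and a container is required to enforce the declared roles.

import javax.annotation.security.RolesAllowed;
import javax.ejb.Stateless;

// Hypothetical session bean: the access policy is declared on the methods, so a
// deployer or security officer can map the named roles to real users without
// touching the business logic. Compiling it requires the Java EE APIs.
@Stateless
public class PayrollBean {

    @RolesAllowed("payroll-admin")
    public void adjustSalary(long employeeId, double newSalary) {
        // business logic only; the container performs the role check
    }

    @RolesAllowed({"payroll-admin", "manager"})
    public double viewSalary(long employeeId) {
        return 0.0;   // placeholder value for the sketch
    }
}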
Software Protection Mechanisms
This section provides a brief overview of many of the software protection controls that can be made part of applications, to ameliorate some of the problems previously mentioned. Security Kernels. A security kernel is responsible for enforcing a security policy. It is a strict implementation of a reference monitor mechanism. The architecture of a kernel operating system is typically layered, and the kernel should be at the lowest and most primitive level. It is a small portion of the operating system through which all references to information and all changes to authorizations must pass. In theory, the kernel implements access control and information flow control between implemented objects according to the security policy.
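As a purely conceptual illustration of mediation through a single point, consider the following toy class; the subject, object, and right names and the string-based policy table are invented, and nothing here provides the isolation or verifiability a real kernel requires.

import java.util.Collections;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// A toy reference monitor: every access request is mediated by one check
// against a simple policy table.
public class ReferenceMonitorSketch {

    private final Map<String, Set<String>> policy = new HashMap<>();

    void grant(String subject, String object, String right) {
        policy.computeIfAbsent(subject, s -> new HashSet<>()).add(object + ":" + right);
    }

    boolean mediate(String subject, String object, String right) {
        // completeness: every access in the system must pass through this method
        return policy.getOrDefault(subject, Collections.emptySet())
                     .contains(object + ":" + right);
    }

    public static void main(String[] args) {
        ReferenceMonitorSketch rm = new ReferenceMonitorSketch();
        rm.grant("alice", "payroll.db", "read");
        System.out.println(rm.mediate("alice", "payroll.db", "read"));   // true
        System.out.println(rm.mediate("bob", "payroll.db", "read"));     // false
    }
}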
To be secure, the kernel must meet three basic conditions: completeness (all accesses to information must go through the kernel), isolation (the kernel itself must be protected from any type of unauthorized access), and verifiability (the kernel must be proven to meet design specifications). The reference monitor, as noted previously, is an abstraction, but there may be a reference validator, which usually runs inside the security kernel and is responsible for performing security access checks on objects, manipulating privileges, and generating any resulting security audit messages. A term associated with security kernels and the reference monitor is the trusted computing base (TCB). The TCB is the portion of a computer system that contains all elements of the system responsible for supporting the security policy and the isolation of objects. The security capabilities of products for use in the TCB can be verified through various evaluation criteria, such as the earlier Trusted Computer System Evaluation Criteria and the current Common Criteria standard. Many of these security terms — reference monitor, security kernel, TCB — are used by vendors for marketing. Thus, it is necessary for security professionals to read the small print between the lines to fully understand what the vendor is offering in regard to security features. For more details on these concepts, see the chapter on security architecture and design, Domain 5. Processor Privilege States. The processor privilege states protect the processor and the activities that it performs. The earliest method of doing this was to record the processor state in a register that could only be altered when the processor was operating in a privileged state. Instructions such as I/O requests were designed to include a reference to this register. If the register was not in a privileged state, the instructions were aborted. The hardware typically controls entry into the privilege mode. For example, the Intel 486 processor defines four privilege rings to protect 571
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® system code and data from being overwritten, although these protections are seldom directly used. The privilege-level mechanism should prevent memory access (programs or data) from less privileged to more privileged levels. The privileged levels are typically referenced in a ring structure. To illustrate this point, many operating systems use two processor access modes: user mode and kernel mode. User application code runs in user mode, and operating system code runs in kernel mode. The privileged processor mode is called kernel mode. Kernel mode allows the processor access to all system memory and all CPU instructions. Application code runs in a nonprivileged mode (the user mode) and has a limited set of interfaces available, limited access to system data, and no direct access to hardware resources. An advantage of the operating system having a higher privilege level than the application software is that problematic application software cannot disrupt the system’s functioning. When a user mode program calls a system service (such as reading a document from storage), the processor catches the call and switches the calling request to kernel mode. When the call is complete, the operating system switches the call back to user mode and allows the user mode program to continue. In the Microsoft model, the operating system and device drivers operate at ring level 0, also known as kernel-level or system-level privilege. At this privilege level, there are no restrictions on what a program can do. Because programs at this level have unlimited access, users should be concerned about the source of device drivers for machines that contain sensitive information. Applications and services should operate at ring level 3, also known as user-level or application-level privilege. Note that if an application or service fails at this level, a trap screen will appear (also known as a general protection fault) that can be dismissed and the operating system does not care. The decision to have services run at the same privilege level as regular applications is based on the idea that if the service traps, the operating system should continue to operate. As a side note, Windows 2000 is considered to be a monolithic operating system. A monolithic operating system exists as a large program consisting of a set of procedures; there are no restrictions on what procedures may be called by any other procedures. In the Windows 2000 model, this means that the majority of the operating system and device driver codes share the kernel mode protected memory space. Once in kernel mode, operating system and device driver code have complete access to system space memory and can bypass Windows 2000 security to access objects. Because most of the Windows 2000 operating system code runs in kernel mode, it is critical that kernel mode components be carefully designed to ensure they do not violate security features. If a system administrator 572
installs a third-party device driver, it operates in kernel mode and then has access to all operating system data. If the device driver installation software also contains malicious code, that code will also be installed and could open the system to unauthorized access. In fact, in the Windows model, any window, including user programs, can send a message to any other window, including those running in kernel mode. An exploit called Shatter used this fact to submit system-level commands even from heavily restricted accounts. A privileged state failure can occur if an application program fails. The safest place for an application to fail is to a system halt. For example, if an application has an error, it will fail to the operating system program, and the user can then use the operating system to recover the application and data. This vulnerability could also be exploited by allowing an attacker to crash an application to get to the operating system with the identity and privileges of the person who started the application.
Security Controls for Buffer Overflows. Another issue with privilege states is called ineffective parameter checking, which causes buffer overflows. A buffer overflow is caused by improper bounds checking, or the absence of bounds checking, on input to a program. Essentially, the program fails to check whether too much data has been provided for an allocated space of memory. Because programs are loaded into memory when they run, when there is an overflow, the data has to go somewhere. If that data happens to be executable malicious code, it will run as if it were the program.
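As a simple illustration (a sketch in C, with a made-up buffer size and inputs), the first function below copies its input with no bounds check, which is exactly the defect a buffer overflow exploits; the second checks the length against the buffer before copying and rejects anything that does not fit.

/* Sketch of the classic unchecked-copy defect and a bounded alternative.
 * The unsafe_copy() pattern is what a buffer overflow exploits: input
 * longer than the buffer overwrites adjacent memory. */
#include <stdio.h>
#include <string.h>

#define BUF_LEN 16

/* Vulnerable: no check that 'input' fits in the 16-byte buffer. */
void unsafe_copy(const char *input)
{
    char buf[BUF_LEN];
    strcpy(buf, input);          /* writes past buf[] if input is too long */
    printf("%s\n", buf);
}

/* Safer: reject anything that does not fit. */
int safe_copy(const char *input)
{
    char buf[BUF_LEN];
    if (strlen(input) >= sizeof(buf))
        return -1;               /* bounds check: refuse oversized input */
    memcpy(buf, input, strlen(input) + 1);
    printf("%s\n", buf);
    return 0;
}

int main(void)
{
    safe_copy("short string");                        /* accepted */
    if (safe_copy("this input is far too long for the buffer") != 0)
        puts("input rejected: would overflow");
    return 0;
}

Whether oversized input is rejected outright or truncated is a design decision; what matters is that the length is checked before the copy.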
Buffer overflows must be corrected by the programmer or by directly patching system memory. They can be detected and fixed by reverse engineering (disassembling programs) and looking at the operations of the application. Hardware states and other hardware controls can make buffer overflows impossible, although enterprises seldom specify hardware at this level. Bounds enforcement and proper error checking will also stop buffer overflows. Controls for Incomplete Parameter Check and Enforcement. A security risk exists when all parameters have not been fully checked for accuracy and consistency by the operating systems. The lack of parameter checking can lead to buffer overflow attacks. A recent parameter check attack involved an e-mail attachment with a name longer than 64K in length. Because the application required attachment names to be less than 64K, attachments that had longer names would overwrite program instructions.
To counter the vulnerability, operating systems must offer some type of buffer management. Parameter checking is implemented by the programmer and involves checking the input data for disallowed characters, length, data type, and format.
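A minimal sketch of that kind of programmer-side checking follows; the field being validated (a six-digit account number) and the rules applied to it are invented purely for illustration.

/* Sketch of programmer-side parameter checking: length, allowed
 * characters, and format are all verified before the value is used. */
#include <ctype.h>
#include <stdio.h>
#include <string.h>

/* Returns 1 if 'input' is a well-formed six-digit account number. */
int valid_account_number(const char *input)
{
    size_t len = strlen(input);

    if (len != 6)                                    /* length check         */
        return 0;
    for (size_t i = 0; i < len; i++)
        if (!isdigit((unsigned char)input[i]))       /* character/type check */
            return 0;
    return 1;                                        /* format satisfied     */
}

int main(void)
{
    printf("%d\n", valid_account_number("123456"));   /* 1: accepted  */
    printf("%d\n", valid_account_number("12a456"));   /* 0: bad char  */
    printf("%d\n", valid_account_number("1234567"));  /* 0: too long  */
    return 0;
}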
Memory Protection. Memory protection is concerned with controlling access to main memory. When several processes are running at the same time, it is necessary to protect the memory used by one process from unauthorized access by another. Thus, it is necessary to partition memory to ensure processes cannot interfere with each other's local memory and to ensure common memory areas are protected against unauthorized access. For example, an operating system may use secondary memory (storage devices) to give the illusion of a larger main memory, or it may partition the main memory among users so that each user sees a virtual machine that has memory smaller than that on the real machine.
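Page-level protections of the kind discussed in the following paragraphs can be exercised directly from user code on many systems. The sketch below assumes a Linux/POSIX environment (mmap, mprotect); it maps one page, writes to it, and then marks it no-access, comparable to the no-access page protection mentioned below, so that any later reference would raise a hardware fault.

/* Sketch of hardware-backed page protection from user space on a
 * POSIX system: a page is mapped, used, then marked no-access, so any
 * later read or write to it would trigger an access violation.
 * (The faulting access itself is left commented out.) */
#define _DEFAULT_SOURCE              /* for MAP_ANONYMOUS on some libc versions */
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    long pagesize = sysconf(_SC_PAGESIZE);

    /* Map one private, anonymous page, initially readable and writable. */
    char *page = mmap(NULL, (size_t)pagesize, PROT_READ | PROT_WRITE,
                      MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (page == MAP_FAILED) {
        perror("mmap");
        return 1;
    }

    strcpy(page, "sensitive data");

    /* Revoke all access: comparable to a "no access" page protection. */
    if (mprotect(page, (size_t)pagesize, PROT_NONE) != 0) {
        perror("mprotect");
        return 1;
    }

    /* page[0] = 'x';   any access now would raise SIGSEGV (an access violation) */

    puts("page is now marked no-access");
    munmap(page, (size_t)pagesize);
    return 0;
}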
The memory used by the operating system needs to be protected to maintain the integrity of privileged code and data. Because memory protection deals with addressing, many protection mechanisms protect memory by placing it outside the address space available to a process. Using Windows 2000 as an example, there are four methods Windows 2000 uses to provide memory protection so no user process can inadvertently or deliberately corrupt the address space of another process or the operating system itself. The first method ensures all systemwide data structures and memory pools used by kernel mode system components can be accessed only while in kernel mode. Thus, user mode requests cannot access these pages. If they attempt to do so, the hardware will generate a fault, and then the memory manager will create an access violation. In early Windows operating systems, such as Windows 95 and Windows 98, some pages in the system address space were writable from user mode, thus allowing an errant application to corrupt key system data structures and crash the system. Second, each process has a separate, private address space protected from being accessed by any request belonging to another process, with a few exceptions. Each time a request references an address, the virtual memory hardware, in conjunction with the memory manager, intervenes and translates the virtual address into a physical one. Because Windows 2000 controls how virtual addresses are translated, requests running in one process do not inappropriately access a page belonging to another process. Third, most modern processors provide some form of hardware-controlled memory protection, such as read or write access. The type of protection offered depends on the processor. For example, a memory protection option is page_noaccess. If an attempt is made to read from, write to, or execute code in this region, an access violation will occur. The fourth protection mechanism uses access control lists to protect shared memory objects, and they are checked when processes attempt to open them. Another security feature involves access to mapped files.
To map to a file, the object (or user) performing the request must have at least read access to the underlying file object or the operation will fail.
Covert Channel Controls. A covert channel, or confinement problem, is an information flow that is not controlled by a security control. It is a communication channel that allows two cooperating processes to transfer information in a way that violates the system's security policy. Even though there are protection mechanisms in place, if unauthorized information can be transferred using a signaling mechanism or other objects, then a covert channel may exist. The standard example used in application security is a situation where a process can be started and stopped by one program, and the existence of the process can be detected by another application. Thus, the existence of the process can be used, over time, to signal information.
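The following C sketch is a deliberately simplified storage channel of that kind: the "sender" and "receiver" share no legitimate data path, but both can observe whether an agreed-upon file exists, and toggling that shared resource over time leaks one bit per step. The file name and the single-process framing are artificial; the point is only to show why covert channel analysis concentrates on shared resources.

/* Deliberately simplified storage-channel sketch: two cooperating
 * parties have no legitimate communications path, but both can see
 * whether an agreed-upon file exists.  Toggling that shared resource
 * lets the sender leak bits to the receiver, one per "time slot". */
#include <stdio.h>
#include <unistd.h>

#define SHARED "/tmp/.covert_flag"      /* any resource both sides can observe */

static void send_bit(int bit)
{
    if (bit) {
        FILE *f = fopen(SHARED, "w");   /* presence of the file signals 1 */
        if (f)
            fclose(f);
    } else {
        unlink(SHARED);                 /* absence of the file signals 0  */
    }
}

static int receive_bit(void)
{
    return access(SHARED, F_OK) == 0;   /* receiver only checks existence */
}

int main(void)
{
    const unsigned char secret = 'K';
    unsigned char leaked = 0;

    for (int i = 7; i >= 0; i--) {      /* leak one bit per step */
        send_bit((secret >> i) & 1);
        leaked = (unsigned char)((leaked << 1) | (unsigned char)receive_bit());
    }
    unlink(SHARED);
    printf("leaked byte: %c\n", leaked);
    return 0;
}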
The only channels of interest are those breaching the security policy; those channels that parallel legitimate communications paths are not of concern. Although there are differences for each type of covert channel, there is a common condition — the transmitting and receiving objects over the channel must have access to a shared resource. The first step is to identify any potential covert channels; the second is to analyze these channels to determine whether a channel actually exists. The next steps are based on manual inspection and appropriate testing techniques to verify if the channel creates security concerns. Cryptography. Cryptographic techniques protect information by transforming the data through encryption schemes. They are used to protect the confidentiality and integrity of information. Most cryptographic techniques are used in telecommunications systems; however, because of the increase in distributed systems, they are becoming increasingly used in operating systems.
Encryption algorithms can be used to encrypt specific files located within the operating system. For example, database files that contain user information, such as group rights, are protected using one-way hashing algorithms, which provide stronger protection because a hash cannot be reversed to recover the original data.
Password Protection Techniques. Operating system and application software use passwords as a convenient mechanism to authenticate users. Typically, operating systems use passwords to authenticate the user and establish access controls for resources, including the system, files, or applications.
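A small example of one-way hashing applied to a stored credential follows. It assumes a Linux system with the crypt library available (link with -lcrypt; on some platforms the declaration lives in unistd.h rather than crypt.h), and the salt string and passwords are placeholders. Production systems should prefer a modern, deliberately slow scheme such as bcrypt, scrypt, or Argon2, for the dictionary-attack reasons discussed below.

/* Sketch of one-way protection of a stored password using POSIX crypt().
 * Only the salted hash is stored; at login the candidate password is
 * hashed with the same salt and the two hashes are compared. */
#include <crypt.h>      /* on some systems crypt() is declared in unistd.h instead */
#include <stdio.h>
#include <string.h>

int main(void)
{
    const char *salt = "$6$examplesalt$";   /* "$6$" selects SHA-512 crypt on glibc */

    /* Enrollment: hash the chosen password and store only the hash. */
    const char *hash = crypt("correct horse", salt);
    if (hash == NULL) {
        perror("crypt");
        return 1;
    }
    char stored[128];
    strncpy(stored, hash, sizeof(stored) - 1);
    stored[sizeof(stored) - 1] = '\0';

    /* Login attempt: hash the candidate with the same salt, compare hashes. */
    const char *attempt = crypt("correct horse", salt);
    printf("stored hash: %s\n", stored);
    printf("login      : %s\n",
           (attempt && strcmp(stored, attempt) == 0) ? "accepted" : "rejected");
    return 0;
}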
Password protections offered by the operating system include controls on how the password is selected and how complex the password is, password time limits, and password length. 575
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® Password files stored within a computer system must be secured by the protection mechanisms of the operating system. Because password files are prone to unauthorized access, the most common solution is to encrypt password files using one-way encryption algorithms (hashing). These, however, are very susceptible to a dictionary attack if the passwords chosen appear in any dictionary. Another feature offered by an operating system for password security involves an overstrike or password-masking feature. This prevents others from reading the typed password. Inadequate Granularity of Controls. If there is not enough granularity of security, users will get more access permission than needed. If the user is unable to access object A, but the user has access to a program that can access object A, then the security mechanisms could be bypassed. If the security controls are granular enough to address both program and user, then we can prevent the disclosure.
Inadequate granularity of controls can be addressed by properly implementing the concept of least privilege, setting reasonable limits on the user. Also, the separation of duties and functions should be covered. Programmers should never be system administrators or users of the application. Grant users only those permissions necessary to do their job. Users should have no access to computer rooms or legacy programs; programmers and system analysts should not have write access to production programs, allowing them to change the installed program code. Programmers should have no ongoing direct access to production programs. Access to fix crashed applications should be limited to the time required to repair the problem causing the failure. Mainframe operators should not be allowed to do programming. Maintenance programmers should not have access to programs under development. Assignment of system privileges must be tightly controlled and a shared responsibility. Control and Separation of Environments. The following environmental
types can exist in software development:
• Development environment
• Quality assurance environment
• Application (production) environment
The security issue is to control how each environment can access the application and the data and then provide mechanisms to keep them separate. For example, systems analysts and programmers write, compile, and perform initial testing of the application's implementation and functionality in the development environment. As the application reaches maturity and is moving toward production readiness, users and quality assurance people perform functional testing within the quality assurance environment.
The quality assurance configuration should simulate the production environment as closely as possible. Once the user community has accepted the application, it is moved into the production environment. Blended environments combine one or more of these individual environments and are generally the most difficult to control. Control measures protecting the various environments include physical isolation of each environment, physical or temporal separation of data for each environment, access control lists, content-dependent access controls, role-based constraints, role definition stability, accountability, and separation of duties.
Time of Check/Time of Use (TOC/TOU). If there are multiple threads of execution at the same time, a TOC/TOU condition is possible. The most common TOC/TOU hazards are file-based race conditions that occur when a check on some property of a file precedes the use of that file.
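The following C sketch contrasts the vulnerable check-then-use pattern with the descriptor-based approach described in the next paragraph. The path name and the specific checks are illustrative; the essential point is that the second version checks the very object it will use, so the binding cannot change between the check and the use.

/* Sketch of the file-based TOC/TOU problem and the descriptor-based fix.
 * Between the access() check and the open() call, an attacker who wins
 * the race can swap the object bound to the name (for example, with a
 * symbolic link), so the program uses something it never checked. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

/* Vulnerable pattern: check by NAME, then use by NAME. */
int check_then_open(const char *path)
{
    if (access(path, W_OK) != 0)           /* time of check */
        return -1;
    /* ...race window: the object bound to 'path' can change here... */
    return open(path, O_WRONLY);           /* time of use   */
}

/* Safer pattern: open first, then check the descriptor you will use. */
int open_then_check(const char *path)
{
    int fd = open(path, O_WRONLY | O_NOFOLLOW);
    if (fd < 0)
        return -1;

    struct stat st;
    if (fstat(fd, &st) != 0 || !S_ISREG(st.st_mode)) {
        close(fd);                          /* checked object is the used object */
        return -1;
    }
    return fd;
}

int main(void)
{
    int fd = open_then_check("/tmp/example.dat");    /* illustrative path */
    if (fd >= 0)
        close(fd);
    else
        puts("open_then_check refused or failed");
    return 0;
}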
To avoid TOC/TOU problems, especially file-based issues, the programmer should avoid any file system call that takes a file name as input rather than a file handle or file descriptor. When using file descriptors, it is possible to ensure that the file used does not change after it is first called. In addition, files that are to be used should be kept in their own directory, where the directory is only accessible by the user ID (UID) of the program performing the file operation. In this manner, even when using symbolic names, attackers are not able to exploit a race condition unless they already have the proper UID. (If they have a proper UID, there is no reason to exploit the race condition.)
Social Engineering. The best method of preventing social engineering is to make users aware of the threat and give them the proper procedures for handling unusual (or what may seem usual) requests for information. For example, if users receive a phone call from a "system administrator" asking for their password, they should be aware of social engineering threats and ask that the system administrator come to their office to discuss the problem in a face-to-face format. Unless the user is 100 percent sure that the person on the phone is the system administrator and that the phone line could not be tampered with, it is better never to give a password out over the phone.
Backup Controls. Backing up operating system and application software is a method of ensuring productivity in the event of a system crash. Operational copies of software should be available in the event of a system crash. Also, storing copies of software in an off-site location can be useful if the building is no longer available. Data, programs, documentation, computing, and communications equipment redundancy can ensure that information is available in the event of an emergency. Requiring that the source code for custom-designed software is kept in escrow ensures that if the
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® software vendor were to go out of business, the source code would be available to use or give to another vendor in the event upgrades or assistance is needed. Contingency planning documents help to provide a plan for returning operations to normal in the event of an emergency. Disk mirroring, redundant array of independent disks (RAID), etc., provide protection for information in the event of a production server crashing. Software Forensics. Software, particularly malicious software, has traditionally been seen in terms of a tool for the attacker. The only value that has been seen in the study of such software is in regard to protection against malicious code. However, experience in the virus research field, and more recent studies in detecting plagiarism, indicates that we can obtain evidence of intention, and cultural and individual identity, from examination of software itself. Although most would see software forensics strictly as a tool for assurance, in software development and acquisition, it has a number of uses in protective procedures.
Outside of virus research, forensic programming is a little known field. However, the larger computer science world is starting to take note of software forensics. It involves the analysis of program code, generally object or machine language code, to make a determination of or provide evidence for the intent or authorship of a program. Software forensics has a number of possible uses. In analyzing software suspected of being malicious, it can be used to determine whether a problem is a result of carelessness or was deliberately introduced as a payload. Information can be obtained about authorship and the culture behind a given programmer, and the sequence in which related programs were written. This can be used to provide evidence about a suspected author of a program or to determine intellectual property issues. The techniques behind software forensics can sometimes also be used to recover source code that has been lost. Software forensics generally deals with two different types of code. The first is source code, which is relatively legible to people. Analysis of source code is often referred to as code analysis and is closely related to literary analysis. The second, analysis of object, or machine, code, is generally referred to as forensic programming. Literary analysis has contributed much to code analysis and is an older and more mature field. It is referred to, variously, as authorship analysis, stylistics, stylometry, forensic linguistics, or forensic stylistics. Stylistic or stylometric analysis of messages and text may provide information and evidence that can be used for identification or confirmation of identity.
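One crude, content-free feature of the kind such analysis can draw on is the relative frequency of individual letters in a body of text. The C sketch below computes that profile from standard input; it is a toy illustration only, and real stylometric work combines many richer features.

/* Toy stylometric profile: relative frequency of each letter in a text
 * sample read from standard input.  Single-letter frequencies are one
 * simple feature that can help characterize an author. */
#include <ctype.h>
#include <stdio.h>

int main(void)
{
    unsigned long count[26] = {0};
    unsigned long total = 0;
    int c;

    while ((c = getchar()) != EOF) {
        if (isalpha(c)) {
            count[tolower(c) - 'a']++;    /* tally each letter, case-folded */
            total++;
        }
    }

    for (int i = 0; i < 26; i++)
        if (total > 0)
            printf("%c %.3f\n", 'a' + i, (double)count[i] / (double)total);
    return 0;
}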
Application Security Physical fingerprint evidence frequently does not help us identify a perpetrator in terms of finding the person once we have a fingerprint. However, a fingerprint can confirm an identity or place a person at the scene of a crime, once we have a suspect. In the same way, the evidence we gather from analyzing the text of a message, or a body of messages, may help to confirm that a given individual or suspect is the person who created the fraudulent postings. Both the content and the syntactical structure of text can provide evidence that relates to an individual. Some of the evidence that we discover may not relate to individuals. Some information, particularly that relating to the content or phrasing of the text, may relate to a group of people who work together, influence each other, or are influenced from a single outside source. This data can still be of use to us, in that it will provide us with clues in regard to a group that the author may be associated with, and may be helpful in building a profile of the writer. Groups may also use common tools. Various types of tools, such as word processors or databases, may be commonly used by groups and provide similar evidence. In software analysis, we can find indications of languages, specific compilers, and other development tools. Compilers leave definite traces in programs and can be specifically identified. Languages leave indications in the types of functions and structures supported. Other types of software development tools may contribute to the structural architecture of the program or the regularity and reuse of modules. In regard to programming, it is possible to trace indications of cultures and styles in programming. A very broad example is the difference between design of programs in the Microsoft Windows environment and the UNIX environment. Windows programs tend to be large and monolithic, with the most complete set of functions possible built into the main program, large central program files, and calls to related application function libraries. UNIX programs tend to be individually small, with calls to a number of single-function utilities. Evidence of cultural influences exists right down to the machine code level. Those who work with assembler and machine code know that a given function can be coded in a variety of ways, and that there may be a number of algorithms to accomplish the same end. It is possible, for example, to note, for a given function, whether the programming was intended to accomplish the task in a minimum amount of memory space (tight code), a minimum number of machine cycles (high-performance code), or a minimal effort on the part of the programmer (sloppy code).
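At the object code level, much of this tool residue is visible as plain printable text embedded in the binary (compiler banners, runtime library names, format strings). The following C sketch, a minimal version of the familiar strings utility, prints such runs from any file named on the command line; the minimum run length is an arbitrary choice.

/* Minimal "strings"-style scan: prints runs of printable characters
 * found in a file, the kind of residue (compiler banners, library
 * names, format strings) that helps identify the tools behind a binary. */
#include <ctype.h>
#include <stdio.h>

#define MIN_RUN 6    /* only report runs of at least this many printable bytes */

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <file>\n", argv[0]);
        return 1;
    }

    FILE *f = fopen(argv[1], "rb");
    if (!f) {
        perror(argv[1]);
        return 1;
    }

    char run[256];
    size_t len = 0;
    int c;

    while ((c = fgetc(f)) != EOF) {
        if (isprint(c) && len < sizeof(run) - 1) {
            run[len++] = (char)c;          /* extend the current printable run */
        } else {
            if (len >= MIN_RUN) {          /* long enough to be interesting    */
                run[len] = '\0';
                puts(run);
            }
            len = 0;
        }
    }
    if (len >= MIN_RUN) {
        run[len] = '\0';
        puts(run);
    }

    fclose(f);
    return 0;
}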
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® The syntax of text tends to be characteristic. Does the author always use simple sentences? Always use compound sentences? Have a specific preference when a mix of forms is used? Syntactical patterns have been used in programs that detect plagiarism in written papers. The same kind of analysis can be applied to source code for programs, finding identity between the overall structure of code even when functional units are not considered. A number of such plagiarism detection programs are available, and the methods that they use can assist with this type of forensic study. Errors in the text or program can be extremely helpful in our analysis and should be identified for further study. When dealing with authorship analysis, it may be important to distinguish between issues of style and stylometry. Literary critics, and anyone with a writing background, may be prejudiced against technologies that ignore content and concentrate on other factors. Although techniques such as cusum analysis have been proven to work in practice, they still engender unreasoning opposition from many who fail to understand that material can contain features quite apart from the content and meaning. It may seem strange to use meaningless features as evidence. However, Richard Forsyth reported on studies and experiments that found that short substrings of letter sequences can be effective in identifying authors. Even a relative count of the use of single letters can be characteristic of authors. Certain message formats may provide us with additional information. A number of Microsoft e-mail systems include a data block with every message that is sent. To most readers, this block contains meaningless garbage. However, it may include a variety of information, such as part of the structure of the file system on the sender’s machine, the sender’s registered identity, programs in use, and so forth. Other programs may add information that can be used. Microsoft’s word processing program, Word, for example, is frequently used to create documents sent by e-mail. Word documents include information about file system structure, the author’s name (and possibly company), and a global user ID. This ID was analyzed as evidence in the case of the Melissa virus. MS Word can provide us with even more data: comments and “deleted” sections of text may be retained in Word files and simply marked as hidden to prevent them from being displayed. Simple utility tools can recover this information from the file itself. Mobile Code Controls. The concept of attaching programs to Web pages has very real security implications. However, through the use of appropriate technical controls, the user does not have to consider the security consequences of viewing the page. Rather, the controls determine if the user 580
Application Security can view the page. Secured systems should limit mobile code (applets) access to system resources such as the file system, the CPU, the network, the graphics display, and the browser’s internal state. Additionally, the system should garbage-collect memory to prevent both malicious and accidental memory leakage. The system must manage system calls and other methods that allow applets to affect each other as well as the environment beyond the browser. Fundamentally, the issue of safe execution of code comes down to a concern with access to system resources. Any running program has to access system resources to perform its task. Traditionally, that access has been given to all normal user resources. Mobile code must have restricted access to resources for safety. However, it must be allowed some access to perform its required functions. When creating a secure environment for an executable program, such as mobile code, it is important to identify the resources the program needs and then provide certain types of limited access to these resources to protect against threats. Examples of threats to resources include: • Disclosure of information about a user or the host machine • Denial-of-service attacks that make a resource unavailable for legitimate purposes • Damaging or modifying data • Annoyance attacks, such as displaying obscene pictures on a user’s screen Some resources are clearly more dangerous to give full access to than others. For example, it is hard to imagine any security policy where an unknown program should be given full access to the file system. On the other hand, most security policies would not limit a program from almost full access to the monitor display. Thus, one of the key issues in providing for safe execution of mobile code is determining which resources a particular piece of code is allowed access. That is, there is a need for a security policy that specifies what type of access any mobile code can have. Two basic mechanisms can be used to limit the risk to the user: • Attempt to run code in a restricted environment where it cannot do harm, such as in a sandbox. • Cryptographic authentication can be used to attempt to show the user who is responsible for the code. Sandbox. One of the control mechanisms for mobile code is the sandbox. The sandbox provides a protective area for program execution. Limits are placed on the amount of memory and processor resources the program can consume. If the program exceeds these limits, the Web browser terminates the process and logs an error code. This can ensure the safety of the 581
browser's performance. A sandbox can be created on the client side to protect the resource usage from Java applets. In the Java sandbox security model, there is an option to provide an area for the Java code to do what it needs to do, including restricting the bounds of this area. A sandbox cannot confine code and its behavior without some type of enforcement mechanism. In the Java arena, the Java security manager makes sure all restricted code stays in the sandbox. Trusted code resides outside the sandbox; untrusted code is confined within it. By default, Java applications live outside the sandbox and Java applets are confined within. The sandbox ensures that an untrusted application cannot gain access to system resources. This confined area is needed for applet security. As an example, if a user is viewing an applet showing an environmental picture show of endangered dolphins, the applet could also, unbeknownst to the user, search the hard drive for private information. A sandbox can prevent this type of intrusion by controlling the boundaries or limits of the program.
Programming Language Support. A method of providing safe execution of programs is to use a type-safe programming language (also known as strong typing), such as Java. A type-safe language, or safe language, is one in which programs cannot go wrong in certain well-defined ways. Such languages ensure that arrays stay in bounds, that pointers are always valid, and that code cannot violate variable typing (such as placing code in a string and then executing it). From a security perspective, the absence of pointers is important. Memory access through pointers is one of the main causes of weaknesses (bugs) and security problems in C or C++. Java does an internal check, called static type checking, which examines whether the arguments an operand may receive during execution are always of the correct type.*
Audit and Assurance Mechanisms Although it may be felt that this section is a mere review of what has already been covered, assurance is a major, and all too often neglected, *Once upon a time, a group set out to build a language that would allow one to write programs that could be formally verified. Formal analysis and proof can be used to determine that a program will work the way you want it to, and not do something very weird (usually at an inopportune time). First came the attempt to build the Southampton Program Analysis Development Environment (SPADE) using a subset of the Pascal programming language. When it was determined that Pascal was not really suitable, research was directed to Ada, and the SPADE Ada Kernel, or (with a little poetic license) SPARK, was the result. SPARK can be considered both a subset and extension to Ada, but is best seen as a separate language in its own right. SPARK forbids language structures such as the infamous GOTO statement of Fortran and BASIC (which cannot be formally verified). Support for some object-oriented features has been included in SPARK, but not for aspects like polymorphism, which would make formal proof problematic. A great deal of the security of SPARK lies in the idea of contracts and the use of data specifications (usually referred to as interfaces) that prevent problems such as the unfortunately all too ubiquitous buffer overflow.
Application Security part of information systems security. Therefore, we are emphasizing the importance of audit and assurance by making this a major section. Software is frequently delivered with vulnerabilities that are not discovered until after the software has been installed and is operational. Both UNIX and Microsoft products have numerous security weaknesses that have been discovered after their release. However, this is not a new problem. Early operating systems also had security vulnerabilities. Because this is an ongoing problem, organizations must implement policies and procedures to limit the vulnerabilities that are inherent in the software by expeditious implementation of applicable vendor patches. Information Integrity Procedures should be applied to compare or reconcile what was processed against what was supposed to be processed. For example, controls can compare totals or check sequence numbers. This would check that the right operation was performed on the right data. Information Accuracy To check input accuracy, data validation and verification checks should be incorporated into appropriate applications. Character checks compare input characters against the expected type of characters, such as numbers or letters. This is sometimes also known as sanity checking. Range checks verify input data against predetermined upper and lower limits. Relationship checks compare input data to data on a master record file. Reasonableness checks compare input data to an expected standard — another form of sanity checking. Transaction limits check input data against administratively set ceilings on specified transactions. Information Auditing Because vulnerabilities exist in the software life cycle, there is likelihood that attacks will occur. Auditing procedures assist in detecting any abnormal activities. A secure information system must provide authorized personnel with the ability to audit any action that can potentially cause access to, damage to, or in some way affect the release of sensitive information. The level and type of auditing is dependent on the auditing requirements for the installed software and the sensitivity of data that is processed and stored on the system. The key element is that the audit data provides information on what types of unauthorized activities have taken place and who or what processes took the action. The system resources should be protected when they are available for use. If security software or security features of software are disabled in any way, notification should be given to appropriate individuals. The ability to bypass security features must be limited to only those individuals who 583
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® need that level of access, such as system administrators or information system security officers. Hardware and software should be evaluated for compatibility with existing or complementary systems. Certification and Accreditation In the United States, federal agencies are mandated to conduct security certification of systems that process sensitive information or perform critical support functions. Certification is the technical evaluation of security compliance of the information system within its operational environment: the endorsement by the users and managers that the system/application meets their functional requirements. The certification process is followed by accreditation. The accreditation process reviews the certification information and grants the official authorization to place the information system into operational use: it is the formal approval by senior management. The U.S. National Institute of Standards and Technology (NIST) has developed a document (SP 800-37) that recommends a certification and accreditation process and procedures. Every new system and application that goes into production should go through a process of certification and accreditation prior to implementation. There are vulnerabilities associated with certification. The first is that organizations and users cannot count on the certified product being free of security flaws. Because new vulnerabilities are always being discovered, no product is ever completely secure. Second, most software products must be securely configured to meet certain protection mechanisms. For example, even though the Windows NT 4.0 operating system offers auditing capabilities, the auditing is not enabled by default. Thus, the system administrator needs to enable and configure the auditing capabilities. Another issue is that certifications are not the definitive answer to security. Information system security depends on more than just technical software protection mechanisms, such as personnel and physical security measures. Information Protection Management If software is shared, it should be protected from unauthorized modification by ensuring that policies, developmental controls, and life-cycle controls are in place. In addition, users should be trained in security policies and procedures. Software controls and policies should require procedures for changing, accepting, and testing software prior to implementation. These controls and policies require management approval for any software changes and compliance with change control procedures.
Application Security Change Management To ensure the integrity of our applications, in the process of maintenance of software, care must be taken to ensure that the application is not changed in a gratuitous or negligent manner. Most particularly, there must be controls in place to ensure that users cannot request changes that will breach security policies, and developers cannot implement modifications to the software with unknown effects. Change controls must be sufficient to protect against accidental or deliberate introduction of variations in code that would allow system failures, security intrusions, corruption of data, or improper disclosure of information. The change management process should have a formal cycle, in the same manner as the system development life cycle. There should be a formal change request, an assessment of impact and resource requirements and approval decision, implementation (programming) and testing, implementation in production, and a review and verification in the production environment. The key points of change management are that there is a rigorous process that addresses quality assurance, changes must be submitted, approved, tested, and recorded, and there should be a backout plan in case the change is not successful. The same process should be applied to patch management, when vendors supply patches, hot fixes, and service packs to commercial software. In addition, it should be noted that patches are frequently released to address security vulnerabilities, so they should be applied in a timely manner. This is particularly important given the evidence that black hat groups study released patches to craft new exploits. A strategy should be developed for patch management and should be kept in place as part of the software maintenance infrastructure. A team, responsible for the patch management process, should research (and authenticate) announcements and related information from vendor Web sites. Research should also be conducted in other areas, such as user groups where other experience with the patch may be reported. This requirement may need to be addressed for various systems and applications. Analysis should be conducted balancing the implications of the vulnerability addressed, the need for timely application, and the need for thorough testing. Test the patch, and then deploy it into production. The test environment should mirror the production environment as far as possible. A fallback position should be prepared so that the patch or system can be “rolled back” to a previous stage if the patch creates unforeseen problems. Patch less sensitive systems first, to ensure that an error in the patch does not immediately affect critical systems.
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® Configuration Management For software, configuration management refers to monitoring and managing changes to a program or documentation. The goal is to guarantee integrity, availability, and usage of the correct version of all system components, such as the software code, design documents, documentation, and control files. Configuration management consists of reviewing every change made to a system. This includes identifying, controlling, accounting for, and auditing all changes. The first step is to identify any changes that are made. The control task occurs when every change is subject to some type of documentation that must be reviewed and approved by an authorized individual. Accounting refers to recording and reporting on the configuration of the software or hardware throughout any change procedures. Finally, the auditing task allows the completed change to be verified, especially ensuring that any changes did not affect the security policy or protection mechanisms that are implemented. The best method of controlling changes is to have a configuration management plan that ensures that changes are performed in an agreed upon manner. Any deviations from the plan could change the configuration of the entire system and could essentially void any certification that it is a secure, trusted system. In a project, configuration management often refers to the controlling of changes to the scope or requirements of the project. Often called scope creep, a lack of configuration management can lead to a project never being completed or structured, because its requirements are continuously changing. Malicious Software (Malware) Malware is a relatively new term in the security field. It was created to address the need to discuss software or programs that are intentionally designed to include functions for penetrating a system, breaking security policies, or carrying malicious or damaging payloads. Because this type of software has started to develop a bewildering variety of forms — such as backdoors, data diddlers, DDoS, hoax warnings, logic bombs, pranks, RATs, Trojans, viruses, worms, zombies, etc. — the term malware has come to be used for the collective class of malicious software. However, the term is often used very loosely simply as a synonym for virus, in the same way that virus is often used simply as a description of any type of computer problem. This section will attempt to define the problem more accurately and to describe the various types of malware. Viruses are the largest class of malware, in terms of both numbers of known entities and impact in the current computing environment. Viruses 586
Application Security will therefore be given primary emphasis in this discussion, but will not be the only malware type examined. Given the range of types of malware, and the sometimes subtle distinctions between them, some take the position that we should dispense with differentiations by category, as users are not inclined to understand fine peculiarities. In fact, the opposite is true: we should be very careful to discern the classes, because the variations are characterized by functional differences, and these distinctions inform our detection and protection of systems. Programming bugs or errors are generally not included in the definition of malware, although it is sometimes difficult to make a hard and fast distinction between malware and bugs. For example, if a programmer left a buffer overflow in a system and it creates a loophole that can be used as a backdoor or a maintenance hook, did he do it deliberately? This question cannot be answered technically, although we might be able to guess at it, given the relative ease of use of a given vulnerability. In addition, it should be noted that malware is not just a collection of utilities for the attacker. Once launched, malware can continue an attack without reference to the author or user, and in some cases will expand the attack to other systems. There is a qualitative difference between malware and the attack tools, kits, or scripts that have to operate under an attacker’s control, and which are not considered to fall within the definition of malware. There are grey areas in this aspect as well, because RATs and DDoS zombies provide unattended access to systems, but need to be commanded to deliver a payload. Malware can attack and destroy system integrity in a number of ways. Viruses are often defined in terms of their ability to attach to programs (or to objects considered to be programmable) and so must, in some way, compromise the integrity of applications. Many viruses or other forms of malware contain payloads (such as data diddlers) that may either erase data files or interfere with application data over time in such a way that data integrity is compromised and data may become completely useless. In considering malware, there is an additional type of attack on integrity. As with attacks where the intruder takes control of your system and uses it to explore or assail further systems, to hide his own identity, malware (viruses and DDoS zombies in particular) are designed to use your system as a platform to continue further assaults, even without the intervention of the original author or attacker. This can create problems within domains and intranets where equivalent systems “trust” each other, and can also create “badwill” when those you do business with find out that your system is sending viruses or probes to theirs. 587
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® As noted, malware can compromise programs and data to the point where they are no longer available. In addition, malware generally uses the resources of the system it has attacked and can, in extreme cases, exhaust CPU cycles, available processes (process numbers, tables, etc.), memory, communications links and bandwidth, open ports, disk space, mail queues, and so forth. Sometimes this can be a direct denial-of-service (DoS) attack, and sometimes it is a side effect of the activity of the malware. Malware such as backdoors and RATs are intended to make intrusion and penetration easier. Viruses such as Melissa and SirCam send data files from your system to others (in these particular cases, seemingly as a side effect of the process of reproduction and spread). Malware can be written to do directed searches and send confidential data to specific parties, and can also be used to open covert channels of other types. The fact that you are infected with viruses, or compromised by other types of malware, can become quite evident to others. This compromises confidentiality by providing indirect evidence of your level of security, and may also create public relations problems. It has long been known that the number of variants of viruses or other forms of malware is directly connected to the number of instances of a given platform. The success of a given piece of malware is also related to the relative proportion of a given platform in the overall computing environment. The modern computing environment is one of extreme consistency. The Intel platform has extreme dominance in hardware, and Microsoft has a near monopoly on the desktop. In addition, compatible application software (and the addition of functional programming capabilities in those applications) can mean that malware from one hardware and operating system environment can work perfectly well in another. The functionality added to application macro and script languages has given them the capability to either directly address computer hardware and resources or easily call upon utilities or processes that have such access. This means that objects previously considered to be data, and therefore immune to malicious programming, must now be checked for malicious functions or payloads. In addition, these languages are very simple to learn and use, and the various instances of malware carry their own source codes, in plaintext and sometimes commented, making it simple for individuals wanting to learn how to craft an attack to gather templates and examples of how to do so, without even knowing how the technology actually works. This expands the range of authors of such software enormously. In the modern computing environment, everything, including many supposedly isolated mainframes, is next to everything else. Where older Trojans relied on limited spread for as long as users on bulletin board systems could be fooled, and early-generation viruses required disk exchange via 588
Application Security sneakernet to spread, current versions of malware use network functions. For distribution, there can be e-mailing of executable content in attachments, compromise of active content on Web pages, and even direct attacks on server software. Attack payloads can attempt to compromise objects accessible via the net, can deny resource services by exhausting them, can corrupt publicly available data on Web sites, or spread plausible but misleading misinformation. Malware Types Viruses are not the only form of malicious software. Other forms include worms, Trojans, zombies, logic bombs, and hoaxes. Each of these has its own characteristics, which we will discuss below. Some forms of malware combine characteristics of more than one class, and it can be difficult to draw hard and fast distinctions with regard to individual examples or entities, but it can be important to keep the specific attributes in mind. It should be noted that we are increasingly seeing convergence in malware. Viruses and Trojans are being used to spread and plant RATs, and RATs are being used to install zombies. In some cases, hoax virus warnings are being used to spread viruses. Virus and Trojan payloads may contain logic bombs and data diddlers. Viruses. A computer virus is a program written with functions and intent to copy and disperse itself without the knowledge and cooperation of the owner or user of the computer. A final definition has not yet been agreed upon by all researchers. A common definition is “a program that modifies other programs to contain a possibly altered version of itself.” This definition is generally attributed to Fred Cohen from his seminal research in the mid-1980s, although Dr. Cohen’s actual definition is in mathematical form. The term computer virus was first defined by Dr. Cohen in his graduate thesis in 1984. Cohen credits a suggestion from his advisor, Leonard Adelman (of RSA fame), for the use of the term.
Cohen’s definition is specific to programs that attach themselves to other programs as their vector of infection. However, common usage now holds viruses to consist of a set of coded instructions that are designed to attach to an object capable of containing the material, without knowledgeable user intervention. This object may be an e-mail message, program file, document, floppy disk, CD-ROM, Short Message System (SMS) message on cellular telephones, or any similar information medium. A virus is defined by its ability to reproduce and spread. A virus is not just anything that goes wrong with a computer, and virus is not simply another name for malware. Trojan horse programs and logic bombs do not reproduce themselves. 589
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® A worm, which is sometimes seen as a specialized type of virus, is currently distinguished from a virus because a virus generally requires an action on the part of the user to trigger or aid reproduction and spread. The action on the part of the user is generally a common function, and the user generally does not realize the danger of the action, or the fact that he or she is assisting the virus. The only requirement that defines a program as a virus is that it reproduces. There is no necessity that the virus carries a payload, although a number of viruses do. In many cases (in most cases of successful viruses), the payload is limited to some kind of message. A deliberately damaging payload, such as erasure of the disk or system files, usually restricts the ability of the virus to spread, because the virus uses the resources of the host system. In some cases, a virus may carry a logic bomb or time bomb that triggers a damaging payload on a certain date or under a specific, often delayed, condition. Types of Viruses. There are a number of functionally different types of viruses, such as a file infector, boot sector infector (BSI), system infector, e-mail virus, multipartite, macro virus, or script virus. These terms do not necessarily indicate a strict division. A file infector may also be a system infector. A script virus that infects other script files may be considered a file infector, although this type of activity, while theoretically possible, is unusual in practice. There are also difficulties in drawing a hard distinction between macro and script viruses. FILE INFECTORS. A file infector infects program (object) files. System infectors that infect operating system program files (such as COMMAND.COM in DOS) are also file infectors. File infectors can attach to the front of the object file (prependers), attach to the back of the file and create a jump at the front of the file to the virus code (appenders), or overwrite the file or portions of it (overwriters). A classic is Jerusalem. A bug in early versions caused it to add itself over and over again to files, making the increase in file length detectable. (This has given rise to the persistent myth that it is characteristic of a virus to eventually fill up all disk space: by far, the majority of file infectors add minimally to file lengths.) BOOT SECTOR INFECTORS. Boot sector infectors (BSIs) attach to or replace the master boot record, system boot record, or other boot records and blocks on physical disks. (The structure of these blocks varies, but the first physical sector on a disk generally has some special significance in most operating systems, and usually is read and executed at some point in the boot process.) BSIs usually copy the existing boot sector to another unused sector, and then copy themselves into the physical first sector, ending with a call to the original programming. Examples are Brain, Stoned, and Michelangelo. 590
Application Security SYSTEM INFECTORS. System infector is a somewhat vague term. The phrase is often used to indicate viruses that infect operating system files, or boot sectors, in such a way that the virus is called at boot time and has or may have preemptive control over some functions of the operating system. (The Lehigh virus infected only COMMAND.COM on MS-DOS machines.) In other usage, a system infector modifies other system structures, such as the linking pointers in directory tables or the MS Windows system registry, in order to be called first when programs are invoked on the host computer. An example of directory table linking is the DIR virus family. Many e-mail viruses target the registry: MTX and Magistr can be very difficult to eradicate. COMPANION VIRUS. Some viral programs do not physically touch the target file at all. One method is quite simple, and may take advantage of precedence in the system. In MS-DOS, for example, when a command is given, the system checks first for internal commands, then .COM, .EXE, and .BAT files, in that order. .EXE files can be infected by writing a .COM file in the same directory with the same filename. This type of virus is most commonly known as a companion virus, although the term spawning virus is also used. E-MAIL VIRUS. An e-mail virus specifically, rather than accidentally, uses the e-mail system to spread. Although virus-infected files may be accidentally sent as e-mail attachments, e-mail viruses are aware of e-mail system functions. They generally target a specific type of e-mail system, harvest e-mail addresses from various sources, and may append copies of themselves to all e-mail sent, or may generate e-mail messages containing copies of themselves as attachments. Some e-mail viruses may monitor all network traffic and follow up legitimate messages with messages that they generate. Most e-mail viruses are technically considered to be worms, because they often do not infect other program files on the target computer, but this is not a hard and fast distinction. There are known examples of e-mail viruses that are file infectors, macro viruses, script viruses, and worms. Melissa, Loveletter, Hybris, and SirCam are all widespread current examples, and the CHRISTMA exec is an older example of the same type of activity.
E-mail viruses have made something of a change to the epidemiology of viruses. Traditionally, viruses took many months to spread, but stayed around for many years in the computing environment. Many e-mail viruses have become “fast burners” that can spread around the world, infecting hundreds of thousands or even millions of machines within hours. However, once characteristic indicators of these viruses become known, they die off almost immediately as users stop running the attachments. MULTIPARTITE. Originally the term multipartite was used to indicate a virus that was able to infect both boot sectors and program files. (This ability is 591
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® the origin of the alternate term dual infector.) Current usage tends to mean a virus that can infect more than one type of object, or that infects or reproduces in more than one way. Examples of traditional multipartites are Telefonica, One Half, and Junkie, but these programs have not been very successful. MACRO VIRUS. A macro virus uses macro programming of an application such as a word processor. (Most known macro viruses use Visual Basic for Applications in Microsoft Word: some are able to cross between applications and function in, for example, a PowerPoint presentation and a Word document, but this ability is rare.) Macro viruses infect data files and tend to remain resident in the application itself by infecting a configuration template such as MS Word’s NORMAL.DOT. Although macro viruses infect data files, they are not generally considered to be file infectors: a distinction is made between program and data files. Macro viruses can operate across hardware or operating system platforms as long as the required application platform is present. (For example, many MS Word macro viruses can operate on both the Windows and Macintosh versions of MS Word.) Examples are Concept and CAP. Melissa is also a macro virus, in addition to being an e-mail virus: it mailed itself around as an infected document. SCRIPT VIRUS. Script viruses are generally differentiated from macro viruses in that they are usually stand-alone files that can be executed by an interpreter, such as Microsoft’s Windows Script Host (.vbs files). A script virus file can be seen as a data file in that it is generally a simple text file, but it usually does not contain other data, and often has some indicator (such as the .vbs extension) that it is executable. Loveletter is a script virus. Worms. A worm reproduces and spreads, like a virus, and unlike other forms of malware. Worms are distinct from viruses, though they may have similar results. Most simply, a worm may be thought of as a virus with the capacity to propagate independently of user action. In other words, they do not rely on (usually) human-initiated transfer of data between systems for propagation, but instead spread across networks of their own accord, primarily by exploiting known vulnerabilities in common software.
Originally, the distinction was made that worms used networks and communications links to spread, and that a worm, unlike a virus, did not directly attach to an executable file. In early research into computer viruses, the terms worm and virus tended to be used synonymously, it being felt that the technical distinction was unimportant to most users. The first worm to garner significant attention was the Internet Worm of 1988. Recently, many of the most prolific virus infections have not been strictly viruses, but have used a combination of viral and worm techniques to spread more rapidly and effectively. LoveLetter was an example of this 592
Application Security convergence of reproductive technologies. Although infected e-mail attachments were perhaps the most widely publicized vector of infection, LoveLetter also spread by actively scanning attached network drives, infecting a variety of common file types. This convergence of technologies will be an increasing problem in the future. Code Red and a number of Linux programs (such as Lion) are modern examples of worms. (Nimda is an example of a worm, but it also spreads in a number of other ways, so it could be considered to be an e-mail virus and multipartite as well.) Hoaxes. Hoax virus warnings or alerts have an odd double relation to viruses. First, hoaxes are usually warnings about new viruses: new viruses that do not, of course, exist. Second, hoaxes generally carry a directive to the user to forward the warning to all addresses available to him. Thus, these descendants of chain letters form a kind of self-perpetuating spam.
Hoaxes use an odd kind of social engineering, relying on people’s naturally gregarious nature and desire to communicate, and on a sense of urgency and importance, using the ambition that people have to be the first to provide important new information. It is wisest, in the current environment, to doubt all virus warnings, unless they come from a known and historically accurate source, such as a vendor with a proven record of providing reliable and accurate virus alert information, or preferably an independent researcher or group. It is best to check any warnings received against known virus encyclopedia sites. It is best to check more than one such site: in the initial phases of a fast burner attack, some sites may not have had time to analyze samples to their own satisfaction, and the better sites will not post information they are not sure about. A recent example of a hoax, referring to SULFNBK.EXE, got a number of people to clear this legitimate utility off their machines. The origin was likely the fact that the Magistr virus targets Windows system software, and someone with an infection did not realize that the file is actually present on all Windows 98 systems. Thus, a new class of malicious hoax message has started to appear, attempting to make users actually cripple their own machines. Trojans. Trojans, or Trojan horse programs, are the largest class of malware, aside from viruses. However, use of the term is subject to much confusion, particularly in relation to computer viruses.
A Trojan is a program that pretends to do one thing while performing another, unwanted action. The extent of the pretense may vary greatly. Many of the early PC Trojans merely used the filename and a description on a bulletin board. Log-in Trojans, popular among university student mainframe users, mimicked the screen display and the prompts of the normal 593
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® log-in program and could in fact pass the username and password along to the valid log-in program at the same time as they stole the user data. Some Trojans may contain actual code that does what it is supposed to while performing additional nasty acts. Some data security writers consider a virus to simply be a specific example of the class of Trojan horse programs. There is some validity to this usage because a virus is an unknown quantity that is hidden and transmitted along with a legitimate disk or program, and any program can be turned into a Trojan by infecting it with a virus. However, the term virus more properly refers to the added, infectious code rather than the virus/target combination. Therefore, the term Trojan refers to a deliberately misleading or modified program that does not reproduce itself. An additional confusion with viruses involves Trojan horse programs that may be spread by e-mail. In years past, a Trojan program had to be posted on an electronic bulletin board system or a file archive site. Because of the static posting, a malicious program would soon be identified and eliminated. More recently, Trojan programs have been distributed by mass e-mail campaigns, by posting on Usenet newsgroup discussion groups, or through automated distribution agents (bots) on Internet Relay Chat (IRC) channels. Because source identification in these communications channels can be easily hidden, Trojan programs can be redistributed in a number of disguises, and specific identification of a malicious program has become much more difficult. Social Engineering. A major aspect of Trojan design is the social engineering component. Trojan programs are advertised (in some sense) as having a positive component. The term positive can be in dispute, because a great many Trojans promise pornography or access to pornography, and this still seems to be depressingly effective. However, other promises can be made as well. A recent e-mail virus, in generating its messages, carried a list of a huge variety of subject lines, promising pornography, humor, virus information, an antivirus program, and information about abuse of the recipient’s e-mail account. Sometimes the message is simply vague and relies on curiosity.
It is instructive to examine some classic social engineering techniques. Formalizing the problem makes it easier to move on to working toward effective solutions, making use of realistic, pragmatic policies. Effective implementation of such policies, however good they are, is not possible without a considered user education program and cooperation from management. Social engineering really is nothing more than a fancy name for the type of fraud and confidence games that have existed since snakes started selling apples. Security types tend to prefer a more academic-sounding defini594
Application Security tion, such as the use of nontechnical means to circumvent security policies and procedures. Social engineering can range from simple lying (such as a false description of the function of a file), to bullying and intimidation (to pressure a low-level employee into disclosing information), to association with a trusted source (such as the username from an infected machine), to dumpster diving (to find potentially valuable information people have carelessly discarded), to shoulder surfing (to find out personal identification numbers and passwords). A recent entry to the list of malicious entities aimed at computer users is the practice of phishing. Phishing attempts to get the user to provide information that will be useful for identity theft-type frauds. Although phishing messages frequently use Web sites and try to confuse the origin and ownership of those sites, very little programming, malicious or otherwise, is involved. Phishing is unadulterated social engineering or deception. Remote-Access Trojans (RATs). Remote-access Trojans are programs designed to be installed, usually remotely, after systems are installed and working (and not in development, as is the case with logic bombs and backdoors). Their authors would generally like to have the programs referred to as remote administration tools, to convey a sense of legitimacy.
All networking software can, in a sense, be considered remote-access tools: we have file transfer sites and clients, World Wide Web servers and browsers, and terminal emulation software that allows a microcomputer user to log on to a distant computer and use it as if he were on-site. The RATs considered to be in the malware camp tend to fall somewhere in the middle of the spectrum. Once a client, such as Back Orifice, Netbus, Bionet, or SubSeven, is installed on the target computer, the controlling computer is able to obtain information about the target computer. The master computer will be able to download files from and upload files to the target. The control computer will also be able to submit commands to the victim, which basically allows the distant operator to do pretty much anything to the prey. One other function is quite important: all of this activity goes on without any alert being given to the owner or operator of the targeted computer. When a RAT program has been run on a computer, it will install itself in such a way as to be active every time the computer is started subsequent to the installation. Information is sent back to the controlling computer (sometimes via an anonymous channel such as IRC) noting that the system is active. The user of the command computer is now able to explore the target, escalate access to other resources, and install other software, such as DDoS zombies, if so desired. Once more, it should be noted that remote-access tools are not viral. When the software is active, the master computer can submit commands to have the installation program sent on, via network transfer or e-mail, to 595
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® other machines. In addition, RATs can be installed as a payload from a virus or Trojan. Many RATs now operate in very specialized ways, making the affected computer part of a botnet (robot network). Botnets use large numbers of computers to perform functions such as distributing spam messages, increasing the number of messages that can be sent, and isolating the actual sender from the targets of the messages. Recently it has been demonstrated that certain viruses have carried RAT programming payloads to set up spam botnets, and that such spam botnets have also been used to seed the release of new viruses. Rootkits, containing software that can subvert or replace normal operating system software, have been around for some time. RATs differ from rootkits in that a working account must be either subverted or created on the target computer to use a rootkit. RATs, once installed by a virus or Trojan, do not require access to an account. DDoS Zombies. DDoS (distributed denial of service) is a modified denial-of-service (DoS) attack. Denial-of-service attacks do not attempt to destroy or corrupt data, but attempt to use up a computing resource to the point where normal work cannot proceed. The structure of a DDoS attack requires a master computer to control the attack, a target of the attack, and a number of computers in the middle that the master computer uses to generate the attack. These computers in between the master and the target are variously called agents or clients, but are usually referred to as running zombie programs. As you can see, DDoS is a specialized type of RAT or botnet.
Again, note that DDoS programs are not viral, but checking for zombie software both protects your own system and prevents attacks on others. Even if protecting others is not a priority, it is still in your best interest to ensure that no zombie programs are active: if your computers are used to launch an assault on some other system, you could be liable for damages. The efficacy of this attack platform was demonstrated in early 2000, when a teenage attacker paralyzed several prominent online businesses in quick succession, including Yahoo, Amazon, and eBay. DDoS is generally considered to be the first effective application of the botnet concept. Logic Bombs. Logic bombs are software modules set up to run in a quiescent state, but to monitor for a specific condition or set of conditions and to activate their payload when those conditions are met. A logic bomb is generally implanted in, or coded as part of, an application under development or maintenance; unlike a RAT or Trojan, it is difficult to implant a logic bomb after the fact. There are numerous examples of this type of activity, usually involving a programmer who arranged to deprive the company of needed resources if his or her employment was terminated.
Application Security A Trojan or a virus may contain a logic bomb as part of the payload. A logic bomb involves no reproduction and no social engineering. A variant on the concept of logic bombs involves what is known as the salami scam. The basic idea involves the siphoning off of small amounts of money (in some versions, fractions of a cent) credited to a specific account, over a large number of transactions. In most discussions of this type of activity, it is explained as the action of an individual, or small group, defrauding a corporation. However, a search of the RISKS-FORUM archives, for example, will find only one story about a fast food clerk who diddled the display on a drive-thru window and collected an extra dime or quarter from most customers. Other examples of the scheme are cited, but it is instructive to note that these narratives, in opposition to the classic salami scam anecdote, almost always are examples of fraudulent corporate activity, typically collecting improper amounts from customers. Spyware and Adware. It is extremely difficult to define which spyware and adware entities are malicious and which are legitimate marketing tools. Originally, many of the programs now known as spyware were intended to support the development of certain programs by providing advertising or marketing services. These were generally included with shareware, but were installed as a separate function or program that generated advertising screens or reported on user activities, such as other installed programs and user Web-surfing activities. Over time, a number of these programs became more and more intrusive, and frequently now have functions that will install without the user’s knowledge, and in the absence of any other utility being obtained. Pranks. Pranks are very much a part of the computer culture, so much so that you can now buy commercially produced joke packages that allow you to perform “stupid Mac (or PC or Windows) tricks.” There are numerous pranks available as shareware. Some make the computer appear to insult the user; some use sound effects or voices; some use special visual effects. A fairly common thread running through most pranks is that the computer is, in some way, nonfunctional. Many pretend to have detected some kind of fault in the computer (and some pretend to rectify such faults, of course making things worse). One entry in the virus field is PARASCAN, the paranoid scanner. It pretends to find large numbers of infected files, although it does not actually check for any infections.
Generally speaking, pranks that create some kind of announcement are not malware: viruses that generate a screen or audio display are actually quite rare. The distinction between jokes and Trojans is harder to make, but pranks are intended for amusement. Joke programs may, of course, result in a denial of service if people find the prank message frightening.
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® One specific type of joke is the easter egg, a function hidden in a program and generally accessible only by some arcane sequence of commands. These may be seen as harmless, but note that they do consume resources, even if only disk space, and also make the task of ensuring program integrity much more difficult. Malware Protection In almost any recent work on security, there will be a list of signs to watch for to determine a virus infection. Unfortunately, all such catalogs seem to have extremely limited utility. The characteristics mentioned tend to refer to older malware instances, and may also relate to a number of conditions that do not involve any malicious programming. Training and explicit policies can greatly reduce the danger to users. Some guidelines that can really help in the current environment are: • Do not double-click on attachments. • When sending attachments, provide a clear and specific description as to the content of the attachment. • Do not blindly use the most widely used products as a company standard. • Disable Windows Script Host, ActiveX, VBScript, and JavaScript. Do not send HTML-formatted e-mail. • Use more than one scanner, and scan everything. Whether these guidelines are acceptable in a specific environment is a business decision based upon the level of acceptable risk. But remember, whether risks are evaluated and whether policies are explicitly developed, every environment has a set of policies (some are explicit, while some are implicit) and every business accepts risk. The distinction is that some companies are aware of the risks that they choose to accept. All antivirus software is essentially reactive, that is, it exists only because viruses and other programmed threats existed first. It is common to distinguish between virus-specific scanning or known virus scanning (KVS) on the one hand and generic measures on the other. We prefer to consider the technological aspects of antivirus software in terms of three main approaches. Protective tools in the malware area are generally limited to antivirus software. To this day there are three major types, first discussed by Fred Cohen in his research: known signature scanning, activity monitoring, and change detection. These basic types of detection systems can be compared with the common intrusion detection system (IDS) types, although the correspondence is not exact. A scanner is like a signature-based IDS. An activity monitor is like a rule-based IDS or an anomaly-based IDS. A change detection system is like a statistical-based IDS. 598
Scanners. Scanners, also known as signature scanners or known virus scanners, look for search strings whose presence is characteristic of a known virus. They frequently have capabilities to remove the virus from an infected object. However, some objects cannot be repaired. Even where an object can be repaired, it is often preferable (in fact, safer) to replace the object rather than repair it, and some scanners are very selective about which objects they repair. Activity Monitors. An activity monitor performs a task very similar to an automated form of traditional auditing: it watches for suspicious activity. It may, for example, check for any calls to format a disk or attempts to alter or delete a program file while a program other than the operating system is in control. It may be more sophisticated and check for any program that performs direct activities with hardware, without using the standard system calls.
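To make the scanner concept concrete, the following is a minimal, hypothetical sketch of the signature-scanning approach described under Scanners above: it walks a directory tree and reports any file containing one of a small set of known byte patterns. The signature entries and the scanned directory are invented for illustration only; real products maintain very large signature databases and add wildcarding, positional matching, unpacking, and repair logic.

```python
import os

# Hypothetical signature database: name -> byte string characteristic of a known item.
# The first entry is a truncated fragment of the harmless EICAR test string; the
# second is entirely made up. Real scanners use many thousands of entries.
SIGNATURES = {
    "EICAR-Test-File": b"X5O!P%@AP[4\\PZX54(P^)7CC)7}$EICAR",
    "Example.Virus.A": b"\xde\xad\xbe\xef\x90\x90",
}

def scan_file(path):
    """Return the names of any signatures found in the file at 'path'."""
    try:
        with open(path, "rb") as f:
            data = f.read()
    except OSError:
        return []
    return [name for name, pattern in SIGNATURES.items() if pattern in data]

def scan_tree(root):
    """Walk a directory tree and report every file matching a known signature."""
    for dirpath, _dirs, files in os.walk(root):
        for filename in files:
            full_path = os.path.join(dirpath, filename)
            for name in scan_file(full_path):
                print(f"ALERT: {full_path} matches signature {name}")

if __name__ == "__main__":
    scan_tree(".")   # scan the current directory as a demonstration
```

The same structure also shows why short search strings are attractive for performance and why they risk false positives: the scanner deduces the presence of a virus from a small fragment of its code.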
It is very hard to tell the difference between a word processor updating a file and a virus infecting a file. Activity monitoring programs may be more trouble than they are worth because they can continually ask for confirmation of valid activities. The annals of computer virus research are littered with suggestions for virus-proof computers and systems that all boil down to the same thing: if the operations that a computer can perform are restricted, viral programs can be eliminated. Unfortunately, so is most of the usefulness of the computer. Heuristic Scanners. A recent addition to scanners is intelligent analysis of unknown code, currently referred to as heuristic scanning. Heuristic scanning does not represent a new type of antiviral software; it is more closely akin to activity monitoring than to traditional signature scanning, and it looks for suspicious sections of code of the kind generally found in viral programs. Although it is possible for normal programs to "go resident," look for other program files, or even modify their own code, such activities are telltale signs that can help an informed user decide whether it is advisable to run or install a given new and unknown program. Heuristics, however, may generate a lot of false alarms and may either scare novice users or give them a false sense of security after "wolf" has been cried too often. Change Detection. Change detection software examines system or program files and configuration, stores the information, and compares it against the actual configuration at a later time. Most of these programs perform a checksum or cyclic redundancy check (CRC) that will detect changes to a file even if the length is unchanged. Some programs use cryptographic techniques to generate a signature that is, if not absolutely immune to malicious attack, prohibitively expensive for a piece of malware to forge.
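The following sketch illustrates the change-detection approach just described, assuming a simple baseline file and using a cryptographic hash (SHA-256) rather than a plain CRC so that a change is detected even when the file length is unchanged. The baseline location and the scanned directory are arbitrary example choices; a production integrity checker would also protect the baseline itself and would cover boot records, memory, and other system areas, not just ordinary files.

```python
import hashlib
import json
import os

BASELINE_FILE = "baseline.json"   # where recorded hashes are stored (example name)

def hash_file(path):
    """Return the SHA-256 digest of a file's contents."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def snapshot(root):
    """Record a baseline: path -> hash for every file under 'root'."""
    baseline = {}
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            baseline[path] = hash_file(path)
    with open(BASELINE_FILE, "w") as f:
        json.dump(baseline, f, indent=2)

def compare(root):
    """Compare the current state of 'root' against the stored baseline."""
    with open(BASELINE_FILE) as f:
        baseline = json.load(f)
    current = {}
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            current[path] = hash_file(path)
    for path, old_hash in baseline.items():
        if path not in current:
            print(f"MISSING:  {path}")
        elif current[path] != old_hash:
            print(f"MODIFIED: {path}")
    for path in current:
        if path not in baseline:
            print(f"NEW:      {path}")   # additions matter too (e.g., dropped malware)
```

Note that the comparison reports new files as well as modified ones, which matters for the reason given in the next paragraph.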
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® Change detection software should also note the addition of completely new entities to a system. It has been noted that some programs have not done this and allowed the addition of virus infections or malware. Change detection software is also often referred to as integrity-checking software, but this term may be somewhat misleading. The integrity of a system may have been compromised before the establishment of the initial baseline of comparison. A sufficiently advanced change detection system, which takes all factors, including system areas of the disk and the computer memory, into account, has the best chance of detecting all current and future viral strains. However, change detection also has the highest probability of false alarms, because it will not know whether a change is viral or valid. The addition of intelligent analysis of the changes detected may assist with this failing. Antimalware Policies. Creating policies or educating users in safe practices can reduce the risk of becoming infected, even when a virus enters the organization. There are many possible preemptive measures, such as avoiding the use of applications that are particularly vulnerable and denying entry to mail attachments that are likely to be vectors for inbound viruses. Such measures can be very effective at addressing aspects of antivirus damage that reactive antivirus software does not deal with very well.
You can use access control software suites to minimize the possibility of a virus or Trojan gaining entry, by enforcing authentication of program files, disks, users, or any combination of the three. This approach is sometimes combined with virus-specific or generic scanning. Applying such a multilayered strategy can be much more effective than using only one of these approaches, but the strategy’s success in avoiding threats has to be balanced against the probable impairment of performance that multilayering entails. We should note a significant difference between access control as it is used in malware control and access control as it is often understood by systems administrators. Access control systems determine the appropriate allocation of access privileges to individuals and grant systems access to authenticated individuals. In other words, if the system recognizes an individual, he or she is allowed to use that system to the extent that the user’s privileges allow. Authenticating the individual is not enough in the malware arena, because viruses and worms are usually spread (unwittingly) by trusted individuals. Confirming the identity of the individual does not tell us anything about his or her good intentions, though we would usually hope that the human resources department has applied the appropriate checks. It tells us still less about the individual’s competence 600
Application Security at following security guidelines, or the currency and acuity of his or her antivirus measures. There is some software that the use of places you at higher risk of virus infection. This is a simple fact. As has been noted, the more widely an operating system is used, the more likely it is that someone has written a virus for it. The same is true for application platforms, such as e-mail programs or word processors. There are other factors that can increase or decrease risk. What you choose to use is, of course, up to you, but certain software designs are more dangerous than others. Specific strategic factors render Windows more vulnerable than it needs to be. Many users resent the restrictions that a highly secure environment imposes on the pursuit of business aims. Management often pays lip service to the importance of security in meetings and reports, but cuts corners on implementation. Computer users frequently resent the obtrusiveness of most security measures. (In this regard, it should also be noted that draconian security policies will frequently be ignored or circumvented. This will tend to diminish the enthusiasm for security measures overall, so harsh strategies should be avoided where possible.) The basic types of antivirus programs have a great many variations. You can run antiviral software as manual utilities (on demand) or set them up to be memory resident and scan automatically as potentially infected objects are accessed (on-access or real-time scanning). There are tens of thousands of PC viruses and variants known. When a scanner checks for all those viruses and variants, checking for every byte of viral code each time would impose a huge processing overhead. To keep this overhead to a minimum, scanners check for the shortest search strings they can afford and deduce the presence of a given virus accordingly. Scanners may apply a number of heuristics according to virus type. Therefore, on-access scanners, as well as those based on firewalls and network gateways, always have poorer detection capabilities than their on-demand, or manual, counterparts, and this difference sometimes accounts for as much as a 20 percent disparity in performance and accuracy. The memory resident and on-demand components of a modern antivirus suite may use the same definitions database and still not score identical results with the identical test set. Malware Assurance Have policies in place that will effectively protect against common malware and malware vectors, without unduly restricting operations. Explain to users the reasons for the control measures and the specific exploits that they protect against. Policies and education are your most useful protections against malware, regardless of malware scanning and restriction technologies. 601
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® For your technical antimalware systems, regularly review their effectiveness. If you use on-demand or server-based scanners, have regular check scans with a manual scanner in addition to the automated scanning. Note that disinfection is not always effective or possible, and have a policy to prefer deletion of malware and replacement of infected items from an uncompromised backup. Monitor activity, especially communications. Check for open ports and scan outgoing, as well as incoming, e-mail. This will not protect your system from infection, but it will provide a means of detecting various malware-related activity should some problem get past your defenses. The Database and Data Warehousing Environment Database systems have always been a major class of computer applications and have specific security requirements all their own. Indeed, some aspects of database security have proven quite intractable and still present unique challenges. In the early history of information systems, data processing occurred on stand-alone systems that used separate applications that contained their own sets of data files. As systems expanded and more applications were run on the same machine, redundant files were gathered. Several complexities and conflicts also arose, mainly the possibility of having duplicate information within each application contained on the same system. For example, an employee’s address might be duplicated in several application systems within the organization, once in the payroll system and again in the personnel system. This duplication of information not only wasted storage space, but also led to the possibility of inconsistency in the data. If an employee moved and notified payroll (to make sure the payroll check still arrived), only the database in payroll would be updated. If the personnel department needed to send something to the employee, the address contained within its application would not show the change. Another danger might occur if the personnel department saw the change in the payroll system, considered it to be an error, and overwrote the newer payroll data with data from the personnel files. To resolve the potential inconsistencies of having information replicated in several files on a system, databases were developed to incorporate the information from multiple sources. They are an attempt to integrate and manage the data required for several applications into a common storage area that will support an organization’s business needs. DBMS Architecture Organizations tend to collect data from many separate databases into one large database system, where it is available for viewing, updating, and pro602
Application Security cessing by either programs or users. A database management system (DBMS) is a suite of application programs that typically manage large structured sets of persistent data. It stores, maintains, and provides access to data using ad hoc query capabilities. The DBMS provides the structure for the data and some type of language for accessing and manipulating the data. The primary objective is to store data and allow users to view the data. DBMSs have transformed greatly since their introduction in the late 1960s. The earliest file access systems evolved into network databases in the 1970s. In the 1980s, relational databases became dominant. Most recently, in the 1990s, object-oriented databases have emerged. Because companies have become increasingly dependent upon the successful operation of the DBMS, it is anticipated that future demands will drive more innovations and product improvements. Typically, a DBMS has four major elements: the database engine itself, the hardware platform, application software (such as record input interfaces and prepared queries), and users. The database element is one (or more) large structured sets or tables of persistent data. Databases are usually associated with another element, the software that updates and queries the data. In a simple database, a single file may contain several records that contain the same set of fields and each field is a certain fixed width. The DBMS uses software programs that allow it to manage the large, structured sets of data and provide access to the data for multiple, concurrent users while at the same time maintaining the integrity of the data. The applications and data reside on hardware and are displayed to the user via some sort of display unit, like a monitor. The data consists of individual entities and entities with relationships linking them together. The mapping or organization of the data entities is based on a database model. The database model describes the relationship between the data elements and provides a framework for organizing the data. The data model is fundamental to the design because it provides a mechanism for representing the data and any correlations between the data. The database model should provide for: Transaction persistence: The state of the database is the same after a transaction (process) has occurred as it was prior to the transaction. Fault tolerance and recovery: In the event of a hardware or software failure, the data should remain in its original state. Two types of recovery systems available are rollback and shadowing. Rollback recovery is when incomplete or invalid transactions are backed out. Shadow recovery occurs when transactions are reapplied to a previous version of the database. Shadow recovery requires the use of transaction logging to identify the last good transaction. 603
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® Sharing by multiple users: The data should be available to multiple users at the same time without endangering the integrity of the data (i.e., locking of data). Security controls: Examples include access controls, integrity checking, and view definitions. DBMSs may operate on hardware that has been implemented to run only databases and often only specific database systems. This allows hardware designers to increase the number and speed of network connections, incorporate multiple processors and storage disks to increase the speed of searching for information, and also increase the amount of memory and cache. When an organization is designing a database, the first step is to understand the requirements for the database and then design a system that meets those requirements. This includes what information will be stored, who is allowed access, and estimating how many people will need to access the data at the same time. In most database developments, the database design is usually done by either a database design specialist or a combination of database administrators and software analysts. The database designers produce a schema that defines what and how the data is stored, how it relates to other data, and who can access, add, and modify the data. The data in a database can be structured in several different ways, depending upon the types of information stored. Different data storage techniques can exist on practically any machine level, from a PC to mainframe, and in various architectures, such as stand-alone, distributed, or client/server. Hierarchical Database Management Model. The hierarchical model is the oldest of the database models and is derived from the information management systems of the 1950s and 1960s. Even today, there are hierarchical legacy systems that are still being operated by banks, insurance companies, government agencies, and hospitals. This model stores data in a series of records that have field values attached. It collects all the instances of a specific record together as a record type. These record types are the equivalent of tables in the relational model, with the individual records being the equivalent of rows. To create links between the record types, the hierarchical model uses parent/child relationships through the use of trees. A weakness is that the hierarchical model is only able to cope with a single tree and is not able to link between branches or over multiple layers. For example, an organization could have several divisions and several subtrees that represent employees, facilities, and products. If an employee worked for several divisions, the hierarchical model would not be able to provide a link between the two divisions for one employee. The 604
Application Security hierarchical model is no longer used in current commercially available DBMS products; however, these models still exist in legacy systems. Network Database Management Model. The network database management model, introduced in 1971, is an extended form of the hierarchical data structure. It does not refer to the fact that the database is stored on the network, but rather to the method of how data is linked to other data. The network model represents its data in the form of a network of records and sets that are related to each other, forming a network of links. Records are sets of related data values and are the equivalent of rows in the relational model. They store the name of the record type, the attributes associated with it, and the format for these attributes. For example, an employee record type could contain the last name, first name, address, etc., of the employee. Record types are sets of records of the same type. These are the equivalent of tables in the relational model. Set types are the relationships between two record types, such as an organization’s division and the employees in that division. The set types allow the network model to run some queries faster; however, it does not offer the flexibility of a relational model. The network model is not commonly used today to design database systems; however, there are some legacy systems remaining. Relational Database Management Model. The majority of organizations use software based on the relational database management model. The relational database has become so dominant in database management systems that many people consider it to be the only form of database. (This may create problems when dealing with other table-oriented database systems that do not provide the integrity functions required in a true relational database.) The relational model is based on set theory and predicate logic and provides a high level of abstraction. The use of set theory allows data to be structured in a series of tables that have columns representing the variables and rows that contain specific instances of data. These tables are organized using normal forms. The relational model outlines how programmers should design the DBMS so that different database systems used by the organization can communicate with each other.
The basic relational model consists of three elements: • Data structures that are called either tables or relations • Integrity rules on allowable values and combinations of values in tables • Data manipulation agents that provide the relational mathematical basis and an assignment operator Each table or relation in the relational model consists of a set of attributes and a set of tuples or entries in the table. Attributes correspond to a column in a table. Attributes are unordered left to right, and thus are referenced by name and not position. All data values in the relational 605
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® model are atomic. Atomic values mean that at every row/column position in every table there is always exactly one data value and never a set of values. There are no links or pointers connecting tables; thus, the representation of relationships is contained as data in another table. A tuple of a table corresponds to a row in the table. Tuples are unordered top to bottom because a relation is a mathematical set and not a list. Also, because tuples are based on tables that are mathematical sets, there are no duplicate tuples in a table (sets in mathematics by definition do not include duplicate elements). The primary key is an attribute or set of attributes that uniquely identifies a specific instance of an entity. Each table in a database must have a primary key that is unique to that table. It is a subset of the candidate key. Any key that could be a primary key is called a candidate key. The candidate key is an attribute that is a unique identifier within a given table. One of the candidate keys is chosen to be the primary key, and the others are called alternate keys. Primary keys provide the sole tuple-level addressing mechanism within the relational model. They are the only guaranteed method of pinpointing an individual tuple; therefore, they are fundamental to the operation of the overall relational model. Because they are critical to the relational model, the primary keys cannot contain a null value and cannot change or become null during the life of each entity. When the primary key of one relation is used as an attribute in another relation, it is the foreign key in that relation. The foreign key in a relational model is different from the primary key. The foreign key value represents a reference to an entry in some other table. If an attribute (value) in one table matches those of the primary key of some other relation, it is considered the foreign key. The link (or matches) between the foreign and primary keys represents the relationships between tuples. Thus, the matches represent references and allow one table to be referenced to another table. The primary key and foreign key links are the binding factor that holds the database together. Foreign keys also provide a method for maintaining referential integrity in the data and for navigating between different instances of an entity. Integrity Constraints in Relational Databases. To solve the problems of concurrency and security within a database, the database must provide some integrity. The user’s program may carry out many operations on the data retrieved from the database, but the DBMS is only concerned about what data is read/written from or to the database — the transaction. Users submit transactions and view each transaction as occurring by itself. Concurrency occurs when the DBMS interleaves actions (reads/writes of database objects) of various transactions. For concurrency to be secure, each trans-
Application Security action must leave the database in a consistent state if the database is consistent when the transaction begins. The DBMS does not really understand the semantics of the data, that is, it does not understand how an operation on data occurs, such as when interest on a bank account is computed. A transaction might commit after completing all its actions, or it could abort (or be aborted by the DBMS) after executing some actions. A very important property guaranteed by the DBMS for all transactions is that they are atomic. Atomic implies that a user can think of X as always executing all its actions in one step, or not executing any actions at all. To help with concurrency, the DBMS logs all actions so that it can undo the actions of aborted transactions. The security issues of concurrency can occur if several users who are attempting to query data from the database interfere with each other’s requests. The two integrity rules of the relational model are entity integrity and referential integrity. The two rules apply to every relational model and focus on the primary and foreign keys. These rules actually derive from the Clark and Wilson integrity model discussed in the security architecture and design domain. In the entity integrity model, the tuple must have a unique and nonnull value in the primary key. This guarantees that the tuple is uniquely identified by the primary key value. The referential integrity model states that for any foreign key value, the referenced relation must have a tuple with the same value for its primary key. Essentially, every table relation or join must be accomplished by coincidence of the primary keys or of a primary key and the foreign key that is the primary key of the other table. Each table participating in the join must demonstrate entity integrity and in the referenced relation must have a similar primary key/foreign key relationship. Another example of the loss of referential integrity is to assign a tuple to a nonexistent attribute. If this occurs, the tuple could not be referenced, and with no attribute, it would be impossible to know what it represented. Structured Query Language (SQL). The relational model also has several standardized languages. One is called the Structured Query Language (SQL), in which users may issue commands. An advantage of having a standard language is that organizations can switch between different vendor systems without having to rewrite all of its software or retrain staff.
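Because these integrity rules are normally declared in SQL itself, a brief, hypothetical sketch may help. It uses Python's built-in sqlite3 module and invented table names; any relational DBMS that enforces constraints behaves similarly. The PRIMARY KEY constraint implements entity integrity, and the FOREIGN KEY constraint implements referential integrity by rejecting a row that references a nonexistent parent.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")   # SQLite enforces FOREIGN KEY only when enabled

# Entity integrity: every division has a unique, non-null primary key.
conn.execute("CREATE TABLE division (div_id INTEGER PRIMARY KEY, name TEXT NOT NULL)")

# Referential integrity: employee.div_id must match an existing division.div_id.
conn.execute("""CREATE TABLE employee (
                    emp_id INTEGER PRIMARY KEY,
                    name   TEXT NOT NULL,
                    div_id INTEGER NOT NULL REFERENCES division(div_id))""")

conn.execute("INSERT INTO division VALUES (10, 'Payroll')")
conn.execute("INSERT INTO employee VALUES (1, 'A. Smith', 10)")   # accepted: division 10 exists

try:
    conn.execute("INSERT INTO employee VALUES (2, 'B. Jones', 99)")  # division 99 does not exist
except sqlite3.IntegrityError as err:
    print("Rejected by referential integrity:", err)

try:
    conn.execute("INSERT INTO division VALUES (10, 'Duplicate')")    # duplicate primary key
except sqlite3.IntegrityError as err:
    print("Rejected by entity integrity:", err)
```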
SQL was developed by IBM and is an International Standards Organization (ISO) and American National Standards Institute (ANSI) standard. (ANSI is a private, nonprofit organization that administers and coordinates the U.S. voluntary standardization and conformity assessment system.) Because SQL is a standard, the commands for most systems are similar. There are several different types of queries, such as those for predesigned 607
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® reports (included in applications) and ad hoc queries (usually done by database experts). The main components of a database using SQL are: Schemas: Describes the structure of the database, including any access controls limiting how the users will view the information contained in the tables. Tables: The columns and rows of the data are contained in tables. Views: Defines what information a user can view in the tables — the view can be customized so that an entire table may be visible or a user may be limited to only being able to see just a row or a column. Views are created dynamically by the system for each user and provide access control granularity. The simplicity of SQL is achieved by giving the users a high-level view of the data. A view is a feature that allows for virtual tables in a database; these virtual tables are created from one or more real tables in the database. For example, a view can be set up for each user (or group of users) on the system so that the user can then only view those virtual tables (or views). In addition, access can be restricted so that only rows or columns are visible in the view. The value of views is to have control over what users can see. For example, we can allow users to see their information in an employee database, but not the other employee salaries unless they have sufficient authorization. This view removes many of the technical aspects of the system from the users, and instead places the technical burden on the DBMS software applications. As an example, assume that all employees in the personnel department have the same boss, the director of personnel. To avoid repeating the data for each employee, this type of data would be stored in a separate table. This saves storage space and reduces the time it would take for queries to execute. SQL actually consists of three sublanguages. The data definition language (DDL) is used to create databases, tables, views, and indices (keys) specifying the links between tables. Because it is administrative in nature, users of SQL rarely utilize DDL commands. DDL also has nothing to do with the population of use of the database, which is accomplished by data manipulation language (DML), used to query and extract data, insert new records, delete old records, and update existing records. System and database administrators utilize data control language (DCL) to control access to data. It provides the security aspects of SQL and is therefore our primary area of concern. Some of the DCL commands are COMMIT (saves work that has been done), SAVEPOINT (identifies a location in a transaction to which you can later roll back, if necessary), ROLLBACK (restores the database to its state at the last COMMIT), 608
Application Security and SET TRANSACTION (changes transaction options such as what rollback segment to use). There are other scripting and query languages that can be used in similar ways to create database interface applications that rely on an underlying database engine for function. Object-Oriented Database Model. The object-oriented (OO) database model is one of the most recent database models. Similar to OO programming languages, the OO database model stores data as objects. The OO objects are a collection of public and private data items and the set of operations that can be executed on the data. Because the data objects contain their own operations, any call to data potentially has the full range of database functions available. The object-oriented model does not necessarily require a high-level language like SQL, because the function (or methods) are contained within the objects. An advantage of not having a query language allows the OO DBMS to interact with applications without the language overhead.
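Returning briefly to the view mechanism described above, the following hypothetical sketch (again using Python's sqlite3 module and invented names) shows how a view can expose only selected columns of a table, providing the access-control granularity discussed earlier. SQLite itself has no GRANT statement, so the closing comment showing how a server DBMS would restrict users to the view is an assumption about typical DCL usage rather than runnable code here.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE employee (
                    emp_id INTEGER PRIMARY KEY,
                    name   TEXT,
                    dept   TEXT,
                    salary REAL)""")
conn.execute("INSERT INTO employee VALUES (1, 'A. Smith', 'Payroll', 52000)")
conn.execute("INSERT INTO employee VALUES (2, 'B. Jones', 'Personnel', 61000)")

# The view exposes only non-sensitive columns; salary is not visible through it.
conn.execute("""CREATE VIEW employee_directory AS
                SELECT emp_id, name, dept FROM employee""")

for row in conn.execute("SELECT * FROM employee_directory"):
    print(row)          # (1, 'A. Smith', 'Payroll') ... no salary column

# In a server DBMS, ordinary users would be granted access to the view only, e.g.:
#   GRANT SELECT ON employee_directory TO clerk_role;   -- DCL, not supported by SQLite
```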
Relational models are starting to add object-oriented functions and interfaces to create an object-relational model. An object-relational database system is a hybrid: a relational DBMS with an object-oriented interface built on top of the original software. This can be accomplished either through a separate interface or by adding commands to the current system. The hybrid model allows organizations to maintain their current relational database software and, at the same time, provides an upgrade path for future technologies. Database Interface Languages The existence of legacy databases has made managing new database access requirements a difficult challenge. To provide an interface that bridges newer systems and legacy systems, several standardized access methods have evolved, such as Open Database Connectivity (ODBC), Object Linking and Embedding Database (OLE DB), ActiveX Data Objects (ADO), Java Database Connectivity (JDBC), and eXtensible Markup Language (XML). These methods provide a gateway to the data contained in the legacy systems as well as the newer systems. Open Database Connectivity (ODBC). ODBC is the dominant means of standardized data access. It is a standard developed and maintained by Microsoft, and almost all database vendors use it as an interface method to allow an application to communicate with a database either locally or remotely over a network. It is an application programming interface (API) that provides a connection between applications and databases, designed so that applications can connect to databases without having to use vendor-specific commands and features.
ODBC commands are used in application programs, which translate them into the commands required by the specific database system. This allows programs to be linked between DBMSs with a minimum of code changes. It allows users to specify which database is being used, and can be easily updated as new database technologies enter the market. ODBC is a powerful tool; however, because it operates as a system entity, it can be exploited. The following are issues with ODBC security:
• The username and password for the database are stored in plaintext. To prevent disclosure of this information, the files should be protected. For example, if an HTML document was calling an ODBC data source, the HTML source must be protected to ensure that the username and password in plaintext cannot be read. (The HTML should call a common gateway interface (CGI) that has the authentication details, because HTML can be viewed in a browser.)
• The actual call and the returned data are sent as cleartext over the network.
• Verification of the access level of the user using the ODBC application may be substandard.
• Calling applications must be checked to ensure they do not attempt to combine data from multiple data sources, thus allowing data aggregation.
• Calling applications must be checked to ensure they do not attempt to exploit the ODBC drivers and gain elevated system access.
Java Database Connectivity (JDBC). JDBC is an API from Sun Microsystems used to connect Java programs to databases. It is used to connect a Java program to a database either directly or by connecting through ODBC, depending on whether the database vendor has created the necessary drivers for Java.
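As a small illustration of the plaintext-credential issue listed above for ODBC, the following hypothetical sketch assumes the third-party pyodbc package and an already-configured data source name (DSN) called payroll; it reads the username and password from environment variables rather than embedding them in source files or HTML where they could be read. This addresses only credential disclosure; the other issues in the list (cleartext network traffic, access-level verification, aggregation, and driver abuse) require controls outside the calling application.

```python
import os
import pyodbc   # assumption: third-party package, plus a configured ODBC driver and DSN

# Credentials come from the environment (or a secrets store), not from source code,
# HTML, or a world-readable configuration file.
user = os.environ["PAYROLL_DB_USER"]
password = os.environ["PAYROLL_DB_PASSWORD"]

conn = pyodbc.connect(f"DSN=payroll;UID={user};PWD={password}")

cursor = conn.cursor()
cursor.execute("SELECT emp_id, name FROM employee WHERE dept = ?", ("Payroll",))
for row in cursor.fetchall():
    print(row)
conn.close()
```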
Regardless of the interface used to connect the user to the database, security items to consider include how and where the user will be authenticated, how user access will be controlled, and how user actions will be audited. eXtensible Markup Language (XML). XML is a World Wide Web Consortium (W3C) standard for structuring data in a text file so that both the data and its format can be shared on intranets and the Web. A markup language, such as the Hypertext Markup Language (HTML), is simply a system of symbols and rules that identify structures (format) in a document. XML is called extensible because the symbols are not limited to a fixed set and can be defined by the user or author. XML can represent data in a neutral format that is independent of the database, the application, and the underlying DBMS.
XML became a W3C standard in 1998, and many believe it will become the de facto standard for integrating data and content during the next few years. It offers the ability to exchange data and bridge different technologies, such as object models and programming languages. Because of this advantage, XML is expected to transform the data and documents of current DBMSs and data access standards (e.g., ODBC, JDBC) by Web-enabling these standards and providing a common data format. Another, and probably more important, advantage is the ability to create one underlying XML document and display it in a variety of ways and on a variety of devices. The Wireless Markup Language (WML) is an example of an XML-based language that delivers content to devices such as cell phones, pagers, and personal digital assistants (PDAs). As with any of the other programs used to make database interface calls, XML applications must also be reviewed for how authentication of users is established, how access controls are implemented, how auditing of user actions is implemented and stored, and how confidentiality of sensitive data is achieved.
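As a short illustration of XML as a neutral, DBMS-independent data format, the following sketch uses Python's standard xml.etree.ElementTree module to serialize a single database row into XML and parse it back; the element and field names are invented for the example.

```python
import xml.etree.ElementTree as ET

# Represent one employee record in a neutral format, independent of any DBMS.
record = {"emp_id": "1", "name": "A. Smith", "dept": "Payroll"}

employee = ET.Element("employee")
for field, value in record.items():
    ET.SubElement(employee, field).text = value

xml_text = ET.tostring(employee, encoding="unicode")
print(xml_text)   # <employee><emp_id>1</emp_id><name>A. Smith</name><dept>Payroll</dept></employee>

# Any other application or platform can parse the same document back into data.
parsed = ET.fromstring(xml_text)
restored = {child.tag: child.text for child in parsed}
print(restored)
```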
Microsoft Web site support defines OLE as allowing users to share a single source of data for a particular object. The document contains the name of the file containing the data, along with a picture of the data. When the source is updated, all the documents using the data are updated as well. On the other hand, with object embedding, one application (the source) provides data or an image that will be contained in the document of another application (the destination). The destination application contains the data or graphic image, but does not understand it or have the ability to edit it. It simply displays, prints, or plays the embedded item. To edit or update the embedded object, it must be opened in the source application that created it. This occurs automatically when you double-click the item or choose the appropriate edit command while the object is highlighted.
OLE DB is a low-level interface designed by Microsoft to link data across various DBMSs. It is an open specification that is designed to build on the success of ODBC by providing an open standard for accessing all kinds of data. It enables organizations to easily take advantage of information contained not only in data within a DBMS, but also when accessing data from other types of data sources. (Note, however, that because it is based on OLE, OLE DB is restricted to Windows interface applications.)
Essentially, the OLE DB interfaces are designed to provide access to all data, regardless of type, format, or location. For example, in some enterprise environments, the organization’s critical information is located outside of traditional production databases, and instead is stored in containers such as Microsoft Access, spreadsheets, project management planners, or Web applications. The OLE DB interfaces are based on the Component Object Model (COM), and they provide applications with uniform access to data regardless of the information source. The OLE DB separates the data into interoperable components that can run as middleware on a client or server across a wide variety of applications. The OLE DB architecture provides for components such as direct data access interfaces, query engines, cursor engines, optimizers, business rules, and transaction managers. When developing databases and determining how data may be linked through applications, whether through an ODBC interface or an OLE DB interface, security must be considered during the development stage. If OLE DB is considered, there are optional OLE DB interfaces that can be implemented to support the administration of security information. OLE DB interfaces allow for authenticated and authorized access to data among components and applications. The OLE DB can provide a unified view of the security mechanisms that are supported by the operating system and the database components. Accessing Databases through the Internet. Many database developers are supporting the use of the Internet and corporate intranets to allow users to access the centralized back-end servers. Several types of application programming interfaces (APIs) can be used to connect the end-user applications to the back-end database. Although we will highlight a couple of APIs that are available, ActiveX Data Objects (ADO) and Java Database Connectivity (JDBC), there are several security issues about any of the API technologies that must be reviewed. These include authentication of users, authorizations of users, encryption, protection of the data from unauthorized entry, accountability and auditing, and availability of current data.
One approach for Internet access is to create a tiered application approach that manages data in layers. There can be any number of layers; however, the most typical architecture is to use a three-tier approach: presentation layer, business logic layer, and data layer. This is sometimes referred to as the Internet computing model because the browser is used to connect to an application server that then connects to a database. Depending on the implementation, it can be good or bad for security. The tier approach can add to security because the users do not connect directly to the data. Instead, they are connecting to a middle layer, the business logic layer, that connects directly to the database on behalf of the
users. The downside for security is that if the database provides security features, they may be lost in the translation through the middle layer. Thus, when looking at providing security, it is important to analyze not only how the security features are implemented, but also where they are implemented and how the configuration of the application with the back-end database affects the security features. Additional concerns for security are user authentication, user access control, auditing of user actions, protecting data as it travels between the tiers, managing identities across the tiers, scalability of the system, and setting privileges for the different tiers. ActiveX Data Objects (ADO). ADO is a Microsoft high-level interface for all kinds of data. It can be used to create a front-end database client or a middle-tier business object using an application, tool, or Internet browser. Developers can simplify the development of OLE DB by using ADO.
Objects can be the building blocks of Java, JavaScript, Visual Basic, and other object-oriented languages. By using common and reusable data access components (Component Object Model (COM)), different applications can access all data regardless of data location or data format. ADO can support typical client/server applications, HTML tables, spreadsheets, and mail engine information. Note that many security professionals are concerned about the use of ActiveX, because there are no configurable restrictions on its access to the underlying system. Data Warehousing A data warehouse is a repository for information collected from a variety of data sources. Because of the compilation of information from many sources, data warehouses eliminate the organization’s original information structures and access controls to enable sharing of that information to more levels of employees. The data stored in a data warehouse is not used for operational tasks, but rather for analytical purposes. The data warehouse combines all of the data from various databases into one large data container. Because the data is collected into one central location for analysis, instead of several smaller databases, the combined data can be used by executives to make business decisions. A current term associated with data warehouses is data marts. Data marts are smaller versions of data warehouses. While a data warehouse is meant to contain all of an organization’s information, a data mart may contain the information from just a division or only about a specific topic. In most instances, the creation of a data mart is less time-consuming, and thus the data can be available for analysis sooner than if a data warehouse was created.
The following tasks illustrate a simplified process of building a data warehouse:
• Feed all data into a large, high-availability, and high-integrity database that resides at the confidentiality level of the most sensitive data.
• Normalize the data. Regardless of how the data is characterized in each system, it must be structured the same when moved into the data warehouse. For example, one database could categorize birth date as “month/date/year,” another as “date/month/year,” and still another as “year/month/date.” The data warehouse must normalize the various data categories into only one category. Normalization will also remove redundancies in the data.
• Mine the data for correlations to produce metadata.
• Sanitize and export the metadata (the results of analysis of the data) to its intended users.
• Feed all new incoming data and the metadata into the data warehouse.
In traditional database administration, rules and policies are implemented to ensure the confidentiality and integrity of the database, such as defining user views and setting access permissions. Security is even more critical for data warehouses. Rules and policies must be in place to control access to the data. This includes items such as defining the user groups and the type of data each group can access and outlining the user’s security responsibilities and procedures. Another danger of data warehouses is that if the physical or logical security perimeter of the database servers were breached, an unauthorized user could gain access to all of the organization’s data. In addition to confidentiality controls, security for the data also includes the integrity and availability of the information. For example, if the data warehouse were accidentally or intentionally destroyed, a valuable repository of the organization’s historical and compiled data would also be destroyed. To avoid such a total loss, appropriate plans for backups must be implemented and maintained, as well as recovery options for hardware and software applications. Metadata. The information about the data, called metadata (literally data about data or knowledge about data), provides a systematic method for describing resources and improving the retrieval of information. The objective is to help users search through a wide range of sources with better precision. It includes the data associated with either an information system or an information object for the purposes of description, administration, legal requirements, technical functionality, usage, and preservation.
It is considered the key component for exploiting and using a data warehouse. Metadata is useful because it provides:
• Valuable information about the unseen relationships between data
• The ability to correlate data that was previously considered unrelated
• The keys to unlocking critical or highly important data inside the data warehouse
Note that the data warehouse is usually at the highest classification level possible. However, users of the metadata are usually not at that level, and therefore, any data that should not be publicly available must be removed from the metadata. Generally this involves abstracting the correlations, but not the underlying data that the correlations came from. The Dublin Core metadata element set was developed during the first metadata workshop in Dublin, OH, in 1995 and 1996. It was a response to the need to improve retrieval of information resources, especially on the Web. It continues to be developed by an international working group as a generic metadata standard for use by libraries, archives, governments, and publishers of online information. The Dublin Core standard has received widespread acceptance among the electronic information community and has become the de facto Internet metadata standard. The Dublin Core Web site posts several proposals that are open for comment and review from the community. A current security proposal that the Dublin Core metadata group is working on is for access controls. The need is stated as follows: A user, particularly in a government information situation, may be looking specifically for items only available to a particular user group, or denied to a user group. Another user may find a reference to a resource through searching, be unable to access the resource itself, yet still be able to see who can.
The proposal states that security classification and access rights are not the same. Security classification deals with any official security stamp to give a particular status to the resource. Only some resources will have such a stamp. Access rights do not need official stamps and can be used more loosely for the handling of the resource; e.g., a resource marked “public” in a content management system can be published, and a resource marked “not public” will not be, although metadata about the resource could be published. The nature of the two qualifiers is different, but the values could be related, e.g., if the security classification is “top secret,” then access rights should contain a value reflecting this. The difference between access rights and audience is that audience contains values stating which segment of the user group the information in the resource is created for. Access rights state which user group has permission
to access the resource; it does not say anything about the content (which audience does). The proposed solution: “For full implementation of this refinement, a namespace is needed. Inclusion in DC will mean the availability of a practical, usable namespace.” For further information, refer to the Dublin Core metadata Web site. Data contained in a data warehouse is typically accessed through front-end analysis tools such as online analytical processing (OLAP) or knowledge discovery in databases (KDD) methods (which will be discussed in more detail in the “Knowledge Management” section). Online Analytical Processing (OLAP). OLAP technologies provide an analyst with the ability to formulate queries and, based on the outcome of the queries, define further queries. The analyst can collect information by roaming through the data. The collected information is then presented to management. Because the data analyst interprets aspects of the data, the data analyst should possess in-depth knowledge about the organization and also what type of knowledge the organization needs to adequately retrieve information that can be useful for decision making.
For example, a retail chain may have several locations that locally capture product sales. If management decided to review data on a specific promotional item without a data warehouse, there would be no easy method of capturing sales for all stores on the one item. However, a data warehouse could effectively combine the data from each store into one central repository. The analyst could then query the data warehouse for specific information on the promotional item and present the results to those people in management who are responsible for promotional items. Data Mining. In addition to OLAP, data mining is another process (or tool) of discovering information in data warehouses by running queries on the data. A large repository of data is required to perform data mining. Data mining is used to reveal hidden relationships, patterns, and trends in the data warehouse. Data mining is a decision-making technique that is based on a series of analytical techniques taken from the fields of mathematics, statistics, cybernetics, and genetics. The techniques are used independently and in cooperation with one another to uncover information from data warehouses.
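As a hedged illustration of the retail scenario just described, the sketch below runs a single aggregate query against a consolidated warehouse table to report sales of one promotional item by store. The table and column names (sales_fact, item_id, store_id, amount) are assumptions for illustration only, not a schema defined in this text.

```java
// Minimal sketch of an analytical query against a data warehouse over JDBC.
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class PromoItemReport {
    public static void report(Connection warehouse, int promoItemId) throws SQLException {
        String sql = "SELECT store_id, SUM(amount) AS total_sales " +
                     "FROM sales_fact WHERE item_id = ? " +
                     "GROUP BY store_id ORDER BY total_sales DESC";
        try (PreparedStatement stmt = warehouse.prepareStatement(sql)) {
            stmt.setInt(1, promoItemId);              // the promotional item under review
            try (ResultSet rs = stmt.executeQuery()) {
                while (rs.next()) {
                    System.out.printf("store %d: %.2f%n",
                            rs.getInt("store_id"), rs.getDouble("total_sales"));
                }
            }
        }
    }
}
```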
There are several advantages to using data-mining techniques, including the ability to provide better information to managers that outlines the organization’s trends, its customers, and the competitive marketplace for its industry. There are also disadvantages, especially for security. The detailed data about individuals obtained by data mining might risk a violation of privacy. The danger increases when private information is stored on
the Web or an unprotected area of the network, and thus becomes available to unauthorized users. In addition, the integrity of the data may be at risk. Because a large amount of data must be collected, the chance of errors through human data entry may result in inaccurate relationships or patterns. These errors are also referred to as data contamination. One positive security function of data mining is to use the tools to review audit logs for intrusion attempts. Because audit logs usually contain thousands of entries, data-mining tools can help to discover abnormal events by drilling down into the data for specific trends or unusual behaviors. Information system security officers can use a data-mining tool in a testing environment to try to view unauthorized data. For example, testers could log in with the rights assigned to a general user, then use a data-mining tool to access various levels of data. If, during this test, they were able to successfully view sensitive or unauthorized data, appropriate security controls, such as limiting views, could be implemented. Data mining is still an evolving technology; thus, standards and procedures need to be formalized so that organizations will be able to use their data for a variety of business decisions and uses. The challenge will be to meet the business need and yet still comply with security requirements that will protect the data from unauthorized users. Database Vulnerabilities and Threats One of the primary concerns for the DBMS is the confidentiality of sensitive data. A major concern for most people is that many databases contain health and financial information, both of which are protected by privacy laws in many countries. Another primary concern for the DBMS is enforcing the controls to ensure the continued integrity of the data. A compromise of data integrity through an invalid input or an incorrect definition could jeopardize the entire viability of the database. In such an instance, the work required to restore the database or manually write queries to correct the data could have a serious impact on operations. The threats to a DBMS include:
• Aggregation: The ability to combine nonsensitive data from separate sources to create sensitive information. For example, a user takes two or more unclassified pieces of data and combines them to form a classified piece of data that then becomes unauthorized for that user. Thus, the combined data sensitivity can be greater than the classification of individual parts. For years, mathematicians have been struggling unsuccessfully with the problem of determining when the aggregation of data results in data at a higher classification.
• Bypass attacks: Users attempt to bypass controls at the front end of the database application to access information. If the query engine
contains security controls, the engine may have complete access to the information; thus, users may try to bypass the query engine and directly access and manipulate the data.
• Compromising database views used for access control: A view restricts the data a user can see or request from a database. One of the threats is that users may try to access restricted views or modify an existing view. Another problem with view-based access control is the difficulty in verifying how the software performs the view processing. Because all objects must have a security label identifying the sensitivity of the information in the database, the software used to classify the information must also have a mechanism to verify the sensitivity of the information. Combining this with a query language adds even more complexity. Also, the view just limits the data the user sees; it does not limit the operations that may be performed on the views. An additional problem is that the layered model frequently used in database interface design may provide multiple alternative routes to the same data, not all of which may be protected. A given user may be able to access information through the view provided, through a direct query to the database itself, or even via direct system access to the underlying data files. Further, any standard views set up for security controls must be carefully prepared in terms of the granularity of the control. Views can restrict access to information down to a field, and even content-based, level, and modifications to these restrictions can significantly change the degree of material provided.
• Concurrency: When actions or processes run at the same time, they are said to be concurrent. Problems with concurrency include running processes that use old data, updates that are inconsistent, or having a deadlock occur.
• Data contamination: The corruption of data integrity by input data errors or erroneous processing. This can occur in a file, report, or a database.
• Deadlocking: Occurs when two users try to access the information at the same time and both are denied. In a database, deadlocking occurs when two user processes have locks on separate objects and each process is trying to acquire a lock on the object that the other process has. (Deadlock is also sometimes known as deadly embrace.) When this happens, the database should end the deadlock by automatically choosing and aborting one process, allowing the other process to continue. The aborted transaction is rolled back and an error message is sent to the user or the aborted process. Generally, the transaction that requires the least amount of overhead to roll back is the transaction that is aborted.
• Denial of service: Any type of attack or actions that could prevent authorized users from gaining access to the information. Often this
can happen through a poorly designed application or query that locks up the table and requires intensive processing (such as a table scan where every row in the table must be examined to return the requested data to the calling application). This can be partially prevented by limiting the number of rows of data returned from any one query.
• Improper modification of information: Unauthorized or authorized users may intentionally or accidentally modify information incorrectly.
• Inference: The ability to deduce (infer) information from observing available information. Essentially, users may be able to determine unauthorized information from what information they can access and may never need to directly access unauthorized data. For example, if a user is reviewing authorized information about patients, such as the medications they have been prescribed, the user may be able to determine the illness. Inference is one of the hardest threats to control.
• Interception of data: If dial-up or some other type of remote access is allowed, the threat of interception of the session and modification of the data in transit must be controlled.
• Query attacks: Users try to use query tools to access data not normally allowed by the trusted front end (e.g., those views controlled by the query application). Elsewhere we have noted the possibility of malformed queries using SQL or Unicode in such a way as to bypass security controls; there are many other instances where improper or incomplete checks on query or submission parameters can be used in a similar way to bypass access controls.
• Server access: The server where the database resides must be protected not only from unauthorized logical access, but also from unauthorized physical access to prevent the disabling of logical controls.
• Time of check/time of use (TOC/TOU): TOC/TOU can also occur in databases. An example is when some type of malicious code or privileged access could change data between the time that a user’s query was approved and the time the data is displayed to the user.
• Web security: Many DBMSs allow access to data through Web technologies. Static Web pages (HTML or XML files) are methods of displaying data stored on a server. One method is when an application queries information from the database and the HTML page displays the data. Another is through dynamic Web pages that are stored on the Web server with a template for the query and HTML display code, but no actual data is stored. When the Web page is accessed, the query is dynamically created and executed and the information is displayed within the HTML display. If the source for the page is viewed, all information, including restricted data, may be visible. Providing security control includes measures for protecting against unauthorized access during a log-in process, protecting the information
while it is transferred from the server to the Web server, and protecting the information from being stored on or downloaded to the user’s machine.
• Unauthorized access: Allowing the release of information either intentionally or accidentally to unauthorized users.
DBMS Controls The future of the database environment is becoming more technically complex. Organizations must find solutions to easily and quickly support their end users’ requirements. This includes user-friendly interfaces to access data stored in different DBMSs, from many different locations, and on a variety of platforms. Additionally, users want to manipulate the data from their own workstation using their own software tools and then transmit updates to other locations in the network environment. In addition, it is depressing to note that many of the most significant problems specific to the database environment, such as aggregation and inference attacks, have proven intractable to solutions. Database security is a very specific, and sometimes esoteric, field of study. The challenge for both the security and database managers is to retain control over the organization’s data and ensure business rules are consistently applied when core data is accessed or manipulated. The DBMS provides security controls in a variety of forms — both to prevent unauthorized access and to prevent authorized users from accessing data simultaneously or accidentally or intentionally overwriting information. As a first line of security to prevent unauthorized users from accessing the system, the DBMS should use identification, authentication, authorization, and other forms of access controls. Most databases have some type of log-on and password authentication control that limits access to tables in the database based on a user account. Another initial step is to assign permissions to the authorized users, such as the ability to read, write, update, query, and delete data in the database. Typically, there are fewer users with add or update privileges than users with read and query privileges. For example, in an organization’s personnel database, general users would be allowed to change their own mailing address, office number, etc., but only personnel officers would be allowed to change an employee’s job title or salary. We will concentrate on these basic controls, as they do provide effective protection against the most common threats in the database environment. We will not examine the more advanced levels of database security research in this book, because those topics are generally beyond the requirements for the CISSP.
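A minimal sketch of such permission assignments appears below, modeled loosely on the personnel database example above. The role, table, and column names are assumptions, and exact GRANT syntax (particularly column-level grants) varies between DBMS products, so treat this as an illustration rather than a portable recipe.

```java
// Hedged sketch of assigning least-privilege permissions from an administrative connection.
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

public class PrivilegeSetup {
    public static void assignPrivileges(Connection adminConn) throws SQLException {
        try (Statement stmt = adminConn.createStatement()) {
            // General users may read records and update only the contact columns.
            // (Restricting updates to a user's own row would need a view or application logic.)
            stmt.executeUpdate(
                "GRANT SELECT, UPDATE (mailing_address, office_number) " +
                "ON personnel TO general_user");
            // Personnel officers may additionally change job title and salary.
            stmt.executeUpdate(
                "GRANT SELECT, UPDATE (job_title, salary) ON personnel TO personnel_officer");
        }
    }
}
```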
Lock Controls. The DBMS can control who is able to read and write data through the use of locks. Locks are used for read and write access to specific rows of data in relational systems, or objects in object-oriented systems.
In a multiuser system, if two or more people wish to modify a piece of data at the same time, a deadlock occurs. A deadlock is when two transactions try to access the same resource; however, the resource cannot handle two requests simultaneously without an integrity problem. The system will not release the resource to either transaction, thereby refusing to process both of the transactions. To prevent a deadlock in which no one can access the data, the access controls lock part of the data so that only one user at a time can access it. Lock controls can also be more granular, so that locking can be accomplished by table, row or record, or even field. By using locks, only one user at a time can perform an action on the data. For example, in an airline reservation system, there may be two requests to book the last remaining seat on the airplane. If the DBMS allowed more than one user (or process) to write information to a row at the same time, then both transactions could occur simultaneously. To prevent this, the DBMS takes both transactions and gives one transaction a write lock on the account. Once the first transaction has finished, it releases its lock; the other transaction, which has been held in a queue, can then acquire the lock and perform its action or, in this example, be denied the action. These and related requirements are known as the ACID test, which stands for atomicity, consistency, isolation, and durability.
• Atomicity is when all the parts of a transaction’s execution are either all committed or all rolled back — do it all or not at all. Essentially, all changes take effect, or none do. Atomicity ensures there is no erroneous data in the system or data that does not correspond to other data as it should.
• Consistency occurs when the database is transformed from one valid state to another valid state. A transaction is allowed only if it follows user-defined integrity constraints. Illegal transactions are not allowed, and if an integrity constraint cannot be satisfied, the transaction is rolled back to its previously valid state and the user is informed that the transaction has failed.
• Isolation is the process guaranteeing the results of a transaction are invisible to other transactions until the transaction is complete.
• Durability ensures the results of a completed transaction are permanent and can survive future system and media failures, that is, once they are done, they cannot be undone.
For access control, the relational and object-oriented database models use either discretionary access control (DAC) or mandatory access control (MAC). Refer to the access control chapter, Domain 2, for more information.
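The sketch below shows one way the airline-seat scenario above can be expressed with JDBC so that the booking is atomic and the row is locked for the duration of the transaction. Table and column names are assumptions; SELECT ... FOR UPDATE is widely supported but not universal, so this is an illustration of the locking and rollback ideas rather than a definitive implementation.

```java
// Hedged sketch: the transaction either completes entirely or rolls back (atomicity),
// and the row lock makes a concurrent booking wait until this transaction ends.
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class SeatBooking {
    public static boolean bookSeat(Connection conn, int flightId, String seat, int passengerId)
            throws SQLException {
        conn.setAutoCommit(false);                       // start an explicit transaction
        try (PreparedStatement lock = conn.prepareStatement(
                 "SELECT status FROM seats WHERE flight_id = ? AND seat_no = ? FOR UPDATE");
             PreparedStatement update = conn.prepareStatement(
                 "UPDATE seats SET status = 'BOOKED', passenger_id = ? " +
                 "WHERE flight_id = ? AND seat_no = ?")) {
            lock.setInt(1, flightId);
            lock.setString(2, seat);
            try (ResultSet rs = lock.executeQuery()) {
                if (!rs.next() || !"FREE".equals(rs.getString("status"))) {
                    conn.rollback();                     // seat already taken: deny the action
                    return false;
                }
            }
            update.setInt(1, passengerId);
            update.setInt(2, flightId);
            update.setString(3, seat);
            update.executeUpdate();
            conn.commit();                               // all steps succeed together
            return true;
        } catch (SQLException e) {
            conn.rollback();                             // any failure undoes every step
            throw e;
        }
    }
}
```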
Other DBMS Access Controls. Security for databases can be implemented either at the user level, by restricting the operations (views) available to a user, or by placing permissions on each individual data item or, in an object-oriented database, on each object. Objects can be tables, views of tables, and the columns in those tables or views. For example, in the SQL 92 standard, rights to objects can be individually assigned. However, not all databases provide this capability as outlined in SQL 92. The types of actions available in SQL include select (allows the reading of data), insert (allows adding new data to a table), delete (allows removing data from a table), and update (allows changing data in a table). Thus, it is possible to grant a specific user a set of actions on a particular table or object. View-Based Access Controls. In some DBMSs, security can be achieved through the appropriate use and manipulation of views. A trusted front end is built to control assignment of views to users. View-based access control allows the database to be logically divided into pieces that allow sensitive data to be hidden from unauthorized users. It is important that controls are in place so that a user cannot bypass the front end and directly access and manipulate the data.
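A hedged sketch of this approach follows: a view exposes only selected columns and rows, and users receive privileges on the view rather than on the base table. The object names are assumptions and syntax details differ between products.

```java
// Minimal sketch of view-based access control set up from an administrative connection.
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

public class ViewSetup {
    public static void createRestrictedView(Connection adminConn) throws SQLException {
        try (Statement stmt = adminConn.createStatement()) {
            stmt.executeUpdate(
                "CREATE VIEW staff_directory AS " +
                "SELECT last_name, first_name, office_number " +   // no salary or other sensitive columns
                "FROM personnel WHERE employment_status = 'ACTIVE'");
            stmt.executeUpdate("GRANT SELECT ON staff_directory TO general_user");
            // No privileges are granted on the personnel base table itself,
            // so users cannot bypass the view with a direct query.
        }
    }
}
```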
The database manager can set up a view for each type of user, and then each user can only access the view that is assigned to that user. Some database views allow the restriction of both rows and columns, while others allow for views that read-write (not just read-only). Grant and Revoke Access Controls. Grant and revoke statements allow users who have “grant authority” permission to grant permission and revoke permission to other users. In a grant and revoke system, if a user is granted permission without the grant option, the user should not be able to pass grant authority to other users. This is, in a sense, a modification of discretionary access control. However, the security risk is that a user granted access, but not grant authority, could make a complete copy of the relation and subvert the system. Because the user, who is not the owner, created a copy, the user (now the owner of the copy) could provide grant authority over the copy to other users, leading to unauthorized users being able to access the same information contained in the original relation. Although the copy is not updated with the original relation, the user making the copy could continue making similar copies of the relation, and continue to provide the same data to other users.
The revoke statement functions like the grant statement. One of the characteristics of the revoke statement is its cascading effect. When the rights previously granted to a user are subsequently revoked, all similar rights are revoked for all users who may have been granted access by the newly revoked user.
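The following sketch illustrates grant authority and a cascading revoke. Account and table names are assumptions, and whether a revoke cascades implicitly or requires an explicit CASCADE keyword depends on the DBMS, so the statements below should be read as an illustration of the concept.

```java
// Hedged sketch of grant and revoke with grant authority.
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

public class GrantRevokeExample {
    public static void delegateAndWithdraw(Connection adminConn) throws SQLException {
        try (Statement stmt = adminConn.createStatement()) {
            // Alice may read the table and may pass that right to others.
            stmt.executeUpdate("GRANT SELECT ON projects TO alice WITH GRANT OPTION");
            // Bob may only read; without the grant option he cannot delegate.
            stmt.executeUpdate("GRANT SELECT ON projects TO bob");
            // Later, withdrawing Alice's right also withdraws any grants she made.
            stmt.executeUpdate("REVOKE SELECT ON projects FROM alice CASCADE");
        }
    }
}
```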
Security for Object-Oriented (OO) Databases. Most of the models for securing databases have been designed for relational databases. Because of the complexity of object-oriented databases, the security models for object-oriented databases are also more complex. Adding to this complexity, the views of the object-oriented model differ; therefore, each security model has to make some assumptions about the object-oriented model used for its particular database.
Because several models for secure object-oriented databases have been proposed, we briefly mention a few. If you will be working with database security in your profession, it is recommended that you review books specifically related to database security. Keep in mind that the security models differ in their capabilities and protections. The differences are based on how the security problem is defined, how a secure database is defined, and what is the basis of the object-oriented model. Metadata Controls. In addition to facilitating the effective retrieving of information, metadata can also manage restricted access to information. Metadata can serve as a gatekeeper function to filter access and thus provide security controls. Data Contamination Controls. To ensure the integrity of data, there are two types of controls: input and output controls. Input controls consist of transaction counts, dollar counts, hash totals, error detection, error correction, resubmission, self-checking digits, control totals, and label processing. Output controls include the validity of transactions through reconciliation, physical-handling procedures, authorization controls, verification with expected results, and audit trails. Online Transaction Processing (OLTP). OLTP is designed to record all of the business transactions of an organization as they occur. It is a data processing system facilitating and managing transaction-oriented applications. These are characterized as a system used by many concurrent users who are actively adding and modifying data to effectively change real-time data. OLTP environments are frequently found in the finance, telecommunications, insurance, retail, transportation, and travel industries. For example, airline ticket agents enter data in the database in real-time by creating and modifying travel reservations.
The security concerns for OLTP systems are concurrency and atomicity. Concurrency controls ensure that two users cannot simultaneously change the same data, or that one user cannot make changes before another user is finished with it. In an airline ticket system, it is critical for an agent processing a reservation to complete the transaction, especially if it is the last seat available on the plane. Atomicity ensures that all of the steps involved in the transaction complete successfully. If one step should
fail, then the other steps should not be able to complete. Again, in an airline ticketing system, if the agent does not enter a name into the name data field correctly, the transaction should not be able to complete. OLTP systems should act as a monitoring system and detect when individual processes abort, automatically restart an aborted process, back out of a transaction if necessary, allow distribution of multiple copies of application servers across machines, and perform dynamic load balancing. A security feature uses transaction logs to record information on a transaction before it is processed, and then mark it as processed after it is done. If the system fails during the transaction, the transaction can be recovered by reviewing the transaction logs. Checkpoint restart is the process of using the transaction logs to restart the machine by running through the log to the last checkpoint or good transaction. All transactions following the last checkpoint are applied before allowing users to access the data again. Knowledge Management Knowledge management involves several existing research areas tied together by their common application environment, that is, the enterprise. Some topics listed under the knowledge management category are workflow management, business process modeling, document management, databases and information systems, knowledge-based systems, and several methodologies to model diverse aspects relevant to the knowledge in an enterprise environment. A key feature of knowledge management is application of artificial intelligence techniques to decision support. A key term for knowledge management is corporate memory or organizational memory, because knowledge management systems frequently make use of data warehousing. The memory serves for storing the enterprise knowledge that has to be managed. Corporate memory contains several kinds of information stored in databases, including employee knowledge, lists of customers, suppliers, and products, and specific documents relating to the organization. Essentially, it is all of the information, data, and knowledge about an organization that can be obtained from several different sources. For data to be helpful, it must have meaning. The interpretation of the data into meaning requires knowledge. This knowledge is an integral aspect of interpreting the data. When an organization attempts to understand the raw data from various sources, it can have a knowledgeable employee attempt to interpret the data into some meaning for the organization. To automate this process, knowledge-based systems (KBSs), which apply problem-solving methods for inference, are used. In the first case, the
user (the knowledgeable employee) knows or learns something, whereas in the KBS, the system contains the knowledge. Knowledge discovery in databases (KDD) is a mathematical, statistical, and visualization method of identifying valid and useful patterns in data. It is an evolving field of study to provide automated analysis solutions. The knowledge discovery process takes the data from data mining and accurately transforms it into useful and understandable information. This information is usually not retrievable through standard retrieval techniques, but is uncovered through the use of artificial intelligence (AI) techniques. There are many approaches to KDD. A probabilistic method uses graphical representation models to compare different knowledge representations. The models are based on probabilities and data independencies. The probabilistic models are useful for applications involving uncertainty, such as those used in planning and control systems. A statistical approach uses rule discovery and is based on data relationships. A learning algorithm can automatically select useful data relationship paths and attributes. These paths and attributes are then used to construct rules for discovering meaningful information. This approach is used to generalize patterns in the data and to construct rules from the noted patterns. An example of the statistical approach is OLAP. Classification groups data according to similarities. One example is a pattern discovery and data-cleaning model that reduces a large database to only a few specific records. By eliminating redundant and nonimportant data, the discovery of patterns in the data is simplified. Deviation and trend analysis uses filtering techniques to detect patterns. An example is an intrusion detection system that filters a large volume of data so that only the pertinent data is analyzed. Neural networks are specific AI methods used to develop classification, regression, association, and segmentation models based on the way neurons work in the human brain. A neural net method organizes data into nodes that are arranged in layers, and links between the nodes have specific weighting classifications. The neural net is helpful in detecting the associations among the input patterns or relationships. It is also considered a learning system because new information is automatically incorporated into the system. However, the value and relevance of the decisions made by the neural network are only as good as the experience it is given. The greater the experience, the better the decision. Note that neural nets have a specific problem in terms of an individual’s ability to substantiate processing in that they (the neural nets) are subject to superstitious knowledge, which is a tendency to identify relations when no relations actually exist. More sophisticated neural nets are less subject to this problem. The expert system uses a knowledge base (a collection of all the data, or knowledge, on a particular matter) and a set of algorithms or rules that infer new facts from knowledge and incoming data. The knowledge base
could be the human experience that is available in an organization. Because the system reacts to a set of rules, if the rules are faulty, the response will also be faulty. Also, because human decision is removed from the point of action, if an error were to occur, the reaction time from a human would be longer. As always, a hybrid approach could combine more than one system, which provides a more powerful and useful system. Security controls include:
• Protecting the knowledge base as you would any database.
• Routinely verifying the decisions based on what outcomes are expected from specific inputs.
• If using a rule-based approach, changes to the rules must go through a change control process.
• If the data output seems suspicious or out of the ordinary, perform additional and different queries to verify the information.
• Making risk management decisions because decisions that are based on data warehouse analysis techniques may be incorrect.
• Developing a baseline of expected performance from the analytical tool.
Web Application Environment We have briefly noted, in other examples, some of the ways in which Web applications can provide unauthorized access to data, but there are many other examples of insecurity in the Web environment. Web sites are prime targets for attacks these days. For one thing, Web pages are the most visible part of the enterprise, because they are designed to be seen from the outside. Therefore, they attract vandals, who delight in the manifest defacement of a public Web site. Even if the Web pages are not modified, it is likely that the invader can create a denial-of-service attack. Because Web sites are also the primary interface for E-commerce, there is also the potential for fraud, or even outright theft. In some cases, this may simply be access to information or resources that should have a charge associated with their use, but some situations may allow attackers to order goods without payment, or even transfer funds. In some cases, transaction data is kept on the Web server, thus allowing the attacker direct access to information that may contain details about either the activities of the company or customer particulars, such as credit card numbers. Because Web-based systems are tied to production or internal systems, for ease of maintenance, access to database information, or transaction processing, Web sites may also offer a vector for intrusion into the private networks themselves. If the Web server machine can be compromised, it offers the attacker a semitrusted platform from which to mount probes or other activities. Again, such access may provide the interloper with intelligence
about corporate sales and projects, but can also provide an avenue to the enterprise’s proprietary intellectual property. Most attacks are conducted at the application level, either against the Web server application itself, in-house scripts, or common front-end applications used for E-commerce. The pace of change is quite rapid for this type of software, and quality checks do not always uncover vulnerabilities and security problems. Therefore, attacks on the application software are much more likely to succeed than attacks on the underlying platforms. (Once the application has been breached, an attack on the operating system is generally also possible.) There are additional factors common to Web sites that make them vulnerable. For one thing, Web sites are designed to be widely accessible and are frequently heavily advertised as well. Therefore, a very large number of people will have information about the site’s addresses. Web server software does make provisions for logging of traffic, but many administrators either turn off logging altogether or reduce the logging to minimal levels. The standard security tools of firewalls and intrusion detection systems can be applied, but are not particularly well suited to protecting such public sites. In the case of firewalls, a Web site must have a standard port or ports open for requests to be made. Intrusion detection systems must be very carefully tuned and maintained to provide any useful information out of a flood of data: Web sites will see all kinds of traffic, from all kinds of sites, requesting connections, Web pages, submitting form information, or even updating search engine facts. Web Application Threats and Protection In essence, Web applications are subject to all of the threats and protection mechanisms discussed elsewhere. However, Web applications are specifically vulnerable because of their accessibility. Therefore, additional efforts and precautions should be taken with this type of programming and implementation. Specific protections that may be helpful include having a particular assurance sign-off process for Web servers, hardening the operating system used on such servers (removing default configurations and accounts, configuring permissions and privileges correctly, and keeping up to date with vendor patches), extending Web and network vulnerability scans prior to deployment, passively assessing intrusion detection system (IDS) and advanced intrusion prevention system (IPS) technology, using application proxy firewalls, and disabling any unnecessary documentation and libraries. In regard to administrative interfaces, ensure that they are removed or secured appropriately. Only allow access from authorized hosts or networks,
and then use strong (possibly multifactor) user authentication. Do not hard code the authentication credentials into the application itself, and ensure the security of the credentials. Use account lockout and extended logging and audit, and protect all authentication traffic with encryption. Ensure that the interface is at least as secure as the rest of the application, and most often secure it at a higher level. Because of the accessibility of Web systems and applications, input validation is critical. Application proxy firewalls are appropriate in this regard, but ensure that the proxies are able to deal with problems of buffer overflows, authentication issues, scripting, submission of commands to the underlying platform (which includes issues related to database engines, such as SQL commands), encoding issues (such as Unicode), and URL encoding and translation. In particular, the proxy firewall may have to address issues of data submission to in-house and custom software, ensuring validation of input to those systems. (This level of protection will have to be custom programmed for the application.) In regard to sessions, remember that HTTP (Hypertext Transfer Protocol) is a stateless technology, and, therefore, periods of apparent attachment to the server are controlled by other technologies, such as cookies or URL data, which must be both protected and validated. If using cookies, always encrypt them. You may wish to have time validation included in the session data. Do not use sequential, calculable, or predictable cookies, session numbers, or URL data for these purposes: use random and unique indicators. Again, protection for Web applications is the same as for other programming. Use the same protections: validate all input and output, fail secure (closed), make your application or system as simple as possible, use secure network design, and use defense in depth. Specific points to consider in a Web system are not to cache secure pages, confirm that all encryption used meets industry standards, monitor your code vendors for security alerts, log any and all critical transactions and milestones, handle exceptions properly, do not trust any data from the client, and do not automatically trust data from other servers, partners, or other parts of the application. Summary When implementing security in an application program environment, it is important to consider security throughout the entire life-cycle process, especially in the conceptual, requirements, and design phases. In their book Building Secure Software, John Viega and Gary McGraw provide the following ten items as a 90/10 strategy: you can avoid 90 percent of the potential problems by following these ten guidelines. This list is a
good guide for security overall, so keep it in mind when approaching system development and acquisition.
1. Secure the weakest link.
2. Practice defense in depth.
3. Fail securely.
4. Follow the principle of least privilege.
5. Compartmentalize.
6. Keep it simple.
7. Promote privacy.
8. Remember that hiding secrets is hard.
9. Be reluctant to trust.
10. Use your community resources.
References
Ross Anderson, Security Engineering, Wiley, New York, 2001.
Scott Barman, Writing Information Security Policies, New Riders Publishing, Indianapolis, IN, 2001.
John Barnes, High Integrity Software, Prentice Hall, Englewood Cliffs, NJ, 2003.
Bob Blakley, CORBA Security: An Introduction to Safe Computing with Objects, Addison-Wesley, Reading, MA, 2000.
Frederick P. Brooks, The Mythical Man-Month: Essays on Software Engineering, anniversary edition, Addison-Wesley, Reading, MA, 1995.
Brian Carrier, File System Forensic Analysis, Addison-Wesley, Reading, MA, 2005.
Rod Dixon, Open Source Software Law, Artech House, Norwood, MA, 2004.
Eldad Eilam, Reversing, Wiley, New York, 2005.
Simson Garfinkel with Gene Spafford, Web Security and Commerce, O’Reilly & Associates, Sebastopol, CA, 1997.
Greg Hoglund and Gary McGraw, Exploiting Software, Addison-Wesley, Reading, MA, 2004.
Gary McGraw and Ed Felten, Securing Java: Getting Down to Business with Mobile Code, Wiley, New York, 1999.
Jose Nazario, Defense and Detection Strategies against Internet Worms, Artech House, Norwood, MA, 2004.
Scott Oaks, Java Security, O’Reilly & Associates, Sebastopol, CA, 1998.
Aviel D. Rubin, Daniel Geer, and Marcus J. Ranum, Web Security Sourcebook: A Complete Guide to Web Security Threats and Solutions, Wiley, New York, 1997.
Robert M. Slade, Software Forensics, McGraw-Hill, New York, 2004.
Robert M. Slade et al., Viruses Revealed, McGraw-Hill, New York, 2001.
Ian Sommerville, Software Engineering, 6th ed., Addison-Wesley, Reading, MA, 2000.
Peter Szor, The Art of Computer Virus Research and Defense, Addison-Wesley, Reading, MA, 2005.
Harold F. Tipton and Micki Krause, Eds., Information Security Management Handbook, Auerbach, Boca Raton, FL, multiple editions and years.
John Viega and Gary McGraw, Building Secure Software: How to Avoid Security Problems the Right Way, Addison-Wesley, Reading, MA, 2002.
Adam L. Young and Moti Yung, Malicious Cryptography, Wiley, New York, 2004.
Sample Questions 1. If a database is protected from modification using only symmetric encryption, someone may still be able to mount an attack by:
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK®
2.
3.
4.
5.
6.
7.
630
a. Moving blocks of data such that a field belonging to one person is assigned to another b. Changing the encryption key so that a collison occurs c. Using the public key instead of the private key d. Arranging to intercept the public key in transit and replace it with his own Why cannot outside programs determine the existence of malicious code with 100 percent accuracy? a. Users do not update their scanners frequently enough. b. Firewalls are not intended to detect malicious code. c. The purpose of a string depends upon the context in which it is interpreted. d. The sourced code language is often unknown. Format string vulnerabilities in programs can be found by: a. Forcing buffer overflows b. Submitting random long strings to the application c. Causing underflow problems d. Including string specifiers in input data Files temporarily created by applications can expose confidential data if: a. Special characters are not used in the filename to keep the file hidden b. The existence of the file exceeds three seconds c. File permissions are not set appropriately d. Special characters indicating this is a system file are not used in the filename. The three structural parts of a virus are: a. Malicious payload, message payload, and benign payload b. Infection, payload, and trigger c. Self-replication, file attachment, and payload d. Replication, destructive payload, and triggering condition An application that uses dynamic link libraries can be forced to execute malicious code, even without replacing the target .dll file, by exploiting: a. Registry settings b. The library search order c. Buffer overflows d. Library input validation flaws In terms of databases, cryptography can: a. Only restrict and reduce availability b. Improve availability by allowing data to be easily placed where authorized users can access it c. Improve availability by increasing granularity of access controls d. Neither reduce nor improve availability
8. Proprietary protocols and data formats:
a. Are unsafe because they typically rely on security by obscurity
b. Are safe because buffer overflows cannot be effectively determined by random submission of data
c. Are insecure because vendors do not test them
d. Are secure because of encryption
9. Integrating cryptography into applications may lead to:
a. Increased stability as the programs are protected against viral attack
b. Enhanced reliability as users can no longer modify source code
c. Reduced breaches of policy due to disclosure of information
d. Possible denial of service if the keys are corrupted
Domain 9
Operations Security Sean M. Price, CISSP
Introduction Operations security is primarily concerned with the protection and control of information processing assets in centralized and distributed environments. The security service of availability is the core goal for operations security. There are a number of processes and techniques that can be implemented to ensure that a system can maintain the desired availability when faced with threats that impact operations. This chapter discusses the concepts and techniques a security practitioner will need to implement to satisfy the availability requirements of a given system. This topic is divided into the following sections:
• Privileged entity controls
• Resource protection
• Continuity of operations
• Change control management
Privileged Entity Controls This section discusses the assignment of privileges to various classes of system accounts. Operators, system administrators, service accounts, and security administrators have different functions and services. The assignment of privileges among the accounts should follow the concepts of least privilege and separation of duties. Ordinary user accounts, which are given minimal system privileges, are also discussed. Operators System operators represent a class of users typically found in data center environments where mainframe systems are used. Operators have elevated privileges, but less than those of system administrators. Many of the privileges assigned can allow circumvention of the security policy. Use of these privileges should be monitored through audit logs. Some of the privileges and responsibilities assigned to operators include:
• Implementing the initial program load. This is used to start the operating system.
• Monitoring execution of the system. Operators respond to various events, to include errors, interruptions, and job completion messages.
• Volume mounting. This allows the desired application access to the system and its data.
• Controlling job flow. Operators can initiate, pause, or terminate programs.
• Bypass label processing. This allows the operator to bypass security label information to run foreign tapes (foreign tapes are those from a different data center that do not use the same label format as the local system). This privilege should be strictly controlled to prevent unauthorized access.
• Renaming and relabeling resources. This is sometimes necessary in the mainframe environment to allow programs to properly execute. Use of this privilege should be monitored, as it can allow the unauthorized viewing of sensitive information.
• Reassignment of ports and lines. Abuse of this privilege can result in a loss of confidentiality of sensitive information.

Ordinary Users

Ordinary users, in contrast with system operators, should be assigned restrictive system privileges. They are only allowed access to applications that in turn have only those operating system privileges necessary to run. This type of user typically participates in systems based on the client/server architecture. The concept of least privilege should be used to protect the system from intentional and unintentional damage or misuse.

The boot process or initial program load of a system is a critical time for ensuring system security. Interruptions to this process may reduce the integrity of the system or cause the system to crash, precluding its availability. Because users have physical access to the system during this period, measures must be implemented to protect the start-up process. Users should be prevented from altering the boot process or initial program load for a system.

Ordinary users should not be provided the capability or have access to tools to monitor system execution. Users with this level of access may be able to circumvent system security controls. System monitoring privileges include the ability to debug applications or the system. Skillful users may be able to circumvent hard-coded system security policies with the use of debugging tools. Likewise, if a Trojan horse is running in the context of the user, it may be possible for the malware to elevate its privileges using debugging techniques.

Controlling job flow involves the manipulation of configuration information needed by the system. Users with the ability to control a job or
application can cause output to be altered or diverted, which can threaten the confidentiality security property.

Application and system services utilize resources according to the known label. Users with the ability to bypass label processing can access sensitive data that they are unauthorized to see or run programs they are not allowed to run.

System applications rely on the use of resource labels for availability purposes. Application and system configuration information defines specific resource locations according to the applicable label. Allowing users access to change labels on system resources can result in a denial-of-service situation.

Users should not be allowed to reassign ports or lines, as this can cause program errors, such as sending sensitive output to an unsecured location. Furthermore, a port may be opened incidentally, creating a vulnerability that exposes the system to attack.

System Administrators

Administrators are trusted with managing system operations and maintenance. These individuals are assigned to ensure that the system is functioning properly for system users. The two primary administrator activities are maintenance and monitoring. System components requiring regular maintenance and monitoring include workstations, servers, network devices, databases, and applications. Each of these components requires various levels of recurring maintenance to ensure continued operations.

Workstations must be properly configured to prevent the user from accidentally or intentionally modifying the system. System administrator privileges should only be granted to appropriately trained and authorized individuals. In many environments, the workstation participates in a domain security architecture that emulates a trusted computing base (TCB) design. The aggregate of system components constitutes a domain security model. In such a design, the security of the whole is dependent on the security of the one. In other words, the domain security model can be negatively affected if participating workstations introduce vulnerabilities into the system as a whole.

Servers generally require more stringent security controls than workstations. Domain security policies, sensitive information, databases, and specialized services reside on servers and therefore require additional protection measures beyond those of workstations. For example, servers typically store user data and run enterprise applications such as a database management system. In this case, stringent file-level security settings
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® and system backups are imperative to ensure that the security services of confidentiality, integrity, and availability are controlled. Specialized applications, such as database management systems, can be considered systems unto themselves. They contain various internal accounts with rights and privileges similar to what is found in the typical client/server environment. Sometimes an administrator is dedicated to the task of database management as the database administrator (DBA). The operational control concepts expressed for ordinary system administrators are also applicable to individuals assigned administrative duties for specialized applications. Network devices such as routers, switches, firewalls, and intrusion detection systems (IDSs) also require maintenance and monitoring. These devices can affect large segments of a network when the device is improperly configured or fails. Unfortunately, many of these devices do not provide sufficient security services. For example, some devices have only one account available for maintenance purposes and may not even have an audit capability. This can result in a loss of accountability. In organizations with multiple administrators assigned the responsibility of maintaining network devices, procedures must be developed to provide accountability for administrative activities. System administrators require the ability to affect certain critical operations such as setting the time, boot sequence, system logs, and passwords. In contrast, ordinary users should not be allowed to affect these operational aspects, as they may negatively impact the security services of the system. System time is a critical element that must be set in systems to provide a reference for event correlation. Administrators depend on clock synchronization among network components for event correlation. A lack of synchronization can make it difficult for system administrators to match up events in different audit logs when conducting an analysis of an actual or suspected attack. System logs are used by administrators to identify problem areas in the network. Logs are useful for finding misconfigured systems that might be attempting to communicate with the wrong protocol or port. Furthermore, auditable events of user activity are also recorded in these logs. This provides the administrator with a record of events that can be used to correlate an actual or suspected attack on the system. Many application and system logs are not encrypted or digitally signed, making it possible for an attacker to modify a log without detection. Therefore, access to the logs in transit or in storage on the network should be denied to unauthorized individuals and processes. 636
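Because many logs carry no built-in integrity protection, a simple compensating practice is to add a keyed hash to each entry so that later modification can be detected during review. The following is a minimal Python sketch of that idea; the key handling, log format, and separator character are assumptions made for this illustration, not a prescribed mechanism.

import hmac, hashlib

# Assumption for the example: the signing key is kept away from the host that
# produces the log (for instance, on the security administrator's workstation),
# so an attacker who alters the log cannot also recompute valid tags.
KEY = b"replace-with-a-secret-key-kept-off-the-logged-host"

def sign_entry(entry: str) -> str:
    """Append an HMAC-SHA256 tag to a single log line."""
    tag = hmac.new(KEY, entry.encode(), hashlib.sha256).hexdigest()
    return f"{entry}|{tag}"

def verify_entry(line: str) -> bool:
    """Recompute the tag; any change to the entry breaks the match."""
    entry, _, tag = line.rpartition("|")
    expected = hmac.new(KEY, entry.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

# Usage: write signed lines, then verify the whole file during audit review.
signed = sign_entry("2006-10-13 08:00:01 user=jsmith action=logon status=success")
print(verify_entry(signed))                          # True
print(verify_entry(signed.replace("jsmith", "x")))   # False - tampering detected

A scheme like this does not replace access controls on the logs themselves; it only gives the reviewer a way to notice that a stored or transmitted entry has been changed.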
Operations Security Security Administrators The role of security administrators is to provide oversight for the security operations of a system. The aspects of security operations in their purview include account management, assignment of file sensitivity labels, system security settings, and review of audit data. Security administrators complement the functions of system administrators by providing an additional viewpoint of network activity. These administrators usually have fewer rights than system administrators. This is necessary to ensure that separation of duties is enforced. Security administrators provide a check and balance of the power assigned to system administrators with the ability to audit and review their activities. File Sensitivity Labels. Mandatory access control systems implement file sensitivity labels to control access to information. A sensitivity label is used to allow privileges or deny access to a file. Sensitivity labels can prevent data from being written to an area on the system with a lower classification or sensitivity. System Security Characteristics. Operating systems and some applications, such as database management systems, and networking equipment contain a significant number of security settings. Security administrators are responsible for defining the security settings of a system. In some cases, the security administrator may also implement the settings in conjunction with the system administrator or appropriate application manager. It is necessary for the security administrator and system administrator to work together on security settings because an improper configuration can impact the proper operation of the system or network. Clearances. Individuals are granted clearances according to their trustworthiness and the level of access to sensitive information needed for their assigned duties. Security administrators participate in the clearance process by ensuring that individuals have a proper level of clearance assigned prior to providing the individual an account and password. Periodic background checks should also be conducted to ensure that the level of trust granted to an individual is appropriate for his or her assigned duty. Individuals should not be given access to areas of the system where they have demonstrated a serious lack of judgment or illegal activity. For example, individuals convicted of committing financial fraud should not be granted access to financial systems and databases. Clearances are a useful tool for determining the trustworthiness of an individual and the likelihood of their compliance with organization policy. Passwords. Password distribution is an important function of the security administrator. Users should be given passwords in a manner that precludes revealing them to other individuals. Usually, passwords are given 637
to users in person by the security administrator. However, this is not practical for organizations distributed across a large geographical area. In these instances, the security administrator requires trusted distribution channels to avoid a compromise. One method of distribution is to have a known and trusted entity introduce the new user to the security administrator in a telephone conversation. A temporary password can then be conveyed telephonically to the new user. This is preferable, as it provides an out-of-channel communication of the sensitive information. E-mailing unencrypted passwords should be avoided because the password would be in cleartext as it traverses the network. Furthermore, it is possible that the e-mail recipient would not be the actual user, resulting in an unauthorized disclosure of the password.

Occasionally users forget their passwords. This requires the security administrator to reset the password for the user's account. Once again, this is best accomplished in person by the security administrator. However, it again becomes problematic in large distributed environments. Distribution methods for password resets should use the same trusted distribution paths as those used for establishing new accounts. The most important point to bear in mind regarding password management is that the identity of the intended recipient must be validated prior to providing the password.

Account Characteristics. Accounts are given certain characteristics
regarding their ability to process within the system. Restrictions can be placed on accounts that only allow them to be used on specific workstations or servers. Another characteristic is the ability to have multiple active sessions. Service accounts may require the ability to have multiple active sessions when used in a distributed environment. Accounts used by individuals should not have multiple log-on session capabilities. Preventing this characteristic restricts the unauthorized sharing of accounts and interrupts the use of stolen accounts by denying the attacker access to the system. Restricting multiple log-ons also serves as a detection control when the authorized user is denied access due to the unauthorized use of his or her account. Security Profiles. Efficient management of users requires the assignment of individual accounts into groups or roles. Management of accounts through groups and roles should not be confused with the assignment of group accounts. A group account is an individual account shared between multiple individuals. Security practitioners should discourage the use of group accounts because the sharing of a single account among multiple users results in a loss of accountability. In contrast, the assigning of individual accounts to groups or roles provides an efficient mechanism for administering rights and privileges. This clustering of accounts allows rights and privileges to be assigned to groups or a role as opposed to individual accounts. A group represents an account management technique for 638
Operations Security distributing rights and privileges among accounts that require similar security profiles. Group management involves assigning a user account to one or multiple groups. Each group is given a set of permissions to access objects within a system. The security profile of the account represents the aggregate of group memberships. The most common methodology of segregating user rights and privileges according to their assigned duties is known as role-based access control (RBAC). When using roles as a form of account management, users are typically members of one role at a time. Accounts are granted rights and permissions according to what is assigned to the particular role. Security administrators must devise the appropriate assignment of permissions and rights, depending on the access control strategy used. Setting the security profiles of groups and roles should support the concepts of least privilege and separation of duties to enforce the organization’s security policy. Audit Data Analysis and Management. Security administrators are responsible for reviewing audit data within the enterprise. Auditing information can be obtained from numerous areas within a typical enterprise to include servers, workstations, databases, routers, firewalls, antivirus software, and other security-related applications. The amount of data needing review is truly enormous. Tackling this challenge requires the use of a solid strategy and tools based on the organization’s security policy and risk acceptance.
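As a small illustration of the kind of tooling such a strategy relies on, the Python sketch below scans a consolidated log for two of the indicators discussed later in this section: repeated failed log-ons and a single account active on more than one host at the same time. The normalized log format, event names, and threshold are assumptions made for this example only.

from collections import Counter, defaultdict

# Assumed normalized log format, one event per line:
# "timestamp user host event", e.g. "2006-10-13T08:00:01 jsmith ws042 logon_failure"
FAILED_LOGON_THRESHOLD = 5

def review(log_lines):
    failures = Counter()
    active_hosts = defaultdict(set)
    for line in log_lines:
        timestamp, user, host, event = line.split()
        if event == "logon_failure":
            failures[user] += 1
        elif event == "logon_success":
            active_hosts[user].add(host)
        elif event == "logoff":
            active_hosts[user].discard(host)

    for user, count in failures.items():
        if count >= FAILED_LOGON_THRESHOLD:
            print(f"Possible attack: {count} failed log-ons for account {user}")
    for user, hosts in active_hosts.items():
        if len(hosts) > 1:
            print(f"Possible compromised account: {user} active on {sorted(hosts)}")

Real log review requires far richer correlation than this, but even a short script of this kind shows why logs must first be gathered and normalized before meaningful analysis can begin.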
A decision must be made regarding the granularity of auditing necessary within an organization. Devices and applications in the system responsible for the storage and manipulation of sensitive data should have sufficient audit capabilities enabled to detect unauthorized activity or attacks against the system. The auditing granularity implemented should support the organization's policy for accountability and detection of malicious activity. Consideration must be given to the amount of network resources necessary to support the audit policy. Audit activities can consume considerable CPU time, network bandwidth, and storage space. These aspects of system resources must be well thought out for each implementation of the audit policy to ensure that system availability is not degraded to a point that it impacts end-user requirements and expectations.

Once the critical areas are identified, an automated process should be enabled that gathers the various audit logs into a centralized location for review. The formats for many audit logs are proprietary. Getting a clear view of the audit information requires the use of a tool that standardizes the logs for review.

When reviewing audit logs, the security administrator should be looking for unauthorized and unusual activity. Viewing failed attempts can give an indication of attempted attacks, but this is not the whole picture. Large amounts of traffic for a particular account may point to the existence of a covert channel or backdoor. Multiple instances of an account log-in on the network at the same time may point to a compromised account. Although tools exist and are required to efficiently analyze audit logs, it is more important for the security administrator to have a thorough understanding of the types of events on the network that are normal versus those that are not.

The final aspect of audit review and management concerns the disposition of audit data. Logs should be retained as long as necessary to go backward in time to determine when an initial event or breach of security occurred. Audit data should follow the concepts of online and offline storage. Online storage should provide the security administrator with a sufficient amount of time to reconstruct events when they are discovered, such as 90 days. Offline storage is used in an archival fashion to comply with policies, regulations, or laws. Audit data should not be kept any longer than necessary, as it can consume massive amounts of storage capacity.

System Accounts

Many systems contain autonomous processes using dedicated accounts to provide a variety of system services. Many of these services are background processes that run in their own security context. Background services are also referred to as daemons. Accounts are assigned to these services to provide the appropriate privileges for process functionality. In many instances, these services are assigned elevated privileges as a default on installation. Security practitioners should carefully review system and application documentation to ascertain the minimum privileges necessary for the service to properly run. The concept of least privilege should be applied to system accounts just as it is to other accounts.

In some instances of a default system or application installation, numerous system accounts are created. Database management systems often contain a number of these types of accounts. However, the organization may not require the functionality provided by the default installation. In this case, the accounts and privileges not needed should be either disabled or removed. A determination should be made in conjunction with system designers as to whether the existence of a system account is necessary or if it should be removed. In situations where a determination is not clear, the account should be disabled.

Account Management

Organizations must maintain strong control over the number and types of accounts used on systems. Account management involves the life-cycle
Operations Security process for every account in a system. There are primarily four types of accounts: root, service, privileged user, and ordinary user. Root accounts are the all-powerful default administrative accounts used to manage a device or system. These accounts are generally shared by administrators for performing specialized administrative tasks. However, administrators should refrain from using these accounts, as a loss of accountability is possible when multiple individuals have access to an account password. These accounts should be renamed whenever possible and strictly controlled. Default passwords should be changed prior to adding the device or computer to the production network. Manual logs should be kept to record individual use of the root account and password. The manual logs should correlate with the system audit log regarding the account activity. It is best to have the administrators log in at the device console in an area with restricted access. Remote log-in with root accounts should only occur when the session can be encrypted. This prevents a compromise of the root password or session hijacking by a rogue node on the system. Systems typically use a variety of accounts to provide automated services, such as Web servers, e-mail servers, and database management systems. The services require accounts to perform actions on the local system. Services might also have multiple internal accounts. Database management systems, such as Oracle 8i, can have 10 or more internal default accounts at the initial installation. Depending on the use of the system, internal accounts not needed should be disabled or deleted. Services may also have internal root-type accounts that should be managed as mentioned above. Management of service accounts can become challenging in a distributed environment where administrators must perform administrative functions remotely. Passwords for service accounts must be strictly controlled to prevent a masquerading attack. Developing a strategy for changing service account passwords on a routine basis is necessary to provide continued integrity for the system. Privileged user accounts are those assigned to system, security, database, and other application administrators. These types of accounts must be strictly controlled and not assigned to multiple individuals, so that adequate accountability exists. The number of privileged accounts ought to be kept to an absolute minimum to enforce the concept of separation of duties. Passwords for administrative accounts should be distributed in person. Administrators should acknowledge in writing receipt of their account and willingness to follow organizational usage policies for privileged accounts. Remove administrative accounts immediately from the system when individuals no longer require that level of access. Ordinary user accounts are assigned to individuals requiring access to information technology resources. Reviews of account activity are neces641
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® sary to determine the existence of inactive accounts. Those accounts found to be inactive due to the departure of an individual from the organization should be removed from the system. Accounts that are inactive due to extended leave or temporary duties should be disabled. Ideally, individuals or their supervisors would promptly report temporary or permanent departures of system users to the appropriate system or security administrator. However, this does not always occur, so the security practitioner must be vigilant in conducting periodic reviews of accounts for inactivity. Resource Protection Systems are comprised of a variety of resources. The principal resources available are facilities, network devices, software, data, and information. Facilities provide services for the entire network. Network devices enable the processing, distribution, and storage of data and information. The security practitioner seeks to ensure that all security services are properly employed, from the facility all the way down to individual data items. Each aforementioned resource is considered a special class that requires varying degrees of physical and logical security. Facilities Facilities require appropriate systems and controls to sustain the IT operation environment. Various utilities and systems are necessary to support operations and provide continuous protection. Fire detection and suppression systems are necessary for resource protection and worker safety. Heating, ventilation, and air conditioning systems provide appropriate temperature and humidity controls for user comfort and acceptable environmental operating ranges for equipment. Water and sewage systems are an integral part of any facility. IT systems cannot provide adequate availability without a reliable power supply and distribution system. Power should also be conditioned to remove spikes and fluctuations. Stable communications are a vital aspect of geographically distributed systems. Finally, an integrated facility access control and intrusion detection system forms the first line of defense regarding the IT operations security. For a more detailed discussion of facility physical security issues, see the chapter on physical (environmental) security, Domain 4. Hardware System hardware requires appropriate physical security measures to maintain the desired confidentiality, integrity, and availability. Physical security measures should be implemented following the concept of least privilege. In this sense, individuals not authorized access to the equipment should be prevented from tampering with it. 642
Operations Security Access to servers and host systems should be restricted. Servers and host systems are usually located in a segregated room known as a server room, operations center, or data center. These rooms typically contain the bulk of the organization’s information. Access to these centers must be strictly controlled. Individuals not authorized access should be escorted while in the room. Operator consoles and workstations should also have limited access whenever possible. Users performing sensitive data operations should have their workstations located in a separate room with access limited to the individuals authorized to work in the room. Providing physical security of sensitive workstations can reduce the likelihood of an unauthorized individual from tampering with a workstation to bypass logical controls, remove media, or install malicious code or devices. Printing devices should be located near the authorized users. System policies should be established that prevent users from producing output to printers outside of their immediate area unless absolutely necessary. Users should be required through policy and instructed through training to immediately retrieve their output from printing devices to preclude unauthorized access to sensitive information. Firewalls act as the network sentinel. They allow authorized traffic while denying all others. Due to their critical nature, physical access to firewalls should be limited to authorized administrators. Firewalls require at least the same level of physical protection as servers. Virtual private network (VPN) devices encrypt network traffic that is sent over public networks. A VPN protects the integrity and confidentiality of encrypted network traffic. VPN devices extend the organizational network beyond the protection of firewalls, and therefore should be afforded the same level of physical protection as firewalls. Routers and switches form the backbone of network communications. Improperly configured routers or switches can quickly cause a loss of communications. Access to these devices should be limited to appropriate administrative personnel. Cable media provides the link between network nodes. Unshielded twisted-pair (UTP) and fiber-optic cables represent the two most common types of cable media in use. All cabling should be periodically inspected to detect tampering. It is possible to tap into the middle of a UTP cable and use a network monitoring device or a machine equipped with sniffing software to record network traffic. Cables that are not currently in use should be either removed or disconnected from other networking devices. Periodic scans of the network should be instituted to detect unauthorized links or devices. 643
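A rough illustration of such a periodic scan is given below. The Python sketch compares the devices observed on the network against an inventory of authorized hardware (MAC) addresses and reports anything unknown; the file names and the use of an external discovery tool to produce the observed list are assumptions for this example.

# Assumed inputs: "authorized_macs.txt" lists approved hardware addresses, one per
# line; "discovered_macs.txt" is exported by whatever discovery tool the
# organization already runs (for example, an ARP sweep written to a file).

def load_macs(path):
    with open(path) as fh:
        return {line.strip().lower() for line in fh if line.strip()}

authorized = load_macs("authorized_macs.txt")
discovered = load_macs("discovered_macs.txt")

unknown = discovered - authorized
missing = authorized - discovered   # devices expected but not seen this scan

for mac in sorted(unknown):
    print(f"ALERT: unauthorized or unknown device on the network: {mac}")
for mac in sorted(missing):
    print(f"NOTE: authorized device not observed during this scan: {mac}")

Hardware addresses can be spoofed, so a comparison like this is a detection aid rather than proof of compliance, and any unknown device should be physically traced and investigated.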
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® Wireless equipment allows the linking of network nodes without reliance on cable media. Implementing wireless access points requires strong encryption as well as authentication mechanisms to protect the network from an intrusion. The two primary security methods available are wired equivalent privacy (WEP) and Wi-Fi protected access (WPA). WEP is known to have some serious limitations and weaknesses. WPA is a substantial improvement over WEP with regard to the encryption algorithm implemented, as well as the encryption key exchange mechanism. However, implementations of wireless networks have the inherent weakness of exposing network nodes to any device within range. Security practitioners should consider augmenting wireless networks with strong authentication and an IDS. A more detailed discussion of cabling and wireless equipment is located in the chapter on telecommunications and network security, Domain 7. Software Original copies of licensed software must be controlled by the organization to prevent copyright infringement. Unscrupulous individuals within an organization may make illegal copies of software for their personal use. Security practitioners should assist their organizations in providing appropriate physical controls to prevent illegal duplication and distribution of licensed software. All software copies should be managed by the media librarian. Inventory scans of installed software should also be conducted by the organization to identify unauthorized installations or license violations. Operating systems and applications contain information critical to the proper operation of the security controls. Attackers with physical access to the machine can retrieve or alter sensitive information. One of the most sensitive types of information contained within a system is a password file. Once a password file is copied from the target system and cracked, the attacker can mount masquerading attacks against the system. Audit logs are another type of information within systems that must be protected. Individuals that can modify audit logs will be able to conceal their activities and avoid detection of system abuse. It is important to note that a lack of physical control can enable these types of compromises, but it is also possible to successfully compromise these sensitive types of information from over the network by implementing a variety of attack methods. Documentation All documentation associated with a given system should be catalogued and controlled. Internal documentation regarding network design, vulnerabilities, proprietary methods, and source code requires special controls for hard and soft copies. Proprietary information in either soft- or hardcopy format requires the same level of physical and logical controls to prevent the unauthorized removal of the information from the organization’s 644
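Tying back to the inventory scans mentioned under Software above, the following Python sketch flags installations that do not appear on the organization's approved-software list. The file names, column layout, and the existence of an endpoint inventory export are assumptions made for this illustration.

import csv

# Assumed inputs: "approved_software.csv" with columns name,version maintained by
# the media librarian, and "installed_software.csv" with columns host,name,version
# exported by whatever endpoint inventory tool the organization uses.

def load_approved(path):
    with open(path, newline="") as fh:
        return {(row["name"].lower(), row["version"]) for row in csv.DictReader(fh)}

def report_unapproved(installed_path, approved):
    with open(installed_path, newline="") as fh:
        for row in csv.DictReader(fh):
            key = (row["name"].lower(), row["version"])
            if key not in approved:
                print(f"{row['host']}: unapproved software {row['name']} {row['version']}")

approved = load_approved("approved_software.csv")
report_unapproved("installed_software.csv", approved)

Findings from a scan like this feed both license compliance reviews and the detection of unauthorized or malicious installations.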
Operations Security premises. Systems also require a copious number of management passwords for services, root accounts, and network devices, which are typically written down or saved to a file. These items should not reside on the network in an unencrypted file. Furthermore, access to the hard-copy documents containing these passwords should be limited to a minimum number of administrators. Threats to Operations Operations can be impacted by a variety of threat agents. These threats are caused by individuals and environmental factors. A security practitioner that is aware of the threat agents affecting the system will be more prepared to propose or implement controls to mitigate or limit the potential damage. Disclosure. This threat represents the unauthorized release of informa-
tion. Systems utilizing discretionary access control (DAC) models, which are the most prevalent today, are vulnerable to disclosure due to weaknesses inherent in the security model. This weakness occurs due to the ability of an individual to make copies of sensitive information that he or she may have a right to read. This threat is manifested through unauthorized account sharing, inappropriate access by individuals with administrative privileges, and malicious code attacks. This threat is one of the more serious types, as it can occur without an indication or understanding of what is disclosed. Destruction. Malicious, unintentional, and uncontrollable irreparable damage can result in the destruction of system data and resources. Malicious activity on the part of malware and malicious users can cause the loss of a significant amount of information. Errors on the part of users can cause the accidental deletion of important data. Uncontrollable damage occurs due to natural and man-made disasters. Interruption and Nonavailability. Failure of equipment, services, and operational procedures can cause system components to become unavailable. Denial-of-service attacks and malicious code can also interrupt operations. Corruption and Modification. Environmental factors as well as the acts of individuals can cause damage to systems and data. Sporadic fluctuations in temperature or line power can cause systems to make errors while writing data. Inappropriate or accidental changes to file or table permissions can cause unintended data corruption. Theft. Data or equipment components can be stolen by insiders or through a burglary. 645
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® Espionage. Information is power. The lack of controls on proprietary information within the organization can result in a competitive disadvantage and even the complete failure of the enterprise. This threat occurs not only between governments, but between businesses, as well as by espionage infiltrations by governments on foreign businesses on behalf of their domestic enterprises. Hackers and Crackers. Malevolent hackers attempt to penetrate systems for fun or profit through the use of self-developed or open-source hacking tools. After a successful penetration, they may introduce other vulnerabilities into the system, such as a backdoor, to provide easier access. Crackers, on the other hand, are focused on breaking security measures within software for their enjoyment or profit. Their activities are primarily conducted to break copyright protection methods incorporated into software packages. Malicious Code. This includes all types of programs designed to steal information or cause damage to system operations. The most prominent type of malicious code includes Trojan horses, viruses, worms, spyware, and logic bombs. Malware is thoroughly discussed in the chapter on application security, Domain 8.
Operations security is implemented with the use of a variety of administrative or management controls to counteract prevalent threats. This type of security focuses on the people aspect of systems. There are different control types and control methods that are common to all areas of operations security. Control types describe classes of administrative protection measures used to protect the virtual environment. Control methods represent workflow processes that provide constraints on user activity, providing security for the system and organizational information in the physical world. Control Types Preventative Controls. This class of controls is used to preclude events or actions that might compromise a system or cause a policy violation. Preventative controls, by virtue of their design, protect systems and information from intentional or accidental compromise by denying unauthorized access. Access control measures such as locks, encryption, and access control lists are examples of preventative controls. Detective Controls. These identify an attack or other undesirable action that avoids preventative controls. A detective control reacts to changes in an environment or process that deviate from a normal or accepted pattern. Detective controls can be automated or manual processes that are administrative, technical, or physical measures. Automated technical processes 646
Operations Security include audit logs, intrusion detection systems, and vulnerability scans. Manual administrative types of detective controls include reviews of audit logs, compliance reviews of systems, security tests and evaluations, as well as penetration tests. Physical detective controls include tamper-evident tape and other types of intrusion detection seals. Corrective Controls. This type of control reacts to detected events by rectifying the violation and preventing its reoccurrence. Corrective controls can be automated or manual. Self-healing systems use corrective controls when they detect an unauthorized change or damage within the system. For example, rollback mechanisms within a database management system provide the administrator with the ability to undo an action. Awareness training is a form of corrective control used to inform users of new threats and how to deal with them. Directive Controls. Administrative instruments such as policies, procedures, guidelines, and agreements are considered directive controls. This type of control is provided in writing to organizational or system users, dictating appropriate behavior and acceptable types of activity regarding systems and information. Laws, governmental regulations, and industry standards are other types of directive controls that must be adhered to. For directive controls to be effective, users must be made aware of the consequences for noncompliance. Furthermore, upper management must fully support the directive controls and enforce penalties when violations occur so these controls are adhered to by users. Recovery Controls. This type of control encompasses processes used to return the system to a secure state after the occurrence of a security event. Business continuity, disaster recovery, and contingency plans are forms of administrative recovery controls. Backups and redundant system components, such as hot spares, redundant array of inexpensive disks (RAID), and antivirus corrective actions, are examples of technical implementations of recovery controls. Deterrent Controls. The implementation of a control that causes an attacker or violator to reconsider his actions represents a deterrent control. This type of control creates a situation for the attacker that presents an unsatisfactory or unacceptable outcome if the violation is discovered. Detective controls can also act as deterrent controls if the potential violator is aware of their presence. Examples of deterrent controls are policies that prescribe penalties for violators, video cameras, intrusion detection systems, misuse of detection systems, and auditing. Compensating Controls. Controls that augment or supplement existing controls to address risk are called compensating controls. In some situations, a vulnerability may exist that cannot be immediately corrected. When 647
this occurs, it is necessary to implement compensating controls to mitigate the vulnerability. For instance, suppose a critical vulnerability is discovered in a network service that is needed for internal authorized users and the timeframe for the fix is unknown. A compensating control, such as a firewall, could be used in front of the service on the network to prevent unauthenticated traffic from having access to the service. In this regard, the attack surface of the vulnerability is reduced to the insider threat.

Control Methods

Separation of Responsibilities. This method, also known as separation of duties, provides operational protection measures by segregating activity to prevent a single individual from compromising security. The principal idea is that no one person should be capable of perpetrating a fraud or an attack without detection.

Least Privilege. This concept requires users and processes to be assigned the minimum access and privileges necessary to accomplish their assigned tasks. For example, ordinary users should not be delegated elevated privileges on a system unless they are promoted to system administrator. Likewise, individuals should not have unlimited access to organizational information. Users should be restricted from accessing sensitive information unless it is necessary in the performance of their assigned duties.

Job Rotation. Workers in sensitive positions, such as administrators, should have their assigned duties rotated on a regular basis. This is necessary to prevent fraud in the event that there is insufficient separation of duties, which usually occurs in organizations that have limited staffing resources. When duties are properly separated, committing fraud requires collusion between individuals. By rotating job duties, it is possible to interrupt the continued occurrence of a fraud being perpetrated; if a fraud is occurring, the new individual assigned the task may discover it.

Need to Know. Individuals should not be granted access to information unless there exists a bona fide need for access in the performance of their assigned duties. Implementing need-to-know methods can protect information from unauthorized disclosure or espionage, or can be used to enhance the investigation of a suspected information compromise. In mandatory access control (MAC), the system compares subject and object labels to control access based on clearances and classifications, while the owner supplies the need-to-know element, because not everyone cleared at a certain level should have access to everything classified at that level. It is desirable to institute need to know for DAC systems as well. Unfortunately, this is much more challenging to implement in a DAC system than in MAC. The difficulty manifests itself when an individual with read-only access to
Operations Security data can redistribute the information to any location where he or she has write access by simply creating a copy of the original information. Implementing need to know within a DAC system requires well-crafted policies, user education, and consistent enforcement. Security Audits and Reviews. A security audit is typically performed by an independent third party to the management of the system. The audit determines the degree with which the required controls are implemented. A security review is conducted by the system maintenance or security personnel to discover vulnerabilities within the system. A vulnerability occurs when policies are not followed, misconfigurations are present, or flaws exist in the hardware or software of the system. System reviews are sometimes referred to as a vulnerability assessment. Penetration testing is a form of security review where an attempt is made to gain access or compromise a system. Penetration tests can be conducted with physical access to the system or from the outside of the system and facility.
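To make one small technical element of such a review concrete, the Python sketch below checks which TCP ports answer on a single host, the kind of probe that often forms the first step of a vulnerability assessment. The target address and port list are placeholders chosen for this illustration, and scanning should only ever be performed against systems the reviewer is authorized to assess.

import socket

# Placeholder target and ports; a real review would draw these from the system
# inventory and an approved rules-of-engagement document.
TARGET = "192.0.2.10"          # documentation address (RFC 5737), not a real host
PORTS = [21, 22, 23, 25, 80, 110, 143, 443, 445, 3389]

def probe(host, port, timeout=1.0):
    """Return True if the TCP port accepts a connection within the timeout."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((host, port)) == 0

open_ports = [p for p in PORTS if probe(TARGET, p)]
print(f"{TARGET}: open TCP ports {open_ports}")
# Each open port should map to an authorized, documented service; anything
# unexpected becomes a finding in the vulnerability assessment.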
Security audits can be divided between internal and external reviews. Internal reviews are conducted by a member of the organization’s staff that does not have management responsibility for the system. External reviews involve outside entities that evaluate the system based on the organizational security requirements. Entities performing an external review provide an independent assessment of the system. Security practitioners may find this review particularly appealing if the assessment supports prior security concerns that have been avoided by management. Managers should invite an independent review as a fresh perspective that may uncover unknown weaknesses within the system or associated processes. Supervision. All users and administrators should be held accountable for their actions. Activity by authorized users should be monitored through system and application logs for inappropriate behavior. In extreme circumstances, it may be necessary to use direct monitoring techniques such as periodic screen shots and keyboard logging to detect unauthorized activity. Inappropriate activity should be reported to the immediate supervisor of the individual for disciplinary action in accordance with organizational policy. It is important to note that the network watchers should be watched too. Individuals assigned to monitor security activity should have their own activity monitored by either a supervisor or a third party to ensure integrity for the supervision control method.
Prior to providing individuals with access to the system, a background check should be performed. This can include verifications of their name, experience, and educational level. It is simply a matter of checking the validity of information provided through a resume or job application. It is also advisable to perform a criminal history check as well when the person is being assigned a position of trust. However, criminal background checks 649
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® for administrator and security positions should be an absolute requirement because these represent the more trusted positions within the organization. Periodic reinvestigations of individuals in positions of trust should also be conducted. Input/Output Controls. Input controls involve the use of time stamps, authentication, and logging for accountability and validation purposes. Output controls perform checks of the output versus the input to identify errors. Output controls are also used to identify the owner of the product. Print cover sheets that accompany data printouts are examples of output controls. They identify the owner of the output and the fact that an output was generated. Printing cover sheets, even though an error occurs or there is no data to output, serves as an output validation. Cover sheets should indicate the time, date, user, and number of pages printed. If no output is generated, then the cover sheet should indicate this with the statement “no output.” Antivirus Management. To remain effective, antivirus software requires continual updates. Security practitioners must monitor these critical controls within the system. Antivirus software is most commonly found on email servers, file servers, and workstations. Some firewalls also have antivirus capabilities. Each antivirus implementation should be monitored to ensure that updates are received and active. Likewise, the antivirus engines should be configured to perform automatic scanning for new media and e-mail attachments. Scanning of the host computer should be scheduled and accomplished on a regular basis. It is best for the scanning to be done automatically during nonpeak usage times.
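One practical way to monitor that updates are actually being received is to compare the age of each host's signature database against a maximum acceptable age. The Python sketch below assumes the update timestamps have already been collected into a simple report file; the file name, record format, and three-day threshold are assumptions made for this example.

from datetime import datetime, timedelta

MAX_AGE = timedelta(days=3)   # assumed organizational threshold
NOW = datetime.now()

# Assumed report format, one host per line: "hostname 2006-10-13T08:00:00"
with open("av_signature_report.txt") as fh:
    for line in fh:
        host, stamp = line.split()
        updated = datetime.fromisoformat(stamp)
        if NOW - updated > MAX_AGE:
            age_days = (NOW - updated).days
            print(f"{host}: antivirus signatures are {age_days} days old - investigate")

Hosts flagged by such a check may have a broken update service, blocked network access to the update source, or disabled antivirus software, and each case warrants follow-up.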
Media Types and Protection Methods Organizational information resides on various media types. Security practitioners should keep in mind that media includes soft copy as well as hard copy. Soft-copy media can be found as magnetic, optical, and solid state. Magnetic media includes floppy disks, tapes, and hard drives. CD-ROMs and DVDs are examples of optical media. Solid-state media includes flash drives and memory cards. Hard-copy examples include paper and microfiche. Transmission of sensitive information should be protected regardless of the storage method. Media is vulnerable to compromise through unauthorized access, theft, and loss. Sensitive data on electronic media should be encrypted during the transmission process. This effectively mitigates the aforementioned compromises. Using special seals and tamper-evident tape is helpful in deterring and detecting unauthorized access to hard-copy information. Some electronic media can contain sensitive information or even passwords and should be encrypted to prevent a compromise. Hard650
Operations Security copy media might contain privacy or other types of proprietary information, and therefore should also be afforded protection measures. Electronic transport strategies such as system snapshots, shadowing, network backups, and electronic vaulting send bulk information from one part of a network to another. The information may travel a significant distance and pass many network segments before reaching the intended storage area. The data is viewable by network sniffing devices within any segment where the traffic passes. For this reason, the data should be protected through the use of encryption to mitigate a compromise. Scans should be periodically conducted to discover the existence of sniffer devices and software. The scans should be network and host based to provide the highest level of assurance. Network scans are used to identify unknown or unauthorized devices on the network as well as the existence of nonresponsive ports, which are indicative of devices set to promiscuous mode. Host-based scans are necessary to identify unknown and unauthorized software that may be monitoring network traffic or providing a backdoor into the network. Data saved to backup media should also be protected from compromise through the use of encryption. Full backups can contain critical system data such as passwords as well as organizational proprietary information. Misuse or loss of backup media may cause irreparable damage to the organization. It is very difficult to implement accountability controls for backup media due to their portability and lack of automated controls. System auditing can be used to track access to information prior to a backup. However, backup media does not have the ability to account for access to the data. Therefore, encryption provides the best possible control to guard against loss or compromise. Object Reuse An object is a piece of information or data residing on magnetic media. The reassignment and reuse of storage media after assuring that residual data or information does not exist is known as object reuse. Deleting files or formatting media does not actually remove the information. File deletion and media formatting simply remove the pointers to the information. Providing assurance for object reuse requires specialized tools and techniques according to the type of media on which the data resides. Specialized hardware devices known as degaussers can be used to erase data saved to magnetic media. The measure of the amount of energy needed to reduce the magnetic field on the media to zero is known as coercivity. It is important to make sure that the coercivity of the degausser is of sufficient strength to meet object reuse requirements when erasing data. If a degausser is used with insufficient coercivity, then a remanence of the 651
data will exist. Remanence is the measure of the existing magnetic field on the media; it is the residue that remains after an object is degaussed or written over. Data is still recoverable even when the remanence is small. While data remanence exists, there is no assurance of safe object reuse.

Software tools also exist that can provide object reuse assurance. These tools overwrite every sector of magnetic media with a random or predetermined bit pattern. Overwrite methods are effective for all forms of electronic media with the exception of read-only optical media. There is a drawback to using overwrite software. During normal write operations with magnetic media, the head of the drive moves back and forth across the media as data is written. The track of the head does not usually follow the exact path each time. The result is a minuscule amount of data remanence with each pass. With specialized equipment, it is possible to read data that has been overwritten. To provide higher assurance in this case, it is necessary to overwrite each sector multiple times. This makes the overwrite process very time-consuming. The U.S. Department of Defense (DoD) has defined an overwrite process deemed sufficient for sensitive information, up to but not including the classification of Top Secret. Furthermore, researchers have learned that with specialized equipment it is still possible to recover data even after using the overwrite method prescribed by DoD. The degree of difficulty and cost associated with the recovery efforts demonstrated by the researchers are likely to be beyond the capabilities of most attackers. Security practitioners should keep in mind that a one-time pass may be acceptable for noncritical information, but sensitive data should be overwritten with multiple passes.

Overwrite software can also be used to clear the sectors within solid-state media. Because most of these devices use a form of file allocation table (FAT) formatting, the principles of overwriting apply. However, as of this writing, the effectiveness of this method is not well established by the research community. It is suggested that destruction methods should be considered for solid-state media that is no longer used.

The last form of preventing unauthorized access to sensitive data is media destruction. Shredding, burning, grinding, and pulverizing are common methods of physically destroying media. However, it must be mentioned that degaussing can also be a form of media destruction. High-power degaussers are so strong in some cases that they can literally bend and warp the platters in a hard drive. Shredding and burning are effective destruction methods for nonrigid magnetic media. Indeed, some shredders are capable of shredding some rigid media such as an optical disk. This may be an effective alternative for any optical media containing nonsensitive information due to the residue size remaining after feeding the disk into the machine. However, the residue size might be too large for media containing sensitive information. Alternatively, grinding and pulverizing
Operations Security are acceptable choices for rigid and solid-state media. Specialized devices are available for grinding the face of optical media that either sufficiently scratches the surface to render the media unreadable or actually grinds off the data layer of the disk. Sensitive Media Handling Media storing sensitive information requires physical and logical controls. The security practitioner must continually bear in mind that media lacks the means for digital accountability when the data is not encrypted. For this reason, extensive care must be taken when handling sensitive media. Logical and physical controls, such as marking, handling, storing, and declassification, provide methods for the secure handling of sensitive media. Marking. Organizations should have policies in place regarding the marking of media. Storage media should have a physical label identifying the sensitivity of the information contained. Likewise, individual files containing sensitive data within the media should also be marked. Logical labels on files could be used within the filename and or as a property of the file. Some applications allow the inclusion of labels as properties within data files, which are useful for specifying the sensitivity of the information. Handling. Only designated personnel should have access to sensitive media. Policies and procedures describing the proper handling of sensitive media should be promulgated. Individuals responsible for managing sensitive media should be trained on the policies and procedures regarding the proper handling of sensitive media. Security practitioners should never assume that all members of the organization are fully aware or understand security policies. It is also important that logs and other records be used to track the activities of individuals handling backup media. Manual processes, such as access logs, are necessary to compensate for the lack of automated controls regarding access to sensitive media. Storing. Sensitive media should not be left lying about where a passerby could access it. Whenever possible, backup media should be stored in a security container, such as a safe or strong box with limited access. Storing backup media at an off-site location should be considered for disaster recovery purposes. Backup media stored at the same site as the system should be kept in a fire-resistant box whenever possible. In every case, the number of individuals with access to backups should be strictly limited and the separation of duties concept should be implemented to the greatest extent possible. Destruction. Media that is no longer needed or is defective should be destroyed rather than simply disposed of. A record of destruction should be used that corresponds to any logs used for handling media. Security 653
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® practitioners should implement object reuse controls for any media in question when the sensitivity is unknown rather than simply recycling it. Declassification. This control refers to the downgrading of the sensitivity of information. Over the course of time, information once considered sensitive may decline in value or criticality. In these instances, declassification efforts should be implemented to ensure that excessive protection controls are not used for nonsensitive information. When declassifying information, marking, handling, and storage requirements will likely be reduced. Organizations should have declassification practices well documented for use by individuals assigned with the task.
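The handling logs and destruction records described above can be kept in very simple forms. The following Python sketch is illustrative only: it assumes a hypothetical comma-separated log file and invented field names, and shows one way a media librarian might record handling and destruction events so that each piece of labeled media has an auditable history.

import csv
from datetime import datetime, timezone
from pathlib import Path

LOG_FILE = Path("media_handling_log.csv")  # hypothetical location for the handling log

def record_media_event(media_id: str, classification: str, action: str, handler: str) -> None:
    """Append one handling event (issued, returned, transferred, destroyed) to the log."""
    is_new = not LOG_FILE.exists()
    with LOG_FILE.open("a", newline="") as f:
        writer = csv.writer(f)
        if is_new:
            writer.writerow(["timestamp_utc", "media_id", "classification", "action", "handler"])
        writer.writerow([datetime.now(timezone.utc).isoformat(),
                         media_id, classification, action, handler])

if __name__ == "__main__":
    record_media_event("TAPE-0042", "Confidential", "issued", "j.smith")
    record_media_event("TAPE-0042", "Confidential", "destroyed", "media.librarian")

Even a minimal record such as this supports the destruction records and accountability requirements discussed above, because every labeled item can be traced from issue to destruction.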
Misuse Prevention The portability of media provides an opportunity for misuse. The proliferation of media types and the growth of storage capacities allow for the easy removal of vast amounts of data from a system. Further complicating this issue are the inherent weaknesses of access control mechanisms. The discretionary access control model, which is the most employed access control mechanism, permits an individual with read access to an object to make copies of the target data. This can allow an ordinary user to remove large amounts of data. Theft of sensitive information is a real threat that must be dealt with by the security practitioner. Organizational policy and user training is the first line defense against the accidental removal of sensitive information. Users should also be prevented from introducing unauthorized media into the system. The portability of media is an avenue for an individual to introduce malicious code and unauthorized software into the system. Antivirus and antispyware tools are a first-line defense against malicious code. However, it is possible to import malicious software from removable media that is not recognized by antivirus and antispyware tools. Furthermore, individuals may install unauthorized or illegal copies of software, such as games and other utilities, which can violate organizational policy or copyright laws. Preventative tools and techniques should be employed that thwart the execution of unauthorized code from removable media or users introducing unauthorized software into the corporate environment. Although the DAC model has its weaknesses, it should still be used to the greatest extent possible to prevent unauthorized installations from portable media. For instance, directories containing system and application binaries should be set to read only for system users. This setting would prevent ordinary users from adding unauthorized executables and libraries to common directories. Media can be used to commit a fraud within the organization. Due to the lack of access control and auditing mechanisms, backup media containing sensitive information could be surreptitiously altered by individuals with 654
Operations Security physical access to the media. For this reason, only a limited number of individuals should be allowed access to backup media. As previously mentioned, the use of encryption is also effective for protecting the confidentiality and integrity of the information. Furthermore, authorized individuals should be required to log their activities and access to backup media to provide some level of accountability. Lastly, organizations should use a media librarian for controlling access and managing media containing sensitive information. Record Retention Information and data should be kept only as long as it is required. Organizations may have to keep certain records for a period of time as specified by industry standards or in accordance with laws and regulations. Hard- and soft-copy records should not be kept beyond their required or useful life. Security practitioners should ensure that accurate records are maintained by the organization regarding the location and types of records stored. A periodic review of retained records is necessary to reduce the volume of information stored and ensure that only relevant information is preserved. Backups and archives are two different types of methods used to store information. Backups are conducted on a regular basis and are useful in recovering information or a system in the event of a disaster. Backups contain information that is regularly processed by the system users. Information that is needed for historical purposes, but not in continual use, should be saved and removed from the system as an archive. Each type of record retention requires appropriate management, to include strong physical access controls and periodic reviews for the relevance of the stored records. Continuity of Operations Availability is one of the primary pillars of information security and arguably the one that receives the most attention from IT professionals and managers. The concept of availability requires that systems provide timely and reliable access to data and services for authorized users. In the business environment, availability is one of the most important aspects of information security. Organizations depend on their systems to be online for employee use and customer transactions. The loss of availability can mean the loss of revenue and trust by customers, which could lead to the failure of the organization. Although confidentiality and integrity are also important, availability is more closely associated with revenue, and therefore has a greater impact and focus. This section discusses various availability aspects of operations security known as continuity of operations. Organizations that rely on IT sys655
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® tems must have plans and procedures to continue operations in the event of a failure or catastrophe. Events affecting availability can be as mundane as the temporary loss of electrical power or as severe as the complete destruction of the IT system facility. In either case, the system must have adequate controls implemented to provide continued operations. Policies, procedures, and plans must be documented and regularly tested to ensure that the IT staff is capable of recovering from an event in the desired amount of time. Continuity of operations also involves the implementation of detective and preventative controls that are used to detect the potential of or prevent the loss of availability. These types of controls serve as the eyes and ears of the security practitioner for monitoring the availability health of the system. Detective controls include intrusion detection systems and vulnerability scanning. Examples of preventative controls are antivirus software and firewalls. System availability is ensured through properly implemented redundancy and backups. Redundancy refers to an implementation whereby a duplicate item is immediately available in the event of a failure on the part of the primary. A backup is a copy of the primary data or system that is available in a short period and is usually housed at a separate facility. Although redundancy and backup may seem similar, they serve two distinct purposes. Redundancy accommodates failure in system components, whereas backups allow recovery from catastrophic failures. Therefore, the main differences between redundancy and backup regard the immediacy of the implementation, severity of the incident, and physical location of the copy. For instance, suppose a new virus attacks the network and begins deleting user data. Redundant systems will propagate the deletion activity. Once the virus is eradicated, data recovery can ensue with the use of backups. For reasons of this nature, offline backups provide sufficient system separation to enable recovery efforts. Fault Tolerance. Redundant items are said to provide fault tolerance within a system. This means that a system can continue to operate in the event of a component failure. Examples of fault tolerance components to be discussed include hot spares, redundant servers, redundant communication links, and data mirroring.
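As a simple illustration of the fault tolerance idea, the sketch below shows a watchdog that detects a failed primary and directs service to a standby, which is the behavior expected of fault-tolerant components such as redundant servers and network devices. The host names and the TCP health check are assumptions made for the example, not a prescribed design.

import socket
import time

PRIMARY = ("primary.example.internal", 443)    # hypothetical addresses
STANDBY = ("standby.example.internal", 443)

def is_alive(host: str, port: int, timeout: float = 2.0) -> bool:
    """Health check: can a TCP connection be opened to the service?"""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def select_active_node() -> tuple:
    """Return the node that should receive traffic right now."""
    if is_alive(*PRIMARY):
        return PRIMARY
    return STANDBY   # automatic failover when the primary stops responding

if __name__ == "__main__":
    for _ in range(3):                 # a real watchdog would loop indefinitely
        active = select_active_node()
        print(f"routing traffic to {active[0]}")
        time.sleep(5)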
Providing continuity of operations is accomplished through the focused asset management and maintenance of hardware, software, data, communications, and facilities. Security practitioners should be aware of the organizational processes, plans, and procedures used to protect these assets. The degree of redundancy and backup implementations is driven by business requirements and risk management on the part of the system owners and upper-level management. 656
Operations Security A fault-tolerant system needs to detect equipment failure and take immediate, automatic action to ensure the continuity of operation. Systems with fault-tolerant capabilities will automatically initiate the redundant component when the primary fails. This type of capability is commonly seen in servers with dual power supplies, RAID implementations, and redundant networking components such as routers and firewalls. Data Protection Data and information represent the collective intelligence of an organization. The loss of data can have serious impacts and consequences on the organization. Providing redundancy and backup are key ingredients for proper operations security management. There exist different techniques and strategies for providing both ingredients. However, it is important to keep in mind that data redundancy and backup are separate concepts fulfilling different goals. Data redundancy provides online availability when a failure in the system occurs. Data backup provides recovery in the event of a loss of data. Data redundancy techniques provide availability for data in the event of a system component failure. The two primary techniques for accomplishing this are to use a redundant array of independent (formerly inexpensive) disks (RAID) and database shadowing. RAID techniques involve the use of multiple hard drives and various writing techniques to provide the redundancy. The benefit of a RAID is realized when a hard drive in the array fails. Operations continue despite the failure due to the technique employed. A RAID implementation is identified according to the number of redundant disks and the type of writing technique employed. Each type is identified by the word RAID and a number indicating the RAID level. RAID 0: Writes files in stripes across multiple disks without the use of parity information. This technique allows for fast reading and writing to disk. However, without the parity information, it is not possible to recover from a hard drive failure. This technique does not provide redundancy and should not be used for systems with high availability requirements. RAID 1: This level duplicates all disk writes from one disk to another to create two identical drives. This technique is also known as data mirroring. Redundancy is provided at this level; when one hard drive fails, the other is still available. Mirroring also allows the redundancy of hard drive controllers, which is called duplexing. RAID 2: Data is spread across multiple disks at the bit level using this technique. Redundancy information is computed using a Hamming error correction code, which is the same technique used within hard 657
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® drives and error-correcting memory modules. Due to the complexity involved with this technique, it is not used commercially at this time. RAIDs 3 and 4: Data is striped across multiple disks in bytes for level 3 and blocks for level 4. Parity information is written to a dedicated disk. These levels provide redundancy and can tolerate the loss of any one drive in the array. RAID 5: This level requires three or more drives to implement. Data and parity information is striped together across all drives. This level is the most popular and can tolerate the loss of any one drive. RAID 6: This level extends the capabilities of RAID 5 by computing two sets of parity information. The dual parity distribution accommodates the failure of two drives. However, the performance of this level is slightly less than that of RAID 5. This implementation is not frequently used in commercial environments. RAID 10: This level is considered a multi-RAID level. It combines the characteristics of RAID 0 and RAID 1, which stripes data across mirrors. This level requires substantial storage capacity, but the benefits are excellent redundancy and overall performance. Redundancy can also be provided for tape media. This is known as redundant array of independent tapes (RAIT). A RAIT is created with the use of robotics mechanisms to automatically transfer tapes between storage and the drive mechanisms. RAIT utilizes striping without redundancy. Database shadowing is another online technique, where a database management system updates records in multiple locations. This technique updates an entire copy of the database at a remote location. Data backups involve the copying of data from one location to another. Usually, a backup involves copying data from the production system to removable media, such as high-density tapes. Data backups should include not only user data, but also system configuration information. The three principal methods of performing backups are incremental, differential, and full. The primary differences between incremental and differential backups involve the size of each backup and the method used to restore. Incremental backups copy data changes since the last full or incremental backup that occurred. Differential backups copy all cumulative changes since the last full backup. Differentials require more space than incremental backups. Restoring data from incremental backups requires the last full backup and all of the incremental backups performed. In contrast, restoring from a differential backup requires the last full backup and the latest differential. Incremental and differential backups are conducted in cycles, which are usually a week to a month in length. Full backups should copy not only data, but also the entire system, to facilitate a complete system restore from the backup media. 658
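The difference between incremental and differential restores can be made concrete with a short sketch. The example below is purely illustrative (the backup names and the weekly cycle are invented): it computes which backup sets must be retrieved for a full restore under each method, following the rules described above.

def restore_sets(scheme: str, days_since_full: int) -> list:
    """Backup sets needed to restore at the given day, per the rules above."""
    sets = ["full_backup"]
    if scheme == "incremental":
        # Last full backup plus every incremental taken since it.
        sets += [f"incremental_day{d}" for d in range(1, days_since_full + 1)]
    elif scheme == "differential":
        # Last full backup plus only the most recent differential.
        if days_since_full:
            sets.append(f"differential_day{days_since_full}")
    else:
        raise ValueError("unknown scheme")
    return sets

if __name__ == "__main__":
    print(restore_sets("incremental", 4))    # full plus four incrementals
    print(restore_sets("differential", 4))   # full plus the latest differential only

The output illustrates the trade-off stated above: differentials consume more backup space each night but shorten the restore chain, while incrementals are smaller to create but require every set since the last full backup to restore.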
Operations Security Backups should be stored at a secure off-site location. This provides assurance of a recovery in the event that the facility is destroyed in a catastrophe. The off-site location should be far enough away to preclude mutual destruction in the event of a catastrophe, but not so far away as to introduce difficulties in transporting the media or retrieving it for recovery purposes. Unfortunately, the answer to off-site storage is a difficult challenge in areas prone to natural catastrophes. Geographical areas prone to natural disasters such as forest fires, earthquakes, tornados, typhoons, or hurricanes make it difficult to decide on an appropriate off-site location. However, there are two alternatives to this problem. First, multiple backup copies could be created. One copy could be sent to the secure location, while the other is retained locally for ordinary recovery purposes. This does incur additional workload for the backup efforts. Another technique is to utilize online backup techniques, such as vaulting and snapshots, to provide the appropriate backup. Electronic vaulting is accomplished by backing up system data over a network. The backup location is usually at a separate geographical location known as the vault site. Vaulting can be used as a mirror or a backup mechanism using the standard incremental or differential backup cycle. Changes to the host system are sent to the vault server in real-time when the backup method is implemented as a mirror. If vaulting updates are recorded in real-time, then it will be necessary to perform regular backups at the off-site location to provide recovery services due to inadvertent or malicious alterations to user or system data. Vault servers can also be configured to act in a similar fashion as a backup device. As opposed to performing real-time updates, file changes can be transferred to the vault using an incremental or differential method. Off-line backups of the vault server may not be necessary if there is sufficient storage space for multiple backup cycles. Remote journaling is a technique used by database management systems to provide redundancy for their transactions. When a transaction is completed, the database management system duplicates the journal entry at a remote location. This provides for database recovery in the event that the database becomes corrupted. Software Applications and system software require appropriate management controls to ensure that the latest copy can be restored on the system. Although full backups should capture the latest version of the software on the system, it is still necessary to retain a library for integrity purposes. Two critical aspects of software that require tracking for continuity purposes are updates and configurations. Most software products require periodic updates due to flaws and improvements. Updates should be 659
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® tracked through the change control system by the software librarian to ensure that the organization has the most up-to-date copy stored in a secure location. This situation becomes especially important for in-house developed software, such as Web-based applications. Additionally, application-specific configuration information should also be kept by the librarian when changes occur. Tracking configuration information is necessary for system troubleshooting for integrity purposes. Hardware Hardware devices are composed of physical elements that are susceptible to failure due to environmental factors and age. Heat is perhaps the greatest environmental threat to IT hardware. Excessive heat can cause system components to behave erratically, stop working, or break. Likewise, wear and tear will also cause hardware components to malfunction and fail. Given the availability requirements of an organization, redundant and backup components help mitigate availability failures due to hardware issues. Redundant hardware components are also known as hot spares. For example, some systems have multiple power supplies running in parallel. When one power supply fails, the other is already operating and simply picks up the full load. However, not all computers can accommodate hot spares. In this case, organizations will keep a stock of replacement items, known as cold spares, on hand. Cold spares are essentially backup hardware components. An entire device can also act as a hot spare. In situations where communications availability is critical, organizations will install failover network devices. Failover equipment act as secondary devices that take over for the primary when it fails. Routers and firewalls are often implemented in this fashion when telecommunications are vital. Some organizations may not have sufficient resources to maintain redundant or backup hardware to accommodate a wide variety of contingencies. An alternative to this situation is to have an agreement or contract with a service provider where it can immediately take over operations in the event that the primary site is unavailable. This is known as a standby service. Communications IT systems rely heavily on well-connected networks to support their functionality. The loss of communications can quickly cripple a network of distributed resources. Communication links can make use of redundancy and backup capabilities. Redundant communications involve multiple lines or links between distributed resources. An example of redundancy could be the use of multiple T-1 lines by different providers. This setup can provide sufficient redundancy if one provider has an internal failure. Backup communications are provided through the use of different communication ven660
dors or media. For example, suppose an organization uses the local telecommunications company as the primary communications backbone for an office. Alternatively, a backup link is made available through the use of a broadband connection that provides limited communications availability in the event the primary fails. Some of the different types of communications links that should be explored for backup and redundancy purposes include:
1. Local phone company
2. Long-distance carriers
3. Competitive telecommunication carriers
4. Broadband through telephone lines
5. Broadband over cable modems
6. Wireless metropolitan area networks
7. Satellite links
Security practitioners should stay informed regarding emerging technologies and investigate their applicability for communications contingencies. Some of these newer technologies include WiMax, which provides broadband over wireless links, and broadband power lines (BPLs), which provide high-bandwidth local power lines. Facilities The room or building housing the system is an essential element in operations security. Facilities act as the outermost physical security control for the system. This outer shell provides a variety of services necessary for proper system operation. Any failures in the services provided by the facility will impact the operations of the system. Some of the services that require attention from the security practitioner include power, environment, and physical security. Obviously, computers require electricity to operate. Continuous and well-regulated power is essential to system operations. Interruptions in the electrical power impact a system’s continued availability. Two aspects regarding electrical power are line power quality and availability. Variations or fluctuations in electrical power can cause damage to IT systems. When power is too low, systems may behave erratically or even stop working. If the power level greatly exceeds the rated output, circuit breakers may trip, shutting off power, or may cause damage to the system. Additionally, various types of electrical equipment can introduce low-level noise on the power line, which can also affect sensitive IT equipment. These unpredictable variations can be controlled through power line regulators or filtering devices that can clean up the incoming power to remove noise and excessive fluctuations. Uninterruptible power supplies (UPSs) have the capability to clean power line signals. They can also supplement situations where power levels drop below an acceptable level by augmenting the incoming power or replacing it with their own power stored in the battery. 661
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® However, a UPS can only supply power for a limited amount of time and is insufficient to augment complete power losses over an extended period. Power failures are serious events affecting a system’s availability. Care must be taken by the security practitioner and the organization to ensure that all devices along the communication path are provided power in the event of an electrical power failure. Given the organization’s availability requirements, the extent of backup power must be assessed on a per item basis to ensure that sufficient power is available. Backup power is usually supplied through a generator powered by gasoline, diesel, or natural gas. Other environmental factors, such as lighting, heat, cooling, and physical security, must also be provided during a power outage. Otherwise, the system may become unavailable due to secondary or incidental issues related to the power outage. Environmental controls play an important part in ensuring continued system operations. Air temperature and humidity must be kept within appropriate limits to prevent damage to the equipment. As discussed earlier, excessive heat can cause equipment to fail. The room temperature should be kept low enough to ensure that equipment is being properly cooled, but not too cold, or people may not be able to work efficiently in the area. Likewise, humidity must also be kept at the proper levels. Dry air can cause the buildup and discharge of static electricity. Static electricity can develop a potential of hundreds or thousands of volts. Although a static discharge does not cause damage to humans, it can ruin sensitive equipment. Environments with a relative humidity below 20 percent are at risk of developing static electricity. High-humidity environments can cause water to form on equipment. Environments with greater than 60 percent relative humidity can cause condensation conditions on the equipment, depending on the room temperature. This condensation can cause corrosion and eventual component failure over a period. It is best to ensure that humidity in the computer room is between 35 and 60 percent at all times to avoid the development of static discharge and condensation. Building redundant or backup environmental control systems is a difficult and costly task. Indeed, there are other measures that can be used to temporarily mitigate a failed environmental control system. Assume that the air conditioning system for a facility fails. Some alternatives to support IT system availability while the environmental system is being repaired can include the use of portable fans and temporary air conditioning equipment. Turning off lights and powering down unnecessary equipment can go a long way to reduce heat within a computer room. When possible, an open window or door can also help when done in compliance with local safety and fire codes. Above all else, it is important to have a contingency plan when backup or redundant environmental controls do not exist. 662
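A simple way to act on the humidity guidance above is to compare sensor readings against the stated bounds and raise an alert when they drift. In this sketch the readings are hard-coded sample values and the temperature limit is an assumed figure chosen for illustration, not a recommendation from the text; actual limits depend on the equipment and the manufacturer's specifications.

HUMIDITY_LOW = 35.0        # percent relative humidity, per the guidance above
HUMIDITY_HIGH = 60.0
TEMP_HIGH_C = 24.0         # assumed example limit; real limits vary by equipment

def check_environment(temp_c: float, humidity_pct: float) -> list:
    """Return a list of human-readable alerts for out-of-range readings."""
    alerts = []
    if humidity_pct < HUMIDITY_LOW:
        alerts.append(f"humidity {humidity_pct}% is low; static discharge risk")
    elif humidity_pct > HUMIDITY_HIGH:
        alerts.append(f"humidity {humidity_pct}% is high; condensation risk")
    if temp_c > TEMP_HIGH_C:
        alerts.append(f"temperature {temp_c} C exceeds the assumed limit")
    return alerts

if __name__ == "__main__":
    for alert in check_environment(temp_c=27.5, humidity_pct=18.0):
        print(alert)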
Operations Security Physical security controls for a facility include access control systems, facility intrusion detection systems, and fire detection/prevention systems. Malfunctioning or inoperative access control and intrusion detection systems can expose the IT system to unauthorized access. Backups to an access control system include locks, security containers, and sign-in rosters. A mobile guard force or regular security sweeps by the security practitioner are effective compensating controls that are useful for temporarily augmenting an inoperative intrusion detection system. Although the aforementioned methods are discussed as backups, they also serve as active primary defenses that should be part of a comprehensive physical security program for the organization. Physical security issues are covered in detail in the chapter on physical (environmental) security, Domain 4. Sometimes it becomes necessary for an organization to establish a secondary site for continued operations. Two classes of backup facilities that are used when the primary is unavailable are hot and cold sites. Hot sites are facilities that duplicate the functionality of the primary site. This type of site contains all of the necessary equipment necessary to take over operations for the primary in a very short period. Hot sites are activated when the primary is expected to be unavailable for an extended period. A cold site is a facility that has the capability of supporting the primary, but requires the installation of the appropriate hardware systems to continue operations. This type of site has the capacity to continue operations, but requires more time and effort to resume the duties of the primary. Hot and cold sites are sometimes maintained as contingency locations by a service provider. More information on backup sites is available in the chapter on business continuity and disaster recovery planning, Domain 6. A final consideration regarding backups and redundancy involves periodic maintenance and testing. Equipment of all types requires periodic maintenance. Schedules should be in place and followed that provide inspection and servicing of equipment and the facility. Part of the periodic maintenance schedule should include tests to ensure that the devices and procedures used for backups and redundancy are working properly. Periodic test procedures and their results should be documented as well. Problem Management Problems arise during the course of system operations. The security practitioner must understand the processes used to handle events that affect the continuity of operations. Dealing with problems involves responding to the event, identifying the root cause, and implementing a corrective control. This process of responding to events forms the basis of problem management. The backbone document to the continuity of operations is the contingency plan. Security practitioners assist users, management, and adminis663
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® ters in the development of the plan. This document discusses the various types of anticipated threats and how to deal with them. The plan should be of sufficient detail that anyone within the facility would be capable of executing it with minimal assistance. The names and number of affected parties should be included in the plan to expedite the recovery process. Some of the action items that should be addressed include the following. System Component Failure. Procedures should include the process of implementing redundant components or activating backup items. The location of backup items should be noted in the plan. The plan should point to any detailed procedures developed addressing the steps to implement for activating the redundant item. Power Failure. Dealing with a power failure requires the security practitioner to be chronometrically cognizant, or in other words, to be fully aware of the passage of time. Backup power is provided primarily by batteries or fuel. Each of these resources requires replenishment and is likely to exist in limited quantities. Conservation of power is crucial to ensuring the availability of the system until the facility power is restored. Migration of operations to a backup site may be required when power is expected to be unavailable for a long duration. The security practitioner should assist management with the identification of alternate processing locations when necessary. Telecommunications Failure. Communication lines should be in place as an alternative prior to the event. It is likely that backup communication links might have significantly smaller bandwidth, which could require an appropriate throttling of network traffic to ensure that necessary users or customers are provided sufficient bandwidth. There exist quality of service (QoS) devices and software that can assist the security practitioner in ensuring that priority traffic receives sufficient bandwidth. Physical Break-In. Organizations must show due diligence for protecting their systems by notifying authorities when a crime is committed. Involving the local authorities can cause serious disruption to system operations. The authorities may need to review a system forensically to determine if logical as well as physical trespass occurred. A forensic review of a device requires that it be taken offline while the persistent storage is imaged. Upper management will likely need to be involved in a criminal investigation. However, it is important to note that it is the responsibility and decision of upper management to contact the authorities. Security practitioners should refrain from contacting the authorities unless they are authorized to do so. Tampering. System tampering is indicative of an insider attack on a system. An internal investigation should be conducted to determine the pos664
Operations Security sibility of criminal activity. Care should be given not to disturb the evidence; otherwise, forensic clues may become lost. Authorities should be notified if criminal intent is suspected. Once again, the notification is primarily the responsibility of management. Production Delay. Development and integration activities can encounter problems that cause time delays for system deployment. This can occur due to troubleshooting, failures, or delays in delivery schedules. Likewise, security events can result in the unavailability of system services. These delays can cause production delays for the organization and incur additional costs. The security practitioner should be cognizant of these potential costs and propose alternatives to reduce the delay times while the problem is resolved. Input/Output Errors. Users accidentally delete a needed data file more times than administrators care to acknowledge. Likewise, users may also input invalid or erroneous data into a system, which causes an incorrect output. Errors and omissions on the part of users are known as input errors. Recovery from input errors may require either system rollbacks, in the case of a database management system, or recovery of affected files through backup media. Users may experience output errors when data or information appears garbled or is nonexistent. Output errors frequently indicate a hardware or software problem within the system. However, it can also indicate malicious modification to system data.
There are various types of attacks that occur in greater frequency and seriousness as the Internet grows. These attacks occur due to various types of tools or attack programs utilized by hackers, script kiddies, illegitimate businesses, and criminals. More details on attacks are found in the chapter on application security, Domain 8. Some of the types of attacks that may impact operations follow. Denial of Service. A denial-of-service (DoS) attack has two primary root causes. The first case occurs when the bandwidth of a system is flooded with unwanted or unauthorized network traffic, which utilizes most, if not all, of the available bandwidth. This case is usually caused by malicious software or hacker activity. The second case occurs when an input to a device or application is incorrect and causes the affected item to stop working. This case can occur due to flaws in software that cause the process, service, or device to stop working. Network denial of service is identified through network monitoring with either management tools or an IDS. A DoS from outside the network boundary may require the assistance of the Internet service provider or changing firewall settings to recover. The security practitioner will need to collect as much information as possible to identify the type of attack, target, and originating IP addresses. The Internet service provider will use this information for its response. 665
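One of the first triage steps mentioned above, identifying the originating IP addresses of a suspected flood, can be done with a very small script. This sketch assumes a hypothetical text log in which the source address is the first field on each line; real firewall or flow logs will differ in format, and the sample addresses come from documentation ranges.

from collections import Counter

def top_sources(log_lines, n=10):
    """Count occurrences of the first field (assumed to be the source IP) per line."""
    counts = Counter()
    for line in log_lines:
        fields = line.split()
        if fields:
            counts[fields[0]] += 1
    return counts.most_common(n)

if __name__ == "__main__":
    sample = [
        "203.0.113.7 GET /index.html",
        "203.0.113.7 GET /index.html",
        "198.51.100.20 GET /login",
        "203.0.113.7 GET /index.html",
    ]
    for ip, hits in top_sources(sample, n=3):
        print(ip, hits)

A summary of this kind, built from actual firewall or flow records, is the sort of information the Internet service provider would need in order to assist with filtering.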
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® Devices, applications, and services affected by DoS due to erroneous input is indicative of a flaw in the underlying software. This will likely be fixed with a security patch, but may require other mitigations to provide for the required availability. Intrusion. An intrusion is the digital equivalent of a physical break-in. Care must be given when responding to an intrusion. A particularly malicious intruder may install malicious code that destroys a system if improperly removed. It is imperative that active intrusions be handled delicately. Suspected intrusions should be handled in a manner similar to that of a physical break-in. The primary difference is that the recovery process should prevent further or reoccurring intrusions. The identification of an intrusion will likely necessitate the involvement of law enforcement officials. Malware. Malicious code includes viruses, worms, and Trojan horses. Antivirus software is the primary tool used to detect and remove malicious code. However, as history has shown, new malware regularly emerges that is not detected by antivirus tools. Security practitioners should monitor news traffic regarding new threats and how to deal with them. Attention to IDS, audit, and system log information is necessary to detect an infection. Malware is described in detail in the chapter on applications security, Domain 8. Spyware. This class of malware is a recent threat that attempts to track the activities of local users. In many cases, the information tracked is as innocuous as Web site visits. However, this does impact an individual’s privacy and possibly the proprietary nature of an organization’s activities. Furthermore, spyware has been known to have flaws that cause workstations to crash, resulting in a denial of service for end users. Some antivirus vendors are now beginning to develop signatures for known spyware to address the threat. Likewise, there are numerous antispyware tools available to detect and remove known spyware. Recently, the problem with spyware has taken another turn, where criminals are now using these devices to capture sensitive user information, such as passwords, through keystroke logging and even to provide a backdoor into the user’s system. SPAM. Unsolicited e-mail messages sent in large volumes are known as SPAM. Recent research indicates that the majority of network traffic on the Internet is known to be related to SPAM. This represents a significant cost to service providers for the misused network traffic. It also is a substantial cost for organizations when employees must sift though numerous SPAM messages to find legitimate e-mail. Dealing with SPAM in the short term requires the use of filters to identify and remove SPAM from user e-mails on servers and through the e-mail client. Spammers attempt to circumvent filters by forging e-mail addresses and inserting specially crafted messages into the body of the e-mail. Some long-term initiatives to deal with SPAM 666
Operations Security include Sender ID, which is being handled by the Internet Engineering Task Force (IETF), and DomainKeys Identified Mail, which was jointly developed by Yahoo and Cisco. Both of these technologies provide a method of proving an e-mail message is authentic. Security practitioners will need to pay attention to developments and recommend changes that will best meet the needs of the organization to fight this problem. Phishing. A recent trend in attacks by cyber criminals involves sending individuals a bogus e-mail claiming to be from a legitimate retailer or banking institution. The e-mail directs the user to a Web site that is nearly identical in appearance as the actual one. The unsuspecting user is enticed to enter her user identification and password or other personal information that leads to identity theft. Phishing is essentially a new method of social engineering with devastating results. Security practitioners should educate system users and management about this type of threat. Likewise, customers should also be sent information regarding this form of attack and told that the organization will never ask the user to verify his password, etc., in this manner.
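Production SPAM and phishing filters are far more sophisticated than anything shown here, but a toy scoring function illustrates the basic idea of content filtering described above: assign points for suspicious characteristics and quarantine messages that exceed a threshold. Every phrase, weight, and threshold below is an arbitrary example, not a recommended rule set.

SUSPICIOUS_PHRASES = {                  # arbitrary example indicators and weights
    "verify your password": 5,
    "account will be suspended": 4,
    "click here immediately": 3,
    "winner": 2,
}
QUARANTINE_THRESHOLD = 5

def score_message(subject: str, body: str) -> int:
    text = f"{subject} {body}".lower()
    return sum(weight for phrase, weight in SUSPICIOUS_PHRASES.items() if phrase in text)

def should_quarantine(subject: str, body: str) -> bool:
    return score_message(subject, body) >= QUARANTINE_THRESHOLD

if __name__ == "__main__":
    print(should_quarantine("Action required",
                            "Please verify your password or your account will be suspended."))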
An important tool in the security practitioner's arsenal for dealing with malware, spyware, and phishing is user education. These types of attacks are activated through downloads or by clicking on a hyperlink. Informing users of the threats and demonstrating the potential damage may go a long way in reducing the number of incidents an organization experiences.
System Recovery
In the event of a disruption in operations, it is necessary to perform a system recovery. A system recovery is usually conducted when a service fails or the operating system hangs. Recovery methods include:
• Application restart: Terminate the application or service and attempt to restart it.
• Warm reboot: Perform a graceful shutdown of the system without powering it down, and then reinitialize the operating system. Graceful shutdowns allow running services and applications to save their states and information.
• Cold reboot: This involves a complete shutdown, including powering off the system. A graceful shutdown of the system is performed with this method.
• Emergency restart: This method involves terminating the power and then repowering the system. It is normally used only when the entire system is unresponsive. Data in nonpersistent memory is lost. Systems with advanced database management systems may be able to recover from incomplete transactions as long as the transaction log is not corrupted.
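The recovery methods above form an escalation path: try the least disruptive option first and move to the next only if it fails. The sketch below illustrates that ordering for a single service; the health check, port, and placeholder actions are assumptions for the example, standing in for whatever commands and monitoring the platform actually provides.

import socket

def service_healthy(host: str = "127.0.0.1", port: int = 8080, timeout: float = 2.0) -> bool:
    """Assumed health check: the service accepts TCP connections."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def restart_service() -> None:
    print("application restart attempted")       # placeholder for the real restart command

def request_warm_reboot() -> None:
    print("graceful (warm) reboot requested")    # placeholder; escalated to operations staff

def recover() -> None:
    if service_healthy():
        return
    restart_service()                            # least disruptive step first
    if not service_healthy():
        request_warm_reboot()                    # escalate only if the restart did not help

if __name__ == "__main__":
    recover()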
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® Intrusion Detection System An intrusion detection system (IDS) is used to identify suspected or actual security incidents. An IDS is a real-time or near-real-time tool that examines system activity against a set of indicators or signatures that indicate the likelihood of an ongoing attack. Three types of IDS include host based, network based, and misuse detection. A host-based IDS looks for attacks against a specific device, such as a workstation or server. It does not monitor passing network traffic for signs of an attack. A host-based IDS typically looks for unauthorized configuration changes, processes, and software. In contrast, a network-based IDS identifies suspicious traffic on the monitored media. Network-based IDSs use large databases of attack signatures to identify malicious traffic. Typically, a network IDS is also configurable to identify network activity that is not normal, such as Web traffic over an unauthorized port or network segment. Misuse detection systems represent a hybrid model between host-based and network-based IDSs that look for violations of policy. Misuse detection systems analyze network traffic and file access to detect unauthorized sharing or access to information. A common problem with an IDS is false-positives. A false-positive occurs when the IDS identifies an event as a possible or actual attack, when it is, in fact, not an incident. False-positives represent a substantial amount of “noise” that must be ignored by the security practitioner. False-positives occur because the threshold of an event is set too low or the system does not have the capability of filtering out the type of event. The danger with filtering or thresholds is that an actual event may become excluded, and therefore missed by the security practitioner. An IDS requires frequent to constant attention. An IDS requires the response of a human who is knowledgeable enough with the system and types of normal activity to make an educated judgment about the relevance and significance of the event. Alerts need to be investigated to determine if they represent an actual event. Some IDSs generate enormous log files that are required to be backed up or cleared as appropriate. Vulnerability Scanning Sun Tzu once wrote, “If you know the enemy and know thyself, then you need not fear the result of a hundred battles.” Two principal factors needed for an organization to “know thyself” involve configuration management and vulnerability scanning. Configuration management provides an organization with knowledge about all of its parts, while vulnerability scanning identifies the weakness present within the parts. Knowing what composes the system is the first critical step in understanding what is needed to defend it. Identifying vulnerabilities of a known system provides the security practitioner with the necessary knowledge to defend against the onslaught of all types of attackers. 668
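At its core, the signature matching performed by host- and network-based IDSs is pattern matching against events, and the threshold trade-off raised above (set it too low and analysts drown in false positives; set it too high and real events may be missed) can be seen even in a toy example. The signatures, threshold, and log lines below are invented for illustration and are nothing like a production rule set.

import re

SIGNATURES = {                      # invented example patterns
    "possible SQL injection": re.compile(r"union\s+select", re.IGNORECASE),
    "directory traversal": re.compile(r"\.\./\.\."),
    "repeated login failure": re.compile(r"failed password", re.IGNORECASE),
}
ALERT_THRESHOLD = 3                 # only alert when a signature fires this many times

def scan(log_lines):
    hits = {name: 0 for name in SIGNATURES}
    for line in log_lines:
        for name, pattern in SIGNATURES.items():
            if pattern.search(line):
                hits[name] += 1
    return [name for name, count in hits.items() if count >= ALERT_THRESHOLD]

if __name__ == "__main__":
    sample = ["failed password for root"] * 4 + ["GET /index.html?id=1 UNION SELECT"]
    print(scan(sample))   # the single injection attempt falls below the threshold and is missed

The last line of output shows the danger described above: the threshold suppresses noise, but it also suppressed a genuine attack indicator that occurred only once.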
Operations Security Vulnerabilities arise from flaws, misconfigurations (also known as weaknesses), and policy failures. Flaws result from product design imperfections. The most common type of flaw in software is the buffer overflow. Flaws are usually fixed with a security patch, new code, or a hardware change. Misconfigurations represent implementation errors that expose a system to attack. Examples of misconfigurations include weak access control lists, open ports, and unnecessary services. Policy failures occur when individuals fail to follow or implement security as required. This includes weak passwords, unauthorized network devices, and unapproved applications. Vulnerability scanning is conducted against network, host system, and application resources. Each type of scan is used to detect vulnerabilities specific to the type of scan. Network scans look for vulnerabilities on the network. Flaws in devices are found with scanning tools designed to perform tests that simulate an attack. Misconfigured network settings such as unauthorized services can be found during network scans. Policy violations, which include unauthorized devices, workstations, and servers, are also found with a comprehensive network scanning tool. Host-based scans are conducted at the system console or through the use of agents on servers and workstations throughout the network. Host-based scans are critical for identifying missing security updates on servers and workstations. This type of scan can also identify when local policy or security configurations, such as audit log settings, are not implemented correctly. A good host-based scanner can also identify unauthorized software or services that might indicate a compromised system or a blatant violation of configuration management within the organization. The last type of vulnerability scanning involves specialized application security scanners. These tools check for patch levels and implementations of applications. For instance, some application scanning tools can identify vulnerabilities in Web-based applications. Other tools are designed to work with large applications, such as a database management system, to identify default settings or improper rights for sensitive tables. Business Continuity Planning Contingency plans offer short-term recovery techniques for continued operations. Disaster recovery plans establish the methods used to restore normal system operations in the event of a disaster. Both types of plans are used to support the organization’s business continuity plan (BCP). More detailed discussion regarding this type of planning is available in the chapter on business continuity and disaster recover planning, Domain 6. Change Control Management Systems experience frequent changes. Software packages are added, removed, or modified. New hardware is introduced, while legacy devices 669
are replaced. Updates due to flaws in software are regular business activities for system managers. The rapid advancement of technology, coupled with regular discovery of vulnerabilities, requires proper change control management to maintain the necessary integrity of the system. Change control management is embodied in policies, procedures, and operational practices.
Configuration Management
Organizational hardware and software require proper tracking, implementation testing, approvals, and distribution methods. Configuration management is a process of identifying and documenting hardware components, software, and the associated settings. The security practitioner plays an important role in configuration management through the identification of misconfigured systems. Detailed hardware inventories are necessary for recovery and integrity purposes. Having an inventory of each workstation, server, and networking device is necessary for replacement purposes in the event of facility destruction. All devices and systems connected to the network should be in the hardware list. At a minimum, the security practitioner should include in the hardware list the following information about each device and system:
1. Make
2. Model
3. MAC address
4. Serial number
5. Operating system or firmware version
6. Location
7. BIOS and other hardware-related passwords
8. Assigned IP address if applicable
9. Organizational property management label or bar code
The inventory is also helpful for integrity purposes when attempting to validate systems and devices on the network. Knowing the hardware versions of network components is valuable from two perspectives. First, the security practitioner will be able to quickly find and mitigate vulnerabilities related to the hardware type and version. Most hardware vulnerabilities are associated with a particular brand and model of hardware. Knowing the type of hardware and its location within the network can substantially reduce the effort necessary to identify the affected devices. Additionally, the list is invaluable when performing a network scan to discover unauthorized devices connected to the network. A new device appearing on a previously documented network segment may indicate an unauthorized connection to the network. 670
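Comparing scan results against the documented inventory is essentially a set operation. In this sketch the inventory and the scan results are hard-coded lists of example MAC addresses; in practice they would come from the configuration management records and from the scanning tool's output, whatever format that happens to be.

DOCUMENTED_INVENTORY = {            # MAC addresses from the configuration records (examples)
    "00:1a:2b:3c:4d:01",
    "00:1a:2b:3c:4d:02",
    "00:1a:2b:3c:4d:03",
}

def review_scan(discovered_macs) -> dict:
    discovered = set(discovered_macs)
    return {
        "unauthorized": sorted(discovered - DOCUMENTED_INVENTORY),   # seen on the wire, not on record
        "missing": sorted(DOCUMENTED_INVENTORY - discovered),        # on record, not seen in the scan
    }

if __name__ == "__main__":
    scan_results = ["00:1a:2b:3c:4d:01", "00:1a:2b:3c:4d:03", "de:ad:be:ef:00:99"]
    print(review_scan(scan_results))

Anything in the "unauthorized" list warrants investigation as a possible rogue device, while items in the "missing" list may indicate stale inventory records or equipment that has been removed without authorization.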
Operations Security A configuration list for each device should also be maintained. Devices such as firewalls, routers, and switches can have hundreds or thousands of configuration possibilities. It is necessary to properly record and track the changes to these configurations to provide assurance for network integrity and availability. These configurations should also be periodically checked to make sure that unauthorized changes have not occurred. Operating systems and applications also require configuration management. Organizations should have configuration guides and standards for each operating system and application implementation. System and application configuration should be standardized to the greatest extent possible to reduce the number of issues that may be encountered during integration testing. Software configurations and their changes should be documented and tracked with the assistance of the security practitioner. It is possible that server and workstation configuration guides will change frequently due to changes in the software baseline. Production Software Original copies and installed versions of system and application software require appropriate protection and management for information assurance purposes. Weak controls on software can subject a system to compromise through the introduction of backdoors and malicious code such as Trojan horses, viruses, and worms. Protecting the integrity of system code is necessary to defend against these types of threats. The process of handling software from original media through installation, use, modification, and removal should follow the concepts of least privilege and separation of duties. The types of controls necessary include access control, change control management, and library maintenance. Software Access Control Installed software should have appropriate access controls in place to prevent unauthorized access or modification. Ordinary users should have read and execute permissions for executable content and other system binaries and libraries. Setting this level of access control can prevent accidental or unauthorized modification to system binaries. For example, some viruses infect executable content through modification of the binary. If the user does not have write, modify, or delete permissions to system binaries, then the virus will be unable to affect these files when executing in the context of an ordinary user. Ordinary users should not be given read access to application binaries not needed for their duties. For instance, ordinary users frequently do not need read access to commonly installed system utilities that are designed for administration. Denying users the ability to read these types of application files follows the concept of least privilege and can furthermore protect 671
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® the system in the event that a flaw is discovered that is exploitable by one of the common administrative tools. Change Control Process Maintaining system integrity is accomplished through the process of change control management. A well-defined process implements structured and controlled changes necessary to support system integrity, and accountability for changes. Decisions to implement changes should be made by a committee of representatives from various groups within the organization, such as ordinary users, security, system operations, and upper-level management. Each group provides a unique perspective regarding the need to implement a proposed change. Users have a general idea of how the system is used in the field. Security can provide input regarding the possible risks associated with a proposed change. System operations can identify the challenges associated with the deployment and maintenance of the change. Management provides final approval or rejection of the change based on budget and strategic directions of the organization. Actions of the committee should be documented for historical and accountability purposes. The change management structure should be codified as an organization policy. Procedures for the operational aspects of the change management process should also be created. Change management policies and procedures are forms of directive controls. The following subsections outline a recommended structure for a change management process. Requests. Proposed changes should be formally presented to the committee in writing. The request should include a detailed justification in the form of a business case argument for the change, focusing on the benefits of implementation and costs of not implementing. Impact Assessment. Members of the committee should determine the impacts to operations regarding the decision to implement or reject the change. Approval/Disapproval. Requests should be answered officially regarding their acceptance or rejection. Build and Test. Subsequent approvals are provided to operations support for test and integration development. The necessary software and hardware should be tested in a nonproduction environment. All configuration changes associated with a deployment must be fully tested and documented. The security team should be invited to perform a final review of the proposed change within the test environment to ensure that no vulnerabilities are introduced into the production system. Change requests involving the removal of a software or system component require a similar 672
Operations Security approach. The item should be removed from the test environment and have a determination made regarding any negative impacts. Notification. System users are notified of the proposed change and the schedule of deployment. Implementation. The change is deployed incrementally, when possible, and monitored for issues during the process. Validation. The change is validated by the operations staff to ensure that the intended machines received the deployment package. The security staff performs a security scan or review of the affected machines to ensure that new vulnerabilities are not introduced. Changes should be included in the problem tracking system until operations has ensured that no problems have been introduced. Documentation. The outcome of the system change, to include system modifications and lessons learned, should be recorded in the appropriate records.
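The stages above amount to a simple state machine, and even a small tracking script can enforce that a change request moves through them in order. The sketch below is illustrative only; the stage names are a condensed paraphrase of the process described above, and real organizations would normally track this in a ticketing or change management system that also handles rejected requests.

WORKFLOW = [
    "requested", "impact_assessed", "approved",
    "built_and_tested", "users_notified", "implemented",
    "validated", "documented",
]

class ChangeRequest:
    def __init__(self, title: str, justification: str):
        self.title = title
        self.justification = justification
        self.stage = WORKFLOW[0]
        self.history = [self.stage]

    def advance(self) -> str:
        """Move to the next stage; raises if the change is already complete."""
        index = WORKFLOW.index(self.stage)
        if index == len(WORKFLOW) - 1:
            raise RuntimeError("change request already documented and closed")
        self.stage = WORKFLOW[index + 1]
        self.history.append(self.stage)
        return self.stage

if __name__ == "__main__":
    cr = ChangeRequest("Deploy vendor security patch", "Addresses a remotely exploitable flaw")
    while cr.stage != "documented":
        cr.advance()
    print(" -> ".join(cr.history))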
Library Maintenance Original copies of software media should be controlled through a software librarian. Establishing a software library where original copies of software are strictly controlled provides accountability and a form of integrity control. A librarian is necessary to catalog and securely store the original copies of test data, binaries, object files, and source code. Patch Management A critical part of change control management involves the deployment of security updates, which is also known as patch management. Flaws in vendor products are continuously discovered. The development and distribution of vendor patches cause a never-ending cycle of security updates to production systems. Managing security updates is not a trivial task for any organization. The patch management process must be formalized through documentation and receive management approval to provide the best possible strategy for implementing this type of system change. Vendors will frequently fix security problems in software or firmware through version updates. They may not specify the reason for the version change or what flaws were addressed in a given update. In this case, it is important to obtain vulnerability information from third-party services. Several sources of vulnerability and patch availability information can be obtained from resource centers such as: • sans.org: A security training organization that also publishes recent information regarding critical vulnerabilities. 673
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® • nvd.nist.gov: An online database of known vulnerabilities managed by the National Institute of Standards and Technology (NIST). • Uscert.gov: A government-sponsored clearinghouse of information security information. Security practitioners should monitor their networks for known vulnerabilities due to product flaws using automated and manual methods. There exists a variety of tools that test devices, systems, and applications for known flaws. The tools work by probing the network according to a database of known vulnerabilities. Care should be exercised when using these types of tools, as they might attempt to take advantage of a flaw, which could have a detrimental impact on the availability of the resource. Automated tools represent the most efficient method of locating legacy flaws. However, their databases must be kept up to date to locate newly discovered product vulnerabilities. Sometimes the tool databases may lag behind the discovery of new flaws or may be incapable of finding flaws in every device or application in the system. In this case, it is necessary to supplement automated testing with manual methods. Using a compiled list of known devices, systems, and applications, the security practitioner can determine if unpatched items exist on the network. The manual process of finding vulnerable items is just as critical as the automated methods. Once a discovery is made of a flawed item in the system, a determination should be made whether to patch the item. A risk-based decision is required to determine the necessity of patching the problem. The cost of performing the maintenance may be too great given the level of risk associated with the flaw. In this case, it may be wise to put off installing the patch until it becomes necessary to address the issue, when other circumstances dictate the act of taking the system component offline. However, it is important to also consider implementing compensating controls to mitigate the flaw. When the need arises to patch a product, a schedule for conducting the fix must be established. In some cases, there may be numerous patches requiring deployment. The order of patches can sometimes affect the outcome of an update. Consideration must be given for the order in which patches are deployed. Furthermore, the organization should prioritize updates according to the criticality they represent. Three variables to consider when evaluating the need and order of an update include the type of flaw, ease of exploit, and required locality. Flaw types can be categorized according to the level of access gained or damage inflicted on a system. The level of access is the associated privileges and their ability to impact the system. These include: • Provides administrator or root privilege for executing a process 674
• Allows execution of arbitrary code in the context of the executing process or user
• Denial of a network service
• Denial of service for local user
Flaws allowing root exploits are the most serious, as they allow the attacker to bypass most, if not all, security measures, subverting all of the information security properties. The next most serious threat is the execution of arbitrary code, which can cause loss of integrity as well as confidentiality very quickly. At this level, an attacker could steal information or insert a Trojan horse to further the exploit. Denial of a network service means the potential loss of availability to a large number of users. The least threatening scenario is the loss of availability for only a user at the local console.
The ease of exploit defines the level of computer skill necessary and the existence of specialized tools or exploit code to accomplish the attack. Easy exploits create more cause for concern regarding the vulnerability. Difficult-to-exploit vulnerabilities may indicate that the likelihood of an exploit occurring is sufficiently remote to temporarily accept the operational risk.
• Easy: Exploit tools exist for the attack, or it is too trivial in nature to exploit.
• Moderate: Requires moderate skill given the ease of the attack or the use of complicated exploit tools.
• Difficult: Requires a high level of technical skill with no exploit code available.
It is important to note that a vulnerability can quickly move from difficult to easy given the tenacity of hackers to develop and distribute tools that take advantage of the flaw. For this reason, the security practitioner should continue to monitor the organizations that publish exploit code for such vulnerabilities.
Required locality defines the physical or logical access necessary to exploit the flaw, for example:
• Network exploitable from any port or protocol
• Network exploitable through a particular port or protocol
• Network exploitable by authorized users only
• Local console or physical access required
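When many patches are outstanding, the three variables just listed — flaw type, ease of exploit, and required locality — are easier to compare if they are reduced to a rough ranking. The following Python sketch shows one way this might be done; the weight values and labels are illustrative assumptions only (an organization using a formal scheme such as CVSS would substitute those scores), and the locality factor is examined further in the paragraphs that follow.

    # Illustrative weights only; an organization would calibrate these to its own risk appetite.
    ACCESS_WEIGHT = {
        "root_privilege": 10,   # administrator or root privilege gained
        "arbitrary_code": 8,    # code runs in the context of a process or user
        "network_dos": 5,       # denial of a network service
        "local_dos": 2,         # denial of service for a local user only
    }

    EASE_WEIGHT = {"easy": 3, "moderate": 2, "difficult": 1}

    LOCALITY_WEIGHT = {
        "any_network": 4,       # exploitable from any port or protocol
        "specific_port": 3,     # exploitable through a particular port or protocol
        "authorized_users": 2,  # exploitable by authorized users only
        "local_console": 1,     # console or physical access required
    }


    def exposure_score(flaw_type: str, ease: str, locality: str) -> int:
        """Combine the three variables into a single comparable number (higher = patch sooner)."""
        return ACCESS_WEIGHT[flaw_type] * EASE_WEIGHT[ease] * LOCALITY_WEIGHT[locality]


    def prioritize(pending_patches):
        """Sort (patch_id, flaw_type, ease, locality) tuples so the riskiest flaws come first."""
        return sorted(pending_patches,
                      key=lambda p: exposure_score(p[1], p[2], p[3]),
                      reverse=True)


    # Example with hypothetical patch identifiers: a remotely exploitable root flaw with
    # public exploit code outranks a local denial of service that is hard to exploit.
    queue = prioritize([
        ("KB-101", "local_dos", "difficult", "local_console"),
        ("KB-204", "root_privilege", "easy", "any_network"),
    ])

As the text notes, the ease rating is volatile — publicly released exploit code can move a flaw from difficult to easy overnight — so any such ranking must be recomputed as new information is published.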
Attacks that are only exploitable with physical access require the attack to be launched by an insider. Although the insider threat is real, it is much more manageable for an organization than attacks that can be launched from any location on the Internet. However, organizations should not be lulled into a false sense of security just because an exploit requires access to the console. Consider that an attacker could circumvent the problem of remote access through the use of social engineering or other methods to cause unsuspecting users to run the exploit code on behalf of the attacker. Many users unwittingly execute e-mail attachments or visit malicious sites, providing an avenue for the attacker to take advantage of a known flaw.
The security practitioner should evaluate the flaw against the three variable categories to determine the level of exposure it represents. At that point, a recommendation should be given to management regarding the best course of action given the severity of the exposure, the cost of the fix, and the potential cost of any anticipated loss.
Upon determining the level of exposure, and obtaining management concurrence with the need to deploy the patch, testing of the fix should commence. The security practitioner should work with administrators to determine if the update causes any undesirable effects. For example, some updates can change system configurations or security settings. Some vendor patches have been known to reset access control lists on various sensitive files, creating a subsequent vulnerability. In this regard, patch testing should address not only the proper functioning of the system, but also the effect the update may have on the overall security state and policy of the system.
Once the update is tested and residual issues addressed, a schedule should be established for system deployment. It is important that users be notified of system updates prior to deployment. This way, if an unanticipated error occurs, it can be corrected more readily. When possible, it is best to schedule updates during periods of low productivity, such as evenings or weekends. Again, this is primarily done to accommodate unforeseen system crashes. Prior to deploying updates to production servers, make certain that a full system backup is conducted. In the regrettable event of a system crash due to the update, the server and data can then be recovered without significant loss. Additionally, if the update involved proprietary code, it will be necessary to provide a copy of the server or application image to the media librarian.
Deploy the update in stages, when possible, to accomplish a final validation of the update in the production environment. This may not always be possible given the network configuration, but it is desirable to limit unforeseen difficulties. After the deployment, it is necessary to confirm that the updates deployed to all of the appropriate machines. System management tools and vulnerability scanners can be used to automate this validation. Continue checking the network until every network component scheduled for the change has been validated. Redeploy updates as necessary until all systems receive the update.
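The confirmation step described above is normally carried out with the organization's systems-management or vulnerability-scanning tools, but the underlying reconciliation is straightforward. The sketch below illustrates only that reconciliation, assuming a hypothetical inventory file with a hostname column and a placeholder lookup function; it is not the interface of any particular management product.

    import csv


    def installed_patches(hostname: str) -> set:
        """Placeholder: in practice this would query the systems-management agent,
        a vulnerability scanner report, or the host's package database."""
        raise NotImplementedError("integrate with your management tooling here")


    def validate_deployment(inventory_csv: str, patch_id: str) -> list:
        """Compare the hosts scheduled for the change against what is actually installed."""
        missing = []
        with open(inventory_csv, newline="") as handle:
            for row in csv.DictReader(handle):   # assumes a 'hostname' column in the inventory
                host = row["hostname"]
                try:
                    if patch_id not in installed_patches(host):
                        missing.append(host)
                except Exception:
                    missing.append(host)         # unreachable or unqueryable hosts also need follow-up
        return missing


    # Hosts returned by validate_deployment() are redeployed and rechecked until the list
    # is empty, matching the "continue checking ... redeploy as necessary" guidance above.

The output of such a check also feeds the documentation step that follows: hosts that could not be patched, and the reason, become part of the record available to auditors.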
The last step in the patch management process is to document the changes. This provides a record of what was accomplished, the degree of success, and issues discovered. Documentation should also be produced when a decision is made not to patch a system. The reasons for the decision and the approving authority should be recorded. This serves the dual purpose of providing external auditors with evidence that the organization is practicing due diligence regarding system maintenance, and imparting a history of the decisions that make the system's configuration unique.

Summary
Operations security encompasses a body of knowledge that is focused on ensuring the availability of IT resources. All IT system users require appropriate rights and permissions for the systems used in conjunction with their assigned duties. Rights and privileges that can allow potential compromises should be controlled and assigned to those individuals designated as administrators or system operators. Privileged entities are responsible for maintaining system operations and security services. Account and audit management are two important functions that require attention by the security practitioner. Various control types broadly describe classes of methods used to protect the system. Control methods are used to counter threats and support the operational availability of the system. Media management requires special attention from the security practitioner due to the potential damage associated with the loss or misuse of sensitive information on the media. Continuity of operations forms the assurance base for system availability. Redundancy and backups of system components and services are used to increase the likelihood that a system will be available for users and customers. Problem management is initiated when threats arise against system operations. Change control management is necessary to provide continued operations with an acceptable level of risk. Patch management is a critical part of change control management that must be conducted on a regular basis due to the continuous emergence of flaws in vendor products.

References
Agrawal, R. and Dewitt, D.J. (1985). Integrated concurrency control and recovery mechanisms: design and performance evaluation. ACM Transactions on Database Systems, 10, 529–564.
Bertino, E., Samarati, P., and Jajodia, S. (1993). High assurance discretionary access control for object bases. In Proceedings of the 1st ACM Conference on Computer and Communications Security, pp. 140–150.
Bishop, M. (2003). Computer Security: Art and Science. Boston: Pearson Education.
Chen, P.M., Lee, E.K., Gibson, G.A., Katz, R.H., and Patterson, D.A. (1994). RAID: high-performance, reliable secondary storage. ACM Computing Surveys, 26, 145–185.
Committee on National Security Systems. (2003). National Information Assurance Glossary, Instruction 4009.
Department of Defense. (1995). National Industrial Security Program Operating Manual, DoD 5200.22-M.
Ferraiolo, D.F., Kuhn, D.R., and Chandramouli, R. (2003). Role-Based Access Control. Norwood, MA: Artech House.
Gutman, P. (1996). Secure deletion of data from magnetic and solid state memory. In Proceedings of the Sixth USENIX Security Symposium, pp. 77–90.
Hansche, S., Berti, J., and Hare, C. (2004). Official (ISC)2 Guide to the CISSP Exam. Boca Raton, FL: Auerbach.
Hutt, A.E., Bosworth, S., and Hoyt, D.B. (1995). Computer Security Handbook, 3rd ed. New York: John Wiley & Sons.
Johnson, T. and Prabhakar, S. (1996). Tape group parity protection. In Proceedings of the 16th IEEE Symposium on Mass Storage Systems, pp. 72–79.
Landwehr, C.E. Formal models for computer security. Computing Surveys, 13, 247–278.
Majstor, F. (2005). WLAN security update. In Handbook of Information Security Management, 5th ed., Vol. 2, M. Krause and H. Tipton (Eds.). Boca Raton, FL: Auerbach, pp. 379–392.
McGhie, L. (2005). Security patch management 101: it just makes good sense! In Handbook of Information Security Management, 5th ed., Vol. 2, M. Krause and H. Tipton (Eds.). Boca Raton, FL: Auerbach, pp. 405–409.
Nicastro, F. (2005). Curing the Patch Management Headache. New York: Auerbach Publications.
Swanson, M., Grance, T., Hash, J., Pope, L., Thomas, R., and Whol, A. (2001). Contingency Planning Guide for Information Technology Systems, Special Publication 800-34. National Institute of Standards and Technology.
Tipton, H.F. and Krause, M. (2003–2005). Information Security Management Handbook, 5th ed. New York: Auerbach Publications.
Sample Questions
1. Which of the following permissions should not be assigned to system operators?
   a. Volume mounting
   b. Changing the system time
   c. Controlling job flow
   d. Monitoring execution of the system
2. Which type of network component typically lacks sufficient accountability controls?
   a. Workstations
   b. Servers
   c. Switches
   d. Database management systems
3. The correlation of system time among network components is important for what purpose?
   a. Availability
   b. Network connectivity
   c. Backups
   d. Audit log review
4. Which type of access control system would use security labels?
   a. Mandatory access control
   b. Discretionary access control
   c. Role-based access control
   d. Simplex access control
5. Individuals are granted clearance according to their:
   a. Duties assigned
   b. Trustworthiness
   c. Both a and b
   d. Neither a nor b
6. Which group characteristic or practice should be avoided?
   a. Account groupings based on duties
   b. Group accounts
   c. Distribution of privileges to members of the group
   d. Assigning an account to multiple groups
7. Which of the following resources does not impact audit log management?
   a. Memory
   b. Bandwidth
   c. CPU time
   d. Storage space
8. Which type of users should be allowed to use system accounts?
   a. Ordinary users
   b. Security administrators
   c. System administrators
   d. None of the above
9. Wireless network traffic is best secured with which of the following protocols?
   a. Wireless Encryption Protocol (WEP)
   b. Wired Equivalent Privacy (WEP)
   c. Wi-Fi Protected Access (WPA)
   d. Wireless Protected Access (WPA)
10. Original copies of software should reside with:
   a. Media librarian
   b. Software librarian
   c. Security administrator
   d. System administrator
11. All of the following are control types except:
   a. Detective
   b. Preventative
   c. Recovery
   d. Configuration
12. Compensating controls are used:
   a. To detect errors in the system
   b. When an existing control is insufficient to provide the required access
   c. To augment a contingency plan
   d. As a deterrent control
13. Need-to-know enforcement is most easily implemented using:
   a. Mandatory access control
   b. Discretionary access control
   c. Role-based access control
   d. None of the above
14. What measurement unit is used to describe the amount of energy necessary to reduce a magnetic field to zero?
   a. Reduction
   b. Remanence
   c. Coercivity
   d. Gauss
15. Which object reuse method is best used for a CD-ROM containing sensitive information?
   a. Degauss
   b. Pulverize
   c. Overwrite software
   d. None of the above
16. Backups and archives:
   a. Perform the exact same function
   b. Provide redundancy capabilities
   c. Are only necessary in high threat areas
   d. Serve different purposes
17. Redundant components are characterized by all of the following except:
   a. Hardware only
   b. Hot spares
   c. Online
   d. Duplicative
18. Which RAID level provides data mirroring?
   a. 0
   b. 1
   c. 3
   d. 5
19. Relative humidity levels in the IT operations center should be less than:
   a. 20 percent
   b. 35 percent
   c. 50 percent
   d. 60 percent
20. Who is ultimately responsible for notifying authorities of a data or system theft?
   a. Users
   b. Security administrator
   c. System administrator
   d. Management
21. Phishing is essentially another form of:
   a. Denial of service
   b. Social engineering
   c. Malware
   d. Spyware
22. Intrusion detection systems are used to detect all of the following except:
   a. Physical break-ins
   b. System misuse
   c. Unauthorized changes to system files
   d. SPAM
23. Which of the following does not give rise to a vulnerability?
   a. Hackers
   b. Flaws
   c. Policy failures
   d. Weaknesses
24. Configuration management involves:
   a. Identifying weaknesses
   b. Documenting system settings
   c. Vulnerability scanning
   d. Periodic maintenance
25. Patch management is a part of:
   a. Contingency planning
   b. Change control management
   c. Business continuity planning
   d. System update management
Domain 10
Legal, Regulations, Compliance and Investigations
Marcus K. Rogers, Ph.D., CISSP
Introduction The current chapter covers the domain of legal, regulations, compliance and investigation — formerly known as the law, investigation, and ethics domain. The legal, regulations, compliance and investigations domain addresses general computer crime legislation and regulations, the investigative measures and techniques that can be used to determine if an incident has occurred, and the gathering, analysis, and management of evidence if it exists. The focus is on concepts and international generally accepted methods, processes, and procedures. It is important to highlight the international focus at the very beginning. This chapter will avoid indepth discussions of country/regional specific laws, legislation, and regulations. Although some regional examples are presented to clarify certain discussion points, these will be limited to the emphasis of principles common across most, if not all, jurisdictions. The chapter is geared toward the conceptual issues and concerns and is not intended as a deep technical discussion on the domain. This conceptual level of depth is in keeping with the need to proverbially walk before we run. Without a solid understanding of the concepts and issues, any deep technical discussions would be problematic and superficial. A secondary reason for the choice of depth is directly related to the sheer size of this topic; it is not unrealistic to find entire books devoted to each of the sections this chapter will attempt to address; thus, only a high-level examination is possible. Having fully qualified and constrained the scope of this chapter, it is time for me to delve into what exactly will be covered and what the reader can expect to glean from the pages contained herein. The chapter has been logically broken down into three broad categories, each with several subsections. The first major section sets the stage for subsequent sections and 683
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® deals with major legal systems of the world. The intention is not to turn readers into international law experts, but to introduce the context and backdrop for the remainder of the chapter. Under the major legal systems we will examine, at a high level, principles of common law; civil or code law; and customary, religious, and mixed legal systems. Similarities and differences between these systems that are important for information security professionals will be briefly introduced. The second section deals specifically with the law as it relates to information systems. The need for awareness of legislative and regulatory compliance is examined; this includes general information system legislative and regulatory principles (e.g., protection of property, intellectual property; protection of persons, privacy; and licensing issues). We then move to the subtopic of cybercrime: what is it, who is doing it, what impact it has on the information systems community and society in general, and, finally, issues related to international harmonization of cybercrime laws and prosecution (e.g., jurisdiction, legislation). The third section focuses on detection and investigation of information system-related events and looks at incident response from policy requirements and developing a response capacity to proper evidence management and handling procedures. This section goes into more of the investigative aspects and examines cyber forensics (both network and computer forensics). This section briefly discusses cybercrime scene analysis and cyber forensics protocol (e.g., identification, preservation, collection, analysis, examination, and report/presentation of digital evidence). The chapter concludes with an overall discussion of the current and future roles of detective and investigative controls, and what needs to be done to ensure that these controls are flexible enough to keep pace with the constantly changing technology environment and the reality of increased regulatory and legislative compliance. Readers interested in obtaining more information on any of the sections or concepts are encouraged to review the extensive reference section and consult these excellent sources. CISSP® Expectations The professional should fully understand: 1. Laws – What kinds of laws and regulations pertain to information security – How to determine if the respective organization is in compliance with these laws and regulations – How to determine what laws are applicable to computer crime – How to determine if a computer crime has occurred – What are some of the recommended methods to gather and preserve evidence and investigate computer crimes 684
Legal, Regulations, Compliance and Investigations 2. Security incidents – What constitutes a security incident, including: • Viruses and other malicious code • Terrorist attack • Unauthorized acts by privileged and nonprivileged employees • Natural disasters • Hardware and software malfunctions • Utilities outage • Common system/network attacks, including: • Spam attacks • E-mail attacks • Firewall breaches • Social engineering • Human errors and omissions • Redirection of traffic • Hacker and cracker attacks • Protocol analyzers and sniffers • Wireless approaches – What are the skills needed to recognize incidents, such as: • Pattern recognition • Detecting abnormal activities • Suspicious activities • Alarms • Virus activities – What are the skills needed to respond to incidences, such as: • The key components and items necessary to properly identify the security event • The ability to contain or repair the damage • The generally accepted guidelines to gather, protect, control, and preserve the evidence • The ability to maintain incident logs and related documentation • The generally accepted guidelines for reporting incidents • The escalation procedures used after a security incident is discovered • The generally accepted guidelines for confiscating equipment, software, and data • The appropriate countermeasures and corrective measures • The ability to prevent future incidences 3. How the (ISC)2 Code of Ethics applies to CISSPs Major Legal Systems As stated in the introduction, readers of this chapter will not be qualified to practice international law, or serve on the bench of the world court for 685
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® that matter. However, readers will hopefully have a better basic understanding of the major legal systems found throughout the world. This understanding is required for several reasons: Information systems security is an international phenomenon; crimes committed using information systems or targeted at information systems know no geographical boundaries. The whole world is now your neighbor, both the good and the bad. It is also important that we, as information security professionals, do not have false preconceptions of legal systems that we are not familiar with (i.e., all common law countries have identical laws).1 It will soon be rare to find a professional in this field that, during the course of an investigation, has not dealt with legal professionals from various countries or been introduced to several different systems of law.1 For the sake of this chapter, the major legal systems are categorized as: • • • • •
• Common law
• Civil or code law
• Customary law
• Religious
• Mixed law
This taxonomy is consistent with current legal literature in this area.2 Maritime law is not addressed in this discussion, although it is an excellent example of the harmonization of international law. Common Law The legal system referred to as common law traces its roots back to England, or more precisely, the development of a customary law system of both the Anglo-Saxons in Northern France and the early residents of England.3 Due to England’s rich history of colonization, the common law framework can be found in many parts of the world that were once colonies or territories of the British empire (e.g., United States, Canada, United Kingdom, Australia, and New Zealand). The European continent has resisted the common law influence and is based primarily on a codified legal system, civil law.1 The common law system is based on the notion of legal precedents, past decisions, and societal traditions.2 The system is based on customs that predated any written laws or codification of laws in these societies.4 Prior to the 12th century, customary law was unwritten and not unified in England; it was extremely diverse and was dependent on local norms and superstitions. During the 12th century, the king of England created a unified legal system that was common to the country.5,6 This national system allowed for the development of a body of public policy principles.3 A defining characteristic of common law systems is the adversarial approach to litigation, and the findings of fact in legal fictions.3 It is assumed that adjudicated argumentation is a valid method for arriving at 686
Legal, Regulations, Compliance and Investigations the truth of a matter. This approach led to the creation of barristers (lawyers) who take a very active role in the litigation process.3 Another discriminating element of the common law system stems from its reliance on previous cour t rulings. Decisions by the cour ts are predicated on jurisprudence (case law), with only narrow interpretation of legislative law occurring.3 In this system, judges play a more passive role than in civil law systems and are not actively involved in the determination of facts. Although historically, common law was a noncodified legal system, this is no longer true; most, if not all, common law countries have developed statute laws and a codified system of laws related to criminal and commercial matters.5,6 Most descriptions of common law systems are quick to point out that the differences between civil and common law systems are becoming increasingly difficult to distinguish, with civil systems adopting a jurisprudence approach and common law systems increasingly relying on legislative statutes and regulations.5,6 Most common law systems consist of three branches of law: criminal law, tort law, and administrative law. Criminal Law. Criminal law can be based on common law, statutory law, or a combination of both. Criminal law deals with behaviors or conduct that is seen as harmful to the public or society in general. In these cases, an individual has violated a governmental law designed to protect the public and, thus, the real victim is society. The government therefore prosecutes the transgressor on behalf of the public. Typically the punishment metered out by the criminal courts involves some loss of personal freedom for the guilty party (e.g., incarceration, probation, death). However, monetary punishments in the way of fines or restitution to the court or victim are not uncommon. Tort Law. Tort law deals with civil wrongs (torts) against an individual or business entity. As the transgressions are not against the general public or society (in most cases), the law (government) provides for different remedies than in criminal cases. These remedies usually consist of money for the damages caused to the victim. These damages can be compensatory, punitive, or statutory.7 Interestingly enough, tort law can trace its origin to criminal law, and in some jurisdictions, offenses can fall into both the criminal and tort law categories (e.g., assault against an individual).7,8 Tort law can be divided into intentional torts, wrongs against a person or property, dignitary wrongs, economic wrongs, negligence, nuisance, and strict liability.7,8 Administrative Law. Administrative law or, as it is known in some countries, regulatory law is primarily an artifact of the Anglo-American common law legal system.9 However, some civil law systems have administrative courts to oversee social security law or grievances against the government itself (either national or local).9 This branch of common law is concerned 687
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® with the governance of public bodies and designation of power to administrative agencies.9,10 These agencies are often controlled by other government agencies, but can come under the purview of the courts and are reviewed “under some principle of due process.”9,10 Punishments under administrative law consist of fines and, in some cases, incarceration. Civil Law Civil law traces its roots back to two beginnings. The first was the living law of the Roman Empire, which culminated with the compilation of the Code and Digest of Emperor Justinian.11 The second birth began as a result of Italian legal scholars and progressed through the codification of law in Europe as exemplified with the Napoleonic Code of France and the French Civil Code of 1804.11,12 The civil law system was, at one time, the most common legal system on the European continent.4 The system became regionalized over time with Germany, Norway, Sweden, Denmark, and Switzerland developing their own national systems, unique from the French Napoleonic system.4,11 Due to this nationalization, civil law can be subdivided into French civil law, German civil law, and Scandinavian civil law.4,11 Civil law is not confined to Europe alone. Many Asian countries have legal systems based on the German model of civil law.4 The distinguishing feature of civil law is thought to be the codification of law and heavy reliance on legislation as the primary source of law, as opposed to jurisprudence.4,11 This is not accurate, as there are several countries that follow an uncodified civil law legal system (e.g., Scotland and South Africa).12 However, when contrasted against the common law system, other differences become apparent. Civil law emphasizes the abstract concepts of law and is influenced by the writings of legal scholars and academics, more so than common law systems.4,11 The common law doctrine of stare decisis (lower courts are compelled to follow decisions of higher courts) is absent from the civil law system.12 The role of judges in civil law systems is also different than in common law systems. In civil law legal systems, judges are distinct from lawyers and are not attorneys who have graduated through the ranks.4,11 Judges also play a more active role in determining the facts of legal fictions. Customary Law. Custom or customary law systems are regionalized systems and reflect the society’s norms and values based on programmatic wisdom and traditions13,14 These countries have a rich history of traditions and customs that dictate acceptable behavior between the various members of society.15 These customs or norms over the years have become recognized as defining legitimate social contracts and have become part of the rule of law.16 It is rare to find a country whose rule of law is based solely on customary law. Most countries that have a strong law of custom also pre688
Legal, Regulations, Compliance and Investigations scribe to another legal system, such as civil or common law (e.g., many African countries).13,14 This combination of legal systems is referred to as a mixed legal system and will be discussed in the “Mixed Law” section. Punishment under customary law systems focuses on restitution to the victim by means of some kind of fine.16 Religious Law In a manner of speaking, all laws have been influenced by religion. The earliest societal rules of conduct that dictated the behavior of the people reflected the predominant religious teachings on morality. Over the years, many countries have attempted to separate the spiritual and secular lives of its citizens (e.g., First Amendment of the United States). Other countries not necessarily under the direct influence of Judeism/Christianism have made no such cultural/societal distinction.17 Although there are technically several religious law systems, we will confine this chapter to a very brief discussion of Muslim law. This system was chosen because the Islamic faith is practiced by a large portion of the world’s population. Muslim societies, such as those found in North Africa and the Middle East, follow Islamic laws or Sharia. Although Sharia has been the dominant system defining the rule of law, there is increasing pressure to adopt or, at the very least, incorporate more secular legal thinking and concepts (see “Mixed Law” section). Traditional Islamic law is separated into rules of worship and rules of human interaction and is guided by the Qur’arn and the “way,” or Sunnah — the manner in which the prophet Muhammad lived his life.13,17 Sharia covers all aspects of a person’s life, from religious practices, dietary choices, dress code, marriage/family life, and commerce, to domestic justice and sexual behavior.17 Law is not considered a man-made entity; it is decreed by divine will. Lawmakers and law scholars do not create laws; they attempt to discover the truth of law.17 Jurists and clerics play a central role in this system and have a high degree of authority within the society. Like the civilian systems, Sharia has been codified, but still remains open to interpretation and modification.17 Mixed Law With the new global economy, trade pacts such as the North American Free Trade Agreement (NAFTA), the creation of the European Union, etc., the introduction or blending of two or more systems of law is now becoming more common.12 Mixed law by definition is the convergence of two or more legal systems, usually civil law and common law, but increasingly customary, religious, and or civil or common law.12,13 The interaction of these legal systems can be the result of historical, economic, or political pressures. Examples of mixed systems can be found in Europe with Holland, in North America with Quebec and Louisiana, in Africa with South Africa, and in the United Kingdom with Scotland. 689
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® Information Technology Laws and Regulations Although no one expects an information systems security professional to be a legal expert on all areas of technology-related law — as with the various legal systems — a working knowledge of legal concepts directly related to information technology is required to fully understand the context, issues, and risks inherent with information systems in general. Two general categories of information technology law have the largest impact on information systems: intellectual property and privacy regulations. This section only provides a brief overview of these concepts. Readers wishing to delve deeper into this area are strongly encouraged to refer to the relevant legislation and regulations in their respective countries. Intellectual Property Laws Intellectual property laws are designed to protect both tangible and intangible items or property. Although there are various rationales behind the state-based creation of protection for this type of property, the general goal of intellectual property law is to protect property from those wishing to copy or use it, without due compensation to the inventor or creator. The notion is that copying or using someone else’s ideas entails far less work than what is required for the original development.18 According to the World Intellectual Property Organization (WIPO): Intellectual property is divided into two categories: Industrial property, which includes inventions (patents), trademarks, industrial designs, and geographic indications of source; and Copyright, which includes literary and artistic works such as novels, poems and plays, films, musical works, artistic works such as drawings, paintings, photographs and sculptures, and architectural designs.19 Patent. Simply put, a patent grants the owner a legally enforceable right to exclude others from practicing the invention covered for a specific period of time (usually 20 years).20 A patent is the “strongest form of intellectual property protection.”18 A patent protects novel, useful, and nonobvious inventions. The granting of a patent requires the formal application to a government entity. Once a patent is granted, it is published in the public domain, to stimulate other innovations.18 The World Intellectual Property Organization (WIPO), an agency of the United Nations, looks after the filling and processing of international patent applications.* Trademark. Trademark laws are designed to protect the goodwill a merchant or vendor invests in its products.18 Trademark law creates exclusive rights to the owner of markings that the public uses to identify various vendor/merchant products or goods. A trademark consists of any word, name, symbol, color, sound, product shape, device, or combination of these that *http://www.wipo.int.
Legal, Regulations, Compliance and Investigations is used to identify goods and distinguish them from those made or sold by others.21 Trademarks are registered with a government registrar. International harmonization of trademark laws began in 1883 with the Paris Convention, which prompted the Madrid Agreement of 1891.20 Like patents, the World Intellectual Property Organization (WIPO) oversees international trademark law efforts, including international registration. Copyright. A copyright covers the expression of ideas rather than the ideas themselves; it usually protects artistic property such as writing, recordings, and computer programs.18 In most countries, once the work or property is completed or in a tangible form, the copyright protection is automatically assumed. Copyright protection is weaker than patent protection, but the duration of protection is considerably longer (e.g., 75 years under U.S. copyright protection).18 Although individual countries may have slight variations in their domestic copyright laws, as long as the country is a member of the international Berne Convention, the protection afforded will be at least at a minimum level as dictated by the convention; unfortunately, not all countries are members. Trade Secret. The final area covered in this section is trade secrets. Trade secret refers to proprietary business or technical information, processes, designs, practices, etc., that are confidential and critical to the business (e.g., Coca-Cola’s formula). The trade secret may provide a competitive advantage or, at the very least, allow the company to compete equally in the marketplace.22,23 To be categorized as a trade secret, it must not be generally known and must provide some economic benefit to the company.22,23 Additionally, there must be some form of reasonable steps taken to protect its secrecy. A trade secret dispute is unique, as the actual contents of the trade secret need not be disclosed. Legal protection for trade secrets depends upon the jurisdiction. In some countries, it is assumed under unfair business legislation, and in others, specific laws have been drafted related to confidential information.24 In some jurisdictions, legal protection for trade secrets is practically perpetual and does not carry an expiry date, as is the case with patents. Trade secrets are often at the heart of industrial and economic espionage cases and are the proverbial crown jewels of some companies. Licensing Issues. The issue of illegal software and piracy is such a large problem that it warrants some discussion. More than one company has been embarrassed publicly, sued civilly, or criminally prosecuted for failing to control the use of illegal software or violating software licensing agreements. With high-speed Internet access readily available to most employees, the ability — if not the temptation — to download and use pirated software has greatly increased. According to a recent (2004) study by the Business Software Alliance (BSA) and International Data Corpora691
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® tion (IDC), prevalence and frequency of illegal software is exceedingly high, 36 percent worldwide.25 The same study found that for every two dollars worth of legal software purchased, one dollar’s worth of software was pirated.25 Though not all countries recognize the forms of intellectual property protection previously discussed, the work of several international organizations and industrialized countries seems somewhat successful in curbing the official sanctioning of intellectual property rights violations (e.g., software piracy). There are four categories of software licensing: freeware, shareware, commercial, and academic. Within these categories, there are specific types of agreements. Master agreements and end-user licensing agreements (EULAs) are the most prevalent; most jurisdictions have refused to enforce the shrink-wrap agreements that were commonplace at one time. Master agreements set out the general overall conditions of use along with any restrictions, whereas the EULA specifies more granular conditions and restrictions. The EULA is often a “click through” or radio button that the end user must click on to begin the install, indicating that he or she understands the conditions and limitations and agrees to comply. Various third parties have developed license metering software to ensure and enforce compliance with software licensing agreements. Some of these applications can produce an audit report and either disable software attempting to run in violation of an agreement (e.g., exceeding the number of concurrent running software) or produce an automated alert. The use of carefully controlled software libraries is also a recommended solution. Ignorance is no excuse when it comes to compliance with licensing conditions and restrictions. The onus is clearly on the organization to enforce compliance and police the use of software or face the possibility of legal sanctions, such as criminal prosecution or civil penalties. Privacy With the proliferation of technology, and the increasing awareness that most of our personal information is stored online or electronically in some way, shape, or form, there is growing pressure to protect personal information. Almost monthly, there are media reports worldwide of databases being compromised, files being lost, and attacks against businesses and systems that house personal private information. This has spurred concerns over the proper collection, use, retention, and destruction of information of a personal/confidential nature. This public concern has prompted the creation of regulations intended to foster the responsible use and stewardship of personal information. In the context of this discussion, privacy is one of the primary areas that business, in almost all industries, is forced to deal with regulations and regulatory compliance.
Legal, Regulations, Compliance and Investigations The actual enactment of regulations or, in some cases, laws dealing with privacy depends on the jurisdiction. Some countries have opted for a generic approach to privacy regulations — horizontal enactment (i.e., across all industries, including government), while others have decided to regulate by industry — vertical enactment (e.g., financial, health, publicly traded). Regardless of the approach, the overall objective is to protect citizen’s personal information, while at the same time balancing the business and governmental need to collect and use this information appropriately.26 Unfortunately, there is no one international privacy law, resulting in a mosaic of legislation and regulations. Some countries have been progressive in dealing with privacy and personal information, while others have yet to act in this area. Given the fact that the Internet has created a global community, our information and business transactions and operations may cross several different borders and jurisdictions. Therefore, it is prudent that we have a basic understanding of privacy principles and guidelines, and keep up to date with the changing landscape of privacy regulations that may affect our business as well as our personal information. Privacy can be defined as “the rights and obligations of individuals and organizations with respect to the collection, use, retention, and disclosure of personal information.”26 Personal information is a rather generic concept and encompasses any information that is about or on an identifiable individual.26,27 Although international privacy laws are somewhat different in respect to their specific requirements, they all tend to be based on core principles or guidelines.26–28 The Organization for Economic Co-operation and Development (OECD) has broadly classified these principles into the collection limitation, data quality, purpose specification, use limitation, security safeguards, openness, individual participation, and accountability. The guidelines are as follows:27 • There should be limits to the collection of personal data, and any such data should be obtained by lawful and fair means and, where appropriate, with the knowledge or consent of the data subject. • Personal data should be relevant to the purposes for which they are to be used and, to the extent necessary for those purposes, should be accurate, complete, and kept up to date. • The purposes for which personal data is collected should be specified not later than at the time of data collection, and the subsequent use limited to the fulfillment of those purposes or such others as are not incompatible with those purposes and as are specified on each occasion of change of purpose. • Personal data should not be disclosed, made available, or otherwise used for purposes other than those specified above except: – With the consent of the data subject – By the authority of law 693
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® • Personal data should be protected by reasonable security safeguards against such risks as loss or unauthorized access, destruction, use, modification, or disclosure of data. • There should be a general policy of openness about developments, practices, and policies with respect to personal data. Means should be readily available of establishing the existence and nature of personal data, and the main purposes of their use, as well as the identity and usual residence of the data controller. • An individual should have the right: – To obtain from a data controller, or otherwise, confirmation of whether the data controller has data relating to him – To have communicated to him, data relating to him: • Within a reasonable time • At a charge, if any, that is not excessive • In a reasonable manner • In a form that is readily intelligible to him – To be given reasons if a request made is denied, and to be able to challenge such denial – To challenge data relating to him and, if the challenge is successful, to have the data erased, rectified, completed, or amended • A data controller should be accountable for complying with measures that give effect to the principles stated above. There is a consensus that these principles should form the minimum set of requirements for the development of reasonable legislation, regulations, and policy, and that nothing prevents organizations adding additional principles. However, the actual application of these principles has proved more difficult and costly in almost all circumstances; there has been a vast underestimation of the impact of the various privacy laws and policies both domestically and with cross-border commerce. This is not an excuse to abandon, block, or fail to comply with applicable laws, regulations, or policies. However, information security professions need to appreciate that business practices have or will soon forever change due to the need to be in compliance, and that budgets will have to be appropriately increased to meet the demand.28 Like it or not, the privacy genie is out of the bottle and there is no putting it back. Liability Another integral part of an information security professional’s job function is understanding issues related to liability, negligence, and due care. In the world’s more increasingly litigious culture, these concepts become especially important, as we are seeing examples of shareholder lawsuits and third-party liability claims against organizations suffering information technology attacks and breaches. When organizations are weighing the costs versus the benefits of certain actions, inactions, or security controls, 694
Legal, Regulations, Compliance and Investigations the ability to demonstrate reasonable corporate behavior and overall due diligence is an essential factor. In law (i.e., tort), liability refers to being legally responsible. Sanctions in cases dealing with liability include both civil and criminal penalties. Liability and negligence are somewhat associated, as negligence is often used to establish liability.29 Negligence is simply acting without care, or the failure to act as a reasonable and prudent person would under similar circumstances.29 The exact definition of a reasonable and prudent person is somewhat more complicated, as the courts usually engage in legal fiction by prescribing qualities that this person has without reference to any real person. The reasonable person yardstick is determined by the given circumstances in question, and is usually the center of heated debate during the litigation process. Due care and due diligence are other terms that have found their way into issues of corporate governance. According to Sheridan, due care can be thought of as the requirement that officers and other executives with fiduciary responsibilities meet certain requirements to protect the company’s assets.22 These requirements include the safety and protection of technology and information systems that fall under the term corporate assets. Due diligence is a much more ethereal concept and is often judged against a continually moving benchmark. What used to constitute due diligence last year may not this year or the next. The dynamic nature requires a commitment to an ongoing risk analysis and risk management process and a good understanding of generally accepted business and information security practices, within the applicable industry, as well as international standards, and increasingly as dictated by legislation or regulations. The increase in government scrutiny of information system practices has resulted in the majority of companies allocating their security budgets to be compliant with the various current and pending regulatory requirements. Some estimates indicate that information security budgets will rise from the current average of approximately 5 percent of the overall IT budget to as much as 15 percent in direct response to the requirement for regulatory compliance. Though this is strictly speculative, it demonstrates the importance of controlling liability. Computer Crime As information system security professionals, our concerns are focused on risks that arise not only from errors and omissions, but also from behavior that is both malicious and intentional. The fact that computer systems have become the target for criminals should come as no surprise. The very features that make technology, information systems, and the Internet attractive to businesses also make them very attractive to criminals, both petty and professional/organized.1,30–33 As more of the new currency (e.g., 695
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® personal information, bank account numbers, credit card information) moves online, the likelihood, impacts, and, correspondingly, risk that private citizens, companies, and governments will become victims of computer crime increase.34–36 Although engaging in a comprehensive examination and discussion of computer crime is well beyond the scope of this chapter, it is important that conceptually pertinent elements are covered, at least cursorily. The phenomena of computer crime, or cybercrime, as it is often called, although a relatively recent concept compared to other, more traditional crimes, has plagued society for several years. In fact, some of the so-called new computer crimes are actually more traditional criminal activities that have benefited from the new technological advances, such as the Internet, color scanners/copiers, etc.37 Computer crimes are often divided into the following categories:1 • Computer as a tool • Computers as the target of crime • Computers incidental to the crime As a tool, computers merely allow criminals to become more efficient at practicing their tradecraft, more able to target victims, or more easily able to share contraband. Examples of these types of crimes are fraud, counterfeiting, theft, and child pornography. With our society’s increasing dependence on technology, the characteristics of what constitutes evidence have drastically changed. Some estimates indicate that currently 80 percent of all criminal investigations include evidence that is digital in nature.32 Here again, this is no surprise: we have become dependent on email, PDAs, electronic calendars, etc. In this context, computers as incidental is almost a useless category because it is so generic that it encompasses all but a very few types of criminal behavior. Computers as the target refers to those criminal activities that have their origins in technology and have no analogous real-world equivalents. These crimes target information systems and the underlying architecture and represent some of the largest issues for information security.32 These activities denote concepts that legal systems have not had experience dealing with and have not effectively embodied into the statutes, regulations, etc. Within the classification of computer targeted crime, several subcategories of activities exist. Examples of such directed activities include: • • • • • • • 696
• Insider abuse
• Viruses
• White collar/financial fraud
• Corporate espionage
• Hacking
• Child pornography
• Stalking
• Organized crime
• Terrorism
• Identity theft
• Social engineering
Although this list is by no means exhaustive, it does capture some of the uniqueness of this type of criminal behavior. Just as the criminal activity seems unique, the type of offender seems to be exclusive as well.33 It is fair to say that individuals are attracted to specific types of crime for various reasons, e.g., personal choice, aptitude, social learning, etc.30,38–41 Rather than trying to label offenders, for the sake of this discussion, it is more pragmatic to simply state that computer criminals have developed a distinctive tradecraft and that the various subclasses of computer crime require specific skills, knowledge, abilities, and access to technology.30,33 A word of caution is necessary: although the media has tended to portray the threat of cybercrime as existing almost exclusively from the outside, external to a company, reality paints a much different picture. The greatest risk of cybercrime comes from the inside, namely, criminal insiders.42–46 As information security professionals, we have to be particularly sensitive to the phenomena of the criminal or dangerous insider, as these individuals usually operate under the radar, inside of our primarily outward/external facing security controls, thus significantly increasing the impact of their crimes while leaving few if any audit trails to follow.47,48 International Cooperation. The biggest hindrance to effectively dealing with computer crime is the fact that this activity is truly international in scope, and thus requires an international solution, as opposed to a domestic one based on archaic concepts of borders and jurisdictions.49 The concept of geographical borders is meaningless in the realm of cyber space; we are truly seeing the manifestation of a global village. The World Wide Web is exactly that, worldwide; criminals in one country can victimize individuals clear across the world with a keystroke, Web site, spam attack, phishing scam, etc. Previous attempts based on domestic solutions (e.g., the introduction of criminal statutes, regulations) designed to stop activities that utilized the ubiquitous nature of the Internet and distributed information systems (e.g., online gambling, adult pornography) were inadequate and completely unsuccessful.49 The framers of these solutions failed to take into account the global reach of technology; the owners of these sites simply moved their operations to countries whose governments condoned, tolerated, or turned a blind eye to the activities. The desired effect of stopping or at the very least deterring the activity did not occur. In fact, in some cases, the activity thrived due to the unprecedented media exposure.
International responses to computer crime have met with mixed results. The Council of Europe (CoE) Cybercrime Convention is a prime example of
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® a multilateral attempt to draft an international response to criminal behaviors targeted at technology and the Internet. Thirty countries, including Canada, the United States, and China, ratified the convention that came into effect November 23, 2001. The Cybercrime Convention requires parties to: • Establish laws against cybercrime • Ensure that their law enforcement officials have the necessary procedural authorities to investigate and prosecute cybercrime offenses effectively • Provide international cooperation to other parties in the fight against computer-related crime One of the Cybercrime Convention’s stated objectives was to assist international enforcement efforts by creating a framework for the domestication and cooperation between ratifying states. This objective directly addresses one of the most difficult problems faced when dealing with computer crime, jurisdictional disputes.49 Issues related to establishing jurisdiction, extradition of accused, and lack of domestication have hamstrung many past investigations. The ultimate success of the convention is still unknown, but it is definitely a step in the right direction. Incident Response* Incident response, or more precisely incident handling, has become one of the primary functions of today’s information security department, and thus of those professionals working in this capacity. This increased importance is a direct result of the fact that attacks against networks and information systems appear to be increasing yearly.43,50 Although statistics related to the exact increase in volumes of attacks and the corresponding economic costs are impossible to calculate given the lack of universal reporting, the gross trends indicate significant increases in the last few years.43,50 Not only is the volume of attacks increasing, but the types of attacks undergo almost continuous modifications. Today we see spam, phishing scams, worms, spyware, distributed denial-of-service attacks (DDoS), and other imaginative yet malicious attacks and mutations inundating our personal computers, networks, and corporate systems on a daily basis. Historically, incident response has been precisely that, a reaction to a trigger event. Incident response in its simplest form is “the practice of detecting a problem, determining its cause, minimizing the damage it causes, resolving the problem, and documenting each step of the response *For the sake of clarity, the discussion will be confined to very simple incidents involving systems or devices. The focus of this section is to illustrate basic principles and facilitate the understanding of concepts related to incident response and handling, not to dissect every possible incident type that one might encounter.
Legal, Regulations, Compliance and Investigations for future reference.”51 Although reactive controls are obviously necessary, lessons learned from the various attacks against information systems worldwide make it painfully obvious that preventive controls as well as detective controls are also required if we are to have any hope of recovering or maintaining business operations.52,53 Although various entities have developed detailed models for incident handling (e.g., Computer Emergency Response Team Coordination Center (CERT/CC), AusCERT, Forum of Incident Response Teams (FIRST), U.S. National Institute of Standards and Technology (NIST), British Computing Society, and Canadian Communications Security Establishment (CSE)), there is a common framework to these models. The framework consists of the following components: • Creation of a response capability • Incident response and handling • Recovery and feedback Response Capability To have effective and efficient incident handling, a solid foundation must exist. In this instance, the foundation is comprised of a corporate incident handling and response policy, clearly articulated procedures and guidelines that take into consideration the various legal implications of reacting to incidents, and the management of evidence.54 The policy must be clear, concise, and provide a mandate for the incident response/handling team to deal with any and all incidents. The policy must also provide direction for employees on the escalation process to follow when a potential incident is discovered, and how various notifications, contacts, and liaisons with third-party entities, the media, government, and law enforcement authorities are to be notified, by whom, and in what manner.52,53 A properly staffed and trained response team is also required; the team can be virtual or permanent depending on the requirements of the organization. Virtual teams usually consist of individuals that, while assigned to the response team, have other regular duties and are only called upon in the event that there is some need to activate the incident handling capability. Some organizations have teams whose members are permanently assigned to the incident team and work in this capacity on a full-time basis. A third model can be described as a hybrid of the virtual and permanent, with certain core members permanently assigned to the incident team and others called up as necessary.* Although the actual makeup of the response team depends upon the structure of the organization, there are core areas that need to be represented: legal department (in lieu of in-house legal counsel, arrangements should be made with external counsel), human resources, communica*A good analogy here is that of a volunteer fire department versus a full-time fire department.
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® tions, executive management, physical/corporate security, internal audit, IS security, and IT.55,56 Obviously, there needs to be representation by other pertinent business units as well as systems administrators and anyone else that can assist in the recovery and investigation of an incident. Once the team has been established, it must be trained and stay current with its training. This sounds easy enough at first glance, but the initial and ongoing training requires a budget and resources to cover for team members who are away at training; more than one organization has been stymied in its attempt to establish a response team because of the failure to anticipate realistic costs associated with training and education. Incident Response and Handling Assuming that the appropriate groundwork has been laid, the next phase is the actual handling of an incident. Although there are various definitions of what constitutes an incident (usually any event that has the potential to negatively impact the business or its assets), it is ultimately up to the organization to categorize events that warrant the activation of the incident response escalation process.52,53 In most cases, this is spelled out in some level of detail in the policies and guidelines. When an event becomes an incident, it is essential that a methodical approach be followed. This is necessary given the complexities of dealing with the dynamics of an incident; several tasks must be carried out in parallel as well as serially. Often the output of one phase or stage in the handling of an incident produces input for a subsequent phase. In some cases, previous steps need to be redone in light of new information obtained as the investigation develops.52–54 CERT/CC at Carnegie Melon University, one of the foremost authorities on incident response and incident handling, depicts the incident handling model as a circular process that feeds back into itself, thus capturing the various dynamics and dependencies of the incident life cycle.54 The incident response and handling phase can be broken down further into triage, investigation, containment, and analysis and tracking. Triage. Regardless of what actual model of incident handling is prescribed to, there is usually some trigger event that kick starts the process.55,56 The consensus of the various models is that the first step is some type of triage process. A good analogy here (and one that is mentioned in several models) is that of a hospital emergency department receiving a new patient. Once the patient arrives, he is examined to determine the urgency of care required. Patients with life-threatening conditions receive priority, patients with less life threatening conditions are placed into a queue, and patients with minor conditions may be directed to their own physicians or neighborhood clinics.
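To make the prioritization idea behind triage concrete, the following is a minimal sketch, in Python, of how a confirmed incident might be assigned a priority and a notification list. The Incident fields, severity labels, category names, and notification lists here are illustrative assumptions only; they are not drawn from any of the models cited in this section, and a real scheme would come from the organization's own policies and guidelines.

```python
# Illustrative triage sketch: assign a priority to a confirmed incident and
# decide who should be notified. All field values, severity labels, and
# notification lists below are hypothetical examples.
from dataclasses import dataclass

@dataclass
class Incident:
    source: str            # general classifier, e.g., "internal" or "external"
    category: str          # more granular classifier, e.g., "worm", "spam", "fraud"
    asset_critical: bool   # does the affected asset support a critical business process?

def triage(incident: Incident) -> tuple[str, list[str]]:
    """Return (priority, notification list) for a confirmed incident."""
    high_risk = {"worm", "ddos", "intrusion"}
    if incident.asset_critical or incident.category in high_risk:
        priority = "high"
        notify = ["incident response team", "executive management", "legal"]
    elif incident.source == "internal":
        priority = "medium"
        notify = ["incident response team", "human resources"]
    else:
        priority = "low"
        notify = ["incident response team"]
    return priority, notify

if __name__ == "__main__":
    print(triage(Incident(source="external", category="worm", asset_critical=False)))
    # ('high', ['incident response team', 'executive management', 'legal'])
```

The point of the sketch is the ordering, exactly as in the hospital analogy: the most serious cases are escalated immediately, while less serious ones are queued, and minor ones are simply logged and handled through normal channels.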
Triage encompasses the detection, identification, and notification subphases. Following the medical model, once an incident has been detected,
Legal, Regulations, Compliance and Investigations an incident handler is tasked with the initial screening to determine the seriousness of the incident and to filter out false-positives. One of the most time-consuming aspects of information security can be dealing with falsepositives (events that are incorrectly deemed to be incidents based on rules or some other rubric). If during the initial phase of the triage it is determined that it is a false-positive, the event is logged and the process returns to the preincident escalation level of readiness. However, if it is a real incident, then the next step is identifying or classifying the type of incident. This classification is dependent on the organization, but is commonly based on a hierarchy beginning with the general classifiers (e.g., apparent source = internal versus external) and progressing to more granular or specific characteristics (e.g., worm versus spam). This categorization is used to determine the level of potential risk or criticality of the incident, which in turn is used to determine what notifications are required. Here again, the policies, procedures, and guidelines that were developed prior to the incident provide direction for the incident handler to follow.54 It is important to recognize that in the triage phase, the initial detection can come from automated safeguards or security controls and also from employees. Often, the end user will notice that his system is behaving oddly or that he has received some type of suspicious e-mail, etc., that was not blocked by the controls. If the end user is well educated and informed about the policy and procedures to follow when he notices something unusual or suspicious, the entire response escalation process becomes far more efficient and effective.53 Investigative Phase. The next major phase deals directly with the analysis, interpretation, reaction, and recovery from an incident. Regardless of the specific model that is followed, the desired outcomes of this phase are to reduce the impact of the incident, identify the root cause, get back up and running in the shortest possible time, and prevent the incident from occurring again.53–56 All of this occurs against the backdrop of adhering to company policy, applicable laws and regulations, and proper evidence management and handling. This last point cannot be stressed enough. Various countries have enacted privacy laws that protect employees and others from frivolous monitoring of network and online activities by employers. Potential evidence must also be handled correctly in accordance with rules of evidence, or it runs the risk of being inadmissible in the case of civil or criminal sanctions, or even as grounds for terminating someone’s employment (see the “Computer Forensics” section). Containment. After the notification, the next task is to contain the incident. Using the medical analogy yet again, this is similar to quarantining a patient until the exact nature of the disease or pathogen is determined. This quarantining prevents an outbreak if it turns out that the cause was some infectious agent, and allows medical staff to conduct directed analy701
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® sis of the cause of the malady. In our case, the patient is a system, device, or subset of systems on the network. The containment is used to reduce the potential impact of the incident by reducing the number of other systems, devices, or network systems that can become infected.53–56 The method of containment can vary depending on the category of the attack (e.g., external, worm), the asset affected (e.g., Web server, router), and the criticality of the data or the risk of infection to the rest of the network. Strategies include removing the system from the network by disconnecting it, virtually isolating the systems by way of network segmentation (e.g., switch, virtual local area network (VLAN)), or implementing a firewall or filtering router with the appropriate rule sets.53 It should be noted that in some cases, complete isolation or containment may not be a viable solution, or if the ultimate goal of the exercise is to track the event or capture additional evidence of further wrongdoing, other alternatives such as sniffing traffic, etc., can be used. However, depending on the incident or attack, the act of containing a system can alert the attacker that she has been detected. This can result in the attacker deleting any trails she has left or, in extreme cases, escalating the damage in an attempt to overwhelm the victim’s resources, thus allowing the attacker to escape or obfuscate the source of the attack.52,53 While dealing with the process of containment, proper documentation and handling of any potential evidence, and sources of evidence, must be maintained. It is very difficult at the beginning of an incident to anticipate the final outcome (e.g., criminal attack, error or omission); therefore, operating under the highest standard or burden of proof is prudent. If it turns out to be a false alarm or something not worth pursuing, the documentation and data can be used for training purposes, as well as for postmortem or postincident debriefing purposes (we discuss evidence handling and management in more detail in the “Computer Forensics” section). Analysis and Tracking. The next logical step after isolation or containment is to begin to examine and analyze what has occurred, with a focus on determining the root cause. The concept of root cause goes deeper than identifying only symptoms. It looks at what is actually the initial event in the cause–effect chain. Root cause analysis also attempts to determine the actual source and the point of entry into the network. Different models portray this step in various forms, but the ultimate goal is to obtain sufficient information to stop the current incident, prevent future “like” incidents from occurring, and identify what or whom is responsible. This stage requires a well-trained team of individuals with heterogeneous or eclectic skills and a solid understanding of the systems affected, as well as system and application vulnerabilities. The ability to read and parse through large log files is also a skill that is in high demand during this phase, as log files from routers, switches, firewalls, Web servers, etc., are often the primary source of initial information.56 Secondary sources of information are arti702
Legal, Regulations, Compliance and Investigations facts. An artifact is any file, object, or data directly related to the incident or left behind or created as part of the attack.54 As with any form of analysis, individuals need a combination of formal training and sufficient real-world applied experience to make appropriate interpretations without the luxury of an unlimited timeframe. A side benefit of containment is that it “buys you time.” By containing the potential spread, a bit of breathing room can be gained to proceed with the analysis and tracking in a controlled manner, as opposed to the complete state of chaos that may ensue at the beginning of the incident response and handling process. One of the biggest enemies to the tracking process is the dynamic nature of many of the logs, both internal and external.56 Log files tend to a have very limited life expectancy, and depending upon the organization, logs may be purged or overwritten in as little as 24 hours. The proverbial clock starts ticking the minute the attack, worm, virus, etc., is launched, not necessarily at the point when it is first detected. Tracking often takes place in parallel with the analysis and examination. As soon as information is obtained, it is fed into the tracking process to weed out false leads or intentionally spoofed sources.53 To have an effective tracking or traceback, it is extremely important that the organization/team have a good working relationship with other entities, such as Internet service providers, other response teams, and law enforcement. These relationships can expedite the tracking process, and needless timeconsuming hiccups can be avoided (e.g., not knowing whom to notify at the Internet service provider to request log information). Despite the cultural myth that law enforcement is woefully inept at dealing with Internet and technology-based crimes, today many law enforcement agencies have specialized units dedicated to high-tech crime investigations, and these agencies can be extremely helpful in assisting with the tracking and tracing. An important point to consider, as part of developing the incident handling policy and guidelines, is what to do once the root cause has been both identified and traced back to the source. As an aside, some policies forbid tracking and traceback and direct the response team to focus on the recovery and future prevention aspects. An alarming trend that is surfacing deals with the suggestion of striking back at the source. The ramifications regarding this are huge, not only legally but also ethically. Source addresses can be spoofed, and often the source turns out to be a compromised machine that the owner had no idea had been used in an illegal manner. Although it is tempting to seek revenge after being wronged, it is better to take the moral high ground and seek redress through the proper legal channels. Recovery Phase The next major category deals with recovery, repair, and prevention of the affected systems and assets. The goal of this phase is to get the business 703
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® back up and running in a worst-case scenario, or in the best case, bring the affected systems back into production, being sensitive to other activities that may be happening in unison (e.g., tracking and traceback).52–54,56 Recovery and Repair. Once the root cause analysis has provided sufficient information, the recovery process should begin. The exact strategy and techniques used are dependent on the type of incident and the characteristics of the “patient.” The important consideration is to recover in a manner that has the maximum likelihood of withstanding another directed incident. There is little to be gained by simply recovering a system or device to the same level that it was at prior to the incident, as the probability that it will be attacked again is quite high.53 If it did not survive the first attack, it will not likely survive a subsequent attack. The more prudent approach is to delay putting the system or device back into production until it is at least protected from the incident that affected it in the first place. This can be accomplished by upgrading the operating system, updating service packs, applying the appropriate patches (after they are thoroughly tested, of course), or, in more drastic cases, replacing the original with a different or newer product. Once the system or device appears to be ready to be reintroduced back into production, it should be tested for vulnerabilities and weaknesses. It is not advisable to have the same members who worked on the recovery and repair conduct this activity, to ensure some independence and objectivity. There is an abundance of firstrate vulnerability testing software, both open-source and retail software, available that can be used to test the systems.53
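As a toy illustration of that verification step, the sketch below (in Python) checks whether a rebuilt host exposes only an approved baseline of TCP ports before it is reintroduced. The host address and port baseline are made-up examples, and a simple port sweep is in no way a substitute for the dedicated vulnerability testing software mentioned above; it only shows the kind of objective pass/fail check an independent team member could run.

```python
# Toy post-recovery check: verify a rebuilt host exposes only approved TCP ports.
# The host address and baseline below are made-up placeholders; a real assessment
# would rely on a dedicated vulnerability scanner, as noted in the text.
import socket

APPROVED_PORTS = {22, 443}          # hypothetical baseline agreed before reintroduction
CANDIDATE_PORTS = range(1, 1025)    # well-known ports only, to keep the sweep short

def open_ports(host: str, timeout: float = 0.2) -> set[int]:
    """Return the set of candidate TCP ports that accept a connection."""
    found = set()
    for port in CANDIDATE_PORTS:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:
                found.add(port)
    return found

if __name__ == "__main__":
    unexpected = open_ports("192.0.2.10") - APPROVED_PORTS
    if unexpected:
        print(f"Do not reintroduce host: unexpected open ports {sorted(unexpected)}")
    else:
        print("Port baseline check passed; proceed with the remaining verification steps.")
```

Because the check is scripted and repeatable, it can be rerun by someone other than the people who performed the recovery, which preserves the independence and objectivity called for above.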
As was stated earlier, incident response is a dynamic process, with very fuzzy lines between the various phases; often these phases are conducted in parallel, and each has some natural dependencies on the others. In fact, incident response and handling can be thought of as an iterative process that feeds back into itself until there is some form of closure to the incident.53–55 What exactly constitutes incident closure is dependent upon a number of variables: the nature or category of the incident, the desired outcome of the organization (e.g., business resumption, prosecution, system restoration), and the success of the team in determining the root cause and source of the incident. It is advisable that the corporate policy or guideline contain some sort of checklist or metric by which the team can determine when an incident is to be closed.54

Debriefing/Feedback

Once an incident has been deemed closed, the incident handling process is not yet done. One of the most important, yet overlooked, phases is the debriefing and feedback phase.53 It would be utopian to believe that, even with the best policy, team, etc., there is nothing to be learned from each and every incident that is handled. Issues invariably arise;
Murphy's law rears its ugly head, or some previously unexpected variable creeps into the mix. As the saying goes, we often learn more from our mistakes than from our successes. This is why it is vital to have a formal process in place to document what worked well, what did not work well, and what was totally unexpected. The debriefing needs to include all of the team members, including representatives from the various business units that may have been affected by the incident.53 The output from the feedback process should also be used to adapt or modify policy and guidelines. A side benefit to the formalism of the debriefing/feedback is the ability to start collecting meaningful data that can be used to develop and track performance metrics for the response team. Metrics (e.g., number and type of incidents handled, mean time from detection of an incident to closure) can be used when determining budget allocations, manpower requirements, and baselines, when demonstrating due diligence and reasonableness, and for numerous other statistical purposes. One of the biggest challenges faced by information security professionals is producing meaningful statistics and metrics specific to the organization, or at the very least, to the industry in general. By formalizing a process for capturing data specific to the organization, the incident team can finally reverse this trend.

What has yet to be discussed or even hinted at is dealing with the public or the media. This is not an oversight, as the whole domain of public relations and communications is an extremely sensitive issue at the best of times. When an event becomes an incident, the proper handling of public disclosure can either compound the negative impact or, if handled correctly, provide an opportunity to engender public trust in the organization. This is why only properly trained and authorized individuals, typically from communications and human resources, should handle the communications and external notifications. In some countries/jurisdictions, legislation exists (or is being contemplated) that requires organizations to publicly disclose when they reasonably believe there has been an incident that may have jeopardized someone's private or financial information. Obviously, denial and "no comment" are not effective public relations strategies in today's information culture.

Computer Forensics

As was mentioned in the section on incident response and incident handling, one area that has traditionally been lacking in most organizations is proper evidence handling and management. The exact name given to this area ranges from computer forensics, digital forensics, and network forensics, to electronic data discovery, cyber forensics, and forensic computing. For our purposes, we will use the term computer forensics to encompass all of the components expressed in the other terms mentioned; thus, no one definition will be provided. Instead, computer forensics will include all
domains in which the evidence or potential evidence exists in a digital or electronic form, whether in storage or on the wire.34,57 We intentionally omit digital multimedia from the mix, as this is a related, yet highly differentiated, field within the umbrella of digital forensic science. Unlike the media depiction, computer forensics is not some piece of software or hardware. It is based on a methodical, repeatable, defensible, and auditable set of procedures and protocols. Computer forensics falls under the larger domain of digital forensic science. The Digital Forensic Research Workshop (DFRWS) defines digital forensic science as:

The use of scientifically derived and proven methods toward the preservation, collection, validation, identification, analysis, interpretation, documentation and presentation of digital evidence derived from digital sources for the purpose of facilitating or furthering the reconstruction of events found to be criminal, or helping to anticipate unauthorized actions shown to be disruptive to planned operations.58
As a forensic discipline, this area deals with evidence and the legal system and is really the marriage of computer science, information technology, and engineering with law. The inclusion of the law introduces concepts that may be foreign to many information security professionals. These include crime scene, chain of custody, best evidence, admissibility requirements, rules of evidence, etc. It is extremely important that anyone who may potentially be involved in an investigation be familiar with the basics of dealing with and managing evidence. There is nothing worse than finding the proverbial smoking gun only to learn that the evidence cannot be used, will be suppressed, or, even worse, you have violated the rights of the individuals in question and are now in worse trouble than the “bad guys.”59,60 Although different countries and legal systems have slight variations in determining how evidence and the digital crime scene should be handled, there are enough commonalities that a general discussion is possible. Like incident response, there are various computer forensics models (e.g., International Organization of Computer Evidence (IOCE), Scientific Working Group on Digital Evidence (SWGDE), Association of Chief Police Officers (ACPO)). These models formalize the computer forensic processes by breaking them into numerous phases or steps.31,34,57,60–68 A generic model includes: • Identifying evidence: Correctly identifying the crime scene, evidence, and potential containers of evidence. • Collecting or acquiring evidence: Adhering to the criminalistic principles and ensuring that the contamination and the destruction of the scene are kept to a minimum. Using sound, repeatable, collection techniques that allow for the demonstration of the accuracy and integrity of evidence, or copies of evidence. 706
Legal, Regulations, Compliance and Investigations • Examining or analyzing the evidence: Using sound scientific methods to determine the characteristics of the evidence, conduct comparison for individuation of evidence, and conducting event reconstruction. • Presentation of findings: Interpreting the output from the examination and analysis based on findings of fact and articulating these in a format appropriate for the intended audience (e.g., court brief, executive memo, report). Crime Scene Prior to identifying evidence, the larger crime scene needs to be dealt with. A crime scene is nothing more than the environment in which potential evidence may exist.69The same holds for a digital crime scene. The principles of criminalistics apply in both cases: identify the scene, protect the environment, identify evidence and potential sources of evidence, collect evidence, and minimize the degree of contamination.57,59,69 With digital crime scenes, the environment consists of both the physical and the virtual, or cyber. The physical (e.g., server, workstation, laptop, PDA, digital music device) is relatively straightforward to deal with; the virtual is more complicated, as it is often more difficult to locate the exact location of the evidence (e.g., data on a cluster or GRID, or storage area networks [SANS]). It is also more difficult to protect the virtual scene; attempting to cordon off the entire Internet with crime scene tape is not advisable. The crime scene can provide additional information related to whom or what might be responsible for the attack or incident. Locard’s principle of exchange states that when a crime is committed, the perpetrators leave something behind and take something with them, hence the exchange.70 This principle allows us to identify aspects of the persons responsible, even with a purely digital crime scene.71 As with traditional investigations, understanding the means, opportunity, and motives (MOM), as well as the modus operandi (MO), allows for a more thorough investigation or root cause analysis. As was mentioned in the “Incident Response” section, correctly and quickly identifying the root cause is extremely important when dealing with an incident, whether it is criminal or not. Criminologists, sociologists, and psychologists agree that behavior is intentional and serves to fulfill some purpose (e.g., drive reduction, need fulfillment).40,72 Criminal behavior is no different, and thus neither is criminal computer behavior. Recent research suggests that computer criminals and hackers have no significant differences related to motivation for attacking systems.30,71,73 Like traditional criminals, computer criminals have specific MOs (e.g., hacking software, type of system or network attacked) and leave behind signature behaviors (e.g., programming syntax, e-mail messages, bragging notices) that can be used to identify the attacker (or at least the tool), link other criminal behaviors together, and provide insight into the thought processes of the attackers.41,73 This information 707
can be extremely useful in the event of an insider attack, as it can be used during the interview process to solicit more accurate responses from the accused. With an external attack, the information can assist law enforcement in piecing together other offenses by the same individual, assist in the interview and interrogation process, and provide strategies at trial when the accused will be the most defensive. Given the importance of the evidence that is available at a crime scene, only those individuals with knowledge of basic crime scene analysis should be allowed to deal with the scene. The logical choice is members of the incident response or handling team. The need for a formal approach to this task, coupled with very thorough documentation, is essential. So too is the ability to deal with a scene in a manner that minimizes the amount of disruption, contamination, or destruction of evidence (either inculpatory or exculpatory). Once a scene has been contaminated, there is no undo or redo button to push; the damage is done. In many jurisdictions, the accused or opposing party has the right to conduct its own examination and analysis, requiring as original a scene as possible.

Digital/Electronic Evidence

The exact requirements for the admissibility of evidence vary across legal systems and between different classes (e.g., criminal versus tort). At a more generic level, evidence should have some probative value, be relevant to the case at hand, and meet the following criteria (often called the five rules of evidence):74

• Be authentic
• Be accurate
• Be complete
• Be convincing
• Be admissible
Digital or electronic evidence, although more fragile/volatile, must meet these criteria as well. What constitutes digital/electronic evidence is dependent on the investigation; do not rule out any possibilities until such time as they can be positively discounted. With evidence, it is better to have and not need than vice versa. Given the variance that is possible, the axiom to follow here is check with the respective judiciary, attorneys, or officer of the court for specific admissibility requirements. The dynamic nature of digital electronic evidence bears further comment. Unlike more traditional types of evidence (e.g., fingerprints, hair, fibers, bullet holes), digital/electronic evidence can be very fragile and can be erased, partially destroyed, or contaminated very easily, and, in some circumstances, without the investigator knowing this has occurred.34,65,66 This type of evidence may also have a short life span and must be collected 708
Legal, Regulations, Compliance and Investigations very quickly (e.g., cache memory, primary/random access memory, swap space) and by order of volatility (i.e., most volatile first). Sufficient care must also be taken not to disturb the timeline or chronology of events. Although time stamps are best considered relative and easily forged, the investigator needs to ensure that any actions that could alter the chronology (e.g., examining a live file system or accessing a drive that has not been write protected) are recorded or, if possible, completely avoided.75 Two concepts that are at the heart of dealing effectively with digital/electronic evidence, or any evidence for that matter, are the chain of custody and accuracy/integrity. The chain of custody refers to the who, what, when, where, and how the evidence was handled — from its identification through its entire life cycle, which ends with destruction or permanent archiving. Any break in this chain can cast doubt on the integrity of the evidence and on the professionalism of those directly involved in either the investigation or the collection and handling of the evidence.59,74,76 The chain of custody requires following a formal process that is well documented and forms part of a standard operating procedure that is used in all cases, no exceptions. Ensuring the accuracy and integrity of evidence is critical. If the courts feel the evidence or its copies are not accurate or lack integrity, it is doubtful that the evidence or any information derived from the evidence will be admissible. The current protocol for demonstrating accuracy and integrity relies on hash functions that create unique numerical signatures that are sensitive to any bit changes (e.g., MD5, SHA-256). Currently, if these signatures match the original or have not changed since the original collection, the courts will accept that integrity has been established.34,65,66,77,78 General Guidelines Most seasoned computer forensics investigators have mixed emotions regarding detailed guidelines for dealing with an investigation. The common concern is that too much detail and formalism will lead to rigid checklists and negatively impact the creative aspects of the analysis and examination. Too little formalism and methodology leads to sloppiness, difficulty in recreating the investigative process, and the lack of an auditable process that can be examined by the courts. In response to this issue, several international entities have devised general guidelines that are based on the IOCE/Group of 8 Nations (G8) principles for computer forensics and digital/electronic evidence:79 • When dealing with digital evidence, all of the general forensic and procedural principles must be applied. • Upon seizing digital evidence, actions taken should not change that evidence. 709
• When it is necessary for a person to access original digital evidence, that person should be trained for the purpose.
• All activity relating to the seizure, access, storage, or transfer of digital evidence must be fully documented, preserved, and available for review.
• An individual is responsible for all actions taken with respect to digital evidence while the digital evidence is in his possession.
• Any agency that is responsible for seizing, accessing, storing, or transferring digital evidence is responsible for compliance with these principles.

These principles form the foundation for the most prominent international models in use today (e.g., NIST, DOJ/FBI Search and Seizure Manual, NIST SP 800: Computer Forensic Guidelines, SWGDE best practices, ACPO Good Practices Guide for Computer Based Evidence, IACIS forensic examination procedures). These models are also responsive to the prevailing requirements of the court systems and are updated on a frequent basis. The sagest advice that can be given to anyone involved in a computer forensics investigation or any form of incident response is to act ethically, in good faith, attempt to do no harm, and not exceed one's knowledge, skills, and abilities. The following heuristics were developed by the Australian Computer Emergency Response Team (AusCERT) and should be part of an investigator's methodology:80

• Minimize handling/corruption of original data.
• Account for any changes and keep detailed logs of your actions.
• Comply with the five rules of evidence.
• Do not exceed your knowledge.
• Follow your local security policy and obtain written permission.
• Capture as accurate an image of the system as possible.
• Be prepared to testify.
• Ensure your actions are repeatable.
• Work fast.
• Proceed from volatile to persistent evidence.
• Do not run any programs on the affected system.
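The heuristics above, together with the earlier discussion of hash functions (e.g., MD5, SHA-256) and the chain of custody, lend themselves to a small illustration. The following is a minimal sketch, in Python, of hashing an acquired disk image and appending a custody log entry so that integrity can be re-verified later. The file names, examiner name, and log format are hypothetical placeholders; they are not part of any of the models or guidelines cited above.

```python
# Minimal sketch: hash an acquired disk image and append a who/what/when custody
# log entry, so the image's integrity can be re-verified later.
# "custody_log.jsonl", "suspect_drive.dd", and the examiner name are placeholders.
import hashlib
import json
from datetime import datetime, timezone

def sha256_of(path: str, chunk_size: int = 1024 * 1024) -> str:
    """Stream the file in chunks so large images do not have to fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def record_custody_event(log_path: str, image_path: str, examiner: str, action: str) -> None:
    """Append one custody entry; the stored hash lets any later copy be compared."""
    entry = {
        "timestamp_utc": datetime.now(timezone.utc).isoformat(),
        "examiner": examiner,
        "action": action,
        "evidence_item": image_path,
        "sha256": sha256_of(image_path),
    }
    with open(log_path, "a") as log:
        log.write(json.dumps(entry) + "\n")

if __name__ == "__main__":
    record_custody_event("custody_log.jsonl", "suspect_drive.dd", "J. Examiner",
                         "verified working copy against original acquisition hash")
```

Recomputing the hash of a working copy and comparing it to the logged value is the same matching step the courts currently accept as establishing that integrity has been maintained.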
As an information security professional, it is incumbent upon us to stay current on the latest techniques, tools, processes, and requirements for admissibility of evidence. The entire area of computer forensics is coming under increased scrutiny by both the courts and the public and will undergo significant changes in the next few years as the field matures and develops, as did other more traditional forensic disciplines, such as DNA and latent fingerprint analysis.81,82 Conclusions By now, it should be apparent that the domain of legal, regulations, compliance and investigations covers a very large range of knowledge, skills, and 710
Legal, Regulations, Compliance and Investigations abilities. The intent of this domain is to provide concepts, based on a fairly high level overview of the issues, topics, and processes that need to be a part of the repertoire of information security professionals. It is unreasonable to expect an individual to have a deep expertise in all the areas that were discussed. However, it is very reasonable to expect a professional to have enough general knowledge to understand the issues and potential pitfalls and to know how to search out the appropriate expertise. The legal, regulations, compliance and investigation domain highlights the international nature of today’s business environment and the necessity to have global cooperation to have truly effective information assurance and security. Cross-border commerce necessitates the understanding of various legal systems, legislation, and regulations. Today, no business, and in a sense, no network is an island; our business practices and stewardship of data and information may fall under the purview of several different regulations and laws, both foreign and domestic. Understanding compliance requirements, effectively assessing our abilities to comply, making the appropriate changes, and maintaining this compliance and due diligence on a go-forward basis are now integral parts of corporate governance. Many of the controls and safeguards discussed fall into the traditional categories of detective and reactive controls. Historically, the focus has been on detecting attacks against information systems or infrastructures, and once detected, how to properly determine the who, what, when, where, why, and how — with the objective of minimizing the impact and returning to a production state as quickly as is possible.53 With the increased public and governmental focus on the protection of personal information, and the passing in several countries of privacy laws and regulations, the focus is now shifting to preventative and proactive approaches (e.g., policies, encryption). It is no longer reasonable to have a strictly reactive information security posture; businesses must demonstrate that they have put sufficient forethought into how to prevent system compromises or the unauthorized access to data, and if these are detected, how to disclose the incident to affected parties (e.g., the public). As more attacks are launched on our systems, the synthesis of incident response and handling with computer forensics will become increasingly more important. One of the artifacts of having more and better detective controls has been the increase in the volume of incidents that need to be dealt with.52,53,56 As these incidents end up in the various court systems, care must be taken to ensure that from the very start of the incident, evidence is handled and managed properly (i.e., forensically sound practices). Digital evidence is coming under increased scrutiny, and what was allowable and admissible yesterday may not be tomorrow.57,81,83 In the dynamic field of information security and assurance, knowledge is one of the greatest resources. We must be diligent to ensure that our knowl711
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® edge, skills, and abilities meet the current and future demands. This can be accomplished with a commitment to ongoing education, training, and maintaining proficiency in our profession across all of the domains that make up the foundation of information assurance and security. The domain of legal, regulations, compliance and investigation does not exist in a vacuum. It is but one piece of the larger mosaic collectively referred to as information assurance and security. To have truly effective information assurance and security, a holistic multidisciplinary approach that weaves all of the foundations (domains) together is necessary. References 1. Smith, R., P. Grabosky, and G. Urbas, Cyber Criminals on Trial, Cambridge, UK: Cambridge University Press, 2004. 2. University of Ottawa, World Legal Systems, 2005. 3. Hughes, G., Common law systems, in Fundamentals of American Law, A. Morrison (Ed.), New York: Oxford University Press, 1998, pp. 9–25. 4. Wikipedia, Civil law (legal system), Wikipedia, 2005. 5. Kiralfy, A.K., English law, in An Introduction to Legal Systems, J. Derret (Ed.), Washington, DC: Fredrick A. Praeger, 1968, pp. 158–195. 6. Wikipedia, Common law, Wikipedia, 2005. 7. Law, S., Torts, in Fundamentals of American Law, A. Morrison (Ed.), New York: Oxford University Press, 1998, pp. 240–263. 8. Wikipedia, Tort, Wikipedia, 2005. 9. Wikipedia, Administrative law, Wikipedia, 2005. 10. Schwartz, B., Administrative law, in Fundamentals of American Law, A. Morrison (Ed.), New York: Oxford University Press, 1998, pp. 129–150. 11. Thomas, J.A.C., Roman law, in An Introduction to Legal Systems, J.D.M. Derret (Ed.), New York: Fredrisk A. Praeger, 1968, pp. 1–27. 12. Tetley, W., Mixed Jurisdictions: Commonlaw vs. Civil Law (codified and uncodified), 1999. 13. University of Ottawa, Legal Systems, University of Ottawa Law School, n.d. 14. Wikipedia, Custom (law), Wikipedia, 2005. 15. Woodman, G., Customary Law in Common Law Systems, n.d. 16. Benson, B., Customary law with private means of resolving disputes and dispensing justice: a description of a modern system of law and order without state coercion, Journal of Libertarian Studies, IX: 25–42 (1990). 17. Wikipedia, Sharia, Wikipedia, 2005. 18. Dreyfuss, R., Intellectual property law, in Fundamentals of American Law, A. Morrison (Ed.), New York: Oxford University Press, 1998, pp. 507–534. 19. WIPO, Intellectual Property, World Intellectual Property Organization, 2005. 20. WIPO, About Intellectual Property, World Intellectual Property Organization, 2005. 21. Law.com, Trademark, Law.com, 2005. 22. Sherizen, S., Law, Investigation & ethics, in CBK Review Course, (ISC)2, 2003. 23. Wikipedia, Trade secret, Wikipedia, 2005. 24. WIPO, Protecting Trade Secrets of Your SME, World Intellectual Property Organization, 2005. 25. BSA and IDC, Piracy Study, Business Software Alliance, International Data Corporation, 2004. 26. AICPA/CICA, AICPA/CICA Privacy Framework, Assurance Services Executive Committee of the AICPA and the Assurance Services Board of the CICA, 2004.
Legal, Regulations, Compliance and Investigations 27. OECD, OECD Guidelines on the Protection of Privacy and Transborder Flows of Personal Data, Organization for Economic Co-operation and Development, n.d. 28. Karol, T., A guide to cross-border privacy impact assessments, Deloitte & Touche, March 2001, retrieved from http://www.itgi.org/cbprivacyguide.doc. 29. Knapp, C., Contract law, in Fundamentals of American Law, A. Morrison (Ed.), New York: Oxford University Press, 1998, pp. 201–236. 30. Furnell, S., Cybercrime: Vandalizing the Information Society, Boston: Addison-Wesley, 2002, pp. xi, 316. 31. Icove, D.J., K.A. Seger, and W. VonStorch, Computer Crime: A Crimefighter’s Handboo, 1st ed., Sebastopol, CA: O’Reilly & Associates, 1995, pp. xxi, 437. 32. Rogers, M., DCSA: digital crime scene analysis, in Information Security Management Handbook, H. Tipton and M. Krause (Eds.), New York: Auerbach Publications, 2006, pp. 601–614. 33. Rogers, M.K. and J.R.P. Ogloff, A comparative analysis of Canadian computer and general criminals, Canadian Journal of Police and Security Services, Spring 2004, 366–376. 34. Casey, E., Handbook of Computer Crime Investigation: Forensic Tools and Technology, San Diego: Academic Press, 2002, pp. xiv, 448. 35. Gordon, L.A. et al., 2004 CSI/FBI Computer Crime and Security Survey, San Francisco: Computer Security Institute, 2004. 36. Parker, D.B., Fighting Computer Crime: A New Framework for Protecting Information, New York: Wiley, 1998, pp. xv, 512. 37. Rogers, M., The information technology insider risk, in The Information Security Handbook, H. Bigdoli (Ed.), New York: John Wiley & Sons, 2005. 38. Ainsworth, P.B., Offender Profiling and Crime Analysis, Devon, U.K.: Willan Publishing, 2001, pp. x, 197. 39. Canter, D. and L.J. Alison, The Social Psychology of Crime: Groups, Teams, and Networks, Offender Profiling Series, Vol. 3, Hants, England: Aldershot, 2000, pp. ix, 334. 40. Holmes, R.M. and S.T. Holmes, Profiling Violent Crimes: An Investigative Tool, 2nd ed., Thousand Oaks, CA: Sage Publications, 1996, pp. xiii, 208. 41. Rogers, M.K. and J.R.P. Ogloff, A comparative analysis of Canadian computer and general criminals, Canadian Journal of Police and Security Services, Spring 2004, 366–376. 42. DoD Insider Threat Mitigation: Final Report of the Insider Threat Integrated Process Team, 2000. 43. Information Security Breaches Survey 2004, PricewaterhouseCoopers, 2004. 44. Canton, D., Inside Breaches Pose Threat, Too., Ontario: London Free Press, 2004. 45. Kimberland, K., SEI Press Release: Secret Service and CERT Coordination Center Release Comprehensive Report Analyzing Insider Threats to Banking and Finance Sector, 2004. 46. Quigley, A., Inside Job: Ex-Employees and Trusted Partners May Pose the Greatest Threats to Network Security, 2002. 47. Randazzo, M. et al., Insider Threat Study: Illicit Cyber Activity in the Banking and Finance Sector, Pittsburgh: Carnegie Mellon, 2004. 48. Shaw, E.D., K.G. Ruby, and J.M. Post, The insider threat to information systems: the psychology of the dangerous insider, Security Awareness Bulletin, 2:1–10 (1998). 49. Brenner, S., Defining cyber crime: a review of state and federal law, in The Investigation and Defense of a Computer Related Crime, R.D. Clifford (Ed.), Durham, NC: Carolina Academic Press, 2001, pp. 11–70. 50. Melek, A., Deloitte 2004 Global Security Survey, Deloitte Touche Tohmatsu, 2004. 51. Educause, Glossary, Educause, 2005. Retrieved from http:www.educause.edu/ Glossary/2497. 52. Prosise, C., K. Mandia, and M. 
Pepe, Incident Response and Computer Forensics, 2nd ed., New York: McGraw-Hill/Osborne, 2004.
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® 53. Schultz, E. and Shumay, R., Incident Response: A Strategic Guide to Handling System and Network Security Breaches, Indianapolis: Sams, 2002. 54. West-Brown, M. et al., Handbook for Computer Security Response Teams (CSIRTs), 2nd ed., Pittsburgh: Carnegie Mellon University, 2003. 55. Grance, T., K. Kent, and B. Kim, Computer Security Handling Guide, NIST, 2004. 56. Northcutt, S., Computer Security Incident Handling Step by Step, Version 2.2, Bethsda, MD: SANS Institute, 2001. 57. Carrier, B. and E. Spafford, Getting physical with digital forensics investigation, International Journal of Digital Evidence, 2(2) (Fall 2003). 58. Palmer, G., A Road Map for Digital Forensics Research, Utica, NY: Digital Forensic Research Workshop, 2001. 59. Ahmad, A., The forensic chain-of-custody: improving the process of evidence collection in incident handling procedures, in 6th Pacific Asia Conference on Information Systems, Tokyo, 2002. 60. McKemmish, R., What is forensic computing? Trends and Issues in Crime and Criminal Justice, 118:1–6 (1999). 61. Britz, M., Computer Forensics and Cyber Crime: An Introduction, Upper Saddle River, NJ: Pearson/Prentice Hall, 2004, pp. xv, 248. 62. Caloyannides, M., Computer Forensics and Privacy, Boston, MA: Artech House, 2001, pp. xvii, 392. 63. Scientific Working Group on Digital Evidence, Digital evidence: standards and principles, Forensic Science Communications, 2:2 (April 2000). Retrieved from http:// www.fbi.gov/hq/lab/fsc/backissu/april2000/swgde.htm. 64. Scientific Working Group on Digital Evidence, June 5, 2003. 65. Kruse, W.G. and J.G. Heiser, Computer Forensics: Incident Response Essentials, Boston: Addison-Wesley, 2001, pp. xiii, 392. 66. Lange, M. and K. Nimsger, Electronic Evidence: What Every Lawyer Should Know, Chicago: American Bar Association, 2004. 67. Marcella, A.J. and R. Greenfield, Cyber Forensics: A Field Manual for Collecting, Examining, and Preserving Evidence of Computer Crimes, Boca Raton, FL: Auerbach Publications, 2002, pp. xx, 443. 68. Stephenson, P., Investigating Computer-Related Crime, Boca Raton, FL: CRC Press, 2000, p. 304. 69. Lee, H., T. Palmbach, and M. Miller, Henry Lee’s Crime Scene Handboo, San Diego: Academic Press, 2001.k 70. Saferstein, R., Criminalistics: An Introduction to Forensic Science, New York: Prentice Hall, 2004. 71. Rogers, M., The role of criminal profiling in computer forensic investigations, Computers and Security, 4 (2003). 72. Britton, P., The Jigsaw Man, London: Transworld Publishers, 1997. 73. Woo, H.-J., The Hacker Mentality: Exploring the Relationship between Psychological Variables and Hacking Activities, Athens, GA: University of Georgia, 2003. 74. Sommer, P., Computer Forensics: An Introduction, 1997. Retrieved from http://www. virtualcity.co.uk/vcaforens.htm. 75. Evidence, S.W.G.o.D., SWGDE Draft Best Practices, June 5, 2005. 76. Prosise, C. and K. Mandia, Incident Response and Computer Forensics, 2nd ed., Berkeley, CA: Osborne, 2003, pp. xxix, 507. 77. Carrier, B., File System Forensic Analysis, Crawfordsville, IN: Addison Wesley, 2005. 78. Carrier, B. and E. Spafford, Digital crime scene event reconstruction, Academy of Forensic Sciences 56th Annual Meeting, Dallas, 2004. 79. International Organization on Computer Evidence, The G8 Proposed Principles for Procedures Relating to Digital Evidence, 2002. 80. Braid, M., Collecting Evidence after a System Compromise, AusCERT, 2001.
Legal, Regulations, Compliance and Investigations 81. Palmer, G., Forensic Analysis in a Digital World, International Journal of Digital Evidence 1, Spring 2001. 82. Whitcomb, C., A Historical Perspective of Digital Evidence: A Forensic Scientist’s View,International Journal of Digital Evidence 1, Spring 2002. 83. Rogers, M. and K. Seigfried, The Future of Computer Forensics: A Needs Analysis Survey, Computers and Security, Spring 2004.
Sample Questions

1. Where does the greatest risk of cybercrime come from?
a. Outsiders
b. Nation-states
c. Insiders
d. Script kiddies
2. What is the biggest hindrance to dealing with computer crime?
a. Computer criminals are generally smarter than computer investigators.
b. Adequate funding to stay ahead of the computer criminals.
c. Activity associated with computer crime is truly international.
d. There are so many more computer criminals than investigators that it is impossible to keep up.
3. Computer forensics is really the marriage of computer science, information technology, and engineering with:
a. Law
b. Information systems
c. Analytical thought
d. The scientific method
4. What principle allows us to identify aspects of the person responsible for a crime when, whenever committing a crime, the perpetrator takes something with him and leaves something behind?
a. Meyer's principle of legal impunity
b. Criminalistic principles
c. IOCE/Group of 8 Nations principles for computer forensics
d. Locard's principle of exchange
5. Which of the following is not one of the five rules of evidence?
a. Be authentic
b. Be redundant
c. Be complete
d. Be admissible
6. What is not mentioned as a phase of an incident response?
a. Documentation
b. Prosecution
c. Containment
d. Investigation
7. _______________ emphasizes the abstract concepts of law and is influenced by the writings of legal scholars and academics.
a. Criminal law
b. Civil law
c. Religious law
d. Administrative law
8. Which type of intellectual property covers the expression of ideas rather than the ideas themselves?
a. Trademark
b. Patent
c. Copyright
d. Trade secret
9. Which type of intellectual property protects the goodwill a merchant or vendor invests in its products?
a. Trademark
b. Patent
c. Copyright
d. Trade secret
10. Which of the following is not a computer forensics model?
a. IOCE
b. SWGDE
c. MOM
d. ACPO
11. Which of the following is not a category of software licensing?
a. Freeware
b. Commercial
c. Academic
d. End-user licensing agreement
12. What are the rights and obligations of individuals and organizations with respect to the collection, use, retention, and disclosure of personal information related to?
a. Privacy
b. Secrecy
c. Availability
d. Reliability
13. Triage encompasses which of the following incident response subphases?
a. Collection, transport, testimony
b. Traceback, feedback, loopback
c. Detection, identification, notification
d. Confidentiality, integrity, availability
14. Integrity of a forensic bit stream image is often determined by:
a. Comparing hash totals to the original source
b. Keeping good notes
c. Taking pictures
d. Can never be proven
15. When dealing with digital evidence, the crime scene:
a. Must never be altered
b. Must be completely reproducible in a court of law
c. Must exist in only one country
d. Must have the least amount of contamination that is possible
Appendix A
Answers to Sample Questions Domain 1: Information Security and Risk Management 1. According to this chapter, consideration of computer ethics is recognized to have begun with the work of which of the following? a. Joseph Weizenbaum b. Donn B. Parker c. Norbert Wiener d. Walter Maner Answer c 2. Which of the following U.S. laws, regulations, and guidelines does not have a requirement for organizations to provide ethics training? a. Federal Sentencing Guidelines for Organizations b. Health Insurance Portability and Accountability Act c. Sarbanes–Oxley Act d. New York Stock Exchange Governance Structure Answer b 3. According to Peter S. Tippett, which of the following common ethics fallacies is demonstrated by the belief that if a computer application allows an action to occur, the action is allowable because if it was not, the application would have prevented it? a. The computer game fallacy b. The shatterproof fallacy c. The hacker’s fallacy d. The law-abiding citizen fallacy Answer a 4. According to Stephen Levy, which of the following is one of the six beliefs he described within the hacker ethic? a. There must be a way for an individual to correct information in his or her records. b. Thou shalt not interfere with other people’s computer work. 719
4. According to Steven Levy, which of the following is one of the six beliefs he described within the hacker ethic? a. There must be a way for an individual to correct information in his or her records. b. Thou shalt not interfere with other people's computer work. c. Preserve the value of their systems, applications, and information. d. Computers can change your life for the better. Answer d
5. According to Fritz H. Grupe, Timothy Garcia-Jay, and William Kuechler, which of the following represents the concept behind the "no free lunch rule" ethical basis for IT decision making? a. If an action is not repeatable at all times, it is not right at any time. b. Assume that all property and information belong to someone. c. To be financially viable in the market, one must have data about what competitors are doing and understand and acknowledge the competitive implications of IT decisions. d. IT personnel should avoid potential or apparent conflicts of interest. Answer b
6. The concept of risk management is best described as the following: a. Risk management reduces risks by defining and controlling threats and vulnerabilities. b. Risk management identifies risks and calculates their impacts on the organization. c. Risk management determines organizational assets and their subsequent values. d. All of the above. Answer a
7. Qualitative risk assessment is earmarked by which of the following? a. Ease of implementation b. Detailed metrics used for calculation of risk c. Can be completed by personnel with a limited understanding of the risk assessment process d. a and c only Answer d
8. Single loss expectancy (SLE) is calculated by using: a. Asset value and annualized rate of occurrence (ARO) b. Asset value, local annual frequency estimate (LAFE), and the standard annual frequency estimate (SAFE) c. Asset value and exposure factor d. All of the above Answer c
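To see the arithmetic behind question 8: single loss expectancy multiplies asset value by exposure factor, and annualized loss expectancy then factors in how often the loss is expected to occur. The short sketch below uses invented figures purely for illustration.

# Quantitative risk formulas (figures are invented for the example):
#   SLE = asset value (AV) x exposure factor (EF)
#   ALE = SLE x annualized rate of occurrence (ARO)

asset_value = 100_000.0   # hypothetical asset value, in dollars
exposure_factor = 0.25    # hypothetical fraction of the asset lost per incident
aro = 0.5                 # hypothetical incidents per year (one every two years)

sle = asset_value * exposure_factor   # single loss expectancy
ale = sle * aro                       # annualized loss expectancy

print(f"SLE = ${sle:,.2f}")   # SLE = $25,000.00
print(f"ALE = ${ale:,.2f}")   # ALE = $12,500.00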
9. Consideration for which type of risk assessment to perform includes all of the following except: a. Culture of the organization b. Budget c. Capabilities of resources d. Likelihood of exposure Answer d
10. Security awareness training includes: a. Legislated security compliance objectives b. Security roles and responsibilities for staff c. The high-level outcome of vulnerability assessments d. None of the above Answer d
11. A signed user acknowledgment of the corporate security policy: a. Ensures that users have read the policy b. Ensures that users understand the policy, as well as the consequences for not following the policy c. Can be waived if the organization is satisfied that users have an adequate understanding of the policy d. Helps to protect the organization if a user's behavior violates the policy Answer d
12. Effective security management: a. Achieves security at the lowest cost b. Reduces risk to an acceptable level c. Prioritizes security for new products d. Installs patches in a timely manner Answer b
13. Identity theft is best mitigated by: a. Encrypting information in transit to prevent readability of information b. Implementing authentication controls c. Determining location of sensitive information d. Publishing privacy notices Answer b
14. Availability makes information accessible by protecting from each of the following except: a. Denial of services b. Fires, floods, hurricanes c. Unreadable backup tapes d. Unauthorized transactions Answer d
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® 15. The security officer could report to any of the following except: a. CEO b. Chief information officer c. Risk manager d. Application development Answer d 16. Tactical security plans: a. Establish high-level security policies b. Enable entitywide security management c. Reduce downtime d. Deploy new security technology Answer b 17. Who is accountable for information security? a. Everyone b. Senior management c. Security officer d. Data owners Answer b 18. Security is most expensive when addressed in which phase? a. Design b. Rapid prototyping c. Testing d. Implementation Answer d 19. Information systems auditors help the organization: a. Mitigate compliance issues b. Establish an effective control environment c. Identify control gaps d. Address information technology for financial statements Answer c 20. Long-duration security projects: a. Provide greater organizational value b. Increase return on investment (ROI) c. Minimize risk d. Increase completion risk Answer d 21. Setting clear security roles has the following benefits except: a. Establishes personal accountability b. Enables continuous improvement 722
Answers to Sample Questions c. Reduces cross-training requirements d. Reduces departmental turf battles Answer c 22. Well-written security program policies should be reviewed: a. Annually b. After major project implementations c. When applications or operating systems are updated d. When procedures need to be modified Answer a 23. Orally obtaining a password from an employee is the result of: a. Social engineering b. Weak authentication controls c. Ticket-granting server authorization d. Voice recognition software Answer a 24. A security policy that will stand the test of time includes the following except: a. Directive words such as shall, must, or will b. Defined policy development process c. Short in length d. Technical specifications Answer d 25. Consistency in security implementation is achieved through: a. Policies b. Standards and baselines c. Procedures d. SSL encryption Answer b 26. The ability of one person in the finance department to add vendors to the vendor database and subsequently pay the vendor illustrates which concept? a. A well-formed transaction b. Segregation of duties c. Job rotation d. Data sensitivity level Answer b 27. Which function would be most compatible with the security function? a. Data entry b. Database administration 723
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® c. Change management d. Network management Answer c 28. Collusion is best mitigated by: a. Job rotation b. Data classification c. Defining job sensitivity level d. Least privilege Answer a 29. False-positives are primarily a concern during: a. Drug and substance abuse testing b. Credit and background checks c. Reference checks d. Forensic data analysis Answer a 30. Data access decisions are best made by: a. User managers b. Data owners c. Senior management d. Application developer Answer b 31. Company directory phone listings would typically be classified as: a. Public b. Classified c. Sensitive information d. Internal use only Answer d Domain 2: Access Control 1. A preliminary step in managing resources is: a. Conducting a risk analysis b. Defining who can access a given system or information c. Performing a business impact analysis d. Obtaining top management support Answer b 2. Which best describes access controls? a. Access controls are a collection of technical controls that permit access to authorized users, systems, and applications. 724
b. Access controls help protect against threats and vulnerabilities by reducing exposure to unauthorized activities and providing access to information and systems to only those who have been approved. c. Access control is the employment of encryption solutions to protect authentication information during log-on. d. Access controls help protect against vulnerabilities by controlling unauthorized access to systems and information by employees, partners, and customers. Answer b
3. _______ requires that a user or process be granted access to only those resources necessary to perform assigned functions. a. Discretionary access control b. Separation of duties c. Least privilege d. Rotation of duties Answer c
4. What are the six main categories of access control? a. Detective, corrective, monitoring, logging, recovery, and classification b. Deterrent, preventative, detective, corrective, compensating, and recovery c. Authorization, identification, factor, corrective, privilege, and detective d. Identification, authentication, authorization, detective, corrective, and recovery Answer b
5. What are the three types of access control? a. Administrative, physical, and technical b. Identification, authentication, and authorization c. Mandatory, discretionary, and least privilege d. Access, management, and monitoring Answer a
6. Which approach revolutionized the process of cracking passwords? a. Brute force b. Rainbow table attack c. Memory tabling d. One-time hashing Answer b
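Rainbow tables (question 6) work by precomputing hashes of likely passwords, which is only practical when every system hashes the same password to the same stored value. One common countermeasure is a per-user salt combined with a slow derivation function. The sketch below is illustrative only; the password and iteration count are invented.

import hashlib
import os

def hash_password(password, salt=None):
    # Hash a password with a random per-user salt using PBKDF2-HMAC-SHA256.
    salt = salt if salt is not None else os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

# An unsalted hash is identical for every user with the same password,
# so a single precomputed (rainbow) table lookup cracks them all.
print(hashlib.sha256(b"Winter2006!").hexdigest())

# With per-user salts, the same password produces different stored values,
# forcing an attacker to work on each account individually.
print(hash_password("Winter2006!")[1].hex())
print(hash_password("Winter2006!")[1].hex())  # different salt, different digest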
7. What best describes two-factor authentication? a. Something you know b. Something you have c. Something you are d. A combination of two listed above Answer d
8. A potential vulnerability of the Kerberos authentication server is: a. Single point of failure b. Asymmetric key compromise c. Use of dynamic passwords d. Limited lifetimes for authentication credentials Answer a
9. In mandatory access control, the system controls access and the owner determines: a. Validation b. Need to know c. Consensus d. Verification Answer b
10. Which is the least significant issue when considering biometrics? a. Resistance to counterfeiting b. Technology type c. User acceptance d. Reliability and accuracy Answer b
11. Which is a fundamental disadvantage of biometrics? a. Revoking credentials b. Encryption c. Communications d. Placement Answer a
12. Role-based access control _______: a. Is unique to mandatory access control b. Is independent of owner input c. Is based on user job functions d. Can be compromised by inheritance Answer c
13. Identity management is: a. Another name for access controls
b. A set of technologies and processes intended to offer greater efficiency in the management of a diverse user and technical environment c. A set of technologies and processes focused on the provisioning and decommissioning of user credentials d. A set of technologies and processes used to establish trust relationships with disparate systems Answer b
14. A disadvantage of single sign-on is: a. Consistent time-out enforcement across platforms b. A compromised password exposes all authorized resources c. Use of multiple passwords to remember d. Password change control Answer b
15. Which of the following is incorrect when considering privilege management? a. Privileges associated with each system, service, or application, and the defined roles within the organization to which they are needed, should be identified and clearly documented. b. Privileges should be managed based on least privilege. Only rights required to perform a job should be provided to a user, group, or role. c. An authorization process and a record of all privileges allocated should be maintained. Privileges should not be granted until the authorization process is complete and validated. d. Any privileges that are needed for intermittent job functions should be assigned to multiple user accounts, as opposed to those for normal system activity related to the job function. Answer d
16. Capability lists are related to the subject, whereas access control lists (ACLs) are related to the object, and therefore: a. Under capability lists, attacker subjects can simply refuse to submit their lists and act with no restrictions. b. Under access control lists, a user can invoke a program to access objects normally restricted. c. Capability lists can only manage subject-to-subject access, and thus cannot be part of the reference monitor. d. Only access control lists can be used in object-oriented programming. Answer b
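Question 16 turns on where the authorization data lives. The toy sketch below contrasts the object-centric ACL view with the subject-centric capability view; the users, files, and rights are invented for the example.

# Object-centric view: each object carries an access control list (ACL).
acl = {
    "payroll.xlsx": {"alice": {"read"}, "bob": {"read", "write"}},
}

# Subject-centric view: each subject carries a capability list.
capabilities = {
    "alice": {"payroll.xlsx": {"read"}},
    "bob": {"payroll.xlsx": {"read", "write"}},
}

def acl_allows(subject, obj, right):
    # Reference-monitor style check against the object's ACL.
    return right in acl.get(obj, {}).get(subject, set())

def capability_allows(subject, obj, right):
    # Check against the subject's own capability list.
    return right in capabilities.get(subject, {}).get(obj, set())

print(acl_allows("alice", "payroll.xlsx", "write"))        # False
print(capability_allows("bob", "payroll.xlsx", "write"))   # True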
Domain 3: Cryptography
1. Asymmetric key cryptography is used for all of the following except: a. Encryption of data b. Access control c. Nonrepudiation d. Steganography Answer d
2. The most common forms of asymmetric key cryptography include: a. Diffie–Hellman b. Rijndael c. Blowfish d. SHA-256 Answer a
3. One of the most important principles in the secure use of a public key algorithm is: a. Protection of the private key b. Distribution of the shared key c. Integrity of the message d. History of session keys Answer a
4. Secure distribution of a confidential message can be performed by: a. Encrypting the message with the receiver's public key b. Encrypting a hash of the message c. Having the message authenticated by a certificate authority d. Using a password-protected file format Answer a
5. What are the disadvantages of using a public key algorithm compared to a symmetric algorithm? a. A symmetric algorithm provides better access control. b. A symmetric algorithm is a faster process. c. A symmetric algorithm provides nonrepudiation of delivery. d. A symmetric algorithm is more difficult to implement. Answer b
6. When a user needs to provide message integrity, what options may be best? a. Send a digital signature of the message to the recipient b. Encrypt the message with a symmetric algorithm and send it c. Encrypt the message with a private key so the recipient can decrypt with the corresponding public key d. Send an encrypted hash of the message along with the message to the recipient Answer d
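One common way to realize the "encrypted hash" in question 6 is a keyed hash (HMAC) computed with a secret shared by sender and receiver. The sketch below uses Python's standard hmac module; the key and message are placeholders.

import hmac
import hashlib

shared_key = b"placeholder-shared-secret"   # agreed out of band (illustrative)
message = b"Wire $1,000 to account 12345"

# Sender computes a keyed hash over the message and sends both.
tag = hmac.new(shared_key, message, hashlib.sha256).hexdigest()

# Receiver recomputes the HMAC and compares in constant time.
def verify(key, msg, received_tag):
    expected = hmac.new(key, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, received_tag)

print(verify(shared_key, message, tag))                           # True
print(verify(shared_key, b"Wire $9,000 to account 99999", tag))   # False: tampering detected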
7. A certification authority provides which benefits to a user? a. Protection of public keys of all users b. History of symmetric keys c. Proof of nonrepudiation of origin d. Validation that a public key is associated with a particular user Answer d
8. What is the output length of a RIPEMD-160 hash? a. 160 bits b. 150 bits c. 128 bits d. 104 bits Answer a
9. What is the primary risk of using cryptographic protection for systems or data? a. Loss of the system may mean loss of all data. b. A hardware failure may lead to lost data or system integrity. c. A disgruntled user may lead to denial of service. d. An employee may hide his activities from the security department. Answer c
10. ANSI X9.17 is concerned primarily with: a. Protection and secrecy of keys b. Financial records and retention of encrypted data c. Formalizing a key hierarchy d. The lifespan of key-encrypting keys (KKMs) Answer a
11. When a certificate is revoked, what is the proper procedure? a. Setting new key expiry dates b. Updating the certificate revocation list c. Removal of the private key from all directories d. Notification to all employees of revoked keys Answer b
12. What is not true about link encryption? a. Link encryption encrypts routing information. b. Link encryption is often used for Frame Relay or satellite links.
c. Link encryption is suitable for high-risk environments. d. Link encryption provides better traffic flow confidentiality. Answer c
13. A __________ is the sequence that controls the operation of the cryptographic algorithm. a. Encoder b. Decoder wheel c. Cryptovariable d. Cryptographic routine Answer c
14. The process used in most block ciphers to increase their strength is: a. Diffusion b. Confusion c. Step function d. SP-network Answer d
15. The two methods of encrypting data are: a. Substitution and transposition b. Block and stream c. Symmetric and asymmetric d. DES and AES Answer b
16. Cryptography supports all of the core principles of information security except: a. Availability b. Confidentiality c. Integrity d. BCP Answer d
17. A way to defeat frequency analysis as a method to determine the key is to use: a. Substitution ciphers b. Transposition ciphers c. Polyalphabetic ciphers d. Inversion ciphers Answer c
18. The running key cipher is based on: a. Modular arithmetic b. XOR mathematics
c. Factoring d. Exponentiation Answer a
19. The only cipher system said to be unbreakable by brute force is: a. AES b. DES c. One-time pad d. Triple DES Answer c
20. Messages protected by steganography can be transmitted in: a. Picture files b. Music files c. Video files d. All of the above Answer d
Domain 4: Physical (Environmental) Security
1. Which of these statements best describes the concept of defense in depth or the layered defense model? a. A combination of complementary countermeasures b. Replicated defensive techniques, such as double firewalling c. Perimeter fencing and guarding d. Contingency measures for recovery after, e.g., system failure Answer a
2. Sprinkler systems to defeat a fire outbreak may include either a dry pipe or wet pipe mechanism. Which of these statements is not true of a dry pipe mechanism? a. It delays briefly before providing water to the fire. b. It uses gas or powder, rather than a fluid, to choke the fire. c. It offers a brief opportunity for emergency shutdown procedures. d. It offers a brief opportunity to evacuate staff from the affected rooms. Answer b
3. The geographical location of the site may affect the security requirement if it: a. May be vulnerable to natural disaster (e.g., a floodplain) b. Lacks adequate access for, or the logistical support of, emergency services
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® c. Experiences crime, including burglary, vandalism, street crime, and arson d. All of the above Answer d 4. Which of these infrastructure features would most likely present a physical vulnerability for an information system? a. Fire escapes, including external and internal stairways b. The information security architecture c. The corporate compliance policy d. The internal telephone network Answer a 5. Which one of these would be the principal practical benefit of utilizing existing physical or procedural measures in an information system’s security strategy? a. They offer duplication of, e.g., access controls. b. They are already tried, tested, and accepted by staff. c. They are managed by facilities staff. d. They are written into corporate procedures. Answer b 6. Which one of these is least likely to provide a physical security barrier for an ICT system? a. External site perimeter b. Protected zones (e.g., a floor or suite of rooms) within a building c. Communications channels d. Office layout Answer c 7. Which of these is a procedural (rather than an administrative or technical) control? a. System logging b. Purging storage media on, e.g., fax, photocopier, or voice mail facilities c. Developing a system security policy d. Configuring a firewall rule base Answer b 8. Which of these is not a common type of fire/smoke detection system? a. Ionization b. Photoelectric 732
Answers to Sample Questions c. Heat d. Movement Answer d 9. Which one of these fire extinguisher classes is most appropriate for controlling fires in electrical equipment or wiring? a. Class A b. Class B c. Class C d. Class D Answer c 10. Which one of these is the strongest form of protective window glass? a. Standard plate b. Tempered c. Embedded polycarbonate sheeting d. Embedded wire mesh Answer d 11. Which one of these physical intruder detection systems reacts to fluctuations of ambient light energy within its range? a. Electrical circuit b. Light beam c. Passive infrared detector (PIR) d. Microwave system Answer c 12. Which one of these physical locking devices requires the knowledge of a set of numbers and a rotation sequence to achieve access? a. Deadbolt lock b. Combination lock c. Keypad d. Smart lock Answer b 13. Which one of these is the most critical aspect of ensuring the effectiveness of a CCTV system? a. Positioning cameras at a height that prevents physical attack b. Adequate lighting and positioning to address blind spots c. Monitoring of and reaction to camera feeds d. Safe storage of footage Answer c 14. In terms of physical security, which one of these is the best measure to prevent loss of data in a mobile computing scenario? 733
a. Carry the laptop in an unmarked bag or briefcase. b. Carry the laptop's hard disk separately from the laptop. c. Use tamper detection measures or tracing software. d. Restrict access via tokens, such as smart cards. Answer b
15. Procedural security measures often fail because staff fail to appreciate why they should use them. Which one of these measures may best address this? a. Security operating procedures b. Security training and awareness c. Disciplinary procedures d. Dissemination of the corporate security policy Answer b
Domain 5: Security Architecture and Design
1. What is the name for an operating system that switches from one process to another process quickly to speed up processing? a. Multiprocessing b. Multitasking c. Multithreading d. Multidimensional Answer b
2. What mode do applications run in to limit their access to system data and hardware? a. Supervisor mode b. User mode c. Tunnel mode d. Interprocess mode Answer b
3. Which of the following is not true of the reference monitor? a. It must mediate all accesses. b. It must be protected from modification. c. It must be verifiable as correct. d. It must provide continuous monitoring of file privileges. Answer d
4. In the Bell–LaPadula model, the simple security property addresses which of the following? a. Reads b. Writes c. Executes d. Read/writes Answer a
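Question 4 concerns the Bell–LaPadula simple security property. A minimal sketch of the model's two mandatory rules, over an invented set of labels, looks like this (it models Bell–LaPadula only, not Biba):

# Toy lattice of sensitivity labels, lowest to highest (illustrative only).
LEVELS = {"UNCLASSIFIED": 0, "CONFIDENTIAL": 1, "SECRET": 2, "TOP SECRET": 3}

def can_read(subject_clearance, object_label):
    # Simple security property: no read up.
    return LEVELS[subject_clearance] >= LEVELS[object_label]

def can_write(subject_clearance, object_label):
    # *-property: no write down.
    return LEVELS[subject_clearance] <= LEVELS[object_label]

print(can_read("SECRET", "TOP SECRET"))    # False: reading up is denied
print(can_read("SECRET", "CONFIDENTIAL"))  # True
print(can_write("SECRET", "CONFIDENTIAL")) # False: writing down is denied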
5. Which of the following does not provide a certification process? a. ISO/IEC 17799:2005 b. BS 7799:2 c. ISO 27001 d. ISO 15408 Answer a
6. Data hiding is a required TCSEC criterion of module development for systems beginning at what criterion level? a. A1 b. B3 c. B2 d. C3 Answer b
7. Which of the following security models addresses three goals of integrity? a. Biba b. Bell–LaPadula c. Clark–Wilson d. Brewer–Nash Answer c
8. ITSEC added which of the following requirements that TCSEC did not address? a. Confidentiality and availability b. Integrity and confidentiality c. Availability and integrity d. Nonrepudiation and integrity Answer c
9. Which of the following is not a usual integrity goal? a. Prevent unauthorized users from making modifications b. Prevent authorized users from making improper modifications c. Maintain conflict of interest protections d. Maintain internal and external consistency Answer c
10. Which model establishes a system of subject–program–object bindings such that the subject no longer has direct access to the object, but instead this is done through a program?
a. Biba b. Bell–LaPadula c. Clark–Wilson d. Brewer–Nash Answer c
11. The Biba integrity * (star) property ensures: a. No write up b. No write down c. No read up d. No read down Answer a
12. Which model fails to address the fact that, because not all subjects with an appropriate clearance need access, the system owner must still provide the need-to-know decision? a. Biba b. Bell–LaPadula c. Clark–Wilson d. Brewer–Nash Answer b
13. Which model helps ensure that high-level actions (inputs) do not determine what low-level users can see (outputs)? a. Noninterference model b. Lattice model c. Information flow model d. Graham–Denning model Answer a
14. Which access control model has three parts — a set of objects, a set of subjects, and a set of rights — as well as defining eight primitive rights? a. Access control matrix b. Lattice model c. Information flow model d. Graham–Denning model Answer d
15. What is the name for the collections of distributed software that are present between the application running on the operating system and the network services that reside on a network node? a. Applications b. Middleware
Answers to Sample Questions c. Trusted computer base (TCB) d. System kernel Answer b 16. Which model assigns access rights to subjects for their accesses to objects? a. Jueneman model b. Access control matrix c. Information flow model d. Noninterference model Answer b 17. Which model describes a partially ordered set for which every pair of elements has a greatest lower bound and a least upper bound? a. Lattice-based model b. Access control matrix c. Information flow model d. Noninterference model Answer a 18. What are typically trusted areas that are separated from untrusted areas by an imaginary boundary sometimes referred to as the security perimeter? a. Mainframes and centralized computing environments b. PCs and distributed computing environments c. Network partitions d. Chinese wall Answer c 19. The Common Criteria uses which designations for evaluation? a. D1, C1, C2, B1, B2, B3, A1 b. E0, E1, E2, E3, E4, E5, E6, E7 c. EAL1, EAL2, EAL3, EAL4, EAL5, EAL6, EAL7 d. F0, F1, F2, F3, F4, F5, F6, F7 Answer c Domain 6: Business Continuity and Disaster Recovery Planning 1. Which of the following is considered the most important component of the enterprisewide continuity planning program? a. Business impact assessment b. Formalized continuity plans 737
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® c. Executive management support d. Hotsite arrangements Answer c 2. During the threat analysis phase of the continuity planning methodology, which of the following threats should be addressed? a. Physical security b. Environmental security c. Information security d. All of the above Answer d 3. The major objective of the business impact assessment process is to: a. Prioritize time-critical business processes b. Determine the most appropriate recovery time objective for business processes c. Assist in prioritization of IT applications and networks d. All of the above Answer d 4. Continuity of IT technologies or IT network infrastructure capabilities is addressed in what type of continuity plan? a. Disaster recovery plans b. Emergency response/crisis management plans c. Business continuity plans d. Continuous availability plans Answer a 5. Crisis management planning focuses management attention on the following: a. Preplanning that will enable management to anticipate and react in the event of emergency b. Reacting to a natural disaster such as a hurricane or earthquake c. Anticipating adverse financial events d. IT systems’ restart and recovery activities Answer a 6. Performing benchmarking and peer review relative to enterprise continuity planning business processes is a valuable method to do all of the following except: a. Help identify leading business continuity planning processes and practices b. Allow realistic goal setting for action plans and agendas 738
Answers to Sample Questions c. Provide a method for developing metrics and measures for the continuity planning process d. Compare continuity planning personnel salary levels Answer d 7. An effective continuity plan will contain all of the following type of information except for: a. Prioritized list of business processes or IT systems to be recovered b. The business impact assessment report c. Recovery team structures and assignments d. The primary and secondary location where backup and recovery activities will take place Answer b 8. All but one of the following are advantages of automating or utilizing continuity planning software: a. It standardizes training approaches. b. It provides a platform for management and audit oversight. c. It eases long-term continuity plan maintenance. d. It provides business partners with an enterprisewide view of the continuity planning infrastructure. Answer d 9. Which is the least important reason for developing business continuity and disaster recovery plans? a. Disasters really do occur b. Budgeting IT expenditures c. Good business practice and standard of due care d. Legal or regulatory compliance Answer b 10. When conducting the business impact assessment, business processes are examined relative to all but one of the following criteria: a. Customer interruption impacts b. Embarrassment or loss of confidence impacts c. Executive management disruption impacts d. Revenue loss potential impacts Answer c 11. The primary purpose of formalized continuity planning test plans is to accomplish all except: a. Define test scope and objectives b. Define test timeframes 739
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® c. Define test costs d. Define the test script Answer c 12. The primary reason for conducting continuity planning tests is to: a. Provide employees’ families with a method for contacting management b. Ensure that continuity plans are current and viable c. Prepare third parties to react to an emergency within the enterprise d. Identify which employees can go home following a disaster Answer b 13. During development of alternative recovery strategies, all of the following activities should be performed except: a. Use the prioritized business process maps developed during the BIA to map time-critical supporting resources b. Develop short- and long-term testing and maintenance strategies c. Prepare cost estimates for acquisition of continuity support resources d. Provide executive management with recommendations on acquiring appropriate continuity resources Answer b 14. The primary phases of the enterprise continuity planning implementation methodology include all of the following except: a. Current state assessment phase b. Execution phase c. Design and development phase d. Management phase Answer b 15. Which of the following statements most appropriately describes the timeliness of processes and supporting resources prioritization and recovery? a. The processes are mission critical. b. The processes are critical. c. The processes are time critical. d. All of the above. Answer c Domain 7: Telecommunications and Network Security 1. In the OSI reference model, on which layer can a telephone number be described? 740
Answers to Sample Questions a. Layer 1, because a telephone number represents a series of electrical impulses b. Layer 3, because a telephone number describes communication between different networks c. This depends on the nature of the telephony system (for instance, Voice-over-IP versus public switched telephony network (PSTN)) d. None, as the telephone system is a circuit-based network and the OSI system only describes packet-switched networks Answer c 2. Which transmission modes exist on OSI layer 5? a. Simplex, all other modes can be described as a series of simplex connections b. Simplex, duplex, triplex c. Simplex, half duplex, duplex d. Duplex, as the other modes are only maintained for legacy and not part of modern standards Answer c 3. In which of the following situations is the network itself not a target of attack? a. A denial-of-service attack on servers on a network b. Hacking into a router c. A virus outbreak saturating network capacity d. A man-in-the-middle attack Answer c 4. Which of the following are effective protective or countermeasures against a distributed denial-of-service attack? a = Redundant network layout b = Secret fully qualified domain names (FQDNs) c = Reserved bandwidth d = Traffic filtering e = Network Address Translation (NAT) a. b and e b. b, d, and e c. a and c d. a, c, and d Answer c 5. What is the optimal placement for network-based intrusion detection systems (NIDSs)? a. On the network perimeter, to alert the network administrator of all attack attempts 741
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® b. On network segments with business-critical systems (e.g., demilitarized zones (DMZs) and on certain intranet segments) c. At the network operations center (NOC) d. At an external service provider Answer b 6. Which of the following are meaningful uses for network-based scans? a = Discovery of devices and services on a network b = Test of compliance with the security policy c = Detection of attackers in a network, for instance, sniffers d = Test for vulnerabilities and backdoors, for instance, as part of a penetration test or to detect PCs infected by Trojans a. a, b, and c b. a, b, and d c. a, c, and d d. b, c, and d Answer b 7. Which of the following is an advantage of fiber-optic over copper cables from a security perspective? a. Fiber optics provides higher bandwidth. b. Fiber optics are more difficult to wiretap. c. Fiber optics are immune to wiretap. d. None — the two are equivalent; network security is independent from the physical layer. Answer b 8. Which of the following devices should not be part of a network’s perimeter defense? a. A screening router b. A firewall c. A proxy server d. None of the above — all are needed to protect the network behind the perimeter Answer c 9. Which of the following is a principal security risk of wireless LANs? a. Lack of physical access control b. Demonstrably insecure standards c. Implementation weaknesses d. War driving Answer a 10. Which of the following configurations of a WLAN’s SSID offers adequate security protection? 742
a. Using an obscure SSID to confuse and distract an attacker b. Not using any SSID at all to prevent an attacker from connecting to the network c. Not broadcasting an SSID to make it harder to detect the WLAN d. None of the above Answer d
11. Which of the following is the principal security risk of broadband Internet access proliferation for home users? a. Users using peer-to-peer file-sharing networks for breaches of intellectual property. b. PCs connected permanently to the Internet are prone to receive more spam mails, thereby increasing the risk for the user to become infected with viruses and Trojans. c. PCs will become infected with dialers on DSL lines (run over telephony lines), thereby exposing the user to almost limitless financial risk. d. Home computers that are not securely configured or maintained and are permanently connected to the Internet become easy prey for attackers. Answer d
12. Who should be allowed to change rules on a firewall and for which reason? a. The network administrator, for testing and troubleshooting purposes b. The firewall administrator, on request of users after having assessed the validity of the business reason c. The firewall administrator in compliance with a change process that will, in particular, validate the request against the organization's security policy and provide proper authorization for the request d. The security manager, who will, in particular, validate the request against the organization's security policy and provide proper authorization for the request Answer c
13. Which of the following is the principal benefit of a personal firewall? a. They provide a PC on a public network with a reasonable degree of protection; if the PC connects to a trusted network later on (for instance, an Intranet), it will prevent the PC from becoming an agent of attack (e.g., by spreading viruses). b. They offer an additional degree of protection on intranets to the PC because, due to the trend of incremental weakening of the
network boundary, these networks can no longer be considered trusted. c. They protect networks the PC connects to from threats, such as virus infections, that the PC could become an agent to. d. They prevent attacks on individual PCs. If everybody used them, the Internet would be safe from virus attacks. Answer a
14. Which of the following are true statements about IPSec? a = IPSec provides mechanisms for authentication and encryption. b = IPSec provides mechanisms for nonrepudiation. c = IPSec will only be deployed with IPv6. d = IPSec authenticates hosts against each other. e = IPSec only authenticates clients against a server. f = IPSec is implemented in SSH and TLS. a. a and d b. a, b, and e c. a, b, c, d, and f d. a, b, c, e, and f Answer a
15. Which of the following statements about well-known ports (0 through 1023) on layer 4 is true? a. Well-known ports all run TCP, as UDP was considered not secure enough. b. Well-known ports historically were the ones defined in early (10-bit address) implementations of TCP. When address space was extended to 16 bits, registered and dynamic ports were added. c. On most operating systems, use of well-known ports requires system-level (administrative, superuser) access. d. The distinction between well-known, registered, and dynamic ports will become obsolete in IPv6, as the service used becomes part of the IP address. Answer c
16. Which of the following is the enabler for TCP sequence number attacks, and which mitigation exists? a. The fact that packets can arrive in random order. Mitigation is offered by better control of the carrier medium, as described in RFC 2549. b. The fact that sequence numbers can be assumed to monotonously increase, which enables guessing of a valid (but random) higher sequence number. Mitigation is offered by switching to UDP, as described in RFC 768. c. The fact that sequence numbers can be predicted, enabling insertion of illegitimate packets into the data stream. Mitigation is offered by better randomization, as described in RFC 1948. d. TCP sequence number attacks are based on a brute-force attack of guessing valid sequence numbers. No mitigation is possible. Answer c
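The RFC 1948 mitigation cited in question 16 derives each connection's initial sequence number from a keyed hash of the connection endpoints plus a fine-grained timer, so an off-path attacker cannot predict it. The following is a simplified sketch of that idea, not the exact algorithm of any particular operating system.

import hashlib
import hmac
import os
import time

SECRET = os.urandom(16)  # per-boot secret key (illustrative)

def initial_sequence_number(src_ip, src_port, dst_ip, dst_port):
    # RFC 1948-style ISN: keyed hash of the connection 4-tuple plus a timer.
    four_tuple = f"{src_ip}:{src_port}-{dst_ip}:{dst_port}".encode()
    offset = int.from_bytes(hmac.new(SECRET, four_tuple, hashlib.sha256).digest()[:4], "big")
    clock = int(time.time() * 250_000)  # roughly a 4-microsecond tick
    return (clock + offset) % 2**32

# Different connections get unrelated, hard-to-guess starting points.
print(initial_sequence_number("192.0.2.1", 40000, "198.51.100.7", 80))
print(initial_sequence_number("192.0.2.1", 40001, "198.51.100.7", 80))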
17. Which of the following is the principal weakness of DNS (Domain Name System)? a. Lack of authentication of servers, and thereby authenticity of records b. Its latency, which enables insertion of records between the time when a record has expired and when it is refreshed c. The fact that it is a simple, distributed, hierarchical database instead of a singular, relational one, thereby giving rise to the possibility of inconsistencies going undetected for a certain amount of time d. The fact that addresses in e-mail can be spoofed without checking their validity in DNS, caused by the fact that DNS addresses are not digitally signed Answer a
18. Which of the following statements about open e-mail relays is incorrect? a. An open e-mail relay is a server that forwards e-mail from domains other than the ones it serves. b. Open e-mail relays are a principal tool for distribution of spam. c. Using a blacklist of open e-mail relays provides a secure way for an e-mail administrator to identify open mail relays and filter spam. d. An open e-mail relay is widely considered a sign of bad system administration. Answer c
19. A cookie is a way to: a. Track a user's e-mail b. Add statefulness to the (originally stateless) HTTP c. Disclose a user's identity d. Add history information to the (originally stateless) HTTP Answer b
20. From a disaster recovery perspective, which of the following is the principal concern associated with Voice-over-IP services? a. They can make the IP network of an organization a single point of failure for communication.
b. They will increase the chance of a network outage due to capacity saturation. c. They will make the overall IT environment more complex, thereby increasing cost for the recovery site, dependency on external suppliers, and time needed to make it operational. d. None of the above — the choice of telephony technology is immaterial to business continuity planning. Answer a
21. Why is public key encryption unsuitable for multicast applications? a. The processing overhead is too high. b. The system is susceptible to man-in-the-middle attacks. c. All data is going to all members of the multicast group. d. Distribution of too many public keys allows them to be broken. Answer c
Domain 8: Application Security
1. If a database is protected from modification using only symmetric encryption, someone may still be able to mount an attack by: a. Moving blocks of data such that a field belonging to one person is assigned to another b. Changing the encryption key so that a collision occurs c. Using the public key instead of the private key d. Arranging to intercept the public key in transit and replace it with his own Answer a
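The block-moving attack in question 1 works because a cipher that encrypts each block independently and deterministically, with no integrity check, lets an attacker rearrange ciphertext blocks and have the rearrangement decrypt cleanly. The toy cipher below is not real encryption; it only stands in for that property so the swap is easy to see.

# Toy stand-in for deterministic per-block encryption (ECB-like): XOR each
# 16-byte block with a fixed key block. NOT real encryption; it only models
# a cipher with no integrity protection.
BLOCK = 16
KEY = bytes(range(BLOCK))

def xor_block(block):
    return bytes(a ^ b for a, b in zip(block, KEY))

def make_record(name, balance):
    # Two fixed-width 16-byte fields: account holder, then balance.
    return name.encode().ljust(BLOCK) + balance.encode().ljust(BLOCK)

def encrypt(record):
    return [xor_block(record[i:i + BLOCK]) for i in range(0, len(record), BLOCK)]

def decrypt(blocks):
    return b"".join(xor_block(b) for b in blocks)

alice = encrypt(make_record("acct=alice", "bal=$100"))
bob = encrypt(make_record("acct=bob", "bal=$9999"))

# Without knowing the key, an attacker swaps the encrypted balance blocks.
alice[1], bob[1] = bob[1], alice[1]

print(decrypt(alice))  # Alice's record now decrypts with Bob's balance
print(decrypt(bob))    # and vice versa; nothing detects the modification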
2. Why cannot outside programs determine the existence of malicious code with 100 percent accuracy? a. Users do not update their scanners frequently enough. b. Firewalls are not intended to detect malicious code. c. The purpose of a string depends upon the context in which it is interpreted. d. The source code language is often unknown. Answer c: Most of these answers are partially correct, but answer c is the primary reason. Even if users update their signature files frequently, they may not detect a new or unknown attack. Firewalls will stop many types of attacks, but again, they are limited to the rules they have been provided.
3. Format string vulnerabilities in programs can be found by: a. Forcing buffer overflows b. Submitting random long strings to the application c. Causing underflow problems d. Including string specifiers in input data Answer d: Answers a and b are too vague or broad. Answer c relates to integer values, not strings.
4. Files temporarily created by applications can expose confidential data if: a. Special characters are not used in the filename to keep the file hidden b. The existence of the file exceeds three seconds c. File permissions are not set appropriately d. Special characters indicating this is a system file are not used in the filename. Answer c
5. The three structural parts of a virus are: a. Malicious payload, message payload, and benign payload b. Infection, payload, and trigger c. Self-replication, file attachment, and payload d. Replication, destructive payload, and triggering condition Answer b
6. An application that uses dynamic link libraries can be forced to execute malicious code, even without replacing the target .dll file, by exploiting: a. Registry settings b. The library search order c. Buffer overflows d. Library input validation flaws Answer b
7. In terms of databases, cryptography can: a. Only restrict and reduce availability b. Improve availability by allowing data to be easily placed where authorized users can access it c. Improve availability by increasing granularity of access controls d. Neither reduce nor improve availability Answer b
8. Proprietary protocols and data formats: a. Are unsafe because they typically rely on security by obscurity b. Are safe because buffer overflows cannot be effectively determined by random submission of data
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® c. Are insecure because vendors do not test them d. Are secure because of encryption Answer a 9. Integrating cryptography into applications may lead to: a. Increased stability as the programs are protected against viral attack b. Enhanced reliability as users can no longer modify source code c. Reduced breaches of policy due to disclosure of information d. Possible denial of service if the keys are corrupted Answer d Domain 9: Operations Security 1. Which of the following permissions should not be assigned to system operators? a. Volume mounting b. Changing the system time c. Controlling job flow d. Monitoring execution of the system Answer b 2. Which type of network component typically lacks sufficient accountability controls? a. Workstations b. Servers c. Switches d. Database management systems Answer c 3. The correlation of system time among network components is important for what purpose? a. Availability b. Network connectivity c. Backups d. Audit log review Answer d 4. Which type of access control system would use security labels? a. Mandatory access control b. Discretionary access control c. Role-based access control d. Simplex access control Answer a 748
5. Individuals are granted clearance according to their: a. Duties assigned b. Trustworthiness c. Both a and b d. Neither a nor b Answer c
6. Which group characteristic or practice should be avoided? a. Account groupings based on duties b. Group accounts c. Distribution of privileges to members of the group d. Assigning an account to multiple groups Answer b
7. Which of the following resources does not impact audit log management? a. Memory b. Bandwidth c. CPU time d. Storage space Answer a
8. Which type of users should be allowed to use system accounts? a. Ordinary users b. Security administrators c. System administrators d. None of the above Answer d
9. Wireless network traffic is best secured with which of the following protocols? a. Wireless Encryption Protocol (WEP) b. Wired Equivalent Privacy (WEP) c. Wi-Fi Protected Access (WPA) d. Wireless Protected Access (WPA) Answer c
10. Original copies of software should reside with: a. Media librarian b. Software librarian c. Security administrator d. System administrator Answer a
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® 11. All of the following are control types except: a. Detective b. Preventative c. Recovery d. Configuration Answer d 12. Compensating controls are used: a. To detect errors in the system b. When an existing control is insufficient to provide the required access c. To augment a contingency plan d. As a deterrent control Answer b 13. Need-to-know enforcement is most easily implemented using: a. Mandatory access control b. Discretionary access control c. Role-based access control d. None of the above Answer a 14. What measurement unit is used to describe the amount of energy necessary to reduce a magnetic field to zero? a. Reduction b. Remanence c. Coercivity d. Gauss Answer c 15. Which object reuse method is best used for a CD-ROM containing sensitive information? a. Degauss b. Pulverize c. Overwrite software d. None of the above Answer b 16. Backups and archives: a. Perform the exact same function b. Provide redundancy capabilities c. Are only necessary in high threat areas d. Serve different purposes Answer d 750
Answers to Sample Questions 17. Redundant components are characterized by all of the following except: a. Hardware only b. Hot spares c. Online d. Duplicative Answer a 18. Which RAID level provides data mirroring? a. 0 b. 1 c. 3 d. 5 Answer b 19. Relative humidity levels in the IT operations center should be less than: a. 20 percent b. 35 percent c. 50 percent d. 60 percent Answer d 20. Who is ultimately responsible for notifying authorities of a data or system theft? a. Users b. Security administrator c. System administrator d. Management Answer d 21. Phishing is essentially another form of: a. Denial of service b. Social engineering c. Malware d. Spyware Answer b 22. Intrusion detection systems are used to detect all of the following except: a. Physical break-ins b. System misuse 751
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® c. Unauthorized changes to system files d. SPAM Answer d 23. Which of the following does not give rise to a vulnerability? a. Hackers b. Flaws c. Policy failures d. Weaknesses Answer a 24. Configuration management involves: a. Identifying weaknesses b. Documenting system settings c. Vulnerability scanning d. Periodic maintenance Answer b 25. Patch management is a part of: a. Contingency planning b. Change control management c. Business continuity planning d. System update management Answer b Domain 10: Legal, Regulations, Compliance and Investigation 1. Where does the greatest risk of cybercrime come from? a. Outsiders b. Nation-states c. Insiders d. Script kiddies Answer c 2. What is the biggest hindrance to dealing with computer crime? a. Computer criminals are generally smarter than computer investigators. b. Adequate funding to stay ahead of the computer criminals. c. Activity associated with computer crime is truly international. d. There are so many more computer criminals than investigators that it is impossible to keep up. Answer c 3. Computer forensics is really the marriage of computer science, information technology, and engineering with: 752
a. Law b. Information systems c. Analytical thought d. The scientific method Answer a
4. What principle allows us to identify aspects of the person responsible for a crime, given that whenever a crime is committed, the perpetrator takes something with him and leaves something behind? a. Meyer's principle of legal impunity b. Criminalistic principles c. IOCE/Group of 8 Nations principles for computer forensics d. Locard's principle of exchange Answer d
5. Which of the following is not one of the five rules of evidence? a. Be authentic b. Be redundant c. Be complete d. Be admissible Answer b
6. What is not mentioned as a phase of an incident response? a. Documentation b. Prosecution c. Containment d. Investigation Answer b
7. _______________ emphasizes the abstract concepts of law and is influenced by the writings of legal scholars and academics. a. Criminal law b. Civil law c. Religious law d. Administrative law Answer b
8. Which type of intellectual property covers the expression of ideas rather than the ideas themselves? a. Trademark b. Patent c. Copyright d. Trade secret Answer c
9. Which type of intellectual property protects the goodwill a merchant or vendor invests in its products? a. Trademark b. Patent c. Copyright d. Trade secret Answer a
10. Which of the following is not a computer forensics model? a. IOCE b. SWGDE c. MOM d. ACPO Answer c
11. Which of the following is not a category of software licensing? a. Freeware b. Commercial c. Academic d. End-user licensing agreement Answer d
12. What are the rights and obligations of individuals and organizations with respect to the collection, use, retention, and disclosure of personal information related to? a. Privacy b. Secrecy c. Availability d. Reliability Answer a
13. Triage encompasses which of the following incident response subphases? a. Collection, transport, testimony b. Traceback, feedback, loopback c. Detection, identification, notification d. Confidentiality, integrity, availability Answer c
14. Integrity of a forensic bit stream image is often determined by: a. Comparing hash totals to the original source b. Keeping good notes c. Taking pictures d. Can never be proven Answer a
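As a practical illustration of answer 14, a forensic bit stream image is hashed at acquisition and re-hashed whenever its integrity must be demonstrated; matching digests show the copy is unchanged. The file paths below are hypothetical.

import hashlib

def file_digest(path, algorithm="sha256", chunk_size=1024 * 1024):
    # Hash a file in chunks so large evidence images do not need to fit in memory.
    h = hashlib.new(algorithm)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical paths: the hash recorded at acquisition and the working copy.
acquisition_hash = file_digest("evidence/source_drive.img")
working_copy_hash = file_digest("evidence/working_copy.img")

if acquisition_hash == working_copy_hash:
    print("Hashes match: the bit stream image is consistent with the original.")
else:
    print("Hash mismatch: the working copy can no longer be relied upon.")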
15. When dealing with digital evidence, the crime scene: a. Must never be altered b. Must be completely reproducible in a court of law c. Must exist in only one country d. Must have the least amount of contamination that is possible Answer d
Appendix B
Certified Information Systems Security Professional (CISSP®) Candidate Information Bulletin
Effective Date — April 1, 2006
This Candidate Information Bulletin provides the following:
• Exam blueprint to a limited level of detail that outlines major topics and sub-topics within the domains,
• Suggested reference list,
• Description of the format of the items on the exam, and
• Basic registration/administration policies.
Applicants must have a minimum of four years of direct full-time security professional work experience in one or more of the ten domains of the (ISC)2 CISSP® CBK® or three years of direct full-time security professional work experience in one or more of the ten domains of the CISSP® CBK® with a four-year college degree. Additionally, a Master's Degree in Information Security from a National Center of Excellence can substitute for one year toward the four-year requirement. CISSP professional experience includes:
• Work requiring special education or intellectual attainment, usually including a liberal education or college degree.
• Work requiring habitual memory of a body of knowledge shared with others doing similar work.
• Management of projects and/or other employees.
• Supervision of the work of others while working with a minimum of supervision of one's self.
• Work requiring the exercise of judgment, management decision-making, and discretion.
• Work requiring the exercise of ethical judgment (as opposed to ethical behavior).
• Creative writing and oral communication.
• Teaching, instructing, training and the mentoring of others.
• Research and development.
• The specification and selection of controls and mechanisms (i.e., identification and authentication technology) (does not include the mere operation of these controls).
• Applicable titles such as officer, director, manager, leader, supervisor, analyst, designer, cryptologist, cryptographer, cryptanalyst, architect, engineer, instructor, professor, investigator, consultant, salesman, representative, etc. Title may include programmer. It may include administrator, except where it applies to one who simply operates controls under the authority and supervision of others. Titles with the words "coder" or "operator" are likely excluded.
Copyright 2006, International Information Systems Security Certification Consortium, Inc. All rights reserved. No claim, copyright or otherwise, is made regarding U.S. Government laws, rules, regulations, policies, standards, or documents.
1 — Information Security and Risk Management
Overview
Security management entails the identification of an organization's information assets and the development, documentation, and implementation of policies, standards, procedures and guidelines that ensure confidentiality, integrity, and availability. Management tools such as data classification, risk assessment, and risk analysis are used to identify the threats, classify assets, and to rate their vulnerabilities so that effective security controls can be implemented.
Risk management is the identification, measurement, control, and minimization of loss associated with uncertain events or risks. It includes overall security review, risk analysis, selection and evaluation of safeguards, cost benefit analysis, management decision, safeguard implementation, and effectiveness review.
The candidate will be expected to understand the planning, organization, and roles of individuals in identifying and securing an organization's information assets; the development and use of policies stating management's views and position on particular topics and the use of guidelines,
standards, and procedures to support the policies; security awareness training to make employees aware of the importance of information security, its significance, and the specific security-related requirements relative to their position; the importance of confidentiality, proprietary and private information; employment agreements; employee hiring and termination practices; and risk management practices and tools to identify, rate, and reduce the risk to specific resources.
Key Areas of Knowledge
• Understand and document goals, mission, and objectives of the organization
• Establish governance
• Understand concepts of availability, integrity, and confidentiality
• Apply the following security concepts in planning 1 Defense-in-depth 2 Avoid single points of failure
• Develop and implement Security Policy
• Define the Organization's security roles and responsibilities
• Secure outsourcing
• Develop and maintain internal service level agreements
• Integrate and support identity management
• Understand and apply risk management concepts
• Evaluate personnel security
• Develop and conduct security education, training, and awareness
• Understand data classification concepts
• Evaluate information system security strategies
• Support certification and accreditation efforts
• Design, conduct, and evaluate security assessment
• Report security issues to management
• Understand professional ethics
2 — Access Control
Overview
Access control is the collection of mechanisms that permits managers of a system to exercise a directing or restraining influence over the behavior, use, and content of a system. It permits management to specify what users can do, which resources they can access, and what operations they can perform on a system.
The candidate should fully understand access control concepts, methodologies, and implementation within centralized and decentralized environments across the enterprise's computer systems. Access control techniques, detective and corrective measures should be studied to understand the potential risks, vulnerabilities, and exposures.
2 — Access Control

Overview

Access control is the collection of mechanisms that permits managers of a system to exercise a directing or restraining influence over the behavior, use, and content of a system. It permits management to specify what users can do, which resources they can access, and what operations they can perform on a system.

The candidate should fully understand access control concepts, methodologies, and implementation within centralized and decentralized environments across the enterprise's computer systems. Access control techniques and detective and corrective measures should be studied to understand the potential risks, vulnerabilities, and exposures.

Key Areas of Knowledge

• Control access by applying the following concepts/methodologies/techniques
• Identify, evaluate, and respond to access control attacks (e.g., Brute Force, Dictionary, Spoofing, Denial of Service) (see the sketch following this list)
• Design, coordinate, and evaluate penetration test(s)
• Design, coordinate, and evaluate vulnerability test(s)
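To make the dictionary attack listed above concrete, the minimal sketch below hashes each word from a candidate list and compares it with a captured password hash. The wordlist, hash algorithm, and target value are assumptions made for this illustration; real password stores add salts and deliberately slow hash functions precisely to blunt this attack.

    import hashlib

    def dictionary_attack(target_hash, wordlist):
        # Hash each candidate and compare it with the captured digest.
        for candidate in wordlist:
            if hashlib.sha256(candidate.encode("utf-8")).hexdigest() == target_hash:
                return candidate
        return None  # no candidate matched

    # Hypothetical captured hash: here simply the digest of "winter2006".
    captured = hashlib.sha256(b"winter2006").hexdigest()
    print(dictionary_attack(captured, ["password", "letmein", "winter2006"]))

Countermeasures named elsewhere in the CBK, such as account lockout, strong password policy, and salted hashing, all work by making this simple loop impractical.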
3 — Cryptography

Overview

The Cryptography domain addresses the principles, means, and methods of disguising information to ensure its integrity, confidentiality, and authenticity. The candidate will be expected to know basic concepts within cryptography; public and private key algorithms in terms of their applications and uses; algorithm construction, key distribution and management, and methods of attack; and the applications, construction, and use of digital signatures to provide authenticity of electronic transactions and nonrepudiation of the parties involved.

Key Areas of Knowledge

• Understand the application and use of cryptography (e.g., confidentiality, availability, and integrity)
• Understand methods of encryption (e.g., one-time pads, substitutions, permutations)
• Understand types of encryption (e.g., stream, block)
• Understand initialization vectors (IV)
• Understand cryptographic systems
• Understand the use of and employ key management techniques
• Understand message digests/hashing (e.g., MD5, SHA, HMAC) (see the sketch following this list)
• Understand digital signatures
• Understand nonrepudiation
• Understand methods of cryptanalytic attacks
• Employ cryptography in network security (e.g., SSL)
• Use cryptography to maintain e-mail security (e.g., PGP, S/MIME)
• Understand Public Key Infrastructure (PKI) (e.g., certification authorities, etc.)
• Understand alternatives (e.g., steganography, watermarking)
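The message digest and HMAC items above can be illustrated with Python's standard hashlib and hmac modules. A plain digest detects modification only if the digest itself is protected, while an HMAC binds the digest to a shared secret key; the message and key below are arbitrary example values chosen for this sketch.

    import hashlib
    import hmac

    message = b"Transfer 100 units to account 4711"      # example message
    print("SHA-256      :", hashlib.sha256(message).hexdigest())

    key = b"shared-secret-key"                            # example shared secret
    tag = hmac.new(key, message, hashlib.sha256).hexdigest()
    print("HMAC-SHA-256 :", tag)   # verifiable only by holders of the key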
4 — Physical (Environmental) Security

Overview

The Physical (Environmental) Security domain addresses the threats, vulnerabilities, and countermeasures that can be utilized to physically protect an enterprise's resources and sensitive information. These resources include people, the facility in which they work, and the data, equipment, support systems, media, and supplies they utilize.

The candidate will be expected to know the elements involved in choosing a secure site, its design and configuration, and the methods for securing the facility against unauthorized access, theft of equipment and information, and the environmental and safety measures needed to protect people, the facility, and its resources.

Key Areas of Knowledge

• Participate in site and facility design considerations
• Support the implementation and operation of perimeter security
• Support the implementation and operation of interior security
• Support the implementation and operation of operations/facility security
• Participate in the protection and securing of equipment
5 — Security Architecture and Design

Overview

The Security Architecture and Models domain contains the concepts, principles, structures, and standards used to design, implement, monitor, and secure operating systems, equipment, networks, applications, and those controls used to enforce various levels of confidentiality, integrity, and availability.

The candidate should understand security models in terms of confidentiality, integrity, information flow, and commercial versus government requirements; system models in terms of the Common Criteria, international (ITSEC), United States Department of Defense (TCSEC), and Internet (IETF IPSec) criteria; technical platforms in terms of hardware, firmware, and software; and system security techniques in terms of preventive, detective, and corrective controls.

Key Areas of Knowledge

• Understand the theoretical concepts of security models
• Understand the components of information systems evaluation models
• Understand security capabilities of computer systems
• Understand how the security architecture is affected by:
  1 Covert channels
  2 State attacks (e.g., time of check/time of use) (see the sketch following this list)
  3 Emanations
  4 Maintenance hooks and privileged programs
  5 Countermeasures
  6 Assurance, trust, and confidence
  7 Trusted computer base (TCB), reference monitors, and kernels
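The time of check/time of use (TOCTOU) state attack named in item 2 above is easiest to see in code. In this POSIX-oriented sketch a program checks a file and then uses it in two separate steps, leaving a race window in which an attacker could swap the file (for example, with a symbolic link); the path and data are hypothetical.

    import os

    REPORT = "/tmp/report.txt"   # hypothetical path the attacker can also manipulate

    # Vulnerable pattern: the condition verified here ...
    if os.access(REPORT, os.W_OK):
        # ... may no longer hold by the time the file is opened here.
        with open(REPORT, "a") as fh:
            fh.write("privileged data\n")

    # Safer pattern: perform a single operation and handle failure,
    # refusing to follow symbolic links rather than checking first.
    try:
        fd = os.open(REPORT, os.O_WRONLY | os.O_APPEND | os.O_NOFOLLOW)
    except OSError:
        pass  # access denied or a symlink was detected
    else:
        os.write(fd, b"privileged data\n")
        os.close(fd)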
6 — Business Continuity and Disaster Recovery Planning

Overview

The Business Continuity Planning (BCP) and Disaster Recovery Planning (DRP) domain addresses the preservation of the business in the face of major disruptions to normal business operations. BCP and DRP involve the preparation, testing, and updating of specific actions to protect critical business processes from the effect of major system and network failures.

Business Continuity Plans counteract interruptions to business activities and should be available to protect critical business processes from the effects of major failures or disasters. They deal with natural and man-made events and the consequences if these are not dealt with promptly and effectively.

Business Impact Assessment determines the proportion of impact an individual business unit would sustain subsequent to a significant interruption of computing or telecommunication services. These impacts may be financial, in terms of monetary loss, or operational, in terms of inability to deliver.

Disaster Recovery Plans contain procedures for emergency response, extended backup operation, and post-disaster recovery should a computer installation experience a partial or total loss of computer resources and physical facilities. The primary objective of the Disaster Recovery Plan is to provide the capability to process mission-essential applications, in a degraded mode, and return to a normal mode of operation within a reasonable amount of time.

The candidate will be expected to know the difference between business continuity planning and disaster recovery; business continuity planning in terms of project scope and planning, business impact analysis, recovery strategies, recovery plan development, and implementation. The candidate should understand disaster recovery in terms of recovery plan development, implementation, and restoration.

Key Areas of Knowledge

• Develop and document project scope and plan
• Conduct Business Impact Analysis
• Develop recovery strategy
• Incorporate the following elements into the plan:
  1 Emergency response
  2 Notification (e.g., calling tree)
  3 Personnel safety
  4 Communications
  5 Public utilities
  6 Logistics and supplies
  7 Fire and water protection
  8 Business resumption planning
  9 Damage assessment
  10 Restoration (e.g., cleaning, data recovery, relocation to primary site)
• Training
• Plan maintenance

7 — Telecommunications and Network Security

Overview

The Telecommunications and Network Security domain encompasses the structures, transmission methods, transport formats, and security measures used to provide integrity, availability, authentication, and confidentiality for transmissions over private and public communications networks and media.

The candidate is expected to demonstrate an understanding of communications and network security as it relates to voice communications; data communications in terms of local area, wide area, and remote access; Internet/intranet/extranet in terms of firewalls, routers, and TCP/IP; and communications security management and techniques in terms of preventive, detective, and corrective measures.

In today's global marketplace, the ability to communicate with others is a mandatory requirement. The data communications domain encompasses the network structure, transmission methods, transport formats, and security measures used to maintain the integrity, availability, authentication, and confidentiality of the transmitted information over both private and public communication networks. The candidate is expected to demonstrate an understanding of communications and network security as it relates to data communications in local area and wide area networks; remote access; Internet/intranet/extranet configurations; use of firewalls, network equipment, and protocols (such as TCP/IP); VPNs; and techniques for preventing and detecting network-based attacks.

Key Areas of Knowledge

• Establish secure data communications
• Establish secure multimedia communications
• Develop and maintain secure networks
• Prevent attacks and control potential attack threats (e.g., Malicious Code, Flooding, Spamming)
• Remote access protocols (e.g., CHAP, EAP) (see the sketch following this list)
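As an illustration of the CHAP item above (defined in RFC 1994): the authenticator sends a random challenge, and the peer returns an MD5 hash of the CHAP identifier, the shared secret, and that challenge, so the secret itself never crosses the link. The identifier, secret, and challenge below are made-up values for this sketch.

    import hashlib
    import os

    def chap_response(identifier, secret, challenge):
        # CHAP response = MD5(identifier || secret || challenge), per RFC 1994
        return hashlib.md5(bytes([identifier]) + secret + challenge).digest()

    secret     = b"shared-secret"     # configured on both peer and authenticator
    challenge  = os.urandom(16)       # random challenge from the authenticator
    identifier = 1                    # one-octet identifier for this exchange

    peer_reply = chap_response(identifier, secret, challenge)
    expected   = chap_response(identifier, secret, challenge)  # authenticator's own check
    print("authenticated:", peer_reply == expected)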
8 — Application Security

Overview

Applications and systems development security refers to the controls that are included within systems and applications software and the steps used in their development. Applications refer to agents, applets, software, databases, data warehouses, and knowledge-based systems. These applications may be used in distributed or centralized environments.

The candidate should fully understand the security and controls of the systems development process, system life cycle, application controls, change controls, data warehousing, data mining, knowledge-based systems, program interfaces, and concepts used to ensure data and application integrity, security, and availability.

Key Areas of Knowledge

• Understand the role of security in the system life cycle
• Understand the application environment and security controls
• Understand databases and data warehousing and protect against vulnerabilities and threats
• Understand application and system development security for knowledge-based systems (e.g., expert systems)
• Understand application and system vulnerabilities and threats

9 — Operations Security

Overview

Operations Security is used to identify the controls over hardware, media, and the operators with access privileges to any of these resources. Audit and monitoring are the mechanisms, tools, and facilities that permit the identification of security events and subsequent actions to identify the key elements and report the pertinent information to the appropriate individual, group, or process.

The candidate will be expected to know the resources that must be protected, the privileges that must be restricted, the control mechanisms available, the potential for abuse of access, the appropriate controls, and the principles of good practice.

Key Areas of Knowledge

• Apply the following security concepts to activities:
  1 Need-to-know/least privilege
  2 Separation of duties and responsibilities
  3 Monitor special privileges (e.g., operators, administrators)
  4 Job rotation
  5 Marking, handling, storing, and destroying of sensitive information and media
  6 Record retention
  7 Backup critical information
  8 Antivirus management
  9 Remote working
  10 Malware management
• Employ resource protection
• Handle violations, incidents, and breaches, and report when necessary
• Support high availability (e.g., fault tolerance, Denial of Service prevention)
• Implement and support patch and vulnerability management
• Ensure administrative management and control
• Understand configuration management concepts (e.g., hardware/software)
• Respond to attacks and other vulnerabilities (e.g., spam, virus, spyware, phishing)
10 — Legal, Regulations, Compliance and Investigations

Overview

The Legal, Regulations, Compliance and Investigations domain addresses computer crime laws and regulations; the investigative measures and techniques that can be used to determine if a crime has been committed and the methods to gather evidence if it has; as well as the ethical issues and code of conduct for the security professional. Incident handling provides the ability to react quickly and efficiently to malicious technical threats or incidents.

The candidate will be expected to know the methods for determining whether a computer crime has been committed; the laws that would be applicable for the crime; laws prohibiting specific types of computer crime; methods to gather and preserve evidence of a computer crime; investigative methods and techniques; and ways in which RFC 1087 and the (ISC)2 Code of Ethics can be applied to resolve ethical dilemmas.

Key Areas of Knowledge

• Understand common elements of international laws that pertain to information systems security
• Understand and support investigations
• Understand forensic procedures
References

(ISC)2 does not intend that candidates purchase and read all of the books and articles listed in this reference list. Since most of the information tested in the examination pertains to a common body of knowledge, this additional information serves only as a supplement to one's understanding of basic knowledge. A reference list is not intended to be inclusive but is provided to allow flexibility. The candidate is encouraged to supplement his or her education and experience by reviewing other resources and finding information in areas which he or she may consider himself or herself not as skilled or experienced. (ISC)2 does not endorse any particular text or author. Although the list may include more than one reference that covers a content area, one such reference may be enough. The candidate may also have resources available that are not on the list but which will adequately cover the content area. The list does not represent the only body of information to be used as study material. Questions in the examination are also developed from information gained through practical experience. This reference list is not intended to be all-inclusive, but rather, a useful list of references used to support the test question development process. Use of the references does not guarantee successful completion of the test.

Below is the suggested reference list:

(ISC)2 Code of Ethics
An Overview of SSL V2 (Adam Shostack)
Applied Cryptography (Schneier)
Black's Law Dictionary (Henry C. Black)
Building a Secure Computer System (Morrie Gasser)
CCTV Surveillance: Video Practices and Technology (Herman Kruegle)
CERT Guide to System and Network Security Practices (Julia Allen)
Cisco TCP/IP (Lewis)
Commonsense Computer Security (Smith, Martin)
Computer & Communications Security (James A. Cooper)
Computer and Information Ethics (Weckert & Adeney)
Computer Audit, Control, and Security (Robert R. Moeller)
Computer Crime: A Crime Fighter's Handbook (Icove/Seger/VonStorch)
Computer Ethics and Society (M. David Ermann)
Computer Networks: Protocols, Standards, and Interfaces (Black)
Computer Security (John M. Carroll)
Computer Security Basics (Russell & Gangemi)
Computer Security Handbook (Hoyt)
Computer Security Management (D. B. Parker)
Control and Security of Information Systems (Fites, Philip E.; Kratz, Martin P. J.; Brebner, Alan F.)
Counter Hack (Ed Skoudis)
Cryptography and Data Security (D. R. Denning)
Cryptography & Network Security (W. Stallings)
Cryptography: A New Dimension in Data Security (Carl H. Meyer and Stephen M. Matyas)
Data Security & Controls (Edward R. Buck)
Data Security Management: Passwords, Userids and Security Codes (Steven J. Ross)
Database Security and Integrity (Fernandez, Summers, and Wood)
Defending Your Digital Assets Against Hackers (Nichols)
Designing Network Security (Merike Kaeo)
Disaster Planning & Recovery (Alan Levitt)
Disaster Recovery Planning (Jon Toigo)
Effective Physical Security (Lawrence Fennelly)
E-Mail Security (B. Schneier)
Encyclopedia of Computer Science and Engineering (A. Ralston and E. Reilly)
Fighting Computer Crime (Parker)
Fire Protection Handbook (National Fire Protection Association)
Fundamentals of Criminal Investigation (Charles E. O'Hara)
Hacker Proof (Lars Klander)
Hacking Exposed (Osborne)
Handbook of Applied Cryptography (Alfred J. Menezes, Paul C. van Oorschot, Scott A. Vanstone)
Handbook of EDP Auditing (Michael A. Murphy and Xenia Ley Parker)
Handbook of Information Security Management (Krause/Tipton)
Handbook of Information Security Management (Ruthberg/Tipton)
Handbook of Personal Data Protection (W. Madsen)
Immediate Response (Schultz, P.)
Implementing Internet Security (Cooper, Goggans, Halvey, Hughes, Morgan, Siyan, Stallings, Stephenson)
Incident Response (Mandia)
Information Security Dictionary of Concepts, Standards and Terms (William Caelli, Dennis Longley)
Information Systems Security: A Practitioner's Reference (Fites and Kratz)
Interconnections: Bridges and Routers (Radia Perlman)
Internet Cryptography (Smith)
Intrusion Detection (Escamilla)
ISP Liability Survival Guide (Timothy D. Casey)
Kahn on Codes: Secrets of the New Cryptology (David Kahn)
Management Strategies for Computer Security (William E. Perry)
Managing Information Security: A Program for the Electronic Information Age (James A. Schweitzer)
Mastering Network Security (Brenton)
Network Security (Fred Simmons)
Network Security: A Beginner's Guide (Eric Maiwald)
OSI Reference Model (International Standards Organization)
PC Security and Virus Protection Handbook (Pamela Kane)
PKI (Andrew Nash)
Practical Unix & Internet Security (Garfinkel/Spafford)
Pretty Good Privacy (Garfinkel)
Principles of Security Management (Healy, Richard J. and Walsh, Dr. Timothy J.)
Risk Assessment and Management (Will Ozier)
Secure Computing: Threats and Safeguards (Rita Summers)
Security, Accuracy and Privacy in Computer Systems (Martin, James)
Security Architecture for the Internet Protocol (RFC 2401) (Kent/Atkinson)
Security Engineering (Anderson)
Security ID Systems and Locks: The Book of Electronic Access Control (Joel Konicek & Karen Little)
Security in Computing (Charles P. Pfleeger)
Security of Information and Data (Daler, Torgeir)
Survey of Risk Assessment Methodologies (Peter S. Browne)
Surviving Security (Andress)
The NCSA Guide to PC and LAN Security (Stephen Cobb)
The Point-to-Point Tunneling Protocol Technical Specification (Gurdeep Singh Pall, Kory Hamzeh, William Verthein, Jeff Taarud, Andrew Little)
The Process of Network Security (Wadlow)
Tokens: A Comparison of Password Generators (Hootman)
Top-Down Network Design (Oppenheimer, Priscilla)
Trade Knowledge (R. Sanovic & A. Terwilliger)
Underground Guide to Computer Security (M. Alexander)
Voice and Data Security (Archer, Core)
Voice Network Fraud (Ray Horak)
Web Publishing with HTML 4 (Lemay)
Web Security (Tiwana)
Web Security & Commerce (Simson Garfinkel with Gene Spafford)
Web Security, Privacy and Commerce (Simson Garfinkel)
Web Security Sourcebook (Rubin)
Wireless Security (Merritt)
Sample Questions

1. Which one of the following is the MOST important security consideration when selecting a new computer facility?
(A) Local law enforcement response times
(B) Adjacent to competitors' facilities
(C) Aircraft flight paths
(D) Utility infrastructure
Answer — D

2. Which one of the following describes a SYN flood attack?
(A) Rapid transmission of Internet Relay Chat (IRC) messages
(B) Creating a high number of half-open connections
(C) Disabling the Domain Name Service (DNS) server
(D) Excessive list linking of users and files
Answer — B

3. The typical function of Secure Sockets Layer (SSL) in securing Wireless Application Protocol (WAP) is to protect transmissions
(A) Between the WAP gateway and the wireless device.
(B) Between the web server and WAP gateway.
(C) From the web server to the wireless device.
(D) Between the wireless device and the base station.
Answer — B

GENERAL EXAMINATION INFORMATION

1. General Information. The doors to all examination rooms will open at 8:00 a.m. Examination instructions will begin promptly at 8:30 a.m. All examinations will begin at approximately 9:00 a.m. The CISSP® exam will end at approximately 3:00 p.m. All other exams will end at approximately 12:00 noon. Please note there will be no lunch break during the testing period of 9:00 a.m. to 3:00 p.m. However, you are permitted to bring a snack with you. You may, at your option, take a break and eat your snack at the back of the examination room. No additional time will be allotted for breaks.

2. Examination Admittance. Please arrive at 8:00 a.m. when the doors open. Please bring your admission letter to the examination. In order to be admitted, a photo identification is also required. You will not be admitted without proper identification. The only acceptable forms of identification are a driver's license, government-issued identification card, or passport. No other written forms of identification will be accepted.

3. Examination Security. Failure to follow oral and written instructions will result in your application being voided and forfeiture of your application fee. Conduct that results in a violation of security or disrupts the administration of the examination could result in the confiscation of your test and dismissal from the examination. In addition, your examination will be considered void and will not be scored. Examples of misconduct include, but are not limited to, the following: writing on anything other than designated examination materials, writing after time is called, looking at another candidate's examination materials, talking with other candidates at any time during the examination period, failing to turn in all examination materials before leaving the testing room. You must not discuss or share reference materials or any other examination information with any candidate during the entire examination period. You are particularly cautioned not to do so after you have completed the exam and checked out of the test room, as other candidates in the area might be taking a break and still not have completed the examination. You may not attend the examination only to review or audit test materials. You may not copy any portion of the examination for any reason. No examination materials may leave the test room under any circumstances,
and all examination materials must be turned in and accounted for before leaving the testing room. No unauthorized persons will be admitted into the testing area. Please be further advised that all examination content is strictly confidential. You may only communicate about the test, or questions on the test, using the appropriate comment forms provided by the examination staff at the test site. At no other time, before, during, or after the examination, may you communicate orally, electronically, or in writing with any person or entity about the content of the examination or individual examination questions.

4. Reference Material. Candidates writing on anything other than examination materials distributed by the proctors will be in violation of the security policies above. Reference materials are not allowed in the testing room. Candidates are asked to bring as few personal and other items as possible to the testing area. Hard copy language translation dictionaries are permitted for the examination, should you choose to bring one to assist you with language conversions. Electronic dictionaries will not be permitted under any circumstances. The Examination Supervisor will fully inspect your dictionary at check-in. Your dictionary may not contain any writing or extraneous materials of any kind. If the dictionary contains writing or other materials or papers, it will not be permitted in the examination room. Additionally, you are not permitted to write in your dictionary at any time during the examination, and it will be inspected a second time prior to dismissal from the examination. Finally, (ISC)2 takes no responsibility for the content of such dictionaries or interpretations of the contents by a candidate.

5. Examination Protocol. While the site climate is controlled to the extent possible, be prepared for either warm or cool temperatures at the testing center. Cellular phones and beepers are prohibited in the testing area. The use of headphones inside the testing area is prohibited. Electrical outlets will not be available for any reason. Earplugs for sound suppression are allowed. No smoking or use of tobacco will be allowed inside the testing area. Food and drinks are only allowed in the snack area located at the rear of the examination room. You must vacate the testing area after you have completed the examination. If you require special assistance, you must contact SMT at least one week in advance of the examination date and appropriate arrangements will be made. Due to limited parking facilities at some sites, please allow ample time to park and reach the testing area.

6. Admission Problems. A problem table for those candidates who did not receive an admission notice or need other assistance will be available 30 minutes prior to the opening of the doors.
7. Examination Format and Scoring.
– The CISSP® examination consists of 250 multiple choice questions with four (4) choices each.
– The SSCP® exam contains 125 multiple choice questions with four (4) choices each.
– The ISSAP®, ISSEP®, and ISSMP® exams contain 125, 150, and 125 multiple choice questions, respectively, with four (4) choices each.
– The Certification and Accreditation Professional (CAP) exam contains 125 multiple choice questions with four (4) choices each.
There may be scenario-based items that have more than one multiple choice question associated with them. These items will be specifically identified in the test booklet. Each of these exams contains 25 questions which are included for research purposes only. The research questions are not identified; therefore, answer all questions to the best of your ability. Examination results will be based only on the scored questions on the examination. There are several versions of the examination. It is important that each candidate have an equal opportunity to pass the examination, no matter which version is administered. Expert certified information security professionals have provided input as to the difficulty level of all questions used in the examinations. That information is used to develop examination forms that have comparable difficulty levels. When there are differences in the examination difficulty, a mathematical procedure is used to make the scores equal. Because the number of questions required to pass the examination may be different for each version, the scores are converted onto a reporting scale to ensure a common standard. The passing grade required is a scaled score of 700 out of a possible 1000 points on the grading scale.

8. Examination Results. Examination results will normally be released, via U.S. first class mail, within 4 to 6 weeks of the examination date. A comprehensive statistical and psychometric analysis of the score data is conducted for each spring and fall testing cycle prior to the release of scores. A minimum number of candidates must have taken the examination for the analysis to be conducted. Accordingly, depending upon the schedule of test dates for a given cycle, there may be occasions when scores are delayed beyond the 4–6 week time frame in order to complete this critical process. Results WILL NOT be released over the phone. In order to receive your results, your address must be current and any address change must be submitted to SMT in writing.

9. Exam Response Information. Your answer sheet MUST be completed with your name and other information as required. The answer sheet must be used to record all answers to the multiple-choice questions.
Upon completion, you are to wait for the proctor to collect your examination materials. Answers marked in the test booklet will not be counted or graded, and additional time will not be allowed in order to transfer answers to the answer sheet. All marks on the answer sheet must be made with a No. 2 pencil. You must blacken the appropriate circles completely and completely erase any incorrect marks. Only your responses marked on the answer sheet will be considered. An unanswered question will be scored as incorrect. Dress is "business casual" (neat, but certainly comfortable).

Any questions should be directed to:

(ISC)2®
c/o Schroeder Measurement Technologies, Inc.
2494 Bayshore Blvd., Suite 201
Dunedin, FL 34698
(888) 333-4458 (U.S. only)
(727) 738-8657
Appendix C
Glossary 45 CFR — Code of Federal Regulations Title 45 Public Welfare. 802.11 — Family of IEEE standards for wireless LANs first introduced in 1997. The first standard to be implemented, 802.11b, specifies from 1 to 11 Mbps in the unlicensed band using DSSS (direct sequence spread spectrum) technology. The Wireless Ethernet Compatibility Association (WECA) brands it as Wireless Fidelity (Wi-Fi). 802.1X — An IEEE standard for port-based layer two authentications in 802 standard networks. Wireless LANS often use 802.1X for authentication of a user before the user has the ability to access the network. A/S, A.S., or AS — Under HIPAA, see administrative simplification. AAL — ATM adaptation layer. AARP — AppleTalk Address Resolution Protocol. Abduction — A form of inference that generates plausible conclusions (which may not necessarily be true). As an example, knowing that if it is night, then a movie is on television and that a movie is on television, then abductive reasoning allows the inference that it is night. Abend — Acronym for abnormal end of a task. It generally means a software crash. The abnormal termination of a computer application or job because of a non-system condition or failure that causes a program to halt. Ability — Capacity, fitness, or tendency to act in a specified or desired manner. Skill, especially the physical, mental, or legal power to perform a task. ABR — Area border router. Abstraction — The process of identifying the characteristics that distinguish a collection of similar objects; the result of the process of abstraction is a type. AC — Access Control (Token Ring). ACC — Audio Communications Controller.
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® Acceptable risk — The level of residual risk that has been determined to be a reasonable level of potential loss/disruption for a specific IT system. See also total risk, residual risk, and minimum level of protection. Acceptable use policy — A policy that a user must agree to follow to gain access to a network or to the Internet. Acceptance confidence level — The degree of certainty in a statement of probabilities that a conclusion is correct. In sampling, a specified confidence level is expressed as a percentage of certainty. Acceptance inspection — The final inspection to determine whether or not a facility or system meets the specified technical and performance standards. Note: This inspection is held immediately after facility and software testing, and is the basis for commissioning or accepting the information system. Acceptance testing — The formal testing conducted to determine whether a software system satisfies its acceptance criteria, enabling the customer to determine whether to accept the system. Access — The ability of a subject to view, change, or communicate with an object. Typically, access involves a flow of information between the subject and the object. Access control — The process of allowing only authorized users, programs, or other computer system (i.e., networks) to access the resources of a computer system. A mechanism for limiting the use of some resource (system) to authorized users. Access control certificate — ADI in the form of a security certificate. Access control check — The security function that decides whether a subject’s request to perform an action on a protected resource should be granted or denied. Access control decision function (ADF) — A specialized function that makes access control decisions by applying access control policy rules to a requested action, ACI (of initiators, targets, actions, or that retained from prior actions), and the context in which the request is made. Access control decision information (ADI) — The portion (possibly all) of the ACI made available to the ADF in making a particular access control decision. Access control enforcement function (AEF) — A specialized function that is part of the access path between an initiator and a target on each access that enforces the decisions made by the ADF. Access control information (ACI) — Any information used for access control purposes, including contextual information. 776
Glossary Access control list (ACL) — An access control list is the usual means by which access to, and denial of, service is controlled. It is simply a list of the services available, each with a list of the hosts permitted to use the services. Most network security systems operate by allowing selective use of services. Access control mechanisms — Hardware, software, or firmware features and operating and management procedures in various combinations designed to detect and prevent unauthorized access and to permit authorized access to a computer system. Access control policy — The set of rules that define the conditions under which an access may take place. Access controls — The management of permission for logging on to a computer or network. Access list — A catalog of users, programs, or processes and the specifications of the access categories to which each is assigned. Access path — The logical route that an end user takes to access computerized information. Typically, it includes a route through the operating system, telecommunications software, selected application software, and the access control system. Access period — A segment of time, generally expressed on a daily or weekly basis, during which access rights prevail. Access protocol — A defined set of procedures that is adopted at an interface at a specified reference point between a user and a network to enable the user to employ the services or facilities of that network. Access provider (AP) — Provides a user of some network with access from the user’s terminal to that network. This definition applies specifically for the present document. In a particular case, the AP and network operator (NWO) may be a common commercial entity. Access rights — Also called permissions or privileges, these are the rights granted to users by the administrator or supervisor. These permissions can be read, write, execute, create, delete, etc. Access type — The nature of access granted to a particular device, program, or file (e.g., read, write, execute, append, modify, delete, or create). Accident — (1) Technical — any unplanned or unintended event, sequence, or combination of events that results in death, injury, or illness to personnel or damage to or loss of equipment or property (including data, intellectual property, etc.), or damage to the environment. (2) Legal 777
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® — any unpleasant or unfortunate occurrence that causes injury, loss, suffering, or death; an event that takes place without one’s foresight or expectation. Accountability — (1) A security principle stating that individuals must be able to be identified. With accountability, violations or attempted violations can be traced to individuals who can be held responsible for their actions. (2) The ability to map a given activity or event back to the responsible party; the property that ensures that the actions of an entity can be traced to that entity. Accounting — The process of apportioning charges between the home environment, serving network, and user. Accreditation — (1) A program whereby a laboratory demonstrates that something is operating under accepted standards to ensure quality assurance. (2) A management or administrative process of accepting a specific site installation/implementation for operational use based upon evaluations and certifications. (3) A formal declaration by a Designated Approving Authority (DAA) that the AIS is approved to operate in a particular security mode using a prescribed set of safeguards. Accreditation is the official management authorization for operation of an AIS and is based on the certification process as well as other management considerations. The accreditation statement affixes security responsibility with the DAA and shows that due care has been taken for security. (4) Formal declaration by a (DAA) that an information system is approved to operate in a particular security mode using a prescribed set of safeguards at an acceptable level of risk. Accreditation Authority — Synonymous with Designated Approving Authority (DAA). Accreditation boundary — All components of an information system to be accredited by designated approving authority and excluding separately accredited systems, to which the information system is connected. . Accreditation letter — The accreditation letter documents the decision of the authorizing official and the rationale for the accreditation decision and is documented in the final accreditation package, which consists of the accreditation letter and supporting documentation. Accreditation package — A product of the certification effort and the main basis for the accreditation decision. Note: The accreditation package, at a minimum, will include a recommendation for the accreditation decision and a statement of residual risk in operating the system in its environment. Other information included may vary, depending on the system and the DAA. 778
Glossary Accredited — Formally confirmed by an accreditation body as meeting a predetermined standard of impartiality and general technical, methodological, and procedural competence. Accredited Standards Committee (ASC) — An organization that has been accredited by ANSI for the development of American National Standards. Accrediting Authority — Synonymous with Designated Approving Authority (DAA). Accumulator — An area of storage in memory used to develop totals of units or items being computed. Accuracy — A performance criterion that describes the degree of correctness with which a function is performed. ACF — User data protection access control functions. ACG — Ambulatory Care Group. ACH — See automated clearinghouse. ACI — Access control information. ACK — Acknowledgment. Acknowledgment (ACK) — A type of message sent to indicate that a block of data arrived at its destination without error. A negative acknowledgment is called a “NAK.”. ACL — See access control list. ACM — Configuration management assurance class. Acquisition organization — The government organization responsible for developing a system. Acquisition, development, and installation controls — The process of assuring that adequate controls are considered, evaluated, selected, designed, and built into the system during its early planning and development stages and that an ongoing process is established to ensure continued operation at an acceptable level of risk during the installation, implementation, and operation stages. ACR — Abbreviation for acoustic conference room, an enclosure that provides acoustic but not electromagnetic emanations shielding; ACRs are no longer procured; TCRs are systematically replacing them. 779
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® Acrostic — A poem or series of lines in which certain letters, usually the first in each line, form a name, motto, or message when read in sequence. Action — The operations and operands that form part of an attempted access. Action ADI — Action decision information associated with the action. Active object — An object that has its own process; the process must be ongoing while the active object exists. Active system — A system connected directly to one or more other systems. Active systems are physically connected and have a logical relationship to other systems. Active threat — The threat of a deliberate unauthorized change to the state of the system. Active wiretapping — The attachment of an unauthorized device (e.g., a computer terminal) to a communications circuit to gain access to data by generating false messages or control signals or by altering the communications of legitimate users. ActiveX — Microsoft’s Windows-specific non-Java technique for writing applets. ActiveX applets take considerably longer to download than the equivalent Java applets; however, they more fully exploit the features of Windows. Activity monitor — Anti-viral software that checks for signs of suspicious activity, such as attempts to rewrite program files, format disks, etc. Ad blocker — Software placed on a user’s personal computer that prevents advertisements from being displayed on the Web. Benefits of an ad blocker include the ability of Web pages to load faster and the prevention of user tracking by ad networks. Ada — A programming language that allows use of structured techniques for program design; concise but powerful language designed to fill government requirements for real-time applications. Adaptive array (AA) — Continually monitors received signal for interference. The antenna automatically adjusts its directional characteristics to reduce the interference. Also called adaptive antenna array. Adaptive filter — Prompts users to rate products or situations and also monitors users’ actions over time to find out what they like and dislike. Adaptivity — The ability of intelligent agents to discover, learn, and take action independently.
Glossary Add-on security — The retrofitting of protection mechanisms, implemented by hardware, firmware, or software, on a computer system that has become operational. Address — (1) A sequence of bits or characters that identifies the destination and sometimes the source of a transmission. (2) An identification (e.g., number, name, or label) for a location in which data is stored. Address mapping — The process by which an alphabetic Internet address is converted into a numeric IP address, and vice versa. Address mask — A bit mask used to identify which bits in an IP address correspond to the network address and subnet portions of the address. This mask is often referred to as the subnet mask because the network portion of the address can be determined by the class inherent in an IP address. The address mask has ones (1) in positions corresponding to the network and subnet numbers and zeros (0) in the host number positions. Address resolution — A means for mapping network layer addresses onto media-specific addresses. Address Resolution Protocol (ARP) — The Internet protocol used to dynamically map Internet addresses to physical (hardware) addresses on the local area network. Limited to networks that support hardware broadcast. Adequate security — Security commensurate with the risk and magnitude of the harm resulting from the loss, misuse, or unauthorized access to or modification of information. This includes assuring that systems and applications operate effectively and provide appropriate confidentiality, integrity, and availability, through the use of cost-effective management, acquisition, development, installation, operational, and technical controls. ADG — Ambulatory Diagnostic Group. Adjacent channel interference — Interference of a signal caused by signal transmissions of another frequency too close in proximity. ADM — Guidance documents, administrator guidance. Administrative code sets — Code sets that characterize a general business situation, rather than a medical condition or service. Under HIPAA, these are sometimes referred to as nonclinical or nonmedical code sets. Compare to medical code sets. Administrative controls — The actions or controls dealing with operational effectiveness, efficiency, and adherence to regulations and management policies.
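The address mask (subnet mask) entry above can be demonstrated in a few lines: ANDing an address with the mask isolates the network and subnet portion, and the remaining bits identify the host. The address and mask below are arbitrary example values chosen for this sketch.

    def to_int(dotted):
        a, b, c, d = (int(octet) for octet in dotted.split("."))
        return (a << 24) | (b << 16) | (c << 8) | d

    def to_dotted(value):
        return ".".join(str((value >> shift) & 0xFF) for shift in (24, 16, 8, 0))

    ip   = to_int("192.168.10.77")    # example host address
    mask = to_int("255.255.255.0")    # example address mask (ones in network bits)

    print("network portion:", to_dotted(ip & mask))                 # 192.168.10.0
    print("host portion   :", to_dotted(ip & ~mask & 0xFFFFFFFF))   # 0.0.0.77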
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® Administrative security — The management constraints, operational procedures, accountability procedures, and supplemental controls established to provide an acceptable level of protection for sensitive data. Administrative security information — Persistent information associated with entities; it is conceptually stored in the Security Management Information Base. Examples are security attributes associated with users and set up on user account installation, which is used to configure the user’s identity and privileges within the system information configuring a secure interaction policy between one entity and another entity, which is used as the basis for the establishment of operational associations between those two entities. Administrative Services Only (ASO) — An arrangement whereby a selfinsured entity contracts with a third-party administrator (TPA) to administer a health plan. Administrative Simplification (A/S) — Title II, Subtitle F of HIPAA, which gives HHS the authority to mandate the use of standards for the electronic exchange of healthcare data; to specify what medical and administrative code sets should be used within those standards; to require the use of national identification systems for healthcare patients, providers, payers (or plans), and employers (or sponsors); and to specify the types of measures required to protect the security and privacy of personally identifiable healthcare information. This is also the name of Title II, Subtitle F, Part C of HIPAA. ADO — Delivery and operation assurance class. ADSL — Asymmetric digital subscriber line. ADSP — AppleTalk Data Stream Protocol. ADV — Development assurance class. Adversary — Any individual, group, organization, or government that conducts activities, or has the intention and capability to conduct activities, detrimental to critical assets. Advisory sensitivity attributes — User-supplied indicators of file sensitivity that alert other users to the sensitivity of a file so that they may handle it appropriate to its defined sensitivity. Advisory sensitivity attributes are not used by the AIS to enforce file access controls in an automated manner. Adware — Software to generate ads that installs itself on your computer when you download some other (usually free) program from the Web. AEF — Access control enforcement function. 782
Glossary AES — Advanced Encryption Standard, a new encryption standard, whose development and selection was sponsored by NIST, that will support key lengths of 128, 192, and 256 bits. AFEHCT — See Association for Electronic Health Care Transactions. Affiliate programs — Arrangements made between E-commerce sites that direct users from one site to the other and by which, if a sale is made as a result, the originating site receives a commission. Affordability — Extent to which C4I features are cost effective on both a recurring and nonrecurring basis. AFL — Authentication failures. AFP — AppleTalk File Protocol. AGD — Guidance documents assurance class. Agent — In the client/server model, the part of the system that performs information preparation and exchange on behalf of a client or server application. Aggregate information — Information that may be collected by a Web site but is not “personally identifiable” to you. Aggregate information includes demographic data, domain names, Internet provider addresses, and Web site traffic. As long as none of these fields is linked to a user’s personal information, the data is considered aggregate. Aggregation — A relation, such as CONSISTS OF or CONTAINS, between types that defines the composition of a type from other types. Aging — The identification, by date, of unprocessed or retained items in a file. This is usually done by date of transaction, classifying items according to ranges of data. AH — Authentication Header. Alarm collector function — A function that collects the security alarm messages, translates them into security alarm records, and writes them to the security alarm log. Alarm examiner function — A function that interfaces with a security alarm administrator. ALARP — As low as reasonably practical; a method of correlating the likelihood of a hazard and the severity of its consequences to determine risk exposure acceptability or the need for further risk reduction. ALC — Life-cycle support assurance class. ALE — Annual loss expectancy. 783
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® Algorithm — A computing procedure designed to perform a task such as encryption, compression, or hashing. Aliases — Used to reroute browser requests from one URL to another. Alphabetic test — The check on whether an element of data contains only alphabetic or blank characters. Alphanumeric — A character set that includes numeric digits, alphabetic characters, and other special symbols. Alternate mark inversion (AMI) — The line coding format in T-1 transmission systems whereby successive 1s (marks) are alternately inverted (sent with polarity opposite that of the preceding mark). Alternating current (AC) — Typically, the 120-V electricity delivered by the local power utility to the three-pin power outlet in the wall. The polarity of the current alternates between plus and minus, 60 times per second. AM — Amplitude modulation. Ambulatory Payment Class (APC) — A payment type for outpatient PPS claims. Amendment — See amendments and corrections. Amendments and corrections — In the final Privacy Rule, an amendment to a record would indicate that the data is in dispute while retaining the original information, whereas a correction to a record would alter or replace the original record. American National Standards (ANS) — Standards developed and approved by organizations accredited by ANSI. American National Standards Institute (ANSI) — The agency that recommends standards for computer hardware, software, and firmware design and use. American Registry for Internet Numbers (ARIN) — A nonprofit organization established for the purpose of administration and registration of Internet Protocol (IP) numbers to the geographical areas currently managed by Network Solutions (InterNIC). Those areas include, but are not limited to North America, South America, South Africa, and the Caribbean. American Society for Testing and Materials (ASTM) — A s t a n d a rd s group that has published general guidelines for the development of standards, including those for healthcare identifiers. ASTM Committee E31 on Healthcare Informatics develops standards on information used within healthcare. 784
Glossary American Standard Code for Information Interchange (ASCII) — A byte-oriented coding system based on an 8-bit code and used primarily to format information for transfer in a data communications environment. AMI — Alternate Mark Inversion (T1/E1). AMIA — See American Medical Informatics Association. Ampere (amp) — A unit of measurement for electric current. One volt of potential across a 1-ohm impedance causes a current flow of 1 ampere. Amplitude modulation (AM) — The technique of varying the amplitude or wavelength of a carrier wave in direct proportion to the strength of the input signal while maintaining a constant frequency and phase. AMT — Protection of the TSF, underlying abstract machine test. Analog — A voice transmission mode that is not digital in which information is transmitted in its original form by converting it to a continuously variable electrical signal. Analysis and design phase — The phase of the systems development life cycle in which an existing system is studied in detail and its functional specifications are generated. Anamorphosis — An image or the production of an image that appears distorted unless it is viewed from a special angle or with a special instrument. Annual Loss Expectancy (ALE) — In risk assessment, the average monetary value of losses per year. ANO — Privacy, anonymity. Anonymity — The state in which something is unknown or unacknowledged. Anonymizer — A service that prevents Web sites from seeing a user’s Internet Protocol (IP) address. The service operates as an intermediary to protect the user’s identity. Anonymous File Transfer Protocol (FTP) — A method for downloading public files using the File Transfer Protocol. Anonymous FTP is called anonymous because users do not provide credentials before accessing files from a particular server. In general, users enter the word “anonymous” when the host prompts for a username; anything can be entered for the password, such as the user’s e-mail address or simply the word “guest.” In many cases, an anonymous FTP site will not even prompt for a name and password. Anonymous Web browsing (AWB) — Services hide your identity from the Web sites you visit. 785
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® ANS — See American National Standards. ANSI — See American National Standards Institute. Antenna gain — The measure in decibels of how much more power an antenna will radiate in a certain direction with respect to that which would be radiated by a reference antenna. Anti-air warfare (AAW) — A primary warfare mission area dealing with air superiority. Anti-submarine warfare (ASW) — A primary warfare mission area aimed against the subsurface threat. Anti-surface warfare (ASUW) — A primary warfare mission area dealing with sea-going, surface platforms. Anti-virus software — Applications that detect prevent and possibly remove all known viruses from files located in a microcomputer hard drive. APC — See Ambulatory Payment Class. APE — Protection profile evaluation assurance class. API (Application programming interface) — The interface between the application software and the application platform, across which all services are provided. The application programming interface is primarily in support of application portability, but system and application interoperability are also supported by a communication API. Applet — A small Java program embedded in an HTML document. Application — Computer software used to perform a distinct function. Also used to describe the function itself. Application architects — IT professionals who can design creative technology-based business solutions. Application controls — The transaction and data relating to each computer-based application system. Therefore, they are specific to each such application controls, which may be manual or programmed, are to endure the completeness and accuracy of the records and the validity of the entries made therein resulting from both manual and programmed processing. Examples of application controls include data input validation, agreement of batch controls and encryption of data transmitted. Application generation subsystem — Contains facilities to help you develop transaction-intensive applications. Application layer — The top-most layer in the OSI Reference Model, providing such communication service is invoked through a software package. This layer provides the interface between end users and networks. It 786
Glossary allows use of e-mail and viewing Web pages, along with numerous other networking services. Application objects — Applications and their components that are managed within an object-oriented system. Example operations on such objects are OPEN, INSTALL, MOVE, and REMOVE. Application program interface (API) — A set of calling conventions defining how a service is invoked through a software package. Application programs — Computer software designed for a specific job, such as word processing, accounting, spreadsheet, etc. Application proxy — A type of firewall that controls external access by operating at the application layer. Application firewalls often re-address outgoing traffic so that it appears to have originated from the firewall rather than the internal host. Application service provider (ASP) — Provides an outsourcing service for business software applications. Application software — Software that enables you to solve specific problems or perform specific tasks. APPN — Advanced peer-to-peer networking. Approval to operate — See certification and accreditation. Architecture — The structure or ordering of components in a computational or other system. The classes and the interrelation of the classes define the architecture of a particular application. At another level, the architecture of a system is determined by the arrangement of the hardware and software components. The terms “logical architecture” and “physical architecture” are often used to emphasize this distinction. ARCNET — Developed by Datapoint Corporation in the 1970s; a LAN (local area network) technology that competed strongly with Ethernet, but no longer does. Initially, a computer connected via ARCNET could communicate at 2.5 Mbps, although this technology now supports a throughput of 20 Mbps (compared to current Ethernet at 100 Mbps and 1 Gbps). Arithmetic logic unit (ALU) — A component of the computer’s processing unit in which arithmetic and matching operations are performed. Arithmetic operator — In programming activities, a symbol representing an arithmetic calculation or process. ARP (Address Resolution Protocol) — This is a protocol that resides in the TCP/IP suite of protocols. Its purpose is to associate IP addresses at the network layer with MAC addresses at the data-link layer. ARPA — Advanced Research Projects Agency. 787
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® Array — Consecutive storage areas in memory that are identified by the same name. The elements (or groups) within these storage areas are accessed through subscripts. Artificial intelligence (Al) — A field of study involving techniques and methods under which computers can simulate such human intellectual activities as learning. Artificial neural network (ANN) — Also called a neural network; an artificial intelligence system that is capable of finding and differentiating patterns. AS — Authentication server; part of Kerberos KDC. ASBR — Autonomous system boundary router. ASC — See Accredited Standards Committee. ASCII — American Standard Code for Information Interchange. ASE — Security Target evaluation assurance class. ASIC — Application-specific integrated circuit. ASIS — American Society Industrial Security. ASK — Amplitude shift keying. ASO — See Administrative Services Only. ASP — AppleTalk Session Protocol. ASP/MSP — A third-party provider that delivers and manages applications and computer services, including security services to multiple users via the Internet or virtual private network (VPN). ASPIRE — AFEHCT’s Administrative Simplification Print Image Research Effort workgroup. Assembler language — A computer programming language in which alphanumeric symbols represent computer operations and memory addresses. Each assembler instruction translates into a single machine language instruction. Assembler program — A program language translator that converts assembler language into machine code. Assertion — Explicit statement in a system security policy that security measures in one security domain constitute an adequate basis for security measures (or lack of them) in another. Assessment — (1) An effort to gain insight into system capabilities and limitations. May be conducted in many ways, including a paper analysis, laboratory type testing, or even through limited testing with operationally 788
Glossary representative users and equipment in an operational environment. Not sufficiently rigorous in and of itself to allow a determination of effectiveness and suitability to be made for purposes of operational testing. (2) Surveys and Inspections; an analysis of the vulnerabilities of an AIS. Information acquisition and review process designed to assist a customer to determine how best to use resources to protect information in systems. Asset — Any person, facility, material, information, or activity that has a positive value to an owner. Association Control Service Element (ACSE) — Part of the application layer of the OSI Model. ASCE provides the means to exchange authentication information coming from the Specific Application Service Element (SASE) of the OSI Model. Association for Electronic Health Care Transactions (AFEHCT) — A n organization that promotes the use of EDI in the healthcare industry. Association-security-state — The collection of information that is relevant to the control of communications security for a particular applicationassociation. Assumption of risk — A plaintiff may not recover for an injury to which he assents; that is, that a person may not recover for an injury received when he voluntarily exposes himself to a known and appreciated danger. The requirements for the defense … are that: (1) the plaintiff has knowledge of facts constituting a dangerous condition, (2) he knows that the condition is dangerous, (3) he appreciates the nature or extent of the danger, and (4) he voluntarily exposes himself to the danger. Secondary assumption of risk occurs when an individual voluntarily encounters known, appreciated risk without an intended manifestation by that individual that he consents to relieve another of his duty. Assurance — (1) Grounds for confidence that the other four security goals (integrity, availability, confidentiality, and accountability) have been adequately met by a specific implementation. “Adequately met” includes the following: functionality that performs correctly, sufficient protection against unintentional errors (by users or software), and sufficient resistance to malicious penetration or by-pass. (2) A measure of confidence that the security features and architecture of an AIS accurately mediate and enforce the security policy. (3) A measure of confidence that the security features and architecture of an AIS accurately mediate and enforce the security policy. Note: Assurance refers to a basis for believing that the objective and approach of a security mechanism or service will be achieved. Assurance is generally based on factors such as analysis involving theory, testing, software engineering, validation, and verification. Lifecycle assurance requirements provide a framework for secure system design, implementation, and maintenance. The level of assurance that a de789
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® velopment team, certifier, or accreditor has about a system reflects the confidence that they have that the system will be able to enforce its security policy correctly during use and in the face of attacks. Assurance may be provided through four means: 1. the way the system is designed and built, 2. analysis of the system description for conformance to requirement and for vulnerabilities, 2. testing the system itself to determine its operating characteristics, and 4. operational experience. Assurance is also provided through complete documentation of the design, analysis, and testing. ASTM — See American Society for Testing and Materials. Asymmetric cryptosystem — This is an information system utilizing an algorithm or series of algorithms which provide a cryptographic key pair consisting of a private key and a corresponding public key. The keys of the pair have the properties that (1) the public key can verify a digital signature that the private key creates, and (2) it is computationally infeasible to discover or derive the private key from the public key. The public key can therefore be disclosed without significantly risking disclosure of the private key. This can be used for confidentiality as well as for authentication. Asymmetric key (Public Key) — A cipher technique whereby different cryptographic keys are used to encrypt and decrypt a message. Asynchronous — A variable or random time interval between successive characters, blocks, operations, or events. Asynchronous data transmission provides variable intercharacter time but fixed interbit time within characters. Asynchronous Transfer Mode — ATM is a high-bandwidth, low-delay switching and multiplexing technology. It is a data-link layer protocol. This means that it is a protocol-independent transport mechanism. ATM allows very high-speed data transfer rates at up to 155 Mbps. Data is transmitted in the form of 53-byte units called cells. Each cell consists of a 5-byte header and a 48-byte payload. The term “asynchronous” in this context refers to the fact that cells from any one particular source need not be periodically spaced within the overall cell stream. That is, users are not assigned a set position in a recurring frame as is common in circuit switching. ATM can transport audio/video/data over the same connection at the same time and provide QoS (Quality of Service) for this transport. ATD — Identification and authentication user attribute definition. ATE — Tests assurance class. ATM — See Asynchronous Transfer Mode. Atomicity — The assurance that an operation either changes the state of all participating objects consistent with the semantics of the operation or changes none at all. 790
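The Asymmetric cryptosystem entry above can be illustrated with the third-party Python cryptography package (an assumption; the glossary does not prescribe any library): an RSA key pair is generated, the private key signs a message, and the public key verifies the signature, matching property (1) of the definition.

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# Generate a key pair: the private key stays secret, the public key may be disclosed.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

message = b"wire transfer: $100 to account 42"   # example message

# Sign with the private key ...
signature = private_key.sign(
    message,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)

# ... and verify with the corresponding public key; verify() raises an
# InvalidSignature exception if the message or signature has been altered.
public_key.verify(
    signature,
    message,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)
print("signature verified")
```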
Glossary Atoms — The smallest particle of an element that can exist alone or in combination. ATP — AppleTalk Transaction Protocol. Attenuation — The decrease in power of a signal, light beam, or light wave, either absolutely or as a fraction of a reference value. The decrease usually occurs as a result of absorption, reflection, diffusion, scattering, deflection, or dispersion from an original level and usually not as a result of geometric spreading. Attribute — A characteristic defined for a class. Attributes are used to maintain the state of the object of a class. Values can be connected to objects via the attributes of the class. Typically, the connected value is determined by an operation with a single parameter identifying the object. Attributes implement the properties of a type. Audio masking — A condition where one sound interferes with the perception of another sound. Audio output — Voice synthesizers that create audible signals resembling a human voice out of computer-generated output. Audio response system — The method of delivering output by using audible signals and transmitters that simulate a spoken language. Audit — An independent review and examination of system records and activities that test for the adequacy of system controls, ensure compliance with established policy and operational procedures, and recommend any indicated changes in controls, policy, and procedures. Audit authority — The manager responsible for defining those aspects of a security policy applicable to maintaining a security audit. Audit event detector function — A function that detects the occurrence of security-relevant events. This function is normally an inherent part of the functionality implementing the event. Audit recorder function — A function that records the security-relevant messages in a security audit trail. Audit review — The independent review and examination of records and activities to assess the adequacy of system controls, to ensure compliance with established policies and operational procedures, and to recommend necessary changes in controls, policies or procedures. Audit risk — The probable unfavorable monetary effect related to the occurrence of an undesirable event or condition. Audit trail — A chronological record of system activities that is sufficient to enable the reconstruction, review, and examination of each event in a transaction from inception to output of final results. 791
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® Audit trail analyzer function — A function that checks a security audit trail in order to produce, if appropriate, security alarm messages. Audit trail archiver function — A function that archives a part of the security audit trail. Audit trail collector function — A function that collects individual audit trail records into a security audit trail. Audit trail examiner function — A function that builds security reports out of one or more security audit trails. Audit trail provider function — A function that provides security audit trails according to some criteria. Audit trail/log — Application or system programs, when activated, automatically monitor system activity in terms of online users, accessed programs, periods of operation, file accesses, etc. AUI — Attachment unit interface. AURP — AppleTalk Update-Based Routing Protocol. AUT — CM automation. Authenticate — To verify the identity of a user, user device, or other entity, or the integrity of data stored, transmitted, or otherwise exposed to possible unauthorized modification in an automated information system, or establish the validity of a transmitted message. Authenticated identity — An identity of a principal that has been assured through authentication. Authentication — The act of identifying or verifying the eligibility of a station, originator, or individual to access specific categories of information. Typically, a measure designed to protect against fraudulent transmissions by establishing the validity of a transmission, message, station, or originator. Authentication certificate — Authentication information in the form of a security certificate that may be used to assure the identity of an entity guaranteed by an authentication authority. Authentication exchange — A sequence of one or more transfers of exchange authentication information (AI) for the purposes of performing an authentication. Authentication header — An IPSec protocol that provides data origin authentication, packet integrity, and limited protection from replay attacks. 792
Glossary Authentication information (AI) — Information used to establish the validity of a claimed identity. Authentication initiator — The entity that starts an authentication exchange. Authentication method — Method for demonstrating knowledge of a secret. The quality of the authentication method, its strength is determined by the cryptographic basis of the key Architecture for Public-Key Infrastructure (APKI) Draft distribution service on which it is based. A symmetric key-based method, in which both entities share common authentication information, is considered to be a weaker method than an asymmetric key-based method, in which not all the authentication information is shared by both entities. Authenticity — (1) The ability to ensure that the information originates or is endorsed from the source which is attributed to that information. (2) The service that ensures that system events are initiated by and traceable to authorized entities. It is composed of authentication and nonrepudiation. Authorization — The granting of right of access to a user, program, or process. Authorization policy — A set of rules, part of an access control policy, by which access by security subjects to security objects is granted or denied. An authorization policy may be defined in terms of access control lists, capabilities or attributes assigned to security subjects, security objects or both. Authorize processing — See Accreditation. Authorized access list — A list developed and maintained by the information systems security officer of personnel who are authorized unescorted access to the computer room. Authorizing official — Official with the authority to formally assume responsibility for operating an information system at an acceptable level of risk to agency operations (including mission, functions, image, or reputation), agency assets, or individuals. Autofilter function — Filters a list and allows you to hide all the rows in a list except those that match criteria you specify. Automated Clearinghouse (ACH) — See Health Care Clearinghouse. Automated Information System (AIS) — (1) An assembly of computer hardware, software, firmware, and related peripherals configured to collect, create, compute, disseminate, process, store, and control data or information. (2) Information systems that manipulate, store, transmit, or 793
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® receive information, and associated peripherals such as input/output and data storage and retrieval devices and media. Automated information system security — Synonymous with information technology security. Automated information system security program — Synonymous with IT security program. Automated security monitoring — The use of automated procedures to ensure that the security controls implemented within a computer system or network are not circumvented or violated. Automatic call distribution (ACD) — A specialized phone system originally designed simply to route incoming calls to all available personnel so that calls are evenly distributed. An ACD recognizes and answers an incoming call, looks in its database for instructions on what to do with that call, and sends the call to a recording or voice response unit or to an available operator. Automatic speech recognition (ASR) — A system that not only captures spoken words, but also distinguishes word groupings to form sentences. Autonomy — The ability of an intelligent agent to act without your telling it every step to take. AVA — Vulnerability assessment assurance class. Availability — The property of being accessible and usable upon demand by an authorized entity. Availability formula — This formula is used to calculate how reliable the equipment that is being installed will be for a particular application. Awareness — Awareness programs set the stage for training by changing organizational attitudes toward realization of the importance of security and the adverse consequences of its failure. [NIST SP 800-18]. Awareness, training, and education controls — Awareness programs that set the stage for training by changing organizational attitudes to realize the importance of security and the adverse consequences of its failure; training that teaches people the skills that will enable them to perform their jobs more effectively; and education that is targeted for IT security professionals and focuses on developing the ability and vision to perform complex, multidisciplinary activities. B2B marketplace (busin ess-to-business marketplace) — A n I n t e r n e t based service that brings together many buyers and sellers. Backbone — The primary connectivity mechanism of a hierarchical distributed system. All systems that have connectivity to an intermediate system on the backbone are assured of connectivity to each other. 794
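The Availability formula entry above does not spell the formula out. A commonly used steady-state form, standard in reliability practice rather than a quotation from this glossary, is availability = MTBF / (MTBF + MTTR). The figures in the sketch below are invented.

```python
def availability(mtbf_hours, mttr_hours):
    """Steady-state availability: the fraction of time the equipment is operational."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

# Example: a disk array with a 2,000-hour mean time between failures
# and a 4-hour mean time to repair.
print(f"{availability(2000, 4):.4%}")   # 99.8004%
```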
Glossary Backbone network — A network that interconnects various computer networks and mainframe computers in an enterprise. The backbone provides the structure through which computers communicate. Backdoor — A function built into a program or system that allows unusually high or even full access to the system, either with or without an account in a normally restricted account environment. The backdoor sometimes remains in a fully developed system either by design or accident. (See also trap door.) Backoff — The (usually random) retransmission delay enforced by contentious MAC protocols after a network node with data to transmit determines that the physical medium is already in use. Back-propagation neural network — A neural network trained by someone. Backup and recovery — The ability to recreate current master files using appropriate prior master records and transactions. Backup operation — A method of operation used to complete essential tasks (as identified by risk analysis) subsequent to the disruption of the information processing facility and continuing to do so until the facility is sufficiently restored. Backup procedures — Provisions make for the recovery of data files and program libraries and for the restart or replacement of computer equipment after the occurrence of a system failure or disaster. Backward chaining — A process related to an expert system inference engine that starts with a hypothesis and attempts to confirm that the hypothesis is consistent with information in the knowledge base. Bandwidth — Difference between the highest and lowest frequencies available for network signals. The term is also used to describe the rated throughput capacity of a given network medium or protocol. Banner ad — A small ad on one Web site that advertises the products and services of another business. Bar code — A series of solid bars of different widths used to encode data. Special optical character recognition (OCR) devices can read this data. Bar code reader — Captures information that exists in the form of vertical bars whose width and distance from each other determine a number. Baseband — A form of modulation in which data signals are pulsed directly on the transmission medium without frequency division and usually utilize a transceiver. In baseband, the entire bandwidth of the transmission medium (cable) is utilized for a single channel. It uses a single carrier 795
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® frequency and requires all stations attached to the network to participate in every transmission. See broadband. Baseline — A set of critical observations or data used for a comparison or control. Note: Examples include a baseline security policy, a baseline set of security requirements, and a baseline system. Baseline architecture — A complete list and description of equipment that can be found in operation today. Baseline security — The minimum security controls required for safeguarding an IT system based on its identified needs for confidentiality, integrity, and availability protection. BASIC — See Beginner’s All-Purpose Symbolic Instruction Code. Basic rate interface (BRI) — Supports a total signaling rate of 144 kbps, which is divided into two B or bearer channels running at 64 kbps, and a D or data channel runing at 16 kbps. The bearer channels carry the actual voice, video, or data information, and the D channel is used for signaling. Basic Service Set (BSS) — Basic Service Set is a set of 802.11-compliant stations that operate as a fully connected wireless network. Basic text formatting tag — HTML tags that allow you to specify formatting for text. Batch control — A computer information processing technique in which numeric fields are totaled and records are tabulated to provide a comparison check for subsequent processing results. Baud — Signal or state change during data transmission. Each state change can be equal to multiple bits, so the actual bit rate during data transmission may exceed the baud rate. Bayesian Belief network — Graphical networks that represent probabilistic relationships among variables. The nodes represent uncertain variables and the arcs represent the causal/relevance relationships between the variables. The probability tables for each node provide the probabilities of each state of the variable for that node, conditional on each combination of values of the parent node. BBA — The Balanced Budget Act of 1997. BBN — Bayesian Belief network. BBRA — The Balanced Budget Refinement Act of 1999. BBS — See bulletin board system. BCBSA — See Blue Cross and Blue Shield Association. 796
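The Backoff entry on the preceding page describes a random retransmission delay enforced after a collision. One common concrete form is the truncated binary exponential backoff used by Ethernet's CSMA/CD; the sketch below uses the classic 10 Mbps Ethernet slot time and the cap of 10 doublings as assumed background values, not figures taken from this glossary.

```python
import random

SLOT_TIME_US = 51.2   # classic 10 Mbps Ethernet slot time, assumed for illustration

def backoff_delay(collision_count):
    """Truncated binary exponential backoff: after the n-th collision, wait a
    random number of slot times chosen from 0 .. 2**min(n, 10) - 1."""
    slots = random.randint(0, 2 ** min(collision_count, 10) - 1)
    return slots * SLOT_TIME_US

for n in range(1, 5):
    print(f"collision {n}: wait {backoff_delay(n):.1f} microseconds")
```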
Glossary BCP (Best Current Practice) — The newest subseries of RFCs that are written to describe Best Current Practices in the Internet. Rather than specify the best ways to use the protocols and the best ways to configure options to ensure interoperability between various vendors’ products, BCPs carry the endorsement of the IESG. BDR — Backup designated router. Beamwidth — The width of the main lobe of an antenna pattern, usually defined as 3 db down from the peak of the lobe. BECN — Backward Explicit Congestion Notification (Frame Relay). Beginner’s All-Purpose Symbolic Instruction Code (BASIC) — A p ro gramming language designed in the 1960s to teach students how to program and to facilitate learning. The powerful language syntax was designed especially for time-sharing systems. Behavioral outcome — What an individual who has completed the specific training module is expected to be able to accomplish in terms of IT security-related job performance. Behaviorally object-oriented — The data model incorporates features to define arbitrarily complex object types together with a set of specific operators (abstract data types). Benchmark test — A simulation evaluation conducted before purchasing or leasing equipment to determine how well hardware, software, and firmware perform. Benign environment — A nonhostile environment that can be protected from external hostile elements by physical, personnel, and procedural security countermeasures. Benign system — A system that is not related to any other system. Benign systems are closed communities without physical connection or logical relationship to any other system. Benign systems are operated exclusive of one another and do not share users, information, or end processing with other systems. BER — Bit error rate. Bespoke learning materials — Materials that are designed and tailored to meet an organization’s specific learning needs and outcomes. Best-effort QoS — The lowest of all QoS traffic classes. If the guaranteed QoS cannot be delivered, the bearer network delivers the QoS, which is called best-effort QoS. Best-effort service — A service model that provides minimal performance guarantees, allowing an unspecified variance in the measured performance criteria. 797
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® Between-the-lines entry — Access obtained through the use of active wiretapping by an unauthorized user to a momentarily inactive terminal of a legitimate user assigned to a communications channel. BGP — Border Gateway Protocol. BIA — (1) Business impact analysis. (2) Burned-in address. Billing — A function whereby CDRs generated by the charging function are tranformed into bills requiring payment. Binary — Where only two values or states are possible for a particular condition, such as “on” or “off” or “1” or “0.” Binary is the way digital computers function because it represents data as on or off. Binary digit — A state of function represented by the digit 0 or 1. Biometric system — A pattern recognition system that establishes the authenticity of a specific physiological or behavioral characteristic possessed by a user. Biometrics — A security technique that verifies an individual’s identity by analyzing a unique physical attribute, such as a handprint. BIOS — The BIOS is built-in software that determines what a computer can do without accessing programs from a disk. On PCs, the BIOS contains all the code required to control the keyboard, display screen, disk drives, serial communications, and a number of miscellaneous functions. Bipolar 8 zero substitution (B8ZS) — A technique used to accommodate the density requirement for digital T-carrier facilities in the public network, while allowing 64 kbps clear data per channel. Rather than inserting a 1 for every seven consecutive 0s, B8ZS inserts two violations of bipolar line encoding technique for digital transmission links. B-ISDN — Broadband ISDN. Bit — A binary value represented by an electronic component that has a value of 0 or 1. BIT — Built-in test. Bit error rate (BER) — The probability that a particular bit will have the wrong value. Bit map — A specialized form of an index indicating the existence or nonexistence of a condition for a group of blocks or records. Although expensive to build and maintain, they provide very fast comparison and access facilities. Bit mask — A pattern of binary values that is combined with some value using bitwise AND with the result that bits in the value in positions where the mask is zero are also set to zero. 798
Glossary Bit rate — This is the speed at which bits are transmitted on a circuit, usually expressed in bits per second. Bits per second (bps) — The speed at which bits are sent during data transmission. Bitstream image — Bitstreams backups (also referred to as mirror image backups) involve all areas of a computer hard disk drive or another type of storage media. Such backups exactly replicate all sectors on a given storage device. Thus, all files and ambient data storage areas are copied. Black — In the information processing context, black denotes data, text, equipment, processes, systems, or installations associated with unencrypted information that requires no emanations security related protection. For example, electronic signals are “black” if bearing unclassified information. Black-hat hackers — Cyber vandals. Blind scheme — An extraction process method that can recover the hidden message by means only of the encoded data. Block cipher — A method of encrypting text to produce ciphertext in which a cryptographic key and algorithm are applied to a block of data as a group instead of one bit at a time. Block structure — In programming, a segment of code that can be treated as an independent module. Blocking factor — The number of records appearing between interblock gaps on magnetic storage media. Blog — (1) A contraction of Weblog, a form of online writing characterized in format by a single column of chronological text, usually with a sidebar, and frequently updated. As of mid-2002, the vast majority of blogs are nonprofessional (with only a few experimental exceptions) and are run by a single writer. (2) To write an article on a blog. BLP — Bypass label processing. Blue Cross and Blue Shield Association (BCBSA) — An association that represents the common interests of Blue Cross and Blue Shield health plans. The BCBSA serves as the administrator for the Health Care Code Maintenance Committee and also helps maintain the HCPCS Level II codes. Bluetooth — Technology that provides entirely wireless connections for all kinds of communication devices. Body — One of four possible components of a message. Other components are the headings, attachment, and the envelope. 799
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® Bootleg — An unauthorized recording of a live or broadcast performance. They are duplicated and sold without the permission of the artist, composer or record company. BOOTP — Bootstrap Protocol. Bote-swaine cipher — A steganographic cipher used by Francis Bacon to insert his name within the text of his writings. Bounds checking — The testing of computer program results for access to storage outside of its authorized limits. Bounds register — A hardware or firmware register that holds an address specifying a storage boundary. BPDU — Bridge Protocol Data Unit. Bps — Bits per second. Branch — An alteration of the normal sequential execution of program statements. Brevity lists — A coding system that reduces the time required to transmit information by representing long, stereotyped sentences with only a few characters. BRI — Basic rate interface (ISDN). Bridge — A device that connects two or more physical networks and forwards packets between them. Bridges can usually be made to filter packets, that is, to forward only certain traffic. Broadband — Characteristic of any network that multiplexes multiple, independent network carriers onto a single cable. Broadband technology allows several networks to coexist on one single cable; traffic from one network does not interfere with traffic from another because the conversations happen on different frequencies in the “ether,” rather like the commercial radio system. Broadcast — A packet delivery system where a copy of a given packet is given to all hosts attached to the network. Example: Ethernet. Broadcast storm — A condition that can occur on broadcast type networks such as Ethernet. This can happen for a number of reasons, ranging from hardware malfunction to configuration error and bandwidth saturation. Brouter — A concatenation of “bridge” and “router.” Used to refer to devices that perform both bridging and routing. Browser — Short for Web browser, a software application used to locate and display Web pages. The two most popular browsers are Netscape Navigator and Microsoft Internet Explorer. Both of these are graphical 800
Glossary browsers, which means that they can display graphics as well as text. In addition, most modern browsers can present multimedia information, including sound and video, although they require plug-ins for some formats. Browser-safe colors — A range of 216 colors that can be represented using 8 bits and are visible in all browsers. Browsing — The searching of computer storage to locate or acquire information, without necessarily knowing whether it exists or in what format. Brute force — The name given to a class of algorithms that repeatedly try all possible combinations until a solution is found. Brute-force attack — A form of cryptoanalysis where the attacker uses all possible keys or passwords in an attempt to crack an encryption scheme or login system. BSP — Biometric service provider. Buffer — A temporary storage area, usually in RAM. The purpose of most buffers is to act as a holding area, enabling the CPU to manipulate data before transferring it to a device. Because the processes of reading and writing data to a disk are relatively slow, many programs keep track of data changes in a buffer and then copy the buffer to a disk. For example, word processors employ a buffer to keep track of changes to files. Then when you save the file, the word processor updates the disk file with the contents of the buffer. This is much more efficient than accessing the file on the disk each time you make a change to the file. Note that because your changes are initially stored in a buffer, not on the disk, all of them will be lost if the computer fails during an editing session. For this reason, it is a good idea to save your file periodically. Most word processors automatically save files at regular intervals. Another common use of buffers is for printing documents. When you enter a PRINT command, the operating system copies your document to a print buffer (a free area in memory or on a disk) from which the printer can draw characters at its own pace. This frees the computer to perform other tasks while the printer is running in the background. Print buffering is called spooling. Most keyboard drivers also contain a buffer so that you can edit typing mistakes before sending your command to a program. Many operating systems, including DOS, also use a disk buffer to temporarily hold data that they have read from a disk. The disk buffer is really a cache. Bug — A coded program statement containing a logical or syntactical error. Built-in test — A design feature that provides information on the ability of the item to perform its intended functions. BIT is implemented in software or firmware and may use or control BIT equipment (BITE). 801
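The Brute force and Brute-force attack entries above describe exhaustively trying every combination until one works. The sketch below recovers a hypothetical 4-digit PIN from its SHA-256 hash by trying all 10,000 possibilities; the PIN and hashing scheme are invented for illustration.

```python
import hashlib

# Hypothetical stored value: the SHA-256 hash of some unknown 4-digit PIN.
stored_hash = hashlib.sha256(b"7291").hexdigest()

def brute_force_pin(target_hash):
    """Try every possible 4-digit combination until one hashes to the target."""
    for candidate in range(10_000):
        pin = f"{candidate:04d}".encode()
        if hashlib.sha256(pin).hexdigest() == target_hash:
            return pin.decode()
    return None

print(brute_force_pin(stored_hash))   # 7291 -- a tiny key space falls almost instantly
```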
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® Bulletin Board System (BBS) — A computer that allows you to log on and post messages to other subscribers to the service. To use a BBS, a modem and the telephone number of the BBS is required. A BBS application runs on a computer and allows people to connect to that computer for the purpose of exchanging e-mail, chatting, and file transfers. A BBS is not part of the Internet. Burn box — A device used to destroy computer data. Usually a box with magnets or electrical current that will degauss disks and tapes. Burst — The separation of multiple-copy printout forms into individual sheets. Bus — An electrical connection that allows two or more wires or lines to be connected together. Typically, all circuit cards receive the same information that is put on the bus, but only the card the information is “addressed” to will use that data. Bus structure — A network topology in which nodes are connected to a single cable with terminators at each end. Business associate — Under HIPAA, a person who is not a member of a covered entity’s workforce (see workforce) and who performs any function or activity involving the use or disclosure of individually identifiable health information, such as temporary nursing services, or who provides services to a covered entity that involves the disclosure of individually identifiable health information, such as legal, accounting, consulting, data aggregation, management, accreditation, etc. A covered entity can be a business associate of another covered entity. Business Continuity Plan (BCP) — A documented and tested plan for responding to an emergency. Business Impact Analysis (BIA) — An exercise that determines the impact of losing the support of any resource to an organization, establishes the escalation of that loss over time, identifies the minimum resources needed to recover, and prioritizes the recovery of processes and supporting systems. Business intelligence — Knowledge about customers, competitors, partners, and own internal operations. Business intelligence from information. Business model — A model of a business organization or process. Business Partner (BP) — See business associate. Business Process — A standardized set of activities that accomplishes a specific task such as processing a customer’s order. Business process reengineering (BPR) — The reinventing of a process within a business. 802
Glossary Business relationships — (a) The term “agent” is often used to describe a person or organization that assumes some of the responsibilities of another one. This term has been avoided in the final rules so that a more HIPAA-specific meaning could be used for business associate. The term “business partner” (BP) was originally used for business associate. (b) A Third-Party Administrator (TPA) is a business associate that performs claims administration and related business functions for a self-insured entity. (c) Under HIPAA, a healthcare clearinghouse is a business associate that translates data to or from a standard format on behalf of a covered entity. (d) The HIPAA Security NPRM used the term :Chain of Trust Agreement” to describe the type of contract that would be needed to extend the responsibility to protect healthcare data across a series of sub-contractual relationships. (e) A business associate is an entity that performs certain business functions for you, and a trading partner is an external entity, such as a customer, with whom you do business. This relationship can be formalized via a trading partner agreement. It is quite possible to be a trading partner of an entity for some purposes, and a business associate of that entity for other purposes. Business requirement — A detailed knowledge worker request that the system must meet to be successful. Business-to-business (B2B) — Companies whose customers are primarily other businesses. Business-to-consumer (B2C) — Companies whose customers are primarily individuals. Buyer agent or shopping bot — An intelligent agent or application on a Web site that helps customers find the products and services they want. Byte — The basic unit of storage for many computers; typically, one configuration consists of 8 bits used to represent data plus a parity bit for checking the accuracy of representation. Byte–digit portion — Usually, the four rightmost bits in a byte. C — A third-generation computer language used for programming on microcomputers. Most microcomputer software products such as spreadsheets and DBMS programs are written in C. C&A — Certification and Accreditation; a comprehensive evaluation of the technical and nontechnical security features of a system to determine if it meets specified requirements and should receive approval to operate. C2 — A formal product rating awarded to a product by the National Computer Security Center (NCSC). A C2 rated system incorporates controls capable of enforcing access limitations on an individual basis, making 803
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® users individually accountable for their actions through log-on procedures, auditing of security relevant events, and resource isolation. CA — Certificate Authority. Cable — Transmission medium of copper wire or optical fiber wrapped in a protective cover. Cable modem — A device that uses a TV cable to deliver an Internet connection. Cabulance — A taxi cab that also functions as an ambulance. Cache — Pronounced cash, a special high-speed storage mechanism. It can be either a reserved section of main memory or an independent highspeed storage device. Two types of caching are commonly used in personal computers: memory caching and disk caching. A memory cache, sometimes called a cache store or RAM cache, is a portion of memory made of high-speed static RAM (SRAM) instead of the slower and cheaper dynamic RAM (DRAM) used for main memory. Memory caching is effective because most programs access the same data or instructions over and over. Disk caching works under the same principle as memory caching, but instead of using high-speed SRAM, a disk cache uses conventional main memory. When data is found in the cache, it is called a cache hit, and the effectiveness of a cache is judged by its hit rate. Call — Any connection (fixed or temporary) capable of transferring information between two or more users of a telecommunications system. In this context, a user may be a person or a machine. It is used for transmission of the content of communication. This term refers to circuit-switched calls only. Callback — A procedure that identifies a terminal dialing into a computer system or network by disconnecting the calling terminal, verifying the authorized terminal against the automated control table, and then, if authorized, reestablishing the connection by having the computer system dial the telephone number of the calling terminal. Caller identification (CLID) — One of several custom local area signaling services (CLASS) provided by the local exchange carrier. The service that allows you to see the name and number of the person who is calling you. Call-identifying information (CII) — Dialing or signaling information that identifies the origin, direction, destination or termination of each communication generated by means of any equipment, facility, service, or a telecommunications carrier. CAP — CM capabilities. Capability — A token used as an identifier for a resource such that possession of the token confers access rights for the resource. 804
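The Cache entry above notes that programs tend to access the same data repeatedly and that a cache is judged by its hit rate. The sketch below illustrates that principle in software (the access pattern and record names are invented); it is not a model of the hardware SRAM or disk-cache mechanisms themselves.

```python
from functools import lru_cache

calls = 0

@lru_cache(maxsize=128)
def read_record(key):
    """Stand-in for a slow read (e.g., from disk); only cache misses reach this body."""
    global calls
    calls += 1
    return f"record-{key}"

for key in [1, 2, 1, 1, 3, 2, 1]:   # programs tend to touch the same data over and over
    read_record(key)

info = read_record.cache_info()
print(info)                                                        # hits=4, misses=3, ...
print(f"hit rate: {info.hits / (info.hits + info.misses):.0%}")    # 57%
```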
Glossary Capacitor — Capacitors provide a means of storing electric charge so that it can be released at a specific time or rate. A capacitor acts as a battery but does not use a chemical reaction. Capacity planning — Determining the future IT infrastructure requirements for new equipment and additional network capacity. Cardano’s Grille — A method of concealing a message by which a piece of paper has several holes cut in it (the grille); and when it is placed over an innocent looking message, the holes cover all but specific letters spelling out the message. It was named for its inventor Girolamo Cardano. Carrier Sense Multiple Access/Collision Detection (CSMA/CD) — Also known as Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA). Carrier Sense, Multiple Access (CSMA) — A multiple-station access scheme for avoiding contention in packet networks in which each station can sense the presence of carrier signals from other stations and thus avoid transmitting a packet that would result in a collision. See also collision detection. Cathode-ray tube (CRT) — The display device for computer terminals, typically a television-like electronic vacuum tube. Cause — (1) Technical: the action or condition by which a hazardous event (physical or cyber) is initiated; an initiating event. The cause may arise as the result of failure, accidental or intentional human error, design inadequacy, induced or natural environment, system configuration, or operational modes/states. (2) Legal: each separate antecedent of an event. Something that precedes and brings about an effect or result. A reason for an accident or condition. Cave (cave automatic virtual environment) — A special 3-D virtual reality room that can display images of other people and objects located in other cave’s all over the world. CBC — Cipher block chaining. CBEFF — Common Biometric Exchange File Format; being defined by U.S. Biometric Consortium and ANSI X9F4 Subcommittee. CBO — Congressional Budget Office, or Cost Budget Office. CBR — Constant bit rate. CC — Common Criteria. See ISO/IEC 15408. CCA (covert channel analysis) — Vulnerability analysis, covert channel analysis. CCF — Common cause failure. 805
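The Cardano's Grille entry above can be modelled in a few lines by representing the grille simply as the character positions its holes expose in the cover text. The cover text and hole positions below are invented.

```python
# The grille is modelled as the character positions left visible by its holes.
cover_text = "MEET ALL THE TEAM AT NOON IN THE PARK"
holes = [0, 1, 2, 3, 4, 18, 19, 20, 21, 22, 23, 24]   # positions exposed by the overlay

hidden = "".join(cover_text[i] for i in holes)
print(hidden)   # -> "MEET AT NOON"
```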
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® CCITT — Consultative Committee for International Telegraph and Telephone. CCITT — See Telecommunications Standardization Sector of the International Telecommunications Union (TSSUITU). CCO — Cisco Connection Online. CCP — Compression Control Protocol. CCS — Common channel signaling. CCTV — Closed-circuit television. CD — (1) Carrier detect. (2) Compact disk. CDC — See Centers for Disease Control and Prevention. CDDI — Copper Distributed Data Interface. CDP — Cisco Discovery Protocol. CD-R (compact disk-recordable) — An optical or laser disk that offers one-time writing capability with about 700 MB or greater of storage. CD-ROM (compact disk–read-only memory) — A compact disk, similar to an audio compact disk, which is used to store computer information (e.g., programs, data, or graphics). CD-RW (compact disk-rewritable) — A compact disk (CD) that offers unlimited writing and updating capabilities. CDT — See Current Dental Terminology. CE — See covered entity. CEFACT — See United Nations Centre for Facilitation of Procedures and Practices for Administration, Commerce, and Transport (UN/CEFACT). Cell sites — A transmitter-receiver location, operated by the wireless service provider, through which radio links are established between the wireless system and the wireless unit. Cellular service — Also known as cellular mobile telephone system. A wireless telephone system using multiple transceiver sites linked to a central computer for coordination. CEN — European Center for Standardization, or Comité Européen de Normalisation. Central Office of Record (COR) — Office of a federal department or agency that keeps records of accountable COMSEC material held by elements subject to its oversight. 806
Glossary Central processing unit (CPU) — The part of a computer that performs the logic, computation, and decision-making functions. It interprets and executes instructions as it receives them. PCs have one CPU, typically a single chip. CEO — Chief Executive Officer. CEPS — Common Electronic Purse Specifications, a standard used with smartcards. CER — Crossover error rate. CERN — European Laboratory for Particle Physics. Birthplace of the World Wide Web. CERT/CC — Computer Emergency Response Team Coordination Center, a service of CMU/SEI. Certificate — A set of information that at least: identifies the certification authority issuing the certificate; unambiguously names or identifies its owner; contains the owner’s public key and is digitally signed by the certification authority issuing it. Certificate Authority (CA) — A trusted third party that associates a public key with proof of identity by producing a digitally signed certificate. A CA provides to users a digital certificate that links the public key with some assertion about the user, such as identity, credit payment card number etc. Certification authorities may offer other services such as timestamping, key management services, and certificate revocation services. It can also be defined as an independent trusted source that attests to some factual element of information for the purposes of certifying information in the electronic environment. Certification — The acceptance of software by an authorized agent, usually after the software has been validated by the agent or its validity has been demonstrated to the agent. Certification agent — The individual responsible for making a technical judgment of the system’s compliance with stated requirements, identifying and assessing the risks associated with operating the system, coordinating the certification activities, and consolidating the final certification and accreditation packages. Certification and Accreditation Plan — A plan delineating objectives, responsibilities, schedule, technical monitoring, and other activities in support of the C&A process. Certification and Repair Center (CRC) — A U.S. Department of State (DoS) facility utilized by IM/SO/TO/OTSS departments for program activities. 807
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® Certification level — A combination of techniques and procedures used during a certification and accreditation process to verify the correctness and effectiveness of security controls in an information technology system. Security certification levels represent increasing levels of intensity and rigor in the verification process and include such techniques as reviewing and examining documentation; interviewing personnel; conducting demonstrations and exercises; conducting functional, regression, and penetration testing; and analyzing system design documentation. Certification package — Product of the certification effort documenting the detailed results of the certification activities. The certification package includes the security plan, developmental or operational certification test reports, risk assessment report, and certifier’s statement. Certification path — A chain of certificates between any given certificate and its trust anchor (CA). Each certificate in the chain must be verifiable in order to validate the certificate at the end of the path; this functionality is critical to the usable PKI. Certification Practices statement — A statement of the certification authority’s practices with respect to a wide range of technical, business, and legal issues that can be used as a basis for the certification authorities contract with the entity to whom the certificate was issued. Certification Requirements Review (CRR) — The review conducted by the DAA, Certifier, program manager, and user representative to review and approve all information contained in the System Security Authorization Agreement (SSAA). The CRR is conducted before the end of Phase 1. Certification statement — The certifier’s statement provides an overview of the security status of the system and brings together all of the information necessary for the DAA to make an informed, risk-based decision. The statement documents that the security controls are correctly implemented and effective in their application. The report also documents the security controls not implemented and provides corrective actions. Certification Test and Evaluation (CT&E) — Software and hardware security tests conducted during development of an IS. Certifier — See Certification Authority; certification agent CFO — Chief Financial Officer. CFR or C.F.R. — Code of Federal Regulations. CGI — Common gateway interface. Chain of custody — (1) The identity of persons who handle evidence between the time of commission of the alleged offense and the ultimate disposition of the case. It is the responsibility of each transferee to ensure 808
Glossary that the items are accounted for during the time that they are in their possession, that they are properly protected, and that there is a record of the names of the persons from whom they received the items and to whom they delivered those items, together with the time and date of such receipt and delivery. (2) The control over evidence. Lack of control over evidence can lead to it being discredited completely. Chain of custody depends on being able to verify that evidence could not have been tampered with. This is accomplished by sealing off the evidence so that it cannot in any way be changed and providing a documentary record of custody to prove that the evidence was at all times under strict control and not subject to tampering. Chain of evidence — The “sequencing” of the chain of evidence follows this order: collection and identification; analysis; storage; preservation; presentation in court; return to owner. Chain of evidence shows who obtained the evidence; where and when the evidence was obtained; who secured the evidence; who had control or possession of the evidence. Chain of Trust (COT) — A term used in the HIPAA Security NPRM for a pattern of agreements that extend protection of healthcare data by requiring that each covered entity that shares healthcare data with another entity require that that entity provide protections comparable to those provided by the covered entity, and that that entity, in turn, require that any other entities with which it shares the data satisfy the same requirements. Challenge Handshake Authentication Protocol (CHAP) — A secure login procedure for dial-in access that avoids sending in a password in the clear by using cryptographic hashing. CHAMPUS — Civilian Health and Medical Program of the Uniformed Services. Channel — Typically what you rent from the telephone company, voicegrade transmission facility with defined frequency response, gain, and bandwidth. A path of communication, either electrical or electromagnetic, between two or more points. Also a circuit, facility, line, or path. Channel service unit (CSU), or digital service unit (DSU) — D e v i c e s used to interface between transmitting equipment and the external circuit in the wide area network that will carry the information. CHAP (Challenge Handshake Authentication Protocol) — A p p l i e s a three-way handshaking procedure. After the link is established, the server sends a “challenge” message to the originator. The originator responds with a value calculated using a one-way hash function. The server checks the response against its own calculation of the expected hash value. If the values match, the authentication is acknowledged; otherwise, the connection is usually terminated. 809
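The CHAP entries above describe a three-way handshake in which the password never crosses the wire. The sketch below models the exchange with a one-way hash over the challenge and the shared secret; real CHAP (RFC 1994) hashes an identifier, the secret, and the challenge with MD5, so the SHA-256 concatenation used here is a simplification for illustration only.

```python
import hashlib, os

shared_secret = b"correct horse battery staple"   # known to both ends, never sent on the wire

# 1. After the link is established, the server sends a random challenge.
challenge = os.urandom(16)

# 2. The originator responds with a one-way hash of the challenge and the secret.
response = hashlib.sha256(challenge + shared_secret).hexdigest()

# 3. The server computes the expected value itself and compares.
expected = hashlib.sha256(challenge + shared_secret).hexdigest()
print("authenticated" if response == expected else "connection terminated")
```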
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® Character — A single numeric digit, special symbol, or letter. Charging data record (CDR) — A formatted collection of information about a chargeable event (e.g., time of call setup, duration of the call, amount of data transferred, etc.) for use in billing and accounting. For each party to be charged for parts of or all the charges of a chargeable event, a separate CDR shall be generated, i.e., more than one CDR may be generated for a single chargeable event, e.g., because of its long duration or because more than one charged party is to be charged. Chat room — An area of a Web chat service that people can “enter” with their Web browsers where the conversations are devoted to a specific topic; equivalent to a channel in IRC. Check digit — (1) One digit, usually the last, of an identifying field is a mathematical function of all of the other digits in the field. This value can be calculated from the other digits in the field and compared with the check digit to verify the validity of the whole field. (2) A numeric digit that is used to verify the accuracy of a copied or transcribed number. The numeric digit is typically appended to the end of a number. Checksum — A computed value that depends on the contents of a packet. This value is sent along with the packet when it is transmitted. The receiving system computes a new checksum based on receiving data and compares this value with the one sent with the packet. If the two values are the same, the receiver has a high degree of confidence that the data was received correctly. Chief Information Officer (CIO) — The title for the highest-ranking MIS officer in the organization. CHIM — See Center for Healthcare Information Management. CHIME — See College of Healthcare Information Management Executives. Chip — A wafer containing miniature electronic imprinted circuits and components. CHIP — Child Health Insurance Program. Choice — The third step in the decision-making process where one decides on a plan to address the problem or opportunity. Chosen message attack — A type of attack where the steganalyst generates a stego-medium from a message using some particular tool, looking for signatures that will enable the detection of other stego-media. Chosen stego attack — A type of attack when both the stego-medium and the steganography tool or algorithm is available. 810
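The Checksum entry above can be illustrated with a deliberately simple scheme: the sender computes a value from the packet contents, the receiver recomputes it and compares. The sum-of-bytes function below is a toy for the sketch, not the Internet checksum.

```python
def checksum(data: bytes) -> int:
    """Toy checksum: the sum of all byte values, modulo 256."""
    return sum(data) % 256

packet = b"HELLO, WORLD"
sent = checksum(packet)                 # transmitted along with the packet

received_packet = b"HELLO, W0RLD"       # one character corrupted in transit
print(sent, checksum(received_packet))  # the two values differ
print("corrupted" if checksum(received_packet) != sent else "ok")
```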
Glossary CIA (Confidentiality, Integrity, and Availability) — With regard to information security — Confidentiality, Integrity, and Availability. CIDF — Common intrusion detection framework model. CIDR — Classless interdomain routing. CIO — Chief Information Officer. Cipher disk — An additive cipher device used for encrypting and decrypting messages. The disk consists of two concentric circular scales, usually of letters, and the alphabets can be repositioned with respect to one another at any of the 26 relationships. Cipher system — A system in which cryptography is applied to plaintext elements of equal length. Ciphertext — A message that has been encrypted using a specific algorithm and key. (Contrast with plaintext.). Information that has been encrypted, making it unreadable without knowledge of the key. CIR — Committed information rate. Circuit switching — A communications paradigm in which a dedicated communication path is established between two hosts and on which all packets travel. The telephone system is an example of a circuit-switched network. CISL — Common Intrusion Specification Language. CISM — Certified Information Security Manager. CISO — Chief Information Security Officer. CISSP — Certified Information Systems Security Professional. CKM — Cryptographic key management. Claim Adjustment Reason Codes — A national administrative code set that identifies the reasons for any differences, or adjustments, between the original provider charge for a claim or service and the payer’s payment for it. This code set is used in the X12 835 Claim Payment & Remittance Advice and the X12 837 Claim transactions, and is maintained by the Health Care Code Maintenance Committee. Claim attachment — Any of a variety of hardcopy forms or electronic records needed to process a claim in addition to the claim itself. Claim authentication information — Information used by a claimant to generate exchange AI needed to authenticate a principal. Claim Medicare Remark Codes — See Medicare Remittance Advice Remark Codes. 811
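Rotating a cipher disk by a fixed number of positions is equivalent to an additive cipher, so the Cipher disk and Cipher system entries above can be illustrated with a short sketch. The offset of 5 is one of the 26 possible disk settings; the plaintext is invented.

```python
import string

ALPHABET = string.ascii_uppercase

def disk_encrypt(plaintext, offset):
    """Encrypt by 'rotating' the inner disk: each letter is replaced by the letter
    offset positions further along the alphabet (one of 26 possible settings)."""
    return "".join(
        ALPHABET[(ALPHABET.index(ch) + offset) % 26] if ch in ALPHABET else ch
        for ch in plaintext.upper()
    )

def disk_decrypt(ciphertext, offset):
    return disk_encrypt(ciphertext, -offset)   # rotate back by the same amount

secret = disk_encrypt("ATTACK AT DAWN", 5)
print(secret)                      # FYYFHP FY IFBS
print(disk_decrypt(secret, 5))     # ATTACK AT DAWN
```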
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® Claim Status Category Codes — A national administrative code set that indicates the general category of the status of healthcare claims. This code set is used in the X12 277 Claim Status Notification transaction, and is maintained by the Health Care Code Maintenance Committee. Claim Status Codes — A national administrative code set that identifies the status of healthcare claims. This code set is used in the X12 277 Claim Status Notification transaction, and is maintained by the Health Care Code Maintenance Committee. Claimant — An entity that is or represents a principal for the purposes of authentication. A claimant includes the functions necessary for engaging in authentication exchanges on behalf of a principal. Class — An implementation of an abstract data type. A definition of the data structures, methods, and interface of software objects. A template for the instantiation (creation) of software objects. Classification — The determination that certain information requires protection against unauthorized disclosure in the interest of national security, coupled with the designation of the level of classification Top Secret, Secret, or Confidential. Classification Authority — The authority vested in an official of an agency to originally classify information or material that is determined by that official to require protection against unauthorized disclosure in the interest of national security. Classification guides — Documents issued in an exercise of authority for original classification that include determinations with respect to the proper level and duration of classification of categories of classified information. Classified information — Information that has been determined pursuant to Executive Order 12958 or any predecessor order, or by the Atomic Energy Act of 1954, as amended, to require protection against unauthorized disclosure and is marked to indicate its classified status. Classifier — An individual who makes a classification determination and applies a security classification to information or material. A classifier may either be a classification authority or may assign a security classification based on a properly classified source or a classification guide. Clear mode — Unencrypted plaintext mode. Cleared U.S. citizen — A citizen of the United States who has undergone a favorable background investigation resulting in the issuance of a security clearance by the Bureau of Diplomatic Security permitting access to classified information at a specified level. 812
Glossary Clearinghouse — See health care clearinghouse. Cleartext — Data that is not encrypted; plaintext. CLIA — Clinical Laboratory Improvement Amendments. Click trail — A record of all the Web page addresses you have visited during a specific online session. Click trails tell not just what Web site you visited, but also which pages inside that site. Clickstream — A stored record of a Web surfing session containing information such as Web sites visited, how long the user was there, what ads were looked at, and the items purchased. Click-throughs — A count of the number of people who visit one site and click on an ad, and are taken to the site of the advertiser. Client — A workstation in a network that is set up to use the resources of a server. Client/Server — In networking, a network in which several PC-type systems (clients) are connected to one or more powerful, central computers (servers). In databases, refers to a model in which a client system runs a database application (front end) that accesses information in a database management system situated on a server (back end). Client/Server architecture — A local area network in which microcomputers, called servers, provide specialized service on behalf of the user’s computers, which are called clients. Client/Server model — A common way to describe network services and the model user processes (programs) of those services. Examples include the name-serve/name-resolver paradigm of the DNS and file-server/fileclient relationships such as NFS and diskless hosts. Clinger–Cohen Act of 1996 — Also known as the Information Technology Management Reform Act. A statute that substantially revised the way that information technology resources are managed and procured, including a requirement that each agency design and implement a process for maximizing the value and assessing and managing the risks of information technology investments. Clinical Code Sets — See Medical Code Sets. CLNP — Connectionless Network Protocol. CLNS — Connectionless Network Services. Cloning — The term given to the operation of creating an exact duplicate of one medium on another like medium. This is also referred to as a Mirror Image or Physical Sector Copy. 813
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® Closed network/closed user group — These are systems that generally represent those in which certificates are used within a bounded context such as within a payment system. A contract or series of contracts identify and define the rights and responsibilities of all parties to a particular transaction. CLP — Cell loss priority. CM — See ICD. CMF — Common mode failure. CMI — Coded mark inversion. CO — Central office. Coaxial cable — A medium used for telecommunications. It is similar to the type of cable used for carrying television signals. COB — See coordination of benefits. COBOL — See Common Business-Oriented Language. Code Division Multiple Access (CDMA) — A technique permitting the use of a single frequency band by a number of users. Users are allocated a sequence that uniquely identifies them. Code generator — A precompiler program that translates fourth-generation language-like code into the statements of a third-generation language code. Code of fair information practices — The basis for privacy best practices, both online and offline. The practices originated in the Privacy Act of 1974, the legislation that protects personal information collected and maintained by the U.S. Government. In 1980, these principles were adopted by the Organization for Economic Cooperation and Development and incorporated in its Guidelines for the Protection of Personal Data and Transborder Data Flows. They were adopted later in the EU Data Protection Directive of 1995, with modifications. The Fair Information Practices include notice, choice, access, onward transfer, security, data integrity, and remedy. Code room — The designated and restricted area in which cryptographic operations are conducted. Code set — Under HIPAA, this is any set of codes used to encode data elements, such as tables of terms, medical concepts, medical diagnostic codes, or medical procedure codes. This includes both the codes and their descriptions. Also see Part II, 45 CFR 162.103. Code Set Maintaining Organization — Under HIPAA, this is an organization that creates and maintains the code sets adopted by the secretary 814
for use in the transactions for which standards are adopted. Also see Part II, 45 CFR 162.103.
Code system — Any system of communication in which groups of symbols represent plaintext elements of varying length.
Coder — The individual who translates program design into executable computer code.
Coding — The activity of translating a set of computer processing specifications into a formal language for execution by a computer.
Coefficient — A number or symbol multiplied with a variable or an unknown quantity in an algebraic term.
Cohesion — The manner and degree to which the tasks performed by a single software module are related to one another. Types of cohesion include coincidental, communication, functional, logical, procedural, sequential, and temporal.
Cold site — An IS backup facility that has the necessary electrical and physical components of a computer facility, but does not have the computer equipment in place. The site is ready to receive the necessary replacement computer equipment in the event the users have to move from their main computing location to the alternative computer facility.
Collaboration — Enabling collaboration that transforms shared awareness into actions that can achieve a competitive advantage.
Collaboration system — A system that is designed specifically to improve the performance of teams by supporting the sharing and flow of information.
Collaborative filtering — A method of placing you in an affinity group of people with the same characteristics.
Collaborative planning, forecasting, and replenishment (CPFR) — A concept that encourages and facilitates collaborative processes between members of a supply chain.
Collaborative processing enterprise information portal — Provides knowledge workers with access to workgroup information such as e-mails, reports, meeting minutes, and memos.
Collateral information — National security information classified in accordance with E.O. 12356, dated April 2, 1982.
College of Healthcare Information Management Executives (CHIME) — A professional organization for healthcare chief information officers (CIOs).
Collision — (1) A condition that is present when two or more terminals are in contention during simultaneous network access attempts. (2) In
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® cryptography, an instance when a hash function generates the same output for different inputs. Collision detection — An avoidance method for communications channel contention that depends on two stations detecting the simultaneous start of each other’s transmission, stopping, and waiting a random period of time before beginning again. See also carrier sense, multiple access. Collision resistance — In cryptography, the idea that a hash function does not generate the same output for different inputs. Co-location — A vendor that rents space and telecommunications equipment to other companies. Color palette — A set of available colors a computer or an application can display. Also known as a CLUT: Color Look Up Table. COM (computer output microfilm) — The production of computer output on photographic film. Command and Control — The exercise of authority and direction by a properly designated commander over assigned and attached forces in the accomplishment of the mission. Command and control warfare (C2W) — The integrated use of operations security (OPSec), military deception, psychological operations (PSYOP), electronic warfare (EW) and physical destruction, mutually supported by intelligence, to deny information to, influence, degrade or destroy adversary C2 capabilities, while protecting friendly C2 capabilities against such actions. Comment — Public commentary on the merits or appropriateness of proposed or potential regulations provided in response to an NPRM, an NOI, or other federal regulatory notice. Commit — A condition implemented by the programmer signaling to the DBMS that all update activity that the program conducts be executed against a database. Before the commit, all update activity can be rolled back or canceled without negative impact on the database contents. Commit Protocol — An algorithm to ensure that a transaction is successfully completed. Common Business Oriented Language (COBOL) — A high-level programming language for business computer applications. Common carrier — An organization or company that provides data or other electronic communication services for a fee. Common cause failure — Failure of multiple independent system components occurring from a single cause that is common to all of them. 816
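The Collision (2) and Collision resistance entries describe the cryptographic sense of the term. The short Python sketch below, using the standard-library hashlib module, shows the property in practice: distinct inputs should yield distinct digests, and two inputs producing the same digest would constitute a collision.

import hashlib

a = hashlib.sha256(b"message one").hexdigest()
b = hashlib.sha256(b"message two").hexdigest()

print(a == b)   # False for these inputs; equality here would be a collision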
Common Control — See HIPAA Part II, 45 CFR 164.504.
Common Criteria Testing Laboratory (CCTL) — Within the context of the NIAP Common Criteria Evaluation and Validation Scheme, an IT security evaluation facility, accredited by the National Voluntary Laboratory Accreditation Program (NVLAP) and approved by the NIAP Oversight Body to conduct CC-based evaluations.
Common mode failure — Failure of multiple independent system components that fail in the identical mode.
Common Object Request Broker Architecture (CORBA) — CORBA is the Object Management Group's (OMG) answer to the need for interoperability among the rapidly proliferating number of hardware and software products available today. Simply stated, CORBA allows applications to communicate with one another no matter where they are located or who has designed them.
Common Operating Environment (COE) — The collection of standards, specifications, guidelines, architecture definitions, software infrastructures, reusable components, application programming interfaces (APIs), runtime environment definitions, reference implementations, and methodology that establishes an environment on which a system can be built. The COE is the vehicle that assures interoperability through a reference implementation that provides identical implementation of common functions. It is important to realize that the COE is both a standard and an actual product.
Common ownership — See Part II, 45 CFR 164.504.
Common security control — A security control that can be applied to one or more organization information systems and has the following properties: (1) the development, implementation, and assessment of the control can be assigned to a responsible official or organizational element (other than the information system owner); and (2) the results from the assessment of the control can be used to support the security certification and accreditation processes of an organization information system where that control has been applied.
Communication — Information transfer according to agreed conventions.
Communications medium — The path or physical channel in a network over which information travels.
Communications Protocols — A set of rules that every computer follows to transfer information and that govern the operation of hardware or software entities to achieve communication.
Communications satellite — A microwave repeater in space.
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® Communications security — The protection that ensures the authenticity of telecommunications and that results from the application of measures taken to deny unauthorized persons access to valuable information that might be derived from the acquisition of telecommunications. Communications service provider — A third party that furnishes the conduit for information. Communications software — Software that helps you communicate with other people. Communications system — A mix of telecommunications and automated information systems used to originate, control, process, encrypt, and transmit or receive information. Such a system generally consists of the following connected or connectable devices: (1) automated information equipment (AIS) on which information is originated; (2) a central controller (i.e., CIHS, C-LAN) of, principally, access rights and information distribution; (3) a telecommunications processor (i.e., TERP, IMH) that prepares information for transmission; and (4) nationallevel devices that encrypt information (COMSEC/CRYPTO/CCI) prior to its transmission via Diplomatic Telecommunications Service (DTS) or commercial carrier. Companding — The process where there is a greater number of samples provided at lower power conditions of the signal waveform rather than at the higher power portions of the same waveform. Compare — A computer-applied function that examines two elements of data to determine their relationship to one another. Compartmentalization — The isolation of the operating system, user programs, and data files from one another in main storage to protect them against unauthorized or concurrent access by other users or programs. Also, the division of sensitive data into small, isolated blocks to reduce risk to the data. Compartmented mode — INFOSec mode of operation wherein each user with direct or indirect access to a system, its peripherals, remote terminals, or remote hosts has all of the following: (1) valid security clearance for the most restricted information processed in the system; (2) formal access approval and signed nondisclosure agreements for that information which a user is to have access; and (3) valid need-to-know for information that a user is to have access. Competitive advantage — Providing a product or service in a way that customers value more than what the competition is able to do. Competitive local exchange carrier (CLEC) — A competitive access provider that also provides switched local services, such as local dial tone 818
and Centrex. CLECs are authorized by state commissions to resell existing incumbent LEC services at wholesale rates and lease component facilities for use with their own facilities.
Compiler — A program that translates high-level computer language instructions into machine code.
Complementor — Provides services that complement the offerings of the enterprise and thereby extend its value-adding capabilities to its customers.
Completeness — The property that all necessary parts of an entity are included. Completeness of a product often means that the product has met all requirements.
Compliance date — Under HIPAA, this is the date by which a covered entity must comply with a standard, an implementation specification, or a modification. This is usually 24 months after the effective date of the associated final rule for most entities, but 36 months after the effective date for small health plans. For future changes in the standards, the compliance date would be at least 180 days after the effective date, but can be longer for small health plans and for complex changes. Also see Part II, 45 CFR 160.103.
Component — Basic unit designed to satisfy one or more functional requirements.
Composite primary key — The primary key fields from two intersecting relations.
Composite threat list — A Department of State threat list intended to cover all localities operating under the authority of a chief of mission and staffed by direct-hire U.S. personnel. This list is developed in coordination with the intelligence community and issued semiannually by the Bureau of Diplomatic Security.
Compression — A method of storing data in a format that requires less space than normal.
Compromise — Unauthorized disclosure or loss of sensitive information.
Compromising emanations — Electromagnetic emanations that convey data and that, if intercepted and analyzed, could compromise sensitive information being processed by a computer system.
COMPUSec — Computer security.
Computer — The hardware, software, and firmware components of a system that are capable of performing calculations, manipulations, or storage of data. It usually consists of arithmetic, logical, and control units, and may have input, output, and storage devices.
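As an illustration of the Compression entry above, the following Python sketch uses the standard-library zlib module; the repetitive sample data is chosen only to make the space saving obvious.

import zlib

original = b"AAAA" * 1000                        # highly repetitive, so it compresses well
compressed = zlib.compress(original)

print(len(original), len(compressed))            # compressed form occupies far less space
print(zlib.decompress(compressed) == original)   # True: decompression restores the data exactly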
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® Computer crime — The act of using IT to commit an illegal act. Computer Emergency Response Team (CERT) — The CERT is chartered to work with the Internet community to facilitate its response to computer security events involving Internet hosts, to take proactive steps to raise the community’s awareness of computer security issues, and to conduct research targeted at improving the security of existing systems. The U.S. CERT is based at Carnegie Mellon University in Pittsburgh; regional CERTs are like NICs, springing up in different parts of the world. Computer ethics — The issues and standards that support the proper uses of IT that are not criminal or threatening to another person or organization. Computer evidence — Computer evidence is a copy of a document stored in a computer file that is identical to the original. The legal “best evidence” rules change when it comes to the processing of computer evidence. Another unique aspect of computer evidence is the potential for unauthorized copies to be made of important computer files without leaving behind a trace that the copy was made. This situation creates problems concerning the investigation of the theft of trade secrets (e.g., client lists, research materials, computer-aided design files, formulas, and proprietary software). Computer forensics — The term “computer forensics” was coined in 1991 in the first training session held by the International Association of Computer Specialists (IACIS) in Portland, Oregon. Since then, computer forensics has become a popular topic in computer security circles and in the legal community. Like any other forensic science, computer forensics deals with the application of law to a science. In this case, the science involved is computer science and some refer to it as Forensic Computer Science. Computer forensics has also been described as the autopsy of a computer hard disk drive because specialized software tools and techniques are required to analyze the various levels at which computer data is stored after the fact. Computer forensics deals with the preservation, identification, extraction, and documentation of computer evidence. The field is relatively new to the private sector, but it has been the mainstay of technology-related investigations and intelligence gathering in law enforcement and military agencies since the mid-1980s. Like any other forensic science, computer forensics involves the use of sophisticated technology tools and procedures that must be followed to guarantee the accuracy of the preservation of evidence and the accuracy of results concerning computer evidence processing. Typically, computer forensic tools exist in the form of computer software. Computer Fraud and Abuse Act (PL 99-474) — Computer Fraud and Abuse Act of 1986. Strengthens and expands the 1984 Federal Computer 820
Crime Legislation. The law was extended to cover computer crimes in private enterprise and anyone who willfully disseminates information for the purpose of committing a computer crime (e.g., distributing phone numbers to hackers from a BBS).
Computer Matching Act (PL 100-503) — The Computer Matching and Privacy Act of 1988 ensures privacy, integrity, and verification of data disclosed for computer matching and establishes data integrity boards within federal agencies.
Computer Matching Act Public Law (PL 100-503) — Computer Matching and Privacy Act of 1988. Ensures privacy, integrity, and verification of data disclosed for computer matching; establishes Data Integrity Boards within federal agencies.
Computer network — Two or more computers connected so that they can communicate with each other and share information, software, peripheral devices, and processing power.
Computer output microfilm (COM) — The production of computer output on photographic film.
Computer program — A series of operations that perform a task when executed in logical sequence.
Computer security — The practice of protecting a computer system against internal failures, human error, attacks, and natural catastrophes that might cause improper disclosure, modification, destruction, or denial-of-service.
Computer Security Act (PL 100-235) — The Computer Security Act of 1987 directs the National Bureau of Standards (now the National Institute of Standards and Technology [NIST]) to establish a computer security standards program for federal computer systems.
Computer system — An interacting assembly of elements, including at least computer hardware and usually software, data, procedures, and people.
Computer system security — All of the technological safeguards and managerial procedures established and applied to computers and their networks (including related hardware, firmware, software, and data) to protect organizational assets and individual privacy.
Computer virus — Software that is written with malicious intent to cause annoyance or damage.
Computer-aided design (CAD) — A term used to describe the use of computer technology as applied to the design of solutions to problems and opportunities.
Computer-aided instruction (CAI) — The interactive use of a computer for instructional purposes. Software provides educational content to students and adjusts its presentation to the responses of the individual.
Computer-aided manufacturing (CAM) — The use of computer technology as applied to the manufacturing of goods and services.
Computer-Aided Software Engineering (CASE) tools — Tools that automate the design, development, operation, and maintenance of software.
Computer-Based Patient Record Institute (CPRI) — Healthcare Open Systems and Trials (HOST) — An industry organization that promotes the use of healthcare information systems, including electronic healthcare records.
Computing environment — The total environment in which an automated information system, network, or component operates. The environment includes physical, administrative, and personnel procedures as well as communication and networking relationships with other information systems.
COMSec — Communications security.
COMSec account — Administrative entity, identified by an account number, used to maintain accountability, custody, and control of COMSec material.
COMSec custodian — Person designated by proper authority to be responsible for the receipt, transfer, accounting, safeguarding, and destruction of COMSec material assigned to a COMSec account.
COMSec facility — Space used for generating, storing, repairing, or using COMSec material.
COMSec manager — Person who manages the COMSec resources of an organization.
COMSec material — Item designed to secure or authenticate telecommunications. COMSec material includes, but is not limited to, keys, equipment, devices, documents, firmware, or software that embodies or describes cryptographic logic, and other items that perform COMSec functions.
COMSec material control system (CMCS) — Logistics and accounting system through which COMSec material marked “CRYPTO” is distributed, controlled, and safeguarded. Included are the COMSec central offices of record, crypto-logistic depots, and COMSec accounts.
COMSec officer — The properly appointed individual responsible to ensure that COMSec regulations and procedures are understood and adhered to, that the COMSec facility is operated securely, that personnel are trained
Glossary in proper COMSec practices, and who advises on communications security matters. Only Department of State personnel will be appointed. Concealment systems — A method of keeping sensitive information confidential by embedding it in irrelevant data. Concentrator — A computer that consolidates the signals from any slower speed transmission lines into a single, faster line or performs the reverse function. Concurrent processing — The capability of a computer to share memory with several programs and simultaneously execute the instructions provided by each. Condensation — The process of reducing the volume of data managed without reducing the logical consistency of data. It is essentially different than compaction in that condensation is done at the record level whereas compaction is done at the system level. Condition test — A comparison of two data items in a program to determine whether one value is equal to, less than, or greater than the second value. Conditional branch — The alteration of the normal sequence of program execution following the text of the contents of a memory area. Conditional formatting — Highlights the information in a cell that meets some specified criteria. Conductor — A material that allows the easy transfer of electrons from one atom to another. Conference on Data Systems Languages (CODASYL) — A Department of Defense-sponsored group that studies the requirements and design specifications for a common business programming language. Confidence — Confidence in electronic interactions can be significantly increased by solutions that address the basic requirements of integrity, confidentiality, authentication, authorization, and access management or access control. Confidentiality — A concept that applies to data that must be held in confidence and describes that status or degree of protection that must be provided for such data about individuals as well as organizations. Confidentiality loss — The compromise of sensitive, restricted, or classified data or software. Configuration control — The process of controlling modifications to the system’s hardware, firmware, software, and documentation that provides sufficient assurance that the system is protected against the introduction 823
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® of improper modifications prior to, during, and after system implementation. Compare configuration management. Configuration management — The use of procedures appropriate for controlling changes to a system’s hardware, software, or firmware structure to ensure that such changes will not lead to a weakness or fault in the system. Configuration manager — The individual or organization responsible for configuration control or configuration management. Confinement — (1) Confining an untrusted program so that it can do everything it needs to do to meet the user’s expectation, but nothing else. (2) Restricting an untrusted program from accessing system resources and executing system processes. Common confinement techniques include DTE, least privilege, and wrappers. Connected mode — The state of user equipment switched on and an RRC connection established. Connection — A communication channel between two or more endpoints (e.g., terminal, server, etc.). Connectionless — The model of interconnection in which communication takes place without first establishing a connection. Sometimes (imprecisely) called datagram. Examples: Internet IP and OSI CLNP, UDP, ordinary postcards. Connection-oriented — The model of interconnection in which communication proceeds through three well-defined phases: connection establishment, data transfer, and connection release. Examples: X.25, Internet TCP and OSI TP4, ordinary telephone calls. Connectivity — The uninterrupted availability of information paths for the effective performance of C2 functions. Connectivity software — Enables a computer to “dial up” or connect to another computer. Consent — Explicit permission, given to a Web site by a visitor, to handle her personal information in specified ways. Web sites that ask users to provide personally identifiable information should be required to obtain “informed consent,” which implies that the company fully discloses its information practices prior to obtaining personal data or permission to use it. Consistency — Logical coherency among all integrated parts; also, adherence to a given set of instructions or rules. Console operator — Someone who works at a computer console to monitor operations and initiate instructions for efficient use of computer resources. 824
Glossary Constant — A value in a computer program that does not change during program execution. Construct — An object; especially a concept that is constructed or synthesized from simple elements. Consumer electronics — Any electronic/electrical devices, either AC- or battery-powered, that are not part of the facility infrastructure. Some examples are radios, televisions, electronic recording or playback equipment, PA systems, paging devices, and dictaphones (see also electronic equipment). Consumer — Traditionally, the ultimate user or consumer of goods, ideas, and services. However, the term also is used to imply the buyer or decision maker as well as the ultimate consumer. A mother buying cereal for consumption by a small child is often called the consumer although she may not be the ultimate user. Content — See completeness. Content of communication (CC) — Information exchanged between two or more users of a telecommunications service, excluding intercept related information (IRI). This includes information that may, as part of some telecommunications service, be stored by one user for subsequent retrieval by another. Content of communication link — A communication channel for HI3 information between a mediation function and an LEMF. Contention — Occurs during multiple access to a network in which the network capacity is allocated on a “first come, first served” basis. Contextual information — Information derived from the context in which an access is made (for example, time of day). Contingency plans — Plans for emergency response, backup operations, and post-disaster recovery maintained by a computer information processing facility as a part of its security program. Continuity — The uninterrupted availability of information paths for the effective performance of organizational function. Continuous-mode operation — Systems that are operational continuously, 24 hours a day, 7 days a week. Control — Any protective action, device, procedure, technique, or other measure that reduces exposures. Control break — A point during program processing at which some special processing event takes place. A change in the value of a control field within a data record is characteristic of a control break. 825
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® Control field — A field of data within a record used to identify and classify a record. Control logic — The specific order in which processing functions are carried out by a computer. Control signal — A computer-generated signal for the automatic control of machines and processes. Control statement — A command in a computer program that establishes the logical sequence of processing operations. Control structure — A program that contains a logical construct of sequences, repetitions, and selections. Control total — Accumulation of numeric data fields that are used to check the accuracy of the input, processing, or output data. Control unit — A component of the CPU that evaluates and carries out program processing and execution. Control zone — The space surrounding equipment that is used to process sensitive information and that is under sufficient physical and technical control to preclude an unauthorized entry or compromise. Controllability — The ability to control the situation following a failure. (Note that controllability has a different meaning when used in the context of testability analysis.). Controllable isolation — Controlled sharing in which the scope or domain of authorization can be reduced to an arbitrarily small set or sphere of activity. Controlled access area — A specifically designated area within a building where classified information may be handled, stored, discussed, or processed. Controlled cryptographic item (CCI) — Secure telecommunications or information handling equipment, or associated cryptographic components, which are unclassified but governed by a special set of control requirements. Controlled security mode — A system is operating in the controlled security mode when at least some users with access to the system have neither a security clearance nor a need-to-know for all classified material contained in the system. However, the separation and control of users and classified material on the basis, respectively, of security clearance and security classification are not essentially under operating system control as in the multilevel security mode. 826
Glossary Controlled sharing — The condition that exists when access control is applied to all users and components of a resource-sharing computer system. Controlled shipment — The transport of material from the point at which the destination of the material is first identified for a site, through installation and use, under the continuous 24-hour control of Secret cleared U.S. citizens or by DS-approved technical means and seal. Conversational program — A program that permits interaction between a computer and a user. Conversion — The process of replacing a computer system with a new one. Conversion rate — The percentage of customers who visit a Web site and actually buy something. Cookie — A cookie is a piece of text that a Web server can store on a user’s hard disk. Cookies allow a Web site to store information on a user’s machine and later retrieve it. The pieces of information are stored as namevalue pairs. Cooperative processing — The ability to distribute resources (i.e., programs, files, and databases) across the network. Coordination of benefits (COB) — A process for determining the respective responsibilities of two or more health plans that have some financial responsibility for a medical claim. Also called cross-over. COP — Cryptographic operation. Copy — An accurate reproduction of information contained on an original physical item, independent of the original physical item. Copyright — The author or artist’s right to control the copying of his or her work. CORBA — Common Object Request Broker Architecture, introduced in 1991 by the OMG, defined the Interface Definition Language (IDL) and the application programming interfaces (APIs) that enable client/server object interaction within a specific implementation of an Object Request Broker (ORB). CORBA security — The Object Management Group standard that describes how to secure CORBA environments. CORF — Comprehensive Outpatient Rehabilitation Facility. Corporate security policy — The set of laws, rules, and practices that regulate how assets, including sensitive information, are managed, protected, and distributed within a user organization. 827
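As an illustration of the Cookie entry above, the following Python sketch parses a made-up Cookie header into its name-value pairs using the standard-library http.cookies module; the header contents are invented for the example.

from http.cookies import SimpleCookie

cookie = SimpleCookie()
cookie.load("session_id=abc123; theme=dark")   # parse a Cookie header string

for name, morsel in cookie.items():
    print(name, "=", morsel.value)             # session_id = abc123, theme = dark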
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® Corrective action — The practice and procedure for reporting, tracking, and resolving identified problems, in both the software product and the development process. Their resolution provides a final solution to the identified problem. Corrective maintenance — The identification and removal of code defects. Correctness — The extent to which software is free from design and coding defects (i.e., fault free). Also, the extent to which software meets its specified requirements and user objectives. Corruption — Departure from an original, correct data file or correctly functioning system to an improper state. Cost/benefit analysis — Determination of the economic feasibility of developing a system on the basis of a comparison of the projected costs of a proposed system and the expected benefits from its operation. Cost-risk analysis — The assessment of the cost of potential risk of loss or compromise of data in a computer system without data protection versus the cost of providing data protection. COT — See chain of trust. COTS software — Commercial off-the-shelf software. Counterfeit software — Software that is manufactured to look like the real thing and sold as such. Counterfeits — Duplicates that are copied and packaged to resemble the original as closely as possible. The original producer’s trademarks and logos are reproduced in order to mislead consumers into believing that they are buying an original product. Countermeasure — The deployment of a set of security services to protect against a security threat. Coupling — The manner and degree of interdependence between software modules. Types include common environment coupling, content coupling, control coupling, data coupling, hybrid coupling, and pathological coupling. Courseware — Computer programs used to deliver educational materials within computer-assisted instruction systems. COV — Tests, coverage. Cover escrow — An extraction process method that needs both the original piece of information and the encoded one in order to extract the embedded data. 828
Glossary Cover medium — The medium in which we want to hide data; it can be an innocent looking piece of information for steganography, or an important medium that must be protected for copyright or integrity reasons. Covered entity — The specific types of organizations to which HIPAA applies, including providers, health plans (payers), and clearinghouses (who process nonstandard claims from providers and distribute them to the payers in their required formats — a process that will not be necessary if providers adopt the HIPAA transactions standards). Covered function — Functions that make an entity a health plan, a healthcare provider, or a healthcare clearinghouse. Also see Part II, 45 CFR 164.501. Covert channel — A channel of communication within a computer system, or network, that is not designed or intended to transfer information. Covert storage channel — A covert channel that involves the direct or indirect writing of a storage location by one process and the direct or indirect reading of the storage location by another process. Covert storage channels typically involve a finite resource that is shared by two subjects at different security levels. Covert timing channel — A covert channel in which one process signals information to another by modulating its own use of system resources in such a way that this manipulation affects the real response time observed by the second process. CPE — Customer premise equipment. CPRI — Computer-based Patient Record Institute, an organization formed in 1992 to promote adoption of healthcare information systems. Has created a Security Toolkit with sample policies and procedures. CPRI-HOST — See Computer-based Patient Record Institute — Healthcare Open Systems and Trials. CPT — See Current Procedural Terminology. CPU — The central processing unit; the brains of the computer. Cracker — The correct name for an individual who hacks into a networked computer system with malicious intentions. The term “hacker” is used interchangeably (although incorrectly) because of media hype of the word “hacker.” A cracker explores and detects weak points in the security of a computer networked system and then exploits these weaknesses using specialized tools and techniques. Crash-proof software — Utility software that helps save information if the system crashes and the user is forced to turn it off and then back on. 829
CRC — Cyclical redundancy check.
Credentials — Data that is transferred to establish the claimed identity of an entity.
Critical path — A project management tool; the duration of a project based on the sum of the individual tasks and their dependencies. The critical path determines the shortest period in which a project can be accomplished.
Critical software — A defined set of software components that have been evaluated and whose continuous operation has been determined essential for safe, reliable, and secure operation of the system. Critical software is composed of three elements: (1) safety-critical and safety-related software, (2) reliability-critical software, and (3) security-critical software.
Critical Success Factor (CSF) — A factor that is critical to the organization's success.
Criticality — The severity of the loss of either data or system functionality. Involves judicious evaluation of system components and data when a property or phenomenon undergoes unwanted change.
Criticality analysis — An analysis or assessment of a business function or security vulnerability based on its criticality to the organization's business objectives. A variety of criticality levels can be used in the analysis.
CRL — Certificate revocation list.
Cross certification — Practice of mutual recognition of another certification authority's certificates to an agreed level of confidence. Usually evidenced in a contract.
Crossover — The process within a genetic algorithm where portions of good outcomes are combined in the hope of creating an even better outcome.
Crossover error rate (CER) — A comparison metric for different biometric devices and technologies; the error rate at which FAR equals FRR. The lower the CER, the more accurate and reliable the biometric device.
Crosstalk — An unwanted transfer of energy from one communications channel to another.
Cross-Walk — See data mapping.
CRT (cathode-ray tube) — A monitor that looks like a television set.
CRUD (create, read, update, delete) — The four primary procedures or ways a system can manipulate information.
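As an illustration of the Crossover error rate (CER) entry above, the following Python sketch sweeps a decision threshold and reports the point where FAR and FRR are closest; the rate values are invented for illustration, not measurements from any real biometric device.

thresholds = [1, 2, 3, 4, 5]
far = [0.20, 0.10, 0.05, 0.02, 0.01]   # false accept rate falls as the threshold tightens
frr = [0.01, 0.02, 0.05, 0.10, 0.20]   # false reject rate rises as the threshold tightens

# The crossover point is where the two error rates are (closest to) equal.
cer_index = min(range(len(thresholds)), key=lambda i: abs(far[i] - frr[i]))
print("CER ~", far[cer_index], "at threshold", thresholds[cer_index])   # CER ~ 0.05 at threshold 3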
Glossary Cryptanalysis — The study of techniques for attempting to defeat cryptographic techniques and, more generally, information security services. Cryptanalyst — Someone who engages in cryptanalysis. CRYPTO — Marking or designator identifying COMSec keying material used to secure or authenticate telecommunications carrying classified or sensitive U.S. Government or U.S. Government-derived information. Crypto ignition key (CIK) — The device or electronic key used to unlock the secure mode of crypto equipment. Cryptographic access — The prerequisite to, and authorization for, access to crypto information, but does not constitute authorization for use of crypto equipment and keying material issued by the department. Cryptographic algorithm — A method of performing a cryptographic transformation (see cryptography) on a data unit. Cryptographic algorithms may be based on symmetric key methods (the same key is used for both encipher and decipher transformations) or on asymmetric keys (different keys are used for encipher and decipher transformations). Cryptographic checkvalue — Information that is derived by performing a cryptographic transformation on a data unit. Cryptographic key — A parameter used with a cryptographic algorithm to transform, validate, authenticate, encrypt, or decrypt data. Cryptographic material — All COMSec material bearing the marking “CRYPTO” or otherwise designated as incorporating cryptographic information. Cryptographic system — The documents, devices, equipment, and associated techniques that are used as a unit to provide a single means of encryption. Cryptography — The study of mathematical techniques related to aspects of information security such as confidentiality, data integrity, entity authentication, and data origin authentication. Cryptography is not the only means of providing information security services, but rather one set of techniques. The word itself comes from the Greek word kryptos, which means “hidden” or “covered.” Cryptography is a way to hide writing but yet retain a way to uncover it again. Cryptology — The science that deals with hidden, disguised, or encrypted communications. It embraces communications security and communications intelligence. Cryptolope — An IBM product that means “cryptographic envelope.” Cryptolope objects are used for secure, protected delivery of digital content using encryption and digital signatures. 831
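As an illustration of the Cryptographic algorithm and Cryptographic key entries above, the following Python sketch shows a symmetric-key transformation (the same key both enciphers and deciphers) using Fernet from the third-party cryptography package; this assumes that package is installed and is illustrative only, not a recommendation of any particular algorithm.

from cryptography.fernet import Fernet

key = Fernet.generate_key()        # the shared secret key
cipher = Fernet(key)

ciphertext = cipher.encrypt(b"confidential message")
plaintext = cipher.decrypt(ciphertext)   # only a holder of the same key can do this

print(plaintext)                   # b'confidential message'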
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® Cryptosystem — A general term referring to a set of cryptographic primitives used to provide information security services. CSI — Computer Security Institute. CSMA/CD — Carrier Sense Multiple Access/Collision Detect. CSNP — Complete Sequence Number PDU. CSPDN — Circuit-switched public data network. CSU/DSU — Channel service unit/digital service unit. CTS — Clear to send. CUD — Caller user data (X.25). Culture — The collective personality of a nation, society, or organization, encompassing language, traditions, currency, religion, history, music, and acceptable behavior, among other things. Current — A measure of how much electricity passes a point on a wire in a given time frame. Current is measured in amperes or amps. Current Dental Terminology (CDT) — A medical code set, maintained and copyrighted by the ADA, that has been selected for use in the HIPAA transactions. Current Procedural Terminology (CPT) — A medical code set, maintained and copyrighted by the AMA, that has been selected for use under HIPAA for non-institutional and non-dental professional transactions. Custodian — An individual who has possession of or is otherwise charged with the responsibility for safeguarding and accounting for classified information. Custom auto filter function — Allows one to hide all the rows in a list except those that match criteria specified. Customer relationship management (CRM) — CRM entails all aspects of service and sales interactions a company has with its customer. CRM often involves personalizing online experiences, help-desk software, and e-mail organizers. Customer-integrated system — An extension of a TPS that places technology in the hands of an organization’s customers and allows them to process their own transactions. Customer — The actual or prospective purchaser of products or services. Cybercops — A criminal investigator of online fraud or harassment. Cybercrime — A criminal offense that involves the use of a computer network. 832
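As an illustration of the Cyclical redundancy check (CRC) entry above, the following Python sketch uses the standard-library zlib.crc32 function: the sender appends the CRC value to the block, and the receiver recomputes it over the received data and compares the two values; a mismatch triggers retransmission.

import zlib

block = b"payload to protect"
sent_crc = zlib.crc32(block)                # transmitted along with the block

received_block = block                      # pretend this arrived over the WAN
ok = zlib.crc32(received_block) == sent_crc
print(ok)                                   # True unless the block was altered in transit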
Glossary Cyberspace — Refers to the connections and locations (even virtual) created using computer networks. The term “Internet” has become synonymous with this word. Cyberterrorist — One who seeks to cause harm to people or destroy critical systems or information. Cycle — One complete sequence of an event or activity. Often refers to electrical phenomena. One electrical cycle is a complete sine wave. Cyclical redundancy check (CRC) — A process used to check the integrity of a block of data. It provides an integrity check of the data before it is sent out into the wide area network. Its value depends on the hexadecimal value of the number of 1s in the data block. The transmitting device calculates the value and appends it to the data block; the receiving end makes a similar calculation and compares its results to the added character. If there is a difference, the recipient requests retransmission. DoD — Department of Defense. D2 — A rating provided by the NCSC for PC security subsystems that corresponds to the features of the C2 level. A computer security subsystem is any hardware, firmware, and software that are added to a computer system to enhance the security of the overall system. DA — Destination address. DAC — (1) Discretionary access control. (2) Dual attached concentrator. Damage — Loss, injury, or deterioration caused by the negligence, design, or accident of one person to another, in respect to the latter’s person or property; the harm, detriment, or loss sustained by reason of an injury. DARPA — Defense Advanced Research Projects Agency. DAS — Dual Attachment Station (FDDI, CDDI). DASS — Distributed authentication security service. Data — Raw facts and figures that are meaningless by themselves. Data can be expressed in characters, digits, and symbols, which can represent people, things, and events. Data administration — The function in an organization that plans for, oversees the development of, and monitors the information resource. Data administration subsystem — Helps manage the overall database environment by providing facilities for backup and recovery, security management, query optimization, concurrency control, and change management. Data aggregation — See Part II, 45 CFR 164.501. 833
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® Data classification — Data classification is the assigning a level of sensitivity to data as they are being created, amended, enhanced, stored, or transmitted. The classification of the data should then determine the extent to which the data need to be controlled/secured and is also indicative of its value in terms of its importance to the organization. Data communications — The transmission of data between more than one site through the use of public and private communications channels or lines. Data condition — A description of the circumstances in which certain data is required. Also see Part II, 45 CFR 162.103. Data contamination — A deliberate or accidental process or act that compromises the integrity of the original data. Data content — Under HIPAA, this is all the data elements and code sets inherent in a transaction, and not related to the format of the transaction. Also see Part II, 45 CFR 162.103. Data Content Committee (DCC) — See Designated Data Content Committee. Data Council — A coordinating body within HHS that has high-level responsibility for overseeing the implementation of the A/S provisions of HIPAA. Data definition language (DDL) — A set of instructions or commands used to define data for the data dictionary. A data definition language (DDL) is used to describe the structure of a database. Data dictionary — A document or listing defining all items or processes represented in a data flow diagram or used in a system. Data diddling — Changing data with malicious intent before or during input to the system. Data element — The smallest unit of data accessible to a database management system or a field of data within a file processing system. Data Encryption Standard (DES) — A private key cryptosystem published by the National Institutes of Standards and Technology (NIST). DES is a symmetric block cipher with a block length of 64 bits and an effective key length of 56 bits. DES has been used commonly for data encryption in the forms of software and hardware implementation. Data flow analysis — A graphic analysis technique to trace the behavior of program variables as they are initialized, modified, or referenced during program execution. 834
Glossary Data flow diagram — A descriptive modeling tool providing a graphic and logical description of a system. Data grids — Grids that provide shared data storage. Based on a Catalog where Logical File Names are associated to Physical File Names. Data integrity — The state that exists when automated information or data is the same as that in the source documents and has not been exposed to accidental or malicious modification, alteration, or destruction. Data Interchange Standards Association (DISA) — A body that provides administrative services to X12 and several other standards-related groups. Data item — A discrete representation having the properties that define the data element to which it belongs. See also data element. Data link — A serial communications path between nodes or devices without any intermediate switching nodes. Also, the physical two-way connection between such devices. Data-link control layer — Layer 2 in the SNA architectural model. Responsible for the transmission of data over a particular physical link. Corresponds roughly to the data-link layer of the OSI model. Data-link layer (DLL) — (1) Layer 2 of the OSI Reference Model. Provides reliable transit of data across a physical link. The data-link layer is concerned with physical addressing, network topology, line discipline, error notification, ordered delivery of frames, and flow control. The IEEE divided this layer into two sublayers: the MAC sublayer and the LLC sublayer. Sometimes simply called the link layer. Roughly corresponds to the datalink control layer of the SNA model. (2) A layer with the responsibility of transmitting data reliably across a physical link (cabling, for example) using a networking technology such as Ethernet. The DLL encapsulates data into frames (or cells) before it transmits it. It also enables multiple computer systems to share a single physical medium when used in conjunction with a media access control methodology such as CSMA/CD. Data manipulation language (DML) — A data manipulation language (DML) provides the necessary commands for all database operations, including storing, retrieving, updating, and deleting database records. Data mapping — The process of matching one set of data elements or individual code values to their closest equivalents in another set of them. This is sometimes called a cross-walk. Data mart — Subset of a data warehouse in which only a focused portion of the data warehouse is stored. Data mining — A methodology used by organizations to better understand their customers, products, markets, or any other phase of the business. 835
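As an illustration of the Data definition language (DDL) and Data manipulation language (DML) entries, the following Python sketch uses the standard-library sqlite3 module with an in-memory database: the CREATE TABLE statement is DDL (it describes structure), while INSERT and SELECT are DML (they store and retrieve records). The table and column names are made up for the example.

import sqlite3

conn = sqlite3.connect(":memory:")

conn.execute("CREATE TABLE account (id INTEGER PRIMARY KEY, owner TEXT)")   # DDL: define structure
conn.execute("INSERT INTO account (owner) VALUES (?)", ("alice",))          # DML: store a record
rows = conn.execute("SELECT id, owner FROM account").fetchall()             # DML: retrieve records

print(rows)   # [(1, 'alice')]
conn.close()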
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® Data model — A conceptual model of the information needed to support a business function or process. Data networking switches — Equipment that performs the functions of establishing and releasing connections on a data network. Data normalization — In data processing, a process applied to all data in a set that produces a specific statistical property. It is also the process of eliminating duplicate keys within a database. Useful as organizations use databases to evaluate various security data. Data object — Object or information of potential probative value that is associated with physical items. Data objects may occur in different formats without altering the original information. Data origin authentication — The corroboration that the entity responsible for the creation of a set of data is the one claimed. Data owner — See information owner. Data profiling — The use of information about your lifestyle and habits to provide a descriptive profile of your life. At its simplest, data profiling is used by marketing companies to identify you as a possible customer. At its most complex, data profiling can be used by security services to identify potential suspects for unlawful activity, or to highlight parts of a person’s life where other forms of surveillance may reveal something about their activities. In those states where the European Directive on Data Protection is in force, you have rights of access to any data held about you for the purposes of data processing or profiling. Data protection engineering — The methodology and tools used to design and implement data protection mechanisms. Data record — An identifiable set of data values treated as a unit, an occurrence of a schema in a database, or collection of atomic data items describing a specific object, event, or tuple (e.g., row of a table). Data representation — The manner in which data is characterized in a computer system and its peripheral devices. Data safety — Ensuring that (1) the intended data has been correctly accessed, (2) the data has not been manipulated or corrupted intentionally or accidentally, and (3) the data is legitimate. Data security — The protection of data from accidental or malicious modification, destruction, or disclosure. Data segment — A collection of data elements accessible to a database management system; a record in a file processing system. 836
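The data origin authentication entry above can be illustrated with a keyed hash (HMAC): only a party holding the shared key could have produced the tag, so the receiver gains corroboration about who created the data. A minimal sketch using Python's standard library; the key and message are placeholders.

import hmac, hashlib

shared_key = b"pre-shared secret"        # known only to sender and receiver
message = b"payroll batch 2024-06"

# Sender computes a tag over the data with the shared key
tag = hmac.new(shared_key, message, hashlib.sha256).hexdigest()

# Receiver recomputes the tag and compares in constant time; a match corroborates
# that a holder of the key created the data and that it arrived unaltered
ok = hmac.compare_digest(tag, hmac.new(shared_key, message, hashlib.sha256).hexdigest())
print(ok)   # True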
Glossary Data set — A named collection of logically related data items, arranged in a prescribed manner and described by control information to which the programming system has access. Data warehouse — A collection of integrated subject-oriented databases designed to support the Decision Support function, where each unit of data is relevant to some moment in time. The data warehouse contains atomic data and summarized data. Database — An integrated aggregation of data usually organized to reflect logical or functional relationships among data elements. Database Administrator (DBA) — (1) A person who is in charge of defining and managing the contents of a database. (2) The individual in an organization who is responsible for the daily monitoring and maintenance of the databases. The database administrator’s function is more closely associated with physical database design than the data administrator’s function is. Database Management System (DBMS) — The software that directs and controls data resources. Database-based workflow system — Stores the document in a central location and automatically asks the knowledge workers to access the document when it is their turn to edit the document. Data-dependent protection — The protection of data at a level that is commensurate with the sensitivity of the entire file. Datagram — Logical grouping of information sent as a network layer unit over a transmission medium without prior establishment of a virtual circuit. IP datagrams are the primary information units in the Internet. The terms “cell,” “frame,” “message,” “packet,” and “segment” are also used to describe logical information groupings at various layers of the OSI Reference Model and in various technology circles. Data-mining agent — An intelligent agent or application that operates in a data warehouse discovering information. Data-mining tool — Software tool used to query information in a data warehouse. Data-related concepts — (1) Clinical or medical code sets identify medical conditions and the procedures, services, equipment, and supplies used to deal with them. Nonclinical, nonmedical, or administrative code sets identify or characterize entities and events in a manner that facilitates an administrative process. HIPAA defines a data element as the smallest unit of named information. In X12 language, that would be a simple data element. But X12 also has composite data elements, which are not really data elements, but are groups of closely related data elements that can repeat 837
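To illustrate the datagram entry above: a UDP datagram is sent as a self-contained unit with no prior connection (virtual circuit) set-up. A minimal local sketch using Python's socket module; the port number is arbitrary.

import socket

receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)   # UDP = datagram service
receiver.bind(("127.0.0.1", 50007))

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# No connect() or handshake: each datagram is routed independently
sender.sendto(b"hello, datagram", ("127.0.0.1", 50007))

data, addr = receiver.recvfrom(1024)
print(data, "from", addr)

sender.close()
receiver.close()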
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® as a group. X12 also has segments, which are also groups of related data elements that tend to occur together, such as street address, city, and state. These segments can sometimes repeat, or one or more segments may be part of a loop that can repeat. For example, you might have a claim loop that occurs once for each claim, and a claim service loop that occurs once for each service included in a claim. An X12 transaction is a collection of such loops, segments, etc. that supports a specific business process, whereas an X12 transmission is a communication session during which one or more X12 transactions is transmitted. (2) Data elements and groups may also be combined into records that make up conventional files, or into the tables or segments used by DBMS. A designated code set is a code set that has been specified within the body of a rule. These are usually medical code sets. Many other code sets are incorporated into the rules by reference to a separate document, such as an implementation guide, that identifies one or more such code sets. These are usually administrative code sets. (3) Electronic data is data that is recorded or transmitted electronically, whereas non-electronic data would be everything else. Special cases would be data transmitted by fax and audio systems, which is, in principle, transmitted electronically, but which lacks the underlying structure usually needed to support automated interpretation of its contents. (4) Encoded data is data represented by some identification or classification scheme, such as a provider identifier or a procedure code. Non-encoded data would be more nearly freeform, such as a name, a street address, or a description. Theoretically, of course, all data, including grunts and smiles, is encoded. (5) For HIPAA purposes, internal data, or internal code sets, are data elements that are fully specified within the HIPAA implementation guides. For X12 transactions, changes to the associated code values and descriptions must be approved via the normal standards development process, and can only be used in the revised version of the standards affected. X12 transactions also use many coding and identification schemes that are maintained by external organizations. For these external code sets, the associated values and descriptions can change at any time and still be usable in any version of the X12 transactions that uses the associated code set. (6) Individually identifiable data is data that can be readily associated with a specific individual. Examples would be a name, a personal identifier, or a full street address. If life were simple, everything else would be non-identifiable data. But even if you remove the obviously identifiable data from a record, other data elements present can also be used to re-identify it. For example, a birth date and a zip code might be sufficient to re-identify half the records in a file. The re-identifiability of data can be limited by omitting, aggregating, or altering such data to the extent that the risk of it being re-identified is acceptable. (7) A specific form of data representation, such as an X12 transaction, will generally include some structural data that is needed to identify and interpret the transaction itself, as well as the business data 838
Glossary content that the transaction is designed to transmit. Under HIPAA, when an alternate form of data collection such as a browser is used, such structural or format-related data elements can be ignored as long as the appropriate business data content is used. (8) Structured data is data, the meaning of which can be inferred to at least some extent based on its absolute or relative location in a separately defined data structure. This structure could be the blocks on a form, the fields in a record, the relative positions of data elements in an X12 segment, etc. Unstructured data, such as a memo or an image, would lack such clues. DAU — User data protection data authentication. DBMS — Database management system. DCC — See Data Content Committee. DCE — Data circuit-terminating equipment. D-codes — A subset of the HCPCS Level II medical code set with a highorder value of “D” that has been used to identify certain dental procedures. The final HIPAA transactions and code sets rule states that these D-codes will be dropped from the HCPCS, and that CDT codes will be used to identify all dental procedures. DD — See data dictionary. DDE — See direct data entry. DDoS attacks — Distributed denial-of-service attacks. These are denial-ofservice assaults from multiple sources. DDP — Datagram Delivery Protocol (AppleTalk). DDR (1) — Dial-on-demand routing. DDR (2) — Dual data rate RAM. Dead drop — A method of secret information exchange where the two parties never meet. Deadlock — A condition that occurs when two users invoke conflicting locks in trying to gain access to a specific record or records. Deadlock — A situation in which computer processing is suspended because two or more devices or processes are each awaiting resources assigned to the other. Debugging — The process of correcting static and logical errors detected during coding. With the primary goal of obtaining an executable piece of code, debugging shares certain techniques and strategies with testing but differs in its usual ad hoc application and scope. DeCC — See Dental Content Committee. 839
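The two deadlock entries above describe parties each waiting on a resource held by the other. The sketch below notes the classic risky pattern and shows the common remedy of acquiring locks in one global order; it is illustrative only.

import threading

lock_a, lock_b = threading.Lock(), threading.Lock()

# Deadlock-prone pattern: thread 1 takes A then B while thread 2 takes B then A.
# If each grabs its first lock, both wait forever on the other's lock.

# Remedy: impose a single global ordering, so every thread acquires A before B.
def transfer(amount):
    with lock_a:          # always first
        with lock_b:      # always second
            print("moved", amount)

threads = [threading.Thread(target=transfer, args=(i,)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()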
Decentralized computing — An environment in which an organization splits computing power and locates it in functional business areas as well as on the desktops of knowledge workers.
Deceptive trade practices — Misleading or misrepresenting products or services to consumers and customers. In the United States, these practices are regulated by the Federal Trade Commission at the federal level and typically by the Attorney General’s Office of Consumer Protection at the state level.
Decipher — The ability to convert, by use of the appropriate key, enciphered text into its equivalent plaintext.
Decipherment — The reversal of a corresponding reversible encipherment.
Decision processing enterprise information portal — Provides knowledge workers with corporate information for making key business decisions.
Decision superiority — Better decisions arrived at and implemented faster than an opponent can react, or in a noncombat situation, at a tempo that allows the force to shape the situation or react to changes and accomplish its mission.
Decision support system (DSS) — A computer information system that helps executives and managers formulate policies and plans. This support system enables the users to access information and assess the likely consequences of their decisions through scenario projections.
Declassification — The determination that particular classified information no longer requires protection against unauthorized disclosure in the interest of national security. Such determination shall be by specific action or automatically after the lapse of a requisite period of time or the occurrence of a specified event. If such determination is by specific action, the material shall be so marked with the new designation.
Declassification event — An event which would eliminate the need for continued classification.
Decoding — Changing a digital signal into analog form or another type of digital signal. The opposite of encoding.
Decontrol — The authorized removal of an assigned administrative control designation.
Decrypt/Decipher/Decode — Decryption is the opposite of encryption and synonymous with decipher. It is the transformation of encrypted information back into a legible form. Essentially, decryption is about removing disguise and reclaiming the meaning of information.
Decryption — The conversion, through mechanisms or procedures, of encrypted data into its original form.
Decryption key — A piece of information, in a digitized form, used to recover the plaintext from the corresponding ciphertext by decryption.
Dedicated lines — Private circuits between two or more stations, switches, or subscribers.
Dedicated mode — The operation of a computer system such that the central computer facility, connected peripheral devices, communications facilities, and all remote terminals are used and controlled exclusively by the users or groups of users for the processing of particular types and categories of information.
Dedicated security mode — A system is operating in the dedicated security mode when the system and all of its local and remote peripherals are exclusively used and controlled by specific users or groups of users who have a security clearance and need-to-know for the processing of a particular category and type of classified material.
Dedicated server — A microcomputer used exclusively to perform a specific service, such as to process the network operating system.
Deduction — A method of logical reasoning which results in necessarily true statements. As an example, if it is known that every man is mortal and that George is a man, then it can be deduced that George is mortal. Deduction is equivalent to the logical rule of modus ponens.
Defect — Deficiency; imperfection; insufficiency; the absence of something necessary for completeness or perfection; a deficiency in something essential to the proper use for the purpose for which a thing is to be used; a manufacturing flaw, a design defect, or inadequate warning.
Defense-in-depth — Provision of several overlapping subsequent limiting barriers with respect to one safety or security threshold, so that the threshold can only be surpassed if all barriers have failed. The practice of layering defenses to provide added protection. Security is increased by raising the cost to mount the attack. This system places multiple barriers between an attacker and an organization’s business critical information resources. This strategy also provides natural areas for the implementation of intrusion-detection technologies.
Defense Information Infrastructure (DII) — The complete set of DoD information transfer and processing resources, including information and data storage, manipulation, retrieval, and display. More specifically, the DII is the shared or interconnected system of computers, communications, data, applications, security, people, training, and other support structure, serving the DoD’s local and worldwide information needs. It connects DoD mission support, command and control, and intelligence computers and users through voice, data, imagery, video, and multimedia services; and it provides information processing and value-added services to subscribers over the DISN and interconnected Service and Agency networks. Data, information, and user applications software unique to a specific user are not considered part of the DII.
Defense Information Systems Network (DISN) — A subelement of the Defense Information Infrastructure (DII), the DISN is the DoD’s consolidated worldwide enterprise level telecommunications infrastructure that provides the end-to-end information transfer network for supporting military operations. It is transparent to its users, facilitates the management of information resources, and is responsive to national security and defense needs under all conditions in the most efficient manner.
Defensive programming — Designing software that detects anomalous control flow, data flow, or data values during execution and reacts in a predetermined and acceptable manner. The intent is to develop software that correctly accommodates design or operational shortcomings; for example, verifying a parameter or command through two diverse sources before acting upon it. A sketch of this idea follows the entries below.
Degauss — To erase or demagnetize magnetic recording media (usually tapes) by applying a variable, alternating current (AC) field.
Degraded-mode operation — Maintaining the availability of the more critical system functions, despite failures, by dropping the less critical functions. Also referred to as graceful degradation.
Degree (of a relation) — The number of attributes or columns of a relation.
DEL — Delivery and operation; delivery.
Delegated Accrediting Authority (DAA) — Official with the authority to formally assume responsibility for operating a system at an acceptable level of risk. Synonymous with designated accrediting authority and designated approval authority.
Delegation — The notion that an object can issue a request to another object in response to a request. The first object therefore delegates the responsibility to the second object. Delegation can be used as an alternative to inheritance.
Delphi — A forecasting method where several knowledgeable individuals make forecasts, and a forecast is derived by a trained analyst from a weighted average.
Demand aggregation — Combines purchase requests from multiple buyers into a single large order, which justifies a discount from the business.
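The defensive programming entry above mentions verifying a parameter or command through two diverse sources before acting on it. A minimal sketch of that idea; the command names and the second confirmation source are invented for illustration.

VALID_COMMANDS = {"open_valve", "close_valve"}          # source 1: static whitelist

def confirmed_by_operator_console(command: str) -> bool:
    # source 2: placeholder for an independent confirmation channel
    return command == "close_valve"

def execute(command: str) -> None:
    # Detect anomalous data values and react in a predetermined, safe way
    if command not in VALID_COMMANDS:
        raise ValueError(f"rejected unknown command: {command!r}")
    if not confirmed_by_operator_console(command):
        print("command not confirmed by second source; ignoring")
        return
    print("executing", command)

execute("close_valve")           # passes both checks
try:
    execute("launch")            # rejected instead of failing unpredictably
except ValueError as err:
    print(err)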
Demand-mode operation — Systems that are used periodically on-demand; for example, a computer-controlled braking system in a car.
Demodulation — The reconstruction of an original signal from the modulated signal received at a destination device.
Denial of service (DoS) — The unauthorized prevention of authorized access to resources or the delaying of time-critical operations.
Denial-of-service (DoS) attack — The attacker floods a Web site with so many electronic message requests for service that it slows down or crashes the network or computer targeted.
Dental Content Committee (DeCC) — An organization hosted by the American Dental Association that maintains the data content specifications for dental billing. The Dental Content Committee has a formal consultative role under HIPAA for all transactions affecting dental healthcare services.
Dependability — That property of a computer system such that reliance can be justifiably placed on the service it delivers. The service delivered by a system is its behavior as it is perceived by its user(s); a user is another system or human that interacts with the former.
Depth — (1) Penetration layer achieved during, or the degree of intensity of, an IO attack. (2) The most profound or intense part or stage. The severest or worst part. The degree of richness or intensity.
Derivative classification — A determination that information is in substance the same as information currently classified, coupled with the designation of the level of classification.
DES — Data Encryption Standard.
Descriptive attribute — The intrinsic characteristics of an object.
Descriptor — The text defining a code in a code set. Also see Part II, 45 CFR 162.103.
Design — The aspect of the specification process that involves the prior consideration of the implementation. Design is the process that extends and modifies an analysis specification. It accommodates certain qualities including extensibility, reusability, testability, and maintainability. Design also includes the specification of implementation requirements such as user interface and data persistence.
Design and Implementation — A phase of the systems development life cycle in which a set of functional specifications produced during systems analysis is transformed into an operational system for hardware, software, and firmware.
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® Design review — The quality assurance process in which all aspects of a system are reviewed publicly. Designated Accrediting Authority (DAA) — Official with the authority to formally assume responsibility for operating a system at an acceptable level of risk. Synonymous with designated approval authority and delegated accrediting authority. Designated Approving Authority (DAA) — The official who has the authority to decide on accepting the security safeguards prescribed for an AIS or that official who may be responsible for issuing an accreditation statement that records the decision to accept those safeguards. Designated Code Set — A medical code set or an administrative code set that HHS has designated for use in one or more of the HIPAA standards. Designated Data Content Committee, or Designated DCC — An organization that HHS has designated for oversight of the business data content of one or more of the HIPAA-mandated transaction standards. Designated Record Set — See Part II, 45 CFR 164.501. Designated Standard — A standard that HHS has designated for use under the authority provided by HIPAA. Designated Standard Maintenance Organization (DSMO) — See Part II, 45 CFR 162.103. Desktop computer — The most popular choice for personal computing needs. Desktop publishing — The use of computer technology equipped with special hardware, firmware, and software features to produce documents that look equivalent to those printed by a professional print company. Destruction — Irretrievable loss of data file, or damage to hardware or software. Detect — To discover threat activity within information systems, such as initial intrusions, during the threat activity or post-activity. Providing prompt awareness and standardized reporting of attacks and other anomalous external or internal system and network activity. Developer — The organization that develops the IS. DHCP — Dynamic Host Configuration Protocol. DHHS — See HHS. Dial-up — Access to switched network, usually through a dial or pushbutton telephone. DIAP — Defense-wide IA program (U.S. DoD). 844
Glossary DICOM — See Digital Imaging and Communications in Medicine. Dielectric — A non-conducting or insulating substance that resists passage of electric current, allowing electrostatic induction to act across it, as in the insulating medium between the plates of a condenser. Diffraction — Signal loss as a result of variations in the terrain the signal crosses. Digimark — A company that creates digital watermarking technology used to authenticate, validate, and communicate information within digital and analog media. Digit — A single numeral representing an arithmetic value. Digital — A mode of transmission where information is coded in binary form for transmission on the network. Digital audio tape (DAT) — A magnetic tape technology. DAT uses 4-mm cassettes capable of backing up anywhere between 26 and 126 bytes of information. Digital cash — An electronic representation of cash. Also called e-cash. Digital certificates — A certificate identifying a public key to its subscriber, corresponding to a private key held by that subscriber. It is a unique code that typically is used to allow the authenticity and integrity of communication can be verified. Digital code signing — The process of digitally signing computer code so that its integrity remains intact and it cannot be tampered with. Digital divide — The fact that different peoples, cultures, and areas of the world or within a nation do not have the same access to information and telecommunications technologies. Digital economy — Marked by the electronic movement of all types of information, not limited to numbers, words, graphs, and photos but also including physiological information such as voice recognition and synthesization, biometrics (a person’s retina scan and breath, for example), and 3-D holograms. Digital fingerprint — A characteristic of a data item, such as a cryptographic checkvalue or the result of performing a one-way hash function on the data, that is sufficiently peculiar to the data item that it is computationally infeasible to find another data item that possesses the same characteristics. Digital Imaging and Communications in Medicine (DICOM) — A standard for communicating images, such as x-rays, in a digitized form. This standard could become part of the HIPAA claim attachments standards. 845
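The digital fingerprint entry above describes a value, such as a one-way hash, for which it is computationally infeasible to find another data item producing the same result. A short sketch with Python's standard hashlib module:

import hashlib

document = b"Quarterly report v1"
fingerprint = hashlib.sha256(document).hexdigest()
print(fingerprint)

# Any change to the data item yields a completely different fingerprint,
# so a stored fingerprint can later reveal tampering.
tampered = hashlib.sha256(b"Quarterly report v2").hexdigest()
print(fingerprint == tampered)   # False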
Digital modem — A piece of equipment that joins a digital phone line to a piece of communication equipment, which may be a phone or a PC. Such equipment allows testing, conditioning, timing, interfacing, etc. However, it does not do what a modem does: namely, convert digital signals from machines into analog signals which can be carried on analog phone lines. The term “digital modem” is thus somewhat of a misnomer.
Digital PABX — An automatic switching system. No operator is needed to complete the call. In the original PBX system, operators were sometimes needed to complete the calls. Also called private automatic branch exchange.
Digital Rights Management (DRM) — Focuses on security and encryption to prevent unauthorized copying and limit distribution to only those who pay. This is considered first-generation DRM. Second-generation DRM covers description, identification, trading, protection, monitoring, and tracking of all forms of rights usages over both tangible and intangible assets, including management of rights holders’ relationships. It is important to note that DRM manages all rights, not just those involving digital content. Additionally, it is important to note that DRM is the “digital management of rights” and not the “management of digital rights.” That is, DRM manages all rights, not only the rights applicable to permissions over digital content.
Digital signature — The act of electronically affixing an encrypted message digest to a computer file or message in which the originator is then authenticated to the recipient.
Digital Signature Standard (DSS) — The National Security Agency’s standard for verifying an electronic message.
Digital subscriber line (DSL) — A technology that dramatically increases the digital capacity of ordinary telephone lines (the local loops) into the home or office. DSL speeds are tied to the distance between the customer and the telephone company’s central office.
Digitize — Converting an analog or continuous signal into a series of 1s and 0s, i.e., into a digital format.
DII — Defense information infrastructure.
DIMM — Dual inline memory module.
Diode — A device that conducts electricity in one direction only. Sometimes referred to as a PN (positive-negative) device because it is made of a single semiconductive crystal with a positive terminal and a negative terminal.
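To illustrate the digital signature entry above: the originator signs a message with a private key, and the recipient verifies it with the matching public key, confirming both origin and integrity. A sketch assuming the third-party cryptography package is installed; Ed25519 is used here purely for brevity and is not the DSA algorithm named in the Digital Signature Standard.

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

private_key = Ed25519PrivateKey.generate()     # kept by the originator
public_key = private_key.public_key()          # distributed, e.g., in a certificate

message = b"wire $100 to account 42"
signature = private_key.sign(message)

try:
    public_key.verify(signature, message)      # raises if message or signature changed
    print("signature valid: origin and integrity confirmed")
except InvalidSignature:
    print("signature invalid")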
Glossary Direct access — The method of reading and writing specific records without having to process all preceding records in a file. Direct access storage device (DASD) — A data storage unit on which data can be accessed directly without having to progress through a serial file such as a magnetic tape file. A disk unit is a direct access storage device. Direct current — A flow of electricity always in the same direction. Direct data entry (DDE) — Under HIPAA, this is the direct entry of data that is immediately transmitted into a health plan’s computer. Also see Part II, 45 CFR 162.103. Direct organization — A method of file organization under which records are located on the basis of their keys and associated addresses on the storage media. Direct Treatment Relationship — See Part II, 45 CFR 164.501. Direction of Arrival (DoA) — The electromagnetic waves arrive at the directional antenna and are received more readily from one direction than from another. The antenna needs to be aligned with the direction of arrival. Directory — A table specifying the relationships between items of data. Sometimes a table (index) giving the addresses of data. Directory engine search — Organizes listings of Web sites into hierarchical lists. Directory service — A service provided on a computer network that allows one to look up addresses (and perhaps other information such as public key certificates) based upon usernames. DISA — See Data Interchange Standards Association. Disaster notification fees — The fee a recovery site vendor usually charges when the customer notifies them that a disaster has occurred and the recovery site is required. The fee is implemented to discourage false disaster notifications. Disaster recovery cost curve — Charts (1) the cost to the organization due to the unavailability of information and technology, and (2) the cost to the organization of recovering from a disaster over time. Disaster recovery plan — A detailed process for recovering information or an IT system in the event of a catastrophic disaster such as a fire or flood. 847
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® Disclosure — The release, transfer, provision of access to, or divulging in any other manner of information outside the entity holding the information. (See use, in contrast.) Disclosure history — Under HIPAA, this is a list of any entities that have received personally identifiable healthcare information for uses unrelated to treatment and payment. Discrepancy reports — A listing of items that have violated some detective control and require further investigation. Discrete cosine transform (DCT) — used in JPEG compression, the discrete cosine transform helps separate the image into parts of differing importance based on the image’s visual quality; this allows for large compression ratios. The DCT function transforms data from a spatial domain to a frequency domain. Discretionary Access Control (DAC) — A means of restricting access to objects based on the identity of subjects and groups to which they belong. The controls are discretionary in the sense that a subject with certain access permission is capable of passing that permission on to another subject. Disintermediation — The use of the Internet as a delivery vehicle whereby intermediate players in a distribution channel can be bypassed. Disk address — The positioned location of a data record on magnetic disk storage. Disk duplexing — This refers to the use of two controllers to drive a disk subsystem. Should one of the controllers fail, the other is still available for disk I/O. Software applications can take advantage of both controllers to simultaneously read and write to different drives. Disk (disc) mirroring — Disk mirroring protects data against hardware failure. In its simplest form, a two-disk subsystem would be attached to a host controller. One disk serves as the mirror image of the other. When data is written to it, it is also written to the other disk. Both disks will contain exactly the same information. If one fails, the other can supply the user data without problem. Disk operating system (DOS) — Software that controls the execution of programs and may provide system services as resource allocation. Disk optimization software — Utility software that organizes information on the hard disk in the most efficient way. Diskette — A flexible disk storage medium most often used with microcomputers; also called a floppy disk. 848
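A minimal sketch of the Discretionary Access Control entry above: access is checked against per-object permissions keyed by subject identity, and a subject holding a permission may pass it on to another subject. The subject, object, and permission names are invented for illustration.

# object -> subject -> set of permissions
acl = {"payroll.xls": {"alice": {"read", "write"}}}

def allowed(subject, obj, permission):
    return permission in acl.get(obj, {}).get(subject, set())

def grant(granter, grantee, obj, permission):
    # Discretionary: a subject that holds a permission may pass it on
    if not allowed(granter, obj, permission):
        raise PermissionError(f"{granter} cannot delegate {permission} on {obj}")
    acl[obj].setdefault(grantee, set()).add(permission)

print(allowed("bob", "payroll.xls", "read"))    # False
grant("alice", "bob", "payroll.xls", "read")    # alice passes her permission to bob
print(allowed("bob", "payroll.xls", "read"))    # True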
Glossary Distinguishing identifier — Data that unambiguously distinguishes an entity in the authentication process. Such an identifier shall be unambiguous at least within a security domain. Distortion — An undesired change in an image or signal. A change in the shape of an image resulting from imperfections in an optical system, such as a lens. Distributed application — A set of information processing resources distributed over one or more open systems that provides a well-defined set of functionality to (human) users, to assist a given (office) task. Distributed Component Object Model (DCOM) — A protocol that enables software components to communicate directly over a network. Developed by Microsoft and previously called “Network OLE,” DCOM is designed for use across multiple network transports, including Internet Protocols such as HTTP. Distributed computing — The distribution of processes among computing components that are within the same computer or different computers on a shared network. Distributed Computing Environment (DCE) — An architecture of standard programming interfaces, conventions, and server functionalities (e.g., naming, distributed file system, remote procedure call) for distributing applications transparently across networks of heterogeneous computers. Promoted and controlled by the Open Software Foundation (OSP), a consortium led by Hewlett-Packard, Digital Equipment Corp, and IBM. Distributed database — A database management system with the ability to effectively manage data that is distributed across multiple computers on a network. Distributed denial-of-service (DDoS) attack — Multiple computers flooding a Web site with so many requests for service that it slows down or crashes. Distributed environment — A set of related data processing systems in which each system has its own capacity to operate autonomously but has some applications that are executed at multiple sites. Some of the systems may be connected with teleprocessing links into a network with each system serving as a node. Distributed system — A multi-work station, or terminal system where more than one workstation shares common system resources. The work stations are connected to the control unit/data storage element through communication lines. Dithering — Creating the illusion of new colors and shades by varying the pattern of dots in an image. Dithering is also the process of converting an image with a certain bit depth to one with a lower bit depth. 849
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® DITSCAP — Department of Defense Information Technology Security Certification and Accreditation Process. Diversity — Using multiple different means to perform a required function or solve the same problem. Diversity can be implemented in software and hardware. DIX — Digital-Intel-Xerox. DLC — Data link control. DLCI — Data link connection identifier in Frame Relay. DME — Durable medical equipment. DMEPOS — Durable medical equipment, prosthetics, orthotics, and supplies. DMERC — See medicare durable medical equipment regional carrier. DMZ — Commonly, it is the network segment between the Internet and a private network. It allows access to services from the Internet and the internal private network, while denying access from the Internet directly to the private network. DNA SCP — Digital Network Architecture Session Control Protocol (DECnet). DNIC — Data Network Identification Code (X.25). DNS (Domain Name System, Service, or Server) — (1) A hierarchical database that is distributed across the Internet and allows names to be resolved to IP addresses and vice versa to locate services such as Web sites and e-mail. (2) An Internet service that translates domain names into IP addresses. Document — Any recorded information, regardless of its physical form or characteristics, including, without limitation, written or printed material; data processing cards and tapes; maps; charts; paintings; drawings; engravings; sketches; working notes and papers; reproductions of such things by any means or process; and sound, voice, or electronic recordings in any form. Documentation — The written narrative of the development, workings, and operation of a program or system. DoD Information Technology Security Certification and Accreditation Process (DITSCAP) — The standard DoD process for identifying information security requirements, providing security solutions, and managing IS security activities. 850
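To illustrate the DNS entry above, the Python standard library can ask the resolver to translate a name into IP addresses and, where a reverse record exists, back again. The host name is only an example, and results depend on the local resolver and network connectivity.

import socket

# Forward lookup: name -> IP address(es)
addresses = {info[4][0] for info in socket.getaddrinfo("www.example.com", None)}
print(addresses)

# Reverse lookup: IP address -> name (works only where a PTR record is published)
for addr in addresses:
    try:
        print(addr, "->", socket.gethostbyaddr(addr)[0])
    except socket.herror:
        print(addr, "-> no reverse record")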
DoD Trusted Computer System Evaluation Criteria (TCSEC) — Document containing basic requirements and evaluation classes for assessing degrees of effectiveness of hardware and software security controls built into an IS. This document, DoD 5200.28 STD, is frequently referred to as the Orange Book.
Domain — The set of objects that a subject (user or process) has the ability to access.
Domain and type enforcement — A confinement technique in which an attribute called a domain is associated with each subject and another attribute called a type is associated with each object. A matrix specifies whether a particular mode of access to objects of a type is granted or denied to subjects in a domain.
Domain dimension — The dimension dealing with the structural aspects of the system involving broad, static patterns of internal behavior.
Domain Name — The name used to identify an Internet host.
Domain Name Server — See DNS.
Domain name system (DNS) — The distributed name and address mechanism used in the Internet.
Domain of Interpretation (DOI) — The DOI defines payload formats, the situation, exchange types, and naming conventions for certain information such as security policies or cryptographic algorithms. It is also used to interpret the ISAKMP payloads.
DoS — (1) Abbreviation for a denial-of-service attack, a type of attack on a network that is designed to bring the network to its knees by flooding it with useless traffic. For all known DoS attacks, there are software fixes that system administrators can install to limit the damage caused by the attacks. (2) In general, any malicious action that denies availability of a system to users.
Downgrading — The determination that particular classified information requires a lesser degree of protection or no protection against unauthorized disclosure than currently provided. Such determination shall be by specific action or automatically after lapse of the requisite period of time or the occurrence of a specified event. If such determination is by specific action, the material shall be so marked with the new designation.
Downlink frequencies — Frequencies used in the transmission link reaching from a satellite to the ground.
Downtime — A period of time in which the computer is not available for operation.
DPT — Tests, depth.
DQDB — Distributed queue dual bus (SMDS).
DR — Designated router.
Draft Standard for Trial Use (DSTU) — An archaic term for any X12 standard that has been approved since the most recent release of X12 American National Standards. The current equivalent term is “X12 standard.”
DRAM — Dynamic random access memory.
DRG — Diagnosis related group.
DRP — Disaster recovery plan.
DS-0 — Digital signal, level 0. A DS-0 is a voice-grade channel of 64 kbps.
DS-1 — Digital signal, level 1 (1.544 Mbps).
DS-3 — Digital signal, level 3 (45 Mbps).
DSA — Digital signature algorithm.
DSAP — Destination service access point (LLC).
DSE — Data switching equipment.
DSL — Digital subscriber line.
DSMO — See Designated Standard Maintenance Organization.
DSR — Data set ready.
DSS — Digital signature standard; see FIPS PUB 186.
DSS — (1) Digital Subscriber Signaling System 1. (2) Digital Signature Standard.
DSS shell — A set of programs that can be used for constructing a decision support system.
DSSA — Distributed system security architecture; developed by Digital Equipment Corporation.
DSTU — See Draft Standard for Trial Use.
DSU — Data service unit.
DTE — (1) Domain and type enforcement. (2) Data terminal equipment.
DTR — Data terminal ready.
DUAL — Diffusing update algorithm (EIGRP).
Dual control — A procedure that uses two or more entities (usually persons) operating in concert to protect a system resource, such that no single entity acting alone can access that resource.
Glossary Dual tone multifrequency (DTMF) — A term describing push-button or touch-tone dialing. When you push a button, it makes a tone that is actually a combination of two tones, one high frequency and one low frequency. Due care — Managers and their organizations have a duty to provide for information security to ensure that the type of control, the cost of control, and the deployment of control are appropriate for the system being managed. Dumb terminal — A device used to interact directly with the end user where all data is processed on a remote computer. A dumb terminal only gathers and displays data; it has no processing capability. Dump — The contents of a file or memory that are output as listings. These listing can be formatted. Duplex — Communications systems or equipment that can simultaneously carry information in both directions between two points. Also used to describe redundant equipment configurations (e.g., duplexed processors). DVS — Life-cycle support, development security. Dynamic analysis — Exercising the system being assessed through actual execution; includes exercising the system functionally (traditional testing) and logically through techniques such as failure assertion, structural testing, and statistical-based testing. Major system components must have been built before dynamic analysis can be performed. Dynamic binding — The responsibility for executing an action on an object resides within the object itself. The same message can elicit a different response, depending upon the receiver. Dynamic dimension — The dimension concerned with the non-static, process-related properties of the system. Dynamic Host Configuration Protocol (DHCP) — DHCP is an industry standard protocol used to dynamically assign IP addresses to network devices. Dynamic processing — The technique of swapping jobs in and out of computer memory. This technique can be controlled by the assignment priority and the number of time slices allocated to each job. Dynamically phased array (PA) — Type of radio antenna used in certain satellite and wireless communications. This small, flat antenna mounts on the side of a building or on a rooftop. It has an array of chip-based radio receivers, which lock in on the target transmission frequency on a dynamic basis. Also called a “pizza box antenna.” EAL — Evaluation assurance level. 853
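The dynamic binding entry above, in which the same message elicits a different response depending on the receiver, is the everyday polymorphism of object-oriented languages. A small Python sketch with invented class names:

class Printer:
    def handle(self, document):          # same message name...
        return f"printing {document}"

class Archiver:
    def handle(self, document):          # ...different behavior in this receiver
        return f"archiving {document}"

# The call site sends one message; which code runs is bound at run time
for receiver in (Printer(), Archiver()):
    print(receiver.handle("audit.log"))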
EAP — Extensible Authentication Protocol.
Early token release — Technique used in Token Ring networks that allows a station to release a new token onto the ring immediately after transmitting, instead of waiting for the first frame to return. This feature can increase the total bandwidth on the ring. See also Token Ring.
Earth stations — Ground terminals that use antennas and other related electronic equipment designed to transmit, receive, and process satellite communications.
Ease — Amount of time and skill level required to either penetrate or restore function. Measures the degree of difficulty.
Eavesdropping — The unauthorized interception of information-bearing emanations through methods other than wiretapping.
EBCDIC — Extended Binary Coded Decimal Interchange Code.
EBGP — Exterior Border Gateway Protocol.
ebXML — A set of technical specifications for business documents built around XML designed to permit enterprises of any size and in any geographical location to conduct business over the Internet.
EC — See electronic commerce.
ECC — Elliptic curve cryptography.
Echo — The display of characters on a terminal output device as they are entered into the system.
Echo hiding — Relies on limitations in the human auditory system by embedding data in a cover audio signal. Using changes in delay and relative amplitude, two types of echoes are created, which allows for the encoding of 1s and 0s.
Ecological dimension — The dimension dealing with the interface properties of a system; inflow and outflow of forces in a system.
Economy — Scalable system packages ease the application of economy. Space, weight, or time constraints limit the quantity or capability of systems that can be deployed. Information requirements must be satisfied by consolidating similar functional facilities, integrating commercial systems into tactical information works, or accessing a different information system.
EDI — Electronic data interchange (computer-to-computer transactions).
EDI translator — A software tool for accepting an EDI transmission and converting the data into another format, or for converting a non-EDI data file into an EDI format for transmission.
EDIFACT — See United Nations Rules for Electronic Data Interchange for Administration, Commerce, and Transport (UN/EDIFACT).
Edit — The process of inspecting a data field or element to verify the correctness of its content.
EDP auditor — A professional whose responsibility is to certify the validity, reliability, and integrity of all aspects of the computer information system environment of an organization; also known as IS auditor, CIS auditor, or IT auditor.
Education — IT security education focuses on developing the ability and vision to perform complex, multidisciplinary activities and the skills needed to further the IT security profession. Education activities include research and development to keep pace with changing technologies and threats.
EEPROM — Electrically erasable programmable read-only memory.
Effective date — Under HIPAA, this is the date that a final rule becomes effective, which is usually 60 days after it is published in the Federal Register.
Effectiveness — Efficiency, potency, or capability of an act in producing a desired (or undesired) result. The power of the protection or the attack.
Efficiency — Capability, competency, or productivity. The efficiency of an act is a measure of the work required to achieve a desired result.
EFT — See electronic funds transfer.
E-government — The application of E-commerce technologies in government agencies.
EGP — Exterior Gateway Protocol.
EHNAC — See Electronic Healthcare Network Accreditation Commission.
EIA — Electronic Industries Association.
EIGRP — Enhanced Interior Gateway Routing Protocol.
EIN — Employer identification number.
Electromagnetic emanations — Signals transmitted as radiation through the air or conductors.
Electromagnetic interference (EMI) — Electromagnetic waves emitted by a device.
Electron — A light, subatomic particle that carries a negative charge.
Electronic attack (EA) — Use of EM or directed energy to attack personnel, facilities, or equipment to destroy/degrade combat capability.
Electronic bill presentation and payment (EBPP) — A system that sends people their bills over the Internet and gives them an easy way to pay.
Electronic bulletin board — An application program that lets users contribute messages via e-mail that can be routed or shared with users.
Electronic business XML — See ebXML.
Electronic catalog — Designed to present products to customers via the Internet.
Electronic Code Book (ECB) — A basic encryption method that provides privacy but not authentication.
Electronic commerce (E-commerce) — A broad concept that covers any trade or commercial transaction that is effected via electronic means; this would include such means as facsimile, telex, EDI, Internet, and the telephone. For the purpose of this book, the term is limited to those commercial transactions involving computer-to-computer communications whether utilizing an open or closed network.
Electronic Communications Privacy Act of 1986, PL 99-508 (ECPA) — Extends the Privacy Act of 1974 to all forms of electronic communication, including e-mail.
Electronic data interchange (EDI) — A process whereby such specially formatted documents as an invoice can be transmitted from one organization to another. A system allowing for inter-corporate commerce by the automated electronic exchange of structured business information.
Electronic data vaulting — Electronic vaulting protects information from loss by providing automatic and transparent backup of valuable data over high-speed phone lines to a secure facility.
Electronic document file — A magnetic storage area that contains electronic images of papers and other communications documents.
Electronic Frontier Foundation — A foundation established to address social and legal issues arising from the impact on society of the increasingly pervasive use of computers as the means of communication and information distribution.
Electronic funds transfer (EFT) — The process of moving money between accounts via computer.
Electronic Healthcare Network Accreditation Commission (EHNAC) — An organization that tests transactions for consistency with the HIPAA requirements, and that accredits healthcare clearinghouses.
Glossary Electronic job market — Consists of employers using the Internet to advertise for and screen potential employees. Electronic journal — A computerized log file summarizing, in chronological sequence, the processing activities and events performed by a system. The log file is usually maintained on magnetic storage media. Electronic mail (e-mail) — Formal or informal communications electronically transmitted or delivered. Electronic media claims (EMC) — This term usually refers to a flat file format used to transmit or transport claims, such as the 192-byte UB-92 Institutional EMC format and the 320-byte Professional EMC NSF. Electronic office — An office that relies on word processing, computer systems, and communications technologies to support its operations. Electronic portfolio — Collection of Web documents used to support a stated purpose such as writing skills. Electronic protect (EP) — Actions to protect personnel, facilities, and equipment from enemy/friendly EW that degrades or destroys own-force combat capability. Electronic Remittance Advice (ERA) — Any of several electronic formats for explaining the payments of healthcare claims. Electronic signature — Any technique designed to provide the electronic equivalent of a handwritten signature to demonstrate the origin and integrity of specific data. Digital signatures are an example of electronic signatures. Electronic warfare (EW) — Action involving the use of electromagnetic (EM) and directed energy to control the EM spectrum or to attack the enemy. Electronic warfare support (ES) — That division of EW involving actions tasked by, or under direct control of, an operational commander to search for, intercept, identify, and locate sources of intentional and unintentional radiated electromagnetic energy for the purpose of immediate threat recognition. Thus, electronic warfare support provides information required for immediate decisions involving EW operations and other tactical actions such as threat avoidance, targeting and homing. ES data can be used to produce signals intelligence . Element management functions — A set of functions for management of network elements on an individual basis. These are basically the same functions as those supported by the corresponding local terminals. Element manager — Provides a package of end-user functions for management of a set of closely related types of network elements. 857
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® E-mail software (electronic mail software) — Enables people to electronically communicate with other people by sending and receiving e-mail. Emanation security — The protection that results from all measures designed to deny unauthorized persons access to valuable information that might be derived from interception and analysis of compromising emanations. Embedded message — In steganography, it is the hidden message that is to be put into the cover medium. Embedding — To cause to be an integral part of a surrounding whole. In steganography and watermarking, embedding refers to the process of inserting the hidden message into the cover medium. EMC — (1) Electromagnetic conductance. (2) See Electronic media claims. EMF — Electromagnetic field. EMI — Electromagnetic interference. Emission security (EMSec) — The protection resulting from all measures taken to deny unauthorized persons information of value that might be derived from interception and from an analysis of compromising emanations from systems. EMP — Electromagnetic pulse. EMR — Electronic medical record. Encapsulated Security Payload — An IPSec protocol that provides confidentiality, data origin authentication, data integrity services, tunneling, and protection from replay attacks. Encapsulated subsystem — A collection of procedures and data objects that is protected in a domain of its own so that the internal structure of a data object is accessible only to the procedures of the encapsulated subsystem and that those procedures may be called only at designated domain entry points. Encapsulated subsystem, protected subsystem and protected mechanisms of the TCB are terms that may be used interchangeably. Encapsulation — The technique used by layered protocols in which a layer adds header information to the protocol data unit (PDI) from the layer above. Encipher — The process of converting plaintext into unintelligible form by means of a cipher system. Encipherment — The cryptographic transformation of data (see cryptography) to produce ciphertext. 858
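To illustrate the embedding entry above, the classic least-significant-bit technique hides message bits in the low-order bits of cover samples; here plain bytes stand in for image pixels. This is a toy sketch of the idea, not a robust steganographic scheme.

def embed(cover: bytes, message: bytes) -> bytes:
    bits = [(byte >> i) & 1 for byte in message for i in range(7, -1, -1)]
    if len(bits) > len(cover):
        raise ValueError("cover medium too small for message")
    stego = bytearray(cover)
    for i, bit in enumerate(bits):
        stego[i] = (stego[i] & 0xFE) | bit      # overwrite the least-significant bit
    return bytes(stego)

def extract(stego: bytes, length: int) -> bytes:
    bits = [b & 1 for b in stego[: length * 8]]
    return bytes(
        sum(bit << (7 - i) for i, bit in enumerate(bits[n * 8 : n * 8 + 8]))
        for n in range(length)
    )

cover = bytes(range(64))                        # stand-in for pixel data
stego = embed(cover, b"hi")
print(extract(stego, 2))                        # b'hi'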
Glossary Enclave — An environment that is under the control of a single authority and has a homogeneous security policy, including personnel and physical security. Local and remote elements that access resources within an enclave must satisfy the policy of the enclave. Enclaves can be specific to an organization or a mission and may also contain multiple networks. They may be logical, such as an operational area network (OAN) or be based on physical location and proximity. Encoding — The process of converting data into code or analog voice into a digital signal. Encrypt/Encipher/Encode — Encryption is the transformation of information into a form that is impossible to read unless you have a specific piece of information, which is usually referred to as the “key.” The purpose is to keep information private from those who are not intended to have access to it. To encrypt is essentially about making information confusing and hiding the meaning of it. Encrypted text — Data that is encoded into an unclassified form using a nationally accepted form of encoding. Encryption — The use of algorithms to encode data in order to render a message or other file readable only for the intended recipient. Encryption algorithm — A set of mathematically expressed rules for encoding information, thereby rendering it unintelligible to those who do not have the algorithm decoding key. Encryption key — A special mathematical code that allows encryption hardware/software to encode and then decipher an encrypted message. End entity — An end entity can be considered an end user, a device such as a router or a server, a process, or anything that can be identified in the subject name of a public key certificate. End entities can also be thought of as consumers of the PKI-related services. End system — An OSI system that contains application processes capable of communication through all seven layers of OSI protocols. Equivalent to Internet host. Endorsed cryptographic products list — A list of products that provide electronic cryptographic coding (encrypting) and decoding (decrypting), and which have been endorsed for use for classified or sensitive unclassified U.S. Government or government-derived information during its transmission. Endorsed TEMPEST products list — A list of commercially developed and commercially produced TEMPEST telecommunications equipment 859
that NSA has endorsed, under the auspices of the NSA Endorsed TEMPEST Products Program, for use by government entities and their contractors to process classified U.S. Government information.
End-to-end encipherment — Encipherment of data within or at the source end system, with the corresponding decipherment occurring only within or at the destination end system.
End-to-end encryption — The encryption of information at the point of origin within the communications network, with decryption postponed until the final destination point.
Enrollment — The initial process of collecting biometric data from a user and then storing it in a template for later comparison.
Enterprise application integration (EAI) — The process of developing an IT infrastructure that enables employees to implement new or changing business processes.
Enterprise application integration middleware (EAI middleware) — Allows organizations to develop different levels of integration from the information level to the business process level.
Enterprise information portal (EIP) — Allows knowledge workers to access company information via a Web interface.
Enterprise resource planning (ERP) — The method of getting and keeping an overview of every part of the business, so that production and selling of goods and services will be coordinated to contribute to the company's goals.
Enterprise root — A certificate authority (CA) that grants itself a certificate and creates a subordinate CA. The root CA gives the subordinate CAs their certificates, but the subordinate CAs can grant certificates to users.
Enterprise software — A suite of software that includes (1) a set of common business applications; (2) tools for modeling how the organization works; and (3) development tools for building applications unique to the organization.
Entity — Either a subject (an active element that operates on information or the system state) or an object (a passive element that contains or receives information).
Entry barrier — A product or service feature that customers have come to expect from companies.
Entity class — A concept — typically people, places, or things — about which information can be stored and then identified with a unique key called the primary key.
Glossary Entity-Relationship (ER) diagram — A graphic method of representing entity classes and their relationships. Entrapment — The deliberate planting of apparent flows in a system to invite penetrations. ENV — (1) Protection profile evaluation, security environment. (2) Security target evaluation, security environment. Environment (system) — The aggregate of procedures, conditions, and objects that affects the development, operation, and maintenance of a system. Note: Environment is often used with qualifiers such as computing environment, application environment, or threat environment, which limit the scope being considered. EOB — Explanation of Benefits. EOMB — Explanation of Medicare Benefits, Explanation of Medicaid Benefits, or Explanation of Member Benefits. EOT — End of transmission. EPROM — Erasable programmable read-only memory. EPSDT — Early and Periodic Screening, Diagnosis, and Treatment. ERA — See electronic remittance advice. Erasable programmable read-only memory (EPROM) — A memory chip that can have its circuit logic erased and reprogrammed. ERISA — The Employee Retirement Income Security Act of 1974. ERP — Emergency response plan. Error — The difference between a computed, observed, or measured value or condition and the true, specified, or theoretically correct value or condition. Error of commission — An error that results from making a mistake or doing something wrong. Error of omission — An error that results from something that was not done. Error rate — A measure of the quality of circuits or equipment. The ratio of erroneously transmitted information to the total sent (generally computed per million characters sent). ESF — Extended Super Framing (T1/E1). ESP — Encapsulated Security Payload protocol. 861
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® Espionage — The practice or employment of spies; the practice of watching the words and conduct of others, to make discoveries, as spies or secret emissaries; secret watching. This category of computer crime includes international spies and their contractors who steal secrets from defense, academic, and laboratory research facility computer systems. It includes criminals who steal information and intelligence from law enforcement computers, and industrial espionage agents who operate for competitive companies or for foreign governments who are willing to pay for the information. What has generally been known as industrial espionage is now being called competitive intelligence. A lot of information can be gained through “open source” collection and analysis without ever having to break into a competitor’s computer. This information gathering is also competitive intelligence, although it is not as ethically questionable as other techniques. ET — Exchange termination. E-tailor — An Internet retail site. ETC — User data protection export to outside TSF control. Ethernet — A LAN technology that is in wide use today utilizing CSMA/CD (Carrier Sense Multiple Access/Collision Detection) to control access to the physical medium (usually a category 5 Ethernet cable). Normal throughput speeds for Ethernet are 10 Mbps, 100 Mbps, and 1 Gbps. Ethernet card — The most common type of network interface card. Ethical (whitehat) hacker — A computer security professional who is hired by a company to break into its computer system. Ethics — The principles and standards that guide people’s behavior toward others. ETSI — European Telecommunication Standards Institute. Evaluated Products List (EPL) — A list of equipment, hardware, software, and firmware that have been evaluated against, and found to be technically compliant, at a particular level of trust, with the DoD TCSEC by the NCSC. The EPL is included in the National Security Agency Information Systems Security Products and Services Catalogue, which is available through the Government Printing Office. Evaluation — The inspection and testing of specific hardware and software products against accepted Information Assurance/Information Security standards. Evaluation assurance level — One of seven levels defined by the Common Criteria that represent the degree of confidence that specified functional security requirements have been met by a commercial product. 862
Glossary Evaluation criteria — See IT security evaluation criteria. Evaluation methodology — See IT Security evaluation methodology. Event — A trigger for an activity. Evolution checking — Testing to ensure the completeness and consistency of a software product at different levels of specification when that product is a refinement or elaboration of another. Evolutionary program strategies — Generally characterized by design, development, and deployment of a preliminary capability that includes provisions for the evolutionary addition of future functionality and changes, as requirements are further defined. Exception Report — A manager report that highlights abnormal business conditions. Usually, such reports prompt management action or inquiry. Exchange authentication information — Information exchanged between a claimant and a verifier during the process of authenticating a principal. Exchange type — Exchange type defines the number of messages in an ISAKMP exchange and the ordering of the used payload types for each of these messages. Through this arrangement of messages and payloads, security services are provided by the exchange type. Executive Information System (EIS) — A very interactive IT system that allows the user to first view highly summarized information and then choose how to see greater detail, which may be an alert to potential problems or opportunities. Expand — To increase in extent, number, volume, or scope. Expandability — Refers to how easy it is to add features or functions to a system. Expansion bus — Moves information from the CPU and RAM to all other hardware devices such as a microphone or printer. Expansion card — A circuit board that is inserted into an expansion slot. Expansion slot — A long skinny pocket on the motherboard into which an expansion card can be inserted. Expert system — The application of computer-based artificial intelligence in areas of specialized knowledge. Explanation module — The part of an expert system where the “why” information, supplied by the domain expert, is stored to be accessed by knowledge workers who want to know why the expert systems asked a question or reached a conclusion. 863
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® Exposure — The potential loss to an area due to the occurrence of an adverse event. Extended Binary-Coded Decimal Interchange Code (EBCDIC) — A data representation and code system based on the use of an 8-bit byte. Extended SuperFrame — A new version of the SuperFrame that allows for more frames to be grouped together. In a T1 circuit, each of the 24 DS0 channels are sampled every 125 microseconds and 8 bits are taken from each. If you multiply the 8 bits by the 24 channels, you get 192 bits in a chain, and then add one bit for timing, you get 193 total bits in one frame. Twelve frames comprise the SuperFrame. For the Extended SuperFrame, we double the number of frames, making the total 24. Extensibility — A property of software such that new kinds of object or functionality can be added to it with little or no effect to the existing system. Extensible Authentication Protocol (EAP) — An IETF standard means of extending authentication protocols, such as CHAP and PAP, to include additional authentication data; for example, biometric data. eXtensible Markup Language (XML) — Designed to enable the use of SGML on the World Wide Web, XML is a regular markup language that defines what you can do (or what you have done) in the way of describing information for a fixed class of documents (like HTML). XML goes beyond this and allows you to define your own customized markup language. It can do this because it is an application profile of SGML. XML is a metalanguage, a language for describing languages. External certificate authority (ECA) — An agent that is trusted and authorized to issue certificates to approved vendors and contractors for the purpose of enabling secure interoperability with DoD entities. Operating requirements for ECAs must be approved by the DoD CIO, in coordination with the DoD Comptroller and the DoD General Counsel. External information — Describes the environment surrounding the organization. Extraction engine — Smart software with a vocabulary of job-related skills that allows it to recognize and catalog terms in a scannable resume. Extranet — An intranet that is restricted to an organization and certain outsiders, such as customers and suppliers. Facsimile (fax) — A technology used to send document images over telecommunications lines. Fading — Signal disruption caused by multipath signals and heavy rains. 864
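The Extended SuperFrame entry walks through the T1 framing arithmetic: 24 channels times 8 bits plus 1 framing bit gives 193 bits per frame, with 12 frames in a SuperFrame and 24 in an Extended SuperFrame. The sketch below simply reproduces that arithmetic, together with the resulting 1.544 Mbps line rate under the standard assumption of 8000 frames per second.

```python
# Worked T1 framing arithmetic from the Extended SuperFrame entry.
CHANNELS = 24             # DS0 channels in a T1
BITS_PER_SAMPLE = 8       # bits taken from each channel every 125 microseconds
FRAMING_BITS = 1          # one framing/timing bit added per frame
FRAMES_PER_SECOND = 8000  # one frame every 125 microseconds (assumed sampling rate)

frame_bits = CHANNELS * BITS_PER_SAMPLE + FRAMING_BITS  # 193 bits per frame
superframe_bits = 12 * frame_bits                       # SuperFrame: 12 frames
esf_bits = 24 * frame_bits                               # Extended SuperFrame: 24 frames
line_rate_bps = frame_bits * FRAMES_PER_SECOND           # 1,544,000 bps, i.e., 1.544 Mbps

print(frame_bits, superframe_bits, esf_bits, line_rate_bps)   # 193 2316 4632 1544000
```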
Glossary Fail operational — The system must continue to provide some degree of service if it is not to be hazardous; it cannot simply shut down — for example, an aircraft flight control system. See degraded-mode operation. Fail safe — The automatic termination and protection of programs or other processing operations when a hardware, software, or firmware failure is detected in a computer system. Fail safe/secure — (1) A design wherein the component/system, should it fail, will fail to a safe/secure condition. (2) The system can be brought to a safe/secure condition or state by shutting it down; for example, the shutdown of a nuclear reactor by a monitoring and protection system. Fail soft — The selective termination of nonessential processing affected by a hardware, software, or firmware failure in a computer system. Failure — Failing to or inability of a system, entity, or component to perform its required function, according to specified performance criteria, due to one or more fault conditions. Three categories of failure are commonly recognized: (1) incipient failures are failures that are about to occur; (2) hard failures are failures that result in a complete shutdown of a system; and (3) soft failures are failures that result in a transition to degraded-mode operations or a fail operational status. Failure access — Unauthorized and usually inadvertent access to data resulting from a hardware, software, or firmware failure in the computer system. Failure control — The methodology used to detect and provide fail-safe or fail-soft recovery from hardware, software, or firmware failure in a computer system. Failure minimization — Actions designed or programmed to reduce failure possibilities to the lowest rates possible. Fair Credit Reporting Act (P.L. 91-508) — A federal law that gives individuals the right of access to credit information pertaining to them and the right to challenge such information. Fair Use Doctrine — Allows the use of copyrighted material in certain situations. Fallback procedures — Predefined operations (manual or automatic) invoked when a fault or failure is detected in a system. Fall-through logic — Predicting which way a program will branch when an option is presented. It is an optimized code based on a branch prediction. False acceptance rate (FAR) — The percentage of imposters incorrectly matched to a valid user’s biometric. False rejection rate (FRR) is the percentage of incorrectly rejected valid users. 865
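The false acceptance rate and false rejection rate entries above are simple ratios over test attempts. The sketch below shows one plausible way to compute them from raw counts; the function and variable names are illustrative, not taken from any biometric product.

```python
def false_acceptance_rate(impostor_accepts: int, impostor_attempts: int) -> float:
    """Fraction of impostor attempts that were wrongly matched to a valid user."""
    return impostor_accepts / impostor_attempts

def false_rejection_rate(valid_rejects: int, valid_attempts: int) -> float:
    """Fraction of legitimate attempts that were wrongly rejected."""
    return valid_rejects / valid_attempts

# Example: 3 of 1000 impostor attempts accepted, 25 of 1000 valid attempts rejected.
far = false_acceptance_rate(3, 1000)
frr = false_rejection_rate(25, 1000)
print(f"FAR = {far:.1%}, FRR = {frr:.1%}")   # FAR = 0.3%, FRR = 2.5%
```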
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® FAQ(s) — Frequently asked questions. Fast Ethernet — Any of a number of 100-Mbps Ethernet specifications. Fast Ethernet offers a speed increase ten times that of the 10BaseT Ethernet specification, while preserving such qualities as frame format, MAC mechanisms, and MTU. Such similarities allow the use of existing 10BaseT applications and network management tools on Fast Ethernet networks. Based on an extension to the IEEE 802.3 specification. Compare with Ethernet. FAU — Security audit functional class. Fault — (1) A defect that results in an incorrect step, process, data value, or mode/state. (2) A weakness of the system that allows circumventing protective controls. Fault tolerance — Built-in capability of a system to provide continued correct execution in the presence of a limited number of hardware or software faults. FBI — Federal Bureau of Investigation. FC — Frame Control (Token Ring). FCC — Federal Communications Commission. FCO — Communication functional class. FCPA — Foreign Corrupt Practices Act. FCS — Frame check sequence. FCS — Cryptographic support functional class. FD — Feasible distance (EIGRP). FDA — Food and Drug Administration. FDD — Floppy disk drive. FDDI — Fiber Distributed Data Interface is a Token Ring type of technology that utilizes encoded light pulses transmitted via fiber-optic cabling for communications between computer systems. It supports a data rate of 100 Mbps and is more likely to be used as a LAN backbone between servers. It has redundancy built in so that if a host on the network fails, there is an alternate path for the light signals to take to keep the network up. FDM — Frequency division multiplexing. FDP — User data protection functional class. 866
Feasibility study — An investigation of the legal, political, social, operational, technical, economic, and psychological effects of developing and implementing a system.
Feature analysis — The step of ASR in which the system captures the users' words as spoken into a microphone, eliminates any background noise, and converts the digital signals of speech into phonemes (syllables).
Feature creep — Occurs when developers add extra features that were not part of the initial requirements.
FECN — Forward explicit congestion notification.
FedCIRC — The U.S. federal government Computer Incident Response Center; managed by the General Services Administration (GSA).
Federal Computer Fraud Act — The Counterfeit Access Device and Computer Fraud and Abuse Act of 1986 outlaws unauthorized access to the federal government's computers and financial databases as protected under the Right to Financial Privacy Act of 1978 and the Fair Credit Reporting Act of 1971. This Act is an amendment of the 1984 Federal Computer Fraud Act.
Feistel network — A Feistel network generates blocks of keystream from blocks of the message itself, through multiple rounds of groups of permutations and substitutions, each dependent on transformations of a key (see the sketch following these entries).
FEP — Front-end processor.
FERPA — Family Educational Rights and Privacy Act.
Fetch protection — A system-provided restriction to prevent a program from accessing data in another user's segment of storage.
FFIEC — Federal Financial Institutions Examination Council.
FFS — Fee-for-service.
FI — See Medicare Part A Fiscal Intermediary.
FIA — Identification and authentication functional class.
Fiber Distributed Data Interface (FDDI) — LAN standard, defined by ANSI X3T9.5, specifying a 100-Mbps token-passing network using fiber-optic cable, with transmission distances of up to two kilometers. FDDI uses a dual-ring architecture to provide redundancy.
Fiber optic — A strand of very pure, very clear glass that can carry more information over longer distances.
FIC — Federal Interest Computer.
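The Feistel network entry is terse; most descriptions agree on the round structure sketched below (split the block into halves, run one half through a key-dependent round function, XOR the result into the other half, then swap). The round function, key schedule, and block size here are deliberately trivial stand-ins, not a real cipher.

```python
# Minimal Feistel structure on a 64-bit block; the round function is a toy, NOT a real cipher.

def round_function(half: int, subkey: int) -> int:
    # Stand-in for a real key-dependent round function.
    return (half * 0x9E3779B1 ^ subkey) & 0xFFFFFFFF

def feistel_encrypt(block: int, subkeys: list) -> int:
    left, right = block >> 32, block & 0xFFFFFFFF
    for k in subkeys:                       # each round: swap halves, mix one half in
        left, right = right, left ^ round_function(right, k)
    return (left << 32) | right

def feistel_decrypt(block: int, subkeys: list) -> int:
    left, right = block >> 32, block & 0xFFFFFFFF
    for k in reversed(subkeys):             # undo the rounds in reverse key order
        right, left = left, right ^ round_function(left, k)
    return (left << 32) | right

subkeys = [0x11111111, 0x22222222, 0x33333333, 0x44444444]
ciphertext = feistel_encrypt(0x0123456789ABCDEF, subkeys)
assert feistel_decrypt(ciphertext, subkeys) == 0x0123456789ABCDEF
```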
Fiche — A sheet of photographic film containing multiple microimages; a form of computer output microfilm.
Fidelity — Accuracy, exact correspondence to truth or fact, the degree to which a system or information is distortion-free.
Field — A basic unit of data, usually part of a record that is located on an input, storage, or output microfilm.
Field Definition Record (FDR) — A record of field definition. A list of the attributes that define the type of information that can be entered into a data field.
FIFO — First in, first out.
File — A basic unit of data records organized on a storage medium for convenient location, access, and updating.
File creation — The building of master or transaction files.
File format dependence — A factor in determining the robustness of a piece of stegoed media. Converting an image from one format to another will usually render the embedded message unrecoverable.
File inquiry — The selection of records from files and immediate display of their contents on a terminal output device.
File maintenance — The changing of a master file by changing the contents of existing records, adding new records, or deleting old records.
File protection — The aggregate of all processes and procedures established in a computer system and designed to inhibit unauthorized access, contamination, or elimination of a file.
File transfer — The process of copying a file from one computer to another over a network.
File Transfer Protocol (FTP) — The Internet protocol (and program) used to transfer files between hosts.
File updating — The posting of transaction data to master files or maintenance of master files through record additions, changes, or deletions.
Filter — A process or device that screens incoming information for definite characteristics and allows a subset of that information to pass through.
Financial cybermediaries — Internet-based companies that make it easy for one person to pay another over the Internet.
Financial EDI (FEDI) — The use of EDI for payments.
Finger — (1) A program (and a protocol) that displays information about a particular user, or all users, logged on a local system or on a remote system. It typically shows full name, last log-in time, idle time, terminal line, and terminal location (where applicable). It may also display plan and project files left by the user. (2) The traceroute or finger commands run on the source machine (attacking machine) to gain more information about the attacker.
Fingerprint — A form of marking that embeds a unique serial number.
FIPS — Federal Information Processing Standard.
Firewall — A device that forms a barrier between a secure and an open environment. Usually the open environment is considered hostile. The most notable open system is the Internet (see the screening sketch following these entries).
Firmware — Software or computer instructions that have been permanently encoded into the circuits of semiconductor chips.
FISMA — Federal Information Security Management Act.
FISSEA — The Federal Information Systems Security Educator's Association is an organization whose members come from federal agencies, industry, and academic institutions devoted to improving the IT security awareness and knowledge within the federal government and its related external workforce.
Fixed wireless access (FWA) — Replaces the last mile from the central office to the customer. This process usually consists of a pair of digital radio transmitters placed on rooftops, one at the central office and one at the users' site. These systems usually operate at the 38 GHz portion of the spectrum. Also known as wireless fiber (because of the high speeds of throughput) and as fixed wireless local loop.
Flame — To express strong opinion or criticism of something, usually as a frank inflammatory statement in an electronic message.
Flat file — A collection of records containing no data aggregates, nested, or repeated data items, or groups of data items.
Flat-panel display — Thin, lightweight monitor that takes up much less space than a CRT.
Flexibility — Responsiveness to change, specifically as it relates to user information needs and operational environment.
Flooded transmission — A transmission in which data is sent over every link in the network.
Floppy disk — A flexible removable disk used for magnetic storage of data, programs, or information.
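The Filter and Firewall entries describe screening traffic so that only an allowed subset passes. The sketch below shows that idea as an ordered rule list with a default deny; the rule format and port choices are invented for illustration and do not reflect any particular firewall product.

```python
# Illustrative packet-screening sketch; the rule format is invented, and the default is deny.

RULES = [
    {"action": "allow", "protocol": "tcp", "dest_port": 443},  # permit HTTPS
    {"action": "allow", "protocol": "tcp", "dest_port": 25},   # permit SMTP
    {"action": "deny",  "protocol": "tcp", "dest_port": 23},   # explicitly block Telnet
]

def screen(packet: dict) -> str:
    """Return the action of the first matching rule, or 'deny' when nothing matches."""
    for rule in RULES:
        if (packet.get("protocol") == rule["protocol"]
                and packet.get("dest_port") == rule["dest_port"]):
            return rule["action"]
    return "deny"

print(screen({"protocol": "tcp", "dest_port": 443}))  # allow
print(screen({"protocol": "udp", "dest_port": 53}))   # deny (no rule matches)
```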
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® FLR — Lifecycle support, flaw remediation. FLS — Protection of the TSF, failure secure. FLT — Resource utilization, fault tolerance. FMBS — Frame-Mode Bearer Service. FMECA — Failure mode effects criticality analysis; an IA analysis technique that systematically reviews all components and materials in a system or product to determine cause(s) of their failures, the downstream results of such failures, and the criticality of such failures as accident precursors. FMECA can be performed on individual components (hardware, software, and communications equipment) and integrated at the system level. See IEC 60812 (1985). FMT — Security management functional class. Force — A group of platforms and sites organized for a particular purpose. Foreign Corrupt Practices Act — The act covers an organization’s system of internal accounting control and requires public companies to make and keep books, records, and accounts that, in reasonable detail, accurately and fairly reflect the transactions and disposition of company assets and to devise and maintain a system of sufficient internal accounting controls. This act was amended in 1988. Foreign Government Information — (1) Information provided to the United States by a foreign government or international organization of governments in the expectation, express or implied, that the information is to be kept in confidence. (2) Information, requiring confidentiality, produced by the United States pursuant to a written joint arrangement with a foreign government or international organization of governments. A written joint arrangement may be evidenced by an exchange of letters, a memorandum of understanding, or other written record of the joint arrangement. Foreign key — A primary key of one file (relation) that appears in another file (relation). Forensic examination — After a security breach, the process of assessing, classifying and collecting digital evidence to assist in prosecution. Standard crime-scene standards are used. Forensic image copy — An exact copy or snapshot of the contents of an electronic medium. Forgery — A false, fake, or counterfeit datum, document, image, or act. Formal analysis — The use of rigorous mathematical techniques to analyze a solution. The algorithms may be analyzed for numerical properties, efficiency, and correctness. 870
Glossary Formal design — The part of a software design written using a formal notation. Formal method — (1) A software specification and production method, based on discrete mathematics, that comprises: a collection of mathematical notations addressing the specification, design, and development processes of software production, resulting in a well-founded logical inference system in which formal verification proofs and proofs of other properties can be formulated, and a methodological framework within which software can be developed from the specification in a formally verifiable manner. (2) The use of mathematical techniques in the specification, design, and analysis of computer hardware and software. Formal notation — The mathematical notation of a formal method. Formal proof — The discharge of a proof obligation by the construction of a complete mathematical proof. Formal review — A type of review typically scheduled at the end of each activity or stage of development to review a component of a deliverable or, in some cases, a complete deliverable or the software product and its supporting documentation. Formal specification — The part of the software specification written using a formal notation. Format — The physical arrangement of data characters, fields, records, and files. Formerly restricted data — Information removed from the restricted data category upon determination jointly by the Department of Energy and Department of Defense that such information relates primarily to the military utilization of atomic weapons and that such information can be adequately safeguarded as classified defense information subject to the restrictions on transmission to other countries and regional defense organizations that apply to restricted data. Formula Translation (Fortran) — A high-level programming language developed primarily to translate mathematical formulas into computer code. Formulary — A technique for permitting the decision to grant or deny access to be determined dynamically at access time rather than at the time the access list is created. Fortran — See Formula translation. Forum of Incident Response and Security Teams (FIRST) — A unit of the Internet Society that coordinates the activities of worldwide Computer Emergency Response Teams, regarding security-related incidents and information sharing on Internet security risks. 871
Fourier transform — An image processing tool that is used to decompose an image into its constituent parts or to view a signal in either the time or frequency domain.
Fourth-Generation Language (4GL) — A computer language that is easy to learn and use, and often associated with rapid applications development.
FPA — Federal Privacy Act.
FPR — Privacy functional class.
FPT — Protection of the TSF functional class.
FRAD — Frame Relay access device.
Fragile watermark — A watermark that is designed to prove authenticity of an image or other media. A fragile watermark is destroyed, by design, when the cover is manipulated digitally. If the watermark is still intact, then the cover has not been tampered with. Fragile watermark technology could be useful in authenticating evidence or ensuring the accuracy of medical records or other sensitive data.
Fragment — A piece of a packet. When a router is forwarding an IP packet to a network with a Maximum Transmission Unit smaller than the packet size, it is forced to break up that packet into multiple fragments. These fragments will be reassembled by the IP layer at the destination host.
Fragmentation — The process in which an IP datagram is broken into smaller pieces to fit the requirements of a given physical network. The reverse process is termed "reassembly" (see the sketch following these entries).
Frame Relay — A switching interface that operates in packet mode. Generally regarded as the replacement for X.25.
Framework — Defines a set of application programming interface (API) classes for developing applications and for providing system services to those applications.
Free electrons — Electrons that are not attached to an atom or molecule. Also known as static electricity.
Free space and atmospheric attenuation — Defined by the loss the signal undergoes traveling through the atmosphere. Changes in air density and absorption by atmospheric particles are principal reasons for affecting the microwave signal in free air space.
Frequency — The rate at which an electromagnetic waveform alternates, usually measured in Hertz.
Frequency diversity — A form of backup used to protect a radio signal. A second signal continually operates on a separate frequency and assumes the load when the regular channel fails.
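The Fragment and Fragmentation entries describe splitting a datagram into pieces no larger than the outgoing link's MTU and reassembling them at the destination. The sketch below shows only that splitting and reassembly idea on a byte string, tracking a fragment offset; it deliberately ignores real IP header fields such as identification and flags.

```python
def fragment(payload: bytes, mtu: int) -> list:
    """Split a payload into (offset, chunk) pieces no larger than the MTU."""
    return [(offset, payload[offset:offset + mtu]) for offset in range(0, len(payload), mtu)]

def reassemble(pieces) -> bytes:
    """Rebuild the original payload; arrival order does not matter because offsets are sorted."""
    return b"".join(chunk for _, chunk in sorted(pieces))

datagram = b"x" * 3000                        # larger than the link can carry in one frame
fragments = fragment(datagram, mtu=1480)      # 1480 is a typical Ethernet IP payload size
assert reassemble(reversed(fragments)) == datagram   # works even if fragments arrive out of order
```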
Frequency division multiple access (FDMA) — FDMA is the allocation of specific channels within a defined radio frequency bandwidth to carry a specific user's information. FDMA is a mature, reliable method of RF communication, but requires more spectrum than competing technologies to deliver its payload.
Frequency division multiplexing (FDM) — An older technique in which the available transmission bandwidth of a circuit is divided by frequency into narrow bands, each used for a separate voice or data transmission channel, so that many conversations can be carried on one circuit.
Frequency domain — A way of representing a signal where the horizontal deflection is the frequency variable and the vertical deflection is the signal's amplitude at that frequency.
Frequency masking — A condition where two tones with relatively close frequencies are played at the same time and the louder tone masks the quieter tone.
Frequency modulation (FM) — A modulation technique in which the carrier frequency is shifted by an amount proportional to the value of the modulating signal. The amplitude of the carrier signal remains constant. The information signal causes the carrier signal to increase or decrease its frequency based on the waveform of the information signal.
Front office space — The primary interface to customers and sales channels.
Front porch — The access point to a secure network environment; also known as a firewall.
Front-end computer — A computer that offloads input and output activities from the central computer so it can operate primarily in a processing mode; sometimes called a front-end processor.
Front-end processor (FEP) — (1) A communications computer associated with a host computer that can perform line control, message handling, code conversion, error control, and application functions. (2) A teleprocessing concentrator and router, as opposed to a back-end processor or a database machine.
FRU — Resource utilization functional class.
FSIP — Fast serial interface processor.
FSK — Frequency shift keying.
FSP — Development, functional specification.
FTA — Fault tree analysis; an IA analysis technique by which possibilities of occurrence of specific adverse events are investigated. All factors, conditions, events, and relationships that could contribute to that event are analyzed. FTA can be performed on individual components (hardware, software, and communications equipment) and integrated at the system level. See IEC 61025 (1990).
FTP — (1) File Transfer Protocol. (2) Trusted path/channels functional class.
FTP (File Transfer Protocol) server — Maintains a collection of files that can be downloaded.
Full-duplex (FDX) — An asynchronous communications protocol that allows the communications channel to transmit and receive signals simultaneously.
Full operational capability (FOC) — The time at which a new system has been installed at all planned locations and has been fully integrated into the operational structure.
Full-wave rectifier — Diodes designed to be placed in an alternating current circuit and to convert alternating current into direct current.
Fully qualified domain name (FQDN) — A complete Internet address, including the complete host and domain name.
FUN — Tests, functional tests.
Function — In computer programming, a processing activity that performs a single identifiable task.
Functional analysis — Translating requirements into operational and systems functions and identifying the major elements of the system and their configurations and initial functional design requirements.
Functional domain — An identifiable DoD functional mission area. For purposes of the DoD policy memorandum, the functional domains are: command and control, space, logistics, transportation, health affairs, personnel, financial services, public works, research and development, and Intelligence, Surveillance, and Reconnaissance (ISR).
Functional requirements — Architectural atoms; the elementary building blocks of architectural concepts; made up of activities/functions, attributes associated with activities/processes, and processes/methods sequencing activities.
Functional safety — The ability of a safety-related system to carry out the actions necessary to achieve or maintain a safe state for the equipment under control.
Functional specification — The main product of systems analysis, which presents a detailed logical description of the new system. It contains sets of input, processing, storage, and output requirements specifying what the new system can do.
Functional testing — The segment of security testing in which the advertised security mechanisms of the system are tested, under operational conditions, for correct operation.
Functionality — Degree of acceptable performance of an act.
GAO — General Accounting Office.
Garbage collection — A language mechanism that automatically deallocates memory for objects that are not accessible or referenced.
Gateway — A product that enables two dissimilar networks to communicate or interface with each other. In the IP community, an older term referring to a routing device. Today, the term "router" is used to describe nodes that perform this function, and "gateway" refers to a special-purpose device that performs an application layer conversion of information from one protocol stack to another. Compare with router.
GEN — Security audit generation.
General support system — An interconnected information resource under the same direct management control that shares common functionality. It normally includes hardware, software, information, data, applications, communications, facilities, and people and provides support for a variety of users and applications. Individual applications support different mission-related functions. Users may be from the same or different organizations.
General-purpose computer — A computer that can be programmed to perform a wide variety of processing tasks.
Genetic algorithm — An artificial intelligence system that mimics the evolutionary, survival-of-the-fittest process to generate increasingly better solutions to a problem.
Geographic Information System (GIS) — A decision support system designed specifically to work with spatial information.
GIF — Graphics Interchange Format.
Gigabyte (G byte) — The equivalent of one billion bytes.
Gigahertz (GHz) — The number of billions of CPU cycles per second.
GIGO — Garbage in, garbage out.
GII — Global information infrastructure.
GLBA — The Gramm-Leach-Bliley Act.
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® Global Digital Divide — The term used specifically to describe differences in IT access and capabilities between different countries or regions of the world. Global economy — One in which customers, businesses, suppliers, distributors, and manufacturers operate without regard to physical and geographical boundaries. Global Information Grid — The globally interconnected, end-to-end set of information capabilities, associated processes and personnel for collecting, processing, storing, disseminating, and managing information on demand to warfighters, policy makers, and support personnel. The GiG includes all owned and leased communications and computing systems, services, software (including applications), data, security services, and other associated services necessary to achieve Information Superiority. Global Information Grid Architecture — The architecture, composed of interrelated operational, systems, and technical views, that defines the characteristics of and relationships among current and planned Global Information Grid assets in support of National Security missions. Global positioning system (GPS) — A collection of 24 earth-orbiting satellites that continuously transmit radio signals to determine an object or target’s current longitude, latitude, speed, and direction of movement. Global reach — The ability to extend a company’s reach to customers anywhere through an Internet connection and at a lower cost. Glove — An input device that captures and records the shape, movement, and strength of the users’ hands and fingers. GNS — Get Nearest Server (Novell). GOSIP — Government OSI Profile (U.S.). Governing security requisites — Those security requirements that must be addressed in all systems. These requirements are set by policy, directive, or common practice; e.g., by Executive Order, Office of Management and Budget (OMB), Office of the Secretary of Defense, a Military Service, or DoD agency. Governing security requisites are typically high-level requirements. While implementations will vary from case to case, these requisites are fundamental and must be addressed. Government OSI Profile (GOSIP) — A U.S. Government procurement specification for OSI protocols. Government-to-business (G2B) — The E-commerce activities performed between a government and its business partners for purposes such as purchasing materials or soliciting and accepting bids for work. 876
Glossary Government-to-consumer (G2C) — The E-commerce activities performed between a government and its citizens or consumers, including paying taxes and providing information and services. Government-to-government (G2G) — The E-commerce activities limited to a single nation’s government focusing on vertical integration (local, city, state, and federal) and horizontal integration (within the various branches and agencies). GPKI — Global public key infrastructure. Graceful degradation — See degraded-mode operation. Grand design program strategies — Characterized by acquisition, development, and deployment of the total functional capability in a single increment. Granularity — The level of detail contained in a unit of data. The more there is, the lower the level of granularity; the less detail, the higher the level of granularity. Graphical user interface (GUI) — An interface in which the user can manipulate icons, windows, pop-down menus, or other related constructs. A graphical user interface uses graphics such as a window, box, and menu to allow the user to communicate with the system. Allows users to move in and out of programs and manipulate their commands using a pointing device (usually a mouse). Synonymous with user interface. Graphics output — Computer-generated output in the form of pictures, charts, and line drawings. Graphics software — Helps the user create and edit photos and art. Graphics terminal — An output device that displays pictures, charts, and line drawings, typically a high-resolution CRT. GRE — Generic Routing Encapsulation. Grid computing — Harnesses computers together by way of the Internet or a virtual network to share CPU power, databases, and storage. Group document databases — A powerful storage facility for organizing and managing all documents relayed to specific teams. Group health plan — Under HIPAA, an employee welfare benefit plan that provides for medical care and that either has 50 or more participants or is administered by another business entity. Also see Part II, 45 CFR 160.103. Groupware — Software designed to function over a network to allow several people to work together on documents and files. 877
GSM — Originally stood for Groupe Speciale Mobile, but is now known as Global System for Mobile Communications. It is the standard for cellular phone service in Europe, Japan, and Australia, and will soon be the standard for 30 to 50 percent of the cellular networks in the United States.
Guaranteed service — A service model that provides highly reliable performance with little or no variance in the measured performance criteria.
Guard — A component that mediates the flow of information or control between different systems or networks.
GUI (graphical user interface) screen design — The ability to model the information system screens for an entire system.
Guidelines — Documented suggestions for regular and consistent implementation of accepted practices. They usually carry less enforcement power.
GZL — Get Zone List (AppleTalk).
Hacker — A person who attempts to break into computers that he or she is not authorized to use.
Hacking — A computer crime in which a person breaks into an information system simply for the challenge of doing so.
Hacktivist — A politically motivated hacker who uses the Internet to send a political message of some kind.
HAG — High assurance guard.
Half-duplex — (1) Capability for data transmission in only one direction at a time between a sending station and a receiving station. (2) A circuit designed for data transmission in both directions but not at the same time.
Halon — An abbreviation for halogenated hydrocarbon coined by the U.S. Army Corps of Engineers. Halon nomenclature follows this rule: if a hydrocarbon compound contains the elements C, F, Cl, Br, and I in the amounts a, b, c, d, and e, it is designated as Halon abcde (terminal zeros are dropped). Thus, Halon 1211 is chlorobromodifluoromethane, etc. (see the sketch following these entries).
Handoff (or switching) — A cellular call is switched from one cell tower to another as the user moves from one area to the next. The switch is usually unnoticed by the user.
Handover interface — A physical and logical interface across which the interception measures are requested from the NWO/AP/service provider, and the results of interception are delivered from a NWO/AP/service provider (SvP) to an LEMF.
Handprint character recognition (HCR) — One of several pattern recognition technologies used by digital imaging systems to interpret handprinted characters.
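The Halon entry's naming rule is easy to check mechanically. The helper below is only an illustration of that rule; the function name and the second example compound (Halon 1301, bromotrifluoromethane) are not taken from the glossary.

```python
def halon_number(c: int, f: int, cl: int, br: int, i: int = 0) -> str:
    """Build a Halon designation from the counts of C, F, Cl, Br, and I,
    dropping terminal zeros as the naming rule requires."""
    return "Halon " + f"{c}{f}{cl}{br}{i}".rstrip("0")

# Halon 1211 (chlorobromodifluoromethane): 1 carbon, 2 fluorine, 1 chlorine, 1 bromine.
print(halon_number(1, 2, 1, 1))   # Halon 1211
# Halon 1301 (bromotrifluoromethane): 1 carbon, 3 fluorine, 0 chlorine, 1 bromine.
print(halon_number(1, 3, 0, 1))   # Halon 1301
```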
Glossary Handshake — Sequence of messages exchanged between two or more network devices to ensure transmission synchronization. Handshaking procedure — Dialogue between a user and a computer, two computers, or two programs to identify a user and authenticate his or her identity. This is done through a sequence of questions and answers that are based on information either previously stored in the computer or supplied to the computer by the initiator of the dialogue. Handspring — A type of PDA that runs on the Palm Operating System (Palm OS). Hard disk — A fixed or removable disk mass storage system permitting rapid direct access to data, programs, or information. Hard handoff — Sometimes a cell phone user being switched from one site to the next will need to be disconnected and reconnected to make the switch possible. Also called a “break and make” handoff, it is usually unnoticed by the user. Hardware — The physical components of a computer network. Hardware key logger — A hardware device that captures keystrokes on their way from the keyboard to the motherboard. Hardware reliability — The ability of an item to correctly perform a required function under certain conditions in a specified operational environment for a stated period of time. Hardware safety integrity — The overall failure rate for continuousmode operations and the probability to operate on demand for demandmode operations relative to random hardware failures in a dangerous mode of failure. Hash — Producing hash values for accessing data or for security. A hash value (or simply hash), also called a message digest, is a number generated from a string of text. The hash is substantially smaller than the text itself, and is generated by a formula in such a way that it is extremely unlikely that some other text will produce the same hash value. Hashing is also a common method of accessing data records. To create an index, called a hash table, for these records, you would apply a formula to each name to produce a unique numeric value. Hash function/hashing — A hash function is a mathematical process based on an algorithm that creates a digital representation or compressed form of the message. It is often referred to as the message digest in the form of a hash value or hash result of a standard length that is usually much smaller than the message, but nevertheless substantially unique to it. 879
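The Hash and Hash function/hashing entries cover two related uses: a fixed-length message digest that is substantially unique to the input, and hash-based lookup of records through a hash table. The sketch below illustrates both with Python's standard hashlib and a plain dictionary; the choice of SHA-256 and the sample records are illustrative only.

```python
import hashlib

# Message digest: a fixed-length value computed from text of any length.
message = b"Pay one hundred dollars to Alice"
digest = hashlib.sha256(message).hexdigest()
print(len(digest), digest[:16])        # 64 hex characters; changing one byte changes the digest

# Hash-based lookup: a key is hashed to locate its record quickly.
records = {"A123": "Tipton", "B456": "Henry"}
bucket = hash("A123") % 8              # toy bucket index, as a hash table would compute
print(bucket, records["A123"])
```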
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® Hash total — A total of the values on one or more fields, used for the purpose of auditability and control. Hazard — A source of potential harm or a situation with potential to harm. Note that the consequences of a hazard can be physical or cyber. Hazard likelihood — The qualitative or quantitative likelihood that a potential hazard will occur. Most international standards define six levels of hazard likelihood (lowest to highest): incredible, improbable, remote, occasional, probable, and frequent. Hazard severity — The severity of the worst-case consequences should a potential hazard occur. Most international standards define four levels of hazard severity (lowest to highest): insignificant, marginal, critical, and catastrophic. HAZOP — Hazard and operability study; a method of determining hazards in a proposed or existing system, their possible causes and consequences, and recommending solutions to minimize the likelihood of occurrence. Design and operational aspects of the system are analyzed by an interdisciplinary team. HCFA — See Health Care Financing Administration. Also see Part II, 45 CFR 160.103. HCFA Common Procedural Coding System (HCPCS) — A medical code set that identifies healthcare procedures, equipment, and supplies for claim submission purposes. It has been selected for use in the HIPAA transactions. HCPCS Level I contains numeric CPT codes that are maintained by the AMA. HCPCS Level II contains alphanumeric codes used to identify various items and services that are not included in the CPT medical code set. These are maintained by HCFA, the BCBSA, and the HIAA. HCPCS Level III contains alphanumeric codes that are assigned by Medicaid state agencies to identify additional items and services not included in levels I or II. These are usually called “local” codes, and must have “W,” “X,” “Y,” or “Z” in the first position. HCPCS Procedure Modifier Codes can be used with all three levels, with the WA-ZY range used for locally assigned procedure modifiers. HCFA-1450 — HCFA’s name for the institutional uniform claim form, or UB-92. HCFA-1500 — HCFA’s name for the professional uniform claim form. Also known as UCF-1500. HCPCS — See HCFA Common Procedural Coding System. Also see Part II, 45 CFR 162.103. HDLC (High-Level Data-Link Control) — Bit-oriented synchronous datalink layer protocol developed by the ISO. Derived from SDLC, HDLC spec880
Glossary ifies a data encapsulation method on synchronous serial links using frame characters and checksums. HDSL — High-data-rate Digital Subscriber lLne. One of four DSL technologies. HDSL delivers 1.544 Mbps of bandwidth each way over two copper twisted pairs. Because HDSL provides T1 speed, telephone companies have been using HDSL to provision local access to T1 services whenever possible. The operating range of HDSL is limited to 12,000 feet (3658.5 meters), so signal repeaters are installed to extend the service. HDSL requires two twisted pairs, so it is deployed primarily for PBX network connections, digital loop carrier systems, interexchange POPs, Internet servers, and private data networks. Compare with ADSL, SDSL, and VDSL. Header — The beginning of a message sent over the Internet; typically contains addressing information to route the message or packet to its destination. Heading tag — HTML tag that puts certain information, such as the title, at the top of the page. Headset — It combines input and output devices that (1) capture and record the movements of the user’s head, and (2) contains a screen that covers the user’s field of vision and displays various views of an environment based on the head’s movements. Health and Human Services (HHS) — The federal government department that has overall responsibility for implementing HIPAA. Health care — See Part II, 45 CFR 160.103. Health Care Clearinghouse — Under HIPAA, this is an entity that processes or facilitates the processing of information received from another entity in a nonstandard format or containing nonstandard data content into standard data elements or a standard transaction, or that receives a standard transaction from another entity and processes or facilitates the processing of that information into nonstandard format or nonstandard data content for a receiving entity. Also see Part II, 45 CFR 160.103. Health Care Code Maintenance Committee — An organization administered by the BCBSA that is responsible for maintaining certain coding schemes used in the X12 transactions and elsewhere. These include the Claim Adjustment Reason Codes, the Claim Status Category Codes, and the Claim Status Codes. Health Care Component — See Part II, 45 CFR 164.504. Health Care Financing Administration (HCFA) — The HHS agency responsible for Medicare and parts of Medicaid. HCFA has historically maintained the UB-92 institutional EMC format specifications, the professional EMC NSF specifications, and specifications for various certifications and 881
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® authorizations used by the Medicare and Medicaid programs. HCFA also maintains the HCPCS medical code set and the Medicare Remittance Advice Remark Codes administrative code set. Health care operations — See Part II, 45 CFR 164.501. Health care provider — See Part II, 45 CFR 160.103. Health Care Provider Taxonomy Committee — An organization administered by the NUCC that is responsible for maintaining the Provider Taxonomy coding scheme used in the X12 transactions. The detailed code maintenance is done in coordination with X12N/TG2/WG15. Health Industry Business Communications Council (HIBCC) — A council of healthcare industry associations that has developed a number of technical standards used within the healthcare industry. Health Informatics Standards Board (HISB) — An ANSI-accredited standards group that has developed an inventory of candidate standards for consideration as possible HIPAA standards. Health information — See Part II, 45 CFR 160.103. Health information clearinghouses — Any public or private entities that process or facilitate processing nonstandard health information into standard data elements. For example, third party administrators; pharmacy benefits managers; billing services; information management and technology vendors; and others. (HIPAA). Health Insurance Association of America (HIAA) — An industry association that represents the interests of commercial healthcare insurers. The HIAA participates in the maintenance of some code sets, including the HCPCS Level II codes. Health insurance issuer — See Part II, 45 CFR 160.103. Health Insurance Portability and Accountability Act of 1996 (HIPAA) — A federal law that allows persons to qualify immediately for comparable health insurance coverage when they change their employment relationships. Title II, Subtitle F, of HIPAA gives HHS the authority to mandate the use of standards for the electronic exchange of healthcare data; to specify what medical and administrative code sets should be used within those standards; to require the use of national identification systems for healthcare patients, providers, payers (or plans), and employers (or sponsors); and to specify the types of measures required to protect the security and privacy of personally identifiable healthcare information. Also known as the Kennedy-Kassebaum Bill, the Kassebaum-Kennedy Bill, K2, or Public Law 104-191. 882
Glossary Health Level Seven (HL7) — An ANSI-accredited group that defines standards for the cross-platform exchange of information within a healthcare organization. HL7 is responsible for specifying the Level Seven OSI standards for the health industry. The X12 275 transaction will probably incorporate the HL7 CRU message to transmit claim attachments as part of a future HIPAA claim attachments standard. The HL7 Attachment SIG is responsible for the HL7 portion of this standard. Health Maintenance Organization (HMO) — See Part II, 45 CFR 160.103. Health Oversight Agency — See Part II, 45 CFR 164.501. Health plan — See Part II, 45 CFR 160.103. Health plan ID — See National Payer ID. Health plans — Individual or group plans (or programs) that provide health benefits directly, through insurance, or otherwise. For example, Medicaid; State Children’s Health Insurance Program (SCHIP); state employee benefit programs; Temporary Assistance for Needy Families (TANF); and others. (HIPAA) Healthcare Financial Management Association (HFMA) — An organization for the improvement of the financial management of healthcare-related organizations. The HFMA sponsors some HIPAA educational seminars. Healthcare Information Management Systems Society (HIMSS) — A professional organization for healthcare information and management systems professionals. Healthcare providers — Providers (or suppliers) of medical or other health services or any other person furnishing health care services or supplies, and who also conduct certain health-related administrative or financial transactions electronically. For example, local health departments; community and migrant health centers; rural health clinics; schoolbased health centers; homeless clinics and shelters; public hospitals; maternal and child health programs (Title V); family planning programs (Title X); HIV/AIDS programs; and others. (HIPAA) HEDIC — The Healthcare EDI Coalition. HEDIS — Health Employer Data and Information Set. Help desk — Responds to knowledge workers’ questions. HERF — High-energy radio frequency. Hertz (Hz) — The basic measurement of bandwidth frequency in cycles per second (1 Hertz = 1 cycle per second). Heuristics — The mode of analysis in which the next step is determined by the results of the current step of analysis. Used for decision support processing. 883
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® Hexadecimal — A number system with a base of 16. HFMA — See Healthcare Financial Management Association. HHA — Home health agency. HHIC — The Hawaii Health Information Corporation. HHS — See Health and Human Services. Also see Part II, 45 CFR 160.103. HIAA — See Health Insurance Association of America. HIBCC — See Health Industry Business Communications Council. Hidden partition — A method of hiding information on a hard drive where the partition is considered unformatted by the host operating system and no drive letter is assigned. HIDS — Host-based intrusion detection system. Hierarchical database — In a hierarchical database, data is organized like a family tree or organization chart with branches of parent records and child records. High-capacity floppy disk — Storage device that holds between 100MB and 250MB of information. Superdisks and Zip disks are examples. High-Level Data-Link Control (HDLC) — A protocol used at the data-link layer that provides point-to-point communications over a physical transmission medium by creating and recognizing frame boundaries. High-Level Language — The class of procedure-oriented language. HIMSS — See Healthcare Information Management Systems Society. HIPAA Act of 1996 — The Administrative Simplification provisions of the Health Insurance Portability and Accountability Act of 1996 (HIPAA, Title II) require the Department of Health and Human Services to establish national standards for electronic healthcare transactions and national identifiers for providers, health insurers, and employers. It also addresses the security and privacy of health data. Adopting these standards will improve the efficiency and effectiveness of the nation’s healthcare system by encouraging the widespread use of electronic data interchange in healthcare. HIPAA Data Dictionary, or HIPAA DD — A data dictionary that defines and cross-references the contents of all X12 transactions included in the HIPAA mandate. It is maintained by X12N/TG3. HISB — See Health Informatics Standards Board. HL7 — See Health Level Seven. HLD — Development, high-level design. 884
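The Hexadecimal entry defines a base-16 number system; the two-line check below, using Python's built-in conversions, shows the correspondence with decimal.

```python
print(int("FF", 16))   # 255: hexadecimal FF equals decimal 255
print(hex(255))        # 0xff: decimal 255 written back in hexadecimal
```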
Glossary HMO — See Health Maintenance Organization. Holographic device — A device that creates, captures, and displays images in true three-dimensional form. Home page — The initial screen of information displayed to the user when initiating the client or browser software or when connecting to a remote computer. The home page resides at the top of the directory tree. Home PNA (Home Phoneline Networking Alliance) — Allows one to network home computer using telephone wiring. Homeland Security Act of 2002 — The Act restructures and strengthens the executive branch of the federal government to better meet the threat to the United States posed by terrorism. In establishing a new department of Homeland Security, the Act for the first time creates a federal department whose primary mission will be to help prevent, protect against, and respond to acts of terrorism on the U.S. soil. Honey-pot — A specifically configured server, designed to attract intruders so their actions do not affect production systems; also known as a decoy server. Hop — A term used in routing. A hop is one data link. A path from source to destination in a network is a series of hops. Horizontal market software — Application software that is general enough to be suitable for use in a variety of industries. Host — A remote computer that provides a variety of services, typically to multiple users concurrently. Host address — The IP address of the host computer. Host computer — A computer that, in addition to providing a local service, acts as a central processor for a communications network. Hostname — The name of the user computer on the network. Hot site — A fully operational offsite data processing facility equipped with both hardware and system software to be used in the event of disaster. Hot standby — Secondary equipment in place as a back up in case of primary equipment failure. HPAG — The HIPAA Policy Advisory Group, a BCBSA subgroup. HPSA — Health Professional Shortage Area. HSRP — Hot Standby Routing Protocol. HSSI — High-speed serial interface. 885
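The Host address and Hostname entries above can be illustrated with Python's standard socket module; the hostname used below is an arbitrary example, and the call assumes working name resolution (DNS) is available.

```python
import socket

# Resolve a hostname (the name of a computer on the network) to its
# host address (the IP address of that computer).
hostname = "www.example.com"                 # illustrative name only
host_address = socket.gethostbyname(hostname)
print(hostname, "->", host_address)
```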
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® HTML — See HyperText Markup Language. HTML document — A file made from the HTML language. HTML tag — Specifies the formatting and presentation of information in an HTML document. HTTP — See HyperText Transport Protocol. Hub — A device connected to several other devices. In ARCnet, a hub is used to connect several computers together. In a message-handling service, a hub is used for transfer of messages across the network. An Ethernet hub is basically a “collapsed network-in-a-box” with a number of ports for the connected devices. Humanware — Computer programs that interface or communicate with users by means of voice-integrated technology, interpret user-specified command, and execute or translate commands into machine-executable code. HVAC — Heating, ventilation, air conditioning systems. Hybrid entity — A covered entity whose covered functions are not its primary functions. Also see Part II, 45 CFR 164.504. Hypermedia — An extension to hypertext in which frames contain graphics, illustrations, images, audio, animation, text, and other forms of information or knowledge. Hypertext — Text that is held in frames and authors develop or define the linkage between frames. Hypertext Markup Language (HTML) — A language created by programmers at the CERN in Switzerland to create Web pages. HyperText Transfer Protocol (HTTP) — A communication protocol used to connect to serves on the World Wide Web. Its primary function is to establish a connection with a Web server and transmit HTML pages to the client browser. The protocol used to transport hypertext files across the Internet. I&A — Identification and authentication. IA — (1) Information assurance. (2) Intra-area (OSPF). IA integrity — The likelihood of a system, entity, or function achieving its required security, safety, and reliability features under all stated conditions within a stated measure of use. IA integrity case — A systematic means of gathering, organizing, analyzing, and reporting the data needed by internal, contractual, regulatory, or Certification Authorities to confirm that a system has met the specified 886
Glossary IA goals and IA integrity level and is fit for use in the intended operational environment. An IA integrity case includes assumptions, claims, and evidence. IA integrity level — The level of IA integrity that must be achieved or demonstrated to maintain the IA risk exposure at or below its acceptable level. IAB — Internet Architecture Board. Board of internetwork researchers who discuss issues pertinent to Internet architecture. Responsible for appointing a variety of Internet-related groups such as the IANA, IESG, and IRSG. The IAB is appointed by the trustees of the ISOC. IA-critical — A term applied to any condition, event, operation, process, or item whose proper recognition, control, performance, or tolerance is essential to the safe, reliable, and secure operation and support of a system. IAIABC — See International Association of Industrial Accident Boards and Commissions. IAP — Information Awareness Program. IA-related — A system or entity that performs or controls functions which are activated to prevent or minimize the effect of a failure of an IA-critical system or entity. IBGP — Interior Border Gateway Protocol. ICD, ICD-n-CM, and ICD-n-PCS — International Classification of Diseases, with “n” = “9” for Revision 9 or “10” for Revision 10, with “CM” = “Clinical Modification,” and with “PCS” = “Procedure Coding System.” ICF — Intermediate care facility. ICMP — Internet Control Message Protocol. Network layer Internet protocol that reports errors and provides other information relevant to IP packet processing. Documented in RFC 792. Icon — A pictorial symbol used to represent data, information, or a program on a GUI screen. ICQ — Pronounced “I Seek You.” This is a chat service available via the Internet that enables users to communicate online. This service (you load the application on your computer) allows chat via text, voice, bulletin boards, file transfers, and e-mail. ICSA — Internet Computer Security Association. ICZ — Intensive Control Zone. 887
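The HyperText Transfer Protocol (HTTP) entry above notes that the protocol's primary function is to connect to a Web server and transmit HTML pages to the client. A minimal sketch using Python's standard http.client module is shown below; the host name is an arbitrary example and a working network connection is assumed.

```python
import http.client

# Minimal HTTP exchange: connect to a Web server, request a page,
# and read the HTML the server returns.
conn = http.client.HTTPConnection("www.example.com", 80, timeout=10)
conn.request("GET", "/")                 # request line: method and path
response = conn.getresponse()            # status line, headers, then body
print(response.status, response.reason)  # e.g. 200 OK
html = response.read()                   # the HTML document itself
conn.close()
```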
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® Ida (infrared date association) port — A port for wireless devices that works in essentially the same way as the remote control on TV. Identification — (1) The process, generally employing unique machinereadable names, that enables recognition of users or resources as identical to those previously described to the computer system. (2) The assignment of a name by which an entity can be referenced. The entity may be high level (such as a user) or low level (such as a process or communication channel). Identification media — A building or visitor pass. Identifier — A set of one or more attributes that uniquely distinguishes each instance of an object. Identity — Information that is unique within a security domain and which is recognized as denoting a particular entity within that domain. Identity-based security policy — A security policy based on the identities or attributes of users, a group of users, or entities acting on behalf of the users and the resources or targets being accessed. IDN — Integrated Delivery Network. IDS — Intrusion detection system. IEC 61025 — International Electrotechnical Commission Publication 61025 Fault tree analysis (FTA). IEEE — Institute of Electrical and Electronics Engineers. IETF — Internet Engineering Task Force; a public consortium that develops standards for the Internet. IETF — Internet Engineering Task Force. IFC — User data protection information flow control policy. IFF — User data protection information flow control functions. IG — See Implementation Guide. IGP — Interior Gateway Protocol. IGRP — Interior Gateway Routing Protocol. IGS — Delivery and operation, installation, generation, and start-up. IHC — Internet Healthcare Coalition. IIHI — See Individually Identifiable Health Information. IKE — Internet Key Exchange protocol. IMP — Development, implementation representation. 888
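A toy sketch of the Identity-based security policy entry above, in which the access decision depends only on the identity of the requester; the access table, user names, and resource names are hypothetical, invented purely for the example.

```python
# Identity-based access decision: access to each resource is granted
# purely on the identity of the requesting user.
access_table = {
    "payroll.db": {"alice", "hr_admin"},
    "audit.log":  {"auditor"},
}

def is_authorized(identity: str, resource: str) -> bool:
    """Return True only if this identity is listed for the resource."""
    return identity in access_table.get(resource, set())

print(is_authorized("alice", "payroll.db"))   # True
print(is_authorized("bob", "payroll.db"))     # False
```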
Glossary Impact — The amount of loss or damage that can be expected, or may be expected from a successful attack of an asset. Impact printer — A hard-copy device on which a print mechanism strikes against a ribbon to create imprints on paper. Some impact printers operate one character at a time; others strike an entire line at a time. Impersonation — An attempt to gain access to a system by posing as an authorized user. Implant chip — A technology-enabled microchip implanted into the human body. Implementation — The specific activities within the systems development life cycle through which the software portion of the system is developed, coded, debugged, tested, and integrated with existing or new software. Implementation guide (IG) — A document that explains the proper use of a standard for a specific business purpose. The X12N HIPAA IGs are the primary reference documents used by those implementing the associated transactions, and are incorporated into the HIPAA regulations by reference. Implementation phase — Distributes the system to the knowledge workers who begin using the system in their everyday jobs. Implementation specification — Under HIPAA, this is the specific instruction for implementing a standard. Also see Part II, 45 CFR 160.103. See also implementation guide. Importance — A subjective assessment of the significance of a system’s capability and the consequences of the loss of that capability. In band — Made up of tones that pass within the voice frequency band and are carried along the same circuit as the talk path established by the signals. Also known as in-band signaling. Inadvertent disclosure — Accidental exposure of information to a person not authorized access. Inadvertent loss — The unplanned loss or compromise of data or system. Incident — An unusual occurrence or breach in the security of a computer system. An event that has actual or potentially adverse effects on an information system. A computer security incident can result from a computer virus, other malicious code, intruder, terrorist, unauthorized insider act, malfunction, etc. Incomplete parameter checking — A system fault that exists when all parameters have not been fully checked for correctness and consisten889
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® cy by the operating system, thus leaving the system vulnerable to penetration. Incremental program strategies — Characterized by acquisition, development, and deployment of functionality through a number of clearly defined system “increments” that stand on their own. IND — Tests, independent testing. Independent Basic Service Set Network (IBSS Network) — I n d e p e n dent Basic Service Set Network is an IEEE 802.11-based wireless network that has no backbone infrastructure and consists of at least two wireless stations. This type of network is often referred to as an ad hoc network because it can be constructed quickly without much planning. Indexed sequential filing — A file organization method in which records are maintained in logical sequence and indices (or tables) are used to reference their storage addresses. The method allows direct and serial access to records. Indirect material — Material that is necessary for running a modern corporation but does not relate to the company’s primary business activities. Commonly called MRO materials. Induction — A process of logically arriving at a conclusion about a member of a class from examining a few other members of the same class. This method of reasoning may not always produce true statements. As an example, suppose it is known that George’s car has four tires and that Fred’s car has four tires. Inductive reasoning would allow the conclusion that all cars have four tires. Induction is closely related to learning. Inference engine — A system of computer programs in an expert systems application that uses expert experience as a basis for conclusions. Infobots — Software agents that perform specified tasks for a user or application. Information — Intelligence or knowledge capable of being represented in forms suitable for communication, storage, or processing. Information can be represented, for example, by signs, symbols, pictures, or sounds. Information age — A time when knowledge is power. Information assurance — (1) An engineering discipline that provides a comprehensive and systematic approach to ensuring that individual automated systems and dynamic combinations of automated systems interact and provide their intended functionality, no more and no less, safely, reliably, and securely in the intended operational environments. (2) Infor890
Glossary mation operations that protect and defend information and information systems by ensuring their availability, integrity, authentication, confidentiality, and non-repudiation; including providing for restoration of information systems by incorporating protection, detection, and reaction capabilities (DoD Directive 5-3600.1). Information Assurance Support Environment (IASE) — The IASE is an online Web-based help environment for DoD INFOSEC and IA professionals. Information Assurance Vulnerability Alert (IAVA) — The comprehensive distribution process for notifying CINC’s, services, and agencies (C/S/A) about vulnerability alerts and countermeasures information. The IAVA process requires C/S/A receipt acknowledgment and provides specific time parameters for implementing appropriate countermeasures, depending on the criticality of the vulnerability. Information attributes — The qualities, characteristics, and distinctive features of information. Information category — The term used to bind information and tie it to an information security policy. Information decomposition — Breaking down the information for ease of use and understandability. Information environment — The aggregate of individuals, organizations, and systems that collect, process, or disseminate information, including the information itself. Information float — The amount of time it takes to get information from its source into the hands of the decision makers. Information granularity — The extent of detail within the information. Information hiding — (1) A software development technique in which each module’s interfaces reveal as little as possible about the module’s inner workings and other modules are prevented from using information about the module that is not in the module’s interface specification. (2) A software development technique that consists of isolating a system function, or set of data and operations on those data, within a module and providing precise specifications for the module. Information in identifiable form — Information in an IT system or online collection that (i) directly identifies an individual (e.g., name, address, Social Security number, or other identifying number or code, telephone number, e-mail address, etc.); or (ii) by which an agency intends to identify specific individuals in conjunction with other data elements, i.e., indirect identification. These data elements may include a combination of gender, race, birth date, geographic indicator, and other descriptors. 891
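The Information hiding entry above describes modules whose interfaces reveal as little as possible about their inner workings. The following Python sketch, with illustrative names, keeps the internal representation of an account balance out of the module's interface.

```python
# Information hiding: the interface (deposit/withdraw/balance) reveals
# nothing about how the balance is stored internally; other code is
# expected to use only these public methods.
class Account:
    def __init__(self):
        self.__balance_cents = 0        # internal representation, not part of the interface

    def deposit(self, dollars: float) -> None:
        self.__balance_cents += round(dollars * 100)

    def withdraw(self, dollars: float) -> None:
        cents = round(dollars * 100)
        if cents > self.__balance_cents:
            raise ValueError("insufficient funds")
        self.__balance_cents -= cents

    @property
    def balance(self) -> float:
        return self.__balance_cents / 100

acct = Account()
acct.deposit(10.50)
acct.withdraw(2.25)
print(acct.balance)   # 8.25; callers never see the cents-based internals
```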
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® Information interoperability — The exchange and use of information in any electronic form. Information-literate knowledge workers — Can define what information they need, know how to obtain that information, understand the information once they receive it, and act appropriately to help the organization achieve the greatest advantage. Information model — A conceptual model of the information needed to support a business function or process. Information operations (IO) — Actions taken to affect adversary information and information systems while defending one’s own information and information systems. Information Operations Condition (INFOCON) — The INFOCON is a comprehensive defense posture and response based on the status of information systems, military operations, and intelligence assessments of adversary capabilities and intent. The INFOCON system presents a structured, coordinated approach to defend against a computer network attack. INFOCON measures focus on computer network-based protective measures. Each level reflects a defensive posture based on the risk of impact to military operations through the intentional disruption of friendly information systems. INFOCON levels are NORMAL (normal activity); ALPHA (increased risk of attack); BRAVO (specific risk of attack); CHARLIE (limited attack); and DELTA (general attack). Countermeasures at each level include preventive actions, actions taken during an attack, and damage control/mitigating actions. Information owner — An official having statutory or operational authority for specified information and having responsibility for establishing controls for its generation, collection, processing, dissemination, and disposal. Information partnership — Two or more companies that cooperate by integrating their IT systems, thereby providing customers with the best of what each has to offer. Information requirements — Those items of information regarding the enemy and his environment which need to be collected and processed in order to meet the intelligence requirements of a commander. Information resource management — A concept or practice in which information is recognized as a key asset to be appropriately managed as a vital resource. Information security — Safeguarding information against unauthorized disclosure; or, the result of any system of administrative policies and procedures for identifying, controlling, and protecting from unauthorized 892
Glossary disclosure, information the protection of which is authorized by Executive Order or statute. Information security governance — The management structure, organization, responsibility, and reporting processes surrounding a successful information security program. Information security program — The overall process of preserving confidentiality, integrity, and availability of information. Information security service — A method to provide some specific aspect of security. For example, integrity of transmitted data is a security objective, and a method that would achieve that is considered an information security service. Information services — The offering of a capability for generating, storing, transforming, retrieving, utilizing, or making available information via telecommunications, and includes electronic publishing but does not include the use of such capability for the management, control, or operation of a telecommunications system or the management of a telecommunications service. Information sharing — The requirements for information sharing by an IT system with one or more other IT systems or applications, for information sharing to support multiple internal or external organizations, missions, or public programs. Information superiority — The capability to collect, process, and disseminate an uninterrupted flow of information while exploiting or denying an adversary’s ability to do the same. Forces attain information superiority through the acquisition of systems and families-of-systems that are secure, reliable, interoperable, and able to communicate across a universal information technology (IT) infrastructure, to include National Security Systems (NSS). This IT infrastructure includes the data, information, processes, organizational interactions, skills, and analytical expertise, as well as systems, networks, and information exchange capabilities. Information system — A discrete set of information resources organized for the collection, processing, maintenance, use, sharing, dissemination, or disposition of information. Information system owner (or program manager) — See system owner. Information system security — A system characteristic and a set of mechanisms that span the system both logically and physically. Information system security officer — Individual responsible to the OA ISSO, designated approving authority, or information system owner for ensuring that the appropriate operational security posture is maintained for an information system or a closely related group of systems. 893
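The Information security service entry above uses integrity of transmitted data as its example of a security objective met by a specific method. One common way to provide such a service is a keyed message authentication code; the sketch below uses Python's standard hmac module, and the key and message are illustrative only.

```python
import hashlib
import hmac

# One concrete information security service: protecting the integrity of a
# transmitted message with a keyed MAC.
key = b"shared-secret-key"
message = b"transfer 100 units to account 42"

tag = hmac.new(key, message, hashlib.sha256).hexdigest()   # sent with the message

# The receiver recomputes the tag and compares in constant time.
received_tag = tag
expected = hmac.new(key, message, hashlib.sha256).hexdigest()
print(hmac.compare_digest(received_tag, expected))   # True unless message or tag was altered
```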
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® Information Systems Security (INFOSec) — The protection of information systems against unauthorized access to or modification of information, whether in storage, processing, or transit, and against the denial-ofservice to authorized users or the provision of service to unauthorized users, including those measures necessary to detect, document, and counter such threats. Information systems security program — Synonymous with IT security program. Information technology (IT) — The hardware and software operated by a federal agency or by a contractor of a federal agency or other organization that processes information on behalf of the federal government to accomplish a federal function, regardless of the technology involved, whether computers, telecommunications, or others. It includes automatic data processing equipment as that term is defined in Section 111(a)(2) of the Federal Property and Administrative Services Act of 1949. For the purposes of this circular, automatic data processing and telecommunications activities related to certain critical national security missions, as defined in 44 U.S.C. 3502(2) and 10 U.S.C. 2315, are excluded. Information technology disruptions due to natural or man-made disasters — Failure to exercise due care and diligence in the implementation and operation of the information technology system. Information view — Includes all of the information stored within a system. Information warfare (IW) — Actions taken to achieve information superiority by affecting adversary information, information-based processes, information systems and computer-based networks while defending one’s own information, information-based processes, information systems and computer-based networks. INFOSec — (1) The combination of COMSec and COMPUSec — the protection of information against unauthorized disclosure, transfer, modification, or destruction, whether accidental or intentional. (2) Protection of information systems against unauthorized access to or modification of information, whether in storage, processing, or transit, and against denialof-service to authorized users, including those measures necessary to detect, document, and counter such threats. Infrared — A wireless communications medium that uses light waves to transmit signals or information. Infrastructure — The framework of interdependent networks and systems comprising identifiable industries, institutions, and distribution capabilities that provide a continual flow of goods and services essential to the defense and economic security of the United States, the smooth functioning of government at all levels, or society as a whole. 894
Glossary Infrastructure system — A network of independent, mostly privately owned, automated systems and processes that function collaboratively and synergistically to produce and distribute a continuous flow of essential goods and services. The eight critical infrastructure systems defined by PDD-63 are: telecommunications, banking and finance, power generation and distribution, oil and gas distribution and storage, water processing and supply, transportation, emergency services, and government services. Infrastructure-centric — A security management approach that considers information systems and their computing environment as a single entity. Inheritance — The language mechanism that allows the definition of a class to include the attributes and methods for another more general class. Inheritance is an implementation construct for the specialization relation. The general class is the superclass and the specific class is the subclass in the inheritance relation. Inheritance is a relation between classes that enables the reuse of code and the definition of generalized interface to one or more subclasses. Inhibit — A design feature that provides a physical interruption between an energy source and a function actuator. Two inhibits are independent if no single failure can eliminate them both. Initial operational capability (IOC) — The first time a new system is introduced into operation. Initialization vector — A non-secret binary vector used as the initializing input algorithm for the encryption of a plaintext block sequence to increase security by introducing additional cryptographic variance and to synchronize cryptographic equipment. Initiator — An entity (for example, human user or computer-based entity) that attempts to access other entities. Initiator access control decision information — ADI associated with the initiator. Initiator access control information — Access control information relating to the initiator. Injection — Using this method, a secret message is put in a host file in such a way that when the file is actually read by a given program, the program ignores the data. Injury — Any wrong or damage done to another, either his person, rights, reputation, or property; the invasion of any legally protected interest of another. Inkjet printer — Makes images by forcing ink droplets through nozzles. Inmate — See Part II, 45 CFR 164.501. 895
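A short Python sketch of the Inheritance entry above: the general class (superclass) defines behavior that the more specific class (subclass) reuses and specializes. The class names are illustrative.

```python
# Inheritance: the subclass reuses the superclass's code through the
# generalized interface and supplies its own specialized behavior.
class Sensor:                      # superclass: the more general class
    def __init__(self, name):
        self.name = name

    def describe(self) -> str:
        return f"{self.name}: {self.read()}"

    def read(self):
        raise NotImplementedError

class TemperatureSensor(Sensor):   # subclass: inherits __init__ and describe()
    def read(self):
        return "21.5 C"            # specialized behavior

print(TemperatureSensor("lab-1").describe())   # reuses the inherited describe()
```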
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® Input controls — Techniques and methods for verifying, validating, and editing data to ensure that only correct data enters a system. Input device — A tool used to capture information and commands by the user. Inquiry processing — The process of selecting a record from a file and immediately displaying its contents. Insourcing — It means that IT specialists within the organization will develop the system. Inspection — A manual analysis technique that examines the program requirements, design, or code in a formal and disciplined manner to discover errors. Instance — A set of values representing a specific entity belonging to a particular entity type. A single value is also the instance of a data item. Instance — An occurrence of an entity class that can be uniquely described. Instrumental input — The capture of data and its placement directly into a computer by machines. Insulator — A material that does not conduct electricity but is suitable for surrounding conductors to prevent the loss of current. INT — In Common Criteria, (1) protection profile evaluation, PP introduction. (2) Security target evaluation, ST introduction. (3) Development, TSF internals. Integrated circuit — A miniature microchip incorporating circuitry and semi-conductor components. The circuit elements and components are created as a part of the same manufacturing process. Integrated Data Dictionary (IDD) — A database technology that facilitates functional communication among system components. Integrated Services Digital Network (ISDN) — An emerging technology that is beginning to be offered by the telephone carriers of the world. ISDN combines voice and digital network services in a single medium, making it possible to offer customers digital data services as well as voice connections through a single wire. The standards that define ISDN are specified by ITU-TSS. Integration — Allows separate systems to communicate directly with each other by automatically exporting data files from one system and importing them into another. Integration testing — The orderly progression of testing in which software, hardware, or both are combined and tested until all intermodule communication links have been integrated. 896
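The Input controls entry above covers verifying, validating, and editing data so that only correct data enters a system. The sketch below shows one such control applied to a single field; the field and rules are assumptions made for the example.

```python
from datetime import date

# A simple input control: verify and edit a date-of-birth field before it
# is accepted into the system.
def validate_date_of_birth(raw: str) -> date:
    dob = date.fromisoformat(raw.strip())      # editing: normalize whitespace, then parse
    if dob >= date.today():                    # validation: must be in the past
        raise ValueError("date of birth must be in the past")
    return dob

print(validate_date_of_birth(" 1980-05-17 "))  # accepted
# validate_date_of_birth("2999-01-01")         # would be rejected
```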
Glossary Integrator — The organization that integrates the IS components. Integrity — (1) The accuracy, completeness and validity of information in accordance with business values and expectations. The property that data or information has not been modified or altered in an unauthorized manner. (2) A security service that allows verification that an unauthorized modification (including changes, insertions, deletions and duplications) has not occurred either maliciously or accidentally. See also data integrity. Integrity checking — The testing of programs to verify the soundness of a software product at each phase of development. Integrity level — (1) A range of values of an item necessary to maintain system risks within acceptable limits. For items that perform IA-related mitigating functions, the property is the reliability with which the item must perform the mitigating function. For IA-critical items whose failure can lead to threat instantiation, the property is the limit on the frequency of that failure. (2) A range of values of a property of an item necessary to maintain risk exposure at or below its acceptability threshold. Intellectual property — Intangible creative work that is embodied in physical form. Intellectual property identification — A method of asset protection that identifies or defines a copyright, patent, trade secret, etc. or validates ownership and ensures that intellectual property rights are protected. Intellectual property management and protection (IPMP) — A re fi n e ment of digital rights management (DRM) that refers specifically to MPEGs. Intelligence — The first step in the decision-making process where a problem, need, or opportunity is found or recognized. Also called the diagnostic phase of decision making. Intelligence method — The method used to provide support to an intelligence source or operation, and that, if disclosed, is vulnerable to counteraction that could nullify or significantly reduce its effectiveness in supporting the foreign intelligence or foreign counterintelligence activities of the United States, or that would, if disclosed, reasonably lead to the disclosure of an intelligence source or operation. Intelligence source — A person, organization, or technical means that provides foreign intelligence or foreign counterintelligence and that, if its identity or capability is disclosed, is vulnerable to counteraction that could nullify or significantly reduce its effectiveness in providing foreign intelligence or foreign counterintelligence to the United States. An intelligence source also means a person or organization that provides foreign intelligence or foreign counterintelligence to the United States only on the condition that its identity remains undisclosed. 897
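One simple way to support the verification described in the Integrity entry above is to record a cryptographic digest of the data and recompute it later; the data values below are illustrative.

```python
import hashlib

# Detecting modification with a cryptographic hash: if even one byte of the
# data changes, the recomputed digest no longer matches the stored one.
original = b"quarterly report v1"
stored_digest = hashlib.sha256(original).hexdigest()   # recorded at a known-good time

later = b"quarterly report v1"                          # data as found later
print(hashlib.sha256(later).hexdigest() == stored_digest)     # True: unchanged

tampered = b"quarterly report v2"
print(hashlib.sha256(tampered).hexdigest() == stored_digest)  # False: integrity lost
```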
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® Intelligent agent — Software that assists the user in performing repetitive computer-related tasks. Intelligent cabling — Research is ongoing in this area. The goal is to eliminate the large physical routers, hubs, switches, firewalls, etc. and move these functions (i.e., embed the intelligence) into the cabling itself. Currently this is an electrochemical/neuronic research process. Intelligent transportation systems — A subset or specific application of the NII that provides real-time information and services to the transportation sector. Specific examples include travel and transportation management systems, travel demand management systems, public transportation operation systems, electronic payment systems, commercial vehicle operation systems, emergency management systems, and advanced vehicle control and safety systems. Interactive — A mode of processing that combines some aspects of online processing and some aspects of batch processing. In interactive processing, the user can directly interact with data over which he or she has exclusive control. In addition, the user can cause sequential activity to initiate background activity to be run against the data. Interactive chat — Lets the user engage in real-time exchange of information with one or more individuals over the Internet. Interactive video — A system in which video segments are integrated via a menu-based processing application. Interagency coordination — Within the context of Department of Defense involvement, the coordination that occurs between elements of the Department of Defense and engaged U.S. Government agencies, non-government organizations, private voluntary organizations, and regional and international organizations for the purpose of accomplishing an objective. Interblock gap (IBG) — A blank space appearing between records or groups of records on magnetic storage media. Intercept-related information — Collection of information or data associated with telecommunications services involving the target identity, specifically communication-associated information or data (including unsuccessful communication attempts), service-associated information or data (e.g., service-profile management by subscriber), and location information. Interception — Action (based on the law) performed by an NWO/AP/SvP, of making available certain information and providing that information to an LEMF. Usually, this term is not used to describe the action of observing communications directly by an LEA. 898
Glossary Interception interface — Physical and logical locations within the NWO/AP/SvP telecommunications facilities where access to the CC and IRI is provided. The interception interface is not necessarily a single fixed point. Interception measure — A technical measure that facilitates the interception of telecommunications traffic pursuant to the relevant national laws and regulations. Interception subject — A person or persons, specified in a lawful authorization, whose telecommunications are to be intercepted. Interconnection security agreement — An agreement established between the organizations that own and operate connected information technology systems to document the technical requirements of the interconnection. The ISA also supports a memorandum of understanding or agreement (MOU/A) between the organizations. Interdiction — Impeding or denying someone the use of system resources. Interface — A shared boundary between devices, equipment, or software components defined by common interconnection characteristics. Interface analysis — The checking and verification process that ensures intermodule communications links are performed correctly. Interference — Electromagnetic energy that is picked up with the signal you are receiving. This extra energy distorts the signal and interferes with its transmission. Interim accreditation — Temporary authorization granted by a designated approving authority for an information technology system to process, store, and transmit information based on preliminary results of security certification of the system. Interim approval to operate (IATO) — Temporary approval granted by a DAA for an IS to process information based on preliminary results of a security evaluation of the system. Interleaving — The alternating execution of programs residing in the memory of a multiprogramming environment. Intermediary — A specialist company that provides services better than its client companies. Internal accounting control — The process of safeguarding the accounting functions and processes of a business. This process includes validating that the accounting system complies with the appropriate, generally accepted accounting principles and that audit trails exist for verification of all processes. 899
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® Internal control — The method of safeguarding business assets, including verifying the accuracy and reliability of accounting data, promoting operational efficiency, and encouraging adherence to prescribed organizational policies and procedures. Internal information — Information that describes specific operational aspects of the organization. Internal network interface — Network’s internal interface between the internal intercepting function and a mediation function. International Association of Industrial Accident Boards and Commissions (IAIABC) — One of their standards is under consideration for use for the First Report of Injury standard under HIPAA. International Classification of Diseases (ICD) — A medical code set maintained by the World Health Organization (WHO). The primary purpose of this code set was to classify causes of death. A U.S. extension, maintained by the NCHS within the CDC, identifies morbidity factors, or diagnoses. The ICD-9-CM codes have been selected for use in the HIPAA transactions. International Government-to-Government (IG2G) — The E-commerce activities performed between two or more governments, including foreign aid. International organization — An organization of governments. International Organization for Standardization (ISO) — An organization that coordinates the development and adoption of numerous international standards. “ISO” is not an acronym, but the Greek word for “equal.” International Standards Organization — See International Organization for Standardization (ISO). International virtual private network (IVPN) — Vir tual private networks that depend on services offered by phone companies of various nationalities. Internet — A global computer network that links minor computer networks, allowing them to share information via standardized communication protocols. The Internet consists of large national backbone networks (such as MILNET, NSFNET, and CREN) and a myriad of regional and local campus networks all over the world. The Internet uses the Internet Protocol suite. To be on the Internet, you must have IP connectivity (i.e., be able to Telnet to — or ping — other systems). Networks with only e-mail connectivity are not actually classified as being on the Internet. Although it is commonly stated that the Internet is not controlled or owned by a single entity, this is really misleading, giving many users the perception 900
Glossary that no one is really in control (no one “owns”) the Internet. In practical reality, the only way the Internet can function is to have the major telecom switches, routers, satellite, and fiber-optic links in place at strategic locations. These devices at strategic locations are owned by a few major corporations. At any time, these corporation could choose to shut down these devices (which would shut down the Internet), alter these devices so only specific countries or regions could be on the Internet, or modify these devices to allow/disallow/monitor any communications occurring on the Internet. Internet address — A 32-bit address assigned to hosts using TCP/IP. Internet Architecture Board (IAB) — Formally called the Internet Activities Board. The technical body that oversees the development of the Internet suite of protocols (commonly referred to as TCP/IP). It has two task forces (the IRTF and the IETF), each charged with investigating a particular area. Internet Assigned Numbers Authority (IANA) — A largely governmentfunded overseer of IP allocations chartered by the FNC and the ISOC. Internet backbone — The major set of connections for computers on the Internet. Internet Control Message Protocol (ICMP) — The protocol used to handle errors and control messages at the IP layer. ICMP is actually part of the IP. Internet Engineering Task Force (IETF) — The Internet standards-setting organization with affiliates internationally from network industry representatives. This includes all network industry developers and researchers concerned with evolution and planned growth on the Internet. Internet layer — The stack in the TCP/IP protocols that addresses a packet and sends the packets to the network access layer. Internet Message Access Protocol (IMAP) — A method of accessing electronic mail or bulletin board messages that are kept on a (possibly shared) mail server. IMAP permits a “client” e-mail program to access remote message stores as if they were local. For example, e-mail stored on an IMAP server can be manipulated from a desktop computer at home, a workstation at the office, and a notebook computer while traveling, without the need to transfer messages of files back and forth between these computers. IMAP can be regarded as the next-generation POP. Internet Protocol (IP, lPv4) — The Internet Protocol (version 4), defined in RFC 791, is the network layer for the TCP/IP suite. It is a connectionless, best-effort, packet-switching protocol. 901
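The Internet address entry above defines a 32-bit address assigned to hosts using TCP/IP. The Python sketch below, using an arbitrary example address, shows the 32-bit value behind the familiar dotted-decimal notation.

```python
import socket

# An Internet (IPv4) address is a 32-bit value, usually written in
# dotted-decimal form.
dotted = "192.168.1.10"                        # illustrative address
packed = socket.inet_aton(dotted)              # 4 bytes = 32 bits
as_int = int.from_bytes(packed, "big")
print(len(packed) * 8)                         # -> 32
print(as_int)                                  # -> 3232235786
print(socket.inet_ntoa(packed))                # back to "192.168.1.10"
```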
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® Internet Protocol (Ping, IPv6) — IPv6 is a new version of the Internet Protocol that is designed to be evolutionary. Internet server computer — Computer that provides information and services on the Internet. Internet service provider (ISP) — An organization that provides direct access to the Internet, such as the provider that links your college or university to the Net. Internet telephony — A combination of hardware and software that uses the Internet as the medium for transmission of telephone calls in place of traditional telephone networks. Internetwork — A group of networks connected by routers so that computers on different networks can communicate; the Internet. Interoperability — The ability to exchange requests between entities. Objects interoperate if the methods that apply to one object can request services of another object. Interorganizational System (IOS) — Automates the flow of information between organizations to support the planning, design, development, production, and delivery of products and services. Intersection relation — A relation the user creates to eliminate a manyto-many relationship. Also called a composite relation. Intracell handovers — A cellular call is passed from one frequency to the next or one carrier to the next within a single cell site. Intranet — An internal organizational internet that is guarded against outside access by a special security feature called a firewall. Intrusion detection — The process of monitoring the events occurring in a computer system or network, detecting signs of security problems. Intrusion-detection software — Looks for unauthorized users attempting to gain access to a network on the Internet. Investigation — The phase of the systems development life cycle in which the problem or need is identified and a decision is made on whether to proceed with a full-scale study. Invisible GIFs (Tracker GIF, Clear GIF) — Electronic images, usually not visible to site visitors, that allow a Web site to count those who have visited that page or to access certain cookies. Invisible ink — A method of steganography that uses a special ink that is colorless and invisible until treated by a chemical, heat, or special light. It is sometimes referred to as sympathetic ink. 902
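A toy illustration of the Intrusion detection entry above: events are monitored and a source generating repeated authentication failures is flagged as a possible sign of a security problem. The event list and threshold are hypothetical.

```python
from collections import Counter

# Monitor authentication events and flag any source that exceeds a
# failure threshold.
events = [
    ("10.0.0.5", "failed"), ("10.0.0.5", "failed"), ("10.0.0.5", "failed"),
    ("10.0.0.5", "failed"), ("10.0.0.9", "success"),
]
THRESHOLD = 3

failures = Counter(src for src, outcome in events if outcome == "failed")
for source, count in failures.items():
    if count >= THRESHOLD:
        print(f"possible intrusion attempt from {source}: {count} failed logins")
```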
Glossary Invisible watermark — An overlaid image that is invisible to the naked eye, but which can be detected algorithmically. There are two different types of invisible watermarks: fragile and robust. IO — Information operations. IOM — Institute of Medicine. Prestigious group of physicians that study issues and advise Congress. The IOM developed a report on computerbased patient records that led to the creation of CPRI. IOS — Internetwork Operating System. IP — Internet Protocol. IP address — A unique number assigned to each computer on the Internet, consisting of four numbers, each less than 256, and each separated by a period, such as 129.16.255.0. IP datagram — The fundamental unit of information passed across the Internet. Contains source and destination addresses, along with data and a number of fields that define such things as the length of the datagram, the header checksum, and flags to say whether the datagram can be (or has been) fragmented. IP Security protocol (IPSec) — A protocol in development by the IETF to support secure data exchange. Once completed, IPSec is expected to be widely deployed to implement virtual private networks (VPNs). IPSec supports two encryption modes: Transport and Tunnel. Transport mode encrypts the data portion (payload) of each packet but leaves the header untouched. Tunnel mode is more secure because it encrypts both the header and the payload. On the receiving side, an IPSec-compliant device decrypts each packet. IP spoofing — IP (address) spoofing is a technique used to gain unauthorized access to computers or network devices, whereby the intruder sends messages with an IP source address to pretend that the message is coming from a trusted source. IPA — Independent Providers Association. IPC — Inter-process communication. IPL — Initial program load. IPSec — The security architecture for IP; developed by the IETF to support reliable and secure datagram exchange at the IP layer. The IPSec architecture specifies AH, ESP, Internet Key Exchange (IKE), and Internet Security Association Key Management Protocol (ISAKMP), among other things. IPX — Internet packet exchange. 903
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® IRB — Integrated routing and bridging. IRB — Institutional Review Board. IRC — Internet Relay Chat. This is a service (you must load the application on your computer) that allows interactive conversation on the Internet. IRC also allows you to exchange files and have “private” conversations. Some major supporters of this service are IRCnet and DALnet. IS — Intermediate system. IS security goal — See security goal. ISACA — Information Systems Audit and Control Association. ISAKMP — Internet Security Association Key Management Protocol. (ISC)2 — International Information Systems Security Certification Consortium. ISDN (Integrated Services Digital Network) — There are two forms of ISDN: PRI and BRI. BRI interface supports a total signaling rate of 144 kbps, which is divided up into two B or bearer channels, which run at 64 kbps, and a D or data channel, which runs at 16 kbps. The bearer channels carry the actual voice, video, or data information, and the D channel is used for signaling. PRI or primary rate interface provides the same throughput as a T-1 1.544 Mbps, has 23 B or bearer channels, which run at 64 kbps, and a D or data channel, which runs at 16 kbps. ISDN BRI — Integrated Services Digital Network — Basic Rate Interface. ISDN PRI — Integrated Services Digital Network — Primary Rate Interface. ISIS — Intermediate System Intermediate System (OSI standard routing protocol). ISM (industrial, scientific, and manufacturing) frequencies — A t e r m describing several frequencies in the radio spectrum set aside for specific purposes. ISO — See International Organization for Standardization. ISO 17799 — ISO 17799 gives general recommendations for information security management. It is intended to provide a common international basis for developing organizational security standards and effective security management practice and to provide confidence in inter-organizational dealings. ISO 9000 — A certification program that demonstrates an organization adheres to steps that ensure quality of goods and services. A quality series that comprises a set of five documents and was developed in 1987 by the International Standards Organization (ISO). 904
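The arithmetic behind the BRI figures in the ISDN entry above (two 64 kbps bearer channels plus a 16 kbps D channel) can be checked directly:

```python
# ISDN Basic Rate Interface: 2 B channels at 64 kbps plus 1 D channel at 16 kbps.
B_CHANNELS, B_RATE_KBPS = 2, 64
D_RATE_KBPS = 16

bri_total = B_CHANNELS * B_RATE_KBPS + D_RATE_KBPS
print(bri_total)   # -> 144 (kbps), the total signaling rate quoted in the entry
```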
Glossary Isolation — The separation of users and processes in a computer system from one another, as well as from the protection controls of the operating system. ISP — See Internet service Pprovider. IS-related risk — The probability that a particular threat agent will exploit, or trigger, a particular information system vulnerability and the resulting mission/business impact if this should occur. IS-related risks arise from legal liability or mission/business loss due to (1) unauthorized (malicious, nonmalicious, or accidental) disclosure, modification, or destruction of information; (2) nonmalicious errors and omissions; (3) IS disruptions due to natural or man-made disasters; (4) failure to exercise due care and diligence in the implementation and operation of the IS. ISSA — Information Systems Security Association. ISSO — Information System Security Officer. IT infrastructure — The hardware, software, and telecommunications equipment that, when combined, provides the underlying foundation to support the organization’s goal. IT security — Technological discipline concerned with ensuring that IT systems perform as expected and do nothing more; that information is provided adequate protection for confidentiality; that system, data, and software integrity is maintained; and that information and system resources are protected against unplanned disruptions of processing that could seriously impact mission accomplishment. Synonymous with automated information system security, computer security, and information systems security. IT security architecture — A description of security principles and an overall approach for complying with the principles that drive the system design; that is, guidelines on the placement and implementation of specific security services within various distributed computing environments. IT security basics — A core set of generic IT security terms and concepts for all federal employees as a baseline for further role-based learning. IT security body of knowledge topics and concepts — A set of 12 highlevel topics and concepts intended to incorporate the overall body of knowledge required for training in IT security. IT security goals — See security goals. IT security literacy — The first solid step of the IT security training level where the knowledge obtained through training can be directly related to the individual’s role in his or her specific organization. 905
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® IT security program — A program established, implemented, and maintained to assure that adequate IT security is provided for all organizational information collected, processed, transmitted, stored, or disseminated in its information technology systems. Synonymous with automated information system security program, computer security program, and information systems security program. IT system — A collection of computing or communications components and other resources that support one or more functional objectives of an organization. IT system resources include any IT component plus associated manual procedures and physical facilities that are used in the acquisition, storage, manipulation, display, or movement of data or to direct or monitor operating procedures. An IT system may consist of one or more computers and their related resources of any size. The resources that comprise a system do not have to be physically connected. ITA — Protection of the TSF, availability of exported TSF data. ITC — (1) User data protection, import from outside TSF control; (2) protection of the TSF, confidentiality of exported TSF data; (3) trusted path/channels, inter-TSF trusted channel. Iterative development life cycle — A strategy for developing systems that allows for the controlled reworking of parts of a system to remove mistakes or to make improvements based on feedback. ITL — Information Technology Laboratory. IT-related risk — The net mission/business impact considering the probability that a particular threat source will exploit, or trigger, a particular information system vulnerability, and the resulting impact if this should occur. IT-related risks arise from legal liability or mission/business loss due to, but not limited to, (1) unauthorized (malicious, nonmalicious, or accidental) disclosure, modification, or destruction of information; (2) nonmalicious errors and omissions; (3) IT disruptions due to normal or man-made disasters; (4) failure to exercise due care and diligence in the implementation and operation of the IT. ITS — Intelligent transportation systems. ITSec — Information Technology Security Evaluation Criteria. ITT — (1) User data protection, internal TOE transfer. (2) Protection of the TSF, internal TOE TSF data transfer. ITU — International Telecommunications Union. ITU-T — ITU Telecommunication Standardization Sector. IW — Information warfare. 906
Glossary Jargon code — A code that uses words (esp. nouns) instead of figures or letter-groups as the equivalent of plain language units. Java — Object-oriented programming language developed at Sun Microsystems to solve a number of problems in modern programming practice. The Java language is used extensively on the World Wide Web, particularly for applets. JCAHO — See Joint Commission on Accreditation of Healthcare Organizations. J-codes — A subset of the HCPCS Level II code set with a high-order value of “J” that has been used to identify certain drugs and other items. The final HIPAA transactions and code sets rule states that these J-codes will be dropped from the HCPCS, and that NDC codes will be used to identify the associated pharmaceuticals and supplies. JHITA — See Joint Healthcare Information Technology Alliance. Jitter attack — A method of testing or defeating the robustness of a watermark. This attack applies “jitter” to a cover by splitting the file into a large number of samples, the deletes or duplicates one of the samples and puts the pieces back together. At this point the location of the embedded bytes cannot be found. This technique is nearly imperceptable when used on audio and video files. Job — A complete set of programs to be executed in sequence on a computer. Job accounting system — A set of systems software that can track the services and resources used by computer system account holders. Job function — The roles and responsibilities specific to an individual, not a job title. Job queue — A set of programs held in temporary storage and awaiting execution. Join — An operation that takes two relations as operand and produces a new relation by concealing the tuples and matching the corresponding columns when a stated condition holds between the two. Joint application development (JAD) — Occurs when knowledge workers and IT specialists meet, sometimes for several days, to define or review the business requirements for the system. Joint Commission on Accreditation of Healthcare Organizations (JCAHO) — An organization that accredits healthcare organizations. In the future, the JCAHO may play a role in certifying these organizations’ compliance with the HIPAA A/S requirements. 907
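A minimal illustration of the Join entry above, combining tuples from two relations whenever the stated condition holds between the matching columns; the tables and column names are invented for the example.

```python
# A relational join in plain Python: tuples from the two relations are
# combined whenever the join condition (equal dept_id) holds.
employees   = [{"name": "Ada", "dept_id": 1}, {"name": "Bob", "dept_id": 2}]
departments = [{"dept_id": 1, "dept": "Security"}, {"dept_id": 2, "dept": "Audit"}]

joined = [
    {**e, **d}
    for e in employees
    for d in departments
    if e["dept_id"] == d["dept_id"]     # the join condition
]
print(joined)
# [{'name': 'Ada', 'dept_id': 1, 'dept': 'Security'},
#  {'name': 'Bob', 'dept_id': 2, 'dept': 'Audit'}]
```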
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® Joint Healthcare Information Technology Alliance (JHITA) — A healthcare industry association that represents AHIMA, AMIA, CHIM, CHIME, and HIMSS on legislative and regulatory issues affecting the use of health information technology. JPEG — Joint Photographic Experts Group. Judgment — The ability to make a decision or form an opinion by discerning and evaluating. Jukebox — Hardware that houses, reads, and writes to many optical disks using a variety of mechanical methods for operation. Just in Time (JIT) — An approach that produces or delivers a product or service just at the time the customer wants it. KDC — Key distribution center. Kerberos — Developing standard for authenticating network users. Kerberos offers two key benefits: it functions in a multi-vendor network, and it does not transmit passwords over the network. Kerckhoff’s principle — A cryptography principle that states that if the method used to encipher data is known by an opponent, then security must lie in the choice of the key. Kermit — A (once) popular file transfer and terminal emulation program. Key (cryptovariable) — In cryptography, a sequence of symbols that controls encryption and decryption. For some encryption mechanisms (symmetric), the same key is used for both encryption and decryption; for other mechanisms (asymmetric), the keys used for encryption and decryption are different. Key fingerprint — The actual binary code of an encryption key, which is presented in hexadecimal notation. Key generation — The origination of a key or set of distinct keys. Key length — The number of binary digits, or bits, in an encryption algorithm’s key. Key length is sometimes used to measure the relative strength of the encryption algorithm. Key logger or key trapper software — A program that, when installed on a computer, records every keystroke and mouse click. Key management — The generation, storage, distribution, deletion, archiving, and application of keys in accordance with a security policy. Key, primary — A unique attribute used to identify a class of records in a database. 908
Key space — The total number of possible values of keys in a cryptographic algorithm or other security measure, such as a password. For example, a 20-bit key would have a key space of 1,048,576. See key length, key fingerprint.
Key-to-disk device — A keyboard unit that records data as patterns of magnetic spots onto magnetic disks.
key2audio™ — A product of Sony designed to control the copying of CDs by embedding code within the CD that prevents playback on a PC or Mac, thereby preventing track ripping or copying.
Keyboard — Today’s most popular input technology.
Kilobyte (K byte) — The equivalent of 1,024 bytes.
KMI — Key management infrastructure.
Knowledge — Information from multiple sources integrated with common, environmental, real-world experience.
Knowledge acquisition — The component of the expert system that the knowledge engineer uses to enter the rules.
Knowledge base — The part of an expert system that contains specific information and facts about the expert area. Rules that the expert system uses to make decisions are derived from this source.
Knowledge-based system — An artificial intelligence system that applies reasoning capabilities to reach a conclusion. Also known as an expert system.
Knowledge engineer — The person who formulates the domain expertise of an expert system.
Knowledge levels — Verbs that describe actions an individual should be capable of performing on the job after completion of the training associated with the cell. The verbs are identified for three training levels: Beginning, Intermediate, and Advanced.
Knowledge worker — Works with and produces information as a product.
Known-cover attack — A type of attack where both the original, unaltered cover and the stego-object are available.
Known-message attack — A type of attack where the hidden message is known to exist by the attacker and the stego-object is analyzed for patterns that may be beneficial in future attacks. This is a very difficult attack, equal in difficulty to a stego-only attack.
Known-stego attack — An attack where the tool (algorithm) is known and the original cover object and stego-object are available.
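The key space entry above can be verified with simple arithmetic: the number of possible keys is 2 raised to the key length in bits. The password comparison is an added illustration, not part of the glossary:

    print(2 ** 20)    # a 20-bit key space -> 1048576, the figure quoted above
    print(2 ** 128)   # a 128-bit key space, astronomically larger
    print(26 ** 8)    # key space of an 8-character, lowercase-only password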
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® L2F Protocol — Layer 2 Forwarding Protocol. Protocol that supports the creation of secure virtual private dial-up networks over the Internet. Label — A set of symbols used to identify or describe an item, record, message, or file. LAN, local area network — High-speed, low-error data network covering a relatively small geographic area (up to a few thousand meters). LANs connect workstations, peripherals, terminals, and other devices in a single building or other geographically limited area. LAN standards specify cabling and signaling at the physical and data-link layers of the OSI model. Ethernet, FDDI, and Token Ring are widely used LAN technologies. Compare with MAN and WAN. LAN switch — High-speed switch that forwards packets between data-link segments. Most LAN switches forward traffic based on MAC addresses. This variety of LAN switch is sometimes called a frame switch. LAN switches are often categorized according to the method they use to forward traffic: cut-through packet switching or store-and-forward packet switching. Multi-layer switches are an intelligent subset of LAN switches. Compare with multi-layer switch. See also cut-through packet switching, storeand-forward packet switching. Language processing — The step of ASR in which the system attempts to analyze and make sense of the user’s verbal instructions by comparing the word phonemes generated in step 2 with a language model database. Language translator — Systems software that converts programs written in assembler or a higher-level language into machine code. LAPB — Link Access Procedure — Balanced. LAPD — Link Access Procedure on the D Channel. LAPF — Link Access Procedure for Frame-Mode Bearer Services. Laser — Light Amplification by Stimulated Emission of Radiation. Analog transmission device in which a suitable active material is excited by an external stimulus to produce a narrow beam of coherent light that can be modulated into pulses to carry data. Networks based on laser technology are sometimes run over SONET. Laser printer — An output unit that uses intensified light beams to form an image on an electrically charged drum and then transfers the image to paper. Last mile bottleneck problem — Occurs when information is traveling on the Internet over a very fast line for a certain distance and then comes near the user where it must travel over a slower line. LAT — Local area transport. 910
Glossary Latency — In local networking, the time (measured in bits at the transmission rate) for a signal to propagate around or throughput the network. The time taken by a DASD device to position a storage location to reach the read arm over the physical storage medium. For general purposes, average latency time is used. Delay between the time a device requests access to a network and the time it is granted permission to transmit. Law enforcement agency (LEA) — Organization authorized by a lawful authorization based on a national law to receive the results of telecommunications interceptions. Law enforcement monitoring facility (LEMF) — Law enforcement facility designated as the transmission destination for the results of interception relating to a particular interception subject. Law enforcement official — See Part II, 45 CFR 164.501. Lawful authorization — Permission granted to an LEA under certain conditions to intercept specified telecommunications and requiring cooperation from an NWO/AP/SvP. Typically, this refers to a warrant or order issued by a lawfully authorized body. Lawful interception or intercept — See interception. Laws and regulations — Federal, government-wide and organization-specific laws, regulations, policies, guidelines, standards, and procedures mandating requirements for the management and protection of information technology resources. Layer 3 switching — The emerging layer 3 switching technology integrates routing with switching to yield very high routing throughput rates in the millions-of-packets-per-second range. The movement to layer 3 switching is designed to address the downsides of the current generation of layer 2 switches, which are functionally equivalent to bridges. These downsides for a large, flat network include being subject to broadcast storms, spanning tree loops, and address limitations that drove the injection of routers into bridged networks in the late 1980s. Currently, layer 3 switching is represented by a number of approaches in the industry. Layered defense — A combination of security services, software and hardware, infrastructures, and processes that are implemented to achieve a required level of protection. These mechanisms are additive in nature, with the minimum protection being provided by the network and infrastructure layers. LCD — Life-cycle support, life-cycle definition. LCN — Logical Channel Number (X.25). LCP — Link Control Protocol (X.25). 911
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® LDAP — Lightweight Directory Access Protocol. Protocol that provides access for management and browser applications that provide read/write interactive access to the X.500 Directory. LDN — Local dial number (ISDN). Learning — Knowledge gained by study (in classes or through individual research and investigation). Learning continuum — A representation in which the common characteristic of learning is presented as a series of variations from awareness through training to education. Learning objective — A link between the verbs from the “knowledge levels” section to the “Behavioral Outcomes” by providing examples of the activities an individual should be capable of doing after successful completion of training associated with the cell. Learning objectives recognize that training must be provided at Beginning, Intermediate, and Advanced levels. Leased line — An un-switched telecommunications channel leased to an organization for its exclusive use. Least cost routing (LCR) — The automatic selection of the most economically available route for each outgoing trunk call. Also known as automatic route selection. Least privilege — Confinement technique in which each process is given only the minimum privileges it needs to function; also referred to as sandboxing. (See also need-to-know.) Least recently used (LRU) — A replacement strategy in which new data must replace existing data in an area of storage; the least recently used items are replaced. Least significant bit steganography — A substitution method of steganography where the right most bit in a binary notation is replaced with a bit from the embedded message. This method provides “security through obscurity,” a technique that can be rendered useless if an attacker knows the technique is being used. Legacy Information System — An operational IS that existed prior to the implementation of the DITSCAP. Legacy system — A previously built system using older technologies such as mainframe computers and programming languages such as COBOL. Letter bomb — A Trojan horse that triggers when an e-mail message is read. 912
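A minimal sketch of the least significant bit steganography entry above. It embeds into a plain byte string purely for illustration; real tools operate on image pixels or audio samples:

    def embed_lsb(cover: bytes, message_bits: str) -> bytes:
        # Replace the right-most bit of each cover byte with one message bit.
        out = bytearray(cover)
        for i, bit in enumerate(message_bits):
            out[i] = (out[i] & 0xFE) | int(bit)
        return bytes(out)

    def extract_lsb(stego: bytes, n_bits: int) -> str:
        # Read the right-most bit of each byte back out.
        return "".join(str(b & 1) for b in stego[:n_bits])

    cover = bytes(range(8))                  # eight hypothetical cover samples
    stego = embed_lsb(cover, "10110010")
    assert extract_lsb(stego, 8) == "10110010"

Because only the low-order bit of each sample changes, the altered cover remains visually or audibly indistinguishable from the original, which is also why the method fails as soon as an attacker knows to examine that bit.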
Glossary Liability — Condition of being or potentially subject to an obligation; condition of being responsible for a possible or actual loss, penalty, evil, expense, or burden. Condition that creates a duty to perform an act immediately or in the future, including almost every character of hazard or responsibility, absolute, contingent, or likely. Lightweight Directory Access Protocol (LDAP) — This protocol provides access for management and browser application that provide read/write interactive access to the X.500 Directory. Likert scale — An evaluation tool that is usually from 1 to 5 (1 being very good; 5 being not good, or vice versa), designed to allow an evaluator to prioritize the results of the evaluation. Limit Check — An input control text that assesses the value of a data field to determine whether values fall within set limits. Line conditioning — A service offered by common carriers to reduce delay, noise, and amplitude distortion to produce transmission of higher data speeds. Line printer — An output unit that prints alphanumeric characters one line at a time. Line speed — The transmission rate of signals over a circuit, usually expressed in bits per second. Line-of-sight (LOS) — Defined by the Fresnel Zone. Fresnel Zone clearance is the minimum clearance over obstacles that the signal needs to be sent over. Reflection or path bending occurs if the clearance is not sufficient. Linguistic steganography — The method of steganography where a secret is embedded in a harmless message. See also jargon code. Link encryption — The application of online crypto-operations to a link of a communications system so that all information passing over the link is encrypted in its entirety. Linkage — The purposeful combination of data or information from one information system with that from another system in the hope of deriving additional information. Linux — An open source operating system that provides a rich operating environment for high-end workstations and network servers. List — A collection of information arranged in columns and rows in which each column displays one particular type of information. List definition table — A description of a list by column. LLC — Logical Link Control. 913
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® LLD — Development, low-level design. LMI — Local Management Interface (Frame Relay). Load sharing — A multiple-computer system that shares the load during peak hours. During non-peak periods or standard operation, one system can handle the entire load with the others acting as fallback units. Local area network (LAN) — The physical connection of microcomputers with communication media (e.g., cable and fiber optics) that allows the sharing of information and peripherals among those microcomputers. Local code(s) — A generic term for code values that are defined for a state or other political subdivision, or for a specific payer. This term is most commonly used to describe HCPCS Level III Codes, but also applies to state-assigned Institutional Revenue Codes, Condition Codes, Occurrence Codes, Value Codes, etc. Local loop — The physical connection from the subscriber’s premises to the carrier’s point of presence (POP). The local loop can be provided over any suitable transmission medium. Local multipoint distribution services (LMDS) — A method of distributing TV signals to households in a local community. LMDS uses broadcast microwave signals to contact local dishes. The received signal is then distributed through the central CATV system. Location information — Information relating to the geographical, physical, or logical location of an identity relating to an interception subject. Lock/key protection system — A protection system that involves matching a key or a password with a specified access requirement. Logged-on but unattended — A workstation is considered logged on but unattended when the user is (1) logged on but is not physically present in the office; and (2) there is no one else present with an appropriate level of clearance safeguarding access to the workstation. Coverage must be equivalent to that which would be required to safeguard hardcopy information if the same employee were away from his or her desk. Users of logged on but unattended classified workstations are subject to the issuance of security violations. Logging — The automatic recording of data for the purpose of accessing and updating it. Logic bomb — A Trojan horse that will trigger when a specific logical event or action occurs. Logical error — A programming error that causes the wrong processing to take place in a syntactically valid program. 914
Glossary Logical file organization — The sequencing of data records in a file according to their key. Logical Link Control (LLC) — The portion of the Link Level Protocol in the 802 standards that is in direct contact with higher-level layers. Logical observation identifiers, names, and codes (LOINC) — A set of universal names and ID codes that identify laboratory and clinical observations. These codes, which are maintained by the Regenstrief Institute, are expected to be used in the HIPAA claim attachments standard. Logical operation — A comparison of data values within the arithmetic logic unit. These comparisons show when one value is greater than, equal to, or less than a second value. Logical operator — A symbol used in programming that initiates a comparison operation of two or more data values. Logical organization — Data elements organized in a manner that meets human and organizational processing needs. Logically disconnect — Although the physical connection between the control unit and a terminal remains intact, a system-enforced disconnection prevents communication between the control unit and the terminal. LOINC — See logical observation identifiers, names, and codes. Loop — A repeating structure or process. Loophole — An error of omission or oversight in software, hardware, or firmware that permits circumventing the access control process. Lost pouch — Any pouch-out-of-control that is not recovered. LRA — Local registration authority (for digital certificates). LSA — Link-state advertisement. LSP — Link-state packet. LT — Local termination. LTC — Long-term care. M+CO — Medicare Plus Choice Organization. MAC — (1) Mandatory access controls. (2) Message authentication codes. (3) Media access control. MAC address — Standardized data-link layer address ingrained into a NIC that is required for every port or device that connects to a LAN. Other devices in the network use these addresses to locate specific ports in the network and to create and update routing tables and data structures. MAC addresses are 6 bytes long and are controlled by the IEEE. Also known as 915
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® a hardware address, MAC-layer address, and physical address. Compare with network address. Mac OS — The operating system for today’s Apple computers. Machine language — Computer instructions or code representing computer operations and memory addresses in a numeric form that is executable by the computer without translation. Macro virus — A computer virus that spreads by binding itself to software such as Word or Excel. Madison Project — A code name for IBM’s Electronic Music Management System (EMMS). EMMS is being designed to deliver piracy-proof music to consumers via the Internet. Magicgate — A memory media stick from Sony designed to allow users access to copyrighted music or data. Magnetic disk — A storage device consisting of metallic platters coated with an oxide substance that allows data to be recorded as patterns of magnetic spots. Magnetic ink character recognition (MICR) — An input method under which data is encoded in special ink containing iron particles. These particles can be magnetized and sensed by special machines and converted into computer input. Magnetic tape — A storage medium consisting of a continuous strip of coated plastic film wound onto a reel and on which data can be recorded as defined patterns of magnetic spots. Mail gateway — A machine that connects two or more e-mail systems (especially dissimilar mail systems on two different networks) and transfers messages between them. Sometimes the mapping and translation can be quite complex, and generally it requires a store-and-forward scheme whereby the message is received from one system completely before it is transmitted to the next system after suitable translations. Mail relay server — An e-mail server that relays messages where neither the sender nor the receiver is a local user. A risk exists that an unauthorized user could hijack these open relays and use them to spoof their own identity. Mail server — Provides e-mail services and accounts. Mailing list — Discussion groups organized by area of interest. Mainframe computer — A computer designed to meet the computing needs of hundreds of people in a large business environment. Maintain or Maintenance — See Part II, 45 CFR 162.103. 916
Maintainability — The general ease of a system to be maintained, at all levels of maintenance.
Maintenance — Tasks associated with the modification or enhancement of production software.
Maintenance organization — The government organization responsible for the maintenance of an IS. Although the actual organization performing maintenance on a system may be a contractor, the maintenance organization is the government organization responsible for the maintenance.
Maintenance phase — Monitors and supports the new system to ensure it continues to meet the business goals.
Maintenance programmer — An applications programmer responsible for making authorized changes to one or more computer programs and ensuring that the changes are tested, documented, and verified.
Major application — An application that requires special attention to security due to the risk and magnitude of the harm resulting from the loss, misuse, or unauthorized access to, or modification of, the information in the application. A major application might comprise many individual application programs and hardware, software, and telecommunications components. Major applications can be either major software applications or a combination of hardware/software where the only purpose of the system is to support a specific mission-related function.
MAN — Metropolitan area network.
Management controls — Actions taken to manage the development, maintenance, and use of the system, including system-specific policies, procedures, and rules of behavior, individual roles and responsibilities, individual accountability, and personnel security decisions.
Management Information System (MIS) — Deals with the planning, development, management, and use of information technology tools to help people perform tasks related to information processing and management. Mandatory Access Control (MAC) — MAC is a means of restricting access to data based on varying degrees of security requirements for information contained in the objects. A policy-based means of restricting access to objects based on the sensitivity (as represented by a label) of the information contained in the objects and the formal authorization (access control privileges) of subjects to access information of such sensitivity. Man-in-the-middle attack — Scenarios in which a malicious user can intercept messages and insert other messages that compromise the otherwise secure exchange of information between two parties. 917
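A toy sketch of the mandatory access control decision described above. The level names, their ordering, and the read-only check are assumptions for illustration; a real MAC policy also evaluates categories and formal access approval:

    LEVELS = {"UNCLASSIFIED": 0, "CONFIDENTIAL": 1, "SECRET": 2, "TOP SECRET": 3}

    def may_read(subject_clearance: str, object_label: str) -> bool:
        # Read access is granted only if the subject's clearance dominates
        # (is at least as high as) the sensitivity label on the object.
        return LEVELS[subject_clearance] >= LEVELS[object_label]

    print(may_read("SECRET", "CONFIDENTIAL"))   # True
    print(may_read("CONFIDENTIAL", "SECRET"))   # False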
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® MAP — Manufacturing Automation Protocol. Maritime Strategy — Naval objectives for sea control, maritime power projection, and control and protection of shipping. The Naval objectives in support of the National Strategy. Marketing — See Part II, 45 CFR 164.501. Marketing mix — The set of marketing tools that a firm uses to pursue its marketing objectives in the target market. Masquerade — A type of security threat that occurs when an entity successfully pretends to be a different entity. Mass customization — When a business gives its customers the opportunity to tailor its product or service to the customer’s specifications. Massachusetts Health Data Consortium (MHDC) — An organization that seeks to improve healthcare in New England through improved policy development, better technology planning and implementation, and more informed financial decision making. Master file — An automated file that contains semi-permanent or permanent information and is maintained over a time period required by organizational policy. Master Plan — A long-range plan, derived from the notional architecture, for development and procurement of capabilities. Matrix display — The alphanumeric representation of characters as patterns of tiny dots in specific positions on a display terminal. Matrix printer — A hard-copy printing device that forms alphanumeric characters with small pins arranged in a matrix of rows and columns. Mature system — A fully operational system that performs all the functions it was designed to accomplish. MAU — Media attachment unit. Maximum Defined Data Set — Under HIPAA, this is all of the required data elements for a particular standard based on a specific implementation specification. An entity creating a transaction is free to include whatever data any receiver might want or need. The recipient is free to ignore any portion of the data that is not needed to conduct their part of the associated business transaction, unless the inessential data is needed for coordination of benefits. Also see Part II, 45 CFR 162.103. MCO — Managed Care Organization. M-commerce — The term used to describe E-commerce conducted over a wireless device such as a cell phone or personal digital assistant. 918
Glossary MCS — TOE access, limitation on multiple concurrent sessions. MD5 hash value — A mathematically generated string of 32 letters and digits that is unique for an individual storage medium at a specific point in time. MDx — Message Digest (e.g., MD5). Media — The various physical forms (e.g., disk, tape, and diskette) on which data is recorded in machine-readable formats. Media access control (MAC) — Lower of the two sub-layers of the datalink layer defined by the IEEE. The MAC sub-layer handles access to shared media, such as whether token passing or contention will be used. A local network control protocol that governs station access to a shared transmission medium. Examples are token passing and CSMA. See also carrier sense, multiple access. Mediation — Action by an arbiter that decides whether or not a subject or process is permitted to perform a given operation on a specified object. Mediation function — A mechanism that passes information between an NWO, an AP or an SvP, and a handover interface, and information between the internal network interface and the handover interface. Medicaid Fiscal Agent (FA) — The organization responsible for administering claims for a state Medicaid program. Medicaid State Agency — The state agency responsible for overseeing the state’s Medicaid program. Medical Code Sets — Codes that characterize a medical condition or treatment. These code sets are usually maintained by professional societies and public health organizations. Compare to administrative code sets. Medical Records Institute (MRI) — An organization that promotes the development and acceptance of electronic healthcare record systems. Medicare contractor — A Medicare Part A Fiscal Intermediary, a Medicare Part B Carrier, or a Medicare Durable Medical Equipment Regional Carrier (DMERC). Medicare durable medical equipment regional carrier (DMERC) — A Medicare contractor responsible for administering Durable Medical Equipment (DME) benefits for a region. Medicare Part A Fiscal Intermediary (FI) — A Medicare contractor that administers the Medicare Part A (institutional) benefits for a given region. Medicare Part B carrier — A Medicare contractor that administers the Medicare Part B (Professional) benefits for a given region. 919
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® Medicare Remittance Advice Remark Codes — A national administrative code set for providing either claim-level or service-level Medicarerelated messages that cannot be expressed with a Claim Adjustment Reason Code. This code set is used in the X12 835 Claim Payment & Remittance Advice transaction, and is maintained by the HCFA. Megabyte (Mbyte, MB) — The equivalent of 1,048,576 bytes. Megahertz (MHz) — The number of millions of CPU cycles per second. Memorandum of understanding (MOU) — A document that provides a general description of the responsibilities that are to be assumed by two or more parties in their pursuit of some goal(s). More specific information may be provided in an associated SOW. Memorandum of understanding/agreement (MOU/A) — A document established between two or more parties to define their respective responsibilities in accomplishing a particular goal or mission. In this guide, an MOU/A defines the responsibilities of two or more organizations in establishing, operating, and securing a system interconnection. Memory — The area in a computer that serves as temporary storage for programs and data during program execution. Memory address — The location of a byte or word of storage in computer memory. Memory bounds — The limits in the range of storage addresses for a protected region in memory. Memory chips — A small integrated circuit chip with a semiconductor matrix used as computer memory. Menu — A section of the computer program — usually the top-level module — that controls the order of execution of other program modules. Also, online options displayed to a user, prompting the user for specific input. Message — (1) The data input by the user in the online environment that is used to drive a transaction. The output of transaction. (2) In steganography, the data a sender wishes to remain confidential. This data can be text, still images, audio, video, or anything that can be represented as a bitstream. Message address — The information contained in the message header that indicates the destination of the message. Message authentication code (MAC) — A one-way hash computed from a message and some secret data. It is difficult to forge without knowing the secret data. Its purpose is to detect if the message has been altered. 920
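The message authentication code entry above can be demonstrated with Python's standard hmac module. HMAC-SHA-256 is only one way of combining a message with secret data, and the key and message literals are placeholders:

    import hmac, hashlib

    secret = b"shared-secret-key"
    message = b"Transfer 100 to account 42"

    tag = hmac.new(secret, message, hashlib.sha256).hexdigest()

    # The receiver recomputes the MAC with the same secret; any change to the
    # message, or the wrong key, makes the constant-time comparison fail.
    print(hmac.compare_digest(tag, hmac.new(secret, message, hashlib.sha256).hexdigest()))  # True

Without the secret data, an attacker who alters the message cannot produce a tag that passes the comparison.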
Glossary Message digest — An example would be MD5. A message digest is a combination of alphanumeric characters generated by an algorithm that takes a digital object (such as a message you type) and pulls it through a mathematical process, giving a digital fingerprint of the message (enabling one to verify the integrity of a given message). Message handling system (MHS) — The system of message user agents, message transfer agents, message stores, and access units that together provide OSI e-mail. MHS is specified in the ITU-TSS X.400 series of recommendations. Message stream — The sequence of messages or parts of messages to be sent. Message transfer agent (MTA) — An OSI application process used to store and forward messages in the X.400 message handling system. Equivalent to Internet mail agent. Messaging application — An application based on a store and forward paradigm; it requires an appropriate security context to be bound with the message itself. Messaging service — An interactive service that offers user-to-user communication between individual users via storage units with store-andforward, and mailbox or message handling functions (e.g., information editing, processing, and conversion). Messaging-based workflow system — Sends work assignments through an e-mail system. Metadata — The description of such things as the structure, content, keys, and indexes of data. Metalanguage — A language used to specify other languages. Metatag — A part of a Web site text not displayed to users but accessible to browsers and search engines for finding and categorizing Web sites. Method — A function, capability, algorithm, formula, or process that an object is capable of performing. Metropolitan area network (MAN) — A data network intended to serve an area approximating that of a large city or college campus. Such networks are being implemented by innovative techniques, such as running fiber cables through subway tunnels. MGMA — Medical Group Management Association. MHDC — See Massachusetts Health Data Consortium. MHDI — See Minnesota Health Data Institute. 921
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® MIB — Management information base. Microcomputer — A small microprocessor-based computer built to handle input, output, processing, and storage functions. Microdot — A detailed form of microfilm that has been reduced to an extremely small size for ease of transport and purposes of security. Microfilm — A film for recording alphanumeric and graphics output that has been greatly reduced in size. Micro-payment — A technique to facilitate the exchange of small amounts of money for an Internet transaction. Microphone — For capturing live sounds, such as human voice. Microprocessor — A single small chip containing circuitry and components for arithmetic, logical, and control operations. Microsoft Windows 2000 Millennium (Windows 2000 Me) — An operating system for a home computer featuring utilities for setting up a home network and performing video, photo, and music editing and cataloging. Microsoft Windows 2000 Professional (Windows 2000 Pro) — An operating system for people who have a personal computer connected to a network of other computers at work or at school. Microsoft Windows XP Home — Microsoft’s latest upgrade to Windows 2000Me, with enhanced features for allowing multiple users to use the same computer. Microsoft Windows XP Professional (Windows XP Pro) — Microsoft’s latest upgrade to Windows 2000 Pro. Microwave — A type of radio transmission used to transmit information. Middleware — The distributed software needed to support interactions between client and servers. MIDI — Musical instrument digital interface. Millions of instructions per second (MIPS) — Used as a measure for assessing the speed of mainframe computers. Also, meaningless indicator of processor speed. Minicomputer — Typically, a word-oriented computer whose memory size and processing speed falls between that of a microcomputer and a medium-sized computer. Minimum level of protection — The reduction in the total risk that results from the impact of in-place safeguards. See also total risk, acceptable risk, residual risk. 922
Glossary Minimum scope of disclosure — The principle that, to the extent practical, individually identifiable health information should only be disclosed to the extent needed to support the purpose of the disclosure. Minimum security baseline — A set of minimum acceptable security controls that are applicable to a range of information technology systems. Minimum security baseline assessment — An evaluation of controls protecting an information system against a set of minimum acceptable security requirements. Minnesota Health Data Institute (MHDI) — A public-private partnership for improving the quality and efficiency of healthcare in Minnesota. MHDI includes the Minnesota Center for Healthcare Electronic Commerce (MCHEC), which supports the adoption of standards for electronic commerce and also supports the Minnesota EDI Healthcare Users Group (MEHUG). Minor application — An application, other than a major application, that requires attention to security due to the risk and magnitude of harm resulting from the loss, misuse, or unauthorized access to or modification of the information in the application. Minor applications are typically included as part of a general support system. MIPS — See millions of instructions per second. Mirror image backup — Mirror image backups (also referred to as bitstream backups) involve the backup of all areas of a computer hard disk drive or another type of storage media (e.g., Zip disks, floppy disks, Jazz disks, etc.). Such mirror image backups exactly replicate all sectors on a given storage device. Thus, all files and ambient data storage areas are copied. Such backups are sometimes referred to as “evidence-grade” backups and they differ substantially from standard file backups and network server backups. The making of a mirror image backup is simple in theory, but the accuracy of the backup must meet evidence standards. Accuracy is essential and to guarantee accuracy, mirror image backup programs typically rely on mathematical CRC computations in the validation process. These mathematical validation processes compare the original source data with the restored data. When computer evidence is involved, accuracy is extremely important, and the making of a mirror image backup is typically described as the preservation of the “electronic crime scene.” Mirrored site — An alternate site that contains the same information as the original. Mirror sites are set up for backup and disaster recovery as well to balance the traffic load for numerous download requests. Such “download mirrors” are often placed in different locations throughout the Internet. 923
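The mirror image backup entry above describes validating an evidence-grade copy with mathematical computations. The sketch below shows the idea of that comparison; the file names are hypothetical, and a real forensic tool would work sector by sector rather than reading whole files into memory:

    import hashlib, zlib

    with open("evidence_drive.img", "rb") as f:    # hypothetical source image
        source = f.read()
    with open("restored_copy.img", "rb") as f:     # hypothetical restored copy
        restored = f.read()

    # Both fingerprints must match exactly for the copy to be considered accurate.
    print(zlib.crc32(source) == zlib.crc32(restored))
    print(hashlib.md5(source).hexdigest() == hashlib.md5(restored).hexdigest())  # 32 hex characters each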
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® Mishap risk — An expression of the possibility and impact of an unplanned event or series of events resulting in death, injury, occupational illness, damage to or loss of equipment or property (physical or cyber), or damage to the environment in terms of potential severity of consequences and likelihood of occurrence. See also risk. MISPC — Minimum interoperability specification of PKI components; a standard that specifies a minimal set of features, transactions, and data formats for the various certification management components that make up a PKI. Mission — A specific task with which a person, a group of individuals, or an organization is entrusted to perform. Mission criticality — The property that data, resources, and processes may have, which denotes that the importance of that item to the accomplishment of the mission is sufficient to be considered an enabling/disabling factor. Mission justification — The description of the operational capabilities required to perform an assigned mission. This includes a description of a system’s capabilities, functions, interfaces, information processed, operational organizations supported, and the intended operational environment. Mistake — An erroneous human action (accidental or intentional) that produces a fault condition. Mjuice — An online music store that provides secure distribution of MP3s over the Internet. A secure player and a download system allow users to play songs an unlimited number of times, but only on a registered player. MLP — Multi-link PPP. MLS — Multi-level secure. MMP — Multi-chassis Multi-link PPP. MNWF — Must not work function. Mobile base station (MBS) — Component of cellular network that provides data-link relay functions for a set of radio channels serving a cell. Mobile site — The use of a mobile/temporary facility to serve as a business resumption location. They usually can be delivered to any site and can house information technology and staff. Mobile switching center (MSC) — The location of the digital access and cross-connect system (DACS) in a cellular telephone network. Mobile telephone switching office (MTSO) — Controls the entire operation of a cellular system. It is a sophisticated computer that monitors all cellular calls, arranges handoffs and manages billing information. 924
Glossary Mode of operation — A classification for systems that execute in a similar fashion and share distinctive operational characteristics (e.g., Production, DSS, online, and Interactive). Model — A representation of a problem or subject area that uses abstraction to express concepts. Model management — Component of a DSS that consists of the DSS models and the DSS model management system. Modeling — The activity of drawing a graphical representation of a design. Modem (MOdulator/DEModulator) — A piece of hardware used to connect computers (or certain other network devices) together via a serial cable (usually a telephone line). When data is sent from your computer, the modem takes the digital data and converts it to an analog signal (the modulator portion). When you receive data into your computer via modem, the modem takes the analog signal and converts it to a digital signal that your computer will understand (the demodulator portion). Modification — A type of security threat that occurs when its content is modified in an unanticipated manner by a nonauthorized entity. Modify or Modification — Under HIPAA, this is a change adopted by the secretary, through regulation, to a standard or an implementation specification. Also see Part II, 45 CFR 160.103. Modular treated conference room (MTCR) — A second-generation design of the treated conference room (TCR), offering more flexibility in configuration and ease of assembly than the original TCR, designed to provide acoustic and RF emanations protection. Modularity — Modular packages consist of sets of equipment, people, and software tailorable for a wide range of missions. MOF — Security management, management of functions in TSF. Molecule — The smallest particle of a substance that retains all the properties of the substance and is composed of one or more atoms. Monitoring and surveillance agents (or predictive agents) — Intelligent agents that observe and report on equipment. Monitoring policy — The rules outlining the way in which information is captured and interpreted. MOP — Maintenance Operation Protocol. More Stringent — See Part II, 45 CFR 160.202. 925
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® Mosaic attack — A watermarking attack that is particularly useful for images that are distributed over the Internet. It relies on a web browsers ability to assemble mutiple images so they appear to be one image. A watermarked image can be broken into pieces but displayed as a single image by the browser. Any program trying to detect the watermark will look at each individual piece, and if they are small enough, will not be able to detect the watermark. MOU — See Memorandum of Understanding. Mouse — A hardware device used for moving a display screen cursor. MP — Multi-link Protocol. MPEG — Motion Picture Experts Group. MPR — Multi-protocol PC-based routing. MR — Medical review. MRI — See Medical Records Institute. MRRU — Maximum received reconstructed unit (PPP). MSA — Security management, management of security attributes. MSAU — Multi-station access units (Token Ring). MSP — Medicare Secondary Payer. MSU — Vulnerability assessment, misuse. MTD — Security management, management of TSF data. M-trax — An encrypted form of MP3 watermarking technology from MCY Music that protects the music industry and artists from copyright infringements. MTU — Maximum transmission unit. Multiaccess rights terminal — A terminal that may be used by more than one class of users, for example, users with different access rights to data or files. Multichannel multipoint distribution services (MMDS) — An FCC name for a service where multiple video channels are broadcast within a limited geographic area. Often called wireless cable. Multidimensional analysis (MDA) tools — Slice-and-dice techniques that allow viewing multidimensional information from different perspectives. Multifunction printer — Scans, copies, and faxes as well as prints. 926
Glossary Multilevel mode — INFOSec mode of operation wherein all the following statements are satisfied concerning the users who have direct or indirect access to the system, its peripherals, remote terminals, or remote hosts: (1) Some users do not have a valid security clearance for all the information processed in the IS; (2) all users have the proper security clearance and appropriate formal access approval for that information to which they have access; and (3) all users have a valid need-to-know only for information for which they have access. Multilevel secure — A class of systems containing information with different sensitivities that simultaneously permits access by users with different security clearances and needs-to-know, but prevents users from obtaining access to information for which they lack authorization. Multilevel security (MLS) — Concept of processing information with different classifications and categories that simultaneously permits access by users with different security clearances, but prevents users from obtaining access to information for which they lack authorization. Multinational operations — A collective term to describe military actions conducted by forces of two or more nations usually undertaken within the structure of a coalition or alliance. Multiple inheritance — The language mechanism that allows the definition of a class to include the attributes and methods defined for more than one superclass. Multiplexing — To transmit two or more signals over a single channel. Multiprocessing — A computer operating method in which two or more processors are linked and execute multiple programs simultaneously. Multiprogramming — A computer operating environment in which several programs can be placed in memory and executed concurrently. Multipurpose Internet Mail Extension (MIME) — The standard for multimedia mail contents in the Internet suite of protocols. Multitasking — Allows the user to work with more than one piece of software at a time. MUSE project — An initiative that contributes to the continuing development of intellectual property standards. The MUSE project focuses on the electronic delivery of media, embedded signaling systems, and encryption technology with the goal of creating a global standard. Must not work function — Sequences of events or commands that are prohibited because they would result in a system hazard. 927
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® Must work function — Software that if not performed or performed incorrectly, inadvertently, or out of sequence could result in a hazard or allow a hazardous condition to exist. This includes (1) software that directly exercises command and control over potentially hazardous functions or hardware; (2) software that monitors critical hardware components; and (3) software that monitors the system for possible critical conditions or states. Mutation — The process within a genetic algorithm of randomly trying combinations and evaluating the success or failure of the outcome. Mutually suspicious — Pertaining to a state that exists between interactive processes (systems or programs), each of which contains sensitive data and is assumed to be designed to extract data from the other and to protect its own data. MW — Multi-channel interface processor. MWF — Must work function. NAHDO — See National Association of Health Data Organizations. NAIC — See National Association of Insurance Commissioners. NAK (negative adknowledgment) — Response sent from a receiving device to a sending device indicating that the information received contained errors. Compare with acknowledgment. NAK attack — A penetration technique that capitalizes on an operating system’s inability to properly handle asynchronous interrupts. Name resolution — The process of mapping a name into the corresponding address. Naming attributes — Names carried by each instance of an object, such as name, or identification number. NASMD — See National Association of State Medicaid Directors. NAT — Network Address Translation. A means of hiding the IP addresses on an internal network from external view. NAT boxes allow net managers to use any IP addresses they choose on internal networks, thereby helping to ease the IP addressing crunch while hiding machines from attackers. National Association of Health Data Organizations (NAHDO) — A group that promotes the development and improvement of state and national health information systems. National Association of Insurance Commissioners (NAIC) — An association of the insurance commissioners of the states and territories. 928
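As a small illustration of the name resolution entry above (the host name is only an example, and the call requires a working DNS resolver):

    import socket

    # Forward lookup: map a host name to its corresponding IP address.
    print(socket.gethostbyname("www.example.com"))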
Glossary National Association of State Medicaid Directors (NASMD) — An association of state Medicaid directors. NASMD is affiliated with the American Public Health Human Services Association (APHSA). National Center for Health Statistics (NCHS) — A federal organization within the CDC that collects, analyzes, and distributes healthcare statistics. The NCHS maintains the ICD-n-CM codes. National Committee for Quality Assurance (NCQA) — An organization that accredits managed care plans, or Health Maintenance Organizations (HMOs). In the future, the NCQA may play a role in certifying these organizations’ compliance with the HIPAA A/S requirements. The NCQA also maintains the Health Employer Data and Information Set (HEDIS). National Committee on Vital and Health Statistics (NCVHS) — A federal advisory body within HHS that advises the secretary regarding potential changes to the HIPAA standards. National Computer Security Center (NCSC) — Originally named the DoD Computer Security Center, the NCSC is responsible for encouraging the widespread availability of trusted computer systems throughout the federal government. With the signing of NSDD-145; the NCSC is responsible for encouraging the widespread availability of trusted computer systems throughout the federal government. National Council for Prescription Drug Programs (NCPDP) — An ANSIaccredited group that maintains a number of standard formats for use by the retail pharmacy industry, some of which are included in the HIPAA mandates. Also see NCPDP Standard. National Drug Code (NDC) — A medical code set that identifies prescription drugs and some over-the-counter products, and that has been selected for use in the HIPAA transactions. National Employer ID — A system for uniquely identifying all sponsors of healthcare benefits. National Health Information Infrastructure (NHII) — This is a healthcare-specific lane on the information superhighway, as described in the National Information Infrastructure (NII) initiative. Conceptually, this includes the HIPAA A/S initiatives. National Information Assurance Partnership (NIAP) — A joint industry/government initiative, lead by NIST and NSA, to establish commercial testing laboratories where industry product providers can have security products tested to verify their performance against vendor claims. National information infrastructure — The total interconnected national telecommunications network of a country, which is made up of the private lines of major carriers, numerous carriers and interconnection 929
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® companies, and thousands of local exchanges that connect private telephone lines to the national network and the world. National Patient ID — A system for uniquely identifying all recipients of healthcare services. This is sometimes referred to as the National Individual Identifier (NII), or as the Healthcare ID. National Payer ID — A system for uniquely identifying all organizations that pay for healthcare services. Also known as Health Plan ID or Plan ID. National Provider File (NPF) — The database envisioned for use in maintaining a national provider registry. National Provider ID (NPI) — A system for uniquely identifying all providers of healthcare services, supplies, and equipment. National Provider Registry — The organization envisioned for assigning National Provider IDs. National Provider System (NPS) — The administrative system envisioned for supporting a national provider registry. National Science Foundation (NSF) — Sponsors of the NSFNET. National Science Foundation Network (NSFNET) — A collection of local, regional, and mid-level networks in the United States tied together by a high-speed backbone. NSFNET provides scientists with access to a number of supercomputers across the country. National security — The national defense or foreign relations of the United States. National security information — Information that has been determined pursuant to Executive Order 12958 as amended by Executive Order 13292, or any predecessor order, or by the Atomic Energy Act of 1954, as amended, to require protection against unauthorized disclosure and is marked to indicate its classified status. National security system — Any information system (including any telecommunications system) used or operated by an organization or by a contractor of the organization, or by other organization on behalf of the organization: (1) the function, operation, or use of which involves intelligence activities; involves cryptologic activities related to national security; involves command and control of military forces; involves equipment that is an integral part of a weapon or weapons system; or is critical to the direct fulfillment of military or intelligence missions (excluding a system that is to be used for routine administrative and business applications, for example, payroll, finance, logistics, and personnel management applications); or (2) is protected at all times by procedures established for information that have been specifically authorized under criteria estab930
Glossary lished by an executive order or an Act of Congress to be kept classified in the interest of national defense or foreign policy. National Standard Format (NSF) — Generically, this applies to any nationally standardized data format, but it is often used in a more limited way to designate the Professional EMC NSF, a 320-byte flat file record format used to submit professional claims. National strategy — Objectives of the nation for dealing in the arena of international politics, military confrontation, and national defense. National Uniform Billing Committee (NUBC) — An organization, chaired and hosted by the American Hospital Association, that maintains the UB92 hardcopy institutional billing form and the data element specifications for both the hardcopy form and the 192-byte UB-92 flat file EMC format. The NUBC has a formal consultative role under HIPAA for all transactions affecting institutional healthcare services. National Uniform Claim Committee (NUCC) — An organization, chaired and hosted by the American Medical Association, that maintains the HCFA1500 claim form and a set of data element specifications for professional claims submission via the HCFA-1500 claim form, the Professional EMC NSF, and the X12 837. The NUCC also maintains the Provider Taxonomy Codes and has a formal consultative role under HIPAA for all transactions affecting non-dental non-institutional professional healthcare services. Natural language — A language that is used in communication with computers and that closely resembles English syntax. NAUN — Nearest active upstream neighbor. NBMA — Non-broadcast multi-access. NBP — Name Binding Protocol (AppleTalk). NCHICA — See North Carolina Healthcare Information and Communications Alliance. NCHS — See National Center for Health Statistics. NCP — NetWare Core Protocol. NCP — Network Control Protocol (PPP). NCPDP — See National Council for Prescription Drug Programs. NCPDP Batch Standard — An NCPDP standard designed for use by lowvolume dispensers of pharmaceuticals, such as nursing homes. Use of Version 1.0 of this standard has been mandated under HIPAA. NCPDP Telecommunication Standard — An NCPDP standard designed for use by high-volume dispensers of pharmaceuticals, such as retail phar931
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® macies. Use of Version 5.1 of this standard has been mandated under HIPAA. NCQA — See National Committee for Quality Assurance. NCSC — National Computer Security Center; part of the U.S. Department of Defense. NCVHS — See National Committee on Vital and Health Statistics. NDC — See National Drug Code. NDIS — Network Driver Interface Specification. Need-to-know — A method of isolating information resources based on a user’s need to have access to that resource in order to perform their job but no more; for example, a personnel officer needs access to sensitive personnel records and a marketing manager needs access to sensitive marketing information but not vice versa. The terms “need-to-know” and “least privilege” express the same idea. Need-to-know is generally applied to people, while least privilege is generally applied to processes. Negative acknowledgment (NAK) — A response sent by the receiver to indicate that the previous block was unacceptable and the receiver is ready to accept a retransmission. Negligence — Failure to use such care as a reasonably prudent and careful person would use under similar circumstances; the doing of some act which a person of ordinary prudence would not have done under similar circumstances or failure to do what a person of ordinary prudence would have done under similar circumstances; conduct that falls below the norm for the protection of others against unreasonable risk of harm. It is characterized by inadvertence, thoughtlessness, inattention, recklessness, etc. NetBIOS — Network Basic I/O System. Network — An integrated, communicating aggregation of computers and peripherals linked through communications facilities. Network Access layer — The layer of the TCP/IP stack that sends the message out through the physical network onto the Internet. Network access point (NAP) — (1) A node providing entry to the highspeed Internet backbone system. (2) Another name for an Internet Exchange Point. Network address — The network portion of an IP address. For a class A network, the network address is the first byte of the IP address. For a class B network, the network address is the first 2 bytes of the IP address. For a class C network, the network address is the first 3 bytes of the IP address. In the Internet, assigned network addresses are globally unique. 932
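The classful rule in the network address entry above can be written as a short function. This is a simplified sketch: it ignores class D and E addresses and the classless (CIDR) addressing that has largely replaced classful networks:

    def network_address(ip: str) -> str:
        # Class A keeps the first byte, class B the first two, class C the
        # first three; the remaining bytes form the host portion.
        octets = ip.split(".")
        first = int(octets[0])
        keep = 1 if first < 128 else 2 if first < 192 else 3
        return ".".join(octets[:keep] + ["0"] * (4 - keep))

    print(network_address("10.1.2.3"))       # 10.0.0.0
    print(network_address("172.16.5.9"))     # 172.16.0.0
    print(network_address("192.168.1.20"))   # 192.168.1.0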
Glossary Network administrator — The person who maintains user accounts, password files, and system software on your campus network. Network Basic Input Output System (NetBIOS) — The standard interface to networks on IBM PC and compatible system. Network centric — A holistic view of interconnected information systems and resources that encourages a broader approach to security management than a component-based approach. Network element — A component of the network structure such as a local exchange, higher-order switch, or service-control processor. Network File Systems (NFS) — A distributed file system developed by Sun Microsystems that allows a set of computers to cooperatively access each other’s files in a transparent manner. Network hub — A device that connects multiple computers into a network. Network Information Center (NIC) — Originally, there was only one, located at SRI International and tasked to serve the ARPANET (and later DDN) community. Today, there are many NICs, operated by local, regional, and national networks all over the world. Such centers provided user assistance, document service, training, and much more. Network layer — The OSI layer that is responsible for routing, switching, and subnetwork access across the entire OSI environment. Think of this layer as a post office that delivers letters based on the address written on an envelope. Network manager — Provides a package of end-user functions with the responsibility for the management of a network, mainly as supported by the EMs, but it may also involve direct access to the network elements. All communication with the network is based on open and well-standardized interfaces supporting management of multivendor and multi-technology network elements. Network operator (NWO) — Operator of a public telecommunications infrastructure that permits the conveyance of signals between defined network termination points by wire, microwave, optical means, or other electromagnetic means. Network propagation system analysis — A way of determining the speed and method of stego-object (or virus) movement throughout a network. Network service provider (NSP) — Owns and maintains routing computers at NAPs and even the lines that connect the NAPs to each other. For example, MCI and AT&T. 933
Network sink — A router that drops or misroutes packets, accidentally or on purpose. Intelligent network sinks can cooperate to conceal evidence of packet dropping.
Networking — A method of linking distributed data processing activities through communications facilities.
Networks — Includes communication capability that allows one user or system to connect to another user or system and can be part of a system or a separate system. Examples of networks include local area networks or wide area networks, including public networks such as the Internet.
Neural network — A type of system developed by artificial intelligence researchers used for processing logic.
Newsgroups — Usually discussions, but not "interactively live." Newsgroups are like posting a message on a bulletin board and checking at various times to see if someone has responded to your posting.
Newspaper code — A hidden communication technique where small holes are poked just above the letters in a newspaper article that will spell out a secret message. A variant of this technique is to use invisible ink in place of holes.
NFS — Network file system.
NHII — See National Health Information Infrastructure.
NIACAP — National Information Assurance Certification and Accreditation Process.
NIAP — Joint industry/government (U.S.) National IA Partnership.
NIAP Common Criteria Evaluation and Validation Scheme — The scheme developed by NIST and NSA as part of the National Information Assurance Partnership (NIAP) establishing an organizational and technical framework to evaluate the trustworthiness of IT products.
NIAP Oversight Body — A governmental organization responsible for carrying out validation and for overseeing the day-to-day operation of the NIAP Common Criteria Evaluation and Validation Scheme.
NIC (network interface card) — The card that the network cable plugs into in the back of your computer system. The NIC connects your computer to the network. A host must have at least one NIC; however, it can have more than one. Every NIC is assigned a MAC address.
NIDS — Network intrusion detection system.
NII — National information infrastructure of a specific country.
NIPC — U.S. National Infrastructure Protection Center.
Glossary NIST — National Institute of Standards and Technology. NLPID — Network Level Protocol Identifier. NLS — Network Layer Security Protocol. NLSP — NetWare Link Service Protocol. NNI — Network to Network Interface (ATM, Frame Relay). NOC — In HIPAA, Not Otherwise Classified or Nursing Outcomes Classification. Node — A point of connection into a network. In multipoint networks, is a unit that is polled. In LANs, it is a device on the ring. In packet-switched networks, it is one of the many packet switches that form the network’s backbone. NOI — See notice of intent. Noise — Random electrical signals introduced by circuit components or natural disturbances that tend to degrade the performance of a communications channel. Non-clinical or Non-medical code sets — See administrative code sets. Non-computing security methods — Non-computing methods are security safeguards that do not use the hardware, software, and firmware of the IS. Traditional methods include physical security (controlling physical access to computing resources), personnel security, and procedural security. Non-developmental item (NDI) — Any item that is available in the commercial marketplace; any previously developed item that is in use by a Department or Agency of the United States, a state or local government, or a foreign government with which the United States has a mutual defense cooperation agreement; any item described above that requires only minor modifications to meet the requirements of the procuring agency; or any item that is currently being produced that does not meet the requirements of definitions above, solely because the item is not yet in use or is not yet available in the commercial marketplace. Non-discretionary access control — A non-discretionary authorization scheme is one under which only the recognized security authority of the security domain may assign or modify the ACI for the authorization scheme such that the authorizations of principals under the scheme are modified. Noninterference — The property that actions performed by user or process A of a system have no effect on what user or process B can observe; there is no information flow from A to B. 935
Non-intrusive monitoring — The use of non-intrusive probes or traces to assemble information, track traffic, and identify vulnerabilities.
Nonprocedural language — A programming language with fixed logic, which allows the programmer to specify processing operations without concern for processing logic.
Non-record material — Extra and duplicate copies that are only of temporary value, including shorthand notes, used carbon paper, preliminary drafts, and other material of similar nature.
Nonrecurring (ad hoc) decision — One that is made infrequently and may have different criteria for determining the best solution each time.
Non-repudiation — A security service by which evidence is maintained so that the sender and recipient of data cannot deny having participated in the communication. Referred to individually as non-repudiation of origin and non-repudiation of receipt.
Non-structured decision — A decision for which there may be several right answers and there is no precise way to get a right answer.
Nontransparent Proxy Mode Accelerator — In a Nontransparent Proxy Mode Accelerator, all the packets decrypted by the SSL accelerator carry the source address of that SSL accelerator, and the client source addresses do not reach the server at all. From the server perspective, the request has come from the SSL accelerator.
Normalization — A process of assuring that a relational database structure can be implemented as a series of two-dimensional relations.
North Carolina Healthcare Information and Communications Alliance (NCHICA) — An organization that promotes the advancement and integration of information technology into the healthcare industry.
NOS — Network operating system.
Notebook computer — A highly portable, battery-powered microcomputer with a display screen, carried easily in a briefcase, and used away from a user's workplace.
Notice — A privacy principle that requires reasonable disclosure to a consumer of an entity's personally identifiable information (PII) collection and use practices. This disclosure information is typically conveyed in a privacy notice or privacy policy. Microsoft: http://www.microsoft.com/security/glossary/.
Notice of Intent (NOI) — A document that describes a subject area for which the federal government is considering developing regulations. It may describe the presumably relevant considerations and invite comments from interested parties. These comments can then be used in developing an NPRM or a final regulation.
Notice of Proposed Rulemaking (NPRM) — A document that describes and explains regulations that the federal government proposes to adopt at some future date, and invites interested parties to submit comments related to them. These comments can then be used in developing a final regulation.
Notional architecture — An alternative architecture composed of current systems, as well as new procurements proposed for some future date.
NPF — See National Provider File.
NPI — See National Provider ID.
NPRM — Notice of Proposed Rulemaking; the publication, in the Federal Register, of proposed regulations for public comment. See Notice of Proposed Rulemaking.
NPS — See National Provider System.
NRC — National Research Council, the quasi-governmental body that conducted a study on the state of security in health care: For the Record: Protecting Electronic Health Information (Washington, D.C.: National Academy Press, 1997).
NRO — Communication non-repudiation of origin.
NRR — Communication non-repudiation of receipt.
NSF — See National Science Foundation; National Standard Format.
NT-1 — Network Termination 1.
NTN — Network Terminal Number (X.25).
NTP — Network Time Protocol.
NTSC/PAL — National Television System Committee: The first color TV broadcast system was implemented in the United States in 1953. This was based on the NTSC (National Television System Committee) standard. NTSC is used by many countries on the American continent as well as many Asian countries, including Japan. NTSC runs on 525 lines/frame. The PAL (Phase Alternating Line) standard was introduced in the early 1960s and implemented in most countries except for France. The PAL standard utilizes a wider channel bandwidth than NTSC, which allows for better picture quality. PAL runs on 625 lines/frame.
NUBC — See National Uniform Billing Committee.
NUBC EDI TAG — The NUBC EDI Technical Advisory Group, which coordinates issues affecting both the NUBC and the X12 standards.
NUCC — See National Uniform Claim Committee.
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® Nucleus — The core of the atom that is made up of neutrons and protons. Null — A symbol that means nothing that is included within a message designed to confuse unintended recipients. Null option — The option to take no action. Numeric test — An input control method to verify that a field of data contains only numeric digits. NVA — Network vulnerability assessment. NVE — Network-visible entity. NVRAM — Nonvolatile random access memory. Nyquist theorem — Theorem that dictates that sampling should occur at a rate that is twice the highest frequency being sampled. OBJ — (1) Protection Profile evaluation, security objectives. (2) Security Target evaluation, security objectives. Object — (1) An entity that can have many properties (either declarative, procedural, or both) associated with it. (2) An instance of a class. Object identity — In the object-oriented paradigm, each object has a unique identifier independent of the values of other properties. Object program — A program that has been translated from a higher-level source code into machine language. Object Request Broker (ORB) — A software mechanism by which objects make and receive requests and responses. Object reuse — Reassignment and re-use of a storage medium containing one or more objects after ensuring no residual data remains on the storage medium. Objective information — Quantifiably describes something that is known. Object-oriented — Any method, language, or system that supports object identity, classification, and encapsulation and specialization. C++, Smalltalk, Objective-C, and Eiffel are examples of object-oriented implementation languages. Object-oriented analysis (OOA) — The specification of requirements in terms of objects with identity that encapsulate properties and operations, messaging, inheritance, polymorphism, and binding. Object-oriented approach — Combines information and procedures into a single view. 938
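The Nyquist theorem entry above can be made concrete with a small worked calculation. The 4 kHz voice-channel figure below is a common textbook illustration, not a value taken from this glossary.

```python
# Worked example of the Nyquist theorem (illustrative numbers):
# to capture a signal whose highest frequency component is f_max,
# sample at a rate of at least 2 * f_max.
f_max_hz = 4_000            # e.g., the upper end of a voice channel
nyquist_rate_hz = 2 * f_max_hz
print(nyquist_rate_hz)      # 8000 samples per second

# At 8 bits per sample this yields the familiar 64 kbps PCM voice channel.
print(nyquist_rate_hz * 8)  # 64000 bits per second
```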
Glossary Object-oriented database — Works with traditional database information and also complex data types such as diagrams, schematic drawings, videos, and sound and text documents. Object-oriented database management system (OODBMS) — A database that stores, retrieves, and updates objects using transaction control, queries, locking, and versioning. Object-oriented design (OOD) — The development activity that specifies the implementation of a system using the conceptual model defined during the analysis phase. Object-oriented language — A language that supports objects, method resolution, specialization, encapsulation, polymorphism, and inheritance. Object-oriented programming language — A programming language used to develop object-oriented systems. The language groups together data and instructions into manipulative objects. Oblivious scheme — See blind scheme. Observe, Orient, Decide, Act (OODA) — See OODA loop. OC — Optical circuit. OCR — See Office for Civil Rights. ODI — Open datalink interface. Office automation — The application of computer and related technologies to office procedure. Office for Civil Rights (OCR) — The HHS entity responsible for enforcing the HIPAA privacy rules. Office of Management and Budget (OMB) — A U.S. Government agency that has a major role in reviewing proposed federal regulations. Official information — That information or material that is owned by, produced for or by, or under the control of the U.S. Government. Offline authentication certificate — A particular form of authentication information binding an entity to a cryptographic key, certified by a trusted authority, which may be used for authentication without directly interacting with the authority. Offsite storage — A storage facility located away from the building, housing the primary information processing facility (IPF), and used for storage of computer media such as offline backup data storage files. Ohm’s law — This law applies to any resistive circuit with one of the values unknown and will allow the discovery of the unknown value. 939
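The Ohm's law entry can be illustrated with a minimal sketch. The helper function and the sample circuit values below are assumptions chosen for demonstration only.

```python
# Ohm's law sketch: given any two of voltage (V), current (I), and
# resistance (R), the third can be derived from V = I * R.
def ohms_law(voltage=None, current=None, resistance=None):
    if voltage is None:
        return current * resistance
    if current is None:
        return voltage / resistance
    if resistance is None:
        return voltage / current
    raise ValueError("Leave exactly one value unknown")

print(ohms_law(current=0.5, resistance=220))  # 110.0 volts
print(ohms_law(voltage=12, resistance=4))     # 3.0 amperes
```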
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® OIG — Office of the Inspector General. OLE — Microsoft’s Object Linking and Embedding technology designed to let applications share functionality through live data exchange and embedded data. Embedded objects are packaged statically within the source application, called the “client;” linked objects launch the “server” applications when instructed by the client application. Linking is the capability to call a program; embedding places data in a foreign program. OMB — See Office of Management and Budget. One-time pad — A system that randomly generates a private key, and is used only once to encrypt a message that is then decrypted by the receiver using a matching one-time pad and key. One-time pads have the advantage that there is theoretically no way to “break the code” by analyzing a succession of messages. Online analytical processing (OLAP) — The manipulation of information to support decision making. Online authentication certificate — A particular form of authentication information, certified by a trusted authority, which may be used for authentication following direct interaction with the authority. Online processing — Often called interactive processing. An operation in which the user works at a terminal or other device that is directly attached or linked to the computer. Online service — A proprietary, commercial network that provides a variety of information and other services to its subscribers. Commercial online services typically provide their own content, forums (e.g., chat rooms, bulletin boards), e-mail capability, and information available only to subscribers. Online system — Applications that allow direct interaction of the user with the computer (CPU) via a CRT, thus enabling the user to receive back an immediate response to data entered (i.e., an airline reservation system). Only one root node can be used at the beginning of the hierarchical structure. Online training — Runs over the Internet or off a CD-ROM. Online transaction processing (OLTP) — The gathering of input information, processing that information, and updating. Onward transfer — The transfer of personally identifiable information (PII) by the recipient of the original data to a second recipient. For example, the transfer of PII from an entity in Germany to an entity in the United States constitutes onward transfer of that data. 940
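The one-time pad entry describes a scheme that is easy to sketch with a bitwise exclusive-OR. The code below is a toy illustration only (the sample message is ours); a real pad must be truly random, as long as the message, kept secret, and never reused.

```python
import secrets

# Minimal one-time pad sketch: XOR the message with a random pad of the
# same length; XORing the ciphertext with the same pad recovers the message.
def xor_bytes(data: bytes, pad: bytes) -> bytes:
    return bytes(d ^ p for d, p in zip(data, pad))

message = b"ATTACK AT DAWN"
pad = secrets.token_bytes(len(message))   # one-time key, same length as message

ciphertext = xor_bytes(message, pad)      # encrypt
recovered = xor_bytes(ciphertext, pad)    # decrypt with the matching pad
assert recovered == message
```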
Glossary OODA loop — The Observe, Orient, Decide, Act (OODA) cycle (or Boyd Cycle) first introduced by Colonel John Boyd, USAF. Refers to steps in the decision-making process. Open code — A form of hidden communication which uses an unencrypted message. Jargon code is an example of open code. Open Network Computing (ONC) — A distributed applications architecture promoted and controlled by a consortium led by Sun Microsystems. Open network/system — A network or systems in which, at the extremes, unknown parties, possibly in a different state or national jurisdictions will exchange/trade data. To do this, will require an overarching framework which will engender trust and certainty. A user of online services might go through a single authentication process with a trusted third party, receive certification of their public key, and then be able to enter into electronic transactions/data exchanges with merchants, governments, banks etc., using the certificate so provided for multiple purposes. Open system — A system whose architecture permits components developed by independent organizations or vendors to be combined. Open Systems Interconnection (OSI) — An international standardization program to facilitate communications among computers from different manufacturers. OpenMG — A copyright protection technology from Sony that allows recording and playback of digital music data on a personal computer and other supported devices but prevents unauthorized distribution. Operand — The portion of a computer instruction that references the memory address of an item to be processed. Operating environment — The total environment in which an information system operates. Includes the physical facility and controls, procedural and administrative controls, personnel controls (e.g., clearance level of the least cleared user). Operating system — A software program that manages the basic operations of a computer system. It calculates how the computer main memory will be apportioned, how and in what order it will handle tasks assigned to it, how it will manage the flow of information into and out of the main processor, how it will get material to the printer for printing and to the screen for viewing, how it will receive information from the keyboard, etc. Operating system software — System software that controls the application software and manages how the hardware devices work together. Operation code — The portion of the computer instruction that identifies the specific processing operation to be performed. 941
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® Operational controls — The security controls (i.e., safeguards or countermeasures) for an information system that primarily are implemented and executed by people (as opposed to systems). Operational database — A database that supports online transaction processing (OLTP). Operational error — An error that results from the incorrect use of a product, component, or system. Operational management — Manages and directs the day-to-day operations and implementations of the goals and strategies. Operational profile — The set of operations that the software can execute along with the probability with which they will occur. Operational Security (OPSec) — Process denying information to potential adversaries about capabilities and intentions by identifying, controlling, and protecting unclassified generic activities. Operational security information — Transient information related to a single operation or set of operations within the context of an operational association, for example, a user session. Operational security information represents the current security context of the operations and may be passed as parameters to the operational primitives or retrieved from the operations environment as defaults. Operational status — Either (a) operational system is currently in operation, (b) under development system is currently under design, development, or implementation, or (c) undergoing a major modification system is currently undergoing a major conversion or transition. Operationally object-oriented — The data model includes generic operators to deal with complex objects in their entirety. Operations security — The implementation of standardized operational security procedures that define the nature and frequency of the interaction between users, systems, and system resources, the purpose of which is to (1) maintain a system in a known secure state at all times, and (2) prevent accidental or intentional theft, destruction, alteration, or sabotage of system resources. Operator overloading — See polymorphism. OPSec — Operations security. Optical character recognition (OCR) — An input method in which handwritten, typewritten, or printed text can be read by photosensitive devices for input to a computer. Optical disk — A disk that is written to or read from by optical means. 942
Glossary Optical fiber — A form of transmission medium that uses light to encode signals and has the highest transmission rate of any medium. Optical mark recognition (OMR) — Detects the presence of or absence of a mark in a predetermined place (popular for multiple choice exams). Optical modulation — The process of varying some characteristics of light pulses over a fiber-optic cable in order to pass information from one point to another. Optical storage — A medium requiring lasers to permanently alter the physical media to create a permanent record. The storage also requires lasers to read stored information from this medium. Opt-in — An option that gives you complete control over the collection and dissemination of your personal information. A site that provides this option is stating that it will not gather or track information about you unless you knowingly provide such information and consent to the site. Opt-out — An option that gives you the choice to prevent personally identifiable information from being used by a particular Web site or shared with third parties. Orange Book — Common name used to refer to the DoD Trusted Computing System Evaluation Criteria (TCSEC), DoD 5200.28-STD. Orange Forces — Forces of the United States operating in an exercise in emulation of the opposing force. Organizational security policy — Set of laws, rules, and practices that regulates how an organization manages, protects, and distributes sensitive information. Organized Health Care Arrangement — See Part II, 45 CFR 164.501. Original Classification — An initial determination that information requires protection against unauthorized disclosure in the interest of national security, and a designation of the level of classification. Original Classifier — An authorized individual in the executive branch who initially determines that particular information requires a specific degree of protection against unauthorized disclosure in the interest of national security and applies the classification designation “Top Secret,” “Secret,” or “Confidential.” OSI — Open Systems Interconnection; a seven-layer model from the ISO that defines and standardizes protocols for communicating between systems, networks and devices. OSI 7-layer model — The Open System Interconnection 7-layer model is an ISO standard for worldwide communications that defines a framework 943
AU8231_A003.fm Page 944 Thursday, October 19, 2006 7:10 AM
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® for implementing protocols in seven layers. Control is passed from one layer to the next, starting at the application layer in one station, and proceeding to the bottom layer, over the channel to the next station and back up the hierarchy. OSI Reference Model — The seven-layer architecture designed by OSI for open data communications network. OSPF — Open Shortest Path First. OUI — Organizationally unique identifier. Out of band — A LAN term that refers to the capacity to deliver information via modem or other asynchronous connection. Out-of-band signaling refers to signaling that is separated from the channel carrying the information. Signal and control information does not interfere with the data transmission. Output controls — Techniques and methods for verifying that the results of processing conform to expectations and are communicated only to authorized users. Output device — A tool used to see, hear, or otherwise accept the results of information-processing requests. Outsourcing — The delegation of specific work to a third party for a specified length of time, cost, and level of service. Overlapped processing — The simultaneous execution of input, processing, and output functions by a computer system. Overlaps — Areas in which too much capability exists. Unnecessary redundancy of coverage in a given area or function. Overreach interference — Caused by a signal feeding past a repeater (or receive antenna) to the receiving antenna at the next station in the route. Overseas Security Policy Board (OSPB) — The Overseas Security Policy Board (OSPB) is an interagency group of security professionals from the foreign affairs and intelligence communities who meet regularly to formulate security policy for U.S. missions abroad. The OSPB is chaired by the Director, Diplomatic Security Service. Overwriting — The obliteration of recorded data by recording different data on the same surface. P2P — Peer-to-peer infrastructure. Often referred to simply as peer-topeer, or abbreviated P2P, a type of network in which each workstation has equivalent capabilities and responsibilities. This differs from client/server architectures, in which some computers are dedicated to serving the 944
AU8231_A003.fm Page 945 Thursday, October 19, 2006 7:10 AM
Glossary others. Peer-to-peer networks are generally simpler, but they usually do not offer the same performance under heavy loads. P3P (Platform for Privacy Preferences Project) — A n o p e n p r i v a c y specification developed and administered by the World Wide Web Consortium (W3C) that, when implemented, enables people to make informed decisions about how they want to share personal information with Web sites. PABX — Private Automatic Branch Exchange. Telephone switch for use inside a corporation. PABX is the preferred term in Europe, while PBX is used in the United States. Packet — Logical grouping of information that includes a header containing control information and (usually) user data. Packets are most often used to refer to network layer units of data. The terms “datagram,” “frame,” “message,” and “segment” are also used to describe logical information groupings at various layers of the OSI Reference Model and in various technology circles. Packet filtering — Controlling access to a network analyzing the attributes of the incoming and outgoing packets and either letting them pass, or denying them based on a list of rules. Packet Internet Grouper (PING) — A program used to test reachability of destinations by sending them an ICMP echo request and waiting for a reply. The term is used as a verb: “Ping host X to see if it is up.”. Packet switch — WAN device that routes packets along the most efficient path and allows a communications channel to be shared by multiple connections. Formerly called an interface message processor (IMP). Packet switching — A switching procedure that breaks up messages into fixed-length units (called packets) at the message source. These units may travel along different routes before reaching their intended destination. PAD — Packet assembler/disassembler. Padding — A technique used to fill a field, record, or block with default information (e.g., blanks or zeros). PAG — See Policy Advisory Group. Page — A basic unit of storage in main memory. Page fault — A program interruption that occurs when a page that is referred to is not in main memory and must be read from external storage. Paging — A method of dividing a program into parts called pages and introducing a given page into memory as the processing on the page is required for program execution. 945
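The packet filtering entry above can be sketched as a first-match rule list. The rule format, addresses, and ports below are hypothetical and chosen only for illustration.

```python
# Packet-filtering sketch (hypothetical rule format): each rule names a
# source prefix, a destination port (None matches any port), and an action;
# the first matching rule wins.
from ipaddress import ip_address, ip_network

RULES = [
    ("10.0.0.0/8", 22, "deny"),      # block SSH from the internal range
    ("0.0.0.0/0", 80, "permit"),     # allow HTTP from anywhere
    ("0.0.0.0/0", None, "deny"),     # default deny
]

def filter_packet(src_ip: str, dst_port: int) -> str:
    for prefix, port, action in RULES:
        if ip_address(src_ip) in ip_network(prefix) and port in (None, dst_port):
            return action
    return "deny"

print(filter_packet("10.1.2.3", 22))      # deny
print(filter_packet("203.0.113.9", 80))   # permit
```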
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® Palm — A type of PDA that runs on the Palm Operating System (Palm OS). Palm Operating System — The operating system for Palm and Handspring PDAs. PAP — (1) Password Authentication Protocol. (2) Printer Access Protocol (AppleTalk). PAP (Password Authentication Protocol) — Authentication protocol that allows PPP peers to authenticate one another. The remote router attempting to connect to the local router is required to send an authentication request. Unlike CHAP, PAP passes the password and hostname or username in the clear (unencrypted). PAP does not itself prevent unauthorized access, but merely identifies the remote end. The router or access server then determines if that user is allowed access. PAP is supported only on PPP lines. Compare with CHAP. Parallel connector — Has 25 pins that fit into the corresponding holes in the port. Most printers use parallel connectors. Parallel conversion — The concurrent use of new system by its users. Parallel port — The computer’s printer port, which in a pinch, allows user access to notebooks and computers that cannot be opened. Parent — A unit of data in a 1:n relationship with another unit of data called a child, where the parent can exist independently but the child cannot. Parity — A bit or series of bits appended to a character or block of characters to ensure that the information received is the same as the information that was sent. Parity is used for error detection. Parity bit — A bit attached to a byte that is used to check the accuracy of data storage. Partition — A memory area assigned to a computer program during its execution. Partitioning — Isolating IA-critical, IA-related, and non-IA-related functions and entities to prevent accidental or intentional interference, compromise, and corruption. Partitioning can be implemented in hardware or software. Software partitioning can be logical or physical. Partitioning is often referred to as separability in the security community. Pascal — A computer programming language designed especially for writing structured programs. This language is based on the use of a minimum set of logical control structures. Passive response — A response option in intrusion detection in which the system simply reports and records the problem detected, relying on the user to take subsequent action. 946
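The parity and parity bit entries describe a simple error-detection scheme. A minimal even-parity sketch follows; the sample bit patterns are ours.

```python
# Even-parity sketch (illustrative): append one bit so the total number of
# 1 bits, data plus parity, is even. The receiver recomputes the parity and
# flags a mismatch as a transmission or storage error.
def even_parity_bit(value: int) -> int:
    return bin(value).count("1") % 2

data = 0b1011011                    # seven data bits containing five 1s
parity = even_parity_bit(data)      # 1 -> the total count of 1 bits becomes even
codeword = (data << 1) | parity

# A single flipped bit is detectable because the overall parity becomes odd.
corrupted = codeword ^ 0b0000100
print(even_parity_bit(codeword))    # 0 (consistent)
print(even_parity_bit(corrupted))   # 1 (error detected)
```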
Glossary Passive system — A system related indirectly to other systems. Passive systems may or may not have a physical connection to other systems, and their logical connection is controlled tightly. Passive wiretapping — The monitoring or recording of data while it is being transmitted over a communications link. Password — A word or string of characters that authenticates a user, a specific resource, or an access type. Password cracker — A password cracker is an application program that is used to identify an unknown or forgotten password to a computer or network resources. It can also be used to help a person obtain unauthorized access to a resource. Password entropy — Stated in bits, the measure of randomness in a password. Password sniffing — Eavesdropping on a communications line to capture passwords that are being transmitted unencrypted. Patchwork — An encoding algorithm that takes random pairs of pixels and brightens the brighter pixel and dulls the duller pixel and encodes one bit of information in the contrast change. This algorithm creates a unique change, and that change indicates the absence or presence of a signature. Patent — Exclusive right granted to an inventor to produce, sell, and distribute the invention for a specified number of years. Pattern classification — The step of ASR in which the system matches the user’s spoken phonemes to a phoneme sequence stored in an acoustic model database. Payer — In healthcare, an entity that assumes the risk of paying for medical treatments. This can be an uninsured patient, a self-insured employer, a health plan, or an HMO. PAYERID — HCFA’s term for their pre-HIPAA National Payer ID initiative. Payload — The amount of information that can be stored in the cover media. Typically, the greater the payload, the greater the risk of detection. Payment — See Part II, 45 CFR 164.501. PBX — Private branch exchange. PCM (pulse code modulation) — A digital scheme for transmitting analog data. PCS — See ICD. 947
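The password entropy entry states the measure in bits. Assuming each character is drawn independently and uniformly from its character set, entropy can be estimated as length times log2 of the character-set size; the sample lengths and set sizes below are ours.

```python
import math

# Password entropy sketch: entropy in bits = L * log2(N), assuming each of
# the L characters is chosen uniformly at random from a set of N characters.
def password_entropy_bits(length: int, charset_size: int) -> float:
    return length * math.log2(charset_size)

print(round(password_entropy_bits(8, 26), 1))   # ~37.6 bits (8 lowercase letters)
print(round(password_entropy_bits(12, 94), 1))  # ~78.7 bits (12 printable ASCII characters)
```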
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® PDA — Personal digital assistant. A hand-held computer that serves as an organizer for personal information. PDN — Public data network. PDU — Protocol data unit. Peer-entity authentication — The corroboration that a peer entity in an association is the one claimed. Peer-to-peer network — A network in which a small number of computers share hardware (such as a printer), software, and information. PEM — Privacy Enhanced Mail; an e-mail encryption protocol. Penetration — A successful unauthorized access to a computer system. Penetration profile — A delineation of the activities required to effect penetration. Penetration signature — The description of a situation or set of conditions in which a penetration might occur. Penetration testing — Security testing in which the evaluators attempt to circumvent the security features of a system based on their understanding of the system design and implementation. The evaluators may be assumed to use all system design and implementation documentation, which may include listings of system source code, manuals, and circuit diagrams. The evaluators work under no constraints other than those applied to ordinary users or implementers of untrusted portions of the component. Perceptual masking — A condition where the perception of one element interferes with the perception another. Perfect forward secrecy — Perfect forward secrecy means that even if a private key is known to an attacker, the attacker cannot decrypt previously sent messages. Performance — The ability to track service and resource usage levels and to provide feedback on the responsiveness and reliability of the network. Performance-based — A method for designing learning objectives based on behavioral outcomes, rather than on content that provides benchmarks for evaluating learning effectiveness. Period — The time it takes a waveform to complete one complete cycle. Permission marketing — When a person has given a merchant permission to send special offers. Persistent object — An object that can survive the process that created it. A persistent object exists until it is explicitly deleted. 948
Glossary Personal agent or user agent — An intelligent agent that takes action on the user’s behalf. Personal computer — A commonly used term that refers to a microcomputer. Often called a PC. Personal digital assistant (PDA) — A small hand-held computer that helps surf the Web and perform simple tasks such as note taking, calendaring, appointment scheduling, and maintaining an address book. Personal finance software — Application software that helps a user maintain a checkbook, prepare a budget, track investments, monitor credit card balances, and pay bills electronically. Personal information management (PIM) software — Helps create and maintain (1) lists, (2) appointments and calendars, and (3) points of contact. Personal productivity software — Helps the user perform personal tasks — writing a memo, creating a graph, and creating a slide presentation — that can usually be done even if the user does not own a computer. Personalization — When a Web site can know enough about the user’s likes and dislikes that it can fashion offers that are more likely to appeal to the user. Personally identifiable information — Information that can be traced back to an individual user, e.g. your name, postal address, or e-mail address. Personal user preferences tracked by a Web site via a “cookie” (see definition above) is also considered personally identifiable when linked to other personally identifiable information provided by you online. Pest program — Collective term for programs with deleterious and generally unanticipated side effects; for example, Trojan horses, logic bombs, letter bombs, viruses, and malicious worms. PGP — Pretty Good Privacy. Public key cryptography software based on the RSA cryptographic method. Phased conversion — The system installation procedure that involves a step-by-step approach for the incremental installation of one portion of a new system at a time. PHB — Pharmacy Benefits Manager. PHI — See Protected Health Information. PHP — In Common Criteria, protection of the TSF; TSF physical protection. PHS — Public Health Service. 949
Physical layer — The OSI layer that provides the means to activate and use physical connections for bit transmission. In plain terms, the physical layer provides the procedures for transferring a single bit across a physical medium, such as cables.
Physical organization — The packaging of data into fields, records, files, and other structures to make them accessible to a computer system.
Physical security — The measures used to provide physical protection of resources against deliberate and accidental threats.
PictureMarc — A DigiMarc application that embeds an imperceptible digital watermark within an image, allowing copyright communication, author recognition, and electronic commerce. It is currently bundled with Adobe Photoshop.
PIDAS — Perimeter Intrusion Detection Assessment System.
Piggyback entry — Unauthorized access to a computer system that is gained through another user's legitimate connection.
Ping — Packet Internet groper.
Piracy (or Simple Piracy) — The unauthorized duplication of an original recording for commercial gain without the consent of the rightful owner; or the packaging of pirate copies that is different from the original. Pirate copies are often compilations, such as the "greatest hits" of a specific artist, or a genre collection, such as dance tracks.
Pirated software — The unauthorized use, duplication, distribution, or sale of copyrighted software.
Pivot table — Enables users to group and summarize information.
Pixel — Short for picture element, a pixel is a single point in a graphic image. It is the smallest thing that can be drawn on a computer screen. All computer graphics are made up of a grid of pixels. When these pixels are painted onto the screen, they form an image.
PKI — Public key infrastructure.
PL or P. L. — Public Law, as in PL 104-191 (HIPAA).
Plain old telephone system (POTS) — What we consider to be the "normal" phone system used with modems. Does not include leased lines or digital lines.
Plaintext — A message before it has been encrypted or after it has been decrypted using a specific algorithm and key; also referred to as cleartext. (Contrast with ciphertext.)
Plan Administration Functions — See Part II, 45 CFR 164.504.
Glossary Plan ID — See National Payer ID. Plan of action and milestones — A document that identifies tasks needing to be accomplished. It details resources required to accomplish the elements of the plan, any milestones in meeting the tasks, and scheduled completion dates for the milestones. Plan sponsor — An entity that sponsors a health plan. This can be an employer, a union, or some other entity. Also see Part II, 45 CFR 164.501. Planning phase — Involves determining a solid plan for developing information system. Platform — Foundation upon which processes and systems are built and which can include hardware, software, firmware, etc. Platform domain — A security domain encompassing the operating system, the entities and operations it supports, and its security policy. Plotter — A graphics output device in which the computer drives a pen that draws on paper. PLP — Packet Level Protocol (X.25). PMD — Physical medium dependent. PNA adapter card — An expansion card that is put into the user’s computer to act as a doorway for information flowing in and out. Pocket PC — A type of PDA that runs on Pocket PC OS that used to be called Windows CE. Pocket PC OS (or Windows CE) — The operating system for the Pocket PC PDA. Pointer — The address of a record (or other data grouping) contained in another record so that a program may access the former record when it has retrieved the latter record. The address can be absolute, relative, or symbolic, and hence the pointer is referred to as absolute, relative, or symbolic. Pointing stick — Small rubber-like pointing device that causes the pointer to move on the screen as the user applies directional pressure. Popular on notebooks. Point-of-presence (POP) — A site where there exists a collection of telecommunications equipment, usually digital leased lines and multi-protocol routers. Point-of-sale (POS) — Applications in which purchase transactions are captured in machine-readable form at the point of purchase. 951
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® Point-to-Point — A network configuration interconnecting only two points. The connection can be dedicated or switched. Point-to-Point Protocol (PPP) — The successor to SLIP, PPP provides router-to-router and host-to-network connections over both synchronous and asynchronous circuits. Polarization — The direction of the electric field, the same as the physical attitude of the antenna (e.g., a vertical antenna transmits a vertically polarized wave). They receive and transmit antennas need to possess the same polarization. Policy — See security policy. Policy Advisory Group (PAG) — A generic name for many work groups at WEDI and elsewhere. Polling — A procedure by which a computer controller unit asks terminals and other peripheral devices in a serial fashion if they have any messages to send. Polymorphism — A request-handling mechanism that selects a method based on the type of target object. This allows the specification of one request that can result in invocation of different methods depending on the type of the target object. Most object-oriented languages support the selection of the appropriate method based on the class of the object (classical polymorphism). A few languages or systems support characteristics of the object, including values and user-defined defaults (generalized polymorphism). Polymorphism — Having many forms. POP — (1) Point-of-presence. (2) Post Office Protocol. Pop-up ad — An ad that appears in its own window when a user opens or closes a Web page. Pop-up blockers — A type of privacy enhancing technology. Port — (1) An outlet, usually on the exterior of a computer system, that enables peripheral devices to be connected and interfaced with the computer. (2) A numeric value used by the TCP/IP protocol suite that identifies services and applications. For example, HTTP Internet traffic uses port 80. Portability — The ability to implement and execute software in one type of computing space and have it execute in a different computing space with little or no changes. Portable document format (PDF) — The standard electronic distribution file format for heavily formatted documents such as a presentation resume because it retains the original document formatting. 952
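The polymorphism entries above describe one request invoking different methods depending on the class of the target object. A short sketch follows; the classes and values are illustrative only.

```python
# Classical polymorphism sketch: the same request ("area") resolves to a
# different method depending on the class of the target object.
class Circle:
    def __init__(self, radius):
        self.radius = radius

    def area(self):
        return 3.14159 * self.radius ** 2

class Square:
    def __init__(self, side):
        self.side = side

    def area(self):
        return self.side ** 2

for shape in (Circle(2), Square(3)):
    print(type(shape).__name__, shape.area())
```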
Glossary Ports — An interface point between the CPU and a peripheral device. POS — Place of service, or point of service. Postpay billing — Billing arrangement between the customer and operator/SvP in which the customer periodically receives a bill for service usage in the past period. Postscript — A language used to describe the printing of images and text and typically used with laser printing capability. Word processor or desktop publishing applications generate postscript code for higher-quality laser products. POTS — Plain old telephone service. Power (P) — The measure of the rate at which work can be accomplished. PP — Protection profile. PPC — Security Target evaluation, PP claims. PPO — Preferred Provider Organization. PPP — Point-to-Point Protocol. PPS — Prospective Payment System. PRA — The Paperwork Reduction Act. Precision engagement — The ability of joint forces to locate, surveil, discern, and track objectives or targets; select, organize, and use the correct systems; generate desired effects; assess results; and reengage with decisive speed and overwhelming operational tempo as required, throughout the full range of military operations. Preferred Products List (PPL) — A list of commercially produced equipments that meet TEMPEST and other requirements prescribed by the National Security Agency. This list is included in the NSA Information Systems Security Products and Services Catalogue, issued quarterly and available through the Government Printing Office. Prepay billing — Billing arrangement between the customer and operator/SvP in which the customer deposits an amount of money in advance, which is subsequently used to pay for service usage. Preprocessors — Software tools that perform preliminary work on a draft computer program before it is completely tested on the computer. Presentation layer — The layer of the ISO Reference Model responsible for formatting and converting data to meet the requirements of the particular system being utilized. The OSI layer that determines how application information is represented (i.e., encoded) while in transit between two end systems. 953
Presentation resume — A format-sensitive document created in a word processor to outline job qualifications in one to two printed pages.
Presentation software — Helps create and edit information that will appear in electronic slides.
Pretty Good Privacy (PGP) — PGP provides confidentiality and authentication services for electronic mail and file storage applications. Developed by Phil Zimmermann and distributed for free on the Internet. Widely used by the Internet technical community.
PRG — Procedure-related group.
PRI — Primary rate interface (ISDN).
Pricer, or Repricer — A person, an organization, or a software package that reviews procedures, diagnoses, fee schedules, and other data and determines the eligible amount for a given healthcare service or supply. Additional criteria can then be applied to determine the actual allowance, or payment, amount.
Primary key — An attribute that contains values that uniquely identify the record in which the key exists.
Primary Mission Area — Synonymous with Primary Warfare Mission Area (PWMA). A warfare mission area concerned with a specific, major phase or portion of naval warfare.
Primary rate interface (PRI) — Provides the same throughput as a T-1, 1.544 Mbps, and has 23 B or bearer channels, which run at 64 kbps, and a D or data channel, which also runs at 64 kbps.
Primary service — An independent category of service such as operating system services, communication services, and data management services. Each primary service provides a discrete set of functionality. Each primary service inherently includes generic qualities such as usability, manageability, and security. Security services are therefore not primary services but are invoked as part of the provision of primary services by the primary service provider.
Principal — An entity whose identity can be authenticated.
Principle of Least Privilege — A security procedure under which users are granted only the minimum access authorization they need to perform required tasks.
Print suppress — The elimination of the printing of characters to preserve their secrecy — for example, the characters of a password as they are keyed by a user at a terminal or station on the network.
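The primary rate interface figures can be checked with simple arithmetic. The 8 kbps framing overhead used below is the standard T-1 framing figure, which is an assumption on our part rather than a value stated in the entry.

```python
# PRI throughput arithmetic (North American T-1 framing):
b_channels = 23 * 64_000        # 1,472,000 bps of bearer capacity
d_channel = 64_000              # signaling channel
framing_overhead = 8_000        # T-1 framing bits
print(b_channels + d_channel + framing_overhead)  # 1,544,000 bps = 1.544 Mbps
```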
Privacy — (1) The prevention of unauthorized access and manipulation of data. (2) The right of individuals to control or influence what information related to them may be collected and stored and by whom and to whom that information may be disclosed.
Privacy Act of 1974 — The federal law that allows individuals to know what information about them is on file and how it is used by all government agencies and their contractors. The Electronic Communications Privacy Act of 1986 is an extension of the Privacy Act.
Privacy-enhanced mail (PEM) — Internet e-mail standard that provides confidentiality, authentication, and message integrity using various encryption methods. Not widely deployed in the Internet.
Privacy impact assessment (PIA) — An analysis of how information is handled (1) to ensure handling conforms to applicable legal, regulatory, and policy requirements regarding privacy; (2) to determine the risks and effects of collecting, maintaining, and disseminating information in identifiable form in an electronic information system; and (3) to examine and evaluate protections and alternative processes for handling information to mitigate potential privacy risks.
Privacy invasive technologies (PITs) — Describes the many technologies that intrude into privacy. Among the host of examples are data-trail generation through the denial of anonymity, data-trail intensification (e.g., identified phones, stored-value cards, and intelligent transportation systems), data warehousing and data mining, stored biometrics, and imposed biometrics.
Privacy policy — An organization's requirements for complying with privacy regulations and directives.
Privacy policy in standardized machine-readable format — A statement about site privacy practices written in a standard computer language (not English text) that can be read automatically by a Web browser.
Privacy protection — The establishment of appropriate administrative, technical, and physical safeguards to protect the security and confidentiality of data records against anticipated threats or hazards that could result in substantial harm, embarrassment, inconvenience, or unfairness to any individual about whom such information is maintained.
Privacy seal — An online seal awarded by one of multiple privacy certification vendors to Web sites that agree to post their privacy practices openly via privacy statements, as well as adhere to enforcement procedures that ensure that their privacy promises are met. When you click on the privacy seal, typically you are taken directly to the privacy statement of the certified Web site.
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® Privacy statement — A page or pages on a Web site that lay out its privacy policies, that is, what personal information is collected by the site, how it will be used, whom it will be shared with, and whether you have the option to exercise control over how your information will be used. Private branch exchange (PBX) — A small version of the phone company’s central switching office. Also known as a private automatic branch exchange. A central telecommunications switching station that an organization uses for its own purposes. Private key — The private or secret key of a key pair, which must be kept confidential and is used to decrypt messages encrypted with the public key, or to digitally sign messages, which can then be validated with the public key. Private network — A network established and operated by a private organization for the benefit of members of the organization. Privilege — A right granted to an individual, a program, or a process. Privileged instructions — A set of instructions generally executable only when the computer system is operating in the executive state (e.g., while handling interrupts). These special instructions are typically designed to control such protection features as the storage protection features. PRO — Professional Review Organization, or Peer Review Organization. Problem — Any deviation from predefined standards. Problem reporting — The method of identifying, tracking, and assigning attributes to problems detected within the software product, deliverables, or within the development processes. Procedural language — A computer programming language in which the programmer must determine the logical sequence of program execution as well as the processing required. Procedure — Required “how-to” instructions that support some part of a policy or standard, which state “what to do.” Procedure division — A section of a COBOL program that contains statements that direct computer processing operations. Procedure view — Contains all of the procedures within a system. Process — A sequence of activities. Process description — A narrative that describes in sequence the processing activities that take place in a computer system and the procedures for completing each activity. 956
Glossary Processing controls — Techniques and methods used to ensure that processing produces correct results. Processor — The hardware unit containing the functions of memory and the central processing unit. Product Certification Center — A facility that certifies the technical security integrity of communications equipment. The equipment is handled and used within secure channels. Professional courier (or diplomatic courier) — A person specifically employed and provided with official documentation by the U.S. Department of State to transport properly prepared, addressed, and documented diplomatic pouches between the Department and its Foreign Service posts and across other international boundaries. Profile filtering — Requires that the user choose terms or enter keywords to provide a more personal picture of preferences. Profiling — Analyzing a program to determine how much time is spent in different parts of the program during execution. Program analyzers — Software tools that modify or monitor the operation of an application program to allow information about its operating characteristics to be collected automatically. Program development process — The activities involved in developing computer programs, including problem analysis, program design, process design, program coding, debugging, and testing. Program maintenance — The process of altering program code or instructions to meet new or changing requirements. Program manager — The person ultimately responsible for the overall procurement, development, integration, modification, or operation and maintenance of the IS. Programmable read-only memory (PROM) — Computer memory chips that can be programmed permanently to carry out a defined process. Programmer — The individual who designs and develops computer programs. Programmer/Analyst — The individual who analyzes processing requirements and then designs and develops computer programs to direct processing. Programming language — A language with special syntax and style conventions for coding computer programs. Programming Language/1 (PL/1) — A general-purpose, high-level language that combines business and scientific processing features. The lan957
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® guage contains advanced features for experienced programmers yet can be easily learned by novice programmers. Programming specifications — The complete description of input, processing, output, and storage requirements necessary to code a computer program. Project manager — An individual who is an expert in project planning and management, defines and develops the project plan, and tracks the plan to ensure all key project milestones are completed on time. Project milestone — Key date by which a certain group of activities needs to be performed. Project plan — Defines the what, when, and who questions of system development including all activities to be performed, the individuals or resources who will perform the activities, and the time required to complete each activity. Project scope — Clearly defines the high-level system requirements. Project scope document — A written definition of the project scope and usually no longer than a paragraph. Project team — A team designed to accomplish specific one-time goals, which is disbanded once the project is complete. Prolog — A language widely used in the field of artificial intelligence. PROM — Programmable read-only memory. Proof-of-concept prototype — A prototype used to prove the technical feasibility of a proposed system. Proof of correctness — The use of mathematical logic to infer that a relation between program variables assumed true at the program entry implies that another relation between program variables holds at program exit. Protect — To keep information systems away from intentional, unintentional, and natural threats: (1) preclude an adversary from gaining access to information for the purpose of destroying, corrupting, or manipulating such information; or (2) deny use of information systems to access, manipulate, and transmit mission-essential information. Protected distribution system (PDS) — Wire-line or fiber-optic distribution system used to transmit unencrypted classified national security information through an area of lesser classification or control. Protected Health Information (PHI) — See Part II, 45 CFR 164.501. 958
Protection ring — A hierarchy of access modes through which a computer system enforces the access rights granted to each user, program, and process, ensuring that each operates only within its authorized access mode.
Protection schema — An outline detailing the type of access users may have to a database or application system, given a user's need-to-know; for example, read, write, modify, delete, create, execute, and append.
Protective layers — Mechanisms for ensuring the integrity of systems or data. See defense in depth.
Protocol — A set of instructions required to initiate and maintain communication between sender and receiver devices.
Protocol analyzer — A data communications testing unit set that enables a network engineer to observe bit patterns and simulate network elements.
Protocol data unit (PDU) — This is OSI terminology for "packet." A PDU is a data object exchanged by protocol machines (entities) within a given layer. PDUs consist of both protocol control information (PCI) and user data.
Proton — A heavy subatomic particle that carries a positive charge.
Prototype — A usable system or subcomponent that is built inexpensively or quickly with the intention of modifying or replacing it.
Provider Taxonomy Codes — An administrative code set for identifying the provider type and area of specialization for all healthcare providers. A given provider can have several Provider Taxonomy Codes. This code set is used in the X12 278 Referral Certification and Authorization and the X12 837 Claim transactions, and is maintained by the NUCC.
Proxy server — A server that acts as an intermediary between a remote user and the servers that run the desired applications. Typical proxies accept a connection from a user, make a decision as to whether or not the client IP address is permitted to use the proxy, perhaps perform additional authentication, and complete a connection to a remote destination on behalf of the user.
PRS — Resource utilization, priority of service.
PSDN — Packet-switched data network.
PSE — Privacy, pseudonymity.
Pseudocode — Program processing specifications that can be prepared as structured English-like statements, which can then be easily converted into source code.
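To make the Pseudocode entry above concrete, here is a small illustrative sketch; the payroll rule and names are invented for this example, with the structured English shown as comments and a direct Python translation beneath it.

# Pseudocode (structured English):
#   IF hours worked is greater than 40
#       THEN pay = 40 * rate + (hours worked - 40) * rate * 1.5
#       ELSE pay = hours worked * rate
#   ENDIF
def gross_pay(hours_worked, rate):
    # Line-for-line translation of the pseudocode above into source code.
    if hours_worked > 40:
        return 40 * rate + (hours_worked - 40) * rate * 1.5
    return hours_worked * rate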
Pseudo-flaw — An apparent loophole deliberately implanted in an operating system program as a trap for intruders.
Pseudonymity — A condition in which you have taken on an assumed identity.
PSK — Phase shift keying.
PSN — Packet-switched network.
PSNP — Partial Sequence Number PDU.
PSPDN — Packet-switched public data network.
PSTN — Public switched telephone network.
Psychographic filtering — Anticipates the user's preferences based on the answers given to a questionnaire.
Psychotherapy Notes — See Part II, 45 CFR 164.501.
PTT — Post, telephone, and telegraph.
Public Health Authority — See Part II, 45 CFR 164.501.
Public key — In an asymmetric cryptography scheme, the key that may be widely published to enable the operation of the scheme. Typically, a public key can be used to encrypt, but not decrypt, or to validate a signature, but not to sign.
Public key cryptography — An asymmetric cryptosystem where the encrypting and decrypting keys are different and it is computationally infeasible to calculate one from the other, given the encrypting algorithm. In public key cryptography, the encrypting key is made public, but the decrypting key is kept secret.
Public Key Cryptography Standards — Public Key Cryptography Standards (PKCS) are specifications produced by RSA Laboratories in cooperation with secure systems developers worldwide for the purpose of accelerating the deployment of public key cryptography.
Public key cryptosystem — An asymmetric cryptosystem that uses a public key and a corresponding private key.
Public key encryption — An encryption scheme where a pair of algorithmic keys (one private and one public) is used to encrypt and decrypt messages, files, etc.
Public key infrastructure — Supporting infrastructure, including nontechnical aspects, for the management of public keys.
Public network — A network on which the organization competes for time with others.
Public switched telephone network (PSTN) — Refers to the local, long distance, and international phone system that we use every day. In some countries, it is a single phone company. In countries with competition, PSTN refers to the entire interconnected collection of local, long distance, and international phone companies, of which there could be thousands.
Pulse amplitude modulation (PAM) — The first step in converting analog waveforms into digital signals for transmission.
Pulse code modulation (PCM) — The most common and most important method that a telephone system in North America can use to sample a voice signal and convert that sample into an equivalent digital code. PCM is a digital modulation method that encodes a pulse amplitude modulated signal into a PCM signal.
Purging — The orderly review of storage and removal of inactive or obsolete data files.
Push technology — An environment in which businesses and organizations come to the user with information, services, and product offerings based on the user profile.
PVC — Permanent virtual circuit.
QA — Quality assurance.
QAM — Quadrature amplitude modulation.
QC — Quality control.
QoS — Quality of service.
Qualitative — Inductive analytical approaches that are oriented toward relative, non-measurable, and subjective values, such as expert judgment.
Quality — The totality of features and characteristics of a product or service that bear on its ability to meet stated or implied needs.
Quality assurance — An overview process that entails planning and systematic actions to ensure that a project is following good quality management practices.
Quality control — Process by which product quality is compared with standards.
Quality of Service (QoS) — The service level defined by a service agreement between a network user and a network provider, which guarantees a certain level of bandwidth and data flow rates.
Quantitative — Deductive analytical approaches that are oriented toward the use of numbers or symbols to express a measurable quantity, such as MTTR.
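The Pulse amplitude modulation and Pulse code modulation entries above can be illustrated with a short Python sketch; the sampling rate and 8-bit linear quantization below are illustrative assumptions (real telephony PCM adds mu-law or A-law companding).

import math

SAMPLE_RATE = 8000   # samples per second, the usual telephony rate
LEVELS = 256         # 8-bit quantization: 256 possible codes per sample

def pcm_encode(duration=0.001, freq=1000):
    # Sample the analog waveform (PAM), then map each sample to a binary code (PCM).
    codes = []
    for n in range(int(SAMPLE_RATE * duration)):
        amplitude = math.sin(2 * math.pi * freq * n / SAMPLE_RATE)   # PAM sample, -1.0 .. 1.0
        codes.append(round((amplitude + 1) / 2 * (LEVELS - 1)))      # quantized to 0 .. 255
    return codes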
Quantizing — The systematic method of providing standard binary numbering to PAM samples for PCM conversion.
Query and reporting tools — Similar to QBE tools, SQL, and report generators in the typical database environment.
Query language — A language that enables a user to interact indirectly with a DBMS to retrieve and possibly modify data held under the DBMS.
Query-by-example tools (QBE) — Help the user graphically design the answer to a question.
Queue — A waiting line in which a set of computer programs is in secondary storage awaiting processing.
Radiation field — The radio frequency field that is created around the antenna and has specific properties that affect the signal transmission.
RADIUS (Remote Authentication Dial-In User Service) — Database for authenticating modem and ISDN connections and for tracking connection time. A protocol used to authenticate remote users and wireless connections.
RAID (redundant arrays of inexpensive disks) — Instead of using one large disk to store data, one can use many smaller disks (because they are cheaper). See disk mirroring and duplexing. An approach to using many low-cost drives as a group to improve performance, yet also provide a degree of redundancy that makes the chance of data loss remote.
Rain attenuation, or raindrop absorption — The scattering of the microwave signal, which can cause signal loss in transmissions.
Rainbow Series — A multi-volume set of publications on Information Assurance, Information Security, and related topics. Published by the National Computer Security Center (NCSC) at the National Security Agency (NSA) in Fort Meade, Maryland. Each volume is published under a different color cover, hence the term "Rainbow" series.
Rainbow tables — A set of tools and techniques used for cracking MS Windows passwords.
RAM (random access memory) — A type of computer memory that can be accessed randomly; that is, any byte of memory can be accessed without touching the preceding bytes. RAM is the most common type of memory found in computers and other devices, such as printers. There are two basic types of RAM: dynamic RAM (DRAM) and static RAM (SRAM).
Random access — A method that allows records to be read from and written to disk media without regard to the order of their record key.
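As a simplified illustration of the precomputation idea behind the Rainbow tables entry above, the Python sketch below inverts hashes with a plain lookup table built from a tiny, invented word list. Real rainbow tables compress this storage with chains of hash and reduction functions, and Windows password cracking targets LM/NTLM hashes rather than the SHA-1 used here.

import hashlib

def build_lookup(wordlist):
    # Precompute hash -> password pairs once, so later lookups are instant.
    return {hashlib.sha1(word.encode()).hexdigest(): word for word in wordlist}

table = build_lookup(["password", "letmein", "qwerty"])
captured_hash = hashlib.sha1(b"letmein").hexdigest()
print(table.get(captured_hash))   # prints "letmein"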
Random failure — Failures that result from physical degradation over time and variability introduced during the manufacturing process.
Range — The distance a signal travels before it degrades and needs to be repeated.
RARP (Reverse Address Resolution Protocol) — Protocol in the TCP/IP stack that provides a method for finding IP addresses based on MAC addresses. Compare with Address Resolution Protocol (ARP).
Raster image — An image that is composed of small points of color data called pixels. Raster images allow the representation of complex shapes and colors in a relatively small file format. Photographs are represented using raster images.
RBOCs — Regional Bell operating companies.
RCP — Remote Copy Protocol.
RCR — Development, representation correspondence.
RCV — Protection of the TSF, trusted recovery.
Reaccreditation — The official management decision to continue operating a previously accredited system.
Reach — An aggregate measure of the degree to which information is shared.
React — To respond to threat activity within information systems, when detected, and mitigate the consequences by taking appropriate action to incidents that threaten information and information systems.
Read-only memory (ROM) — Computer memory chips with preprogrammed circuits for storing such software as word processors and spreadsheets.
Reality — The real world.
Real-time processing — Computer processing that generates output fast enough to support multiple activities being performed concurrently.
Real-time reaction — A response to a penetration attempt that can prevent actual penetration because the attempt is detected and diagnosed in time.
Reassembly — The process by which an IP datagram is "put back together" at the receiving host after having been fragmented in transit.
Recertification — A reassessment of the technical and non-technical security features and other safeguards of a system made in support of the reaccreditation process.
Reciprocal agreement — Emergency processing agreements between two or more organizations with similar equipment or applications. Typically, participants promise to provide processing time to each other when an emergency arises.
Reciprocity — An antenna characteristic that essentially states that the antenna is the same regardless of whether it is sending or receiving electromagnetic energy.
Recognition — Capability to detect attacks as they occur and to evaluate the extent of damage and compromise.
Record block — A group or collection of records appearing between interblock gaps on magnetic storage media. This group of records is handled as a single entity in computer processing.
Record blocking — A technique of writing several records to magnetic storage media in between interblock gaps or spaces.
Record material — All books, papers, maps, photographs, or other documentary materials, regardless of physical form or characteristics, made or received by the U.S. Government in connection with the transaction of public business and preserved or appropriated by an agency or its legitimate successor as evidence of the organization, functions, policies, decisions, procedures, or other activities of any agency of the government, or because of the informational data contained therein.
Recording Industry Association of America (RIAA) — A trade group that represents the U.S. recording industry. The RIAA works to create a business and legal environment that supports the record industry and seeks to protect intellectual property rights.
Recovery — The restoration of the information processing facility or other related assets following physical destruction or damage.
Recovery point objective (RPO) — A measurement of the point prior to an outage to which data are to be restored.
Recovery procedures — The action necessary to restore a system's computational capability and data files after system failure or penetration.
Recovery time objective (RTO) — The amount of time allowed for the recovery of a business function or resource after a disaster occurs.
Rectifier — A diode designed to be placed in an alternating current circuit, used for converting AC to DC.
Recurring decision — A decision that you have to make repeatedly and often periodically, whether weekly, monthly, quarterly, or yearly.
Recursion — The definition of something in terms of itself. For example, a bill of material is usually defined in terms of itself.
Red — Designation applied to information systems, and associated areas, circuits, components, and equipment in which national security information is being processed.
Red Book — Common name used to refer to the Network Interpretation of the TCSEC (Orange Book). Originally referred to in some circles as the "White Book."
Red forces — Forces of countries considered unfriendly to the United States and her Allies.
Red team — A group of people duly authorized to conduct attacks against friendly information systems, under prescribed conditions, for the purpose of revealing the capabilities and limitations of the information assurance posture of a system under test. For purposes of operational testing, the Red team will operate in as operationally realistic an environment as feasible and will conduct its operations in accordance with the approved operational test plan.
RED/BLACK concept — Separation of electrical and electronic circuits, components, equipment, and systems that handle national security information (RED), in electrical form, from those that handle non-national security information (BLACK) in the same form.
Red-Black separation — The requirement for physical spacing between "red" and "black" processing systems and their components, including signal and power lines.
Reduced Instruction Set Computing (RISC) — A method of processing by which the set of instructions available to the computer is a subset of that found on conventional computers.
Redundancy — Controlling failure by providing several identical functional units, monitoring the behavior of each to detect faults, and initiating a transition to a safe/secure condition if a discrepancy is detected.
Redundant control capability — Use of active or passive replacement, for example, throughout the network components (i.e., network nodes, connectivity, and control stations) to enhance reliability, reduce threat of single-point-of-failure, enhance survivability, and provide excess capacity.
Redundant site — A recovery strategy involving the duplication of key information technology components, including data, or other key business processes, whereby fast recovery can take place. The redundant site usually is located away from the original.
Reference configuration — A combination of functional groups and reference points that shows possible network arrangements.
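The Recursion entry above mentions that a bill of material is defined in terms of itself; the following illustrative Python sketch (the bicycle data is invented) counts leaf parts recursively.

def total_parts(bom, assembly):
    # Base case: an item with no listed components is a raw part.
    components = bom.get(assembly, [])
    if not components:
        return 1
    # Recursive case: an assembly is the sum of the parts of its components.
    return sum(total_parts(bom, component) for component in components)

bom = {"bicycle": ["frame", "wheel", "wheel"], "wheel": ["rim", "spoke", "tire"]}
print(total_parts(bom, "bicycle"))   # 7 leaf parts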
Reference monitor — (1) An access control concept that refers to an abstract machine that mediates all accesses to objects by subjects. (2) A system component that mediates usage of all objects by all subjects, enforcing the intended access controls.
Referential attributes — The facts that tie an instance of one object to an instance of another object.
Referential integrity — The assurance that an object handle identifies a single object. The facility of a DBMS that ensures the validity of predefined relationships.
Referrer field — The referrer header field (mistakenly spelled "referer" in the HTTP standard) is a unit of information that contains the URL of the site you are currently in. The referrer header field is sent automatically to any site you are about to visit when clicking a link. Referrer headers allow reading patterns to be studied and reverse links drawn. The address of the page might contain privacy information (such as your name or email address), or might reveal personal interests that you would rather keep private.
Reflections — When the microwave signal traverses a body of water or fog bank and causes multipath conditions.
Regenstrief Institute — A research foundation for improving healthcare by optimizing the capture, analysis, content, and delivery of healthcare information. Regenstrief maintains the LOINC coding system that is being considered for use as part of the HIPAA claim attachments standard.
Regional Diplomatic Courier Officer (RDCO) — The RDCO oversees the operations of a regional diplomatic courier division.
Regression testing — The rerunning of test cases that a program has previously executed correctly to detect errors created during software correction or modification. Tests used to verify a previously tested system whenever it is modified.
Relation — Describes each two-dimensional table or file in the relation model (hence its name — relational database model).
Relational database — In a relational database, data is organized in two-dimensional tables or relations.
Relevance — Related to the matter at hand; directly bearing upon the current matter.
Reliability — The probability that a system or service will perform in a satisfactory manner for a given period of time when used under specific operating conditions.
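The Referential integrity entry above describes a DBMS enforcing predefined relationships; a minimal sketch using Python's built-in sqlite3 module (table names are invented for illustration) shows that enforcement in action.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")   # SQLite enforces foreign keys only when asked
conn.execute("CREATE TABLE department (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("CREATE TABLE employee (id INTEGER PRIMARY KEY, "
             "dept_id INTEGER REFERENCES department(id))")
conn.execute("INSERT INTO department VALUES (1, 'Security')")
conn.execute("INSERT INTO employee VALUES (10, 1)")        # valid: department 1 exists
try:
    conn.execute("INSERT INTO employee VALUES (11, 99)")   # no department 99: rejected
except sqlite3.IntegrityError as err:
    print("referential integrity violation:", err)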
Reliability critical — A term applied to any condition, event, process, or item whose recognition, control, performance, or tolerance is essential to reliable system operation or support.
Relying third party — The entity, such as a merchant, offering goods or services online that will receive a certificate as part of a process of completing transactions with the user.
Remanence — The residual magnetism that remains on magnetic storage media after degaussing.
Remediation plan — See plan of action and milestones.
Remote access — The ability to dial into a computer over a local telephone number using a number of digital access techniques.
Remote Authentication Dial-In User Service (RADIUS) — A security and authentication mechanism for remote access.
Remote diagnostic facility — An off-premise diagnostic, maintenance, and programming facility authorized to perform functions on the department computerized telephone system via an external network trunk connection.
Remote file system (RFS) — A distributed file system, similar to NFS, developed by AT&T and distributed with their UNIX System V operating system. See Network File System.
Remote procedure call (RPC) — An easy and popular paradigm for implementing the client/server model of distributed computing. A request is sent to a remote system to execute a designated procedure, using arguments supplied, and the result returned to the caller.
Repeater — A device that propagates electrical signals from one cable to another without making routing decisions or providing packet filtering. In OSI terminology, a repeater is a physical layer intermediate system. See bridge, router.
Replay — A type of security threat that occurs when an exchange is captured and resent at a later time to confuse the original recipients.
Replication — The process of keeping a copy of data through either shadowing or caching.
Report — Printed or displayed output that communicates the content of files and other activities. The output is typically organized and easily read.
Report Program Generator (RPG) — A nonprocedural programming language used for many business applications.
Report writing — The process of accessing data from files and generating it as information in the form of output.
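To illustrate the Remote procedure call entry above, here is a minimal sketch using Python's standard-library XML-RPC modules (one of many possible RPC mechanisms; the port and function are invented for the example).

import threading
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

def add(a, b):
    return a + b            # the designated procedure executed on the remote system

server = SimpleXMLRPCServer(("localhost", 8000), logRequests=False)
server.register_function(add)
threading.Thread(target=server.serve_forever, daemon=True).start()

client = ServerProxy("http://localhost:8000")
print(client.add(2, 3))     # the caller supplies arguments and receives the result: 5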
Repudiation — Denying that you did something, or sent some message.
REQ — (1) Protection profile evaluation, IT security requirements. (2) Security target evaluation, IT security requirements.
Request for Comments (RFC) — The document series, begun in 1969, that describes the Internet suite of protocols and related experiments. Not all (in fact, very few) RFCs describe Internet standards, but all Internet standards are written up as RFCs.
Request for Proposal (RFP) — A formal document that describes in detail logical requirements for a proposed system and invites outsourcing organizations (vendors) to submit bids for its development.
Required by Law — See Part II, 45 CFR 164.501.
Requirement definition document — Defines all of the business requirements, prioritizes them in order of business importance, and places them in a formal comprehensive document.
Residual risks — The risk that remains from an event after the controls in place to reduce the effect or likelihood of that event are taken into account.
Residue — Data left in storage after processing operations and before degaussing or rewriting has occurred.
Resistance — (1) The opposition to the flow of electric charge; it is generally a function of the number of free electrons available to conduct the electric current. (2) Capability of a system to repel attacks.
Resistor — A component made of a material that has a specified resistance or opposition to the flow of electrical current. A resistor is designed to oppose but not completely obstruct the passage of electrical current.
Resolution of a printer — The number of dots per inch (dpi) a printer produces, which is the same principle as the resolution in a monitor.
Resolution of a screen — The number of pixels a screen has. Pixels (picture elements) are the dots that make up an image on the screen.
Resonant frequency — The frequency where inductive reactance equals capacitive reactance. Helps to define the maximum current or maximum voltage in a circuit.
Resource — In a computer system, any function, device, or data collection that can be allocated to users or programs.
Resource sharing — In a computer system, the concurrent use of a resource by more than one user, job, or program.
Restricted area — A specifically designated and posted area in which classified information or material is located or in which sensitive functions are performed, access to which is controlled and to which only authorized personnel are admitted.
Result of interception — Information relating to a target service, including the CC and IRI, which is passed by an NWO/AP/SvP to an LEA. IRI shall be provided whether or not call activity is taking place.
REV — Security management, revocation.
RF shielding — The application of materials to surfaces of a building, room, or a room within a room, that makes the surface largely impervious to electromagnetic energy. As a technical security countermeasure, it is used to contain or dissipate emanations from information processing equipment, and to prevent interference by externally generated energy.
RFA — Regulatory Flexibility Act.
RFC — Request for Comments.
RFI — Radio frequency interference.
RFID (radio frequency identification) system — An automatic identification and data capture system comprising one or more readers and one or more tags in which data transfer is achieved by means of suitable modulated inductive or radiating electromagnetic carriers.
RGB (Red, Green, Blue) — Refers to a system for representing the colors to be used on a computer display.
Richness — Defined by three aspects of the information itself: bandwidth (the amount of information), the degree to which the information is customized, and interactivity (the extent of two-way communication).
Ring side — The side of the cable pair that, when measured, will read −48V DC.
RIP — (1) Routing Information Protocol. (2) User data protection, residual information protection.
RISC — Reduced Instruction Set Computer.
Risk — The probability that a particular security threat will exploit a particular vulnerability.
Risk analysis — An analysis that examines an organization's information resources, its existing controls, and its remaining organization and computer system vulnerabilities. It combines the loss potential for each resource or combination of resources with an estimated rate of occurrence to establish a potential level of damage in dollars or other assets.
Risk assessment — A process used to identify and evaluate risks and their potential effects.
Risk avoidance — The process for systematically avoiding risk. Security awareness can lead to a better-educated staff, which can lead to certain risks being avoided.
Risk control — Techniques that are employed to eliminate, reduce, or mitigate risk, such as inherent safe and secure (re)design techniques/features, alerts, warnings, operational procedures, instructions for use, training, and contingency plans.
Risk dimension — See threat perspective.
Risk exposure — The exposure to loss presented to an organization or individual by a risk; the product of the likelihood that the risk will occur and the magnitude of the consequences of its occurrence.
Risk index — The disparity between the minimum clearance or authorization of system users and the maximum sensitivity (e.g., classification and categories) of data processed by a system.
Risk management — The discipline of identifying and measuring security risks associated with an information system, and controlling and reducing those risks to an acceptable level. The goal of risk management is to invest organizational resources to mitigate security risks in a cost-effective manner, while enabling timely and effective mission accomplishment. Risk management is an important aspect of information assurance and defense-in-depth.
Risk mitigation — While some risks cannot be avoided, they can be minimized or mitigated by putting controls into place to mitigate the risk once an incident occurs.
Risk transfer — The process of transferring risk. An example can include transferring the risk of a building fire to an insurance company.
RJE — Remote job entry.
rlogin — A service offered by Berkeley UNIX that allows users of one machine to log in to other UNIX systems (for which they are authorized) and interact as if their terminals were connected directly. Similar to Telnet.
RLP — Remote Location Protocol.
RMON — Remote monitoring.
Robot — A mechanical device equipped with simulated human senses and the capability of taking action on its own.
Robotics — The use of automated equipment for production work and other mechanical tasks.
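The Risk analysis and Risk exposure entries above both combine an estimated rate of occurrence with a loss magnitude; the figures in this short Python sketch are hypothetical and purely illustrative.

likelihood_per_year = 0.25     # estimated rate of occurrence: roughly once every four years
impact_dollars = 200_000       # estimated magnitude of the consequences per occurrence

risk_exposure = likelihood_per_year * impact_dollars
print(risk_exposure)           # 50000.0, an annualized figure used to rank or price the risk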
Robust watermark — A watermark that is very resistant to destruction under any image manipulation. This is useful in verifying ownership of an image suspected of misappropriation. Digital detection of the watermark would indicate the source of the image.
Robustness — The system's ability to operate despite service interruption, system errors, and other anomalous events.
ROI — Return on investment.
ROL — User data protection rollback.
Role — A job type defined in terms of a set of responsibilities.
Role-based — When mapped to job function, assumes that a person will take on different roles, over time, within an organization and different responsibilities in relation to IT systems.
Roles and responsibilities — Functions performed by someone in a specific situation and obligations to tasks or duties for which that person is accountable.
Rollback — (1) Restoration of a system to its former condition after it has switched to a fallback mode of operation when the cause of the fallback has been removed. (2) The restoration of the database to an original position or condition often after major damage to the physical medium. (3) The restoration of the information processing facility or other related assets following physical destruction or damage.
ROM — See read-only memory.
Root cause — Underlying cause(s), event(s), condition(s), or action(s) that individually or in combination led to the accident/incident; primary precursor event(s) that have the potential for being corrected.
Rootkits — (1) User-level rootkits: programs that "infect" program files that are executed by the user and run under the user account's privileges (e.g., the Explorer.exe or Word.exe program). (2) Kernel-level rootkits: programs that "infect" functions belonging to the operating system kernel (i.e., the core Windows operating system) and are used by hundreds of applications (including the Windows API). Kernel-mode rootkits will modify (i.e., hijack) internal operating system functions that return lists of files, processes, and open ports.
Rotary (or pulse) dialing — The circular telephone dial. As it returns to its normal position, it opens and closes the electrical loop sent by the central office. Rotary dial telephones momentarily break the DC circuit to represent the digits dialed.
Router — (1) A system responsible for making decisions about which of several paths network (or Internet) traffic will follow. To do this, it uses a routing protocol to gain information about the network, and algorithms to choose the best route based on several criteria known as "routing metrics." (2) A network node connected to two or more networks. It is used to send data from one network (such as 137.13.45.0) to a second network (such as 43.24.56.0). The networks could both use Ethernet, or one could be Ethernet and the other could be ATM (or some other networking technology). As long as both speak common protocols (such as the TCP/IP protocol suite), they can communicate.
RPC — Remote procedure call.
RPL — Protection of the TSF; replay detection.
RSA — (1) A public key cryptosystem developed by Rivest, Shamir, and Adleman (RSA). The RSA algorithm has two different keys: the public encryption key and the secret decryption key. The strength of RSA depends on the difficulty of the prime number factorization. For applications with high-level security, the number of the decryption key bits should be greater than 512 bits. RSA is used for both encryption and digital signatures. (2) Resource utilization, resource allocation.
RTFM — Read the "fine" manual.
RTMP — Routing Table Maintenance Protocol (AppleTalk).
RTP — Real-time Transport Protocol.
Rule-based expert — The type of expert system that expresses the problem-solving process as rules.
Rule-based security policy — A security policy based on global rules imposed for all subjects. These rules usually rely on a comparison of the sensitivity of the objects being accessed and the possession of corresponding attributes by the subjects requesting access.
Rules — Constraints.
Rules of behavior — The rules that have been established and implemented concerning use of, security in, and acceptable level of risk for the system. Rules will clearly delineate responsibilities and expected behavior of all individuals with access to the system. Rules should cover such matters as working at home, dial-in access, connection to the Internet, use of copyrighted works, unofficial use of federal government equipment, the assignment and limitation of system privileges, and individual accountability.
RVM — Protection of the TSF, reference mediation.
RVS — Relative Value Scale.
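The RSA entry above notes that the algorithm's strength rests on the difficulty of factoring; this deliberately insecure Python sketch with textbook-sized primes shows the key relationship (real keys use primes hundreds of digits long and padding such as OAEP).

p, q = 61, 53
n = p * q                   # 3233, published with the public key
phi = (p - 1) * (q - 1)     # 3120, recoverable only by factoring n
e = 17                      # public encryption exponent
d = pow(e, -1, phi)         # 2753, the secret decryption exponent (modular inverse, Python 3.8+)

message = 65
ciphertext = pow(message, e, n)     # encrypt with the public key (e, n)
recovered = pow(ciphertext, d, n)   # decrypt with the private key (d, n)
assert recovered == message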
S/MIME — Secure Multipurpose Internet Mail Extensions; an e-mail and file encryption protocol.
SA — (1) Source address. (2) Security association.
SABM — Set asynchronous balanced mode.
SABME — Set asynchronous balanced mode extended.
SAE — Security management, security attribute expiration.
Safe Harbor Principles — The set of rules to which U.S. businesses that want to trade with the European Union (EU) must adhere.
Safeguards — Protective measures prescribed to meet the security requirements (i.e., confidentiality, integrity, and availability) specified for an information system. Safeguards may include security features, management constraints, personnel security, and security of physical structures, areas, and devices. Synonymous with security controls and countermeasures.
Safety-critical — A term applied to any condition, event, operation, process, or item whose proper recognition, control, performance, or tolerance is essential to safe system operation and support (such as a safety-critical function, safety-critical path, or safety-critical component).
Safety-critical software — Software that performs or controls functions which, if executed erroneously or if they failed to execute properly, could directly inflict serious injury to people, property, or the environment or cause loss of life.
Safety integrity — (1) The likelihood of a safety-related system, function, or component achieving its required safety features under all stated conditions within a stated measure of use. (2) The probability of a safety-related system satisfactorily performing the required safety functions under all stated conditions within a stated period of time.
Safety integrity level — An indicator of the required level of safety integrity; the level of safety integrity that must be achieved and demonstrated.
Safety kernel — An independent computer program that monitors the state of the system to determine when potentially unsafe system states might occur or when transitions to potentially unsafe system states might occur. A safety kernel is designed to prevent a system from entering an unsafe state and retaining or returning it to a known safe state.
Safety-related software — Software that performs or controls functions that are activated to prevent or minimize the effect of a failure of a safety-critical system.
Sales Force Automation (SFA) system — System that automatically tracks all of the steps in the sales process.
Salt — Salt is a string of random (or pseudo-random) bits concatenated with a key or password to reduce the probability of pre-computation attacks.
Sanitization — (1) Removing the classified content of an otherwise unclassified resource. (2) Removing any information that could identify the source from which the information came.
Sanitize — The degaussing or overwriting of information on magnetic or other storage media.
Sanitizing — The degaussing or overwriting of sensitive information in magnetic or other storage media.
SAP — (1) Service access point. (2) Service Advertisement Protocol (Novell).
SAR — Security audit review.
Sarbanes–Oxley Act of 2002 — The most dramatic change to federal securities laws since the 1930s, this Act radically redesigns federal regulation of public company corporate governance and reporting obligations. It also significantly tightens accountability standards for directors and officers, auditors, securities analysts, and legal counsel.
SAS — Single attached station.
Satellite modem — A modem that allows Internet access from a satellite dish.
SC — Subcommittee.
Scalability — The likelihood that an artifact can be extended to provide additional functionality with little or no additional effort; how well a system can adapt to increased demands.
Scannable resume (ASCII resume, plaintext resume) — Designed to be evaluated by skills-extraction software and typically contains all resume content without any formatting.
Scanner — Captures images, photos, and artwork that already exist on paper.
Scavenging — The searching of residue for the purpose of unauthorized data acquisition.
Scheduling program — A systems program that schedules and monitors the processing of production jobs in the computer system.
SCHIP — The State Children's Health Insurance Program.
SCL — Security certification level (see certification level).
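To make the Salt entry above concrete, this standard-library Python sketch (the iteration count and salt length are illustrative choices) stores a random salt beside an iterated, salted hash, so identical passwords do not produce identical digests and precomputed tables become useless.

import hashlib, os

def hash_password(password, salt=None):
    salt = salt or os.urandom(16)                  # 16 random salt bytes, stored with the digest
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password, salt, digest):
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000) == digest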
Scope creep — Occurs when the scope of the project increases.
SCP — CM scope.
Script bunny (or script kiddie) — Someone who would like to be a hacker but does not have much technical expertise.
Scripts — Executable programs used to perform specified tasks for servers and clients.
SDH — Synchronous digital hierarchy.
SDI — User data protection, stored data integrity.
SDLC — System development life cycle.
SDO — Under HIPAA, Standards Development Organization.
SDU — Service data unit.
Search engine — A program written to allow users to search the Web for documents that match user-specified parameters.
Secrecy — A security principle that keeps information from being disclosed to anyone not authorized to access it.
Secret key cryptography — A cryptographic system where encryption and decryption are performed using the same key.
Secretary — Under HIPAA, this refers to the secretary of HHS or his or her designated representatives. Also see Part II, 45 CFR 160.103.
Secure Digital Music Initiative (SDMI) — Forum of more than 160 companies and organizations representing a broad spectrum of information technology and consumer electronics businesses, Internet service providers, security technology companies, and members of the worldwide recording industry working to develop voluntary, open standards for digital music. SDMI is helping to enable the widespread Internet distribution of music by adopting a framework that artists and recording and technology companies can use to develop new business models.
Secure Electronic Transaction (SET) — The SET specification has been developed to allow for secure credit card and offline debit card (check card) transactions over the World Wide Web.
Secure interoperability — The ability to have secure, successful transactions. Today's interoperability expands that previous focus to also include information assurance considerations, and include the requirement to formally assess whether that traditional, successful transaction is also secure (i.e., secure interoperability meaning a secure, successful transaction exists).
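The Secret key cryptography entry above can be illustrated with a brief sketch; it assumes the third-party Python "cryptography" package, and the message text is invented.

from cryptography.fernet import Fernet

key = Fernet.generate_key()          # the single shared secret key
cipher = Fernet(key)
token = cipher.encrypt(b"wire transfer approved")
assert cipher.decrypt(token) == b"wire transfer approved"   # the same key decrypts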
Secure operating system — An operating system that effectively controls hardware, software, and firmware functions to provide the level of protection appropriate to the value of the data resources managed by this operating system.
Secure room — Any room with floor-to-ceiling, slab-to-slab construction of some substantial material (i.e., concrete, brick, cinder block, plywood, or plaster board). Any window areas or penetrations of wall areas over 15.25 cm (6 inches) must be covered with either grilling or substantial type material. Entrance doors must be constructed of solid wood, metal, etc., and be capable of holding a DS-approved three-way combination lock with interior extension.
Secure Sockets Layer (SSL) — A protocol developed by Netscape for transmitting private documents via the Internet. SSL works by using a public key to encrypt data that is transferred over the SSL connection.
Secure voice — Systems in which transmitted conversations are encrypted to make them unintelligible to anyone except the intended recipient. Within the context of department security standards, secure voice systems must also have protective features included in the environment of the systems terminals.
Security — (1) Freedom from undesirable events, such as malicious and accidental misuse; how well a system resists penetrations by outsiders and misuse by insiders. (2) The protection of system resources from accidental or malicious access, use, modification, destruction, or disclosure. (3) The protection of resources from damage and the protection of data against accidental or intentional disclosure to unauthorized persons or unauthorized modifications or destruction. Security concerns transcend the boundaries of an automated system.
Security accreditation — See accreditation.
Security anomaly — An irregularity possibly indicative of a security breach, an attempt to breach security, or of noncompliance with security standards, policy, or procedures.
Security association — A security association is a set of parameters that defines all the security services and mechanisms used for protecting the communication. A security association is bound to a specific security protocol.
Security audit — An examination of data security procedures and measures to evaluate their adequacy and compliance with established policy.
Security authorization — See accreditation.
Security category — The characterization of information or an information system based on an assessment of the potential impact that a loss of confidentiality, integrity, or availability of such information or information system would have on organizational operations, organizational assets, or individuals.
Security classification designations — Refers to the "Top Secret," "Secret," and "Confidential" designations on classified information or material.
Security controls — Techniques and methods to ensure that only authorized users can access the computer information system and its resources.
Security-critical — A term applied to any condition, event, process, or item whose recognition, control, performance, or tolerance is essential to secure system operation or support.
Security domain — A set of subjects, their information objects, and a common security policy.
Security equipment — Protective devices such as intrusion alarms, safes, locks, and destruction equipment that provide physical or technical surveillance protection as their primary purpose.
Security evaluation — An evaluation done to assess the degree of trust that can be placed in systems for the secure handling of sensitive information. One type, a product evaluation, is an evaluation performed on the hardware and software features and assurances of a computer product from a perspective that excludes the application environment. The other type, a system evaluation, is done for the purpose of assessing a system's security safeguards with respect to a specific operational mission and is a major step in the certification and accreditation process.
Security filter — A set of software or firmware routines and techniques employed in a computer system to prevent automatic forwarding of specified data over unprotected links or to unauthorized persons.
Security goals — The five security goals are integrity, availability, confidentiality, accountability, and assurance.
Security incident — Any act or circumstance that involves classified information that deviates from the requirements of governing security publications. For example, compromise, possible compromise, inadvertent disclosure, and deviation.
Security inspection — Examination of an IS to determine compliance with security policy, procedures, and practices.
Security kernel — The central part of a computer system (hardware, software, or firmware) that implements the fundamental security procedures for controlling access to system resources.
Security label — Piece of information that represents the sensitivity of a subject or object, such as its hierarchical classification (CONFIDENTIAL, SECRET, TOP SECRET), together with any applicable nonhierarchical security categories (e.g., sensitive compartmented information, critical nuclear weapon design information).
Security metrics — A standard of measurement used to measure and monitor information security-related activity.
Security objective — Confidentiality, integrity, or availability of information.
Security Parameter Index (SPI) — SPI is an identifier for a security association within a specific security protocol. This means that a pair of security protocol and SPI may uniquely identify a security association, but this is implementation dependent.
Security plan — See system security plan.
Security policy — The set of laws, rules, and practices that regulate how sensitive or critical information is managed, protected, and distributed.
Security policy model — A formal presentation of the security policy enforced by the system. It must identify the set of rules and practices that regulate how a system manages, protects, and distributes sensitive information.
Security process — The series of activities that monitor, evaluate, test, certify, accredit, and maintain the system accreditation throughout the system life cycle.
Security program — A systems program that controls access to data in files and permits only authorized use of terminals and other related equipment. Control is usually exercised through various levels of safeguards assigned on the basis of the user's need-to-know.
Security purpose — The IS security purpose is to provide value by enabling an organization to meet all mission/business objectives while ensuring that system implementations demonstrate due care consideration of risks to the organization and its customers.
Security requirements — The types and levels of protection necessary for equipment, data, information, applications, and facilities to meet security policy.
Security requirements baseline — A description of minimum requirements necessary for a system to maintain an acceptable level of security.
Security service — A capability that supports one, or many, of the security goals. Examples of security services are key management, access control, and authentication.
Security specification — A detailed description of the safeguards required to protect a system.
Security test and evaluation (ST&E) — An examination and analysis of the security safeguards of a system as they have been applied in an operational environment to determine the security posture of the system.
Security testing — A process used to determine that the security features of a system are implemented as designed. This includes hands-on functional testing, penetration testing, and verification.
Seepage — The accidental flow, to unauthorized individuals, of data or information that is presumed to be protected by computer security safeguards.
Segment — Under HIPAA, this is a group of related data elements in a transaction. Also see Part II, 45 CFR 162.103.
SEL — Security audit event selection.
Selection — A program control structure created in response to a condition test in which one of two or more processing paths can be taken.
Self-Insured — Under HIPAA, an individual or organization that assumes the financial risk of paying for healthcare.
Self-organizing neural network — A network that finds patterns and relationships in vast amounts of data by itself.
Self-sourcing (or knowledge worker/end-user development) — The development and support of IT systems by knowledge workers with little or no help from IT specialists.
Selling prototype — A prototype used to convince people of the worth of a proposed system.
Semagram — Meaning: semantic symbol. Semagrams are associated with a concept and do not use writing to hide a message.
Semiconductor — Material used in electronic components that possesses electrical conducting qualities of conductors and resistors.
Sensitive data — Data that is considered confidential or proprietary. The kind of data that, if disclosed to a competitor, might give away an advantage.
Sensitive information — Any information that requires protection and that should not be made generally available.
Sensitive intelligence information — Such intelligence information, the unauthorized disclosure of which would lead to counteraction (1) jeopardizing the continued productivity of intelligence sources or methods that provide intelligence vital to the national security; or (2) offsetting the value of intelligence vital to the national security.
Sensitive unclassified information — Any information, the loss, misuse, or unauthorized access to or modification of which could adversely affect the national interest or the conduct of federal programs, or the privacy to which individuals are entitled under 5 U.S.C. Section 552a (the Privacy Act), but that has not been specifically authorized under criteria established by an Executive Order or an Act of Congress to be kept secret in the interest of national defense or foreign policy. Note: Systems that are not national security systems, but contain sensitive information, are to be protected in accordance with the requirements of the Computer Security Act of 1987 (Public Law 100–235).
Sensitivity — An information technology environment consists of the system, data, and applications, which must be examined individually and in total. All systems and applications require some level of protection for confidentiality, integrity, and availability. This level of protection is determined by an evaluation of the sensitivity and criticality of the information processed, the relationship of the system to the organization's mission, and the economic value of the system components.
Sensitivity attributes — User-supplied indicators of file sensitivity that the system uses to enforce an access control policy.
Sensitivity label — A hierarchical classification and a set of nonhierarchical components that are used by mandatory access controls to define a process's resource access rights.
SEP — Protection of the TSF, domain separation.
Sequential organization — The physical arrangement of records in a sequence that corresponds with their logical key.
Serial connector — Usually has 9 holes but may have 25 that fit into the corresponding number of pins in the port. Serial connectors are often used for monitors and certain types of modems.
Serial Line Internet Protocol (SLIP) — An Internet protocol used to run IP over serial lines such as telephone circuits or RS-232 cables interconnecting two systems. SLIP is now being replaced by Point-to-Point Protocol. See Point-to-Point Protocol.
Serial organization — The physical arrangement of records in a sequence.
Serial processing — The processing of records in the physical order in which they appear in a file or on an input device.
Server — A computer that provides a service to another computer, such as a mail server, a file server, or a news server.
Server farm — A location that stores a group of servers in a single place.
Service — A component of the portfolio of choices offered by SvPs to a user, a functionality offered to a user.
Service control — The ability of the user, home environment, or serving environment to determine what a particular service does, for a specific invocation of that service, within the limitations of that service.
Service control points (SCP) — The local versions of the national 800 number database. They contain the intelligence to screen the full ten digits of an 800 number and route calls to the appropriate long-distance carrier.
Service information — Information used by the telecommunications infrastructure in the establishment and operation of a network-related service or services. The information may be established by an NWO/AP/SvP or a network user.
Service level agreement (SLA) — Defines the specific responsibilities of the service provider and sets the customer expectations.
Service program — An operating system program that provides a variety of common processing services to users (e.g., utility programs, librarian programs, and other software).
Service provider (SvP) — A natural or legal person providing one or more public telecommunications services whose provision consists wholly or partly in the transmission and routing of signals on a telecommunications network. SvPs do not necessarily have to run their own networks.
Service switching point (SSP) — A switching system, including its remotes, that identifies calls associated with intelligent network services and initiates dialog with the SCP.
Service transfer point (STP) — A signaling point with the function of transferring messages from one signaling link to another and considered exclusively from the viewpoint of the transferor.
Session — A completed connection to an Internet service, and the ensuing connect time.
Session hijacking — An intruder takes over a connection after the original source has been authenticated.
Session key — A randomly generated key that is used one time, and then discarded. Session keys are symmetric (used for both encryption and decryption). They are sent with the message, protected by encryption with a public key from the intended recipient. A session key consists of a random number of approximately 40 to 2000 bits. Session keys can be derived from hash values.
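The Session key entry above describes a one-time symmetric key that travels with the message, protected by the recipient's public key; a hedged sketch of that hybrid pattern, assuming the third-party Python "cryptography" package, follows.

from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

recipient_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
recipient_public = recipient_private.public_key()
oaep = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()), algorithm=hashes.SHA256(), label=None)

session_key = Fernet.generate_key()                          # random, used for this message only
ciphertext = Fernet(session_key).encrypt(b"quarterly results attached")
wrapped_key = recipient_public.encrypt(session_key, oaep)    # session key sent with the message

# Recipient: unwrap the session key with the private key, decrypt, then discard the key.
recovered = Fernet(recipient_private.decrypt(wrapped_key, oaep)).decrypt(ciphertext)
assert recovered == b"quarterly results attached"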
Session layer — The layer of the ISO Reference Model coordinating communications between network nodes. It can be used to initialize, manage, and terminate communication sessions.
SET — Secure Electronic Transactions protocol.
SF — Super Framing (T1/E1).
SHA — Secure Hash Algorithm.
Shared information — An organization's information is in one central location, allowing anyone to access and use it as they need it.
Shareware — Software available on the Internet that may be downloaded to your machine for evaluation and for which you are generally expected to pay a fee to the originator of the software if you decide to keep it.
Sharing — Providing access to and facilitating the sharing of information, which enhances reach and creates shared awareness.
Shortfalls — Functional areas in which additional capability or coverage is required.
SIGINT — A broad range of operations that involves the interception and analysis of signals across the electromagnetic spectrum.
Sign a message — To use your private key to generate a digital signature as a means of proving that you generated, or certify, some message.
Signaling — The exchange of information specifically concerned with the establishment and control of connections, and with management, in a telecommunications network.
Signaling System 7 (SS7) — SS7 employs a dedicated 64-kb data circuit to carry packetized machine language messages about each call connected between and among machines of a network to achieve connection control.
Signal-to-interference ratio (SIR) — The ratio of the usable signal being transmitted to the noise or undesired signal.
Signature (digital) — A quantity (number) associated with a message that only someone with knowledge of your private key could have generated, but that can be verified through knowledge of your public key.
Signature dynamics — A form of electronic signature which involves the biometric recording of the pen dynamics used in signing the document.
Sign-off — The knowledge workers' actual signatures indicating they approve all of the business requirements.
SIL — Safety integrity level.
SIMM — Single inline memory module.
Simple Mail Transfer Protocol (SMTP) — The Internet e-mail protocol.
Simple Network Management Protocol (SNMP) — Provides remote administration of network devices; "simple" because the agent requires minimal software.
Simplicity — The simplest correct structure is the most desirable.
Simulation — The use of an executable model to represent the behavior of an object. During testing, the computational hardware, the external environment, and even the coding segments can be simulated.
Simultaneous processing — The execution of two or more computer program instructions at the same time in a multiprocessing environment.
Single inheritance — The language mechanism that allows the definition of a class to include the attributes and methods defined for, at most, one superclass.
Single sideband carrier — An amplitude modulation technique for encoding analog or digital data using either analog or digital transmission. Single sideband suppresses one sideband of the carrier frequency at the source. As such, less power is used and less bandwidth is required.
SIP — SMDS Interface Protocol.
Site — An immobile collection of systems at a specific location.
Site accreditation — An accreditation where all systems at a location are grouped into a single management entity. A DAA may determine that a site accreditation approach is optimal, given the number of information technology systems, major applications, networks, or unique operational characteristics. Site accreditation begins with all systems and their interoperability and major applications at the site being certified and accredited. The site is then accredited as a single entity, and an accreditation baseline is established.
Situation — Situation is a set of all security-relevant information. The decision of an entity on which security services it requires is based on the situation.
Skill words — Nouns and adjectives used by organizations to describe job skills that should be woven into the text of applicants' resumes.
Skin effect — The concept that high-frequency energy travels only on the outside skin of a conductor and does not penetrate into it any great distance.
Slack space — The unused space in a group of disk sectors. Or, the difference in empty bytes of the space that is allocated in clusters minus the actual size of the data files.
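The Slack space entry above is easy to see with a small worked example; the cluster and file sizes in this Python sketch are hypothetical.

cluster_size = 4096                                   # bytes per allocation cluster
file_size = 10_000                                    # actual size of the stored file

clusters_allocated = -(-file_size // cluster_size)    # ceiling division: 3 clusters
allocated_bytes = clusters_allocated * cluster_size   # 12,288 bytes reserved on disk
slack = allocated_bytes - file_size                   # 2,288 bytes of slack after the file data
print(clusters_allocated, allocated_bytes, slack)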
SLARP — Serial Link Address Resolution Protocol.
Slave computer — A front-end processor that handles input and output functions for a host computer.
SDLC — (1) Systems development life cycle. (2) Synchronous Data Link Control.
SLIP — Serial Line Internet Protocol.
Small Health Plan — Under HIPAA, this is a health plan with annual receipts of $5 million or less. Also see Part II, 45 CFR 160.103.
Smartcard — A small computer the size of a credit card that is used to perform functions such as identification and authentication.
SMDS — Switched Multi-megabit Data Service.
SML — Strength of mechanism; a rating used by the IA Technical Framework to rate the strength or robustness required for a security mechanism. Currently, three ratings are defined: SML1 — low, SML2 — medium, and SML3 — high. The SML is derived as a function of the value of the information being protected and the perceived threat to it. Compare with SOF.
SMR — Security management, security management roles.
SMTP — Simple Mail Transfer Protocol.
SNA — (1) Survivable network analysis method, developed by the CERT/CC. (2) Systems Network Architecture.
SNAP — Subnetwork Access Protocol.
SNF — Skilled Nursing Facility.
Sniffing — An attack that captures sensitive pieces of information, such as a password, passing through the network.
SNIP — Strategic National Implementation Process; sponsored by WEDI.
SNMP — Simple Network Management Protocol.
SNOMED — Under HIPAA, Systematized Nomenclature of Medicine.
Sociability — The ability of intelligent agents to confer with each other.
Social engineering — An attack based on deceiving users or administrators at the target site; for example, a person who illegally enters computer systems by persuading an authorized person to reveal IDs, passwords, and other confidential information.
Socket — A pairing of an IP address and a port number. See port.
SOF — Strength of function; a rating used by the Common Criteria (ISO/IEC 15408) to rate the strength or robustness required for a security mechanism. Currently, three ratings are defined: basic, medium, and high. The SOF is derived as a function of the value of the information being protected and the perceived threat to it. Compare with SML.
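The "Socket" entry above can be illustrated with Python's standard socket module. This sketch assumes outbound network access; example.com is used only as a placeholder host, and the local port shown will be whatever ephemeral port the operating system assigns.

    import socket

    # A socket is the pairing of an IP address and a port number; reaching the
    # same host on port 80 and on port 443 uses two different sockets.
    with socket.create_connection(("example.com", 80), timeout=5) as conn:
        local_ip, local_port = conn.getsockname()
        remote_ip, remote_port = conn.getpeername()
        print(f"local socket  {local_ip}:{local_port}")
        print(f"remote socket {remote_ip}:{remote_port}")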
Softlifting — Illegal copying of licensed software for personal use.
Software — Computer programs, procedures, rules, and possibly documentation and data pertaining to the operation of the computer system.
Software integrity level — The integrity level of a software item.
Software life cycle — The period of time beginning when a software product is conceived and ending when the product is no longer available for use. The software life cycle is typically broken into phases (e.g., requirements, design, programming, testing, conversion, operations, and maintenance).
Software maintenance — All changes, corrections, and enhancements that occur after an application has been placed into production.
Software piracy — Illegally copying software.
Software reliability — A measure of confidence that the software produces accurate and consistent results that are repeatable, under low, normal, and peak loads, in the intended operational environment.
Software reliability case — A systematic means of gathering, organizing, analyzing, and reporting the data needed by internal, contractual, regulatory, and Certification Authorities to confirm that a system has met specified reliability requirements and is fit for use in the intended operational environment; includes assumptions, claims, evidence, and arguments. A software reliability case is a component in a system reliability case.
Software safety — Design features and operational procedures that ensure that a product performs predictably under normal and abnormal conditions, that the likelihood of an unplanned event occurring is minimized, and that its consequences are controlled and contained, thereby preventing injury or death and environmental or property damage, whether accidental or intentional.
Software safety case — A systematic means of gathering, organizing, analyzing, and reporting the data needed by internal, contractual, regulatory, and Certification Authorities to confirm that a system has met specified safety requirements and is safe for use in the intended operational environment; includes assumptions, claims, evidence, and arguments. A software safety case is a component in a system safety case.
Software suite — Bundled software that comes from the same publisher and costs less than buying all the software pieces individually.
SONET — Synchronous Optical NETwork.
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® SOP — Standard operating procedure. Sort — The arrangement of data in ascending or descending, alphabetic or numeric order. SOS — Identification and authentication specification of secrets. Source document — The form that is used for the initial recording of data prior to system input. Source program — The computer program that is coded in an assembler or higher-level programming language. SOW — See Statement of Work. Space diversity — Protection of a radio signal by providing a separate antenna located a few feet below the regular antenna on the same tower to assume the load when the regular transmission path on the tower fades. Space division multiple access (SDMA) — Intelligent antenna systems use this access method to increase the capacity of cellular radio networks by separating frequencies within a cell site and allowing the same frequencies to be reused. Spam — (1) (verb) The act of posting the same information repeatedly on inappropriate places or too many places so as to overburden the network. (2) (noun) Unsolicited e-mail. Spam filter — Program that detects and rejects spam by looking for certain keywords, phrases, or Internet addresses. Spatial domain — The image plane itself; the collection of pixels that composes an image. Special agent — A special agent in the Diplomatic Security Service (DSS) is a sworn officer of the Department of State or the Foreign Service, whose position is designated as either a GS-1811 or FS-2501, and has been issued special agent credentials by the Director of the Diplomatic Security Service to perform those specific law enforcement duties as defined in 22 U.S.C. 2712. Special investigators — Special investigators are contracted by the Department of State. They perform various noncriminal investigative functions in DS headquarters, field, and resident offices. They are not members of the Diplomatic Security Service and are not authorized to conduct criminal investigations. Specification — A description of a problem or subject that will be implemented in a computational or other system. The specification includes both a description of the subject and aspects of the implementation that affect its representation. Also, the process and analysis and design that 986
Glossary results in a description of a problem or subject that can be implemented in a computation or other system. Spectrum — The radio frequency that is available for personal, commercial, and military use. SPF — Shortest path first. Spherical zone of control — A volume of space in which uncleared personnel must be escorted and which extends a specific distance in all directions from TEMPEST equipment processing classified information or from a shielded enclosure. SPI — Security parameter index; part of IPSec. SPID — Service provider identifier (ISDN). Split knowledge — A security technique in which two or more entities separately hold data items that individually convey no knowledge of the information that results from combining the items. A condition under which two or more entities separately have key components that individually convey no knowledge of the plaintext key that will be produced when the key components are combined in the cryptographic module. SPM — Development, security policy modeling. Sponsor — See plan sponsor. Spoof — To make a transmission appear to come from a user other than the user who performed the action. Spoofing — (1) Faking the sending address of a transmission to gain illegal entry into a secure system. (2) The deliberate inducement of a user or resource to take incorrect action. Spooling — A technique that maximizes processing speed through the temporary use of high-speed storage devices. Input files are transferred from slower, permanent storage and queued in the high-speed devices to await processing, or output files are queued in high-speed devices to await transfer to slower storage devices. SPP — Sequenced Packet Protocol (Vines). Spread-spectrum image steganography — A method of steganographic communication that uses digital imagery as the cover signal. Spread-spectrum techniques — The method of hiding a small or narrowband signal (message) in a large or wide-band cover. Spreadsheet software — Computer software that divides a display screen into a large grid. This grid allows the user to enter labels and values that can be manipulated or analyzed. 987
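A minimal sketch of the "Split knowledge" entry above, assuming Python: an XOR-based split in which each key component, taken alone, conveys no knowledge of the original key, and only the combination reproduces it.

    import secrets

    def split_key(key: bytes):
        # The first component is random; the second is the key XOR that random
        # value, so each component on its own is statistically independent of
        # the plaintext key.
        component1 = secrets.token_bytes(len(key))
        component2 = bytes(a ^ b for a, b in zip(key, component1))
        return component1, component2

    def combine(component1: bytes, component2: bytes) -> bytes:
        return bytes(a ^ b for a, b in zip(component1, component2))

    key = secrets.token_bytes(16)          # e.g., a 128-bit key
    part_a, part_b = split_key(key)
    assert combine(part_a, part_b) == key  # only the combination yields the key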
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® SPX — Sequenced Packet Exchange (Novell). Spyware — Any software that covertly gathers user information through the user’s Internet connection without his or her knowledge, usually for advertising purposes. Spyware applications are typically bundled as a hidden component of freeware or shareware programs that can be downloaded from the Internet; however, it should be noted that the majority of shareware and freeware applications do not come with spyware. Once installed, the spyware monitors user activity on the Internet and transmits that information in the background to someone else. Spyware can also gather information about e-mail addresses and even passwords and credit card numbers. Also known as adware. SQL — See Structured Query Language. SRAM — Static RAM. SRB — Source route bridging. SRE — (1) Protection Profile evaluation, explicitly stated IT security requirements. (2) Security Target evaluation, explicitly stated IT security requirements. SRTB — Source route transparent bridging. SRTP — Sequenced Routing Update Protocol (Vines). SS7 — Signaling System 7. SSAP — Source Service Access Point (LLC). SSH — Secure Shell. SSL — Secure Sockets Layer. SSL3 — Secure Sockets Layer protocol; see also TLS1. SSN — Social Security number. SSO — (1) Single Sign-On, or(2) Standards Setting Organization. SSO — See standard-setting organization. SSP — In Common Criteria, protection of the TSF, state synchrony protocol. ST — Security target. Stacked-job processing — A computer processing technique in which programs and data awaiting processing are placed into a queue and executed sequentially. Stand-alone root — A certificate authority that signs its own certificates and does not rely on a directory service to authenticate users. 988
Glossary Standard — Mandatory statement of minimum requirements that support some part of a policy. A set of rules or specifications that, when taken together, define a software or hardware device. A standard is also an acknowledged basis for comparing or measuring something. Standards are important because new technology will only take root once a group of specifications is agreed upon. Standard Generalized Markup Language (SGML) — An international standard for encoding textual information that specifies particular ways to annotate text documents separating the structure of the document from the information content. HTML is a generalized form of SGML. Standard-Setting Organization (SSO) — See Part II, 45 CFR 160.103. Standard transaction — Under HIPAA, this is a transaction that complies with the applicable HIPAA standard. Also see Part II, 45 CFR 162.103. Standard Transaction Format Compliance System (STFCS) — An EHNACsponsored WPC-hosted HIPAA compliance certification service. Standardization — The commander’s information requirements must not be compromised by the use of nonstandard equipment. Standards audit — The check to ensure that applicable standards are properly used. State — A static condition of an object or group of objects. State space — The total collection of possible states for a particular object or group of objects. State transition — A change of state for an object; something that can be signaled by an event. State Uniform Billing Committee (SUBC) — Under HIPAA, a state-specific affiliate of the NUBC. State variable — A property or type that is part of an identified state of a given type. Statement of Work (SOW) — Under HIPAA, a document describing the specific tasks and methodologies that will be followed to satisfy the requirements of an associated contract or MOU. Statement testing — A test method of satisfying the criterion that each statement in a program be executed at least once during the program testing. Static analysis — The direct analysis of the form and structure of a product that does not require its execution. It can be applied to the requirements, design, or code. 989
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® Static data — Data that, once established, remains constant. Statistical time division multiplexing (STDM) — This form of multiplexing uses all available time slots to send significant information and handles inbound data on a first-come, first-served basis. Steering committee — A management committee assembled to sponsor and manages various projects such as information security program. Steganalysis — The art of detecting and neutralizing steganographic messages. Steganalyst — One who applies steganalysis with the intent of discovering hidden information. Steganographic file system — A method of storing files in such a way that encrypts data and hides it such that it cannot be proven to be there. Steganography — (1) The method of concealing the existence of a message or data within seemingly innocent covers. (2) A technology used to embed information in audio and graphical material. The audio and graphical materials appear unaltered until a steganography tool is used to reveal the hidden message. Stegokey — A key that allows extraction of the secret information out of the cover. Stego-medium — The resulting combination of a cover medium and embedded message and a stego key. Stego-only attack — An attack where only the stego-object is available for analysis. STFCS — See Standard Transaction Format Compliance System. STG — Security audit event storage. StirMark — A method of testing the robustness of a watermark. StirMark is based on the premise that many watermarks can survive a simple manipulation to the file, but not a combination of manipulations. It simulates a process similar to what would happen if an image was printed and then scanned back into the computer by stretching, shearing, shifting, and rotating an image by a tiny random amount. STM — Protection of the TSF, time stamps. Storage media — Floppy diskettes, tapes, hard disk drives, or any devices that store automated information. Storage object — An object that supports both read and write accesses. 990
Glossary Stored-program concept — The location of the instructions placed in the memory of a common controlled switching unit and to which it refers while processing a call. Strategic management — Provides an organization with overall direction and guidance. Strategic National Implementation Process (SNIP) — Under HIPAA, a WEDI program for helping the healthcare industry identify and resolve HIPAA implementation issues. Stream cipher — An encryption method in which a cryptographic key and an algorithm are applied to each bit in a datastream, one bit at a time. Strength — The power of the information assurance protection. Strength of Mechanism (SML) — A scale for measuring the relative strength of a security mechanism hierarchically ordered from SML 1 through SML 3. Strike warfare — A primary warfare mission area dealing with preemptive or retaliatory offensive strikes against inland or coastal ground targets. Strong authentication — Strong authentication refers to systems that require multiple factors for authentication and use advanced technology, such as dynamic passwords or digital certificates, to verify a user’s identity. Structurally Object Oriented — The data model allows definitions of data structures to represent entities of any complexity (complex objects). Structured data — See data-related concepts. Structured design — A methodology for designing systems and programs through a top-down, hierarchical segmentation. Structured programming — The process of writing computer programs using logical, hierarchical control structures to carry out processing. Structured Query Language (SQL) — The international standard language for defining and accessing a relational database. SUBC — See State Uniform Billing Committee. Subject — An active entity, generally in the form of a person, process, or device that causes information to flow among objects or changes the system state. Subjective information — Attempts to describe something that is unknown. Subnet — A portion of a network, which may be a physically independent network segment, that shares a network address with other portions of 991
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® the network and is distinguished by a subnet number. A subnet is to a network what a network is to the Internet. Subnet address — The subnet portion of an IP address. In a subnetted network, the host portion of an IP address is split into a subnet and a host portion using an address (subnet) mask. Subroutine — A segment of code that can be called up by a program and executed at any time from any point. Subscriber — An entity (associated with one or more users) that is engaged in a subscription with a telecommunications service provider (TSP). The subscriber is allowed to subscribe to and unsubscribe from services, to register a user or a list of users authorized to enjoy these services, and also to set the limits relative to the use that associated users make of these services. Subscriber loop — The circuit that connects the telephone company’s central office to the demarcation point on the customer’s premises. The circuit is most likely a pair of wires. Subscript — A value used in programming to reference an item of data stored in a table. Substitution — The steganographic method of encoding information by replacing insignificant bits from the cover with the bits from the embedded message. Substitution-Linear Transformation Network — A practical architecture based on Shannon’s concepts for the secure, practical ciphers with a network structure consisting of a sequence of rounds of small substitutions, easily implemented by table lookup and connected by bit position permutations or linear transpositions. Subsystem — A major subdivision or component of an information system consisting of information, information technology, and personnel that performs one or more specific functions. Suite — A named set of resources and interfaces; a collection of resources; not a physical space. Summary Health Information — See Part II, 45 CFR 164.504. Superclass — A class from which another class inherits attributes and methods. Supercomputer — The fastest, most powerful, and expensive type of computer. 992
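The "Subnet" and "Subnet address" entries above can be illustrated with Python's standard ipaddress module. The 192.0.2.0/24 block and the /26 mask below are arbitrary documentation values, not addresses from the text.

    import ipaddress

    # Borrowing two host bits from a /24 network yields four /26 subnets.
    network = ipaddress.ip_network("192.0.2.0/24")
    for subnet in network.subnets(new_prefix=26):
        print(subnet)                        # 192.0.2.0/26, 192.0.2.64/26, ...

    # Applying the /26 subnet mask to a host address reveals its subnet address.
    host = ipaddress.ip_interface("192.0.2.77/26")
    print(host.network.network_address)      # 192.0.2.64 — the subnet portion
    print(host.network.netmask)              # 255.255.255.192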
Glossary SuperFrame — A synchronization-framing format for a T1. In a T1 circuit, each of the 24 DS0 channels is sampled every 125 microseconds and 8 bits are taken from each. If you multiply the 8 bits by the 24 channels, you get 192 bits in a chain; and then add one bit for timing, and you get 193 total bits in one frame. Twelve frames comprise the SuperFrame. A newer version of this T1 formatting is called Extended SuperFrame (ESF). Supply chain — The paths reaching out to all of a company’s suppliers of parts and services. Supply-chain management (SCM) system — Tracks inventory and information among business processes and across companies. Support mission area — Synonymous with support warfare mission area. Areas of naval warfare that provide support functions that cut across the boundaries of all (or most) other warfare mission areas. Supraliminal channel — A feature of an image that is impossible to remove without gross modifications, that is, a visible watermark. Survivability — The capability of a system to fulfill its mission, in a timely manner, in the presence of attacks, failures, or accidents. A survivability assessment covers the full threat control chronology. SVC — Switched virtual circuit. Swapping — A method of computer processing in which programs not actively being processed are held on special storage devices and alternated in and out of memory with other programs according to priority. SWG — Under HIPAA, sub-workgroup. Switch — A mechanical, electrical, or electronic device that opens or closes circuits, completes or breaks an electrical path, or selects paths or circuits. A switch looks at incoming data to determine the destination address. Based on that address, a transmission path is set up through the switching matrix between the incoming and outgoing physical communications ports and links. Switch control point (SCP), also known as service control point (SCP) — Provides computer services, such as database information, that defines the possible services and their logic. Switched lobe (SL) — Also called switch beam. Smart antennas use power patterns that are more concentrated and directed than the regular antenna. The far end device receives a much more powerful signal from the antenna. 993
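The SuperFrame arithmetic above (24 channels × 8 bits, plus 1 framing bit, giving 193 bits per frame) can be checked with a short worked example; the figures below simply restate the entry and the T-1 line rate of 1.544 Mbps.

    channels = 24            # DS0 channels in a T1
    bits_per_sample = 8      # bits taken from each channel every 125 microseconds
    framing_bit = 1          # one timing/framing bit per frame

    frame_bits = channels * bits_per_sample + framing_bit   # 193 bits per frame
    frames_per_second = 8000                                 # one frame per 125-microsecond sample
    line_rate_bps = frame_bits * frames_per_second           # 1,544,000 bps = 1.544 Mbps

    frames_per_superframe = 12   # SuperFrame; Extended SuperFrame (ESF) groups 24
    print(frame_bits, line_rate_bps, frames_per_superframe)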
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® Switched virtual circuit (SVC) — A virtual circuit connection established across a network on an as-needed basis and lasting only for the duration of the transfer. Switching costs — Costs that can make customers reluctant to switch to another product or service. Symbolic evaluation — The process of analyzing the path of program execution through the use of symbolic expressions. Symbolic execution — The analytical technique of dissecting each program path. Symmetric key encryption — In symmetric key encryption: two trading partners share one or more secrets, and no one else can read their messages. A different key (or set of keys) is needed for each pair of trading partners. The same key is used for encryption and decryption. Synchronous — A protocol of transmitting data over a network where the sending and receiving terminals are kept in synchronization with each other by a clock signal embedded in the data. Synchronous Optical NETwork (SONET) — SONET is an international standard for high-speed data communications over fiber-optic media. The transmission rates range from 51.84 Mbps to 2.5 Gbps. Syntax — The statement formats and rules for the use of a programming language. System — A series of related procedures designed to perform a specific task. System accreditation — The official authorization granted to an information system to process sensitive information in its operational environment based on a comprehensive security evaluation of the system’s hardware, firmware, and software security design, configuration, and implementation, and of the other system procedural, administrative, physical, TEMPEST, personnel, and communications security controls. System analysis — The process of studying information requirements and preparing a set of functional specifications that identify what a new or replacement system should accomplish. System attributes — The qualities, characteristics, and distinctive features of information systems. System bus — The electronic pathways that move information between basic components on the motherboard, including the pathway between the CPU and RAM. 994
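A minimal sketch of the "Symmetric key encryption" entry above, assuming Python and the third-party pyca/cryptography package: the same shared key both encrypts and decrypts, so each pair of trading partners needs its own key.

    from cryptography.fernet import Fernet

    # Both trading partners must hold the same secret key.
    shared_key = Fernet.generate_key()
    cipher = Fernet(shared_key)

    token = cipher.encrypt(b"purchase order 1138")
    assert cipher.decrypt(token) == b"purchase order 1138"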
Glossary System certification — The technical evaluation of a system’s security features that established the extent to which a particular information system’s design and implementation meets a set of specified security requirements. System design — The development of a plan for implementing a set of functional requirements as an operational system. System development life cycle (SDLC) — The scope of activities associated with a system, encompassing the system’s initiation, development and acquisition, implementation, operation and maintenance, and, ultimately, its disposal, which instigates another system initiation. System entity — A system subject (user or process) or object. System environment — The unique technical and operating characteristics of an IT system and its associated environment, including the hardware, software, firmware, communications capability, organization, and physical location. System high — A system is operating at system high security mode when the system and all of its local and remote peripherals are protected in accordance with the requirements for the highest classification category and types of material contained in the system. All users having access to the system have a security clearance, but not necessarily a need-to-know for all material contained in the system. In this mode, the design and operation of the system must provide for the control of concurrently available classified material in the system on the basis of need-to-know. System high mode — IS security mode of operation wherein each user, with direct or indirect access to the IS, its peripherals, remote terminals, or remote hosts, has all of the following: (a) valid security clearance for all information within an IS; (b) formal access approval and signed nondisclosure agreements for all the information stored and processed (including all compartments and special access programs); and (c) valid need-to-know for some of the information contained within the IS. System integrity — The attribute of an IS when it performs its intended function in an unimpaired manner, free from deliberate or inadvertent unauthorized manipulation of the system. System integrity procedures — Procedures established to ensure that hardware, software, firmware, and data in a computer system maintain their state of original integrity and are not tampered with by unauthorized personnel. System interconnection — The direct connection of two or more information technology systems for the purpose of sharing data and other information resources. 995
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® System log — An audit trail of relevant system happenings (e.g., transaction entries, database changes). System owner — Official having responsibility for the overall procurement, development, integration, modification, or operation and maintenance of an information system. System reliability — The composite of hardware and software reliability for a specified operational environment. System reliability measurements combine qualitative and quantitative assessments. System safety — The application of engineering and management principles, criteria, and techniques to achieve acceptable mishap risk, within the constraints of operational effectiveness, time, and cost, throughout the life of a system. System safety engineering — An engineering discipline that employs specialized professional knowledge and skills in applying scientific and engineering principles, criteria, and techniques to identify and eliminate hazards, in order to reduce the associated mishap risk. System Security Authorization Agreement (SSAA) — The SSAA is a formal agreement among the DAA(s), the Certifier, user representative, and program manager. It is used throughout the entire DITSCAP to guide actions, document decisions, specify IA requirements, document certification tailoring and level-of-effort, identify potential solutions, and maintain operational systems security. System security plan — Formal document that provides an overview of the security requirements for the information system and describes the security controls in place or planned for meeting those requirements. System-specific security control — A security control for an information system that has not been designated as a common security control. System survivability — The ability to continue to make resources available, despite adverse circumstances, including hardware malfunctions, accidental software errors, accidental and malicious intentional user activities, and environmental hazards such as EMC/EMI/RFI. System test — The process of testing an integrated hardware/software system to verify that the system meets its specified requirements. Systematic failure — Failures that result from an error of omission, error of commission, or operational error during a life-cycle activity. Systematic safety integrity — A qualitative measure or estimate of the failure rate due to systematic failures in a dangerous mode of failure. Systems analysis — The process of studying information requirements and preparing a set of functional specifications that identify what a new or replacement system should accomplish. 996
Systems architecture — The fundamental and unifying system structure defined in terms of system elements, interfaces, processes, constraints, and behaviors.
Systems design — The development of a plan for implementing a set of functional requirements as an operational system.
Systems development life cycle (SDLC) — (1) The classical operational development methodology that typically includes the phases of requirements gathering, analysis, design, programming, testing, integration, and implementation. (2) The systematic systems building process consisting of specific phases; for example, preliminary investigation, requirements determination, systems analysis, systems design, systems development, and systems implementation.
Systems engineering — An integrated composite of people, products, and processes that provides a capability or satisfies a stated need or objective.
Systems Network Architecture (SNA) — IBM's proprietary network architecture.
Systems security — There are three parts to systems security: (1) Computer Security (COMPUSec) is composed of measures and controls that protect an AIS against denial of service and unauthorized disclosure, modification, or destruction of AIS and data (information). (2) Communications Security (COMSec) comprises measures and controls taken to deny unauthorized persons information derived from telecommunications of the U.S. government. Government communications regularly travel by computer networks, telephone systems, and radio calls. (3) Information Systems Security (INFOSec) comprises controls and measures taken to protect telecommunications systems, automated information systems, and the information they process, transmit, and store.
Systems software — The programs and other processing routines that control and activate the computer hardware, facilitating its use.
T-1 — Trunk Level 1. A digital transmission link with a total signaling speed of 1.544 Mbps.
TA — Terminal adapter.
TA/NT1 — Terminal Adapter/Network Termination 1 (ISDN).
TCB — Trusted Computing Base.
TAB — TOE access, TOE access banners.
Table — An area of computer memory containing multiple storage locations that can be referenced by the same name.
Table driven — An indexed file in which tables containing record keys (i.e., disk addresses) are used to retrieve records.
TACACS (Terminal Access Controller Access Control System) — An authentication protocol, developed by the DDN community, that provides remote access authentication and related services, such as event logging. User passwords are administered in a central database rather than in individual routers, providing an easily scalable network security solution.
TACACS+ — Terminal Access Controller Access Control System Plus. An authentication protocol, often used by remote-access servers or single (reduced) sign-on implementations. TACACS and TACACS+ are proprietary protocols from CISCO®.
Tactical management — Develops the goals and strategies outlined by strategic management.
TAG — Under HIPAA, Technical Advisory Group.
TAH — TOE access, TOE access history.
Tampering — An intentionally caused event that results in modification of a system, its intended behavior, or data.
Tandem switch — A tandem switch connects one trunk to another; an intermediate switch or connection between an originating telephone call location and the final destination of the call. The tandem point passes the call along.
Tape management system — Systems software that assesses the given information on jobs to be run and produces information for operators and librarians regarding which data resources (e.g., tapes and disks) are needed for job execution.
Target identification — Identity that relates to a specific lawful authorization as such. This may be a serial number or a combination of characters and numbers. It is not related to the denoted interception subject or subjects.
Target identity — The identity associated with a target service used by the interception subject.
Target of Evaluation (TOE) — Under Common Criteria, an IT product or system that is subject to an evaluation.
Target service — Telecommunications service associated with an interception subject and usually specified in a lawful authorization for interception. There may be more than one target service associated with a single interception subject.
Glossary Task management system — It allocates the processor unit resources according to priority scheme or other assignment methods. TAT — Life-cycle support, tools, and techniques. TCB — Trusted computing base. TCP — Transport Control Protocol. TCP sequence prediction — Fools applications using IP addresses for authentication (like the UNIX rlogin and rsh commands) into thinking that forged packets actually come from trusted machines. TCP/IP (Transmission Control Protocol/Internet Protocol) — A set of communications protocols that encompasses media access, packet transport, session communications, file transfer, electronic mail, terminal emulation, remote file access, and network management. TCP/IP provides the basis for the Internet. The structure of TCP/IP is as follows: Process layer clients: FTP, Telnet, SMTP, NFS, DNS; Transport layer service providers: TCP (FTP, Telnet, SMTP), UDP (NFS, DNS); Network layer: IP (TCP, UDP); and Access layer: Ethernet (IP), Token ring (IP). TCSEC — Trusted Computer Systems Evaluation Criteria. TDC — In Common Criteria, protection of the TSF: inter-TSF TSF data consistency. TDM — Time division multiplexing. TE — Terminal equipment. TE1 and TE2 — Terminal endpoints. Technical architecture — Defines the hardware, software, and telecommunications equipment required to run the system. Technical Certification — A formal assurance by the Undersecretary for Management to Congress that standards are met that apply to an examination, installation, test, or other process involved in providing security for equipment, systems, or facilities. Certifications may include exceptions and are issued by the office or person performing the work in which the standards apply. Technical controls — The security controls (i.e., safeguards or countermeasures) for an information system that are primarily implemented and executed by the information system through mechanisms contained in the hardware, software, or firmware components of the system. Technical penetration — An unauthorized RF, acoustic, or emanations intercept of information. This intercept may occur along a transmission path which is (1) known to the source, (2) fortuitous and unknown to the source, or (3) clandestinely established. 999
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® Technical steganography — The method of steganography where a tool, device, or method is used to conceal a message. Examples are invisible inks and microdots. Technical surveillance — The act of establishing a technical penetration and intercepting information without authorization. Technological attack — An attack that can be perpetrated by circumventing or nullifying hardware, software, and firmware access control mechanisms rather than by subverting system personnel or other users. Technology-literate knowledge worker — A person who knows how and when to apply technology. Telecommunications — Any transmission, emission, or reception of signs, signals, writing, images, sounds, or other information by wire, radio, visual, satellite, or electromagnetic systems. Telecommunications carrier — An entity engaged in the transmission or switching of wire or electronic communications as a common carrier for hire that. Telecommunications device — A tool used to send information to and receive it from another person or location. Telecommunications service — The offering of telecommunications for a fee directly to the public or to such classes of users as to be effectively available directly to the public, regardless of the facilities used. Telecommunications service provider (TSP) — Umbrella term for APs, SPs, SvPs, and NWOs. Telecommunications Standardization Sector of the International Telecommunications Union (ITU-TSS) — A unit of the International Telecommunications Union (ITU) of the United Nations. An organization with representatives from the post office, telegraph, and telecommunications agencies (PTT) of the world. ITU-TSS produces technical standards, known as recommendations, for all internationally controlled aspects of analog and digital communications. Telecommuting — The use of communications technologies (such as the Internet) to work in a place other than a central location. Teleprocessing — Information processing and transmission performed by an integrated system of telecommunications, computers, and person-tomachine interface equipment. Teleprocessing security — The protection that results from all measures designed to prevent deliberate, inadvertent, or unauthorized disclosure or acquisition of information stored in or transmitted by a teleprocessing system. 1000
Glossary Telnet — The virtual terminal protocol in the Internet suite of protocols. Allows users of one host to log into a remote host and interact as normal terminal users of that host. TEMPEST — The study and control of spurious electronic signals emitted from electronic equipment. TEMPEST is a classification of technology designed to minimize the electromagnetic emanations generated by computing devices. TEMPEST technology makes it difficult, if not impossible, to compromise confidentiality by capturing emanated information. TEMPEST-Approved Personal Computer (TPC) — A personal computer that is currently listed on the Preferred Products List (PPL) or Evaluated Products List (EPL). TEMPEST Certification — Nationally approved hardware that protects against the transmission of compromising emanations, that is, unintentional signals from information processing equipment which can disclose information being processed by the system. TEMPEST Equipment (or TEMPEST-Approved Equipment) — Equipment that has been designed or modified to suppress compromising signals. Such equipment is approved at the national level for U.S. classified applications after undergoing specific tests. National TEMPEST approval does not, of itself, mean that a device can be used within the foreign affairs community. Separate DS approval is required. TEMPEST hazard — A security anomaly that holds the potential for loss of classified information through compromising emanations. TEMPEST test — A field or laboratory examination of the electronic signal characteristics of equipment or systems for the presence of compromising emanations. Temporal masking — A form of masking that occurs when a weak signal is played immediately after a strong signal. Temporary advantage — An advantage that, sooner or later, the competition duplicates or leap frogs with a better system. Tenant Agency — A U.S. Government department or agency operating overseas as part of the U.S. foreign affairs community under the authority of a chief of mission. Excluded are military elements not under direct authority of the chief of mission. Terabyte (TB) — Roughly 1 trillion bytes. Terminal identification — The means used to establish the unique identification of a terminal by a computer system or network. Test condition — A detailed step the system must perform along with the expected result of the step. 1001
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® Test data — Data that simulates actual data to form and content and is used to evaluate a system or program before it is put into operation. Test data generators — Computer software tools that help generate files of data that can be used to test the execution and logic of application programs. Testing — The examination of the behavior of a program through its execution on sample data sets. Texture block coding — A method of watermarking that hides data within the continuous random texture patterns of an image. The technique is implemented by copying a region from a random texture pattern found in a picture to an area that has similar texture, resulting in a pair of identically textured regions in the picture. TFTP — Trivial File Transfer Protocol. TG — Under HIPAA, Task Group. The Prisoner’s Problem — A model for steganographic communication. Thin client — A workstation with a small amount of processing power and costing less than a full-powered workstation. Third-party ad servers — Companies that display banner advertisements on Web sites that you visit. These companies are often not the ones that own the Web site. Third-party administrator (TPA) — Under HIPAA, an entity that processes healthcare claims and performs related business functions for a health plan. Threat — The potential danger that a vulnerability may be exploited intentionally, triggered accidentally, or otherwise exercised. Threat agent — A means or method used to exploit a vulnerability in a system, operation, or facility. Threat analysis — A project to identify the threats that exist over key information and information technology. The threat analysis usually also defines the level of the threat and likelihood of that threat to materialize. Threat assessment — Process of formally evaluating the degree of threat to an information system and describing the nature of the threat. Threat control measure — (1) A proactive design or operational procedure, action, or device used to reduce the risk caused by a threat. (2) A proactive design technique, device, or method designed to eliminate or mitigate hazards, and unsafe and unsecure conditions, modes and states. 1002
Glossary Threat monitoring — The analysis assessment and review of audit trails and other data collected to search out system events that may constitute violations or precipitate incidents involving data privacy. Threat perspective — The perspective from which vulnerability/threat analyses are conducted (system owner, administrator, certifier, customer, etc.); also referred to as risk dimension. Threat source — Either (1) intent and method targeted at the intentional exploitation of a vulnerability or (2) the situation and method that may accidentally trigger a vulnerability. Three generic strategies — Cost leadership, differentiation, and a focused strategy. Three-dimensional (3-D) technology — Presentations of information that give the user the illusion that the object viewed is actually in the room with the user. Three-way handshake — The process whereby two protocol entities synchronize during connection establishment. Thrill-seeker hacker — A hacker who breaks into computer systems for fun. Throughput — The process of measuring the amount of work a computer system can handle within a specified timeframe. TIFF — Tagged Image Format. Time bomb — A Trojan horse that will trigger when a particular time or date is reached. Time division multiple access (TDMA) — One of several technologies used to separate multiple conversation transmissions over a finite frequency allocation of through-the-air bandwidth. TDMA is used to allocate a discrete amount of frequency bandwidth to each user in order to permit many simultaneous conversations. However, each caller is assigned a specific time slot for transmission. Time division multiplexing (TDM) — A technique for transmitting a number of separate data, voice, and video signals simultaneously over one communications medium by interleaving a piece of each signal one after another. Time domain — Method of representing a signal where the vertical deflection is the signals amplitude, and the horizontal deflection is the time variable. Time stamping — An electronic equivalent of mail franking. 1003
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® Time-dependent password — A password that is valid only at a certain time of day or during a specified timeframe. Timeliness — The ability to ensure the delivery of required information within a defined time frame. Availability of required information in time to make decisions and permit execution within an adversary’s decision and execution cycle. Timely — In-time, reasonable access to data or system capabilities. Timestamping — The practice of tagging each record with some moment in time, usually when the record was created or when the record was passed from one environment to another. Tip side — Side of the line when measured with a voltmeter to an earth ground that should read zero voltage. TLS1 — Transport Layer Security protocol. TNI — Trusted network interpretation of TCSEC; see NCSC-TG011.145,146. TOCTTU (time of check to time of use) — The time interval between when a user is authenticated and when he accesses specific system resources. TOE — Under Common Criteria, target of evaluation. TOE security functions (TSF) — Under Common Criteria, all parts of the TOE that have to be relied upon for enforcement of the TSP. TOE security policy (TSP) — Under Common Criteria, the rules defining the required security behavior of a TOE. Token passing — A network access method that uses a distinctive character sequence as a symbol (token), which is passed from node to node, indicating when to begin transmission. Any node can remove the token, begin transmission, and replace the token when it is finished. Token ring — A type of area network in which the devices are arranged in a virtual ring in which the devices use a particular type of message called a token to communicate with one another. Top-level domain — Three-letter extension of a Web site address that identifies its type. Total risk — The potential for the occurrence of an adverse event if no mitigating action is taken (i.e., the potential for any applicable threat to exploit a system vulnerability). See also acceptable risk, residual risk, minimum level of protection. 1004
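The TOCTTU entry above describes the interval between when an authorization check is made and when the resource is actually used. The classic file-access race below is one common illustration; it is a Python sketch using a hypothetical path, not an example drawn from the text.

    import os

    report_path = "/tmp/nightly_report.txt"   # hypothetical path

    # Time of check: the permission test passes ...
    if os.access(report_path, os.R_OK):
        # ... time of use: in the window between the check and the open, another
        # process can replace the file (or the link it points to), so the earlier
        # decision may no longer describe what is actually opened.
        with open(report_path) as fh:
            data = fh.read()

    # A narrower window: skip the separate check and handle failure at use time.
    try:
        with open(report_path) as fh:
            data = fh.read()
    except OSError:
        data = None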
Glossary Touch screen — Special screen the user touches to perform a particular function. Touchpad — Popular on notebook computers, a stationary mouse that is touched with the finger. TPA — See third-party administrator, trading partner agreement. Traceroute — (1) A program available on many systems that traces the path a packet takes to a destination. It is mostly used to debug routing problems between hosts. There is also a traceroute protocol defined in RFC 1393. (2) The traceroute or finger commands to run on the source machine (attacking machine) to gain more information about the attacker. Trackball — An upside-down, stationary mouse in which the ball is moved instead of the device. Used mainly for notebooks. Trademark — A registered word, letter, or device granting the owner exclusive rights to sell or distribute the goods to which it is applied. Trading partner agreement — A contractual arrangement that specifies the legal terms and conditions under which parties operate when conducting transactions by the use of EDI. It may cover such things as validity and formation of contract; admissibility in evidence of EDI messages; processing and acknowledgment of receipt of EDI messages; security; confidentiality and protection of personal data; recording and storage of EDI messages; operational requirements for EDI — message standards, codes, transaction and operations logs; technical specifications and requirements; liability, including use of intermediaries and third-party service providers; dispute resolution; and applicable law. Traditional technology approach — Has two primary views of any system: (i) information and procedures and (ii) it keeps these two views separate and distinct at all times. Traffic analysis — A type of security threat that occurs when an outside entity is able to monitor and analyze traffic patterns on a network. Traffic flow confidentiality — A confidentiality service to protect against traffic analysis. Traffic flow security — The protection that results from those features in some cryptography equipment that conceal the presence of valid messages on a communications circuit, usually by causing the circuit to appear busy at all times or by encrypting the source and destination addresses of valid messages. Traffic security — A collection of techniques for concealing information about a message to include existence, sender, receivers, and duration. 1005
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® Methods of traffic security include call-sign changes, dummy messages, and radio silence. Training — Teaching people the knowledge and skills that will enable them to perform their jobs more effectively. Training assessment — An evaluation of the training efforts. Training effectiveness — A measurement of what a given student has learned from a specific course or training event, that is, learning effectiveness; a pattern of student outcomes following a specific course or training event; teaching effectiveness; and the value of the specific class or training event, compared to other options in the context of an agency’s overall IT security training program; program effectiveness. Training effectiveness evaluation — Information collected to assist employees and their supervisors in assessing individual students’ subsequent on-the-job performance, to provide trend data to assist trainers in improving both learning and teaching, and to be used in return-on investment statistics to enable responsible officials to allocate limited resources in a thoughtful, strategic manner among the spectrum of IT security awareness, security literacy, training, and education options for optimal results among the workforce as a whole. Training matrix — A table that relates role categories relative to IT systems. Transaction — A transaction is an activity or request to a computer. Purchase orders, changes, additions, and deletions are examples of transactions that are recorded in a business information environment. Transaction Change Request System — A system established under HIPAA for accepting and tracking change requests for any of the HIPAA mandated transactions standards via a single Web site. See www.hipaadsmo.org. Transaction file — A collection of records containing data generated from the current business activity. Transaction path — One of many possible combinations of a series of discrete activities that cause an event to take place. All discrete activities in a transaction path are logically possible. Qualitative or quantitative probability measures can be assigned to a transaction path and its individual activities. Transactional processing system (TPS) — The processing of transactions as they occur rather than in batches. Transceiver — The physical device that connects a host interface to a local area network, such as Ethernet. Ethernet transceivers contain electronics that apply signals to the cable and sense collisions. 1006
Glossary Transform domain techniques — Various methods of signal and image processing (Fast Fourier Transform, Discrete Cosine Transform, etc.) used mainly for the purposes of compression. Transformation analysis — The process of detecting areas of image and sound files that is unlikely to be affected by common transformations and hide information in those places. The goal is to produce a more robust watermark. Translator — See EDI translator. Transmission Control Protocol (TCP) — The major transport protocol in the Internet suite of protocols providing reliable, connection-oriented, fullduplex streams. Transnational firm — A firm that produces and sells products and services all over the world. Transport layer — The layer of the ISO Reference Model responsible for managing the delivery of data over a communications network. Transport Layer Security Protocol — The public version of SSL3, being specified by the IETF. Transport mode — An IPSec protocol used with ESP or Alt in which the ESP or Alt header is inserted between the IP header and the upper-layer protocol of an IP packet. Trap door — A hidden software or hardware mechanism that permits system protection mechanisms to be circumvented. It is activated in some non-apparent manner; for example, a special “random” key sequence at a terminal. Treated conference room (TCR) — A shielded enclosure that provides acoustic and electromagnetic attenuation protection. Trojan horse — A computer program that is apparently or actually useful and contains a trapdoor or unexpected code. Trojan horse software — Software the user does not want that is hidden inside software the user wants. Trojan horse virus — Hides inside other software. Usually an attachment or download. TRP — Trusted path/channels, trusted path. True search engine — Uses software agent technologies to search the Internet for key words and then places them into indices. Trust — Reliance on the ability of a system or process to meet its specifications. 1007
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® Trusted Computer Security Evaluation Criteria (TCSEC) — A security development standard for system manufacturers and a basis for comparing and evaluating different computer systems. Also known as the Orange Book. Trusted computer system — A system that employs sufficient hardware and software integrity measures to allow its use for simultaneously processing a range of sensitive or classified information. Trusted computing base (TCB) — The totality of protection mechanisms within a computer system, including hardware, software, and communications equipment, the combination of which is responsible for enforcing a security policy. A TCB consists of one or more components that together enforce a unified security policy over a product or system. The ability of a trusted computing base to correctly enforce a security policy depends solely on the mechanisms within the TCB and on the correct input by system administrative personnel of parameters (such as a user’s clearance) related to the security policy. Trusted guard — A computer system that is trusted to enforce a particular guard policy, such as ensuring the flow of only unclassified data from a classified system or ensuring no reverse flow of pest programs from an untrusted system to a trusted system. . Trusted third party — An entity trusted by other entities with respect to security related services and activities, such as a certification authority. TSE — In Common Criteria, TOE access, TOE session establishment. TSF — See TOE security functions. TSP — In Common Criteria, TOE security policy (TSP): the rules defining the required security behavior of a TOE. TSS — In Common Criteria, Security Target evaluation, TOE summary specification. TST — In Common Criteria, Protection of the TSF, TSF self-test. TTL — Time-to-live. Tunnel mode — An IPSec protocol used with ESP in which the header and contents of an IP packet are encrypted and encapsulated prior to transmission, and a new IP header is added. Tunneling — The use of authentication and encryption to set up virtual private networks (VPNs). Turnkey system — A complete, ready-to-operate system that is purchased from a vendor as opposed to a system developed in-house. 1008
Twisted pair — A type of network physical medium made of copper wires twisted around each other. Example: ordinary telephone cable.
Twisted-pair wire — A communication medium that consists of pairs of wires that are twisted together and bound into cable.
Two-factor authentication — The use of two independent mechanisms for authentication; for example, requiring a smart card and a password (a one-time-password sketch follows this group of entries).
Type accreditation — In some situations, a major application or general support system is intended for installation at multiple locations. The application or system usually consists of a common set of hardware, software, and firmware. Type accreditations are a form of interim accreditation and are used to certify and accredit multiple instances of a major application or general support system for operation at approved locations with the same type of computing environment.
UART — Universal Asynchronous Receiver/Transmitter.
UAU — User authentication.
UB — In HIPAA, Uniform Bill, as in UB-82 or UB-92.
UB-82 — In HIPAA, a uniform institutional claim form developed by the NUBC that was in general use from 1983 to 1993.
UB-92 — In HIPAA, a uniform institutional claim form developed by the NUBC that has been in general use since 1993.
UCF — In HIPAA, Uniform Claim Form, as in UCF-1500.
UCTF — See Uniform Claim Task Force.
UDP (User Datagram Protocol) — Connectionless transport layer protocol in the TCP/IP stack. UDP is a simple protocol that exchanges datagrams without acknowledgments or guaranteed delivery, requiring that error processing and retransmission be handled by other protocols. UDP is defined in RFC 768.
UHIN — See Utah Health Information Network.
UID — User identification.
UN/CEFACT — See the United Nations Centre for Facilitation of Procedures and Practices for Administration, Commerce, and Transport.
UN/EDIFACT — See United Nations Rules for Electronic Data Interchange for Administration, Commerce, and Transport.
Unallocated space — The set of clusters that has been marked as available to store information but has not yet received a file, or still contains some or all of a file marked as deleted.
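The two-factor authentication entry above pairs something the user knows with something the user has. A common "something you have" is a token that derives a time-based one-time password (TOTP). The sketch below is illustrative only and is not taken from the text; it assumes an RFC 6238-style TOTP with HMAC-SHA-1, a 30-second time step, and a hypothetical shared secret, and it uses only the Python standard library.

    import hashlib
    import hmac
    import struct
    import time

    def totp(secret: bytes, step: int = 30, digits: int = 6) -> str:
        """Compute an RFC 6238-style time-based one-time password."""
        counter = int(time.time()) // step           # current 30-second window
        msg = struct.pack(">Q", counter)             # 8-byte big-endian counter
        digest = hmac.new(secret, msg, hashlib.sha1).digest()
        offset = digest[-1] & 0x0F                   # dynamic truncation (RFC 4226)
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % (10 ** digits)).zfill(digits)

    # Hypothetical shared secret; a real token and server would both hold this value
    print(totp(b"example-shared-secret"))

The server computes the same value from its own copy of the secret and the current time, so a matching password demonstrates possession of the token without the secret ever crossing the network.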
Unauthorized (malicious or accidental) disclosure, modification, or destruction of information — A general threat category that includes deliberate misuse as well as unintentional errors and omissions.
Unauthorized disclosure — Exposure of information to individuals not authorized to receive it.
Understanding — Real-world knowledge in context.
UNI — User network interface.
Uniform Claim Task Force (UCTF) — In HIPAA, an organization that developed the initial HCFA-1500 Professional Claim Form. The maintenance responsibilities were later assumed by the NUCC.
Uniform resource locator (URL) — The primary means of navigating the Web; consists of the means of access, the Web site, the path, and the document name of a Web resource, such as http://www.auerbach-publications.com (a parsing sketch follows this group of entries).
Uninstaller software — Utility software that can be used to remove software that the user no longer wants from the hard disk.
Unit Security Officer — A U.S. citizen employee who is a nonprofessional security officer designated within a specific or homogeneous working unit to assist the office of security in carrying out the functions prescribed in these regulations.
Unit testing — The testing of a module for typographic, syntactic, and logical errors and for correct implementation of its design and satisfaction of its requirements.
United Nations Centre for Facilitation of Procedures and Practices for Administration, Commerce, and Transport (UN/CEFACT) — An international organization dedicated to the elimination or simplification of procedural barriers to international commerce.
United Nations Rules for Electronic Data Interchange for Administration, Commerce, and Transport (UN/EDIFACT) — An international EDI format. Interactive X12 transactions use the EDIFACT message syntax.
Universal product code (UPC) — An array of variable-width lines that can be read by special machines (e.g., OCR devices) and converted into alphanumeric data. This method is used to mark merchandise for direct input of sales transactions.
UNIX — An operating system initially developed by Bell Labs, used primarily on engineering workstations and networked systems. UNIX is difficult for nontechnical people to use but is becoming increasingly popular in the business environment in supporting GUI applications.
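The uniform resource locator entry above names four parts: the means of access, the Web site, the path, and the document name. As an illustrative sketch only (not from the text; the path and document shown are made up), Python's standard urllib module splits a URL into roughly those components:

    from urllib.parse import urlparse

    url = "http://www.auerbach-publications.com/catalog/security/index.html"
    parts = urlparse(url)

    print(parts.scheme)   # means of access, e.g. "http"
    print(parts.netloc)   # the Web site, "www.auerbach-publications.com"
    print(parts.path)     # the path and document name, "/catalog/security/index.html"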
UNL — Privacy, unlinkability.
UNO — Privacy, unobservability.
Unshielded twisted pair (UTP) — A generic term for “telephone” wire used to carry data such as 10Base-T and 100Base-T. Various categories (qualities) of cable exist that are certified for different kinds of networking technologies.
UNSM — United Nations Standard Messages.
Unstructured data — See data-related concepts.
Update — The file processing activity in which master records are altered to reflect the current business activity contained in transactional files.
Upgrading — The determination that particular unclassified or classified information requires a higher degree of protection against unauthorized disclosure than currently provided. Such determination shall be coupled with a marking of the material with the new designation.
UPIN — Universal Provider Identification Number; to be replaced by the National Provider Identifier under HIPAA.
Uplink frequencies — In satellites, the frequency used from the earth station up to the satellite. In data, the frequency used to send data from a station to a head end or mainframe.
UR — In HIPAA, utilization review.
URAC — The American Accreditation HealthCare Commission.
URL (uniform resource locator) — An address for a specific Web page or document within a Web site.
USB — (1) Universal serial bus. It is becoming the most popular means of connecting devices to a computer. Most standard desktops today have at least two USB ports, and most standard notebooks have at least one. (2) Identification and authentication, user–subject binding.
USC or U.S.C. — United States Code.
Use — With respect to individually identifiable health information, the sharing, employment, application, utilization, examination, or analysis of such information within an entity that maintains such information. (See disclosure, in contrast.)
USENET — A facility of the Internet, also called “the news,” that allows users to read and post messages to thousands of discussion groups on various topics.
Usenet — A worldwide collection/system of newsgroups that allows users to post messages to an online bulletin board.
User — (1) The party, or his designee, responsible for the security of designated information. The user works closely with an ISSE. Also referred to as the customer. (2) A person or process accessing an AIS either by direct connections (i.e., via terminals) or indirect connections (i.e., preparing input data or receiving output that is not reviewed for content or classification by a responsible individual).
User acceptance testing (UAT) — Determines whether the system satisfies the business requirements and enables the knowledge workers to perform their jobs correctly.
User agent — An intelligent agent that takes action on the user’s behalf.
User Datagram Protocol (UDP) — A transport protocol in the Internet suite of protocols. UDP, like TCP, uses IP for delivery; however, unlike TCP, UDP provides for exchange of datagrams without acknowledgments or guaranteed delivery (a socket sketch follows this group of entries).
User documentation — Documentation that highlights how to use the system.
User information — The individual or organization that has been authorized access to the information asset by the owner.
User interface management — The component of the expert system that is used to run a consultation.
User representative — An individual who represents the operational interests of the user community and serves as the liaison for that community throughout the system development life cycle of the information system.
User/subscriber — An individual procuring goods or services online who obtains a certificate from a certification authority. Because both consumers and merchants may have digital certificates that are used to conclude a transaction, they may both be subscribers in certain circumstances. This person may also be referred to as the signer of a digital signature or the sender of a data message signed with a digital signature.
User identification — A character string that validates authorized user access.
USR — Guidance documents, user guidance.
Utah Health Information Network (UHIN) — Under HIPAA, a public–private coalition for reducing healthcare administrative costs through the standardization and electronic exchange of healthcare data.
Utility software — Software that provides additional functionality to the operating system.
UTP — Unshielded twisted pair.
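The User Datagram Protocol entry above notes that datagrams are exchanged without acknowledgments or guaranteed delivery. The sketch below is illustrative only (not from the text; the address and port are arbitrary local values) and uses Python's standard socket module to show that a sender simply fires a datagram and moves on:

    import socket

    # A "fire and forget" sender: no connection setup and no acknowledgment expected
    sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sender.sendto(b"status ping", ("127.0.0.1", 5005))
    sender.close()

    # A receiver would bind to the same port and read whatever datagrams arrive.
    # If the datagram above is lost, neither side is told; any error handling or
    # retransmission is left to the application, unlike TCP.
    # receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # receiver.bind(("127.0.0.1", 5005))
    # data, addr = receiver.recvfrom(1024)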
Valid — Logically correct (with respect to original data, software, or system).
Validation — The determination of the correctness, with respect to the user needs and requirements, of the final program or software produced from a development project.
Validation phase — The phase in which the users, acquisition authority, and DAA agree on the correct implementation of the security requirements and approach for the completed IS.
Validation, Verification, and Testing — Used as an entity to define a procedure of review, analysis, and testing throughout the software life cycle to discover errors; the process of validation, verification, and testing determines that functions operate as specified and ensures the production of quality software.
Value chain — A tool that views the organization as a chain, or series, of processes, each of which adds value to the product or service for the customer.
Value network — All the resources behind the click on a Web page that the customer does not see, but that together create the customer relationship: service, order fulfillment, shipping, financing, information brokering, and access to other products.
Value-added network (VAN) — A communications network using existing common carrier networks and providing such additional features as message switching and protocol handling.
VBR — Variable bit rate.
VC — Virtual circuit.
VCI — Virtual channel identifier (X.25).
VCN — Virtual circuit number (X.25).
Vector — Also known as an “attack vector”; the routes or methods used to get into computer systems, usually for nefarious purposes. They take advantage of known weak spots to gain entry. Many attack vectors take advantage of the human element in the system because that is often the weakest link.
Vector image — A digital image that is created through a sequence of commands or mathematical statements that places lines and shapes in a given two- or three-dimensional space.
Verification — (1) The authentication process by which the biometric system matches a captured biometric against the person’s stored template. (2) The demonstration of consistency, completeness, and
correctness of the software at and between each stage of the development life cycle.
Verification phase — The process of determining compliance of the evolving IS specification, design, or code with the security requirements and approach agreed on by the users, acquisition authority, and DAA.
Verify — To determine accurately that (a) the digital signature was created by the private key corresponding to the public key and (b) the message has not been altered since its digital signature was created.
Verify a signature — Perform a cryptographic calculation using a message, a signature for the message, and a public key to determine whether the signature was generated by someone knowing the corresponding private key.
Versatility — The ability to adapt readily to unforeseen requirements. The subordinate elements of versatility are flexibility, interoperability, and autonomy.
Vertical market software — Application software that is unique to a particular industry.
Video disk — An optical disk that can store images.
Videotext — A generic term that refers to a computer information system that uses television, telecommunication, and computer technologies to access and manipulate large, graphics-oriented databases.
Virtual circuit — A network service that provides connection-oriented service, regardless of the underlying network structure.
Virtual marketing — Encourages users of a product or service supplied by a B2C (business-to-consumer) company to ask friends to join; also known as viral marketing.
Virtual memory — A method of extending computer memory using secondary storage devices to store program pages that are not being executed at the time.
Virtual private network (VPN) — A secure private network that uses the public telecommunications infrastructure to transmit data. In contrast to a much more expensive system of owned or leased lines that can be used by only one company, VPNs are used by enterprises for both extranets and wide-area intranets. Using encryption and authentication, a VPN encrypts all data that passes between two Internet points, maintaining privacy and security.
Virtual reality — A three-dimensional computer simulation in which the user actively and physically participates.
Virtual workplace — A technology-enabled workplace with no walls and no boundaries; work can be done anytime, anyplace, linked to the other people and information the user needs.
Virus — A type of malicious software that can destroy the computer’s hard drive, files, and programs in memory, and that replicates itself to other disks.
Virus signature files — Files of virus patterns that are compared with existing files to determine whether they are infected with a virus. The vendor of the anti-virus software updates the signatures frequently and makes them available to customers via the Web.
Visible noise — The degradation of a cover as a result of embedding information. Visible noise will indicate the existence of hidden information.
Visible watermark — A visible and translucent image that is overlaid on a primary image. Visible watermarks allow the primary image to be viewed, but still mark it clearly as the property of the owner. A digitally watermarked document, image, or video clip can be thought of as digitally “stamped.”
VLA — Vulnerability assessment, vulnerability analysis.
VLAN — Virtual local area network.
VLSM — Variable-length subnet mask.
Voice mail — A messaging system that allows a regular voice message to be digitally stored at the receiving location and converted back to voice form when it is accessed.
Voice processing — A system that recognizes spoken words as well as touch tones from telephones. Basically, a “voice” computer in that it (theoretically) can do anything a computer can do, and can recognize voice commands.
Voice synthesizer — An input and output device that can either interpret and convert human speech into digital signals for computer processing or convert digital signals into audible signals that resemble human speech.
Volt — The unit of measurement of electromotive force. It is expressed as the potential difference in available energy between two points. One volt is the force required to produce a current of one ampere through a resistance or impedance of one ohm.
Voltage — The pressure under which a flow of electrons moves through a device.
VPN (virtual private network) — A private network that is configured within a public network.
VTAM — Virtual Telecommunications Access Method.
Vulnerability — A weakness in a system that can be exploited to violate the system’s intended behavior relative to safety, security, reliability, availability, integrity, etc.
Vulnerability analysis — The systematic examination of systems to determine the adequacy of security measures, identify security deficiencies, and provide data from which to predict the effectiveness of proposed security measures.
Vulnerability assessment — Systematic examination of an IS or product to determine the adequacy of security measures, identify security deficiencies, provide data from which to predict the effectiveness of proposed security measures, and confirm the adequacy of such measures after implementation.
WAIS — Wide Area Information Service.
Walker — An input device that captures and records the movement of the feet as the user walks or turns in different directions.
Walk-through — A manual analysis technique in which the module author or developer describes the module’s structure and logic to colleagues.
WAN (wide area network) — A data communications network that serves users across a broad geographic area and often uses transmission devices provided by common carriers. Frame Relay, SMDS, and X.25 are examples of WANs. Compare with LAN, MAN.
Warez — Pronounced wayrz or wayrss. Commercial software that has been pirated and made available to the public via an electronic bulletin board system (BBS) or the Internet. Typically, the pirate has figured out a way to deactivate the copy protection or registration scheme used by the software. Note that the use and distribution of warez software is illegal. In contrast, shareware and freeware may be freely copied and distributed.
Warm site — A site similar to a hot site; however, it is not fully equipped with all the hardware needed for recovery.
Washington Publishing Company (WPC) — Under HIPAA, the company that publishes the X12N HIPAA Implementation Guides and the X12N HIPAA Data Dictionary, developed the X12 Data Dictionary, and hosts the EHNAC STFCS testing program.
Waterfall life cycle — A software development process that structures analysis, design, programming, and testing as sequential steps; each step is completed before the next step begins.
Watermarking — A form of marking that embeds copyright information about the artist or owner.
Watt — The unit of electric power consumption, representing the product of amperage and voltage.
Waveforms — The characteristic shape of a signal, usually shown as a plot of amplitude over a period of time.
Waveguide — A conducting or dielectric structure able to support and propagate one or more modes. More specifically, a hollow, finely engineered metallic tube used to transmit microwave radio signals from the microwave antenna to the radio and vice versa.
Wavelength — The length of a wave measured from any point on one wave to the corresponding point on the next wave.
WDM — Wavelength-division multiplexing.
Wearable computer — A fully equipped computer that is worn just like a piece of clothing or attached to a piece of clothing, similar to the way a cell phone is carried on a belt.
Web authoring software — Helps design and develop Web sites and pages that are published on the Web.
Web beacon — An image placed in an HTML document (Web page, HTML e-mail) to facilitate user activity tracking. Web beacons are usually used in conjunction with cookies and are often used to track visitors across multiple Internet domains. Web beacon images are usually, but not always, small and “invisible.”
Web browser software — Enables the user to surf the Web.
Web bugs — A small image in an HTML page with all dimensions set to 1 pixel. Because of its insignificant size, it is not visible, but it is used to pass certain information anonymously to third-party sites. Mainly used by advertisers. Can also be referred to as a Web beacon or invisible GIF.
Web crawler — A software program that searches the Web for specified purposes, such as to find a list of all URLs within a particular site.
Web defacement — Also referred to as defacement or Web site defacement; a form of malicious hacking in which a Web site is vandalized. Often, the malicious hacker will replace the site’s normal content with a specific political or social message, or will erase the content from the site entirely, relying on known security vulnerabilities for access to the site’s content.
Web farm — Either a Web site that has multiple servers or an ISP that provides Web site outsourcing services using multiple servers.
Web hosting — The business of providing the equipment and services required to host and maintain files for one or more Web sites and to provide fast Internet connections to those sites. Most hosting is “shared,”
which means that Web sites of multiple companies are on the same server in order to share costs.
Web log — Most Web servers produce “log files,” time-stamped lists of every request that the server receives. For each request, the log file contains anonymous information such as date and time, the IP address of the browser making the request, the document or action that is being requested, the location of the document from which the request was made, and the type of browser that was being used. Log files are usually used to ensure quality of service. They can also be used in a limited way to analyze visitor activity (a parsing sketch follows this group of entries).
Web page — A specific portion of a Web site that deals with a certain topic.
Web portal — A site that provides a wide range of services, including search engines, free e-mail, chat rooms, discussion boards, and links to hundreds of different sites.
Web server — A software program that, using the client/server model and the World Wide Web’s HyperText Transfer Protocol (HTTP), serves Web page files to users.
Web services — Software applications that talk to other software applications over the Internet using XML as a key enabling technology.
Web site — A specific location on the Web where the user can visit, gather information, and order products.
Web site address — A unique name that identifies a specific site on the Web.
Web space — A storage area where the user’s Web site can be kept.
WEDI — Workgroup for Electronic Data Interchange.
WFQ — Weighted fair queuing.
WG — Under HIPAA, a workgroup.
Whitehat (or ethical) hacker — A computer security professional who is hired by a company to break into its computer system.
WHO — See World Health Organization.
Whois — An Internet resource that permits users to initiate queries to a database containing information on users, hosts, networks, and domains.
Wide area network (WAN) — A communications network that covers a broad geographic area.
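The Web log entry above lists the fields a typical server log line carries. The sketch below is illustrative only and is not taken from the text: the log line is made up, the layout assumed is the widely used "combined" log format, and real formats vary by server and configuration. It pulls out the client address, the date and time, the requested document, the referring document, and the browser type that the entry describes.

    import re

    # Hypothetical log line in a common "combined" format
    line = ('192.0.2.10 - - [19/Oct/2006:06:55:00 -0400] '
            '"GET /catalog/index.html HTTP/1.1" 200 5120 '
            '"http://www.example.com/start.html" "Mozilla/4.0"')

    pattern = re.compile(
        r'(?P<ip>\S+) \S+ \S+ \[(?P<when>[^\]]+)\] '
        r'"(?P<request>[^"]*)" (?P<status>\d{3}) (?P<size>\S+) '
        r'"(?P<referrer>[^"]*)" "(?P<agent>[^"]*)"'
    )

    match = pattern.match(line)
    if match:
        print(match.group("ip"))        # address of the requesting browser
        print(match.group("when"))      # date and time of the request
        print(match.group("request"))   # document or action being requested
        print(match.group("referrer"))  # document from which the request was made
        print(match.group("agent"))     # type of browser used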
Wi-Fi (wireless fidelity) — A way of transmitting information in radio wave form that is reasonably fast and is often used for notebooks. Also known as IEEE 802.11b.
Wired communications — Media that transmit information over a closed, connected path.
Wireless communications — Media that transmit information through the air.
Wireless Internet service provider (or wireless ISP) — A company that provides the same services as a standard Internet service provider, except that the user does not need a wired connection for access.
Wireless local area network (WLAN) — A local area network using a wireless communication protocol.
Wireless local loop (WLL) — A means of provisioning a local loop facility without wires. Employing low-power, omnidirectional radio systems, they allow carriers to provision loops up to T-1 capacity to each subscriber.
Wireless network access point — A device that allows computers to access a network using radio waves.
Wiring closet — A specially designed room used for wiring a data or voice network. Wiring closets serve as a central junction point for the wiring and wiring equipment that is used for interconnecting devices.
Wisdom — Understanding of what is true, right, or lasting.
Word — In computer memory, a contiguous set of bits used as a basic unit of storage. Words are usually 8, 16, 32, or 64 bits long.
Word processing — The use of computers or other technology for storage, editing, correction, revision, and production of textual files in the form of letters, reports, and documents.
Work factor — The effort and time required to break a protective measure (an arithmetic sketch follows this group of entries).
Workflow — Defines all of the steps or business rules, from beginning to end, required for a process to run correctly.
Workforce — Under HIPAA, employees, volunteers, trainees, and other persons whose conduct, in the performance of work for a covered entity, is under the direct control of such entity, whether or not they are paid by the covered entity. (See business associate, in contrast.)
Workgroup — A group of people who can work together to achieve a common set of goals, linked together via technological tools and hardware.
Workgroup for Electronic Data Interchange (WEDI) — A healthcare industry group that lobbied for HIPAA A/S, and that has a formal consultative role under the HIPAA legislation. WEDI also sponsors SNIP.
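The work factor entry above can be illustrated with simple arithmetic. The sketch below is illustrative only and not from the text; the guessing rate is an arbitrary assumption. It estimates how long an exhaustive search of a password space would take, which is one crude way of expressing the work factor of a password control.

    # Rough work factor for exhaustively guessing a password
    alphabet_size = 26 + 26 + 10         # lowercase, uppercase, digits
    length = 8                           # password length in characters
    keyspace = alphabet_size ** length   # number of candidate passwords

    guesses_per_second = 1_000_000       # assumed attacker speed (arbitrary)
    seconds = keyspace / guesses_per_second

    print(f"candidates: {keyspace:,}")
    print(f"worst-case search time: {seconds / (3600 * 24 * 365):.1f} years")

Lengthening the password or deliberately slowing each guess (for example, with an iterated hash) raises the work factor without changing how the control looks to the legitimate user.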
World Health Organization (WHO) — An organization that maintains the International Classification of Diseases (ICD) medical code set.
World Wide Web, or Web — A multimedia-based collection of information, services, and Web sites supported by the Internet.
Worm — With respect to security, a special type of virus that does not attach itself to programs, but rather spreads via other methods such as e-mail.
Worm attack — A harmful exploitation of a worm that can act beyond normally expected behavior, perhaps exploiting security vulnerabilities or causing denials of service.
WPC — See Washington Publishing Company.
Wrapper — See cover medium.
WWW — World Wide Web; also shortened to Web. Although WWW is used by many as a synonym for the Internet, the WWW is actually one of numerous services on the Internet, alongside services such as e-mail and newsgroups.
X.25 — A WAN protocol.
X.400 — An ITU-TSS international standard for reformatting and sending e-mail between networks (the Message Handling System standard).
X.500 — The CCITT and ISO standard for electronic directory services.
X.509 — A standard that is part of the X.500 specifications and defines the format of a public key certificate.
X/recommendations — The ITU-TSS documents that describe data communication network standards. Well-known ones include the X.25 Packet Switching Standard, X.400 Message Handling System, and X.500 Directory Services.
X12 — An ANSI-accredited group that defines EDI standards for many American industries, including healthcare insurance. Most of the electronic transaction standards mandated or proposed under HIPAA are X12 standards.
X12 Standard — The term currently used for any X12 standard that has been approved since the most recent release of X12 American National Standards. Because a full set of X12 American National Standards is only released about once every five years, it is the X12 standards that are most likely to be in active use. These standards were previously called Draft Standards for Trial Use.
X12/PRB — In HIPAA, the X12 Procedures Review Board.
XDSL — A group term used to refer to ADSL (asymmetrical digital subscriber line), HDSL (high data rate digital subscriber line), and SDSL (symmetrical digital subscriber line). All are digital technologies using the existing copper infrastructure provided by the telephone companies. XDSL is a high-speed alternative to ISDN.
XML (eXtensible Markup Language) — A markup language for the Web that lets computers interpret the meaning of information in Web documents.
XNS — Xerox Network Systems.
X-Open — A group of computer manufacturers that promotes the development of portable applications based on UNIX. They publish a document called “The X-Open Portability Guide.”
XOR — The XOR (exclusive-OR) gate acts in the same way as the logical “either/or.” The output is “true” if either, but not both, of the inputs is “true.” The output is “false” if both inputs are “false” or if both inputs are “true.” Another way of looking at this circuit is to observe that the output is 1 if the inputs are different, but 0 if the inputs are the same (a short sketch follows this group of entries).
XOT — X.25 over TCP.
YCbCr — A setting used in the representation of digital images. Y is the luminance component; Cb and Cr are the chrominance components.
Zero code suppression (ZCS) — The insertion of a “1” bit to prevent the transmission of eight or more consecutive “0” bits.
ZIP — Zone Information Protocol (AppleTalk).
Zip drive — A high-capacity, removable diskette drive that typically uses 100-MB Zip disks or cartridges.
ZIT — Zone Information Table (AppleTalk).
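The XOR entry above is the building block of many stream and block cipher operations because applying the same value twice restores the original input. The sketch below is illustrative only (not from the text) and uses Python's ^ operator:

    # Truth table: the output is 1 only when the inputs differ
    for a in (0, 1):
        for b in (0, 1):
            print(a, b, a ^ b)

    # The property that makes XOR useful in ciphers: XORing twice with the
    # same key value returns the original value
    plain = 0b01010111
    key = 0b10011001
    cipher = plain ^ key
    assert cipher ^ key == plain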
Index A Abstraction, 311 Acceptable use policy, 13, 533 Access control, 5–6, 93–218, 420–421 accountability, 97–98 administrative controls, See Access control, administrative controls authentication, See Authentication capability tables, 192 categories, 108–111 compensating, 111 corrective, 110–111 detective, 109–110 deterrent, 109 preventative, 108–109 recovery, 111 centralized, 192 CISSP® Candidate Information Bulletin, 759–760 CISSP® expectations, 93 classification vs. access rights, 615 confidentiality, availability, and integrity, 93–94 constrained user interface, 191 content-dependent, 191 database controls, 620–624, See also Database security database restricted views, 618 decentralized, 192 defining resources, 96 determining users, 95–96 discretionary, 186–187, 621, 645, 648 domain security model, 635 downstream controls, 147 Dublin Core metadata group proposal, 615–616 ease of exploit, 675–676 environmental security, See Physical security facility visitors and, 124–125, 289 hardware security, 642–644, See also Hardware; specific components identification and identity management, 147–179, See also Authentication; Identification and identity management
IDS, See Intrusion detection systems information classification, See Data classification ISO/IEC 17799, 308 mandatory, See Mandatory access control monitoring and logging, 119–121, See also Audit logging; Logging network access, See Network security password policies and procedures, 122–123, See also Passwords penetration testing, See Penetration testing physical entry controls, 124–125, 293–295, See also Physical security preventative controls, 646 privilege management, See Access control, privilege management role-based, 26, 189–191, 639 rule-based, 188 sample questions, 215–218, 724–727 sensitive media handling, 653–654 separation of environments, 576–577 services, 169-170 software, See Application security; Software development and programming; Software threats and vulnerabilities software copying/distribution issues, 644, 671 specifying use, 97 system backups, 663 temporal isolation, 192 types, 112 Access control, administrative controls, 112–124, See also Access control, job controls; Access control, privilege management account management, 178–179, 638–642 background services, 640 business continuity and disaster recovery, 114–115, See also Business continuity planning; Disaster recovery planning
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® change control, 114–115, 671–672, See also Change control management; Configuration management configuration management, 115 network management, 117 ongoing supervision, 50 operational policies and procedures, 113–117, See also Operations security performance, 115 personnel security, evaluation, and clearances, 117–119, See also Personnel security policy, 98, 119 product life-cycle management, 117 user management, 121–123 vulnerability management, 115–117 Access control, job controls, 118, 634–635, See also Access control, privilege management; Personnel security incompatible duties, 24–25 job position sensitivity, 25–26 job rotation, 23, 648 need to know, 25, 648 separation of duties, 23–25, 98–101, 207, 576, 648 vacations, 25 Access control, privilege management, 123–124, 633–642 account characteristics, 638 audit data analysis and management, 639–640 clearances, 637 file sensitivity labels, 637 least privilege, 25, 101, 123, 207, 648, 671–672 operating system protection, 310–311 ordinary users, 634–635, 671–672 passwords, 637–638, See also Passwords security administrators, 637–640 security profiles, 638–639 system accounts, 638–642 system administrators, 635–636 system monitoring, 634, 649 system operators, 633–634 system security characteristics, 637 system start-up, 634 Access control, security models, 324, See also Security models and architecture theory; specific models Bell-LaPadula, 325–326
1024
Biba, 326 Brewer-Nash (Chinese wall), 328–329 Graham-Denning, 328 Harrison-Ruzzo-Ullman, 328 information flow, 328 lattice, 324–326 Access control, technical controls, 112, 125–129, See also Access control technologies; specific technologies application access, 128–129, See also Application security encryption, 129–130, 227, See also Cryptography and encryption; Encryption methods and systems malware control, See Malicious software network access, 126–127, See also Network security patch management, See Patch management remote access, 126–127 software protection, 571–582, See also Software protection mechanisms system access, 128 user controls, 126 Access control assurance, 205 audit trail monitoring, 205–207 information security activities, 207–215 penetration testing, 208–215, See also Penetration testing Access control lists (ACLs), 188 integrated automated intrusion responses, 202 shared memory protection, 574 vendor patches and, 676 Access control matrix, 188, 327–328 Access control technologies, 179–186, See also Access control, technical controls; specific technologies Kerberos, 181–184, 268, 408, 491 security domain, 185 SESAME, 184 single sign-on, 179–181 Access control threats, See also Software threats and vulnerabilities; specific threats backdoor/trapdoor, 144, 553, 588, 640 buffer overflows, See Buffer overflows data remanence, 140–142, 651–652 denial of service, See Denial of service (DoS) attacks dumpster diving, 143–144, 595
Index emanations, 138–139 malware, 133–134, See also Malicious software mobile code, 132–133, 551–552 object reuse, 139–140 password crackers, 134–136, See also Passwords shoulder surfing, 139 sniffers, eavesdropping, and tapping, 137–138, 419, 495, 527, 651 social engineering, 145–147, See also Social engineering spoofing/masquerading, 136–137, See also Spoofing attacks theft, 144–145 unauthorized data mining, 142–143 Account management, 638–642 Accreditation and certification, 56, 332, 558, 559–560, 584 ACID test, 621 Active content, 132, See also Mobile code Active Directory Service (ADS), 491 ActiveX, 132, 511, 552, 565–566 ActiveX Data Objects (ADO), 612, 613 Activity monitors, 599 Adams, Carlisle, 250 Address Resolution Protocol (ARP), 450 Adleman, Leonard, 254, 589 Administrative access controls, 125–129, See also Access control, administrative controls Administrative assistants/secretaries, security roles and responsibilities, 41 Administrative law, 687 Administrative services department, information security reporting model, 33 Advanced Encryption Standard (AES), 238, 247–250, 447 add round key, 250 mix column transformation, 249–250 shift row transformation, 249 substitute bytes, 248–249 Adware, 597 AES, See Advanced Encryption Standard Aggregation, 617 Air conditioning, 662 Alarms and signals, 203 Alberti, Leon Battista, 222 Algorithm, 220 Alphabetic and polyalphabetic ciphers, 231–233
American National Standards Institute (ANSI), 14 Analog communication technology, 423 Annualized loss expectancy (ALE), 61 Anomaly-based intrusion detection, 198, 200–201 Anonymous FTP, 506 ANSI standards, 14 ANSI X9.17, 247, 268 Antennae, 138 Antivirus management, 129, 598–601, 650, 654, See also Malicious software activity monitors, 599 antimalware policies, 600–601 change detection, 599–600 intrusion detection, See Intrusion detection systems malware assurance, 601–602 portable devices or computers, See Portable device security scanners, 598–599, 601, 650, 651, See also Scanning AOL Instant Messaging, 504 Applets, See Java applets Application layer (OSI layer 7), 409, 412–413, 414, 497–520, 547–549 administrative services, 512–514 data exchange (WWW), 506–512, See also File Transfer Protocol; Hypertext Transfer Protocol DoD model, 413 E-mail, 497–501, See also E-mail information services, 517–518 instant messaging, 502–506 news messaging, 501–502 peer-to-peer applications and protocols, 512 remote-access services, 514–517 VoIP, 518–520 Application-level proxy, 467 Application programming interfaces (APIs), 612 Applications development, See Software development and programming; Software development methods Application security, 321–322, 537–629, 644 access controls, 128–129, See also Access control audit and assurance mechanisms, 582–586 certification and accreditation, 584 change management, 585–586 configuration management, 586 information accuracy, 583
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® information auditing, 583–584 information integrity, 583 information protection management, 584 audit event types, 206 change management, See Change control management CISSP® Candidate Information Bulletin, 764 confidentiality, availability, and integrity, 537 continuity of operations, 659–660, 665–667, See also Backup; Change control management; Operations security, continuity of operations current threats and levels, 537–538, See also Software threats and vulnerabilities database environment, See Database security data classification, See Data classification development protections and controls, 554–571, See also Software development and programming firmware, 323 knowledge management, 624–626 layers of control, 129 library maintenance, 673 malware, See Malicious software 90/10 strategy guidelines, 628–629 operating systems and, See Operating systems patch management, See Patch management processes and threads, 322–323 sample questions, 629–631, 746–748 security architecture and design issues, 322–324 security policies, 542 software development environment, 538, 541–546, See also Software development and programming software duplication and distribution, 644, 671 software environment, 547–548 software protection, 571–582, See also Software protection mechanisms testing, 213–214, See also Penetration testing threats and vulnerabilities, See Software threats and vulnerabilities
1026
Web application environment, 626–628, See also Internet security ARP poisoning, 450 Artificial intelligence (AI), 625 ASCII, 412, 489 FTP and, 505 Assembly language, 543–546 Asset and risk registers, 299–300 Asymmetric digital subscriber line (ADSL), 458–459 Asymmetric key algorithms, 253–260 advantages and disadvantages, 258 confidential messages, 253–254 Diffie-Hellmann, 257–258 Digital Signature Standard, 265–266 El Gamal, 258 elliptic curve cryptography, 258 hybrid symmetric-asymmetric cryptography, 259–260 message authentication, 260 open message, 254 RSA, 254–257 SESAME, 184 Asynchronous communications technology, 435 Asynchronous tokens, 152 Asynchronous Transfer Mode (ATM), 241, 461 virtual circuits, 461 Atomicity, 621, 623 Atomic values, 606, 607 Audit, 18–19 application security assurance mechanisms, 582–586 continuity planning issues, 340 event types, 205–206 frameworks for implemented security controls, 17–19 COBIT, 18 COSO, 17 ISO 17799/BS 7799, 18–19 ITIL, 18 information auditing, 583–584 information security reporting model, 33–34 operations security, 649 physical security, 300 resolution standards, 4 security administrator and, 639–640 Audit logging, 120, 650, See also Logging anomaly-based intrusion detection, 200 best practices, 207 chain of custody, 205 data mining, 617
Index issues and concerns, 206–207 monitoring and access control assurance, 205–207 protecting audit logs, 644 security administrator privileges, 639–640 separating job functions, 24 storage, 640 system administrator privileges, 636 Auditors ISO assistance, 30 security roles and responsibilities, 39–40 Audit trail, 541 time synchronization, 517–518 Australian CERT, 710 Authentication, 148, 227, See also Identification and identity management; specific techniques and technologies asynchronous tokens, 152 biometrics, See Biometrics cost vs. business value, 167–169 digital signatures, 227, 265–266, 314 fully distributed systems and, 314 HTTP, 508 Kerberos network authentication protocol, 181–184, 491, See also Kerberos keystroke pattern, 162 nonrepudiation, 220, 227 passwords, See Passwords personal identification numbers (PINs), 148, 160 physical items (badges or devices), 125, 296 Point-to-Point Protocol, 450 remote service (RADIUS), See RADIUS smart cards and tokens, 152–160, See also Smart cards SNMP, 514 symmetric and asymmetric key algorithms, See Asymmetric key algorithms; Symmetric key algorithms synchronous tokens, 152–153, 163 Token Ring, 425, 439–440 types, 126, 149–150, 296 by knowledge, 150–152, See also Passwords by ownership, 125, 152–160 what a person is, 160–162, See also Biometrics user ID guidelines, 148–149 Web application security, 628
wireless networks, 445–446 EAP framework, 447–448, 450 MAC address tables, 446 open system, 445 shared-key, 445–446 Authentication header (AH), 475–476 Automated distribution agents, 594 Automated information system (AIS), 325 Automatic teller machine (ATM), 166 ATM cards, 154 Availability, 6–7, 94, 226, 339, See also Confidentiality, availability, and integrity operations continuity, 655–656, See also Business continuity planning (BCP) Available bit rate (ABR), 461 Avalanche effect, 221
B Backdoor/trapdoor, 144, 553, 588, 640 Background checks, 46–50, 117, 118, 649–650 benefits of, 47 credentials verification, 49–50 credit history, 48 criminal history, 48–49, 649–650 driving record, 49 drug and substance testing, 49 prior employment, 49 social security number verification, 50 suspected terrorist watch list, 50 timing of, 47 types of, 47–48 Background services, 640 Back Orifice, 595 Backup, 7, 369–370, 577–578, 647, 654–655, 656 access control systems, 663 cold sites, 368, 663 continuity of operations, 656–659 continuity planning issues, 369–370 controls, 577–578 database shadowing and mirroring, 370, 658 disk mirroring, 578 electronic vaulting, 369, 659 encryption, 651, 655 hot sites, 368, 663 incremental, differential, and full, 658 maintenance and testing, 663 media misuse prevention, 654–655 off-site storage, 369–370, 659
1027
AU8231_IDX.fm Page 1028 Monday, October 16, 2006 1:27 PM
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® online methods, 659 power, 290, 661–662, 664 RAID, See Redundant array of inexpensive (or independent) disks remote journaling, 369 resource acquisition and implementation, 381 sensitive media handling, 653–654 storage area networks, 370 Backup assessment team, 377 Baseline, 15–16 BASIC, 544 Basic rate interface (BRI), 454 Bastion hosts, 433–434 Beach ball effect, 15 Behavioral biometrics, 162 Bell-LaPadula model, 324, 325–326, 416 Bellovin, Steve, 136 Benchmarking, 362–363 Berne Convention, 691 Between-the-lines attack, 553 Biba integrity model, 326, 416 Biometrics, 150, 160–167, 296 accuracy and sensitivity, 163–165 behavioral, 162 counterfeiting, 165 facial, 162 fingerprints, 161 hand geometry, 161, 166 retinal and iris scans, 161 user acceptance, 166–167 user training, 165–166 voice patterns, 161–162 Bionet, 595 Birthday attack, 263, 273 Black box, 208 Blacklists, 498–499 Blind penetration testing, 212–213 Block ciphers, 228 Blowfish, 251, 276 Blue bug attack, 449 Blue jacking, 449 Bluetooth, 315, 449–450 Bomb blast film, 295 Booting access control, 634 rebooting, 667 Boot sector infectors, 590 Border Gateway Protocol (BGP), 410 Botnet, 596 Bots, 594 Boundary routers, 433 Brain, 590
Brewer-Nash (Chinese wall) model, 328 Bridges, 442–444 Broadband wireless, 461–462 Broadcast transmissions, 435 Brownouts, 284 Brute force attacks, 272 DES and, 243, 244, 245, 272 hash functions and, 263 Kerberos key vulnerabilities, 184 password vulnerabilities, 122, 134, 180, 514 RSA and, 257 SNMP and, 514 vulnerability testing, 301 BS 7799, 18–19, 308, 309 Budgeting continuity planning issues, 350, 355 ISO responsibilities, 27–28 Buffer overflows, 128, 131–132, 527, 549–550 Bluetooth, 449 Morris Worm, 534 parameter check controls, 548–549, 573 security controls, 573 Building security, See Physical security Bus, 318, 424 Business continuity planning (BCP), 6, 114–115, 337–406, 577–578, 647, 669, See also Disaster recovery planning CISSP® Candidate Information Bulletin, 762 CISSP® expectations, 337–338 confidentiality, availability, and integrity, 339 contingency plan, 663–665 continuous availability approach, 347, 354 crisis management plan and process, 347, 372 data and software backup, 369–370, See also Backup definitions, 395–396 emergency operations center, 347, 381, 392 enterprise continuity planning and, 341–343 customer service, 343, 360–361 embarrassment or confidence loss, 343, 361 extra expense, 343, 362 revenue loss, 342, 362 formalizing policy, 350 hidden benefits, 343 incident command system, 347–348
AU8231_IDX.fm Page 1029 Monday, October 16, 2006 1:27 PM
Index incident response, See Incident response and evaluation ISO/IEC 17799, 308 legal and regulatory issues, 401–405 network inclusion, 418 operations continuity, 655–669, See also Operations security, continuity of operations planner security roles and responsibilities, 40 rationale, 339–340 regulatory compliance issues, 401–404 sample questions, 398–400, 737–740 standards, 341 terminology, 395–398 testing, maintenance, and training, 373, 381–385, 388–391 traditional BCP, 347 Business continuity planning (BCP), assessment phase, 345, 354–363, 394 activities and tasks work plan, 363 benchmarking and peer review, 362–363 budgets, 355 business impact assessment (BIA), 359–362 customer service interruption, 360–361 embarrassment or confidence loss, 361 people to interview, 360 (table) quick-hit opportunities, 361 (table) revenue loss and extra expenses, 362 business processes analysis, 355 high-level activities and tasks, 364–365 (table) motivation, risks, and control objectives, 355 people and organizations, 355 risk management, 358–359 risk management review, 353–354 technical issues and constraints, 356 threat assessment, 356–359 environmental security, 358 information security, 358 physical and personnel security, 357–358 time dependencies, 355 understanding enterprise strategy and goals, 354
Business continuity planning (BCP), design and development phase, 345, 363–366, 394 activities and tasks work plan, 386, 387 (table) backup and recovery, 369–370, 381, See also Backup contingency organization, 382 continuity plan construction, 374–381 continuity plan documents and infrastructure strategies, 373 crisis management planning approaches, 379–381 enterprise business processes, 371 guidelines, 380 (table) identifying recovery alternatives, 367–369 IT infrastructure, 370 plan contents, 376, 379 recovery team structure, 377–378 resource inventory, 378 scope, objectives, and assumptions, 376–377 recovery strategy development, 363–366 recovery team structure, 377–378 testing, maintenance, and training strategies, 373–374, 381–385, See also Business continuity planning (BCP), testing tools and software, 374–375 training and awareness strategies, 386 work plan development, 366 Business continuity planning (BCP), implementation and management phase, 345–346, 386–394 continuity planning manager roles and responsibilities, 392–394 CPPT implementation work plans, 386–387 management phase, 392–394 monitoring, 388 organizational unit plan deployment, 388 program oversight, 392 review and update, 383, 385–386, 390–391 distributing updated plans, 385–386 key stakeholder list, 391 version control, 385, 391 testing, See Business continuity planning (BCP), testing training and awareness strategies, 391–392
1029
AU8231_IDX.fm Page 1030 Monday, October 16, 2006 1:27 PM
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® Business continuity planning (BCP), project initiation phase, 344, 346–354, 394 activities and tasks work plan, 354 disruption avoidance and mitigation, 353–354 executive leadership and support, 348–351 project team organization and management, 351–353 metrics, 351 project kickoff meeting, 352–353 project management office techniques, 351 project management tools, 352 project timelines, 352 scope and authorization, 348 scope development and planning, 346–348 Business continuity planning (BCP), testing, 383–384, 388–390 checklists, 389 design and development phase, 373–374, 381–385 full interruption, 390 measurement criteria, 384 objectives, 384, 389 parallel, 390 participants, 384 schedule and timeframes, 384 simulation, 390 tabletop walkthrough, 389–390 test script, 384 Business impact assessment (BIA), 359–362, 395 Business objectives, 28, 354, 384, 389 Business processes analysis, 355 Business unit security awareness mentor, 54 Bynum, Maner Terrell, 72 Bypass attacks, 617–618
C C, 544, 582 C++, 567, 582 Cable modem, 459 Cables access control, 643 coaxial, 429 fiber optic, 429–430, 643 twisted-pair, 428–429, 643 Caesar cipher, 222, 229, 231 Candidate Information Bulletin, See CISSP® Candidate Information Bulletin
1030
Candidate key, 606 Candy-from-a-baby fallacy, 77 CAP, 592 Capability Maturity Model for Software (CMM), 555 Capability Maturity Model Integration (CMMI), 331 Capability tables, 192 Carrier sense multiple access (CSMA), 437–438 collision avoidance (CSMA/CA), 437–438 collision detection (CSMA/CD), 438 CAST, 250–251 CAST-128, 275 CAST-256, 251 Categorical imperative, 82 CCTA Risk Analysis and Management Method (CRAMM), 64 CDs, 317 Cellular phone technology GSM, 432 IMSI catcher, 432–433, 526 smart phones, 315, 471 Centralized access control, 192 Central processing unit (CPU), 308, 315–316 supervisor and problem states, 321 Centre for Computing and Social Responsibility, 72 CERT/CC, 700 Certificate authority (CA), 269 Certificate revocation, 271 Certification, public key infrastructure, 269–271 Certification and accreditation, 56, 332, 558, 559–560, 584 Certification verification, 49–50 Certified Information Security Manager (CISM), 56 Certified Information Systems Auditor (CISA), 56 Certified Information Systems Security Professional (CISSP®), 56, See also CISSP® Candidate Information Bulletin; CISSP® responsibilities and expectations Chain of custody, 205, 709 Challenge Handshake Authentication Protocol (CHAP), 450 Change control management, 114, 585–586, 659–660, 669–677 approval/disapproval, 672 build and test, 672 Clark-Wilson integrity model, 326–327
AU8231_IDX.fm Page 1031 Monday, October 16, 2006 1:27 PM
Index configuration management, 115, 586, 659, 670–671 continuity, 385–386 documentation, 673 impact assessment, 672 implementation, 673 library maintenance, 673 logs and documentation, 122 notification, 673 patch management, See Patch management production software, 671 requests, 672 self-healing systems, 647 software access control, 671–672 validation, 673 version updates, 673 Change detection software, 599–600 Change of scale test, 86 Character checks, 583 Chat applications security, 502, 504–505, 594, See also Instant messaging scripting, 505 Checksums, 260, 264, 599 Chief executive officer (CEO), 26, See also Executive management Chief information officer (CIO), 22, 26, 32 Chinese wall, 328 CHRISTMA, 591 Cipher block chaining mode (CBC), 238–241, 264 CipherDisks, 222 Cipher feedback mode (CFB), 241 Ciphertext, 220, 233 cryptanalysis attacks chosen ciphertext, 272 ciphertext only, 271 Circuit-level proxy, 466–467 Circuit-switched networks, 436–437 CISSP® Candidate Information Bulletin, 757–773 access control, 759–760 application security, 764 business continuity and disaster planning, 762 cryptography, 760 general exam information admission problems, 771 admittance, 770 examination format and scoring, 772 examination protocol, 771 exam response information, 772–773 reference material, 771
results, 772 security, 770–771 information security and risk management, 758–759 legal, regulations, compliance and investigations, 765 operations security, 764 physical security, 760–761 security architecture and design, 761 suggested reference list, 766–769 telecommunications and network security, 763 CISSP® examination, See CISSP® Candidate Information Bulletin CISSP® responsibilities and expectations, 684–685 access control, 93 continuity and disaster recovery planning, 337–338 cryptography, 219 ethical codes, 84–86 information security and risk management, 2–4 legal, regulations, compliance, and investigations domain, 684–685 physical security, 282 security architecture and design, 307–308 telecommunications and network security, 408 Citizen programmers, 550 Citrix MetaFrame, 313 Civil law, 688 Clark-Wilson model, 324, 326, 416 Classified information, See Data classification Classless interdomain routing (CIDR), 472 Cleanroom, 561–562 Clearances, 637 Client/customer/patient choice, 83 Clock synchronization, 517–518, 636 Closed-circuit television (CCTV), 296–298 Coaxial cables, 429 COBIT, 18 COBOL, 544 Code division multiple access (CDMA), 432 Code of Fair Information Practice, 78 Code of Practice for Information Security Management, 308 Codes of ethics, See Ethical issues Code words, 235–236 Cohen, Fred, 589, 598 Cold sites, 368, 663 Cold spares, 660
1031
AU8231_IDX.fm Page 1032 Monday, October 16, 2006 1:27 PM
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® Collision, defined, 220 Combination locks, 294 Committee of Sponsoring Organizations of the Treadway Commission (COSO), 17 Common Criteria, 331 Common Internet File System (CIFS), 493 Common law, 686–688 Common Object Request Broker Architecture (CORBA), 486, 569–570 Communications security, See Network security; Telecommunications security Compact disks (CDs), 317 Companion viruses, 591 Compassion/last chance, 84 Compensating controls, 647 Competition, 84 Compiled languages, 546 Compliance audit framework, 17–19 Component-based development, 563 Component Object Model (COM), 611, 613 Computer-aided software engineering (CASE), 563 Computer crime, 74, 695–697, See also Legal and regulatory issues; Malicious software; specific attacks or threats forensics, See Computer forensics hacking issues, 77–78 international cooperation, 697–698 social engineering, See Social engineering Computer ethics, See Ethical issues Computer Ethics Institute (CEI), 79 Computer forensics, 705–710 chain of custody, 205, 709 crime scene, 707 digital/electronic evidence, 708 guidelines, 709 models, 706–707 rules of evidence, 708 signature behaviors, 707 software forensics, 578–580 Computer game fallacy, 76 Computer incident response teams (CIRTs), 29, 41 Computer viruses, See Malicious software; Viruses; specific types Concept virus, 592 Concurrency, 618, 623 “Confidential” information classification, 106
1032
Confidentiality, 5, 94, 226 Bell-LaPadula model, 325–326 ethical IT decision making, 84 network security issues, 419 Confidentiality, availability, and integrity (CIA), 20 access control, 93–94 application security, 537 continuity planning, 339 cryptography, 219, 226 information security and risk management, 5–7 Information Technology Security Evaluation Criteria (ITSEC), 330 Confidentiality or nondisclosure agreements, 45, 119 Confidential messages, 253–254 Configuration management, 115, 586, 659, 668–671, x Confusion, 221 Consensus/modified Delphi method, 71 Consistency, ACID test, 621 Constant bit rate (CBR), 461 Constrained user interface, 191 Content-dependent access control, 191 Contention-based protocol, 437 Contingency organization, 382, 395 Contingency plans, 663–665, 669, See also Business continuity planning Continuity of operations, See Operations security, continuity of operations Continuity planning, See Business continuity planning Continuity planning, definitions, 395–396 Continuity planning automation, 374–375 Continuity planning project team, 351–353 Continuous availability (CA), 354, See also Business continuity planning Control gates, 311 Control Objectives for Information and related Technology (COBIT), 18 Cookies, 628 Copper-distributed data interface (CDDI), 441 Copyright, 691 Corporate memory, 624 Corporate security, information security reporting model, 32–33 Corrective controls, 647 COSO, 17 Council of Europe (CoE) Cybercrime Convention, 697–698 Counter mode (CTR), 241–242
Index Covert channels, 550, 640 Crackers, 134–136, 646 CRAMM, 63 Credit history background check, 48, 118 Criminal history background check, 48–49, 649–650 Criminal law, 687 Crisis management planning, 347, 372, 379–381, 386, 395, See also Business continuity planning Cross-certification, 271 Cryptanalysis, defined, 220 Cryptanalysis and attacks, 136, 271–274 birthday, 263, 273 brute force, 272, See also Brute force attacks chosen ciphertext, 272 chosen plaintext, 272 ciphertext only, 271 dictionary, 273 differential power analysis, 273 factoring, 273 frequency analysis, 273 known plaintext, 271–272 password crackers, 134–136 rainbow table attacks, 136 random number generator, 274 replay, 273 reverse engineering, 273–274 social engineering, 272, See also Social engineering temporary files, 274 time-memory tradeoffs, 135–136 Cryptogram, 220 Cryptography and encryption, 6, 219–279, 575 access control, 129–130, 227, See also Access control Kerberos, 181–184, See also Kerberos authentication, 227, See also Authentication backup media data, 651, 655 CISSP® Candidate Information Bulletin, 760 CISSP expectations, 219 concepts and definitions, 220–221 cryptanalytical attacks, See Cryptanalysis and attacks elliptic curve cryptography, 258 emerging technology, 223 encryption management, 266–271, See also Encryption management financial institution standards, 268–269 hash functions, See Hashing algorithms
history, 222–223 household fallacies, 533 Internet and network security, 275–276 intrusion detection systems and, 196–197 Java security, 566 Kerckhoff’s law, 266–267, 543 key management, See Key management legal issues, 271 message integrity controls, 260–265, See also specific types methods and encryption systems, 229–259, See also Encryption methods and systems; specific methods or systems nonrepudiation, 227 protecting information, 225 protocols and standards, 275 public key, See Public key infrastructure quantum cryptography, 223–225 sample questions, 277–279, 728–731 security policies, 274 stored data, 225 transmitted data, 225 uses, 226 Cryptology, defined, 220 Cryptosystem, 220 Crystal box, 209 Customary law, 688 Customer relationship management (CRM), 179 Cybercrime, See Computer crime Cybernetics, 71 Cyber squatting, 490 Cyclic redundancy checking (CRC), 434, 599
D Daemons, 640 Damage assessment team, 377 Data backup, See Backup; Storage media Database administrator, 636 Database interface languages, 609–612 Internet-database connectivity, 612–613 Java Database Connectivity (JDBC), 610 OLE DB, 611–612 Open Database Connectivity (ODBC), 609–610 XML, 610–611 Database management systems (DBMS), 602–609 hierarchical, 604–605 hybrid object-relational, 609 network, 605
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® object-oriented, 603, 609, 622 relational, 603, 605–609 security issues, See Database security SQL, 607–608, See also Structured Query Language system accounts, 640, 641 thin storage, 310 Databases defining, 322, 837 knowledge discovery in (KDD), 625 Database security, 602 data warehousing, See Data warehousing DBMS architecture, 602–609, See also Database management systems DBMS controls, 620–624 data contamination controls, 623 grant and revoke access controls, 622 lock controls, 621 metadata controls, 623 object-oriented database security, 603, 622 online transaction processing, 623–624 view-based access controls, 622 interface languages, 609–612, See also Database interface languages knowledge management, 624–626 remote journaling, 659 rollback mechanisms, 647 security administrator privileges, 637 shadowing and mirroring, 370, 658 system administrator privileges, 636 threats and vulnerabilities, 617–620 access to restricted views, 618 aggregation, 617 bypass attacks, 617–618 concurrency, 618, 623 data contamination, 618 data interception, 619 deadlocking, 618, 621 denial of service, 618–619 improper modifications, 619 inference, 619 query attacks, 619 server access, 619 time of check/time of use, 619 unauthorized access, 620 Web security, 619–620 Data circuit-terminating equipment (DCE), 460 Data classification, 101–108 assurance, 107 benefits of, 102–103 confidential or secret, 106
declassification, 654 internal use only, 106 media labeling, 107 program for, 103–107 application and data owners, 105 auditing procedures, 106, 107 central repository, 106–107 classifying information and applications, 105–106 objectives, 103–104 organizational support for, 104 policy development, 104 process flow and procedure, 104 review and update, 107 standards, 104 standard templates, 105 support tool development, 105 user training, 107 public, 105–106 security classification vs. access rights, 615 Data control language (DCL), 608 Data custodian, security roles and responsibilities, 39 Data definition language (DDL), 608 Data diddlers, 587 Data Encryption Standard (DES), 237–247, 495 advantages and disadvantages, 242–244 brute-force key discovery, 272 cipher block chaining mode, 238–241, 264 cipher feedback mode, 241 counter mode, 241–242 double DES, 244–246 electronic codebook mode, 238 International Data Encryption Algorithm, 250 meet in the middle attack, 244–245 output feedback mode, 241 stream modes, 241 triple DES, 246–247, 275 Data hiding, 311, 330, 567, 568 Data keys (DKs), 268 Data-link layer (OSI layer 2), 409, 410, 433–450, See also Network security Address Resolution Protocol (ARP), 450 architecture, 433–434 DoD model, 413 Ethernet, 441–445, See also Ethernet logical link control, 410, 413 media access control, 410, 414 Point-to-Point Protocol (PPP), 450
technology and implementation bridges, 442–444 concentrators, 441 front-end processors, 442 hubs and repeaters, 442 multiplexers, 442 switches, 444–445 transmission technologies, 434–441, See also Transmission technologies wireless local area networks (WLANs), 445–450 Data manipulation language (DML), 608 Data marts, 613 Data mining, 616–617 unauthorized intelligence gathering, 142–143 Data mirroring, 657 Data owners, security roles and responsibilities, 39 Data remanence, 140–142, 651–652 Data storage, See Information storage Data terminal equipment (DTE), 461 Data warehousing, 613–617 data marts, 613 data mining, 616–617 metadata, 614–616 online analytical processing, 616 Web-based, 310 Deadlocking, 618, 621 Debriefing, 704–705 Debugging tools, 634 Decentralized access control, 192 Declassification, 654 Decoding, defined, 221 Decryption, 220 Defense Security Service (DSS), 341 Degaussing, 140, 651 Delphi method, 71 Demilitarized zone (DMZ), 417, 434 Denial of service (DoS) attacks, 6, 130–131, 417–418, 486, 588, 645 broadcast transmission technology and, 435 continuity of operations, 665–666 countermeasures, 417–418 database environment, 618–619 distributed DoS zombies, 131, 596 IP address spoofing, 474, See also Spoofing attacks penetration testing, 213, 214 SYN floods, 130–131, 418, 474, 483, 486 DES, See Data Encryption Standard Descartes' rule of change, 82 Detective controls, 646, 656
Deterrent controls, 647 DHCP, See Dynamic Host Configuration Protocol Dictionary attack, 273 Differential power analysis, 273 Diffie, Whitfield, 135, 253 Diffie-Hellman algorithm, 257–258, 276 Diffusion, 221 Digital communication technology, 423–424 Digital signatures, 227, 265–266, 314 Digital Signature Standard (DSS), 265–267 Digital subscriber line (DSL), 452, 457–459 ADSL, 458–459 Digital video disks (DVDs), 317 Digital video recorder (DVR), 297 Diode lasers, 429 Directive controls, 647 Directory Access Protocol (DAP), 530 Directory services, 174, 486, 487–493 Active Directory Service, 491 Domain Name System, 487–489 Lightweight Directory Access Protocol (LDAP), 174, 490–491 namespace-related risks cyber squatting, 490 domain litigation, 490 NETBIOS vulnerabilities, 491–492 Network Information Service, 492–493 server information disclosure, 489–490 Direct-sequence spread spectrum (DSSS), 431 Disaster, defining, 395 Disaster recovery planning (DRP), 6, 114–115, 337–339, 346–347, 647, 669, See also Business continuity planning assessment phase, 345, 354–363, See also Business continuity planning (BCP), assessment phase crisis management process integration, 372 data and software backup, 369–370, See also Backup design and development phase, 345, 363–366, See also Business continuity planning (BCP), design and development phase enterprise continuity planning and, 341–343 implementation and management, See Business continuity planning (BCP), implementation and management phase physical facility recovery, 371–372
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® project initiation phase, 344, 346–354, See also Business continuity planning (BCP), project initiation phase recovery alternatives, 367–369 recovery strategies for IT, 366–367 cold sites, 368, 663 full production backup capabilities, 367 hot sites, 368, 663 mobile sites, 368 multiple processing sites, 368 network recovery, 370–371 reciprocal or mutual aid agreements, 369 virtual business partners, 368–369 warm sites, 368 workspace and facilities, 368 recovery strategy development, 363–366, 371 recovery team, 377–378 backup assessment team, 377 contingency organization meeting, 382 damage assessment team, 377 recovery management team, 377 restoration team, 377 sample questions, 398–400, 737–740 service bureaus, 369 strategies for IT infrastructure, 370 terminology, 395–398 testing, maintenance, and training strategies, 373–374, 381–385, See also Business continuity planning (BCP), testing Discrete logarithmic algorithms, 257–258, 266 Discrete multitone (DMT), 458 Discretionary access control (DAC), 186–187, 621, 645 need to know, 648 Diskless workstations, 309–310 Distributed Component Object Model (DCOM), 486, 569 Distributed computing environment RPC (DCE RPC), 486 Distributed denial of service (DDoS) attacks, 131 zombies, 596 Distributed environments, 313–314, See also Network security Distributed object-oriented systems, 569–570 DomainKeys, 667
Domain name litigation, 490 Domain Name System (DNS), 408, 487–489, 529 IP address spoofing, See Spoofing attacks mail exchange records, 497, 529 masquerading, 137 query manipulation, 489 social engineering, 489 Domain security model, 635 Door design and materials, 295 Double-blind penetration testing, 213 Double DES, 244–246 Downloadable code, 132, See also Mobile code Drivers, 319 Driving record background check, 49 Drug and substance testing, 49 Dry pipe system, 292 DSL, See Digital subscriber line Dual-homed host, 433 Dublin Core metadata element set, 615–616 Due care, 695 Due diligence, 15, 695 Dumpster diving, 143–144, 595 Duplexing, 657 Duplex modes, 412 Durability, ACID test, 621 DVDs, 317 Dynamic Host Configuration Protocol (DHCP), 459, 479–480 Dynamic packet filtering, 466
E EAP, See Extensible Authentication Protocol Easter egg, 598 Eavesdropping, 419, See also specific methods E-carriers, 455–456 Education, See Training and education Education verification, 49–50 EEE2, 247 EEPROM (electrically erasable programmable ROM), 155 Effect analysis, 64 Eiffel, 567 Electrical circuit-based physical intrusion detection, 298 Electronic codebook mode (ECB), 238 Electronic Frontier Foundation (EFF), 241 Electronic vaulting, 369, 659 El Gamal, 258 Elliptic curve cryptography, 258
Index E-mail address spoofing, 498 Internet Message Access Protocol (IMAP), 500–501 mail exchange records, 497 mobile code, 132 open mail relay, 498–499 phishing, 137, 667 Post Office Protocol (POP), 500 Pretty Good Privacy (PGP), 275 security using cryptography, 274–275 S/MIME, 275 SMTP, 412, 497–498 social engineering, 145–146 spam, 129, 134, 498–500, 666–667 Trojan horse programs, 594, See also Trojans viruses, 591, See also Viruses X.500 standard, 530 zombie networks, 499 Emanations, 138–139 Emergency operations center (EOC), 347, 381, 392 Emergency power off (EPO) switches, 290 Emergency response planning (ERP), 397 Emerging technologies, maintaining awareness of, 30 Employee terminations, 50–51, 207 Employment agreements, 45, 119 Employment background check, 49 Encapsulating Security Payload (ESP), 276, 476 Encapsulation, 567, 568 Encoding, defined, 221 Encryption, defined, 220 Encryption management, 266–271 financial institution standards, 268–269 key management, 266–268, See also Key management public key infrastructure, 269–271 Encryption methods and systems, 229–259, 541 AES, See Advanced Encryption Standard alphabetic and polyalphabetic ciphers, 231–233 asymmetric algorithms, 253–260, See also Public key cryptography block ciphers, 228 cable modems, 459 code words, 235–236 concepts and definitions, 220–221 DES, See Data Encryption Standard end-to-end encryption, 226 household fallacies, 533
Kerberos, See Kerberos link encryption, 225, 410 modular mathematics, 233 one-time pads, 234–235 playfair cipher, 229–230 public key algorithms, 253–260, See also Public key cryptography rail fence, 230 rectangular substitution tables, 230–231 running key cipher, 233 security policies, 274 S-HTTP, 511 steganography, 235 stream-based ciphers, 227–228 substitution ciphers, 229 symmetric algorithms, 236–252, See also Symmetric key algorithms TKIP, 447–448 transposition ciphers, 230–231 watermarking, 235 wireless networks, 446–450, See also Wireless local area networks End systems, 468–471 End-to-end encryption, 226 End user, security roles and responsibilities, 38 Enigma machine, 222 Enterprise continuity planning, 341–343, See also Business continuity planning Enterprise JavaBean (EJB), 569, 570 Enterprise strategies, goals, and objectives, 28, 354, 384, 389 Enterprisewide security oversight committee, 34–38, See also Security council Entity integrity model, 607 Environmental controls, 662 Environmental security assessment, 358, See also Physical security EPROM (erasable programmable ROM), 155, 323 Equity, 84 Erasable programmable read-only memory (EPROM), 155, 323 Ethernet, 409, 413, 438–439, 441–445, 451 Ethical hacking, 208, See also Penetration testing Ethical issues, 71–87, See also Legal and regulatory issues bases for IT decision making, 82 avoid harm, 83 client/customer/patient choice, 83 compassion/last chance, 84
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® competition, 84 confidentiality, 84 Descartes’ rule of change, 82 equity, 84 evidentiary guidance, 83 Golden Rule, 82 Kant’s categorical imperative, 82 legalism, 83 no free lunch rule, 83 openness/full disclosure, 84 professionalism, 83 risk aversion principle, 83 slippery slope, 82 trustworthiness and honesty, 84 utilitarian principle, 83 change of scale test, 86 codes of conduct and resources, 78–87 CISSPs and, 84–86 Code of Fair Information Practice, 78 Computer Ethics Institute, 79 IAB and RFC 1087, 79 (ISC)2 Code of Ethics, 81–82 National Computer Ethics and Responsibilities Campaign, 80 National Conference on Computing and Values, 80 organizational plan of action, 82–84 Working Group on Computer Ethics, 80 common fallacies, 75–77 candy-from-a-baby, 77 computer game, 76 free information, 77 hacker’s, 77 law-abiding citizen, 76 shatterproof, 76 computer crime, See Computer crime; Malicious software; specific attacks or threats computers in the workplace, 74 conservation of ownership, 86 globalization, 75 hacking and hacktivism, 77–78 history of computer ethics, 71–72 informed consent, 86 intellectual property, 75 privacy, 75 professional responsibility, 75 “Ten Commandments” for computer ethics, 79–80 Evidentiary guidance, 83 Exclusive-or, 228 Executive management communicating risks to, 26
continuity planning support and leadership, 348–351 data classification program support, 104 information security reporting model, 31 security policy review and sign-off, 11 security roles and responsibilities, 38–39 techniques for gaining support of, 349 (table) Expert systems, 625–626 Exploratory model, 563 Extensible Authentication Protocol (EAP), 447–448, 450 EAP-PEAP, 448 EAP-TLS, 447 EAP-TTLS, 448 Extensible Markup Language (XML), 610–611 Extensible Messaging and Presence Protocol (XMPP), 502–503 Extranets, 463 Extreme programming, 564
F Facial biometrics, 162 Facilitated Risk Analysis Process (FRAP), 64 Factoring attacks, 256, 273 Failover network devices, 660 Failure modes, 64 False positives biometrics, 163, 296 drug tests, 49 e-mail filtering, 500 intrusion detection systems, 196, 198, 199, 200, 201, 668 triage, 701 vulnerability assessments, 58, 301 Federal Sentencing Guidelines for Organizations (FSGO), 73 Federated ID management, 180 Feistel, Horst, 237 Fences, 293 Fiber-distributed data interface (FDDI), 425, 440–441 Fiber-optic cables, 429–430, 643 Fiber optic carrier network, 456–457 File allocation table (FAT), 652 File infectors, 590 File sensitivity labels, 637 File system access controls, 97, 128 File Transfer Protocol (FTP), 151, 412, 413, 506–508 anonymous, 506
Index transfer modes, 506 trivial, 506–507 Financial institution encryption standards, 268–269 Finger, 517, 534 Fingerprint biometrics, 161 FIN scanning, 484 FIPS 180, 262 Fire exits, 290, 291 Fire extinguishers, 291–292 Fire prevention, detection, and suppression, 290–292 Firewalls, 94, 126–127, 408, 433, 464–468 access control, 643 filtering, 464 by address, 464 by service, 464–465 stateful inspection or dynamic packet, 466 static packet, 466 integrated automated intrusion responses, 202 malformed input attacks, 551 Network Address Translation, 465 personal, 467–468 Port Address Translation, 465 proxies, 466–467 application-level, 467 circuit-level, 466–467 three-legged, 434 tunneling vs., 505–506 Web application environment, 627, 628 Firmware, 323, 538 Floppy disks, 317 Foreign key, 606 Forensic programming, 578–580, See also Computer forensics FORTRAN, 544 Fraggle attacks, 475 Frame Relay network, 460–461 FRAP, 63 Free information fallacy, 77 Free Software Foundation, 75 Freeware, 692 Frequency analysis, 273 Frequency division multiple access (FDMA), 432 Frequency-hopping spread spectrum (FHSS), 431, 450 Front-end processors, 442 FTP, See File Transfer Protocol Full backup, 658 Full disclosure, 543 Full production backup capabilities, 367
G Garcia-Jay, Timothy, 82 General protection fault, 572 Generic routing encapsulation (GRE), 479 Global area network (GAN), 463, See also Internet security; Wide area network (WAN) technologies Global Information Assurance Certification (GIAC), 56 Globalization, 75 Global positioning system (GPS), 315 Global Service for Mobile Communication (GSM), 432–433 Golden Rule, 82 Good business practice standards, 341 Google Talk, 504 Governance, See Information security governance Graham-Denning model, 328 Gramm-Leach-Bliley (GLB) Act, 402 Grant and revoke access controls, 622 Grupe, Fritz H., 82 Guard post, 288 Guidelines, 16, 17, See also specific types
H Hacking, 77–78, See also Computer crime; Malicious software; Software threats and vulnerabilities hacker ethic, 77, 78 penetration testing (“ethical hacking”), 208 Halon fire extinguishers, 291, 292 Hamming error correction code, 657 Hand biometrics, 161, 166 Handwriting dynamics, 162 Hard-copy security, 143–144, 650–651 Hard drives data remanence, 140–142, 651–652 RAID, 370, 468, 578, 647, 657–658 wiping data from, 141–142 Hardware, 538, 547, 642–644, See also specific types access control, 642–644 change control management, 669–670 continuity of operations, 660 firmware, 323, See also Read-only memory hot and cold spares, 660 inventory, 670 OSI layer 1, 410, 423–432, See also Physical layer
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® redundant components, 660 security architecture and design, 311–319 Harrison-Ruzzo-Ullman model, 328 Hashing algorithms, 134, 260–263, 576, 709 attacks on, 134–135, 263, 273 HAVAL, 262 HMAC, 264–265 MD5 message digest, 261–262 password crackers, 134–135, 152 RIPEMD-160, 262 SHA and SHA-1, 262 HAVAL, 262 Health Insurance Portability and Accountability Act (HIPAA), 66, 401–402 Heating, ventilation, and air conditioning (HVAC), 290, 662 Heisenberg’s uncertainty principle, 224 Hellman, Martin, 135–136, 253 Help desk administration, security roles and responsibilities, 41 Help desk fraud, 146–147 Heuristic scanners, 599 Hierarchical database management model, 604–605 High-level programming languages, 544, 546 HIPAA, 401–402 Hiring practices, 44–51, 118, 207, See also Background checks; Personnel security HMAC, 264–265 Hoax warnings, 593 Host, 170 Host-based intrusion detection systems, 194, 197–198, 421–422, 668 Host-based scanning, 651, 669 Host identification system (HIDS), 159, 194 Hot sites, 368, 663 Hot spares, 660 HTTP, See Hypertext Transfer Protocol Hubs, 442 Human resources, See Personnel security Human resources department, security council representation, 37 Humidity control, 662 Hurricane Katrina, 339, 340 HVAC issues, 290, 662 Hypertext Markup Language (HTML), 610 Hypertext Transfer Protocol (HTTP), 126, 412, 508–511 anonymizing proxies, 509 authentication, 508
content filtering, 509 HTTP over TLS (HTTPS), 510 HTTP tunneling, 505–506, 508, 510 open proxy servers, 509 Secure HTTP (S-HTTP), 497, 511 Session Initiation Protocol, 519 Web application security, 628
I IBM Lotus Instant Messenger, 503–504 ICMP, See Internet Control Message Protocol Identification and identity management, 147–148, 170–171, See also Access control authentication types, 149–150, See also Authentication biometrics, See Biometrics challenges, 172–173 identification types, 148–149 new user profile, 172 physical badges, items or devices, 125, 296 system user management, 121 technologies, 173–179 account management, 178–179 directories, 174 password management, 175–176, See also Passwords profile update, 179 single sign-on, 176–177, 179–181 Web access management tools, 174–175 user ID guidelines, 148–149 username password combinations, See Passwords IEEE standards, 14 data-link layer, 410 802.11, 438, 446 802.11a, 449 802.11b, 449 802.11g, 449 802.11i, 447–448 802.15, 449–450, See also Bluetooth 802.16, 462 802.1X, 447–448 802.3, 438, See also Ethernet 802.5, 439–440, See also Token Ring IGMP, See Internet Group Management Protocol IMSI catcher, 432–433, 526 Incident command system (ICS), 347–348
Index Incident response and evaluation, 201–204, 416, 698–704 access control recovery, 111 alarms and signals, 203 analysis and tracking, 702–703 CERT/CC model, 700 containment, 701–702 contingency plan, 663–665 continuity planning, See Business continuity planning debriefing/feedback, 704–705 definition, 698–699 false positives, 668, 701 forensics, See Computer forensics incident command system, 347–348 information security officer responsibilities, 29 intrusion response system integration, 201–203, 204 investigative phase, 701 ISO/IEC 17799, 309 recovery phase, 703–704 response capability, 699 response teams, 29, 41, 699–700 security policy review and revision, 12 triage, 700–701 Incremental backup, 658 Information auditing, 583–584 Information classification, See Data classification Information flow models, 325, 328 Information protection management, 584 Information resource contingency plans, 397 Information security architecture, See Security architecture and design Information security governance, 7–8 audit frameworks for compliance, 17–19, See also Audit COBIT, 18 COSO, 17 ISO 17799/BS 7799, 18–19 ITIL, 18 baselines, 15–16 combining/differentiating policies, standards, baselines, procedures, guidelines, 16–17 definition, 8 guidelines, 16 policies, 9–12, See also Security policies procedures, 14 standards, 13 Information security management, 1 business case for, 4
CISSP® Candidate Information Bulletin, 758–759 CISSP® expectations, 2–4 confidentiality, availability, and integrity, 5–7 costs and funding, 4, 27 enterprisewide security oversight committee, 34–38, See also Security council establishing unambiguous roles, 42 ethics, See Ethical issues governance, See Information security governance information security officer responsibilities, 26–30, See also Information security officer (ISO) responsibilities internal control standards, 3–4, See also Information security governance ISO/IEC 17799, 308–309 organizational behavior, 19–27 best practices, 22 job controls, 23–27, See also Access control; Personnel security organizational structure evolution, 20–21 planning, 42–44 operational and project, 43–44 strategic, 43 tactical, 43 privileged entity controls, 633–642, See also Access control promoting security awareness, See Security awareness risk management, See Risk management sample questions, 88–92, 719–724 security management practice, 7 security roles and responsibilities, 38–42, See also Security roles and responsibilities tools, 1–2 Information security management, reporting model, 22, 33–34 administrative services, 33 business relationships, 31 corporate security, 32–33 determining best fit, 34 executive management, 31 insurance and risk mgt, 33 internal audit, 33–34 IT department, 32 legal department, 34
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® Information security officer (ISO) responsibilities, 21, 22, 26–30, 39 auditor assistance, 30 awareness program development, 28 budget, 27–28 business objective understanding, 28 compliance program development, 29 emerging technology awareness, 30 incident evaluation and response, 29 management meetings, 30 policy development, 28 regulatory compliance, 30 reporting model, 22, 31–34, See also Information security management, reporting model risk communication, 26 security administrator privileges, 637–640, See also Security administrator, roles and responsibilities security council representation, 37 security metrics, 29 threats and vulnerabilities awareness, 29 Information security policies, See Security policies Information services Finger, 517 proprietary applications and services, 520 Session Initiation Protocol (SIP), 519 time synchronization, 517–518 Information storage, 166, 316–318, See also Database management systems data and software backup, See Backup duplication of, 102 media, See Storage media RAID, See Redundant array of inexpensive (or independent) disks thin storage, 310 Information systems professionals, security roles and responsibilities, 40 Information technology (IT) department disaster recovery strategies, 366–367, See also Disaster recovery planning information security organization and, 22 information security reporting model, 32 security council representation, 37 Information technology professionals, security roles and responsibilities, 40
Information Technology Security Evaluation Criteria (ITSEC), 330 Informed consent, 86 Inheritance, 567, 568–569 Initialization vector, 221, 239, 274 Inner perimeter, 286–287 Input/output (I/O) controls, 650 character checks, 583 NETBIOS vulnerabilities, 491–492 noninterference models, 325 Input/output (I/O) devices, 308, 318–319 front-end processors, 442 Input/output errors, 665 Instant messaging, 502–506, See also Chat applications security confidentiality, 504–505 Extensible Messaging and Presence Protocol, 502–503 Internet Relay Chat, 502, 503 Jabber, 502–503 proprietary applications and services, 503–504 spam over instant messaging, 505 Institute of Electrical and Electronics Engineers (IEEE), 14, See also IEEE standards Insurance department, information security reporting model, 33 Integrated circuit card (ICC), 155, See also Smart cards Integrated Services Digital Network (ISDN), 418, 437, 454–455 Integrity, 6, 94, 226, See also Confidentiality, availability, and integrity Biba model, 326, 416 Clark-Wilson model, 324, 326 entity model, 607 network security issues, 419 referential model, 607 relational database model, 606–607 Integrity-checking software, 600 Intellectual property, 75, 490, 690–691 copyright, 691 licensing issues, 691–692 open source code, 542 patent, 690 peer-to-peer sharing vs., 512 software copy control, 644 trademark, 690–691 trade secret, 691 Internal audit department, information security reporting model, 33–34 “Internal use only” information classification, 106
Index International cooperation, 697–698 International Data Encryption Algorithm (IDEA), 250 International Mobile Subscriber Identifier (IMSI), 432–433 International Organization for Standardization (ISO), 14, See also specific ISO standards Internet Architecture Board (IAB), 79 Internet Assigned Numbers Authority (IANA), 483, 528 Internet computing model, 612 Internet connectivity architecture, 319, 452–462, See also Wide area network (WAN) technologies URLs, 487 Web proxy servers, 467 Internet Control Message Protocol (ICMP), 410, 413, 414, 480–481, 483 ping of death, 480–481 ping scanning, 481 redirect attacks, 481 Smurf attacks, 475 traceroute exploitation, 481 Internet Corporation for Assigned Names and Numbers (ICANN), 472, 528 Internet Engineering Task Force (IETF), 667 Internet Group Management Protocol (IGMP), 413, 414, 436, 481–482 Internet key exchange (IKE), 477 Internet Message Access Protocol (IMAP), 500–501 Internet Protocol (IP), 410, 413, 471–475, See also TCP/IP attacks on Fraggle, 475 fragmentation (teardrop), 473 overlapping fragment, 474 Smurf, 475 source routing exploitation, 474 spoofing, 136–137, 474, See also Spoofing attacks IP versions, 473 Internet Relay Chat (IRC), 502, 503, 594 Internet security, 463 application layer architecture and technologies, 497–519, See also Application layer chat and instant messaging, 502–506, See also Instant messaging database connectivity, 612–613 database management systems and, 619–620 data exchange (WWW), 506–512
e-mail, See E-mail hidden Web servers, 527 IPSec, See IPSec IP vulnerabilities, See Internet Protocol mobile code controls, 580–582 newsgroups, 501–502 online analytical processing, 616 online transaction processing, 623–624 remote authentication service (RADIUS), See RADIUS SSL, See Secure Socket Layer Web access management tools, 174–175 Web application environment, 626–628 Internet Storm Center, 527 Internet Worm of 1988, 592 Interpreted programming languages, 546–547 Intranets, 463 Intrusion detection systems (IDSs), 138, 194–204, 421–422, 433, 668 analysis engine methods, 198–201 anomaly detection, 198, 200–201 pattern matching, 198, 199 antivirus capabilities, 598 change detection, 599–600 encrypted information and, 196–197 false positives, 196, 198, 199, 200, 201, 668 host-based, 194, 197–198, 421–422, 668 IDS management, 204–205 intrusion prevention systems versus, 195, 196 network-based, 194, 196–197, 421–422, 668 physical access monitoring, 298–299 electrical circuit, 298 light beam, 298 microwave and ultrasonic systems, 298–299 passive infrared detector, 298 video monitoring, 296–298 responses, 201–204, See also Incident response and evaluation scanning, See Scanning signatures, 199 tuning, 196 video monitoring, 296–298 Web application environment, 627 Intrusion prevention systems, 129, 194–196 IP addresses, 472 Dynamic Host Configuration Protocol (DHCP), 459, 479–480 spoofing, See Spoofing attacks
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® IPSec, 241, 275–276, 410, 475–477 authentication header, 475–476 encapsulating security payload, 476 Internet key exchange, 477 security associations, 476 transport and tunnel modes, 476–477 IPv4, 473 IPv6, 473 Iris scans, 161 (ISC)2 Code of Ethics, 81–82 ISDN, See Integrated Services Digital Network Islamic law, 689 Isolation, ACID test, 621 ISO standards, 14, See also specific standards ISO 7498-1, 409 ISO 7816-2, 157 ISO 8732, 247 ISO 9000, 19, 555 ISO 17799, 18–19, 308–309, 341 ISO 27001, 309 ISO Common Criteria, 331 Iterative development, 562 IT Governance Institute (ITGI), 8 IT Infrastructure Library (ITIL), 18 ITSEC, 330
J Jabber, 502–503 JAP, 509 Java, 567 Java applets, 132, 511, 547, 552 Java Authentication and Authorization Service (JAAS), 566 Java Cryptography Extension (JCE), 566 Java Database Connectivity (JDBC), 610 Java Remote Method Invocation (JRMI), 569, 570 JavaScript, 512, 547 Java Secure Socket Extension (JSSE), 566 Java security, 564–566, See also ActiveX sandbox model, 565–566, 581–582 static type checking, 582 Java Virtual Machine (JVM), 314, 547 Jerusalem, 590 Job and task specialization, 20 Job descriptions, 44–45 Job position sensitivity, 25–26, 99–100, 118 Job rotation, 23 Johnson, Deborah, 72, 75 Joint analysis development (JAD), 563 Junkie virus, 592
K Kant’s categorical imperative, 82 Kerberos, 181–184, 268, 408, 491, 495 Java security applications, 566 Kerckhoff’s law, 266–267, 543 Kernel mode, 572, 574 Key, defined, 220 Key management, 236, 266–268, See also Public key infrastructure distribution centers, 268 key recovery, 267 symmetric algorithms and, 252 Keypad locks, 294 Keys and locks, facility doors, 293–295 Key space, defined, 220 Keystroke logging, 134, 206, 649 Keystroke pattern authentication, 162 Knowledge discovery in databases (KDD), 625 Knowledge management, 624–626 Kuechler, William, 82
L L0pht, 135 Laptop computers, See Notebook computer security Lattice models, 324 Law-abiding citizen fallacy, 76 Law enforcement, 41, 664 Layer 2 tunneling protocol (L2TP), 479 Layering, 311, 329–330 Least privilege, 25, 101, 123, 207, 648, 671 Legal and regulatory issues, 683–715 CISSP® Candidate Information Bulletin, 765 CISSP® expectations, 684–685 computer crime, 74, 695–697, See also Malicious software; specific attacks or threats continuity planning issues, 340, 401–404 cryptography and, 271 cyber squatting, 490 domain name litigation, 490 due diligence, 15, 695 incident response, 698–704, See also Incident response and evaluation intellectual property, 75, 490, 512, 690–691, See also Intellectual property international cooperation, 697–698 ISO and ensuring compliance, 30
Index IT laws and regulations, 690 liability, 694–695 Patriot Act, 402–404 privacy, 75, 692–694 risk assessment, 57–58 sample questions, 715–717, 752–755 software forensics, 578–580 video monitoring, 298 Legal department information security reporting model, 34 security council representation, 37 Legalism, 83 Legal systems, 685–689 administrative law, 687 civil law, 688 common law, 686–688 criminal law, 687 customary law, 688 mixed law, 689 religious law, 689 tort law, 687 Lexan, 295 Liability, 694–695 Licensing issues, 691–692 Light beam-based physical intrusion detection, 298 Light-emitting diodes (LEDs), 429 Lightweight Directory Access Protocol (LDAP), 174, 490–491 Link layer encryption, 225, 410 Linus’s law, 542 Linux systems, 312 Lion worm, 593 Literary analysis, 578–580 Local area networks (LANs), 120, 319 Ethernet, See Ethernet layer 2 technology, 441–445 bridges, 442–444 front-end processors, 442 hubs and repeaters, 442 multiplexers, 442 switches, 444–445 virtual, See Virtual local area networks (VLANs) wireless LANs (WLANs), 445–450, See also Wireless local area networks Locking systems (physical entry points), 293–295 Logging, 97, 110, 120, See also Audit logging access control assurance, 205–207 anomaly matching, 198, See also Intrusion detection systems
chain of custody, 205 keystroke, 134, 206, 649 log file analysis, 702 log review, 120–121 online transaction processing, 624 Logical link control (LLC), 410, 413 Logic bombs, 596–597, 646 Log-in Trojans, 593–594 Loveletter, 591, 592–593
M Machine language, 543, 544–545 Macintosh systems, 312 Macro viruses, 592 Magistr, 593 Magnetic media, 317, 650, See also specific types; Storage media data erasing, 651–653 redundant array of independent tapes, 658 Mail exchange (MX) records, 497, 529 Mail transfer agent, 500 Mail user agent, 500 Mainframe computer systems, 21, 311–312, 469 batch processing, 192 joint analysis development, 563 log-in Trojans, 593 network security issues, 469 operating systems, 320 polling, 438 system operator privileges, 633 Maintenance and service issues, 301 Maintenance hook, 553 Malformed input attacks, 551 Malicious software (malware), 129, 133–134, 551, 586–597, 646 adware, 597 antivirus management, 129, 598–601, 650 continuity of operations, 666 cost to business, 538 DDoS zombies, 596 debugging tools, 634 hoax warnings, 593 logic bombs, 596–597, 646 malware assurance, 601 mobile code, See Mobile code notebook computers and, 470 pranks, 597–598 privilege escalation, 634 programming errors vs., 587
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® protection against, 129, 598–601, See also Software protection mechanisms activity monitors, 599 antimalware policies, 600–601 change detection, 599–600 scanners, 599, 650, 651 remote-access Trojans (RATs), 588, 595–596 removable media and, 654 social engineering and, 553, See also Social engineering software forensics, 578–580 spyware, 132, 133, 512, 597, 666 Trojan horse programs, See Trojans unauthorized software copying/distribution issues, 644, 671 viruses, See Viruses worms, See Worms Malware, See Malicious software Management meetings, 30 Mandatory access control (MAC), 187, 621 Bell-LaPadula model, 325–326 need to know, 648 Maner, Walter, 72 Man-in-the-middle attacks, 137, 417 ARP and, 450 ICMP redirect attacks, 481 IMSI catcher, 432–433 Mantraps, 293 Markup language, 610–611 MARS, 248 Masquerading attacks, 136–137, 644, See also Spoofing attacks Master agreements, 692 Master key-encrypting keys (KKMs), 268 Master keys, 268 MBone, 484 McGraw, Gary, 628 MD4, 262 MD5, 261–262, 272, 276 Media, See Storage media Media access control (MAC), 410, 414, 446 Media library, 655, 673 Meet in the middle attack, 244–245 Melissa virus, 580, 588, 591, 592 Memory, 155, 316–318 buffer overflow, See Buffer overflows RAM, See Random-access memory ROM, See Read-only memory SDRAM, 316 virtual, 317–318 Memory, corporate or organizational, 624
Memory cards, 153–154, See also Smart cards Mergers and acquisitions, 19, 110 Mesh network, 425–426 Message authentication code (MAC), 264 HMAC, 264–265 Metadata, 614–616 Metadata controls, 623 Metropolitan area network (MAN), 462 Michelangelo, 590 Microsoft ActiveX, See ActiveX Microsoft Challenge Handshake Authentication Protocol (MSCHAPv2), 479 Microwave physical intrusion detection, 298–299 Middleware, 323, 469, 549 Mifare, 159 Mirroring data (RAID), 657 database, 370 disk, 578 Mission statement, 35–36 Mitnick, Kevin, 131, 136 Mixed law, 689 Mobile agents, 551–552 Mobile code, 132–133, 134, 551–552, 580–582, See also Malicious software Mobile device security, 299, 314 Mobile sites, 368 Mobile telephony, 432–433, See also Cellular phone technology Modems, 452–453 cable modems, 459 international issues, 535 security vulnerabilities, 452–453 V.92 vs. V.90, 431 war dialing, 214 Modified prototype model (MPM), 562 Modular mathematics, 233 Monolithic operating systems, 572–573 Moor, James, 72 Morris Worm, 534 MSN Messenger, 504 MS Word, 580 macro viruses, 592 Multicasting, 414, 435–436 Internet Group Management Protocol (IGMP), 414, 436, 481 MBone protocol, 484 Multipartite viruses, 591–592 Multiple processing sites, 368 Multiplexers, 442
Multiprocessing system, 316 Multitasking, 315 Multithreading, 316 Multiuser domains (MUDs), 502
N Napster, 512 National Computer Ethics and Responsibilities Campaign (NCERC), 80 National Conference on Computing and Values, 80 National Institute of Standards and Technology (NIST), 14, 66, 67, 247, 341 certification and accreditation process, 584 National Standard on Preparedness, 341 Natural disasters, 340 Need to know, 25 Nessus, 423 NetBIOS, 491–492 NetBIOS Extended User Interface (NetBEUI), 530 Netbus, 595 Network access, 170 Network access server (NAS), 170 Network access services, 493–495 remote-access services, 126–127, 514–517 Network Address Translation (NAT), 465, 519 Network-based intrusion detection systems, 194, 196–197, 421–422, 668 Network-based scanning, 651, 669, 670 Network Basic Input Output System, See NETBIOS Network database management model, 605 Network File System (NFS), 412, 493–495 Network Information Service (NIS and NIS+), 492–493 Network interface cards (NICs), 433 Network interface device (NID), 458 Network layer (OSI layer 3), 409, 410–411, 414, 436, 450–481, 464 audit event types, 206 DoD model, 413 end systems, 468–471 firewalls, 464–468, See also Firewalls global area network, 463, See also Internet security
ICMP, See Internet Control Message Protocol IGMP, See Internet Group Management Protocol IP, See Internet Protocol IP addressing using DHCP, See Dynamic Host Configuration Protocol IPSec, See IPSec LANs, See Local area networks metropolitan area network, 462 routers, 464 Routing Information Protocol, 410, 481–482 subnet mask, 472–473 tunneling, See Tunneling Virtual Router Redundancy Protocol, 482 VPNs, See Virtual private networks WAN technologies, 452–462, See also Internet security; Wide area network (WAN) technologies Network management protocol (SNMP), 513–514 Network models LANs, See Local area networks OSI reference model, 407, 408, 409–412, See also Open System Interconnect (OSI) model TCP/IP, 407, 408, 413, See also Internet Protocol; TCP/IP WAN technologies, See Wide area network (WAN) technologies Network news, 501–502 Network News Transfer Protocol (NNTP), 501–502 Network scanners, 422–423, See also Scanning Network security, 117, 319–320, 407–408, 414–423, 526, See also Internet security access control, 126–127, 416–417, 420–421, See also Access control Kerberos, 181–184 threats, See Access control threats attacks on, 414–421, See also Denial of service (DoS) attacks; Malicious software (malware); specific types of attacks access control, 416–417, See also Access control availability, 417–418 confidentiality, 419 integrity, 419 methodology, 419–421
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® audit event types, 206 botnet (robot network), 596 business contingency plans, 418 CISSP® Candidate Information Bulletin, 763 CISSP® expectations, 408 continuity of operations, 660–661 data-link layer, See Data-link layer disaster recovery strategies, 370–371 distributed object-oriented systems, 569–570 encryption usage, See also Cryptography and encryption; Encryption methods and systems end-to-end encryption, 226 IPSec, 241, 275–276, 410, See also IPSec link encryption, 225 SSL/TLS, 112, See also Secure Socket Layer firewalls, See Firewalls hardware architecture and technology, 428–432, See also Hardware; Physical layer; specific components, technologies incident response capabilities, 416, See also Incident response and evaluation intrusion detection systems, 194, 196–197, See also Intrusion detection systems key concepts, 416 LANs, See Local area networks modem vulnerabilities, 452–453, See also Modems networks and IT security, 414–416 operating systems and, 468–469 OSI reference model, See Open System Interconnect (OSI) model portable devices or computers, 470–471 remote-access Trojans (RATs), 588, 595–596 sample questions, 521–525, 740–746 scanning, See Scanning security architecture, 415–416, 433–434 bastion host, 433–434 boundary routers, 433 DMZ, 417, 434 dual-homed host, 433 hardware technology, 428–432 partitions and security perimeters, 320, 433
1048
security domains, 416 security perimeter, 433 security objectives and attack modes, 416–419 server security, 469, 489–490, See also Servers smart phones, 471 software issues, See Application layer; Application security; Software threats and vulnerabilities; specific applications, issues, technologies system administrator privileges, 636 tools, 421–423, See also Intrusion detection systems WAN technologies, See Internet security; Wide area network (WAN) technologies wireless network testing, 213, 214 workstations, 469–470 Network Time Protocol (NTP), 517–518 Network topology, 424–427 bus, 424 mesh, 425–426 ring, 425 star, 427 subnets, 472–473 transmission technologies, See Transmission technologies tree, 424–425 Network video recorder (NVR), 297 Neural networks, 625 Newsgroups, 501–502 NFPA 1600, 341 Nimda worm, 527 NIS, See Network Information Service NMap, 210, 423 No free lunch rule, 83 Nondisclosure agreements, 45, 119 Noninterference models, 325 Nonrepudiation, 220, 227, 254 Notebook computer security, 283, 289, 299, 417, 470–471 NULL scanning, 485
O Oakley/ISAKMP, 276 Object Linking and Embedding Database (OLE DB), 611–612 Object Management Group (OMG), 569 Object-oriented database security, 603, 609, 622
Index Object-oriented technology and programming, 566–568 distributed systems, 569–570 security, 568–569 encapsulation or data hiding, 330, 567, 568 inheritance, 567, 568–569 polyinstantiation, 567, 568–569 polymorphism, 567, 568 Object-relational database system, 609 Object Request Broker (ORB), 569 Object reuse, 139–140, 551, 563, 651–653 OCC Banking Circular 177, 404 OCTAVE, 62–63 Oechslin, Philippe, 136 Off-site storage, 369–370 OLE DB, 611–612 One Half virus, 592 One-time pads, 234–235 Online analytical processing (OLAP), 616 Online transaction processing (OLTP), 623–624 Opcodes, 544–545 Open Database Connectivity (ODBC), 609–610 Open message, 254 Openness/full disclosure, 84 Open Network Computing Remote Procedure Call (ONC RPC or SunRPC), 486 Open Shortest Path First (OSPF), 410 Open source, 312, 313, 542–543 Open SSL, 497 Open system authentication, 445 Open System for Communication in RealTime (OSCAR), 504 Open System Interconnect (OSI) model, 407, 408, 409–412 layer 1, 409, 410, 413–414, 428–432, See also Physical layer layer 2, 409, 410, 414, 433–450, See also Data-link layer layer 3, 409, 410–411, 414, 450–481, See also Network layer layer 4, 409, 411–412, 414, 482–486, See also Transport layer layer 5, 409, 412, 486–494, See also Session layer layer 6 (presentation layer), 409, 412, 495–496 layer 7, 409, 412, 497–519, See also Application layer
Operating systems, 310, 320–321, 538, 547–548 access control, 128, 310–311 application development security issues, 538, 541 backup and recovery resources, 577–578 memory protection, 574 monolithic, 572–573 network security issues, 468–469 security administrator privileges, 637 security kernels, 320–321, 571–572 system states, 321 user and kernel processor access modes, 572, 574 Operational and project security planning, 43–44 Operations security, 633–678 CISSP® Candidate Information Bulletin, 764 continuity of operations, See Operations security, continuity of operations control methods, 648–650, See also Access control control types, 646–647 compensating, 647 corrective, 647 detective, 646 deterrent, 647 directive, 647 preventative, 646 recovery, 647 ISO/IEC 17799, 309 media types and protection methods, 650, See also Storage media misuse prevention, 654–655 physical security, See Physical security privileged entity controls, 633–642, See also Access control record retention, 655 resource protection, 642–644 documentation, 644 facilities, 642 hardware, 642–644 software, 644 sample questions, 678–681, 748–752 sensitive media handling, 653–654 threats, See also specific threats corruption and modification, 645 destruction, 645 disclosure, 645 espionage, 646 hackers and crackers, 646 interruption and nonavailability, 645
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® malicious code, 646, See also Malicious software theft, 645 Operations security, continuity of operations, 655–669 application security, 665–667 denial of service attacks, 665–666 intrusion, 666 malware, 666 spam, 666–667 spyware, 666 business continuity planning, 669, See also Business continuity planning communications, 660–661 contingency plan, 663–665 data protection, 657–659 facilities, 661–663 fault tolerance, 656–657 hardware, 660 hardware inventory, 670 input/output errors, 665 intrusion detection systems, 668, See also Intrusion detection systems physical security, 663–664, See also Physical security power failure, 661–662, 664, See also Power failure problem management, 663–667 production delay, 665 redundancy and backup, 656–659, 663, See also Backup software, 659–660 system component failure, 664 system recovery, 667 tampering, 664–665 telecommunications failure, 664 vulnerability scanning, 668–669 Optical carrier-1 (OC1), 457 Optical media, 317, 650, See also specific types destroying, 652–653 Optical wireless, 462 Orange Book, 323–324, 325, 329 Ordinary user accounts, 641–642 Organizational ethics plan of action, 82–84 Organizational memory, 624 Organization for Economic Co-operation and Development (OECD), 693 Orthogonal frequency division (OFDM), 432, 449 OSCAR, 504 OSI reference model, See Open System Interconnect (OSI) model
Outermost perimeter, 286, 292 Output, 319 Output controls, 650 Output feedback mode (OFB), 241
P Packet-switched networks, 437, 460 PALM, 314 Palm biometrics, 161 Parameter checking, 548–549, 573 PARASCAN, 597 Parker, Donn B., 71, 86 Pascal, 582 Passive infrared detector (PID), 298 Passphrases, 151 Password Authentication Protocol (PAP), 450 Password crackers, 134–136, 150, 151–152 Passwords, 134, 150–152, 637–638 access control policies and procedures, 122–123 biometric authentication vs., 163 brute-force attacks, 122 guidelines, 150–151 hashing, 134, 152, 576 help desk fraud, 146 management systems, 175–176, 637–638 password files, 644 protection techniques, 575–576 single sign-on systems, 177, 180–181 SNMP, 514 storage, 151–152 system administration accounts, 641 virtual private network authentication, 479 Patch management, 115–116, 420, 585, 673–677 access control lists and, 676 Patent, 690 Patriot Act, 402–404 Pattern matching intrusion detection, 199 Peer-to-peer applications and protocols, 314, 512 Penetration testing, 208–215, 301, 649 blind and double-blind, 212–213 documenting findings, 212 external vs. internal, 212 information provisioning, 208–209 methodology, 209–211 enumeration (network or vulnerability discovery), 210 exploitation, 211 reconnaissance/discovery, 209
targeted, 213 testing strategies, 212 types, 213 application testing, 213–214 DoS testing, 213, 214 PBX and IP telephony, 213, 215 social engineering, 213, 214 war dialing, 213, 214 wireless network, 213, 214 vulnerability analysis, 210–211 Peripheral devices, 308, 318–319 Perl, 567 Permanent virtual circuits (PVCs), 437 Permutation, 221 Personal digital assistants (PDAs), 314–315, 471 Personal firewalls, 467–468 Personal identification number (PIN), 148, 160 Personnel barriers, 293 Personnel security, 44, 117–119 account characteristics, 638 assessment for continuity planning, 357–358 clearances, 637 employee terminations, 50–51, 207 hiring practices, 44–51, 207 background investigations, 46–50, 117–118, 649–650, See also Background checks employment agreements, 45, 119 job descriptions, 44–45 reference checks, 45–46 identity management, See Identification and identity management ISO/IEC 17799, 308 new user profile, 172 ongoing supervision, 50 privilege control, See Access control security profiles, 638–639 separation of duties, See Separation of duties temporary hires, 118 vacations, 25 Phishing, 137, 595, 667 Phreaking, 78, 452 Physical access controls, 124–125 Physical layer (OSI layer 1), 409, 410, 428–432, See also Hardware cables, 427–429 communication technology, 423–424 modems, 430–431, See also Modems network topology, 424–427
patch panels, 430 wireless transmission technologies, 431–433 Physical security, 281–302, 642 assessment for continuity planning, 357–358 audits, exercises, and tests, 300–301 boundary protection, 292–293 CISSP® Candidate Information Bulletin, 760–761 CISSP expectations, 282 communications channels, 289–290 continuity of operations, 661–663 corporate security organization, 32–33 education, training, and awareness, 301 entry controls, 124–125, 293–299 asset and risk registers, 299–300 keys and locks, 293–295 portable device security, 299 video monitoring, 296–298 walls, doors, and windows, 295–296 environmental controls, 290, 662 facility recovery strategy development, 371–372 fire prevention, detection, and suppression, 290–292 guards, 288–289 health and safety requirements, 290 information protection and management services, 300–302 infrastructure support systems, 290 intrusion detection systems, 298–299, See also Intrusion detection systems ISO/IEC 17799, 308 law enforcement relationship, 41, 664 layered defense model, 286–289, 302 maintenance and service issues, 301 managed services, 300 methods and tools, 288 operations, See Operations security procedural controls, 288–290 sample questions, 303–305, 731–734 security roles and responsibilities, 41 security zones, 125 site location and infrastructure, 285–286 threats and vulnerabilities, 283–285 accidental, 285 environmental, 283–284 malicious, 284 physical break-in, 664 visitors and, 124–125, 289 vulnerability and penetration tests, 301 Physiological biometrics, See Biometrics
OFFICIAL (ISC)2® GUIDE TO THE CISSP® CBK® Ping of death, 480–481 Ping scanning, 481 Plagiarism detection, 580 Plaintext, 220 cryptanalysis attacks chosen plaintext, 272 known plaintext, 271–272 Playfair cipher, 229–230 Plonk, 532 PocketPC, 315 Point coordination function, 438 Point-to-point lines, 455 Point-to-Point Protocol (PPP), 437, 450 Point-to-point tunneling protocol (PPTP), 479 Policies, security, See Security policies Polling, 438 Polyalphabetic ciphers, 231–233 Polyinstantiation, 567, 568–569 Polymorphism, 567, 568 POP (Post Office Protocol), 500 Pornography, 594 Portable computers or devices, 283, 289, 299, 314, 417, 470–471 Port Address Translation (PAT), 465 Port mapper, 487 Port numbers, 483, 529 Port scanning, 210, 484 Post Office Protocol (POP), 500 Power failure, 283–284, 656, 661–662, 664 uninterruptible power supplies, 290, 661–662 Pranks, 597–598 Presentation layer (OSI layer 6), 409, 412, 495–496 Pretty Good Privacy (PGP), 275 Preventative controls, 646 Primary key, 606 Primary rate interface (PRI), 454 Primary storage, 316–317 Prime number-based encryption algorithms, 256, 257, 273 Print cover sheets, 650 Printer access, 643 Privacy, 5, 75, See also Confidentiality continuity planning issues, 402 laws and regulations, 692–694 OECD guidelines, 693–694 Pretty Good Privacy (PGP), 275 video monitoring vs., 298 Private branch exchange (PBX) testing, 213, 215 Private key, 253 Privileged user accounts, 641
Privilege management and control, See Access control, privilege management Problem state, 321 Processes and threads, 322–323 Production delay, 665 Product life-cycle management, 117 Professionalism, 83 Professional responsibility, 75 Program, 322 Programmable read-only memory (PROM), 155 Programming, See also Software development and programming bugs or errors, 587 full disclosure, 543 object-oriented, 566–568 procedure, 545–546 process and elements, 544–545 Programming languages, 539, 543–544, See also Software development and programming assembly language, 543–546 CISSP® expectations, 540 compiling and interpreting, 546–547 generations, 543–544 high-level languages, 544, 546 machine language, 543, 544–545 object-oriented, 567 scripting, 547 SPARK and formal verification, 582 support, 582 type-safe, 582 Project management office (PMO), 351 PROM (programmable read-only memory), 155 Protocol anomaly-based intrusion detection, 200–201 Prototyping, 562 Proximity coupling device (PCD), 158–159 Proximity integrated circuit card (PICC), 158–159, 296 Proxy firewalls, 466–467 Proxy systems, 126, 408, 466–467, 509–510 HTTP, 509 open proxy servers, 509 SOCKS, 478 “Public” information classification, 105–106 Public key, 253, 254 Public key certificates, 269 Public key cryptography, 227, 253–260, 314 advantages and disadvantages, 258 confidential messages, 253–254 Diffie-Hellman algorithm, 257–258
Digital Signature Standard, 265–267 El Gamal, 258 elliptic curve cryptography, 258 hybrid symmetric-asymmetric cryptography, 259–260 open message, 254 RSA, 254–257 SESAME, 184 Public key infrastructure (PKI), 159, 269–271 certificate revocation, 271 cross-certification, 271 Public relations, 705 Public switched telephone networks (PSTN), 452–453, 518 VoIP vs., 535 Pushbutton locks, 294 Python, 567
Q Qualcomm, 432 Qualitative risk assessments, 58–60 Quantitative risk assessments, 60–61 Quantum cryptography, 223–225 Quantum key distribution (QKD), 224 Query attacks, 619
R Radiofrequency identification (RFID) cards, 296 RADIUS, 408, 453, 512–513 RAID, See Redundant array of inexpensive (or independent) disks Rail fence, 230 Rainbow table attacks, 136 RAM, See Random-access memory Random-access memory (RAM), 155, 316, 318 Random number generator, attacks on, 274 Range checks, 583 Rapid application development (RAD), 562 Rate-adaptive DSL (RADSL), 458 RATs, See Remote-access Trojans RC4, 252, 446–447 RC5, 251–252 RC6, 248 rcp, 515 Read-only memory (ROM), 155, 316–317, 323 EEPROM (electrically erasable programmable ROM), 155
EPROM (erasable programmable ROM), 155, 323 PROM (programmable ROM), 155 Real-Time Control Protocol (RTCP), 484, 519 Real-Time Protocol (RTP), 484 Reasonableness checks, 583 Rebooting, 667 Reciprocal or mutual aid agreements, 369 Record retention, 655 Recovery controls, 647 Recovery management team (RMT), 377 Recovery point objective (RPO), 397 Recovery time objective (RTO), 347, 359, 396, 397 Rectangular substitution tables, 230–231 Reduced Instruction Set Computer (RISC) processors, 313 Reduced sign-on, 180 Redundancy, 656–658, See also Backup; Redundant array of inexpensive (or independent) disks Redundant array of independent tapes (RAIT), 658 Redundant array of inexpensive (or independent) disks (RAID), 370, 468, 578, 647, 657–658, 962 Redundant communications, 660–661 Redundant hardware, 660 Reference checks, 45–46 Reference monitor, 320, 321, 324, 571 Referential integrity model, 607 Regulation, See Legal and regulatory issues Relational database management model, 603, 605–609 hybrid object-relational, 609 integrity constraints, 606–607 SQL, 607–608, See also Structured Query Language Relationship checks, 583 Relative humidity, 662 Reliable UDP (RUDP), 484 Religious law, 689 Remanence, 140–142, 651–652 Remote-access services, 126–127, 514–517 Java Remote Method Invocation (JRMI), 569, 570 rlogin, rsh, and rcp, 515 TELNET, 514–515 X Window System (X11), 516 Remote-access Trojans (RATs), 588, 595–596 Remote Authentication Dial-in User Service, See RADIUS Remote copy (rcp), 515 Remote Desktop Protocol (RDP), 313
Remote journaling, 369 Remote log-in (rlogin), 515 Remote procedure calls (RPCs), 412, 486–487 Remote shell (rsh), 515 Repeaters, 442 Replay attack, 273 Reporting model, See Information security management, reporting model Requestor, 170 Residual risk, 57 Resource inventory, continuity and recovery planning, 378 Restoration team, 377 Restricted areas, 286 Retinal scans, 161 Reuse model, 563, See also Object reuse Reverse engineering, 273–274 RFC 1087, 79 Rijndael algorithm, 238, 247–250 Ring architecture, 311, 425 RIPEMD-160, 262 RISC processors, 313 Risk assessment annualized loss expectancy, 61 assessment, 66–71 business continuity planning, 353–354, See also Business continuity planning (BCP), assessment phase documentation requirements, 59 identify threats, 67 identify vulnerabilities, 66–67 impact estimation, 68 information valuation, 70–71 interviews, 59–60 likelihood determination, 67–68 methodologies, 62–64 CRAMM, 63 failure modes and effect analysis, 64 FRAP, 63 OCTAVE, 62–63 spanning tree analysis, 63–64 qualitative, 58–60 quantitative, 58–60 regulatory requirements, 57–58 reporting findings, 69 risk determination, 68–69 security management relationships, 7 single loss expectancy, 61 tool and technique selection, 62 Risk assessment team, 59, 61 Risk aversion principle, 83 Risk communication, 5, 26, 69
Risk management, 2, 56–71 CISSP® Candidate Information Bulletin, 758–759 continuity planning issues, 358–359 countermeasure selection, 69–70 definitions, 56–57 due diligence, 695 mitigation, 65 risk acceptance, 65 risk assessment, See Risk assessment risk avoidance, 64 risk ownership, 65 risk transfer, 64 security function, 22 Risk management department, information security reporting model, 33 Rivest, Ron, 248, 251, 252, 254, 261 rlogin, 515 Rogerson, Simon, 72 Role-based access control, 26, 189–191, 639 ROM, See Read-only memory Root cause analysis, 702–703 Root threats and vulnerabilities, 149, 641, 675 rootkits, 421, 596 ROT-13, 229 Routers, 414, 464 access control, 643 access control lists, 188 boundary routers, 433 Routing Information Protocol, 410, 481–482 Virtual Router Redundancy Protocol, 482 Routing Information Protocol (RIP), 410, 481–482 Routing tables, 410 RSA, 248, 251, 252, 254–257, 273, 589 rsh, 515 Ruby, 567 Rule-based access control, 188 Running key cipher, 233
S Salami scam, 597 Sametime, 504 Sandbox, 565–566, 581–582 Sanity checking, 583 Sarbanes-Oxley Act, 73–74 Scanning, 422–423, 484–486, 599, 650, 651 compliance, 422 discovery, 422 FIN, 484
heuristic, 599 host-based, 651, 669 network-based, 651, 669, 670 NULL, 485 ping, 481 port, 210, 484 tools, 423 virus signatures, 598–599 vulnerability, 422, 656, 668–669, 674 XMAS, 485 Screen filters, 139 Scripting, 505 Scripting languages, 547 Scripts, 132 Script virus, 592 Secondary storage, 317 Secure and Fast Encryption Routine (SAFER), 251 Secure European System for Applications in a Multi-Vendor Environment (SESAME), 184 Secure Hash Algorithm (SHA), 262 Secure/Multipurpose Internet Mail Extension (S/MIME), 275 Secure Network File System (SNFS), 495 Secure shell (SSH), 477–478 Secure Socket Layer (SSL), 112, 276, 478–479, 496–497 Java Secure Socket Extension (JSSE), 566 Open SSL, 497 TLS, 496–497 Security administrator, roles and responsibilities, 40, 637–640, See also Information security officer (ISO) responsibilities account characteristics, 638 audit logging, 639–640 clearances, 637 file sensitivity labels, 637 passwords, 637–638 security profiles, 638–639 system security characteristics, 637 Security architecture and design, 307–332 CISSP® Candidate Information Bulletin, 761 CISSP® expectations, 307–308 design principles, 309 diskless workstations, 309–310 frameworks and standards, 308 hardware, 311–319, See also Hardware; specific components heterogeneous and distributed environments, 312–314
models, 324–328, See also Security models and architecture theory sample questions, 332–335, 734–737 security product evaluation, 329–332, See also Security product evaluation software, 322–324, See also Application security; Operating systems; specific components, applications, issues thin clients and thin processing, 309–310 Security awareness, 207 defining, 51 formal training rationale, 51 job training, 55 performance metrics, 56 professional education, 56 program activities and methods, 54 program development responsibility, 28 roles and responsibilities, 38–42, See also Security roles and responsibilities sample program outline, 52–53 training topics, 51 Security certifications, 56, See also Certification and accreditation; CISSP® Candidate Information Bulletin Security clearances, 637 Security council (enterprisewide security oversight committee), 34–38 investment area recommendation, 36–37 meetings, 37 mission statement, 35–36 representation, 37 security department relationship, 38 security planning, 42–44 security prioritization, 36 security program oversight, 35–36 vision statement, 34 Security governance, See Information security governance Security guards, 288–289 Security incident response, See Incident response and evaluation Security kernels, 320–321, 571–572 Security metrics, 29 Security models and architecture theory, 324–328 access control matrix, 188, 327–328 Bell-LaPadula, 324, 325–326, 416 Biba, 326, 416 Brewer-Nash (Chinese wall), 328 Clark-Wilson, 324, 326, 416
Graham-Denning, 328 Harrison-Ruzzo-Ullman, 328 information flow, 325, 328 lattice, 324 noninterference, 325 research, 325 state machine, 325 Security performance metrics, 56 Security perimeter, 320, 433, See also Firewalls Security planning, 42–44, See also Business continuity planning; Disaster recovery planning; Security policies Security policies, 9–12, 185, See also Standards acceptable use policy, 13, 533 access control policy, 98, 119 antimalware policies, 600–601 application security, 542 awareness program curriculum, 52–53 combining/differentiating policies, standards, baselines, procedures, guidelines, 16–17 compliance program development, 29 encryption usage, 274 information security officer responsibility for developing, 28 ISO/IEC 17799, 309 ITGI recommendations, 8 management review and sign-off, 11 network partitions and, 320 noncompliance sanctions, 12, 53 security council oversight, 35–36 types, 12 issue-specific, 12–13 organizational or program, 12 system-specific, 13 writing, 10–12 Security product evaluation, 329–332 certification and accreditation, 332 common criteria, 331 ISO Common Criteria, 331 ITSEC, 330 SEI’s Capability Maturity Model Integration, 331 TCSEC, 329 Security roles and responsibilities, 38–42 administrative assistants/secretaries, 41 business continuity planner, 40 data custodian, 39 data/information/business owners, 39 end user, 38 executive management, 38–39
help desk administration, 41 information security officer, 39, See also Information security officer (ISO) responsibilities information systems auditor, 39–40 information systems security professional, 39 IS/IT professionals, 40 physical security, 41 security administrator, 40, See also Security administrator, roles and responsibilities systems administrator, 40, See also System administrator Security threats and vulnerabilities, See also specific threats access control, 130–147, See also Access control threats assessing for continuity planning, 356–358 backdoor/trapdoor, 144, 553, 588, 640 cryptanalysis attacks, 271–274, See also Cryptanalysis and attacks database environment, 617–620, See also Database security data remanence, 140–142, 651–652 denial of service, See Denial of service (DoS) attacks dumpster diving, 143–144, 595 emanations, 138–139 help desk fraud, 146–147 ICMP attacks, 475, 480–481, See also Internet Control Message Protocol maintaining awareness of, 29, See also Security awareness malware, See Malicious software object reuse, 139–140, 551, 651–653 operations, 645–646, See also Operations security password crackers, 134–136, See also Passwords physical security, 283–285, See also Physical security portable devices, 299, 314, 470–471 risk assessment, 66–67, See also Risk assessment sniffers, eavesdropping, and tapping, 137–138, 419, 495, 527, 651 social engineering, See Social engineering software environment, See Software threats and vulnerabilities
spoofing/masquerading, See Spoofing attacks storage and backup media, See Backup; Storage media SYN flood, 130–131, 418, 474, 483, 486 theft, 144–145, 284, 299, 645, 654 tunneling, See Tunneling unauthorized data mining, 142–143 vulnerability management and access control, 115–116 Security zones, ActiveX, 511–512 Security zones, physical, 125, 286–287 Senior management, See Executive management Separation of duties, 23–25, 98–101, 207, 576 element distribution, 100–101 function sensitivity, 25–26, 99–100, 118 incompatible duties, 24–25 static or dynamic duties, 99 Separation of environments, 576–577 Sequence number attacks, 483, 485 Serial Line Internet Protocol (SLIP), 450 Serpent, 248 Server Message Block (SMB), 493 Servers, 635–636 access controls, 115, 128, 237, 286, 289, 469, 635, 643 authentication, 152, 182–183, 447–448, 496–497 bridges or switches, 442–444 database security, 614, 619 DHCP, 479–480 directories, 174 DMZ, 417 domain name, See Domain Name System Enterprise Java Bean, 570 fault-tolerant systems, 656–657 FTP, See File Transfer Protocol hidden and anonymous, 527 Internet Relay Chat, 503–504 Jabber, 503 Kerberos, 181–183 layer 2 network, 479 mail, 458, 497–498, 500–501 malware attacks, See Malicious software multicasting, 435–436 multiprocessing, 315 network baselines, 15 Network File System, 493–495 Network Information Service, 492 network security issues, 469, 489–490 open mail relay, 498–499 packet filtering, 466 penetration testing, 208, 212
proxy, 466–467, 478, 509–510 RADIUS, See RADIUS security domain, 185 security zones, 289 SIP, 519 SOCKS proxy server, 478 spoofing, 137, See also Spoofing attacks SSH, 477–478 system administrator responsibilities, 40 tactical security planning, 43 TELNET, 514–515 thin client systems, 309–310 top-level domain (TLD), 487–488 updating, 676 vaulting, 659 virtual LANs, 451 vulnerability scanning, 669 Web security, 619–620, 626–627 X Window system, 516–517 Service bureaus, 369 Service Set Identifier (SSID) broadcasting, 446 SESAME, 184 Session hijacking, 483, 485 Session Initiation Protocol (SIP), 519 Session key, 259, 268 Session layer (OSI layer 5), 409, 412, 486–494 access services, 493–495 directory services, 486, 487–493, See also Directory services duplex and simplex modes, 412 Remote Procedure Call, 412, 486–487 SHA and SHA-1, 262, 272, 276 Shamir, Adi, 254 Shannon, Claude, 221, 267 Shared key encryption, 236, 445–446 Shareware, 692 Sharia, 689 Shatter, 573 Shatterproof fallacy, 76 Shielded twisted pair (STP) cable, 429 Shoulder surfing, 139, 595 S-HTTP, 497, 511 Side channel attacks, 273 Signature-based intrusion detection technology, 199 Signature behavior, 707 Signatures, digital, 227, 265–266, 314 Signature scanners, 598–599 Simple Mail Transfer Protocol (SMTP), 412, 497–498, 529 Simple Network Management Protocol (SNMP), 513–514
Simple Network Time Protocol (SNTP), 518 Simplex mode, 412 Single-factor authentication, 126, 150 Single loss expectancy (SLE), 61 Single sign-on (SSO), 176–177, 179–181 SESAME, 184 SirCam, 588, 591 Skype, 520, 535 Slippery slope, 82 Smalltalk, 567 Smart cards, 152–160, 296 ATM cards, 154 capabilities, 160 contact cards, 157, 296 contactless cards, 157–158, 296 examples of features, 156–157 legacy single sign-on systems, 177 memory cards vs., 153–154 memory types, 155 private key, 156 proximity cards, 158–159, 296 public key infrastructure, 159 RFID cards, 296 Smart locks, 294–295 Smart phones, 315, 471 S/MIME, 275 Smoke detectors, 291 SMTP, See Simple Mail Transfer Protocol Smurf attacks, 475 Sniffers, 137–138, 419, 495, 527, 651 Snort, 422 Social engineering, 145–147, 272, 421, 552–553, 577, 594–595 chat applications security issues, 505 countermeasures, 577 DNS addresses and, 489 E-mail, 145–146 help desk fraud, 146–147 malware and, 553, See also Malicious software penetration testing, 213, 214 phishing, 137, 595, 667 shoulder surfing, 139, 595 visitors and, 289 Social security number verification, 50 Software, defined, 320 Software, operating systems, See Operating systems Software backup, 7, 369–370, 577–578, See also Backup
Software development and programming, 538, 541–546, 554–571, See also Application security; Programming languages; Software protection mechanisms budget and schedule overruns, 554 Capability Maturity Model, 555 change management, 585–586 CISSP® expectations, 540 development methods, 561–564, See also Software development methods Java security, 564–566 model selection considerations, 564 object-oriented technology and programming, 566–568 distributed systems, 569–570 security, 568–569 open source, 542–543 process and elements, 544–545 programming procedure, 545–546 rapid application development (RAD), 562 scope creep, 586 security testing, 558–559 separation of duties, 23, 100, 576 software environment, 547–548 systems development life cycle, 555–561, See also Systems development life cycle Software development methods, See also Software development and programming cleanroom, 561–562 component-based development, 563 computer-aided software engineering (CASE), 563 exploratory model, 563 extreme programming, 564 iterative development, 562 joint analysis development, 563 model selection considerations, 564 modified prototype model, 562 prototyping, 562 rapid application development, 562 reuse model, 563, See also Object reuse spiral method, 561 structured programming development, 561 waterfall, 561 Software duplication and distribution controls, 644, 671 Software engineering, 554, See also Software development and programming
Software Engineering Institute, Capability Maturity Model Integration (SEI/CMMI), 331 Software forensics, 578–580, See also Computer forensics Software library, 655, 673, 692 Software licensing, 691–692 Software patches, See Patch management Software piracy, 77, 691–692, See also Intellectual property Software programming languages, See Programming languages Software protection mechanisms, 571–582, 676, See also Access control; Application security; Software development and programming activity monitors, 599 antimalware policies, 600–601 audit and assurance, 582–586 certification and accreditation, 584 change detection, 599–600 change management, 585–586, 659–660, 669–677, See also Change control management configuration management, 586, 659, 668–669, 670–671 control and separation of environments, 576–577 covert channel controls, 575 cryptography, 575, See also Cryptography and encryption granularity of controls, 576 IDS, See Intrusion detection systems (IDSs) information auditing, 583–584 information protection management, 584 malware assurance, 601–602 malware protection, 598–601, See also Malicious software memory protection, 574–575 mobile code, 580–582 parameter check controls, 548–549, 573 parameter checking and buffer overflow controls, 548–549, 573 password protection, 575–576 patch management, 115–116, 420, 585, 673–677 processor privilege states, 571–573 programming language support, 582 scanning, 598–599, 601, 650, 651 security kernels, 321, 571 social engineering, 577 standardization, 538
time of check/time of use, 577 Software threats and vulnerabilities, 537, 549–553, 554, See also Application security; Malicious software; specific threats audit and assurance mechanisms, 582–583 backdoor/trapdoor, 144, 553, 588, 640 between-the-lines, 553 buffer overflows, See Buffer overflows citizen programmers, 550 covert channels, 550, 575 cryptanalysis attacks, See Cryptanalysis and attacks database environment, 617–620, See also Database security debugging tools, 634 development protections and controls, 548–549 ease of exploit, 675–676 malformed input attacks, 551 malware, See Malicious software man-in-the-middle attacks, See Man-in-the-middle attacks mobile code, 132–133, 551–552, 580–582 notebook computers and, 470–471 object reuse, 139–140, 551, 651–653 open source code, 542 pointers, 582 remote-access Trojans (RATs), 588, 595–596 root exploits, See Root threats and vulnerabilities social engineering, 552–553, 577, 594–595, 667, See also Social engineering software copying/distribution issues, 644, 672 time of check/time of use, 553, 577 Trojan horse programs, See Trojans viruses, See Viruses Southampton Program Analysis Development Environment (SPADE), 582 SPADE, 582 Spam, 129, 134, 498–500, 501, 532, 666–667 botnet distribution, 596 over instant messaging (SPIM), 505 Spanning tree analysis, 63–64 SPARK, 582 Spartan scytale, 222 Spiral method, 561 SP-network, 221
Spoofing attacks, 131, 136–137, 474, 485, 488–489 countermeasures, 485 e-mail addresses, 498 Network File System, 495 UDP vulnerability, 484 Spreadsheets, 322 Sprinkler systems, 292 Spysender ID, 667 Spyware, 133, 597, 646 SQL, See Structured Query Language SSID, See Service Set Identifier (SSID) broadcasting SSL, See Secure Socket Layer Standard of due care, 341 Standards, 3–4, 13–14, 17, See also specific types continuity planning, 341 good business practice, 341 IEEE, See IEEE standards information classification, 104 ISO, See ISO standards NIST, See National Institute of Standards and Technology security architecture and design, 308 Star (*) property, 326 Star topology, 427, 442 Stateful inspection filtering, 466 Stateful matching intrusion detection, 199–200 State machine models, 325 Static electricity, 662 Static packet filtering, 466 Static type checking, 582 Statistical anomaly-based intrusion detection, 200 Steganography, 235, 267 Stoned, 590 Storage area networks, 370 Storage channels, covert, 550 Storage media, 316–318, 653–654 backups, See Backup data remanence, 140–142, 651–652 degaussing, 140, 651, 652–653 disk mirroring, 578 fire protection, 291 formatting, 651 labeling and classification, 107, 653, See also Data classification library maintenance, 655, 673 misuse prevention, 654–655 object reuse vulnerabilities, 140
RAID, See Redundant array of inexpensive (or independent) disks reuse, 651–653 sensitive media handling, 653–654 declassification, 654 destruction, 653–654 handling, 653 marking, 653 storing, 653 types and protection methods, 650 Strategic security planning, 43 Stream-based ciphers, 227–228 Stream Control Transmission Protocol (SCTP), 484 Streaming transmissions, 435 Stream modes of DES, 241 Structured programming development, 561 Structured Query Language (SQL), 322, 412, 551, 622, See also Structured Query Language Stylometric analysis, 578 Subscriber Identity Module (SIM), 471 SubSeven, 595 Substitution ciphers, 221, 229 SULFNBK.EXE, 593 SunRPC, 486 Sun Tzu, 668 Supervision, 50, 649 Supervisor state, 321 Suspected terrorist watch list, 50 Switched virtual circuits (SVCs), 437 Switches, 444–445, 643 Symmetric digital subscriber line (SDSL), 458 Symmetric key algorithms, 236–252 AES, 247–250, See also Advanced Encryption Standard Blowfish, 251 CAST, 250–251 DES, 237–247, See also Data Encryption Standard hybrid symmetric-asymmetric cryptography, 259–260 International Data Encryption Algorithm, 250 key management problems, 252 message authentication, 260 out-of-band key transmission, 267 Pretty Good Privacy (PGP), 275 RC4, 252 RC5, 251–252 Secure and Fast Encryption Routine, 251 TwoFish, 251
Synchronous communications technology, 434 Synchronous Data Link, 438 Synchronous dynamic random-access memory (SDRAM), 316 Synchronous optical network (SONET), 456–457 Synchronous tokens, 152–153, 163 SyncML, 313 SYN flood attacks, 130–131, 418, 474, 483, 486 SYN scanning, 485 Syntactical analysis, 580 System administrator privileges, 635–636 security roles and responsibilities, 40, 649 System event auditing, 206 System infectors, 591 System recovery, 667 Systems account management, 638–642 Systems development life cycle (SDLC), 555–561 acceptance phase, 558 certification and accreditation, 558, 559–560 design specifications phase, 557 development and implementation phase, 557 documentation and common program controls phase, 557–558 functional requirements definition phase, 557 implementation, 560 project initiation and planning phase, 556 revisions and system replacement, 560–561 testing and evaluation controls, 558–559 Systems Security Certified Practitioner (SSCP), 56
T T1 carriers, 455 T3 carriers, 455 TACACS+, 408 Tactical security planning, 43 Tampering, 664–665 Tape media, 317, See also Magnetic media redundant array of independent tapes, 658 Targeted penetration testing, 213 Task specialization, 20
Tavares, Stafford, 250 TCP/IP, 407, 408, 411, 413 DoS attacks, 130–131 NETBIOS vulnerabilities, 491–492 Terminal Emulation Protocol (TELNET), 514–515 TCP SYN scanning, 485 TCSEC, 329 Teardrop attack, 473 Technical access control, 125–129, See also Access control Telecommunications security, 319, 407, See also Network security CISSP® Candidate Information Bulletin, 763 CISSP expectations, 408 continuity of operations, 660–661 emanation vulnerabilities, 138 failover devices, 660 ISO/IEC 17799, 309 OSI reference model, 407, 408, 409–412, See also Open System Interconnect (OSI) model; specific layers PBX and IP telephony, 213, 215 redundancy and backup, 660–661 sample questions, 521–525, 740–746 smart phones, 315 wireless LANs, See Wireless local area networks Telecommunications transmission technologies, 431–441, See also Transmission technologies Telefonica virus, 592 TELNET, 514–515 Temperature control, 662 TEMPEST, 138 Temporal isolation, 192 Temporal Key Integrity Protocol (TKIP), 447–448 Temporary employees, 118 Temporary file vulnerabilities, 274 Terminations, employee, 50–51, 207 Terrorism, 339 Terrorist watch list, 50 Theft, 144–145, 284, 299, 645, 654 Thin clients, 309–310 Thin storage, 310 Threads and processes, 322–323 Three-factor authentication, 126, 150, See also Biometrics Three-legged firewalls, 434 Thumb reader cards, 161 Time-critical application, 397
Time-critical inventory, 378 Time-critical processes, 397 Time division multiple access (TDMA), 432 Time-memory tradeoffs, 135–136 Time of check/time of use (TOC/TOU), 553, 577, 619 Time stamps, 650 Time synchronization, 517–518, 636 Timing channels, covert, 550 Tippett, Peter S., 82 TLS, See Transport Layer Security Token-based authentication, 152–153, 163 Token passing, 438 Token Ring, 425, 439–440 Top-level domain (TLD) servers, 487–488 Tort law, 687 Traceroute, 481 Trademark, 490, 690–691 Trade secrets, 21, 691 Traffic anomaly-based intrusion detection, 201 Traffic padding, 419 Training and education crisis and recovery activities, 374 job training, 55 professional education, 56 security awareness, 51–56 Transaction limits, 583 Transmission Control Protocol (TCP), 411, 414, 483 attacks on, 483 IP spoofing, See Spoofing attacks scanning techniques, 484–485, See also Scanning sequence number attacks, 485 Transmission Control Protocol/Internet Protocol, See TCP/IP Transmission technologies, 434–441, See also specific technologies asynchronous communications, 435 broadcast, 435 Carrier Sense Multiple Access, 437–438 circuit-switched networks, 436–437 copper-distributed data interface, 441 Ethernet, 438–439, See also Ethernet fiber-distributed data interface, 425, 440–441 multicast, 435–436 packet-switched networks, 437 polling, 438 switched and permanent virtual circuits, 437 synchronous communications, 434
token passing, 438 Token Ring, 425, 439–440 unicast, 435 wireless, 431–433 wireless optics, 462 Transport layer (OSI layer 4), 409, 411–412, 482–486 attacks on, 483 denial of service, 486, See also Denial of service (DoS) attacks IP spoofing, See Spoofing attacks scanning, 484–485, See also Scanning sequence number, 483, 485 DoD model, 413 TCP, See TCP/IP; Transmission Control Protocol UDP, See User Datagram Protocol Transport Layer Security (TLS), 276, 496–497 HTTPS, 510 Transport mode (IPSec), 476–477 Transposition ciphers, 221, 230–231 Trapdoor/backdoor vulnerabilities, 144, 553, 588, 640 Tree topology, 424 Triage, 700–701 Triple DES (3DES), 246–247, 275 Trivial FTP (TFTP), 506–507 Trojans, 129, 133, 646 e-mail, 594 Log-in, 593–594 privilege escalation, 634 remote-access Trojans (RATs), 588, 595–596 root exploits, 675 Trusted Computer System Evaluation Criteria (TCSEC), 323–324, 325, 329, 571 Trusted Computing Base (TCB), 323–324, 329, 571, 635 layering and data hiding, 329–330 process isolation, 329 Trustworthiness and honesty, 84 Tunneling, 417, 479, See also HTTP tunneling HTTP tunneling, 505–506, 508, 510, 526 layer 2 tunneling protocol, 479 point-to-point tunneling protocol, 479 Tunnel mode (IPSec), 476–477 Tuples, 606 Twisted-pair cables, 428–429, 643 Two-factor authentication, 126, 150, 296 TwoFish, 248, 251 Type-safe programming language, 582
U UDP, See User Datagram Protocol Ultrasonic physical intrusion detection, 298–299 Uncertainty principle, 224 Unfriendly employee terminations, 51 Unicast transmissions, 435 Unicode, 412 Uniform Resource Locators (URLs), See URLs Uninterruptible power supplies (UPSs), 290, 661–662 Universal Serial Bus (USB) devices, 319 UNIX system environments, 313 FIN scanning, 484 Network File System, 493–495 remote access, 514–515 Unshielded twisted pair (UTP) cable, 428–429, 643 Unspecified bit rate (UBR), 461 URLs (Uniform Resource Locators), 487 malformed input attacks, 551 USB devices, 319 U.S. Department of Defense (DoD), 323, 341, 408, 413, 652, 761 Usenet, 501–502, 532, 594 User Datagram Protocol (UDP), 411, 414, 484 Fraggle attacks, 475 spoofing vulnerability, 484 Username password combinations, See Passwords Utilitarian principle, 83
V Vacations, 25 Variable bit rate (VBR), 461 Vernam ciphers, 234 Very high bit rate DSL (VDSL), 458 Videoconferencing, 414, 435 Video monitoring, 296–298 Viega, John, 628 View-based access controls, 622 Vigenère, Blaise de, 222, 232 Virtual business partners, 368–369 Virtual circuits, Asynchronous Transfer Mode, 461 Virtual local area networks (VLANs), 120, 126, 451 Virtual memory, 317–318
Virtual private networks (VPNs), 14, 170, 475–479 IPSec authentication and confidentiality, 475–477, See also IPSec point-to-point tunneling protocol, 479 “poor man’s” services, 526 secure shell, 477–478 SSL/TLS, 478–479 Virtual Router Redundancy Protocol (VRRP), 482 Viruses, 129, 133, 586–587, 589–592, 646, See also Malicious software; specific types antivirus management, 598–601, 650 boot sector infectors, 590 CHRISTMA, 591 companion, 591 DNS attacks, 489 e-mail, 591 file infectors, 590 history and definition, 589–590 hoaxes, 593 Jerusalem, 590 macro, 592 Magistr, 593 Melissa, 580, 588, 591, 592 multipartite, 591–592 script, 592 SirCam, 588, 591 system infectors, 591 Trojan horse programs, See Trojans worms vs., 590 Virus signature scanners, 598–599 Vision statement, 34 Visitor management, 124–125, 289 Visual Basic, 567 Voice mail systems, 215 Voice-over-IP (VoIP), 215, 502, 518–520, 535 public telephone network vs., 535 Voice patterns, 161–162 Von Neumann architecture, 544 Vulnerability analysis, 210–211 Vulnerability assessment, 207–208, 649, See also Penetration testing; Risk assessment Vulnerability management and access control, 115–116, See also Change control management Vulnerability scanning, 422, 656, 668–669, 674 Vulnerability tests, 301
W Walls, doors, and windows, 295–296 War dialing, 213, 214 Warm sites, 368 Waterfall, 561 Watermarking, 235 Web access management tools, 174–175 Web-based data warehousing, 310 Web page security, mobile code controls, 580–582 Web proxy servers, 467 Weizenbaum, Joseph, 71–72 Well-formed transactions, 25 WEP, See Wired equivalent privacy Wet pipe systems, 292 White box, 209 Wide area network (WAN) technologies, 315, 319, 452–462 Asynchronous Transfer Mode, 461 broadband wireless, 461–462 cable modem, 459 DSL, 452, 457–459 E-carriers, 455–456 frame relay, 460–461 ISDN, See Integrated Services Digital Network modems, 452–453 point-to-point lines, 455 public switched telephone networks, 452–453 SONET and optical carriers, 456–457 T1 and T3 carriers, 455 wireless local loop, 462 wireless optics, 462 X.25, 459–460 Wideband CDMA (W-CDMA), 432 Wiener, Norbert, 71 WiFi Protected Access (WPA), 447–448, 644 WIN CE, 315 Window glass and types, 295 Windows Messenger, 504 Windows 95, 554, 574 Windows 98, 574 Windows NT, 554, 584 Windows Terminal Services (WTS), 313 Windows 2000, 572–573, 574 Windows XP, 554 Wired equivalent privacy (WEP), 408, 446–447, 644 WiFi Protected Access 2 (WPA2), 448 Wireless broadband, 461–462 Wireless hardware security, 644
Wireless local area networks (WLANs), 445–450 authentication, 445–446 EAP framework, 447–448 MAC address tables, 446 open system, 445 shared-key, 445–446 Bluetooth technology, 449–450 encryption, 446–450 IEEE 802.11a, 449 IEEE 802.11b, 449 IEEE 802.11g, 449 Temporal Key Integrity Protocol, 447–448 WiFi Protected Access (WPA), 447–448 wired equivalent privacy (WEP), 408, 446–448, 644 SSID, 446 Wireless local loop, 462 Wireless Markup Language (WML), 611 Wireless network testing, 213, 214 Wireless optics, 462 Wireless sniffing, 527, See also Sniffers Wireless transmission technologies, 431–433 antennae, 138 code division multiple access, 432 direct-sequence spread spectrum, 431 frequency division multiple access, 432 frequency-hopping spread spectrum, 431 mobile telephony, 432–433 orthogonal frequency division multiplexing, 432, 449 time division multiple access, 432 Word documents, 580 macro viruses, 592 Word processors, 322 Work factor, 220 Working Group on Computer Ethics, 80 Workstations access control, 643 diskless systems, 309–310 network security issues, 469–470 personal firewalls, 467–468 system administrator privileges, 635 World Intellectual Property Organization (WIPO), 690 World Wide Web (WWW), 408 World Wide Web (WWW) addresses (URLs), 487
World Wide Web (WWW), data exchange, 506–512, See also Hypertext Transfer Protocol Worms, 129, 133, 408, 590, 592–593, 646 Internet Worm of 1988, 592 Lion, 593 Loveletter, 591, 592–593 Morris, 534 Nimda, 527, 593 viruses vs., 590
X X11, 516 X.25, 459–460 X.500 certificate standards, 269–270, 530 XMAS scanning, 485 XML, 610–611 X Window System (X11), 516, 534
Y Yahoo! Messenger, 504
Z Zimmermann, Phil, 275 Zombie networks, 499 Zombies, 131, 596