OCP: Oracle8i DBA Architecture & Administration and Backup & Recovery Study Guide
Copyright ©2000 SYBEX , Inc., Alameda, CA
www.sybex.com
OCP: Oracle8i™ DBA Architecture & Administration and Backup & Recovery Study Guide Doug Stuns Biju Thomas
San Francisco • Paris • Düsseldorf • Soest • London
Associate Publisher: Richard Mills
Contracts and Licensing Manager: Kristine O’Callaghan
Acquisitions and Developmental Editors: Kim Goodfriend, Richard Mills
Editor: Sharon Wilkey
Production Editor: Leslie E. H. Light
Technical Editors: Bob Bryla, Betty MacEwen
Book Designer: Bill Gibson
Graphic Illustrator: epic
Electronic Publishing Specialists: Bill Gibson, Jill Niles, Judy Fung, Nila Nichols, Susie Hendrickson
Proofreaders: Laurie O’Connell, Nancy Riddiough, Camera Obscura
Indexer: Ted Laux
CD Coordinator: Kara Eve Schwartz
CD Technicians: Keith McNeil, Siobhan Dowling
Cover Designer: Archer Design
Cover Illustrator/Photographer: Photo Researchers

Copyright © 2001 SYBEX Inc., 1151 Marina Village Parkway, Alameda, CA 94501. World rights reserved. No part of this publication may be stored in a retrieval system, transmitted, or reproduced in any way, including but not limited to photocopy, photograph, magnetic, or other record, without the prior agreement and written permission of the publisher.

Library of Congress Card Number: 01-106432
ISBN: 0-7821-2683-9

SYBEX and the SYBEX logo are trademarks of SYBEX Inc. in the USA and other countries. Screen reproductions produced with FullShot 99. FullShot 99 © 1991-1999 Inbit Incorporated. All rights reserved. FullShot is a trademark of Inbit Incorporated. The CD interface was created using Macromedia Director, COPYRIGHT 1994, 1997-1999 Macromedia Inc. For more information on Macromedia and Macromedia Director, visit http://www.macromedia.com. SYBEX is an independent entity from Oracle Corporation and is not affiliated with Oracle Corporation in any manner. This publication may be used in assisting students to prepare for an Oracle Certified Professional exam. Neither Oracle Corporation nor SYBEX warrants that use of this publication will ensure passing the relevant exam. Oracle is either a registered trademark or a trademark of Oracle Corporation in the United States and/or other countries.
TRADEMARKS: SYBEX has attempted throughout this book to distinguish proprietary trademarks from descriptive terms by following the capitalization style used by the manufacturer.

The author and publisher have made their best efforts to prepare this book, and the content is based upon final release software whenever possible. Portions of the manuscript may be based upon pre-release versions supplied by software manufacturer(s). The author and the publisher make no representation or warranties of any kind with regard to the completeness or accuracy of the contents herein and accept no liability of any kind including but not limited to performance, merchantability, fitness for any particular purpose, or any losses or damages of any kind caused or alleged to be caused directly or indirectly from this book.

Manufactured in the United States of America
10 9 8 7 6 5 4 3 2 1
Software License Agreement: Terms and Conditions The media and/or any online materials accompanying this book that are available now or in the future contain programs and/or text files (the "Software") to be used in connection with the book. SYBEX hereby grants to you a license to use the Software, subject to the terms that follow. Your purchase, acceptance, or use of the Software will constitute your acceptance of such terms. The Software compilation is the property of SYBEX unless otherwise indicated and is protected by copyright to SYBEX or other copyright owner(s) as indicated in the media files (the "Owner(s)"). You are hereby granted a single-user license to use the Software for your personal, noncommercial use only. You may not reproduce, sell, distribute, publish, circulate, or commercially exploit the Software, or any portion thereof, without the written consent of SYBEX and the specific copyright owner(s) of any component software included on this media. In the event that the Software or components include specific license requirements or end-user agreements, statements of condition, disclaimers, limitations or warranties ("End-User License"), those End-User Licenses supersede the terms and conditions herein as to that particular Software component. Your purchase, acceptance, or use of the Software will constitute your acceptance of such End-User Licenses. By purchase, use or acceptance of the Software you further agree to comply with all export laws and regulations of the United States as such laws and regulations may exist from time to time. Reusable Code in This Book The authors created reusable code in this publication expressly for reuse for readers. Sybex grants readers permission to reuse for any purpose the code found in this publication or its accompanying CD-ROM so long as all three authors are attributed in any application containing the reusable code, and the code itself is never sold or commercially exploited as a stand-alone product. 
Software Support Components of the supplemental Software and any offers associated with them may be supported by the specific Owner(s) of that material but they are not supported by SYBEX. Information regarding any available support may be obtained from the Owner(s) using the information provided in the appropriate read.me files or listed elsewhere on the media. Should the manufacturer(s) or other Owner(s) cease to offer support or decline to honor any offer, SYBEX bears no responsibility. This notice concerning support for the Software is provided for your information only. SYBEX is not the agent or principal of the Owner(s), and SYBEX is in no way responsible for providing any support for the Software, nor is it liable or responsible for any support provided, or not provided, by the Owner(s). Warranty SYBEX warrants the enclosed media to be free of physical defects for a period of ninety (90) days after purchase. The Software is not available from SYBEX in any other form or media
than that enclosed herein or posted to www.sybex.com. If you discover a defect in the media during this warranty period, you may obtain a replacement of identical format at no charge by sending the defective media, postage prepaid, with proof of purchase to: SYBEX Inc. Customer Service Department 1151 Marina Village Parkway Alameda, CA 94501 (510) 523-8233 Fax: (510) 523-2373 e-mail:
[email protected] Web: HTTP://WWW.SYBEX.COM After the 90-day period, you can obtain replacement media of identical format by sending us the defective disk, proof of purchase, and a check or money order for $10, payable to SYBEX. Disclaimer SYBEX makes no warranty or representation, either expressed or implied, with respect to the Software or its contents, quality, performance, merchantability, or fitness for a particular purpose. In no event will SYBEX, its distributors, or dealers be liable to you or any other party for direct, indirect, special, incidental, consequential, or other damages arising out of the use of or inability to use the Software or its contents even if advised of the possibility of such damage. In the event that the Software includes an online update feature, SYBEX further disclaims any obligation to provide this feature for any specific duration other than the initial posting. The exclusion of implied warranties is not permitted by some states. Therefore, the above exclusion may not apply to you. This warranty provides you with specific legal rights; there may be other rights that you may have that vary from state to state. The pricing of the book with the Software by SYBEX reflects the allocation of risk and limitations on liability contained in this agreement of Terms and Conditions. Shareware Distribution This Software may contain various programs that are distributed as shareware. Copyright laws apply to both shareware and ordinary commercial software, and the copyright Owner(s) retains all rights. If you try a shareware program and continue using it, you are expected to register it. Individual programs differ on details of trial periods, registration, and payment. Please observe the requirements stated in appropriate files. Copy Protection The Software in whole or in part may or may not be copyprotected or encrypted. 
However, in all cases, reselling or redistributing these files without authorization is expressly forbidden except as specifically provided for by the Owner(s) therein.
To Cathy—Doug Stuns To Shiji—Biju Thomas
Acknowledgments
First, I want to thank the Lord, my savior, for making this all possible. Next, my parents, Ron and Jan Stuns, for without them I would not be who I am and would not have had the fortitude necessary to write this book. My parents have been there for me throughout my life and have supported everything that I have done. My lovely wife of 11 years, Cathy, has been my major support in this process and encouraged me to start this task knowing full well of the time commitment. She is always my biggest and best supporter in every circumstance. I am so happy that she has been there to support me throughout. My wonderful children, Brant and Brea, 8 and 6 years of age, have supported and tolerated me through this process as well. The many hours away from them have been difficult for each of us, but their understanding has been incredible. Brant and Brea went so far as to join me in the den pretending to help write and review sections so that we could spend more time together. Without their understanding and support, I would not have been able to complete this task. I am so proud of them both. Numerous other people have helped to contribute to the completion and ultimate success of this project, and I owe them sincere appreciation. I want to thank Rick and Matt for reviewing questions and other technical materials. I am greatly thankful to the staff at Sybex. First, Technical Editors Bob Bryla and Betty MacEwen, for their diligent work and attention to detail. Next, Editor Sharon Wilkey, whose great efforts to correct and smooth out some rough edges made this process much easier. Production Editor Leslie E. H. Light was always there to help in any way possible. I also thank Associate Publisher Richard Mills for his support and follow-through.
Unseen, but not unappreciated, Bill Gibson, Jill Niles, and the rest of the Electronic Publishing Specialists at Sybex created the look you see here and made sure each page was a marriage of form and function. It has been my pleasure to work with all of you. Finally, I want to thank all of the many professional people whom I have had the privilege to work with in my career. I have learned much and owe many thanks! —Doug Stuns
I would like to thank Richard and Leslie for their help, guidance, and patience throughout the process of writing this book. Thank you, Sharon, for the edits and suggestions—your hard work certainly helped to complete the project on time. Bob and Betty, your technical reviews helped enormously to raise the quality of this book—thank you. Bill and Jill, you made this book look great. Thank you also. I also thank Wendy and all my colleagues for their never-failing support— you are a great team to work with. I thank my friends for inspiring and encouraging me. Finally, I owe big thanks to my wife, Shiji, who gave me all the support when I needed it the most. Thank you for giving up your weekends and evenings when I was working on this book. —Biju Thomas
Introduction
There is high demand and competition for professionals in the Information Technology (IT) industry, and the Oracle Certified Professional (OCP) certification is the hottest credential in the database realm. You have made the right decision to pursue certification. Being an OCP will give you a distinct advantage in this highly competitive market. Many readers may already be familiar with the Oracle Corporation and its products and services. For those who aren’t familiar with the company, Oracle Corporation, founded in 1977, is the world’s leading database company and second largest independent software company, with revenues of more than $9.7 billion and clients in more than 145 countries. Oracle’s CEO, Lawrence J. Ellison, saw the future of information technology in Internet computing, and the Oracle8i database was created to meet the needs of this technological evolution. This book is intended to help you continue on your exciting new path toward obtaining the Oracle8i certified database administrator certification. The book will give you the necessary knowledge of the Oracle Server architecture and the hands-on skills you need to pass Exams 1Z0-023 and 1Z0-025. Although the OCP exams for Database Administration can be taken in any order, it is generally recommended that the Oracle8i OCP certification exam for Architecture and Administration and the Oracle8i OCP certification exam for Backup and Recovery be the final exams taken in the series.
Why Become an Oracle Certified Professional?

The number one reason to become an Oracle Certified Professional is to gain more visibility and greater access to the industry’s most challenging opportunities. The OCP program is Oracle’s commitment to provide top-quality resources for technical professionals who want to become Oracle specialists in specific job roles. The certification tests are scenario based, which is the most effective way to assess your hands-on expertise and critical problem-solving skills. Certification is proof of your knowledge and shows that you have the skills required to support Oracle’s core products according to the standards established by Oracle. The OCP program can help a company identify proven performers who have demonstrated their skills to support the company’s investment in Oracle technology. It demonstrates that you have a solid understanding of your job role and the Oracle products used in that role. So, whether you are beginning a career, changing careers, securing your present position, or seeking to refine and promote your position, this book is for you!
Oracle Certifications

Oracle has several certification tracks designed to meet different skill levels. Each track consists of several tests that can be taken in any order. The following tracks are available:
Oracle Database Administrator
Oracle Application Developer
Oracle Database Operator
Oracle Java Developer
Oracle Financial Applications Consultant
Oracle Database Administrator (DBA)

The role of Database Administrator (DBA) has become a key to success in today’s highly complex database systems. The best DBAs work behind the scenes, but are in the spotlight when critical issues arise. They plan, create, and maintain databases to ensure that the databases meet the data management needs of the business. DBAs also monitor the databases for performance issues and work to prevent unscheduled downtime. Being an effective DBA requires a broad understanding of the architecture of Oracle databases and expertise in solving system-related problems. The Oracle8i certified administrator track consists of the following five tests:
1Z0-001: Introduction to Oracle—SQL and PL/SQL
1Z0-023: Oracle8i—Architecture and Administration
1Z0-024: Oracle8i—Performance and Tuning
1Z0-025: Oracle8i—Backup and Recovery
1Z0-026: Oracle8i—Network Administration
Oracle Application Developer

This track tests your skills in client-server and Web-based application development using Oracle application development tools such as Developer/2000, SQL, PL/SQL, and SQL*Plus. The following five tests comprise this track:
1Z0-001: Introduction to Oracle—SQL and PL/SQL
1Z0-101: Develop PL/SQL Program Units
1Z0-111: Developer/2000 Forms 4.5 I
1Z0-112: Developer/2000 Forms 4.5 II
1Z0-113: Developer/2000 Reports 2.5
Oracle Database Operator (DBO)

A Database Operator (DBO) performs simple operational tasks on Oracle databases in a support role to the DBA. DBOs need an introductory knowledge of the commands and utilities associated with managing a database. DBOs also install and set up databases, create users, and perform routine backups. You need to take the following test to be certified as a Database Operator:
1Z0-401: Database Operator
Oracle Java Developer

This certification track is part of the Certification Initiative for Enterprise Development, a multi-vendor collaboration with Sun Microsystems, IBM, Novell, and the Sun-Netscape Alliance to establish standards for knowledge and skill levels for enterprise developers in Java technology. The Initiative recognizes three levels of certification requiring five tests. At each skill level, a certificate is awarded to candidates who successfully pass the required exams in that level.
Level 1: Sun Certified Programmer
1Z0-501: Sun Certified Programmer for the Java 2 Platform
Level 2: Certified Solution Developer
1Z1-502: Oracle JDeveloper: Develop Database Applications with Java (Oracle JDeveloper, Release 2)
or 1Z1-512: Oracle JDeveloper: Develop Database Applications with Java (Oracle JDeveloper, Release 3)
1Z0-503: Object-Oriented Analysis and Design with UML
Level 3: Certified Enterprise Developer
1Z0-504: Enterprise Connectivity with J2EE
1Z0-505: Enterprise Development on the Oracle Internet Platform
Oracle Financial Applications Consultant

This certification tests your expertise in Oracle Financial applications. These exams are designed to test your knowledge of the business processes incorporated into the Oracle Financial applications software. The following three tests comprise this track, and the third exam offers a specialization in either Procurement or Order Fulfillment:
1Z0-210: Financial Management R11
1Z0-220: Applied Technology R11
1Z0-230: Procurement R11
or
1Z0-240: Order Fulfillment
More Information

The most current information about Oracle certification can be found at http://education.oracle.com. Follow the Certification Home Page link and choose the track you are interested in. Read the Candidate Guide for the test objectives and test contents, and keep in mind that they can change at any time without notice.
OCP: Database Administrator Track

The Oracle8i Database Administrator certification consists of five tests, and Sybex offers several study guides to help you achieve the OCP Database Administrator Certification. There are three books in this series:
OCP: Oracle8i™ DBA SQL and PL/SQL Study Guide
OCP: Oracle8i™ DBA Architecture & Administration and Backup & Recovery Study Guide
OCP: Oracle8i™ DBA Performance Tuning and Network Administration Study Guide
Additionally, these three books are offered in a boxed set:
OCP: Oracle8i™ DBA Certification Kit
Table F.1 lists the five exams for the DBA track, their scoring, and the Sybex study guides that will help you pass each exam.

TABLE F.1: OCP Database Administrator Tests and Passing Scores

Exam 1Z0-001: Introduction to Oracle: SQL & PL/SQL
Total questions: 57; questions correct to pass: 39 (68%)
Sybex study guide: OCP: Oracle8i™ DBA SQL and PL/SQL Study Guide

Exam 1Z0-023: Oracle8i: Architecture and Administration
Total questions: 65; questions correct to pass: 38 (58%)
Sybex study guide: OCP: Oracle8i™ DBA Architecture & Administration and Backup & Recovery Study Guide

Exam 1Z0-024: Oracle8i: Performance and Tuning
Total questions: 57; questions correct to pass: 38 (67%)
Sybex study guide: OCP: Oracle8i™ DBA Performance Tuning and Network Administration Study Guide

Exam 1Z0-025: Oracle8i: Backup and Recovery
Total questions: 60; questions correct to pass: 34 (57%)
Sybex study guide: OCP: Oracle8i™ DBA Architecture & Administration and Backup & Recovery Study Guide

Exam 1Z0-026: Oracle8i: Network Administration
Total questions: 59; questions correct to pass: 41 (71%)
Sybex study guide: OCP: Oracle8i™ DBA Performance Tuning and Network Administration Study Guide
Skills Required for DBA Certification
Understanding RDBMS concepts
Writing queries and manipulating data
Creating and managing users and database objects
Understanding PL/SQL programming and constructs
Understanding Oracle Server architecture—Database and Instance
Completely understanding physical and logical database storage concepts
Managing data—storage, loading, and reorganization
Managing roles, privileges, passwords, and resources
Understanding backup and recovery options
Archiving redo log files and hot backups
Using Recovery Manager (RMAN) to perform backup and recovery operations
Creating and managing standby databases
Identifying and tuning database and SQL performance
Interpreting data dictionary views and database parameters
Configuring Net8 on the server side and client side
Using multi-threaded server, connection manager, and Oracle Names
Understanding graphical and character mode backup, recovery, and administration utilities
Tips for Taking OCP Exams
Each OCP test contains about 60–80 questions to be completed in about 90 minutes. Answer the questions you know first, so that you do not run out of time.
Many questions on the exam have answer choices that at first glance look identical. Read the questions carefully. Don’t just jump to conclusions. Make sure that you are clear about exactly what each question asks.
Many of the test questions are scenario based. Some of the scenarios contain non-essential information and exhibits. You need to be able to identify what’s important and what’s not.
Do not leave any questions unanswered. There is no negative scoring. You can mark a difficult question or a question you are unsure about and come back to it later.
When answering questions that you are not sure about, use a process of elimination to get rid of the obviously incorrect answers first. Doing this greatly improves your odds if you need to make an educated guess.
What Does This Book Cover?

This book covers everything you need to know to pass both the OCP: Oracle8i DBA Architecture & Administration and OCP: Oracle8i DBA Backup & Recovery exams. The first part of the book covers the configuration, architecture, and administration of an Oracle database. The second part covers the topics of backup and recovery, including traditional techniques as well as RMAN.
Part One: OCP: Oracle8i DBA Architecture & Administration

Chapter 1 starts with an overview of the Oracle database configuration and its architecture. It discusses what constitutes an instance and a database, and the various background processes that communicate with the Oracle database.

Chapter 2 discusses the administrator authentication methods and the stages in starting and stopping the Oracle database. This chapter also introduces the Oracle Enterprise Manager utilities.

Chapter 3 introduces you to the steps required in database creation and how to prepare the OS environment and parameter files. The database administrative packages and database event triggers are also discussed.

Chapter 4 covers two important constituents of the Oracle database—the control file and redo log file. You will learn the importance and use of these files in this chapter.

Chapter 5 is dedicated to managing tablespaces. The types of tablespaces and their configuration and storage are discussed.

Chapter 6 discusses the logical storage structures such as blocks, extents, and segments. Creating and managing rollback segments are discussed.

Chapter 7 covers creating and managing tables and their associated structures, such as indexes and constraints.
Chapter 8 introduces database and data security. Setting up users, privileges, and roles is discussed.

Chapter 9 discusses the database utilities such as SQL*Loader and Export/Import. It also discusses the direct-load insert and National Language Support.
Part Two: Oracle8i DBA Backup & Recovery

Chapter 10 starts with an overview of the backup and recovery process and configurations.

Chapter 11 discusses the recovery structures and processes as well as the types of failures.

Chapter 12 introduces how to configure the database for backup and recovery. This chapter also describes the differences between ARCHIVELOG and NOARCHIVELOG modes.

Chapter 13 covers how to perform physical backups without Recovery Manager. It includes implications of closed and open backups, Nologging and Logging options, control file backups, and special issues associated with read-only tablespaces.

Chapter 14 covers how to perform complete recovery without Recovery Manager. This chapter focuses primarily on the differences between ARCHIVELOG and NOARCHIVELOG mode recoveries.

Chapter 15 discusses how to perform incomplete recovery without Recovery Manager. This chapter has a special section on how to recover after losing current and inactive nonarchived redo log files.

Chapter 16 introduces the Export and Import utilities and how these utilities complement the backup process. It includes an example of tablespace point-in-time recovery (TSPITR).

Chapter 17 introduces additional recovery issues, such as parallel recovery and recovering a database with missing data files. There is also a section on issues associated with read-only tablespaces.

Chapter 18 covers more Oracle utilities for troubleshooting the database. This chapter focuses on detecting corruption, diagnosing and solving problems via trace files, and using the LogMiner utility to reconstruct redo log transactions.

Chapter 19 introduces Oracle Recovery Manager, providing an overview that includes its components and capabilities.
Chapter 20 demonstrates Recovery Manager’s catalog creation and maintenance. It covers how to query the recovery catalog and create, run, and store RMAN scripts.

Chapter 21 covers backups using Recovery Manager. It discusses the types of RMAN backups, tuning backups, and performing incremental and cumulative backups.

Chapter 22 discusses restoration and recovery using Recovery Manager and has specific examples of types of recoveries.

Chapter 23 covers the Oracle standby database, including its complete configuration and setup. It provides examples of using the standby database in specific scenarios.

Each chapter ends with review questions that are specifically designed to help you retain the knowledge presented. To really nail down your skills, read each question carefully before answering it.
Where Do You Take the Exam?

You may take the exams at any of the more than 800 Sylvan Prometric Authorized Testing Centers around the world. For the location of a testing center near you, call 1-800-891-3926. Outside of the United States and Canada, contact your local Sylvan Prometric Registration Center. The tests can be taken in any order. To register for an Oracle Certified Professional exam:

1. Determine the number of the exam you want to take.

2. Register with the nearest Sylvan Prometric Registration Center. At this point, you will be asked to pay in advance for the exam. At the time of this writing, the exams are $125 each and must be taken within one year of payment. You can schedule exams up to six weeks in advance or as soon as one working day prior to the day you wish to take it. If something comes up and you need to cancel or reschedule your exam appointment, contact Sylvan Prometric at least 24 hours in advance.

3. When you schedule the exam, you’ll get instructions regarding all appointment and cancellation procedures, the ID requirements, and information about the testing-center location.
You can also register for the test online at http://www.2test.com/register/frameset.htm. If you live outside the United States, register online at http://www.2test.com/register/testcenterlocator/ERN_intl_IT&FAA.htm.
How to Use This Book

This book can provide a solid foundation for the serious effort of preparing for the Oracle Certified Professional Architecture & Administration and Backup & Recovery exams. To best benefit from this book, use the following study method:

1. Take the Assessment Tests immediately following this introduction. (The answers are at the end of each test.) Carefully read over the explanations for any questions you get wrong, and note which chapters the material comes from. This information should help you plan your study strategy.

2. Study each chapter carefully, making sure that you fully understand the information and the test objectives listed at the beginning of each chapter. Pay extra close attention to any chapter for which you missed questions in the Assessment Tests.

3. Closely examine the sample queries that are used throughout the book. You may find it helpful to type in the samples and compare the results shown in the book to those on your system. Once you’re comfortable with the content in the chapter, answer the review questions related to that chapter. (The answers appear at the end of the chapter, after the review questions.)

Note: When typing in examples from the book, do not type the line numbers that appear in the sample output; the Oracle query tools automatically number lines for you.

4. Note the questions that confuse you, and study those sections of the book again.

5. Take the Practice Exams in this book. You’ll find them in Appendix A and Appendix B. The answers appear at the end of the exams.
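The earlier note about line numbers refers to the continuation-line prompts that SQL*Plus prints when a statement spans several lines. A hypothetical session (the query shown is only an illustration) looks like this:

```sql
SQL> SELECT username, created
  2  FROM dba_users
  3  WHERE username = 'SYSTEM';
```

The 2 and 3 at the left margin are supplied by SQL*Plus automatically; type only the SQL text itself.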
6. Before taking the exam, try your hand at the bonus exams that are included on the CD that comes with this book. The questions in these exams appear only on the CD. This will give you a complete overview of what you can expect to see on the real thing.

7. Remember to use the products on the CD that is included with this book. The electronic flashcards and the EdgeTest exam preparation software have all been specifically picked to help you study for and pass your exams. Oracle also offers sample exams on their certification Web site: http://education.oracle.com/certification/. The electronic flashcards can be used on your Windows computer or on your Palm device.

To learn all the material covered in this book, you’ll have to apply yourself regularly and with discipline. Try to set aside the same time period every day to study, and select a comfortable and quiet place to do so. If you work hard, you will be surprised at how quickly you learn this material. All the best!
What’s on the CD?

We worked hard to provide some really great tools to help you with your certification process. All of the following tools should be loaded on your workstation when studying for the test.
The EdgeTest for Oracle Certified DBA Preparation Software

Provided by EdgeTek Learning Systems, this test preparation software prepares you to successfully pass the OCP: Oracle8i DBA Architecture & Administration and Backup & Recovery exams. In this test engine, you will find all of the questions from the book, plus additional Practice Exams that appear exclusively on the CD. You can take the Assessment Tests, test yourself by chapter, take the Practice Exams that appear in the book or on the CD, or take an exam randomly generated from any of the questions.
Electronic Flashcards for PC and Palm Devices After you read the book, read the review questions at the end of each chapter and study the practice exams included in the book and on the CD. But wait, there’s more! Test yourself with the flashcards included on the CD. If you can get through these difficult questions, and understand the answers, you’ll
know you’re ready for the OCP: Oracle8i DBA Architecture & Administration and Backup & Recovery exams. The flashcards include over 150 questions specifically written to hit you hard and make sure you are ready for the exam. Between the review questions, practice exam, and flashcards, you’ll be more than prepared for the exam.
OCP: Oracle8i DBA Architecture & Administration and Backup & Recovery Study Guide Ebook

Sybex is now offering the Oracle Certification books on CD, so you can read them on your PC or laptop. They are in Adobe Acrobat format. Acrobat Reader 4 is also included on the CD. This will be extremely helpful to readers who fly and don’t want to carry a book, as well as to readers who find it more comfortable reading from their computer.
How to Contact the Authors

To contact Biju Thomas, you can e-mail him at [email protected] or visit his Web site for DBAs at www.bijoos.com/oracle. You can reach Doug Stuns via e-mail at [email protected]. Doug recommends that you register at technet.oracle.com to get access to the Oracle8i documentation, including information about all of the RMAN commands as well as other useful information about Oracle8i.
About the Authors

Biju Thomas is an Oracle Certified Professional with more than six years of Oracle database administration and application development experience. He has written articles for Oracle Magazine and Oracle Internals.

Doug Stuns is an Oracle Certified Professional with more than 10 years of experience with Oracle databases. He is currently president and founder of SCS, Inc., an Oracle-based consulting company in Scottsdale, Arizona. Formerly, Doug worked for Oracle Corporation for five years, serving as a senior principal technical consultant focusing on DBA consulting and customized education projects.
OCP: Oracle8i Architecture & Administration Assessment Test

1. What happens when one of the redo members of the next group is unavailable when LGWR has finished writing the current log file?
A. Database operation will continue uninterrupted.
B. The database will hang; do an ALTER DATABASE SWITCH LOGFILE to skip the unavailable redo log.
C. The instance will be shut down.
D. LGWR will create a new redo log member, and the database will continue to be in operation.

2. How do you change the status of a database to restricted availability, if the database is already up and running? Choose the best answer.
A. Shut down the database and start the database using STARTUP RESTRICT.
B. Use the ALTER DATABASE RESTRICT SESSIONS command.
C. Use the ALTER SYSTEM ENABLE RESTRICTED SESSION command.
D. Use the ALTER SESSION ENABLE RESTRICTED USERS command.

3. Which background process updates the online redo log files with the redo log buffer entries when a COMMIT occurs in the database?
A. DBWn
B. LGWR
C. CKPT
D. CMMT
4. Analyze the following statement. On which tablespace will the rollback segment R01 be created?
CREATE ROLLBACK SEGMENT R01;
A. The default tablespace of the user creating the rollback segment.
B. In the RBS tablespace.
C. In the SYSTEM tablespace.
D. The statement will return an error. You must specify a tablespace when creating a rollback segment.

5. Choose two extent management options available for tablespaces.
A. Dictionary managed
B. Data file managed
C. Locally managed
D. Remote managed
E. System managed

6. Which component in the following list is not part of the SGA?
A. Database buffer cache
B. Library cache
C. Sort area
D. Shared pool

7. How do you enable complex password verification such as having at least one numeric or special character in the password?
A. Use the ALTER USER command to specify the PASSWORD_VERIFY_FUNCTION clause
B. Define a profile with the PASSWORD_VERIFY_FUNCTION and assign the profile to the user
C. Create a trigger on the password change event, which fires when the user changes a password and verifies the criteria
D. Set the initialization parameter PASSWORD_VERIFY_FUNCTION
8. The ALTER INDEX … REBUILD command cannot ____________.
A. Move index to a new tablespace
B. Change the INITIAL extent size of the index
C. Collect statistics on the index
D. Specify a new name for the index

9. Choose a file that is not used by the SQL*Loader utility for input or output.
A. Control file
B. Parameter file
C. Bad file
D. Text file
E. Data file

10. Which is the correct order of steps in executing a query?
A. Parse, execute
B. Execute, parse, fetch
C. Parse, execute, fetch
D. Parse, fetch, execute

11. Which script creates the data dictionary tables?
A. catalog.sql
B. catproc.sql
C. sql.bsq
D. dictionary.sql

12. How do you collect statistics for a table?
A. ALTER TABLE COMPUTE STATISTICS
B. ANALYZE TABLE COMPUTE STATISTICS
C. ALTER TABLE COLLECT STATISTICS
D. ANALYZE TABLE COLLECT STATISTICS
13. When you connect to a database by using CONNECT SCOTT/TIGER AS SYSDBA, which schema are you connected to in the database?
A. SYSTEM
B. PUBLIC
C. SYSDBA
D. SYS
E. SCOTT

14. Which storage parameter is used to make sure that each extent is a multiple of the value specified?
A. MINEXTENTS
B. INITIAL
C. MINIMUM EXTENT
D. MAXEXTENTS

15. Which data dictionary view would you query to see the temporary segments in a database?
A. DBA_SEGMENTS
B. V$SORT_SEGMENT
C. DBA_TEMP_SEGMENTS
D. DBA_TABLESPACES

16. Suppose the database is in the MOUNT state; pick two statements from the options below that are correct.
A. The control file is open; the database files and redo log files are closed.
B. You can query the SGA by using dynamic views.
C. The control file, data files, and redo log files are open.
D. The control file, data files, and redo log files are all closed.
17. Which of the following clauses will affect the size of the control file when creating a database? Choose two.
A. MAXLOGFILES
B. LOGFILE
C. ARCHIVELOG
D. MAXDATAFILES

18. Which SQL*Plus command can be used to see whether the database is in ARCHIVELOG mode?
A. SHOW DB MODE
B. ARCHIVELOG LIST
C. ARCHIVE LOG LIST
D. LIST ARCHIVELOG

19. How is the database character set specified?
A. When you create the database
B. In the initialization parameter file
C. In the environment variable
D. Using ALTER SESSION

20. When you create a user with a default tablespace of USERS and you do not specify the temporary tablespace, which tablespace will be the user’s temporary tablespace?
A. TEMP
B. USERS
C. SYSTEM

21. Which is an invalid database event for creating a trigger?
A. SHUTDOWN
B. SERVERERROR
C. STARTUP
D. COMMIT
22. Choose two space management parameters used to control the free space usage in a data block.
A. PCTINCREASE
B. PCTFREE
C. PCTALLOCATED
D. PCTUSED

23. When you multiplex the control file, how many control files can you have for one database?
A. Four
B. Eight
C. Twelve
D. Unlimited

24. The following is the sequence of actions performed by users in a database; who will have what privileges?
James: GRANT SELECT ON CUSTOMER TO JULIE WITH GRANT OPTION;
Julie: GRANT SELECT ON JAMES.CUSTOMER TO ALEX;
James: REVOKE SELECT ON CUSTOMER FROM JULIE;
A. James cannot revoke privileges from Julie because Julie has granted the privilege to Alex.
B. Julie loses the privilege on CUSTOMER, but Alex keeps the privilege.
C. Julie and Alex lose privileges on CUSTOMER.
D. Alex loses the privilege on CUSTOMER, but Julie keeps the privilege.

25. How do you prevent row migration?
A. Specify larger PCTFREE
B. Specify larger PCTUSED
C. Specify large INITIAL and NEXT sizes
D. Specify small INITRANS
26. The following are the steps required for relocating a data file belonging to the USERS tablespace. Order the steps in their proper sequence.
A. Copy the file ‘/disk1/users01.dbf’ to ‘/disk2/users01.dbf’ using an OS command.
B. ALTER DATABASE RENAME FILE ‘/disk1/users01.dbf’ TO ‘/disk2/users01.dbf’.
C. ALTER TABLESPACE USERS OFFLINE.
D. ALTER TABLESPACE USERS ONLINE.

27. Which initialization parameter specifies that no more than the specified number of seconds’ worth of redo log blocks need to be read during instance recovery?
A. FAST_START_IO_TARGET
B. LOG_CHECKPOINT_TIMEOUT
C. LOG_CHECKPOINT_INTERVAL
D. CHECKPOINT_RECOVERY_TIME

28. Which data dictionary view can be queried to find the primary key columns of a table?
A. DBA_TABLES
B. DBA_TAB_COLUMNS
C. DBA_IND_COLUMNS
D. DBA_CONS_COLUMNS
E. DBA_CONSTRAINTS

29. Which dictionary views would give you information about the total size of a tablespace? Choose two.
A. DBA_TABLESPACES
B. DBA_TEMP_FILES
C. DBA_DATA_FILES
D. DBA_FREE_SPACE
30. Choose the statement that is true.
A. To grant a role, you must have the DBA role.
B. To grant a role, you must have been granted the role with the WITH ADMIN OPTION clause.
C. When a role is created, it has all system privileges.
D. When a role is created, it does not have any privileges.

31. Which DBMS package is associated with the transportable tablespace feature?
A. DBMS_TTS
B. DBMS_TRANSPORT
C. DBMS_TABLESPACES
D. DBMS_TRANSPORT_TABLESPACES
Answers to Assessment Test

1. A. When one of the redo log members becomes unavailable, Oracle writes an error message in the alert log file and the database operation continues uninterrupted. When all the redo log members of a group are unavailable, the instance shuts down. For more information, see Chapter 4.

2. C. Though answer A is correct, the more appropriate answer is C. You can use the ALTER SYSTEM command to enable or disable restricted access to the database. To learn about sessions and database start-up/shutdown options, turn to Chapter 2.

3. B. The LGWR process is responsible for writing the redo log buffer entries to the online redo log files. The LGWR process writes to the redo log files when a COMMIT occurs, when a checkpoint occurs, when the DBWn writes dirty buffers to disk, or every three seconds. To learn more about the background processes and database configuration, refer to Chapter 1.

4. C. If you do not specify a tablespace for the rollback segment, it will be created in the SYSTEM tablespace. Rollback segments are always owned by SYS. If you do not specify the storage parameters, the rollback segment will be created with the default storage parameters specified for the tablespace. Rollback segment creation and management are discussed in Chapter 6.

5. A and C. When the extent management options are handled through the dictionary, the tablespace is known as dictionary managed. When the extent management is done using bitmaps in the data files belonging to the tablespace, it is known as locally managed. The default is dictionary managed, which was the only management class available prior to Oracle8i. For more information, refer to Chapter 5.

6. C. The sort area is not part of the SGA; it is part of the PGA. The sort area is allocated to the server process when required. See Chapter 1 for more information on the components of the SGA and an overview of the Oracle database architecture.
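The restricted-access behavior described in answer 2 can be sketched as follows; this is a minimal illustration, and the statements require the ALTER SYSTEM privilege:

```
-- Put a running instance into restricted mode (answer 2);
-- new connections then require the RESTRICTED SESSION privilege.
ALTER SYSTEM ENABLE RESTRICTED SESSION;

-- Return the instance to normal availability.
ALTER SYSTEM DISABLE RESTRICTED SESSION;
```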
7. B. To enable complex password verification, you must create a function that verifies the password and returns a Boolean result indicating whether the new password is acceptable. This function should be owned by SYS and is specified in the PASSWORD_VERIFY_FUNCTION clause of the profile. To learn more about profiles and password management, refer to Chapter 8.

8. D. To rename an index, you must use the ALTER INDEX … RENAME TO … command, but you cannot combine a rename with any other index operation. When rebuilding an index, you can specify a new tablespace and new storage parameters. The index can be rebuilt in parallel, and you can specify COMPUTE STATISTICS to collect statistics. Indexes are discussed in Chapter 7.

9. D. The text file is not a specific file type used by SQL*Loader per se. The control file specifies the table name, data format, etc.; the parameter file specifies the command-line parameters in a file; the bad file has the records rejected by Oracle or SQL*Loader with an error code; and the data file has the data to be loaded. To learn about data-loading utilities, refer to Chapter 9.

10. C. The SQL query (SELECT statement) is executed in three major stages. In the parse stage, the query is checked for errors and privileges and is parsed in the shared SQL area. In the execute stage, the parsed code is executed. In the fetch stage, data is fetched from data files. See Chapter 1 for more information on query processing steps.

11. C. The script sql.bsq is executed automatically by the CREATE DATABASE command, and it creates the data dictionary base tables. The catalog.sql script creates the data dictionary views. To learn more about the other scripts and the data dictionary, refer to Chapter 3.

12. B. The ANALYZE command is used to collect statistics on the table. COMPUTE STATISTICS reads all the blocks of the table and collects the statistics. ESTIMATE STATISTICS takes a few rows as a sample and collects statistics. Collecting statistics and validating structure by using the ANALYZE command are discussed in Chapter 7.

13. D. When you connect to the database by using the SYSDBA privilege, you are really connecting to the SYS schema. If you use SYSOPER, you will be connected as PUBLIC. To learn more about administrator authentication methods, refer to Chapter 2.
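The profile-based password verification in answer 7 might look like the following sketch; the function name verify_complexity and the profile and user names are hypothetical, and the function itself must be created under SYS and return BOOLEAN:

```
-- Attach a SYS-owned verification function to a profile,
-- then assign that profile to a user (names are illustrative).
CREATE PROFILE secure_profile LIMIT
  PASSWORD_VERIFY_FUNCTION verify_complexity;

ALTER USER scott PROFILE secure_profile;
```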
14. C. The MINIMUM EXTENT parameter is used to make sure each extent is a multiple of the value specified. This parameter is useful to reduce fragmentation in the tablespace. For more information, refer to Chapter 5.

15. A. To see all the temporary segments in the database, use the DBA_SEGMENTS view and restrict the query using SEGMENT_TYPE = ‘TEMPORARY’. The V$SORT_SEGMENT view shows only the temporary segments created in TEMPORARY tablespaces. To learn about the types of segments, see Chapter 6.

16. A and B. When the database is in the MOUNT state, the control file is opened to get information about the data files and redo log files. You can query the SGA information by using the V$ views as soon as the instance is started, that is, in the NOMOUNT state. More information about database start-up steps is in Chapter 2.

17. A and D. The clauses MAXDATAFILES, MAXLOGFILES, MAXLOGMEMBERS, MAXINSTANCES, and MAXLOGHISTORY affect the size of the control file. Oracle pre-allocates space in the control file for the maximums you specify. To learn more about database creation, refer to Chapter 3.

18. C. The ARCHIVE LOG LIST command shows whether the database is in ARCHIVELOG mode, whether automatic archiving is enabled, the archival destination, and the oldest, next, and current log sequence numbers. Refer to Chapter 4.

19. A. The database character set and national character set are specified at database creation time, and in most cases they cannot be changed. National Language Support (NLS) features in Oracle8i are discussed in Chapter 9.

20. C. If you do not specify a temporary tablespace when creating the user, the user is assigned SYSTEM as the temporary tablespace. Managing users is discussed in Chapter 8.

21. D. COMMIT is an invalid database event. The trigger on the SHUTDOWN event fires before closing the database, the STARTUP trigger fires after opening the database, and SERVERERROR fires after encountering an error. For more on database event triggers, refer to Chapter 3.
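The DBA_SEGMENTS query suggested in answer 15 can be written as follows; it assumes a session with access to the DBA data dictionary views:

```
-- List all temporary segments in the database (answer 15).
SELECT owner, segment_name, tablespace_name
FROM   dba_segments
WHERE  segment_type = 'TEMPORARY';
```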
22. B and D. PCTFREE and PCTUSED are the space management parameters that control space in a block. PCTFREE specifies the percent of space that should be reserved for future updates (which can increase the length of the row), and PCTUSED specifies when Oracle can start reinserting rows into the block once PCTFREE is reached. PCTFREE and PCTUSED together cannot exceed 100. To learn about space management parameters, refer to Chapter 6.

23. B. You can have a maximum of eight control files per database. It is recommended to keep the control files on different disks. For more information, see Chapter 4.

24. C. When object privileges are revoked, the revoke cascades to all levels. For system privileges, the revoke does not cascade. To learn about system and object privileges, refer to Chapter 8.

25. A. PCTFREE specifies the free space reserved for future updates to rows. By specifying a larger value for PCTFREE, more free space is available in each block for updates. Row migration occurs when a row is updated and there is not enough space in the block to hold the row; Oracle then moves the entire row to a new block, leaving a pointer in the old block. To learn about data block free space management, refer to Chapter 7.

26. C, A, B, and D. To rename a data file, you need to take the tablespace offline, so that Oracle does not try to update the data file while you are renaming it. Using OS commands, copy the data file to the new location; then, using the ALTER DATABASE RENAME FILE command or the ALTER TABLESPACE RENAME FILE command, rename the file in the database’s control file. To rename the file in the database, the new file should exist. Bring the tablespace online for normal database operation. For more information on relocating data files, see Chapter 5.

27. B. LOG_CHECKPOINT_TIMEOUT ensures that no more than the specified number of seconds’ worth of redo log blocks need to be read during instance recovery. LOG_CHECKPOINT_INTERVAL ensures that no more than the specified number of redo log blocks (OS blocks) need to be read during instance recovery. See Chapter 4.
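The relocation sequence from answer 26 (C, A, B, D) can be sketched in practice, using the file names from the question:

```
-- Take the tablespace offline so the file is not being written.
ALTER TABLESPACE users OFFLINE;

-- Copy the file at the OS level before renaming it in the database,
-- for example: cp /disk1/users01.dbf /disk2/users01.dbf

-- Record the new location in the control file.
ALTER DATABASE RENAME FILE '/disk1/users01.dbf'
                        TO '/disk2/users01.dbf';

-- Resume normal operation.
ALTER TABLESPACE users ONLINE;
```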
28. D. The DBA_CONS_COLUMNS view has the name and position of each column that belongs to the constraint. To find the primary key constraint name, query the DBA_CONSTRAINTS view with CONSTRAINT_TYPE and TABLE_NAME in the WHERE clause. To learn about constraints, read Chapter 7.

29. B and C. The DBA_DATA_FILES view has the size of each data file assigned to the tablespace; the total size of all the files is the size of the tablespace. Similarly, if the tablespace is locally managed and temporary, you need to query the DBA_TEMP_FILES view. For more information, see Chapter 5.

30. D. When a role is created, no privileges are associated with it. You must grant privileges (object and system privileges) by using the GRANT command. A user with the GRANT ANY ROLE privilege can grant roles to other users. To learn about roles, refer to Chapter 8.

31. A. DBMS_TTS is associated with transportable tablespaces; the procedure TRANSPORT_SET_CHECK can be used to verify that the tablespaces in the transportable list are self-contained. To learn about transportable tablespaces, refer to Chapter 9.
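The lookup described in answer 28 can be combined into one query; the table name CUSTOMER is borrowed from question 24 purely for illustration:

```
-- Find the primary key columns of a table by joining
-- DBA_CONSTRAINTS to DBA_CONS_COLUMNS (answer 28).
SELECT cc.column_name, cc.position
FROM   dba_constraints c, dba_cons_columns cc
WHERE  c.constraint_name = cc.constraint_name
AND    c.owner           = cc.owner
AND    c.table_name      = 'CUSTOMER'
AND    c.constraint_type = 'P'
ORDER BY cc.position;
```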
OCP: Oracle8i Backup & Recovery Assessment Test

1. What type of failure requires the DBA to get involved and is most often the most serious? Choose all that apply.
A. Process failure
B. Crashed disk drive with data files that are unreadable
C. Instance failure
D. User error
E. Media failure

2. What clause is used in managed recovery mode to determine how long the standby database waits for an archive log to be written to the appropriate directory for recovery?
A. WAIT_TIME
B. TIME_PERIOD
C. TIME_OUT
D. TIMEOUT

3. Which utility enables you to identify and skip corrupt data blocks?
A. DBVERIFY
B. DBMS_REPAIR
C. ANALYZE TABLE
D. DB_CHECK_BLOCKS
4. If you are using the LOG_ARCHIVE_DEST_N parameter to protect your archive log files from media failure, what is the maximum number of destinations?
A. 2
B. 5
C. 10
D. 8
E. 3

5. Which parameter causes the Import utility to import the transportable tablespace data dictionary information from the export file?
A. TABLE
B. TRANSPORTABLE
C. TRANSPORT_TABLESPACE
D. TABLESPACE

6. How do archive logs aid in the backup and recovery process?
A. Allow recovery to beyond the point of failure
B. Allow recovery up to the point of failure
C. Allow imports
D. Recover uncommitted transactions

7. Which two data dictionary views can you use to determine whether a tablespace is still in backup mode?
A. V$BACKUP
B. V$DATAFILE
C. V$TABLESPACE
D. V$DATAFILE_HEADER
E. DBA_DATAFILES
8. Which RMAN command executes a stored RMAN script?
A. EXECUTE
B. RUN
C. START
D. @

9. What is the reason for having database checkpoints?
A. To write log buffer information to redo log files
B. To have the LGWR process write log buffer information to redo log files
C. To have the modified shared pool buffers written to the database files
D. To have the modified database buffers written to the database files

10. In NOARCHIVELOG mode, which media failure scenario would not require a complete restore of all data files?
A. The online redo logs have not been recycled since the last backup.
B. The online redo logs have been recycled since the last backup.
C. The archive logs have not been recycled since the last backup.
D. A table export has been performed.

11. What information is necessary to create a new control file for a database?
A. The names of the database files
B. The name of the database
C. The location of the init.ora file
D. All the names and locations of all database files
12. Which are two activities that are well suited for the Export and Import utilities?
A. Rebuilding and reorganizing tables
B. Creating a physical backup
C. Creating a cold physical backup
D. Creating a logical archive of a database at a point in time
E. Creating a backup control file

13. What command is necessary to restore files to a new location?
A. SWITCH
B. SET NEWNAME
C. SET DATABASE
D. REGISTER DATABASE

14. The server hosting the Oracle database has had an operating system crash. What type of failure is this?
A. Media failure
B. Statement failure
C. User error
D. Instance failure

15. What would be a reason for using a recovery catalog with Recovery Manager?
A. To save long-term backup information.
B. The recovery catalog manager is mandatory with RMAN.
C. It improves performance.
D. It is needed to reduce disk space.
16. Which mode should you start the standby database in to receive changes from the primary database?
A. STARTUP NOMOUNT
B. STARTUP FORCE
C. STARTUP
D. STARTUP MOUNT

17. Which of the following statements best describes correctly multiplexed redo logs?
A. A redo group with one redo log member on one disk
B. A redo group with two members, each member on a separate disk
C. A redo group with two members, each member on the same disk
D. A redo group with three members, each member on the same disk

18. What is the primary purpose of the media management layer? Choose the best answer.
A. To back up data files to disk
B. To back up backup sets to disk
C. To back up image copies to disk
D. To back up data files to tape

19. What is the maximum number of archive log processes allowed?
A. 2
B. 5
C. 10
D. 8
20. You must perform a recovery because of incorrect DELETE statements that were made to the database. Which type of recovery would be best to perform?
A. Complete recovery
B. Time-based recovery
C. Cancel-based recovery
D. Change-based recovery

21. Which command combines the results of incremental and cumulative backups, archive logs, and redo logs to generate a synchronized database?
A. RESTORE
B. RECOVER
C. SWITCH
D. SYNCH
E. RESYNCH

22. Which file will display the last time the database was shut down?
A. init.ora
B. config.ora
C. Alert log
D. Control file

23. When the database is in NOARCHIVELOG mode, redo logs are not written out to archive logs before the automatic recycling of the redo logs. How does this affect recovery?
A. Data may be lost or unrecoverable.
B. All transactions in the current redo log are lost.
C. This has no effect on the recovery process.
D. All data can be recovered in every situation, but it can be very difficult.
24. What type of files will not be backed up in a backup set?
A. Data files
B. init.ora or parameter files
C. Password files
D. Control files
E. Archive logs

25. What mode must the tablespaces be in to perform an online, or open, backup?
A. MOUNT
B. NOMOUNT
C. Backup
D. NOARCHIVELOG
E. Closed

26. Which Oracle method checks for block corruption each time data and index blocks are modified?
A. DBVERIFY
B. DBMS_REPAIR
C. DB_BLOCK_CHECKING
D. ANALYZE TABLE

27. Which ALTER DATABASE option would you perform after an incomplete recovery in which the control file was re-created?
A. OPEN NORESETLOGS
B. OPEN RESETLOGS
C. OPEN ARCHIVELOG mode
D. USING BACKUP CONTROLFILE
28. What command must be performed after each incomplete recovery is validated and tested?
A. RESYNCH DATABASE
B. RESET DATABASE
C. SET DATABASE
D. REGISTER DATABASE

29. What RMAN command is used to start up a target database identified by brdb when using RMAN? Choose the best answer.
A. RMAN> SHUTDOWN IMMEDIATE
B. RMAN> SHUTDOWN MOUNT
C. SQL> RMAN STARTUP MOUNT
D. RMAN> STARTUP MOUNT

30. If you open the primary database with an ALTER DATABASE OPEN RESETLOGS command, what effect does this have on the standby database?
A. No impact on the standby database; it is automatically updated.
B. Minimal impact on the standby database, but more impact on the primary database.
C. It invalidates the standby database.
D. No effect only if the standby database is in managed recovery mode.

31. Analyze this Server Manager command to determine which action the command will accomplish:
SVRMGR> alter database rename file
2> ‘/db01/ORACLE/brdb/data01.dbf’ to
3> ‘/db02/ORACLE/brdb/data01.dbf’;
A. Move the data file in the OS to the new location.
B. Update the control file with the new physical location.
C. Update the recovery catalog.
D. Recover the data file.
32. To perform a closed, or offline, database backup, how should the database be shut down so that the database files and control files are checkpointed and the current transactions are allowed to complete by either committing or rolling back?
A. SHUTDOWN ABORT
B. SHUTDOWN NORMAL
C. SHUTDOWN IMMEDIATE
D. SHUTDOWN TRANSACTIONAL

33. What command must be performed to make the restored files current if moved to a different location?
A. SWITCH
B. SET NEWNAME
C. SET DATABASE
D. REGISTER DATABASE

34. You are recovering a read-only tablespace that was read-only at the time of backup. Which recovery action would you need to perform to recover the tablespace?
A. Restore data file from backup and apply redo logs.
B. Restore data file.
C. Restore all data files in the database.
D. You don’t need to restore read-only tablespaces.

35. What are the init.ora parameters that control asynchronous I/O for the RMAN backup/restore process?
A. TAPE_ASYNCH_IO
B. IO_ASYNCH_ENABLE
C. BACKUP_TAPE_IO_SLAVES
D. DISK_ASYNCH_IO
E. IO_ASYNCH_SLAVES
36. Which type of backup operation will enable you to recover to the point of failure resulting from a media failure?
A. OS backup with archiving
B. OS backup without archiving
C. Export utility without archiving
D. RMAN backup set without archiving
E. Import utility with archiving

37. What should you do first before launching the RMAN utility to connect to Recovery Manager?
A. Shut down the recovery catalog database
B. Import the target database
C. Set ORACLE_SID to the correct target database name
D. Shut down the target database

38. You are performing a recovery due to a media failure and you discover that you are missing an archive log file, so you must perform an incomplete recovery. Which statement best describes the end result?
A. Uncommitted transactions prior to the lost archive log will be committed.
B. All transactions recorded after the database is recovered will be lost.
C. All transactions that are committed before the lost archive log will be recovered.
D. All committed transactions prior to the lost archive log will be lost.

39. Which view could you query in the recovery catalog to validate that a stored script has been created?
A. RC_CREATED_SCRIPTS
B. CREATED_SCRIPTS
C. RC_STORED_SCRIPT
D. RC_DATABASE_SCRIPT
40. A user error has occurred in your database, which resulted in many new unwanted transactions. You determine that a tablespace point-in-time recovery is the best course of action to fix the unwanted transactions. Before starting this recovery, there must be enough disk, memory, and process resources available for which object?
A. Clone data file
B. Clone database
C. Stub database for a full import
D. Restored data file

41. If your database is in NOARCHIVELOG mode, which state can the database be in to restore the database files? Choose all that apply.
A. Opened
B. MOUNT
C. Closed
D. NOMOUNT

42. As a general rule, how many recovery processes per disk drive are considered adequate to perform a parallel recovery?
A. Three
B. One or two
C. Four
D. Five

43. Select all the valid BACKUP command FORMAT variables.
A. %D TARGET DATABASE NAME
B. %P BACKUP PIECE NUMBER
C. %S BACKUP SET NUMBER
D. %N PADDED TARGET DATABASE NAME
E. %T BACKUP SET STAMP
Answers to Assessment Test

1. B and E. A media failure occurs when a database file cannot be read or written to. This is most often the most serious type of failure, requiring the DBA to restore data files from backup. Answer B describes a type of media failure. See Chapter 10.

2. D. The TIMEOUT clause of the RECOVER MANAGED STANDBY DATABASE command determines how long the standby database will wait for the archive log from the primary database. See Chapter 23.

3. B. The DBMS_REPAIR utility provides a method to identify and skip corrupt data blocks. The DBMS_REPAIR utility is made up of PL/SQL packages. See Chapter 18.

4. B. Five destinations is the maximum number of locations allowed for the LOG_ARCHIVE_DEST_N parameter. See Chapter 12.

5. C. The TRANSPORT_TABLESPACE parameter causes the Import utility to import only the transportable tablespace data dictionary information from the export file. See Chapter 16.

6. B. Archive logs allow recovery up to the point of failure, which results in no data loss. See Chapter 10.

7. A and D. The V$DATAFILE_HEADER view shows YES in the FUZZY column when a particular data file is in backup mode and NULL when it is not in backup mode. The V$BACKUP view STATUS column shows ACTIVE when the data file is in backup mode and NOT ACTIVE when it is not being backed up. See Chapter 13.

8. B. The RUN command will execute a stored RMAN script. See Chapter 20.

9. D. The purpose of database checkpoints is to write modified database, or data block, buffers to the database files. The data-file headers are then marked as current. The control file also records this checkpoint or SCN information. See Chapter 11.
10. A. If the current online redo logs have not been written over or
recycled since the last backup, you could restore the database files affected by the media failure and roll forward using the redo logs. Archive logs are just copies of the redo logs. See Chapter 14. 11. D. The names and locations of all database files are required to rebuild the control files. This information would be added to the CREATE CONTROLFILE command, which is generated by the ALTER DATABASE BACKUP CONTROLFILE TO TRACE command. See Chapter 17. 12. A and D. The Export and Import utilities are well suited for rebuilding and reorganizing tables. Furthermore, they build logical archives or backups of a database at a point in time without roll-forward capabilities. See Chapter 16. 13. B. The SET NEWNAME command enables you to move files to a new
location in a recovery situation. See Chapter 22. 14. D. Instance failure occurs when the Oracle database goes down
unexpectedly without a SHUTDOWN, or with a SHUTDOWN ABORT command. The current online redo logs are then applied automatically to the database upon start-up. See Chapter 11. 15. A. The recovery catalog is used to store long-term backup information
that is useful during restores and recoveries. See Chapter 20. 16. A. The standby database should be started in NOMOUNT mode and
placed in some form of recovery mode to receive archive logs from the primary database. See Chapter 23. 17. B. Multiplexing redo logs occurs when each redo group is made up of
more than one member, and each member is on a separate disk. See Chapter 11. 18. D. The primary purpose of the media management layer is to inter-
face RMAN with tape hardware devices offered by third-party vendors. Legato Storage Manager is supplied by default with the RMAN software. See Chapter 19.
19. C. Ten archive log processes (ARC0 to ARC9) is the maximum num-
ber allowed. See Chapter 12. 20. B. A time-based recovery should be performed to recover just prior
to the point where the incorrect DELETE statement was executed and committed. See Chapter 15. 21. B. The RECOVER command determines the appropriate backups and
applies the necessary files from each to the database. The end result is a synchronized database, even if using an incremental backup strategy. See Chapter 22. 22. C. The alert log contains information about the instance, such as
shutdown, start-up, log switches, and errors. See Chapter 18. 23. A. Data may be lost when the database is in NOARCHIVELOG mode
because historical transactions are not preserved. Once the current online redo log is written over in the automatic recycling of redo logs, that information is gone and cannot be used in the recovery process. See Chapter 12. 24. B and C. RMAN will back up only data files, control files, and archive
logs in a backup set. Data files and archive logs must be separated into different backup sets. See Chapter 21. 25. C. The tablespaces must be in backup mode to perform an online, or
opened, database backup. The command to place the tablespace in backup mode is ALTER TABLESPACE BEGIN BACKUP, and ALTER TABLESPACE END BACKUP is the command to remove the tablespace from backup mode. See Chapter 13. 26. C. The init.ora parameter DB_BLOCK_CHECKING = TRUE will
check index and table blocks each time these blocks are modified. See Chapter 18. 27. B. The database should be opened with the RESETLOGS option to pre-
vent invalid redo logs from being applied and to reset the logs to new sequence numbers. This will also re-create any missing redo logs. See Chapter 15.
28. B. Any database that has had an incomplete recovery or has been
opened with the SQL command ALTER DATABASE OPEN RESETLOGS will require the RMAN command RESET DATABASE to be performed. See Chapter 22. 29. D. To start up a target database in RMAN, you must issue the STARTUP command from within RMAN. Either STARTUP MOUNT or STARTUP will work. See Chapter 19. 30. C. The standby database is invalidated because the log sequence is
reset and archive logs can no longer be applied to the standby database. See Chapter 23. 31. B. The ALTER DATABASE RENAME FILE command will update the control
file to account for the new physical location. The file needs to be moved to the new location manually in the OS with a copy or move command. See Chapter 14. 32. D. The SHUTDOWN TRANSACTIONAL command will allow all trans-
actions to complete either by committing or rolling back and will checkpoint the database files and control files, ensuring a consistent database for backup. A transactional shutdown prevents clients from losing work, and at the same time, does not require all users to log off. See Chapter 13. 33. A. The SWITCH command makes files that were restored in a new loca-
tion current and synchronized with the control file. See Chapter 22. 34. B. A read-only tablespace needs only the associated data files con-
taining the read-only tablespace restored. No redo log needs to be applied. See Chapter 17. 35. A, C, and D. The TAPE_ASYNCH_IO, BACKUP_TAPE_IO_SLAVES, and DISK_ASYNCH_IO parameters control the RMAN tape and disk asynchronous I/O capabilities. Each of these parameters must be set to TRUE to enable this feature. See Chapter 21. 36. A. An OS backup with archiving will enable you to restore a backup
and roll forward until the point of failure so that no data is lost. See Chapter 14.
37. C. The ORACLE_SID should be pointing to or set to the correct target
database before starting the RMAN utility. See Chapter 19. 38. B. All transactions that occur in the lost archive log and all following
logs will be lost after the database is recovered. This data would need to be manually reentered. See Chapter 15. 39. C. The view RC_STORED_SCRIPT confirms that an RMAN script is
stored in the recovery catalog. See Chapter 20. 40. B. A tablespace point-in-time recovery requires enough memory,
disk, and process resources on the server to support the existing database and the clone database. See Chapter 16. 41. B and D. The database can be restored in MOUNT and NOMOUNT only
when in NOARCHIVELOG mode and using RMAN. See Chapter 22. 42. B. As a general rule, one or two recovery processes per disk drive are
adequate to recover the database as quickly as possible without generating I/O contention. See Chapter 17. 43. A, B, C, D, and E. All options are valid FORMAT variables. See
Chapter 21.
PART I

OCP: Oracle8i Architecture & Administration
Chapter 1

Oracle Overview and Architecture

ORACLE8i ARCHITECTURE AND ADMINISTRATION EXAM OBJECTIVES OFFERED IN THIS CHAPTER:

Describe the Oracle server architecture and its main components
List the structures involved in connecting a user to an Oracle instance
List the stages in processing queries, DML statements, COMMITs
Exam objectives are subject to change at any time without prior notice and at Oracle's sole discretion. Please visit Oracle's Training and Certification Web site (http://education.oracle.com/certification/index.html) for the most current exam objectives listing.
The Oracle8i database is filled with many features that enhance the functionality and improve the performance of the database. It is feature-rich with objects, Java, and many Internet programming techniques. The Architecture & Administration exam of the OCP certification tests your knowledge of the Oracle Server architecture and the most common administration tasks. This chapter begins by discussing the components that constitute the Oracle database and the way that the database functions. Administering an Oracle database requires you to know how these components interact and how to customize them to best suit your requirements.
Oracle8i Server: An Overview
The Oracle Server consists of two major components: the database and the instance. Database is a confusing term that is often used to represent different things on different platforms; the only commonality is that it has something to do with data. In Oracle, the term database represents the physical files that store data. An instance comprises the memory structures and background processes used to access data from the physical database files. Each database should have at least one instance associated with it. It is possible for multiple instances to access a single database; this is known as the Parallel Server configuration.
Database Structure

The Oracle database is used to store and retrieve information; it is a collection of data. The database has logical structures and physical structures. Logical structures represent the components that you can see in the Oracle database (such as tables, indexes, etc.), and physical structures represent the method of storage that Oracle uses internally (the physical files). Oracle maintains the logical structure and physical structure separately, so that the logical structures can be defined identically across different hardware and operating system platforms.
Logical Storage Structures

Oracle logically divides the database into smaller units to manage, store, and retrieve data efficiently. The following paragraphs give you an overview of the logical structures; they are discussed in detail in the coming chapters.

Tablespaces At the highest level, the database is logically divided into smaller units called tablespaces. A tablespace commonly groups related logical structures together. For example, you may group data specific to an application or a function together in one or more tablespaces. This logical division helps you administer one portion of the database without affecting the others. Each database should have one or more tablespaces. When a database is created, Oracle creates the SYSTEM tablespace as a minimum requirement.

Blocks A block is the smallest unit of storage in Oracle and is usually a multiple of the operating system block size. A data block corresponds to a specific number of bytes of storage space. The block size is determined at database creation by the parameter DB_BLOCK_SIZE.

Extents An extent is the next level of logical grouping: a set of contiguous blocks, allocated in one chunk.

Segments A segment is a set of extents allocated for a logical structure such as a table, index, or cluster. Whenever a logical structure is created, a segment is allocated, which contains at least one extent, which in turn has at least one block. A segment can be associated with only one tablespace. Figure 1.1 shows the relationship between tablespaces, segments, extents, and blocks.
FIGURE 1.1 Logical structure (diagram: a database is divided into the SYSTEM tablespace and additional tablespaces; each tablespace contains segments, each segment consists of one or more extents, and each extent is made up of contiguous blocks)
There are four types of segments:

Data segments Used to store table (or cluster) data. Every table created has a data segment allocated.

Index segments Used to store index data. Every index created has an index segment allocated.

Temporary segments Created when Oracle needs a temporary work area, such as for sorting during a query, to complete execution of a SQL statement. These segments are freed when the execution completes.

Rollback segments Used to store undo information. When you roll back changes made to the database, the information in the rollback segments is used to undo the changes.

Segments and other logical structures are discussed in detail in Chapter 5, “Logical and Physical Database Structures.”

A schema is a logical structure that groups the database objects. A schema is not directly related to a tablespace or any other logical storage structure.
The objects that belong to a schema can reside in different tablespaces, and a tablespace can have objects that belong to multiple schemas. Schema objects include structures such as tables, indexes, synonyms, procedures, triggers, database links, and so on.
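The schema-to-segment-to-tablespace mapping described above can be seen in the data dictionary. A minimal sketch, assuming you are connected as a user with access to the DBA views (the owner SCOTT is only an example):

```sql
-- For one schema, list each segment, its type, its tablespace, and
-- the number of extents and blocks it currently occupies.
SELECT segment_name, segment_type, tablespace_name, extents, blocks
FROM   dba_segments
WHERE  owner = 'SCOTT'
ORDER BY segment_name;
```

The output typically shows one data segment per table and one index segment per index, possibly spread across several tablespaces.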
Physical Storage Structures

The physical database structure consists of three types of physical files:
Data files
Control files
Redo log files
The purpose and contents of each type of file are explained in the following paragraphs. Figure 1.2 shows the physical structures and how the database is related to the memory structures and background processes. This figure also shows the relationship between tablespaces and data files.

FIGURE 1.2 The Oracle Server (diagram: an instance, made up of memory structures and background processes, accesses the physical database structure, consisting of data files, control files, and redo log files; the logical database structure divides the database into the SYSTEM tablespace and additional tablespaces, each mapped onto one or more data files)
Data files Data files contain all the database data. Every Oracle database should have one or more data files. Each data file is associated with one and only one tablespace; a tablespace can consist of more than one data file.

Redo log files Redo log files record all changes made to data. Every Oracle database should have two or more redo log files, because Oracle writes to the redo log files in a circular fashion. If a failure prevents a database change from being written to a data file, the change can be obtained from the redo log files, and therefore changes are never lost. Redo logs are critical for database operation and recovery from a failure. Oracle allows you to have multiple copies of the redo log files (preferably on different disks). This is known as multiplexing of redo logs; Oracle treats a redo log and its copies as a group, identified by an integer and known as a redo log group. Redo log files are discussed in detail in Chapter 4, “Control Files and Redo Log Files.”

Control files Every Oracle database has at least one control file, which maintains information about the physical structure of the database. The control file can be multiplexed, so that Oracle maintains multiple copies; it is critical to the database. The control file contains the database name and timestamp of database creation as well as the name and location of every data file and redo log file. Control files are discussed in detail in Chapter 4.
The size of a tablespace is determined by the total size of all the data files associated with the tablespace. The size of the database is the total size of all its tablespaces.
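Both sizes can be derived from the data dictionary; a sketch, assuming DBA privileges:

```sql
-- Size of each tablespace = total size of its data files.
SELECT tablespace_name, SUM(bytes)/1024/1024 AS size_mb
FROM   dba_data_files
GROUP BY tablespace_name;

-- Size of the database = total size of all data files.
SELECT SUM(bytes)/1024/1024 AS db_size_mb
FROM   dba_data_files;
```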
Oracle Memory Structures

The memory structures are used to cache application data, data dictionary information (metadata: information about the objects, logical structures, schemas, privileges, and so on, discussed in Chapter 3, “Creating a Database and Data Dictionary”), Structured Query Language (SQL) commands, PL/SQL and Java program units, transaction information, data required for execution of individual database requests, and other control information. Memory structures are allocated to the Oracle instance when
the instance is started. The two major memory structures are known as the System Global Area (also called the Shared Global Area) and the Program Global Area (also called the Private Global Area or the Process Global Area). Figure 1.3 illustrates the various memory structures in Oracle.

FIGURE 1.3 Oracle memory structures (diagram: the SGA, in shared memory, contains the shared pool with its library cache and data dictionary cache, the database buffer cache with its KEEP, RECYCLE, and DEFAULT pools, the redo log buffer, the optional large pool and Java pool, and control structures such as locks; the PGA, in non-shared memory, holds stack space, session information, and the sort area; the software code area is separate)
System Global Area

The System Global Area (SGA) is a shared memory area. All users of the database share the information maintained in this area. The SGA and the background processes constitute an Oracle instance. Oracle allocates memory for the SGA when an Oracle instance is started and de-allocates it when the instance is shut down. The information stored in the SGA is divided into multiple memory structures that are allocated fixed space when the instance is started. The following are the components of the SGA.
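The current SGA allocation can be inspected from SQL*Plus; for example:

```sql
-- Summary of the SGA sizes (SQL*Plus command).
SHOW SGA

-- The same totals from the dynamic performance views.
SELECT * FROM v$sga;

-- A finer-grained breakdown of the individual SGA structures.
SELECT name, bytes FROM v$sgastat;
```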
Database Buffer Cache

The database buffer cache is the area of memory that caches the database data, holding blocks from the data files that have been read recently. The buffer cache is shared among all the users connected to the database. There are three types of buffers:

Dirty buffers Buffer blocks that need to be written to the data files. The data in these buffers has changed and has not yet been written to disk.

Free buffers Buffers that do not contain any data or are free to be overwritten. When Oracle reads data from disk, free buffers are used to hold this data.

Pinned buffers Buffers that are currently being accessed or are explicitly retained for future use (for example, by the KEEP buffer pool).

Oracle maintains two lists to manage the buffer cache. The write list (dirty buffer list) holds the modified buffers that need to be written to disk (the dirty buffers). The least recently used (LRU) list contains free buffers, pinned buffers, and the dirty buffers that have not yet been moved to the write list. Consider the LRU list as a queue of blocks: the most recently accessed blocks are always at the front (known as the most recently used, or MRU, end of the list), and the least recently accessed blocks are at the other end (the LRU end). The least-used blocks are pushed out of the list as newly accessed blocks are added. When an Oracle process accesses a buffer, it moves the buffer to the MRU end of the list, so the most frequently accessed data remains available in the buffers. When new data buffers are added to the LRU list, they are placed at the MRU end, pushing out the buffers at the LRU end. An exception occurs for full table scans: blocks read by a full table scan are placed at the LRU end of the list. When an Oracle process requests data, it searches for the data in the buffer cache, and if it finds the data, the result is a cache hit.
If it cannot find the data, the result is a cache miss, and data then needs to be copied from disk to the buffer. Before reading a data block into the cache, the process must first find a free buffer. The server process on behalf of the user process searches either until it finds a free buffer or until it has searched the threshold limit of buffers. If the server process finds a dirty buffer as it searches the LRU list, it moves that buffer
to the write list and continues to search. When the process finds a free buffer, it reads the data block from the disk into the buffer and moves the buffer to the MRU end of the LRU list. If an Oracle server process searches the threshold limit of buffers without finding a free buffer, the process stops searching and signals the DBWn background process to write some of the dirty buffers to disk. The DBWn process and other background processes are discussed in the next section. Oracle8i lets you divide the buffer pool into three areas (using the multiple buffer pool feature): The KEEP buffer pool retains the data blocks in memory; they are not aged out. The RECYCLE buffer pool removes the buffers from memory as soon as they are not needed. The DEFAULT buffer pool contains the blocks that are not assigned to the other pools.
The parameter DB_BLOCK_SIZE multiplied by the parameter DB_BLOCK_BUFFERS determines the size of the buffer cache. BUFFER_POOL_RECYCLE and BUFFER_POOL_KEEP determine the sizes to be allocated to the RECYCLE and KEEP pools, respectively, from the buffer cache. When creating or altering tables and indexes, you can specify the BUFFER_POOL in the STORAGE clause.
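As an illustration, these parameters might appear in init.ora as follows, with a table assigned to the KEEP pool at creation time (all names and sizes here are example values only):

```sql
-- init.ora fragment (example values):
--   DB_BLOCK_SIZE = 8192        block size, fixed at database creation
--   DB_BLOCK_BUFFERS = 8000     buffer cache = 8000 blocks of 8KB each
--   BUFFER_POOL_KEEP = 1000     buffers reserved for the KEEP pool
--   BUFFER_POOL_RECYCLE = 500   buffers reserved for the RECYCLE pool

-- A small, frequently read lookup table kept in the KEEP pool
-- (hypothetical table, used only for illustration):
CREATE TABLE lookup_codes (
  code  NUMBER,
  descr VARCHAR2(40)
) STORAGE (BUFFER_POOL KEEP);
```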
Redo Log Buffer

The redo log buffer is a circular buffer in the SGA that holds information on the changes made to the database data. The changes are known as redo entries or change vectors and are used to redo the changes in case of a failure. Changes are made to the database through INSERT, UPDATE, DELETE, CREATE, ALTER, or DROP commands.
The parameter LOG_BUFFER determines the size of the redo log buffer.
Shared Pool

The shared pool portion of the SGA holds information such as SQL, PL/SQL procedures and packages, the data dictionary, locks, character set information, security attributes, and so on. The shared pool consists of the library cache and the dictionary cache.
Library Cache

The library cache contains the shared SQL areas, private SQL areas, PL/SQL procedures and packages, and control structures such as locks and library cache handles. The shared SQL area is used for maintaining recently executed SQL commands and their execution plans. Oracle divides each SQL statement that it executes into a shared SQL area and a private SQL area. When two users execute the same SQL, the information in the shared SQL area is used for both. The shared SQL area contains the parse tree and execution plan, whereas the private SQL area contains values for the bind variables (persistent area) and runtime buffers (runtime area). Oracle creates the runtime area as the first step of an execute request. For INSERT, UPDATE, and DELETE statements, Oracle frees the runtime area after the statement has been executed. For queries, Oracle frees the runtime area only after all rows have been fetched or the query has been canceled.

Oracle processes PL/SQL program units the same way it processes SQL statements. When a PL/SQL program unit is executed, the code is moved to the shared PL/SQL area while the individual SQL commands within the program unit are moved to the shared SQL area. Again, the shared program units are maintained in memory with an LRU algorithm. Should the same program unit be required by another process, disk I/O and compilation can be omitted, and the code that resides in memory will be executed.

The third area of the library cache is maintained for internal use by the instance. Various locks, latches, and other control structures reside here and are freely accessed by any server processes requiring this information.

Data Dictionary Cache

The data dictionary is a collection of database tables and views containing metadata about the database, its structures, its privileges, and its users. Oracle accesses the data dictionary frequently during the parsing of SQL statements.
The data dictionary cache holds the most recently used database dictionary information. The data dictionary cache is also known as the row cache because it holds data as rows instead of buffers (which hold entire blocks of data).
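Row cache activity is exposed through a dynamic performance view; a sketch, assuming DBA access:

```sql
-- GETMISSES close to zero relative to GETS indicates that dictionary
-- information is being served from the row cache rather than from disk.
SELECT parameter, gets, getmisses
FROM   v$rowcache
ORDER BY gets DESC;
```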
The parameter SHARED_POOL_SIZE determines the size of the shared pool.
Large Pool

The large pool is an optional area in the SGA that can be configured by the database administrator to provide large memory allocations for specific database operations such as an Oracle backup or restore. The large pool allows Oracle to request large memory allocations from a separate pool to prevent contention with other applications for the same memory. The large pool does not have an LRU list.
The parameter LARGE_POOL_SIZE specifies the size of the large pool.
Program Global Area

The Program Global Area (PGA) is the area in memory that contains the data and process information for one process; this area is non-shared memory. The contents of the PGA vary depending on the server configuration. For a dedicated server configuration (one dedicated server process for each connection to the database; dedicated server and multithreaded server configurations are discussed later in this chapter), the PGA holds stack space and session information. For multithreaded server configurations (user connections go through a dispatcher, so fewer server processes are required because they can be shared by multiple user processes), the PGA holds only the stack space (the session information is kept in the SGA). Stack space is the memory allocated to hold variables, arrays, and other information that belongs to the session. A PGA is allocated for each server process and de-allocated when the process completes. Unlike the SGA, which is shared by several processes, the PGA provides sort space, session information, stack space, and cursor information for a single server process.
The PGA size is fixed and is dependent on the operating system.
Sort Area

The memory area that Oracle uses to sort data is known as the sort area, which uses memory from the PGA for a dedicated server connection. For multithreaded server (MTS) configurations, the sort area is allocated from the SGA. MTS and dedicated server configurations are discussed later in this chapter. The sort area can grow depending on the need; the maximum size is set by the SORT_AREA_SIZE parameter. The parameter SORT_AREA_RETAINED_SIZE determines the size to which the sort area is reduced after the sort operation. The memory released from the sort area is kept with the server process; it is not released to the operating system. If the data to be sorted does not fit into the memory area defined by SORT_AREA_SIZE, Oracle divides the data into smaller pieces that do fit, and these are sorted individually. These individual sorts are called runs, and the sorted data is held in the user's temporary tablespace using temporary segments. When all the individual sorts are complete, the runs are merged to produce the final result. Oracle sorts the result set if the query contains DISTINCT, ORDER BY, GROUP BY, or any of the set operators (UNION, INTERSECT, MINUS).
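The two sort parameters might appear in init.ora like this (the sizes are examples only):

```
# init.ora fragment: sort area sizing (example values)
SORT_AREA_SIZE = 1048576          # up to 1MB of sort space per process
SORT_AREA_RETAINED_SIZE = 65536   # shrink to 64KB after the sort completes
```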
Software Code Area

Software code areas are the portions of memory used to store the code that is being executed. Software code areas are mostly static in size and are dependent on the operating system. These areas are read-only and can be shared (if the operating system allows), so multiple copies of the same code are not kept in memory. Some Oracle tools and utilities (such as SQL*Forms and SQL*Plus) can be installed shared, but some cannot. Multiple instances of Oracle can use the same Oracle code area with different databases if running on the same computer.
Oracle Background Processes

A process is a mechanism used in the operating system to execute a series of tasks. Oracle starts multiple processes in the background when the instance is started. Each background process is responsible for specific tasks. The following sections describe each process and its purpose. It is not necessary for all the background processes to be present in every instance. Figure 1.4 shows the Oracle background processes.
FIGURE 1.4 Oracle background processes (diagram: the SGA, containing the database buffer cache and the redo log buffer, is served by the background processes DBWn, LGWR, CKPT, SMON, PMON, RECO, LCKn, and ARCn; DBWn writes to the data files and LGWR writes to the redo log files, with ARCn copying filled logs to the archive storage device; user processes connect either through dedicated server processes or through a dispatcher process with request and response queues to shared server processes)
A user (client) process is initiated from the tool that is trying to use the Oracle database. A server process accepts a request from the user process and interacts with the Oracle database. On dedicated server systems, there will be one server process for each client connection to the database.
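The background processes actually running in an instance can be listed with a commonly used query against V$BGPROCESS; a sketch, assuming DBA access:

```sql
-- Background processes that have been started in this instance
-- (a nonzero process address means the process is running).
SELECT name, description
FROM   v$bgprocess
WHERE  paddr <> '00'
ORDER BY name;
```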
Database Writer (DBWn)
The purpose of the database writer process (DBWn) is to write the contents of the dirty buffers to the data files. By default, Oracle starts one database writer process (DBW0) when the instance starts; for multi-user and
busy systems, you can start nine more database writer processes (DBW1 through DBW9) to improve performance. The parameter DB_WRITER_PROCESSES determines the number of database writer processes to start. The DBWn process writes the modified buffer blocks to disk, so that more free buffers are available in the buffer cache. Writes are always performed in bulk to reduce disk contention; the number of blocks written in each I/O is operating system dependent. The DBWn process initiates writing to data files under two circumstances:
When the server process cannot find a clean buffer after searching the set threshold of buffers, it initiates the DBWn process to write dirty buffers to the disk, so that some buffers are freed.
When a checkpoint occurs, DBWn periodically writes buffers to disk.
Writes to the data file(s) are independent of the corresponding COMMIT performed in the SQL code.
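Additional writer processes are requested in init.ora; for example (the value is only an illustration):

```
# init.ora fragment: run three database writers, DBW0 through DBW2
DB_WRITER_PROCESSES = 3
```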
Log Writer (LGWR)

The log writer process (LGWR) writes the blocks in the redo log buffer in the SGA to the online redo log files. The redo log buffer is a circular buffer: as LGWR writes log buffers to the disk, Oracle server processes can write new entries in the redo log buffer. LGWR writes the entries to the disk fast enough to ensure that there is room available for the server processes to write log information. The log writer process writes the buffers to the disk under the following circumstances:
When a user transaction issues a COMMIT
When the redo log buffer is one-third full
When the DBWn process writes dirty buffers to disk
Every three seconds
LGWR writes simultaneously to the multiplexed online redo log files. Even if one of the log files in the group is damaged, LGWR continues writing to the available file. LGWR writes to the redo logs sequentially so that transactions can be applied in order in the event of a failure.
Because committed transactions are written to the redo log files, changes to the database are never lost (that is, they can be recovered if a failure occurs).
Checkpoint (CKPT)

Checkpoints help to reduce the time required for instance recovery. A checkpoint is an event that flushes the modified data from the buffer cache to the disk and updates the control file and data files. The checkpoint process (CKPT) updates the headers of the data files and control files; the actual blocks are written to the files by the DBWn process. If checkpoints occur too frequently, disk contention becomes a problem with the data file updates. If checkpoints occur too infrequently, the time required to recover a failed database can be significantly higher. Checkpoints occur automatically when an online redo log file fills (a log switch). A log switch occurs when Oracle finishes writing one file and starts the next file.
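Both events can also be triggered manually, which is occasionally useful before a backup; for example:

```sql
-- Force a log switch; this in turn triggers a checkpoint.
ALTER SYSTEM SWITCH LOGFILE;

-- Force a checkpoint directly, without switching log files.
ALTER SYSTEM CHECKPOINT;
```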
System Monitor (SMON)

The system monitor process (SMON) performs instance or crash recovery at database start-up by using the online redo log files. SMON is also responsible for cleaning up temporary segments in the tablespaces that are no longer used and for coalescing contiguous free space in the tablespaces. If any dead transactions were skipped during crash and instance recovery because of file-read or offline errors, SMON recovers them when the tablespace or file is brought back online. SMON wakes up regularly to check whether it is needed; other processes can also call SMON if they detect a need for it to wake up.
Chapter 1
Oracle Overview and Architecture
SMON coalesces the contiguous free space in a tablespace only if its default PCTINCREASE value is set to a nonzero value.
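You can also coalesce free space manually rather than waiting for SMON. A sketch, assuming a tablespace named USERS exists:

```sql
-- Coalesce adjacent free extents in the USERS tablespace manually
ALTER TABLESPACE users COALESCE;

-- Check the result: fewer, larger free extents should appear
SELECT tablespace_name, COUNT(*) AS free_extents, SUM(bytes) AS free_bytes
FROM   dba_free_space
WHERE  tablespace_name = 'USERS'
GROUP  BY tablespace_name;
```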
Process Monitor (PMON)
The process monitor process (PMON) cleans up failed user processes and frees up all the resources used by the failed process. It resets the status of the active transaction table and removes the process ID from the list of active processes. It reclaims all resources held by the user and releases all locks on tables and rows held by the user. PMON wakes up periodically to check whether it is needed.
DBWn, LGWR, CKPT, SMON, and PMON processes are the default processes associated with all instances.
Archiver (ARCn)
When the Oracle database is running in ARCHIVELOG mode, the online redo log files are copied to another location before they are overwritten. These archived log files can be used for recovery of the database. When the database is in ARCHIVELOG mode, recovery of the database can be done up to the point of failure. The archiver process (ARCn) performs the archiving function. Oracle8i can have up to 10 ARCn processes (ARC0 through ARC9). The LGWR process starts new ARCn processes whenever the current number of ARCn processes is insufficient to handle the workload. The ARCn process is enabled only if the database is in ARCHIVELOG mode and automatic archiving is enabled (parameter LOG_ARCHIVE_START = TRUE).
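Enabling archiving involves both initialization parameters and a database-level setting. The following is a hedged sketch; the destination directory and file format are examples only:

```sql
-- init.ora (illustrative values):
--   LOG_ARCHIVE_START  = TRUE                        -- start ARCn automatically
--   LOG_ARCHIVE_DEST   = /u01/oradata/orcl/arch      -- where archived logs go
--   LOG_ARCHIVE_FORMAT = arch_%s.arc                 -- file-name template

-- Then place the database in ARCHIVELOG mode:
STARTUP MOUNT;
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;

-- Verify the archiving status:
ARCHIVE LOG LIST;
```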
Recoverer (RECO)
The recoverer process (RECO) is used with distributed transactions to resolve failures. The RECO process is present only if the instance permits distributed transactions and if the DISTRIBUTED_TRANSACTIONS parameter is set to a nonzero value. If this initialization parameter is zero, RECO is not created during instance start-up. This process attempts to access databases involved in in-doubt transactions and resolves the transactions. A transaction is in doubt when you change data in multiple databases and a failure occurs before saving the changes. The failure can be the result of a server crash or a network problem.
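The relevant initialization parameter, and the dictionary view where in-doubt transactions can be inspected, are sketched below (the parameter value is illustrative):

```sql
-- init.ora: a nonzero setting creates RECO at instance start-up
--   DISTRIBUTED_TRANSACTIONS = 10

-- In-doubt transactions awaiting resolution:
SELECT local_tran_id, state
FROM   dba_2pc_pending;
```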
Lock (LCKn)
LCKn processes (LCK0 through LCK9) are used in the Parallel Server environment, for inter-instance locking. The Parallel Server option lets you mount the same database for multiple instances.
Job Queue (SNPn)
When using distributed transaction processing, up to 36 snapshot refresh and job-queue processes (SNP0 through SNP9, and SNPA through SNPZ) can automatically refresh table snapshots. They wake up at regular intervals. SNPn processes also execute the job requests created by using the DBMS_JOB package. The JOB_QUEUE_PROCESSES parameter specifies the number of job queue processes per instance.
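Submitting a job with DBMS_JOB shows what the SNPn processes execute. A sketch, assuming a stored procedure named refresh_sales_snap exists (a hypothetical name) and illustrative parameter values:

```sql
-- init.ora: start four job queue processes, waking every 60 seconds
--   JOB_QUEUE_PROCESSES = 4
--   JOB_QUEUE_INTERVAL  = 60

VARIABLE jobno NUMBER
BEGIN
  -- Run refresh_sales_snap now, then every hour thereafter
  DBMS_JOB.SUBMIT(:jobno, 'refresh_sales_snap;', SYSDATE, 'SYSDATE + 1/24');
  COMMIT;
END;
/
PRINT jobno
```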
Queue Monitor (QMNn)
The queue monitor process is used for Oracle Advanced Queuing, which monitors the message queues. You can configure up to 10 queue monitor processes (QMN0 through QMN9). Oracle Advanced Queuing
provides an infrastructure for distributed applications to communicate asynchronously using messages. Oracle Advanced Queuing stores messages in queues for deferred retrieval and processing by the Oracle server. The parameter AQ_TM_PROCESSES specifies the number of queue monitor processes.
Failure of an SNP process or a QMN process does not cause the instance to crash; Oracle restarts the failed process. If any other background process fails, the Oracle instance fails.
Dispatcher (Dnnn)
Dispatcher processes are part of the multithreaded server (MTS) architecture. They minimize the resource needs by handling multiple connections to the database using a limited number of server processes. You can create multiple dispatcher processes for a single database instance; at least one dispatcher must be created for each network protocol used with Oracle.
Shared Server (Snnn)
Shared server processes provide the same functionality as the dedicated server processes, except that shared server processes are not associated with a specific user process. Shared server processes are created to manage connections to the database in an MTS configuration. The number of shared server processes that can be created ranges between the values of the parameters MTS_SERVERS and MTS_MAX_SERVERS.
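A minimal MTS setup in init.ora might look like the following sketch (values illustrative only):

```
# One pool of dispatchers for the TCP protocol
MTS_DISPATCHERS = "(PROTOCOL=TCP)(DISPATCHERS=3)"

# Oracle starts 5 shared servers at instance start-up
# and may grow the pool up to 20 under load
MTS_SERVERS     = 5
MTS_MAX_SERVERS = 20
```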
Connecting to an Oracle Instance
Before discussing the actual mechanism for connecting to an Oracle database, let us review the terms user process and server process. An application program (such as Pro*COBOL) or an Oracle tool (such as SQL*Plus) starts the user process when you run the application tool. The user
process may be on the same machine where the instance/database resides or it may be initiated from a client machine in a client/server architecture. A server process gets requests from the user process and interacts with the Oracle instance to carry out the requests. On some platforms, it is possible to combine the user process and server process (single task) to reduce system overhead if the user process and server process are on the same machine. The server process is responsible for the following:
Parses and executes SQL statements issued via the application or tool
Reads the data files and brings the necessary data blocks into the shared buffer cache, if the requested blocks are not already in the SGA
Returns the results to the user process in a form it can understand
In client/server architecture, Net8 is commonly used to communicate between the client and the server. The client process (user process) attempts to establish a connection to the database using the appropriate Net8 driver, and then Net8 communicates with the server and assigns a server process to fulfill the request on behalf of the user process. Net8 has a listener on the server that constantly waits for connection requests from client machines (client and server can be on the same machine).
Dedicated Server Configuration
In a dedicated server configuration, one server process is created for each connection request. Oracle assigns a dedicated server process to take care of the requests from the user process. The server process is terminated when the user process disconnects from the database. Even if the user process is not making any requests, the server process will be idle, waiting for a request. Refer to Figure 1.4 for how a user process interacts with Oracle by using a dedicated server process. The following steps detail how a dedicated server process takes the request from a user process and delivers the results (the background processes are not discussed in these steps):
1. The client application or tool initiates the user process to connect to the instance.
2. The client machine communicates the request to the server machine by using Net8 drivers. The Net8 listener on the server detects the request
and starts a dedicated server process on behalf of the user process after verifying the username and password.
3. The user issues a SQL command.
4. The dedicated server process determines whether a similar SQL statement is in the shared SQL area. If not, it allocates a new shared SQL area for the command and stores the parse tree and execution plan. During parsing, the server process checks for syntactic correctness of the statement, checks whether the object names are valid, and checks privileges. The required information is obtained from the data dictionary cache. A PGA is created to store the private information of the process.
5. The server process looks for data blocks that need to be changed or accessed in the buffer cache. If they are not there, it reads the data files and brings the necessary blocks into the SGA.
6. The server process executes the SQL statement. If data blocks need to be changed, they are changed in the buffer cache (the DBWn process updates the data file). The change is logged in the redo log buffer.
7. The status of the request or the result is returned to the user process.
Multithreaded Configuration
If many users connect to a database in the dedicated server configuration, there will be many server processes. For Online Transaction Processing (OLTP) applications, these server processes are idle most of the time. You can configure Oracle so that one server process manages multiple user processes; this is known as the multithreaded configuration. In MTS, a fixed number of server processes are started by Oracle when the instance starts. These processes work in a round-robin fashion to serve requests from the user processes. The user processes connect to a dispatcher background process, which routes client requests to the next available shared server process. One dispatcher process can handle only one communication protocol; hence, there should be at least one dispatcher process for every protocol used. MTS requires all connections to use Net8. So, for establishing a connection to the instance using an MTS configuration, three processes are involved: a Net8 listener process, a dispatcher process, and a shared server process.
When a user makes a request, the dispatcher places the request on the request queue; an available shared server process picks up the request from this queue. When the shared server process completes the request, it places the response on the calling dispatcher’s response queue. Each dispatcher has its own response queue in the SGA. The dispatcher then returns the completed request to the appropriate user process. Figure 1.4 shows the connection using an MTS configuration and the associated processes. The following steps detail how a shared server process takes the request from a user process and delivers the results:
1. When the instance is started, one or more shared server processes and dispatcher processes are started. The Net8 listener is running on the server. The request and response queues are created in the SGA.
2. The client application or tool initiates the user process to connect to the instance.
3. The client machine communicates the request to the server machine by using Net8 drivers. The Net8 listener on the server detects the request and identifies the protocol that the user process is using. It connects the user process to one of the dispatchers for this protocol (if no dispatcher is available for the requested protocol, a dedicated server process is started by the listener).
4. The user issues a SQL command.
5. The dispatcher decodes the request and puts it into the request queue (at the tail) along with the dispatcher ID.
6. The request moves up the queue as the server processes serve the previous requests. The next available shared server process picks up the request.
7. The shared server process determines whether a similar SQL statement is in the shared SQL area. If not, it allocates a new shared SQL area for the command and stores the parse tree and execution plan. During parsing, the server process checks for syntactic correctness of the statement, validity of the object names, and privileges. The required information is obtained from the data dictionary cache. A PGA is created to store the private information of the process.
8. The server process looks for data blocks that need to be changed or accessed in the buffer cache. If they are not there, it reads the data files and brings the necessary blocks into the SGA.
9. The server process executes the SQL statement. If data blocks need to be changed, they are changed in the buffer cache (the DBWn process updates the data file). The change is logged in the redo log buffer.
10. The status of the request or the result is returned to the response queue for the dispatcher.
11. The dispatcher periodically checks its response queue. When it finds a response, it sends it to the requesting user process.
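The dispatcher and shared server activity described in these steps can be observed through the dynamic performance views; a sketch:

```sql
-- Dispatcher activity: time busy versus idle
SELECT name, status, busy, idle FROM v$dispatcher;

-- Shared server activity and request counts
SELECT name, status, requests FROM v$shared_server;

-- Queue waits: COMMON is the request queue,
-- DISPATCHER rows are the per-dispatcher response queues
SELECT type, queued, wait, totalq FROM v$queue;
```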
Stages in Processing SQL
Structured Query Language (SQL) is used to manipulate and retrieve data in an Oracle database. Data Manipulation Language (DML) statements query or manipulate data in existing database objects. SELECT, INSERT, UPDATE, DELETE, EXPLAIN PLAN, and LOCK TABLE are DML statements; they are the most commonly used statements in the database. In this section, we’ll explain how Oracle processes queries and other DML statements. We’ll also discuss what happens when a user makes their changes to the database permanent by issuing a COMMIT.
Parse, Execute, and Fetch Stages
SQL statements are processed in two or three steps. Each SQL statement passed to the server process from the user process goes through parse and execute phases. In the case of queries (SELECT statements), an additional fetch phase retrieves the rows.
Parse
Parsing is one of the first stages in processing any SQL statement. When an application or tool issues a SQL statement, it makes a parse call to Oracle, which does the following:
Checks the statement for syntax correctness and validates the table names and column names against the dictionary
Determines whether the user has privileges to execute the statement
Determines the optimal execution plan for the statement
Finds a shared SQL area for the statement
If there is an existing SQL area with the parsed representation of the statement in the library cache, Oracle uses this parsed representation and executes the statement immediately. If not, Oracle generates the parsed representation of the statement, allocates a shared SQL area for the statement in the library cache, and stores its parsed representation there. The parse operation by Oracle allocates a shared SQL area for the statement, which allows the statement to be executed any number of times without parsing repeatedly.
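Shared SQL area reuse is visible in V$SQLAREA: repeated executions of an identical statement increment the execution count without requiring a fresh hard parse. A sketch:

```sql
-- Statements currently cached in the library cache, with reuse counts
SELECT sql_text, parse_calls, executions
FROM   v$sqlarea
WHERE  executions > 1
ORDER  BY executions DESC;
```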
Execute
Oracle executes the parsed statement in the execute stage. For UPDATE and DELETE statements, Oracle locks the rows that are affected, so that no other process can change the same rows until the transaction is completed. Oracle also looks for the needed data blocks in the data buffer cache. If it finds them, the execution will be faster; if not, Oracle has to read the data blocks from the physical data file into the buffer cache. A SELECT locks no rows because it changes no data; an INSERT creates new rows, so it does not need to lock existing rows.
Fetch
The fetch operation follows the execution of a SQL SELECT command. After the execution completes, the rows identified during the execution stage are returned to the user process. The rows are ordered (sorted) if requested by the query. The results are always in a tabular format; rows may be fetched (retrieved) one row at a time or in groups (array processing).
Processing SELECT statements
Queries, or SELECT statements, are the most often used commands on any Oracle database. Figure 1.5 shows the steps required in processing a query.
FIGURE 1.5 Query processing stages
[Flowchart: Open cursor → Found SQL in library cache? (No: Parse) → Define → Bind variable? (Yes: Bind) → Parallelize → Execute → Fetch → Close cursor]
1. Create a cursor. The cursor may be an explicit cursor or an implicit cursor.
2. Parse the statement.
3. Define output: specify location, type, and data type of the result set, and perform data type conversion if necessary. This step is required only when the results are fetched to variables.
4. Bind variables; if the query is using any variables, Oracle should know the value for the variables.
5. See whether the query can be run in parallel, that is, whether multiple server processes can be working to complete the query.
6. Execute the query.
7. Fetch the rows.
8. Close the cursor.
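The eight steps above map directly onto an explicit cursor in PL/SQL. A sketch, assuming the sample SCOTT.EMP table is available:

```sql
SET SERVEROUTPUT ON

DECLARE
  CURSOR emp_cur IS                       -- step 1: create the cursor
    SELECT ename, sal FROM emp WHERE deptno = 10;
  v_ename emp.ename%TYPE;                 -- step 3: define output variables
  v_sal   emp.sal%TYPE;
BEGIN
  OPEN emp_cur;                           -- steps 2, 4-6: parse, bind, execute
  LOOP
    FETCH emp_cur INTO v_ename, v_sal;    -- step 7: fetch the rows
    EXIT WHEN emp_cur%NOTFOUND;
    DBMS_OUTPUT.PUT_LINE(v_ename || ': ' || v_sal);
  END LOOP;
  CLOSE emp_cur;                          -- step 8: close the cursor
END;
/
```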
Processing DML statements
The DML statements INSERT, UPDATE, and DELETE are processed with fewer steps. Figure 1.6 shows the stages.
FIGURE 1.6 DML processing stages
[Flowchart: DML statement → Found SQL in library cache? (No: Parse) → Bind variable? (Yes: Bind) → Parallelize → Execute → Close cursor]
1. Create a cursor; Oracle creates an implicit cursor.
2. Parse the statement.
3. Bind variables; if the statement is using any variables, Oracle should know the value for the variables.
4. See whether the statement can be run in parallel (multiple server processes working to complete the work).
5. Execute the statement.
6. Inform the user that the statement execution is complete.
7. Close the cursor.
Processing a COMMIT
You have seen how the server process handles queries and other DML statements. Before discussing the steps in processing a COMMIT, let’s review an important mechanism Oracle uses for recovery: the system change number (SCN). When a transaction commits, Oracle assigns it a unique number that defines the database state at a precise moment in time, acting as an internal timestamp. The SCN is a serial number, unique and always increasing. Recovery of the database is always performed based on the SCN.
The SCN is also used to provide a read-consistent view of the data. When a query reaches the execution stage, the current SCN is determined; only blocks with an SCN less than or equal to this SCN are read. For changed blocks (with a higher SCN), the data is read from the rollback segments.
The SCN is recorded in the control file, data file headers, block headers, and redo log files. Each redo log file has a low SCN (the lowest change number stored in the file) and a high SCN (the highest change number, assigned when the file is closed before the next redo log file is opened). The SCN in each data file header is updated whenever a checkpoint completes. The control file records the SCN for each data file that is taken offline.
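The SCN values mentioned above are visible in several dynamic performance views; a sketch:

```sql
-- SCN recorded at the most recent checkpoint
SELECT checkpoint_change# FROM v$database;

-- Low and high SCN ranges covered by past redo log files
SELECT sequence#, first_change#, next_change#
FROM   v$log_history;
```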
Steps in a COMMIT
Oracle commits a transaction when you:
Issue a COMMIT command
Execute a DDL statement
Disconnect from Oracle
The following are the steps for processing a COMMIT, that is, making the changes to the database permanent:
1. The server process generates an SCN and records it in the rollback segment; it then marks in the rollback segment that the transaction is committed.
2. The LGWR process writes the redo log buffer entries to the online redo log files, along with the SCN.
3. The server process releases locks held on rows and tables.
4. The user is notified that the COMMIT is complete.
5. The server process marks the transaction as complete.
Oracle defers writes to the data files to reduce disk I/O. The DBWn process writes the changed blocks to the data files independent of any COMMIT. By writing the change vectors and the SCN to the redo log files, Oracle ensures that committed changes are never lost. This is known as the fast commit: writes to the redo log files are faster than writing the changed blocks to the data files.
Steps in a ROLLBACK
Oracle rolls back a transaction when:
You issue a ROLLBACK command.
The server process terminates abnormally.
The session is killed by a DBA.
The following steps are used in processing a ROLLBACK, that is, undoing the changes made to the database:
1. The server process undoes all changes made in the transaction by using the rollback segment entries.
2. The server process releases all locks held on tables and rows.
3. The server process marks the transaction as complete.
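A transaction can also be rolled back partially, to a savepoint. A sketch, assuming the sample SCOTT.EMP table:

```sql
UPDATE emp SET sal = sal * 1.1 WHERE deptno = 10;

SAVEPOINT before_cleanup;

DELETE FROM emp WHERE deptno = 40;

-- Undoes only the DELETE; locks taken by it are released
ROLLBACK TO SAVEPOINT before_cleanup;

-- Undoes the remaining UPDATE and ends the transaction
ROLLBACK;
```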
Summary
This chapter introduced you to the Oracle architecture and configuration. The Oracle server consists of a database and an instance. The database consists of the structures that store the actual data. The instance consists of the memory structures and background processes.
The database has logical structures and physical structures. Oracle maintains the logical structures and physical structures separately, so they can be managed independently of each other. The database is divided logically into multiple tablespaces. Each tablespace can have multiple segments. Each database table or index is allocated a segment. A segment consists of one or many extents. An extent is a contiguous allocation of blocks. A block is the smallest unit of storage in Oracle.
The physical database structures include data files, control files, and redo log files. The data files contain all the database data. The control file keeps information about the data files and redo log files in the database, as well as the database name, creation timestamp, and so on. The redo log files keep track of the database changes and are used to recover the database in case of instance or media failure.
The memory structures of an Oracle instance include the System Global Area (SGA) and Program Global Area (PGA). The SGA is shared among all the database users; the PGA is not. The SGA consists of the database buffer cache, shared pool, and redo log buffers. The database buffers cache the recently used database blocks in memory. The dirty buffers are the blocks that have changed and need to be written to the disk. The DBWn process writes these blocks to the data files. The redo log buffers record all changes to the database; these buffers are written to the redo log files by the LGWR process. The shared pool consists of the library cache, dictionary cache, and other control structures. The library cache contains parsed SQL and PL/SQL code.
The dictionary cache holds the most recently used dictionary information. The application tool (such as SQL*Plus or Pro*C) communicates with the database by using a server process. Oracle can have dedicated server processes, whereby one server process takes requests from one user process. In a multithreaded configuration, the server processes are shared. Parse, execute, and fetch are the major steps used in processing queries. For other DML statements, the stages are parse and execute. The parse step
compiles the statement in the shared pool, checks the user’s privileges, and arrives at an execution plan. In the execute step, the parsed statement is executed. During the fetch step, data is returned to the user.
Key Terms
Before you take the exam, make sure you’re familiar with the following terms:
database
instance
logical structures
physical structures
tablespace
block
extent
segment
schema
data file
redo log file
control file
System Global Area or Shared Global Area (SGA)
Program Global Area, Private Global Area, or Process Global Area (PGA)
database buffer cache
dirty buffers
free buffers
pinned buffers
redo log buffer
shared pool
library cache
data dictionary
data dictionary cache
row cache
large pool
sort area
runs
server process
user process
database writer process (DBWn)
log writer process (LGWR)
checkpoint process (CKPT)
system monitor process (SMON)
process monitor process (PMON)
ARCHIVELOG
archiver process (ARCn)
multithreaded configuration
parsing
execute
fetch
system change number (SCN)
Review Questions
1. Which component is not part of the Oracle instance?
A. System Global Area
B. Process monitor
C. Control file
D. Shared pool
2. Which background process and associated database component guarantees that committed data is saved even when the changes have not been recorded in the data files?
A. DBWn and database buffer cache
B. LGWR and online redo log file
C. CKPT and control file
D. DBWn and archived redo log file
3. What is the maximum number of database writer processes allowed in an Oracle instance?
A. 1
B. 10
C. 256
D. Limit specified by an operating system parameter
4. Which background process is not started by default when you start up the Oracle instance?
A. DBWn
B. LGWR
C. CKPT
D. ARCn
5. Which of the following best describes a parallel server configuration?
A. One database, multiple instances
B. One instance, multiple databases
C. Multiple databases on multiple servers
D. Shared server process takes care of multiple user processes
6. Choose the right hierarchy, from largest to smallest, from this list of logical database structures.
A. Database, tablespace, extent, segment, block
B. Database, tablespace, segment, extent, block
C. Database, segment, tablespace, extent, block
D. Database, extent, tablespace, segment, block
7. Which component of the SGA contains the parsed SQL code?
A. Buffer cache
B. Dictionary cache
C. Library cache
D. Parse cache
8. Which stage is not part of processing a DML statement, but is part of processing a query?
A. Parse
B. Fetch
C. Execute
D. Feedback
9. Which background process is responsible for writing the dirty buffers to the database files?
A. DBWn
B. SMON
C. LGWR
D. CKPT
E. PMON
10. Which component in the SGA has the dictionary cache?
A. Buffer cache
B. Library cache
C. Shared pool
D. Program Global Area
E. Large pool
11. When a server process is terminated abnormally, which background process is responsible for releasing the locks held by the user?
A. DBWn
B. LGWR
C. SMON
D. PMON
12. What is a dirty buffer?
A. Data buffer that is being accessed
B. Data buffer that is changed but is not written to the disk
C. Data buffer that is free
D. Data buffer that is changed and written to the disk
13. If you are updating one row in a table using the ROWID in the WHERE clause (assume that the row is not already in the buffer cache), what is the minimum amount of information read to the database buffer cache?
A. The entire table is copied to the database buffer cache.
B. The extent is copied to the database buffer cache.
C. The block is copied to the database buffer cache.
D. The row is copied to the database buffer cache.
14. What happens next when a server process is not able to find enough free buffers to copy the blocks from disk?
A. Signals the CKPT process to clean up the dirty buffers
B. Signals the SMON process to clean up the dirty buffers
C. Signals the CKPT process to initiate a checkpoint
D. Signals the DBWn process to write the dirty buffers to disk
15. Which memory structures are shared? Choose two.
A. Sort area
B. Program Global Area
C. Library cache
D. Large pool
16. When a SELECT statement is issued, which stage checks the user’s privileges?
A. Parse
B. Fetch
C. Execute
17. Which memory structure records all database changes made to the instance?
A. Database buffer
B. Dictionary cache
C. Redo log buffer
D. Library cache
18. What is the minimum number of online redo log files required in a database?
A. One
B. Two
C. Four
D. Zero
19. When are the system change numbers assigned?
A. When a transaction begins
B. When a transaction ends abnormally
C. When a checkpoint occurs
D. When a COMMIT is issued
20. Which of the following is not part of the database buffer pool?
A. KEEP
B. RECYCLE
C. LIBRARY
D. DEFAULT
Answers to Review Questions
1. C. The Oracle instance consists of memory structures and background processes. The Oracle database consists of the physical components such as data files, redo log files, and the control file. The System Global Area and shared pool are memory structures. The process monitor is a background process.
2. B. The LGWR process writes the redo log buffer entries when a COMMIT occurs. The redo log buffer holds information on the changes made to the database. The DBWn process writes dirty buffers to the data file, but it is independent of the COMMIT. The dirty buffers can be written to the disk before or after a COMMIT. Writing the committed changes to the online redo log file ensures that the changes are never lost in case of a failure.
3. B. By default, every Oracle instance has one database writer process, DBW0. Additional processes can be started by setting the initialization parameter DB_WRITER_PROCESSES (DBW1 through DBW9).
4. D. ARCn is the archiver process, which is started only when the LOG_ARCHIVE_START initialization parameter is set to TRUE. DBWn, LGWR, CKPT, SMON, and PMON are the default processes associated with all instances.
5. A. In a parallel server configuration, multiple instances (known as nodes) can mount one database. One instance can be associated with only one database. In a multithreaded configuration, one shared server process takes requests from multiple user processes.
6. B. The first level of logical database structure is the tablespace. A tablespace may have segments, segments may have one or more extents, and extents have one or more contiguous blocks.
7. C. The library cache contains the parsed SQL code. If a query is executed again before it is aged out of the library cache, Oracle will use the parsed code and execution plan from the library cache. The buffer cache has data blocks that are cached. The dictionary cache caches data dictionary information. There is no SGA component named parse cache.
8. B. For processing queries, the stages involved are parse, execute, and fetch. For DML statements such as INSERT, UPDATE, or DELETE, the stages involved are parse and execute. There is no feedback stage in processing SQL.
9. A. The DBWn process writes the dirty buffers to the data files under two circumstances: when a checkpoint occurs, or when a server process scans a threshold number of buffers without finding a free one.
10. C. The shared pool has three components: the library cache, the dictionary cache, and the control structures.
11. D. PMON, or the process monitor, is responsible for cleaning up failed user processes. It reclaims all resources held by the user and releases all locks on tables and rows held by the user.
12. B. Dirty buffers are the buffer blocks that need to be written to the data files. The data in these buffers has changed but has not yet been written to the disk. A block waiting to be written to disk is on the dirty list and cannot be overwritten.
13. C. The block is the smallest unit that can be copied to the buffer cache.
14. D. To reduce disk I/O contention, the DBWn process does not write the changed buffers immediately to the disk. They are written only when the dirty buffers reach a threshold, when there are not enough free buffers available, or when a checkpoint occurs.
15. C and D. The sort area is allocated to the server process as part of the PGA. The PGA is allocated when the server process starts and is deallocated when the server process completes. The library cache and the large pool are part of the SGA and are shared. The SGA is created when the instance starts.
16. A. The parse stage compiles the SQL statement, if there is not an already parsed statement available in the library cache, and then checks the privileges. The next stage is the execute stage, when the parsed code is executed. In the fetch stage, rows are returned to the user.
17. C. The redo log buffer keeps track of all changes made to the database before they are written to the redo log files. The database buffer cache contains the most recently used data blocks read from the data files. The dictionary cache holds the most recently used data dictionary information. The library cache holds the parsed SQL statements and PL/SQL code.

18. B. There should be at least two redo log files in a database. The LGWR process writes to the redo log files in a circular manner, so there must be at least two files.

19. D. A system change number (SCN) is assigned when the transaction is committed. The SCN is a unique number acting as an internal timestamp, used for recovery and read-consistent queries.

20. C. There is no database buffer cache named LIBRARY. The DBA can configure multiple buffer pools by using the appropriate initialization parameters for performance improvements. The KEEP buffer pool retains the data blocks in memory; they are not aged out. The RECYCLE buffer pool removes the buffers from memory as soon as they are not needed. The DEFAULT buffer pool contains the blocks that are not assigned to the other pools.
Chapter 2
Installing and Managing Oracle

ORACLE8i ARCHITECTURE AND ADMINISTRATION EXAM OBJECTIVES OFFERED IN THIS CHAPTER:

Getting Started with the Oracle Server
Identify the features of the Universal Installer
Set up operating system and password file authentication
List the main components of the Oracle Enterprise Manager and their uses

Managing an Oracle Instance
Create the parameter file
Start up an instance and open the database
Close a database and shut down the instance
Get and set parameter values
Manage sessions
Monitor the ALERT file and the trace files
Exam objectives are subject to change at any time without prior notice and at Oracle’s sole discretion. Please visit Oracle’s Training and Certification Web site (http://education.oracle .com/certification/index.html) for the most current exam objectives listing.
Oracle8i uses Java-based tools to install Oracle software and create databases. Java gives the same look and feel for the installer across all platforms. The Oracle Enterprise Manager utility comes with many user-friendly database administration tools. In this chapter, you will be introduced to the features of the Oracle Universal Installer and Enterprise Manager utilities. You will also learn to use parameters and to start up and shut down an Oracle instance.
Oracle Universal Installer
Oracle8i is installed by using the Oracle Universal Installer (OUI), a GUI-based Java tool with the same look and functionality across all platforms. On Windows platforms, the installer is invoked by running the executable setup.exe. On Unix platforms, the OUI is invoked by running the script runInstaller. Figure 2.1 shows the installation location screen when you invoke the OUI. You can install new Oracle8i products or remove installed Oracle8i products by using the OUI.
FIGURE 2.1    Oracle Universal Installer
The OUI accepts minimal user input for a typical installation, and you can choose the desired products by using the custom installation. OUI supports multiple Oracle homes in case you need to install different versions of Oracle under different Oracle homes. OUI resolves the dependencies among various Oracle products automatically. OUI allows silent installation, which is especially useful for workstations that do not support a graphical interface. The response to each installer question can be captured in a response file and used for future installations. Installer activities and result statuses are logged to files, which you can review after installation. The installer can start other Oracle tools, such as the Database Configuration Assistant to create a new database or the Net8 Assistant to configure the listener for the database.
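As an illustration, a silent installation driven by a response file might be invoked roughly as follows. The flag names and the response-file path shown here are assumptions, not taken from this book, so verify them against the installer documentation for your release:

```shell
# Replay a previously captured response file on a workstation
# that has no graphical display (flag names assumed):
./runInstaller -responseFile /stage/oracle8i/install.rsp -silent
```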
Oracle Enterprise Manager
Oracle Enterprise Manager (OEM) is a graphical system management tool used to manage different components of Oracle and to administer the databases from one session. OEM comprises a console and different management utilities, a repository to save all the metadata information, and the actual nodes (databases and other components) that need to be managed. The three-tier architecture of OEM is shown in Figure 2.2.

FIGURE 2.2    OEM three-tier architecture: client tools (the OEM console and DBA management packs), a middle tier (the Management Server and its repository), and the managed nodes (databases and listeners)
The basic components of the OEM are:

Console
Management Server
Common services
DBA Management Pack
Other services
Console

The console is a client GUI tool that provides a single point of access to all the databases, network, and management tools. The console consists of four panes, which can be seen in Figure 2.3.

FIGURE 2.3    OEM console
The Navigator pane displays a hierarchical view of all the databases, listeners, nodes, and other services in the network and their relationships. You can drill down and see the database users, roles, groups, and so on.

The Group pane enables you to graphically view and construct logical administrative groups of objects for more efficient management and administration. Objects can be grouped together based on any criteria, such as department, geographical location, or function. The Group pane is especially useful for managing environments with large numbers of databases and other services, or for seeing the relative location of managed services. Groups are created by first naming and registering a group in the Group pane, then dragging objects that you want to manage as a unit from the Navigator pane and dropping them into the Group pane.
The Jobs pane is the user interface to the Job Scheduling System, which can be used to automate repetitive tasks at specified times on one or multiple databases. A job consists of one or more tasks. You can build dependencies within the tasks, and certain tasks can be executed depending on the outcome of another task.

The Events pane is the user interface to the Event Management System, which monitors the network for problem events. An event is made up of one or more tests that an Intelligent Agent checks against one or more of its managed services in monitoring for critical occurrences. When the Intelligent Agent detects a problem on the services, it notifies the console and the appropriate DBA based on the permissions set up. The Intelligent Agents are local to a node and are responsible for monitoring the databases and other services on the node.
Management Server and Common Services

The Management Server is the middle tier between the console GUI and the managed nodes. It processes all system management tasks and distributes these tasks to the Intelligent Agents on the nodes. You can use multiple Management Servers to balance workload and to improve performance. The common services are the tools and systems that help the Management Server. The common services consist of the following:

Repository    The repository is a set of tables used to store the information about the managed nodes and the Oracle management tools. This data store can be created in any Oracle database, but preferably on a node that does not contain a critical Oracle instance to be monitored.

Service discovery    OEM discovers all databases and listeners on a node, once the node is identified. The Intelligent Agent finds the services and reports them back to the Oracle Management Server. The discovered services are displayed in the Navigator pane of the console.

Job Scheduling System    Using the Job Scheduling System, you can schedule and execute routine or repetitive administrative tasks. You can set up the system to notify you upon completion, failure, or success of a job through e-mail or a pager.

Event Management System    The Event Management System in the OEM is used to monitor resource problems, loss of service, shortage of disk space, or any other problem detected on the node. These can be set up as events, and the Intelligent Agent performs tests periodically to monitor them.
Notification system    Notification about the status of jobs or events can be sent to the console, via e-mail, or to a pager. You can select the notification procedures when you set up the job or event.

Paging/e-mail blackout    This feature prevents the administrator from receiving multiple e-mails or pages when a service is brought down for maintenance or for a scheduled period of downtime.

Security    Security parameters in OEM are defined for services, objects, and administrators. A super administrator is someone who creates and defines the permissions of all the repository's administrators. The super administrator can access any object and control its security parameters, including objects owned by other administrators.
DBA Management Pack

The DBA Management Pack is a set of tools integrated with the OEM that helps administrators with their daily routine tasks. These tools provide complete database administration through GUI tools rather than SQL*Plus. The tools in the DBA Management Pack can be accessed by using the OEM, through DBA Studio, or by using each tool individually. Figure 2.4 shows a DBA Studio screen.

FIGURE 2.4    DBA Studio
Using the DBA Management Pack (or DBA Studio), you can administer the following:

Instance    You can start up and shut down an instance; modify parameters; view and change memory allocations, redo logs, and archival status; view user sessions and their SQL; see the execution plan of SQL; and manage resource allocations and long-running sessions. The tool is independently known as Instance Manager.

Schema    You can create, alter, or drop any schema object, including advanced queues and Java stored procedures. You can clone any object. The tool is independently known as Schema Manager.

Security    You can change the security privileges for users and roles, and create and alter users, roles, and profiles. The tool is independently known as Security Manager.

Storage    You can manage tablespaces, data files, rollback segments, redo log groups, and archive logs. The tool is independently known as Storage Manager.
Administrator Authentication Methods
In this section, we will discuss the privileges and authentication methods available when using the administration tools described in the previous section. You can allow administrators to connect to the database by using operating system authentication or password file authentication. For remote or local database administration, you can use either method, but the OS authentication method can be used for remote administration only if you have a secured network connection. To use remote authentication of users through Remote Dial-In User Service (RADIUS, a standard lightweight protocol used for user authentication and authorization) with Oracle, you need Oracle8i Enterprise Edition with the Advanced Security option.

When you create a database, Oracle automatically creates two administrator login IDs: SYS and SYSTEM. The initial password for SYS is CHANGE_ON_INSTALL, and the initial password for SYSTEM is MANAGER. For security reasons, you should change these passwords as soon as you finish creating the database. Oracle recommends you create at least one additional user to do the DBA tasks, rather than using the SYS or SYSTEM account. A predefined role, DBA, is created with all databases and has all database administrative privileges.
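As a quick sketch of the recommendation above (the administrative username and all passwords here are invented for the example), the first steps after creating a database might look like this from SQL*Plus:

```sql
-- Change the well-known initial passwords right away
ALTER USER sys IDENTIFIED BY new_sys_pwd;
ALTER USER system IDENTIFIED BY new_system_pwd;

-- Create a separate administrative account and grant it the predefined DBA role
CREATE USER admin1 IDENTIFIED BY admin1_pwd;
GRANT dba TO admin1;
```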
OS Authentication

Oracle can verify your OS privileges and connect you to the database to perform database operations. In earlier versions of Oracle, this was accomplished by using CONNECT INTERNAL; this is still supported in Oracle8i, but Oracle says the INTERNAL user will be obsolete in a future release.

To connect to the database by using OS authentication, you must be a member of the OSDBA or OSOPER operating system group. On most Unix systems, this is the dba group. The names of the OSDBA and OSOPER groups can be specified when you install Oracle by using the OUI. OSDBA and OSOPER are not Oracle privileges or roles that you grant through the Oracle database; the OS manages them.

When you connect to the database by using the OSOPER privilege (or SYSOPER privilege), you can perform STARTUP, SHUTDOWN, ALTER DATABASE [OPEN/MOUNT], ALTER DATABASE BACKUP, ARCHIVE LOG, and RECOVER, and SYSOPER includes the RESTRICTED SESSION privilege. When you connect to the database by using the OSDBA privilege (or SYSDBA privilege), you have all system privileges with ADMIN OPTION, the OSOPER role, CREATE DATABASE, and time-based recovery.

To use OS authentication, the REMOTE_LOGIN_PASSWORDFILE parameter should be set to NONE. This is the default. OS-authenticated users can connect to the database by using CONNECT / AS SYSDBA or CONNECT / AS SYSOPER. You do not need a user created in the Oracle database to use OS authentication. Here is an example from a Unix platform, making a local OS authentication connection to the database to perform administration operations:

$ id
uid=15174(b2b) gid=14(dba)
$ sqlplus /nolog

SQL*Plus: Release 8.1.6.0.0 - Production on Sat Jun 17 17:17:17 2000

(c) Copyright 1999 Oracle Corporation. All rights reserved.

SQL> connect / as sysdba
Connected.
SQL> archive log list
Database log mode              No Archive Mode
Automatic archival             Disabled
Archive destination            /oracle/archive/DB01/arch
Oldest online log sequence     51
Current log sequence           54
SQL>
50
Chapter 2
Installing and Managing Oracle
All commands in Server Manager can now be executed from SQL*Plus. The Server Manager utility is obsolete and will no longer be shipped with Oracle software in future releases.
Password File Authentication

When using password file authentication, the user connects to the database by specifying a username and a password. The user needs to have been granted the appropriate privileges in the database. To use password file authentication, the following steps should be completed:

1. Create a password file with the SYS password by using the ORAPWD utility. When you change the password in the database, the password in this file is automatically updated.

2. Set the REMOTE_LOGIN_PASSWORDFILE parameter.

3. Grant the appropriate users the SYSDBA or SYSOPER privilege. When you grant this privilege, these users are added to the password file.

When you invoke the ORAPWD utility without any parameters, it shows the syntax for creating the password file:

$ orapwd
Usage: orapwd file=<fname> password=<password> entries=<users>
  where
    file     - name of password file (mand),
    password - password for SYS and INTERNAL (mand),
    entries  - maximum number of distinct DBAs and OPERs (opt),
  There are no spaces around the equal-to (=) character.

The FILE parameter specifies the name of the password file. Normally the file is created in the dbs directory under Oracle Home (the directory where the Oracle software is installed). The PASSWORD parameter specifies the SYS password, and ENTRIES specifies the maximum number of users you will be assigning the SYSOPER or SYSDBA privileges. If you exceed this limit, the password file needs to be re-created. ENTRIES is an optional parameter.
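Putting the three steps together, a password-file setup might look like the following sketch. The password file name, SYS password, and user name are examples only:

```sql
-- 1. Create the password file at the OS prompt (names are illustrative):
--      $ orapwd file=$ORACLE_HOME/dbs/orapwDB01 password=sys_pwd entries=5
-- 2. In the parameter file, enable password file authentication:
--      REMOTE_LOGIN_PASSWORDFILE = EXCLUSIVE
-- 3. Then, connected as SYS, add an administrator to the password file:
GRANT sysdba TO john;
```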
The parameter REMOTE_LOGIN_PASSWORDFILE can be set to either EXCLUSIVE or SHARED. When EXCLUSIVE is used, the password file can be used for only one database; you can add users other than SYS and INTERNAL to the password file. When SHARED is used, the password file is shared between multiple databases, but you cannot add any user other than SYS or INTERNAL to the password file. When you connect to the database by using the SYSDBA privilege, you are connected to the SYS schema, and when you connect by using the SYSOPER privilege, you are connected to the PUBLIC schema.
The view V$PWFILE_USERS has the information on all users granted either SYSDBA or SYSOPER privileges. The view has the username and a value of TRUE in column SYSDBA if the SYSDBA privilege is granted, or a value of TRUE in column SYSOPER if the SYSOPER privilege is granted.
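For example, after granting SYSDBA to a user (JOHN here is hypothetical, and the output is only illustrative of the view's shape), a query against the view might look like this:

```sql
SQL> SELECT * FROM v$pwfile_users;

USERNAME                       SYSDBA SYSOPER
------------------------------ ------ -------
SYS                            TRUE   TRUE
JOHN                           TRUE   FALSE
```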
Start Up the Oracle Instance

To start or stop an Oracle instance, you must have the SYSDBA or SYSOPER privilege. To start up a database, you can use either the DBA Studio utility pack or SQL*Plus, connecting with a user account that has SYSDBA or SYSOPER privileges. The database start-up is done in three stages. First, an instance associated with the database is started, then the instance mounts the database, and finally the database is opened for normal use. The examples discussed in this section use SQL*Plus to start up the database.

The instance can start, but not mount, the database by using the STARTUP NOMOUNT command. Normally you use this database state for creating a new database or for creating new control files. When you start the instance, Oracle allocates the SGA and starts the background processes.

The instance can start and mount the database without opening it by using the STARTUP MOUNT command. This state of the database is used mainly for performing specific maintenance operations, such as renaming data files; enabling or disabling archive logging; renaming, adding, or dropping redo log files; or performing a full database recovery. When you mount the database, Oracle opens the control files associated with the database. Each control file contains the names and locations of the database files and online redo log files.
STARTUP OPEN or STARTUP is used to start the instance, mount a database, and open the database for normal operations. When you open the database, Oracle opens the online data files and online redo log files. If any of the files are not available or not in synch with the control file, Oracle returns an error. You may have to perform a recovery on the file before you can open the database.

Issuing the ALTER DATABASE MOUNT command when the database is not mounted will mount the database in a previously started instance. ALTER DATABASE OPEN will open a closed database. You can open a database in read-only mode by using the ALTER DATABASE OPEN READ ONLY command. When you start the database in read-only mode, no redo information is generated because you cannot modify any data. The following example shows how to start a database by using the SQL*Plus utility.

$ sqlplus /nolog

SQL*Plus: Release 8.1.6.0.0 - Production on Sun Jun 18 18:18:18 2000

(c) Copyright 1999 Oracle Corporation. All rights reserved.

SQL> connect / as sysdba
Connected to an idle instance.
SQL> startup
ORACLE instance started.

Total System Global Area   21379176 bytes
Fixed Size                    67688 bytes
Variable Size              12750848 bytes
Database Buffers            8388608 bytes
Redo Buffers                 172032 bytes
Database mounted.
Database opened.
SQL> exit
Disconnected from Oracle8i Enterprise Edition Release 8.1.6.0.0 - Production
With the Partitioning and Java options
PL/SQL Release 8.1.6.0.0 - Production
$
Sometimes you may have problems starting up an instance. In those cases, STARTUP FORCE can be used to start a database forcefully. Use this option only if you could not shut down the database properly; STARTUP FORCE shuts down the instance if it is already running, then restarts it.
You can restrict access to the database by using the command STARTUP RESTRICT to start the database in restricted mode. Only users with the RESTRICTED SESSION system privilege can connect to the database. You can also use ALTER SYSTEM [ENABLE/DISABLE] RESTRICTED SESSION to enable or disable restricted access after opening the database. Put the database in restricted mode if you want to make any major structure modifications or to get a consistent export.
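The restricted-mode commands described above can be combined as in this short sketch:

```sql
SQL> STARTUP RESTRICT
-- Perform the structure modifications or the consistent export here,
-- then let normal users in without restarting the instance:
SQL> ALTER SYSTEM DISABLE RESTRICTED SESSION;
-- Restricted access can be turned back on later if needed:
SQL> ALTER SYSTEM ENABLE RESTRICTED SESSION;
```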
You need to have the ALTER SYSTEM privilege to change the database availability by using the ALTER SYSTEM [ENABLE/DISABLE] RESTRICTED SESSION command.
If you are using Oracle Parallel Server, the database can be opened in shared mode by specifying STARTUP PARALLEL. The parameter PARALLEL_SERVER must be set to TRUE. By default (exclusive mode), the database can be mounted and used by only one instance.
When an instance is started in the NOMOUNT state, you can access only the views that read data from the SGA. V$PARAMETER, V$SGA, V$OPTION, V$PROCESS, V$SESSION, V$VERSION, V$INSTANCE, and so on are dictionary views that read from the SGA. When the database is mounted, information can be read from the control file. V$THREAD, V$CONTROLFILE, V$DATABASE, V$DATAFILE, V$DATAFILE_HEADER, V$LOGFILE, and so on all read data from the control file.
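As a sketch, you can verify which views become accessible at each stage (output omitted):

```sql
SQL> STARTUP NOMOUNT
SQL> SELECT status FROM v$instance;   -- reads from the SGA; works in NOMOUNT
SQL> ALTER DATABASE MOUNT;
SQL> SELECT name FROM v$datafile;     -- reads from the control file; needs MOUNT
SQL> ALTER DATABASE OPEN;
```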
The Parameter File

Oracle uses a parameter file when starting up the database. This is a text file containing the parameters and their values for configuring the database and instance. The default location and name of the file depend on the operating system; on Unix platforms, by default Oracle looks for a parameter file named init<SID>.ora (SID is the name of the instance) under the $ORACLE_HOME/dbs directory. You can specify the parameter file location and name when starting up the database by using the PFILE option of the STARTUP command. The following command starts up the database in restricted mode by using the parameter file initORADB01.ora under the /oracle/admin/ORADB01/pfile directory.
STARTUP PFILE=/oracle/admin/ORADB01/pfile/initORADB01.ora RESTRICT
The parameter file tells Oracle the following when starting up an instance:
The name of the database and the location of the control files
The location of the archived log files and whether to start the archival process
The size of the SGA
The private rollback segments to make online
The location of the dump and trace files
The parameters to set limits and that affect capacity
If you do not specify a parameter in the parameter file, Oracle assumes a default value for the parameter. A custom parameter file can be structured liberally, but certain syntax rules are enforced for the files. The syntax rules are:
Comment lines are preceded by a pound sign (#).
All parameters are optional. When parameters are omitted, defaults will be applied.
Parameters and their values are generally not case sensitive. Parameter values that name files can be case sensitive if the host operating system’s filenames are case sensitive.
Parameters can be listed in any order.
Parameters that accept multiple values, such as the CONTROL_FILES parameter, can list the values in parentheses delimited by commas or with no parentheses delimited by spaces.
The continuation character is the backslash character (\). This should be used when a parameter’s list of values must be continued on a separate line.
Parameter values that contain spaces should be enclosed in double quotes.
The equal sign (=) is used to delimit the parameter name and its associated value.
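The rules above can be seen in a small parameter file fragment like the following; the database name, file locations, and block size are invented for the example:

```
# initORADB01.ora -- example parameter file (names and sizes are illustrative)
# Comment lines start with a pound sign.
db_name       = ORADB01
db_block_size = 8192

# Multiple values in parentheses, delimited by commas; the backslash
# continues the list on the next line:
control_files = (/oracle/ORADB01/control01.ctl, \
                 /oracle/ORADB01/control02.ctl)
```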
Get Parameter Values

You can get the value of a parameter by using the SHOW PARAMETERS command. When this command is used without any arguments, Oracle shows all the parameters in alphabetical order, along with their values. To get the value for a specific parameter, use the SHOW PARAMETERS command with the parameter name as the argument. For example, to view the value of the DB_BLOCK_SIZE parameter:

SQL> show parameters db_block_size

NAME                           TYPE      VALUE
------------------------------ --------- ---------------------
db_block_size                  integer   8192
SQL>

The argument in the SHOW PARAMETERS command is a filter; you can specify any string, and Oracle shows the parameters that match the argument string anywhere in the parameter name. The argument is not case sensitive. In the following example, all parameters with OS embedded somewhere in the name are shown.

SQL> show parameters OS

NAME                           TYPE      VALUE
------------------------------ --------- ---------------------
optimizer_index_cost_adj       integer   100
os_authent_prefix              string
os_roles                       boolean   FALSE
remote_os_authent              boolean   FALSE
remote_os_roles                boolean   FALSE
timed_os_statistics            integer   0
SQL>
Another method of getting the parameter values is by querying the V$PARAMETER view. V$PARAMETER shows the parameter values for the current session. V$SYSTEM_PARAMETER has the same structure as the V$PARAMETER view, except that it shows the system parameters. The columns in the V$PARAMETER view are shown in Table 2.1.
TABLE 2.1    V$PARAMETER View

Column Name        Data Type        Purpose
-----------------  ---------------  --------------------------------------------
NUM                NUMBER           Parameter number.
NAME               VARCHAR2(64)     Parameter name.
TYPE               NUMBER           Type of parameter: 1 = Boolean, 2 = string,
                                    3 = integer, 4 = file.
VALUE              VARCHAR2(512)    Value of the parameter.
ISDEFAULT          VARCHAR2(9)      Whether the parameter value is the Oracle
                                    default. FALSE indicates that the parameter
                                    was changed during start-up.
ISSES_MODIFIABLE   VARCHAR2(5)      TRUE indicates that the parameter can be
                                    changed by using an ALTER SESSION command.
ISSYS_MODIFIABLE   VARCHAR2(9)      FALSE indicates that the parameter cannot be
                                    changed by using the ALTER SYSTEM command.
                                    IMMEDIATE indicates that the parameter can be
                                    changed, and DEFERRED indicates that the
                                    parameter change takes effect only in the
                                    next session.
ISMODIFIED         VARCHAR2(10)     MODIFIED indicates that the parameter was
                                    changed by using ALTER SESSION. SYS_MODIFIED
                                    indicates that the parameter was changed by
                                    using ALTER SYSTEM.
ISADJUSTED         VARCHAR2(5)      TRUE indicates that Oracle adjusted the value
                                    of the parameter to be a more suitable value.
DESCRIPTION        VARCHAR2(64)     A brief description of the purpose of the
                                    parameter.
To get the names and values of the parameters that start with OS, perform this query:

SQL> col name format a30
SQL> col value format a25
SQL> SELECT name, value
  2  FROM v$parameter
  3  WHERE name LIKE 'os%';

NAME                           VALUE
------------------------------ -------------------------
os_roles                       FALSE
os_authent_prefix
SQL>
You can also use the GUI tool, DBA Studio, to see the values of parameters. The description shown in this tool is more elaborate than what you would see in the V$PARAMETER view.
Set Parameter Values

When you start up the instance, Oracle reads the parameter file and sets the value for each parameter. For the parameters that are not specified in the parameter file, Oracle assigns a default value. The parameters that were modified at instance start-up can be seen by querying the V$PARAMETER view for a FALSE value in the ISDEFAULT column.

SQL> SELECT name, value
  2  FROM v$parameter
  3  WHERE isdefault = 'FALSE';
Certain parameters can be changed dynamically by using the ALTER SESSION or ALTER SYSTEM command. Such parameters can be identified by querying the view V$PARAMETER. You can change the value of a parameter system-wide by using the ALTER SYSTEM command. A value of DEFERRED or IMMEDIATE in the ISSYS_ MODIFIABLE column shows that the parameter can be dynamically changed by using the command ALTER SYSTEM. DEFERRED indicates that the change you make does not take effect until a new session is started. The existing sessions will use the current value. IMMEDIATE indicates that as soon as you make a value change to the parameter, it is available to all sessions in the instance. A session can be a job or a task that Oracle manages. When you log in to the database by using SQL*Plus or any client tool, you start a session. Sessions are discussed in the next section. Here is an example of modifying a parameter by using ALTER SYSTEM.
SQL> ALTER SYSTEM SET log_archive_dest = '/oracle/archive/DB01';
The following example will set the TIMED_STATISTICS parameter to TRUE for all future sessions. SQL> ALTER SYSTEM SET timed_statistics = TRUE DEFERRED;
A value of TRUE in the ISSES_MODIFIABLE column shows that the parameter can be changed by using ALTER SESSION. When you change a parameter by using ALTER SESSION, the value of the parameter is changed only for that session. When you start the next session, the parameter will have the original value (either the Oracle default, or the value set in the parameter file, or the value set by ALTER SYSTEM). Here is an example of modifying a parameter by using ALTER SESSION:

SQL> ALTER SESSION SET nls_date_format = 'MM-DD-YYYY';
Manage Sessions

Oracle starts a session when a database connection is made. The session is available as long as the user is connected to the database. When a session is started, Oracle allocates a session ID to it. The user sessions connected to a database can be seen by querying the view V$SESSION. In V$SESSION, the session identifier (SID) and the serial number (SERIAL#) uniquely identify each session. The serial number guarantees that session-level commands are applied to the correct session objects if the session ends and another session begins with the same session ID.

The V$SESSION view contains a lot of information about a session. The username, machine name, program name, status, and login time are a few of the useful pieces of information in this view. For example, if you need to know which users are connected to the database and what program they are running, execute the following query.

SQL> SELECT username, program
  2  FROM v$session;
Sometimes it may be necessary for the DBA to terminate certain user sessions. A user session can be terminated by using the ALTER SYSTEM command.
The SID and SERIAL# from the V$SESSION view are required to kill the session. For example, to kill a session created by user JOHN, you do the following.

SQL> SELECT username, sid, serial#, status
  2  FROM v$session
  3  WHERE username = 'JOHN';

USERNAME            SID        SERIAL#   STATUS
------------------- ---------- --------- --------
JOHN                9          3         INACTIVE

SQL> ALTER SYSTEM KILL SESSION '9, 3';

System altered.

SQL> SELECT username, sid, serial#, status
  2  FROM v$session
  3  WHERE username = 'JOHN';

USERNAME            SID        SERIAL#   STATUS
------------------- ---------- --------- --------
JOHN                9          3         KILLED
SQL>
When you kill a session, Oracle first terminates the session to prevent it from executing any more SQL statements. If any SQL statement is in progress when the session is terminated, the statement is terminated and all changes are rolled back. The locks and other resources used by the session are also released.

If you kill an INACTIVE session, Oracle terminates the session and marks the status in the V$SESSION view as KILLED. When the user subsequently tries to use the session, an error is returned to the user and the session information is removed from V$SESSION. If you kill an ACTIVE session, Oracle terminates the session and immediately issues an error message to the user that the session is killed. If Oracle cannot release the resources held by the session within 60 seconds, Oracle returns a message to the user that the session has been marked for kill. The status in the V$SESSION view will again show as KILLED.

If you want the user to complete the current transaction and then terminate their session, you can use the DISCONNECT SESSION option of the ALTER
SYSTEM command. If the session has no pending or active transactions, this command has the same effect as KILL SESSION. Here is an example:

ALTER SYSTEM DISCONNECT SESSION '9,3' POST_TRANSACTION;
You can also use the IMMEDIATE clause with KILL SESSION or DISCONNECT SESSION to roll back ongoing transactions, release all session locks, recover the entire session state, and return control to you immediately. Here are some examples:

ALTER SYSTEM DISCONNECT SESSION '9,3' IMMEDIATE;
ALTER SYSTEM KILL SESSION '9,3' IMMEDIATE;
Shut Down the Oracle Database
Similar to the stages in starting up a database, there are three stages to shutting down a database. First the database is closed, then the instance dismounts the database, and finally the instance is shut down. When closing the database, Oracle writes the redo log buffer to the redo log files and the changed data in the database buffer cache to the data files, and then closes the data files and redo log files. The control file remains open, but the database is no longer available for normal operations. After closing the database, the instance dismounts the database; the control file is closed at this time. The allocated memory and the background processes still remain. The final stage is the instance shutdown: the SGA is removed from memory and the background processes are terminated. To initiate a database shutdown, you can use the SHUTDOWN command in SQL*Plus or use the DBA Studio GUI tool. To shut down the database, you must connect through a dedicated server process with SYSDBA privileges. Once the shutdown process is initiated, no new user sessions are allowed to connect to the database.
Shutdown Options The DBA can shut down the database by using the SHUTDOWN command with any of four options. These options and the steps taken by Oracle are as follows.
SHUTDOWN NORMAL When you use the SHUTDOWN command without any options, the default option used is NORMAL. When SHUTDOWN NORMAL is issued, Oracle:
Does not allow any new user connections.
Waits for all users to disconnect from the database. All connected users can continue working.
Closes the database, dismounts the instance, and shuts down the instance once all users are disconnected from the database.
SHUTDOWN IMMEDIATE SHUTDOWN IMMEDIATE is used to bring down the database as quickly as possible. When SHUTDOWN IMMEDIATE is issued, Oracle:
Does not allow any new user connections
Terminates all user connections to the database
Rolls back uncommitted transactions
Closes the database, dismounts the instance, and shuts down the instance
SHUTDOWN TRANSACTIONAL SHUTDOWN TRANSACTIONAL is used to bring down the database as soon as the users complete their current transaction. This is a mode that fits between IMMEDIATE and NORMAL. When SHUTDOWN TRANSACTIONAL is issued, Oracle:
Does not allow any new user connections.
Does not allow any new transactions in the database. When a user tries to start a new transaction, the session is disconnected.
Waits for the user to either roll back or commit any uncommitted transactions.
Closes the database, dismounts the instance, and shuts down the instance once all transactions are complete.
The following example shows a SHUTDOWN TRANSACTIONAL using SQL*Plus.

$ sqlplus /nolog

SQL*Plus: Release 8.1.6.0.0 - Production on Sun Jun 18 18:18:18 2000
(c) Copyright 1999 Oracle Corporation. All rights reserved.

SQL> connect / as sysdba
Connected.
SQL> shutdown transactional
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> exit
Disconnected from Oracle8i Enterprise Edition Release 8.1.6.0.0 - Production
With the Partitioning and Java options
PL/SQL Release 8.1.6.0.0 - Production
SHUTDOWN ABORT When none of the other three shutdown options works, the DBA can bring down the database abruptly by using the SHUTDOWN ABORT command. Instance recovery is required the next time the database is started. When SHUTDOWN ABORT is issued, Oracle:
Terminates all current SQL statements that are being processed
Disconnects all connected users
Terminates the instance immediately
Will not roll back uncommitted transactions
When the database is started up after a SHUTDOWN ABORT, Oracle performs instance recovery: it rolls forward by applying the online redo log files and then rolls back the uncommitted transactions.
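Put together, an abort and the subsequent restart look like the following sketch of a SQL*Plus session (connected AS SYSDBA; the instance-recovery messages appear in the alert log, not on screen):

```sql
-- Bring the instance down abruptly: no checkpoint is performed, and the
-- database is not closed or dismounted cleanly.
SHUTDOWN ABORT

-- On the next start-up, Oracle performs instance recovery automatically:
-- it rolls forward using the online redo log files, then rolls back any
-- uncommitted transactions.
STARTUP
```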
Instance Messages and Instance Alerts
Oracle writes informational messages and alerts to different files depending on the type of message. These messages are useful to the DBA when troubleshooting a problem. Oracle writes these files to locations that are specific to the operating system; the locations can be specified in the
initialization parameters. These parameters can be altered by using the ALTER SYSTEM command. The three parameters used to specify the locations are:

BACKGROUND_DUMP_DEST Location of the debugging trace files generated by the background processes and of the alert log file.

USER_DUMP_DEST Location of the trace files generated by user sessions. The server process, on behalf of a user session, writes a trace file if the session encounters a deadlock or any internal errors. User sessions can also be traced explicitly; the trace files thus generated are written to this location as well.

CORE_DUMP_DEST Location of core dump files, used primarily on Unix platforms. Core dumps are normally produced when a session or the instance terminates abnormally with errors. This parameter is not available on MS-Windows platforms.

All databases have an alert log file. The alert log, written in the directory specified by BACKGROUND_DUMP_DEST, logs significant database events and messages. The alert log stores information about block corruption errors, internal errors, and the nondefault initialization parameters used at instance start-up. The alert log also records information about database start-up, shutdown, archiving, recovery, tablespace modifications, rollback segment modifications, and data file modifications. The alert log is a plain text file. Its filename is operating system dependent; on Unix platforms, it takes the format alert_<SID>.log (where SID is the instance name). During database start-up, if the alert log file is not available, Oracle creates one. This file grows slowly but without limit, so you might want to delete or archive it periodically. You can delete the file even while the database is running.
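You can verify the current locations from SQL*Plus. Here is a sketch using the commands and views described in this chapter (the paths returned are OS dependent):

```sql
-- Show all three dump destinations at once.
SHOW PARAMETER dump_dest

-- The same values are available in the V$PARAMETER view.
SELECT name, value
  FROM v$parameter
 WHERE name IN ('background_dump_dest',
                'user_dump_dest',
                'core_dump_dest');
```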
Summary
This chapter briefly discussed the Universal Installer and Enterprise Manager, two of Oracle’s Java-based GUI tools. The OUI has the same interface across all platforms and is used to install multiple products. The Enterprise Manager is a system management tool used to manage components of
Oracle and to administer many local and remote databases at one location. OEM comprises a console and different management utilities, a repository to save all the metadata information, and the actual nodes (databases and other components) that need to be managed.

For connecting to the database as administrator, Oracle has two authentication methods. Operating system authentication is allowed if you are local to the computer where the database is situated or if you have a secure network connection. Password file authentication creates a password file at the server with the SYS password. Users can be granted the SYSDBA or SYSOPER privilege, and they can connect to the database with the appropriate privileges. You need either of these privileges to shut down or start up the database.

Starting up Oracle involves three stages. First the instance is started, then the instance mounts the database, and finally the database is opened. You can start up the database to any of these stages by using the start-up options. The database availability can also be controlled by enabling restricted access.

Shutting down the database also involves three stages, as in the start-up, but in reverse order. You can shut down the database in four ways. SHUTDOWN NORMAL, the default, waits for all users to log out before shutdown. SHUTDOWN IMMEDIATE disconnects all user sessions and shuts down the database. SHUTDOWN TRANSACTIONAL waits for the users to complete their current transactions and then shuts down the database. SHUTDOWN ABORT simply terminates the instance immediately.

When you start up the database, Oracle uses different parameters to configure memory, to configure the database, and to set limits. These parameters are saved in a file called the parameter file, which is read by Oracle during instance start-up. Many of these parameters can be changed dynamically for the session by using the ALTER SESSION command or for the database by using the ALTER SYSTEM command.
The database constantly writes information about major database events in a log file called the alert log file. Oracle also writes trace and dump information for debugging session problems.
Key Terms

Before you take the exam, make sure you’re familiar with the following terms:

Oracle Universal Installer (OUI)
Oracle Enterprise Manager (OEM)
Management Server
OS authentication
password file authentication
mount
parameter file
session
alert log file
Review Questions

1. From the following options, which one is an invalid database start-up option?

A. STARTUP NORMAL
B. STARTUP MOUNT
C. STARTUP NOMOUNT
D. STARTUP FORCE

2. Which two values from the V$SESSION view are used to terminate a user session?

A. SID
B. USERID
C. SERIAL#
D. SEQUENCE#

3. To use operating system authentication to connect to the database as an administrator, what should the value of the parameter REMOTE_LOGIN_PASSWORDFILE be set to?

A. SHARED
B. EXCLUSIVE
C. NONE
D. OS

4. What information is available in the alert log files?

A. Block corruption errors
B. Users connecting and disconnecting from the database
C. All user errors
D. The default values of the parameters used to start up the database
5. Which parameter value is used to set the directory path where the alert log file is written?

A. ALERT_DUMP_DEST
B. USER_DUMP_DEST
C. BACKGROUND_DUMP_DEST
D. CORE_DUMP_DEST

6. Which SHUTDOWN option requires instance recovery when the database is started the next time?

A. SHUTDOWN IMMEDIATE
B. SHUTDOWN TRANSACTIONAL
C. SHUTDOWN NORMAL
D. None of the above

7. Which SHUTDOWN option will wait for the users to complete their uncommitted transactions?

A. SHUTDOWN IMMEDIATE
B. SHUTDOWN TRANSACTIONAL
C. SHUTDOWN NORMAL
D. SHUTDOWN ABORT

8. How do you make a database read-only? Choose the best answer.

A. STARTUP READ ONLY;
B. STARTUP MOUNT; ALTER DATABASE OPEN READ ONLY;
C. STARTUP NOMOUNT; ALTER DATABASE READ ONLY;
D. STARTUP; ALTER SYSTEM ENABLE READ ONLY;
9. Which role is created by default to administer databases?

A. DATABASE_ADMINISTRATOR
B. SUPER_USER
C. DBA
D. No such role is created by default; you need to create administrator roles after logging in as SYS.

10. Which parameter in the ORAPWD utility is optional?

A. FILE
B. PASSWORD
C. ENTRIES
D. All the parameters are optional; if you omit a parameter, Oracle substitutes the default.

11. Which privilege do you need to connect to the database, if the database is started up by using STARTUP RESTRICT?

A. ALTER SYSTEM
B. RESTRICTED SESSION
C. CONNECT
D. RESTRICTED SYSTEM

12. At which stage of the database start-up is the control file opened?

A. Before the instance start-up
B. Instance started
C. Database mounted
D. Database opened
13. User SCOTT has opened a SQL*Plus session and left for lunch. When you queried the V$SESSION view, the STATUS was INACTIVE. You terminated SCOTT’s session. What will be the status of SCOTT’s session in V$SESSION?

A. INACTIVE.
B. There will be no session information in the V$SESSION view.
C. TERMINATED.
D. KILLED.

14. Which command will “bounce” the database, that is, shut down the database and start up the database in a single command?

A. STARTUP FORCE.
B. SHUTDOWN FORCE.
C. SHUTDOWN START.
D. There is no single command to “bounce” the database; you need to shut down the database and then restart it.

15. When performing the command SHUTDOWN TRANSACTIONAL, Oracle performs the following tasks in what order?

A. Terminates the instance
B. Performs a checkpoint
C. Closes the data files and redo log files
D. Waits for all user transactions to complete
E. Dismounts the database
F. Closes all sessions

16. How many panes are there in the Enterprise Manager console?

A. One
B. Six
C. Four
D. Two
17. Using SQL*Plus, which two options below will show the value of the parameter DB_BLOCK_SIZE?

A. SHOW PARAMETER DB_BLOCK_SIZE
B. SHOW PARAMETERS DB_BLOCK_SIZE
C. SHOW ALL
D. DISPLAY PARAMETER DB_BLOCK_SIZE

18. When you issue the command ALTER SYSTEM ENABLE RESTRICTED SESSION, what happens to the users who are connected to the database?

A. The users with DBA privilege remain connected, and others are disconnected.
B. The users with RESTRICTED SESSION remain connected, and others are disconnected.
C. Nothing happens to the existing users. They can continue working.
D. The users are allowed to complete their current transaction and are disconnected.

19. Which view has information about users who are granted SYSDBA or SYSOPER privilege?

A. V$PWFILE_USERS
B. DBA_PWFILE_USERS
C. DBA_SYS_GRANTS
D. None of the above

20. Which DB administration tool is not part of the DBA Studio pack?

A. Storage Manager
B. Schema Manager
C. Security Manager
D. SQL Worksheet
Answers to Review Questions

1. A. STARTUP NORMAL is an invalid option; to start the database, you issue the STARTUP command without any options or with STARTUP OPEN.

2. A and C. SID and SERIAL# are used to kill a session. You can query the V$SESSION view to obtain these values. The command is ALTER SYSTEM KILL SESSION '<sid>, <serial#>'.

3. C. The value of the REMOTE_LOGIN_PASSWORDFILE parameter should be set to NONE to use OS authentication. To use password file authentication, the value should be either EXCLUSIVE or SHARED.

4. A. The alert log stores information about block corruption errors, internal errors, and the nondefault initialization parameters used at instance start-up. The alert log also records information about database start-up, shutdown, archiving, recovery, tablespace modifications, rollback segment modifications, and data file modifications.

5. C. The alert log file is written in the BACKGROUND_DUMP_DEST directory. This directory also records the trace files generated by the background processes. The USER_DUMP_DEST directory has the trace files generated by user sessions. The CORE_DUMP_DEST directory is used primarily on Unix platforms to save the core dump files. ALERT_DUMP_DEST is not a valid parameter.

6. D. SHUTDOWN ABORT requires instance recovery when the database is started the next time. Oracle will also roll back uncommitted transactions during start-up. This option shuts down the instance without dismounting the database.

7. B. When SHUTDOWN TRANSACTIONAL is issued, Oracle waits for the users to either commit or roll back their pending transactions. Once all users have either rolled back or committed their transactions, the database is shut down. When using SHUTDOWN IMMEDIATE, the user sessions are disconnected and the changes are rolled back. SHUTDOWN NORMAL waits for the user sessions to disconnect from the database.
8. B. To put a database into read-only mode, you can mount the database and open the database in read-only mode. This can be accomplished in one step by using STARTUP OPEN READ ONLY.

9. C. The DBA role is created when you create the database and is assigned to the SYS and SYSTEM users.

10. C. The parameter ENTRIES is optional. You must specify a password file name and the SYS password. The password file created will be used for authentication.

11. B. The RESTRICTED SESSION privilege is required to access a database that is in restricted mode. You start up the database in restricted mode by using STARTUP RESTRICT, or change the database to restricted mode by using ALTER SYSTEM ENABLE RESTRICTED SESSION.

12. C. The control file is opened when the instance mounts the database. The data files and redo log files are opened after the database is opened. When the instance is started, the background processes are started.

13. D. When you terminate a session that is INACTIVE, the STATUS in V$SESSION will show as KILLED. When SCOTT tries to perform any database activity in the SQL*Plus window, he receives an error that his session is terminated. When an ACTIVE session is killed, the changes are rolled back and an error message is written to the user’s screen.

14. A. STARTUP FORCE will terminate the current instance and start up the database. It is equivalent to issuing SHUTDOWN ABORT and then STARTUP OPEN.

15. D, F, B, C, E, and A. SHUTDOWN TRANSACTIONAL waits for all user transactions to complete. Once no transactions are pending, it disconnects all sessions and proceeds with the normal shutdown process. The normal shutdown process performs a checkpoint, closes data files and redo log files, dismounts the database, and shuts down the instance.
16. C. There are four panes. The Navigator pane displays a hierarchical view of all the databases, listeners, nodes, and other services in the network and their relationships. The Group pane enables you to graphically view and construct logical administrative groups of objects for more efficient management and administration. The Jobs pane is the user interface to the Job Scheduling System, which can be used to automate repetitive tasks at specified times on one or multiple databases. The Events pane is the user interface to the Event Management System, which monitors the network for problem events.

17. A and B. The SHOW PARAMETER command will show the current value of the parameter. If you provide the parameter name, its value is shown; if you omit the parameter name, all the parameter values are shown. SHOW ALL in SQL*Plus will display the SQL*Plus environment settings, not the parameters.

18. C. If you enable RESTRICTED SESSION when users are connected, nothing happens to the already connected sessions. Future sessions are started only if the user has the RESTRICTED SESSION privilege.

19. A. The dynamic view V$PWFILE_USERS has the username and a value of TRUE in column SYSDBA if the SYSDBA privilege is granted, or a value of TRUE in column SYSOPER if the SYSOPER privilege is granted.

20. D. SQL Worksheet is not part of the DBA Studio pack. The DBA Studio pack includes Instance Manager, Schema Manager, Storage Manager, and Security Manager. SQL Worksheet is a SQL command interface, which is not part of the DBA pack, but is part of Enterprise Manager.
Chapter 3

Creating a Database and Data Dictionary

ORACLE8i ARCHITECTURE AND ADMINISTRATION EXAM OBJECTIVES OFFERED IN THIS CHAPTER:

Creating a database

Prepare the operating system
Prepare the parameter file
Create the database

Creating Data Dictionary Views and Standard Packages

Construct the data dictionary views
Query the data dictionary
Prepare the PL/SQL environment using the administrative scripts
Administer stored procedures and packages
List the types of database event triggers

Exam objectives are subject to change at any time without prior notice and at Oracle’s sole discretion. Please visit Oracle’s Training and Certification Web site (http://education.oracle.com/certification/index.html) for the most current exam objectives listing.
Database creation requires planning and preparation. You need to prepare the operating system, decide on the configuration parameters, and lay out the physical files of the database for optimum performance. You also need to create the data dictionary and Oracle-supplied PL/SQL packages. In this chapter, you will learn about creating the database by using scripts and by using Oracle’s Database Configuration Assistant. The basic initialization parameters, Optimal Flexible Architecture, and the data dictionary views are also discussed.
Creating a Database
Creating an Oracle database requires planning and is done in multiple steps. The database is a collection of physical files that work together with an area of allocated memory and background processes. You create the database only once, but you can change the configuration (except the block size) or add more files to the database. Before creating the database, you must have:
Necessary hardware resources such as memory and disk space
Operating system privileges
A plan to lay out the files and their sizes
A parameter file
Oracle software installed
Existing databases backed up
Preparing Resources

Preparing the operating system resources is an important step in database creation. Depending on the OS, you may have to adjust certain configuration parameters. For example, on Unix platforms, you must configure the shared memory parameters, because Oracle uses a single shared memory segment for the SGA. Since a major share of Oracle databases are created on Unix platforms, we will discuss certain OS parameters that need to be configured before you can create any Oracle database. The Unix kernel parameters and their purposes are in the following list. The super-user administers these kernel parameters.

SHMMAX Maximum size of a shared memory segment
SHMMNI Maximum number of shared memory identifiers in the system
SHMSEG Maximum number of shared memory segments a user process can attach to
SEMMNI Maximum number of semaphore identifiers in the system
SHMMAX × SHMSEG Total maximum shared memory that can be allocated

Enough memory should be available to create the SGA when creating the database and for future database operation. It is better to fit the SGA in real memory, rather than using virtual memory, to avoid paging, which degrades performance.

The Oracle software should be installed on the machine where you wish to create the database. The user account that installs the software needs certain administrative privileges on NT, but on Unix platforms, the account that installs the software need not have super-user privileges. The super-user privilege is required only to set up the Oracle account and to complete certain post-installation tasks such as creating the oratab file. The oratab file lists all the database instance names on that machine, their Oracle home locations, and whether each database should be started automatically at boot time. Certain Oracle scripts and Enterprise Manager discovery services use this file. The oratab file resides under the /etc or the /var/opt/oracle directory, depending on the OS.
The user account that owns the Oracle software should have the necessary privileges to create the data files, redo log files, and control files. You must make sure that you have enough free space available to create these files. Oracle has certain guidelines on where to create the files, which are discussed in the section “Optimal Flexible Architecture.”
The parameter file lists the parameters that will be used for the database creation and configuration. The common parameters are discussed in the section “Parameters.” Anyone can make mistakes; before performing any major task, every DBA should make sure that they have methods to fix the mistakes. If you are already running databases on the server where you want to create the new database, you must make a full backup of all existing databases. If you overwrite one of the existing database files when creating the new database, the existing database will become useless.
Optimal Flexible Architecture The Optimal Flexible Architecture (OFA) is a set of guidelines specified by Oracle to better manage the Oracle software and the database. OFA enforces the following:
A consistent naming convention
Separating Oracle software from the database
Separating the Oracle software versions
Separating the data files belonging to different databases
Separating parameter files and database creation scripts from the database files and software
Separating trace files, log files, and dump files from the database and software
Figure 3.1 shows the software installation and database files on an NT platform conforming to the OFA. Here the ORACLE_BASE directory is C:\oracle, which has four branches—admin, ora80, ora81, and oradata. The ora80 and ora81 directories are software installations. By separating the versions, database upgrades are easy to perform. The admin and oradata directories have subdirectories for each database on the server. In Figure 3.1, oradb01 and oradb02 are two databases. Under the admin branch, for each database, there are subdirectories for administrative scripts (adhoc), background dump files and the alert log file (bdump), core dump
files (cdump), database creation scripts (create), export files (exp), parameter files (pfile), and a user dump directory (udump). The oradata directory has the data files, redo log files, and control files belonging to the database, separated at the database level by using subdirectories. FIGURE 3.1
OFA directory structures
For performance reasons, the OFA architecture can be slightly extended to include multiple disks and to spread out the data files. Figure 3.2 shows such a layout, where oradata01, oradata02, oradata03, oradata04, and so on can be on separate disks and can hold separate types of files (data files separate from redo log files) or different tablespaces (data tablespace separate from the index tablespace and separate from the system tablespace).
FIGURE 3.2

OFA using multiple disks

C:\oracle\ora81
         \ora82
         \admin\oradb01
               \oradb02
         \oradata\oradb01\system.dbf
                         \control1.ctl
D:\oracle\oradata\oradb01\rollback.dbf
                         \control2.ctl
E:\oracle\oradata\oradb01\temp.dbf
                         \control3.ctl
F:\oracle\oradata\oradb01\redo11.log
                         \redo21.log
G:\oracle\oradata\oradb01\redo12.log
                         \redo22.log
H:\oracle\oradata\oradb01\data1.dbf
Parameters

Oracle uses the parameter file to start up the instance before creating the database. Some of the database configuration values are specified via the parameter file. The purpose and format of the parameter file were discussed in Chapter 2, “Installing and Managing Oracle.” The parameters that affect the database configuration and creation are the following:

CONTROL_FILES Specifies the control file location(s) for the new database with the full pathname. Specify at least two control files on different disks; you can specify up to eight control file names. Oracle creates these control files when the database is created. Be careful when specifying the control file names: if you specify the control file name of an existing
database, Oracle will overwrite the control file and the existing database will be damaged. If you do not use this parameter, Oracle uses a default filename, which is OS dependent.

DB_BLOCK_SIZE Specifies the database block size as a multiple of the OS block size. This value cannot be changed after database creation. The default block size is 2KB on most platforms. Oracle allows block sizes from 2KB to 32KB, depending on the OS.

DB_NAME Specifies the database name. The name cannot be changed easily after the database is created (you must re-create the control file). The DB_NAME value can be a maximum of eight characters. Alphabetic characters, numbers, underscore (_), pound (#), and dollar ($) symbols can be used in the name; no other characters are valid. The first character must be alphabetic. Oracle removes double quotation marks before processing the database name. During database creation, the DB_NAME value is recorded in the data files, redo log files, and control file of the database.

The other parameters that can be included in the parameter file are shown in Table 3.1. You must at least define DB_BLOCK_BUFFERS and SHARED_POOL_SIZE to calculate the SGA size, which must fit into real, not virtual, memory.

TABLE 3.1
Initialization Parameters

OPEN_CURSORS The maximum number of open cursors a session can have. The default is 50.

MAX_ENABLED_ROLES The maximum number of database roles that users can enable. The default is 20.

DB_BLOCK_BUFFERS The number of buffers in the database buffer cache. The buffer cache size is DB_BLOCK_BUFFERS × DB_BLOCK_SIZE. The default value is 48MB / DB_BLOCK_SIZE.

SHARED_POOL_SIZE Size of the shared pool. Can be specified in bytes or KB or MB. The default value on most platforms is 16MB.

LARGE_POOL_SIZE The large pool area of the SGA. Default value is 0.

JAVA_POOL_SIZE Size of the Java pool; the default value is 20,000KB. If you are not using Java, specify the value as 0.

LOG_CHECKPOINT_INTERVAL The frequency of the checkpoint. The value specified is in OS blocks.

LOG_CHECKPOINT_TIMEOUT Time-based checkpoints. The value specified is in seconds.

PROCESSES The maximum number of processes that can connect to the instance. This includes the background processes.

LOG_BUFFER Size of the redo log buffer in bytes.

BACKGROUND_DUMP_DEST Location of the background dump directory. The alert log file is written in this directory.

CORE_DUMP_DEST Location of the core dump directory.

USER_DUMP_DEST Location of the user dump directory.

REMOTE_LOGIN_PASSWORDFILE The authentication method. When creating the database, make sure you have either commented out this parameter or set it to NONE. If you create the password file before creating the database, you can specify a different value such as EXCLUSIVE or SHARED.

COMPATIBLE The release with which the database server must maintain compatibility. You can specify values from 8.0 to the current release number.

SORT_AREA_SIZE Size of the area allocated for temporary sorts.

ROLLBACK_SEGMENTS The private rollback segments to make online when starting up the database. This parameter is ignored during database creation.

LICENSE_MAX_SESSIONS Maximum number of concurrent user sessions. When this limit is reached, only users with RESTRICTED SESSION privilege are allowed to connect. The default is 0 (unlimited).

LICENSE_SESSIONS_WARNING A warning limit on the number of concurrent user sessions. Messages are written to the alert log when new users connect after this limit is reached. The new user is allowed to connect up to the LICENSE_MAX_SESSIONS value. The default value is 0 (unlimited).

LICENSE_MAX_USERS Maximum number of users that can be created in the database. The default is 0 (unlimited).
The CREATE DATABASE Command

The database is created by using the CREATE DATABASE command. You must start up the instance (with STARTUP NOMOUNT PFILE=) before issuing the command. A sample database creation command is shown below.

CREATE DATABASE "PROD01"
CONTROLFILE REUSE
LOGFILE
  GROUP 1 ('/oradata02/PROD01/redo0101.log',
           '/oradata03/PROD01/redo0102.log') SIZE 5M REUSE,
  GROUP 2 ('/oradata02/PROD01/redo0201.log',
           '/oradata03/PROD01/redo0202.log') SIZE 5M REUSE
MAXLOGFILES 4
MAXLOGMEMBERS 2
MAXLOGHISTORY 0
MAXDATAFILES 254
MAXINSTANCES 1
NOARCHIVELOG
CHARACTER SET "US7ASCII"
NATIONAL CHARACTER SET "US7ASCII"
DATAFILE '/oradata01/PROD1/system01.dbf' SIZE 80M
  AUTOEXTEND ON NEXT 5M MAXSIZE UNLIMITED;
Chapter 3
Creating a Database and Data Dictionary
Let's discuss the clauses used in the CREATE DATABASE command. The only mandatory portion of this command is the CREATE DATABASE clause itself. If you omit the database name, Oracle takes the default value from the parameter DB_NAME defined in the initialization parameter file. The value specified in the parameter file and the database name in this command should be the same.

The CONTROLFILE REUSE clause is used to overwrite an existing control file. Normally this clause is used only when re-creating a database. If you omit this clause and any of the files specified by the CONTROL_FILES parameter already exist, Oracle returns an error.

The LOGFILE clause specifies the location of the online redo log files. If you omit the GROUP clause, Oracle creates the files specified in separate groups with one member in each. A database must have at least two redo log groups. In the example, Oracle creates two redo log groups with two members in each. It is recommended that all redo log groups be the same size. The REUSE clause overwrites an existing file, if any, provided the sizes are the same.

The next five clauses specify limits for the database. The control file size depends on these limits, because Oracle pre-allocates space in the control file. MAXLOGFILES specifies the maximum number of redo log groups that can ever be created in the database. MAXLOGMEMBERS specifies the maximum number of redo log members (copies of redo log files) for each redo log group. MAXLOGHISTORY is used only for the Parallel Server configuration; it specifies the maximum number of archived redo log files for automatic media recovery. MAXDATAFILES specifies the maximum number of data files that can be created in this database. Data files are created when you create a tablespace or add more space to a tablespace by adding a data file. MAXINSTANCES specifies the maximum number of instances that can simultaneously mount and open this database.
If you want to change any of these limits after the database is created, you must re-create the control file.
The initialization parameter DB_FILES specifies the maximum number of data files accessible to the instance. The MAXDATAFILES clause in the CREATE DATABASE command specifies the maximum number of data files allowed for the database. The DB_FILES parameter cannot specify a value larger than MAXDATAFILES.
You can specify NOARCHIVELOG or ARCHIVELOG to configure the redo log archiving. The default is NOARCHIVELOG; you can change the database to ARCHIVELOG mode by using the ALTER DATABASE command after the database is created.
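For example, one possible sequence for switching an existing database to ARCHIVELOG mode might look like the following sketch; the archiving mode can be changed only while the database is mounted but not open.

```sql
-- Mount the database without opening it
STARTUP MOUNT

-- Switch the archiving mode, then open the database
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;

-- Verify the new mode
SELECT log_mode FROM V$DATABASE;
```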
The CHARACTER SET clause specifies the character set used to store data. The default is US7ASCII on most platforms. The character set cannot be changed after database creation. The NATIONAL CHARACTER SET clause specifies the national character set used to store data in columns specifically defined as NCHAR, NCLOB, or NVARCHAR2. If not specified, the national character set defaults to the database character set.

The DATAFILE clause specifies one or more files that should be created for the database. The data files specified are used for the SYSTEM tablespace. The SYSTEM tablespace size should be at least 5MB. You can optionally specify the AUTOEXTEND clause, which is discussed in detail in Chapter 5, "Logical and Physical Database Structures."

Now that you have seen what is involved in creating a database, let's put it all together:

1. Make sure you have enough resources and privileges available.

2. Decide on a database name, control file locations, and a database block size, and prepare a parameter file including other necessary parameters.

3. Decide on the version of the database and the instance name. Set the environment variable ORACLE_HOME to the directory of the Oracle software installation and ORACLE_SID to the instance name. Normally the instance name and the database name are the same. You also need to set the ORA_NLS33 environment variable if you are using a character set other than US7ASCII.

4. Start the instance. Using SQL*Plus, connect using a SYSDBA account and issue STARTUP NOMOUNT.

5. Create the database by using the CREATE DATABASE command.
Data Dictionary
The most important part of the Oracle database is the data dictionary. The data dictionary is a set of tables and views that hold the database’s metadata information. You cannot update the dictionary directly; Oracle updates the dictionary when you issue any Data Definition Language (DDL) commands. The dictionary is provided as read-only for users and administrators. The contents of the data dictionary and obtaining information from the dictionary are discussed in the section “Querying the Dictionary.”
The data dictionary consists of base tables and user-accessible views. The base tables are normalized and have cryptic, version-specific information. The views are provided by Oracle to query the dictionary and extract meaningful information. When the database is created, Oracle creates two users, SYS and SYSTEM. SYS is the owner of the data dictionary, and SYSTEM is a DBA account. The initial password for SYS is CHANGE_ON_INSTALL and for SYSTEM is MANAGER. You should change these passwords once the database creation is complete.
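For example, the default passwords can be changed with ALTER USER; the new password values shown here are placeholders.

```sql
-- Change the well-known default passwords immediately after creation
ALTER USER SYS IDENTIFIED BY new_sys_password;
ALTER USER SYSTEM IDENTIFIED BY new_system_password;
```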
You should never change the definition or contents of the data dictionary base tables. Oracle uses the dictionary information for proper functioning of the database.
Creating the Dictionary

The Oracle database is functional only after you create the dictionary views and additional tablespaces, rollback segments, users, and so on. Creating the dictionary views is the next step after you create the database by using the CREATE DATABASE command. Running certain Oracle-supplied scripts creates the dictionary views.

The data dictionary base tables are created under the SYS schema in the SYSTEM tablespace when you issue the CREATE DATABASE command. The tablespace and tables are created by Oracle using the sql.bsq script, found under the $ORACLE_HOME/rdbms/admin directory. This script creates the following:

- The SYSTEM tablespace, using the data file(s) specified in the CREATE DATABASE command
- A rollback segment named SYSTEM in the SYSTEM tablespace
- The SYS and SYSTEM user accounts
- The dictionary base tables and clusters
- Indexes on the dictionary tables, and sequences for dictionary use
- The roles PUBLIC, CONNECT, RESOURCE, DBA, DELETE_CATALOG_ROLE, EXECUTE_CATALOG_ROLE, and SELECT_CATALOG_ROLE
- The DUAL table
You should not modify the definitions in the sql.bsq script—for example, adding columns, removing columns, or changing the data types or width. However, you are allowed to change the storage parameters: INITIAL, NEXT, MINEXTENTS, MAXEXTENTS, PCTINCREASE, FREELISTS, FREELIST GROUPS, and OPTIMAL.
The DUAL table is a dummy table owned by SYS and is accessible to all users of the database. The table has only one column, named DUMMY, and has only one row. You should not add more rows to this table.
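Because DUAL has exactly one row, it is handy for selecting pseudo-columns or evaluating expressions that are not tied to any table, for example:

```sql
-- Each of these queries returns exactly one row
SELECT SYSDATE FROM DUAL;
SELECT USER FROM DUAL;
SELECT 2 + 2 FROM DUAL;
```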
Running the script catalog.sql creates the data dictionary views. This script also creates synonyms on the views to allow users easy access to them. Before running any data dictionary script, you should connect to the database as INTERNAL or SYS. The dictionary creation scripts are under $ORACLE_HOME/rdbms/admin on most platforms.

The script catproc.sql creates the dictionary items necessary for PL/SQL functionality. The other scripts necessary for creating dictionary objects depend on the OS and the functionality you want to have in the database. For example, if you are not using Parallel Server, you need not install any parallel server-related dictionary items. At a minimum, you should run the catalog.sql and catproc.sql scripts after creating the database.

The dictionary creation scripts all begin with cat. Many of the scripts call other scripts. For example, when you execute catalog.sql, it calls the following scripts:

standard.sql  Creates a package called STANDARD, which has the SQL functions to implement basic language features

cataudit.sql  Creates data dictionary views to support auditing

catexp.sql  Creates data dictionary views to support import/export

catldr.sql  Creates data dictionary views to support direct-path load of SQL*Loader

catpart.sql  Creates data dictionary views to support partitioning

catadt.sql  Creates data dictionary views to support Oracle objects and types

catsum.sql  Creates data dictionary views to support Oracle summary management
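The minimum dictionary setup can therefore be sketched as follows; the paths assume a UNIX installation with the scripts in the usual admin directory.

```sql
-- Connect with dictionary-owner privileges
CONNECT internal

-- Create the data dictionary views and their synonyms
@$ORACLE_HOME/rdbms/admin/catalog.sql

-- Create the dictionary items needed for PL/SQL
@$ORACLE_HOME/rdbms/admin/catproc.sql
```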
From the name of a script, you can sometimes identify its purpose. The following list indicates the categories of scripts.

cat*.sql  Catalog and data dictionary scripts

dbms*.sql  PL/SQL administrative package definitions

prvt*.plb  PL/SQL administrative package code, in wrapped (encrypted) form

uNNNNNN.sql  Database upgrade/migration scripts

dNNNNNN.sql  Database downgrade scripts

utl*.sql  Additional tables and views needed for database utilities
Installing PL/SQL Administrative Packages

The PL/SQL functionality is installed in the database after creating the dictionary views, by running the script catproc.sql. This script calls a series of other scripts that install all the PL/SQL administrative packages. The dbms*.sql scripts called by catproc.sql contain the administrative package definitions, and the prvt*.plb scripts contain the wrapped code for the package definitions.
Administering Stored Procedures and Packages

PL/SQL stored programs are stored in the database dictionary and are treated like any other database object. The code used to create a procedure, package, or function is available in the dictionary views DBA_SOURCE, ALL_SOURCE, and USER_SOURCE, except when you create them with the WRAP utility. The WRAP utility generates encrypted code, which only the Oracle server can interpret.

The privileges on these stored programs are managed by using the regular GRANT and REVOKE statements. You can GRANT and REVOKE the EXECUTE privilege on these objects to other users of the database. The DBA_OBJECTS, ALL_OBJECTS, and USER_OBJECTS views give information about the status of a stored program. If a procedure is invalid, you can recompile it by using the following statement.

ALTER PROCEDURE <procedure_name> COMPILE;
To recompile a package, you need to compile the package definition and then the package body as in the following statements.
ALTER PACKAGE <package_name> COMPILE; ALTER PACKAGE <package_name> COMPILE BODY;
To compile the package, procedure, or function owned by any other schema, you must have the ALTER ANY PROCEDURE privilege.
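Putting these pieces together, a DBA might locate and recompile invalid stored programs as in the following sketch; the schema HR and the object names are hypothetical.

```sql
-- List invalid stored programs owned by the hypothetical user HR
SELECT object_name, object_type
FROM   DBA_OBJECTS
WHERE  owner = 'HR'
AND    status = 'INVALID';

-- Recompile a procedure, then a package specification and its body
ALTER PROCEDURE hr.update_salary COMPILE;
ALTER PACKAGE   hr.pay_pkg       COMPILE;
ALTER PACKAGE   hr.pay_pkg       COMPILE BODY;
```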
Completing the Database Creation

After creating the database and the dictionary views, you must create additional tablespaces and rollback segments to complete the database creation process. Oracle recommends creating the following tablespaces; you can create additional tablespaces depending on the requirements of your application.

RBS  Holds the rollback segments. When you create the database, Oracle creates the SYSTEM rollback segment in the SYSTEM tablespace. For the database to be operational, you must have at least one rollback segment other than SYSTEM. Oracle recommends creating the rollback segments in a separate tablespace.

TEMP  Holds the temporary segments. Temporary segments are used by Oracle for sorting and any intermediate operation. Oracle uses these segments when the information to be sorted will not fit in the memory specified by the SORT_AREA_SIZE parameter in the initialization file.

USERS  Contains the user tables.

INDX  Contains the user indexes.

TOOLS  Holds the tables and indexes created by the Oracle administrative tools.

After creating these tablespaces, you must create additional users for the database. Also, you should change the default tablespace of SYSTEM to TOOLS and the temporary tablespaces of SYS and SYSTEM to TEMP. You must make a database backup after creation.
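A sketch of these post-creation steps follows; the file names, sizes, and segment names are illustrative, not prescriptive.

```sql
-- Create a rollback tablespace and a non-SYSTEM rollback segment
CREATE TABLESPACE rbs
  DATAFILE '/oradata01/PROD01/rbs01.dbf' SIZE 50M;
CREATE ROLLBACK SEGMENT rbs01 TABLESPACE rbs;
ALTER ROLLBACK SEGMENT rbs01 ONLINE;

-- Create the temporary tablespace
CREATE TABLESPACE temp
  DATAFILE '/oradata01/PROD01/temp01.dbf' SIZE 50M
  TEMPORARY;

-- Repoint the administrative accounts (assumes TOOLS exists)
ALTER USER system DEFAULT TABLESPACE tools;
ALTER USER sys    TEMPORARY TABLESPACE temp;
ALTER USER system TEMPORARY TABLESPACE temp;
```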
Querying the Dictionary

The data dictionary views and tables can be queried in the same way that you query any other table or view. From the prefix of a data dictionary view, you can determine for whom the view is intended. Some views are accessible
to all Oracle users; others are intended for database administrators only. The data dictionary views can be classified into the following categories, based on their prefix:

DBA_  These views have information about all structures in the database; they show what is in all users' schemas. Accessible to the DBA or anyone with the SELECT_CATALOG_ROLE role, they provide information on all the objects in the database and have an OWNER column.

ALL_  These views show information about all objects that the user has access to. They are accessible to all users. Each view has an OWNER column, providing information about the objects accessible to the user.

USER_  These views show information about the structures owned by the user (in the user's schema). They are accessible to all users and do not have an OWNER column.

V$  These views are known as the dynamic performance views, because they are continuously updated while a database is open and in use, and their contents relate primarily to performance. The actual dynamic performance views are identified by the prefix V_$; public synonyms for these views have the prefix V$.

GV$  For almost every V$ view, Oracle has a corresponding GV$ view. These are the global dynamic performance views and are available if you are running Oracle Parallel Server.

The ALL_ views and USER_ views have almost identical information except for the OWNER column, but the DBA_ views often have more information useful for administrators.
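The difference between the prefixes is easy to see with the table views:

```sql
-- Tables the current user owns (no OWNER column)
SELECT table_name FROM USER_TABLES;

-- Tables the current user can access, and who owns them
SELECT owner, table_name FROM ALL_TABLES;

-- Every table in the database (requires DBA-level access)
SELECT owner, table_name FROM DBA_TABLES;
```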
By default, users with the SELECT ANY TABLE privilege can query the dictionary views. If you wish to protect the dictionary from non-DBA users, set the initialization parameter O7_DICTIONARY_ACCESSIBILITY (the first character is the letter O) to FALSE. Then only users who connect to the database as SYSDBA can access the dictionary.
You can use the data dictionary information to generate the source code for the objects created in the database. For example, information on tables is available in the dictionary views DBA_TABLES and DBA_TAB_COLUMNS (or the corresponding ALL_ and USER_ views).
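For example, the column definitions needed to reconstruct a table's DDL can be pulled from USER_TAB_COLUMNS; EMP here is the demo table installed by utlsampl.sql.

```sql
-- Column name, data type, length, and nullability for EMP
SELECT column_name, data_type, data_length, nullable
FROM   USER_TAB_COLUMNS
WHERE  table_name = 'EMP'
ORDER  BY column_id;
```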
The dictionary view DICTIONARY contains the names and descriptions of all the data dictionary views in the database. DICT is a synonym for the DICTIONARY view. The DICT_COLUMNS dictionary view contains the description of all columns in the dictionary views. If you want to know all the dictionary views that provide information about tables, you can run a query similar to the following.

SQL> COL TABLE_NAME FORMAT A25
SQL> COL COMMENTS FORMAT A40
SQL> SELECT * FROM DICT WHERE TABLE_NAME LIKE '%TAB%';
The dictionary views ALL_OBJECTS, DBA_OBJECTS, and USER_OBJECTS provide information about the objects in the database. These views contain the timestamp of object creation and the last DDL timestamp. The STATUS column shows whether the object is invalid; this is especially useful for PL/SQL stored programs and views.
Query the data dictionary view PRODUCT_COMPONENT_VERSION or V$VERSION to see the version of the database and installed components. Oracle product versions have five numbers. In 8.1.6.0.0, for example, 8 is the version, 1 is the new-features release, 6 is the maintenance release, 0 is the generic patch set number, and the final 0 is the platform-specific patch set number.
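For example:

```sql
-- Version of the server and each installed component
SELECT * FROM PRODUCT_COMPONENT_VERSION;

-- Similar information, one banner line per component
SELECT banner FROM V$VERSION;
```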
Database Configuration Assistant
The Database Configuration Assistant (DCA) is Oracle's GUI DBA tool for creating, modifying, or deleting a database. After you answer a few questions, this tool can either create a database or give you the scripts to create the database. It is a good idea to generate the scripts by using this tool, customize the script files if needed, and then create the database. You have the option of creating the database with a shared server configuration (MTS) or a dedicated server configuration. You can also choose the additional options you wish to install in the database. The options, their purpose, and the script to run if created manually are shown in Table 3.2.
TABLE 3.2  Database Creation—Installation Options

Oracle Time Series (ordinst.sql, tsinst.sql)  Provides storage and retrieval of time-stamped data through object types. Contains data types along with related functions for managing and processing time series data.

Oracle Spatial (catmd.sql)  Set of functions and procedures to store, access, and analyze spatial data.

Oracle JServer (initjvm.sql)  Includes support for Java stored procedures, Java Database Connectivity (JDBC), SQL Java (SQLJ), Common Object Request Broker Architecture (CORBA), and Enterprise JavaBeans (EJB).

Oracle interMedia (ordinst.sql, iminst.sql)  Enables Oracle8i to manage images, audio, and video integrated with other enterprise information, through object types.

Oracle Visual Information Retrieval (ordinst.sql, virinst.sql)  Provides image storage, content-based retrieval, and format conversion capabilities through object types.

Advanced Replication (catrep.sql)  Enables copying and maintaining data in multiple databases.

SQL*Plus Help (helpbld.sql)  Installs SQL*Plus help.
If you choose the typical installation option, you have only a few questions to answer. You also have the option of copying a preconfigured database or creating a new database. The tool generates the initialization parameters based on the type of database you create; the options are Online Transaction Processing (OLTP), Data Warehousing, or Multipurpose. If you choose the custom installation option, you have full control of the file locations, database limits, and tablespace sizes. Figure 3.3 shows the tablespace configuration screen of the DCA.
FIGURE 3.3  Database Configuration Assistant
The script generated by the DCA does the following (and is a good template for creating new databases):

1. Creates a parameter file, starts up the database in NOMOUNT mode, and creates the database by using the CREATE DATABASE command.

2. Runs catalog.sql.

3. Creates tablespaces for tools (TOOLS), rollback (RBS), temporary (TEMP), user (USERS), and index (INDX) segments.

4. Creates public rollback segments in the rollback tablespace and puts them online.

5. Changes the temporary tablespace for SYS and SYSTEM to TEMP.

6. Runs the following scripts:

   a. catproc.sql: sets up PL/SQL.

   b. caths.sql: installs the heterogeneous services (HS) data dictionary, providing the ability to access non-Oracle databases from the Oracle database. HS is an integrated component of the 8i database.

   c. otrcsvr.sql: sets up the Oracle Trace server stored procedures.
   d. utlsampl.sql: sets up the sample user SCOTT and creates the demo tables.

   e. pupbld.sql: creates the product and user profile tables. This script is run as SYSTEM.

7. Runs the scripts necessary to install the options chosen. The options and scripts are listed in Table 3.2.
Database Event Triggers
Triggers are PL/SQL program units that are executed when an event occurs. Triggers are stored in the database in compiled form. Before Oracle8i, triggers were always associated with tables, to enforce complex data integrity or to perform an additional action. In Oracle8i, you can also create database event triggers. There are two types of events:
System events
Database start-up (STARTUP) and shutdown (SHUTDOWN)
Server errors (SERVERERROR)
User events
User login (LOGON) and logout (LOGOFF)
DDL statements (CREATE, DROP, ALTER commands)
DML statements (INSERT, UPDATE, DELETE commands; these are the types of triggers available prior to Oracle8i)
Event attributes are accessible via variables that can be used in the triggers. The attributes available depend on the type of trigger and the triggering event. Triggers can be created at the database level (ON DATABASE) for events such as start-up and shutdown; triggers can be created at the schema level (ON <schema>.SCHEMA) for events such as table creation operations (DDL); triggers on DML statements can be created only on tables.

The STARTUP trigger fires after the database is opened. The SHUTDOWN trigger fires before the database is closed. The STARTUP and SHUTDOWN triggers can be associated only with the database. SERVERERROR triggers can be created to fire after a specific error occurs or after any error occurs; these triggers can be associated with the database or with a schema.
LOGON and LOGOFF triggers can be associated with the database or with a schema. Database-level LOGON triggers fire after any user successfully logs in to the database; schema-level LOGON triggers fire after that user (the schema owner) successfully logs in. LOGOFF triggers fire before the user disconnects from the database. The LOGOFF trigger is executed in the same transaction in which the trigger is fired, but the LOGON trigger creates a new transaction.

DDL event triggers can be created to fire BEFORE or AFTER the command execution. These triggers can be created at the database level or at the schema level. The following DDL statements can fire a trigger:
CREATE, ALTER, DROP, TRUNCATE
ASSOCIATE STATISTICS, DISASSOCIATE STATISTICS
ANALYZE, COMMENT, RENAME
AUDIT, NOAUDIT
GRANT, REVOKE
DDL—fires when any of the preceding DDL statements is issued
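As an illustration of a user event trigger, the following sketch records every login in an audit table; the table SYSTEM.LOGON_AUDIT is hypothetical and is assumed to exist.

```sql
-- Fires after any successful login; assumes the table
-- SYSTEM.LOGON_AUDIT (USERNAME, LOGON_TIME) already exists
CREATE OR REPLACE TRIGGER log_user_logon
AFTER LOGON ON DATABASE
BEGIN
  INSERT INTO system.logon_audit (username, logon_time)
  VALUES (ora_login_user, SYSDATE);
END;
/
```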
INSERT, DELETE, and UPDATE triggers can be created to fire BEFORE or AFTER the DML operation. These triggers can be created only at the table level (INSTEAD OF triggers can be created on views). You can create these triggers to fire for each affected row in the operation or to fire once for the statement. Following is an example of triggers created to populate a timestamp table with database start-up and shutdown information.

SQL> CREATE OR REPLACE TRIGGER DB_START_TIME
     AFTER STARTUP ON DATABASE
     BEGIN
       INSERT INTO SYSTEM.START_SHUT_TIME
         (START_OR_SHUT, USERNAME, TIMESTAMP)
       VALUES (ora_sysevent, ora_login_user, sysdate);
     END;
     /

Trigger created.
SQL> CREATE OR REPLACE TRIGGER DB_SHUT_TIME
     BEFORE SHUTDOWN ON DATABASE
     BEGIN
       INSERT INTO SYSTEM.START_SHUT_TIME
         (START_OR_SHUT, USERNAME, TIMESTAMP)
       VALUES (ora_sysevent, ora_login_user, sysdate);
     END;
     /

Trigger created.
Query the DBA_TRIGGERS, USER_TRIGGERS, or ALL_TRIGGERS view to get information about triggers. The STATUS column shows whether the trigger is ENABLED or DISABLED. If a trigger is DISABLED, it does not fire when the event occurs. You can disable a trigger by using the command ALTER TRIGGER <trigger_name> DISABLE.
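For example, to check and then disable the start-up trigger created earlier:

```sql
-- Check the status of the trigger
SELECT trigger_name, status
FROM   DBA_TRIGGERS
WHERE  trigger_name = 'DB_START_TIME';

-- Disable it, then re-enable it
ALTER TRIGGER db_start_time DISABLE;
ALTER TRIGGER db_start_time ENABLE;
```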
Summary
In this chapter, you learned how to create a database. The Oracle database is created by using the command CREATE DATABASE. This command runs the sql.bsq script, which in turn creates the data dictionary base tables. The three parameters that you should pay particular attention to before creating a database are CONTROL_FILES, DB_NAME, and DB_BLOCK_SIZE. You cannot change the block size of the database once it is created.

Running the script catalog.sql creates the data dictionary views. DBAs and users use these views to query information from the data dictionary. The data dictionary is a set of tables and views that hold the metadata. The views prefixed with DBA_ are accessible only to the DBA or to users with the SELECT_CATALOG_ROLE role. The views prefixed with ALL_ have information about all objects in the database on which the user has any privilege. The USER_ views show information about the objects that the user owns.
Before creating the database, you need to make sure that you have enough resources, such as disk space and memory. You also need to prepare the OS by setting the resource parameters, if any, and making sure that you have enough privileges.

The Optimal Flexible Architecture (OFA) is a set of guidelines specified by Oracle to better manage the Oracle software and the database. OFA enforces a consistent naming convention as well as separate locations for the Oracle software, database, and administration files.

PL/SQL has several administrative packages that are useful to the DBA as well as to developers. These packages are installed by running the script catproc.sql. Most of the administrative scripts are located under the directory $ORACLE_HOME/rdbms/admin.

You can create event triggers to audit certain database operations or to strengthen security. The triggering events are STARTUP, SHUTDOWN, SERVERERROR, LOGON, LOGOFF, or any DDL statement.
Key Terms

Before you take the exam, make sure you're familiar with the following terms:

Optimal Flexible Architecture (OFA)
ORACLE_HOME
ORACLE_SID
ORA_NLS33
DUAL
procedure
package
database event triggers
DICTIONARY
Review Questions

1. How many control files are required to create a database?
   A. One
   B. Two
   C. Three
   D. None

2. Which environment variable or registry entry variable is used to represent the instance name?
   A. ORA_SID
   B. INSTANCE_NAME
   C. ORACLE_INSTANCE
   D. ORACLE_SID

3. Complete the following sentence: The recommended configuration for control files is
   A. One control file per database
   B. One control file per disk
   C. Two control files on two disks
   D. Two control files on one disk

4. You have specified the LOGFILE clause in the CREATE DATABASE command as follows. What happens if the size of the log file redo0101.log, which already exists, is 10MB?

   LOGFILE GROUP 1 ('/oradata02/PROD01/redo0101.log',
                    '/oradata03/PROD01/redo0102.log') SIZE 5M
   REUSE, GROUP 2 ('/oradata02/PROD01/redo0201.log',
                   '/oradata03/PROD01/redo0202.log') SIZE 5M REUSE

   A. Oracle adjusts the size of all the redo log files to 10MB.
   B. Oracle creates all the redo log files as 5MB.
   C. Oracle creates all the redo log files as 5MB except redo0101.log, which is created as 10MB.
   D. The command fails.

5. Which command should be issued before you can execute the CREATE DATABASE command?
   A. STARTUP INSTANCE
   B. STARTUP NOMOUNT
   C. STARTUP MOUNT
   D. None of the above

6. Which initialization parameter cannot be changed after creating the database?
   A. DB_BLOCK_SIZE
   B. DB_NAME
   C. CONTROL_FILES
   D. None; all the initialization parameters can be changed as and when required.

7. What does OFA stand for?
   A. Oracle File Allocations
   B. Oracle File Architecture
   C. Optimal Flexible Architecture
   D. Optimal File Architecture
8. When creating a database, where does Oracle find information about the control files that need to be created?
   A. From the initialization parameter file
   B. From the CREATE DATABASE command line
   C. From the environment variable
   D. Files created under $ORACLE_HOME, with the name derived from <database name>.ctl

9. Which script creates the data dictionary views?
   A. catalog.sql
   B. catproc.sql
   C. sql.bsq
   D. dictionary.sql

10. Which prefix for the data dictionary views indicates that the contents of the view belong to the current user?
   A. ALL_
   B. DBA_
   C. USR_
   D. USER_

11. Which data dictionary view shows information about the status of a procedure?
   A. DBA_SOURCE
   B. DBA_OBJECTS
   C. DBA_PROCEDURES
   D. DBA_STATUS
12. How do you correct a procedure that has become invalid when one of the tables it refers to was altered to drop a constraint?
   A. Re-create the procedure
   B. ALTER PROCEDURE <procedure_name> RECOMPILE
   C. ALTER PROCEDURE <procedure_name> COMPILE
   D. VALIDATE PROCEDURE <procedure_name>

13. Which of the following event triggers cannot be created at the database level?
   A. SERVERERROR
   B. CREATE
   C. AUDIT
   D. INSERT

14. How many data files can be specified in the DATAFILE clause when creating a database?
   A. One.
   B. Two.
   C. More than one; only one will be used for the SYSTEM tablespace.
   D. More than one; all will be used for the SYSTEM tablespace.

15. Who owns the data dictionary?
   A. SYS
   B. SYSTEM
   C. DBA
   D. ORACLE

16. What is the default password for the SYS user?
   A. MANAGER
   B. CHANGE_ON_INSTALL
   C. SYS
   D. There is no default password.
17. Which data dictionary view provides information on the version of the database and installed components?
   A. DBA_VERSIONS
   B. PRODUCT_COMPONENT_VERSION
   C. PRODUCT_VERSIONS
   D. ALL_VERSION

18. What is the prefix for dynamic performance views?
   A. DBA_
   B. X$
   C. V$
   D. X#

19. Which is an invalid clause in the CREATE DATABASE command?
   A. MAXLOGMEMBERS
   B. MAXLOGGROUPS
   C. MAXDATAFILES
   D. MAXLOGHISTORY

20. Which optional component in the database creation process sets up functions and procedures to store, access, and analyze data needed for Geographical Information Systems (GIS)?
   A. Oracle JServer
   B. Oracle Time Series
   C. Oracle Advanced Replication
   D. Oracle Spatial
Answers to Review Questions

1. D. You do not need any control files to create a database; the control files are created when you create the database, based on the filenames specified in the CONTROL_FILES parameter of the parameter file.

2. D. The ORACLE_SID environment variable is used to represent the instance name. When you connect to the database without specifying a connect string, Oracle connects you to this instance.

3. C. Oracle allows multiplexing of control files. If you have two control files on two disks, one disk failure will not damage both control files.

4. D. The CREATE DATABASE command fails. For you to use the REUSE clause, the file that exists must be the same size as the size specified in the command.

5. B. You must start up the instance to create the database. Connect to the database by using the SYSDBA privilege and start up the instance by using the command STARTUP NOMOUNT.

6. A. The block size of the database cannot be changed after database creation. The database name can be changed after re-creating the control file with the new name, and the CONTROL_FILES parameter can be changed if the files are copied to a new location.

7. C. Optimal Flexible Architecture is a set of guidelines to organize the files related to the Oracle database and software for better management and performance.

8. A. The control file names and locations are obtained from the initialization parameter file. The parameter name is CONTROL_FILES. If this parameter is not specified, Oracle creates a control file; the location and name depend on the OS platform.

9. A. The catalog.sql script creates the data dictionary views. The base tables for these views are created by the script sql.bsq, which is executed when you issue the CREATE DATABASE command.
10. D. DBA_ prefixed views are accessible to the DBA or anyone with the SELECT_CATALOG_ROLE role; these views provide information on all the objects in the database and have an OWNER column. The ALL_ views show information about the structures that the user has access to. USER_ views show information about the structures owned by the user.

11. B. The DBA_OBJECTS dictionary view has information on the objects, their creation and modification timestamps, and their status.

12. C. An invalid procedure, trigger, package, or view can be recompiled by using the COMPILE clause of the appropriate ALTER command (for example, ALTER PROCEDURE procedure_name COMPILE).

13. D. DML event triggers must always be associated with a table or view. They cannot be created at the database level. All other event triggers can be created at the database level.

14. D. You can specify more than one data file; the files will be used for the SYSTEM tablespace. The files specified cannot exceed the number of data files specified in the MAXDATAFILES clause.

15. A. The SYS user owns the data dictionary. The SYS and SYSTEM users are created when the database is created.

16. B. The default password for SYS is CHANGE_ON_INSTALL, and for SYSTEM it is MANAGER. You should change these passwords once the database is created.

17. B. The dictionary view PRODUCT_COMPONENT_VERSION shows information about the database version. The view V$VERSION has the same information.

18. C. The dynamic performance views have a prefix of V$. The actual views have the prefix V_$, and the synonyms have a V$ prefix. The views are called dynamic performance views because they are continuously updated while the database is open and in use, and their contents relate primarily to performance.

19. B. MAXLOGGROUPS is an invalid clause; the maximum number of log file groups is specified using the clause MAXLOGFILES.
20. D. The Oracle Spatial component installs procedures and functions
needed to access spatial data. This option can be installed after creating the database by running the script catmd.sql.
Chapter 4
Control and Redo Log Files

ORACLE8i ARCHITECTURE AND ADMINISTRATION EXAM OBJECTIVES OFFERED IN THIS CHAPTER:
Maintaining the Control File
Explain the uses of the control file
List the contents of the control file
Multiplex the control file
Obtain control file information
Maintaining the Redo Log Files
Explain the use of online redo log files
Obtain log and archive information
Control log switches and checkpoints
Multiplex and maintain online redo log files
Plan online redo log files
Troubleshoot common redo log file problems
Analyze online and archived redo logs
Exam objectives are subject to change at any time without prior notice and at Oracle’s sole discretion. Please visit Oracle’s Training and Certification Web site (http://
education.oracle.com/certification/index.html) for the most current exam objectives listing.
This chapter discusses two important components of the Oracle database: the control file and the redo log files. The control file keeps information about the physical structure of the database. The redo log files record all changes made to data. These two files are critical for database recovery in case of a failure. You can multiplex both the control files and the redo log files. You will learn more about control files, putting the database in ARCHIVELOG mode, controlling checkpoints, and using the LogMiner utility to analyze the redo log files.
Maintaining the Control File
The control file can be thought of as a metadata repository for the physical database. It has the structure of the database—the data files and redo log files that constitute a database. The control file is a binary file, created when the database is created, and is updated with the physical changes whenever you add or rename a file. The control file is an important component of the database. It is updated continuously and should be available at all times. You should not edit the contents of the control file; only Oracle processes should update its contents. When you start up the database, Oracle uses the control file to identify the data files and redo log files and opens them. Control files play a major role when recovering a database. The contents of the control file include the following:
Database name to which the control file belongs. A control file can belong to only one database.
Database creation timestamp.
Data files—name, location, and online/offline status information.
Redo log files—name and location.
Tablespace names.
Current log sequence number, a unique identifier that is incremented and recorded when an online redo log file is switched.
Most recent checkpoint information. A checkpoint occurs when all the modified database buffers in the SGA are written to the data files. The SCN is also recorded in the control file against the data file name that is taken offline or made read-only.
Recovery Manager’s (RMAN’s) backup information. RMAN is Oracle’s tool used to back up and recover databases.
The control file size is determined by the MAX clauses you provide when you create the database—MAXLOGFILES, MAXLOGMEMBERS, MAXLOGHISTORY, MAXDATAFILES, and MAXINSTANCES. Oracle pre-allocates space for these maximums in the control file. Therefore, when you add or rename a file in the database, the control file size does not change. When a new file is added to the database or a file is relocated, the information is updated immediately in the control file by an Oracle server process. You should back up the control file after any structural changes. The log writer process (LGWR) updates the control file with the current log sequence number. The checkpoint process (CKPT) updates the control file with the recent checkpoint information. When the database is in ARCHIVELOG mode, the archiver process (ARCn) updates the control file with archiving information such as the archive log file name and log sequence number.
Multiplexing Control Files

Since the control file is critical for database operation, Oracle recommends a minimum of two control files. You duplicate the control file on different disks either by using the multiplexing feature of Oracle or by using the mirroring feature of your operating system. This section discusses the multiplexing feature. Multiplexing is defined as keeping a copy of the same control file in different locations. To multiplex the control file, copy it to multiple locations and change the CONTROL_FILES parameter in the initialization file to include all the control file names. The following syntax shows three multiplexed control files.
CONTROL_FILES = ('/ora01/oradata/MYDB/ctrlMYDB01.ctl',
                 '/ora02/oradata/MYDB/ctrlMYDB02.ctl',
                 '/ora03/oradata/MYDB/ctrlMYDB03.ctl')
By storing the control file on multiple disks, you avoid the risk of a single point of failure. When multiplexing control files, updates to the control file can take a little longer, but that is insignificant when compared to the benefits. If you lose one control file, you can restart the database after copying one of the other control files or after changing the CONTROL_FILES parameter in the initialization file.

When multiplexing control files, Oracle updates all the control files at the same time but uses only the first control file listed in the CONTROL_FILES parameter for reading. When creating a database, you can list the control file names in the CONTROL_FILES parameter, and Oracle creates as many control files as are listed. The maximum number of multiplexed control file copies you can have is eight. If you need to add more control file copies, do the following:

1. Shut down the database.

2. Copy the control file to more locations by using an OS command.

3. Change the initialization parameter file to include the new control file names in the parameter CONTROL_FILES.

4. Start up the database.

After creating the database, you can change the location of the control files, rename the control files, or drop certain control files. You must have at least one control file for each database. For any of these operations, you need to follow the preceding steps. Basically, you shut down the database, change the CONTROL_FILES parameter, use the OS copy command (if adding a control file), rename or drop the control files accordingly, and start up the database.
If you lose one of the control files, you can shut down the database, copy a control file, or change the CONTROL_FILES parameter and restart the database.
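As a sketch, the four steps for adding a control file copy might look like the following session on a Unix system. The paths and file names are hypothetical examples, and the OS copy runs from the shell, not from SQL:

```sql
-- 1. Shut down the database cleanly (connected AS SYSDBA):
SHUTDOWN IMMEDIATE

-- 2. Copy an existing control file to the new location with an OS command,
--    for example from the shell:
--    $ cp /ora01/oradata/MYDB/ctrlMYDB01.ctl /ora04/oradata/MYDB/ctrlMYDB04.ctl

-- 3. Edit the initialization parameter file so CONTROL_FILES lists the
--    new copy along with the existing ones:
--    CONTROL_FILES = ('/ora01/oradata/MYDB/ctrlMYDB01.ctl',
--                     '/ora02/oradata/MYDB/ctrlMYDB02.ctl',
--                     '/ora03/oradata/MYDB/ctrlMYDB03.ctl',
--                     '/ora04/oradata/MYDB/ctrlMYDB04.ctl')

-- 4. Start the database back up:
STARTUP
```

The same sequence, with the copy step replaced by a rename or a delete and a matching CONTROL_FILES change, covers moving or dropping a control file copy.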
Creating New Control Files

You can create a new control file by using the CREATE CONTROLFILE command. This is required if you lose all the control files that belong to the
database or if you want to change any of the MAX clauses in the CREATE DATABASE command. You must know the data file names and redo log file names to create the control file. Here are the steps to follow to create the control file:

1. Prepare the CREATE CONTROLFILE command. You should have the complete list of data files and redo log files. If you omit any data files, they can no longer be a part of the database. The following is an example of the CREATE CONTROLFILE command.

   CREATE CONTROLFILE SET DATABASE "ORACLE" NORESETLOGS NOARCHIVELOG
       MAXLOGFILES 32
       MAXLOGMEMBERS 2
       MAXDATAFILES 32
       MAXINSTANCES 1
       MAXLOGHISTORY 1630
   LOGFILE
       GROUP 1 'C:\ORACLE\DATABASE\LOG2ORCL.ORA' SIZE 200K,
       GROUP 2 'C:\ORACLE\DATABASE\LOG1ORCL.ORA' SIZE 200K
   DATAFILE
       'C:\ORACLE\DATABASE\SYS1ORCL.ORA',
       'C:\ORACLE\DATABASE\USR1ORCL.ORA',
       'C:\ORACLE\DATABASE\RBS1ORCL.ORA',
       'C:\ORACLE\DATABASE\TMP1ORCL.ORA',
       'C:\ORACLE\DATABASE\APPDATA1.ORA',
       'C:\ORACLE\DATABASE\APPINDX1.ORA';
The options in this command are similar to those of the CREATE DATABASE command, discussed in Chapter 3, "Creating a Database and Data Dictionary." The NORESETLOGS option specifies that the online redo log files should not be reset.

2. Shut down the database.

3. Start up the database with the NOMOUNT option. Remember, to mount the database, Oracle needs to open the control file.

4. Create the new control file with a command similar to the preceding example. The control files will be created using the names and locations specified in the initialization parameter CONTROL_FILES.

5. Open the database by using the ALTER DATABASE OPEN command.
6. Shut down the database and take a backup of the database.

The steps provided here are very basic. Depending on the situation, you may have to perform additional steps. Detailing all the steps that might be required to create a control file, and the options for opening a database, are beyond the scope of this book.
You can generate the CREATE CONTROLFILE command from the current database by using the command ALTER DATABASE BACKUP CONTROLFILE TO TRACE. The control file creation script is written to the USER_DUMP_DEST directory.
After creating the control file, you should determine whether any of the data files listed in the dictionary are missing from the control file. If you query the V$DATAFILE view, the missing files will have the name MISSINGnnnn. If you created the control file by using the RESETLOGS option, the missing data files cannot be added back to the database. If you created the control file with the NORESETLOGS option, the missing data files can be included in the database by performing a media recovery. You can back up the control file while the database is up by using the command ALTER DATABASE BACKUP CONTROLFILE TO '' REUSE; Oracle recommends backing up the control file whenever you make a change to the database structure, such as adding data files, renaming files, or dropping redo log files.
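As a sketch, the two checks just described might look like this in a SQL*Plus session connected AS SYSDBA (the trace file is written to the directory named by USER_DUMP_DEST; the session itself is a hypothetical example):

```sql
-- Write a CREATE CONTROLFILE script for the current database to a trace
-- file in USER_DUMP_DEST; this serves as a readable backup of the
-- control file layout.
ALTER DATABASE BACKUP CONTROLFILE TO TRACE;

-- After re-creating a control file, look for data files the control file
-- does not know about; they appear with generated MISSINGnnnn names.
SELECT FILE#, NAME, STATUS
FROM   V$DATAFILE
WHERE  NAME LIKE '%MISSING%';
```

A query returning no rows means every data file in the dictionary was accounted for in the CREATE CONTROLFILE command.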
Querying Control File Information

The Oracle data dictionary holds all the information about the control file. The view V$CONTROLFILE lists the names of the control files for the database. The STATUS column should always be NULL; when a control file is missing, the STATUS would be INVALID, but that should never occur, because when Oracle cannot update one of the control files, the instance crashes; you can start up the database only after copying a good control file.

SVRMGR> SELECT * FROM V$CONTROLFILE;
STATUS  NAME
------- ----------------------------------
        /ora01/oradata/MYDB/ctrlMYDB01.ctl
        /ora02/oradata/MYDB/ctrlMYDB02.ctl
        /ora03/oradata/MYDB/ctrlMYDB03.ctl
3 rows selected. SVRMGR>
The other data dictionary view that gives information about the control file is V$CONTROLFILE_RECORD_SECTION, which displays the control file record sections. The record type, record size, total records allocated, number of records used, and the index position of the first and last records are in this view. For a listing of the record types, record sizes, and usage, run the following query.

SQL> SELECT TYPE, RECORD_SIZE, RECORDS_TOTAL, RECORDS_USED
  2  FROM V$CONTROLFILE_RECORD_SECTION;

TYPE              RECORD_SIZE RECORDS_TOTAL RECORDS_USED
----------------- ----------- ------------- ------------
DATABASE                  192             1            1
CKPT PROGRESS            4084             1            0
REDO THREAD               104             1            1
REDO LOG                   72            32            3
DATAFILE                  180           254            8
FILENAME                  524           319           11
TABLESPACE                 68           254            8
RESERVED1                  56           254            0
RESERVED2                   1             1            0
LOG HISTORY                36          1815         1217
OFFLINE RANGE              56           291            0
ARCHIVED LOG              584            13            0
BACKUP SET                 40           408            0
BACKUP PIECE              736           510            0
BACKUP DATAFILE           116           563            0
BACKUP REDOLOG             76           107            0
DATAFILE COPY             660           519            0
BACKUP CORRUPTION          44           371            0
COPY CORRUPTION            40           408            0
DELETED OBJECT             20           408            0
PROXY COPY                852           575            0
RESERVED4                   1          8168            0

22 rows selected.
SQL>
There are other data dictionary views that read information from the control file. These dynamic performance views are listed in Table 4.1. These views can be accessed when the database is mounted, that is, before opening the database.
TABLE 4.1: Dictionary Views that Read from the Control File

View Name            Description
-------------------  -----------------------------------------------------------
V$ARCHIVED_LOG       Archive log information such as size, SCN, timestamp, etc.
V$BACKUP_DATAFILE    Contains filename, timestamp, etc. of the data files backed up using RMAN.
V$BACKUP_PIECE       Information about backup pieces, updated when using RMAN.
V$BACKUP_REDOLOG     Information about the archived log files backed up using RMAN.
V$BACKUP_SET         Information about complete, successful backups using RMAN.
V$DATABASE           Database information such as name, creation timestamp, archive log mode, SCN, log sequence number, etc.
V$DATAFILE           Information about the data files associated with the database.
V$DATAFILE_COPY      Information about data files copied during a hot backup or using RMAN.
V$DATAFILE_HEADER    Data file header information; the filename and status are obtained from the control file.
V$LOG                Online redo log group information.
V$LOGFILE            Files or members of the online redo log group.
V$THREAD             Information about the log files assigned to each instance.
Maintaining Redo Log Files
Redo logs record all changes to the database. The redo log buffer in the SGA is written to the redo log files periodically by the LGWR process. The redo log files are accessed and open during normal database operation; hence they are called the online redo log files. Every Oracle database must have at least two redo log files. The LGWR process writes to these files in a circular fashion. For example, say there are three online redo log files. The LGWR process writes to the first file, and when this file is full, it starts writing to the second file, then to the third file, and then again to the first file (overwriting the contents).

Online redo log files are filled with redo records. A redo record, also called a redo entry, is made up of a group of change vectors, each of which is a description of a change made to a single block in the database. Redo entries record data that you can use to reconstruct all changes made to the database, including the rollback segments. When you recover the database by using redo log files, Oracle reads the change vectors in the redo records and applies the changes to the relevant blocks.

Each database has its own online redo log groups. A log group may have one or more redo log members (each member is a single OS file). If you have a parallel server configuration, where multiple instances are mounted to one database, each instance has its own online redo thread; that is, the LGWR process of each instance writes to its own set of online redo log files, and Oracle keeps track of which instance the database changes came from. For single-instance configurations, there is only one thread, and that thread number is 1. The redo log file contains both committed and uncommitted transactions. Whenever a transaction is committed, a system change number is assigned to the redo records to identify the committed transaction.
The redo log group is referenced by an integer; you can specify the group number when you create the redo log files, either at the time of database creation or when you create the control file. It is also possible to change the redo log configuration (add/drop/rename files) by using database commands. The following example shows a CREATE DATABASE command.

CREATE DATABASE "MYDB01"
LOGFILE '/ora02/oradata/MYDB01/redo01.log' SIZE 10M,
        '/ora03/oradata/MYDB01/redo02.log' SIZE 10M;
Two log file groups are created here; the first file will be assigned to group 1, and the second file will be assigned to group 2. You can have more files in each group; this is known as the multiplexing of redo log files, which is discussed later in the chapter. You can specify any group number; the range is between 1 and MAXLOGFILES. Oracle recommends that all redo log groups be the same size. The following is an example of creating the log files by specifying the groups.

CREATE DATABASE "MYDB01"
LOGFILE GROUP 1 '/ora02/oradata/MYDB01/redo01.log' SIZE 10M,
        GROUP 2 '/ora03/oradata/MYDB01/redo02.log' SIZE 10M;
Log Switch

The LGWR process writes to only one redo log file group at any time. The file that is actively being written to is known as the current log file. The log files that are required for instance recovery are known as the active log files. The other log files are known as inactive. Instance recovery is performed automatically by Oracle when the instance is started, using the online redo log files. Instance recovery may be needed if you do not shut down the database properly or if your computer crashes.

The log files are written in a circular fashion. A log switch occurs when Oracle finishes writing to one file and starts writing to the next file. A log switch always occurs when the current redo log file is completely full and log writing must continue. You can force a log switch by using the ALTER SYSTEM command. A manual log switch may be necessary when performing maintenance on the redo log files.

ALTER SYSTEM SWITCH LOGFILE;
Whenever a log switch occurs, Oracle allocates a sequence number to the new redo log file before writing to it. As stated earlier, this number is known as the log sequence number. If there are lots of transactions or changes to the database, the log switches can occur too frequently. Size the redo log file appropriately to avoid frequent log switches. Oracle writes to the alert log file whenever a log switch occurs.
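You can watch the current, active, and inactive groups and their sequence numbers through the V$LOG view. The following is a minimal sketch; your group numbers, sizes, and sequence numbers will differ:

```sql
-- Show each redo log group, its log sequence number, and whether it is
-- the CURRENT, ACTIVE, or INACTIVE group.
SELECT GROUP#, SEQUENCE#, BYTES, MEMBERS, STATUS
FROM   V$LOG
ORDER  BY GROUP#;

-- Force a log switch, then re-run the query: CURRENT moves to the next
-- group, and that group's SEQUENCE# is one higher than the previous
-- current group's.
ALTER SYSTEM SWITCH LOGFILE;
```

Running the query before and after a forced switch is a quick way to confirm that log switches are happening at the rate you expect.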
Redo log files are written sequentially on the disk, so the I/O will be fast if there is no other activity on the disk (the disk head is always properly positioned). Keep the redo log files on a separate disk for better performance. If you have to store a data file on the same disk as the redo log file, do not put the SYSTEM, RBS, or any very active data or index tablespace file on this disk.
Checkpoint

A checkpoint is an event that flushes the modified data from the buffer cache to the disk and updates the control file and data files. The CKPT process updates the headers of the data files and control files; the actual blocks are written to the files by the DBWn process. A checkpoint is initiated when the redo log file is 90 percent full (90 percent of the size of the smallest log file) or when other values specified by certain parameters (discussed later in this section) are reached. You can force a checkpoint if needed. Forcing a checkpoint ensures that all changes to the database buffers are written to the data files on disk.

ALTER SYSTEM CHECKPOINT;

The size of the redo log affects the checkpoint performance. If the size of the redo log is small compared to the number of transactions, a log switch happens often, and so does the checkpoint. The DBWn process writes the dirty buffer blocks whenever a checkpoint occurs. This might reduce the time required for instance recovery, but it may affect runtime performance. Checkpoints can be adjusted using the following initialization parameters:

LOG_CHECKPOINT_INTERVAL  The checkpoint position in the redo log file cannot lag behind the end of the log file by more than the number of OS blocks specified by this parameter. (In release 8 and earlier, this parameter specified the frequency of checkpoints in terms of operating system blocks; after the LGWR wrote the specified number of blocks to the redo log file, a checkpoint was initiated.) This ensures that no more than a fixed number of blocks are read during instance recovery. For example, if the OS block size is 512 bytes, a 1MB redo log file is roughly 2000 OS blocks; setting the parameter to 500 keeps the checkpoint position within 500 blocks of the end of the log, so a checkpoint is initiated by the time the redo log file is 1500 blocks full. If this parameter value exceeds the actual redo log file size or is set to zero, then checkpoints occur only when a log switch occurs. The default value for this parameter is OS dependent.

LOG_CHECKPOINT_TIMEOUT  Sets the checkpoint position in the redo log file to a location where the end of the log file was this many seconds ago. (In release 8 and earlier, this parameter specified checkpoints to occur at specific intervals.) This ensures that no more than the specified number of seconds' worth of redo log blocks need to be read during instance recovery. The value for this parameter is specified in seconds. By default, the value is 1800 seconds for the Enterprise Edition of Oracle and 900 seconds for the Standard Edition. If a checkpoint previously initiated is not
complete, the checkpoint scheduled to occur because of this parameter will be delayed. Setting this parameter to zero eliminates time-based checkpoints.

Fast Start Checkpoint  Fast start checkpointing is a feature used to limit the number of dirty buffers in the buffer cache, thereby limiting the time needed for instance recovery. During instance recovery, Oracle has to read the data blocks that were changed from the redo logs and apply the changes. If Oracle has to do a lot of I/O to perform instance recovery, the recovery time will be longer. By setting the parameter FAST_START_IO_TARGET to the desired number of blocks, the instance recovery time can be controlled. This parameter limits the number of I/O operations that Oracle should have to perform during instance recovery. If the number of operations required for recovery at any point in time exceeds this limit, Oracle writes dirty buffers to disk until the number of I/O operations needed for instance recovery is reduced to the limit. You can disable the fast start checkpoint feature by setting the value of this parameter to zero. By default, it is the number of blocks in the buffer cache.
Setting the parameter LOG_CHECKPOINTS_TO_ALERT to TRUE logs each checkpoint activity to the alert log file, which is useful for determining whether checkpoints are occurring at the desired frequency.
To summarize, the factors affecting checkpoints are as follows:
A checkpoint can occur when the redo written reaches 90 percent of the size of the smallest log file.

The checkpoint position trails the end of the log file by no more than LOG_CHECKPOINT_TIMEOUT seconds.

The checkpoint position trails the end of the log file by no more than LOG_CHECKPOINT_INTERVAL OS blocks.

The maximum number of blocks that need to be processed during instance recovery is specified by setting FAST_START_IO_TARGET.
When any of the above conditions are applicable, the database writer takes into consideration all factors and uses a position of the redo log as the checkpoint target. This target will be close to the end of the redo log file, so that it satisfies all factors. If you see an error in the alert log file that checkpoints are not complete, add more redo log groups to the database. This gives more time for completing the checkpoints. Also make sure checkpoints are not occurring more frequently than needed.
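Taken together, these knobs might appear in an initialization parameter file like the fragment below. The values are illustrative only, not recommendations; tune them against your own redo volume and recovery-time requirements:

```
# init.ora fragment (hypothetical values, for illustration only)
LOG_CHECKPOINT_INTERVAL  = 10000  # checkpoint position lags end of log by at most 10000 OS blocks
LOG_CHECKPOINT_TIMEOUT   = 1800   # at most 1800 seconds' worth of redo to read at instance recovery
FAST_START_IO_TARGET     = 4000   # at most 4000 data block I/Os during instance recovery
LOG_CHECKPOINTS_TO_ALERT = TRUE   # record each checkpoint in the alert log file
```

With LOG_CHECKPOINTS_TO_ALERT set, the alert log shows each checkpoint, which lets you verify that the other three parameters are producing the checkpoint frequency you intended.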
Multiplexing Log Files

You can keep multiple copies of the online redo log file to safeguard against damage to these files. When multiplexing online redo log files, LGWR concurrently writes the same redo log information to multiple identical online redo log files, thereby eliminating a single point of redo log failure. All copies of the redo file have the same size and are known as a group, which is identified by an integer. Each redo log file in the group is known as a member. You must have at least two redo log groups for normal database operation.

When multiplexing redo log files, it is preferable to keep the members of a group on different disks, so that one disk failure will not affect the continuing operation of the database. If LGWR can write to at least one member of the group, database operation proceeds as normal, and an entry is written to the alert log file. If no members of a redo log group are available for writing, Oracle shuts down the instance. An instance recovery or media recovery may be needed to bring up the database.

You can create multiple copies of the online redo log files at the time of database creation. For example, the following statement creates two redo log file groups with two members each.

CREATE DATABASE "MYDB01"
LOGFILE GROUP 1 ('/ora02/oradata/MYDB01/redo0101.log',
                 '/ora03/oradata/MYDB01/redo0102.log') SIZE 10M,
        GROUP 2 ('/ora02/oradata/MYDB01/redo0201.log',
                 '/ora03/oradata/MYDB01/redo0202.log') SIZE 10M;
The maximum number of log file groups you can have is specified in the clause MAXLOGFILES, and the maximum number of members is specified in the clause MAXLOGMEMBERS. You can separate the filenames (members) by using a space or a comma.
Creating New Groups

You can create and add more redo log groups to the database by using the ALTER DATABASE command. The following statement creates a new log file group with two members.

ALTER DATABASE ADD LOGFILE GROUP 3
    ('/ora02/oradata/MYDB01/redo0301.log',
     '/ora03/oradata/MYDB01/redo0302.log') SIZE 10M;
If you omit the GROUP clause, Oracle assigns the next available number. For example, the following statement also creates a multiplexed group.

ALTER DATABASE ADD LOGFILE
    ('/ora02/oradata/MYDB01/redo0301.log',
     '/ora03/oradata/MYDB01/redo0302.log') SIZE 10M;

To create a new group without multiplexing, use the following statement.

ALTER DATABASE ADD LOGFILE
    '/ora02/oradata/MYDB01/redo0301.log' REUSE;
You can add more than one redo log group by using the ALTER DATABASE command—just use a comma to separate the groups.
If the redo log files you create already exist, you should use the REUSE option and need not specify the size. The new redo log size will be the same as that of the existing file.
Adding New Members

If you forgot to multiplex the redo log files at database creation or if you need to add more redo log members, you can do so by using the ALTER DATABASE command. When adding new members, you do not specify the file size, because all group members have the same size. If you know the group number, the following statement adds a member to group 2.

ALTER DATABASE ADD LOGFILE MEMBER
    '/ora04/oradata/MYDB01/redo0203.log' TO GROUP 2;

You can also add group members by specifying the names of the other members in the group, instead of specifying the group number. You should specify all the existing group members with this syntax.

ALTER DATABASE ADD LOGFILE MEMBER
    '/ora04/oradata/MYDB01/redo0203.log'
    TO ('/ora02/oradata/MYDB01/redo0201.log',
        '/ora03/oradata/MYDB01/redo0202.log');
Renaming Log Members

If you want to move a log file member from one disk to another, or you simply want a more meaningful name, you can rename a redo log member.
Before renaming the online redo log members, the new (target) online redo log files should exist. The SQL commands in Oracle change only the internal pointer in the control file to a new log file; they do not change or rename the OS file. You must use an OS command to rename or move the file. The steps required for renaming a log member are:

1. Shut down the database (a complete backup is recommended).

2. Copy/rename the redo log file member to the new location by using an OS command.

3. Start up the instance and mount the database (STARTUP MOUNT).

4. Rename the log file member in the control file. Use ALTER DATABASE RENAME FILE '' TO '';

5. Open the database (ALTER DATABASE OPEN).

6. Back up the control file.
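As a sketch, the rename sequence might look like the session below. The file names are hypothetical, and the OS move in step 2 is shown as a comment because it runs from the shell, not from SQL:

```sql
-- 1. SHUTDOWN IMMEDIATE   (take a complete backup first)

-- 2. From the shell, move the member to its new home, for example:
--    $ mv /ora03/oradata/MYDB01/redo0202.log /ora04/oradata/MYDB01/redo0202.log

-- 3. Mount the database without opening it:
STARTUP MOUNT

-- 4. Point the control file at the new location:
ALTER DATABASE RENAME FILE
    '/ora03/oradata/MYDB01/redo0202.log'
 TO '/ora04/oradata/MYDB01/redo0202.log';

-- 5. Open the database:
ALTER DATABASE OPEN;

-- 6. Back up the control file, for example by writing its creation
--    script to a trace file:
ALTER DATABASE BACKUP CONTROLFILE TO TRACE;
```

The RENAME FILE clause fails if the target file does not already exist at the new location, which is why the OS move must happen before the database is mounted and renamed.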
Dropping Redo Log Groups

You can drop a redo log group and its members by using the ALTER DATABASE command. Remember that you should have at least two redo log groups for the database to function normally. The group to be dropped should not be the active group or the current group; that is, you can drop only an inactive log file group. If the group to be dropped is not inactive, use the ALTER SYSTEM SWITCH LOGFILE command first. To drop log file group 3, use the following SQL statement.

ALTER DATABASE DROP LOGFILE GROUP 3;
When an online redo log group is dropped from the database, the operating system files are not deleted from disk. The control files of the associated database are updated to drop the members of the group from the database structure. After dropping an online redo log group, make sure that the drop is completed successfully, and then use the appropriate operating system command to delete the dropped online redo log files.
Dropping Redo Log Members

As with dropping a redo log group, you can drop only the members of an inactive redo log group. Also, if there are only two groups, the member to be dropped should not be the last member of its group. Each redo log group can have a different number of members,
Copyright ©2000 SYBEX , Inc., Alameda, CA
www.sybex.com
122
Chapter 4
Control and Redo Log Files
though it is not advised. For example, say you have three log groups, each having two members. If you drop a log member from group 2, and a failure occurs to the sole member of group 2, the database crashes. So even if you drop a member for maintenance reasons, you should make sure that all redo log groups have the same number of members. To drop the log member, use the DROP LOGFILE MEMBER clause of the ALTER DATABASE command. ALTER DATABASE DROP LOGFILE MEMBER ‘/ora04/oradata/MYDB01/redo0203.log’;
The OS file is not removed from the disk; only the control file is updated. You should use an OS command to delete the redo log file member from disk.
Archive Log Files

You know that online redo log files record all changes to the database. Oracle allows you to copy these log files to a different location or to an offline storage medium. The process of copying is called archiving, and the archiver process (ARCn) performs it. By archiving the redo log files, you can use them later to recover a database, to keep a standby database up to date, or to audit database activity with the LogMiner utility. When an online redo log file is full and LGWR starts writing to the next redo log file, ARCn copies the completed redo log file to the archive destination. It is possible to specify more than one archive destination. The LGWR process waits for the ARCn process to complete the copy operation before overwriting any online redo log file. When the filled redo log files are being copied to an archive destination, the database is said to be in ARCHIVELOG mode; if archiving is not enabled, the database is said to be in NOARCHIVELOG mode. For production systems, where you cannot afford to lose data, you must run the database in ARCHIVELOG mode so that in the event of a failure you can recover the database to the time of failure or to a point in time. This is achieved by restoring the database backup and applying the changes from the archived log files.
Setting the Archive Destination

The archive destination is specified in the initialization parameter file. You can change the archive destination parameters by using the ALTER SYSTEM
command during normal database operation. Following are the parameters associated with archive log destinations and the archiver process:

LOG_ARCHIVE_DEST Specifies the destination where archive log files are written. This location should be a valid directory on the server where the database is located. You can change the archiving location specified by this parameter by using ALTER SYSTEM SET LOG_ARCHIVE_DEST = '<directory>';

LOG_ARCHIVE_DUPLEX_DEST Specifies a second destination for the archive log files. This must also be a location on the server where the database is located. This destination can be either a must-succeed or a best-effort archive destination, depending on how many archive destinations must succeed; the minimum number of successful archive destinations is specified in the parameter LOG_ARCHIVE_MIN_SUCCEED_DEST. You can change the archiving location specified by this parameter by using ALTER SYSTEM SET LOG_ARCHIVE_DUPLEX_DEST = '<directory>';

LOG_ARCHIVE_DEST_n Using these parameters, you can specify up to five archiving destinations. These archive locations can be either on the local machine or on a remote machine where a standby database is located. When these parameters are used, you cannot use the LOG_ARCHIVE_DEST or LOG_ARCHIVE_DUPLEX_DEST parameters to specify the archiving location. The syntax for specifying this parameter in the initialization file is:
LOG_ARCHIVE_DEST_n = "<null_string>" | ((SERVICE = <service_name> | LOCATION = '<directory>') [MANDATORY | OPTIONAL] [REOPEN [= <seconds>]])
For example, LOG_ARCHIVE_DEST_1 = ((LOCATION='/archive/MYDB01') MANDATORY REOPEN = 60) specifies a location for the archive log files on the local machine at /archive/MYDB01. The MANDATORY clause specifies that writing to this location must succeed. The REOPEN clause specifies how long to wait before the next attempt to write to this location, if the first attempt did not succeed; the default value is 300 seconds. Here is another example, which applies the archive logs to a standby database on a remote computer:
LOG_ARCHIVE_DEST_2 = ((SERVICE=STDBY01) OPTIONAL REOPEN);
Here, STDBY01 is the Net8 connect string used to connect to the remote database. Since writing is optional, database activity continues even if ARCn cannot write the archive log file; because the REOPEN clause is specified, it tries the write operation again.

LOG_ARCHIVE_MIN_SUCCEED_DEST Specifies the minimum number of destinations the ARCn process must write successfully before it can overwrite the online redo log files. The default value of this parameter is 1. If you are using the LOG_ARCHIVE_DEST and LOG_ARCHIVE_DUPLEX_DEST parameters, setting this parameter to 1 makes LOG_ARCHIVE_DEST mandatory and LOG_ARCHIVE_DUPLEX_DEST optional; setting it to 2 makes writing to both destinations mandatory. If you are using the LOG_ARCHIVE_DEST_n parameters, the LOG_ARCHIVE_MIN_SUCCEED_DEST parameter cannot exceed the total number of enabled destinations, and if its value is less than the number of MANDATORY destinations, the parameter is ignored.

LOG_ARCHIVE_FORMAT Specifies the format of the archived redo log file names. You can provide a text string along with any of the predefined variables. The variables are:
%s  Log sequence number
%S  Log sequence number, zero filled
%t  Thread number
%T  Thread number, zero filled
For example, specifying LOG_ARCHIVE_FORMAT = 'arch_%t_%s' generates archive log file names such as arch_1_101, arch_1_102, arch_1_103, and so on, where 1 is the thread number and 101, 102, and 103 are log sequence numbers. Specifying the format as arch_%S generates file names such as arch_00000101, arch_00000102, and so on; the number of leading zeros depends on the operating system.

LOG_ARCHIVE_MAX_PROCESSES Specifies the maximum number of ARCn processes Oracle should start when starting up the database. By default, the value is 1.

LOG_ARCHIVE_START Specifies whether Oracle should enable automatic archiving. If this parameter is set to FALSE, none of the ARCn processes are started. You can override this parameter by using the commands ARCHIVE LOG START and ARCHIVE LOG STOP.
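Taken together, a minimal archiving setup in the initialization parameter file might look like the following sketch; the paths, the STDBY01 service name, and the parameter values are hypothetical examples, using the syntax shown above:

```
# Hypothetical init.ora fragment -- parameter names are from this section;
# the paths, service name, and values are examples only
LOG_ARCHIVE_START = TRUE
LOG_ARCHIVE_DEST_1 = ((LOCATION='/archive/MYDB01') MANDATORY REOPEN = 60)
LOG_ARCHIVE_DEST_2 = ((SERVICE=STDBY01) OPTIONAL REOPEN)
LOG_ARCHIVE_MIN_SUCCEED_DEST = 1
LOG_ARCHIVE_FORMAT = arch_%t_%s.arc
LOG_ARCHIVE_MAX_PROCESSES = 2
```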
Setting ARCHIVELOG

Specifying these parameters does not itself start the writing of archive log files; you must put the database in ARCHIVELOG mode to enable archiving of the redo log files. You can specify the ARCHIVELOG clause while creating the database, but most DBAs prefer to create the database first and then enable ARCHIVELOG mode. To enable ARCHIVELOG mode, follow these steps:
1. Shut down the database, and set up the appropriate initialization parameters.
2. Start up and mount the database.
3. Enable ARCHIVELOG mode by using the command ALTER DATABASE ARCHIVELOG.
4. Open the database by using ALTER DATABASE OPEN.
To disable ARCHIVELOG mode, follow these steps:
1. Shut down the database.
2. Start up and mount the database.
3. Disable ARCHIVELOG mode by using the command ALTER DATABASE NOARCHIVELOG.
4. Open the database by using ALTER DATABASE OPEN.
You can enable automatic archiving by setting the parameter LOG_ARCHIVE_START = TRUE. If you set the parameter to FALSE, Oracle does not start the ARCn processes; therefore, when the redo log files are full, the database will hang, waiting for the redo log files to be archived. You can then start archiving by using the command ARCHIVE LOG START, which starts the ARCn processes.
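The steps for enabling ARCHIVELOG mode can be sketched as a SYSDBA session; this assumes the archiving parameters have already been set in the parameter file:

```sql
-- 1. Shut down cleanly (archiving parameters already set in init.ora)
SHUTDOWN IMMEDIATE

-- 2. Start the instance and mount, but do not open, the database
STARTUP MOUNT

-- 3. Switch the database to ARCHIVELOG mode
ALTER DATABASE ARCHIVELOG;

-- 4. Open the database for normal use
ALTER DATABASE OPEN;

-- Verify the result
ARCHIVE LOG LIST
```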
Clearing an Online Redo Log File

You can reinitialize an online redo log file; that is, clear all the contents of the redo log file. This is equivalent to dropping the redo log file and adding it back, and you can clear a log file even if there are only two groups in the database. The CLEAR LOGFILE clause of the ALTER DATABASE command is used for this purpose. This command is rarely used, and you should be careful when issuing it, because after clearing an unarchived log file, your existing backups may no longer be usable for complete media recovery. Use this statement to clear a corrupted redo log file. If the redo log file has not been archived, you must use the keyword UNARCHIVED. Do not use CLEAR LOGFILE to clear a log file that is needed for media recovery. For example:
ALTER DATABASE CLEAR UNARCHIVED LOGFILE GROUP 2;
Querying Log and Archive Information

You can query redo log file information with the SQL command ARCHIVE LOG LIST or from the dynamic performance views. The ARCHIVE LOG LIST command shows whether the database is in ARCHIVELOG mode, whether automatic archiving is enabled, the archival destination, and the oldest, next, and current log sequence numbers.
SQL> archive log list
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            C:\Oracle\oradata\ORADB02\archive
Oldest online log sequence     194
Next log sequence to archive   196
Current log sequence           196
SQL>
The view V$DATABASE shows whether the database is in ARCHIVELOG mode or in NOARCHIVELOG mode.
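For example, a quick check of the mode, which V$DATABASE exposes in its LOG_MODE column:

```sql
-- Returns ARCHIVELOG or NOARCHIVELOG
SELECT log_mode FROM v$database;
```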
V$LOG

This dynamic performance view has information about the log file groups, their sizes, and their statuses. The valid status codes in this view and their meanings are as follows:
UNUSED  New log group, never used.
CURRENT  The current log group.
ACTIVE  Log group that may be required for instance recovery.
CLEARING  You issued an ALTER DATABASE CLEAR LOGFILE command.
CLEARING_CURRENT  Empty log file after issuing the ALTER DATABASE CLEAR LOGFILE command.
INACTIVE  The log group is not needed for instance recovery.
Here is a query from V$LOG:
SQL> SELECT * FROM V$LOG;
    GROUP#    THREAD#  SEQUENCE#      BYTES    MEMBERS
---------- ---------- ---------- ---------- ----------
ARCHIVED STATUS           FIRST_CHANGE# FIRST_TIM
-------- ---------------- ------------- ---------
         1          1        196    1048576          2
NO       CURRENT                  56686 30-JUL-00

         2          1        194    1048576          2
YES      INACTIVE                 36658 28-JUL-00

         3          1        195    1048576          2
YES      INACTIVE                 36684 28-JUL-00

SQL>
V$LOGFILE

The V$LOGFILE view has information about the log group members. The filenames and group numbers are in this view. The STATUS column can have the value INVALID (file is not accessible), STALE (file's contents are incomplete), DELETED (file is no longer used), or blank (file is in use).
SQL> SELECT * FROM V$LOGFILE
  2  ORDER BY GROUP#;

    GROUP# STATUS  MEMBER
---------- ------- ------------------------------------
         1         C:\ORACLE\ORADATA\ORADB02\REDO11.LOG
         1         D:\ORACLE\ORADATA\ORADB02\REDO12.LOG
         2 STALE   C:\ORACLE\ORADATA\ORADB02\REDO21.LOG
         2         D:\ORACLE\ORADATA\ORADB02\REDO22.LOG
         3         C:\ORACLE\ORADATA\ORADB02\REDO31.LOG
         3         D:\ORACLE\ORADATA\ORADB02\REDO32.LOG

6 rows selected.

SQL>
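Because V$LOG and V$LOGFILE share the GROUP# column, you can join them to see each member alongside its group's status; a sketch:

```sql
-- One row per member, with the group status taken from V$LOG
SELECT f.group#, f.member, l.status
  FROM v$logfile f, v$log l
 WHERE f.group# = l.group#
 ORDER BY f.group#;
```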
V$THREAD

This view shows information about the threads in the database. For single-instance databases, there will be only one thread. It shows the instance
name, thread status, SCN status, log sequence numbers, timestamp of checkpoint, etc.
SQL> SELECT THREAD#, GROUPS, CURRENT_GROUP#, SEQUENCE#
  2  FROM V$THREAD;

   THREAD#     GROUPS CURRENT_GROUP#  SEQUENCE#
---------- ---------- -------------- ----------
         1          3              1        199

SQL>

V$LOG_HISTORY

This view contains the history of the log information. It has the log sequence number, the first and highest SCN for each log change, the control file ID, etc.
SQL> SELECT SEQUENCE#, FIRST_CHANGE#, NEXT_CHANGE#,
  2         TO_CHAR(FIRST_TIME,'DD-MM-YY HH24:MI:SS') TIME
  3  FROM V$LOG_HISTORY
  4  WHERE SEQUENCE# BETWEEN 50 AND 53;

 SEQUENCE# FIRST_CHANGE# NEXT_CHANGE# TIME
---------- ------------- ------------ -----------------
        50         22622        22709 28-07-00 19:15:22
        51         22709        23464 28-07-00 19:15:26
        52         23464        23598 28-07-00 19:15:33
        53         23598        23685 28-07-00 19:15:39

SQL>
V$ARCHIVED_LOG

This view displays archive log information, including archive filenames, the size of each file, the redo log block size, SCNs, timestamps, etc.
SQL> SELECT NAME, SEQUENCE#, FIRST_CHANGE#,
  2         NEXT_CHANGE#, BLOCKS, BLOCK_SIZE
  3  FROM V$ARCHIVED_LOG
  4  WHERE SEQUENCE# BETWEEN 193 AND 194;
NAME
----------------------------------------------------------
 SEQUENCE# FIRST_CHANGE# NEXT_CHANGE# BLOCKS BLOCK_SIZE
---------- ------------- ------------ ------ ----------
C:\ORACLE\ORADATA\ORADB02\ARCHIVE\ARCH_00193.ARC
       193         36549        36658    722        512
C:\ORACLE\ORADATA\ORADB02\ARCHIVE\ARCH_00194.ARC
       194         36658        36684     39        512

SQL>
V$ARCHIVE_DEST

This view has information about the five archive destinations, their statuses, any failures, etc. The STATUS column can have six values: INACTIVE (not initialized), VALID (initialized), DEFERRED (manually disabled by the DBA), ERROR (error during copy), DISABLED (disabled after error), and BAD PARAM (bad parameter value specified). The BINDING column shows whether the target is OPTIONAL or MANDATORY, and the TARGET column indicates whether the copy is to a PRIMARY or STANDBY database.
SQL> SELECT DESTINATION, BINDING, TARGET, REOPEN_SECS
  2  FROM V$ARCHIVE_DEST
  3  WHERE STATUS = 'VALID';

DESTINATION                   BINDING   TARGET  REOPEN_SECS
----------------------------- --------- ------- -----------
C:\ARCHIVE\ORADB02\archive    MANDATORY PRIMARY           0
D:\ARCHIVE\ORADB02\archive    OPTIONAL  PRIMARY         180

SQL>

V$ARCHIVE_PROCESSES

This view displays information about the state of the ten archiver processes (ARCn). The LOG_SEQUENCE is available only if the STATE is BUSY.
SQL> SELECT * FROM V$ARCHIVE_PROCESSES;
PROCESS STATUS     LOG_SEQUENCE STATE
------- ---------- ------------ -----
      0 ACTIVE                0 IDLE
      1 STOPPED               0 IDLE
      2 STOPPED               0 IDLE
      3 STOPPED               0 IDLE
      4 STOPPED               0 IDLE
      5 STOPPED               0 IDLE
      6 STOPPED               0 IDLE
      7 STOPPED               0 IDLE
      8 STOPPED               0 IDLE
      9 STOPPED               0 IDLE

10 rows selected.

SQL>
Analyzing Log Files

You can analyze redo log files (online and archived) by using LogMiner, a powerful tool that enables administrators to audit use of the database and to undo erroneous changes to data, without imposing any additional overhead on the database. You can also track the changes performed by a user or on an object, and collect statistics. You can analyze the log files belonging to any Oracle8 or Oracle8i database by using any Oracle8i database. LogMiner is a set of PL/SQL packages and dynamic views. LogMiner uses a dictionary file, which documents the database that created it as well as the time the file was created. Though the dictionary file is not mandatory, it is recommended because it makes the generated SQL and undo statements readable. If you do not create the dictionary file, the SQL statements will show object IDs instead of object names.
Creating a LogMiner Dictionary File

The dictionary file contains information about the objects in the database, mainly the object IDs and their names. You extract the dictionary information of the database whose log files you are analyzing to a text file. The steps required to create the dictionary file are as follows:
1. Shut down the database.
2. Make sure the initialization parameter UTL_FILE_DIR is set to a directory where you wish to create the dictionary file (for example, UTL_FILE_DIR = 'C:\ORACLE\LM').
3. Start up the instance and mount the database (execute STARTUP MOUNT after connecting to the database as SYSDBA).
4. Use the procedure DBMS_LOGMNR_D.BUILD to create the dictionary. Specify the dictionary filename and the directory where the file should be saved; this directory should be in the UTL_FILE_DIR location. Here is an example of creating the dictionary file under C:\ORACLE\LM, where DB01DICT.ora is the dictionary file name:
EXECUTE dbms_logmnr_d.build( dictionary_filename => 'DB01DICT.ora', dictionary_location => 'C:\ORACLE\LM');
The script dbmslogmnrd.sql, found under the rdbms/admin directory of the Oracle software installation, creates the DBMS_LOGMNR_D package. Though LogMiner is an Oracle8i utility, you can analyze the log files belonging to an Oracle8 database on an Oracle8i database. To do so, run this script on the Oracle8 database to create the package, create the dictionary file there, and then use that dictionary file on the Oracle8i database when starting LogMiner. Now let's see how to specify the log files that need to be analyzed.
Specifying Log Files

You must specify the fully qualified log file names to the LogMiner utility in order to analyze them. Use the package DBMS_LOGMNR to add or remove redo log files. You first start building the list of log files with the NEW option of this package. For example, if you need to analyze two log files named C:\ORACLE\LM\ARC034 and C:\ORACLE\LM\ARC035, do the following:
1. First, tell LogMiner that this is a new set of log files. Create the list of logs by specifying the NEW option in the DBMS_LOGMNR.ADD_LOGFILE procedure. Enter the following to specify the first log file:
execute dbms_logmnr.add_logfile( LogFileName => 'C:\ORACLE\LM\ARC034', Options => dbms_logmnr.NEW);
2. Add more logs by specifying the ADDFILE option. To add the second file, enter the following:
execute dbms_logmnr.add_logfile( LogFileName => 'C:\ORACLE\LM\ARC045', Options => dbms_logmnr.ADDFILE);
Oops, the filename was wrong. Remove the file by specifying the REMOVEFILE option. Enter the following to remove C:\ORACLE\LM\ARC045 and to add the correct filename:
execute dbms_logmnr.add_logfile( LogFileName => 'C:\ORACLE\LM\ARC045', Options => dbms_logmnr.REMOVEFILE);
execute dbms_logmnr.add_logfile( LogFileName => 'C:\ORACLE\LM\ARC035', Options => dbms_logmnr.ADDFILE);
Now that you have specified the log files, let’s start analyzing the files.
Starting LogMiner

When you start LogMiner, Oracle populates the V$LOGMNR_CONTENTS view, which shows all the changes made to the database, taken from the log files specified. The START_LOGMNR procedure of the DBMS_LOGMNR package is used to start the LogMiner utility. For example, to start LogMiner by using the dictionary file created earlier, issue:
execute dbms_logmnr.start_logmnr( DictFileName => 'C:\ORACLE\LM\DB01DICT.ora');
The START_LOGMNR procedure can take more parameters, which can be used to restrict the log file information based on a time interval or on the SCN of the database. You can set the STARTTIME and ENDTIME parameters to filter log entries by time. These parameters are DATE data types; hence, the TO_DATE function is used to specify the date and time. For example, to analyze the redo log entries generated on July 1, 2000 between 8:30 and 8:45 A.M.:
execute dbms_logmnr.start_logmnr( DictFileName => 'C:\ORACLE\LM\DB01DICT.ora', StartTime => to_date('010700 08:30:00', 'DDMMYY HH:MI:SS'), EndTime => to_date('010700 08:45:00', 'DDMMYY HH:MI:SS'));
If you know the SCNs generated, you can use the STARTSCN and ENDSCN parameters to filter data by SCN, as in this example:
execute dbms_logmnr.start_logmnr( DictFileName => 'C:\ORACLE\LM\DB01DICT.ora', StartScn => 100, EndScn => 150);
Viewing Results

The analyzed redo log file contents are available in the dynamic performance view V$LOGMNR_CONTENTS. Query this view to see the SQL statements, undo SQL statements, timestamp, SCN, tablespace name, object name, session information, rollback information, and username. For example, to query the changes made by user JAMES to the table OVERTIME, use this query:
SQL> SELECT sql_redo
  2  FROM v$logmnr_contents
  3  WHERE username = 'JAMES'
  4  AND seg_name = 'OVERTIME';

SQL_REDO
-------------------------------------------------
UPDATE OVERTIME SET HOURS = 20 WHERE NAME = 'JAMES';
The other LogMiner-related views are V$LOGMNR_DICTIONARY (displays which dictionary file is in use, the database name, and so on), V$LOGMNR_LOGS (displays the log file names, SCN information, timestamps, and so on), and V$LOGMNR_PARAMETERS (displays the parameters used to start the LogMiner utility).
The analyzed redo log information is stored in memory (the SGA); once you restart the database, the information is removed. To access the V$LOGMNR_CONTENTS view, you need to start LogMiner, that is, execute DBMS_LOGMNR.START_LOGMNR.
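When you have finished querying V$LOGMNR_CONTENTS, you can release the memory held by the LogMiner session with the END_LOGMNR procedure of the same DBMS_LOGMNR package; run it in the session that started LogMiner:

```sql
-- Release the resources allocated to the LogMiner session
execute dbms_logmnr.end_logmnr;
```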
Summary
In this chapter, we discussed two important components of the Oracle database—the control file and redo log files. The control file records information about the physical structure of the database along with the database name, tablespace names, log sequence number, checkpoint, and RMAN information. The size of the control file depends on the five MAX clauses you specify at the time of database creation: MAXLOGFILES, MAXLOGMEMBERS, MAXLOGHISTORY, MAXDATAFILES, and MAXINSTANCES.
Oracle provides a mechanism to multiplex the control file; the information is concurrently updated in all the control files. The CONTROL_FILES parameter in the parameter file specifies the control files at the time of database creation and afterward at database start-up. You can re-create the control file by specifying all the redo log files and data files. The V$CONTROLFILE view provides the names of the control files.

Redo log files record all changes to the database. The LGWR process writes the redo log buffer information from the SGA to the redo log files. Redo log files are organized into groups, and a group can have more than one member. A group with more than one member is known as a multiplexed group; the LGWR process writes to all the members of the group at the same time, so even if you lose one member, LGWR continues writing to the remaining members. The LGWR process writes to only one redo log group at any time. The file that is actively being written to is known as the current log file. The log files that are required for instance recovery are known as active log files; the other log files are known as inactive. A log switch occurs when Oracle finishes writing one file and starts the next; it always occurs when the current redo log file is completely full and writing must continue.

A checkpoint is an event that flushes the modified data from the buffer cache to disk and updates the control file and data files. The checkpoint process (CKPT) updates the headers of data files and control files; the actual blocks are written to the files by the DBWn process. You can manually initiate a log switch, which also triggers a checkpoint, by using the ALTER SYSTEM SWITCH LOGFILE command. By saving the redo log files to a different location (or to offline storage), you can recover a database or audit the redo log files. The ARCn process does the archiving when the database is in ARCHIVELOG mode.
The archive destination is specified in the initialization parameter file. The dynamic performance views V$LOG and V$LOGFILE provide information about the redo log files. The redo log files can be analyzed by using the LogMiner utility. You can use LogMiner to reverse a change made by a user, to audit database usage, or to collect statistics.
Key Terms

Before you take the exam, make sure you're familiar with the following terms:
log sequence number
multiplexing
V$CONTROLFILE
redo record
redo entry
change vectors
ARCHIVELOG
NOARCHIVELOG
V$LOGFILE
LogMiner
Review Questions

1. What is the best method to rename a control file?
A. Use the ALTER DATABASE RENAME FILE command.
B. Shut down the database, rename the control file by using an OS command, and restart the database after changing the CONTROL_FILES parameter.
C. Put the database in RESTRICTED mode and issue the ALTER DATABASE RENAME FILE command.
D. Shut down the database, change the CONTROL_FILES parameter, and start up the database.
E. Re-create the control file using the new name.

2. Which piece of information is not available in the control file?
A. Instance name
B. Database name
C. Tablespace names
D. Log sequence number

3. When you create a control file, the database has to be:
A. Mounted
B. Not mounted
C. Open
D. Restricted

4. Which data dictionary view provides the names of the control files?
A. V$DATABASE
B. V$INSTANCE
C. V$CONTROLFILES
D. None of the above
5. The initialization parameter file has LOG_CHECKPOINT_INTERVAL = 60; what does this mean? Choose the best answer.
A. A checkpoint occurs every 60 seconds.
B. A checkpoint occurs after writing 60 blocks.
C. When an instance recovery is required, Oracle need not read more than 60 blocks.
D. When an instance recovery is required, Oracle needs to read more than 60 seconds' worth of redo log blocks.

6. Which data dictionary view shows that the database is in ARCHIVELOG mode?
A. V$INSTANCE
B. V$LOG
C. V$DATABASE
D. V$THREAD

7. What is the biggest advantage of having the control files on different disks?
A. Database performance.
B. Guards against failure.
C. Faster archiving.
D. Writes are concurrent, so having control files on different disks speeds up control file writes.

8. Which file is used to record all changes made to the database and is used only when performing an instance recovery?
A. Archive log file
B. Redo log file
C. Control file
D. Alert log file
9. What will happen if ARCn could not write to a mandatory archive destination?
A. The database will hang.
B. The instance will shut down.
C. ARCn starts writing to LOG_ARCHIVE_DUPLEX_DEST if it is specified.
D. Oracle stops writing the archived log files.

10. How many ARCn processes can be associated with an instance?
A. Five
B. Four
C. Ten
D. Operating system dependent

11. Which one is an invalid status code in the V$LOGFILE view?
A. STALE
B. Blank
C. ACTIVE
D. INVALID

12. If you have two redo log groups with four members each, how many disks does Oracle recommend to keep the redo log files?
A. Eight
B. Two
C. One
D. Four
13. What will happen if you issue the following command?
ALTER DATABASE ADD LOGFILE ('/logs/file1' REUSE, '/logs/file2' REUSE);
A. Statement will fail, because the group number is missing
B. Statement will fail, because the log file size is missing
C. Creates a new redo log group, with two members
D. Adds two members to the current redo log group

14. Which two parameters cannot be used together to specify the archive destination?
A. LOG_ARCHIVE_DEST and LOG_ARCHIVE_DUPLEX_DEST
B. LOG_ARCHIVE_DEST and LOG_ARCHIVE_DEST_1
C. LOG_ARCHIVE_DEST_1 and LOG_ARCHIVE_DEST_2
D. None of the above; you can specify all the archive destination parameters with valid destination names.

15. Which package is not associated with the LogMiner utility?
A. DBMS_LOG_MINER
B. DBMS_LOGMNR
C. DBMS_LOGMNR_D

16. Querying which view will show whether automatic archiving is enabled?
A. V$ARCHIVE_LOG
B. V$DATABASE
C. V$PARAMETER
D. V$LOG
17. If you need to have your archive log files named with the log sequence numbers as arch_0000001, arch_0000002, and so on (zero filled, fixed width), what should be the value of the LOG_ARCHIVE_FORMAT parameter?
A. arch_%S
B. arch_%s
C. arch_000000%s
D. arch_%0%s

18. Following are the steps needed to rename a redo log file. Order them in the proper sequence.
A. Use an OS command to rename the redo log file.
B. Shut down the database.
C. ALTER DATABASE RENAME FILE 'oldfile' TO 'newfile'
D. STARTUP MOUNT
E. ALTER DATABASE OPEN
F. Back up the control file.

19. Which parameter is used to limit the number of dirty buffers in the buffer cache, thereby limiting the time required for instance recovery?
A. LOG_CHECKPOINT_TIMEOUT
B. LOG_CHECKPOINT_INTERVAL
C. LOG_CHECKPOINT_BLOCKS
D. FAST_START_IO_TARGET
20. Which statement will add a member /logs/redo22.log to log file group 2?
A. ALTER DATABASE ADD LOGFILE '/logs/redo22.log' TO GROUP 2;
B. ALTER DATABASE ADD LOGFILE MEMBER '/logs/redo22.log' TO GROUP 2;
C. ALTER DATABASE ADD MEMBER '/logs/redo22.log' TO GROUP 2;
D. ALTER DATABASE ADD LOGFILE '/logs/redo22.log';
Answers to Review Questions

1. B. To rename (or multiplex, or drop) a control file, you need to shut down the database, rename (or copy, or delete) the control file by using OS commands, change the parameter CONTROL_FILES in the initialization parameter file, and start up the database.

2. A. The instance name is not in the control file. The control file has information about the physical database structure.

3. B. The database should be in the NOMOUNT state to create a control file. When you mount the database, Oracle tries to open the control file to read the physical database structure.

4. D. The V$CONTROLFILE view shows the names of the control files in the database.

5. C. LOG_CHECKPOINT_INTERVAL ensures that no more than a specified number of redo log blocks (OS blocks) need to be read during instance recovery. LOG_CHECKPOINT_TIMEOUT ensures that no more than a specified number of seconds' worth of redo log blocks need to be read during instance recovery.

6. C. The V$DATABASE view shows whether the database is in ARCHIVELOG mode or in NOARCHIVELOG mode.

7. B. Having the control files on different disks ensures that even if you lose one disk, you lose only one control file. If you lose one of the control files, you can shut down the database, copy a control file, or change the CONTROL_FILES parameter and restart the database.

8. B. The redo log file records all changes made to the database. The LGWR process writes the redo log buffer entries to the redo log files. These entries are used to roll forward, or to update, the data files during an instance recovery. Archive log files are used for media recovery.
9. A. Oracle will write a message to the alert file, and all database operations will be stopped. Database operation resumes automatically after the archived log file is successfully written. If the archive destination becomes full, you can make room for archives either by deleting the archive log files after copying them to a different location or by changing the parameter to point to a different archive location.

10. C. You can have a maximum of ten archiver processes.

11. C. The STATUS column in V$LOGFILE can have the values INVALID (file is not accessible), STALE (file's contents are incomplete), DELETED (file is no longer used), or blank (file is in use).

12. D. Oracle recommends that you keep each member of a redo log group on a different disk. You should have a minimum of two redo log groups, and it is recommended to have two members in each group. The maximum number of redo log groups is determined by the MAXLOGFILES database parameter. The MAXLOGMEMBERS database parameter specifies the maximum number of members per group.

13. C. The statement creates a new redo log group with two members. When the GROUP option is specified, you must use an integer value. Oracle will automatically generate a group number if the GROUP option is not specified. Use the SIZE option if you are creating a new file. Use the REUSE option if the file already exists.

14. B. When using a LOG_ARCHIVE_DEST_n parameter, you cannot use the LOG_ARCHIVE_DEST or LOG_ARCHIVE_DUPLEX_DEST parameters to specify other archive locations. Using the LOG_ARCHIVE_DEST_n parameters, you can specify up to five archiving locations.

15. A. The package DBMS_LOGMNR is used to add and drop log files and to start the LogMiner utility. DBMS_LOGMNR_D is used to create the dictionary file.

16. C. Automatic archiving is enabled by setting the initialization parameter LOG_ARCHIVE_START = TRUE. All the parameter values can be queried using the V$PARAMETER view. The ARCHIVE LOG LIST command will also show whether automatic archiving is enabled.
Copyright ©2000 SYBEX , Inc., Alameda, CA
www.sybex.com
17. A. There are four formatting variables available to use with archive log file names: %s specifies the log sequence number; %S specifies the log sequence number, leading zero filled; %t specifies the thread; and %T specifies the thread, leading zero filled.

18. B, A, D, C, E, and F. The correct order is:
1. Shut down the database.
2. Use an OS command to rename the redo log file.
3. STARTUP MOUNT
4. ALTER DATABASE RENAME FILE 'oldfile' TO 'newfile'
5. ALTER DATABASE OPEN
6. Back up the control file.

19. D. By setting the parameter FAST_START_IO_TARGET to the desired number of blocks, the instance recovery time can be controlled. This parameter limits the number of I/O operations that Oracle should perform for instance recovery. If the number of operations required for recovery at any point in time exceeds this limit, Oracle writes dirty buffers to disk until the number of I/O operations needed for instance recovery is reduced to the limit.

20. B. When adding log file members, you should specify the group number or specify all the existing group members. Option D would create a new group with one member.
Chapter 5

Logical and Physical Database Structures

ORACLE8i ARCHITECTURE AND ADMINISTRATION EXAM OBJECTIVES OFFERED IN THIS CHAPTER:

- Describe the logical structure of the database
- Distinguish the different types of temporary segments
- Create tablespaces
- Change the size of tablespaces
- Allocate space for temporary segments
- Change the status of tablespaces
- Change the storage settings of tablespaces
- Relocate tablespaces
Exam objectives are subject to change at any time without prior notice and at Oracle's sole discretion. Please visit Oracle's Training and Certification Web site (http://education.oracle.com/certification/index.html) for the most current exam objectives listing.
This chapter covers the physical and logical data storage. Chapter 1 briefly discussed the physical and logical structures. Chapter 4 discussed two of the three components of the physical database structure: control files and redo log files. The third component of the physical structure is data files. Data files belong to logical units called tablespaces. In this chapter, you will learn to manage data files and tablespaces.
Tablespaces and Data Files
The database's data is stored logically in tablespaces and physically in the data files corresponding to the tablespaces. The logical storage management is independent of the physical storage of the data files. A tablespace can have more than one data file associated with it, but one data file belongs to only one tablespace. A database can have one or more tablespaces. Figure 5.1 shows the relationship between the database, tablespaces, data files, and the objects in the database. Any object (such as a table or an index) created in the database is stored in a single tablespace, but the object's physical storage can be on multiple data files belonging to that tablespace. A segment cannot be stored in multiple tablespaces.
FIGURE 5.1 Tablespaces and data files (diagram: the database contains tablespaces; each tablespace contains one or more data files, and the objects in a tablespace can span its data files)
The size of the tablespace is the total size of all the data files belonging to that tablespace. The size of the database is the total size of all tablespaces in the database, which is the total size of all data files in the database.

The smallest logical unit of storage in a database is a database block. The block size is defined at the time of database creation and cannot be altered. The database block size is a multiple of the operating system block size.

Changing the size of the data files belonging to a tablespace changes the size of that tablespace. You can add more space to a tablespace by adding more data files to it. You can add more space to the database by adding more tablespaces, by adding more data files to the existing tablespaces, or by increasing the size of the existing data files.

When you create a database, Oracle creates the SYSTEM tablespace. All the dictionary objects are stored in this tablespace. The data files specified at the time of database creation are assigned to the SYSTEM tablespace. You can add more space to the SYSTEM tablespace after creating the database by adding more data files or by increasing the size of its data files. The PL/SQL program units (such as procedures, functions, packages, and triggers) created in the database are also stored in the SYSTEM tablespace.
Oracle recommends not creating any objects other than the Oracle data dictionary in the SYSTEM tablespace. By having multiple tablespaces, you can:
Separate the Oracle dictionary from other database objects. This reduces contention between dictionary objects and database objects for the same data file.
Control I/O by allocating separate physical storage disks for different tablespaces.
Manage space quotas for users on tablespaces.
Have separate tablespaces for temporary segments (TEMP) and rollback segments (RBS). You can also create a tablespace for a specific activity—for example, high-update tables can be placed in a separate tablespace.
Group application-related or module-related data together, so that when maintenance is required for the application’s tablespace, only that tablespace need be taken offline, and the rest of the database is available for other users.
Back up the database one tablespace at a time.
Make part of the database read-only.
When you create a tablespace, Oracle creates the data files with the size specified. The space reserved for the data file is formatted but does not contain any user data. Whenever space is needed for objects, extents are allocated from this free space.
Managing Tablespaces
When Oracle allocates space to an object in a tablespace, it is allocated in chunks of contiguous database blocks known as extents. Each object is allocated a segment, which has one or more extents. (If the object is partitioned, each partition has its own segment. Partitions are discussed in Chapter 7, "Managing Tables, Indexes, and Constraints.") Oracle maintains extent information, such as extents free, extent size, and extents allocated, either in the data dictionary or in the tablespace itself.
If the extent management information is kept in the dictionary for a tablespace, that tablespace is called a dictionary-managed tablespace. Whenever an extent is allocated or freed, the information is updated in the corresponding dictionary tables. Such updates also generate rollback information. When the management information is kept in a tablespace itself, by using bitmaps in each data file, such a tablespace is known as a locally managed tablespace. Each bit in the bitmap corresponds to a block or a group of blocks. When an extent is allocated or freed for reuse, Oracle changes the bitmap values to show the new status of the blocks. These changes do not generate rollback information because they do not update tables in the data dictionary.
Creating a Tablespace

As the database grows bigger, it is better to have multiple tablespaces for easier management of database objects. You create a tablespace with the CREATE TABLESPACE command, specifying the tablespace name and at least one data file name. Optionally, you can specify the storage parameters. These parameters are used whenever a new object is created (whenever a new segment is allocated) in the tablespace. Storage parameters specified when an object is created override the default storage parameters of the tablespace containing the object; the tablespace defaults are used only when you create an object without specifying any storage parameters. You can specify the extent management clause when creating a tablespace. If you do not specify the extent management clause, Oracle creates a dictionary-managed tablespace. You can have both dictionary-managed and locally managed tablespaces in the same database.
The tablespace name cannot exceed 30 characters. The name should begin with an alphabetic character and may contain alphabetic characters, numeric characters, and the special characters #, _, and $.
Dictionary-Managed Tablespaces

Prior to Oracle8i, only dictionary-managed tablespaces were available. In dictionary-managed tablespaces, all extent information is stored in the data
dictionary. A simple example of a dictionary-managed tablespace creation command is as follows:

CREATE TABLESPACE APPL_DATA
DATAFILE '/disk3/oradata/DB01/appl_data01.dbf' SIZE 100M;

This statement creates a tablespace named APPL_DATA; the specified data file is created with a size of 100MB. You can specify more than one file in the DATAFILE clause, separated by commas; you may need to create more files if the OS limits the file size. For example, if you need 6GB allocated for the tablespace, and the OS allows a maximum file size of only 2GB, you need three data files for the tablespace. The statement will be as follows:

CREATE TABLESPACE APPL_DATA
DATAFILE '/disk3/oradata/DB01/appl_data01.dbf' SIZE 2000M,
         '/disk3/oradata/DB01/appl_data02.dbf' SIZE 2000M,
         '/disk4/oradata/DB01/appl_data03.dbf' SIZE 2000M;
The options available when creating and reusing a data file are discussed in the section "Managing Data Files" later in this chapter. The following statement shows tablespace creation using all the optional clauses:

CREATE TABLESPACE APPL_DATA
DATAFILE '/disk3/oradata/DB01/appl_data01.dbf' SIZE 100M
DEFAULT STORAGE (
    INITIAL 256K NEXT 256K
    MINEXTENTS 2 PCTINCREASE 0
    MAXEXTENTS 4096)
MINIMUM EXTENT 256K
LOGGING
ONLINE
PERMANENT
EXTENT MANAGEMENT DICTIONARY;
The clauses in the CREATE TABLESPACE command specify the following:

DEFAULT STORAGE Specifies the default storage parameters for new objects that are created in the tablespace. If an explicit storage clause is
specified when creating an object, the tablespace defaults are not used for the specified storage parameters. The storage parameters are specified within parentheses; no parameter is mandatory, but if you specify the DEFAULT STORAGE clause, you must specify at least one parameter inside the parentheses.

INITIAL Specifies the size of the object's (segment's) first extent.

NEXT Specifies the size of the segment's next and successive extents. The size is specified in bytes; you can also specify the size in kilobytes or megabytes by suffixing the size with K or M, respectively. The default value of INITIAL and NEXT is 5 database blocks. The minimum value of INITIAL is 2 database blocks, and of NEXT is 1 database block. Even if you specify sizes smaller than these values, Oracle allocates the minimum sizes.

PCTINCREASE Specifies how much the third and subsequent extents grow over the preceding extent. The default value is 50, meaning that each subsequent extent is 50 percent larger than the preceding extent. The minimum value is 0, meaning all extents after the first are the same size. For example, if the storage parameters are (INITIAL 1M NEXT 2M PCTINCREASE 0), the extent sizes are 1MB, 2MB, 2MB, 2MB, and so on. If you specify PCTINCREASE as 50, the extent sizes are 1MB, 2MB, 3MB, 4.5MB, 6.75MB, and so on. The actual NEXT extent size is rounded to a multiple of the block size.

MINEXTENTS Specifies the total number of extents allocated to the segment at the time of creation. This parameter enables you to allocate a large amount of space when you create an object, even if the available space is not contiguous. The default and minimum value is 1. When you specify MINEXTENTS greater than 1, the extent sizes are calculated based on NEXT and PCTINCREASE.

MAXEXTENTS Specifies the maximum number of extents that can be allocated to a segment. You can specify an integer or UNLIMITED. The minimum value is 1, and the default value depends on the database block size.
MINIMUM EXTENT Specifies that extent sizes should be a multiple of the size specified. This clause can be used to control fragmentation in the tablespace by allocating extents that are at least the specified size and are always a multiple of it. In the CREATE TABLESPACE example, all the extents allocated in the tablespace would be a multiple of 256KB. The INITIAL and NEXT extent sizes specified should be a multiple of MINIMUM EXTENT.
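As a sketch of how object-level storage parameters override the tablespace defaults (the table name and columns are hypothetical; the APPL_DATA tablespace is from the example above):

```sql
-- Hypothetical table placed in APPL_DATA; its STORAGE clause overrides
-- the tablespace's DEFAULT STORAGE for this segment only.
CREATE TABLE ORDERS (
    ORDER_ID   NUMBER(10),
    ORDER_DATE DATE
)
TABLESPACE APPL_DATA
STORAGE (INITIAL 1M NEXT 1M PCTINCREASE 0 MAXEXTENTS 100);
```

Any parameter not listed in the object's STORAGE clause (here, MINEXTENTS) still comes from the tablespace's defaults.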
LOGGING Specifies that DDL operations and direct-load INSERTs should be recorded in the redo log files. This is the default, and the clause can be omitted. When you specify NOLOGGING, data is modified with minimal logging, and hence the commands complete faster; because the changes are not recorded in the redo log files, you must apply the commands again in the case of a media recovery. You can specify LOGGING or NOLOGGING in the individual object creation statement, which overrides the tablespace default.

ONLINE Specifies that the tablespace is made online, or available, as soon as it is created. This is the default, and hence the clause can be omitted. If you do not want the tablespace to be available, you can specify OFFLINE.

PERMANENT Specifies that the tablespace is used to create permanent objects such as tables and indexes. This is the default and hence can be omitted. If you plan to use the tablespace for temporary segments (such as to handle sorts in SQL), you can mark the tablespace as TEMPORARY. Permanent objects such as tables or indexes cannot be created in a TEMPORARY tablespace. Temporary tablespaces are discussed later in the chapter.

EXTENT MANAGEMENT Can be omitted for dictionary-managed tablespaces. If you omit this clause, Oracle creates the tablespace as dictionary managed.
Locally Managed Tablespace

Using the CREATE TABLESPACE command with the EXTENT MANAGEMENT LOCAL clause creates a locally managed tablespace. Locally managed tablespaces manage space more efficiently, provide better methods to reduce fragmentation, and increase reliability. You cannot specify the DEFAULT STORAGE, TEMPORARY, and MINIMUM EXTENT clauses of CREATE TABLESPACE for a locally managed tablespace. You can have Oracle manage extents automatically with the AUTOALLOCATE option. When using this option, you cannot specify extent sizes for the objects created in the tablespace; Oracle manages the extent sizes, and you have no control over extent allocation and deallocation. The following is an example of creating a locally managed tablespace with Oracle managing the extent allocation:
CREATE TABLESPACE USER_DATA
DATAFILE '/disk1/oradata/MYDB01/user_data01.dbf' SIZE 300M
EXTENT MANAGEMENT LOCAL AUTOALLOCATE;

You can specify that the tablespace be managed with uniform extents of a specific size by using the UNIFORM SIZE clause. All the extents are created with the size specified; you cannot specify extent sizes during object creation. The following is an example of creating a locally managed tablespace with uniform extent sizes of 512KB:

CREATE TABLESPACE USER_DATA
DATAFILE '/disk1/oradata/MYDB01/user_data01.dbf' SIZE 300M
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 512K;
The SYSTEM tablespace can be locally managed; you must specify the EXTENT MANAGEMENT LOCAL clause in the CREATE DATABASE command.
Temporary Tablespace

Oracle can manage space for sort operations more efficiently by using temporary tablespaces. By exclusively designating a tablespace for temporary segments, Oracle eliminates the repeated allocation and deallocation of temporary segments. A temporary tablespace can be used only for sort segments. Only one sort segment is allocated for an instance in a temporary tablespace, and all sort operations use this sort segment. More than one transaction can use the same sort segment, but each extent can be used by only one transaction. The sort segment for a given temporary tablespace is created at the time of the first sort operation on that tablespace. The sort segment expands by allocating extents until the segment size is sufficient for the total storage demands of all the active sorts running on that instance.

A temporary tablespace can be dictionary managed or locally managed. Using the CREATE TABLESPACE command with the TEMPORARY clause creates a dictionary-managed temporary tablespace. Here is an example:

CREATE TABLESPACE TEMP
DATAFILE '/disk5/oradata/MYDB01/temp01.dbf' SIZE 300M
DEFAULT STORAGE (INITIAL 2M NEXT 2M PCTINCREASE 0
                 MAXEXTENTS UNLIMITED)
TEMPORARY;
When the first sort operation is performed on disk, a temporary segment is allocated with a 2MB initial extent and 2MB subsequent extents. The extents, once allocated, are freed only when the instance is shut down. Temporary segments are based on the default storage parameters of the tablespace. For a TEMPORARY tablespace, INITIAL and NEXT should be equal to each other, and each should be a multiple of SORT_AREA_SIZE plus DB_BLOCK_SIZE, to reduce the possibility of fragmentation; keep PCTINCREASE equal to zero. For example, if your sort area size is 64KB and the database block size is 8KB, provide a default storage for the temporary tablespace of (INITIAL 136K NEXT 136K PCTINCREASE 0 MAXEXTENTS UNLIMITED).

If you are using a PERMANENT tablespace for sort operations, temporary segments are created in the tablespace when the sort is performed and are freed when the sort operation completes. There is one sort segment for each sort operation, which requires many extent and segment management operations.

A locally managed temporary tablespace is created using the CREATE TEMPORARY TABLESPACE command. The following statement creates a locally managed temporary tablespace:

CREATE TEMPORARY TABLESPACE TEMP
TEMPFILE '/disk5/oradata/MYDB01/temp01.tmf' SIZE 500M
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 5M;
Notice that the DATAFILE clause of the CREATE TABLESPACE command is replaced with the TEMPFILE clause. Temporary files are always in NOLOGGING mode and are not recoverable. They cannot be made read-only, cannot be renamed, cannot be created with the ALTER DATABASE command, do not generate any information during the BACKUP CONTROLFILE command, and are not included during a CREATE CONTROLFILE command. The EXTENT MANAGEMENT LOCAL clause is optional and can be omitted; it is provided to improve readability. If you do not specify the extent size by using the UNIFORM SIZE clause, the default size used will be 1MB.
An Oracle temporary file (called a tempfile) is not a temporary file in the traditional OS sense; only the objects within a temporary tablespace consisting of one or more tempfiles are temporary.
Each user is assigned a temporary tablespace when the user is created. By default, the default tablespace (where the user creates objects) and the temporary tablespace (where the user's sort operations are performed) are both the SYSTEM tablespace. No user should have SYSTEM as their default or temporary tablespace; it would unnecessarily increase fragmentation in the SYSTEM tablespace.
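As a sketch of assigning non-SYSTEM tablespaces (the username, password, and the USERS and TEMP tablespace names are assumptions; substitute your own):

```sql
-- Assign a default and a temporary tablespace at user creation
-- (user name and password are hypothetical).
CREATE USER APP_USER IDENTIFIED BY secret
    DEFAULT TABLESPACE USERS
    TEMPORARY TABLESPACE TEMP;

-- Or change the assignments for an existing user.
ALTER USER APP_USER
    DEFAULT TABLESPACE USERS
    TEMPORARY TABLESPACE TEMP;
```

The current assignments can be checked in the DBA_USERS view, discussed later in this chapter.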
Altering a Tablespace

A tablespace may be altered using the ALTER TABLESPACE command to:
Change the default storage parameters
Change the extent allocation and LOGGING/NOLOGGING modes
Change the tablespace from PERMANENT to TEMPORARY or vice versa
Change availability of the tablespace
Make it read-only or read-write
Coalesce the contiguous free space
Add more space by adding new data files or temporary files
Rename files belonging to the tablespace
Begin and end backup
Changing the default storage, extent allocation, or LOGGING/NOLOGGING mode does not affect the existing objects in the tablespace. The DEFAULT STORAGE and LOGGING/NOLOGGING clauses are applied to newly created segments if such a clause is not explicitly specified when creating new objects. For example, to change the storage parameters, use the statement:

ALTER TABLESPACE APPL_DATA
DEFAULT STORAGE (INITIAL 2M NEXT 2M);
Only the INITIAL and NEXT values of the storage clause are changed; the other storage parameters such as PCTINCREASE or MINEXTENTS remain unaltered. You can change a dictionary-managed temporary tablespace to permanent or vice versa by using the ALTER TABLESPACE command, if the tablespace is empty. You cannot use the ALTER TABLESPACE command,
with the TEMPORARY keyword, to change a locally managed permanent tablespace into a locally managed temporary tablespace; you must use the CREATE TEMPORARY TABLESPACE statement to create a locally managed temporary tablespace. However, you can use the ALTER TABLESPACE command to change a locally managed temporary tablespace to a locally managed permanent tablespace. The following statement changes a tablespace to temporary:

ALTER TABLESPACE TEMP TEMPORARY;
The clauses in the ALTER TABLESPACE command are all mutually exclusive; you can specify only one clause at a time.
Tablespace Availability

You can control the availability of certain tablespaces by altering the tablespace to be offline or online. When you take a tablespace offline, the segments in that tablespace are not accessible; the data stored in other tablespaces remains available for use. When making a tablespace unavailable, you can use these four options:

NORMAL This is the default. Oracle writes all the dirty buffer blocks in the SGA to the data files of the tablespace and closes the data files. All data files belonging to the tablespace must be online. You need not do a media recovery when bringing the tablespace online. For example:

ALTER TABLESPACE USER_DATA OFFLINE;
TEMPORARY Oracle performs a checkpoint on all online data files but does not ensure that all the data files are available. You may need to perform media recovery on the offline data files when the tablespace is brought online. For example:

ALTER TABLESPACE USER_DATA OFFLINE TEMPORARY;
IMMEDIATE Oracle does not perform a checkpoint and does not make sure that all data files are available. You must perform media recovery when the tablespace is brought back online. For example:

ALTER TABLESPACE USER_DATA OFFLINE IMMEDIATE;
FOR RECOVER This takes the tablespace offline for point-in-time recovery. You can copy the data files belonging to the tablespace from a backup and apply the archive log files. For example:

ALTER TABLESPACE USER_DATA OFFLINE FOR RECOVER;
You cannot take the SYSTEM tablespace offline, because the data dictionary must always be available for the functioning of the database. If a tablespace is offline when you shut down the database, it remains offline when you start up the database. You can bring a tablespace back online by using the command ALTER TABLESPACE USER_DATA ONLINE. When a tablespace is taken offline, SQL statements cannot reference any objects contained in that tablespace. If there are unsaved changes when you take the tablespace offline, Oracle saves rollback data corresponding to those changes in a deferred rollback segment in the SYSTEM tablespace. When the tablespace is brought back online, Oracle applies the rollback data to the tablespace, if needed.
Coalescing Free Space

The ALTER TABLESPACE command with the COALESCE clause can be used to coalesce adjacent free extents. When you free up the extents used by an object, either by altering the object storage or by dropping the object, Oracle does not automatically combine the free extents that are adjacent to each other. When coalescing a tablespace, Oracle does not combine all free space into one big extent; it combines only adjacent free extents. For example, Figure 5.2 shows the extent allocation in a tablespace before and after coalescing.

FIGURE 5.2 Coalescing a tablespace (diagram: before ALTER TABLESPACE USERS COALESCE, the USERS tablespace contains interleaved data (D) and free (F) extents; afterward, runs of adjacent free extents are combined into single larger free extents)
The SMON process performs the coalescing of the tablespace. If the PCTINCREASE storage parameter for the tablespace is set to a nonzero value, the SMON process automatically coalesces the tablespace's unused extents. Even if PCTINCREASE is set to zero, Oracle coalesces the tablespace when it cannot find a free extent big enough. Oracle also does a limited amount of coalescing if the PCTINCREASE value of the dropped object is not zero. If the extent sizes of the tablespace are all uniform, there is no need to coalesce.
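To coalesce manually and observe the effect, a sketch (the USERS tablespace name is from the figure; the counts returned depend on your database):

```sql
-- Count free extents before coalescing; expect fewer, larger free
-- extents after adjacent ones are combined.
SELECT COUNT(*) AS FREE_EXTENTS, SUM(BYTES) AS FREE_BYTES
FROM   DBA_FREE_SPACE
WHERE  TABLESPACE_NAME = 'USERS';

-- Combine adjacent free extents in the USERS tablespace.
ALTER TABLESPACE USERS COALESCE;
```

Rerunning the query afterward should show the same FREE_BYTES total spread over fewer extents.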
Read-Only Tablespace

A tablespace can be made read-only if you do not want users to change any data in it. All objects in the tablespace remain available for queries, but INSERT, UPDATE, and DELETE operations on the data are not allowed. When the tablespace is made read-only, the data file headers are no longer updated when a checkpoint occurs, so you need to back up read-only tablespaces only once. You cannot make the SYSTEM tablespace read-only. When you make a tablespace read-only, all its data files must be online, and the tablespace can have no pending transactions. You can drop objects such as tables or indexes from a read-only tablespace, but you cannot create new objects in it. To make the USERS tablespace read-only, use the statement:

ALTER TABLESPACE USERS READ ONLY;
Prior to Oracle8i, you could not make a tablespace read-only while there were active transactions in the database. In Oracle8i, when the command is issued, the tablespace goes into a transitional read-only mode in which no further DML statements are allowed, though existing transactions that are modifying the tablespace are allowed to commit or roll back. To change a tablespace back to read-write mode, use the following command:

ALTER TABLESPACE USERS READ WRITE;
Oracle normally checks the availability of all data files belonging to the database when starting up the database. If you are storing your read-only tablespace on an offline storage media or on a CD-ROM, you might want to skip the data file availability checking when starting up the database. This can be accomplished by setting the parameter READ_ONLY_OPEN_DELAYED to TRUE. Oracle checks the availability of data files belonging to read-only tablespaces only at the time of access to an object in the tablespace. A missing or bad read-only file will not be detected at database start-up time.
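A minimal parameter-file sketch for this behavior (the parameter file name is installation-specific and hypothetical here):

```
# initDB01.ora (hypothetical parameter file name)
# Skip availability checks for read-only tablespace data files at startup;
# a missing or bad read-only file is detected only on first access.
READ_ONLY_OPEN_DELAYED = TRUE
```

This is useful when read-only tablespaces live on slow or removable media such as CD-ROM.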
Adding Space to a Tablespace

You can add more space to a tablespace by adding more data files to it or by changing the size of the existing data files. To add more data files or temporary files to the tablespace, use the ALTER TABLESPACE command with the ADD DATAFILE or ADD TEMPFILE clause. For example, to add a file to a tablespace, run the command:

ALTER TABLESPACE USERS
ADD DATAFILE '/disk5/oradata/DB01/users02.dbf' SIZE 25M;
If you are modifying a locally managed temporary tablespace to add more files, use the statement:

ALTER TABLESPACE USER_TEMP
ADD TEMPFILE '/disk4/oradata/DB01/user_temp01.dbf' SIZE 100M;
For locally managed temporary tablespaces, ADD TEMPFILE is the only clause you can use with the ALTER TABLESPACE command.
Dropping a Tablespace

The DROP TABLESPACE command removes a tablespace from the database. If the tablespace to be dropped is not empty, you should use the INCLUDING CONTENTS clause. For example, to drop the USER_DATA tablespace, use the statement:

DROP TABLESPACE USER_DATA;
If the tablespace is not empty, you should specify:

DROP TABLESPACE USER_DATA INCLUDING CONTENTS;
If referential integrity constraints from objects in other tablespaces refer to objects in the tablespace being dropped, you must also specify the CASCADE CONSTRAINTS clause:

DROP TABLESPACE USER_DATA
INCLUDING CONTENTS CASCADE CONSTRAINTS;
When you drop a tablespace, only the control file is updated with the tablespace and data file information. The actual data files belonging to the tablespace are not removed. If you need to free up the disk space, use OS commands to remove the data files belonging to the dropped tablespace. You cannot drop the SYSTEM tablespace.
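Because the data files survive the drop, one approach (a sketch; the tablespace name matches the example above) is to capture the file names before dropping:

```sql
-- List the physical files of the tablespace before dropping it,
-- so they can be removed later with OS commands.
SELECT FILE_NAME
FROM   DBA_DATA_FILES
WHERE  TABLESPACE_NAME = 'USER_DATA';

DROP TABLESPACE USER_DATA INCLUDING CONTENTS;
-- Then delete the listed files at the OS level (e.g., rm on UNIX).
```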
Querying Tablespace Information

Tablespace information can be queried from the data dictionary views. The following views provide information on tablespaces.
DBA_TABLESPACES The DBA_TABLESPACES view shows information about all tablespaces in the database (USER_TABLESPACES shows the tablespaces accessible to the user). This view contains the default storage parameters, type of tablespace, status, and so on.

SQL> SELECT TABLESPACE_NAME, EXTENT_MANAGEMENT,
  2         ALLOCATION_TYPE, CONTENTS
  3  FROM   DBA_TABLESPACES;

TABLESPACE_NAME         EXTENT_MAN ALLOCATION CONTENTS
----------------------- ---------- ---------- ---------
SYSTEM                  DICTIONARY USER       PERMANENT
RBS                     DICTIONARY USER       PERMANENT
USERS                   DICTIONARY USER       PERMANENT
TEMP                    DICTIONARY USER       TEMPORARY
TOOLS                   DICTIONARY USER       PERMANENT
INDX                    DICTIONARY USER       PERMANENT
DRSYS                   DICTIONARY USER       PERMANENT
OEM_REPOSITORY          LOCAL      SYSTEM     PERMANENT
TEMP_LOCAL              LOCAL      UNIFORM    TEMPORARY
DBA_FREE_SPACE This view shows the free extents available in all tablespaces and can be used to find the total free space available in a tablespace (USER_FREE_SPACE shows the free extents in tablespaces accessible to the current user). Locally managed temporary tablespaces are not shown in this view.

SQL> SELECT TABLESPACE_NAME, SUM(BYTES) FREE_SPACE
  2  FROM   DBA_FREE_SPACE
  3  GROUP  BY TABLESPACE_NAME;
Copyright ©2000 SYBEX , Inc., Alameda, CA
www.sybex.com
Managing Tablespaces
161
TABLESPACE_NAME                FREE_SPACE
------------------------------ ----------
DRSYS                            88268800
INDX                             26206208
OEM_REPOSITORY                    3473408
RBS                             515891200
SYSTEM                             393216
TEMP                             75489280
TOOLS                              229376
USERS                            10215424
V$TABLESPACE This view lists the tablespace names from the control file.

SQL> SELECT * FROM V$TABLESPACE;

       TS# NAME
---------- ------------------------------
         0 SYSTEM
         1 RBS
         2 USERS
         3 TEMP
         4 TOOLS
         5 INDX
         6 DRSYS
         7 OEM_REPOSITORY
         9 TEMP_LOCAL
V$SORT_USAGE This view shows information about the active sorts in the database: the space used, username, SQL address, and hash value. This view can be joined with V$SESSION or V$SQL to get more information about the session.

SQL> SELECT USER, SESSION_ADDR, SESSION_NUM, SQLADDR,
  2         SQLHASH, TABLESPACE, EXTENTS, BLOCKS
  3  FROM   V$SORT_USAGE;

USER                 SESSION_ SESSION_NUM SQLADDR
-------------------- -------- ----------- --------
   SQLHASH TABLESPACE              EXTENTS     BLOCKS
---------- -------------------- ---------- ----------
SCOTT                030539F4          24 0343E200
1877781575 TEMP                         45        360
Other Views The following views also show information related to tablespaces:

DBA_SEGMENTS, USER_SEGMENTS Show information about the segments, segment types, size, and storage parameter values associated with tablespaces.

DBA_EXTENTS, USER_EXTENTS Show information about the extents, extent sizes, associated segment, and tablespace.

DBA_DATA_FILES Shows the data files belonging to tablespaces.

DBA_TEMP_FILES Shows the temporary files belonging to locally managed temporary tablespaces.

V$TEMP_EXTENT_MAP Shows all extents of a locally managed temporary tablespace.

DBA_USERS Shows the default and temporary tablespace assignments of users.
Managing Data Files
Data files (or temporary files) are created when you create a tablespace or when you alter a tablespace to add files. You can specify the size of the file when creating it, or you can reuse an existing file. A reused file should not belong to any Oracle database, because the contents of the file are overwritten. Use the REUSE clause to specify an existing file. If you omit the REUSE clause and the data file being created already exists, Oracle returns an error. For example:

CREATE TABLESPACE APPL_DATA
DATAFILE '/disk2/oradata/DB01/appl_data01.dbf' REUSE;

When REUSE is specified, you can omit the SIZE clause. If you specify the SIZE clause, the size should be the same as that of the existing file. If the file to be created does not exist, Oracle creates a new file even if the REUSE clause is specified. You should always specify the fully qualified directory name for the file being created. If you omit the directory, Oracle creates the file under the default database directory or in the current directory, depending on the OS.
Sizing Files
You can specify that a data file (or temporary file) grow automatically whenever space is needed in the tablespace, by specifying the AUTOEXTEND clause for the file. This functionality enables you to have fewer data files per tablespace and can simplify the administration of data files. The AUTOEXTEND clause can be turned ON and OFF, and file size increments can be specified. You can set a maximum limit for the file size; by default, the file size limit is UNLIMITED. The AUTOEXTEND clause can be specified in the CREATE DATABASE, CREATE TABLESPACE, ALTER TABLESPACE, and ALTER DATABASE DATAFILE commands. For example:

CREATE TABLESPACE APPL_DATA
DATAFILE '/disk2/oradata/DB01/appl_data01.dbf' SIZE 500M
AUTOEXTEND ON NEXT 100M MAXSIZE 2000M;
The AUTOEXTEND ON clause specifies that the automatic file-resize feature should be enabled for the specified file; NEXT specifies the size by which the file should be incremented, and MAXSIZE specifies the maximum size for the file. When Oracle tries to allocate an extent in the tablespace, it looks for a free extent. If a large enough free extent cannot be located (even after coalescing), Oracle increases the data file size by 100MB (the NEXT value) and tries to allocate the new extent. The following statement disables the automatic file-extension feature:

ALTER DATABASE
DATAFILE '/disk2/oradata/DB01/appl_data01.dbf'
AUTOEXTEND OFF;
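The growth behavior described above can be sketched in Python. This is a simplified model for illustration only; the function name, the MB-based units, and the all-or-nothing failure result are assumptions, not Oracle internals.

```python
def autoextend(current_mb, needed_mb, next_mb, max_mb):
    """Grow a data file in NEXT-sized increments until the space
    request fits, or fail if growth would exceed MAXSIZE."""
    size = current_mb
    while size < needed_mb:
        if size + next_mb > max_mb:
            return None           # cannot extend further; Oracle raises an error instead
        size += next_mb           # file grows by NEXT (100MB in the example above)
    return size

# A 500MB file that must hold 550MB of data grows once, to 600MB.
print(autoextend(500, 550, 100, 2000))
```

Note that the file never grows past MAXSIZE: once the next increment would cross the 2000MB limit, the extension fails even if plenty of increments succeeded before.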
If the file already exists in the database and you wish to enable the auto-extension feature, use the ALTER DATABASE command. For example:

ALTER DATABASE
DATAFILE '/disk2/oradata/DB01/appl_data01.dbf'
AUTOEXTEND ON NEXT 100M MAXSIZE 2000M;
You can increase or decrease the size of a data file or temporary file (thus increasing or decreasing the size of the tablespace) by using the RESIZE clause of the ALTER DATABASE DATAFILE command. For example, to redefine the size of a file, use this statement:

ALTER DATABASE
DATAFILE '/disk2/oradata/DB01/appl_data01.dbf'
RESIZE 1500M;
When decreasing the file size, Oracle returns an error if it finds data beyond the new file size. You cannot reduce the file size below the high-water mark in the file. Reducing the file size helps to reclaim unused space.
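The shrink rule can be modeled as follows. This is a hedged sketch with a hypothetical function: real Oracle validates against the actual block usage in the file, not a single pre-computed number.

```python
def resize_datafile(current_mb, new_mb, high_water_mark_mb):
    """Model ALTER DATABASE DATAFILE ... RESIZE: growing always
    succeeds, but shrinking below the high-water mark is rejected."""
    if new_mb < high_water_mark_mb:
        raise ValueError("used data exists beyond the requested size")
    return new_mb

# Shrinking a 1500MB file to 1000MB succeeds only because the
# high-water mark (800MB here) is below the requested size.
print(resize_datafile(1500, 1000, 800))
```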
Renaming Files
Data files can be renamed using the RENAME FILE clause of the ALTER DATABASE command. You can also rename data files by using the RENAME DATAFILE clause of the ALTER TABLESPACE command. The RENAME functionality is used to logically move tablespaces from one location to another. To rename or relocate data files belonging to a non-SYSTEM tablespace, you should follow certain steps. Consider the following example. The tablespace USER_DATA has three data files named '/disk1/oradata/DB01/user_data01.dbf', '/disk1/oradata/DB01/userdata2.dbf', and '/disk1/oradata/DB01/user_data03.dbf'. You notice that the second file does not follow the naming standard set for your company, so you need to rename it. Follow these steps:

1. Take the tablespace offline.

ALTER TABLESPACE USER_DATA OFFLINE;

2. Copy or move the file to the new location, or rename the file, by using OS commands.

3. Rename the file in the database by using one of the following two commands.

ALTER DATABASE RENAME
FILE '/disk1/oradata/DB01/userdata2.dbf'
TO   '/disk1/oradata/DB01/user_data02.dbf';

or

ALTER TABLESPACE USER_DATA RENAME
DATAFILE '/disk1/oradata/DB01/userdata2.dbf'
TO       '/disk1/oradata/DB01/user_data02.dbf';

4. Bring the tablespace online.

ALTER TABLESPACE USER_DATA ONLINE;
If you need to relocate the tablespace from disk 1 to disk 2, follow the same steps. You can rename all the files in the tablespace by using a single command. The steps are as follows:

1. Take the tablespace offline.

ALTER TABLESPACE USER_DATA OFFLINE;

2. Copy the files to the new location by using OS commands.

3. Rename the files in the database by using one of the following two commands. The number of data files specified before the keyword TO should be equal to the number of files specified after the keyword.

ALTER DATABASE RENAME
FILE '/disk1/oradata/DB01/user_data01.dbf',
     '/disk1/oradata/DB01/userdata2.dbf',
     '/disk1/oradata/DB01/user_data03.dbf'
TO   '/disk2/oradata/DB01/user_data01.dbf',
     '/disk2/oradata/DB01/user_data02.dbf',
     '/disk2/oradata/DB01/user_data03.dbf';

or

ALTER TABLESPACE USER_DATA RENAME
DATAFILE '/disk1/oradata/DB01/user_data01.dbf',
         '/disk1/oradata/DB01/userdata2.dbf',
         '/disk1/oradata/DB01/user_data03.dbf'
TO       '/disk2/oradata/DB01/user_data01.dbf',
         '/disk2/oradata/DB01/user_data02.dbf',
         '/disk2/oradata/DB01/user_data03.dbf';

4. Bring the tablespace online.

ALTER TABLESPACE USER_DATA ONLINE;
To rename or relocate files belonging to multiple tablespaces, or if a file belongs to the SYSTEM tablespace, you must follow these steps:

1. Shut down the database. A complete backup is recommended before making any structural changes.

2. Copy or rename the files on the disk by using OS commands.
3. Start up and mount the database (STARTUP MOUNT).

4. Rename the files in the database by using the ALTER DATABASE RENAME FILE command.

5. Open the database by using ALTER DATABASE OPEN.

If you need to move read-only tablespaces to a CD-ROM or any write-once-read-many device, follow these steps:

1. Make the tablespace read-only.

2. Copy the data files belonging to the tablespace to the read-only device.

3. Rename the files in the database by using the ALTER DATABASE RENAME FILE command.
Querying Data File Information
Data file and temporary file information can be queried by using the following views.
V$DATAFILE This view shows data file information from the control file.

SQL> SELECT FILE#, RFILE#, STATUS, BYTES, BLOCK_SIZE
  2  FROM V$DATAFILE;

     FILE#     RFILE# STATUS       BYTES BLOCK_SIZE
---------- ---------- ------- ---------- ----------
         1          1 SYSTEM   267386880       8192
         2          2 ONLINE   545259520       8192
         3          3 ONLINE    17039360       8192
         4          4 ONLINE    75497472       8192
         5          5 ONLINE    17825792       8192
         6          6 ONLINE    26214400       8192
         7          7 ONLINE    92274688       8192
         8          8 ONLINE    31465472       8192
V$TEMPFILE Similar to V$DATAFILE, this view shows information about the temporary files.

SQL> SELECT FILE#, RFILE#, STATUS, BYTES, BLOCK_SIZE
  2  FROM V$TEMPFILE;

     FILE#     RFILE# STATUS       BYTES BLOCK_SIZE
---------- ---------- ------- ---------- ----------
         1          1 ONLINE    10485760       8192
DBA_DATA_FILES This view shows information about the data file names, associated tablespace names, size, status, etc.

SQL> SELECT TABLESPACE_NAME, FILE_NAME, BYTES,
  2         AUTOEXTENSIBLE
  3  FROM DBA_DATA_FILES;

TABLESPACE FILE_NAME                         BYTES AUT
---------- --------------------------- ----------- ---
SYSTEM     C:\ORACLE\DB01\SYSTEM01.DBF   267386880 YES
RBS        C:\ORACLE\DB01\RBS01.DBF      545259520 NO
USERS      C:\ORACLE\DB01\USERS01.DBF     17039360 YES
TEMP       C:\ORACLE\DB01\TEMP01.DBF      75497472 NO
TOOLS      C:\ORACLE\DB01\TOOLS01.DBF     17825792 YES
INDX       C:\ORACLE\DB01\INDX01.DBF      26214400 YES
DRSYS      C:\ORACLE\DB01\DR01.DBF        92274688 YES
OEM_REP    C:\ORACLE\DB01\OEM_REP.ORA     31465472 YES
DBA_TEMP_FILES This view shows information similar to that of DBA_DATA_FILES for the temporary files in the database.

SQL> SELECT TABLESPACE_NAME, FILE_NAME, BYTES,
  2         AUTOEXTENSIBLE
  3  FROM DBA_TEMP_FILES;

TABLESPACE FILE_NAME                            BYTES AUT
---------- ------------------------------- ---------- ---
TEMP_LOCAL C:\ORACLE\DB01\TEMP_LOCAL01.DBF   10485760 NO
The maximum number of data files per tablespace is OS dependent, but on most operating systems, it is 1022. The maximum number of data files per database is 65,533. The MAXDATAFILES clause in the CREATE DATABASE or CREATE CONTROLFILE statements also limits the number of data files per database. The maximum data file size is OS dependent. There is no limit on the number of tablespaces per database. Because only 65,533 data files are allowed per database, you cannot have more than 65,533 tablespaces, because each tablespace needs at least one data file.
Summary This chapter discussed the tablespaces and data files—the logical storage structures and physical storage elements of the database. A data file belongs to one tablespace, and a tablespace can have one or more data files. The size of the tablespace is the total size of all the data files belonging to that tablespace. The size of the database is the total size of all tablespaces in the database, which is the same as the total size of all data files in the database. Tablespaces are logical storage units used to group data depending on their type or category. Tablespaces are created using the CREATE TABLESPACE command. Oracle always allocates space to an object in chunks of blocks known as extents. Tablespaces can handle the extent management through the Oracle dictionary or locally in the data files that belong to the tablespace. When creating tablespaces, you can specify default storage parameters for the objects that will be created in the tablespace. If you do not specify any storage parameters when creating an object, the storage parameters for the tablespace are used for the new object. Locally managed tablespaces can have uniform extent sizes; this reduces fragmentation and wasted space. You can also make Oracle do the entire extent sizing for locally managed tablespaces. A temporary tablespace is used only for sorting; no permanent objects can be created in a temporary tablespace. Only one sort segment will be created for each instance in the temporary tablespace. Multiple transactions can use the same sort segment, but one transaction can use only one extent. Locally
Copyright ©2000 SYBEX , Inc., Alameda, CA
www.sybex.com
Summary
169
managed temporary tablespaces are created using the CREATE TEMPORARY TABLESPACE command. Temporary files are created (instead of data files) when you use this command. Although these files are part of the database, they do not appear in the control file, and the block changes do not generate any redo information, because all the segments created in locally managed temporary tablespaces are temporary segments. A tablespace can be altered to change its availability or to make it read-only. Data in an offline tablespace is not accessible, whereas data in a read-only tablespace cannot be modified or deleted. You can drop objects from a read-only tablespace. Space is added to a tablespace by adding new data files to it or by increasing the size of its data files. Tablespace information can be obtained from the dictionary using the DBA_TABLESPACES and V$TABLESPACE views. Data files can be renamed through Oracle; this feature is useful for relocating a tablespace. The V$DATAFILE, V$TEMPFILE, DBA_DATA_FILES, and DBA_TEMP_FILES views provide information on the data files.
Key Terms
Before you take the exam, make sure you're familiar with the following terms:

tablespace
data file
extent management
dictionary-managed tablespace
locally managed tablespace
temporary tablespace
read-only tablespace
Review Questions

1. Which two statements should be executed to make the USERS tablespace read-only, if the tablespace is offline?

A. ALTER TABLESPACE USERS READ ONLY
B. ALTER DATABASE MAKE TABLESPACE USERS READ ONLY
C. ALTER TABLESPACE USERS ONLINE
D. ALTER TABLESPACE USERS TEMPORARY

2. When is a sort segment allocated in a temporary tablespace released?

A. When the sort operation completes.
B. When the instance is shut down.
C. When you issue ALTER TABLESPACE COALESCE.
D. SMON clears up inactive sort segments.

3. Which of the following is not a logical database structure?

A. Data block
B. Extent
C. OS block
D. Tablespace

4. What will be the minimum size of the segment created in a tablespace if the tablespace's default storage values are specified as (INITIAL 2M NEXT 2M MINEXTENTS 3 PCTINCREASE 50) and no storage clause is specified for the object?

A. 2MB
B. 4MB
C. 5MB
D. 7MB
E. 8MB
5. How would you add more space to a tablespace? Choose two.

A. ALTER TABLESPACE <tablespace> ADD DATAFILE <filename> SIZE <size>
B. ALTER DATABASE DATAFILE <filename> RESIZE <size>
C. ALTER DATAFILE <filename> RESIZE <size>
D. ALTER TABLESPACE <tablespace> RESIZE <size>

6. If the DB_BLOCK_SIZE of the database is 8KB, what will be the size of the third extent when you specify the storage parameters as (INITIAL 8K NEXT 8K PCTINCREASE 50 MINEXTENTS 3)?

A. 16KB
B. 24KB
C. 12KB
D. 40KB

7. Which tablespace is created by Oracle?

A. TOOLS.
B. SYSTEM.
C. DATA.
D. ORACLE.
E. No tablespaces are created by Oracle; you need to create all necessary tablespaces.
8. Which data dictionary view can be queried to obtain information about the files that belong to locally managed temporary tablespaces?

A. DBA_DATA_FILES
B. DBA_TABLESPACES
C. DBA_TEMP_FILES
D. DBA_LOCAL_FILES

9. When does the SMON process automatically coalesce the tablespaces?

A. When the initialization parameter COALESCE_TABLESPACES is set to TRUE
B. When the PCTINCREASE default storage of the tablespace is set to 0
C. When the PCTINCREASE default storage of the tablespace is set to 50
D. Whenever the tablespace has more than one free extent

10. Which operation is permitted on a read-only tablespace?

A. Delete data from table
B. Drop table
C. Create new table
D. None of the above

11. How would you drop a tablespace if the tablespace were not empty?

A. Rename all the objects in the tablespace and then drop the tablespace.
B. Remove the data files belonging to the tablespace from the disk.
C. Use ALTER DATABASE DROP CASCADE.
D. Use DROP TABLESPACE <tablespace> INCLUDING CONTENTS.
12. Which command is used to enable the auto-extensible feature for a file, if the file is already part of a tablespace?

A. ALTER DATABASE.
B. ALTER TABLESPACE.
C. ALTER DATA FILE.
D. You cannot change the auto-extensible feature once the data file is created.

13. The database block size is 4KB. You created a tablespace using the following command.

CREATE TABLESPACE USER_DATA
DATAFILE 'C:/DATA01.DBF';

If you create an object in the database without specifying any storage parameters, what will be the size of the third extent that belongs to the object?

A. 6KB
B. 20KB
C. 50KB
D. 32KB

14. Which statement is false?

A. A dictionary-managed temporary tablespace can be made permanent.
B. The size of the locally managed temporary tablespace file cannot be changed.
C. Once created, the extent management of a tablespace cannot be altered.
D. A locally managed permanent tablespace cannot be made temporary.
15. Which statement is true regarding the SYSTEM tablespace?

A. Can be made read-only.
B. Can be offline.
C. Data files can be renamed.
D. Data files cannot be resized.

16. What are the recommended INITIAL and NEXT values for a temporary tablespace, to reduce fragmentation?

A. INITIAL = 1MB; NEXT = 2MB
B. INITIAL = multiple of SORT_AREA_SIZE + 1; NEXT = INITIAL
C. INITIAL = multiple of SORT_AREA_SIZE + DB_BLOCK_SIZE; NEXT = INITIAL
D. INITIAL = 2 × SORT_AREA_SIZE; NEXT = SORT_AREA_SIZE

17. Which parameter specified in the DEFAULT STORAGE clause of CREATE TABLESPACE cannot be altered after creating the tablespace?

A. INITIAL
B. NEXT
C. MAXEXTENTS
D. None

18. How would you determine how much sort space is used by a user session?

A. Query the DBA_SORT_SEGMENT view.
B. Query the V$SORT_SEGMENT view.
C. Query the V$SORT_USAGE view.
D. Only the total sort segment size can be obtained; individual session sort space usage cannot be found.
19. If you issue ALTER TABLESPACE USERS OFFLINE IMMEDIATE, which of the following statements is true?

A. All data files belonging to the tablespace must be online.
B. Does not ensure that the data files are available.
C. Need not do media recovery when bringing the tablespace online.
D. Need to do media recovery when bringing the tablespace online.
Answers to Review Questions

1. C and A. To make a tablespace read-only, all the data files belonging to the tablespace must be online and available. So, bring the tablespace online and then make it read-only.

2. B. The sort segment or temporary segment created in a temporary tablespace is released only when the instance is shut down. Each instance may have one sort segment in the tablespace; the sort segment is created when the first sort for the instance is started.

3. C. Although a data block is a multiple of OS blocks, an OS block is not a logical structure. It is part of the physical database structure.

4. D. When the segment is created, it will have three extents; the first extent is 2MB, the second is 2MB, and the third is 3MB. So the total size of the segment is 7MB.

5. A and B. More space can be added to a tablespace either by adding a data file or by increasing the size of an existing data file.

6. A. The third extent size will be NEXT + 0.5 × NEXT, which is 12KB, but the block size is 8KB, so the third extent size will be 16KB. The initial extent allocated will be 16KB (the minimum size for INITIAL is two blocks), and the total segment size will be 16 + 8 + 16 = 40KB.

7. B. The SYSTEM tablespace is created by Oracle when you issue the CREATE DATABASE command to create a database. All other necessary tablespaces should be created by the DBA.

8. C. Locally managed temporary tablespaces are created using the CREATE TEMPORARY TABLESPACE command. The data files (temporary files) belonging to these tablespaces are in the DBA_TEMP_FILES view. The EXTENT_MANAGEMENT column of the DBA_TABLESPACES view shows the type of the tablespace. The data files belonging to locally managed permanent tablespaces and dictionary-managed (permanent and temporary) tablespaces can be queried from DBA_DATA_FILES. Locally managed temporary tablespaces reduce contention on the data dictionary tables; their changes are not logged in the redo log files.
9. C. The SMON process automatically coalesces free extents in the tablespace when the tablespace's PCTINCREASE is set to a nonzero value. You can manually coalesce a tablespace by using ALTER TABLESPACE <tablespace> COALESCE.

10. B. A table can be dropped from a read-only tablespace. When a table is dropped, Oracle does not have to update the data file; it updates the dictionary tables. Any change to data or creation of new objects is not allowed in a read-only tablespace.

11. D. The INCLUDING CONTENTS clause is used to drop a tablespace that is not empty. Oracle does not remove the data files that belong to the tablespace; you need to do it manually, using an OS command. Oracle updates only the control file.

12. A. You can use the ALTER TABLESPACE command to rename a file that belongs to the tablespace, but all other file management operations are done through the ALTER DATABASE command. To enable auto-extension, use ALTER DATABASE DATAFILE <filename> AUTOEXTEND ON NEXT <size> MAXSIZE <size>.

13. D. When you create a tablespace with no default storage parameters, Oracle assigns (5 × DB_BLOCK_SIZE) to INITIAL and NEXT; PCTINCREASE is 50. So the third extent will be 50 percent more than the second. The first extent is 20KB, the second is 20KB, and the third is 32KB (because the block size is 4KB).

14. B. The size of a temporary file can be changed using ALTER DATABASE TEMPFILE <filename> RESIZE <size>. A temporary file cannot be renamed.

15. C. The data files belonging to the SYSTEM tablespace can be renamed when the database is in the MOUNT state, by using the ALTER DATABASE RENAME FILE command.
16. C. The recommended storage for a temporary tablespace is a multiple of SORT_AREA_SIZE, plus DB_BLOCK_SIZE. For example, if the sort area size is 100KB and the block size is 4KB, the sort extents should be sized 104KB, 204KB, 304KB, etc. Sorting is done on disk only when there is not enough space available in memory. The memory sort size is specified by the SORT_AREA_SIZE parameter. Therefore, when the sorting is done on disk, the minimum area required is as big as SORT_AREA_SIZE, and one block is added for the overhead. The INITIAL and NEXT storage parameters should be the same for the temporary tablespace, and PCTINCREASE should be zero.

17. D. All the default storage parameters defined for the tablespace can be changed using the ALTER TABLESPACE command. Once objects are created, their INITIAL and MINEXTENTS values cannot be changed.

18. C. The V$SORT_USAGE view provides the number of EXTENTS and the number of BLOCKS used by each sort session. This view also provides the username. It can be joined with V$SESSION or V$SQL to obtain more information on the session or the SQL statement causing the sort.

19. D. When you take a tablespace offline with the IMMEDIATE clause, Oracle does not perform a checkpoint and does not make sure that all data files are available. You must perform media recovery when the tablespace is brought online.
Chapter 6
Segments and Storage Structures

ORACLE8i ARCHITECTURE AND ADMINISTRATION EXAM OBJECTIVES OFFERED IN THIS CHAPTER:

Storage Structure and Relationships
Describe the logical structure of the database
List the segment types and their uses
List the keywords that control block space usage
Obtain information about storage structures from the data dictionary
List the criteria for separating segments

Managing Rollback Segments
Create rollback segments using appropriate storage settings
Maintain rollback segments
Plan the number and size of rollback segments
Obtain rollback segment information from the data dictionary
Troubleshoot common rollback segment problems

Exam objectives are subject to change at any time without prior notice and at Oracle's sole discretion. Please visit Oracle's Training and Certification Web site (http://education.oracle.com/certification/index.html) for the most current exam objectives listing.
Segments are logical storage units that fit between a tablespace and an extent in the logical storage hierarchy. A segment has one or more extents, and it belongs to a tablespace. This chapter covers segments, extents, and blocks in detail. It also discusses the types of segments and the type of information stored in these segments.
Data Blocks
A data block is the smallest logical unit of storage in Oracle. The block size is defined at the time of database creation, and it cannot be changed. The block size is a multiple of the OS block size. The data block is the unit of I/O used in the database. The format of the data block is the same whether it is used to store a table, index, or cluster. A data block consists of the following:

Common and variable header The header portion contains information about the type of block and the block address. The block type can be data, index, or rollback. The common block header can take 24 bytes, and the variable (transaction) header occupies (24 × INITRANS) bytes. By default, the value of INITRANS is 1 for tables and 2 for indexes.

Table directory This portion of the block has information about the tables that have rows in this block. The table directory occupies 4 bytes.

Row directory Contains information (such as the row address) about the actual rows in the block. The space allocated for the row directory is not reclaimed, even if you delete all rows in the block. The space is reused when new rows are added to the block. The row directory occupies (4 × number of rows) bytes.
Row data The actual rows are stored in this area.

Free space This is the space that is available for new rows or for extending the existing rows through updates.
The space used for the common and variable header, table directory, and row directory in a block is collectively known as the block overhead. The overhead varies, but mostly it is between 84 and 107 bytes. If more rows are inserted into the block (row directory increases) or a large INITRANS is specified (header increases), this overhead size might be higher.
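Using the per-component figures quoted above, the overhead can be approximated as follows. This is a back-of-the-envelope sketch; actual overhead varies by Oracle version and platform.

```python
def block_overhead(initrans, nrows):
    """Approximate block overhead in bytes: a 24-byte common header,
    24 bytes per INITRANS transaction slot, a 4-byte table directory,
    and 4 bytes per row directory entry."""
    return 24 + 24 * initrans + 4 + 4 * nrows

# A table block (INITRANS 1) holding 10 rows carries about 92 bytes of
# overhead, which falls in the 84-to-107-byte range cited above.
print(block_overhead(1, 10))
```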
Block Storage Parameters
When you create objects such as tables or indexes, you can specify the block storage options. Choosing proper values for these storage parameters can save you a lot of space and provide better performance. The storage parameters affecting the block are:

PCTFREE and PCTUSED These two space management parameters control the free space available for inserts and updates on the rows in the block. These parameters can be specified when you create an object.

INITRANS and MAXTRANS These two transaction entry parameters control the number of concurrent transactions that can use the block data. These parameters can be specified when you create an object. Based on these parameters, space is reserved in the block for transaction entries.
PCTFREE and PCTUSED
Before discussing these parameters, let's consider two important aspects of storing rows in a block: row chaining and row migration. If the table row length is bigger than a block, or if the table has LONG or LOB columns, it is difficult to fit one row entirely in one block. Oracle stores such rows in more than one block. This is unavoidable, and storing such rows in multiple blocks is known as row chaining.

In some cases, the row will fit into a block with other rows, but due to update activity, the row length increases and no free space remains available to accommodate the modified row. Oracle then moves the entire row from its original block to a new block, leaving a pointer in the original block to refer to the new block. This is known as row migration.
Both row migration and row chaining affect the performance of queries, because Oracle has to read more than one block to retrieve the row. Row migration can be avoided if you plan the block's free space properly using the PCTFREE and PCTUSED parameters, which are specified as percentages of the data block.

PCTFREE specifies what percent of the block should be allocated as free space for future updates. If the table can undergo a lot of updates and the updates increase the size of the row, set a higher value for the PCTFREE parameter, so that even if the row length increases due to an update, the rows are not moved out of the block (no row migration). Whenever a new row is added to a block, Oracle determines whether the free space will fall below the PCTFREE threshold. If it will, the block is removed from the free list and the row is stored in another block.

PCTUSED specifies when the block can be considered for adding new rows. After the block becomes full as determined by the PCTFREE parameter, Oracle considers adding new rows to the block only when the used space falls below the percent value set by PCTUSED. When the used space in a block falls below the PCTUSED threshold, the block is added to the free list.

To understand the usage of the PCTFREE and PCTUSED parameters, consider an example. The table EMP is created with a PCTFREE value of 10 and a PCTUSED value of 40. When you insert rows into the EMP table, Oracle adds rows to a block until it is 90 percent full (including row data and overhead), leaving 10 percent of the block free for future updates. During an update operation, if the row length increases, Oracle uses the free space available. Once no free space is available, Oracle moves the row out of the block and provides a pointer to the new location (row migration). If you delete rows from the table (or update rows such that the row length decreases), more free space becomes available in the block.
Oracle starts inserting new rows into the block only when the used space falls below PCTUSED, which is 40 percent. So when the row data and overhead occupy less than 40 percent of the block, new rows are inserted into the block. Such inserts continue until the block is 90 percent full. When the block has only PCTFREE (or less) percent free space available, it is removed from the free list; the block is added back to the free list only when the used space in the block falls below PCTUSED percent.

The default value of PCTFREE is 10, and the default for PCTUSED is 40. The sum of PCTFREE and PCTUSED cannot be more than 100. If the rows in a table are subject to a lot of updates, and the updates increase the row length, set a higher PCTFREE. If the table has a large number of inserts and deletes, and
the updates do not cause the row length to increase, set PCTFREE low and PCTUSED high. A high value for PCTUSED helps to reuse the space freed by deletes faster. If the table row length is large, or if the table rows are never updated, set PCTFREE very low so that a row can fit into one block and each block is filled. PCTFREE can be specified when you create a table, index, or cluster; PCTUSED can be specified when creating tables and clusters, but not indexes.
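The free-list behavior just described is a hysteresis rule, which can be modeled as follows. This is an illustrative sketch only; the function name and single-block model are assumptions, not Oracle's actual free-list implementation.

```python
def on_free_list(capacity, used, pctfree, pctused, currently_listed):
    """Decide free-list membership for one block. A listed block stays
    on the free list until inserts fill it past (100 - PCTFREE) percent;
    an unlisted block rejoins only when used space drops below PCTUSED."""
    used_pct = 100.0 * used / capacity
    if currently_listed:
        return used_pct < (100 - pctfree)
    return used_pct < pctused

# With PCTFREE 10 / PCTUSED 40 (the EMP example): a block that left the
# free list at 90 percent full and is now 60 percent used still does not
# accept inserts; it must drop below 40 percent used first.
```

The two thresholds deliberately do not meet: the gap between (100 - PCTFREE) and PCTUSED prevents a block from bouncing on and off the free list with every small insert and delete.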
INITRANS and MAXTRANS
The transaction entry settings reserve space for transactions in the block. Set these parameters based on the maximum number of transactions that can touch a block at any given point in time.

INITRANS reserves space in the block header for DML transaction entries. If you do not specify INITRANS, Oracle defaults the value to 1 for table data blocks and 2 for index blocks and cluster blocks. When multiple transactions access the data block, space is allocated in the block header for each transaction. When no pre-allocated space is available, Oracle allocates space for the transaction entry from the free area of the block. The space allocated from the free space thus becomes part of the block overhead and is never released.

The MAXTRANS parameter limits the number of transaction entries that can concurrently use data in a data block. Therefore, you can use MAXTRANS to limit the amount of free space that can be allocated for transaction entries in a data block. The default value is OS specific, and the maximum value you can specify is 255.

The values for INITRANS and MAXTRANS should be based on the number of transactions that can simultaneously update, insert, or delete the rows in a block. If the row length is large or the number of users accessing the table is low, set INITRANS to a low value. Some tables, such as control tables, are accessed frequently by users, and chances are high that more than one user will access a block simultaneously to update, insert, or delete. If a sufficient amount of transaction entry space is not reserved, Oracle dynamically allocates transaction entry space from the free space available in the block (this is an expensive operation, and the space allocated in this way cannot be reclaimed). When you set MAXTRANS, Oracle limits the number of transaction entries in a block. INITRANS and MAXTRANS can be specified when you create a table, index, or cluster.
Set a higher INITRANS value for tables and indexes that are queried most often by the application, such as application control tables.
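Both parameters are set in the CREATE statement; a hypothetical sketch for a busy control table:

```sql
-- Hypothetical example: INITRANS 4 pre-allocates block header space for
-- four concurrent transactions; MAXTRANS 16 caps the transaction entries
-- so they cannot consume too much of the block's free space.
CREATE TABLE app_control (
    control_key   VARCHAR2(30),
    control_value VARCHAR2(100)
)
INITRANS 4
MAXTRANS 16
TABLESPACE users;
```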
Copyright ©2000 SYBEX , Inc., Alameda, CA
www.sybex.com
Chapter 6
Segments and Storage Structures
Extents
An extent is a logical storage unit that is made up of contiguous data blocks. An extent is first allocated when a segment is created, and subsequent extents are allocated when all the blocks in the segment are full. Oracle can manage the extent allocation and free-space information through the data dictionary, or locally by using bitmaps in the data files. Dictionary-managed tablespaces and locally managed tablespaces are discussed in Chapter 5, “Logical and Physical Storage Structures.” You have also seen the parameters that control the size of the extents; to refresh your memory, these are:

INITIAL The first extent size for a segment, allocated when the segment (object) is first created.

NEXT The second extent size for a segment.

PCTINCREASE The percentage by which each extent grows over the previously allocated extent size. This parameter affects the third extent onward in a segment.

MINEXTENTS The minimum number of extents to be allocated when creating the segment.

MAXEXTENTS The maximum number of extents allowed in a segment. You can remove the limit by specifying UNLIMITED.

When the extents are managed locally, the storage parameters do not affect the size of the extents. For locally managed tablespaces, you can either have uniform extent sizes or variable extent sizes managed completely by Oracle. Once an object (such as a table or an index) is created, its INITIAL and MINEXTENTS values cannot be changed. Changes to NEXT and PCTINCREASE take effect when the next extent is allocated for the object—already allocated extent sizes are not changed.
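The extent parameters are grouped in a STORAGE clause; a hypothetical example:

```sql
-- Hypothetical example: first extent 1MB, second 512K, and each later
-- extent 50 percent larger than the previous one; two extents are
-- allocated at creation, and the segment may grow to at most 100 extents.
CREATE TABLE sales_fact (
    sale_id  NUMBER,
    sale_amt NUMBER(10,2)
)
TABLESPACE users
STORAGE (INITIAL 1M NEXT 512K PCTINCREASE 50
         MINEXTENTS 2 MAXEXTENTS 100);
```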
The header block of each segment contains a directory of the extents in that segment.
Allocating Extents

Oracle allocates an extent when an object is first created or when all the blocks in the segment are full. For example, when you create a table, contiguous blocks specified by INITIAL are allocated for the table. If the MINEXTENTS value is more than 1, that many extents are allocated at the time of creation. Even though the table has no data, space is allocated for the table. When all the blocks allocated for the table are completely filled, Oracle allocates another extent. The size of this extent depends on the values of the NEXT and PCTINCREASE parameters. New extents in locally managed tablespaces are allocated by searching the data file's bitmap for the amount of contiguous free space required. Oracle looks at each file's bitmap to find contiguous free space; Oracle returns an error if none of the files have enough free space. In dictionary-managed tablespaces, Oracle allocates extents based on the following rules:

1. If the extent requested is more than 5 data blocks, Oracle adds one more block to reduce internal fragmentation. For example, if 24 blocks are requested, Oracle adds one more block and searches the tablespace where the segment belongs for a free extent of 25 blocks.

2. If an exact match fails, Oracle searches the contiguous free blocks again for a free extent larger than the required value. When it finds one, if the number of blocks above the required size is less than or equal to 5, Oracle allocates the entire extent to the segment. Using our example, if the free extent found is 28 blocks, Oracle allocates all 28 blocks to the segment. This eliminates fragmentation. If the number of blocks above the required size is more than 5, Oracle breaks the free extent into two and allocates the required space to the segment; the remaining contiguous blocks are returned to the free list. In our example, if the free extent size is 40 blocks, Oracle allocates 25 blocks to the segment as an extent, and 15 blocks are marked as a free extent.

3. If step 2 fails, Oracle coalesces the free space in the tablespace and repeats step 2.

4. If step 3 fails, Oracle verifies whether the files are defined as autoextensible; if so, Oracle tries to extend the file and repeats step 2. If Oracle cannot extend the file or cannot allocate an extent even after resizing the data file to its maximum specified size, Oracle issues an error and does not allocate an extent to the segment.
Extents are normally de-allocated when you drop an object. The extents allocated to a table or a cluster can also be freed by using the TRUNCATE … DROP STORAGE command to remove all rows. The TRUNCATE command removes all rows from a table or cluster. The DROP STORAGE clause is the default; it de-allocates all the extents above MINEXTENTS after removing the rows. The REUSE STORAGE clause does not de-allocate extents; it just removes all the rows from the table or cluster. Rows deleted using the TRUNCATE command cannot be rolled back. Deleting rows by using DELETE does not free up the extents. You can also manually de-allocate extents by using the command ALTER [TABLE/INDEX/CLUSTER] … DEALLOCATE UNUSED (discussed in Chapter 7, “Managing Tables, Indexes, and Constraints”).
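The two clauses can be sketched as follows (the table name is hypothetical):

```sql
-- Removes all rows and de-allocates extents above MINEXTENTS (the default).
TRUNCATE TABLE order_history DROP STORAGE;

-- Removes all rows but keeps all allocated extents for future inserts.
TRUNCATE TABLE order_history REUSE STORAGE;
```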
Querying Extent Information

Extent information can be queried from the data dictionary by using the following views.
DBA_EXTENTS This view lists the extents allocated in the database for all segments. It shows the size, segment name, and tablespace name where it resides.

SQL> SELECT OWNER, SEGMENT_TYPE, TABLESPACE_NAME,
  2  FILE_ID, BYTES
  3  FROM DBA_EXTENTS
  4  WHERE SEGMENT_NAME = 'PLAN_TABLE';

OWNER  SEGMENT_TYPE TABLESPACE_NAME FILE_ID BYTES
------ ------------ --------------- ------- -----
SYSTEM TABLE        TOOLS                 5 65536
SYSTEM TABLE        TOOLS                 5 32768
DBA_FREE_SPACE This view lists information about the free extents in a tablespace.

SQL> SELECT TABLESPACE_NAME, MAX(BYTES) LARGEST,
  2  MIN(BYTES) SMALLEST, COUNT(*) EXT_COUNT
  3  FROM DBA_FREE_SPACE
  4  GROUP BY TABLESPACE_NAME;
TABLESPACE_NAME    LARGEST    SMALLEST  EXT_COUNT
--------------- ---------- ---------- ----------
DRSYS             88268800   88268800          1
INDX              26206208   26206208          1
OEM_REPOSITORY     3473408    3473408          1
RBS              470278144    1048576          2
SYSTEM              327680      65536          2
TEMP              62775296   62775296          1
TOOLS               229376     229376          1
USERS               647168     647168          1
Segments
A segment is a logical storage unit that is made up of one or more extents. Every object in the database that requires space to store data is allocated a segment. The size of the segment is the total of the size of all extents in that segment. When you create a table, index, cluster, or materialized view (snapshot), a segment is allocated for the object (for partitioned tables and indexes, a segment is allocated for each partition). A segment can belong to only one tablespace, but may spread across multiple data files belonging to the tablespace. There are four types of segments:
Data segments
Index segments
Temporary segments
Rollback segments
Data and Index Segments

A data segment stores data that belongs to a table, a cluster, or a materialized view. The storage parameters for a table or cluster determine how its data segment's extents are allocated. A data segment can hold:
A nonpartitioned or nonclustered table
A partition in a partitioned table
A cluster of tables
The size of the segment depends on the storage parameters that are specified at the time of the table or cluster creation. If no storage parameters are defined, the default storage for the tablespace is used. A table may contain LOB or VARRAY column types. These entities can be stored in their own segments to improve performance. LOBs and VARRAYs can be stored in LOB data segments. You can specify a STORAGE clause for these segments that will override storage parameters specified at the table level. An index segment stores ROWID and index keys that belong to an index. You can specify storage parameters when you create an index. Every nonpartitioned index has one segment allocated, and every partition in a partitioned index has a segment.
Temporary Segments

When processing queries, Oracle may require space on the disk to support operations such as sorts. Oracle automatically allocates space in the tablespace assigned as the user's TEMPORARY TABLESPACE. The segments thus allocated are called temporary segments. These segments are used primarily for sorts when the sort operation cannot be done in the SORT_AREA_SIZE specified in memory. The following statements may require a temporary segment, depending on the volume of data processed:
CREATE INDEX
SELECT DISTINCT
Using an ORDER BY or GROUP BY clause in queries
Queries involving set operations such as UNION, INTERSECT, and MINUS
Oracle allocates temporary segments as needed during a user session. The segments are allocated in the temporary tablespace of the user issuing the statement. The temporary segment is removed when the statement completes. Because of this frequent allocation and de-allocation, it is better to have a separate tablespace specifically designed for temporary segments. In Chapter 5, you saw the creation of such temporary tablespaces. Having a separate tablespace prevents fragmentation on the SYSTEM or other application tablespaces. Entries made to the temporary segment blocks are not recorded in the redo log files. While creating a table by using CREATE TABLE AS SELECT, or when creating an index, Oracle first allocates temporary segments in the target tablespace and makes them permanent when the statement completes. Temporary segments created in temporary tablespaces are de-allocated only
when the instance is shut down. Temporary segments created on permanent tablespaces are removed when the statement completes. Oracle8i can create temporary tables to hold session-private data that exists only for the duration of a transaction or session. Data from the temporary table is automatically dropped at session termination, either when the user logs off or when the session terminates abnormally, for example, during a session or instance crash. A temporary table is created using the CREATE GLOBAL TEMPORARY TABLE statement. You can also create indexes on temporary tables. Oracle allocates temporary segments for temporary tables and indexes on temporary tables.
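A hypothetical example of such a table (the table name and columns are illustrative):

```sql
-- Hypothetical example: ON COMMIT PRESERVE ROWS keeps a session's rows
-- until the session ends; ON COMMIT DELETE ROWS would instead clear them
-- at the end of each transaction.
CREATE GLOBAL TEMPORARY TABLE session_totals (
    region VARCHAR2(20),
    total  NUMBER
) ON COMMIT PRESERVE ROWS;
```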
Size and extents of the temporary segments are derived from the default storage parameters of the tablespace.
Querying Segment Information

Segment information can be obtained from the data dictionary by using the following views.
DBA_SEGMENTS This view shows the segments created in the database, their size, tablespace, type, storage parameters, etc. Notice that the LOB segment types are listed as LOBINDEX for index and LOBSEGMENT for data.

SQL> SELECT TABLESPACE_NAME, SEGMENT_TYPE, COUNT(*) SEG_CNT
  2  FROM DBA_SEGMENTS
  3  WHERE OWNER != 'SYS'
  4  GROUP BY TABLESPACE_NAME, SEGMENT_TYPE;

TABLESPACE_NAME SEGMENT_TYPE          SEG_CNT
--------------- ------------------ ----------
DRSYS           INDEX                      38
DRSYS           LOBINDEX                    1
DRSYS           LOBSEGMENT                  1
DRSYS           TABLE                      21
OEM_REPOSITORY  INDEX                     216
OEM_REPOSITORY  TABLE                     210
TEMP            TEMPORARY                   1
USERS           INDEX                       1
USERS           TABLE                       2
V$SORT_SEGMENT This view contains information about every sort segment in a given instance. The view is updated only when the tablespace is of the TEMPORARY type. It shows the number of active users, sort segment size, extents used, extents not used, etc.

SQL> SELECT TABLESPACE_NAME, EXTENT_SIZE, CURRENT_USERS,
  2  TOTAL_BLOCKS, USED_BLOCKS, FREE_BLOCKS, MAX_BLOCKS
  3  FROM V$SORT_SEGMENT;

TABLESPACE_NAME    EXTENT_SIZE CURRENT_USERS
------------------ ----------- -------------
TOTAL_BLOCKS USED_BLOCKS FREE_BLOCKS MAX_BLOCKS
------------ ----------- ----------- ----------
TEMP                     8             0
        1552           0        1552       1552
Managing Rollback Segments
Rollback segments record old values of data that were changed by a transaction. Rollback segments provide read consistency and the ability to undo changes, as well as assist in crash recovery. Information in a rollback segment consists of several rollback entries called undo entries. Before updating or deleting rows, Oracle stores the row as it existed before the operation (known as the before-image data) in a rollback segment. A rollback entry consists of the before-image data along with the block ID and data file number. The rollback entries that belong to a transaction are all linked together, so that the transaction can be rolled back, if necessary. The data block header is also updated with the rollback segment information to identify where to find the undo information. This is used to provide a read-consistent view of the data at a given point in time. The changes to data in a transaction are stored in a single rollback segment. When the transaction is complete (either by a COMMIT or by a ROLLBACK), Oracle finds a new rollback segment for the session. When a user performs an update or a delete operation, the before-image data is saved in the rollback segments; then the blocks corresponding to the data are modified. For inserts, the rollback entries include the ROWID of
the row inserted, because to undo an insert operation, the rows inserted must be deleted. If the transaction modifies an index, then the old index keys also will be stored in the rollback segments. The rollback segments are freed when the transaction ends, but the rollback information is not destroyed immediately. The undo entries are retained to provide a read-consistent view of the relevant data for queries in other sessions that started before the transaction committed. Oracle records changes to the original data block and to the rollback segment block in the redo log. This second recording of the rollback information is important for transactions that are not yet committed or rolled back at the time of a system crash. If a system crash occurs, Oracle automatically restores the rollback segment information, including the rollback entries for active transactions, as part of instance or media recovery. Once the recovery is complete, Oracle performs the actual rollbacks of transactions that had been neither committed nor rolled back at the time of the system crash.
Using Rollback Segments

When you create the database, Oracle creates the SYSTEM rollback segment in the SYSTEM tablespace. Every database should have at least one rollback segment other than the SYSTEM rollback segment. Oracle uses the SYSTEM rollback segment primarily for transactions involving objects in the SYSTEM tablespace. For changes involving objects in non-SYSTEM tablespaces, you should create rollback segments in a non-SYSTEM tablespace. Oracle recommends having a separate tablespace for rollback segments. When a transaction begins, Oracle assigns it the next available rollback segment with the fewest active transactions. You can also request a specific rollback segment by using the SET TRANSACTION USE ROLLBACK SEGMENT command; this statement should be the first statement of the transaction. A new transaction begins with the first statement issued after you start a session or after a COMMIT or ROLLBACK ends the previous transaction, but the rollback segment is used only when the first DML statement is issued. A transaction can use only one rollback segment. Like any other segment, the rollback segment consists of many extents. The extents in the rollback segment are used in a circular fashion. There should be at least two extents in each rollback segment. When Oracle completes writing to an extent in the rollback segment, it checks whether the next extent is free. When the last extent of the rollback segment becomes full,
Oracle continues writing rollback data by wrapping around to the first extent in the segment. To continue writing rollback information for a transaction, Oracle always tries to reuse the next extent in the circle first. However, if the next extent contains data from the current transaction or if the extent is full, then Oracle must allocate a new extent. Oracle can allocate new extents for a rollback segment until the number of extents reaches the value set for the rollback segment’s storage parameter MAXEXTENTS. If the extents for a rollback segment reach MAXEXTENTS or if the tablespace does not have enough room, Oracle returns an error and the statement is not completed. A single extent in the rollback segment can have entries from multiple transactions, but a block within the extent can have entries from only one transaction. Only one transaction can write to one extent of the rollback segment at any given time.
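Assigning a transaction to a specific rollback segment can be sketched as follows (the segment name R01 matches the creation examples in this section; the UPDATE statement is illustrative):

```sql
-- Must be the first statement of the transaction.
SET TRANSACTION USE ROLLBACK SEGMENT R01;

UPDATE emp SET sal = sal * 1.1 WHERE deptno = 10;
COMMIT;  -- ends the transaction; the next one may use any segment
```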
Creating Rollback Segments

Rollback segments are created using the CREATE ROLLBACK SEGMENT command. You can create public or private rollback segments. A public rollback segment is available to all instances and is brought online when the database is started. A private rollback segment is available only to the instance that brings the rollback segment online. Private rollback segments can be made available to the instance at start-up by specifying the ROLLBACK_SEGMENTS parameter in the initialization file. The rollback segments are always owned by SYS, irrespective of who creates them. If you do not specify a tablespace when creating a rollback segment, the rollback segment is created in the SYSTEM tablespace. To create a private rollback segment R01 in the RBS tablespace, use the statement

CREATE ROLLBACK SEGMENT R01 TABLESPACE RBS;
To create a public rollback segment R01 in the RBS tablespace, use the statement

CREATE PUBLIC ROLLBACK SEGMENT R01 TABLESPACE RBS;
The storage parameters that can be used with rollback segments are INITIAL, NEXT, MINEXTENTS, MAXEXTENTS, and OPTIMAL. You cannot specify PCTINCREASE for rollback segments; its value is always 0. OPTIMAL specifies an optimal size in bytes for the rollback segments. Use K or M to specify the size in KB or MB. OPTIMAL plays an important role in the size of the rollback
segments. When a rollback segment grows beyond the size specified by OPTIMAL, Oracle de-allocates the extents above the OPTIMAL size when there are no active transactions in the rollback segment. For example:

CREATE PUBLIC ROLLBACK SEGMENT R01
TABLESPACE RBS
STORAGE (INITIAL 3M NEXT 3M MINEXTENTS 10 OPTIMAL 42M);

The rollback segment R01 is created with equally sized extents of 3MB each, and is allocated 10 extents, so the size of the rollback segment is 30MB at the time of creation. When transactions start using this rollback segment and its size grows beyond 42MB, Oracle de-allocates extents and shrinks the rollback segment back to 42MB when there are no active transactions. Allocating and de-allocating extents are expensive operations in terms of database performance; therefore, setting the OPTIMAL value for rollback segments is a key element of a well-tuned database. The value of OPTIMAL should be at least INITIAL + NEXT × (MINEXTENTS – 1). If you do not specify the storage parameters when creating the rollback segment, Oracle uses the default storage parameters assigned for the tablespace. The default value of OPTIMAL is NULL. To remove the OPTIMAL setting from a rollback segment, use

ALTER ROLLBACK SEGMENT R01 STORAGE (OPTIMAL NULL);
Oracle recommends creating rollback segments with equally sized INITIAL and NEXT values, MINEXTENTS set to 20, and OPTIMAL set to a value greater than MINEXTENTS × NEXT. Creating 20 extents for the rollback segment avoids dynamic segment extension.
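That recommendation can be sketched as follows (the segment name and sizes are hypothetical):

```sql
-- Hypothetical example: 20 equally sized 1MB extents allocated at creation
-- avoid dynamic extension; OPTIMAL is set just above MINEXTENTS x NEXT.
CREATE ROLLBACK SEGMENT R02
TABLESPACE RBS
STORAGE (INITIAL 1M NEXT 1M MINEXTENTS 20 OPTIMAL 21M);
```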
Altering Rollback Segments

The ALTER ROLLBACK SEGMENT statement alters a rollback segment, whether private or public. A rollback segment must be brought online after you create it, because a newly created rollback segment is initially offline. To bring the rollback segment online, use the statement

ALTER ROLLBACK SEGMENT R01 ONLINE;
Public rollback segments are made available when you start up the instance. Private rollback segments are made online only if you specify them in the ROLLBACK_SEGMENTS initialization parameter.
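A hypothetical initialization file fragment naming the private rollback segments to acquire at start-up:

```
# init.ora fragment (hypothetical segment names)
ROLLBACK_SEGMENTS = (R01, R02, R03)
```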
When you take a rollback segment offline, transactions can no longer use it. You may want to take a rollback segment offline if you need to drop it or if you need to take the tablespace where it resides offline. If you try to take a rollback segment that contains active transactions offline, Oracle makes the rollback segment unavailable to future transactions and takes it offline after all the active transactions using it complete. The status of such a rollback segment is PENDING OFFLINE. To take a rollback segment offline, use the statement

ALTER ROLLBACK SEGMENT R01 OFFLINE;
You can also change the storage parameters of the rollback segment, except INITIAL and MINEXTENTS. When you change NEXT, its value is used only for future extents; the extents already allocated to the segment are not changed. For example:

ALTER ROLLBACK SEGMENT R01 STORAGE (MAXEXTENTS UNLIMITED);
Shrinking Rollback Segments

If you set the OPTIMAL parameter for rollback segments, Oracle automatically shrinks the rollback segment to its OPTIMAL size when there are no active transactions in the rollback segments. You can manually shrink a rollback segment to a specified size by using the ALTER ROLLBACK SEGMENT command:

ALTER ROLLBACK SEGMENT R01 SHRINK TO 10M;
Dropping Rollback Segments

Rollback segments can be dropped after taking the segment offline. Once you drop a rollback segment, Oracle marks its status in the dictionary as INVALID. Drop a rollback segment using the following statements:

ALTER ROLLBACK SEGMENT R01 OFFLINE;
DROP ROLLBACK SEGMENT R01;
If you are dropping a rollback segment specified in the ROLLBACK_ SEGMENTS parameter of the initialization file, make sure to remove its entry.
Otherwise, when you try to start up the instance, Oracle will generate an error message. If you take a tablespace containing uncommitted data offline (the rollback entries for those transactions reside in a different tablespace, because a tablespace containing active rollback segments cannot be taken offline), Oracle saves the undo entries for that tablespace in the SYSTEM tablespace. Such a rollback segment is known as a deferred rollback segment. The entries in the deferred rollback segments are used when the tablespace is brought back online. The deferred rollback segments are dropped when the transaction completes (either by a ROLLBACK or a COMMIT).
Common Rollback Issues

Oracle tries to use the SYSTEM rollback segment for non-SYSTEM activities only when there are not enough rollback segments available. You should create enough rollback segments to satisfy the requirements of all transactions in the database. Place the rollback segments in their own tablespace; this will reduce fragmentation on data tablespaces. It is better to create one more rollback segment in the SYSTEM tablespace to reduce contention for the SYSTEM tablespace when performing internal operations. It is recommended that you create one rollback segment for every four concurrent transactions—that is, if the maximum number of concurrent transactions in a database is 40, create a minimum of 10 rollback segments. You can query the V$WAITSTAT view to see whether there are any waits for rollback segments, and determine the number of rollback segments required.

SQL> SELECT CLASS, COUNT
  2  FROM V$WAITSTAT
  3  WHERE CLASS LIKE 'undo%';

CLASS                   COUNT
------------------ ----------
undo header                 2
undo block                  4
The initialization parameter TRANSACTIONS indicates the number of concurrent transactions you expect for the database. It does not limit the number of transactions. TRANSACTIONS_PER_ROLLBACK_SEGMENT specifies the number of concurrent transactions you expect each rollback segment to have to handle. The minimum number of rollback segments acquired at start-up is TRANSACTIONS / TRANSACTIONS_PER_ROLLBACK_SEGMENT. The
TRANSACTIONS_PER_ROLLBACK_SEGMENT parameter does not limit the number of transactions that can use a rollback segment; it determines the number of rollback segments that should be made online when starting up an instance. Set the MAXEXTENTS value of the rollback segments to a sufficient number. A transaction can fail if it cannot allocate enough extents; either the MAXEXTENTS value is reached, or there is no space available in the tablespace. Size the rollback segments to suit the most common transactions of your database. Rollback segments can be corrupted if Oracle cannot read blocks that belong to the rollback segment from the data file (a corrupted data file). Oracle marks the status of such rollback segments as NEEDS RECOVERY.
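The start-up arithmetic described above can be sketched in a hypothetical initialization file fragment:

```
# init.ora fragment (hypothetical values)
# Oracle acquires at least TRANSACTIONS / TRANSACTIONS_PER_ROLLBACK_SEGMENT
# = 40 / 4 = 10 rollback segments at start-up.
TRANSACTIONS = 40
TRANSACTIONS_PER_ROLLBACK_SEGMENT = 4
```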
Snapshot Too Old Error

A Snapshot too old error occurs when Oracle cannot produce a read-consistent view of the data. This error usually happens when a transaction commits after a long-running query has started, and the rollback information is overwritten or the rollback extents are de-allocated. Consider an example: User SCOTT has updated the EMP table and has not committed the changes. The old values of the rows updated by SCOTT are written to the rollback segment. When user JAKE queries the EMP table, Oracle uses the rollback segment to produce a read-consistent view of the table. Suppose that the query JAKE initiated was a long one; therefore, Oracle fetches the blocks in multiple iterations. User SCOTT can commit his transaction, and the rollback entries are marked committed. If another transaction then overwrites those rollback entries, JAKE's query will no longer be able to reconstruct the view of the EMP table as of the time the query started. This produces a Snapshot too old error. You can reduce the chances of generating a Snapshot too old error by creating rollback segments with a high MINEXTENTS value, larger extent sizes, and a high OPTIMAL value.
The initialization parameter MAX_ROLLBACK_SEGMENTS specifies the maximum number of rollback segments that can be made online.
Querying Rollback Information

The following dictionary views can be queried to obtain information about the rollback segments and transactions.
DBA_ROLLBACK_SEGS This view provides information about the rollback segments, their status, tablespace name, sizes, etc.

SQL> SELECT SEGMENT_NAME, TABLESPACE_NAME, INITIAL_EXTENT
  2  INI, NEXT_EXTENT, MIN_EXTENTS MIN, STATUS
  3  FROM DBA_ROLLBACK_SEGS;

SEGMENT_NAME  TABLES     INI NEXT_EXTENT MIN STATUS
------------- ------ ------- ----------- --- -------
SYSTEM        SYSTEM   57344       57344   2 ONLINE
RBS0          RBS     524288      524288   8 ONLINE
RBS1          RBS     524288      524288   8 ONLINE
RBS2          RBS     524288      524288   8 ONLINE
RBS3          RBS     524288      524288   8 ONLINE
RBS4          RBS     524288      524288   8 ONLINE
RBS5          RBS     524288      524288   8 ONLINE
RBS6          RBS     524288      524288   8 OFFLINE
V$ROLLNAME This view lists all online rollback segments. The USN is the rollback segment number, which can be used to join with the V$ROLLSTAT view.

SQL> SELECT * FROM V$ROLLNAME;

       USN NAME
---------- ------------
         0 SYSTEM
         1 RBS0
         2 RBS1
         3 RBS2
         4 RBS3
         5 RBS4
         6 RBS5
V$ROLLSTAT This view lists the rollback statistics. It is used for tuning rollback segments. You can join this view with the V$ROLLNAME view to get the rollback segment name. The view shows the segment size, OPTIMAL value, number of shrinks since instance start-up, number of active transactions, extents, status, etc.

SQL> SELECT * FROM V$ROLLSTAT
  2  WHERE USN = 1;

       USN    EXTENTS     RSSIZE     WRITES      XACTS
---------- ---------- ---------- ---------- ----------
      GETS      WAITS    OPTSIZE    HWMSIZE    SHRINKS
---------- ---------- ---------- ---------- ----------
     WRAPS    EXTENDS  AVESHRINK  AVEACTIVE
---------- ---------- ---------- ----------
STATUS              CUREXT     CURBLK
--------------- ---------- ----------
         1          8    4186112     152556          0
      1008          0    4194304    4186112          0
         0          0          0          0
ONLINE                   3         40
Summary
This chapter discussed the logical storage structures in detail. A data block is the smallest logical storage unit in Oracle. The data block overhead is the space used to store the block information and row information. The overhead consists of a common and variable header, a table directory, and a row directory. The rows are stored in the row data area, and the free space is the space available to accommodate new rows or the space available for the existing rows to expand. The free space can be managed by two parameters: PCTFREE and PCTUSED. PCTFREE determines the amount of free space that should be maintained in a block for future row expansion due to updates. When the used space in a block reaches the PCTFREE threshold, the block is removed from the free list. The block is added back to the free list when the used space drops below PCTUSED. The INITRANS and MAXTRANS parameters specify the
concurrent transactions that can access a block. INITRANS reserves transaction space for the specified number of transactions, and MAXTRANS specifies the maximum number of concurrent transactions for the block. Extents are logical storage units consisting of contiguous blocks. Sizes of extents are specified by the INITIAL, NEXT, and PCTINCREASE parameters. The minimum value of INITIAL should be two blocks. A segment consists of one or more extents, and there are four types of segments. Data segments store the table rows. Index segments store index keys. (The data and index segments used to store LOB or VARRAY data types are known as LOB segments and LOB index segments, respectively.) Temporary segments are used for sort operations, and rollback segments are used to store undo information. When the database is created, Oracle creates the SYSTEM rollback segment. You must create additional rollback segments in the database depending on the number of concurrent transactions. Oracle recommends having a separate tablespace for the rollback segments. The rollback segments are used in a circular fashion. More than one transaction can use one rollback segment concurrently. Oracle extends the rollback segments when required by the transaction. Oracle can add extents to a rollback segment as long as the number of extents stays below the MAXEXTENTS value and there is enough room in the tablespace. OPTIMAL specifies a size for the rollback segments such that when a rollback segment grows beyond OPTIMAL, Oracle de-allocates extents and shrinks the rollback segment to the OPTIMAL size. Rollback segments can be dropped when they are offline. DBA_EXTENTS and DBA_SEGMENTS are views that can be queried to get information on extents and segments. Rollback segment information can be queried from the DBA_ROLLBACK_SEGS, V$ROLLNAME, and V$ROLLSTAT views.
Key Terms

Before you take the exam, make sure you're familiar with the following terms:

data block
row chaining
row migration
data segment
index segment
rollback entries
public rollback segment
private rollback segment
temporary segment
Review Questions

1. Place the following logical storage structures in order—from the smallest logical storage unit to the largest.

A. Segment
B. Block
C. Tablespace
D. Extent

2. When a table is updated, where is the before-image information (which can be used for undoing the changes) stored?

A. Temporary segment
B. Redo log buffer
C. Undo buffer
D. Rollback segment

3. Which parameter specifies the number of transaction slots in a data block?

A. MAXTRANS
B. INITRANS
C. PCTFREE
D. PCTUSED

4. What happens if you create a rollback segment in the same tablespace where application data is stored? Choose the best answer.

A. The tablespace will be fragmented.
B. Performance improves, because when changes are made, undo information can be written to the same tablespace.
C. There should be a minimum of two data files associated with the tablespace when rollback segments are created.
D. None of the above is true.
5. Which storage parameter is applicable only to rollback segments?
   A. PCTINCREASE
   B. MAXEXTENTS
   C. TRANSACTIONS
   D. None of the above

6. Choose the statement used to manually de-allocate the extents used by a rollback segment.
   A. ALTER ROLLBACK SEGMENT R01 DEALLOCATE;
   B. ALTER ROLLBACK SEGMENT R01 DROP EXTENTS;
   C. ALTER ROLLBACK SEGMENT R01 SHRINK;
   D. ALTER ROLLBACK SEGMENT R01 SIZE 10K;

7. Which data dictionary view would you query to see the free extents in a tablespace?
   A. DBA_TABLESPACES
   B. DBA_FREE_SPACE
   C. DBA_EXTENTS
   D. DBA_SEGMENTS

8. What is the minimum number of extents a rollback segment can have?
   A. One
   B. Two
   C. Four
   D. Zero

9. Which portion of the data block stores information about the table having rows in this block?
   A. Common and variable header
   B. Row directory
   C. Table directory
   D. Row data
10. When does Oracle stop adding rows to a block?
    A. When free space reaches the PCTFREE threshold
    B. When row data reaches the PCTFREE threshold
    C. When free space drops below the PCTUSED threshold
    D. When row data drops below the PCTUSED threshold

11. What type of rollback segment is used if a tablespace containing an active transaction is taken offline?
    A. SYSTEM
    B. Private
    C. Public
    D. Deferred

12. How can you fix the MAXEXTENTS reached error for a rollback segment? Choose two.
    A. Add more space to the tablespace.
    B. Increase the MAXEXTENTS parameter of the rollback segment.
    C. Increase the value of INITIAL.
    D. Drop and re-create the rollback segments with a larger extent size.

13. What is the default value of PCTFREE?
    A. 40
    B. 0
    C. 100
    D. 10

14. Which data dictionary view can be queried to see the OPTIMAL value for a rollback segment?
    A. DBA_ROLLBACK_SEGS
    B. V$ROLLSTAT
    C. DBA_SEGMENTS
    D. V$ROLLNAME
15. What is row migration?
    A. A single row spread across multiple blocks
    B. Moving a table from one tablespace to another
    C. Storing a row in a different block when there is not enough room in the current block for the row to expand
    D. Deleting a row and adding it back to the same table

16. What can cause the Snapshot too old error?
    A. Smaller rollback extents
    B. Higher MAXEXTENTS value
    C. Larger rollback extents
    D. Higher OPTIMAL value

17. Which two initialization parameters are used by Oracle to determine the number of rollback segments required for an instance?
    A. ROLLBACK_SEGMENTS
    B. TRANSACTIONS
    C. TRANSACTIONS_PER_ROLLBACK_SEGMENT
    D. PROCESSES

18. Which of the following statements may require a temporary segment?
    A. CREATE TABLE
    B. CREATE INDEX
    C. UPDATE
    D. CREATE TABLESPACE
19. How does Oracle determine the extent sizes for a temporary segment?
    A. From the initialization parameters
    B. From the tables involved in the sort operation
    C. Using the default storage parameters for the tablespace
    D. The database block size

20. Fill in the blank: The parameter MAXTRANS specifies the maximum number of concurrent transactions per __________.
    A. Table
    B. Segment
    C. Extent
    D. Block
Answers to Review Questions

1. B, D, A, and C. A data block is the smallest logical storage unit in Oracle. An extent is a group of contiguous blocks. A segment consists of one or more extents. A segment can belong to only one tablespace. A tablespace can have many segments.

2. D. Before any DML operation, the undo information (before-image of data) is stored in the rollback segments. This information is used to undo the changes and to provide a read-consistent view of the data.

3. B. INITRANS specifies the number of transaction slots in a data block. A transaction slot is used by Oracle when the data block is being modified. INITRANS reserves space for the transactions in the block. MAXTRANS specifies the maximum number of concurrent transactions allowed in the block. The default INITRANS for a block in a data segment is 1, and the default for a block in an index segment is 2.

4. A. Because of the allocation and de-allocation of extents by the rollback segments, the application tablespace can become fragmented. You should create a separate tablespace for rollback segments and a separate tablespace for temporary segments.

5. D. OPTIMAL is the storage parameter that is applicable only to rollback segments. When the rollback segment grows beyond the value specified by OPTIMAL, it is shrunk to the OPTIMAL size when there are no active transactions in the segment. TRANSACTIONS is not a storage parameter; it is an initialization parameter.

6. C. You can also specify a size for the rollback segment. For example, ALTER ROLLBACK SEGMENT R01 SHRINK TO 100K would shrink the rollback segment to 100KB. If you do not specify the size, Oracle will shrink the rollback segment to the OPTIMAL size. If OPTIMAL is not set, Oracle will shrink the rollback segment to INITIAL + [NEXT × (MINEXTENTS – 1)].

7. B. DBA_FREE_SPACE shows the free extents in a tablespace. DBA_EXTENTS shows all the extents that are allocated to a segment.
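As a sketch, the free space per tablespace could be summarized from DBA_FREE_SPACE like this (standard dictionary view; the column aliases are arbitrary):

```sql
-- Summarize free extents per tablespace from DBA_FREE_SPACE.
SELECT tablespace_name,
       COUNT(*)          free_extents,
       SUM(bytes)/1024   free_kb
FROM   dba_free_space
GROUP  BY tablespace_name;
```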
8. B. A rollback segment should always have at least two extents. When creating a rollback segment, the MINEXTENTS value should be two or more.

9. C. The table directory portion of the block stores information about the table having rows in the block. The row directory stores information such as the row address and size of the actual rows stored in the row data area.

10. A. The PCTFREE and PCTUSED parameters are used to manage the free space in the block. Oracle inserts rows into a block until the free space falls below the PCTFREE threshold. PCTFREE is the amount of space reserved for future updates. Oracle considers adding more rows to the block only when the used space falls below the PCTUSED threshold.

11. D. Oracle creates a deferred rollback segment in the SYSTEM tablespace when a tablespace containing an active transaction is taken offline. The deferred rollback segment is removed automatically when it is no longer needed, that is, when the data tablespace is brought online and the transactions are either committed or rolled back.

12. B and D. The MAXEXTENTS reached error for a rollback segment can be fixed either by increasing the size of the extents or by changing the MAXEXTENTS value. You cannot change the INITIAL value for a segment unless you drop and re-create the rollback segment.

13. D. The default value of PCTFREE is 10, and the default for PCTUSED is 40.

14. B. The OPTIMAL value can be queried from the V$ROLLSTAT view. This view does not show the offline rollback segments.

15. C. Row migration is the movement of a row from one block to a new block. This occurs when a row is updated and its new size cannot fit into the free space of the block; Oracle moves the row to a new block, leaving a pointer in the old block to the new block. This problem can be avoided by either setting a higher PCTFREE value or specifying a larger block size at database creation.
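Migrated and chained rows can be detected with the ANALYZE command. A sketch, assuming a hypothetical ORDERS table and that the CHAINED_ROWS table has been created beforehand with the utlchain.sql script:

```sql
-- Gather statistics; CHAIN_CNT in USER_TABLES then counts chained
-- and migrated rows for the table.
ANALYZE TABLE orders COMPUTE STATISTICS;

SELECT table_name, chain_cnt
FROM   user_tables
WHERE  table_name = 'ORDERS';

-- Alternatively, list the ROWIDs of the affected rows into the
-- CHAINED_ROWS table created by utlchain.sql.
ANALYZE TABLE orders LIST CHAINED ROWS INTO chained_rows;
```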
16. A. Smaller rollback extents can cause the Snapshot too old error if there are long-running queries in the database.

17. B and C. TRANSACTIONS specifies the number of concurrent transactions. Oracle acquires TRANSACTIONS / TRANSACTIONS_PER_ROLLBACK_SEGMENT rollback segments at instance start-up. You can specify the private rollback segments to bring online by using the ROLLBACK_SEGMENTS parameter.

18. B. Operations that require a sort may need a temporary segment (when the sort operation cannot be completed in the memory area specified by SORT_AREA_SIZE). Queries that use DISTINCT, GROUP BY, ORDER BY, UNION, INTERSECT, or MINUS clauses also need a sort of the result set.

19. C. The default storage parameters for the tablespace determine the extent sizes for temporary segments.

20. D. MAXTRANS specifies the maximum allowed concurrent transactions per block. Oracle needs transaction space for each concurrent transaction in the block’s variable header. You can pre-allocate space by specifying INITRANS.
Chapter 7
Managing Tables, Indexes, and Constraints

ORACLE8i ARCHITECTURE AND ADMINISTRATION EXAM OBJECTIVES OFFERED IN THIS CHAPTER:

Managing Tables
Create tables using appropriate storage settings
Control the space used by tables
Analyze tables to check integrity and migration
Retrieve information about tables from the data dictionary
Convert between different formats of ROWID
Managing Indexes
List different types of indexes and their uses
Create b-tree and bitmap indexes
Reorganize indexes
Drop indexes
Get index information from the data dictionary
Maintaining Data Integrity
Implement data integrity
Maintain integrity constraints
Obtain constraint information from the data dictionary
Exam objectives are subject to change at any time without prior notice and at Oracle’s sole discretion. Please visit Oracle’s Training and Certification Web site (http://education.oracle.com/certification/index.html) for the most current exam objectives listing.
The previous chapters have discussed Oracle’s architecture: the physical and logical structures of the database. Data is stored in Oracle as rows and columns using a table. This chapter covers the options available in creating tables, how to quickly retrieve data by using indexes, and how the Oracle database can enforce business rules by using integrity constraints.
Managing Tables
You can think of a table as a spreadsheet having column headings and many rows of information. Data is stored in the Oracle database using tables. A schema, or a user, in the database owns the table. The table columns have a defined data type; the data stored in the columns must satisfy the characteristics of the column. The previous chapters discussed the logical storage structures and parameters; let’s see how these structures relate to a table.
Creating Tables

A table is created using the CREATE TABLE command. You can create a table under the username used to connect to the database or, with proper privileges, you can create a table under another username. A database user can be referred to as a schema, or as an owner, when the user owns objects in the database. The simplest form of creating a table is as follows:

CREATE TABLE ORDERS (
    ORDER_NUM   NUMBER,
    ORDER_DATE  DATE,
    PRODUCT_CD  VARCHAR2 (10),
    QUANTITY    NUMBER (10,3),
    STATUS      CHAR);
ORDERS is the table name; the columns in the table are specified in parentheses, separated by commas. The table is created under the username used to connect to the database; to create the table under another schema, you need to qualify the table with the schema name. For example, if you want to create the ORDERS table as being owned by SCOTT, create the table by using CREATE TABLE SCOTT.ORDERS (… …). A column name and a data type identify each column. For certain data types, you can specify a maximum width. You can specify any Oracle built-in data type or user-defined data type for the column definition. When specifying user-defined data types, the user-defined type must exist before creating the table. Oracle8i has three categories of built-in data types: scalar, collection, and relationship. Collection and relationship data types are used for the object-relational functionality of Oracle8i. Table 7.1 lists the built-in scalar data types in Oracle.

TABLE 7.1: Oracle Built-in Scalar Data Types

CHAR (<size>)
    Fixed-length character data with length in bytes specified inside parentheses. Data is space padded to fit the column width. Size defaults to 1 if not defined. Maximum is 2000 bytes.

VARCHAR (<size>)
    Same as VARCHAR2.

VARCHAR2 (<size>)
    Variable-length character data. Maximum allowed length is specified in parentheses. You must specify a size; there is no default value. Maximum is 4000 bytes.

NCHAR (<size>)
    Same as CHAR; stores National Language Support (NLS) character data.

NVARCHAR2 (<size>)
    Same as VARCHAR2; stores NLS character data.

LONG
    Stores variable-length character data up to 2GB. Use the CLOB or NCLOB data types instead. Provided in Oracle8i for backward compatibility. A table can have only one LONG column.

NUMBER (<precision>, <scale>)
    Stores fixed- and floating-point numbers. You can optionally specify a precision (total number of digits, including decimals) and a scale (digits after the decimal point). The default is 38 digits of precision, and the valid range is between 1.0 × 10^-130 and 9.999…9 × 10^125.

DATE
    Stores date data. Internally holds century, year, month, day, hour, minute, and second. Can be displayed in various formats.

RAW (<size>)
    Variable-length data type used to store unstructured data, without a character set conversion. Provided for backward compatibility. Use BLOB or BFILE instead.

LONG RAW
    Same as RAW; can store up to 2GB of binary data.

BLOB
    Stores up to 4GB of unstructured binary data.

CLOB
    Stores up to 4GB of character data.

NCLOB
    Stores up to 4GB of NLS character data.

BFILE
    Stores unstructured binary data in operating system files outside the database.

ROWID
    Stores binary data representing the physical address of a table’s row. Occupies 10 bytes.

UROWID
    Stores binary data representing any type of row address: physical, logical, or foreign. Up to 4000 bytes.
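As a quick illustration, several of these scalar types can appear together in one definition. The table and column names below are hypothetical, not from the chapter’s examples:

```sql
-- Hypothetical CUSTOMER_NOTES table exercising several scalar types.
CREATE TABLE CUSTOMER_NOTES (
    NOTE_ID     NUMBER (12),       -- fixed-point number, up to 12 digits
    CUST_CODE   CHAR (6),          -- fixed length, space padded
    SUBJECT     VARCHAR2 (100),    -- variable length, max 100 bytes
    CREATED     DATE,              -- date and time to the second
    BODY        CLOB,              -- character data up to 4GB
    ATTACHMENT  BLOB);             -- binary data up to 4GB
```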
Collection types are used to represent more than one element, such as an array. There are two collection data types: VARRAY and TABLE. Elements in the VARRAY data type are ordered and have a maximum limit. Elements in a TABLE data type (nested table) are not ordered, and there is no upper limit to the number of elements, unless restricted by available resources. REF is the relationship data type, which defines a relationship with other objects by using a reference. It actually stores pointers to data stored in different object tables.
Specifying Storage

If you create a table without specifying the storage parameters and tablespace, the table will be created in the default tablespace of the user, and the storage parameters used will be the default parameters specified for the tablespace. It is always better to estimate the size of the table and specify appropriate storage parameters when creating the table. If the table is too large, you might need to consider partitioning (discussed later) or creating the table in a separate tablespace. This helps to manage the table.

Oracle allocates a segment to the table when the table is created. This segment will have the number of extents specified by the storage parameter MINEXTENTS. Oracle allocates new extents to the table as required. Though you can have an unlimited number of extents for a segment, a little planning can improve the performance of the table. Having numerous extents affects operations on the table, such as when the table is truncated or full table scans are performed. A larger number of extents may cause additional I/Os in the data file, and therefore may have an effect on performance.

To create the ORDERS table using explicit storage parameters in the USER_DATA tablespace:

CREATE TABLE JAKE.ORDERS (
    ORDER_NUM   NUMBER,
    ORDER_DATE  DATE,
    PRODUCT_CD  VARCHAR2 (10),
    QUANTITY    NUMBER (10,3),
    STATUS      CHAR)
TABLESPACE USER_DATA
PCTFREE 5 PCTUSED 75
INITRANS 1 MAXTRANS 255
STORAGE (INITIAL 512K NEXT 512K PCTINCREASE 0
         MINEXTENTS 1 MAXEXTENTS 100
         FREELISTS 1 FREELIST GROUPS 1
         BUFFER_POOL KEEP);
The table will be owned by JAKE and will be created in the USER_DATA tablespace (JAKE should have appropriate space quota privileges in the tablespace; privileges and space quotas are discussed in Chapter 8, “Managing Users and Security”). None of the storage parameters are mandatory to create a table; Oracle assigns default values if you omit them. Let’s discuss the clauses used in the table creation.

TABLESPACE specifies the location where the table should be created. If you omit the STORAGE clause or any parameters in the STORAGE clause, the defaults will be taken from the tablespace’s default storage (if applicable).

PCTFREE and PCTUSED are block storage parameters. PCTFREE specifies the amount of free space that should be reserved in each block of the table for future updates. In this example, you specify a low PCTFREE for the ORDERS table, because there are not many updates to the table that increase the row length. PCTUSED specifies when the block should be considered for inserting new rows once the PCTFREE threshold is reached. Here you specified 75, so when the used space falls below 75 percent (due to updates or deletes), new rows will be added to the block.

INITRANS and MAXTRANS specify the number of concurrent transactions that can update each block of the table. Oracle reserves space in the block header for the INITRANS number of concurrent transactions. For each additional concurrent transaction, Oracle allocates space from the free space, which has the overhead of dynamically allocating transaction entry space. If the block is full and no space is available, the transaction waits until transaction entry space is available. MAXTRANS specifies the maximum number of concurrent transactions that can touch a block. This prevents unnecessarily allocating transaction space in the block header, because the transaction space allocated is never reclaimed. In most cases, the Oracle defaults of INITRANS 1 and MAXTRANS 255 are sufficient.
The STORAGE clause specifies the extent sizes, free lists, and buffer pool values. In Chapter 5, “Logical and Physical Database Structures,” we discussed the INITIAL, NEXT, MINEXTENTS, MAXEXTENTS, and PCTINCREASE parameters. These five parameters control the size of the extents allocated to the table. If the table is created on a locally managed uniform extent tablespace, these storage parameters are ignored. FREELIST GROUPS specifies the number of free list groups that should be created for the table. The default and minimum value is 1. Each free list group uses one data block (that’s why the minimum value for INITIAL is two database blocks) known as the segment header, which has information about the extents, free blocks, and high-water mark of the table.
FREELISTS specifies the number of lists for each free list group. The default and minimum value is 1. The free list manages the list of blocks that are available to add new rows. A block is removed from the free list if the free space in the block is below PCTFREE. The block remains out of the free list as long as the used space is above PCTUSED. Create more free lists if the volume of inserts to the table is high. An appropriate number would be the number of concurrent transactions performing inserts to the table. Oracle recommends having FREELISTS and INITRANS be the same value. The FREELIST GROUPS parameter is mostly used for parallel server configurations, where you can specify a group for each instance.

The BUFFER_POOL parameter of the STORAGE clause specifies the area of the database buffer cache used to keep the blocks of the table when they are read from the data file for a query, update, or delete. There are three buffer pools: KEEP, RECYCLE, and DEFAULT. The default value is DEFAULT. Specify KEEP if the table is small and is frequently accessed. The blocks in the KEEP pool are always available in the SGA, so I/O will be faster. The blocks assigned to the RECYCLE buffer pool are removed from memory as soon as they are not needed. Specify RECYCLE for large tables or tables that are seldom accessed. If you do not specify KEEP or RECYCLE, the blocks are assigned to the DEFAULT pool, where they are aged out using an LRU algorithm.
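As a sketch of how the pools might be assigned, using hypothetical table names and the BUFFER_POOL syntax shown above:

```sql
-- Small, frequently accessed look-up table: keep its blocks in the KEEP pool.
CREATE TABLE STATE_CODES (
    CODE  CHAR (2),
    NAME  VARCHAR2 (40))
STORAGE (BUFFER_POOL KEEP);

-- Large, rarely scanned history table: let its blocks leave memory quickly.
CREATE TABLE ORDER_HISTORY (
    ORDER_NUM  NUMBER,
    ARCHIVED   DATE)
STORAGE (BUFFER_POOL RECYCLE);
```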
Creating from a Query

A table can be created using existing tables or views by specifying a subquery instead of defining the columns. The subquery can refer to more than one table or view. The table will be created with the rows returned from the subquery. You can specify new column names for the table, but Oracle derives the data type and maximum width based on the query result—you cannot specify the data type with this method. You can specify the storage parameters for tables created by using a subquery. For example, let’s create a new table from the ORDERS table for the orders that are accepted. Notice that the new column names are specified.

CREATE TABLE ACCEPTED_ORDERS
    (ORD_NUMBER, ORD_DATE, PRODUCT_CD, QTY)
TABLESPACE USERS PCTFREE 0
STORAGE (INITIAL 128K NEXT 128K PCTINCREASE 0)
AS
SELECT ORDER_NUM, ORDER_DATE, PRODUCT_CD, QUANTITY
FROM ORDERS
WHERE STATUS = 'A';
Partitioning

When tables are very large, you can manage them better by using partitioning. Partitioning is breaking a large table into manageable pieces based on the values in a column (or multiple columns) known as the partition key. If you have a very large table spread across many data files, and one disk fails, you have to recover the entire table. However, if the table is partitioned, you need to recover only that partition. SQL statements can access the required partition(s) rather than reading the entire table. There are three partitioning methods available:

Range partitioning  You can create a range partition where the partition key values are in a range—for example, a transaction table can be partitioned on the transaction date, and you can create a partition for each month or each quarter. The partition column list can be one or more columns.

Hash partitioning  Hash partitions are more appropriate when you do not know how much data will be in a range or whether the sizes of the partitions vary. Hash partitioning uses a hash algorithm on the partitioning columns. The number of partitions should preferably be specified as a power of two (such as 2, 4, 8, 16, etc.).

Composite partitioning  This method uses the range partition method to create partitions and the hash partition method to create subpartitions.

The logical attributes for all partitions remain the same (such as column name, data type, constraints, etc.), but each partition can have its own physical attributes (such as tablespace, storage parameters, etc.). Each partition in the partitioned table is allocated a segment. You can place these partitions on different tablespaces, which can help you balance I/O by placing the data files appropriately on disk. Also, by having the partitions in different tablespaces, you can make a partition tablespace read-only. The storage parameters can be specified at the table level or for each partition.
Partitioned tables cannot have any columns with LONG or LONG RAW data types.
Range-Partitioned Table A range-partitioned table is created by specifying the PARTITION BY RANGE clause in the CREATE TABLE command. As stated earlier, range partitioning is suitable for tables that have column(s) with a range of values. For example,
your transaction table might have the transaction date column, with which you can create a partition for every month or for a quarter, etc. Consider the following example:

CREATE TABLE ORDER_TRANSACTION (
    ORD_NUMBER  NUMBER(12),
    ORD_DATE    DATE,
    PROD_ID     VARCHAR2 (15),
    QUANTITY    NUMBER (15,3))
PARTITION BY RANGE (ORD_DATE)
(PARTITION FY1999Q4 VALUES LESS THAN
    (TO_DATE('01012000','MMDDYYYY'))
    TABLESPACE ORD_1999Q4,
 PARTITION FY2000Q1 VALUES LESS THAN
    (TO_DATE('04012000','MMDDYYYY'))
    TABLESPACE ORD_2000Q1
    STORAGE (INITIAL 500M NEXT 500M) INITRANS 2 PCTFREE 0,
 PARTITION FY2000Q2 VALUES LESS THAN
    (TO_DATE('07012000','MMDDYYYY'))
    TABLESPACE ORD_2000Q2,
 PARTITION FY2000Q3 VALUES LESS THAN
    (TO_DATE('10012000','MMDDYYYY'))
    TABLESPACE ORD_2000Q3
    STORAGE (INITIAL 10M NEXT 10M))
STORAGE (INITIAL 200M NEXT 200M PCTINCREASE 0 MAXEXTENTS 4096)
NOLOGGING;
Here a range-partitioned table named ORDER_TRANSACTION is created. PARTITION BY RANGE specifies that the table should be range partitioned; the partition column is provided in parentheses (more than one column should be separated by commas), and the partition specifications are defined. Each partition specification begins with the keyword PARTITION. You can optionally provide a name for the partition. The VALUES LESS THAN clause defines the values for the partition columns that should be in the partition. In the example, each partition is created on a different tablespace; partitions FY1999Q4 and FY2000Q2 inherit the storage parameter values from the table definition, whereas FY2000Q1 and FY2000Q3 have the storage parameters explicitly defined. Records with ORD_DATE prior to 01-Jan-2000 will be stored in partition FY1999Q4; since you did not specify a partition for records with ORD_DATE of 01-Oct-2000 or later, Oracle rejects those rows. A NULL value is treated as greater than all other values. If the partition key can have NULL values or records with a higher ORD_DATE than the highest upper range in the partition specification list, you must create a partition for the upper range. The
MAXVALUE parameter specifies that the partition bound is infinite. In this example, an upper-bound partition can be specified as follows:

CREATE TABLE ORDER_TRANSACTION (… …)
PARTITION BY RANGE (ORD_DATE)
(PARTITION FY1999Q4 VALUES LESS THAN
    (TO_DATE('01012000','MMDDYYYY'))
    TABLESPACE ORD_1999Q4,
 … … …
 PARTITION FY2999Q4 VALUES LESS THAN (MAXVALUE)
    TABLESPACE ORD_2999Q4 )
STORAGE (INITIAL 200M NEXT 200M PCTINCREASE 0 MAXEXTENTS 4096)
NOLOGGING;
Hash-Partitioned Table

A hash-partitioned table is created by specifying the PARTITION BY HASH clause in the CREATE TABLE command. Hash partitioning is suitable for any large table, to make use of Oracle8i’s performance improvements, even if you do not have column(s) with a range of values. It is also suitable when you do not know how many rows will be in the table. Choose a column with unique values or many distinct values for the partition key.

The following example creates a hash-partitioned table with four partitions. The partitions are created in tablespaces DOC101, DOC102, and DOC103. Since there are four partitions and only three tablespaces listed, Oracle reuses the DOC101 tablespace for the fourth partition. The partition names are created by Oracle as SYS_XXXX. Physical attributes are specified at the table level only.

CREATE TABLE DOCUMENTS1 (
    DOC_NUMBER  NUMBER(12),
    DOC_TYPE    VARCHAR2 (20),
    CONTENTS    VARCHAR2 (600))
PARTITION BY HASH (DOC_NUMBER, DOC_TYPE)
PARTITIONS 4 STORE IN (DOC101, DOC102, DOC103)
STORAGE (INITIAL 64K NEXT 64K PCTINCREASE 0 MAXEXTENTS 4096);
The following example creates a hash-partitioned table with named partitions in the tablespaces.

CREATE TABLE DOCUMENTS2 (
    DOC_NUMBER  NUMBER(12),
    DOC_TYPE    VARCHAR2 (20),
    CONTENTS    VARCHAR2 (600))
PARTITION BY HASH (DOC_NUMBER, DOC_TYPE)
( PARTITION DOC201 TABLESPACE DOC201,
  PARTITION DOC202 TABLESPACE DOC202,
  PARTITION DOC203 TABLESPACE DOC203,
  PARTITION DOC204 TABLESPACE DOC204 )
STORAGE (INITIAL 64K NEXT 64K PCTINCREASE 0 MAXEXTENTS 4096);
Composite-Partitioned Table

Composite partitions have range partitions and hash subpartitions. Only subpartitions are physically created on the disk (tablespace); partitions are logical representations only. Composite partitioning gives the flexibility of range and hash for tables with a smaller range of values. In the following example, the table is range partitioned on the MAKE_YEAR column; each partition is subdivided based on MODEL into four subpartitions on four tablespaces. Each tablespace will have one subpartition from each partition, that is, three subpartitions per tablespace, for a total of 12 subpartitions.

CREATE TABLE CARS (
    MAKE_YEAR  NUMBER(4),
    MODEL      VARCHAR2 (30),
    MANUFACTR  VARCHAR2 (50),
    QUANTITY   NUMBER)
PARTITION BY RANGE (MAKE_YEAR)
SUBPARTITION BY HASH (MODEL)
SUBPARTITIONS 4 STORE IN (TSMK1, TSMK2, TSMK3, TSMK4)
( PARTITION M1999 VALUES LESS THAN (2000),
  PARTITION M2000 VALUES LESS THAN (2001),
  PARTITION M9999 VALUES LESS THAN (MAXVALUE))
STORAGE (INITIAL 64K NEXT 64K PCTINCREASE 0 MAXEXTENTS 4096);
The following example shows how to name the subpartitions and store each subpartition in a different tablespace. Subpartitions for partition M1999 have explicit storage parameters specified.

CREATE TABLE CARS2 (
    MAKE_YEAR  NUMBER(4),
    MODEL      VARCHAR2 (30),
    MANUFACTR  VARCHAR2 (50),
    QUANTITY   NUMBER)
PARTITION BY RANGE (MAKE_YEAR)
SUBPARTITION BY HASH (MODEL) SUBPARTITIONS 4
( PARTITION M1999 VALUES LESS THAN (2000)
    STORAGE (INITIAL 128K NEXT 128K)
    ( SUBPARTITION M1999_SP1 TABLESPACE TS991,
      SUBPARTITION M1999_SP2 TABLESPACE TS992,
      SUBPARTITION M1999_SP3 TABLESPACE TS993,
      SUBPARTITION M1999_SP4 TABLESPACE TS994 ),
  PARTITION M2000 VALUES LESS THAN (2001)
    ( SUBPARTITION M2000_SP1 TABLESPACE TS001,
      SUBPARTITION M2000_SP2 TABLESPACE TS002,
      SUBPARTITION M2000_SP3 TABLESPACE TS003,
      SUBPARTITION M2000_SP4 TABLESPACE TS004 ),
  PARTITION M9999 VALUES LESS THAN (MAXVALUE)
    ( SUBPARTITION M2999_SP1 TABLESPACE TS011,
      SUBPARTITION M2999_SP2 TABLESPACE TS012,
      SUBPARTITION M2999_SP3 TABLESPACE TS013,
      SUBPARTITION M2999_SP4 TABLESPACE TS014 ))
STORAGE (INITIAL 64K NEXT 64K PCTINCREASE 0 MAXEXTENTS 4096);
Using Other Create Clauses

The other clauses you can specify while creating a table are listed here. These clauses help to manage various types of operations on the table.

LOGGING/NOLOGGING

LOGGING is the default for the table and tablespace, but if the tablespace is defined as NOLOGGING, then the table uses NOLOGGING. LOGGING specifies that table creation and direct-load inserts should be logged to the redo log files. Creating the table by using a subquery and the NOLOGGING clause can improve the table creation time dramatically for large tables. Because the table creation, initial data population (using a subquery), and direct-load inserts are not logged to the redo log files when using the NOLOGGING clause, you must back up the table (or better yet, the entire tablespace) after such operations are performed. Media recovery will not create or load tables created with the NOLOGGING attribute. You can also specify a separate LOGGING or NOLOGGING attribute for indexes and LOB storage of the table, independent of the table’s attribute. The following example creates a table with the NOLOGGING clause.

CREATE TABLE MY_ORDERS (… …)
TABLESPACE USER_DATA
STORAGE (… …)
NOLOGGING;
Copyright ©2000 SYBEX , Inc., Alameda, CA
www.sybex.com
PARALLEL/NOPARALLEL NOPARALLEL is the default. PARALLEL causes the table creation (if created using a subquery) and the DML statements on the table to execute in parallel. Normally, a single server process performs operations on tables in a transaction (serial operation). When the PARALLEL attribute is set, Oracle uses multiple processes to complete the operation for a full table scan. You can specify a degree for the parallelism; if you do not, Oracle calculates the optimum degree of parallelism. The parameter PARALLEL_THREADS_PER_CPU determines the degree of parallelism per CPU; the default is usually 2. If you do not specify the degree, Oracle calculates the degree based on this parameter and the number of CPUs available. The following example creates a table by using a subquery. The table creation will not be logged in the redo log file, and multiple processes will query the JAKE.ORDERS table and create the MY_ORDERS table.

CREATE TABLE MY_ORDERS (… …)
TABLESPACE USER_DATA
STORAGE (… …)
NOLOGGING
PARALLEL
AS SELECT * FROM JAKE.ORDERS;
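The default-degree calculation described above can be sketched as simple arithmetic. This is a hypothetical illustration (`default_parallel_degree` is our own name, not an Oracle API), and the real calculation considers additional factors beyond the two shown here.

```python
# Sketch of how Oracle derives a default degree of parallelism when none
# is specified: PARALLEL_THREADS_PER_CPU (default 2) times the number of
# available CPUs. Illustrative only.
def default_parallel_degree(cpu_count, parallel_threads_per_cpu=2):
    return cpu_count * parallel_threads_per_cpu

# A 4-CPU server with the default parameter value yields a degree of 8.
print(default_parallel_degree(4))  # 8
```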
CACHE/NOCACHE NOCACHE is the default. For small look-up tables that are frequently accessed, you can specify the CACHE clause to have the blocks retrieved by a full table scan placed at the most recently used (MRU) end of the LRU list in the buffer cache; the blocks are then not aged out of the buffer cache immediately. The default behavior (NOCACHE) is to place the blocks from a full table scan at the tail (least recently used) end of the LRU list, where they are moved out of the list as soon as another process or query needs the buffers for other blocks.
Altering Tables You can alter a table by using the ALTER TABLE command to change its storage settings, add or drop columns, or modify the column characteristics such as default value, data type, and length. You can also move the table from one tablespace to another, disable constraints and triggers, and change the clauses such as PARALLEL, CACHE, and LOGGING. In this section, we will discuss altering the storage and space used by the table. You cannot change the STORAGE parameters INITIAL and MINEXTENTS by using the ALTER TABLE command. You can change NEXT, PCTINCREASE, MAXEXTENTS, FREELISTS, and FREELIST GROUPS. The changes will not
affect the extents that are already allocated. When NEXT is changed, the next extent allocated will have the new size. When PCTINCREASE is altered, the next extent allocated will be based on the current value of NEXT, but further extent sizes will be calculated with the new PCTINCREASE value. Here is an example of changing the storage parameters:

ALTER TABLE ORDERS STORAGE
(NEXT 512K PCTINCREASE 0 MAXEXTENTS UNLIMITED);
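The interaction of NEXT and PCTINCREASE can be shown as a worked example. This is a sketch under stated assumptions (`next_extent_sizes` is our own name; Oracle additionally rounds extent sizes to whole blocks, which the sketch ignores):

```python
def next_extent_sizes(next_kb, pctincrease, count):
    """Sizes (KB) of the next `count` extents: the first new extent uses
    NEXT as-is; each later extent grows by PCTINCREASE percent."""
    sizes = []
    size = float(next_kb)
    for _ in range(count):
        sizes.append(int(size))
        size *= 1 + pctincrease / 100
    return sizes

print(next_extent_sizes(512, 0, 3))   # [512, 512, 512]
print(next_extent_sizes(512, 50, 3))  # [512, 768, 1152]
```

With PCTINCREASE 0, as in the ALTER TABLE above, every future extent stays at the NEXT size; a nonzero PCTINCREASE compounds from the second new extent onward.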
Allocating and De-allocating Extents

New extents can be allocated to a table or a partition manually by using the ALTER TABLE command. You can optionally specify a filename if you want to allocate the extent in a particular data file. You can specify the size of the extent in bytes (K or M can be used to specify the size in KB or MB). If you omit the size of the extent, Oracle uses the NEXT size. For example, to manually allocate the next extent:

ALTER TABLE ORDERS ALLOCATE EXTENT;
To specify the size of the extent to be allocated: ALTER TABLE ORDERS ALLOCATE EXTENT SIZE 200K;
To specify the data file where the extent should be allocated:

ALTER TABLE ORDERS ALLOCATE EXTENT SIZE 200K
DATAFILE 'C:\ORACLE\ORADATA\USER_DATA01.DBF';
The data file should belong to the tablespace where the table or the partition resides. Sometimes the storage space estimated for the table may be too high. The table may have been created with large extent sizes, or if PCTINCREASE is not set properly, the space allocated to the table may be excessive. Once allocated, that space cannot be used by any other table or object. You can free up such unused space by manually de-allocating the unused blocks above the high-water mark (HWM) of the table. The HWM indicates the historically highest amount of used space in a segment. The HWM moves only in the forward direction; that is, when new blocks are used to store data in a table, the HWM is increased. When rows are deleted, even if a block becomes completely empty, the HWM is not decreased. The HWM is reset only when you TRUNCATE the table.
Copyright ©2000 SYBEX , Inc., Alameda, CA
www.sybex.com
Managing Tables
223
The UNUSED_SPACE procedure of the DBMS_SPACE package can be used to find the HWM of the segment. The following listing shows the parameters for the procedure.

PROCEDURE UNUSED_SPACE
 Argument Name                  Type      In/Out Default?
 ------------------------------ --------- ------ --------
 SEGMENT_OWNER                  VARCHAR2  IN
 SEGMENT_NAME                   VARCHAR2  IN
 SEGMENT_TYPE                   VARCHAR2  IN
 TOTAL_BLOCKS                   NUMBER    OUT
 TOTAL_BYTES                    NUMBER    OUT
 UNUSED_BLOCKS                  NUMBER    OUT
 UNUSED_BYTES                   NUMBER    OUT
 LAST_USED_EXTENT_FILE_ID       NUMBER    OUT
 LAST_USED_EXTENT_BLOCK_ID      NUMBER    OUT
 LAST_USED_BLOCK                NUMBER    OUT
 PARTITION_NAME                 VARCHAR2  IN     DEFAULT

You can execute the procedure by specifying the owner, type (table, index, table partition, table subpartition, etc.), and name. The HWM is TOTAL_BYTES - UNUSED_BYTES. Here is an example:

SQL> variable vtotalblocks number
SQL> variable vtotalbytes number
SQL> variable vunusedblocks number
SQL> variable vunusedbytes number
SQL> variable vlastusedefid number
SQL> variable vlastusedebid number
SQL> variable vlastusedblock number
SQL> EXECUTE DBMS_SPACE.UNUSED_SPACE ('JOHN',
>      'ORDERS', 'TABLE', :vtotalblocks, :vtotalbytes,
>      :vunusedblocks, :vunusedbytes,
>      :vlastusedefid, :vlastusedebid, :vlastusedblock);

PL/SQL procedure successfully completed.

SQL> select :vtotalbytes, :vunusedbytes from dual;

:VTOTALBYTES :VUNUSEDBYTES
------------ -------------
     1224288       8507904
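The HWM arithmetic above can be expressed directly. A minimal sketch (`high_water_mark_bytes` is our own name; the byte counts below are made-up figures, not the output shown above):

```python
def high_water_mark_bytes(total_bytes, unused_bytes):
    # HWM position in bytes = TOTAL_BYTES - UNUSED_BYTES, using the OUT
    # values returned by DBMS_SPACE.UNUSED_SPACE.
    return total_bytes - unused_bytes

# Hypothetical figures: 10MB allocated, 2MB never used above the HWM.
print(high_water_mark_bytes(10_485_760, 2_097_152))  # 8388608
```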
The DEALLOCATE UNUSED clause of the ALTER TABLE command can be used to free up the unused space allocated to the table. For example, to free up all blocks above the HWM, use this statement: ALTER TABLE ORDERS DEALLOCATE UNUSED;
You can use the KEEP parameter in the UNUSED clause to specify the number of blocks you wish to keep above the HWM after de-allocation. For example, to have 100KB of free space available for the table above the HWM, specify the following: ALTER TABLE ORDERS DEALLOCATE UNUSED KEEP 100K;
If you do not specify KEEP, and the HWM is below MINEXTENTS, Oracle keeps MINEXTENTS extents. If you specify KEEP, and the HWM is below MINEXTENTS, Oracle adjusts MINEXTENTS to match the number of extents kept. If the HWM is less than the size of INITIAL, and KEEP is specified, Oracle adjusts the size of INITIAL. Table 7.2 shows some examples of freeing up space. Let's assume that the table is created with (INITIAL 1024K NEXT 1024K PCTINCREASE 0 MINEXTENTS 4) and now the table has 10 extents (total size 10,240KB).

TABLE 7.2   DEALLOCATE Clause Examples

HWM      DEALLOCATE Clause   Resulting Size  Extent Count
-------  ------------------  --------------  ---------------------------------------------------
7000KB   UNUSED;             7000KB          Seven; the seventh extent will be split at the HWM.
200KB    UNUSED;             4096KB          Four, because the KEEP clause is not specified.
200KB    UNUSED KEEP 100K;   300KB           One; the initial extent is split at the HWM.
2000KB   UNUSED KEEP 0K;     2000KB          Two; the second extent is split at the HWM.
When a full table scan is performed, Oracle reads each block up to the table’s high-water mark.
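The resulting sizes in Table 7.2 follow a simple rule, sketched below under the table's stated assumptions (INITIAL 1024K, NEXT 1024K, PCTINCREASE 0, MINEXTENTS 4; `size_after_deallocate` is our own name, not an Oracle API):

```python
def size_after_deallocate(hwm_kb, keep_kb=None, minextents=4, extent_kb=1024):
    """Resulting segment size (KB) after ALTER TABLE ... DEALLOCATE UNUSED.
    Without KEEP, Oracle frees everything above the HWM but retains at
    least MINEXTENTS extents; with KEEP, it retains HWM + KEEP and
    adjusts MINEXTENTS or INITIAL downward as needed."""
    if keep_kb is None:
        return max(hwm_kb, minextents * extent_kb)
    return hwm_kb + keep_kb

print(size_after_deallocate(7000))              # 7000
print(size_after_deallocate(200))               # 4096
print(size_after_deallocate(200, keep_kb=100))  # 300
print(size_after_deallocate(2000, keep_kb=0))   # 2000
```

The four calls reproduce the four rows of Table 7.2.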
The TRUNCATE command is used to delete all rows of the table and to reset the HWM of the table. When using TRUNCATE, you have the option of keeping the space allocated to the table or de-allocating the extents. By default, Oracle de-allocates all the extents allocated above MINEXTENTS of the table and the associated indexes. To preserve the space allocated, you must specify the REUSE clause (DROP is the default), as in this example:

TRUNCATE TABLE ORDERS REUSE STORAGE;
To truncate a table, you must disable all referential integrity constraints.
Reorganizing Tables

The MOVE clause of the ALTER TABLE command can be used on a nonpartitioned table to reorganize it or to move it from one tablespace to another. A table is reorganized to reduce the number of extents (by specifying larger extent sizes) or to prevent row migration. When you move a table, Oracle creates a new segment for the table, copies the data, and drops the old segment. The new segment can be in the same tablespace or in a different tablespace. Since the old segment is dropped only after the new segment is created, you need to make sure you have sufficient space in the tablespace if you are not moving the table to a different tablespace. The MOVE clause can specify a new tablespace, new storage parameters for the table, new free space management parameters, and new transaction entry parameters. You can use the NOLOGGING clause to speed up the reorganization by not writing the changes to the redo log file. The following example moves the ORDERS table to another tablespace named NEW_DATA. New storage parameters are specified, and the operation is not logged in the redo log files (NOLOGGING).

ALTER TABLE ORDERS MOVE
TABLESPACE NEW_DATA
STORAGE (INITIAL 50M NEXT 5M PCTINCREASE 0)
PCTFREE 0 PCTUSED 50 INITRANS 2
NOLOGGING;
Prior to Oracle8i, a table was reorganized using the export-drop-import method. Queries are allowed on the table while the move operation is in progress, but no insert, update, or delete operations are allowed. The granted permissions on the table are retained.
Analyzing Tables

A table can be analyzed to verify the blocks in the table, to find the chained and migrated rows of a table, and to collect statistics on the table. You can specify the PARTITION or SUBPARTITION clause to analyze a specific partition or subpartition of the table.
Validating Structure

Due to hardware problems, disk errors, or software bugs, some blocks may become corrupted (logical corruption). Oracle will return a corruption error only when the rows are accessed. (The Export utility identifies logical corruption in tables, because it does a full table scan.) The ANALYZE command can be used to validate the structure or check the integrity of the blocks allocated to the table. If Oracle finds blocks or rows that are not readable, an error is returned. The ROWIDs of the bad rows are inserted into a table. You can specify the name of the table where you want the ROWIDs to be saved; by default, Oracle looks for a table named INVALID_ROWS. The table can be created using the script utlvalid.sql supplied by Oracle, located in the rdbms/admin directory of the software installation. The structure of the table is as follows:

SQL> @c:\oracle\ora81\rdbms\admin\utlvalid.sql

Table created.

SQL> desc invalid_rows
 Name                    Null?    Type
 ----------------------- -------- ------------
 OWNER_NAME                       VARCHAR2(30)
 TABLE_NAME                       VARCHAR2(30)
 PARTITION_NAME                   VARCHAR2(30)
 SUBPARTITION_NAME                VARCHAR2(30)
 HEAD_ROWID                       ROWID
 ANALYZE_TIMESTAMP                DATE
This example validates the structure of the ORDERS table: ANALYZE TABLE ORDERS VALIDATE STRUCTURE;
If Oracle encounters bad rows, they are inserted into the INVALID_ROWS table. To specify a different table name, use the following syntax:

ANALYZE TABLE ORDERS VALIDATE STRUCTURE
INTO SCOTT.CORRUPTED_ROWS;

You can also validate the blocks of the indexes associated with the table by specifying the CASCADE clause:

ANALYZE TABLE ORDERS VALIDATE STRUCTURE CASCADE;

To analyze a partition named MAY2000 in table GLEDGER, specify

ANALYZE TABLE GLEDGER PARTITION (MAY2000)
VALIDATE STRUCTURE;
Finding Migrated Rows

A row is migrated if the row is moved from its original block to another block because there was not enough free space available in its original block to accommodate the row, which was expanded due to an update. Oracle keeps a pointer in the original block to indicate the new block ID of the row. When there are many migrated rows in a table, performance of the table is affected, because Oracle has to read two blocks instead of one for a given row retrieval or update. By specifying an efficient PCTFREE value, this problem can be avoided. A row is chained if the row is bigger than the block size of the database. Normally, the rows of a table with LOB data types are more likely to become chained. The LIST CHAINED ROWS clause of the ANALYZE command can be used to find the chained and migrated rows of a table. Oracle writes the ROWID of such rows to a specified table. If no table is specified, Oracle looks for the CHAINED_ROWS table. The table can be created using the script utlchain.sql supplied by Oracle, located in the rdbms/admin directory of the software installation. The structure of the table is as follows:

SQL> @c:\oracle\ora81\rdbms\admin\utlchain.sql

Table created.

SQL> DESC CHAINED_ROWS
 Name                     Null?    Type
 ------------------------ -------- ------------
 OWNER_NAME                        VARCHAR2(30)
 TABLE_NAME                        VARCHAR2(30)
 CLUSTER_NAME                      VARCHAR2(30)
 PARTITION_NAME                    VARCHAR2(30)
 SUBPARTITION_NAME                 VARCHAR2(30)
 HEAD_ROWID                        ROWID
 ANALYZE_TIMESTAMP                 DATE
Only the ROWIDs are listed in the CHAINED_ROWS table. You can use this information to save the migrated rows to a different table, delete them from the source table, and insert them back from the second table. Here is one method to fix migrated rows in a table:

1. Analyze the table to find migrated rows.

   ANALYZE TABLE ORDERS LIST CHAINED ROWS;

2. Find the number of migrated rows.

   SELECT COUNT(*) FROM CHAINED_ROWS
   WHERE  OWNER_NAME = 'SCOTT'
   AND    TABLE_NAME = 'ORDERS';

3. If there are migrated rows, create a temporary table to hold the migrated rows.

   CREATE TABLE TEMP_ORDERS AS
   SELECT * FROM ORDERS
   WHERE ROWID IN (SELECT HEAD_ROWID
                   FROM   CHAINED_ROWS
                   WHERE  OWNER_NAME = 'SCOTT'
                   AND    TABLE_NAME = 'ORDERS');

4. Delete the rows from the ORDERS table.

   DELETE FROM ORDERS
   WHERE ROWID IN (SELECT HEAD_ROWID
                   FROM   CHAINED_ROWS
                   WHERE  OWNER_NAME = 'SCOTT'
                   AND    TABLE_NAME = 'ORDERS');

5. Insert the rows back into the ORDERS table.

   INSERT INTO ORDERS SELECT * FROM TEMP_ORDERS;
Before deleting the rows, make sure you disable any foreign key constraints referring to the ORDERS table. You will not be able to delete the rows
if there are child rows, and more important, if the constraints are defined with the CASCADE option, deleting the parent rows deletes the child rows! See the "Managing Constraints" section later in this chapter to learn about foreign key constraints and how to enable and disable them.
Collecting Statistics

Statistics about a table can be collected and saved in the dictionary tables by using the ANALYZE command. The cost-based optimizer uses the statistics to generate the execution plan of SQL statements. You can calculate the exact statistics of the table (COMPUTE) or sample a few rows and estimate the statistics (ESTIMATE). By default, Oracle collects statistics for all the columns and indexes in the table. For large tables, estimation may be the better choice, because when you compute the statistics, Oracle reads each block of the table. The following information is collected and saved in the dictionary when using the ANALYZE command to collect statistics:
Total number of rows in the table and number of chained rows
Total number of blocks allocated, unused blocks, and average free space in each block
Average row length
Here is an example of analyzing a table using the COMPUTE clause: ANALYZE TABLE ORDERS COMPUTE STATISTICS;
When using the ESTIMATE option, you can either specify a certain number of rows or specify a certain percentage of rows in the table. If the rows specified are more than 50 percent of the table, Oracle does a COMPUTE. If you do not specify the SAMPLE clause, Oracle samples 1064 rows. To specify the number of rows: ANALYZE TABLE ORDERS ESTIMATE STATISTICS SAMPLE 200 ROWS;
To specify a percent of the table to sample: ANALYZE TABLE ORDERS ESTIMATE STATISTICS SAMPLE 20 PERCENT;
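The ESTIMATE-becomes-COMPUTE rule above can be stated as a one-line decision (a sketch; `analyze_mode` is our own name, not an Oracle API):

```python
def analyze_mode(sample_percent):
    """ANALYZE ... ESTIMATE silently performs a full COMPUTE when the
    requested sample exceeds 50 percent of the table."""
    return "COMPUTE" if sample_percent > 50 else "ESTIMATE"

print(analyze_mode(20))  # ESTIMATE
print(analyze_mode(60))  # COMPUTE
```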
To remove statistics collected on a table, use the DELETE STATISTICS option: ANALYZE TABLE ORDERS DELETE STATISTICS;
The statistics can also be collected using the DBMS_STATS package. You have the option to collect the statistics into a non-dictionary table.
Querying Table Information Several data dictionary views are available to provide information about the tables. We will discuss certain views and their columns that you should be familiar with before taking the test.
DBA_TABLES The DBA_TABLES, USER_TABLES, and ALL_TABLES views are the primary views used to query for information about tables (TABS is a synonym for USER_TABLES). The views have the following information (the columns that can be used in the query are provided in parentheses):
Identity (OWNER, TABLE_NAME)
Storage (TABLESPACE_NAME, PCT_FREE, PCT_USED, INI_TRANS, MAX_TRANS, INITIAL_EXTENT, NEXT_EXTENT, MAX_EXTENTS, MIN_EXTENTS, PCT_INCREASE, FREELISTS, FREELIST_GROUPS, BUFFER_POOL)

Statistics (NUM_ROWS, BLOCKS, EMPTY_BLOCKS, AVG_SPACE, CHAIN_CNT, AVG_ROW_LEN, AVG_SPACE_FREELIST_BLOCKS, NUM_FREELIST_BLOCKS, SAMPLE_SIZE, LAST_ANALYZED)
Miscellaneous create options (LOGGING, DEGREE, CACHE, PARTITIONED, NESTED)
DBA_TAB_COLUMNS Use the DBA_TAB_COLUMNS, USER_TAB_COLUMNS, and ALL_TAB_COLUMNS views to display information about the columns in a table. The following information can be queried:
Identity (OWNER, TABLE_NAME, COLUMN_NAME, COLUMN_ID)
Column characteristics (DATA_TYPE, DATA_LENGTH, DATA_PRECISION, DATA_SCALE, NULLABLE, DEFAULT_LENGTH, DATA_DEFAULT)
Statistics (AVG_COL_LEN, NUM_DISTINCT, HIGH_VALUE, LOW_VALUE, NUM_NULLS, LAST_ANALYZED, SAMPLE_SIZE)
Table 7.3 lists other dictionary views that show information about the tables in the database.

TABLE 7.3   Dictionary Views with Table Information

View Name                  Contents
ALL_ALL_TABLES             Similar information as in DBA_TABLES; shows
DBA_ALL_TABLES             information about relational tables and
USER_ALL_TABLES            object tables.

ALL_TAB_PARTITIONS         Partitioning information, storage parameters,
DBA_TAB_PARTITIONS         and partition-level statistics gathered.
USER_TAB_PARTITIONS

ALL_TAB_SUBPARTITIONS      Subpartition information for composite
DBA_TAB_SUBPARTITIONS      partitions in the database.
USER_TAB_SUBPARTITIONS

ALL_OBJECTS                Information about the objects. For table
DBA_OBJECTS                information such as creation timestamp and
USER_OBJECTS               modification date, query this view. The
                           OBJECT_TYPE column shows the type of the
                           object, such as table, index, trigger, etc.

ALL_EXTENTS                Information about the extents allocated to
DBA_EXTENTS                the table. Shows the tablespace, data file,
USER_EXTENTS               number of blocks, extent size, etc.
Using ROWID

ROWID is used to uniquely identify each row of the table. ROWID is a pseudocolumn available in all tables; it is not implicitly selected, so you must specify ROWID in the query. The ROWID is a structure that stores the physical location of the row. Since ROWIDs contain the exact block ID where the row is
located, it is the fastest way to access a row. There are two categories of ROWIDs:

Physical ROWID A ROWID used to identify each row of a table, partition, subpartition, or cluster

Logical ROWID A ROWID used to identify the rows of an index-organized table (IOT; discussed later in this chapter)

Unless explicitly specified, ROWID in this chapter means a physical ROWID. There are two formats for the ROWID:

Extended format Uses a base-64 encoding scheme to display the ROWID, consisting of the characters A-Z, a-z, 0-9, +, and /. The ROWID is an 18-character representation and takes 10 bytes to store. The format is OOOOOOFFFBBBBBBRRR, where:

OOOOOO is the object number.

FFF is the relative data file number where the block is located; the file number is relative to the tablespace.

BBBBBB is the block ID where the row is located.

RRR is the row in the block.
SQL> SELECT ROWID, ORDER_NUM
  2  FROM ORDERS;

ROWID              ORDER_NUM
------------------ ----------
AAAFqsAADAAAAfTAAA    5934343
AAAFqsAADAAAAfTAAB     343433
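The OOOOOOFFFBBBBBBRRR layout can be decoded by hand to see the pieces. A sketch for illustration only (`ALPHABET`, `b64`, and `decode_rowid` are our own names; DBMS_ROWID is the supported way to take a ROWID apart):

```python
# Base-64 alphabet of the extended ROWID: A-Z, a-z, 0-9, +, /.
ALPHABET = ("ABCDEFGHIJKLMNOPQRSTUVWXYZ"
            "abcdefghijklmnopqrstuvwxyz"
            "0123456789+/")

def b64(chars):
    """Interpret a run of ROWID characters as a base-64 number."""
    value = 0
    for c in chars:
        value = value * 64 + ALPHABET.index(c)
    return value

def decode_rowid(rowid):
    return {"object": b64(rowid[0:6]),
            "file":   b64(rowid[6:9]),    # relative to the tablespace
            "block":  b64(rowid[9:15]),
            "row":    b64(rowid[15:18])}

# The first ROWID shown above decodes to file 3, block 2003 (hex 7D3),
# row 0 -- matching its restricted form 000007D3.0000.0003.
print(decode_rowid("AAAFqsAADAAAAfTAAA"))
```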
Restricted format This format is the pre-Oracle8 format, carried forward for compatibility. The restricted format is BBBBBBBB.RRRR.FFFF (in base-16, or hexadecimal, format), where:

BBBBBBBB is the data block.
RRRR is the row number.
FFFF is the data file.
SQL> SELECT DBMS_ROWID.ROWID_TO_RESTRICTED(ROWID,0),
  2  ORDER_NUM
  3  FROM ORDERS;

DBMS_ROWID.ROWID_T ORDER_NUM
------------------ ----------
000007D3.0000.0003    5934343
000007D3.0001.0003     343433
DBMS_ROWID

The DBMS_ROWID package can be used to read and convert the ROWID information. This package has several useful functions that can be used to convert the ROWID between extended and restricted formats. The function ROWID_TO_RESTRICTED is used to convert the extended ROWID format to restricted format. The two parameters to this function are the extended ROWID and the conversion type. The conversion type is an integer: 0 to return the ROWID in an internal format, and 1 to return it in a character string. If the database is upgraded from Oracle7 to Oracle8i, and there are some tables with a ROWID defined as the type of an explicit column in the table, you can convert the restricted ROWID to extended format by using the function ROWID_TO_EXTENDED. There are four parameters to this function: the old ROWID, object owner, object name, and conversion type. The object owner and object name parameters are optional.

SQL> SELECT ROWID,
  2  DBMS_ROWID.ROWID_TO_EXTENDED(ROWID, NULL, NULL, 0) EXT_ROWID,
  3  DBMS_ROWID.ROWID_TO_RESTRICTED(ROWID,1) RES_ROWID
  4  FROM ORDERS
  5  /

ROWID              EXT_ROWID          RES_ROWID
------------------ ------------------ ------------------
AAAFqsAADAAAAfTAAA AAAFqsAADAAAAfTAAA 000007D3.0000.0003
AAAFqsAADAAAAfTAAB AAAFqsAADAAAAfTAAB 000007D3.0001.0003
234
Chapter 7
Managing Tables, Indexes, and Constraints
The ROWID_TO_VERIFY function takes the same parameters as the ROWID_TO_EXTENDED function. This function verifies whether a restricted format ROWID can be converted to extended format by using the ROWID_TO_EXTENDED function. If the ROWID can be converted, it returns 0; otherwise, it returns 1.
Managing Indexes

Indexes are used to access data more quickly than reading the whole table, and they reduce disk I/O considerably when the queries make use of the available indexes. As with tables, you can specify storage parameters for indexes, create partitioned indexes, and analyze the index to verify structure and collect statistics. You can create any number of indexes on a table. A column can be part of multiple indexes, and you can specify up to 30 columns in an index. When more than one column is specified for indexing, it is known as a composite index. You can have more than one index with the same index columns, but in a different order. In this section, we will discuss the types of indexes and when to use which type.
Creating Indexes

Indexes are created using the CREATE INDEX statement. Indexes can be created and dropped without affecting the base data of the table; indexes and table data are independent. Oracle maintains the indexes automatically: when rows in the table are added, updated, or deleted, Oracle updates the corresponding indexes. You can create the following types of indexes:

Bitmap index A bitmap index does not repeatedly store the index column values. Each value is treated as a key, and a bit is set for each of the corresponding ROWIDs. Bitmap indexes are suitable for columns with low cardinality, such as the SEX column in an EMPLOYEE table, where the possible values are M or F. The cardinality is the number of distinct column values in a column. In the EMPLOYEE table example, the cardinality of the SEX column is 2. You cannot create unique or reverse key bitmap indexes.

b-tree index This is the default. The index is created using the b-tree algorithm. The b-tree includes nodes with the index column values and
the ROWID of the row. The ROWIDs are used to identify the rows in the table. The following are the types of b-tree indexes you can create:

Non-unique index This is the default; the index column values are not unique.

Unique index This is created by specifying the UNIQUE keyword: each column value entry of the index is unique. For composite indexes, Oracle guarantees that the combination of all index column values is unique. Oracle returns an error if you try to insert two rows with the same index column values.

Reverse key index This is created by specifying the REVERSE keyword. The bytes of each indexed column are reversed, while the column order is kept. For example, if column ORDER_NUM has the value 54321, Oracle reverses the bytes to 12345 and then adds it to the index. This type of index can be used for unique indexes when inserts to the table are always in ascending order of the indexed columns. Reversal distributes adjacent key values to different leaf blocks of the index and, as a result, improves performance by retrieving fewer index blocks. Leaf blocks are the blocks at the lowest level of the b-tree.

Function-based index A function-based index can be created on columns with expressions. For example, creating an index on SUBSTR(EMPID, 1, 2) can speed up the queries using SUBSTR(EMPID, 1, 2) in the WHERE clause.
Oracle does not include the rows with NULL values in the index columns when storing the index.
The CREATE INDEX statement creates a non-unique index on the columns specified. You must specify a name for the index and the table name on which the index should be built. For example, to create an index on the ORDER_DATE column of the ORDERS table, specify

CREATE INDEX IND1_ORDERS ON ORDERS (ORDER_DATE);
To create a unique index, you must specify the keyword UNIQUE immediately after CREATE. For example: CREATE UNIQUE INDEX IND2_ORDERS ON ORDERS (ORDER_NUM);
To create a bitmap index, you must specify the keyword BITMAP immediately after CREATE. Bitmap indexes cannot be unique. For example: CREATE BITMAP INDEX IND3_ORDERS ON ORDERS (STATUS);
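The bitmap structure described earlier can be modeled in a few lines: one bit vector per distinct key value, rather than one stored value per row. A toy sketch (`build_bitmap_index` and the sample column are our own, purely illustrative):

```python
def build_bitmap_index(column_values):
    """One bit vector per distinct value; bit n is set when row n holds
    that value. Real bitmap indexes also compress these vectors."""
    bitmaps = {}
    for rownum, value in enumerate(column_values):
        bitmaps.setdefault(value, [0] * len(column_values))[rownum] = 1
    return bitmaps

status_column = ["OPEN", "CLOSED", "CLOSED", "OPEN", "CLOSED"]
index = build_bitmap_index(status_column)
print(index["OPEN"])    # [1, 0, 0, 1, 0]
print(index["CLOSED"])  # [0, 1, 1, 0, 1]
```

With only two distinct values across many rows (low cardinality), the per-value bit vectors are far more compact than repeating the value for every row, which is why the STATUS column above is a good bitmap candidate.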
Specifying Storage

If you do not specify the TABLESPACE clause in the CREATE INDEX statement, Oracle creates the index in the default tablespace of the user. If the STORAGE clause is not specified, Oracle inherits the default storage parameters defined for the tablespace. All the storage parameters discussed under the "Managing Tables" section are applicable to indexes and have the same meaning, except for PCTUSED, which cannot be specified for indexes. Keep the INITRANS for the index higher than that specified for the corresponding table, because an index block can hold a larger number of entries than a table block. Here is an example of creating an index and specifying the storage:

CREATE UNIQUE INDEX IND2_ORDERS
ON ORDERS (ORDER_NUM)
TABLESPACE USER_INDEX
PCTFREE 25
INITRANS 2
MAXTRANS 255
STORAGE (INITIAL 128K NEXT 128K PCTINCREASE 0
         MINEXTENTS 1 MAXEXTENTS 100
         FREELISTS 1 FREELIST GROUPS 1
         BUFFER_POOL KEEP);
When creating an index on a table with rows, Oracle fills the index blocks with entries up to PCTFREE. The free space reserved by PCTFREE is used when a new row is inserted into the table (or a row is updated so that a corresponding index key column value changes) and its index entry needs to be placed between two existing key values in a leaf node. If no free space is available in the block, Oracle uses a new block. If many new rows will be inserted into the table, keep the PCTFREE of the index high.
Using Other Create Clauses

You can specify NOLOGGING to make the index creation faster; information is then not written to the redo log files. The default is LOGGING. It is possible to collect statistics about the index while creating it by specifying the COMPUTE STATISTICS clause. This avoids another ANALYZE on the index later. The ONLINE clause specifies that the table will be available for DML operations while the index is built. If data is loaded to the table in the order of an index, you can specify the NOSORT clause. Oracle does not sort the rows, but if the data is not sorted, Oracle returns an error. Specifying this clause saves time and temporary space. For multicolumn indexes, eliminating the repeating key columns can save storage space. Specify the COMPRESS clause when creating the index; NOCOMPRESS is the default. This clause can be used only with non-partitioned indexes. Index performance may be affected when using this clause. Specify PARALLEL to create the index using multiple server processes; NOPARALLEL is the default. The following is an example of creating an index by specifying some of the miscellaneous clauses:

SQL> CREATE INDEX IND5_ORDERS
  2  ON ORDERS (ORDER_NUM, ORDER_DATE)
  3  TABLESPACE INDX
  4  NOLOGGING
  5  NOSORT
  6  COMPRESS
  7  COMPUTE STATISTICS;

Index created.
Partitioning

As with tables, you can partition indexes for better manageability and performance. Partitioned tables can have partitioned and/or non-partitioned indexes, and partitioned indexes can be created on partitioned or non-partitioned tables. When all the index partitions correspond to the table partitions (equipartitioning), it is a local partitioned index. Specifying the LOCAL
keyword creates local indexes. For local indexes, Oracle maintains the index partition keys automatically, in sync with the table partitions. A global partitioned index specifies different partition range values; the partition column values need not belong to a single table partition. Specifying the GLOBAL keyword creates global indexes. You can create four types of partitioned indexes:

Local prefixed Local index whose leading (leftmost) index columns are in the order of the partition key.

Local non-prefixed Local index whose partition key columns are not the leading columns.

Global prefixed Global index with leading columns in the order of the partition key.

Global non-prefixed Global index with leading columns not in the partition key order.
Bitmap indexes created on partitioned tables must be local. You cannot create a partitioned bitmap index on a non-partitioned table.
Reverse Key Indexes

Specifying the REVERSE keyword creates a reverse key index. Reverse key indexes improve the performance of certain OLTP applications using the parallel server. The following example creates a reverse key index on the ORDER_NUM and ORDER_DATE columns of the ORDERS table.

CREATE UNIQUE INDEX IND2_ORDERS
ON ORDERS (ORDER_DATE, ORDER_NUM)
TABLESPACE USER_INDEX
REVERSE;
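The byte reversal a reverse key index performs can be sketched directly (`reverse_key` is our own name; the example works on the decimal digits the chapter uses for illustration, whereas Oracle actually reverses the column's internal byte representation):

```python
def reverse_key(key_bytes):
    """A reverse key index stores each indexed column's bytes in reverse
    order, so sequentially ascending inserts scatter across different
    leaf blocks instead of crowding the rightmost one."""
    return key_bytes[::-1]

# The chapter's illustration: 54321 is stored as 12345.
print(reverse_key(b"54321"))  # b'12345'
print(reverse_key(b"54322"))  # b'22345'
```

Note how the adjacent keys 54321 and 54322 now differ in their first stored byte, which is what spreads them across the index's leaf blocks.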
Function-Based Indexes

Function-based indexes are created as regular b-tree or bitmap indexes. Specify the expression or function when creating the index. Oracle precalculates
the value of the expression and creates the index. For example, to create a function-based index on SUBSTR(PRODUCT_ID,1,2):

CREATE INDEX IND4_ORDERS
ON ORDERS (SUBSTR(PRODUCT_ID,1,2))
TABLESPACE USER_INDEX;
To use the function-based index, you must set the instance initialization parameter QUERY_REWRITE_ENABLED to TRUE and QUERY_REWRITE_INTEGRITY to TRUSTED. Also, the COMPATIBLE parameter should be set to 8.1.0 or higher. A query can use this index if its WHERE clause specifies a condition by using SUBSTR(PRODUCT_ID,1,2), as in the following example:

SELECT * FROM ORDERS
WHERE SUBSTR(PRODUCT_ID,1,2) = 'BT';
You must gather statistics for function-based indexes for the cost-based optimizer to use the index. The rule-based optimizer does not use function-based indexes.
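As a sketch, the prerequisites described above can be put in place at the session level, and statistics gathered, as follows (session-level settings are shown; the same parameters can be set instance-wide in the initialization file):

```sql
-- Enable query rewrite so the optimizer can use the function-based index
ALTER SESSION SET QUERY_REWRITE_ENABLED = TRUE;
ALTER SESSION SET QUERY_REWRITE_INTEGRITY = TRUSTED;

-- The cost-based optimizer needs statistics before it will use the index
ANALYZE TABLE ORDERS COMPUTE STATISTICS;
```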
Index-Organized Tables

You can store index and table data together in a structure known as an index-organized table (IOT). IOTs are suitable for tables in which the data access is mostly through the primary key, such as look-up tables, where you have a code and a description. An IOT is a b-tree index; instead of storing the ROWID of the table where the row belongs, the entire row is stored as part of the index. You can build additional indexes on the columns of an IOT. The data in an IOT is accessed the same way you would access the data in a table. Since the row is stored along with the b-tree index, there is no physical ROWID for each row. The primary key identifies the rows in an IOT. Oracle "guesses" the location of the row and assigns a logical ROWID for each row, which permits the creation of secondary indexes. You can partition an IOT, but the partition columns must be a subset of the primary key columns. To build additional indexes on the IOT, Oracle uses a logical ROWID, which is derived from the primary key values of the IOT. The logical ROWID
can include a guessed physical location of the row in the data files. This guessed location is not valid when a row is moved from one block to another. If the logical ROWID does not include the guessed location, Oracle has to perform two index scans when using the secondary index. The logical ROWIDs can be stored in columns with the data type UROWID. An index-organized table is created using the CREATE TABLE command with the ORGANIZATION INDEX keyword. You must specify the primary key for the table when creating the table.

SQL> CREATE TABLE IOT_EXAMPLE (
  2     PK_COL1      NUMBER (4),
  3     PK_COL2      VARCHAR2 (10),
  4     NON_PK_COL1  VARCHAR2 (40),
  5     NON_PK_COL2  DATE,
  6     CONSTRAINT PK_IOT PRIMARY KEY (PK_COL1, PK_COL2))
  7  ORGANIZATION INDEX TABLESPACE INDX
  8  STORAGE (INITIAL 32K NEXT 32K PCTINCREASE 0);

Table created.
SQL>
Altering Indexes

Indexes can be altered using the ALTER INDEX command to:
Change its STORAGE clause, except for the parameters INITIAL and MINEXTENTS
De-allocate unused blocks
Rebuild the index
Coalesce leaf nodes
Manually allocate extents
Change the PARALLEL/NOPARALLEL, LOGGING/NOLOGGING clauses
Modify partition storage parameters, rename partitions, drop partitions, etc.
Specify the ENABLE/DISABLE clause to enable or disable functionbased indexes
Mark the index or index partition as UNUSABLE, thereby disabling the index or index partition
Rename the index
Since the rules for changing the storage parameters, allocating extents, and de-allocating extents are similar to those for altering a table, we will provide only some short examples here. To change storage parameters:

ALTER INDEX SCOTT.IND1_ORDERS
STORAGE (NEXT 512K MAXEXTENTS UNLIMITED);

To allocate an extent:

ALTER INDEX SCOTT.IND1_ORDERS
ALLOCATE EXTENT SIZE 200K;

To de-allocate unused blocks:

ALTER INDEX SCOTT.IND1_ORDERS
DEALLOCATE UNUSED KEEP 100K;
If you disable an index by specifying the UNUSABLE clause, you must rebuild the index to make it valid.
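For example, a sketch of disabling an index with the UNUSABLE clause and then rebuilding it to make it valid again:

```sql
-- Mark the index unusable (for example, before a large bulk load)
ALTER INDEX SCOTT.IND1_ORDERS UNUSABLE;

-- The index must be rebuilt before it can be used again
ALTER INDEX SCOTT.IND1_ORDERS REBUILD;
```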
Rebuilding/Coalescing Indexes

Over time, the blocks of an index can become fragmented, leaving behind free space in leaf blocks. You can compress such indexes and reclaim space for new leaf blocks. The COALESCE clause frees up index leaf blocks within the same branch of the tree. The index storage parameters and tablespace values remain the same. Here is an example:

ALTER INDEX IND1_ORDERS COALESCE;
If you want to re-create the index in a different tablespace or with different storage parameters and free up leaf blocks, you can use the REBUILD clause. When you rebuild an index, Oracle drops the original index when the rebuild is complete (a new index is created even if you do not change the tablespace or storage). Users can access the index during the rebuild if the ONLINE parameter is specified. Optionally, you can collect statistics while rebuilding the index by
using the COMPUTE STATISTICS parameter, and not generate redo log entries by specifying NOLOGGING. You can also specify REVERSE or NOREVERSE to convert a normal index to a reverse key index or vice versa. The following example moves the index to a new tablespace, collects statistics while rebuilding, and makes the table available for insert/update/delete operations by specifying the ONLINE clause.

ALTER INDEX IND1_ORDERS REBUILD
TABLESPACE NEW_INDEX_TS
STORAGE (INITIAL 25M NEXT 5M PCTINCREASE 0)
PCTFREE 20 INITRANS 4
COMPUTE STATISTICS
ONLINE NOLOGGING;
Dropping Indexes

Indexes can be dropped using the DROP INDEX command. Oracle frees up all the space used by the index when the index is dropped. When a table is dropped, the indexes built on the table are automatically dropped. For example:

DROP INDEX SCOTT.IND5_ORDERS;
Analyzing Indexes

As with tables, indexes can be analyzed to validate their structure (to find block corruption) and to collect statistics. You cannot use the ANALYZE command on an index with the LIST CHAINED ROWS clause. You can use the COMPUTE or ESTIMATE clause when collecting statistics. Here are some examples. To validate the structure of an index:

ALTER INDEX IND5_ORDERS VALIDATE STRUCTURE;
To collect statistics by sampling 40 percent of the entries in the index: ALTER INDEX IND5_ORDERS ESTIMATE STATISTICS SAMPLE 40 PERCENT;
To delete statistics: ALTER INDEX IND5_ORDERS DELETE STATISTICS;
Querying Index Information

Several data dictionary views are available to query information about indexes. This section covers certain views and their columns that you should be familiar with before taking the exam.
DBA_INDEXES

The DBA_INDEXES, USER_INDEXES, and ALL_INDEXES views are the primary views used to query for information about indexes (IND is a synonym for USER_INDEXES). The views have the following information (the columns that can be used in the query are provided in parentheses):
Identity (OWNER, INDEX_NAME, INDEX_TYPE, TABLE_OWNER, TABLE_NAME, TABLE_TYPE)

Storage (TABLESPACE_NAME, PCT_FREE, PCT_USED, INI_TRANS, MAX_TRANS, INITIAL_EXTENT, NEXT_EXTENT, MAX_EXTENTS, MIN_EXTENTS, PCT_INCREASE, FREELISTS, FREELIST_GROUPS, BUFFER_POOL)

Statistics (BLEVEL, LEAF_BLOCKS, DISTINCT_KEYS, AVG_LEAF_BLOCKS_PER_KEY, AVG_DATA_BLOCKS_PER_KEY, NUM_ROWS, SAMPLE_SIZE, LAST_ANALYZED)
Miscellaneous create options (UNIQUENESS, LOGGING, DEGREE, CACHE, PARTITIONED)
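For example, a sketch of a query against DBA_INDEXES using the columns listed above (the SCOTT schema is used for illustration):

```sql
SELECT INDEX_NAME, TABLE_NAME, UNIQUENESS,
       TABLESPACE_NAME, BLEVEL, LEAF_BLOCKS, DISTINCT_KEYS
FROM   DBA_INDEXES
WHERE  OWNER = 'SCOTT';
```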
DBA_IND_COLUMNS

Use the DBA_IND_COLUMNS, USER_IND_COLUMNS, and ALL_IND_COLUMNS views to display information about the columns in an index. The following information can be queried:
Identity (INDEX_OWNER, INDEX_NAME, TABLE_OWNER, TABLE_NAME, COLUMN_NAME)
Column characteristics (COLUMN_LENGTH, COLUMN_POSITION, DESCEND)
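For example, a sketch of listing the columns of one index in their index order (the owner and index name are illustrative):

```sql
-- COLUMN_POSITION gives the order of each column within the index
SELECT COLUMN_NAME, COLUMN_POSITION, DESCEND
FROM   DBA_IND_COLUMNS
WHERE  INDEX_OWNER = 'SCOTT'
AND    INDEX_NAME  = 'IND1_ORDERS'
ORDER  BY COLUMN_POSITION;
```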
Table 7.4 lists other dictionary views that show information about the indexes in the database.

TABLE 7.4  Dictionary Views with Index Information

View Name                   Contents
--------------------------  ---------------------------------------------
ALL_IND_PARTITIONS          Index partitioning information, storage
DBA_IND_PARTITIONS          parameters, and partition-level statistics
USER_IND_PARTITIONS         gathered

ALL_IND_SUBPARTITIONS       Sub-partition information for composite
DBA_IND_SUBPARTITIONS       index partitions in the database
USER_IND_SUBPARTITIONS

ALL_IND_EXPRESSIONS         Columns or expressions used to create the
DBA_IND_EXPRESSIONS         function-based index
USER_IND_EXPRESSIONS

INDEX_STATS                 Statistical information from the ANALYZE
                            INDEX VALIDATE STRUCTURE command
Managing Constraints

Constraints are created in the database to enforce a business rule and to specify relationships between various tables. Business rules can also be enforced using database triggers and application code. Integrity constraints prevent bad data from being entered into the database. Oracle allows creating five types of integrity constraints:

NOT NULL  Prevents NULL values from being entered into the column. These constraints are defined on a single column.

CHECK  Checks whether the condition specified in the constraint is satisfied.

UNIQUE  Ensures that there are no duplicate values for the column(s) specified. Every value or set of values is unique within the table.
PRIMARY KEY  Uniquely identifies each row of the table. Prevents NULL values. A table can have only one primary key constraint.

FOREIGN KEY  Establishes a parent-child relationship between tables by using common columns.
Creating Constraints

Constraints are created using the CREATE TABLE or ALTER TABLE statements. You can specify the constraint definition at the column level if the constraint is defined on a single column. Multiple-column constraints must be defined at the table level, with the columns specified in parentheses, separated by commas. If you do not provide a name for a constraint, Oracle assigns a system-generated name. To name a constraint, specify the keyword CONSTRAINT followed by the constraint name. In this section, we will discuss the rules for each constraint type and examples of creating constraints.
NOT NULL

NOT NULL constraints have the following characteristics:
Constraint defined at the column level.
Use CREATE TABLE to define constraints when creating the table. The following example shows a named constraint on the ORDER_NUM column; for ORDER_DATE, Oracle generates a name.

CREATE TABLE ORDERS (
   ORDER_NUM   NUMBER (4) CONSTRAINT NN_ORDER_NUM NOT NULL,
   ORDER_DATE  DATE NOT NULL,
   PRODUCT_ID  VARCHAR2 (10));
Use ALTER TABLE MODIFY to add or remove a NOT NULL constraint on the columns of an existing table. The following code shows examples of removing a constraint and adding a constraint.

ALTER TABLE ORDERS MODIFY ORDER_DATE NULL;
ALTER TABLE ORDERS MODIFY PRODUCT_ID NOT NULL;
CHECK

CHECK constraints have the following characteristics:
The constraint can be defined at the column level or table level.
The condition specified in the CHECK clause should evaluate to a Boolean result and can refer to values in other columns of the same row; the condition cannot use queries.
Environment functions such as SYSDATE, USER, USERENV, UID, and pseudo-columns such as ROWNUM, CURRVAL, NEXTVAL, or LEVEL cannot be used to evaluate the check condition.
One column can have more than one CHECK constraint defined. The column can have a NULL value.
Can be created using CREATE TABLE or ALTER TABLE.

CREATE TABLE BONUS (
   EMP_ID  VARCHAR2 (40) NOT NULL,
   SALARY  NUMBER (9,2),
   BONUS   NUMBER (9,2),
   CONSTRAINT CK_BONUS CHECK (BONUS > 0));

ALTER TABLE BONUS ADD CONSTRAINT CK_BONUS2
   CHECK (BONUS < SALARY);
UNIQUE

UNIQUE constraints have the following characteristics:
Can be defined at the column level for single-column unique keys. For a multiple-column unique key (composite key—the maximum number of columns specified can be 32), the constraint should be defined at the table level.
Oracle creates a unique index on the unique key columns to enforce uniqueness. If a unique index or non-unique index already exists on the table with the same column order prefix, Oracle uses the existing index. To use the existing non-unique index, there must not be any duplicate keys in the table.
Unique constraints allow NULL values in the constraint columns.
Storage can be specified for the implicit index created when creating the key. If no storage is specified, the index is created on the default
tablespace with the default storage parameters of the tablespace. You can specify the LOGGING and NOSORT clauses, as you would when creating an index. The index created can be a local or global partitioned index. The index will have the same name as the unique constraint. Following are two examples. The first one defines a unique constraint with two columns and specifies the storage parameters for the index. The second example adds a new column to the EMP table and creates a unique key at the column level.

ALTER TABLE BONUS ADD CONSTRAINT UQ_EMP_ID
   UNIQUE (DEPT, EMP_ID)
   USING INDEX TABLESPACE INDX
   STORAGE (INITIAL 32K NEXT 32K PCTINCREASE 0);

ALTER TABLE EMP ADD SSN VARCHAR2 (11)
   CONSTRAINT UQ_SSN UNIQUE;
PRIMARY KEY

PRIMARY KEY constraints have the following characteristics:
All characteristics of the UNIQUE key are applicable except that NULL values are not allowed in the primary key columns.
A table can have only one primary key.
Oracle creates a unique index and NOT NULL constraints for each column in the key. The following example defines a primary key when creating the table. Storage parameters are specified for both the table and the primary key index.

CREATE TABLE EMPLOYEE (
   DEPT_NO  VARCHAR2 (2),
   EMP_ID   NUMBER (4),
   NAME     VARCHAR2 (20) NOT NULL,
   SSN      VARCHAR2 (11),
   SALARY   NUMBER (9,2) CHECK (SALARY > 0),
   CONSTRAINT PK_EMPLOYEE PRIMARY KEY (DEPT_NO, EMP_ID)
      USING INDEX TABLESPACE INDX
      STORAGE (INITIAL 64K NEXT 64K) NOLOGGING,
   CONSTRAINT UQ_SSN UNIQUE (SSN)
      USING INDEX TABLESPACE INDX)
TABLESPACE USERS
STORAGE (INITIAL 128K NEXT 64K);
Indexes created to enforce unique keys and primary keys can be managed like any other index. However, you cannot drop these indexes explicitly.
FOREIGN KEY

The foreign key is the column or columns in the table (child table) where the constraint is created; the referenced key is the primary key or unique key column or columns in the table (parent table) that is referenced by the constraint. The following rules are applicable to foreign key constraints:
A foreign key constraint can be defined at the column level or table level. Multiple-column foreign keys should be defined at the table level.
The foreign key column(s) and referenced key column(s) can be in the same table (self-referential integrity constraint).
NULL values are allowed in the foreign key columns. The following is an example of creating a foreign key constraint on the COUNTRY_CODE and STATE_CODE columns of the CITY table, which refers to the COUNTRY_CODE and STATE_CODE columns of the STATE table (the composite primary key of the STATE table).

ALTER TABLE CITY ADD CONSTRAINT FK_STATE
   FOREIGN KEY (COUNTRY_CODE, STATE_CODE)
   REFERENCES STATE (COUNTRY_CODE, STATE_CODE);
The ON DELETE clause specifies the action to be taken when a row in the parent table is deleted and child rows exist with the deleted parent primary key. You can delete the child rows (CASCADE) or set the foreign key column values to NULL (SET NULL). If you omit this clause, Oracle will not allow you to delete from the parent table if child records exist. You must delete the child rows first and then the parent
row. Following are two examples of specifying the delete action in a foreign key.

ALTER TABLE CITY ADD CONSTRAINT FK_STATE
   FOREIGN KEY (COUNTRY_CODE, STATE_CODE)
   REFERENCES STATE (COUNTRY_CODE, STATE_CODE)
   ON DELETE CASCADE;

ALTER TABLE CITY ADD CONSTRAINT FK_STATE
   FOREIGN KEY (COUNTRY_CODE, STATE_CODE)
   REFERENCES STATE (COUNTRY_CODE, STATE_CODE)
   ON DELETE SET NULL;
Create Disabled Constraints

When a constraint is created, it is enabled automatically. You can create a disabled constraint by specifying the DISABLE keyword after the constraint definition. For example:

ALTER TABLE CITY ADD CONSTRAINT FK_STATE
   FOREIGN KEY (COUNTRY_CODE, STATE_CODE)
   REFERENCES STATE (COUNTRY_CODE, STATE_CODE)
   DISABLE;

ALTER TABLE BONUS ADD CONSTRAINT CK_BONUS
   CHECK (BONUS > 0) DISABLE;
Drop Constraints

Constraints are dropped using ALTER TABLE. Any constraint can be dropped by specifying the constraint name.

ALTER TABLE BONUS DROP CONSTRAINT CK_BONUS2;
To drop unique key constraints with referenced foreign keys, specify the CASCADE clause to drop the foreign key constraints along with the unique constraint. Specify the unique key column(s). For example:

ALTER TABLE EMPLOYEE DROP UNIQUE (EMP_ID) CASCADE;
To drop primary key constraints with referenced foreign key constraints, use the CASCADE clause to drop all foreign key constraints and then the primary key.

ALTER TABLE BONUS DROP PRIMARY KEY CASCADE;
Enabling and Disabling Constraints

When you create a constraint, the constraint is automatically enabled (unless you specify the DISABLE clause). You can disable a constraint by using the DISABLE clause of the ALTER TABLE statement. When you disable a UNIQUE or PRIMARY KEY constraint, Oracle drops the associated unique index. When you re-enable these constraints, Oracle rebuilds the index. You can disable any constraint by specifying the clause DISABLE CONSTRAINT followed by the constraint name. Specifying UNIQUE and the column name(s) disables a unique key, and specifying PRIMARY KEY disables the table's primary key. You cannot disable a primary key or unique key if enabled foreign keys reference it. To disable all the referenced foreign keys and the primary or unique key, specify CASCADE. Following are three examples that illustrate this.

ALTER TABLE BONUS DISABLE CONSTRAINT CK_BONUS;
ALTER TABLE EMPLOYEE DISABLE CONSTRAINT UQ_EMPLOYEE;
ALTER TABLE STATE DISABLE PRIMARY KEY CASCADE;
Using the ENABLE clause of the ALTER TABLE statement enables a constraint. When you enable a disabled unique or primary key, Oracle creates an index if an index with the unique or primary key columns (prefixed) does not already exist. You can specify storage for the unique or primary key when enabling these constraints. For example:

ALTER TABLE STATE ENABLE PRIMARY KEY
   USING INDEX TABLESPACE USER_INDEX
   STORAGE (INITIAL 2M NEXT 2M);
The EXCEPTIONS INTO clause can be used to find the rows that violate a referential integrity or uniqueness condition. The ROWIDs of the invalid rows are inserted into a table. You can specify the name of the table where you want the ROWIDs to be saved; by default, Oracle looks for a table named EXCEPTIONS. The table can be created using the script utlexcpt.sql supplied by Oracle, located in the rdbms/admin directory of the software installation. The structure of the table is as follows:

SQL>
@c:\oracle\ora81\rdbms\admin\utlexcpt.sql
Table created. SQL> desc exceptions
 Name                            Null?    Type
 ------------------------------- -------- --------------
 ROWID                                    ROWID
 OWNER                                    VARCHAR2(30)
 TABLE_NAME                               VARCHAR2(30)
 CONSTRAINT                               VARCHAR2(30)
The following example enables the primary key constraint and inserts the ROWIDs of the bad rows into the EXCEPTIONS table:

ALTER TABLE STATE ENABLE PRIMARY KEY
   EXCEPTIONS INTO EXCEPTIONS;
You can also use the MODIFY CONSTRAINT clause of the ALTER TABLE statement to enable or disable constraints. Specify the MODIFY CONSTRAINT keywords followed by the constraint name. Following are examples.

ALTER TABLE BONUS MODIFY CONSTRAINT CK_BONUS DISABLE;
ALTER TABLE STATE MODIFY CONSTRAINT PK_STATE DISABLE CASCADE;
ALTER TABLE BONUS MODIFY CONSTRAINT CK_BONUS ENABLE;
ALTER TABLE STATE MODIFY CONSTRAINT PK_STATE
   USING INDEX TABLESPACE USER_INDEX
   STORAGE (INITIAL 2M NEXT 2M) ENABLE;
Validated Constraints

You have seen how to enable and disable a constraint. ENABLE and DISABLE affect only future data that will be added or modified in the table. In contrast, the VALIDATE and NOVALIDATE keywords in the ALTER TABLE command act upon the existing data. Therefore, a constraint can have four states:

ENABLE VALIDATE  This is the default for the ENABLE clause. The existing data in the table is validated to verify that it conforms to the constraint.

ENABLE NOVALIDATE  Does not validate the existing data, but enables the constraint for future constraint checking.

DISABLE VALIDATE  The constraint is disabled (any index used to enforce the constraint is also dropped), but the constraint is kept valid. No DML operation is allowed on the table because future changes cannot be verified.
DISABLE NOVALIDATE  This is the default for the DISABLE clause. The constraint is disabled, and no checks are done on future or existing data.

Let's discuss an example of how these clauses can be used. Say that you have a large data warehouse table, where bulk data loads are performed every night. This table has a primary key enforced using a non-unique index, because Oracle does not drop a non-unique index when disabling the constraint. When you do batch loads, the primary key constraint is disabled as follows:

ALTER TABLE WH01 MODIFY CONSTRAINT PK_WH01 DISABLE NOVALIDATE;
After the batch load completes, the primary key can be enabled by the following: ALTER TABLE WH01 MODIFY CONSTRAINT PK_WH01 ENABLE NOVALIDATE;
Oracle does not allow any inserts/updates/deletes on a table with a DISABLE VALIDATE constraint. This is a quick way to make a table read-only.
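For example, a sketch of using DISABLE VALIDATE on the CK_BONUS constraint to make the BONUS table effectively read-only:

```sql
-- Blocks all DML on BONUS while keeping the constraint validated
ALTER TABLE BONUS MODIFY CONSTRAINT CK_BONUS DISABLE VALIDATE;

-- To allow DML again, return the constraint to the normal state
ALTER TABLE BONUS MODIFY CONSTRAINT CK_BONUS ENABLE VALIDATE;
```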
Deferring Constraint Checks

By default, Oracle checks whether the data conforms to the constraint when the statement is executed. Oracle allows you to change this behavior if the constraint is created using the DEFERRABLE clause (NOT DEFERRABLE is the default). DEFERRABLE specifies that the transaction can set the constraint-checking behavior. INITIALLY IMMEDIATE specifies that the constraint should be checked for conformance at the end of each SQL statement (this is the default). INITIALLY DEFERRED specifies that the constraint should be checked for conformance at the end of the transaction. The DEFERRABLE status of a constraint cannot be changed using ALTER TABLE MODIFY CONSTRAINT; you must drop and re-create the constraint. The INITIALLY [DEFERRED/IMMEDIATE] setting, however, can be changed using ALTER TABLE. If the constraint is DEFERRABLE, you can set the behavior by using the SET CONSTRAINTS command or by using the ALTER SESSION SET CONSTRAINT command. You can enable or disable deferred constraint checking by listing all the constraints or by specifying the ALL keyword. The SET CONSTRAINTS
command is used to set the constraint-checking behavior for the current transaction, and the ALTER SESSION command is used to set the constraint-checking behavior for the current session. Let's consider an example. Create a primary key constraint on the CUSTOMER table and a foreign key constraint on the ORDERS table as DEFERRABLE. Though the constraints are created DEFERRABLE, they are not deferred, because of the INITIALLY IMMEDIATE clause.

ALTER TABLE CUSTOMER ADD CONSTRAINT PK_CUST_ID
   PRIMARY KEY (CUST_ID) DEFERRABLE INITIALLY IMMEDIATE;

ALTER TABLE ORDERS ADD CONSTRAINT FK_CUST_ID
   FOREIGN KEY (CUST_ID) REFERENCES CUSTOMER (CUST_ID)
   ON DELETE CASCADE DEFERRABLE;
If you try to add a row to the ORDERS table with a CUST_ID that is not available in the CUSTOMER table, Oracle returns an error immediately, even though you plan to add the CUSTOMER row soon. Since the constraints are verified for conformance at the end of each SQL statement, you must insert the CUSTOMER row first and then insert the row into the ORDERS table. Since the constraints are defined as DEFERRABLE, you can change this behavior by using this command:

SET CONSTRAINTS ALL DEFERRED;
Now, you can insert rows into these tables in any order. Oracle checks the constraint conformance only at commit time. If you want deferred constraint checking as the default, create or modify the constraint by using INITIALLY DEFERRED. For example:

ALTER TABLE CUSTOMER MODIFY CONSTRAINT PK_CUST_ID
   INITIALLY DEFERRED;
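Under the deferred setting, a transaction can insert the child row before its parent. A sketch (the column lists are simplified for illustration; only CUST_ID is taken from the constraint definitions above):

```sql
SET CONSTRAINTS ALL DEFERRED;

-- Child row first: no error yet, because FK_CUST_ID is not checked
-- until the transaction ends
INSERT INTO ORDERS (ORDER_NUM, CUST_ID) VALUES (1001, 500);

-- Parent row later in the same transaction
INSERT INTO CUSTOMER (CUST_ID) VALUES (500);

COMMIT;  -- all deferred constraints are verified here
```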
Querying Constraint Information

Constraints, their columns, type, status, and so on can be queried from the dictionary. Here are the views you can use to query constraint information.
DBA_CONSTRAINTS

The ALL_CONSTRAINTS, DBA_CONSTRAINTS, and USER_CONSTRAINTS views can be queried to get information about the constraints. The CONSTRAINT_TYPE
column shows the type of constraint: C for check, P for primary key, U for unique key, and R for referential (foreign key); V and O are associated with views. For check constraints, the SEARCH_CONDITION column shows the check condition. NOT NULL constraints are not listed in this view. NOT NULL constraint information can be found in the NULLABLE column of the DBA_TAB_COLUMNS view. Here is a sample query to get the constraint information:

SQL> SELECT CONSTRAINT_NAME, CONSTRAINT_TYPE, DEFERRED,
  2         DEFERRABLE, STATUS
  3  FROM   DBA_CONSTRAINTS
  4  WHERE  TABLE_NAME = 'ORDERS';

CONSTRAINT_NAME   C DEFERRED  DEFERRABLE     STATUS
----------------- - --------- -------------- --------
CK_QUANTITY       C IMMEDIATE NOT DEFERRABLE DISABLED
PK_ORDERS         P DEFERRED  DEFERRABLE     ENABLED

SQL>
DBA_CONS_COLUMNS

The ALL_CONS_COLUMNS, DBA_CONS_COLUMNS, and USER_CONS_COLUMNS views show the columns associated with the constraints.

SQL> SELECT CONSTRAINT_NAME, COLUMN_NAME, POSITION
  2  FROM   DBA_CONS_COLUMNS
  3  WHERE  TABLE_NAME = 'ORDERS';

CONSTRAINT_NAME                COLUMN_NAME       POSITION
------------------------------ --------------- ----------
CK_QUANTITY                    QUANTITY
PK_ORDERS                      ORDER_NUM                2
PK_ORDERS                      ORDER_DATE               1

SQL>
Summary
This chapter discussed the various options available for creating tables, indexes, and constraints. Tables are created using the CREATE TABLE command. By default, the table will be created in the current
schema. To create the table in another schema, you should qualify the table with the schema name. Storage parameters can be specified when creating the table. The storage parameters that specify the extent sizes are INITIAL, NEXT, and PCTINCREASE. Once the table is created, you cannot change the INITIAL and MINEXTENTS parameters. PCTFREE controls the free space in the data block.

Partitioning a table enables a large table to be managed more easily and better query performance to be achieved. Partitioning breaks the large table into smaller, more manageable pieces. Three types of partitioning methods are available: range, hash, and composite. Indexes can also be created on partitioned tables. An index can be equipartitioned with the table, which is known as a local index, meaning the index partitions have the same partition keys and number of partitions as the partitioned table. Partitioned indexes can be created on partitioned tables or non-partitioned tables. Similarly, partitioned tables can have partitioned local, partitioned global, or non-partitioned indexes.

Tables and indexes can be altered to de-allocate extents or unused blocks. The DEALLOCATE clause of the ALTER TABLE statement is used to release the free blocks that are above the high-water mark (HWM). You can also manually allocate space to a table or index by using the ALLOCATE EXTENT clause. Tables can be moved or reorganized using the MOVE clause. New tablespace and storage parameters can be specified.

The ANALYZE command can be used on tables and indexes to validate the structure and to identify corrupt blocks. It can also be used to collect statistics and to find and fix the chained rows in a table. The ROWID of the table is an 18-character representation of the physical location of the row. The DBMS_ROWID package can be used to convert ROWIDs between restricted and extended formats. Information on tables can be queried from DBA_TABLES, DBA_TAB_COLUMNS, DBA_TAB_PARTITIONS, etc.
Indexes can be created as b-tree or bitmap. Bitmap indexes save storage space for low-cardinality columns. You can create reverse key or function-based indexes. An index-organized table (IOT) stores the index and row data in the b-tree structure. Tablespace and storage should be specified when creating indexes. Indexes can be created ONLINE; that is, the table will be available for insert/update/delete operations while the indexing is in progress. The REBUILD clause of the ALTER INDEX command can be used to move the index to a different tablespace or to reorganize the index. You can also change a reverse key index to a normal index or vice versa.

Constraints are created on tables to enforce business rules. There are five types of constraints: not null, check, unique, primary key, and foreign key.
The constraints can be created to check the conformance at each SQL statement or when committing the changes—checking for conformance at each statement is the default. You can enable and disable constraints. Constraints can be enabled with the NOVALIDATE clause to save time after large data loads.
Key Terms

Before you take the exam, make sure you're familiar with the following terms:

data types
partitioning
logical attributes
physical attributes
high-water mark (HWM)
ROWID
bitmap index
b-tree
reverse key index
function-based index
index-organized table
integrity constraints
Review Questions

1. A table is created as follows:

CREATE TABLE MY_TABLE (COL1 NUMBER)
STORAGE (INITIAL 2M NEXT 2M
         MINEXTENTS 6 PCTINCREASE 0);
When you issue the following statement, what will be the size of the table, if the high-water mark of the table is 200KB?

ALTER TABLE MY_TABLE DEALLOCATE UNUSED KEEP 1000K;
A. 1000KB
B. 200KB
C. 1200KB
D. 2MB
E. 13MB

2. Which command is used to drop a constraint?

A. ALTER TABLE … MODIFY CONSTRAINT
B. DROP CONSTRAINT
C. ALTER TABLE … DROP CONSTRAINT
D. ALTER CONSTRAINT … DROP

3. Which data dictionary view can be queried to determine whether a constraint is enabled?

A. DBA_CONS_COLUMNS
B. DBA_CONSTRAINTS
C. DBA_TABLES
D. All of the above
4. Which data dictionary view has the timestamp of the table creation?

A. DBA_OBJECTS
B. DBA_SEGMENTS
C. DBA_TABLES
D. All of the above

5. What happens when you issue the following statement and the CHAINED_ROWS table does not exist in the current schema?

ANALYZE TABLE EMPLOYEE LIST CHAINED ROWS;

A. Oracle creates the CHAINED_ROWS table.
B. Oracle updates the dictionary with the number of chained rows in the table.
C. Oracle creates the CHAINED_ROWS table under the SYS schema; if one already exists under SYS, Oracle uses it.
D. The statement fails.

6. The following statement is issued against the primary key constraint (PK_BONUS) of the BONUS table. Choose two statements that are true.

ALTER TABLE BONUS MODIFY CONSTRAINT PK_BONUS DISABLE VALIDATE;

A. No new rows can be added to the BONUS table.
B. Existing rows of the BONUS table are validated before disabling the constraint.
C. Rows can be modified, but the primary key columns cannot change.
D. The unique index created when defining the constraint is dropped.

7. Which clause in the ANALYZE command checks for the integrity of the rows in the table?

A. COMPUTE STATISTICS
B. VALIDATE STRUCTURE
C. LIST INVALID ROWS
D. None of the above
8. Which statement is not true?

A. A partition can be range-partitioned.
B. A subpartition can be range-partitioned.
C. A partition can be hash-partitioned.
D. A subpartition can be hash-partitioned.

9. A table is created with an INITRANS value of 2. Which value would you choose for INITRANS of an index created on this table?

A. 4
B. 2
C. 1

10. When validating a constraint, why would you specify the EXCEPTIONS clause?

A. To display the ROWIDs of the rows that do not satisfy the constraint
B. To move the bad rows to the table specified in the EXCEPTIONS clause
C. To save the ROWIDs of the bad rows in the table specified in the EXCEPTIONS clause
D. To save the bad rows in the table specified in the EXCEPTIONS clause

11. Which keyword is not valid for the BUFFER_POOL parameter of the STORAGE clause?

A. DEFAULT
B. LARGE
C. KEEP
D. RECYCLE
12. Which clause in the ALTER TABLE command is used to reorganize a table?
A. REORGANIZE
B. REBUILD
C. RELOCATE
D. MOVE

13. Which line in the following code has an error?
1  ALTER TABLE MY_TABLE
2  STORAGE (
3  MINEXTENTS 4
4  NEXT 512K)
5  NOLOGGING;
A. 2
B. 3
C. 4
D. 5

14. Which component is not part of the ROWID?
A. Tablespace
B. Data file number
C. Object ID
D. Block ID

15. Which keyword should be used in the CREATE INDEX command to create a function-based index?
A. CREATE FUNCTION INDEX …
B. CREATE INDEX … ORGANIZATION INDEX
C. CREATE INDEX … FUNCTION BASED …
D. None of the above
16. Which data dictionary view shows statistical information from the ANALYZE INDEX VALIDATE STRUCTURE command?
A. INDEX_STATS
B. DBA_INDEXES
C. IND
D. None; VALIDATE STRUCTURE does not generate statistics.

17. A constraint is created with the DEFERRABLE INITIALLY IMMEDIATE clause. What does this mean?
A. Constraint checking is done only at commit time.
B. Constraint checking is done after each SQL, but you can change this behavior by specifying SET CONSTRAINTS ALL DEFERRED.
C. Existing rows in the table are immediately checked for constraint violation.
D. The constraint is immediately checked in a DML operation, but subsequent constraint verification is done at commit time.

18. Which script creates the CHAINED_ROWS table?
A. catproc.sql
B. catchain.sql
C. utlchain.sql
D. No script is necessary; ANALYZE TABLE LIST CHAINED ROWS creates the table.
19. What is the difference between a unique constraint and a primary key constraint?
A. A unique key constraint requires a unique index to enforce the constraint, whereas a primary key constraint can enforce uniqueness using a unique or non-unique index.
B. A primary key column can be NULL, but a unique key column cannot be NULL.
C. A primary key constraint can make use of an existing index, but a unique constraint always creates an index.
D. A unique constraint column can be NULL, but primary key column(s) cannot be NULL.
Answers to Review Questions

1. C. The KEEP parameter in the DEALLOCATE clause is used to specify the amount of space you want to keep in the table above the HWM. If the KEEP parameter is not specified, Oracle de-allocates all the space above the HWM if the HWM is above MINEXTENTS; otherwise, free space is de-allocated above MINEXTENTS.

2. C. Constraints are defined on the table and are dropped using the DROP clause of the ALTER TABLE command. For dropping the primary key, you can also specify PRIMARY KEY instead of the constraint name. Similarly, to drop a unique constraint, you can also specify UNIQUE ().

3. B. The STATUS column of the DBA_CONSTRAINTS view shows whether the constraint is enabled or disabled.

4. A. The DBA_OBJECTS view has information about all the objects created in the database and has the status of the object as well as the creation timestamp, in the column CREATED. DBA_TABLES does not show the timestamp.

5. D. If you do not specify a table name to insert the ROWIDs of chained/migrated rows, Oracle looks for a table named CHAINED_ROWS in the user's schema. If the table does not exist, Oracle returns an error. The dictionary (the DBA_TABLES view) is updated with the number of chained rows when you do a COMPUTE STATISTICS on the table.

6. A and D. DISABLE VALIDATE disables the constraint and drops the index, but keeps the constraint valid. No DML operation is allowed on the table.

7. B. The VALIDATE STRUCTURE clause of the ANALYZE TABLE command checks the structure of the table and makes sure all rows are readable.

8. B. Subpartitions can only be hash-partitioned. A partition can be range-partitioned or hash-partitioned.
9. A. Since index blocks hold more entries per block than table data blocks hold, you should provide a higher value of INITRANS for the index than for the table.

10. C. If you specify the EXCEPTIONS INTO clause when validating or enabling a constraint, the ROWIDs of the rows that do not satisfy the constraint are saved in the table specified in the EXCEPTIONS clause. You can remove the bad rows or fix the column values and enable the constraint.

11. B. The BUFFER_POOL parameter specifies a buffer pool cache for the table or index. The KEEP pool retains the blocks in the SGA. RECYCLE removes blocks from the SGA as soon as the operation is completed, and the DEFAULT pool is for objects for which KEEP or RECYCLE is not specified.

12. D. The MOVE clause is used to reorganize a table. You can specify new tablespace and storage parameters. Queries are allowed on the table, but no DML operations are allowed during the move.

13. B. When you change the storage parameters for an existing index or table, you cannot change the MINEXTENTS and INITIAL values.

14. A. The format of a ROWID is OOOOOOFFFBBBBBBRRR, where OOOOOO is the object number, FFF is the relative data file number where the block is located, BBBBBB is the block ID where the row is located, and RRR is the row in the block.

15. D. No keyword needs to be specified to create a function-based index other than the function itself. To enable a function-based index, you must set the parameter QUERY_REWRITE_ENABLED to TRUE and QUERY_REWRITE_INTEGRITY to TRUSTED.

16. A. The INDEX_STATS and INDEX_HISTOGRAMS views show statistical information from the ANALYZE INDEX VALIDATE STRUCTURE statement.
17. B. DEFERRABLE specifies that the constraint can be deferred using the SET CONSTRAINTS command. INITIALLY IMMEDIATE specifies that the constraint's default behavior is to validate the constraint for each SQL.

18. C. The utlchain.sql script, located under the rdbms/admin directory of the Oracle software installation, creates the table. When chained or migrated rows are found in the table after issuing the ANALYZE TABLE LIST CHAINED ROWS command, the ROWIDs of such chained/migrated rows are inserted into the CHAINED_ROWS table.

19. D. Columns that are part of the primary key cannot accept NULL values.
Chapter 8: Managing Users and Security

ORACLE8i ARCHITECTURE AND ADMINISTRATION EXAM OBJECTIVES OFFERED IN THIS CHAPTER:
Managing Password Security and Resources
Manage passwords using profiles
Administer profiles
Control use of resources using profiles
Obtain information about profiles, password management, and resources

Managing Users
Create new database users
Alter and drop existing database users
Monitor information about existing users

Managing Privileges
Identify system and object privileges
Grant and revoke privileges
Control operating system or password file authentication
Identify auditing capabilities
Managing Roles
Create and modify roles
Control availability of roles
Remove roles
Use predefined roles
Display role information from the data dictionary
Exam objectives are subject to change at any time without prior notice and at Oracle's sole discretion. Please visit Oracle's Training and Certification Web site (http://education.oracle.com/certification/index.html) for the most current exam objectives listing.
Controlling database access and resource limits is an important aspect of the DBA's function. Profiles are used to manage the database and system resources and to manage database passwords and password verification. Database and data access are controlled using privileges. Roles are created to manage the privileges. This chapter covers creating users and assigning proper resources and privileges. It also discusses the auditing capabilities of the database.
Profiles
Profiles are used to control the database and system resource usage. Oracle provides a set of predefined resource parameters that you can use to monitor and control database resource usage. The DBA can define limits for each resource by using a database profile. Profiles are also used for password management. You can create various profiles for different user communities and assign a profile to each user. When the database is created, Oracle creates a profile named DEFAULT, and if you do not specify a profile for the user, the DEFAULT profile is assigned.
Resource Management

Oracle lets you control the following types of resource usage through profiles:
Concurrent sessions per user
Elapsed and idle time connected to the database
CPU time used
Private SQL and PL/SQL area used in the SGA
Logical reads performed
Resource limits are enforced only when the parameter RESOURCE_LIMIT is set to TRUE. Even if you have defined profiles and assigned profiles to users, Oracle enforces them only when this parameter is set to TRUE. You can set this parameter in the initialization parameter file so that every time the database starts, the resource usage is controlled for each user using the assigned profile. Resource limits can also be enabled or disabled using the ALTER SYSTEM command. The default value of RESOURCE_LIMIT is FALSE.

The limit for each resource is specified as an integer; you can set no limit for a given resource by specifying UNLIMITED, or use the value specified in the DEFAULT profile by specifying DEFAULT. The DEFAULT profile initially has the value UNLIMITED for all resources, and it can be modified after database creation.

Most resource limits are set at the session level; a session is created when a user connects to the database. Certain limits can be controlled at the statement level (but not at the transaction level). If a user exceeds a resource limit, Oracle aborts the current operation, rolls back the changes made by the statement, and returns an error. The user has the option of committing or rolling back the transaction, because the statements issued earlier in the transaction are not aborted. No other operation is permitted when a session-level limit is reached. The user can disconnect, in which case the transaction is committed.

The parameters that are used to control resources are as follows:

SESSIONS_PER_USER  Limits the number of concurrent sessions a user can have. No more sessions from the current user are allowed when this threshold is reached.

CPU_PER_SESSION  Limits the amount of CPU time a session can use. The CPU time is specified in hundredths of a second.

CPU_PER_CALL  Limits the amount of CPU time a single SQL statement can use. The CPU time is specified in hundredths of a second. This is useful for controlling runaway queries, but you should be careful when specifying this limit for batch programs.

LOGICAL_READS_PER_SESSION  Limits the number of data blocks read in a session, including the blocks read from memory and from physical reads.

LOGICAL_READS_PER_CALL  Limits the number of data blocks read by a single SQL statement, including the blocks read from memory and from physical reads.

PRIVATE_SGA  Private areas for SQL and PL/SQL statements are created in the SGA in the multithreaded architecture. This parameter limits the
amount of space allocated in the SGA for private areas, per session. You can specify K or M to indicate the size in KB or MB, respectively. If K or M is not used, the size is in bytes. This limit does not apply to dedicated server architecture connections.

CONNECT_TIME  Specifies the maximum number of minutes a session can stay connected to the database (total elapsed time, not CPU time). When the threshold is reached, the user is automatically disconnected from the database; any pending transaction is rolled back.

IDLE_TIME  Specifies the maximum number of minutes a session can be continuously idle, that is, without any activity for a continuous period of time. When the threshold is reached, the user is disconnected from the database; any pending transaction is rolled back.

COMPOSITE_LIMIT  You can define a cost for the system resources (the resource cost on one database may be different from another, based on the number of transactions, CPU, memory, etc.) known as the composite limit. The composite limit is a weighted sum of four resource limits: CPU_PER_SESSION, LOGICAL_READS_PER_SESSION, CONNECT_TIME, and PRIVATE_SGA. Setting the resource cost is discussed in the upcoming section "Managing Profiles."
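As a reminder, none of these limits takes effect while RESOURCE_LIMIT is FALSE. A minimal sketch of enabling enforcement for the running instance (run as a privileged user):

```
SQL> ALTER SYSTEM SET RESOURCE_LIMIT = TRUE;

System altered.
```

To have limits enforced at every startup, also set RESOURCE_LIMIT = TRUE in the initialization parameter file.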
Password Management

Profiles are also used for password management. You can set the following by using profiles:

Account locking  Number of failed login attempts, and the number of days the password will be locked

Password expiration  How often passwords should be changed, whether passwords can be reused, and the grace period after which the user is warned that the password change is due

Password complexity  Use of a customized function to verify the password complexity; for example, the password should not be the same as the user ID, cannot include commonly used words, etc.
The following are the parameters in the profiles that are used for password management:

FAILED_LOGIN_ATTEMPTS  Specifies the maximum number of consecutive invalid login attempts (providing an incorrect password) allowed before the user account is locked.

PASSWORD_LOCK_TIME  Specifies the number of days the user account will remain locked after the user has made FAILED_LOGIN_ATTEMPTS number of consecutive failed login attempts.

PASSWORD_LIFE_TIME  Specifies the number of days a user can use one password. If the user does not change the password within the number of days specified, all connection requests return an error. The DBA then has to reset the password.

PASSWORD_GRACE_TIME  Specifies the number of days the user will get a warning before the password expires. This is a reminder for the user to change the password.

PASSWORD_REUSE_TIME  Specifies the number of days a password cannot be used again after changing it.

PASSWORD_REUSE_MAX  Specifies the number of password changes required before a password can be reused. You cannot specify a value for both PASSWORD_REUSE_TIME and PASSWORD_REUSE_MAX; one should always be set to UNLIMITED, because you can enable only one type of password history method.

PASSWORD_VERIFY_FUNCTION  Specifies the function you want to use to verify the complexity of the new password. Oracle provides a default script, which can be modified.
You can specify minutes or hours as a fraction or expression in parameters that require days as a value. One hour can be represented as 0.042 days or 1/24, and one minute can be specified as 0.000694 days or 1/24/60 or 1/1440.
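A brief sketch applying the fractional-day values described above (the profile name is illustrative): lock the account for one hour after three consecutive failed logins, with a three-day warning before passwords expire.

```
SQL> CREATE PROFILE TEMP_LOCK_PROFILE LIMIT
  2  FAILED_LOGIN_ATTEMPTS 3
  3  PASSWORD_LOCK_TIME 1/24
  4  PASSWORD_GRACE_TIME 3;

Profile created.
```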
Managing Profiles

You can have many profiles created in the database that specify both resource management parameters and password management parameters.
However, a user can have only one profile assigned at any given time. A profile is created using the CREATE PROFILE command. You need to provide a name for the profile, and then specify the parameter names and their values separated by space(s).

Let's create a profile to manage passwords and resources for the accounting department users. The users are required to change their password every 60 days, and they cannot reuse a password for 90 days. They are allowed to mistype the password only six consecutive times while connecting to the database; if the login fails a seventh time, their account will be locked forever (until the DBA or security department unlocks the account). The accounting department users are allowed to have a maximum of six database connections; they can stay connected to the database for 24 hours, but an inactivity of 2 hours will terminate their session. To prevent the users from performing runaway queries, in this example you will set the maximum number of blocks they can read per SQL statement to 1 million.

SQL> CREATE PROFILE ACCOUNTING_USER LIMIT
  2  SESSIONS_PER_USER 6
  3  CONNECT_TIME 1440
  4  IDLE_TIME 120
  5  LOGICAL_READS_PER_CALL 1000000
  6  PASSWORD_LIFE_TIME 60
  7  PASSWORD_REUSE_TIME 90
  8  PASSWORD_REUSE_MAX UNLIMITED
  9  FAILED_LOGIN_ATTEMPTS 6
 10  PASSWORD_LOCK_TIME UNLIMITED;

Profile created.
In the example, parameters such as PASSWORD_GRACE_TIME, CPU_PER_SESSION, and PRIVATE_SGA are not used. They will have a value of DEFAULT, which means the value will be taken from the DEFAULT profile.

The DBA or security officer can unlock a locked user account by using the ALTER USER command. The following example shows the unlocking of SCOTT's account.

SQL> ALTER USER SCOTT ACCOUNT UNLOCK;

User altered.
Composite Limit

The composite limit specifies the total resource cost for a session. You can define a weight for each resource based on the available resources. The resources that are considered for calculating the composite limit are:
CPU_PER_SESSION
LOGICAL_READS_PER_SESSION
CONNECT_TIME
PRIVATE_SGA
The costs associated with each of these resources are set at the database level by using the ALTER RESOURCE COST command. By default, the resources have a cost of 0, which means they should not be considered for a composite limit (they are inexpensive). A higher cost means that the resource is expensive. If you do not specify any of these resources in ALTER RESOURCE COST, it will keep the previous value. For example:

SQL> ALTER RESOURCE COST
  2  LOGICAL_READS_PER_SESSION 10
  3  CONNECT_TIME 2;

Resource cost altered.
Here CPU_PER_SESSION and PRIVATE_SGA will have a cost of 0 (if they have not been modified before). You can define limits for each of the four parameters in the profile as well as set the composite limit. The limit that is reached first is the one that takes effect. Let's add a composite limit to the profile you created earlier.

SQL> ALTER PROFILE ACCOUNTING_USER LIMIT
  2  COMPOSITE_LIMIT 1500000;

Profile altered.
The cost for the composite limit is calculated as follows:

Cost = (10 × LOGICAL_READS_PER_SESSION) + (2 × CONNECT_TIME)

If the user has performed 100,000 block reads and was connected for two hours, the cost so far would be (10 × 100,000) + (2 × 120) = 1,000,240. The user will be restricted when this cost exceeds 1,500,000 or if the values for LOGICAL_READS_PER_SESSION or CONNECT_TIME set in the profile are reached.
Password Verification Function

You can create a function to verify the complexity of the passwords and assign the function name to the PASSWORD_VERIFY_FUNCTION parameter in the profile. When a password is changed, Oracle verifies whether the supplied password satisfies the conditions specified in this function. Oracle provides a default verification function, known as VERIFY_FUNCTION, which can be found under the rdbms/admin directory of your Oracle software installation; the script is named utlpwdmg.sql. The password verification function should be owned by SYS and should have the following characteristics.

FUNCTION SYS.<function_name>
  (<userid_variable>       IN VARCHAR2 (30),
   <password_variable>     IN VARCHAR2 (30),
   <old_password_variable> IN VARCHAR2 (30))
RETURN BOOLEAN
The default password verification function provided by Oracle checks for the following:
Password is not the same as the username.
Password has a minimum length.
Password is not too simple; a list of words is checked.
Password contains at least one letter, one digit, and one punctuation mark.
Password differs from the previous password by at least three letters.
If the new password satisfies all the conditions, the function returns a Boolean result of TRUE, and the user’s password is changed.
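A minimal sketch of a custom verification function follows. The function name and the two checks shown are illustrative, not the full set of checks performed by the VERIFY_FUNCTION shipped in utlpwdmg.sql; as noted above, the function must be created in the SYS schema.

```
CREATE OR REPLACE FUNCTION SYS.MY_VERIFY_FUNCTION
  (username     IN VARCHAR2,
   password     IN VARCHAR2,
   old_password IN VARCHAR2)
RETURN BOOLEAN IS
BEGIN
  -- Reject a password that matches the username
  IF UPPER(password) = UPPER(username) THEN
    raise_application_error(-20001, 'Password same as user name');
  END IF;
  -- Enforce a minimum length
  IF LENGTH(password) < 6 THEN
    raise_application_error(-20002, 'Password too short');
  END IF;
  RETURN TRUE;
END;
/
```

Assign it to a profile with PASSWORD_VERIFY_FUNCTION MY_VERIFY_FUNCTION.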
Altering Profiles

Profile values are changed using the ALTER PROFILE command. Any parameter in the profile can be changed using this command. The changes take effect the next time the user connects to the database. For example, to add a password verification function and set a composite limit for the profile you created in the previous example:

SQL> ALTER PROFILE ACCOUNTING_USER LIMIT
  2  PASSWORD_VERIFY_FUNCTION VERIFY_FUNCTION
  3  COMPOSITE_LIMIT 1500;

Profile altered.
Dropping Profiles

Profiles are dropped using the DROP PROFILE command. If any user is assigned the profile you wish to drop, Oracle returns an error. You can drop such profiles by specifying CASCADE, in which case the users who have that profile will be assigned the DEFAULT profile.

SQL> DROP PROFILE ACCOUNTING_USER CASCADE;

Profile dropped.
Assigning Profiles

Profiles can be assigned to users by using the CREATE USER or ALTER USER command. These commands are discussed later in the chapter. This example assigns the ACCOUNTING_USER profile to an existing user named SCOTT:

SQL> ALTER USER SCOTT
  2  PROFILE ACCOUNTING_USER;

User altered.
Querying Profile Information

Profile information can be queried from the DBA_PROFILES view. The following example shows information about the profile created previously. The RESOURCE_TYPE column indicates whether the parameter is KERNEL (resource) or PASSWORD.

SQL> SELECT RESOURCE_NAME, LIMIT
  2  FROM   DBA_PROFILES
  3  WHERE  PROFILE = 'ACCOUNTING_USER'
  4  AND    RESOURCE_TYPE = 'KERNEL';

RESOURCE_NAME             LIMIT
------------------------- ----------
COMPOSITE_LIMIT           1500
SESSIONS_PER_USER         6
CPU_PER_SESSION           DEFAULT
CPU_PER_CALL              DEFAULT
LOGICAL_READS_PER_SESSION DEFAULT
LOGICAL_READS_PER_CALL    10000000
IDLE_TIME                 120
CONNECT_TIME              UNLIMITED
PRIVATE_SGA               DEFAULT

9 rows selected.
The views USER_RESOURCE_LIMITS and USER_PASSWORD_LIMITS show the limits defined for the current user for resources and passwords, respectively.

SQL> SELECT * FROM USER_PASSWORD_LIMITS;

RESOURCE_NAME             LIMIT
------------------------- ---------------
FAILED_LOGIN_ATTEMPTS     6
PASSWORD_LIFE_TIME        60
PASSWORD_REUSE_TIME       90
PASSWORD_REUSE_MAX        UNLIMITED
PASSWORD_VERIFY_FUNCTION  VERIFY_FUNCTION
PASSWORD_LOCK_TIME        UNLIMITED
PASSWORD_GRACE_TIME       UNLIMITED

7 rows selected.
The system resource cost can be queried from the RESOURCE_COST view.

SQL> SELECT * FROM RESOURCE_COST;

RESOURCE_NAME              UNIT_COST
------------------------- ----------
CPU_PER_SESSION                   10
LOGICAL_READS_PER_SESSION          0
CONNECT_TIME                       2
PRIVATE_SGA                        0
Users
Access to the Oracle database is provided using database accounts known as usernames (users). If the user owns database objects, the account is known as a schema, which is a logical grouping of all the objects owned by the user. Persons
requiring access to the database should have a valid username created in the database. The following are the properties associated with a database user account:

Authentication method  Each user must be authenticated to connect to the database by using a password, through the operating system, or via the Enterprise Directory Service. Operating system authentication is discussed under the "Privileges and Roles" section.

Default and temporary tablespaces  The default tablespace specifies a tablespace for the user to create objects if another tablespace is not explicitly specified. The user needs a quota assigned in the tablespace to create objects, even if the tablespace is the user's default. The temporary tablespace is used to create the temporary segments; the user need not have any quota assigned in this tablespace.

Space quota  The user needs to have a space quota assigned in each tablespace where they want to create objects.

Profile  The user can have a profile to specify the resource limits and password settings. If a profile is not specified when creating the user, the DEFAULT profile is assigned.
When the database is created, the SYS and SYSTEM users are created. SYS is the schema that owns the data dictionary.
Managing Users

Users are created in the database using the CREATE USER command. The authentication method should be specified when creating the user. A common authentication method is using the database; the username is assigned a password, which is stored encrypted in the database. Oracle verifies the password when establishing a connection to the database. Let's create a user JOHN with the various clauses available in the CREATE USER command and discuss each clause.

SQL> CREATE USER JOHN
  2  IDENTIFIED BY "B1S2!"
  3  DEFAULT TABLESPACE USERS
  4  TEMPORARY TABLESPACE TEMP
  5  QUOTA UNLIMITED ON USERS
  6  QUOTA 1M ON INDX
  7  PROFILE ACCOUNTING_USER
  8  PASSWORD EXPIRE
  9  ACCOUNT UNLOCK;

User created.
The IDENTIFIED BY clause specifies that the user will be authenticated using the database. To authenticate the user using the operating system, specify IDENTIFIED EXTERNALLY. The password specified is not case sensitive. If you do not specify the DEFAULT TABLESPACE and TEMPORARY TABLESPACE clauses, the SYSTEM tablespace is assigned as both the default and temporary tablespaces.

Though the default and temporary tablespaces are specified, JOHN does not initially have any space quota on the USERS tablespace. Quotas on the USERS and INDX tablespaces are allocated through the QUOTA clause. The QUOTA clause can be specified any number of times with the appropriate tablespace name and space limit. The space limit is specified in bytes, but can be followed by K or M to indicate KB or MB. To create extents, the user should have a sufficient space quota in the tablespace. UNLIMITED specifies that the quota on the tablespace is not limited.

The PROFILE clause specifies the profile to be assigned. PASSWORD EXPIRE specifies that the user will be prompted (if using SQL*Plus; otherwise, the DBA should change the password) for a new password at the first login. ACCOUNT UNLOCK is the default; you can specify ACCOUNT LOCK to initially lock the account.

The user JOHN can connect to the database only if he has the CREATE SESSION privilege. Granting privileges and roles is discussed later, in the section "Privileges and Roles." The CREATE SESSION privilege is granted to user JOHN by specifying:

SQL> GRANT CREATE SESSION TO JOHN;

Grant succeeded.
To create extents, a user with the UNLIMITED TABLESPACE system privilege does not need any space quota in any tablespace.
Modifying User Accounts

All the characteristics discussed when creating the user can be modified using the ALTER USER command. You can also assign or modify the default roles assigned to the user (discussed later in this chapter). Changing the default tablespace of a user affects only the objects created in the future. The following example changes the default tablespace of JOHN and assigns a new password.

ALTER USER JOHN
IDENTIFIED BY SHADOW2#
DEFAULT TABLESPACE APPLICATION_DATA;
The DBA can lock or unlock a user's account as follows:

ALTER USER <username> ACCOUNT [LOCK/UNLOCK]

The DBA can also expire the user's password:

ALTER USER <username> PASSWORD EXPIRE

Users must change the password the next time they log in, or the DBA must change the password. If the password is expired, SQL*Plus prompts for a new password at login time.

In the following example, setting the quota to 0 revokes the tablespace quota assigned. The objects created by the user in the tablespace remain there, but no new extents can be allocated in that tablespace.

ALTER USER JOHN QUOTA 0 ON USERS;
Users can change their password by using the ALTER USER command; they do not need the ALTER USER privilege to do so.
Dropping Users

You can drop a user from the database by using the DROP USER command. If the user (schema) owns objects, Oracle returns an error. If you specify the CASCADE keyword, Oracle drops all the objects owned by the user and then drops the user. If other schema objects, such as procedures, packages, or views, refer to the objects in the user's schema, they become invalid. When objects are dropped, space is freed up immediately in the relevant
tablespaces. The following example drops the user JOHN along with all the objects he owns.

DROP USER JOHN CASCADE;
You cannot drop a user who is currently connected to the database.
Authenticating Users

In this section, we will discuss two widely used methods of authenticating users. The first is authentication by the database. You define a password for the user (the user can change the password), and Oracle stores the password in the database (encrypted). When connections are made to the database, Oracle verifies the password supplied by the user against the password in the database. By default, the password supplied by the user is not encrypted when sent over the network. To encrypt the user's password, you must set the ORA_ENCRYPT_LOGIN environment variable to TRUE on the client machine. Similarly, when using database links, the password sent across the network is not encrypted. To encrypt such connections, you must set the DBLINK_ENCRYPT_LOGIN initialization parameter to TRUE. The passwords are encrypted using the Data Encryption Standard (DES) algorithm.

The second widely used method is authentication by the operating system. Oracle verifies the OS login account and connects to the database; users need not specify a username and password. Oracle does not store the passwords of such OS-authenticated users, but they must have a username in the database. The initialization parameter OS_AUTHENT_PREFIX determines the prefix used for OS authorization. By default, the value is OPS$. For example, if your OS login name is ALEX, the database username should be OPS$ALEX. When Alex specifies CONNECT / or does not specify a username to connect to the database, Oracle tries to connect him to the OPS$ALEX account. You can set the OS_AUTHENT_PREFIX parameter to a null string ""; this will not add any prefix. To create an OS-authenticated user:

SQL> CREATE USER OPS$ALEX IDENTIFIED EXTERNALLY;

To connect to a remote database using OS authentication, the REMOTE_OS_AUTHENT parameter should be set to TRUE. You must be careful when using this, because connections can be made from any computer. For example, if
you have an OS account named ORACLE and a database account OPS$ORACLE, you can connect to the database from the machine where the database resides. If you set REMOTE_OS_AUTHENT to TRUE, you can log in to any server with the ORACLE OS account and connect to the database over the network. If a user creates an OS ID named ORACLE, and is on the network, the user can connect to the database using OS authentication.
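A conservative initialization-file sketch of the parameters discussed above (values are illustrative; # starts a comment in the parameter file):

```
OS_AUTHENT_PREFIX = "OPS$"    # prefix for OS-authenticated accounts (the default)
REMOTE_OS_AUTHENT = FALSE     # do not trust OS logins from remote machines
```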
Complying with Oracle Licensing Terms

The DBA is responsible for ensuring that the organization complies with the Oracle licensing agreement. Chapter 3, "Creating a Database and Data Dictionary," discussed the parameters that can be set in the initialization file to enforce license agreements. They are:

LICENSE_MAX_SESSIONS  Maximum number of concurrent user sessions. When this limit is reached, only users with the RESTRICTED SESSION privilege are allowed to connect. The default is 0 (unlimited). Set this parameter if your license is based on concurrent database usage.

LICENSE_SESSIONS_WARNING  A warning limit on the number of concurrent user sessions. The default value is 0 (unlimited). A warning message is written in the alert file when the limit is reached.

LICENSE_MAX_USERS  Maximum number of users that can be created in the database. The default is 0 (unlimited). Set this parameter if your license is based on the total number of database users.

You can change the value of these parameters dynamically by using the ALTER SYSTEM command. For example:

ALTER SYSTEM SET
  LICENSE_MAX_SESSIONS = 256
  LICENSE_SESSIONS_WARNING = 200;
The high-water mark column of the V$LICENSE view shows the maximum number of concurrent sessions created since instance start-up and the limits set. A value of 0 indicates that no limit is set. SQL> SELECT * FROM V$LICENSE;
SESSIONS_MAX SESSIONS_WARNING SESSIONS_CURRENT SESSIONS_HIGHWATER  USERS_MAX
------------ ---------------- ---------------- ------------------ ----------
         256              200              105                115          0
The total number of database users can be obtained from the DBA_USERS view: SELECT COUNT(*) FROM DBA_USERS;
Querying User Information

User information can be queried from the data dictionary views DBA_USERS and USER_USERS. USER_USERS shows only one row: information about the current user. The user account status, password expiration date, account locked date (if locked), encrypted password, default and temporary tablespaces, profile name, and creation date can be obtained from this view. Oracle creates a numeric ID and assigns it to the user when the user is created. SYS has an ID of 0.

SQL> SELECT USERNAME, DEFAULT_TABLESPACE,
  2  TEMPORARY_TABLESPACE, PROFILE,
  3  ACCOUNT_STATUS, EXPIRY_DATE
  4  FROM DBA_USERS
  5  WHERE USERNAME = 'JOHN';

USERNAME  DEFAULT_TABLESPACE  TEMPORARY_TABLESPACE  PROFILE          ACCOUNT_ST  EXPIRY_DA
--------  ------------------  --------------------  ---------------  ----------  ---------
JOHN      USERS               TEMP                  ACCOUNTING_USER  OPEN        22-OCT-00
The view ALL_USERS shows the username and creation date.

SQL> SELECT * FROM ALL_USERS
  2  WHERE USERNAME LIKE 'SYS%';

USERNAME     USER_ID  CREATED
--------  ----------  ---------
SYS                0  13-JUL-00
SYSTEM             5  13-JUL-00
The views DBA_TS_QUOTAS and USER_TS_QUOTAS list the tablespace quota assigned to the user. A value of -1 indicates an unlimited quota.

SQL> SELECT TABLESPACE_NAME, BYTES, MAX_BYTES, BLOCKS,
  2  MAX_BLOCKS
  3  FROM DBA_TS_QUOTAS
  4  WHERE USERNAME = 'JOHN';

TABLESPACE       BYTES   MAX_BYTES      BLOCKS  MAX_BLOCKS
----------  ----------  ----------  ----------  ----------
INDX                 0     1048576           0         128
USERS                0          -1           0          -1
The V$SESSION view shows the users currently connected to the database, and V$SESSTAT shows the session statistics. The description for the statistic codes in V$SESSTAT can be found in V$STATNAME.

SQL> SELECT USERNAME, OSUSER, MACHINE, PROGRAM
  2  FROM V$SESSION
  3  WHERE USERNAME = 'JOHN';

USERNAME  OSUSER  MACHINE        PROGRAM
--------  ------  -------------  -----------------
JOHN      KJOHN   USA.CO.AU      SQLPLUSW.EXE

SQL> SELECT A.NAME, B.VALUE
  2  FROM V$STATNAME A, V$SESSTAT B, V$SESSION C
  3  WHERE A.STATISTIC# = B.STATISTIC#
  4  AND B.SID = C.SID
  5  AND C.USERNAME = 'JOHN'
  6  AND A.NAME LIKE '%session%';
NAME                                          VALUE
---------------------------------------- ----------
session logical reads                           729
session stored procedure space                    0
CPU used by this session                         12
session connect time                              0
session uga memory                            98368
session uga memory max                       159804
session pga memory                           296416
session pga memory max                       296416
session cursor cache hits                         0
session cursor cache count                        0
The current username connected to the database is available in the system variable USER. Using SQL*Plus, you can run SHOW USER to get the username.

SQL> SHOW USER
USER is "JOHN"
SQL>
Privileges and Roles
Privileges in the Oracle database control access to the data and restrict the actions users can perform. Through proper privileges, users can create, drop, or modify objects in their own schema or in another user’s schema. Privileges also determine what data a user should have access to. Privileges can be granted to a user via two methods:
Assign privileges directly to the user
Assign privileges to a role, and then assign the role to the user
A role is a named set of privileges, which eases the management of privileges. For example, if you have 10 users needing access to the data in the accounting tables, you can grant the required privileges to a role and grant the role to the 10 users. There are two types of privileges:

Object privileges  Object privileges are granted on schema objects that belong to a different schema. The privilege can be on data (to read, modify, delete, add, or reference), on a program (to execute), or to modify an object (to change the structure).

System privileges  System privileges provide the right to perform a specific action on any schema in the database. System privileges do not specify an object, but are granted at the database level. Certain system privileges are very powerful and should be granted only to trusted users. System privileges and object privileges can be granted to a role.

PUBLIC is a user group defined in the database; it is not a database user or a role. Every user in the database belongs to this group, so if you grant privileges to PUBLIC, they are available to all users of the database.
Object Privileges
Object privileges are granted on a specific object. The owner of the object has all privileges on the object. The owner can grant privileges on that object to any other users of the database. The owner can also authorize another user in the database to grant privileges on the object to other users. Let’s consider an example: user JOHN owns a table named CUSTOMER and grants read and update privileges to JAMES. When multiple privileges are specified, they are separated by a comma. SQL> GRANT SELECT, UPDATE ON CUSTOMER TO JAMES;
JAMES cannot insert into or delete from CUSTOMER; JAMES can only query and update rows in the CUSTOMER table. JAMES cannot grant the privilege to another user in the database, because JAMES is not authorized by JOHN to do so. If the privilege is granted with the WITH GRANT OPTION, JAMES can grant the privilege to others. SQL> GRANT SELECT, UPDATE ON CUSTOMER 2 TO JAMES WITH GRANT OPTION;
The INSERT, UPDATE, or REFERENCES privileges can be granted on columns also. For example: SQL> GRANT INSERT (CUSTOMER_ID) ON CUSTOMER TO JAMES;
There are nine object privileges that can be granted to users of the database:

SELECT  Grants read (query) access to the data in a table, view, sequence, or materialized view

UPDATE  Grants update (modify) access to the data in a table, column, view, or materialized view

DELETE  Grants delete (remove) access to the data in a table, view, or materialized view

INSERT  Grants insert (add) access to a table, column, view, or materialized view

EXECUTE  Grants execute (run) privilege on a PL/SQL stored object, such as a procedure, package, or function
READ  Grants read access on a directory

INDEX  Grants index creation privilege on a table

REFERENCES  Grants reference access to a table or columns to create foreign keys that can reference the table

ALTER  Grants access to modify the structure of a table or sequence

The following are some of the points you should remember related to object privileges:
Object privileges can be granted to a user, role, or PUBLIC.
If a view refers to tables or views from another user, you must have been granted privileges WITH GRANT OPTION on the underlying tables of the view to grant any privilege on the view to another user. For example, say JOHN owns a view that references a table owned by JAMES. To grant the SELECT privilege on the view to another user, JOHN must have received the SELECT privilege on JAMES's table WITH GRANT OPTION.
Any object privilege received on a table provides the grantee the privilege to lock the table.
The SELECT privilege cannot be specified on columns; to grant column-level SELECT privileges, create a view with the required columns and grant SELECT on the view.
You can specify ALL or ALL PRIVILEGES to grant all available privileges on an object (for example, GRANT ALL ON CUSTOMER TO JAMES).
Even if you have the DBA privilege, to grant privileges on objects owned by another user you must have been granted the appropriate privilege on the object WITH GRANT OPTION.
Multiple privileges can be granted to multiple users and/or roles in one statement. For example, GRANT INSERT, UPDATE, SELECT ON CUSTOMER TO ADMIN_ROLE, JULIE, SCOTT;
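The column-level SELECT workaround mentioned above can be sketched as follows (the view and column names are hypothetical):

SQL> CREATE VIEW CUSTOMER_NAMES AS
  2  SELECT CUSTOMER_ID, CUSTOMER_NAME FROM CUSTOMER;

SQL> GRANT SELECT ON CUSTOMER_NAMES TO JAMES;

JAMES can now query the two exposed columns through the view but has no access to the remaining columns of CUSTOMER.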
System Privileges
System privileges are the privileges that enable the user to perform an action; they are not specified on any particular object. Like object privileges, system privileges can be granted to a user, role, or PUBLIC. There are many system privileges in Oracle; Table 8.1 summarizes the privileges used to manage objects in the database. The CREATE, ALTER, and DROP privileges provide the ability to create, modify, and drop the specified object in the user's own schema. When a privilege is specified with ANY, it authorizes the user to perform the action on any schema in the database. Table 8.1 shows the types of privileges that are associated with certain types of objects. For example, the SELECT ANY TABLE privilege gives the user the ability to query all tables or views in the database, irrespective of who owns them; the SELECT ANY SEQUENCE privilege gives the user the ability to select from all sequences in the database.

TABLE 8.1  System Privileges for Managing Objects

[Table 8.1 is a matrix of object types against privilege forms. Privilege forms: CREATE, CREATE ANY, ALTER, ALTER ANY, DROP, DROP ANY, EXECUTE ANY, SELECT ANY, QUERY REWRITE, and GLOBAL QUERY REWRITE. Object types: cluster, context, database link, public database link, dimension, directory, index, indextype, library, materialized view, operator, outline, procedure, profile, role, rollback segment, sequence, snapshot, synonym, public synonym, table, tablespace, trigger, type, user, and view.]
Table 8.2 lists other system privileges that do not fall into the categories outlined in Table 8.1.

TABLE 8.2  Additional System Privileges

ALTER DATABASE  Change the database configuration by using the ALTER DATABASE command.

ALTER SYSTEM  Use the ALTER SYSTEM command.

AUDIT SYSTEM  Audit SQL statements.

CREATE SESSION  Connect to the database.

ALTER RESOURCE COST  Use the ALTER RESOURCE COST command to set up resource costs.

ALTER SESSION  Change session properties by using ALTER SESSION.

RESTRICTED SESSION  Connect to the database when the database is in restricted mode.

BACKUP ANY TABLE  Export tables that belong to other users.

DELETE ANY TABLE  Delete rows from tables or views owned by any user in the database.

DROP ANY TABLE  Drop tables or table partitions owned by any user in the database.

INSERT ANY TABLE  Insert rows into tables or views owned by any user in the database.

LOCK ANY TABLE  Lock tables or views owned by any user in the database.

UPDATE ANY TABLE  Update rows in tables or views owned by any user in the database.

MANAGE TABLESPACE  Perform tablespace management operations, such as taking tablespaces offline or online and beginning or ending backup.

UNLIMITED TABLESPACE  Create objects in any tablespace; space in the database is not restricted.

ADMINISTER DATABASE TRIGGER  Create a trigger in the database (you still need the CREATE TRIGGER or CREATE ANY TRIGGER privilege).

BECOME USER  Become another user while doing a full import.

ANALYZE ANY  Use the ANALYZE command on any table, index, or cluster in any schema in the database.

AUDIT ANY  Audit any object or schema in the database.

COMMENT ANY TABLE  Create comments on any tables in the database.

GRANT ANY PRIVILEGE  Grant any system privilege.

GRANT ANY ROLE  Grant any role.

SYSOPER  Start up and shut down the database; mount, open, or back up the database; use the ARCHIVELOG and RECOVER commands; use the RESTRICTED SESSION privilege.

SYSDBA  Perform all SYSOPER actions, plus create or alter a database.
Privileges with ANY are powerful and should be granted only to responsible users. Privileges with ANY provide access to all such objects in the database, including SYS-owned dictionary objects. For example, if you give a user the ALTER ANY TABLE privilege, that user can use the privilege on a data dictionary table. To protect the dictionary, Oracle provides an initialization parameter, O7_DICTIONARY_ACCESSIBILITY, that controls access to the data dictionary. If this parameter is set to TRUE (the default, which is the Oracle7 behavior), a user with an ANY privilege can exercise that privilege on the SYS dictionary objects. It is not possible to access the dictionary with an ANY privilege if this parameter is set to FALSE. For example, a user with SELECT
ANY TABLE can query the DBA_ views, but when O7_DICTIONARY_ACCESSIBILITY is set to FALSE, the user cannot query the dictionary views. You can, however, grant the user specific access to the dictionary views (via object privileges). When we discuss roles later in this chapter, you'll learn how to provide query access to the dictionary. Here are some of the points you should remember about system privileges:
To connect to the database, you need the CREATE SESSION privilege.
To truncate a table that belongs to another schema, you need the DROP ANY TABLE privilege.
The CREATE ANY PROCEDURE (or EXECUTE ANY PROCEDURE) privilege allows the user to create, replace, or drop (or execute) procedures, packages, and functions; this includes Java classes.
The CREATE TABLE privilege gives you the ability to create, alter, drop, and query tables in a schema.
SELECT, INSERT, UPDATE, and DELETE are object privileges, but SELECT ANY, INSERT ANY, UPDATE ANY, and DELETE ANY are system privileges (in other words, do not apply to a particular object).
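When O7_DICTIONARY_ACCESSIBILITY is set to FALSE, specific dictionary access can still be provided through object privileges, as mentioned above. A sketch (the grantee JOHN is illustrative; SYS performs the grant):

SQL> GRANT SELECT ON SYS.DBA_USERS TO JOHN;

JOHN can then query DBA_USERS even though his SELECT ANY TABLE privilege no longer reaches SYS-owned objects.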
Granting System Privileges

System privileges, like object privileges, are granted to a user, role, or PUBLIC by using the GRANT command. The WITH ADMIN OPTION clause gives the grantee the ability to grant the privilege to another user, role, or PUBLIC. For example, if JOHN needs to create a table under JAMES's schema, he needs the CREATE ANY TABLE privilege. This privilege not only allows JOHN to create a table under JAMES's schema, but also allows the creation of a table under any schema in the database.

SQL> GRANT CREATE ANY TABLE TO JOHN;
If John must be able to grant this privilege to others, he should be granted the privilege with the WITH ADMIN OPTION clause (or should have the GRANT ANY PRIVILEGE privilege). SQL> GRANT CREATE ANY TABLE TO JOHN WITH ADMIN OPTION;
Revoking Privileges
Object privileges and system privileges can be revoked from a user by using the REVOKE statement. You can revoke a privilege if you have granted it to the user, or if you have been granted that privilege with the WITH ADMIN OPTION (for system privileges) or the WITH GRANT OPTION (for object privileges) clauses. The following are some examples of revoking privileges. To revoke the UPDATE privilege granted to JAMES from JOHN on JOHN’s CUSTOMER table: SQL> REVOKE UPDATE ON CUSTOMER FROM JAMES;
To revoke the SELECT ANY TABLE and CREATE TRIGGER privileges from JULIE: SQL> REVOKE SELECT ANY TABLE, CREATE TRIGGER 2 FROM JULIE;
The CASCADE CONSTRAINTS clause should be specified when revoking the REFERENCES privilege; it drops the referential integrity constraints created using the privilege. You must use this clause if any such constraints exist.

SQL> REVOKE REFERENCES ON CUSTOMER
  2  FROM JAMES CASCADE CONSTRAINTS;
The following statement revokes all the privileges granted by JAMES on the STATE table to JULIE. JAMES executes this statement. SQL> REVOKE ALL ON STATE FROM JULIE;
Keep the following in mind when revoking privileges:
If multiple users (or administrators) have granted an object privilege to a user, revoking the privilege by one administrator will not prevent the user from performing the action, because the privileges granted by the other administrators are still valid.
To revoke the WITH ADMIN OPTION or WITH GRANT OPTION, you must revoke the privilege and re-grant the privilege without the clause.
You cannot selectively revoke column privileges; you must revoke the privilege at the table level and grant it again with the appropriate columns.
If a user has used his system privileges to create or modify an object, and subsequently the user's privilege is revoked, no change is made to the objects that the user has already created or modified. The user simply can no longer create new objects or make further modifications.
If a PL/SQL program or view is created based on an object privilege (or a DML system privilege such as SELECT ANY, UPDATE ANY, etc.), revoking the privilege will invalidate the object.
If user A is granted a system privilege WITH ADMIN OPTION, and grants the privilege to user B, when user A’s privilege is revoked, user B’s privilege still remains.
If user A is granted an object privilege WITH GRANT OPTION, and grants the privilege to user B, when user A’s privilege is revoked, user B’s privilege is also automatically revoked, and the objects that make use of the privileges under user A and user B are invalidated.
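The difference between the last two points can be sketched as follows (users A and B and the CUSTOMER table are hypothetical):

SQL> GRANT CREATE ANY TABLE TO A WITH ADMIN OPTION;

A grants the system privilege to B. Then:

SQL> REVOKE CREATE ANY TABLE FROM A;

B still holds CREATE ANY TABLE, because revoking a system privilege does not cascade. By contrast:

SQL> GRANT SELECT ON CUSTOMER TO A WITH GRANT OPTION;

A grants the object privilege to B. Then:

SQL> REVOKE SELECT ON CUSTOMER FROM A;

B loses SELECT on CUSTOMER as well, because revoking an object privilege cascades.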
Querying Privilege Information
Privilege information can be queried from the data dictionary by using various views. Table 8.3 lists the views that provide information related to privileges.

TABLE 8.3  Privilege Information

ALL_TAB_PRIVS, DBA_TAB_PRIVS, USER_TAB_PRIVS  List the object privileges. ALL_TAB_PRIVS shows only the privileges granted to the user and to PUBLIC.

ALL_TAB_PRIVS_MADE, USER_TAB_PRIVS_MADE  List the object grants made by the current user or grants made on the objects owned by the current user.

ALL_TAB_PRIVS_RECD, USER_TAB_PRIVS_RECD  List the object grants received by the current user or PUBLIC.

ALL_COL_PRIVS, DBA_COL_PRIVS, USER_COL_PRIVS  List column privileges.

ALL_COL_PRIVS_MADE, USER_COL_PRIVS_MADE  List column privileges made by the current user.

ALL_COL_PRIVS_RECD, USER_COL_PRIVS_RECD  List column privileges received by the current user.

DBA_SYS_PRIVS, USER_SYS_PRIVS  List system privilege information.

SESSION_PRIVS  Lists the system privileges available for the current session.
Here are some sample queries using the dictionary views to obtain privilege information. To list information about privileges granted on the table CUSTOMER:

SQL> SELECT * FROM DBA_TAB_PRIVS
  2  WHERE TABLE_NAME = 'CUSTOMER';

GRANTEE          OWNER  TABLE_NAME  GRANTOR  PRIVILEGE  GRA
---------------  -----  ----------  -------  ---------  ---
SCOTT            JOHN   CUSTOMER    JAMES    SELECT     NO
JAMES            JOHN   CUSTOMER    JOHN     SELECT     YES
JAMES            JOHN   CUSTOMER    JOHN     UPDATE     YES
ACCOUNTS_MANAGE  JOHN   CUSTOMER    JOHN     SELECT     NO

To list system privileges granted to JOHN:

SQL> SELECT * FROM DBA_SYS_PRIVS
  2  WHERE GRANTEE = 'JOHN';

GRANTEE          PRIVILEGE             ADM
---------------  --------------------  ---
JOHN             CREATE SESSION        NO
JOHN             UNLIMITED TABLESPACE  NO
Managing Roles
A role is a named group of privileges used to make privilege administration easier. For example, if your accounting department has 30 users who all need similar access to the tables in the accounts receivable application, you can create a role and grant the appropriate system and object privileges to the role. Each user in the accounting department can then be granted the role, instead of each object and system privilege being granted to individual users.

The CREATE ROLE command creates the role. No user owns a role; it is owned by the database. When a role is created, no privileges are associated with it. You must grant the appropriate privileges to the role. For example, to create a role named ACCTS_RECV and grant certain privileges to it:

CREATE ROLE ACCTS_RECV;
GRANT SELECT ON GENERAL_LEDGER TO ACCTS_RECV;
GRANT INSERT, UPDATE ON JOURNAL_ENTRY TO ACCTS_RECV;
Similar to users, roles can also be authenticated. The default is NOT IDENTIFIED, which means no authorization is required to enable or disable the role. The authorization methods available are:

Database  The role is authorized by the database using a password associated with the role. Whenever such a role is enabled, the user is prompted for a password if the role is not one of the user's default roles. In the following example, a role ACCOUNTS_MANAGER is created with a password.

SQL> CREATE ROLE ACCOUNTS_MANAGER IDENTIFIED BY ACCMGR;
Operating system  The role is authorized by the OS. This is useful when the OS can associate its privileges with the application privileges, and information about each user is configured in OS files. To enable OS role authorization, the parameter OS_ROLES must be set to TRUE. The following example creates a role authorized by the OS.

SQL> CREATE ROLE APPLICATION_USER IDENTIFIED EXTERNALLY;
You can change the role’s password or authentication method by using the ALTER ROLE command. You cannot rename a role. For example: SQL> ALTER ROLE ACCOUNTS_MANAGER IDENTIFIED BY MANAGER;
Roles are dropped by the DROP ROLE command. Oracle will let you drop a role even if it is granted to users or other roles. When a role is dropped, it is immediately removed from the users’ role lists. SQL> DROP ROLE ACCOUNTS_MANAGER;
Predefined Roles

When the database is created, Oracle creates six predefined roles. These roles are defined in the sql.bsq script, which is executed when you run the CREATE DATABASE command. Following are the predefined roles:

CONNECT  Privilege to connect to the database; to create a cluster, database link, sequence, synonym, table, and view; and to alter a session.

RESOURCE  Privileges to create a cluster, table, or sequence, and to create programmatic objects such as procedures, functions, packages, indextypes, types, triggers, and operators.

DBA  All system privileges with the ADMIN option, so the system privileges can be granted to other users of the database or to roles.

SELECT_CATALOG_ROLE  Ability to query the dictionary views and tables.

EXECUTE_CATALOG_ROLE  Privilege to execute the dictionary packages (DBMS packages).

DELETE_CATALOG_ROLE  Ability to drop or re-create the dictionary packages.

Also, when you run the catproc.sql script as part of the database creation, it executes catexp.sql, which creates two more roles:

EXP_FULL_DATABASE  Ability to make full and incremental exports of the database using the Export utility.

IMP_FULL_DATABASE  Ability to perform full database imports using the Import utility. This is a very powerful role.
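The predefined catalog roles provide the dictionary query access promised earlier in this chapter. A sketch (the grantee JOHN is illustrative):

SQL> GRANT SELECT_CATALOG_ROLE TO JOHN;

JOHN can then query the dictionary views through the role, rather than through individual object grants on each view.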
Enabling and Disabling Roles

If a role is not a default role for a user, it is not enabled when the user connects to the database. The ALTER USER command is used to set the default
roles for a user. There are four ways to use the DEFAULT ROLE clause, as the following examples illustrate. To specify the named roles CONNECT and ACCOUNTS_MANAGER as default roles:

ALTER USER JOHN DEFAULT ROLE CONNECT, ACCOUNTS_MANAGER;
To specify all roles granted to the user as the default: ALTER USER JOHN DEFAULT ROLE ALL;
To specify all roles except certain roles as the default: ALTER USER JOHN DEFAULT ROLE ALL EXCEPT RESOURCE, ACCOUNTS_ADMIN;
To specify no roles as the default: ALTER USER JOHN DEFAULT ROLE NONE;
Only roles granted to the user can be specified as default roles. The DEFAULT ROLE clause is not available in the CREATE USER command. Default roles are enabled when the user connects to the database, and do not require a password. Roles can be enabled or disabled using the SET ROLE command. The maximum number of roles that can be enabled is specified in the initialization parameter MAX_ENABLED_ROLES (the default is 20). Only roles granted to the user can be enabled or disabled. If a role is defined with a password, the password must be supplied when you enable the role. For example: SET ROLE ACCOUNTS_ADMIN IDENTIFIED BY MANAGER;
To enable all roles, specify the following: SET ROLE ALL;
To enable all roles, except the ones specified: SET ROLE ALL EXCEPT RESOURCE, ACCOUNTS_USER;
To disable all roles, including the default roles: SET ROLE NONE;
Querying Role Information
The data dictionary view DBA_ROLES lists the roles defined in the database. The column PASSWORD specifies the authorization method.

SQL> SELECT * FROM DBA_ROLES;

ROLE                            PASSWORD
------------------------------  --------
CONNECT                         NO
RESOURCE                        NO
DBA                             NO
SELECT_CATALOG_ROLE             NO
EXECUTE_CATALOG_ROLE            NO
DELETE_CATALOG_ROLE             NO
EXP_FULL_DATABASE               NO
IMP_FULL_DATABASE               NO
APPLICATION_USER                EXTERNAL
ACCOUNTS_MANAGER                YES
The view SESSION_ROLES lists the roles that are enabled in the current session.

SQL> SELECT * FROM SESSION_ROLES;

ROLE
------------------------------
CONNECT
DBA
SELECT_CATALOG_ROLE
HS_ADMIN_ROLE
EXECUTE_CATALOG_ROLE
DELETE_CATALOG_ROLE
EXP_FULL_DATABASE
IMP_FULL_DATABASE
JAVA_ADMIN
The view DBA_ROLE_PRIVS (or USER_ROLE_PRIVS) lists all the roles granted to users and roles.

SQL> SELECT * FROM DBA_ROLE_PRIVS
  2  WHERE GRANTEE = 'JOHN';

GRANTEE      GRANTED_ROLE         ADM  DEF
-----------  -------------------  ---  ---
JOHN         ACCOUNTS_MANAGER     YES  NO
JOHN         RESOURCE             NO   YES
The view ROLE_ROLE_PRIVS lists the roles granted to roles, ROLE_SYS_PRIVS lists the system privileges granted to roles, and ROLE_TAB_PRIVS shows information on the object privileges granted to roles.

SQL> SELECT * FROM ROLE_ROLE_PRIVS
  2  WHERE ROLE = 'DBA';

ROLE     GRANTED_ROLE                    ADM
-------  ------------------------------  ---
DBA      DELETE_CATALOG_ROLE             YES
DBA      EXECUTE_CATALOG_ROLE            YES
DBA      EXP_FULL_DATABASE               NO
DBA      IMP_FULL_DATABASE               NO
DBA      JAVA_ADMIN                      NO
DBA      SELECT_CATALOG_ROLE             YES

SQL> SELECT * FROM ROLE_SYS_PRIVS
  2  WHERE ROLE = 'CONNECT';

ROLE           PRIVILEGE               ADM
-------------  ----------------------  ---
CONNECT        ALTER SESSION           NO
CONNECT        CREATE CLUSTER          NO
CONNECT        CREATE DATABASE LINK    NO
CONNECT        CREATE SEQUENCE         NO
CONNECT        CREATE SESSION          NO
CONNECT        CREATE SYNONYM          NO
CONNECT        CREATE TABLE            NO
CONNECT        CREATE VIEW             NO

SQL> SELECT * FROM ROLE_TAB_PRIVS
  2  WHERE TABLE_NAME = 'CUSTOMER';
ROLE OWNER TABLE_NAME COLUMN_NAME PRIVILEGE GRA ----------- ------ ---------- ----------- --------- --ACC_MANAGER JOHN CUSTOMER SELECT NO
Auditing the Database
Auditing is the recording of information about database activity; it can be used to monitor suspicious database activity or to collect statistics on database usage. When the database is created, Oracle creates the SYS.AUD$ table, known as the audit trail, which is used to store the audited records. To enable auditing, the initialization parameter AUDIT_TRAIL should be set to TRUE or DB. When this parameter is set to OS, Oracle writes the audited records to an OS file instead of inserting them into the SYS.AUD$ table. The AUDIT command is used to specify the audit actions. Oracle has three types of auditing capabilities:

Statement auditing  Audits the SQL statements used. (Example: AUDIT SELECT BY SCOTT audits all SELECT statements performed by SCOTT.)

Privilege auditing  Audits the privileges used. (Example: AUDIT CREATE TRIGGER audits all users who exercise their CREATE TRIGGER privilege.)

Object auditing  Audits the use of a specific object. (Example: AUDIT SELECT ON JOHN.CUSTOMER monitors the SELECT statements performed on the CUSTOMER table.)

You can restrict the auditing scope by specifying a user list in the BY clause. The WHENEVER SUCCESSFUL clause limits the statements audited to successful statements only; the WHENEVER NOT SUCCESSFUL clause limits the auditing to statements that failed. You can also specify BY SESSION or BY ACCESS; BY SESSION is the default. BY SESSION specifies that one audit record is inserted per session, regardless of the number of times the statement is executed. BY ACCESS specifies that one audit record is inserted each time the statement is executed. Following are some examples of auditing. To audit connections to and disconnections from the database:

AUDIT SESSION;
To audit only successful logins: AUDIT SESSION WHENEVER SUCCESSFUL;
To audit only failed logins: AUDIT SESSION WHENEVER NOT SUCCESSFUL;
To audit successful logins of specific users: AUDIT SESSION BY JOHN, ALEX WHENEVER SUCCESSFUL;
To audit the successful updates and deletes on the CUSTOMER table: AUDIT UPDATE, DELETE ON JOHN.CUSTOMER BY ACCESS WHENEVER SUCCESSFUL;
The NOAUDIT command is used to turn off auditing. You can specify all options available in the AUDIT statement to turn off auditing except BY SESSION and BY ACCESS. When you turn off auditing, Oracle turns off the action, regardless of its BY SESSION or BY ACCESS specification. To turn off the object auditing enabled on the CUSTOMER table: NOAUDIT UPDATE, DELETE ON JOHN.CUSTOMER;
Certain database activities are always monitored and written to OS files, even if auditing is disabled. Oracle writes audit records when the instance starts up and shuts down and when a user connects to the database with administrator privileges.
Summary
This chapter discussed the security aspects of the Oracle database using profiles, privileges, and roles.

Profiles are used to control database and system resource usage. You can create various profiles for different user communities and assign a profile to each user. When the database is created, Oracle creates a profile named DEFAULT, which is assigned to users when no profile is specified. Profiles can monitor resource use by session or on a per-call basis. Resource limits are enforced only when the parameter RESOURCE_LIMIT is set to TRUE.

Profiles can also be used for password management. You can perform account locking, password expiration and reuse control, and password complexity verification by using profiles. When an account is locked, the DBA must unlock it for the user to connect to the database. Profiles cannot be dropped when they
are assigned to users; such profiles can be dropped only when the CASCADE keyword is used.

Users are created in the database to use the database. Every user needs the CREATE SESSION privilege to be able to connect to the database. Users are assigned a default and a temporary tablespace. When a user does not specify a tablespace when creating a table, the table is created in the default tablespace. The temporary tablespace is used for creating sort segments. If you do not specify default or temporary tablespaces, Oracle assigns SYSTEM as the user's default and temporary tablespace. Users are granted a space quota on the tablespace; when they exceed this quota, no more extents can be allocated to their objects. If a user has the UNLIMITED TABLESPACE system privilege, there are no space quota restrictions.

Before connecting a user to the database, Oracle authenticates the username. The authentication method can be via the database, whereby the user specifies a password, or via the OS, which uses the OS login information to connect to the database; such users are created with the IDENTIFIED EXTERNALLY clause. A user cannot be dropped while connected to the database. You must terminate the user's session before dropping the user.

Privileges are used to control the actions performed by users on the database. A role is a named set of privileges, which is used to ease the management of privileges. There are two types of privileges: object privileges and system privileges. Object privileges specify allowed actions on a specific object (the owner of the object has to grant the privilege to other users or authorize other users to grant privileges on the object); system privileges specify allowed actions on the database. The GRANT and REVOKE commands are used to manage privileges. Any privilege granted to PUBLIC is available to all users in the database.

Database actions or statements can be audited using the AUDIT command.
Oracle provides capabilities to audit statements, privilege usage, or object usage. Auditing can be restricted to specific users, on successful statements, or on unsuccessful statements. You can also limit the number of audit records generated by specifying auditing by session (one record per session) or by access (one record per DDL, DML, etc.).
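For example, the auditing options described above might be combined as follows (a sketch; the table and user names are hypothetical):

```sql
-- One audit record per session for SCOTT's use of CREATE TABLE
AUDIT CREATE TABLE BY scott BY SESSION;

-- One record per access, but only for DELETEs that fail
AUDIT DELETE ON scott.customer BY ACCESS WHENEVER NOT SUCCESSFUL;

-- Turn the statement audit back off
NOAUDIT CREATE TABLE BY scott;
```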
Chapter 8
Managing Users and Security
Key Terms

Before you take the exam, make sure you're familiar with the following terms:

profiles
schema
space quota
privilege
object privileges
system privileges
PUBLIC
role
audit trail
Review Questions

1. Profiles cannot be used to restrict
A. CPU time used
B. Total time connected to the database
C. Maximum time a session can be inactive
D. Time spent reading blocks

2. Which command is used to assign a profile to an existing user?
A. ALTER PROFILE.
B. ALTER USER.
C. SET PROFILE.
D. The profile should be specified when creating the user; it cannot be changed.

3. Which resource is not used to calculate the COMPOSITE_LIMIT?
A. PRIVATE_SGA
B. CPU_PER_SESSION
C. CONNECT_TIME
D. LOGICAL_READS_PER_CALL

4. Choose the option that is not true.
A. Oracle creates a profile named DEFAULT when the database is created.
B. Profiles cannot be renamed.
C. DEFAULT is a valid name for a profile resource.
D. The SESSIONS_PER_USER resource in the DEFAULT profile initially has a value of 5.
5. What is the maximum number of profiles that can be assigned to a user?
A. 1
B. 2
C. 32
D. Unlimited

6. When a new user is created and you do not specify a profile,
A. Oracle prompts you for a profile name.
B. No profile is assigned to the user.
C. The DEFAULT profile is assigned.

7. Which resource specifies the value in minutes?
A. CPU_PER_SESSION
B. CONNECT_TIME
C. PASSWORD_LOCK_TIME
D. All of the above

8. Which password parameter in the profile definitions can restrict the user from using the old password for 90 days?
A. PASSWORD_REUSE_TIME
B. PASSWORD_REUSE_MAX
C. PASSWORD_LIFE_TIME
D. PASSWORD_REUSE_DAYS

9. Which dictionary view shows the password expiration date for a user?
A. DBA_PROFILES
B. DBA_USERS
C. DBA_PASSWORDS
D. V$SESSION
10. Which clause in the CREATE USER command can be used to specify no limits on the space allowed in tablespace APP_DATA?
A. DEFAULT TABLESPACE
B. UNLIMITED TABLESPACE
C. QUOTA
D. PROFILE

11. User JAMES has a table named JOBS created on the tablespace USERS. When you issue the following statement, what effect will it have on the JOBS table?

ALTER USER JAMES QUOTA 0 ON USERS;

A. No more rows can be added to the JOBS table.
B. No blocks can be allocated to the JOBS table.
C. No new extents can be allocated to the JOBS table.
D. The table JOBS cannot be accessed.

12. Which view would you query to see whether John has the CREATE TABLE privilege?
A. DBA_SYS_PRIVS
B. DBA_USER_PRIVS
C. DBA_ROLE_PRIVS
D. DBA_TAB_PRIVS

13. Which clause should be specified to enable the grantee to grant the system privilege to other users?
A. WITH GRANT OPTION
B. WITH ADMIN OPTION
C. CASCADE
D. WITH MANAGE OPTION
14. Which is not a system privilege?
A. SELECT
B. UPDATE ANY
C. EXECUTE ANY
D. CREATE TABLE

15. Which data dictionary view can be queried to see whether a user has the EXECUTE privilege on a procedure?
A. DBA_SYS_PRIVS
B. DBA_TAB_PRIVS
C. DBA_PROC_PRIVS
D. SESSION_PRIVS

16. To grant the SELECT privilege on the table CUSTOMER to all users in the database, which statement would you use?
A. GRANT SELECT ON CUSTOMER TO ALL USERS;
B. GRANT ALL ON CUSTOMER TO ALL;
C. GRANT SELECT ON CUSTOMER TO ALL;
D. GRANT SELECT ON CUSTOMER TO PUBLIC;

17. Which role in the following list is not a predefined role from Oracle?
A. SYSDBA
B. CONNECT
C. IMP_FULL_DATABASE
D. RESOURCE

18. How do you enable a role?
A. ALTER ROLE
B. ALTER USER
C. SET ROLE
D. ALTER SESSION
19. What is accomplished when you issue the following statement?

ALTER USER JOHN DEFAULT ROLE ALL;

A. John is assigned all the roles created in the database.
B. Future roles granted to John will not be default roles.
C. All of John's roles are enabled, except the roles with passwords.
D. All of John's roles are enabled when connecting to the database.

20. Which command is used to define CONNECT and RESOURCE as the default roles for user JAMES?
A. ALTER USER
B. ALTER ROLE
C. SET ROLE
D. SET PRIVILEGE
Answers to Review Questions

1. D. There is no resource parameter in the profile definition to monitor the time spent reading blocks, but you can restrict the number of blocks read per SQL statement or per session.

2. B. The PROFILE clause in the ALTER USER command is used to set the profile for an existing user. You must have the ALTER USER privilege to do this.

3. D. Call-level resources are not used to calculate the COMPOSITE_LIMIT. The resource cost of the four resources (the fourth is LOGICAL_READS_PER_SESSION) can be set using the ALTER RESOURCE COST command.

4. D. All resources in the DEFAULT profile have a value of UNLIMITED when the database is created. These values can be changed.

5. A. A user can have only one profile assigned. The profile assigned to a user can be queried from the DBA_USERS view.

6. C. The DEFAULT profile is created when the database is created and is assigned to users if you do not specify a profile for the new user. To assign a profile, the user must first be created in the database.

7. B. CONNECT_TIME is specified in minutes, CPU_PER_SESSION is specified in hundredths of a second, and PASSWORD_LOCK_TIME is specified in days.

8. A. PASSWORD_REUSE_TIME specifies the number of days required before the old password can be reused; PASSWORD_REUSE_MAX specifies the number of password changes required before a password can be reused. At least one of these parameters must be set to UNLIMITED.

9. B. The DBA_USERS view shows the password expiration date, account status, and locking date, along with the user's tablespace assignments, profile, creation date, etc.
10. C. The QUOTA clause is used to specify the amount of space allowed on a tablespace; a size or UNLIMITED can be specified. The user will have unlimited space if the system privilege UNLIMITED TABLESPACE is granted.

11. C. When the space quota is exceeded or the quota is removed from a user on a tablespace, the tables remain in the tablespace, but no new extents can be allocated.

12. A. CREATE TABLE is a system privilege. System privileges can be queried from DBA_SYS_PRIVS or USER_SYS_PRIVS.

13. B. The WITH ADMIN OPTION specified with system privileges enables the grantee to grant the privileges to others, and the WITH GRANT OPTION specified with object privileges enables the grantee to grant the privilege to others.

14. A. SELECT, INSERT, UPDATE, DELETE, EXECUTE, and REFERENCES are object privileges. SELECT ANY, UPDATE ANY, etc. are system privileges.

15. B. The DBA_TAB_PRIVS, USER_TAB_PRIVS, and ALL_TAB_PRIVS views show information about the object privileges.

16. D. PUBLIC is the group, or class, of database users to which all users of the database belong.

17. A. SYSDBA and SYSOPER are not roles; they are system privileges.

18. C. The SET ROLE command is used to enable or disable granted roles for the user. The view SESSION_ROLES shows the roles that are enabled in the session. All default roles are enabled when the user connects to the database.

19. D. Default roles are enabled when connecting to the database, even if the roles are password authorized.

20. A. The ALTER USER command is used to define the default role(s) for a user.
Chapter 9
Data Utilities

ORACLE8i ARCHITECTURE AND ADMINISTRATION EXAM OBJECTIVES OFFERED IN THIS CHAPTER:

Loading Data
- Load data using direct-load insert
- Load data into Oracle tables using SQL*Loader: conventional path
- Load data into Oracle tables using SQL*Loader: direct path

Reorganizing Data
- Reorganize data using the Export and Import utilities
- Move data using transportable tablespaces

Using National Language Support
- Choose a character set and national character set for a database
- Specify the language-dependent behavior using initialization parameters, environment variables, and the ALTER SESSION command
- Use the different types of National Language Support (NLS) parameters
- Explain the influence on language-dependent application behavior
- Obtain information about NLS usage

Exam objectives are subject to change at any time without prior notice and at Oracle's sole discretion. Please visit Oracle's Training and Certification Web site (http://education.oracle.com/certification/index.html) for the most current exam objectives listing.
This chapter discusses the various methods used for loading data into an Oracle table. Direct-load insert and direct-path data loading can improve the performance of data inserts and data loads. Data can be moved between databases by using the Export and Import utilities. These utilities can also be used to move tablespaces between databases. This chapter also discusses setting up National Language Support (NLS) and various NLS parameters.
Loading Data
Data in a table can be populated from external data files (text files) or from another table in the Oracle database. When bulk loads of data are performed, you can specify whether redo information should be generated; by not generating the redo information, the data loads complete more quickly. Oracle also gives you the option to write the data blocks directly to the data file, rather than going through the data buffer cache. When you write directly to the data file, Oracle starts writing the new blocks above the high-water mark; the free space within the already allocated blocks is not used. The direct-load insert and the direct-path method of SQL*Loader can make use of this functionality. The conventional method of inserts and data loads makes use of the buffer cache, and each row is added to the table individually, making use of the free space in the blocks and maintaining referential integrity. When using direct-load insert, triggers and referential integrity constraints should not be enabled. When using direct-path load (SQL*Loader), Oracle disables the referential integrity constraints and enables them after the load is complete.
Direct-Load Insert

A direct-load insert can be used to append data to a table from another table. This method of insertion can bypass the buffer cache and write the data blocks directly to the data file. The direct-load insert can be performed in parallel. If the table is in NOLOGGING mode, it will not generate redo entries.

To enable direct-load inserts, you must specify the APPEND hint if the table is in serial mode. If the table is in parallel mode, the APPEND hint is optional. A hint is used to specify your own data retrieval path rather than using the one calculated by the Oracle optimizer. A hint is specified between /*+ and */. If there are triggers or referential integrity constraints on the table, Oracle ignores the APPEND hint and does a conventional insert. There is no special keyword to enable or disable direct-load inserts in parallel mode. To improve load speed, indexes can be dropped and rebuilt after the load completes. During a direct-load insert, exclusive locks are obtained on the table, and no DML activity can be performed.

The following examples illustrate how to use direct-load insert. When the table is in serial mode, that is, when the degree of parallelism is 1, you must specify the APPEND hint to enable the direct-load insert.

ALTER TABLE MON_TRANS NOPARALLEL;
INSERT /*+ APPEND */ INTO MON_TRANS
SELECT * FROM DAILY_TRANS;

To disable logging of the direct-load inserts to the redo log file, the table should be defined in NOLOGGING mode.

ALTER TABLE MON_TRANS NOLOGGING;
When the database is in ARCHIVELOG mode, if a media recovery is done, Oracle cannot load the records to the table from the archive log files. Therefore, you must make a backup of the table or tablespace when the direct-load insert is complete. If the database is in NOARCHIVELOG mode, having the table in NOLOGGING mode does not have any implications and can tremendously improve the performance of direct-load inserts. In serial direct-load insert, Oracle inserts data blocks above the high-water mark of the table segment. The data is visible to other users only when a commit is issued, which then moves the high-water mark to the new position.
Parallel Direct-Load Insert

A parallel direct-load insert requires the table to be defined as PARALLEL, or you must use a PARALLEL hint in the DML statement. The APPEND hint is optional. You must enable parallel DML by using the ALTER SESSION command.
ALTER SESSION ENABLE PARALLEL DML;
ALTER TABLE MON_TRANS PARALLEL (4) NOLOGGING;
INSERT /*+ APPEND */ INTO MON_TRANS
SELECT * FROM DAILY_TRANS;

In parallel direct-load insert, Oracle creates temporary segments for each parallel server process and writes data to the temporary segments. When a commit is issued, the parallel query coordinator process merges the temporary segments into the table's segment; the already allocated segments are not used, and the temporary segments become new extents of the table. For parallel direct-load insert, the space requirement is higher, since temporary segments are created. The source table can also be given a parallel hint to enable parallel reads. To specify hints for both the source and target tables (a hint must immediately follow the INSERT or SELECT keyword):

INSERT /*+ PARALLEL (A, 4) */ INTO MON_TRANS A
SELECT /*+ PARALLEL (B, 4) */ * FROM DAILY_TRANS B;
Figure 9.1 contrasts the processing in the conventional insert and direct-load insert methods.

FIGURE 9.1 Conventional and direct-load inserts

Conventional Insert: Process SQL commands; Manage free list; Manage buffer cache; Use rollback segments; Manage extents; Perform redo log entries.

Direct-Load Insert: Format data blocks; Manage extents; Write blocks to data file; Adjust HWM.

Parallel Direct-Load Insert: Format data blocks (each server process); Write to temporary segments (each server process); Manage extents; Merge to primary table segment.
SQL*Loader

SQL*Loader is a utility from Oracle for loading data into tables from external text files. It can load into multiple tables or read data from multiple files in a load session, and it can also selectively load data. The text files can have fixed column positions or columns terminated by a special character. SQL*Loader is invoked by the command SQLLDR. If no parameters are specified, a Help menu is displayed as follows.

SQL*Loader: Release 8.1.6.0.0 - Production on Sun Aug 27 21:59:50 2000
(c) Copyright 1999 Oracle Corporation. All rights reserved.

Usage: SQLLOAD keyword=value [,keyword=value,...]

Valid Keywords:
    userid -- ORACLE username/password
    control -- Control file name
    log -- Log file name
    bad -- Bad file name
    data -- Data file name
    discard -- Discard file name
    discardmax -- Number of discards to allow (Default all)
    skip -- Number of logical records to skip (Default 0)
    load -- Number of logical records to load (Default all)
    errors -- Number of errors to allow (Default 50)
    rows -- Number of rows in conventional path bind array or between
            direct path data saves
            (Default: Conventional path 64, Direct path all)
    bindsize -- Size of conventional path bind array in bytes (Default 65536)
    silent -- Suppress messages during run
              (header,feedback,errors,discards,partitions)
    direct -- use direct path (Default FALSE)
    parfile -- parameter file: name of file that contains
               parameter specifications
    parallel -- do parallel load (Default FALSE)
    file -- File to allocate extents from
    skip_unusable_indexes -- disallow/allow unusable indexes or index
                             partitions (Default FALSE)
    skip_index_maintenance -- do not maintain indexes, mark affected
                              indexes as unusable (Default FALSE)
    commit_discontinued -- commit loaded rows when load is discontinued
                           (Default FALSE)
    readsize -- Size of Read buffer (Default 1048576)

PLEASE NOTE: Command-line parameters may be specified either by position or by keywords. An example of the former case is 'sqlload scott/tiger foo'; an example of the latter is 'sqlload control=foo userid=scott/tiger'. One may specify parameters by position before but not after parameters specified by keywords. For example, 'sqlload scott/tiger control=foo logfile=log' is allowed, but 'sqlload scott/tiger control=foo log' is not, even though the position of the parameter 'log' is correct.
The control file (not to be confused with the database's control file), parameter file, and data file are the input files to SQL*Loader; the bad file, discard file, and log file are output files. The control file specifies how to interpret the text file, which table(s) to load, and any conditions to be verified before loading data. The control file can also specify the input file name or can have the data inline. If the control file has data, INFILE * should be specified, and the beginning of the data records should be indicated by BEGINDATA. The parameter file specifies the command-line parameters in a file. The data file is the text file that needs to be loaded.

The bad file contains records that are rejected by SQL*Loader or by Oracle (for reasons such as constraint violations, column length, etc.). The discard file contains records that were filtered out (when you specify a condition in loading, the records that are skipped). The log file contains information about the load: start and end time and the number of records read, loaded, bad, and discarded. If you do not specify a filename for the log, bad, or discard files (the output files), Oracle creates these files as .log, .bad, and .dsc. For example, if your control file name is load.ctl, Oracle creates the log file as load.log, the bad file as load.bad, and the discard file as load.dsc.
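To make these pieces concrete, here is a minimal control file sketch with inline data (the CUSTOMER table and its columns are hypothetical); because the data follows BEGINDATA, INFILE * is specified:

```
-- load.ctl: sample SQL*Loader control file (table and columns are hypothetical)
LOAD DATA
INFILE *
INTO TABLE customer
FIELDS TERMINATED BY ','
(cust_id, cust_name, city)
BEGINDATA
A01,Acme Corp,San Francisco
B02,Baker Bros,Portland
```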
Conventional Path

When the DIRECT parameter is not specified in the control file (or in the parameter file), or if DIRECT=FALSE, Oracle uses the conventional-path method to load data. In this method of operation, Oracle uses INSERT statements to insert data into tables (hence the name conventional load). This method processes data through the buffer cache and hence can be slower than the direct-path load. The table is not locked during the conventional-path load; therefore, queries and DML operations are permitted. The triggers and referential integrity constraints are checked for each row inserted. This method of data load is suitable for small data loads or for loading into clustered tables (direct load does not support clustered tables). With the conventional-path load, the NOLOGGING and PARALLEL features are not available.
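A conventional-path load, then, might be invoked like this (a sketch; the file names are hypothetical, and DIRECT defaults to FALSE):

```
sqlldr userid=scott/tiger control=load.ctl log=load.log errors=100
```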
Direct Path

Direct-path loads bypass the buffer cache and write the data blocks directly to the data file. This method is enabled by specifying the DIRECT=TRUE parameter and is faster than the conventional-path load. The direct-path load locks the table and indexes at the beginning of the load and releases them only after completing the load. The direct-path load is suitable for loading large amounts of data quickly.

The direct-path load has the capability of not generating the redo log information. This can be specified as a parameter to SQLLDR (UNRECOVERABLE=Y) or by placing the table in NOLOGGING mode. If your database is in ARCHIVELOG mode, remember to take a backup of the table or tablespace when the direct-path data load is complete. The direct-path load does not generate redo log information for databases in NOARCHIVELOG mode.

SQL*Loader direct path does not support column types such as REF, LOB, VARRAY, and BFILE. During a direct-path load, UNIQUE, PRIMARY KEY, and NOT NULL constraints are enforced; if a unique or primary key is violated during the load, its index is left in the UNUSABLE state. Only the rows that do not satisfy the NOT NULL constraints are rejected. Referential integrity constraints, check constraints, and triggers are disabled during the direct-path load.

A direct-path load requires more space in the temporary segments if the table has indexes. While loading data, Oracle builds the indexes in the temporary tablespace and merges them with the existing index when the load completes. A direct-path load can be performed in parallel; this is enabled by specifying PARALLEL=TRUE. For a parallel direct-path load, referential integrity constraints and triggers must be disabled. Figure 9.2 contrasts the processing in conventional-path and direct-path loads.
FIGURE 9.2 Conventional and direct loads

Conventional Load: Process SQL commands; Manage free list; Manage extents; Use rollback segments; Perform redo log entries; Manage buffer cache; Write to data file.

Direct Load: Format data blocks; Manage extents; Write blocks to data files; Adjust HWM.
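As a hypothetical sketch, the direct-path load described above could be invoked serially or in parallel (file names are made up):

```
# Direct-path load of one table
sqlldr userid=scott/tiger control=load.ctl direct=true

# Parallel direct-path load; triggers and referential integrity
# constraints must be disabled beforehand
sqlldr userid=scott/tiger control=load.ctl direct=true parallel=true
```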
Moving Data
Often DBAs are required to move data from one database to another, for example, when refreshing the test database with the production data. Oracle provides various methods to move data from one database to another or to move data from one user to another. In this section we will discuss two methods: using the Export and Import utilities, and using the transportable tablespace feature.
Export and Import Utilities The Export (EXP) utility is used to dump the data in Oracle tables to an external file. The file is in binary format and can be read only by the Import (IMP) utility. The export file created on one platform can be used to import
data into a database on a different platform. The Export and Import utilities can be used to:
Move schema, tables, and/or data between databases
Copy data and/or tables to a different user
Create a logical backup
Reorganize data
The data dictionary views required to use the Export and Import utilities are created by executing the catexp.sql script. This script is run automatically when you run the catalog.sql script as part of the database creation.
Export Data

The Export utility provides online help when invoked with the HELP=Y parameter, as shown in Figure 9.3.

FIGURE 9.3 Export help
The Export utility can export data in four modes, based on the parameters specified:

Table mode  Exports the table(s) listed in the parameter TABLES. All users with the CREATE SESSION privilege can export their own tables. To export the tables owned by another user, the EXP_FULL_DATABASE privilege is needed. Oracle exports the table definition, its constraints, indexes, triggers, and privileges. You can specify a partition name to export the data in a partition.

User mode  Exports schema objects (definition and data) owned by the user(s) specified in the parameter OWNER. All users can export their own schema if they have the CREATE SESSION privilege; to export another schema, the EXP_FULL_DATABASE privilege is needed.

Full database mode  Exports all database information, including tablespace definitions, rollback segments, users, grants, database links, public synonyms, schema, data, etc. The EXP_FULL_DATABASE privilege is required to perform a full database export. It is specified by the FULL=Y parameter.

Tablespace mode  Exports a tablespace definition and its object definitions (data is not exported). This mode of export is used for transporting tablespaces from one database to another; it is discussed later in the chapter.

The export can be performed using the direct-path method by specifying the DIRECT=Y parameter. This method is fast because it reads data from the data file to the buffer cache but bypasses the conventional SQL processing. An export taken using the direct method can be only in the database character set. The FILE parameter specifies the filename (the default is expdat.dmp). The size of the file can be limited by using the FILESIZE parameter and specifying a sufficient number of filenames for the FILE parameter. If the export cannot fit into the filenames specified, the utility prompts you for a new filename. A structure-only export (no data) can be performed by specifying the ROWS=N parameter.
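The modes above can be sketched as command-line invocations (usernames, passwords, and file names are hypothetical):

```
# Table mode
exp scott/tiger TABLES=(customer) FILE=cust.dmp

# User (schema) mode -- exporting another schema needs EXP_FULL_DATABASE
exp system/manager OWNER=scott FILE=scott.dmp

# Full database mode
exp system/manager FULL=Y FILE=full.dmp

# Structure-only export, no rows
exp scott/tiger TABLES=(customer) ROWS=N FILE=cust_ddl.dmp
```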
You can also specify parameters to exclude structures such as constraints (CONSTRAINTS=N), indexes (INDEXES=N), privileges (GRANTS=N), and triggers (TRIGGERS=N). A selective export of rows can be performed by using the QUERY parameter and specifying a WHERE condition. QUERY can be used only in table export mode, and the condition applies to all tables listed in the TABLES parameter. The following example shows a parameter file specifying a table mode export and a condition.

USERID=scott/tiger
FILE=customer.dmp
LOG=customer.log
TABLES=(Scott.customer, Scott.customer_address)
QUERY="WHERE CUST_ID IN ('A01', 'B02')"
If the parameter file name is customer.par, the export is invoked with this command:

exp parfile=customer.par
If the USERID is not specified, Oracle prompts you for the username and password.
Import Data

The Import (IMP) utility reads data from a previously exported file and performs inserts into the database. You can import the full database, user(s), or table(s). Online help for the Import utility can be obtained by running the command IMP HELP=Y. Figure 9.4 shows the online help.

FIGURE 9.4 Import help
The IMP_FULL_DATABASE privilege is required to do a FULL import. Also, if the export file was created by another user, you need this privilege to import from that file. In full import mode, Oracle creates the tablespaces, users, objects, etc. If any of the objects already exist, the IGNORE=Y parameter should be specified to ignore the CREATE errors. The following sample parameter file imports the table CUSTOMER to user JOHN; the tables were exported from user SCOTT.

FILE=customer.dmp
LOG=customer.imp.log
FROMUSER=SCOTT
TOUSER=JOHN
TABLES=customer
When a table import is performed, Oracle creates the objects in the following order (table exports are also done in the same order, with the definitions):
1. Create types
2. Create tables
3. Insert table data
4. Create b-tree indexes
5. Create constraints, procedures, views, and triggers
6. Create bitmap- and function-based indexes
Reorganize Data

The Export and Import utilities can also be used to reorganize data. Data reorganization may be required to improve performance when the table has many migrated rows or when the table has too many small extents. The following steps show how the Export and Import utilities can be used to reorganize table data. This method saves the data and table definitions by using the Export utility and creates a single large initial extent to accommodate all the rows; the COMPRESS=Y parameter in the export accomplishes this.

1. Create a parameter file to perform the export.

FILE=job_transaction.dmp
LOG=job_transaction.exp.log
TABLES=james.job_transaction
COMPRESS=Y
DIRECT=Y
2. Perform the export.

exp james/mypwd parfile=jobexp.par

3. Drop the JOB_TRANSACTION table. If there are foreign keys that refer to this table, you may want to drop those foreign keys after saving the definition of the foreign key. The Export utility exports the table definition, its constraints, indexes, and triggers. You can query the DBA_CONSTRAINTS view to see whether there are foreign key constraints that refer to this table. For example:

SELECT CONSTRAINT_NAME, TABLE_NAME
FROM   DBA_CONSTRAINTS
WHERE  (R_OWNER, R_CONSTRAINT_NAME) IN
       (SELECT OWNER, CONSTRAINT_NAME
        FROM   DBA_CONSTRAINTS
        WHERE  OWNER = 'JAMES'
        AND    TABLE_NAME = 'JOB_TRANSACTION');

4. Create the import parameter file.

FILE=job_transaction.dmp
LOG=job_transaction.imp.log
FROMUSER=JAMES
TOUSER=JAMES

5. Perform the import.

imp james/mypwd parfile=jobimp.par
Transportable Tablespaces

The transportable tablespace is a new feature in the Oracle8i database for moving data from one database to another. The basic concept is to export the table and tablespace definitions from the data dictionary, copy the data files belonging to the tablespace to the target directory (or server), and import the table and tablespace definitions. This method of moving data from one database to another is fast and is useful for moving large amounts of data. Indexes can also be moved along with the tables using this method.

These two export parameters are used to specify the export of the tablespace metadata (dictionary information):

TRANSPORT_TABLESPACE  Specify Y if you want to export the metadata of the tablespaces and the objects contained in them.
TABLESPACES  Specifies the list of tablespaces to be transported. When TRANSPORT_TABLESPACE is specified as Y, you must specify TABLESPACES.

The following import parameters are used to specify the transportable tablespace import of the metadata:

TRANSPORT_TABLESPACE  Specify Y if you want to import the metadata of the tablespaces and the objects contained in them.

TABLESPACES  Specifies the list of tablespaces to be transported. When TRANSPORT_TABLESPACE is specified as Y, you must specify TABLESPACES.

DATAFILES  Specifies the data file names for the tablespaces specified in the TABLESPACES parameter, providing the new location and names of the data files as the files are saved in the target database/server.

TTS_OWNERS  Specifies the owners of the data in the transportable tablespaces.

Transporting tablespaces has the following restrictions:
The source database should be Oracle8i Enterprise Edition; the target database can be any edition of the Oracle8i database.
The source and target databases should be on the same hardware/OS platform.
The block size of the source and target databases should be the same.
Tablespace(s) with the same name as the source tablespace(s) should not already exist in the target database.
The transporting tablespace should be a self-contained tablespace, that is, there should not be any objects in the tablespace that reference an object outside the tablespace set. For example, if the tablespace contains an index, its table should be available in the transporting tablespace set.
The transporting tablespace set should contain all the partitions of a partitioned table or no partitions at all.
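Two of these restrictions can be checked quickly with dictionary queries (a sketch; run the first against both databases and the second against the target):

```sql
-- Block sizes must match between source and target
SELECT value FROM v$parameter WHERE name = 'db_block_size';

-- No tablespace of the same name may already exist in the target
SELECT tablespace_name FROM dba_tablespaces
WHERE  tablespace_name IN ('ACC_DATA', 'ACC_INDEX');
```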
DBMS_TTS Package

The TRANSPORT_SET_CHECK procedure in the DBMS_TTS package can be used to verify that a tablespace is self-contained. The list of tablespaces to
be verified is the first parameter, and whether to include referential integrity constraints in the check is specified as a Boolean second parameter. The exceptions are listed in the view TRANSPORT_SET_VIOLATIONS. If there are no violations, the view will be empty (the view is created when you run the procedure). For example, if you want to transport tablespaces ACC_DATA and ACC_INDEX from the production database to the test database and verify that these tablespaces are self-contained, run the following:

SQL> EXECUTE SYS.DBMS_TTS.TRANSPORT_SET_CHECK('ACC_DATA, ACC_INDEX', TRUE);
PL/SQL procedure successfully completed.

SQL> SELECT * FROM SYS.transport_set_violations;
VIOLATIONS
---------------------------------------------------------
Partitioned table JOHN.DOCUMENTS1 is partially contained in the transportable set: check table partitions by querying sys.dba_tab_partitions
Partitioned table JOHN.CARS is partially contained in the transportable set: check table subpartitions by querying sys.dba_tab_subpartitions
Default Composite Partition (Table) Tablespace USERS for CARS not contained in transportable set
If you are not transporting the referential integrity constraints, you can specify the second parameter as FALSE. Specifying TRUE ensures that the referencing table and referenced table are both in the transport tablespace set.
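The self-containment rule that TRANSPORT_SET_CHECK enforces can be approximated as: every object stored in the tablespace set must have all its dependencies (an index's table, a foreign key's referenced table) stored in the set as well. The following Python toy model illustrates just that rule; the object names and dependency data are hypothetical, and the real procedure also handles partitions, LOBs, and the constraints option.

```python
# Toy model of the self-containment check performed by
# DBMS_TTS.TRANSPORT_SET_CHECK (illustrative only).

# Each object maps to (its tablespace, objects it depends on).
objects = {
    "ACC.ORDERS":        ("ACC_DATA",  []),
    "ACC.ORDERS_PK":     ("ACC_INDEX", ["ACC.ORDERS"]),    # index -> its table
    "ACC.AUDIT_LOG":     ("ACC_DATA",  ["HR.EMPLOYEES"]),  # FK into another tablespace
    "HR.EMPLOYEES":      ("HR_DATA",   []),
}

def violations(tablespace_set):
    """Return (object, dependency) pairs whose dependency lives outside the set."""
    out = []
    for name, (tbs, deps) in objects.items():
        if tbs not in tablespace_set:
            continue  # object is not part of the transport set
        for dep in deps:
            if objects[dep][0] not in tablespace_set:
                out.append((name, dep))
    return out

print(violations({"ACC_DATA", "ACC_INDEX"}))
# The FK from ACC.AUDIT_LOG to HR.EMPLOYEES makes this set not self-contained.
```

Adding HR_DATA to the set (or excluding the constraint check, like passing FALSE as the second parameter) would clear the violation.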
Steps Involved

Let's discuss the steps required to transport tablespaces ACC_DATA and ACC_INDEX from the production database to the test database. The following steps are to be performed on the production (source) database:

1. After identifying the self-contained tablespaces, make them read-only.

ALTER TABLESPACE ACC_DATA READ ONLY;
ALTER TABLESPACE ACC_INDEX READ ONLY;
2. Generate the metadata export file. You must specify both the TABLESPACES and the TRANSPORT_TABLESPACE parameters. The FILE parameter defaults to expdat.dmp. The other parameters that can be specified are TRIGGERS, CONSTRAINTS, and GRANTS. The default value for these parameters is Y; if you specify N, triggers, constraints, and grants are not exported. If the tablespaces transported are not self-contained, the export will fail. Connect to the database using SYS AS SYSDBA.

exp FILE=acc_tts.dmp TRANSPORT_TABLESPACE=Y
TABLESPACES=(ACC_DATA,ACC_INDEX) CONSTRAINTS=N
3. Copy the data files to the target server or directory using OS copy or ftp commands. The data files can be identified using the following query:

SELECT FILE_NAME FROM DBA_DATA_FILES
WHERE TABLESPACE_NAME IN ('ACC_DATA', 'ACC_INDEX');
4. After the copy is complete, change the tablespaces back to read-write mode (optional step).

ALTER TABLESPACE ACC_DATA READ WRITE;
ALTER TABLESPACE ACC_INDEX READ WRITE;
5. Copy the dump file created in step 2 to the target server.
The following steps are to be performed on the test server (target database). Make sure the data files are copied to the proper location.

6. Import the metadata information about the tablespace and its objects into the target database ("plug in" the tablespace).

imp FILE=acc_tts.dmp TRANSPORT_TABLESPACE=Y
TABLESPACES=(ACC_DATA,ACC_INDEX) TTS_OWNERS=('JAMES')
DATAFILES=('/u01/acc_data01.dbf', '/u01/acc_data02.dbf',
'/u02/acc_index01.dbf')
FROMUSER=('JAMES') TOUSER=('JOHN')
The only mandatory parameters are TRANSPORT_TABLESPACE and DATAFILES. If you do not specify TABLESPACES or TTS_OWNERS, Oracle will identify those values from the export file. If you do not specify FROMUSER and TOUSER, Oracle will try to import the objects to the same username that owned these objects in the source database. The username must already exist in the target database.
7. Change the tablespaces to read-write mode.

ALTER TABLESPACE ACC_DATA READ WRITE;
ALTER TABLESPACE ACC_INDEX READ WRITE;
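The seven steps above are usually driven from a script. The sketch below only assembles the exp and imp command lines shown in steps 2 and 6 as strings (it does not run them); the dump file name and data file paths mirror the examples in the text and are otherwise illustrative.

```python
# Assemble the transportable-tablespace export/import commands from
# steps 2 and 6. Builds the command strings only; running them would
# require an Oracle installation and appropriate credentials.

def exp_cmd(dump, tablespaces):
    """Metadata export command for a transportable tablespace set."""
    return ("exp FILE={} TRANSPORT_TABLESPACE=Y TABLESPACES=({}) "
            "CONSTRAINTS=N").format(dump, ",".join(tablespaces))

def imp_cmd(dump, datafiles):
    """Metadata import ('plug in') command; only TRANSPORT_TABLESPACE
    and DATAFILES are mandatory."""
    files = ",".join("'{}'".format(f) for f in datafiles)
    return "imp FILE={} TRANSPORT_TABLESPACE=Y DATAFILES=({})".format(dump, files)

print(exp_cmd("acc_tts.dmp", ["ACC_DATA", "ACC_INDEX"]))
print(imp_cmd("acc_tts.dmp", ["/u01/acc_data01.dbf", "/u02/acc_index01.dbf"]))
```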
Using National Language Support
National Language Support (NLS) lets you store and retrieve data in a native language and format. Oracle supports a wide variety of languages and character sets. NLS enables you to communicate with end users in their native language, using their familiar date formats, number formats, and sorting sequences. Oracle uses Unicode (a worldwide encoding standard for computer usage) to support these languages.

The database character set is defined at database creation using the CHARACTER SET clause of the CREATE DATABASE command. The character set is used to store data in CHAR, VARCHAR2, CLOB, and LONG columns and for table names, column names, PL/SQL variables, and so on. If you do not specify a character set at database creation, Oracle uses US7ASCII, a seven-bit ASCII character set. US7ASCII uses a single byte to store each character and can represent 128 characters (2^7). Another widely used single-byte character set is WE8ISO8859P1 (the Western European eight-bit ISO standard 8859 Part 1), which uses eight bits per character and can represent 256 characters (2^8).

Oracle also supports multibyte character encoding, which is used to represent languages such as Japanese, Chinese, Hindi, and so on. Multibyte encoding schemes can be fixed width or variable width. In a variable-width encoding scheme, such as UTF8, certain characters are represented using one byte, while other characters are represented using two or more bytes.

The options for changing the database character set after database creation are limited. You can change the database character set only if the new character set is a superset of the current character set; that is, all the characters represented in the current character set must be available in the new character set. WE8ISO8859P1 and UTF8 are both supersets of US7ASCII.
To change the database character set:

ALTER DATABASE CHARACTER SET WE8ISO8859P1;
You must be careful when changing the database character set; be sure to make a backup of the database before the change. This action cannot be rolled back and may result in loss of data or data corruption.
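The superset rule can be seen at work with Python's built-in codecs, where ASCII and ISO 8859-1 (Latin-1) stand in for US7ASCII and WE8ISO8859P1: every 7-bit string is valid in the 8-bit set, but not the reverse. The sample strings are illustrative.

```python
# A 7-bit character set represents 2**7 = 128 characters; an 8-bit
# set such as ISO 8859-1 represents 2**8 = 256. Python's 'ascii' and
# 'latin-1' codecs stand in for US7ASCII and WE8ISO8859P1 here.

assert 2 ** 7 == 128 and 2 ** 8 == 256

text = "resume"          # pure ASCII: valid in both character sets
text.encode("ascii")     # ok
text.encode("latin-1")   # ok -- Latin-1 is a superset of ASCII

accented = "résumé"      # é is outside the 7-bit range
try:
    accented.encode("ascii")
except UnicodeEncodeError:
    print("not representable in a 7-bit character set")
accented.encode("latin-1")  # fine in the 8-bit superset
```

This is exactly why a change from US7ASCII to WE8ISO8859P1 is safe, while a change in the other direction is not.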
Oracle lets you choose an additional character set for the database to enhance its character-processing capabilities. This second character set is also specified at database creation, using the NATIONAL CHARACTER SET clause. If you do not specify NATIONAL CHARACTER SET, Oracle uses the database character set. The national character set is used to store data in NCHAR, NVARCHAR2, and NCLOB columns. The national character set can be a fixed-width or variable-width character set; the database character set, however, cannot be a fixed-width multibyte character set.

If you choose a multibyte character set for your database, remember that the VARCHAR2, CHAR, NVARCHAR2, and NCHAR data types specify the maximum length in bytes, not in characters. If the character set uses two bytes per character, VARCHAR2(10) can hold a maximum of five characters.

The client machine can specify a character set different from the database character set by using local environment variables. The database character set should be a superset of the client character set. Oracle converts between the character sets automatically, but there is some overhead associated with the conversion. Certain character sets can support multiple languages; for example, WE8ISO8859P1 supports all western European languages, such as English, Finnish, Italian, Swedish, Danish, French, German, Spanish, and so on.
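The bytes-versus-characters distinction is easy to demonstrate. In this sketch, UTF-8 stands in for a variable-width Oracle character set, and the column names and sample strings are hypothetical; the point is that a 10-byte limit holds ten ASCII characters but far fewer multibyte ones.

```python
# VARCHAR2(n) limits *bytes*, not characters. With a multibyte
# character set, n bytes may hold far fewer than n characters.
# UTF-8 stands in here for a variable-width Oracle character set.

def fits_varchar2(value, max_bytes, encoding="utf-8"):
    """Would this string fit in a VARCHAR2(max_bytes) column?"""
    return len(value.encode(encoding)) <= max_bytes

ascii_name = "Schmidt"      # 7 characters, 7 bytes in UTF-8
kana_name = "ヤマダタロウ"    # 6 characters, 18 bytes in UTF-8 (3 bytes each)

print(len(ascii_name.encode("utf-8")))  # 7
print(len(kana_name.encode("utf-8")))   # 18
print(fits_varchar2(ascii_name, 10))    # True
print(fits_varchar2(kana_name, 10))     # False: 6 characters, but 18 bytes
```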
NLS Parameters

Oracle provides several NLS parameters to customize the database and client workstations to suit the native format. These parameters have default values based on the database and national character sets chosen at database creation. Specifying the NLS parameters overrides the default values. These are the ways that the NLS behavior can be customized:
At instance start-up, using the initialization file (example: NLS_DATE_FORMAT = "YYYY-MM-DD")
As environment variables (example in the Unix csh: setenv NLS_DATE_FORMAT YYYY-MM-DD; on MS-Windows, using the registry)
In the session using the ALTER SESSION command (example: ALTER SESSION SET NLS_DATE_FORMAT = “YYYY-MM-DD”)
In certain SQL functions (example: TO_CHAR(SYSDATE, 'YYYY-MM-DD', 'NLS_DATE_LANGUAGE = AMERICAN'))
A parameter specified in a SQL function has the highest priority; the next highest is a parameter specified using ALTER SESSION, then the environment variable, then the initialization parameter, with the database default parameters having the lowest priority. Certain parameters cannot be changed using ALTER SESSION or cannot be specified as an environment variable. The parameters and where they can be specified are discussed in the following sections.

NLS_LANG Can be specified only as an environment variable. It has three parts: the language, the territory, and the character set, in the format <language>_<territory>.<characterset>. None of the parts is mandatory. The language specifies the language to be used for displaying Oracle error messages, day names, month names, and so on. The territory specifies the default date, numeric, and monetary formats. The character set specifies the character set to be used by the client machine. For example, in AMERICAN_AMERICA.WE8ISO8859P1, AMERICAN is the language, AMERICA is the territory, and WE8ISO8859P1 is the character set.

NLS_LANGUAGE Specified at the session level (using ALTER SESSION) or as an initialization parameter. Sets the language to be used. The session value overrides the NLS_LANG setting. The default values for the NLS_DATE_LANGUAGE and NLS_SORT parameters are derived from NLS_LANGUAGE.

NLS_TERRITORY Specified at the session level or as an initialization parameter. Sets the territory. The session value overrides the NLS_LANG setting. The default values for parameters such as NLS_CURRENCY, NLS_ISO_CURRENCY, NLS_DATE_FORMAT, and NLS_NUMERIC_CHARACTERS are derived from NLS_TERRITORY.

NLS_DATE_FORMAT Specified at the session level, as an environment variable, or as an initialization parameter. Sets a default format for date displays.

NLS_DATE_LANGUAGE Specified at the session level, as an environment variable, or as an initialization parameter. Sets a language explicitly for day and month names in date values.
NLS_CALENDAR Specified at the session level, as an environment variable, or as an initialization parameter. Sets the calendar Oracle uses.
NLS_NUMERIC_CHARACTERS Specified at the session level, as an environment variable, or as an initialization parameter. Specifies the decimal character and group separator (for example, in 234,224.99, the comma is the group separator and the period is the decimal character).

NLS_CURRENCY Specified at the session level, as an environment variable, or as an initialization parameter. Specifies a currency symbol.

NLS_ISO_CURRENCY Specified at the session level, as an environment variable, or as an initialization parameter. Specifies the ISO currency symbol. For example, when the NLS_ISO_CURRENCY value is AMERICA, the currency symbol for US dollars is $ and the ISO currency symbol is USD.

NLS_DUAL_CURRENCY Specified at the session level, as an environment variable, or as an initialization parameter. Specifies an alternate currency symbol. Introduced to support the Euro.

NLS_SORT Specified at the session level, as an environment variable, or as an initialization parameter. Specifies the language whose linguistic sort sequence should be used for sorting. You can specify any valid language. The ORDER BY clause in a SQL statement uses this value for its sort mechanism. For example:

ALTER SESSION SET NLS_SORT = GERMAN;
SELECT * FROM CUSTOMERS ORDER BY NAME;
The NAME column will be sorted using the German linguistic sort mechanism. You can also set the sort language explicitly by using the NLSSORT function, rather than altering the session parameter. The following example demonstrates this method:

SELECT * FROM CUSTOMERS
ORDER BY NLSSORT(NAME, 'NLS_SORT = GERMAN');
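The override order described earlier (SQL function, then session, then environment, then initialization parameter, then database default) can be sketched as a small resolver. The scope names and sample values here are hypothetical; the point is only the fixed precedence walk.

```python
# Resolve an NLS parameter the way Oracle prioritizes its sources:
# SQL function argument > ALTER SESSION > environment variable >
# initialization parameter > database default.

PRIORITY = ["sql_function", "session", "environment", "init", "database"]

def effective(param, settings):
    """settings maps a scope name -> {param: value}; return the
    winning value and the scope it came from."""
    for scope in PRIORITY:
        value = settings.get(scope, {}).get(param)
        if value is not None:
            return value, scope
    raise KeyError(param)

settings = {
    "database": {"NLS_DATE_FORMAT": "DD-MON-YY"},     # default from NLS_TERRITORY
    "init":     {"NLS_DATE_FORMAT": "YYYY-MM-DD"},    # init.ora setting
    "session":  {"NLS_DATE_FORMAT": "DD-MM-YYYY"},    # ALTER SESSION wins here
}
print(effective("NLS_DATE_FORMAT", settings))  # ('DD-MM-YYYY', 'session')
```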
NLS Data Dictionary Information

NLS information can be obtained from the data dictionary using the following views:

NLS_DATABASE_PARAMETERS Shows the parameters defined for the database (the database default values).

NLS_INSTANCE_PARAMETERS Shows the parameters specified in the initialization parameter file.
NLS_SESSION_PARAMETERS Shows the parameters that are in effect in the current session.

V$NLS_VALID_VALUES Shows the allowed values for the language, territory, and character set definitions.

The following examples show NLS information from the data dictionary views and demonstrate changing session NLS values.

SQL> SELECT * FROM NLS_DATABASE_PARAMETERS;

PARAMETER                VALUE
------------------------ ------------------------------
NLS_LANGUAGE             AMERICAN
NLS_TERRITORY            AMERICA
NLS_CURRENCY             $
NLS_ISO_CURRENCY         AMERICA
NLS_NUMERIC_CHARACTERS   .,
NLS_CHARACTERSET         US7ASCII
NLS_CALENDAR             GREGORIAN
NLS_DATE_FORMAT          DD-MON-YY
NLS_DATE_LANGUAGE        AMERICAN
NLS_SORT                 BINARY
NLS_TIME_FORMAT          HH.MI.SSXFF AM
NLS_TIMESTAMP_FORMAT     DD-MON-YY HH.MI.SSXFF AM
NLS_TIME_TZ_FORMAT       HH.MI.SSXFF AM TZH:TZM
NLS_TIMESTAMP_TZ_FORMAT  DD-MON-YY HH.MI.SSXFF AM TZH:T
NLS_DUAL_CURRENCY        $
NLS_COMP
NLS_NCHAR_CHARACTERSET   US7ASCII
NLS_RDBMS_VERSION        8.1.5.0.0

18 rows selected.

SQL> ALTER SESSION SET NLS_DATE_FORMAT = 'DD-MM-YYYY HH24:MI:SS';

Session altered.

SQL> ALTER SESSION SET NLS_DATE_LANGUAGE = 'GERMAN';

Session altered.

SQL> SELECT TO_CHAR(SYSDATE, 'Day, Month'), SYSDATE FROM DUAL;
TO_CHAR(SYSDATE,'DAY,  SYSDATE
---------------------  -------------------
Dienstag , August      29-08-2000 16:24:14

SQL> ALTER SESSION SET NLS_CALENDAR = 'Persian';

Session altered.

SQL> SELECT SYSDATE FROM DUAL;

SYSDATE
------------------
09 Shahruoar 1379

SQL> SELECT * FROM NLS_SESSION_PARAMETERS;

PARAMETER                VALUE
------------------------ ------------------------------
NLS_LANGUAGE             AMERICAN
NLS_TERRITORY            AMERICA
NLS_CURRENCY             $
NLS_ISO_CURRENCY         AMERICA
NLS_NUMERIC_CHARACTERS   .,
NLS_CALENDAR             Persian
NLS_DATE_FORMAT          DD Month YYYY
NLS_DATE_LANGUAGE        GERMAN
NLS_SORT                 BINARY
NLS_TIME_FORMAT          HH.MI.SSXFF AM
NLS_TIMESTAMP_FORMAT     DD-MON-YY HH.MI.SSXFF AM
NLS_TIME_TZ_FORMAT       HH.MI.SSXFF AM TZH:TZM
NLS_TIMESTAMP_TZ_FORMAT  DD-MON-YY HH.MI.SSXFF AM TZH:T
NLS_DUAL_CURRENCY        $
NLS_COMP

15 rows selected.
Summary
This chapter discussed the tools available to manage data in Oracle.

Direct-load insert is used to load data into a table using data from another table. If the table's parallel degree is 1, the APPEND hint must be specified in the INSERT statement to invoke a direct-load insert. When the table has been created as PARALLEL, or when the PARALLEL hint is specified, the APPEND hint is optional. Direct-load insert can be used only with the INSERT INTO ... SELECT form of the statement.

SQL*Loader is a utility used to load data from external files into Oracle. The utility is invoked as SQLLDR on most operating systems, and it uses three types of input files. The control file specifies the table to be loaded, how the data file is formatted, and which fields correspond to the columns in the table. The command-line parameters can be saved in a file, and the PARFILE parameter can be used to specify the parameter file name. The data file name is specified using the DATA parameter. You can optionally have the data in the control file itself; the data rows must be preceded by the BEGINDATA keyword.

SQL*Loader can load data into tables using the conventional-path or direct-path load method. In the conventional-path method, Oracle uses INSERT statements to load each row into the table; triggers are fired, and referential integrity is checked for each row. In the direct-path method, Oracle bypasses the buffer cache and writes the rows directly to the data blocks of the table; the load can be performed in parallel, and redo log entry generation can be prevented.

The Export and Import utilities can move data between databases or between users. The Export utility can also be used as a logical backup. Export creates a binary file whose format can be interpreted only by the Import utility. An export can be performed at the table level, user level, tablespace level, or for the full database.
For full database exports, the EXP_FULL_DATABASE privilege is required.

The transportable tablespace feature is also used to move data and objects from one database to another. The restrictions are that the source and target platforms and the database block sizes must be the same. The metadata needed to plug in the tablespace and to create its objects is exported using the TRANSPORT_TABLESPACE=Y parameter of the Export utility.
Oracle has the capability to store and retrieve data in a native language and format using the National Language Support feature. The character set used determines the default language and the conventions used. The character set is specified at the time of database creation. Oracle provides several parameters that can determine the characteristics and conventions of data displayed to the user. These parameters can be specified for a session, for the instance, or in certain SQL functions. If none are specified, the database defaults are used.
Key Terms

Before you take the exam, make sure you're familiar with the following terms:

direct-load insert
SQL*Loader
parameter file
log file
Export (EXP) utility
Import (IMP) utility
transportable tablespace
metadata
self-contained tablespace
National Language Support (NLS)
Review Questions

1. Which tool is used to load data from a text file to an Oracle database?

A. Export
B. Import
C. SQL*Loader
D. SQL*Plus

2. Identify the SQL*Loader file that is not an input file.

A. Control file
B. Data file
C. Discard file
D. Parameter file

3. How do you invoke a serial (no parallel query) direct-load insert?

A. Specify the hint DIRECT_LOAD.
B. Specify the hint APPEND.
C. No hints needed; Oracle automatically invokes direct-load insert.
D. Specify LOAD DIRECT in the INSERT statement.

4. Choose the statement that is not true.

A. Conventional-path load always writes to redo log files.
B. Direct-path load never writes to redo log files.
C. Conventional-path load uses INSERT statements to load data.
D. Direct-path load bypasses the data buffer cache and writes directly to the file.
5. What privilege is needed to export the tables owned by you using the Export utility?

A. CREATE TABLE
B. EXP_FULL_DATABASE
C. CONNECT
D. CREATE SESSION

6. Which data dictionary view shows the database character set?

A. V$DATABASE
B. NLS_DATABASE_PARAMETERS
C. NLS_INSTANCE_PARAMETERS
D. NLS_SESSION_PARAMETERS

7. When you do a table import, in which order do the following objects get created in the database?

A. Table
B. B-tree index
C. Bitmap index
D. Referential integrity constraints

8. Which parameter is used to specify a transportable tablespace export?

A. TABLESPACES
B. TRANSPORT_TABLESPACE
C. INCTYPE
D. None of the above
9. When you do a parallel direct-load insert,

A. The extents are created above the high-water mark.
B. The extents are created in the data files that belong to the table's tablespace, and made available to the other users when a COMMIT is issued.
C. Temporary segments are created for the number of parallel server processes, and the temporary segments are merged with the table's primary segment when a COMMIT is issued.
D. No COMMIT is necessary; data is automatically saved to the data files and cannot be rolled back.

10. Choose two NLS parameters that cannot be modified using the ALTER SESSION statement.

A. NLS_CHARACTERSET
B. NLS_SORT
C. NLS_NCHAR_CHARACTERSET
D. NLS_TERRITORY

11. Which two parameters must you specify when importing the metadata information into the target database while transporting tablespaces?

A. DATAFILES
B. TABLESPACES
C. TRANSPORT_TABLESPACE
D. TTS_OWNERS

12. Choose two reasons why you would use a direct-path load instead of a conventional-path load.

A. To restrict users from performing DML operations while the load is in progress.
B. To perform parallel loads.
C. To disable the generation of redo log entries if the database is in ARCHIVELOG mode.
D. The table has many indexes, and the temporary tablespace is small.
13. If you run the ALTER SESSION SET NLS_DATE_FORMAT = 'DDMMYY' statement, which dictionary view would you query to see the value of the parameter?

A. V$SESSION_PARAMETERS
B. NLS_SESSION_PARAMETERS
C. NLS_DATABASE_PARAMETERS
D. V$SESSION

14. To use the transportable tablespace feature, the block size of the target and source databases should be

A. 4KB
B. The same size
C. Not larger than 16KB
D. Different sizes

15. Which utility would you use to refresh a test table with production data?

A. SQL*Loader conventional path
B. SQL*Loader direct path
C. Export/Import
D. SQL*Plus direct-load insert

16. Choose the statement that is not true. Direct-path load

A. Disables insert triggers on the table and enables them after the load completes.
B. Disables referential integrity constraints and enables them after the load completes.
C. Disables unique constraints and enables them after the load completes.
D. Does not disable any constraints or triggers.
17. What does the parameter DIRECT=Y in the export parameter file signify?

A. When exporting table data, bypass the SQL buffer cache and read the data blocks directly from the disk to the export file.
B. When exporting table data, bypass the SQL processing layer and read data from disk to the buffer cache, and copy them directly to the export file.
C. When importing data using the export file created, write data blocks directly to the file, rather than going through the buffer cache.
D. To perform a direct import, the export file must be created using the direct method.

18. Which parameter in the export file is used to specify a structure-only export (no rows)?

A. ROWS
B. TABLE
C. NODATA
D. DIRECT

19. Which NLS parameter can be specified only as an environment variable?

A. NLS_LANGUAGE
B. NLS_LANG
C. NLS_TERRITORY
D. NLS_SORT

20. When transporting a tablespace from one database to another, it should be

A. Read-only.
B. Offline.
C. Online.
D. Tablespace status does not matter.
Answers to Review Questions

1. C. SQL*Loader can load data into Oracle tables from external text files. The SQL*Loader control file specifies the input data file, the table where data is to be loaded, the formatting of the data in the text file, and how to map the text file data to the columns in the table.

2. C. The control file, data file, and parameter file are input files. The log file, discard file, and bad file are output files generated by SQL*Loader.

3. B. To invoke a serial direct-load insert, you must specify the APPEND hint in the INSERT statement. The hint is optional for a parallel direct-load insert. Oracle uses direct-load insert only when you insert records into a table by selecting data from another table (INSERT /*+ APPEND */ INTO MYTAB SELECT * FROM YOURTAB).

4. B. Direct-path load does not write to redo log files if the UNRECOVERABLE parameter is specified or if the table is defined with the NOLOGGING option.

5. D. To export the objects owned by the user (either a table-mode or user-mode export), the only privilege required is CREATE SESSION.

6. B. The NLS_DATABASE_PARAMETERS view shows the database character set and all the NLS parameter settings. The character set cannot be changed at the instance or session level, so it does not show up in the NLS_INSTANCE_PARAMETERS and NLS_SESSION_PARAMETERS views.

7. A, B, D, and C. Oracle exports the structure information in the order in which you would create the objects. The table may reference user-defined type information, so that is exported first, and then the table definition is exported. Data is exported next, followed by b-tree indexes, constraints, triggers, bitmap indexes, function-based indexes, and so on.

8. B. Specifying TRANSPORT_TABLESPACE=Y exports the metadata required to transport the tablespaces listed in the TABLESPACES parameter.
9. C. During a parallel direct-load insert, each parallel query server process creates a temporary segment and inserts data into it. When the user issues a COMMIT, the parallel query coordinator merges the temporary segments with the table's primary segment.

10. A and C. The character sets cannot be changed after creating the database. The CHARACTER SET and NATIONAL CHARACTER SET clauses are used in the CREATE DATABASE command.

11. A and C. TRANSPORT_TABLESPACE=Y specifies that the import is a tablespace import, and you must specify the new data file locations using the DATAFILES parameter. The TABLESPACES and TTS_OWNERS values can be identified from the export file itself.

12. B and C. Using direct-path load, you can perform parallel loads, and by creating the table in NOLOGGING mode, you can prevent redo entries from being generated. During direct-path and conventional-path loads, DML operations are permitted on the table. If the table has many indexes, you need more temporary tablespace space when doing a direct-path load, because Oracle creates the index segments in the temporary tablespace and merges them with the existing indexes when the load is complete.

13. B. The NLS_SESSION_PARAMETERS view shows the NLS parameter values that are in effect in the current session.

14. B. To move data using the transportable tablespace feature, the source and destination database block sizes must be the same.

15. C. Export/Import is used to move data between databases. When the table already exists in the target database, you must specify the IGNORE=Y parameter to ignore table creation errors.

16. C. Unique constraints remain in force. The data is verified when the index is rebuilt at the end of the load; if there are any violations, the index is left in the UNUSABLE state.
17. B. Direct-path export bypasses the SQL statement-processing layer. In a conventional export, SELECT statements are used to read data blocks from the table into the buffer cache, and after SQL processing, the rows are written to the export file.

18. A. The ROWS=N parameter should be specified to bypass the export of table data. By default, all rows of the tables are exported.

19. B. NLS_LANG is specified as an environment variable. The parameter specifies a language, territory, and character set.

20. A. To transport a tablespace, you must change its status to read-only while performing the metadata export and while copying the physical files.
PART II: OCP: Oracle8i Backup & Recovery

Chapter 10: Backup and Recovery Considerations

ORACLE8i BACKUP AND RECOVERY EXAM OBJECTIVES OFFERED IN THIS CHAPTER:

Define requirements for a backup and recovery strategy
Describe the importance of obtaining management concurrence for the strategy
Identify the components of a disaster recovery plan
List Oracle Server features in the context of high availability
List the strengths of different database configurations for recoverability
Discuss the importance of testing a backup and recovery plan
Exam objectives are subject to change at any time without prior notice and at Oracle's sole discretion. Please visit Oracle's Training and Certification Web site (http://education.oracle.com/certification/index.html) for the most current exam objectives listing.
When determining your backup and recovery strategy, a number of issues must be considered. For backup and recovery to be successful, everyone from the technical team through management must understand the requirements and the effects of the backup and recovery strategy. After this strategy is agreed upon and in place, a disaster recovery plan can be created based upon it. It is important to understand the options for high availability, as well as the options for configuring your database for recoverability. The final step is to test the plan.

This chapter takes you through each of these considerations step by step as they would ideally be performed in many organizations. Keep in mind, though, that organizations can differ completely from one another. Although this approach is nice in theory, it can easily change due to Information Technology (IT) financial constraints, IT technical abilities, and a lack of IT knowledge and experience in management.
Requirements for a Backup and Recovery Strategy
To create a solid backup and recovery strategy, you must keep in mind six major requirements:
The amount of data that can be lost in the event of a database failure
The length of time that the business can go without the database in the event of a database failure
Whether the database can be offline to perform a backup, and if so, the length of time that it can remain offline
The types of resources available to perform backup and recovery
The procedures for undoing changes to the database, if necessary
The cost of buying and maintaining hardware and performing additional backups versus the cost of replacing or re-creating the data lost in a disaster
All these requirements must be clearly understood before planning a backup and recovery strategy.
Losing Data in a Database Failure

The amount of data that can be lost in a failure is a determining factor in the backup and recovery strategy that gets implemented. If a week's worth of data can be lost in the event of a failure, then a weekly backup may be a workable option. On the other hand, if no data can be lost in the event of a failure, then weekly backups would be out of the question, and backups would need to be performed daily.
Surviving without the Database in a Database Failure

If the company database were to fail during an outage, how long would it take until the business was negatively affected? Generally, this question can be answered by management. If all data is entered manually by data-entry staff, the downtime could be relatively long without hurting business operations; the business could potentially operate normally by generating orders or forms that could be entered into the database later. This type of situation could have minimal effect on the business. On the other hand, a financial institution that sends and receives data electronically 24 hours a day can't afford to be down for any time at all without impairing business operations, because the electronic transactions could be unusable until the database was recovered.

After you determine how long the business could survive without the database, you can figure out the average amount of time the database could be down if it were to fail by using the mean time to recovery (MTTR). The MTTR is the average time it takes to recover from a given type of failure. You should record each type of failure that is tested so that you can then determine an average recovery time. The MTTR can help determine mean recovery times for different failure scenarios. You can document these times during your testing cycles.
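Computing an MTTR per failure type is simple averaging over the recovery times recorded during testing. The failure categories and sample times below are hypothetical.

```python
# MTTR per failure type: the average of the recovery times observed
# during testing. Failure names and times (minutes) are hypothetical.

recovery_minutes = {
    "lost datafile":     [42, 38, 47],
    "lost control file": [12, 15],
    "full restore":      [180, 210, 195],
}

def mttr(times):
    """Mean time to recovery for one failure type, in minutes."""
    return sum(times) / len(times)

for failure, times in recovery_minutes.items():
    print("{}: {:.1f} min".format(failure, mttr(times)))
```

The worst of these averages is what you compare against how long the business can survive without the database.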
Performing an Offline Backup

To determine whether it is possible to perform a database backup while the database is offline or shut down, you must first know how long the database can afford to be offline or shut down. For example, if the database supports an Internet site with national or international access, or a manufacturing site that works two or three shifts across different time zones and has nightly batch processing, then it must be available 24 hours a day. In that case, the database would always need to remain online, with the exception of scheduled downtime and maintenance, and an online backup, or hot backup, would need to be performed. This type of backup is done while the database is online, or running.

Businesses that don't require 24-hour availability and do not have long batch-processing activities in the evening could potentially afford to have the database offline at regular nightly intervals for an offline backup, or cold backup. Each site should conduct its own backup tests, with factors unique to its environment, to determine how long a cold backup would take. If that downtime is acceptable for the site, then a cold backup could be a workable solution.
Knowing Your Backup and Recovery Resources

The personnel, hardware, and software resources available to the business also affect the backup and recovery strategy. Personnel resources would include at least adequate support from a database administrator (DBA), system administrator (SA), and operator. The DBA would be responsible for the technical piece of the backup, such as shell scripting or Recovery Manager (RMAN) scripts. A scripted backup is an OS backup written in an OS scripting language, such as the Korn shell in the Unix OS. RMAN is an automated tool from Oracle that can perform the backup and recovery process. The SA would be involved in some aspects of the scripting, tape backup software, and tape hardware. The operator might be involved in changing tapes and ensuring that the proper tape cycles are followed.
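As a hedged example of the RMAN side of this work, an Oracle8i-era backup script could look roughly like the following; the channel name, format string, and disk destination are illustrative, not prescribed by the book:

```
run {
  allocate channel c1 type disk;
  backup database format '/backup/brdb_%U';
  release channel c1;
}
```

A script like this would typically be saved to a file and run from the rman command-line utility against the target database.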
The hardware resources could include an automated tape library (ATL), a stand-alone tape drive, adequate staging disk space for scripted hot backups and exports, adequate archive log disk space, and third disk mirrors. All types of disk subsystems should be at least mirrored or use some form of RAID, such as RAID 5, where performance is not compromised.
RAID stands for Redundant Array of Inexpensive Disks. This is essentially fault tolerance to protect against individual disk crashes. There are multiple levels of RAID. RAID 0 implements disk striping without redundancy. RAID 1 is standard disk mirroring. RAID 2–5 offer some form of parity-bit checking on separate disks. RAID 5 has become the most popular in recent years, with many vendors offering their own enhancements to RAID 5 for increased performance. RAID 0 + 1 has been a longtime fault-tolerance and performance favorite for Oracle database configurations, due to its redundancy protection and strong write performance. However, with the RAID 5 enhancements by some storage array vendors, such as large caches or memory buffers, write performance has improved substantially. RAID 0 + 1 and RAID 5 can both be viable configurations for Oracle databases.
The software resources could include backup software, scripting capabilities, and tape library software. The Oracle RMAN utility comes with the Oracle8i Server software and is installed when selecting all components of Oracle8i Enterprise Server. The technical personnel, which should include the DBA and SA at a minimum, are generally responsible for informing the management of the necessary hardware and software to achieve the desired recovery goals.
Undoing Changes to the Database

Whether it is possible to undo changes made to the database usually depends on the sophistication of the code releases and configuration management control for the application in question. If the configuration control is highly structured with defined release schedules, then undoing changes may not be necessary; such control reduces the possibility of data errors or dropped database objects. On the other hand, if the release schedule tends to be unstructured, the potential for data errors introduced by developers can be higher. It is a good idea to prepare for these issues in any case. A full export can be done periodically, which would give the DBA a static copy of all the necessary objects within a database. Although exports have limitations, they can be useful for repairing data errors, as individual users and tables can be extracted from the export file. Additionally, individual tablespace backups can be performed more frequently on high-use tablespaces.
Weighing the Costs

Additional hardware is usually needed to perform adequate testing and failover for critical databases. When this additional hardware is unavailable, the risk of an unrecoverable database failure is greater. The cost of the additional hardware should therefore be weighed against the cost of re-creating the lost data in the event of an unrecoverable database failure. This type of cost comparison forces the management team to identify the steps necessary to manually re-create lost data, if that can be done at all. Once the steps for re-creating lost data are identified and their associated costs determined, these costs can be compared to the cost of additional hardware, which would be used for testing backups and as a failover system if a production server were severely damaged.
The Importance of Management Concurrence for the Strategy

In order for the backup and recovery strategy to be put into place, management must understand and be in agreement with the strategy. It is crucial that they understand the potential effects of backup and recovery on the business. An impact analysis should include the times the database will be unavailable for backups, maintenance, and recovery. Then, as new hardware and software improvements become available, the management team will be able to more easily understand their effects on business operations, and the benefits and costs of purchasing faster and more redundant hardware will be more easily measured in the decision-making process. Management's agreement with the backup and recovery strategy should reduce issues and concerns related to the backup and recovery process: expectations become reasonable and understood, which reduces the number of misconceptions during an outage.
Identifying the Components of a Disaster Recovery Plan

A disaster recovery plan is an agreed-upon set of procedures that will be followed in the event of a failure of an Oracle database and its related support systems. This is a key component of your overall backup and recovery strategy. A disaster recovery plan has three major components: managing non-media failures, managing media (disk) failures, and testing backup and recovery strategies.

Non-Media Failures  These consist of statement failure, process failure, instance failure, and user error. A non-media failure is almost always less critical than a media failure. In most cases, statement failure, process failure, and instance failure are automatically handled by Oracle and require no DBA intervention. User error can require a manual recovery performed by the DBA. Statement failure consists of a syntax error in the statement, and Oracle usually returns an error number and description. Process failure occurs when the user program fails for some reason, such as an abnormal disconnection or termination. The process monitor (PMON) process usually handles cleaning up the terminated process. Instance failure occurs when the database instance abnormally terminates, due to a power spike or outage, for example. Oracle handles this automatically upon start-up by reading through the current online redo logs and applying the necessary changes back to the database. User error occurs when a table is erroneously dropped or data is erroneously removed. This could require a tablespace recovery or an import of a table to be manually performed by the DBA. Chapter 11, "Oracle Recovery Structures and Processes," provides a more detailed look at the types of failures.

Media, or Disk, Failures  These are the most critical type of failure. A media failure is a failure of the database to read from or write to a file that it requires. For example, a disk drive could fail, a controller supporting a disk drive could fail, or a database file could be removed, overwritten, or corrupted.
Each type of media failure that occurs requires a different method for recovery.
The basic steps to perform media recovery are as follows:

1. Determine which files to recover: data files, control files, and/or redo logs.

2. Determine which type of media recovery is required: complete or incomplete, opened database or closed database. (You will learn more about these types of recovery in later chapters.)

3. Restore backups of the required files: the data files, control files, and offline redo logs (archive logs) necessary to recover.

4. Apply offline redo logs (archive logs) to the data files.

5. Open the database at the desired point, depending on whether you are performing complete or incomplete recovery.

6. Perform frequent testing of the process. Create a test plan of typical failure scenarios.

Testing  This is an important component of the disaster recovery plan. Testing is necessary to validate that the backup and recovery process actually works. We'll go into more detail on testing in the "Testing a Backup and Recovery Plan" section later in this chapter.
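As a hedged sketch of steps 3 through 5 for one scenario, a time-based incomplete recovery in the Server Manager syntax of this era might look like the following (the timestamp is a placeholder, and restoring the backed-up files precedes these commands):

```
SVRMGR> connect internal
SVRMGR> startup mount
SVRMGR> recover database until time '2000-06-01:17:55:00';
SVRMGR> alter database open resetlogs;
```

The RECOVER step is where the archive logs are applied; the RESETLOGS open marks the database at the chosen recovery point.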
High-Availability Oracle Server Features

High availability refers to a database system that has stringent uptime requirements. This affects your backup and recovery strategy because recovery may need to be nearly instantaneous, which can rule out some typical recovery methods. High-availability systems are often termed 24 × 7 databases. In reality, few systems are up and running 24 hours a day, 7 days a week, but many can come close. These systems are usually associated with high costs for personnel, software, hardware, and facilities. The Oracle server features for high availability are:
The standby database, which is used primarily for providing additional availability
The Oracle Parallel Server, which provides a solution for both availability and scalability
The standby database is a duplicate copy of your production database. This database can be kept on site or, for maximum protection, in a different geographic area. In the event of a catastrophic failure, when the primary database is unrecoverable, the standby database can be activated. Figure 10.1 shows a standby database.

FIGURE 10.1  Standby database

[Diagram: a primary server and a standby server connected by remote or local network access. The primary server sends archive logs; the standby server receives them. Each generated archive log is moved across the network to the standby server, where it is applied to the standby database, keeping it in synch with the primary database server.]
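The shipping loop pictured in Figure 10.1 can be sketched in shell. The directory paths, the .arc naming convention, and the local cp (standing in for a network copy such as rcp or ftp) are all illustrative assumptions:

```shell
# ship_logs PRIMARY_ARCH STANDBY_ARCH
# Copies any archive log in PRIMARY_ARCH that STANDBY_ARCH does not yet have.
ship_logs() {
  primary=$1; standby=$2
  mkdir -p "$standby"
  for log in "$primary"/*.arc; do
    [ -f "$log" ] || continue            # no archive logs generated yet
    base=$(basename "$log")
    if [ ! -f "$standby/$base" ]; then
      cp "$log" "$standby/$base"         # in practice: rcp/ftp to the standby host
      echo "shipped $base"
    fi
  done
}

# typical use, driven from cron on the primary:
# ship_logs /u01/arch /net/standby/arch
```

Re-running the loop ships only logs the standby has not yet received, which is what keeps the gap between the two databases small.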
The standby database is implemented on another server. This other server needs to run the same OS version, OS patch level, and Oracle relational database management system (RDBMS) version. The standby server needs network access to the primary server, and both databases need to be in ARCHIVELOG mode. See Chapter 23, "Oracle Standby Database," for a detailed explanation of how to configure a standby database.

The standby database is, for the most part, in constant recovery mode. It receives the archive logs from the primary database, and these archive logs are applied to the standby database to keep it almost exactly in synch with the primary database. The only difference is the gap in receiving the most recent archive logs from the primary database. When a catastrophic failure occurs, the standby database is recovered by applying the last available archive log and opening the database. Business operations can then continue as normal, failing over to the standby database.

Oracle Parallel Server allows increased scalability and can also be used for high availability. A parallel server consists of at least two servers, or nodes, each with an instance, but sharing one database. The database's files reside on raw disk so that each instance can access the same database files. Figure 10.2 shows an Oracle Parallel Server database.
FIGURE 10.2  Oracle Parallel Server

[Diagram: two nodes, each running its own instance, sharing one set of database files on common disk.]
This configuration provides scalability by having more than one node available, thereby allowing more connections and more transactions in a given time frame. We are not concerned with scalability for this discussion, so we will not address these capabilities in any detail.

To use the Oracle Parallel Server for high availability, you can have one instance act as a primary instance and the other act as a failover instance. Users would connect to the primary instance unless the primary instance, or node, failed; if failure were to occur, the users would connect to the failover, or secondary, instance. This approach protects against an instance or server hardware failure only. It doesn't protect against disk, or media, failure: each node shares the same disk, so if the disk subsystem fails, both instances become unavailable. All disk subsystems should be at least mirrored or use RAID 5 to provide redundancy in the event of a disk drive failure.

Another difference with the Parallel Server option is that the other node would usually be in close proximity to the primary node. Therefore, both nodes are more likely to suffer from a catastrophic failure, such as flood, fire, or earthquake, than would a standby database at a geographically distant location.
Configuring the Database for Recoverability
Two primary database configurations have an effect on recoverability:
ARCHIVELOG mode
NOARCHIVELOG mode
ARCHIVELOG is the mode used when the database generates offline redo logs (archive logs) to store a history of the transactions and other changes to the database. NOARCHIVELOG is the mode used when the database does not create archive logs; the database therefore does not save historical changes. Table 10.1 shows the strengths and weaknesses of ARCHIVELOG mode.

TABLE 10.1  ARCHIVELOG Mode Strengths and Weaknesses

ARCHIVELOG Strengths:
Database remains available at all times.
Can perform incomplete recovery, including cancel-based, time-based, and change-based recoveries. This can allow recovery to stop prior to an undesirable change in the database.
Possible to recover all data, or not lose any data, if a failure occurs.

ARCHIVELOG Weaknesses:
Uses more disk space to store archive logs.
Requires increased administration of archive logs.
Table 10.2 shows the strengths and weaknesses of NOARCHIVELOG.

TABLE 10.2  NOARCHIVELOG Mode Strengths and Weaknesses

NOARCHIVELOG Strengths:
Requires less administration of archive logs.
Uses less disk space.

NOARCHIVELOG Weaknesses:
The database must be unavailable on a regular basis to perform backups.
Data will almost surely be lost or need to be manually reentered in the event of a media failure (unless the redo logs have not cycled all the way through every group since the last cold backup, a rare exception).
The database needs to be shut down for a backup after adding new physical structures.
The database configuration that provides the most flexibility and protection is ARCHIVELOG mode. If there is any concern about losing data, ARCHIVELOG mode should be selected. A consistent backup (an offline, or cold, backup) can still be performed with a database in ARCHIVELOG mode. The only drawbacks of ARCHIVELOG mode are the increased disk usage and administration effort for archive logs, and these are minimal when weighed against the benefits.
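Putting a database into ARCHIVELOG mode involves an init.ora change plus a command issued against a mounted database. A sketch follows; the archive destination and format values are illustrative:

```
# init.ora: start the archiver automatically
LOG_ARCHIVE_START  = TRUE
LOG_ARCHIVE_DEST   = /u01/arch
LOG_ARCHIVE_FORMAT = arch_%s.arc

SVRMGR> connect internal
SVRMGR> startup mount
SVRMGR> alter database archivelog;
SVRMGR> alter database open;
```

The mode change must be made while the database is mounted but not open; from then on, filled online redo logs are copied to the archive destination.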
Testing a Backup and Recovery Plan

One of the most important (but also most overlooked) components of the recovery plan is testing. Testing should be done both before and after the database that you are supporting is in production. Testing validates that your backups are working and gives you the peace of mind that recovery will work when a real disaster occurs. You, as the DBA, should document and practice scenarios of certain types of failures so that you are familiar with them. The methods to recover from these types of failures should be clearly defined. You should document and practice the following types of failures, at a minimum:
Loss of a system tablespace
Loss of a nonsystem tablespace
Loss of a current online redo log
Loss of the whole database
Testing recovery should include recovering your database to another server, such as a test or development server. The cost of having additional servers available for testing can be intimidating for some businesses, and this can be one deterrent to adequate testing. Test servers are nevertheless absolutely necessary, and businesses that skip this requirement are at risk of severe data loss or an unrecoverable situation.
One way that you can test recovery is to create a new development or testing environment by recovering the database to a development or test server in support of a new software release. Database copies are often necessary to support new releases of the database and application code prior to moving them to production. RMAN provides the DUPLICATE command in support of this. For scripted backups, manual OS tools in Unix, such as ufsrestore, tar, and cpio, are often used to restore from tape or to copy from a disk staging area.
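An RMAN duplication along these lines might look roughly like the following; the auxiliary channel and the new database name testdb are assumptions, and a real duplication requires additional setup (an auxiliary instance and its parameter file) beyond this sketch:

```
run {
  allocate auxiliary channel aux1 type disk;
  duplicate target database to testdb;
}
```

RMAN restores the target's backups into the auxiliary instance and opens it under the new name, giving you a test copy without touching production.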
Summary

Oracle provides numerous options for backup and recovery. The primary decision that you (as the DBA) will make is whether to operate in ARCHIVELOG mode or NOARCHIVELOG mode; the result of this decision determines many backup and recovery options. The technical team and the management of an organization must agree on how to implement these options, and the agreed-upon process should be formalized into backup and recovery plans. These plans need to be tested regularly, both to assist participants such as the DBA and SA groups in the event of an actual failure and to validate the process. High-availability options should be considered and, if deemed necessary, incorporated into the disaster recovery plan.
Key Terms

Before you take the exam, make sure you're familiar with the following terms:

mean time to recovery (MTTR)
hot backup
cold backup
Recovery Manager (RMAN)
Redundant Array of Inexpensive Disks (RAID)
disaster recovery
non-media failures
media (disk) failures
statement failure
process failure
instance failure
user error
high availability
24 × 7 databases
standby database
Oracle Parallel Server
ARCHIVELOG
NOARCHIVELOG
Review Questions

1. What is a type of non-media failure? Choose all that apply.
A. Process failure
B. Crashed disk drive with data files that are unreadable
C. Instance failure
D. User error
E. Statement failure

2. What is a strength of operating in ARCHIVELOG mode? Choose all that apply.
A. More recovery options
B. Complete recovery
C. Incomplete recovery
D. Database unavailable during backups

3. What are the major weaknesses of operating in ARCHIVELOG mode? List all that apply.
A. Increased disk usage.
B. Archive log management.
C. Can perform only cold backups.
D. Database can be available during backups.

4. Why is it important to get management to understand and agree with the backup and recovery plan?
A. So that they understand the benefits and costs of the plan.
B. So that they understand the plan's effects on business operations.
C. It's not important for management to understand.
5. Which Oracle features are offered to support high-availability database requirements? Choose all that apply.
A. Parallel query
B. Parallel Server
C. Standby database
D. Snapshots

6. What are some reasons why a DBA might test the backup and recovery strategy? List all that apply.
A. To validate the backup and recovery process
B. To stay familiar with certain types of failures
C. To practice backup and recovery
D. To build duplicate production databases to support new releases

7. What type of failure is usually the most serious?
A. Non-media failure
B. Statement failure
C. Instance failure
D. Media failure

8. What are the two types of database configurations that affect recoverability? List all that apply.
A. ARCHIVELOG mode
B. NOARCHIVELOG mode
C. MOUNT mode
D. NOMOUNT mode
9. Which type of database configuration is most likely to have data loss in the event of a database failure?
A. ARCHIVELOG mode
B. NOARCHIVELOG mode

10. List all the failure types. Choose all that apply.
A. Statement
B. Process
C. Instance
D. NOMOUNT mode

11. Which type of high-availability solution would best withstand a catastrophic failure?
A. Parallel Server
B. Standby database

12. What are the two most common disk fault-tolerance options for Oracle databases? Choose all that apply.
A. RAID 0
B. RAID 10
C. RAID 5
D. RAID 0 + 1

13. Why should backup and recovery testing be done? Choose all that apply.
A. To practice your recovery skills
B. To validate the backup and recovery process
C. To get MTTR statistics
D. To move the database from the production server to the test server
14. What term is commonly used to describe high availability? Choose the best answer.
A. 24 × 7
B. Parallel Server
C. Standby database
D. Parallel query

15. What backup and recovery tests should be performed, at a minimum? Choose all that apply.
A. Recovery from the loss of a system tablespace
B. Recovery from the loss of a nonsystem tablespace
C. Full database recovery
D. Recovery from the loss of an online redo log

16. What is the purpose of MTTR? Choose all that apply.
A. Determines the average time to recover the database for specific failures
B. Determines the recovery failure process
C. Performs a tablespace recovery
D. Performs a non-tablespace recovery

17. List the major components of a disaster recovery plan.
A. Managing statement failure
B. Managing non-media failure
C. Managing instance failure
D. Managing media failure

18. List all the methods used to protect against erroneous changes to the database without performing a full database recovery.
A. Tablespace backups of high-usage tablespaces
B. Control file backups
C. Exports
D. Multiplexed redo logs
Answers to Review Questions

1. A, C, D, and E. A media failure occurs when a database file cannot be read or written to. A non-media failure comprises all other types of failures.

2. A, B, and C. ARCHIVELOG mode allows more recovery options than NOARCHIVELOG mode. Complete recovery is available only in ARCHIVELOG mode (incomplete recovery is available in ARCHIVELOG mode as well). The database does not need to be unavailable in ARCHIVELOG mode because of the hot backup capability.

3. A and B. ARCHIVELOG mode has few weaknesses except that archive generation causes disk usage, and there is increased management of the archive logs.

4. A and B. Management needs to understand the backup and recovery plan so that the plan can be tailored to meet the business operational requirements. Furthermore, by understanding the plan, management can better gauge the benefits and costs of decisions that they are about to make.

5. B and C. Parallel Server and standby database features both support high-availability databases.

6. A, B, C, and D. All are relevant reasons to test a backup and recovery strategy.

7. D. Media failure is the most serious and usually the key component in the backup and recovery strategy. All other types of failures listed are handled automatically by Oracle.

8. A and B. ARCHIVELOG mode and NOARCHIVELOG mode are the only two database configurations that affect recoverability because one generates archive logs and the other doesn't.

9. B. NOARCHIVELOG mode is most likely to have data loss because there are no archive logs generated, and therefore complete recovery is highly unlikely.
10. A, B, and C. Statement, process, and instance failure are all types of non-media failure. These types of failures tend to be less critical than media failure.

11. B. The standby database solution allows the standby database to be in another geographic location, not in close proximity to the primary database.

12. C and D. RAID 0 + 1 and RAID 5 are common disk fault-tolerance options used to protect database files from media failure. RAID 0 + 1 has been the longtime Oracle standard for data files, and RAID 5 has become more accepted due to disk write enhancements.

13. A, B, C, and D. All answers are potential reasons to perform backups, the most important being that testing validates the backup and recovery process.

14. A. Although the terms Parallel Server and standby database are associated with high-availability database options, only 24 × 7 is used to describe high availability.

15. A, B, C, and D. All of the above are backup and recovery tests that should be performed.

16. A. The purpose of MTTR is to determine average recovery times for specific failures so that availability goals can be determined.

17. A, B, and D. Statement failure, non-media failure, and media failure are all major components of a disaster recovery plan. Instance failure is handled automatically when the instance restarts, and usually isn't as critical as media failure.

18. A and C. Tablespace backups of high-usage tablespaces and exports of the whole database or high-usage tables can provide protection against erroneous changes without doing a full database recovery.
Chapter 11

Oracle Recovery Structures and Processes

ORACLE8i BACKUP AND RECOVERY EXAM OBJECTIVES OFFERED IN THIS CHAPTER:

List Oracle processes, memory, and file structures
Identify the importance of checkpoints, redo logs, and archives
Multiplex control files and redo logs
List types of failures
Describe the structures for instance and media recovery
Describe the deferred transaction recovery concept

Exam objectives are subject to change at any time without prior notice and at Oracle's sole discretion. Please visit Oracle's Training and Certification Web site (http://education.oracle.com/certification/index.html) for the most current exam objectives listing.
Oracle uses a wide variety of processes and structures to provide a robust set of recovery options. A process is a daemon, or background program, that performs certain tasks. A structure is either a physical or logical object that is part of the database, such as a file or a database object itself. The processes consist of log writer (LGWR), system monitor (SMON), process monitor (PMON), checkpoint (CKPT), and archiver (ARCH). The available structures include redo logs, rollback segments, control files, and data files. You were introduced to these terms in Chapter 1, "Oracle Overview and Architecture." This chapter provides further detail.

Different combinations of these processes and structures are used to recover from different kinds of failures. There are five basic failures that structure/process combinations can resolve: user error, statement error, process failure, instance failure, and media (disk) failure. This chapter addresses each of these failures and the associated processes and structures involved in the recovery process.
Oracle Recovery Processes, Memory Components, and File Structures
Oracle recovery processes, memory structures, and file structures all work together in the recovery process. Oracle recovery processes interact with Oracle memory structures to coordinate the data blocks that need to be read from and written to logical and physical structures so that the database is in a consistent state. From the recovery perspective, all of these components work together to maintain the physical integrity of the database.

The memory structures are made up of blocks of data that reside in memory for faster access. As these memory structures change, coordination with the physical and logical structures occurs so that the database can remain consistent. Processes do the work of coordinating which blocks and other information need to be read or modified from the data blocks in memory and then written to disk, that is, to the physical and logical structures in the form of online redo logs, archive logs, or data files. Each process has specific tasks that it fulfills.

The physical and logical structures are like memory structures in that they are made up of data blocks. But the physical structures are static structures that consist of files in the OS file system. Data blocks and other information are written to these physical structures to make them consistent. The logical structures temporarily hold pieces of information for intermediate time periods, until the processes can permanently record the appropriate information in the physical structures.
Recovery Processes

Oracle has five major processes related to recovery: log writer, system monitor, process monitor, checkpoint, and archiver. Let's look at each of these processes in more detail.

The log writer process (LGWR) writes redo log entries from the redo buffers. A redo log entry is any change, or transaction, that has been applied to the database, committed or not. (To commit means to save, or permanently store, the results of the transaction to the database.) The LGWR process is mandatory and is started by default when the database is started.

The system monitor process (SMON) performs a varied set of functions. SMON is responsible for instance recovery. This process also performs temporary segment cleanup. It is a mandatory process and is started by default when the database is started.

The process monitor process (PMON) performs recovery of failed user processes. This is a mandatory process and is started by default when the database is started.

The checkpoint process (CKPT) performs checkpointing in the control files and data files. Checkpointing is the process of stamping a unique counter in the control files and data files for database consistency and synchronization. In Oracle7, the LGWR would also perform checkpointing if the CKPT process wasn't present. As of Oracle8, the CKPT process is mandatory and is started by default.

The archiver process (ARCH) copies the online redo log files to archive log files. ARCH is enabled only if the init.ora parameter LOG_ARCHIVE_START = TRUE is set or the ARCHIVE LOG START command is issued. This isn't a mandatory process.

An example of each of these processes can be seen by typing the Unix command ps -ef | grep $ORACLE_SID. $ORACLE_SID is a Unix environment variable that identifies the Oracle system identifier. In this case, $ORACLE_SID is brdb.

[oracle DS-HPUX] ps -ef | grep brdb
oracle  3982      1  0   Feb 13  ?       0:23 ora_pmon_brdb
oracle  3984      1  0   Feb 13  ?       1:26 ora_dbwr_brdb
oracle  3986      1  0   Feb 13  ?       0:00 ora_arch_brdb
oracle  3988      1  0   Feb 13  ?       0:21 ora_lgwr_brdb
oracle  3990      1  0   Feb 13  ?       0:13 ora_ckpt_brdb
oracle  3992      1  0   Feb 13  ?       0:02 ora_smon_brdb
oracle  3995      1  0   Feb 13  ?       0:00 ora_reco_brdb
oracle 14958  12027  1 09:51:22  ttyp1   0:00 grep brdb
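A simple health-check sketch along these lines can confirm that the mandatory background processes are present. The function name and the idea of passing ps output in as a parameter (so the check can be exercised against canned output) are illustrative; ARCH is omitted because it is optional:

```shell
# check_instance SID PS_OUTPUT
# Reports any mandatory background process missing from the given ps output.
check_instance() {
  sid=$1; ps_output=$2; missing=0
  for proc in pmon dbwr lgwr ckpt smon; do
    if ! printf '%s\n' "$ps_output" | grep -q "ora_${proc}_${sid}"; then
      echo "missing mandatory process: ora_${proc}_${sid}"
      missing=1
    fi
  done
  if [ "$missing" -eq 0 ]; then
    echo "all mandatory processes running for $sid"
  fi
}

# typical use on a live system:
# check_instance "$ORACLE_SID" "$(ps -ef)"
```

If any of the five mandatory processes dies, the instance has effectively failed, so a check like this is a quick first step when diagnosing an outage.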
Memory Structures

There are two Oracle memory structures relating to recovery: log buffers and data block buffers. The log buffers are the memory buffers that record the changes, or transactions, to data block buffers before they are written to the online redo logs or to disk. Online redo logs record all changes to the database, whether the transactions are committed or rolled back.

The data block buffers are the memory buffers that store all the database information. A data block buffer stores mainly data that needs to be queried, read, changed, or modified by users. Modified data block buffers that have not yet been written to disk are called dirty buffers. At some point, Oracle determines that these dirty buffers must be written to disk. A checkpoint occurs when the dirty buffers are written to disk.
Both Oracle memory structures can be viewed in a number of ways. The most common method is by performing a SHOW SGA command from SVRMGRL. See the following example:

    [oracle@DS-LINUX pfile]$ svrmgrl

    Oracle Server Manager Release 3.1.5.0.0 - Production

    (c) Copyright 1997, Oracle Corporation. All Rights Reserved.

    Oracle8i Enterprise Edition Release 8.1.5.0.0 - Production
    With the Partitioning and Java options
    PL/SQL Release 8.1.5.0.0 - Production

    SVRMGR> connect internal
    Connected.
    SVRMGR> show sga
    Total System Global Area      19504528 bytes
    Fixed Size                       64912 bytes
    Variable Size                 16908288 bytes
    Database Buffers               2457600 bytes
    Redo Buffers                     73728 bytes
    SVRMGR>
Looking at this code example, you will see that the database buffers are approximately 2.5MB and the redo buffers are approximately 72KB. When the SHOW SGA command is run, the data block buffers are referred to as database buffers, and the log buffers are referred to as redo buffers. These values are extremely small and are suitable only for a sample database or for testing. Data block buffers can be about 100MB to 200MB for average-sized databases with a few hundred users. Variable Size in this code example corresponds mainly to the SHARED_POOL_SIZE value in the init.ora file. Fixed Size is determined by a few less-critical parameters in the init.ora.
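As a sketch of how a more realistically sized buffer cache would be configured, the following init.ora fragment uses illustrative values only (not recommendations); with an 8KB block size, 12,800 block buffers yield a 100MB cache:

    # Illustrative values only -- sizes must be tuned per system.
    # 12,800 buffers x 8KB block size = 100MB buffer cache.
    db_block_buffers = 12800
    log_buffer       = 163840

Changes to these parameters take effect only after the instance is restarted.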
File Structures

As you learned in Chapter 1, the Oracle file structures relating to recovery include the online redo logs, archive logs, rollback segments, control files, and data files. The redo logs consist of files that record all the changes to the database. The archive logs are copies of the redo logs, providing a historical record of all the changes. See Chapter 12, "Oracle Backup and Recovery Configuration," for a more detailed discussion of redo logs. Control files are binary files that contain the physical structure of the database, such as the operating-system filenames of all files that make up the database. Data files are physical structures of the database that make up a logical structure called a tablespace. All data is stored within some type of object within a tablespace. Let's look at each of these in more detail.
Redo Logs

The redo logs consist of files that record all the changes to the database. Recording all changes is one of the most important activities in the Oracle database from the recovery standpoint. The redo logs get information written to them before all other physical structures in the database. A physical structure is any file in the Oracle database. The purpose of the redo log is to protect against data loss in the event of a failure.

The term redo log includes many subclassifications. Redo logs consist of online redo logs, offline redo logs (also called archive logs), current online redo logs, and non-current online redo logs. Each is described below.

Online redo logs  Logs that are being written to from the log buffers in a circular fashion. These logs are written and rewritten.

Offline redo logs, or archive logs  Copies of the online redo logs made before they are written over by the LGWR.

Current online redo logs  Logs that are currently being written to and therefore are considered active.

Non-current redo logs  Online redo logs that are not currently being written to and therefore are inactive.

Each database has at least two sets of online redo logs. Oracle recommends at least three sets of online redo logs. You will soon see why when we discuss archive logs in the next section. Redo logs record all the changes that
are made to the database; these changes result from the LGWR writing out log buffers to the redo logs at particular events or points in time.

Redo logs are written in a circular fashion. That is, if there are three sets of logs, log 1 gets written to first until full. Then Oracle moves to log 2, and it gets written to until full. Then Oracle moves to log 3, and it gets written to until full. Oracle then goes back to log 1, writes over the existing information, and continues this process over again. Here is a listing of the logs from the V$LOGFILE view.

    SQLWKS> select group#, member from v$logfile;
    GROUP#     MEMBER
    ---------- ------------------------------------------
    3          /redo01/u02/oradata/TRE1/TRE1redo03.log
    2          /redo01/u02/oradata/TRE1/TRE1redo02.log
    1          /redo01/u02/oradata/TRE1/TRE1redo01.log
    3 rows selected.

Figure 11.1 shows an example of the circular process of redo file generation, which writes to one log at a time, starting with log 1, then log 2, then log 3, and then back to log 1 again.

FIGURE 11.1  The circular process of redo file generation
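Each group's position in this cycle can be checked in the V$LOG view: the STATUS column reports CURRENT for the group the LGWR is writing, ACTIVE for a group still needed for crash recovery, and INACTIVE otherwise. A sketch follows; the sequence numbers and statuses shown are hypothetical:

    SQLWKS> select group#, sequence#, status from v$log;
    GROUP#     SEQUENCE#  STATUS
    ---------- ---------- ----------------
    1          121        INACTIVE
    2          122        ACTIVE
    3          123        CURRENT
    3 rows selected.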
Redo logs contain values called system change numbers (SCNs) that uniquely identify each committed transaction in the database. As you learned in Chapter 1, these SCNs are like a clock of events that have occurred in the database. SCNs are one of the major synchronization elements in recovery. Each data-file header and control file is synchronized with the current highest SCN.

Archive Logs

Archive logs are non-current online redo logs that have been copied to a new location. This location is the value of the init.ora parameter LOG_ARCHIVE_DEST. Archive logs are created if the database is in ARCHIVELOG mode rather than NOARCHIVELOG mode. A more detailed explanation of ARCHIVELOG and NOARCHIVELOG mode will come in Chapter 12. As noted earlier, archive logs are also referred to as offline redo logs.

An archive log is created when a current online redo log is complete, or filled, and before that online redo log needs to be written to again. Remember, redo logs are written to in a circular fashion. If there are only two redo log sets and you are in ARCHIVELOG mode, the LGWR may have to wait, halting the writing of information to the redo logs, while an archive log is being copied. If it didn't wait, the LGWR process would overwrite the unarchived information, making the archive log useless. If at least three redo log groups are available, there will usually be enough time for the archive log to be created without causing the LGWR to wait for an available online redo log. This is under average transaction volumes; some large or transaction-intensive databases may have 10 to 20 log sets to reduce the contention on redo log availability.

Archive logs are the copies of the online redo logs; the archive logs get applied to the database in certain types of recovery. Archive logs build the historical transactions, or changes, back into the database to make it consistent to the desired recovery point.
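A quick way to confirm the log mode, the archive destination, and the current log sequence is the ARCHIVE LOG LIST command from Server Manager. The output below is a sketch; the sequence numbers are hypothetical:

    SVRMGR> archive log list
    Database log mode              Archive Mode
    Automatic archival             Enabled
    Archive destination            /oracle/admin/brdb/arch
    Oldest online log sequence     121
    Next log sequence to archive   123
    Current log sequence           123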
Control Files

A control file is a binary file that stores information about the physical structures that make up the database. The physical structures are OS objects, such as the OS filenames of all files in the database. The control file also stores the highest SCN to assist in the recovery process. This file additionally stores information about the backups if you are using RMAN, the database name, and the date the database was created.

You should always have at least two control files on different disk devices. Maintaining this duplication is called multiplexing your control files. Multiplexing control files is configured through the init.ora parameter CONTROL_FILES.
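The control files currently in use can be listed from the V$CONTROLFILE view. As a sketch, for a database multiplexing two control files:

    SVRMGR> select name from v$controlfile;
    NAME
    --------------------------------------
    /oracle/data/brdb/control1.ctl
    /oracle/data/brdb/control2.ctl
    2 rows selected.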
Every time a database is mounted, the control file is accessed to identify the data files and redo logs that are needed for the database to function. All new physical changes to the database get recorded in the control file by default.
Data Files

Data files are physical files stored on the file system. All Oracle databases have at least one data file, but usually more. Data files are where the physical and the logical meet. Tablespaces are logical structures and are made up of one or more data files. All logical objects reside in tablespaces. Logical objects are those that do not exist outside of the database, such as tables, indexes, sequences, and views.

Data files are made up of blocks. These data blocks are the smallest unit of storage in the database. The logical objects such as tables and indexes are stored in the data blocks, which reside in the data files. The first block of every data file is called the header block. The header block contains information such as the file size, block size, and associated tablespace. It also contains the SCN for recovery purposes.

Here is the output from the V$DATAFILE view, showing the data files that make up a sample database.

    SVRMGR> connect internal
    Connected.
    SVRMGR> select file#, status, name from v$datafile;
    FILE#      STATUS  NAME
    ---------- ------- ----------------------------------
    1          SYSTEM  /db01/ORACLE/brdb/system01.dbf
    2          ONLINE  /db01/ORACLE/brdb/rbs01.dbf
    3          ONLINE  /db01/ORACLE/brdb/temp01.dbf
    4          ONLINE  /db01/ORACLE/brdb/users01.dbf
    5          ONLINE  /db01/ORACLE/brdb/tools01.dbf
    6          ONLINE  /db01/ORACLE/brdb/data01.dbf
    7          ONLINE  /db01/ORACLE/brdb/indx01.dbf
    7 rows selected.
    SVRMGR>
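The checkpoint SCN recorded in each data-file header can be examined through the V$DATAFILE_HEADER view. In a cleanly checkpointed database, all files show the same value; the SCN values below are hypothetical:

    SVRMGR> select file#, checkpoint_change# from v$datafile_header;
    FILE#      CHECKPOINT_CHANGE#
    ---------- ------------------
    1          1495061
    2          1495061
    3          1495061
    4          1495061
    5          1495061
    6          1495061
    7          1495061
    7 rows selected.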
Checkpoints, Redo Logs, and Archive Logs
Now that you have a basic understanding of the various Oracle processes, file structures, and memory structures used for recovery, it's time to see how these interrelate. As you learned earlier, checkpoints, redo logs, and archive logs are significant to all aspects of recovery.

The checkpoint is an event that determines the synchronization, or consistency, of all transactions on disk. The checkpoint is implemented by storing a unique number, the SCN (again, this stands for system change number), in the control files, the headers of the data files, the online redo logs, and the archive logs. The checkpoint is performed by the CKPT process. One of the ways a checkpoint is initiated is by the DBWR process, which writes all modified data blocks in the data buffers (dirty buffers) to the data files. After a checkpoint is performed, all committed transactions have been written to the data files. If the instance were to crash at this point, only new transactions that occurred after this checkpoint would need to be applied to the database to enable a complete recovery. Therefore, the checkpoint determines which transactions from the redo logs need to be applied to the database in the event of a failure and subsequent recovery. Remember that all transactions, whether committed or not, get written to the redo logs.

A checkpoint is also issued by any of the following commands: ALTER SYSTEM SWITCH LOGFILE, ALTER SYSTEM CHECKPOINT LOCAL, ALTER TABLESPACE BEGIN BACKUP, and ALTER TABLESPACE END BACKUP.

SCNs are recorded within redo logs at every log switch, at a minimum, because a checkpoint occurs at every log switch. Archive logs have the same SCNs recorded within them as the online redo logs, because the archive logs are merely copies of the online redo logs. Let's look at an example of how checkpointing, online redo logs, and archive logs are all interrelated.
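The effect of a checkpoint can be observed by comparing the checkpoint SCN in V$DATABASE before and after forcing one. The following is a sketch; the SCN values are hypothetical:

    SVRMGR> select checkpoint_change# from v$database;
    CHECKPOINT_CHANGE#
    ------------------
    1495061
    SVRMGR> alter system checkpoint;
    Statement processed.
    SVRMGR> select checkpoint_change# from v$database;
    CHECKPOINT_CHANGE#
    ------------------
    1495834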
The ALTER TABLESPACE BEGIN BACKUP command is used to begin an online backup of a database. An online backup, also called a hot backup, occurs while the database is still available or running. See Chapter 13, “Physical Backup without Oracle Recovery Manager,” for a more detailed explanation of a hot backup. The ALTER TABLESPACE BEGIN BACKUP command is followed by an OS command to copy the files, such as cp in Unix. Then, the command ALTER TABLESPACE END BACKUP is used to end the hot backup. As we just discussed,
these ALTER TABLESPACE commands also cause a checkpoint to occur. The following is an example of the data file data01.dbf for the tablespace DATA being backed up.

    SVRMGR> connect internal
    Connected.
    SVRMGR> alter tablespace data begin backup;
    Statement processed.
    SVRMGR> ! cp data01.dbf /stage/data01.dbf
    SVRMGR> alter tablespace data end backup;
    Statement processed.
    SVRMGR>

Note that the tablespace DATA was put in backup mode, and then the OS command was executed, copying the data file data01.dbf to the new directory /stage, where it awaits writing to tape. Finally, the tablespace DATA was taken out of backup mode. These steps are repeated for every tablespace in the database. This is a simplified example of a hot backup.
It is important to execute the ALTER SYSTEM SWITCH LOGFILE command at the end of your scripted hot backup. This ensures that the last ALTER TABLESPACE END BACKUP command gets written to an archive log immediately after the last tablespace backup completes. This log can then be readily applied to the database in the event of a crash. If a tablespace is still in backup mode when its data files are used for a recovery, the database will not open without a data-file recovery. The command to recover is ALTER DATABASE DATAFILE ... END BACKUP, issued while the database is mounted.
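As a sketch of that recovery, using the sample data file from the earlier example (the path is illustrative), the session would look like this:

    SVRMGR> startup mount
    SVRMGR> alter database datafile '/db01/ORACLE/brdb/data01.dbf' end backup;
    Statement processed.
    SVRMGR> alter database open;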
If a hot backup was taken on Sunday at 2 A.M., and the database crashed on Tuesday at 3 P.M., then the last checkpoint would have been issued after all the data files were backed up. This backup would be the last checkpointed disk copy of the database. Therefore, all the archive logs generated after the 2 A.M. backup was completed would need to be applied to the checkpointed database to bring it up to 3 P.M. Tuesday, the time of the crash. When you are making a hot backup of the database, you are getting a copy of the database that has been checkpointed for each data file. In this case, each data file has a different SCN stamped in the header and each will need all applicable redo log entries made with a greater SCN applied to the data file to make the database consistent.
Multiplexed Control Files and Redo Logs
As we've previously discussed, control files and redo logs are important structures in the Oracle database. In earlier versions of Oracle, control files were a single point of failure: a failure point that, in and of itself, can cause a failure of the whole database. Oracle remedied this single point of failure by offering configuration parameters to multiplex the control file and redo logs. As stated earlier, multiplexing is maintaining a copy, or mirror, of a file.

Control files can be multiplexed by adding multiple locations to the init.ora CONTROL_FILES parameter. This is done by shutting down the database, performing an OS copy of an existing control file to another location, and then editing the init.ora file to add the new entry to the CONTROL_FILES parameter. After making this change, you must restart the database. The following sample init.ora file has two locations for control files.

    background_dump_dest          = /oracle/admin/brdb/bdump
    compatible                    = 8.1.5
    control_files                 = (/oracle/data/brdb/control1.ctl,
                                     /oracle/data/brdb/control2.ctl)
    core_dump_dest                = /oracle/admin/brdb/cdump
    db_block_buffers              = 300
    db_block_size                 = 8192
    db_file_multiblock_read_count = 8
    db_files                      = 64
    db_name                       = brdb
    dml_locks                     = 100
    global_names                  = false
    log_archive_dest              = /oracle/admin/brdb/arch
    log_archive_format            = archbrdb_%s.log
    log_archive_start             = FALSE
    log_buffer                    = 40960
    log_checkpoint_interval       = 10000
    max_dump_file_size            = 51200
    max_rollback_segments         = 60
    processes                     = 40
    rollback_segments             = (rb01,rb02,rb03,rb04,rb05,rb06)
    shared_pool_size              = 16000000
    user_dump_dest                = /oracle/admin/brdb/udump
    utl_file_dir                  = /apps/web/master
    utl_file_dir                  = /apps/web/upload
    utl_file_dir                  = /apps/web/remove
    utl_file_dir                  = /apps/web/archive
The redo logs are multiplexed either at the time of database creation, with your database create script, or later by issuing the ALTER DATABASE ADD LOGFILE command. The following is an example of a database creation script that specifies multiplexed redo logs.

    spool create_stub1.log
    startup pfile=/oracle/admin/lndb/pfile/init0.ora nomount
    create database lndb
        maxinstances 1
        maxlogfiles 16
        maxdatafiles 100
        character set "US7ASCII"
        datafile '/disk3/ORACLE/lndb/system01.dbf' size 80M
        logfile
            group 1 ('/disk1/redo01/lndb/redo01a.log',
                     '/disk2/redo02/lndb/redo01b.log') size 2M,
            group 2 ('/disk1/redo01/lndb/redo02a.log',
                     '/disk2/redo02/lndb/redo02b.log') size 2M,
            group 3 ('/disk1/redo01/lndb/redo03a.log',
                     '/disk2/redo02/lndb/redo03b.log') size 2M;
    spool off
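To add another multiplexed group to an existing database, the ALTER DATABASE ADD LOGFILE command is used. The group number and file names below are hypothetical, following the naming of the creation script:

    SVRMGR> alter database add logfile group 4
         2> ('/disk1/redo01/lndb/redo04a.log',
         3>  '/disk2/redo02/lndb/redo04b.log') size 2M;
    Statement processed.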
Make sure that your multiplexed redo logs and control files are on different disks, at a minimum. With current technologies such as storage arrays, the redo logs and control files should be on separate volume groups and separate controllers. This adds greater protection from failure. You may need to work with the system administrator to determine this information.
Types of Failures
As you learned in the preceding chapter, there are five major types of failures within an Oracle database: statement failure, process failure, instance failure, user error, and media (disk) failure. Each type of failure presents a unique set of problems that must be solved. Understanding each type of failure will help you set up the appropriate backup and recovery measures. Each is described below.

Statement failure  Occurs when there is a syntax error in the SQL statement, or when the middleware software is incompatible with the client software and the server software. Examples of middleware software are open database connectivity (ODBC) drivers, Web server components, and Oracle objects for object linking and embedding (OLE). Oracle returns an error code and description for these failures. There is no need to perform a recovery.

Process failure  Occurs when a user process terminates abnormally. This can happen for numerous reasons: a network problem, a power spike, or a user aborting a query or process. This happens frequently when a user queries too much data and doesn't want to wait for the return
set. The process can also be accidentally or purposefully killed by system administrators or DBAs. The PMON process wakes up and handles these failures. There is no need to perform a recovery.

Instance failure  Occurs when the database is abnormally shut down. This can occur due to a power outage, a system halt, or Oracle background processes being killed. In this situation, the data buffers don't get cleanly written to disk. Therefore, Oracle must recover by reading through all the transactions in the online redo log and reapplying them to the database. The SMON process handles this operation. This operation is automatic upon instance start-up and requires no intervention.

User error  Occurs when a user writes a program or script that removes or modifies data incorrectly, or accidentally drops database objects. To recover from this type of failure, Oracle provides incomplete recovery up to a certain point in time; for example, the recovery operation can be cancelled just before the point at which the DROP TABLE command was issued. Another technique, for more static tables, is nightly exports: logical extracts from a database that are stored in a binary dump file. Individual tables or user schemas can be rebuilt by using the Import utility with the binary dump file generated by the Export utility. Chapter 16, "Oracle Export and Import Utilities," explains these utilities in detail.

Media (disk) failure  Results when files needed by the database, such as data files, cannot be accessed. This usually occurs because a disk drive fails or a controller accessing a disk fails. Oracle provides several ways to recover from media failure. Each method is dependent on the type of backups that are performed. If you perform backups in ARCHIVELOG mode, complete recovery is possible. If you perform backups in NOARCHIVELOG mode, complete recovery might not be possible, and some data loss usually occurs.
Structures for Instance and Media Recovery
The recovery processes and structures we have been discussing support all types of failures that can occur in an Oracle database. To remedy some types of failures, such as instance failure, Oracle automatically utilizes these recovery processes and structures. In other cases, such as media failure, the DBA must determine which processes and structures to utilize.
The components necessary for instance recovery are the current online redo logs, the rollback segments, and the SMON process. Instance recovery is an automatic process and does not require user involvement. It is sometimes called the roll-forward-and-roll-back process. The structures necessary for media (disk) recovery are the online redo logs, archived redo logs, rollback segments, control files, and data files. Media recovery is not an automatic process. The DBA must select and implement the appropriate course of action.
Instance Recovery Structures

Instance failure occurs when the database instance (the memory structures and database processes such as PMON, SMON, DBWR, LGWR, and ARCH) is abnormally halted. This usually occurs because of a power outage or an abrupt shutdown of the server. When Oracle is restarted after an instance failure, the recovery process initiates.

The current online redo logs are the redo logs that have not yet been archived when the database is in ARCHIVELOG mode. As you already know, these logs have not been completely filled with changes, or transactions. Therefore, they are the logs that the database is currently using.

Rollback segments are logical structures that reside within a database's tablespace and its associated data files. Rollback segments come into play when data is modified. The rollback segments keep the before image of the data that was modified, so it can be rolled back if necessary. (To roll back simply means to undo.) This allows Oracle to maintain a read-consistent image of data while a user is modifying data but hasn't yet committed the transaction. If a value is updated from 5 to 10, but not committed, the rollback segments keep the old value, 5, so that all other users accessing this data will see the committed value of 5. The user performing the update sees the value of 10. After this user commits the value of 10, all other users will see this value. If the user instead issues a ROLLBACK command, all users will see 5. The value 5 is what is kept in the rollback segment until the commit occurs.

In this case, the SMON process is responsible for rolling back any dead transactions, or transactions that were active during the failure. This process rolls back, or undoes, any incomplete transactions that were cut off
during the instance failure. Therefore, if your transaction wasn't committed before the instance failure, it is rolled back as if it never occurred.

Let's look at an example of the cause and steps associated with an instance recovery. Imagine that there is a sustained power outage, longer than the uninterruptible power supply (UPS) can support. The database goes down hard, without a SHUTDOWN NORMAL or SHUTDOWN IMMEDIATE command.
The SHUTDOWN ABORT command performs the harshest and most rapid shutdown. It doesn't cleanly checkpoint all files when shutting down; therefore, instance recovery is necessary after every SHUTDOWN ABORT. A SHUTDOWN ABORT isn't recommended unless absolutely necessary.
Two hours later, the power is working again. You start the database, and instance recovery begins. Instance recovery goes through what is called the roll-forward-and-roll-back process. This process occurs in the following steps:

1. All the data that is not in the data files is applied to the data files from the online redo logs. Oracle calls this activity cache recovery. This is also known as the roll-forward part of instance recovery.

2. After cache recovery is complete, the database is opened for all users to access. However, not all data is available immediately; data that is locked by an unrecovered transaction, or that still needs to be rolled back, remains unavailable. Any data that was part of an active transaction during the failure potentially will not be available until that transaction has been rolled back.

3. All transactions that were active at the time of failure are marked as dead, as are their corresponding undo entries in the rollback segments. The SMON process then begins rolling back the dead transactions. Oracle calls this activity transaction recovery. This is also known as the roll-back part of instance recovery.

4. Pending distributed transactions are resolved. Distributed transactions are transactions occurring against remote databases.

5. Any new transaction that encounters data locked by a dead transaction automatically initiates the SMON task of rolling back the dead transaction to release the locks.
After all dead transactions have been rolled back, the database is fully recovered from the instance failure.
Media Recovery Structures

The structures necessary to perform media (disk) recovery are the online redo logs, archived redo logs, control files, and data files, as we mentioned earlier. Media failure occurs when a disk containing any of the data files, online redo logs, or control files goes bad, meaning that some of these files cannot be accessed. Unlike instance recovery, media recovery requires the DBA to perform certain activities and make decisions.

There are essentially three types of recovery that the DBA can perform: database, tablespace, and data-file recovery.

Database recovery  Restores some or all of the data files from backup and recovers the entire database. You perform this with the RECOVER DATABASE command. The database must be mounted, but not open, to perform database recovery.

Tablespace recovery  Restores all the data files associated with a particular tablespace. This is performed with the RECOVER TABLESPACE command. The database can be open and running, with the exception of the tablespace you are recovering.

Data-file recovery  Recovers a specific data file while the rest of the database is in use. This is performed with the RECOVER DATAFILE command.

The capability to recover a database completely, without loss of data, is determined by whether you operate in ARCHIVELOG mode or NOARCHIVELOG mode. ARCHIVELOG mode provides the capability to apply all the historical changes to a database backup so that the database can be brought up to the point of failure; depending on the type of failure, there will be no loss of data. In NOARCHIVELOG mode, the database is not generating the historical changes, the archive logs. Therefore, recovery usually amounts to restoring a consistent backup from an earlier point in time, and most often there is a loss of data.

You (as the DBA) have many options within the two types of recovery available with media failure. These are complete recovery and incomplete recovery.
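As a sketch, a data-file recovery for the sample database might look like the following session; the restore source path under /stage is hypothetical:

    SVRMGR> alter database datafile '/db01/ORACLE/brdb/data01.dbf' offline;
    Statement processed.
    SVRMGR> ! cp /stage/data01.dbf /db01/ORACLE/brdb/data01.dbf
    SVRMGR> recover datafile '/db01/ORACLE/brdb/data01.dbf';
    Media recovery complete.
    SVRMGR> alter database datafile '/db01/ORACLE/brdb/data01.dbf' online;
    Statement processed.

The rest of the database remains open and in use while this data file is restored and recovered.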
Complete recovery occurs when all historical changes (archive logs) are applied to a backup to bring the database up to the point of failure. There is no data loss. Incomplete recovery occurs when the DBA applies
some of the historical changes (archive logs) up to a point in time before the failure. There is loss of data. However, the data loss can be limited, and incomplete recovery may be the best alternative for certain types of failures. For example, if block corruption occurred in an important table at a known point in time, the DBA would want to recover up to the point in time just prior to the block corruption. This would leave the recovery incomplete, with some data loss, but it would be better than recovering into the block corruption, which could render that table and others useless.

There are three types of incomplete recovery, or methods of choosing the stop time before the point of failure:

Time-based recovery  The DBA stops the recovery process at a certain point in time. The command RECOVER DATABASE UNTIL TIME '1999-12-31:23:30:00' would stop the database recovery at that time. The archive logs would apply all the changes up to that time, and then recovery would stop.

Cancel-based recovery  The DBA arbitrarily chooses a place to stop the recovery. This is performed by the command RECOVER DATABASE UNTIL CANCEL.

Change-based recovery  The DBA stops the recovery process at a certain change point, or SCN. This is performed by issuing the command RECOVER DATABASE UNTIL CHANGE <SCN>.

See Chapter 15, "Incomplete Oracle Recovery with Archiving," for a more detailed explanation of incomplete recovery.

Control files and online redo logs can be protected from media failure by multiplexing and by OS mirroring of each multiplexed copy. With multiple copies of control files and redo logs on separate disks or volumes, there is a reduced likelihood that both locations would go bad. In most cases, if media failure occurs on one side of a multiplexed redo log, database operations will go on without interruption. The same goes for control files. In the worst case, the init.ora would need to be modified to remove the control file that was in the failed location.
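As a sketch, a time-based incomplete recovery session would resemble the following; because the recovery is incomplete, the database must be opened with the RESETLOGS option afterward:

    SVRMGR> startup mount
    SVRMGR> recover database until time '1999-12-31:23:30:00';
    Media recovery complete.
    SVRMGR> alter database open resetlogs;
    Statement processed.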
The database would need to be restarted after changing the control file locations in init.ora.
Deferred Transaction Recovery Concept
The deferred transaction recovery concept applies to instance failure and recovery. As we discussed earlier, the whole Oracle8i database is opened immediately after instance failure and is available for use. In older versions of Oracle, the database was not opened until the current online redo log was completely applied to the database. This could take a while, depending on what was going on at the time of the instance failure. For example, a large transaction or batch job would cause substantial roll-forward and roll-back work to get back to a consistent state.

Even though the whole database is opened immediately, recovery is still not complete until the dead transactions are rolled back. The SMON process begins working on these dead transactions by rolling them back. The deferred transaction recovery concept comes into play when a new transaction accesses a dead transaction's data before SMON can get around to fixing it. At this point, transaction recovery begins by locking the data until it can be rolled back. The new transaction triggers the recovery of the dead transaction and is deferred until that recovery is complete. This allows the database to be opened more quickly after instance failure, rather than waiting for the whole redo log to be rolled forward and then rolled back.
Summary
The recovery structures and processes available in Oracle allow significant recovery options for the DBA. The structures consist of files, processes, memory buffers, and logical objects that reside in the database. This chapter identified the file structures, which consist of redo logs, archive logs, control files, and data files. It identified the processes, which are PMON, SMON, LGWR, CKPT, and ARCH. It also discussed the memory structures, which consist of log buffers and data buffers. All these structures play different roles in the recovery process and can be used for different types of recovery.

This chapter also described the five basic types of failures: user error, statement failure, process failure, instance failure, and media (disk) failure.
Copyright ©2000 SYBEX , Inc., Alameda, CA
www.sybex.com
You learned the importance of identifying the type of failure, so that the right type of recovery can be applied. You also reviewed the levels of DBA involvement in the recovery process. Instance recovery is automatic, but media recovery has many DBA-initiated choices within the two main options, incomplete and complete recovery. This chapter also described the deferred transaction concept, which enables the Oracle database to be immediately available after instance failure, instead of requiring users to wait for the whole online redo log to be applied.
Key Terms

Before you take the exam, make sure you're familiar with the following terms:

process structure
log writer process (LGWR)
transaction
commit
system monitor process (SMON)
process monitor process (PMON)
checkpoint process (CKPT)
checkpointing
archiver process (ARCH)
log buffers
data block buffers
dirty buffers
database buffers
redo buffers
redo logs
archive logs
Chapter 11
Oracle Recovery Structures and Processes
control files
data files
online redo logs
offline redo logs
current online redo logs
non-current redo logs
system change numbers (SCNs)
multiplexing
header block
online backup
hot backup
single point of failure
statement failure
process failure
instance failure
user error
media (disk) failure
roll-forward-and-roll-back process
media recovery
before image
roll back
read-consistent image
dead transactions
cache recovery
distributed transactions
restore
deferred transaction recovery concept
Review Questions

1. What is the Oracle process associated with rolling back dead transactions during instance recovery?

A. PMON
B. SMON
C. ARCH
D. DBWR

2. What command must the DBA execute to initiate an instance recovery?

A. RECOVER DATABASE
B. RECOVER INSTANCE
C. RECOVER TABLESPACE
D. No command is necessary

3. What process is in charge of writing data to the online redo logs?

A. Log buffer
B. ARCH
C. Data buffers
D. LGWR

4. Of the physical file structures listed below, which can be multiplexed?

A. Control files
B. Data files
C. Redo logs
D. init.ora
5. Which of the following are failures that can occur in or with the Oracle database? Choose all that apply.

A. Media failure
B. Process failure
C. Instance failure
D. Statement failure

6. Which type of recovery can be performed for media failure? List all that apply.

A. Time-based recovery
B. Incomplete recovery
C. Complete recovery
D. Cancel-based recovery

7. Which operational decision affects the option to perform a complete recovery in the event of a media failure? List all that apply.

A. To perform hot backups
B. To perform cold backups because the database is in NOARCHIVELOG mode
C. To operate in NOARCHIVELOG mode
D. To operate in ARCHIVELOG mode

8. What are the file structures related to recovery? Choose all that apply.

A. Redo logs
B. Archive logs
C. Log buffers
D. Data files
9. What file structure consists of a binary file, which stores information about all the physical components of the database?

A. Redo log
B. Data file
C. Control file
D. Archive logs

10. List all the processes associated with recovery.

A. PMON
B. SMON
C. ARCH
D. DBWR

11. In Oracle8i, which process is responsible for performing checkpointing?

A. SMON
B. PMON
C. LGWR
D. CKPT

12. Which of the following are memory structures? Choose all that apply.

A. Rollback segments
B. Log buffers
C. Data block buffers
D. Data files
13. What type of shutdown requires an instance recovery upon start-up?

A. SHUTDOWN NORMAL
B. SHUTDOWN IMMEDIATE
C. SHUTDOWN TRANSACTIONAL
D. SHUTDOWN ABORT

14. What events trigger a checkpoint to take place? Choose all that apply.

A. CKPT
B. Shutdown normal
C. Shutdown immediate
D. Log switch

15. What procedure is responsible for stamping the SCN to all necessary physical database structures?

A. Read-consistent image
B. Checkpointing
C. Commits
D. Rollbacks

16. A complete database restore occurs when

A. The system data file is restored to the file system.
B. The online redo logs are restored to the file system.
C. The nonsystem data files are restored to the file system.
D. All data files, online redo logs, and control files are restored to the file system.
17. An individual failure that is responsible for bringing down the whole database is ________? Choose all that apply.

A. A non-multiplexed redo log on a non-fault-tolerant file system
B. A non-multiplexed control file on a non-fault-tolerant file system
C. A process failure
D. A single point of failure

18. The dirty buffers get written to disk when what event occurs? Choose the best answer.

A. A commit occurs.
B. A rollback occurs.
C. A checkpoint occurs.
D. SHUTDOWN ABORT occurs.
Answers to Review Questions

1. B. The SMON, or system monitor process, rolls back dead transactions during instance recovery.

2. D. Instance recovery is automatic.

3. D. The LGWR, or log writer process, writes changes from the log buffers to the online redo logs.

4. A and C. Control files and redo logs can be multiplexed. Multiplexing control files takes place in the init.ora file, and redo logs are multiplexed at database creation or with an ALTER DATABASE ADD LOGFILE MEMBER 'filename' command.

5. A, B, C, and D. All the listed failures are types of failures that can occur with or in the Oracle database.

6. A, B, C, and D. Both incomplete and complete recovery can be performed for media failure. Cancel- and time-based recovery are types of incomplete recovery. Incomplete and complete recovery are available only when the database is in ARCHIVELOG mode.

7. A and D. To perform complete recovery, archive logs must be applied to a restored database up to the point of failure. To perform a hot backup, the database must be in ARCHIVELOG mode, and a database in ARCHIVELOG mode generates the archived redo logs needed for complete recovery.

8. A, B, and D. Redo logs, archive logs, and data files are all file structures that are associated with recovery. Log buffers are memory structures.

9. C. The control file is a binary file that stores all the information about the physical components of the database.
10. A, B, and C. PMON, SMON, and ARCH are all associated with some part of the recovery process. PMON recovers failed processes, SMON assists in instance recovery, and ARCH generates the archive logs used to recover the database. The DBWR simply writes data to the data files when appropriate.

11. D. The CKPT process performs all the checkpointing by default in Oracle8i.

12. B and C. There are only two memory structures related to recovery: log buffers and data block buffers.

13. D. SHUTDOWN ABORT requires an instance recovery upon start-up because the data files are not checkpointed during shutdown.

14. B, C, and D. Shutdown normal and shutdown immediate checkpoint all necessary physical database structures, and switching log files forces a checkpoint of all necessary physical database structures. The CKPT process performs the checkpoint; it is not itself an event that triggers one.

15. B. Checkpointing is the procedure, initiated by the CKPT process, that stamps all the data files, redo logs, and control files with the latest SCN.

16. D. A complete restore occurs when all database files are restored to the file system.

17. A, B, and D. A single point of failure is any individual failure capable of bringing down the whole database by itself. Non-multiplexed redo logs and control files on non-fault-tolerant file systems are examples of this.

18. C. A checkpoint causes dirty buffers to be flushed to disk. A SHUTDOWN NORMAL or SHUTDOWN IMMEDIATE causes a checkpoint on shutdown, but a SHUTDOWN ABORT doesn't force a checkpoint; this is why instance recovery is necessary upon start-up. A rollback and a commit do not cause a checkpoint.
Chapter 12

Oracle Backup and Recovery Configuration

ORACLE8i BACKUP AND RECOVERY EXAM OBJECTIVES OFFERED IN THIS CHAPTER:

Describe the differences between ARCHIVELOG and NOARCHIVELOG modes

Identify recovery implications of operating in NOARCHIVELOG mode

Configure a database for ARCHIVELOG mode and automatic archiving

Use init.ora parameters to configure multiple destinations for archived log files and multiple archive processes

Perform manual archive of logs

Exam objectives are subject to change at any time without prior notice and at Oracle's sole discretion. Please visit Oracle's Training and Certification Web site (http://education.oracle.com/certification/index.html) for the most current exam objectives listing.
Configuring an Oracle database for backup and recovery can be complex. At minimum, it requires an understanding of the archive process, the initialization parameters associated with the archive process, the commands necessary to enable and disable archiving, the commands used to manually archive, and the process of initializing automated archiving. This chapter provides examples of the backup and recovery configuration process. After reading this chapter, you should be comfortable with this process.
Choosing ARCHIVELOG Mode or NOARCHIVELOG Mode

One of the most fundamental backup and recovery decisions that a DBA will make is whether to operate the database in ARCHIVELOG mode or NOARCHIVELOG mode. As you learned earlier, the redo logs record all the transactions that have occurred in a database, and the archive logs are copies of these redo logs. So, the archive logs contain the historical changes, or transactions, that occur to the database. Operating in ARCHIVELOG mode means that the database will generate archive logs; operating in NOARCHIVELOG mode means that the database will not generate archive logs. This section discusses the differences between ARCHIVELOG and NOARCHIVELOG mode.
ARCHIVELOG Mode

In ARCHIVELOG mode, the database generates archive log files from the redo logs. This means that the database makes copies of all the historical transactions that have occurred in the database. Here are other characteristics of operating in ARCHIVELOG mode:

Performing online (hot) backups is possible. This type of backup is done while the database is up and running, so a service outage is not necessary to perform a backup. The ALTER TABLESPACE BEGIN BACKUP command is issued to begin a hot backup. After this command is issued, an OS copy of the tablespace's data files can take place. When the OS copy is complete, an ALTER TABLESPACE END BACKUP command must be issued. These commands must be executed for every tablespace in the database.
A complete recovery can be performed. This is possible because the archive logs contain all the changes up to the point of failure. All logs can be applied to a backup copy of the database (hot or cold backup), reapplying all the transactions up to the time of failure. Thus, there would be no data loss or missing transactions.
Tablespaces can be taken offline immediately.
Increased disk space is required to store archive logs, and increased maintenance is associated with maintaining this disk space.
The additional disk space necessary for ARCHIVELOG mode is often overlooked or underestimated. When the archive process doesn't have enough disk space to write archive logs, the database will hang, stopping all activity. This hung state will persist until space is made available by moving older archive logs elsewhere, compressing log files, or setting up an automated job to remove logs after they have been written to tape.
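The housekeeping this tip describes can be scripted. The sketch below is a hypothetical example, not part of the book's procedures: it compresses archive logs in a destination directory once they are older than a given number of days. The archbrdb_*.log name pattern matches the LOG_ARCHIVE_FORMAT used later in this chapter, and the assumption that logs that old are already safely on tape is ours.

```shell
# compress_old_archives DIR DAYS
# Hypothetical housekeeping sketch: gzip archive logs in DIR that are
# older than DAYS, freeing space so the archiver does not hang the
# database. Assumes logs that old have already been written to tape.
compress_old_archives() {
    arch_dir=$1
    age_days=$2
    # gzip replaces each matching log with a .log.gz file
    find "$arch_dir" -name 'archbrdb_*.log' -mtime +"$age_days" \
        -exec gzip {} \;
    # report what remains in the destination
    ls "$arch_dir"
}
```

In practice, a job like this would be scheduled (via cron, for example) to run more often than the destination takes to fill.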
NOARCHIVELOG Mode

In NOARCHIVELOG mode, the database does not generate archive log files from the redo logs. This means that the database is not storing any historical transactions from the redo logs. The redo logs are written over each other as needed by Oracle. The only transactions that can be utilized in the event of instance failure are in the current redo log. Operating in NOARCHIVELOG mode has the following characteristics:
In most cases, a complete restore cannot be performed. This means that a loss of data will occur. The last cold backup will need to be used for recovery.
The database must be shut down completely for a backup, which means the database will be unavailable to the users of the database during that time. This means that only a cold backup can be performed.
Tablespaces cannot be taken offline immediately.
Additional disk space and maintenance is not necessary to store archive logs.
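The mode comparison above can be folded into a small helper. This is our own sketch, not an Oracle utility: it takes the LOG_MODE string that the V$DATABASE query (shown later in this chapter) returns and prints the backup implication.

```shell
# report_log_mode MODE
# MODE is the LOG_MODE value from: select log_mode from v$database;
# The messages summarize the trade-offs described above.
report_log_mode() {
    case "$1" in
        ARCHIVELOG)
            echo "Hot backups possible; complete recovery up to the point of failure." ;;
        NOARCHIVELOG)
            echo "Cold backups only; changes since the last cold backup are lost on media failure." ;;
        *)
            echo "Unknown log mode: $1" ;;
    esac
}
```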
Understanding Recovery Implications of NOARCHIVELOG

The recovery implications associated with operating a database in NOARCHIVELOG mode are important. Data loss usually occurs because recovery must use the last consistent full backup taken while the database was shut down (cold backup). Therefore, frequent cold backups need to be performed to reduce the amount of data loss in the event of a failure. This means that the database could be unavailable to users on a regular basis to perform cold backups. Let's look at examples of when it would not make sense to use NOARCHIVELOG mode and when it would. Imagine that Manufacturing Company A's database must be available 24 hours a day to support three shifts of work. This work consists of entering orders, bills of lading, shipping instructions, and inventory adjustments.
The shifts are as follows: day shift, 9 A.M. to 5 P.M.; swing shift, 5 P.M. to 1 A.M.; and night shift, 1 A.M. to 9 A.M. If this database is shut down for a cold backup from midnight to 2 A.M., then the night shift and swing shift would be unable to use it during that period. A NOARCHIVELOG backup strategy would not be workable for Manufacturing Company A. On the other hand, if Manufacturing Company B’s database must be available only during the day shift, from 9 A.M. to 5 P.M., then backups could be performed after 5 P.M. and before 9 A.M. without affecting users. The DBA could schedule the database to shut down at midnight and perform the backup for two hours. The database would be restarted before 9 A.M., and there would be no interference with the users’ work. In the event of a failure, there would be a backup from each evening, and only a maximum of one day’s worth of data would be lost. If one day’s worth of data loss were acceptable, this would be a workable backup and recovery strategy for Manufacturing Company B. These examples show that in some situations, operating in NOARCHIVELOG mode makes sense. But there are recovery implications that stem from this choice. One implication is that a loss of data will occur in the event of a failure. Also, there are limited choices on how to recover. The choice is usually to restore the whole database from the last consistent whole backup while the database was shut down (cold backup).
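The exposure in the Company B example is simple arithmetic: in NOARCHIVELOG mode, everything committed since the last cold backup is at risk. A small sketch of that calculation (ours, for illustration only):

```shell
# exposure_hours LAST_BACKUP_EPOCH NOW_EPOCH
# Prints the number of hours of committed work that would be lost if a
# media failure happened at NOW, given a cold backup taken at
# LAST_BACKUP. With nightly backups this tops out around 24, matching
# the "one day's worth of data" figure above.
exposure_hours() {
    last_backup=$1
    now=$2
    echo $(( (now - last_backup) / 3600 ))
}
```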
Configuring a Database for ARCHIVELOG Mode and Automatic Archiving

Once the determination has been made to run the database in ARCHIVELOG mode, the database will need to be configured properly. You can do this to a new database during database creation or to an existing database via Oracle commands. After the database is in ARCHIVELOG mode, you will most likely configure automatic archiving. Automatic archiving frees the DBA from the manual task of archiving logs with commands before the online redo logs perform a complete cycle.
Setting ARCHIVELOG Mode

ARCHIVELOG mode can be set during database creation or by using the ALTER DATABASE ARCHIVELOG command. The database must be mounted, but not open, in order to execute this command. This command stays in force until it is turned off by using the ALTER DATABASE NOARCHIVELOG command; the database must likewise be mounted, but not open, to execute that command. The redo log files will be archived to the location specified by the LOG_ARCHIVE_DEST parameter in the init.ora file. By default, the database is in manual archiving mode. This means that as the redo logs become full, the database will hang until the DBA issues the ARCHIVE LOG ALL command, which archives all the online redo log files not yet archived. Figure 12.1 shows a database configured for ARCHIVELOG mode.

FIGURE 12.1  A database configured for ARCHIVELOG mode
[Diagram: the LGWR process writes to the online redo logs Log1.Log through Log4.Log (one active, three inactive); the ARCn archive process copies completed logs to the Archive_Dump_Dest directory /disk01/arch01 on Disk 1 as Arch_Log1.Log, Arch_Log2.Log, and Arch_Log3.Log.]
Let's look at an example of how to tell whether the database is in ARCHIVELOG or NOARCHIVELOG mode. You will need to run SVRMGRL and execute the following SQL statement, which queries one of the V$ views. Alternatively, you can perform OS commands such as ps -ef | grep arch in Unix, which check the process list to see whether the ARCn process is running. This process does the work of copying the archive logs from the redo logs.

This example shows ARCHIVELOG mode using the V$ views:

[oracle@DS-LINUX /oracle]$ svrmgrl

Oracle Server Manager Release 3.1.5.0.0 - Production

(c) Copyright 1997, Oracle Corporation.  All Rights Reserved.

Oracle8i Enterprise Edition Release 8.1.5.0.0 - Production
With the Partitioning and Java options
PL/SQL Release 8.1.5.0.0 - Production

SVRMGR> connect internal
Connected.
SVRMGR> select name,log_mode from v$database;
NAME      LOG_MODE
--------- ------------
BRDB      ARCHIVELOG
1 row selected.
SVRMGR>

This example shows NOARCHIVELOG mode using the V$ views:

[oracle@DS-LINUX /oracle]$ svrmgrl

Oracle Server Manager Release 3.1.5.0.0 - Production

(c) Copyright 1997, Oracle Corporation.  All Rights Reserved.

Oracle8i Enterprise Edition Release 8.1.5.0.0 - Production
With the Partitioning and Java options
PL/SQL Release 8.1.5.0.0 - Production

SVRMGR> connect internal
Connected.
SVRMGR> select name,log_mode from v$database;
NAME      LOG_MODE
--------- ------------
BRDB      NOARCHIVELOG
1 row selected.
SVRMGR>

The following is an example of using the Unix OS command ps -ef | grep arch to see whether the archiver process is running. This is more indirect than the V$ view output, but if the archiver process is running, then the database must be in ARCHIVELOG mode.

[DS-HPUX oracle] ps -ef | grep arch
  oracle  7712     1  0   Jan 16  ?      15:55 ora_arch_brdb
  oracle  6665  6643  0 21:21:14  ttyp1   0:00 grep arch
[DS-HPUX oracle]
A couple of methods exist for determining the location of the archived logs. The first is to execute the SHOW PARAMETER command, and the second is to view the init.ora file. An example of using the SHOW PARAMETER command to display the value of LOG_ARCHIVE_DEST is as follows:

SVRMGR> show parameter log_archive_dest
NAME                                TYPE    VALUE
----------------------------------- ------- -----------------------
log_archive_dest                    string  /oracle/admin/brdb/arch
log_archive_dest_1                  string
log_archive_dest_2                  string
log_archive_dest_3                  string
log_archive_dest_4                  string
log_archive_dest_5                  string
log_archive_dest_state_1            string  enable
log_archive_dest_state_2            string  enable
log_archive_dest_state_3            string  enable
log_archive_dest_state_4            string  enable
log_archive_dest_state_5            string  enable
SVRMGR>
An example of viewing the init.ora file to display the LOG_ARCHIVE_DEST is listed here:

background_dump_dest          = /oracle/admin/brdb/bdump
compatible                    = 8.1.5
control_files                 = /oracle/data/brdb/control1.ctl,
                                /oracle/data/brdb/control2.ctl
core_dump_dest                = /oracle/admin/brdb/cdump
db_block_buffers              = 300
db_block_size                 = 8192
db_file_multiblock_read_count = 8
db_files                      = 64
db_name                       = brdb
dml_locks                     = 100
global_names                  = false
log_archive_dest              = /oracle/admin/brdb/arch
log_archive_format            = archbrdb_%s.log
log_archive_start             = FALSE
log_buffer                    = 40960
log_checkpoint_interval       = 10000
max_dump_file_size            = 51200
max_rollback_segments         = 60
processes                     = 40
rollback_segments             = (rb01,rb02,rb03,rb04,rb05,rb06)
shared_pool_size              = 16000000
user_dump_dest                = /oracle/admin/brdb/udump
utl_file_dir                  = /apps/web/master
utl_file_dir                  = /apps/web/upload
utl_file_dir                  = /apps/web/remove
utl_file_dir                  = /apps/web/archive
Setting Automatic Archiving

To configure a database for automatic archiving, you must perform a series of steps:

1. Edit the init.ora file and set the LOG_ARCHIVE_START parameter to TRUE. This will automate the archiving of redo logs as they become full.

2. Shut down the database and restart the database by using the command STARTUP MOUNT.

3. Use the ALTER DATABASE ARCHIVELOG command to set ARCHIVELOG mode.

4. Open the database with the ALTER DATABASE OPEN command.
To verify that the database is actually archiving, you should execute the ALTER SYSTEM SWITCH LOGFILE command. Following the execution of this command, check the OS directory specified by the parameter LOG_ARCHIVE_DEST to validate that archive log files are present. You can also execute the ARCHIVE LOG LIST command to display information that confirms the database is in ARCHIVELOG mode and automatic archival is enabled.

Now let's walk through this process. First, by editing the init.ora file and changing the parameter LOG_ARCHIVE_START to TRUE, the database will be in automatic archive mode. See the example init.ora file below.

background_dump_dest          = /oracle/admin/brdb/bdump
compatible                    = 8.1.5
control_files                 = /oracle/data/brdb/control1.ctl,
                                /oracle/data/brdb/control2.ctl
core_dump_dest                = /oracle/admin/brdb/cdump
db_block_buffers              = 300
db_block_size                 = 8192
db_file_multiblock_read_count = 8
db_files                      = 64
db_name                       = brdb
dml_locks                     = 100
global_names                  = false
#log_archive_buffer_size      = 64
#log_archive_buffers          = 4
log_archive_dest              = /oracle/admin/brdb/arch
log_archive_format            = archbrdb_%s.log
log_archive_start             = TRUE
log_buffer                    = 40960
log_checkpoint_interval       = 10000
max_dump_file_size            = 51200
max_rollback_segments         = 60
processes                     = 40
rollback_segments             = (rb01,rb02,rb03,rb04,rb05,rb06)
shared_pool_size              = 16000000
user_dump_dest                = /oracle/admin/brdb/udump
utl_file_dir                  = /apps/web/master
utl_file_dir                  = /apps/web/upload
utl_file_dir                  = /apps/web/remove
utl_file_dir                  = /apps/web/archive
Next, run the following commands in SVRMGR:

SVRMGR> shutdown
SVRMGR> startup mount
SVRMGR> alter database archivelog;
SVRMGR> alter database open;
The automatic archival feature has now been enabled for this database. To verify that the database has been configured correctly, you can perform the following checks. First, perform an ALTER SYSTEM SWITCH LOGFILE. This will need to be done n + 1 times, where n is the number of redo logs in your database.

SVRMGR> alter system switch logfile;
Statement processed.

Next, perform a directory listing of the archive destination in LOG_ARCHIVE_DEST. The Unix command pwd displays the current working directory, and ls shows the contents of the directory.

[oracle@DS-LINUX arch]$ pwd
/oracle/admin/brdb/arch
[oracle@DS-LINUX arch]$ ls -la
total 43
drwxr-xr-x   2 oracle   dba     1024 Feb  9 23:10 .
drwxr-xr-x  12 oracle   dba     1024 Feb  9 23:00 ..
-rw-r-----   1 oracle   dba    16896 Feb  9 22:59 archbrdb_71.log
-rw-r-----   1 oracle   dba    16896 Feb  9 23:09 archbrdb_72.log
-rw-r-----   1 oracle   dba     1024 Feb  9 23:10 archbrdb_73.log
-rw-r-----   1 oracle   dba     1024 Feb  9 23:10 archbrdb_74.log
-rw-r-----   1 oracle   dba     1024 Feb  9 23:10 archbrdb_75.log
-rw-r-----   1 oracle   dba     1024 Feb  9 23:10 archbrdb_76.log
-rw-r-----   1 oracle   dba     1024 Feb  9 23:10 archbrdb_77.log
[oracle@DS-LINUX arch]$
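Comparing the destination's contents before and after the log switches can also be scripted. The sketch below is ours, not from the book: it takes a sorted listing saved before the switches and prints only the files that have appeared since.

```shell
# new_archives BEFORE_LISTING DIR
# BEFORE_LISTING is a file holding an 'ls' of DIR taken before the log
# switches (ls output is already sorted); prints files added since.
new_archives() {
    before=$1
    dir=$2
    # comm -13 keeps lines unique to the second input (the fresh listing)
    ls "$dir" | comm -13 "$before" -
}
```

Typical use: save `ls $ARCH_DIR > /tmp/before`, issue the ALTER SYSTEM SWITCH LOGFILE commands, then run `new_archives /tmp/before $ARCH_DIR`; an empty result means no logs were archived.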
The other method of verifying that the automatic archival feature has been enabled is to execute the ARCHIVE LOG LIST command, which displays the status of these settings.

SVRMGR> archive log list
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            /oracle/admin/brdb/arch
Oldest online log sequence     69
Current log sequence           71
SVRMGR>
If you enable ARCHIVELOG mode but forget to enable automatic archival by not editing the init.ora file and changing LOG_ARCHIVE_START to TRUE, the database will hang when it gets to the last available redo log. You will need to perform manual archiving as a temporary fix, or shut down and restart the database after changing the LOG_ARCHIVE_START parameter.
Using init.ora Parameters for Multiple Archive Processes and Locations

The capability to have more than one archive log destination and multiple archivers was first introduced in Oracle8. In Oracle8i, more than two destinations can be used, providing even greater archive log redundancy. The main reason for having multiple destinations for archived log files is to eliminate any single point of failure. For example, if you were to lose the disk storing the archive logs before these logs were backed up to tape, the database would be vulnerable to data loss in the event of a failure. If the disk containing the archive logs were lost, the safest thing to do would be to run a backup immediately. This would ensure that no data would be lost in the event of a database crash from media failure. Having only one archive log location is a single point of failure for the backup process. Hence, Oracle has provided multiple locations, which can be on different disk drives, so that the likelihood of archive logs being lost is significantly reduced. See Figure 12.2, which demonstrates ARCHIVELOG mode with multiple destinations.
FIGURE 12.2  ARCHIVELOG mode with multiple destinations
[Diagram: the LGWR process writes to the online redo logs Log1.Log through Log4.Log (one active, three inactive); two archive processes, ARC0 and ARC1, copy completed logs to two destinations: /disk01/arch01 on Disk 1 (Archive_Dump_Dest_1 or Archive_Dump_Dest) and /disk02/arch02 on Disk 2 (Archive_Dump_Dest_2 or Archive_Dump_Dest_Duplex), each holding Arch_Log1.Log, Arch_Log2.Log, and Arch_Log3.Log.]

Note: The archive_dump_dest_1 and archive_dump_dest_2 init.ora parameters work in conjunction with each other, as do archive_dump_dest and archive_dump_dest_duplex.
Having multiple archive processes can make the archive log creation process faster. If significant volumes of data are going through the redo logs, the archiver can become a point of contention; database activity could be delayed while Oracle waits for a redo log to be archived before reusing it. Furthermore, the archiver has more work to do if the database is writing to multiple destinations. Thus, multiple archive processes can do the extra work to support the additional archive destinations. To implement these new features, the database must be in ARCHIVELOG mode. (To set the database to ARCHIVELOG mode, perform the steps shown earlier in the section "Configuring a Database for ARCHIVELOG Mode and Automatic Archiving.") To verify that the database is in ARCHIVELOG mode,
either run an ARCHIVE LOG LIST command or query the V$DATABASE view, as shown earlier. To configure a database for multiple archive processes and archive log destinations, you use two sets of init.ora parameters. In the first set, the destination parameter has been slightly changed to LOG_ARCHIVE_DEST_N (where N is a number from 1 to 5). The values used here for LOG_ARCHIVE_DEST_1 and LOG_ARCHIVE_DEST_2 are 'location=/oracle/admin/brdb/arch1' and 'location=/oracle/admin/brdb/arch2'. The first set of parameters is listed below in init.ora.

background_dump_dest          = /oracle/admin/brdb/bdump
compatible                    = 8.1.5
control_files                 = /oracle/data/brdb/control1.ctl,
                                /oracle/data/brdb/control2.ctl
core_dump_dest                = /oracle/admin/brdb/cdump
db_block_buffers              = 300
db_block_size                 = 8192
db_file_multiblock_read_count = 8
db_files                      = 64
db_name                       = brdb
dml_locks                     = 100
global_names                  = false
log_archive_dest_1            = 'location=/oracle/admin/brdb/arch1'
log_archive_dest_2            = 'location=/oracle/admin/brdb/arch2'
log_archive_max_processes     = 2
log_archive_format            = archbrdb_%s.log
log_archive_start             = TRUE
log_buffer                    = 40960
log_checkpoint_interval       = 10000
max_dump_file_size            = 51200
max_rollback_segments         = 60
processes                     = 40
rollback_segments             = (rb01,rb02,rb03,rb04,rb05,rb06)
shared_pool_size              = 16000000
user_dump_dest                = /oracle/admin/brdb/udump
utl_file_dir                  = /apps/web/master
utl_file_dir                  = /apps/web/upload
utl_file_dir                  = /apps/web/remove
utl_file_dir                  = /apps/web/archive
After these parameters are changed or added, restart the database to put the new archive locations into effect.

The second set of parameters is LOG_ARCHIVE_DEST and LOG_ARCHIVE_DUPLEX_DEST. This example uses LOG_ARCHIVE_DEST = /oracle/admin/brdb/arch1 and LOG_ARCHIVE_DUPLEX_DEST = /oracle/admin/brdb/arch2. The main difference in this approach is that you use these parameters if you are going to have only two locations or want to use the same init.ora parameter format supported in 8.0.x. This second set of parameters can be seen below in the init.ora parameter values.

background_dump_dest          = /oracle/admin/brdb/bdump
compatible                    = 8.1.5
control_files                 = /oracle/data/brdb/control1.ctl,
                                /oracle/data/brdb/control2.ctl
core_dump_dest                = /oracle/admin/brdb/cdump
db_block_buffers              = 300
db_block_size                 = 8192
db_file_multiblock_read_count = 8
db_files                      = 64
db_name                       = brdb
dml_locks                     = 100
global_names                  = false
log_archive_dest              = /oracle/admin/brdb/arch1
log_archive_duplex_dest       = /oracle/admin/brdb/arch2
log_archive_max_processes     = 2
log_archive_format            = archbrdb_%s.log
log_archive_start             = TRUE
log_buffer                    = 40960
log_checkpoint_interval       = 10000
max_dump_file_size            = 51200
max_rollback_segments         = 60
processes                     = 40
rollback_segments             = (rb01,rb02,rb03,rb04,rb05,rb06)
shared_pool_size              = 16000000
user_dump_dest                = /oracle/admin/brdb/udump
utl_file_dir                  = /apps/web/master
utl_file_dir                  = /apps/web/upload
utl_file_dir                  = /apps/web/remove
utl_file_dir                  = /apps/web/archive
This second method of mirroring archive logs is designed to mirror just one copy of the log files, whereas the first method can mirror up to five copies, one of which can be a remote database.
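Whichever parameter set you use, it is worth checking periodically that the mirrored destination really does contain every log. A hypothetical sketch (ours, not an Oracle tool):

```shell
# missing_from_duplex PRIMARY_DIR DUPLEX_DIR
# Prints archive logs present in the primary destination but absent
# from the duplex destination; an empty result means the mirror is
# intact. Assumes the primary directory is not empty.
missing_from_duplex() {
    primary=$1
    duplex=$2
    for f in "$primary"/*; do
        name=$(basename "$f")
        [ -e "$duplex/$name" ] || echo "$name"
    done
}
```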
Make sure each LOG_ARCHIVE_DEST_N and LOG_ARCHIVE_DUPLEX_DEST is on a different physical device. The main purpose of these new parameters is to allow a copy of the files to remain intact if a disk were to crash.
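One hedged way to sanity-check that rule from the shell is to compare the filesystems the two destinations resolve to, which is a rough proxy for separate physical devices. The directory names below are illustrative, not the chapter's real archive locations.

```shell
#!/bin/sh
# Sketch: warn when two archive destinations resolve to the same
# filesystem, which usually means the same physical device.
same_filesystem_check() {
  fs1=$(df -P "$1" | awk 'NR==2 {print $1}')
  fs2=$(df -P "$2" | awk 'NR==2 {print $1}')
  if [ "$fs1" = "$fs2" ]; then
    echo "WARNING: $1 and $2 share filesystem $fs1"
  else
    echo "OK: $1 and $2 are on different filesystems"
  fi
}

# Demonstration: two directories under /tmp share a filesystem,
# so the check prints a warning.
mkdir -p /tmp/arch1 /tmp/arch2
same_filesystem_check /tmp/arch1 /tmp/arch2
```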
Copyright ©2000 SYBEX , Inc., Alameda, CA
www.sybex.com
Manually Archiving Logs
The manual archiving of logs consists of enabling the database for ARCHIVELOG mode and then manually executing the ARCHIVE LOG ALL command from SVRMGRL. The init.ora parameter LOG_ARCHIVE_START must be set to FALSE to disable automatic archival. The next step is to put the database in ARCHIVELOG mode by performing the following commands:

SVRMGR> shutdown
SVRMGR> startup mount
SVRMGR> alter database archivelog;
SVRMGR> alter database open;
Now you are ready to perform manual archiving of redo logs:

SVRMGR> archive log all;

or

SVRMGR> archive log next;
The ARCHIVE LOG ALL command will archive all redo logs available for archiving, and the ARCHIVE LOG NEXT command will archive the next group of redo logs.
Summary
The Oracle backup and recovery capability is full featured and robust. It provides many options to support a wide variety of backup and recovery situations. In this chapter, you have seen how to configure an Oracle database for backup and recovery. You have learned the ramifications of operating in ARCHIVELOG mode as opposed to NOARCHIVELOG mode, and how that choice affects the backup and recovery process. You have seen examples of how the init.ora parameters control the destinations and automation of archive logging. Finally, you walked through an example that enabled ARCHIVELOG mode and automatic archival of logs in the database.
Key Terms

Before you take the exam, make sure you’re familiar with the following terms:

ARCHIVELOG
NOARCHIVELOG
automatic archiving
manual archiving
LOG_ARCHIVE_DEST
LOG_ARCHIVE_DUPLEX_DEST
LOG_ARCHIVE_DEST_N
LOG_ARCHIVE_START
Review Questions

1. What state does the database need to be in to enable archiving?
A. Opened
B. Closed
C. Mounted
D. Unmounted

2. What is one concern a DBA should have when running in ARCHIVELOG mode?
A. Ability to perform a complete recovery
B. Shutting down the database to perform backups
C. Ability to perform an incomplete recovery
D. Increased disk space utilization

3. What two ways can you change the destination of archive log files?
A. Issuing the ALTER SYSTEM command
B. Configuring the control file
C. Changing the LOG_ARCHIVE_DEST init.ora parameter
D. Changing the LOG_ARCHIVE_DUMP init.ora parameter

4. What is the maximum number of archive log destinations that is supported?
A. One
B. Two
C. Three
D. Five
5. What type of database backup requires a shutdown of the database?
A. A database in ARCHIVELOG
B. A database in NOARCHIVELOG
C. Hot backup (online backup)
D. Cold backup (offline backup)

6. List all the methods of determining that the database is in ARCHIVELOG mode. Choose all that apply.
A. Query the V$DATABASE table
B. See whether the ARCn process is running
C. Check the value of the LOG_ARCHIVE_DEST parameter
D. View the results of the ARCHIVE LOG LIST command

7. You are the DBA for a manufacturing company that runs three shifts a day, performing work 24 hours a day on the database: taking orders, performing inventory adjustments, and shipping products. Which type of backup should you perform? Choose all that apply.
A. Hot backup in ARCHIVELOG mode
B. Cold backup in ARCHIVELOG mode
C. Online backup in NOARCHIVELOG mode
D. Online backup in ARCHIVELOG mode

8. What init.ora parameter allows no more than two archive log destinations?
A. ARCHIVE_LOG_DEST_N
B. ARCHIVE_LOG_DEST_DUPLEX
C. LOG_ARCHIVE_DEST_DUPLEX
D. LOG_ARCHIVE_DUPLEX_DEST
9. What init.ora parameter allows no more than five archive log destinations?
A. ARCHIVE_LOG_DEST_N
B. ARCHIVE_LOG_DEST_DUPLEX
C. LOG_ARCHIVE_DEST_DUPLEX
D. LOG_ARCHIVE_DUPLEX_DEST
E. LOG_ARCHIVE_DEST_N

10. What init.ora parameter allows remote archive log destinations?
A. ARCHIVE_LOG_DEST
B. ARCHIVE_LOG_DEST_DUPLEX
C. LOG_ARCHIVE_DEST_N
D. LOG_ARCHIVE_DUPLEX_DEST

11. What init.ora parameter is partially responsible for enabling automatic archiving?
A. LOG_START_AUTO
B. LOG_ARCHIVE_START
C. AUTOMATIC_ARCHIVE
D. LOG_START_ARCHIVE

12. What command is necessary to perform manual archiving?
A. MANUAL ARCHIVE ALL
B. ARCHIVE MANUAL
C. LOG ARCHIVE LIST
D. ARCHIVE LOG ALL
13. What is required to manually archive the database?
A. The database running in ARCHIVELOG mode
B. LOG_ARCHIVE_START set to TRUE
C. MANUAL_ARCHIVE set to TRUE
D. Nothing

14. What command displays whether automatic archiving is enabled as well as the value of LOG_ARCHIVE_DEST?
A. The ARCHIVE LOG ALL command.
B. The LOG ARCHIVE LIST command.
C. The ARCHIVE LOG LIST command.
D. No one command shows both parameters.

15. What command is necessary to perform manual archiving?
A. ARCHIVE LOG NEXT
B. ARCHIVE LOG LIST
C. ARCHIVE ALL LOG
D. ARCHIVE ALL

16. What will happen to the database if ARCHIVELOG mode is enabled, but LOG_ARCHIVE_START is not set to TRUE? Choose all that apply.
A. The database will perform with problems.
B. The database will hang until the archive logs are manually archived.
C. The database will not start because of improper configuration.
D. The database will work properly until all online redo logs have filled.
17. What is the most important recovery implication associated with operating a database in NOARCHIVELOG mode?
A. Most likely, no data will be lost.
B. Most likely, tables will be lost.
C. In most situations, a loss of data will occur.
D. In all situations, no data will be lost.

18. What type of recovery can be performed when the database is operating in ARCHIVELOG mode? Choose all that apply.
A. Complete recovery.
B. Incomplete recovery.
C. No recovery can be performed.
D. Incomplete recovery cannot be performed.

19. In NOARCHIVELOG mode:
A. The database must be shut down completely for backups.
B. Tablespaces must be taken offline before backups.
C. The database must be running for backups.
D. The database may be running for backups, but it isn’t required.

20. What type of backup will cause an error if performed on a database operating in NOARCHIVELOG mode?
A. Cold backup
B. Hot backup
C. OS copy of the database file when the database is open
D. OS copy of database files when the database is shut down
Answers to Review Questions

1. C. The database must be in MOUNT mode to enable archive logging.

2. D. The database will use more space because of the creation of archive logs.

3. A and C. The archive destination is controlled by the LOG_ARCHIVE_DEST init.ora parameter and by issuing ALTER SYSTEM commands when the database is running.

4. D. The maximum number of archive destinations is controlled by the LOG_ARCHIVE_DEST_N init.ora parameter. It will support up to five locations.

5. B and D. A database in NOARCHIVELOG mode will not support backups without a shutdown of the database, and a cold backup is a backup taken when the database is offline or shut down.

6. A, B, and D. The V$DATABASE view shows whether the database is in ARCHIVELOG or NOARCHIVELOG mode, the ARCn process indirectly determines that the archiver is running, and the ARCHIVE LOG LIST command shows whether archiving is enabled. Checking the value of the LOG_ARCHIVE_DEST parameter indicates only whether there is a directory to contain the archive logs, not whether the database is in ARCHIVELOG mode.

7. A or D. A hot backup and online backup are synonymous. A cold backup and offline backup are synonymous. A hot backup can be run only in ARCHIVELOG mode (so the backup method in answer C doesn’t exist). A hot backup can be run while the database is running; therefore, it does not affect the availability of the database. Because this database must have 24-hour availability, a hot or online backup would be appropriate.
8. D. The LOG_ARCHIVE_DUPLEX_DEST parameter allows for the second archive destination. Also, this parameter must be used in conjunction with the LOG_ARCHIVE_DEST parameter for the first archive log destination.

9. E. The LOG_ARCHIVE_DEST_N parameter allows up to five locations.

10. C. The LOG_ARCHIVE_DEST_N parameter allows up to five locations. One of these locations can be remote, that is, not on the same server.

11. B. The LOG_ARCHIVE_START parameter is responsible for enabling automatic archiving. The database must also be in ARCHIVELOG mode.

12. D. The ARCHIVE LOG ALL command performs manual archiving. The database must also be in ARCHIVELOG mode.

13. A. The database running in ARCHIVELOG mode is a requirement for performing manual or automatic archiving.

14. C. The ARCHIVE LOG LIST command displays whether the database has automatic archiving enabled, along with the value of LOG_ARCHIVE_DEST. It also displays other pertinent information about archiving, such as the current online redo log.

15. A. The ARCHIVE LOG NEXT command will archive the next group of redo logs.

16. B and D. The database will hang after all the online redo logs have been filled, but it will work properly as long as there are unfilled online redo logs.

17. C. In NOARCHIVELOG mode, in most recovery situations, data will be lost. This is because there are no archive logs to roll forward changes after a restored backup of a data file.
18. A and B. Both incomplete and complete recovery can be performed when operating in ARCHIVELOG mode. This is because archive logs are available to be applied completely or incompletely (for example, to a particular point in time before the failure) to the database.

19. A. The database must be shut down completely for backups. If the database is open and a backup is taken, the backup will be invalid.

20. B. A hot backup cannot be performed when the database is in NOARCHIVELOG mode; the ALTER TABLESPACE BEGIN BACKUP command will fail with an Oracle error. Answer C will not create a valid backup, but no errors will occur during the backup process.
Chapter 13

Physical Backup without Oracle Recovery Manager

ORACLE8i BACKUP AND RECOVERY EXAM OBJECTIVES OFFERED IN THIS CHAPTER:

Describe the recovery implications of closed and opened database backups
Perform closed and opened database backups
Identify the backup implications of the Logging and Nologging options
Identify the different types of control file backups
Discuss backup issues associated with read-only tablespaces
List the data dictionary views useful for backup operations

Exam objectives are subject to change at any time without prior notice and at Oracle’s sole discretion. Please visit Oracle’s Training and Certification Web site (http://education.oracle.com/certification/index.html) for the most current exam objectives listing.
A physical backup is a copy of the physical database files. A physical backup can be performed in two ways. The first is through the Recovery Manager tool (RMAN), which Oracle provides. The second option is a non-RMAN-based backup. The non-RMAN-based backup usually consists of an OS backup with a scripting language such as Korn shell or Bourne shell in the Unix environment. This chapter focuses on non-RMAN-based backups. The OS backup has been used for years as a means to back up the Oracle database. The OS backup can be a scripted solution in Unix or a third-party graphical user interface (GUI) tool in Windows. The OS backup script is a totally customized solution and therefore has the variations and inconsistencies associated with custom solutions. The trend appears to be for most larger database sites to use RMAN because of its extended capabilities and consistency in usage regardless of platform. However, the OS backup is a good training ground for understanding the physical backup fundamentals. RMAN builds upon these fundamentals. In this chapter, you will learn the various physical backup methods without using RMAN.
Understanding Closed and Opened Database Backups
A closed backup is a backup of a database that is not in the opened state. Usually this means that the database is completely shut down. As you have already learned, this kind of backup is also called a cold, or offline, backup. An opened backup is a backup of a database in the opened state. In the opened state, the database is completely available for access. An opened backup is also called a hot, or online, backup.
The main implication of a closed backup is that the database is unavailable to users until the backup is complete. One of the preferable prerequisites of doing a cold backup is that the database should be shut down with the NORMAL or IMMEDIATE options so that the database is in a consistent state. In other words, the database files are stamped with the same SCN at the same point in time. This is called a consistent backup. Because no recovery information in the archive logs needs to be applied to the data files, no recovery is necessary for a consistent, closed backup. Figure 13.1 is an example of a closed backup.

FIGURE 13.1   Physical backup utilizing the cold, offline, or closed backup approach in Unix

SVRMGRL> SHUTDOWN NORMAL
SVRMGRL> !cp /Disk1/* /staging
…
SVRMGRL> !cp /Disk5/* /staging

All physical files (archive logs, redo logs, data files, and control files on /Disk1 through /Disk5) are copied to /staging on /Disk6, then written to tape with tar cvf /dev/rmt0 /staging.

Note: Oracle Support recommends not backing up online redo logs because these are not necessary, and inexperienced DBAs could inadvertently apply these in certain recovery situations.
The main implication of an opened backup is that the database is available to users during the backup. The backup is accomplished with the command ALTER TABLESPACE BEGIN BACKUP, an OS copy command, and the command ALTER TABLESPACE END BACKUP. Refer back to Chapter 12, “Oracle Backup and Recovery Configuration,” for a more detailed discussion of hot backups. Figure 13.2 is an example of an opened backup.

FIGURE 13.2   Physical backup utilizing the hot, online, or opened backup approach in Unix

SVRMGRL> ALTER TABLESPACE BEGIN BACKUP;
SVRMGRL> ! cp /Disk3/datafile* /staging
SVRMGRL> ALTER TABLESPACE END BACKUP;

All data files associated with tablespaces on which ALTER TABLESPACE BEGIN/END BACKUP was run are copied to /staging on /Disk6, and archive logs are copied for recovery purposes; /staging is then written to tape with tar cvf /dev/rmt0 /staging.

Note: Copy all data files associated with each tablespace.
During an opened backup, the database is in an inconsistent state; in other words, the SCN information for the data files and control files is not necessarily consistent. Therefore, this is referred to as an inconsistent backup. This requires recovery of the data files by applying archive logs to bring the data files to a consistent state.
Performing Closed and Opened Database Backups
A closed backup and an opened backup are performed in a similar manner, by executing OS copy commands. The closed backup can be performed just like a standard OS backup after the database has been shut down. The closed backup makes a copy of all the necessary physical files that make up the database, including the data files, online redo logs, control files, and parameter files. The opened backup is also executed by issuing OS copy commands. These commands are issued between an ALTER TABLESPACE BEGIN BACKUP command and an ALTER TABLESPACE END BACKUP command. The opened backup requires only a copy of the data files. The ALTER TABLESPACE BEGIN BACKUP command causes a checkpoint on the data file or data files in the tablespace. This causes all dirty blocks to be written to the data file, and the data-file header is stamped with the SCN consistent with those data blocks. All other changes occurring in the data file from Data Manipulation Language (DML), such as INSERT, UPDATE, and DELETE statements, get recorded in the data files and redo logs in almost the same way as during normal database operations. However, the data-file header SCN does not get updated until the ALTER TABLESPACE END BACKUP command is executed and another checkpoint occurs. To distinguish which blocks are needed in a recovery situation, Oracle writes more information to the redo logs during the period between the ALTER TABLESPACE BEGIN BACKUP and END BACKUP commands. This is one reason why data files are fundamentally different from other Oracle database files, such as redo logs, control files, and init.ora files, when it comes to backups. Redo logs, control files, and init.ora files can be copied with standard OS copy commands without performing any preparatory steps such as the ALTER TABLESPACE commands.
Even though hot backups can be performed at any time when the database is opened, it is a good idea to perform hot backups when there is the lowest DML activity. This will prevent excessive redo logging, which could impair database performance.
Here are the steps to perform a closed database backup:

1. Shut down the database that you want to back up. Make sure that a SHUTDOWN NORMAL, IMMEDIATE, or TRANSACTIONAL is used, and not a SHUTDOWN ABORT.

SVRMGR> shutdown;
SVRMGR> shutdown immediate;
SVRMGR> shutdown transactional;

2. With the database shut down, perform an OS copy of all the data files, parameter files, and control files to disk or a tape device. In Unix, perform the following commands to copy the data files, parameter files, and control files to a disk staging location where they await copy to tape.

cp /db01/ORACLE/brdb/*.dbf /stage        # data files
cp /oracle/admin/brdb/pfile/* /stage     # init.ora files
cp /db01/ORACLE/brdb/*.ctl /stage        # control files location 1
cp /db02/ORACLE/brdb/*.ctl /stage        # control files location 2
cp /redo01/ORACLE/brdb/*.log /stage      # online redo logs group 1
cp /redo02/ORACLE/brdb/*.log /stage      # online redo logs group 2
3. Restart the database and proceed with normal database operations.
SVRMGR> startup;
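Steps 1 through 3 are commonly wrapped in a small script. The sketch below shows only the copy loop of step 2, exercised against dummy files under /tmp rather than a real database; all of the paths are illustrative stand-ins for the chapter's /db01 and /redo01 locations.

```shell
#!/bin/sh
# Sketch of the step 2 copy loop: stage data files, control files,
# and online redo logs before writing them to tape.
STAGE=/tmp/stage
mkdir -p "$STAGE"

# Dummy database layout standing in for /db01 and /redo01.
mkdir -p /tmp/db01 /tmp/redo01
touch /tmp/db01/system01.dbf /tmp/db01/control01.ctl /tmp/redo01/redo01a.log

# One cp per file class, mirroring the commands above.
for f in /tmp/db01/*.dbf /tmp/db01/*.ctl /tmp/redo01/*.log; do
  cp "$f" "$STAGE"
done

ls "$STAGE"
```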
Here are the steps used to perform an opened database backup. The database is available to users during these operations, although the response time for users may be decreased depending on what tablespace is being backed up.

1. To determine all the tablespaces that make up the database, query the V$TABLESPACE and V$DATAFILE dynamic views. Below is the SQL statement that identifies tablespaces and their associated data files.

select a.TS#, a.NAME, b.NAME
from v$tablespace a, v$datafile b
where a.TS# = b.TS#;

2. Determine all the data files that make up each tablespace. Each tablespace can be made up of many data files. All data files associated with the tablespace need to be copied when the tablespace is in backup mode. Perform the above query by connecting to SVRMGRL.

[oracle@DS-LINUX /oracle]$ svrmgrl

Oracle Server Manager Release 3.1.5.0.0 - Production
(c) Copyright 1997, Oracle Corporation. All Rights Reserved.
Oracle8i Enterprise Edition Release 8.1.5.0.0 - Production
With the Partitioning and Java options
PL/SQL Release 8.1.5.0.0 - Production

SVRMGR> connect internal
Connected.
SVRMGR> select a.TS#,a.NAME,b.NAME from
2> v$tablespace a, v$datafile b
3> where a.TS#=b.TS#;
TS#        NAME          NAME
---------- ------------- ------------------------------
         0 SYSTEM        /db01/ORACLE/brdb/system01.dbf
         1 RBS           /db01/ORACLE/brdb/rbs01.dbf
         2 TEMP          /db01/ORACLE/brdb/temp01.dbf
         3 USERS         /db01/ORACLE/brdb/users01.dbf
         3 USERS         /db01/ORACLE/brdb/users02.dbf
         4 TOOLS         /db01/ORACLE/brdb/tools01.dbf
         5 DATA          /db01/ORACLE/brdb/data01.dbf
         6 INDX          /db01/ORACLE/brdb/indx01.dbf
3. Put the tablespaces in backup mode.

SVRMGR> alter tablespace users begin backup;
Statement processed.

4. Perform an OS copy of each data file associated with the tablespace in backup mode.

SVRMGR> ! cp /db01/ORACLE/brdb/users01.dbf /stage/.
SVRMGR> ! cp /db01/ORACLE/brdb/users02.dbf /stage/.

5. End backup mode for the tablespace.

SVRMGR> alter tablespace users end backup;
Statement processed.

This series of commands can be repeated for every tablespace and associated data file that make up the database. The database must be in ARCHIVELOG mode to execute the ALTER TABLESPACE BEGIN and END BACKUP commands. Typically, this type of backup is done with a scripting language in Unix or a third-party GUI utility in the Windows environment. Using Unix shell scripts, a list of data files and tablespaces is dumped to a file listing and parsed into svrmgr and cp commands so that all tablespaces and data files are backed up together.
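The parsing step just described can be sketched as a small generator: given "tablespace data-file" pairs, as spooled from the V$TABLESPACE/V$DATAFILE query, it emits the BEGIN BACKUP, cp, and END BACKUP commands. The input format and the /stage path are assumptions for illustration, not a fixed convention.

```shell
#!/bin/sh
# Sketch: turn "tablespace datafile" pairs into hot-backup commands.
gen_hot_backup() {
  awk '
    # Group data files by tablespace, preserving input order.
    {
      if (!($1 in seen)) { order[++n] = $1; seen[$1] = 1 }
      files[$1] = files[$1] " " $2
    }
    END {
      for (i = 1; i <= n; i++) {
        ts = order[i]
        printf "alter tablespace %s begin backup;\n", ts
        m = split(files[ts], f, " ")
        for (j = 1; j <= m; j++)
          if (f[j] != "") printf "! cp %s /stage/.\n", f[j]
        printf "alter tablespace %s end backup;\n", ts
      }
    }'
}

# Demonstration with two tablespaces from this section's query output.
printf '%s\n' \
  'USERS /db01/ORACLE/brdb/users01.dbf' \
  'USERS /db01/ORACLE/brdb/users02.dbf' \
  'TOOLS /db01/ORACLE/brdb/tools01.dbf' | gen_hot_backup
```

The generated text can then be fed to svrmgrl, which is how the scripted hot backups described above are usually assembled.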
During an opened or closed backup, it is a good idea to get a backup control file, all archive logs, and a copy of the parameter files. These can be packaged with the data files so that all necessary or potentially necessary components for recovery are grouped together. This is called a whole database backup. You can find the location of these database components by selecting information from V$ views (discussed in the section “Using Data Dictionary Views for Backups”).
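One common way to keep those components together is a single dated archive of the staging area. This is a sketch with illustrative paths and dummy stand-in files, not a prescribed layout.

```shell
#!/bin/sh
# Sketch: bundle a whole-database-backup staging area into one
# dated tar archive so data files, the backup control file, and
# archive logs travel together.
STAGE=/tmp/whole_stage
mkdir -p "$STAGE"

# Dummy stand-ins for a data file, a backup control file, and an
# archive log.
touch "$STAGE/users01.dbf" "$STAGE/control.ctl.bak" "$STAGE/arch_100.log"

ARCHIVE="/tmp/brdb_backup_$(date +%Y%m%d).tar"
tar -cf "$ARCHIVE" -C "$STAGE" .
tar -tf "$ARCHIVE"
```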
Understanding the Logging and Nologging Options
Logging and Nologging options have significantly different backup implications. Logging is the default for any table or index and for INSERT, UPDATE, and DELETE statements. Logging is the recording of the change activity to the online redo logs and archive logs. This is the information that is used to recover the database in the event of a failure. Nologging is not the default and must be explicitly activated. Nologging is primarily utilized to improve performance and reduce archive log generation on large objects. Nologging deactivates the recording of change activities in the online redo logs and archive logs. Therefore, the recovery information for any actions that are performed with Nologging activated will not be recorded. The main backup implication is that with the Nologging option enabled, the object that was involved in the Nologging event is at risk for a period of time. If a failure occurs before a hot or cold backup takes place, no supporting archive log information for this event will exist. This is because the recording of the recovery information to the online redo logs (and as a result, also to the archive logs) has been deactivated.
After each NOLOGGING operation, a backup should be performed if rebuilding the Nologging objects is not an option or would be a significant effort in the event of a failure. Each object created with Nologging is at risk until a backup is taken.
Backing Up Control Files
There are three types of control-file backups. The first two of these three are non-RMAN. The first type is performed by executing a command that creates a binary copy of the existing control file in a new directory location. For example, the following command performs a binary copy of the control file:

SVRMGR> alter database backup controlfile to '/stage/control.ctl.bak';

The second type of control-file backup creates an ASCII copy of the current control file as a trace file in the USER_DUMP_DEST location. The parameter USER_DUMP_DEST should be set in your init.ora file. In a configuration compliant with Optimal Flexible Architecture (OFA), this will be the udump directory. The backup of the control file can be performed by executing the following command:

SVRMGR> alter database backup controlfile to trace;

The output of the trace file looks like the following:

Dump file /oracle/admin/brdb/udump/ora_23302.trc
Oracle8i Enterprise Edition Release 8.1.5.0.0 - Production
With the Partitioning and Java options
PL/SQL Release 8.1.5.0.0 - Production
ORACLE_HOME = /oracle/product/8.1.5
System name: Linux
Node name: DS-LINUX
Release: 2.2.12-20
Version: #1 Mon Sep 27 10:40:35 EDT 1999
Machine: i686
Instance name: brdb
Redo thread mounted by this instance: 1
Oracle process number: 10
Unix process pid: 23302, image: oracle@DS-LINUX
*** SESSION ID:(9.106) 2000.03.06.23.44.56.665
*** 2000.03.06.23.44.56.665
# The following commands will create a new control file and use it
# to open the database.
# Data used by the recovery manager will be lost. Additional logs may
# be required for media recovery of offline data files. Use this
# only if the current version of all online logs are available.
STARTUP NOMOUNT
CREATE CONTROLFILE REUSE DATABASE "BRDB" NORESETLOGS ARCHIVELOG
    MAXLOGFILES 16
    MAXLOGMEMBERS 2
    MAXDATAFILES 100
    MAXINSTANCES 1
    MAXLOGHISTORY 226
LOGFILE
  GROUP 1 (
    '/redo01/ORACLE/brdb/redo01a.log',
    '/redo02/ORACLE/brdb/redo01b.log'
  ) SIZE 2M,
  GROUP 2 (
    '/redo01/ORACLE/brdb/redo02a.log',
    '/redo02/ORACLE/brdb/redo02b.log'
  ) SIZE 2M,
  GROUP 3 (
    '/redo01/ORACLE/brdb/redo03a.log',
    '/redo02/ORACLE/brdb/redo03b.log'
  ) SIZE 2M
DATAFILE
  '/db01/ORACLE/brdb/system01.dbf',
  '/db01/ORACLE/brdb/rbs01.dbf',
  '/db01/ORACLE/brdb/temp01.dbf',
  '/db01/ORACLE/brdb/users01.dbf',
  '/db01/ORACLE/brdb/tools01.dbf',
  '/db01/ORACLE/brdb/data01.dbf',
  '/db01/ORACLE/brdb/indx01.dbf'
CHARACTER SET US7ASCII
;
# Recovery is required if any of the datafiles are restored backups,
# or if the last shutdown was not normal or immediate.
RECOVER DATABASE
# All logs need archiving and a log switch is needed.
ALTER SYSTEM ARCHIVE LOG ALL;
# Database can now be opened normally.
ALTER DATABASE OPEN;
# No tempfile entries found to add.
#
The control-file backup to ASCII can be used as part of a common technique of moving production databases to test and development servers. This technique can be useful to test backups.
The third type of control-file backup is performed through the RMAN utility by executing the BACKUP CONTROLFILECOPY command. Below is a brief example of using this command within the RMAN utility after connecting to the appropriate target database and RMAN catalog database. You will learn about RMAN commands in more detail in later chapters.

run {
  allocate channel ch1 type disk;
  backup controlfilecopy '/stage/copy/control.ctl.bak';
}
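When the trace backup is later used to rebuild a control file, the trace-file header must be stripped so only the SQL remains. A hedged sketch follows; the trace text below is a shortened stand-in for real udump output, and the extraction simply keys off the STARTUP NOMOUNT line where the runnable script begins.

```shell
#!/bin/sh
# Sketch: keep only the runnable SQL from a BACKUP CONTROLFILE TO
# TRACE file, which starts at the STARTUP NOMOUNT line.
extract_create_sql() {
  sed -n '/^STARTUP NOMOUNT/,$p' "$1"
}

# Shortened stand-in for a real trace file.
cat > /tmp/ora_sample.trc <<'EOF'
Dump file /oracle/admin/brdb/udump/ora_23302.trc
Instance name: brdb
# The following commands will create a new control file and use it
# to open the database.
STARTUP NOMOUNT
CREATE CONTROLFILE REUSE DATABASE "BRDB" NORESETLOGS ARCHIVELOG
;
ALTER DATABASE OPEN;
EOF

extract_create_sql /tmp/ora_sample.trc
```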
Working with Read-Only Tablespaces
The backup and recovery of a read-only tablespace requires unique procedures in certain situations. The backup and recovery process changes depending on the state of the tablespace at the time of backup and the time of recovery, and the state of the control file. This section discusses the implications of each of these situations. A backup of a read-only tablespace requires different procedures from those used in a backup of a read-write tablespace. The read-only tablespace
is, as its name suggests, marked read-only; in other words, all write activity is disabled. Therefore, the SCN does not change after the tablespace has been made read-only, as long as it stays read-only. This means that after a database failure, no recovery is needed for a tablespace marked read-only if the tablespace was read-only at the time of the backup. The read-only tablespace could simply be restored and no archive logs would get applied during the recovery process. If a backup is restored that contains a tablespace that was in read-write mode at the time of the backup, but is in read-only at the time of failure, then a recovery would need to be performed. This is because changes would be made during read-write mode. Archive logs would be applied up until the tablespace was made read-only. The state of the control file also affects the recovery process of read-only tablespaces. During recovery of a backup control file, or recovery when there is no current control file, read-only tablespaces should be taken offline or you will get an ORA-1233 error. The control file cannot be created with a read-only tablespace online. The data file or data files associated with the read-only tablespace must be taken offline before the recovery command is issued. After the database is recovered, the read-only tablespace can be brought online. Refer to Chapter 17, “Additional Recovery Issues,” for a more detailed discussion of this topic.
Using Data Dictionary Views for Backups
A few data dictionary views are useful in the backup process. These views are V$DATAFILE, V$DATAFILE_HEADER, V$TABLESPACE, V$BACKUP, V$CONTROLFILE, and V$LOGFILE. The combination of the V$DATAFILE and V$TABLESPACE views enables the DBA to identify the appropriate tablespaces and data files to back up. The V$BACKUP view displays the backup status of each online data file. The following example shows the V$DATAFILE view, which is used to identify all the data files in the database.
SVRMGR> select file#,status,name from v$datafile;

FILE#      STATUS  NAME
---------- ------- ------------------------------------
         1 SYSTEM  /db01/ORACLE/brdb/system01.dbf
         2 ONLINE  /db01/ORACLE/brdb/rbs01.dbf
         3 ONLINE  /db01/ORACLE/brdb/temp01.dbf
         4 ONLINE  /db01/ORACLE/brdb/users01.dbf
         5 ONLINE  /db01/ORACLE/brdb/tools01.dbf
         6 ONLINE  /db01/ORACLE/brdb/data01.dbf
         7 ONLINE  /db01/ORACLE/brdb/indx01.dbf
7 rows selected.
SVRMGR>

The next example is a query of the V$DATAFILE_HEADER view. This view is useful for identifying the data files that are also being backed up. This example shows that the FUZZY column in data file 6 is YES, which indicates that it is being backed up.

SVRMGR> select file#,fuzzy,tablespace_name,name from v$datafile_header;

FILE#      FUZ TABLESPACE_NAME    NAME
---------- --- ------------------ -----------------------
         1     SYSTEM             /db01/ORACLE/brdb/system01.dbf
         2     RBS                /db01/ORACLE/brdb/rbs01.dbf
         3     TEMP               /db01/ORACLE/brdb/temp01.dbf
         4     USERS              /db01/ORACLE/brdb/users01.dbf
         5     TOOLS              /db01/ORACLE/brdb/tools01.dbf
         6 YES DATA               /db01/ORACLE/brdb/data01.dbf
         7     INDX               /db01/ORACLE/brdb/indx01.dbf
7 rows selected.
Some Oracle documentation notes that values can be YES or NO. However, we have found them to be YES or NULL.
The following example is a query of the V$CONTROLFILE view. This view is useful for identifying the control files, which are used for a cold database backup.

SVRMGR> select * from v$controlfile;

STATUS  NAME
------- ----------------------------------------------
        /db01/ORACLE/brdb/control01.ctl
        /db02/ORACLE/brdb/control02.ctl
2 rows selected.
SVRMGR>

The following example is a query of the V$LOGFILE view. This view is useful for identifying the online redo log files, which are used for a cold database backup.
SVRMGR> select * from v$logfile;
GROUP#     STATUS  MEMBER
---------- ------- -----------------------------------
         1         /redo01/ORACLE/brdb/redo01a.log
         1         /redo02/ORACLE/brdb/redo01b.log
         2         /redo01/ORACLE/brdb/redo02a.log
         2         /redo02/ORACLE/brdb/redo02b.log
         3         /redo01/ORACLE/brdb/redo03a.log
         3         /redo02/ORACLE/brdb/redo03b.log
6 rows selected.
SVRMGR>

The following example is a query of the V$TABLESPACE view. This view is useful for identifying the tablespaces. It can be joined to the V$DATAFILE view to associate tablespaces and the related data files for a hot backup. You saw this query earlier, in the section “Performing Closed and Opened Database Backups.”

SVRMGR> select * from v$tablespace;
TS#        NAME
---------- ------------------------------
         0 SYSTEM
         1 RBS
         2 TEMP
         3 USERS
         4 TOOLS
         5 DATA
         6 INDX
7 rows selected.
SVRMGR>

The next example is a query of the V$BACKUP view. This view is useful for identifying which data files are actively being backed up. This example shows that data file 6 is actively being backed up. The V$BACKUP view shows only online data files.

SVRMGR> select * from v$backup;
FILE#      STATUS             CHANGE#    TIME
---------- ------------------ ---------- ---------
         1 NOT ACTIVE                  0
         2 NOT ACTIVE                  0
         3 NOT ACTIVE                  0
         4 NOT ACTIVE                  0
         5 NOT ACTIVE                  0
         6 ACTIVE                  47993 09-MAR-00
         7 NOT ACTIVE                  0
7 rows selected.
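The tablespace-to-data-file association described above can be made explicit with joins on the columns these views share. The queries below are sketches of our own (the aliases are ours): the first joins V$TABLESPACE to V$DATAFILE on TS#, and the second joins V$BACKUP to V$DATAFILE on FILE# so that backup status appears next to each file name.

```sql
-- Map each tablespace to its data files by joining on TS#.
select t.name as tablespace_name,
       d.name as file_name
from   v$tablespace t, v$datafile d
where  t.ts# = d.ts#
order by t.ts#;

-- Show the backup status alongside each data file name.
select d.name, b.status
from   v$datafile d, v$backup b
where  d.file# = b.file#;
```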
Summary
The non-RMAN backup has been commonplace at Oracle sites for years. This type of backup consists mainly of OS-based commands, such as Korn shell or Bourne shell commands in the Unix environment; in the Windows environment, similar capability is available in third-party GUI tools. The fundamentals of backing up an Oracle database can be clearly seen in the OS-based backup. Some of these basic techniques are utilized in the hot, or opened, backup examples. A solid understanding of these concepts is necessary to fully understand the backup process with RMAN-based backups.
Key Terms

Before you take the exam, make sure you’re familiar with the following terms:

physical backup
closed backup
opened backup
consistent backup
inconsistent backup
whole database backup
Logging
Nologging
read-only tablespace
read-write tablespace
V$DATAFILE
V$DATAFILE_HEADER
V$TABLESPACE
V$BACKUP
V$CONTROLFILE
V$LOGFILE
Review Questions

1. Which option disables the writing of redo information to the redo logs?

A. Nowrite option
B. NOARCHIVELOG mode
C. Nologging
D. Logging

2. In the event of a failure, how would you get back an object and its associated contents created with the Nologging option? Choose all that apply.

A. The object would be recovered in the recovery process if operating in ARCHIVELOG mode.
B. By manually rebuilding the object.
C. The object would be recovered in the recovery process if operating in NOARCHIVELOG mode.
D. There is no way of recovering the object with archive logs.

3. Which three views are useful in the non–Recovery Manager backup process? Choose all that apply.

A. V$BACKUP
B. V$DATAFILE
C. V$TABLESPACE
D. V$PROCESS

4. Which type of backup most closely represents an opened backup? Choose all that apply.

A. Online backup
B. Offline backup
C. Hot backup
D. Cold backup
5. In the event of a database failure that requires a full database restore, what type of recovery is necessary for a read-only tablespace?

A. None, if the restored copy was made when the tablespace was read-only.
B. Redo log entries must be applied to the read-only tablespace, regardless of the state of the tablespace copy.
C. Redo log entries must be applied to the read-only tablespace, if it was in read-write mode at the time of the backup used for the restore.

6. To perform a closed backup, the database must be in what state? Choose all that apply.

A. Opened
B. Mounted
C. Shut down

7. What type of shutdown is not supported for a closed backup?

A. SHUTDOWN NORMAL
B. SHUTDOWN IMMEDIATE
C. SHUTDOWN ABORT
D. SHUTDOWN TRANSACTIONAL

8. What type of backup is consistent?

A. Online backup
B. Opened backup
C. Hot backup
D. Cold backup
9. What type of backup is inconsistent?

A. Cold backup
B. Online backup
C. Opened backup
D. Closed backup

10. What special procedure must be performed on the data file or data files of a read-only tablespace during a recovery using a backup control file?

A. Data file placed online.
B. Data file dropped.
C. Data file placed offline.
D. No special procedure is necessary.

11. If a read-only tablespace is restored from a backup taken when the data file was read-only, what type of recovery is necessary?

A. Data file recovery.
B. Tablespace recovery.
C. Database recovery.
D. No recovery is needed.

12. What are valid ways to back up a control file while the database is running?

A. Back up to trace file
B. Back up to binary control file
C. OS copy to tape
D. Back up to restore file
Answers to Review Questions

1. C. The Nologging option turns off the recording of redo information in the redo logs.

2. B and D. Objects that have been created with the Nologging option, such as tables and indexes, must be either manually rebuilt or created from an export dump. There is no way to recover the data with archive logs, because the data was never written to the redo logs in the first place.

3. A, B, and C. V$BACKUP, V$DATAFILE, and V$TABLESPACE are all used in the backup process. V$PROCESS shows the database processes and has nothing to do with the OS backup process.

4. A and C. An opened backup is performed when the database is opened or available for access (online). A hot backup and an online backup are synonymous.

5. A and C. In a read-only tablespace, the SCN doesn’t change and no changes are applied. So if the backup of the tablespace was taken when the tablespace was read-only, no recovery is necessary. If the backup was taken when the tablespace was read-write, then redo logs need to be applied. The redo logs in this case also contain the command that put the tablespace into read-only mode.

6. B or C. To perform a closed backup, the database must not be opened. Typically, closed backups are performed when the database is shut down. The database may be mounted, but not opened, and a valid closed backup can still be performed.

7. C. SHUTDOWN ABORT is not supported because it doesn’t force a checkpoint; therefore, instance recovery is necessary on start-up to make the data files consistent. All other shutdown types checkpoint the data files at shutdown to a consistent state.

8. D. A cold backup ensures that all the SCNs in the data files are consistent for a single point in time.
9. B and C. An opened backup and an online backup both back up the data files with different SCNs in the headers, requiring recovery during a restore operation.

10. C. A data file must be taken offline before recovery using a backup control file.

11. D. No recovery is needed because the tablespace was read-only during the backup and at the time of failure.

12. A and B. Backing up to a trace file and backing up to a binary control file are both valid backups of the control file. The other options are made up.
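As a reminder of the syntax behind answer 12, the two supported control file backups look like the following. This is a sketch; the destination path is an example of ours, not a required location:

```sql
-- Write a CREATE CONTROLFILE script to a trace file in user_dump_dest.
alter database backup controlfile to trace;

-- Write a binary copy of the control file to the named location.
alter database backup controlfile to '/stage/control.bak';
```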
Chapter 14

Complete Recovery without Recovery Manager

ORACLE8i BACKUP AND RECOVERY EXAM OBJECTIVES OFFERED IN THIS CHAPTER:

Describe the implications of a media failure in NOARCHIVELOG and ARCHIVELOG modes

Recover a database in different situations in NOARCHIVELOG and ARCHIVELOG modes

In NOARCHIVELOG and ARCHIVELOG modes, restore files to a different location if media failure occurs

List the data dictionary views required to recover a database after a media failure

Exam objectives are subject to change at any time without prior notice and at Oracle’s sole discretion. Please visit Oracle’s Training and Certification Web site (http://education.oracle.com/certification/index.html) for the most current exam objectives listing.
There are two methods of performing a complete recovery. The first method is manual, or non–Recovery Manager (RMAN); the second is to use the RMAN utility. This chapter focuses on the first method. As we have discussed in previous chapters, the decision of whether to operate in ARCHIVELOG mode or NOARCHIVELOG mode has significant implications for the recovery options that can be performed. This chapter covers those implications in further detail. You will follow recovery procedures in different scenarios to learn some of the options and techniques that are available; this provides good practice and a test of your backup and recovery strategies. You also will look at some views that can assist the DBA in the recovery process. These views provide detailed information about the data files and tablespaces involved in recovery.
Recovering from Media Failure in NOARCHIVELOG and ARCHIVELOG Modes
One of the most significant backup and recovery decisions a DBA can make is whether to operate in ARCHIVELOG mode or NOARCHIVELOG mode. The outcome of this decision dramatically affects the backup and recovery options available. In ARCHIVELOG mode, the database generates historical changes in the form of offline redo logs, or archive logs. That is, the database doesn’t write
over the online redo logs until a copy is made, and this copy is called an offline redo log, or archive log. These logs can be applied to backups of the data files to recover the database up to the point of a failure. Figure 14.1 illustrates complete recovery in ARCHIVELOG mode.

FIGURE 14.1 Complete recovery in ARCHIVELOG mode to media failure on January 28th

[The figure shows a timeline from 1-Jan-00 to 28-Jan-00. A database backup is taken on January 1st, and archive logs 11 through 53 are generated over the following weeks. When media failure strikes on January 28th, archive logs 11–53 are applied to the backup taken on January 1st, recovering all data.]
In NOARCHIVELOG mode, the database does not generate historical changes; there is no archive logging. In this mode, the database writes over the online redo logs without creating an archive log. Thus, no historical information is generated and saved for later use. Figure 14.2 illustrates incomplete recovery in NOARCHIVELOG mode.
FIGURE 14.2 Incomplete recovery in NOARCHIVELOG mode for media failure on January 28th

[The figure shows the same timeline from 1-Jan-00 to 28-Jan-00, with a database backup on January 1st and media failure on January 28th. There are no archive logs to apply, so all data added or modified after the January 1st backup is lost or must be manually reentered.]
The most significant type of failure is media failure. As discussed in Chapter 11, “Oracle Recovery Structures and Processes,” media failure occurs when a database file cannot be accessed for some reason. The usual reason is a disk crash or controller failure. Media failure requires database recovery. If the database is in ARCHIVELOG mode, complete recovery can be performed. This means that a backup can be restored to the affected file system, and archive logs can be applied up to the point of failure. Thus, no data is lost. If the database is in NOARCHIVELOG mode, a complete recovery cannot be performed. This means that a backup can be restored to the affected file system, but there are no archive logs or historical changes saved. Thus, the database has only the transactions that were available at the time of the backup. If backups are scheduled every night, the business would lose one day’s worth of transactions. If backups are scheduled weekly, the business would lose one week’s worth of transactions. The end result is that, in almost all cases, some data will be lost.
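Before planning a recovery, it is worth confirming which mode the database is actually running in. Either of the following can be run from Server Manager; this is a sketch using standard commands:

```sql
-- Report the archiving mode, along with log sequence details.
SVRMGR> archive log list

-- Or query the mode directly from the data dictionary.
SVRMGR> select log_mode from v$database;
```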
Recovering the Database under Different Scenarios
In this section, we will demonstrate two types of database recovery: a tablespace recovery and a full database recovery. We will show how these two recoveries are performed in NOARCHIVELOG and ARCHIVELOG modes. We have separated each failure and recovery into four scenarios. Scenarios 1 and 2 are performed in NOARCHIVELOG mode: scenario 1 illustrates a data file recovery, whereas scenario 2 illustrates a full database recovery. Scenarios 3 and 4 are performed in ARCHIVELOG mode: scenario 3 demonstrates a data file recovery, which can be handled by simply replacing the affected tablespace’s data file and recovering it, and scenario 4 is a full database recovery. Table 14.1 describes the failure scenarios outlined in this section.

TABLE 14.1 Summary of Failure Scenarios

Scenario 1
Archiving Status: NOARCHIVELOG mode
Failure Event: Loss of a data file
Recovery Method: Full restore of data files, control files, and redo logs.
Characteristics: Loss of data. A complete database restore must be performed for one data file. Database is unavailable during the recovery process.

Scenario 2
Archiving Status: NOARCHIVELOG mode
Failure Event: Loss of all data files
Recovery Method: Full restore of data files, control files, and redo logs.
Characteristics: Loss of data.

Scenario 3
Archiving Status: ARCHIVELOG mode
Failure Event: Loss of a data file
Recovery Method: Data file recovery; restore only the missing data file with the database open.
Characteristics: No data loss. Need to recover only the one data file because the archive logs contain the additional changes. The database remains available during the recovery process.

Scenario 4
Archiving Status: ARCHIVELOG mode
Failure Event: Loss of all data files
Recovery Method: Database recovery of all data files.
Characteristics: No data loss because archive logs that contain recent changes are applied.
Scenario 1: NOARCHIVELOG Loss of a Data File

The database is available all day during the week. Every Saturday, the database is shut down, and a complete, cold backup (offline backup) is performed. The database is restarted when this activity is completed.
Diagnosis

On Wednesday morning, there is a lost or deleted data file in the database. The error received upon start-up is as follows:

SVRMGR> startup
ORACLE instance started.
Total System Global Area   19504528 bytes
Fixed Size                    64912 bytes
Variable Size              16908288 bytes
Database Buffers            2457600 bytes
Redo Buffers                  73728 bytes
Database mounted.
ORA-01157: cannot identify/lock data file 4 - see DBWR trace file
ORA-01110: data file 4: '/db01/ORACLE/brdb/users01.dbf'

Because you are operating in NOARCHIVELOG mode, you must perform a full database restore from the previous weekend. You cannot perform a tablespace or data file recovery in NOARCHIVELOG mode because you have no ability to roll historical changes forward; there are no archive logs to apply to the data file to make it current with the rest of the database. Data entered into the database on Monday, Tuesday, and Wednesday is lost and must be reentered, if possible. A full database restore is performed by copying all the data files, online redo logs, and control files from the Saturday backup back to their original locations.
Step-by-Step Recovery

To recover a lost data file when operating in NOARCHIVELOG mode, take the following steps:

1. Perform a cold backup of the database to simulate the Saturday backup. The following is a sample script, which performs a cold backup by shutting down the database and copying the necessary data files, redo logs, and control files.

[oracle@DS-LINUX backup]$ more cold-backup
#!/bin/ksh
#
# Created by: Doug Stuns
# Date: 03/22/00
# Description:
#   This script performs a static COLD BACKUP of a database
#   by shutting down the database and manually copying
#   the files to a staging directory.
#
#*********************************************************
# Shut down the database
echo 'Shutting Down the Database!'
svrmgrl <<-EOF
connect internal
shutdown immediate
EOF
#
#*********************************************************
# Begin copying control files, online redo logs, and data files
#
echo 'Copying control files'
echo '...'
cp /oracle/data/brdb/*.ctl /stage/cold
echo 'Copying data files'
echo '...'
cp /db01/ORACLE/brdb/*.dbf /stage/cold
echo 'Copying redo logs'
echo '...'
cp /redo01/ORACLE/brdb/*.log /stage/cold
cp /redo02/ORACLE/brdb/*.log /stage/cold
echo 'Complete! - No# of files copied '`ls -ltr /stage/cold/* | wc -l`
#
#*********************************************************
# Restart the database
echo 'Starting up the Database!'
svrmgrl <<-EOF
connect internal
startup
EOF

2. Validate that the user TEST’s objects exist in the USERS tablespace.
This is the tablespace that you will remove to simulate a lost or removed data file.

SVRMGR> select username, default_tablespace, temporary_tablespace from dba_users;
USERNAME         DEFAULT_TABLESPACE  TEMPORARY_TABLESPACE
---------------- ------------------- --------------------
SYS              SYSTEM              TEMP
SYSTEM           TOOLS               TEMP
OUTLN            SYSTEM              SYSTEM
DBSNMP           SYSTEM              SYSTEM
TEST             USERS               TEMP
5 rows selected.
SVRMGR>
3. Create a table and insert data to simulate data being entered after Saturday’s cold backup. This is data that would be entered on Monday through Wednesday, before the failure but after the cold backup. The user TEST was created before the cold backup with a default tablespace of USERS. The account has connect and resource privileges.

SVRMGR> connect test/test
SVRMGR> create table t1 (c1 number, c2 char (50));
Statement processed.
SVRMGR> insert into t1 values (1, 'This is a test!');
1 row processed.
SVRMGR> commit;
Statement processed.
SVRMGR>

4. Validate the data file location of the USERS tablespace. In this case, the tablespace name is in the data file name, as the Optimal Flexible Architecture (OFA) standard recommends. Then, remove or delete this file.

SVRMGR> select name from v$datafile;
NAME
----------------------------------------
/db01/ORACLE/brdb/system01.dbf
/db01/ORACLE/brdb/rbs01.dbf
/db01/ORACLE/brdb/temp01.dbf
/db01/ORACLE/brdb/users01.dbf
/db01/ORACLE/brdb/tools01.dbf
/db01/ORACLE/brdb/data01.dbf
/db01/ORACLE/brdb/indx01.dbf
7 rows selected.
SVRMGR> ! rm /db01/ORACLE/brdb/users01.dbf

5. Start the database and verify that the missing data file error occurs.

[oracle@DS-LINUX brdb]$ svrmgrl
Oracle Server Manager Release 3.1.5.0.0 - Production
(c) Copyright 1997, Oracle Corporation. All Rights Reserved.
Oracle8i Enterprise Edition Release 8.1.5.0.0 - Production
With the Partitioning and Java options
PL/SQL Release 8.1.5.0.0 - Production
SVRMGR> connect internal
Connected.
SVRMGR> startup
ORACLE instance started.
Total System Global Area   19504528 bytes
Fixed Size                    64912 bytes
Variable Size              16908288 bytes
Database Buffers            2457600 bytes
Redo Buffers                  73728 bytes
Database mounted.
ORA-01157: cannot identify/lock data file 4 - see DBWR trace file
ORA-01110: data file 4: '/db01/ORACLE/brdb/users01.dbf'
6. Shut down the database to perform a complete database restore. The database must be shut down to restore a cold backup.

SVRMGR> shutdown
ORA-01109: database not open
Database dismounted.
ORACLE instance shut down.
SVRMGR>
7. Perform a complete database restore by copying all data files, redo logs, and control files to their original locations.

echo 'Copying datafiles back'
cp /stage/cold/*.dbf /db01/ORACLE/brdb
echo 'Copying controlfiles back'
cp /stage/cold/*.ctl /oracle/data/brdb
echo 'Copying redo logs back'
cp /stage/cold/*a.log /redo01/ORACLE/brdb
cp /stage/cold/*b.log /redo02/ORACLE/brdb

8. Start the database and check whether the data entered after the cold backup is there. Table t1 and its data do not exist and would need to be reentered.

[oracle@DS-LINUX backup]$ svrmgrl
Oracle Server Manager Release 3.1.5.0.0 - Production
(c) Copyright 1997, Oracle Corporation. All Rights Reserved.
Oracle8i Enterprise Edition Release 8.1.5.0.0 - Production
With the Partitioning and Java options
PL/SQL Release 8.1.5.0.0 - Production
SVRMGR> connect test/test
Connected.
SVRMGR> select * from t1;
select * from t1
              *
ORA-00942: table or view does not exist
SVRMGR>
Conclusions

The most notable observation about this scenario is that in NOARCHIVELOG mode, data is lost. All data entered after the backup, but before the failure, is lost and must be reentered. Furthermore, you must restore the whole database instead of just the one data file that was lost or removed, so the recovery can take longer because all files must be restored instead of only one. Also, you must shut down the database to perform the recovery.
Scenario 2: NOARCHIVELOG Loss of All Data Files

The database is available all day during the week. Every Saturday, the database is shut down and a complete cold (offline) backup is performed. The database is restarted when this backup activity is complete. This cold backup process must be performed for all database backups in NOARCHIVELOG mode, just as in scenario 1.
Diagnosis

On Wednesday morning, one of the disks, which supports all data files for the database, has crashed. The disks had no fault tolerance, such as mirroring. All data files are unavailable. The error received upon start-up is as follows:

SVRMGR> connect internal
Connected.
SVRMGR> startup
ORACLE instance started.
Total System Global Area   19504528 bytes
Fixed Size                    64912 bytes
Variable Size              16908288 bytes
Database Buffers            2457600 bytes
Redo Buffers                  73728 bytes
Database mounted.
ORA-01157: cannot identify/lock data file 1 - see DBWR trace file
ORA-01110: data file 1: '/db01/ORACLE/brdb/system01.dbf'
SVRMGR>
The database start-up fails during the open state because no system data file exists. It appears that just one data file is not accessible, as in scenario 1. However, if the system data file (data file 1) is missing, there is a good chance that more data files are missing. The error statement points to further information: ORA-01157: cannot identify/lock data file 1 - see DBWR trace file. There is no mention of the exact name or location of the trace file. The alert log records all start-up and shutdown information and will contain the exact name of the trace file. Below is the output from the alert_brdb.log file:

alter database mount
Wed Mar 29 17:53:29 2000
Successful mount of redo thread 1, with mount id 2062981145.
Wed Mar 29 17:53:29 2000
Database mounted in Exclusive Mode.
Completed: alter database mount
Wed Mar 29 17:53:29 2000
alter database open
Wed Mar 29 17:53:29 2000
Errors in file /oracle/admin/brdb/bdump/dbw0_2280.trc:
ORA-01157: cannot identify/lock data file 1 - see DBWR trace file
ORA-01110: data file 1: '/db01/ORACLE/brdb/system01.dbf'
ORA-27037: unable to obtain file status
Linux Error: 2: No such file or directory
Additional information: 3

The alert log file shows the chain of events throughout the start-up process. On the line before the ORA-01157 error, the exact name and location of the trace file is given: /oracle/admin/brdb/bdump/dbw0_2280.trc. This file has the full listing of the problem.

Dump file /oracle/admin/brdb/bdump/dbw0_2280.trc
Oracle8i Enterprise Edition Release 8.1.5.0.0 - Production
With the Partitioning and Java options
PL/SQL Release 8.1.5.0.0 - Production
ORACLE_HOME = /oracle/product/8.1.5
System name: Linux
Node name: DS-LINUX
Release: 2.2.12-20
Version: #1 Mon Sep 27 10:40:35 EDT 1999
Machine: i686
Instance name: brdb
Redo thread mounted by this instance: 1
Oracle process number: 3
Unix process pid: 2280, image: oracle@DS-LINUX (DBW0)
*** SESSION ID:(2.1) 2000.03.29.17.53.29.916
ORA-01157: cannot identify/lock data file 1 - see DBWR trace file
ORA-01110: data file 1: '/db01/ORACLE/brdb/system01.dbf'
ORA-27037: unable to obtain file status
Linux Error: 2: No such file or directory
Additional information: 3
ORA-01157: cannot identify/lock data file 2 - see DBWR trace file
ORA-01110: data file 2: '/db01/ORACLE/brdb/rbs01.dbf'
ORA-27037: unable to obtain file status
Linux Error: 2: No such file or directory
Additional information: 3
ORA-01157: cannot identify/lock data file 3 - see DBWR trace file
ORA-01110: data file 3: '/db01/ORACLE/brdb/temp01.dbf'
ORA-27037: unable to obtain file status
Linux Error: 2: No such file or directory
Additional information: 3
ORA-01157: cannot identify/lock data file 4 - see DBWR trace file
ORA-01110: data file 4: '/db01/ORACLE/brdb/users01.dbf'
ORA-27037: unable to obtain file status
Linux Error: 2: No such file or directory
Additional information: 3
ORA-01157: cannot identify/lock data file 5 - see DBWR trace file
ORA-01110: data file 5: '/db01/ORACLE/brdb/tools01.dbf'
ORA-27037: unable to obtain file status
Linux Error: 2: No such file or directory
Additional information: 3
ORA-01157: cannot identify/lock data file 6 - see DBWR trace file
ORA-01110: data file 6: '/db01/ORACLE/brdb/data01.dbf'
ORA-27037: unable to obtain file status
Linux Error: 2: No such file or directory
Additional information: 3
ORA-01157: cannot identify/lock data file 7 - see DBWR trace file
ORA-01110: data file 7: '/db01/ORACLE/brdb/indx01.dbf'
ORA-27037: unable to obtain file status
Linux Error: 2: No such file or directory
Additional information: 3

This trace file indicates that more than just the system data file is missing. Every data file is missing, which means you will need to perform a full database restore and recovery. Even though a full restore must be performed with every cold backup, it is important to note the severity of any failure: the original location could be unavailable for a significant period of time, which would require the recovery to be performed in a different location.
Step-by-Step Recovery

This recovery follows the same steps as those in scenario 1. This is true for every recovery in NOARCHIVELOG mode, with the exception of a temporary tablespace recovery.
Conclusions

The most notable observation about this scenario is that data is lost in NOARCHIVELOG mode. All data entered after the backup, but before the failure, is lost and must be reentered.
Scenario 3: ARCHIVELOG Loss of a Data File

The database is available 24 hours a day, 7 days a week, with the exception of scheduled maintenance periods. Every morning at 1 A.M., a hot backup is performed. The data files, archive logs, control files, backup control files,
and init.ora files are copied to a staging directory and from there to tape. The copy also remains on disk until the next morning, when the hot backup runs again; this allows quick access in the event of failure. Each time the backup runs, the staging directory is purged and rewritten.
Diagnosis

On Wednesday morning, there is a lost or deleted data file in the database. The error received upon start-up is as follows:

SVRMGR> startup
ORACLE instance started.
Total System Global Area   19504528 bytes
Fixed Size                    64912 bytes
Variable Size              16908288 bytes
Database Buffers            2457600 bytes
Redo Buffers                  73728 bytes
Database mounted.
ORA-01157: cannot identify/lock data file 4 - see DBWR trace file
ORA-01110: data file 4: '/db01/ORACLE/brdb/users01.dbf'
In this case, you are operating in ARCHIVELOG mode, so you need to replace only the damaged or lost file: /db01/ORACLE/brdb/users01.dbf. Then, with the database open, the archive logs can be applied to reapply all changes to that data file. Therefore, no data will be lost.
Step-by-Step Recovery

To recover the lost data file, take these steps:

1. Connect to user TEST and enter data in table t1 in the tablespace USERS, which consists of the data file users01.dbf. This will simulate the data that is in the hot backup of the USERS tablespace.

[oracle@DS-LINUX backup]$ svrmgrl
Oracle Server Manager Release 3.1.5.0.0 - Production
(c) Copyright 1997, Oracle Corporation. All Rights Reserved.
Oracle8i Enterprise Edition Release 8.1.5.0.0 - Production
With the Partitioning and Java options
PL/SQL Release 8.1.5.0.0 - Production
SVRMGR> connect test/test
Connected.
SVRMGR> insert into t1 values (1,'This is test one before hot backup');
1 row processed.
SVRMGR> commit;
Statement processed.
SVRMGR> connect internal
Connected.
SVRMGR> select username,default_tablespace from
     2> dba_users where username = 'TEST';
USERNAME                   DEFAULT_TABLESPACE
-------------------------- ----------------------------
TEST                       USERS
1 row selected.

2. Perform a hot backup of the USERS tablespace by placing it in backup mode. Proceed to copy the data file users01.dbf to a staging directory. Then, end the backup of the USERS tablespace.

[oracle@DS-LINUX backup]$ svrmgrl
Oracle Server Manager Release 3.1.5.0.0 - Production
(c) Copyright 1997, Oracle Corporation. All Rights Reserved.
Oracle8i Enterprise Edition Release 8.1.5.0.0 - Production
With the Partitioning and Java options
PL/SQL Release 8.1.5.0.0 - Production
SVRMGR> connect internal
Connected.
SVRMGR> alter tablespace users begin backup;
Statement processed.
SVRMGR> ! cp /db01/ORACLE/brdb/users01.dbf /stage
SVRMGR> alter tablespace users end backup;
Statement processed.
SVRMGR> alter system switch logfile;
Statement processed.

3. Connect to the user TEST and add more data to table t1. This data is in rows 2 and 3. This data has been added after the backup of the users01.dbf data file; therefore, it is not part of the data file copied earlier. Then, perform log switches to simulate normal activity in the database. This activates the archiver process to generate archive logs for the newly added data.

[oracle@DS-LINUX backup]$ svrmgrl
Oracle Server Manager Release 3.1.5.0.0 - Production
(c) Copyright 1997, Oracle Corporation. All Rights Reserved.
Oracle8i Enterprise Edition Release 8.1.5.0.0 - Production
With the Partitioning and Java options
PL/SQL Release 8.1.5.0.0 - Production
SVRMGR> connect test/test
Connected.
SVRMGR> insert into t1 values(2,'This is test two after hot backup');
1 row processed.
SVRMGR> insert into t1 values(3,'This is test three after hot backup');
1 row processed.
SVRMGR> commit;
Statement processed.
SVRMGR> connect internal
Connected.
SVRMGR> alter system switch logfile;
Statement processed.
SVRMGR> alter system switch logfile;
Statement processed.
SVRMGR> alter system switch logfile;
Statement processed.
SVRMGR> alter system switch logfile;
Statement processed.
4. Validate the data file location of the USERS tablespace. In this case, the
tablespace name is part of the data file name, as the OFA standard recommends. Then remove or delete this file.

SVRMGR> ! rm /db01/ORACLE/brdb/users01.dbf
5. Stop the database. Upon restarting, verify that the missing data file
error occurs.

[oracle@DS-LINUX brdb]$ svrmgrl

Oracle Server Manager Release 3.1.5.0.0 - Production
(c) Copyright 1997, Oracle Corporation.  All Rights Reserved.
Oracle8i Enterprise Edition Release 8.1.5.0.0 - Production
With the Partitioning and Java options
PL/SQL Release 8.1.5.0.0 - Production

SVRMGR> connect internal
Connected.
SVRMGR> startup
ORACLE instance started.
Total System Global Area   19504528 bytes
Fixed Size                    64912 bytes
Variable Size              16908288 bytes
Database Buffers            2457600 bytes
Redo Buffers                  73728 bytes
Database mounted.
ORA-01157: cannot identify/lock data file 4 - see DBWR trace file
ORA-01110: data file 4: '/db01/ORACLE/brdb/users01.dbf'
SVRMGR>

6. Take the damaged data file offline. This will enable you to recover
this data file and tablespace while the rest of the database is available for user access.

SVRMGR> alter database datafile '/db01/ORACLE/brdb/users01.dbf' offline;
Statement processed.

7. Restore the individual data file by copying the data file users01.dbf
back to the original location.

[oracle@DS-LINUX brdb]$ cp /stage/users01.dbf /db01/ORACLE/brdb

8. With the database open, begin the recovery process by executing the
RECOVER DATAFILE command. Then, apply all the available redo logs, resulting in a complete recovery. Finally, bring the data file online so that it is available for access by users.

SVRMGR> connect internal
Connected.
SVRMGR> recover datafile '/db01/ORACLE/brdb/users01.dbf';
ORA-00279: change 48323 generated at 03/29/00 22:04:25 needed for thread 1
ORA-00289: suggestion : /oracle/admin/brdb/arch1/archbrdb_84.log
ORA-00280: change 48323 for thread 1 is in sequence #84
Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
Log applied.
ORA-00279: change 48325 generated at 03/29/00 22:05:25 needed for thread 1
ORA-00289: suggestion : /oracle/admin/brdb/arch1/archbrdb_85.log
ORA-00280: change 48325 for thread 1 is in sequence #85
ORA-00278: log file '/oracle/admin/brdb/arch1/archbrdb_84.log' no longer needed for this recovery
Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
Log applied.
ORA-00279: change 48330 generated at 03/29/00 22:08:41 needed for thread 1
ORA-00289: suggestion : /oracle/admin/brdb/arch1/archbrdb_86.log
ORA-00280: change 48330 for thread 1 is in sequence #86
ORA-00278: log file '/oracle/admin/brdb/arch1/archbrdb_85.log' no longer needed for this recovery
Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
Log applied.
Media recovery complete.
SVRMGR>
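As an aside, the archive log names that Oracle suggests at each prompt are built from the LOG_ARCHIVE_FORMAT and archive destination settings; for this database the pattern appears to be archbrdb_<sequence>.log. A small shell sketch of that naming (the directory and pattern are read off the ORA-00289 suggestions above; your init.ora settings may differ):

```shell
# Build the archive log path the way the recovery prompts above do.
# Directory and name pattern are taken from the ORA-00289 suggestions;
# they are assumptions, not universal defaults.
arch_name() {
  printf '/oracle/admin/brdb/arch1/archbrdb_%s.log\n' "$1"
}

arch_name 84
```

This is handy when recovery asks for a sequence that is no longer on disk and you need to restore the matching file from tape by name.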
SVRMGR> alter database datafile '/db01/ORACLE/brdb/users01.dbf' online;
Statement processed.

9. Validate that there is no data loss, even though records two and three
were added after the hot backup. The data for records two and three were applied from the offline redo logs (archive logs).

SVRMGR> select * from t1;
C1         C2
---------- --------------------------------------------
         1 This is a test one - before hot backup
         2 This is a test two - after hot backup
         3 This is a test three - after hot backup
3 rows selected.
SVRMGR>
Conclusions

The most notable observation about this scenario is that in ARCHIVELOG mode, no data is lost: all data entered after the hot backup of tablespace USERS, but before the failure, is preserved. Only the data file users01.dbf must be restored, which takes less time than restoring all the data files. Applying the archive logs during the recovery process salvages all changes that occurred after the hot backup of the data file. Another equally important feature is that the database can remain open to users while the one tablespace and its associated data file(s) are being recovered. This allows users to access data in the other tablespaces of the database not affected by the failure.
Scenario 4: ARCHIVELOG Loss of All Data Files

The database is available 24 hours a day and 7 days a week, with the exception of scheduled maintenance periods, just as in scenario 3. Every morning at 1 A.M., a hot backup is performed. The data files, archive logs, control files, backup control files, and init.ora files are copied to a staging directory, where they are then copied to tape. The copy also remains on disk until
the next morning, when the hot backup runs again. At this point, the staging directory is purged and written to again.
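The purge-and-restage cycle described above can be sketched as a small shell function (a sketch only; the directory names are illustrative, and a production script should confirm the tape copy succeeded before purging anything):

```shell
# Nightly staging cycle: purge yesterday's backup copy from the
# staging directory, then restage today's files before the tape copy.
restage() {
  stage=$1
  src=$2
  rm -rf "${stage:?}"/*    # purge yesterday's copy; :? guards an empty variable
  cp "$src"/* "$stage"     # restage today's backup files
}
```

Calling `restage /stage /db01/ORACLE/brdb` would mirror the cycle in the scenario; the key design point is that the disk copy is only ever one day old.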
Diagnosis

On Wednesday morning, one of the disks, which supports all data files for the database, crashed. The disk had no fault tolerance, such as mirroring. All data files are unavailable. The error received upon start-up is as follows:

[oracle@DS-LINUX brdb]$ svrmgrl

Oracle Server Manager Release 3.1.5.0.0 - Production
(c) Copyright 1997, Oracle Corporation.  All Rights Reserved.
Oracle8i Enterprise Edition Release 8.1.5.0.0 - Production
With the Partitioning and Java options
PL/SQL Release 8.1.5.0.0 - Production

SVRMGR> connect internal
Connected.
SVRMGR> startup
ORACLE instance started.
Total System Global Area   19504528 bytes
Fixed Size                    64912 bytes
Variable Size              16908288 bytes
Database Buffers            2457600 bytes
Redo Buffers                  73728 bytes
Database mounted.
ORA-01157: cannot identify/lock data file 1 - see DBWR trace file
ORA-01110: data file 1: '/db01/ORACLE/brdb/system01.dbf'
SVRMGR>
You need to perform the same steps as in scenario 2 to verify that all data files are unavailable.
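One way to script that verification is to check each expected data file on disk. A hedged sketch follows; the file names are the example database's, and in practice you would generate the list from V$DATAFILE rather than hard-code it:

```shell
# Report any expected data file that is missing from the data file
# directory. The directory and names come from this chapter's example
# database and are assumptions for illustration.
check_dbfiles() {
  dir=$1; shift
  for f in "$@"; do
    [ -f "$dir/$f" ] || echo "missing: $f"
  done
}

check_dbfiles /db01/ORACLE/brdb system01.dbf rbs01.dbf users01.dbf
```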
Step-by-Step Recovery

To recover all data files, take these steps:

1. Connect to user TEST and validate the data in table t1. This will simulate the data that is in the hot backup of all the data files.

[oracle@DS-LINUX brdb]$ svrmgrl

Oracle Server Manager Release 3.1.5.0.0 - Production
(c) Copyright 1997, Oracle Corporation.  All Rights Reserved.
Oracle8i Enterprise Edition Release 8.1.5.0.0 - Production
With the Partitioning and Java options
PL/SQL Release 8.1.5.0.0 - Production

SVRMGR> connect test/test
Connected.
SVRMGR> select * from t1;
C1         C2
---------- --------------------------------------------
         1 This is a test one - before hot backup
1 row selected.

2. Perform a hot backup of all tablespaces by placing each tablespace in
backup mode. Proceed to copy the data file, in this case users01.dbf, to a staging directory. Then, end the backup of the USERS tablespace. You need to repeat this for all tablespaces and the associated data files. An alternative method is to create a backup script that could automate this activity, so that the hot backup could be scheduled. This example uses the Korn shell in the Linux OS. The script name is hot-backup. The hot-backup script uses multiple Unix OS commands in conjunction with queries from the database. The result is that all tablespaces and the associated data files get backed up, as well as control files, init.ora, redo log files, and archive logs. Binary and ASCII backups of the control file are also created. This type of backup script makes a complete recovery package for a hot backup.
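Before looking at the full script, the per-tablespace sequence it automates can be sketched as a shell function that emits the Server Manager commands (a sketch only; the tablespace, data file, and staging names are the example's, and the output would be piped into svrmgrl rather than just printed):

```shell
# Emit the hot backup command sequence for one tablespace.
# Names are the chapter's example values, not defaults.
hot_backup_sql() {
  ts=$1; datafile=$2; stage=$3
  printf 'alter tablespace %s begin backup;\n' "$ts"
  printf '! cp %s %s\n' "$datafile" "$stage"
  printf 'alter tablespace %s end backup;\n' "$ts"
  printf 'alter system switch logfile;\n'
}

hot_backup_sql users /db01/ORACLE/brdb/users01.dbf /stage
```

The full script below does essentially this for every row of DBA_DATA_FILES, by spooling generated ALTER TABLESPACE statements and executing them.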
SVRMGR> connect internal
Connected.
SVRMGR> alter tablespace users begin backup;
Statement processed.
SVRMGR> ! cp /db01/ORACLE/brdb/users01.dbf /stage
SVRMGR> alter tablespace users end backup;
Statement processed.
SVRMGR> alter system switch logfile;
Statement processed.

(Repeat these steps for all tablespaces.)

#!/bin/ksh
######################### hotbackup #####################
#
# Hot Backup of Oracle DB on Linux ksh
# Written by Doug Stuns
#
# Date:      Action:                         Initials:
# ====       ======                          ========
# 05/01/98   Created from Solaris 2.6/DLT    DRS
#            triple mirror backup.
# 10/01/99   Adjustments for RH Linux 5.2    DRS
# 03/03/00   Adjustments for RH Linux 6.1    DRS
#*********************************************************
# UNIX Environment variables
today=`date '+%y.%m.%d'`
logFile="/local/backup/log/hot-bu-3.$today"
#logFile=$STDOUT
listFile="/tmp/hot-tabspc.$today"
tmpFile="/tmp/hot-tmp.$today"
toolsDir=/local/backup
stageZip=/stage
#tapeDev="/dev/rmt"; export tapeDev
#*********************************************************
# ORACLE Environment Variables
ORACLE_SID=brdb; export ORACLE_SID
ORACLE_BASE=/oracle; export ORACLE_BASE
ORACLE_HOME=$ORACLE_BASE/product/8.1.5; export ORACLE_HOME
ORACLE_TERM=xterm; export ORACLE_TERM
ORA_NLS=$ORACLE_HOME/ocommon/nls/admin/data; export ORA_NLS
ORACLE_DOC=$ORACLE_HOME/odoc; export ORACLE_DOC
LD_LIBRARY_PATH=$ORACLE_HOME/lib; export LD_LIBRARY_PATH
PATH=$ORACLE_HOME/bin:/bin:/usr/bin:/usr/sbin:/bin; export PATH
ORAENV_ASK=NO
DBA=$ORACLE_BASE/admin/$ORACLE_SID; export DBA
archDir=$DBA/arch1
udumpDir=$DBA/udump
pfileDir=$DBA/pfile

# These are the volumes/filesystems to be backed up
dbVolumes="/db01/ORACLE/"$ORACLE_SID

#*********************************************************
# Starting Backup Log
now=`date +%m%d%y`
echo "Starting backup, beginning Hot Backup at $now" >$logFile

#*********************************************************
# Rewind the tape if copying to tape.
# mt -f $tapeDev rewind

#*********************************************************
# ORACLE - Archive flush redo to archive.
# Generate List for info
echo "flush logs and give current log info" >>$logFile 2>>$logFile
su oracle 2>>$logFile >>$logFile <<-_ORA_
svrmgrl <<-EOF_SAVE_LOGS
connect internal
archive log list;
alter system archive log all;
exit;
EOF_SAVE_LOGS
_ORA_

# ORACLE - Put the databases in hot backup mode.
#*********************************************************
# Begin tablespace backup
echo "begin backup mode" >>$logFile 2>>$logFile
su oracle 2>>$logFile >>$logFile <<-_ORA_
svrmgrl <<-EOF_BEGIN_BACKUP
connect internal
set echo off
spool $listFile
select distinct 'alter tablespace ' ||tablespace_name|| ' begin backup;'
from dba_data_files
/
spool off
set echo on
!sed -e '1,2d' -e 's/ *$//g' -e '/selected/d' \
  $listFile >$tmpFile
@$tmpFile
!rm $listFile $tmpFile
EOF_BEGIN_BACKUP
_ORA_

#*********************************************************
# Remove all files from /stage!
echo 'Removing files from stage directory!' >>$logFile 2>>$logFile
rm -rf /stage/* >>$logFile 2>&1

#*********************************************************
# Copy Oracle datafiles from working directory to stage
# and gzip them up to conserve space. This only copies the
# files from $dbVolumes
for dbf in $dbVolumes
do
  cp ${dbf}/*.dbf $stageZip >>$logFile 2>&1
  gzip -1N ${stageZip}/*.dbf >>$logFile 2>&1
done

# ORACLE - Stop backup mode
#*********************************************************
# End tablespace backups
echo "end backup mode" >>$logFile 2>>$logFile
su oracle 2>>$logFile >>$logFile <<-_ORA_
svrmgrl <<-EOF_END_BACKUP
connect internal
set echo off
spool $listFile
select distinct 'alter tablespace ' ||tablespace_name|| ' end backup;'
from dba_data_files
/
spool off
set echo on
!sed -e '1,2d' -e 's/ *$//g' -e '/selected/d' \
  $listFile >$tmpFile
@$tmpFile
!rm $listFile $tmpFile
EOF_END_BACKUP
_ORA_

# ORACLE - Archive flush all redo to archive logs. Generate List for info.
#*********************************************************
# Switch logfile
echo "switch out end backup to arch log" >>$logFile 2>>$logFile
su oracle 2>>$logFile >>$logFile <<-_ORA_
svrmgrl <<-EOF_SAVE_LOGS
connect internal
alter system archive log all;
alter system switch logfile;
archive log list;
EOF_SAVE_LOGS
_ORA_

# ORACLE - Backup Control file to trace
#*********************************************************
# Generate binary and ascii controlfile backup
echo "backup control file to binary and ascii" >>$logFile 2>>$logFile
su oracle 2>>$logFile >>$logFile <<-_ORA_
svrmgrl <<-EOF_BACKUP_CONTROL
connect internal;
alter database backup controlfile to '$DBA/exp/backup.ctl' reuse;
alter database backup controlfile to trace;
EOF_BACKUP_CONTROL
_ORA_
#*********************************************************
# Copy archived logs, trace, pfiles from directories to stage
cp $archDir/* $stageZip >>$logFile 2>&1
cp $udumpDir/* $stageZip >>$logFile 2>&1
cp $pfileDir/* $stageZip >>$logFile 2>&1
#*********************************************************
# Tar up files to /tmp directory or tape device.
tar -cvf '/tmp/hot-backup.'$now'.tar' $stageZip/* >>$logFile 2>&1
# tar -cvf '/dev/rmt/hot-backup.'$now'.tar' $stageZip/* >>$logFile 2>&1
#*********************************************************
# Remove old log files
find /local/backup/log \( -name "hot-bu-3*" \) -ctime +33 -print -exec rm {} \; >>$logFile 2>>$logFile

3. Connect to the user TEST and add more data. These are rows 2 and 3
in table t1. This data has been added after the backup of all the data files. Therefore, the data is not part of the data files that made up the backup. Then, perform log switches to simulate normal activity in the
database. This activates the archiver process to generate archive logs for the newly added data. Perform a SELECT to validate that the rows exist.

SVRMGR> connect test/test
Connected.
SVRMGR> insert into t1 values(2,'This is a test two after hot backup');
1 row processed.
SVRMGR> insert into t1 values(3,'This is a test three after hot backup');
1 row processed.
SVRMGR> commit;
SVRMGR> connect internal
Connected.
SVRMGR> alter system switch logfile;
Statement processed.
SVRMGR> alter system switch logfile;
Statement processed.
SVRMGR> alter system switch logfile;
Statement processed.
SVRMGR> alter system switch logfile;
Statement processed.
SVRMGR> connect test/test
SVRMGR> select * from t1;
C1         C2
---------- --------------------------------------------
         1 This is a test one - before hot backup
         3 This is a test three - after hot backup
         2 This is a test two - after hot backup
3 rows selected.
SVRMGR>

4. Remove or delete all the data files to simulate a disaster.
SVRMGR> ! rm /db01/ORACLE/brdb/*

5. Shut down the database to perform the recovery. The database will
not shut down with a normal shutdown. This can be validated by
hosting out to the OS and performing a ps -ef | grep ora. This is a Unix OS command to verify that all the Oracle processes are running. You must use a SHUTDOWN ABORT in this case.

SVRMGR> shutdown
ORA-01116: error in opening database file 1
ORA-01110: data file 1: '/db01/ORACLE/brdb/system01.dbf'
ORA-27041: unable to open file
Linux Error: 2: No such file or directory
Additional information: 3
SVRMGR>
SVRMGR> ! ps -ef|grep ora
root      2679  2678  0 Mar29 pts/0  00:00:00 login -- oracle
oracle    2680  2679  0 Mar29 pts/0  00:00:00 -bash
root      2783  2782  0 Mar29 pts/2  00:00:00 login -- oracle
oracle    2784  2783  0 Mar29 pts/2  00:00:00 -bash
oracle    4289     1  0 19:33 ?      00:00:00 ora_pmon_brdb
oracle    4291     1  0 19:33 ?      00:00:00 ora_dbw0_brdb
oracle    4293     1  0 19:33 ?      00:00:00 ora_lgwr_brdb
oracle    4295     1  0 19:33 ?      00:00:00 ora_ckpt_brdb
oracle    4297     1  0 19:33 ?      00:00:00 ora_smon_brdb
oracle    4299     1  0 19:33 ?      00:00:00 ora_reco_brdb
oracle    4301     1  0 19:33 ?      00:00:00 ora_arc0_brdb
oracle    4303     1  0 19:33 ?      00:00:00 ora_arc1_brdb
oracle    4309  2784  0 19:40 pts/2  00:00:00 svrmgrl
SVRMGR> shutdown abort
ORACLE instance shut down.

6. Perform a restore of all data files to their original location. This is done
with a Unix OS copy command, which copies the data files from the stage directory to the /db01/ORACLE/brdb directory. Then, execute the directory listing to verify that all the files are in the appropriate location.

[oracle@DS-LINUX brdb]$ cp /stage/*.dbf .
[oracle@DS-LINUX brdb]$ ls -ltr
total 257067
-rw-r-----   1 oracle   dba     20979712 Mar 30 19:42 rbs01.dbf
-rw-r-----   1 oracle   dba     20979712 Mar 30 19:42 indx01.dbf
-rw-r-----   1 oracle   dba    104865792 Mar 30 19:42 data01.dbf
-rw-r-----   1 oracle   dba      5251072 Mar 30 19:42 users01.dbf
-rw-r-----   1 oracle   dba      5251072 Mar 30 19:42 tools01.dbf
-rw-r-----   1 oracle   dba     20979712 Mar 30 19:42 temp01.dbf
-rw-r-----   1 oracle   dba     83894272 Mar 30 19:42 system01.dbf
[oracle@DS-LINUX brdb]$
7. Proceed to recover the database by performing a STARTUP MOUNT.
Then, execute the RECOVER DATABASE command. This example uses the AUTO option when applying archive logs. The AUTO option applies all the logs needed for complete recovery. Then, open the database.

[oracle@DS-LINUX brdb]$ svrmgrl

Oracle Server Manager Release 3.1.5.0.0 - Production
(c) Copyright 1997, Oracle Corporation.  All Rights Reserved.
Oracle8i Enterprise Edition Release 8.1.5.0.0 - Production
With the Partitioning and Java options
PL/SQL Release 8.1.5.0.0 - Production

SVRMGR> connect internal
Connected.
SVRMGR> startup mount
ORACLE instance started.
Total System Global Area   19504528 bytes
Fixed Size                    64912 bytes
Variable Size              16908288 bytes
Database Buffers            2457600 bytes
Redo Buffers                  73728 bytes
Database mounted.
SVRMGR>
SVRMGR> recover database;
ORA-00279: change 68378 generated at 03/29/00 22:32:12 needed for thread 1
ORA-00289: suggestion : /oracle/admin/brdb/arch1/archbrdb_91.log
ORA-00280: change 68378 for thread 1 is in sequence #91
Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
AUTO
Log applied.
ORA-00279: change 68449 generated at 03/30/00 19:27:11 needed for thread 1
ORA-00289: suggestion : /oracle/admin/brdb/arch1/archbrdb_92.log
ORA-00280: change 68449 for thread 1 is in sequence #92
ORA-00278: log file '/oracle/admin/brdb/arch1/archbrdb_91.log' no longer needed for this recovery
Log applied.
ORA-00279: change 68450 generated at 03/30/00 19:27:18 needed for thread 1
ORA-00289: suggestion : /oracle/admin/brdb/arch1/archbrdb_93.log
ORA-00280: change 68450 for thread 1 is in sequence #93
ORA-00278: log file '/oracle/admin/brdb/arch1/archbrdb_92.log' no longer needed for this recovery
Log applied.
ORA-00279: change 68451 generated at 03/30/00 19:27:20 needed for thread 1
ORA-00289: suggestion : /oracle/admin/brdb/arch1/archbrdb_94.log
ORA-00280: change 68451 for thread 1 is in sequence #94
ORA-00278: log file '/oracle/admin/brdb/arch1/archbrdb_93.log' no longer needed for this recovery
Log applied.
ORA-00279: change 68452 generated at 03/30/00 19:27:23 needed for thread 1
ORA-00289: suggestion : /oracle/admin/brdb/arch1/archbrdb_95.log
ORA-00280: change 68452 for thread 1 is in sequence #95
ORA-00278: log file '/oracle/admin/brdb/arch1/archbrdb_94.log' no longer needed for this recovery
Log applied.
Media recovery complete.
SVRMGR>
SVRMGR> alter database open;
Statement processed.
8. Verify that this was a complete recovery—in other words, that no data
was lost. Query the t1 table, and there should be three records. The first record was inserted before the hot backup, and the next two records were applied from archive logs.

SVRMGR> connect test/test
Connected.
SVRMGR> select * from t1;
C1         C2
---------- --------------------------------------------
         1 This is a test one - before hot backup
         3 This is a test three - after hot backup
         2 This is a test two - after hot backup
3 rows selected.
SVRMGR>
Conclusions

A full database recovery in ARCHIVELOG mode does not require restoring the online redo logs or control files; if a control file and the online redo logs are intact, only the data files need to be restored. If you follow the OFA standards, these files will be on different disks than the data files. Again, no loss of data occurs; a complete recovery can be performed. This is the most important point about ARCHIVELOG mode recoveries.
Restoring Files to a Different Location in Both Modes
Restoring files to a different location in both ARCHIVELOG mode and NOARCHIVELOG mode can be performed in a similar manner. The main difference is that, as in any NOARCHIVELOG mode recovery, the database in most cases cannot be completely recovered to the point of failure. The only time a database can be completely recovered in NOARCHIVELOG mode is when the database has not cycled through all of the online redo logs since the complete backup.
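That last condition can be expressed numerically: if the current log sequence has advanced by fewer steps than there are online log groups since the backup, nothing has been overwritten yet. A rough sketch with illustrative numbers follows (in practice the sequence numbers would come from V$LOG and the backup's log history; this simplification ignores multiplexed members and assumes equal-sized, round-robin log use):

```shell
# Decide whether a NOARCHIVELOG database could still be completely
# recovered: the online logs must not have cycled since the backup.
# All three arguments are illustrative values, not queried from Oracle.
can_recover() {
  seq_at_backup=$1
  current_seq=$2
  num_groups=$3
  if [ $((current_seq - seq_at_backup)) -lt "$num_groups" ]; then
    echo yes
  else
    echo no
  fi
}
```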
To restore the files to a different location, you would perform an OS copy from the backup location to the new location. At this point, you would need to rebuild the control file to account for the physical changes in the database. Remember, the control file keeps track of the physical location of all the database files. Figure 14.3 illustrates this point.

FIGURE 14.3  Restoring files to different locations in the event of media failure. Data files, redo logs, and archive logs are restored from the staging area (or tape) to new disks. No control files are copied; instead, the CREATE CONTROLFILE command is run from a SVRMGRL session to build new control files that reflect the new physical structure of the database.
Recovery in NOARCHIVELOG Mode

NOARCHIVELOG mode requires a full database restore of the data files to a new location. A BACKUP CONTROLFILE TO TRACE (ASCII control file) will
be needed to rebuild the binary control file. The ASCII control file will need to be edited with the new file system locations of any physical database structures that have moved, with the exception of the control file itself. In this case, you will change the location of the data file from the db01 mount point to the db02 mount point. The recovery options in this ASCII control file will need to be changed also. The RECOVER DATABASE command will be changed to recover the database by using UNTIL CANCEL USING BACKUP CONTROLFILE. This is the main difference between NOARCHIVELOG mode and ARCHIVELOG mode. The recovery process will be canceled immediately because there are no archive logs to recover. After recovery is cancelled, the database will be opened with the RESETLOGS option. This is mandatory when using the UNTIL CANCEL USING BACKUP CONTROLFILE option. The following steps show the step-by-step recovery of files to a new location when operating in NOARCHIVELOG mode: 1. The file system /db01/ORACLE/brdb is destroyed due to a bad disk in
the volume, and there isn’t any disk fault tolerance, such as RAID 5 or mirroring on this drive. As the DBA, you proceed to copy the latest hot backup data files from the online location in the backup area called /stage to a new file system, /db02/ORACLE/brdb. This file system was unaffected by the disk crash. [oracle@DS-LINUX brdb]$cp /stage/* /db02/ORACLE/brdb [oracle@DS-LINUX brdb]$ pwd /db02/ORACLE/brdb [oracle@DS-LINUX brdb]$ ls -ltr total 257067 -rw-r----1 oracle dba 20979712 Mar 30 23:27 rbs01.dbf -rw-r----1 oracle dba 20979712 Mar 30 23:27 indx01.dbf -rw-r----1 oracle dba 104865792 Mar 30 23:27 data01.dbf -rw-r----1 oracle dba 83894272 Mar 30 23:27 system01.dbf -rw-r----1 oracle dba 5251072 Mar 30 23:27 users01.dbf -rw-r----1 oracle dba 5251072 Mar 30 23:27 tools01.dbf
Copyright ©2000 SYBEX , Inc., Alameda, CA
www.sybex.com
Restoring Files to a Different Location in Both Modes
-rw-r----1 oracle dba Mar 30 23:27 temp01.dbf [oracle@DS-LINUX brdb]$
487
20979712
2. A part of any good cold or hot backup should be an ALTER DATABASE
BACKUP CONTROLFILE TO TRACE command, which generates an ASCII version of the control file. The cold backup generates the ASCII control file as a trace file, which gets copied to the staging directory. This file was originally located in the ../udump directory per OFA requirements. In this case, the file name is ora_23302.trc. You will rename this to something more meaningful, for example, crdb_db01_to_db02.sql.

Dump file /oracle/admin/brdb/udump/ora_23302.trc
Oracle8i Enterprise Edition Release 8.1.5.0.0 - Production
With the Partitioning and Java options
PL/SQL Release 8.1.5.0.0 - Production
ORACLE_HOME = /oracle/product/8.1.5
System name:    Linux
Node name:      DS-LINUX
Release:        2.2.12-20
Version:        #1 Mon Sep 27 10:40:35 EDT 1999
Machine:        i686
Instance name: brdb
Redo thread mounted by this instance: 1
Oracle process number: 10
Unix process pid: 23302, image: oracle@DS-LINUX

*** SESSION ID:(9.106) 2000.03.06.23.44.56.665
*** 2000.03.06.23.44.56.665
# The following commands will create a new control file and use it
# to open the database.
# Data used by the recovery manager will be lost. Additional logs may
# be required for media recovery of offline data files. Use this
# only if the current version of all online logs are available.
STARTUP NOMOUNT
CREATE CONTROLFILE REUSE DATABASE "BRDB" NORESETLOGS ARCHIVELOG
    MAXLOGFILES 16
    MAXLOGMEMBERS 2
    MAXDATAFILES 100
    MAXINSTANCES 1
    MAXLOGHISTORY 226
LOGFILE
  GROUP 1 (
    '/redo01/ORACLE/brdb/redo01a.log',
    '/redo02/ORACLE/brdb/redo01b.log'
  ) SIZE 2M,
  GROUP 2 (
    '/redo01/ORACLE/brdb/redo02a.log',
    '/redo02/ORACLE/brdb/redo02b.log'
  ) SIZE 2M,
  GROUP 3 (
    '/redo01/ORACLE/brdb/redo03a.log',
    '/redo02/ORACLE/brdb/redo03b.log'
  ) SIZE 2M
DATAFILE
  '/db01/ORACLE/brdb/system01.dbf',
  '/db01/ORACLE/brdb/rbs01.dbf',
  '/db01/ORACLE/brdb/temp01.dbf',
  '/db01/ORACLE/brdb/users01.dbf',
  '/db01/ORACLE/brdb/tools01.dbf',
  '/db01/ORACLE/brdb/data01.dbf',
  '/db01/ORACLE/brdb/indx01.dbf'
CHARACTER SET US7ASCII
;
# Recovery is required if any of the datafiles are restored backups,
# or if the last shutdown was not normal or immediate.
RECOVER DATABASE
# All logs need archiving and a log switch is needed.
ALTER SYSTEM ARCHIVE LOG ALL;
# Database can now be opened normally.
ALTER DATABASE OPEN;
# No tempfile entries found to add.
#

3. You need to edit this file, so that the newly created control file will see
the data files in the new file system. Modify the RECOVER DATABASE command by using the UNTIL CANCEL USING BACKUP CONTROLFILE option of the RECOVER command. Modify the ALTER DATABASE OPEN command to ALTER DATABASE OPEN RESETLOGS. Below is the edited version of the ASCII control file, which is now called crdb_db01_to_db02.sql.

# The following commands will create a new control file and use it
# to open the database.
# Data used by the recovery manager will be lost. Additional logs may
# be required for media recovery of offline data files. Use this
# only if the current version of all online logs are available.
STARTUP NOMOUNT
CREATE CONTROLFILE REUSE DATABASE "BRDB" NORESETLOGS ARCHIVELOG
    MAXLOGFILES 16
    MAXLOGMEMBERS 2
    MAXDATAFILES 100
    MAXINSTANCES 1
    MAXLOGHISTORY 226
LOGFILE
  GROUP 1 (
    '/redo01/ORACLE/brdb/redo01a.log',
    '/redo02/ORACLE/brdb/redo01b.log'
  ) SIZE 2M,
  GROUP 2 (
    '/redo01/ORACLE/brdb/redo02a.log',
    '/redo02/ORACLE/brdb/redo02b.log'
  ) SIZE 2M,
  GROUP 3 (
    '/redo01/ORACLE/brdb/redo03a.log',
    '/redo02/ORACLE/brdb/redo03b.log'
  ) SIZE 2M
DATAFILE
  '/db02/ORACLE/brdb/system01.dbf',
  '/db02/ORACLE/brdb/rbs01.dbf',
  '/db02/ORACLE/brdb/temp01.dbf',
  '/db02/ORACLE/brdb/users01.dbf',
  '/db02/ORACLE/brdb/tools01.dbf',
  '/db02/ORACLE/brdb/data01.dbf',
  '/db02/ORACLE/brdb/indx01.dbf'
CHARACTER SET US7ASCII
;
# Recovery is required if any of the datafiles are restored backups,
# or if the last shutdown was not normal or immediate.
RECOVER DATABASE UNTIL CANCEL USING BACKUP CONTROLFILE;
# All logs need archiving and a log switch is needed.
# ALTER SYSTEM ARCHIVE LOG ALL;
# Database can now be opened normally.
ALTER DATABASE OPEN RESETLOGS;
# No tempfile entries found to add.
#

4. This file is now ready to be run. This is performed by going into SVRMGRL and running the newly renamed file, crdb_db01_to_db02.sql. After executing the RECOVER DATABASE command, you need to cancel recovery immediately because there are no archive logs to apply.

SVRMGR> connect internal
Connected.
SVRMGR> @crdb_db01_to_db02.sql
ORACLE instance started.
Total System Global Area   19504528 bytes
Fixed Size                    64912 bytes
Variable Size              16908288 bytes
Database Buffers            2457600 bytes
Redo Buffers                  73728 bytes
Statement processed.
ORA-00279: change 108564 generated at 03/31/00 00:32:58 needed for thread 1
ORA-00289: suggestion : /oracle/admin/brdb/arch1/archbrdb_1.log
ORA-00280: change 108564 for thread 1 is in sequence #1
Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
cancel
Media recovery cancelled.
Statement processed.
SVRMGR>

5. You should verify the new location of the data files after the database
is opened. The database data files were originally in the /db01 mount point, and now the data files are located in the /db02 mount point.

SVRMGR> select file#,name from v$datafile;
FILE#      NAME
---------- --------------------------------------------
         1 /db02/ORACLE/brdb/system01.dbf
         2 /db02/ORACLE/brdb/rbs01.dbf
         3 /db02/ORACLE/brdb/temp01.dbf
         4 /db02/ORACLE/brdb/users01.dbf
         5 /db02/ORACLE/brdb/tools01.dbf
         6 /db02/ORACLE/brdb/data01.dbf
         7 /db02/ORACLE/brdb/indx01.dbf
7 rows selected.
SVRMGR>
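The path edits made by hand in step 3 can also be done mechanically with sed; a sketch follows (the stand-in input line and temporary file are illustrative, not taken from the book's trace file):

```shell
# Repoint every data file path in the control file trace script from
# the old mount point to the new one. A real run would operate on the
# renamed trace file; here a one-line stand-in is used instead.
tmp=$(mktemp)
printf "'/db01/ORACLE/brdb/users01.dbf',\n" > "$tmp"
sed 's|/db01/ORACLE/brdb/|/db02/ORACLE/brdb/|g' "$tmp"
```

Using `|` as the sed delimiter avoids escaping the slashes in the mount point paths; redirect the output to the new .sql file rather than printing it.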
Recovery in ARCHIVELOG Mode

Recovering to a different location in ARCHIVELOG mode requires a full database restore of the data files to a new location, just as in NOARCHIVELOG mode. A BACKUP CONTROLFILE TO TRACE, or ASCII control file, is needed to rebuild the binary control file, just as in NOARCHIVELOG mode. You need to edit the ASCII control file with the new file system locations of any physical database structures that have moved, with the exception of the control file itself. In this case, you will change the data file location from db01 to db02, just as you did in the NOARCHIVELOG mode example. You also need to change the recovery options in this ASCII control file. Change the RECOVER DATABASE command to RECOVER DATABASE USING BACKUP CONTROLFILE. The recovery process will not need to be canceled immediately because archive logs can be applied to perform a complete recovery. After recovery is completed, you open the database with the NORESETLOGS option. In this case, you can perform a complete recovery by applying all the archive logs, and no data will be lost. The step-by-step process is as follows:

1. The file system /db01/ORACLE/brdb gets destroyed due to a bad disk
in the volume, and there isn't any disk fault tolerance such as RAID 5 or mirroring. As the DBA, you proceed to copy the latest hot backup data files from the staging area to a new file system, /db02/ORACLE/brdb, which was unaffected by the disk crash.

[oracle@DS-LINUX brdb]$ cp /stage/* /db02/ORACLE/brdb
[oracle@DS-LINUX brdb]$ pwd
/db02/ORACLE/brdb
[oracle@DS-LINUX brdb]$ ls
rbs01.dbf    indx01.dbf   data01.dbf   system01.dbf
users01.dbf  tools01.dbf  temp01.dbf
[oracle@DS-LINUX brdb]$

2. Part of any good hot backup should be an ALTER DATABASE BACKUP CONTROLFILE TO TRACE command. Of course, yours does just that, so the trace file is also in the staging directory. This file was originally located in the ../udump directory per OFA requirements. Its filename is ora_23302.trc. Rename it to something more meaningful, for example, crdb_db01_to_db02.sql.
Restoring Files to a Different Location in Both Modes
Dump file /oracle/admin/brdb/udump/ora_23302.trc
Oracle8i Enterprise Edition Release 8.1.5.0.0 - Production
With the Partitioning and Java options
PL/SQL Release 8.1.5.0.0 - Production
ORACLE_HOME = /oracle/product/8.1.5
System name:  Linux
Node name:    DS-LINUX
Release:      2.2.12-20
Version:      #1 Mon Sep 27 10:40:35 EDT 1999
Machine:      i686
Instance name: brdb
Redo thread mounted by this instance: 1
Oracle process number: 10
Unix process pid: 23302, image: oracle@DS-LINUX

*** SESSION ID:(9.106) 2000.03.06.23.44.56.665
*** 2000.03.06.23.44.56.665
# The following commands will create a new control file and use it
# to open the database.
# Data used by the recovery manager will be lost. Additional logs may
# be required for media recovery of offline data files. Use this
# only if the current version of all online logs are available.
STARTUP NOMOUNT
CREATE CONTROLFILE REUSE DATABASE "BRDB" NORESETLOGS ARCHIVELOG
    MAXLOGFILES 16
    MAXLOGMEMBERS 2
    MAXDATAFILES 100
    MAXINSTANCES 1
    MAXLOGHISTORY 226
LOGFILE
  GROUP 1 (
    '/redo01/ORACLE/brdb/redo01a.log',
    '/redo02/ORACLE/brdb/redo01b.log'
  ) SIZE 2M,
  GROUP 2 (
    '/redo01/ORACLE/brdb/redo02a.log',
    '/redo02/ORACLE/brdb/redo02b.log'
  ) SIZE 2M,
  GROUP 3 (
    '/redo01/ORACLE/brdb/redo03a.log',
    '/redo02/ORACLE/brdb/redo03b.log'
  ) SIZE 2M
DATAFILE
  '/db01/ORACLE/brdb/system01.dbf',
  '/db01/ORACLE/brdb/rbs01.dbf',
  '/db01/ORACLE/brdb/temp01.dbf',
  '/db01/ORACLE/brdb/users01.dbf',
  '/db01/ORACLE/brdb/tools01.dbf',
  '/db01/ORACLE/brdb/data01.dbf',
  '/db01/ORACLE/brdb/indx01.dbf'
CHARACTER SET US7ASCII
;
# Recovery is required if any of the datafiles are restored backups,
# or if the last shutdown was not normal or immediate.
RECOVER DATABASE
# All logs need archiving and a log switch is needed.
ALTER SYSTEM ARCHIVE LOG ALL;
# Database can now be opened normally.
ALTER DATABASE OPEN;
# No tempfile entries found to add.
#

3. You need to edit this file so that the new control file it creates will point to the data files in the new file system. To recover the database, you need to change the RECOVER DATABASE command to RECOVER DATABASE USING BACKUP CONTROLFILE UNTIL CANCEL. You also need to change the ALTER DATABASE OPEN command to ALTER DATABASE OPEN RESETLOGS. Below is the edited version of the ASCII control file, which is now called crdb_db01_to_db02.sql.
# The following commands will create a new control file and use it
# to open the database.
# Data used by the recovery manager will be lost. Additional logs may
# be required for media recovery of offline data files. Use this
# only if the current version of all online logs are available.
STARTUP NOMOUNT
CREATE CONTROLFILE REUSE DATABASE "BRDB" NORESETLOGS ARCHIVELOG
    MAXLOGFILES 16
    MAXLOGMEMBERS 2
    MAXDATAFILES 100
    MAXINSTANCES 1
    MAXLOGHISTORY 226
LOGFILE
  GROUP 1 (
    '/redo01/ORACLE/brdb/redo01a.log',
    '/redo02/ORACLE/brdb/redo01b.log'
  ) SIZE 2M,
  GROUP 2 (
    '/redo01/ORACLE/brdb/redo02a.log',
    '/redo02/ORACLE/brdb/redo02b.log'
  ) SIZE 2M,
  GROUP 3 (
    '/redo01/ORACLE/brdb/redo03a.log',
    '/redo02/ORACLE/brdb/redo03b.log'
  ) SIZE 2M
DATAFILE
  '/db02/ORACLE/brdb/system01.dbf',
  '/db02/ORACLE/brdb/rbs01.dbf',
  '/db02/ORACLE/brdb/temp01.dbf',
  '/db02/ORACLE/brdb/users01.dbf',
  '/db02/ORACLE/brdb/tools01.dbf',
  '/db02/ORACLE/brdb/data01.dbf',
  '/db02/ORACLE/brdb/indx01.dbf'
CHARACTER SET US7ASCII
;
# Recovery is required if any of the datafiles are restored backups,
# or if the last shutdown was not normal or immediate.
RECOVER DATABASE USING BACKUP CONTROLFILE UNTIL CANCEL;
# All logs need archiving and a log switch is needed.
# ALTER SYSTEM ARCHIVE LOG ALL;
# Database can now be opened normally.
ALTER DATABASE OPEN RESETLOGS;
# No tempfile entries found to add.
#

4. This file is now ready to be run. You do this by going into SVRMGRL and running the newly named file crdb_db01_to_db02.sql. After executing the RECOVER DATABASE command, you either apply each log manually or run in AUTO mode; that is, you use either manual log recovery or automatic log recovery. This example uses AUTO mode and applies all logs.

SVRMGR> connect internal
Connected.
SVRMGR> @crdb_db01_to_db02.sql
ORACLE instance started.
Total System Global Area    19504528 bytes
Fixed Size                     64912 bytes
Variable Size               16908288 bytes
Database Buffers             2457600 bytes
Redo Buffers                   73728 bytes
Statement processed.
ORA-00279: change 68378 generated at 03/29/00 22:32:12 needed for thread 1
ORA-00289: suggestion : /oracle/admin/brdb/arch1/archbrdb_91.log
ORA-00280: change 68378 for thread 1 is in sequence #91
Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
AUTO
Log applied.
ORA-00279: change 68449 generated at 03/30/00 19:27:11 needed for thread 1
ORA-00289: suggestion : /oracle/admin/brdb/arch1/archbrdb_92.log
ORA-00280: change 68449 for thread 1 is in sequence #92
ORA-00278: log file '/oracle/admin/brdb/arch1/archbrdb_91.log' no longer needed for this recovery
Log applied.
ORA-00279: change 68450 generated at 03/30/00 19:27:18 needed for thread 1
ORA-00289: suggestion : /oracle/admin/brdb/arch1/archbrdb_93.log
ORA-00280: change 68450 for thread 1 is in sequence #93
ORA-00278: log file '/oracle/admin/brdb/arch1/archbrdb_92.log' no longer needed for this recovery
Log applied.
ORA-00279: change 68451 generated at 03/30/00 19:27:20 needed for thread 1
ORA-00289: suggestion : /oracle/admin/brdb/arch1/archbrdb_94.log
ORA-00280: change 68451 for thread 1 is in sequence #94
ORA-00278: log file '/oracle/admin/brdb/arch1/archbrdb_93.log' no longer needed for this recovery
Log applied.
ORA-00279: change 68452 generated at 03/30/00 19:27:23 needed for thread 1
ORA-00289: suggestion : /oracle/admin/brdb/arch1/archbrdb_95.log
ORA-00280: change 68452 for thread 1 is in sequence #95
ORA-00278: log file '/oracle/admin/brdb/arch1/archbrdb_94.log' no longer needed for this recovery
Log applied.
ORA-00279: change 68453 generated at 03/30/00 19:27:25 needed for thread 1
ORA-00289: suggestion : /oracle/admin/brdb/arch1/archbrdb_96.log
ORA-00280: change 68453 for thread 1 is in sequence #96
ORA-00278: log file '/oracle/admin/brdb/arch1/archbrdb_95.log' no longer needed for this recovery
SVRMGR> alter database open resetlogs;
Statement processed.
SVRMGR>

5. After the database is open, verify the new location of the data files. The database data files were originally in the /db01 mount point, and now they are in the /db02 mount point.

SVRMGR> select file#,name from v$datafile;
FILE#      NAME
---------- --------------------------------------------
1          /db02/ORACLE/brdb/system01.dbf
2          /db02/ORACLE/brdb/rbs01.dbf
3          /db02/ORACLE/brdb/temp01.dbf
4          /db02/ORACLE/brdb/users01.dbf
5          /db02/ORACLE/brdb/tools01.dbf
6          /db02/ORACLE/brdb/data01.dbf
7          /db02/ORACLE/brdb/indx01.dbf
7 rows selected.
SVRMGR>
After an ALTER DATABASE OPEN RESETLOGS, Oracle recommends performing a complete backup. This is because there would no longer be a good baseline backup if the database were to crash: the new redo logs, which have been reset, cannot be applied to the old backup.
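As a quick sketch of re-establishing that baseline (your site's backup scripts will vary, and the backup file name below is only illustrative), the first steps after opening with RESETLOGS might be to dump fresh control file backups before running a new cold or hot backup:

```sql
-- Re-create the ASCII control file backup for the newly reset database.
-- The trace file lands in the udump directory, as before.
ALTER DATABASE BACKUP CONTROLFILE TO TRACE;

-- A binary control file backup is also worth refreshing; the target
-- path here is hypothetical.
ALTER DATABASE BACKUP CONTROLFILE TO '/stage/control_brdb.bkp';
```

With those in place, a full cold or hot backup of the data files completes the new baseline.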
Using Dictionary Views for Recovery
Three main views are used to recover from a media failure. These views are V$RECOVER_FILE, V$TABLESPACE, and V$DATAFILE. These views’ main benefit is that they display the data file that needs recovery as well as the data file’s physical location and the related tablespace, or logical location. The physical and logical locations of what needs to be recovered are some of the most important pieces of information during a recovery situation. The physical data files will be restored from backup and often require other members of the technical team to retrieve the physical tape archives. Let’s walk through the use of these views to identify a data file that experienced media failure and needs recovery: 1. A user attempts to query a table t1 as part of their daily routine. The
user receives the following error. The user then calls you, the DBA, to report a problem.

SVRMGR> select * from test.t1;
C1         C2
---------- --------------------------------------------
ORA-00376: file 4 cannot be read at this time
ORA-01110: data file 4: '/db01/ORACLE/brdb/users01.dbf'

2. You query the V$RECOVER_FILE view to see whether any data files need media recovery.

SVRMGR> select * from v$recover_file;
FILE#      ONLINE  ERROR              CHANGE#    TIME
---------- ------- ------------------ ---------- ------
4          ONLINE  FILE NOT FOUND     0
1 row selected.
3. You see a problem with data file 4. You must determine which tablespace this data file belongs to so that recovery measures can be implemented.

SVRMGR> select a.name, b.name
     2> from v$datafile a, v$tablespace b
     3> where a.ts# = b.ts#
     4> and a.file# = 4;
NAME                            NAME
------------------------------- ---------------
/db01/ORACLE/brdb/users01.dbf   USERS
1 row selected.
SVRMGR>

Now that you have the data file and tablespace name, media recovery can begin on this tablespace and associated data files.
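The separate lookups above can also be collapsed into one statement. The following query is a sketch (the column aliases are ours, not the book's); it joins all three views so that each file needing recovery is reported with its tablespace and error in a single pass:

```sql
SELECT d.file#,
       d.name  AS datafile_name,   -- physical location
       t.name  AS tablespace_name, -- logical location
       r.error
FROM   v$recover_file r, v$datafile d, v$tablespace t
WHERE  r.file# = d.file#
AND    d.ts#   = t.ts#;
```

This is convenient during an outage because one query gives the whole picture you would otherwise assemble in three steps.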
Summary
This chapter focused on non-RMAN-based recoveries. You performed OS-based recoveries by using OS commands to copy and restore files, and you recovered the database in multiple scenarios in both ARCHIVELOG mode and NOARCHIVELOG mode. You learned the implications of operating in each mode: in ARCHIVELOG mode, no data is lost, and the database can remain open during certain types of recovery; in NOARCHIVELOG mode, data is lost, and the database must always be closed. You also performed a recovery from media failure to a new location in both modes and saw how each mode affects the recovery: NOARCHIVELOG mode loses data, and ARCHIVELOG mode does not. Finally, this chapter demonstrated the use of the V$ views to assist in the recovery process. These views display the data files and associated tablespaces to be recovered after a media failure.
Key Terms

Before you take the exam, make sure you're familiar with the following terms:

ARCHIVELOG mode
NOARCHIVELOG mode
cold backup
Optimal Flexible Architecture (OFA)
hot backup
backup script
BACKUP CONTROLFILE TO TRACE (ASCII control file)
RESETLOGS
manual log recovery
automatic log recovery
V$RECOVER_FILE
V$TABLESPACE
V$DATAFILE
Review Questions

1. What type of database operation allows no loss of data in a recovery? Choose all that apply.
   A. ARCHIVELOG mode
   B. NOARCHIVELOG mode
   C. Cold backup
   D. Hot backup

2. A complete database restore for a cold backup requires what files? Choose all that apply.
   A. Data files
   B. Online redo logs
   C. Control files
   D. Archive logs

3. What are the three views useful in the non–Recovery Manager recovery process? Choose all that apply.
   A. V$BACKUP
   B. V$DATAFILE
   C. V$TABLESPACE
   D. V$RECOVER_FILE

4. What view has the full name and location of the data file?
   A. V$BACKUP
   B. V$DATAFILE
   C. V$TABLESPACE
   D. V$RECOVER_FILE
5. What view has the file number of each data file? Choose all that apply.
   A. V$DATAFILE
   B. V$TABLESPACE
   C. V$RECOVER_FILE
   D. V$RECOVERY_FILE_NUMBER

6. What two views can you join to display the tablespace name and the full data file location and name? Choose all that apply.
   A. V$DATAFILE
   B. V$RECOVER_FILE
   C. V$TABLESPACE
   D. V$PROCESS

7. What does OFA stand for?
   A. Open Flexible Architecture
   B. Open File Architecture
   C. Optimal Flexible Association
   D. Optimal Flexible Architecture

8. What directory would the ALTER DATABASE BACKUP CONTROLFILE TO TRACE output reside in using an OFA-compliant database configuration?
   A. trace directory
   B. pfile directory
   C. cdump directory
   D. udump directory
9. A good hot-backup script will copy which files? Choose all that apply.
   A. Data files
   B. Control files, ASCII and binary
   C. Archive logs
   D. Online redo logs

10. A cold backup script needs to do what before copying the necessary database files to a backup location? Choose the best answer.
   A. Alter the database to ARCHIVELOG mode
   B. Shut down
   C. SHUTDOWN ABORT
   D. Create a backup control file

11. What type of backup does not require the copying of archive logs? Choose all that apply.
   A. Hot backup
   B. Cold backup
   C. Online backup
   D. Offline backup

12. Which files are required to restore a cold, or offline, backup to a functioning database to the point in time that the cold backup was taken? Choose all that apply.
   A. Binary control files
   B. ASCII control files
   C. Archive logs
   D. Online redo logs
13. What file or files must be restored if one data file is lost or removed when the restored files are based on a cold backup?
   A. The lost data file and necessary archive logs
   B. All data files only
   C. All data files and control files
   D. All data files, control files, and online redo logs

14. What file or files must be restored if one data file is lost or removed when the restored files are based on a hot backup?
   A. The lost data file and necessary archive logs
   B. All data files only
   C. All data files and control files
   D. All data files, control files, and online redo logs

15. What option automatically applies the archived logs in a recovery?
   A. AUTO
   B. Cancel
   C. A keyboard return (CRLF)
   D. SCN auto

16. What command must be included with ALTER DATABASE OPEN if a RECOVER DATABASE USING BACKUP CONTROLFILE UNTIL CANCEL command is issued during the recovery?
   A. NORESETLOGS
   B. RESETLOGS
   C. ARCHIVE LOGS
   D. REDO LOGS
17. In the event of a media failure with the original file location unusable, which recovery technique allows a DBA to recover a database to a different location on disk?
   A. Using a backup control file and data files restored to a new location
   B. Using a backup control file and redo logs restored to a new location
   C. Using a backup control file and data files restored to the same location
   D. Just restoring a cold backup to the new location and starting up
Answers to Review Questions

1. A and D. No data is lost when the database operates in ARCHIVELOG mode, and a hot backup requires the database to be in ARCHIVELOG mode.

2. A, B, and C. Data files, online redo logs, and control files are necessary to start the database when restoring from a cold backup. Archive logs are not used in cold backups.

3. B, C, and D. V$RECOVER_FILE, V$DATAFILE, and V$TABLESPACE are all used in the recovery process. V$BACKUP shows the data files currently in backup mode and has nothing to do with the OS recovery process.

4. B. V$DATAFILE has this information.

5. A and C. Only the V$DATAFILE and V$RECOVER_FILE views contain this information. The file number is used to link the two views when querying the files for recovery purposes.

6. A and C. The V$DATAFILE and V$TABLESPACE views can be joined on their TS# columns to retrieve this information.

7. D. OFA is short for Optimal Flexible Architecture.

8. D. The udump directory is where user traces are generated in an Optimal Flexible Architecture configuration. The trace file is a user trace.

9. A, B, and C. Data files and archive logs, in most cases, are the only files needed for a recovery when operating in ARCHIVELOG mode. Control files, both ASCII and binary, and the init.ora files are copied as extra precautions. Online redo logs are not used in hot-backup-related recoveries.

10. B. The database must be shut down normally to perform a cold backup.

11. B and D. Archive logs are not required for cold, or offline, backups. They are required only for hot, or online, backups.
12. A and D. Binary control files and online redo logs are necessary to restore and restart a cold backup to the point in time that the cold backup was taken. This is because no recovery is performed; the database is restored to its original state at the time of backup. Note that the data files are also necessary.

13. D. When recovering from a cold backup, the complete database must be restored. The complete database includes data files, control files, and online redo logs.

14. A. When recovering from a hot backup, only the lost data file and the changes to this data file (archive logs) need to be recovered.

15. A. AUTO tells the RECOVER command to apply all archive logs until recovery is complete or more archive logs are needed in the archive log destination directory.

16. B. ALTER DATABASE OPEN RESETLOGS must be used in conjunction with the RECOVER DATABASE USING BACKUP CONTROLFILE UNTIL CANCEL command, or the database will not open.

17. A. A backup control file and the data files restored to a new location allow recovery of the database to a new location on disk.
Chapter 15

Incomplete Oracle Recovery with Archiving

ORACLE8i BACKUP AND RECOVERY EXAM OBJECTIVES OFFERED IN THIS CHAPTER:

Determine when to use an incomplete recovery to recover the database

Perform an incomplete database recovery

Recover after losing current and inactive nonarchived redo log files
Exam objectives are subject to change at any time without prior notice and at Oracle's sole discretion. Please visit Oracle's Training and Certification Web site (http://education.oracle.com/certification/index.html) for the most current exam objectives listing.
Incomplete database recovery requires an understanding of the redo log and ARCHIVELOG processes, the synchronization of the Oracle database, and the options allowed for performing an incomplete recovery. This chapter discusses the incomplete recovery process and the commands associated with each incomplete recovery option. It also includes an example showing how to perform an incomplete recovery due to lost or corrupted current and inactive nonarchived redo log files.
Determining When to Use Incomplete Recovery
Incomplete recovery occurs when the database is not recovered entirely to the point at which the database failed. It is a partial recovery of the database: some archive logs are applied, but not all. This means that only a portion of the transactions get applied. There are three types of incomplete media recovery:
Cancel-based
Time-based
Change-based
Each option allows recovery to a point in time prior to the failure. The main reason for having three options is for better flexibility and control of the stopping point during the recovery process. Each option is described in detail in the next section.
Unlike incomplete recovery, complete recovery reads through the entire set of archived and nonarchived log files up to the time of failure. No loss of transactions occurs, and the recovered database is complete. All transactions are applied to the database via change vectors in the log files. Incomplete recovery is performed anytime you don't want to apply all the archived and nonarchived log files that are necessary to bring the database up to the time of failure. The database is essentially not completely recovered; transactions remain missing. Figure 15.1 illustrates the different types of incomplete recovery and how the database is not completely recovered.

FIGURE 15.1 Incomplete recovery in ARCHIVELOG mode for media failure on January 28th

[The figure shows a timeline from a database backup on 1-Jan-00, through archive logs Arch11.log, Arch21.log, Arch38.log, and Arch53.log, to a media failure and database crash on 28-Jan-00. Three stopping points short of the crash are annotated: RECOVER DATABASE UNTIL CHANGE 6748 (a system change number in Arch37.log), RECOVER DATABASE UNTIL TIME '2000-1-25-13:00:00', and RECOVER DATABASE UNTIL CANCEL.]
Incomplete recovery should be performed when the DBA wants or is required to recover the database to a point in time before the database failed. Certain circumstances require an incomplete recovery. These include data file corruption, redo log corruption, or the loss of a table due to user error. In some cases, incomplete recovery is the only option available to you. In a failure situation with a loss or corruption of the current and inactive nonarchived redo log files, recovering the database without the transactions in these files is the only option. Otherwise, if a complete recovery were performed, the error would be reintroduced in the recovery process.
Performing an Incomplete Database Recovery
This section details the three types of incomplete database recovery: cancel-based, time-based, and change-based. Each of these methods is used for different circumstances, and you should be aware of when each is appropriate.
Cancel-Based Recovery

In cancel-based recovery, you cancel the recovery before the point of failure. Cancel-based recovery provides the least flexibility and control of the stopping point during the recovery process. You apply archive logs during the recovery process, and at some point before complete recovery, you enter the CANCEL command. At that point, recovery ends, and no more archive logs are applied. The following is a sample of cancel-based incomplete recovery:

SVRMGR> recover database until cancel;
In Oracle8i, the SQL*Plus tool has been given functionality similar to SVRMGR for recovery and other purposes. While SVRMGR has been the standard tool in the past, Oracle is moving toward the use of SQL*Plus. In 8i, both tools can perform the recovery functions.
An example of when to use cancel-based incomplete recovery is restoring a lost data file from a hot backup. To restore a lost data file, you perform the following steps:

1. Make sure that the database is shut down by using a SHUTDOWN command from Server Manager.

SVRMGR> shutdown abort

2. Make sure that current copies of the data files, control files, and
parameter files exist, in case of errors in the recovery process. This will allow you to restart the recovery process, if needed, without errors introduced during a failed recovery. 3. Make sure that a current backup exists. Copied files from the current
backup will replace the failed data files, online redo log files, or control files. 4. Restore the data file from the backup location to the proper location.
You do this by issuing an operating system–specific command. In Unix, you would use a cp command, as in this example:

cp /stage/data01.dbf /oracle/database/lndb/data01.dbf
5. Start the database in MOUNT mode.
SVRMGRL> startup mount

6. Verify that all the data files you need to recover are online. The following query shows the status of each data file as online (with the exception of the system data file, which is always online; its status equals SYSTEM).

SVRMGR> select file#,status,enabled,name from v$datafile;
FILE#      STATUS  ENABLED    NAME
---------- ------- ---------- -------------------------
1          SYSTEM  READ WRITE /oracle/database/lndb/system01.dbf
2          ONLINE  READ WRITE /oracle/database/lndb/rbs01.dbf
3          ONLINE  READ WRITE /oracle/database/lndb/temp01.dbf
4          ONLINE  READ WRITE /oracle/database/lndb/users01.dbf
5          ONLINE  READ WRITE /oracle/database/lndb/tools01.dbf
6          ONLINE  READ WRITE /oracle/database/lndb/data01.dbf
7          ONLINE  READ WRITE /oracle/database/lndb/indx01.dbf
7 rows selected.

7. Perform incomplete recovery by using the UNTIL CANCEL clause in the
RECOVER DATABASE command.

SVRMGRL> recover database until cancel;

8. Open the database with the RESETLOGS option.

SVRMGRL> alter database open resetlogs;
It is desirable to have normal user transactions off of the system during this process. Do so with the ALTER SYSTEM ENABLE RESTRICTED SESSION command, or in a client/server environment, shut the listener down with a command similar to lsnrctl stop.

Using the RESETLOGS clause with the ALTER DATABASE OPEN command is necessary for all types of incomplete recovery. The RESETLOGS option ensures that the log files applied in recovery can never be used again, by resetting the log sequence and rebuilding the existing online redo logs. This process essentially purges all transactions existing in the nonarchived log files; the purged transactions can never be recovered after this is performed. The command also resynchronizes the log files with the data files and control files. If the logs were not purged, they would create bad archive logs. This is the main reason why a backup of the control file, data files, and redo logs should be done prior to performing an incomplete recovery.

9. Perform a new cold or hot backup of the database. Existing backups are no longer valid.
Remember, after the ALTER DATABASE OPEN RESETLOGS command is applied, the previous log files and backed-up data files are useless for this newly recovered database. This is because a gap exists in the log files. The old backup data files and logs can never be synchronized with the database. A complete backup must be done following an incomplete recovery of any type.
Time-Based Recovery

In time-based recovery, the DBA recovers the database to a point in time before the point of failure. Time-based recovery provides more flexibility and control in the incomplete recovery process than the cancel-based option. The cancel-based option's granularity is the size of a redo log file; in other words, when applying a redo log file, you get all the transactions in that file, regardless of the time period over which that log was filled. With time-based recovery, you apply archived logs to the database up to a designated point in time, which could fall in the middle of an archive log, so the whole log is not necessarily applied. You can therefore stop the recovery just before a fatal action in the database, such as data block corruption or the loss of a database object due to user error. Below is a sample of time-based incomplete recovery:

SVRMGR> recover database until time '1999-12-31:22:55:00';

You can use the preceding example on restoring lost data files, but utilize time-based recovery in place of cancel-based recovery. You restore all the necessary data files from a hot backup. The only change is that you use the UNTIL TIME clause in step 7. All other steps remain the same.

Step 7: Perform incomplete recovery by using the UNTIL TIME clause.

SVRMGR> recover database until time '1999-12-31:22:55:00';
Change-Based Recovery In change-based recovery, you recover to a system change number (SCN) before the point of failure. This type of incomplete recovery gives you the most control.
As you have already learned, the SCN is what Oracle uses to uniquely identify each committed transaction. The SCN is a number that orders the transactions consecutively in the redo logs as each transaction occurs. This number is also recorded in transaction tables within the rollback segments, control files, and data file headers. The SCN coordination between the transactions and these files synchronizes the database to a consistent state. Each redo log is associated with a low and a high SCN. This SCN information can be viewed in the V$LOG_HISTORY output below. Notice the low and high SCNs in the FIRST_CHANGE# and NEXT_CHANGE# columns for each log sequence, or log file.

SQLWKS> select sequence#,first_change#,next_change#,first_time
     2> from v$log_history where sequence# > 10326;
SEQUENCE#  FIRST_CHAN NEXT_CHANG FIRST_TIME
---------- ---------- ---------- --------------------
10327      60731807   60732514   10-DEC-99
10328      60732514   60732848   10-DEC-99
10329      60732848   60747780   10-DEC-99
10330      60747780   60748140   11-DEC-99
4 rows selected.

All transactions between these SCNs are included in these logs. Oracle determines what is needed for recovery from these SCNs and from the SCN information recorded in transaction tables within the rollback segments, control files, and data file headers. You can use the previous example of incomplete database recovery, but utilize change-based recovery in place of cancel-based or time-based recovery. First, restore all the needed data files from a hot backup. Only step 7 changes.

Step 7: Perform incomplete recovery by using the UNTIL CHANGE clause.

SVRMGR> recover database until change 60747681;
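How might you choose a change number such as 60747681 in the first place? One hedged approach, assuming you know roughly when the fatal action occurred, is to take the highest FIRST_CHANGE# recorded before that time; the timestamp below is illustrative:

```sql
-- Find the highest starting SCN among logs that began before the
-- known failure time. The date here is hypothetical.
SELECT MAX(first_change#)
FROM   v$log_history
WHERE  first_time < TO_DATE('1999-12-10:19:00:00',
                            'YYYY-MM-DD:HH24:MI:SS');
```

Recovering UNTIL CHANGE with a value at or just below the result stops the recovery short of the damage.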
Recovering after Losing Current and Inactive Nonarchived Redo Logs
Incomplete recovery is necessary if there is a loss of the current and/or inactive nonarchived redo log files. If this scenario occurs, it means that you don't have all the redo log files up to the point of failure, so the only alternative is to recover prior to the point of failure.
Oracle has made improvements to compensate for this failure by giving you the ability to mirror copies of redo logs or to create group members on different file systems.
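For example, a further member could be added to each group on a separate file system; the path below is hypothetical and simply illustrates placing the new member on a different disk than the existing members:

```sql
-- Add a member on a third file system to log group 1, so that the
-- loss of any single disk no longer destroys the whole group.
ALTER DATABASE ADD LOGFILE MEMBER
  '/redo03/ORACLE/brdb/redo01c.log' TO GROUP 1;
```

Repeat the statement for each log group to protect the entire redo thread.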
Some common error messages that might be seen in the alert log are ORA-00255, ORA-00312, ORA-00286, and ORA-00334. Each of the error messages indicates a problem writing to the online redo log files. To perform incomplete recovery after the redo log files have been lost, you do the following: 1. Start Server Manager and execute a CONNECT INTERNAL command.
SVRMGRL>
connect internal;
2. Execute a SHUTDOWN command.
SVRMGRL>
shutdown;
3. Execute a STARTUP MOUNT to read the contents of the control file.
SVRMGRL>
startup mount;
4. Execute a RECOVER DATABASE UNTIL CANCEL to start the recovery
process. SVRMGRL>
recover database until cancel;
5. Apply the necessary archive logs up to, but not including, the lost or
corrupted log.
6. Open the database and reset the log files.

SVRMGRL> alter database open resetlogs;

7. Switch the log files to see whether the new logs are working.

SVRMGRL> alter system switch logfile;

8. Shut down the database.

SVRMGRL> shutdown normal;

9. Execute STARTUP and SHUTDOWN NORMAL commands, and validate that the database is functional by checking the alert log after each command is executed.

SVRMGRL> startup;
SVRMGRL> shutdown normal;

10. Perform a cold backup or hot backup.
If your archive logs have been restored to a different location, you can simply change LOG_ARCHIVE_DEST to point to that new location. This may occur in a recovery situation where you restore your archive logs from tape to a staging area on disk.
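The steps above can be collected into a single command script once the damage is understood. This is only a sketch, not a turnkey procedure: it assumes the archive logs are already in place under LOG_ARCHIVE_DEST, and the RECOVER prompts (step 5) must still be answered interactively when the script is run.

```shell
# Sketch of steps 1-7: write the Server Manager commands to a script.
# Assumes archive logs have been restored to LOG_ARCHIVE_DEST already.
cat > recover_until_cancel.sql <<'EOF'
connect internal
shutdown
startup mount
recover database until cancel;
alter database open resetlogs;
alter system switch logfile;
EOF
# Run with: svrmgrl @recover_until_cancel.sql (answer the apply prompts).
echo "script has $(wc -l < recover_until_cancel.sql) commands"
```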
Summary
Incomplete recovery is recovering a database to a point prior to where the database failed, or to the last available transaction at the time of the failure. In other words, the recovered database is missing transactions and is incomplete. As a DBA, you might need to perform incomplete database recovery for various reasons. The failure might be within the database, such as a corruption error or a dropped database object. This means that recovery must stop short of using all the archive logs that are available to be
applied. Otherwise, the failure could be reintroduced to the database as the transactions are read from the archived redo logs. An example of incomplete recovery is recovering from the loss of a current and/or inactive nonarchived redo log file. The reason this can be only an incomplete recovery is that not all the previous transactions are available: at least one online log file is corrupted or lost.
Key Terms

Before you take the exam, make sure you’re familiar with the following terms:

incomplete recovery
cancel-based recovery
UNTIL CANCEL
RESETLOGS
time-based recovery
UNTIL TIME
change-based recovery
system change number (SCN)
V$LOG_HISTORY
UNTIL CHANGE
Review Questions

1. What are the three types of incomplete recovery?

A. Change-based
B. Time-based
C. Stop-based
D. Cancel-based
E. Quit-based

2. Which type of incomplete recovery can be performed in NOARCHIVELOG mode?

A. Change-based
B. Time-based
C. Stop-based
D. Cancel-based
E. None

3. You’re a DBA and you have just performed an incomplete recovery. There has been no backup following the incomplete recovery. After a couple of hours of use, the database fails again due to the loss of a disk that stores some of your data files. What type of incomplete recovery can you perform? Choose all that apply.

A. Change-based
B. Stop-based
C. Time-based
D. Cancel-based
E. None

4. What does Oracle use to uniquely identify each committed transaction in the log files?

A. Unique transaction ID
B. Static transaction ID
C. System change number
D. Serial change number
E. Transaction change number

5. In what type of recovery is it necessary to execute the ALTER DATABASE OPEN RESETLOGS command? Choose all that apply.

A. Complete recovery
B. Incomplete recovery
C. Cancel-based recovery
D. Time-based recovery

6. What should be performed after the ALTER DATABASE OPEN RESETLOGS command has been applied?

A. A recovery of the database
B. A backup of the database
C. An import of the database
D. Nothing

7. What type of incomplete recovery gives the DBA the most control?

A. Cancel-based.
B. Time-based.
C. Change-based.
D. All give equal control.

8. Which type of incomplete recovery gives the DBA the least control?

A. Cancel-based.
B. Time-based.
C. Change-based.
D. All give equal control.
Answers to Review Questions

1. A, B, and D. Answers A, B, and D all describe valid types of incomplete recovery. Answers C and E are not incomplete recovery types.

2. E. Incomplete recovery cannot be performed in NOARCHIVELOG mode.

3. E. Incomplete recovery cannot be performed unless a new backup is taken after the first failure. All backups prior to an incomplete recovery are invalidated for use with any of the existing data files, control files, or redo logs.

4. C. The system change number, or SCN, uniquely identifies each committed transaction in the log files.

5. B, C, and D. All forms of incomplete recovery require the use of the RESETLOGS clause during the ALTER DATABASE OPEN command. All redo logs must be reset to a new sequence number. This invalidates all prior logs for that database.

6. B. A backup of the database should be performed after the ALTER DATABASE OPEN RESETLOGS command has been applied.

7. C. Change-based recovery gives the DBA the most control of the incomplete recovery process, in that the stopping point can be specified by SCN.

8. A. Cancel-based recovery applies the complete archive log before canceling or stopping the recovery process. Therefore, you cannot recover part of the transactions within an archive log, as you can with change-based or time-based recovery.
Chapter 16
Oracle Export and Import Utilities

ORACLE8i BACKUP AND RECOVERY EXAM OBJECTIVES OFFERED IN THIS CHAPTER:

Create a complete logical backup of a database object with the Export utility
Create an incremental backup of a database object with the Export utility
Invoke the direct-path method export
Recover a database object with the Import utility
Perform a tablespace point-in-time recovery (TSPITR)
Exam objectives are subject to change at any time without prior notice and at Oracle’s sole discretion. Please visit Oracle’s Training and Certification Web site (http:// education.oracle.com/certification/index.html) for the most current exam objectives listing.
The Oracle database software provides two primary ways to back up the database. The first is a physical backup, which consists of copying database files and recovering with those copies as needed; you have been reading about this approach in the previous four chapters. The second is a logical backup. Oracle provides the Export utility to create logical backups of the database and the Import utility to perform logical recoveries. A logical backup reads certain database objects and writes them to a file, without concern for their physical location; the file can then be inserted back into the database as a logical restore. This chapter demonstrates how to back up and recover the database with the Export and Import utilities. It also explains incremental backups and recovery.
Creating a Complete Logical Backup with the Export Utility
As stated in the introduction to this chapter, the Export utility can be used to create logical backups of the Oracle database. There are two types of exports: the conventional-path export and the direct-path export. This section covers the conventional-path export; the “Invoking the Direct-Path Export” section later in this chapter is devoted to the other method. The Export utility performs a full SELECT of a table and then dumps the data into a binary file called a dump file. This file has a DMP file extension and is named expdat.dmp by default. The Export utility then creates the
tables and indexes by reproducing the Data Definition Language (DDL) of the backed-up tables. This information can then be played back by the Import utility to rebuild the object and its underlying data. To display all the export options available, issue the command EXP -HELP from the command line.

[oracle@DS-LINUX exp]$ exp -help
Export: Release 8.1.5.0.0 - Production on Wed Apr 5 19:15:10 2000
(c) Copyright 1999 Oracle Corporation.  All rights reserved.
You can let Export prompt you for parameters by entering the EXP command followed by your username/password:

  Example: EXP SCOTT/TIGER

Or, you can control how Export runs by entering the EXP command followed by various arguments. To specify parameters, you use keywords:

  Format:  EXP KEYWORD=value or KEYWORD=(value1,value2,...,valueN)
  Example: EXP SCOTT/TIGER GRANTS=Y TABLES=(EMP,DEPT,MGR)
           or TABLES=(T1:P1,T1:P2), if T1 is partitioned table

USERID must be the first parameter on the command line.
Keyword       Description (Default)
------------------------------------------------------------
USERID        username/password
BUFFER        size of data buffer
FILE          output files (EXPDAT.DMP)
COMPRESS      import into one extent (Y)
GRANTS        export grants (Y)
INDEXES       export indexes (Y)
ROWS          export data rows (Y)
CONSTRAINTS   export constraints (Y)
LOG           log file of screen output
DIRECT        direct path (N)
FEEDBACK      display progress every x rows (0)
FILESIZE      maximum size of each dump file
QUERY         select clause used to export a subset of a table
VOLSIZE       number of bytes to write to each tape volume
FULL          export entire file (N)
OWNER         list of owner usernames
TABLES        list of table names
RECORDLENGTH  length of IO record
INCTYPE       incremental export type
RECORD        track incr. export (Y)
PARFILE       parameter filename
CONSISTENT    cross-table consistency
STATISTICS    analyze objects (ESTIMATE)
TRIGGERS      export triggers (Y)

The following keywords apply only to transportable tablespaces
TRANSPORT_TABLESPACE  export transportable tablespace metadata (N)
TABLESPACES           list of tablespaces to transport

Export terminated successfully without warnings.
[oracle@DS-LINUX exp]$
The options used in the following example are obtained by prompting the user; this is called an interactive export. Alternatively, you can supply all this information in a parameter file or as fully qualified command-line parameters so that no prompting is needed. An example of that technique follows the interactive export. In this interactive export, you will perform a full export of the table t1, which is owned by the user TEST. Here are the steps used to perform the export:

1. Start the Export utility by executing exp on the command line. In
Unix, you should first set your ORACLE_SID environment variable to point to the database you are trying to connect to, by executing export ORACLE_SID=brdb at the command line. Alternatively, you could connect via SQL*Net through a tnsnames.ora entry that explicitly defines the target database.

[oracle@DS-LINUX exp]$ exp
Export: Release 8.1.5.0.0 - Production on Wed Apr 5 18:13:49 2000
(c) Copyright 1999 Oracle Corporation.  All rights reserved.
Username: test
Password:
Connected to: Oracle8i Enterprise Edition Release 8.1.5.0.0 - Production
With the Partitioning and Java options
PL/SQL Release 8.1.5.0.0 - Production

2. Enter the appropriate buffer size. If you want to keep the default
buffer size of 4096 bytes, press Enter. This buffer is used to fetch rows of data; the default size is usually adequate, and the maximum size is 64KB.

Enter array fetch buffer size: 4096 >

3. The next prompt asks for the filename of the dump file. You can enter
your own filename or choose the default, expdat.dmp.

Export file: expdat.dmp > t1.dmp
4. The next prompt designates users or tables in the dump file. The USERS choice (2) exports all objects owned by the user that you are connected as. The TABLES choice (3) exports only the designated tables.

(2)U(sers), or (3)T(ables): (2)U > 3

5. The next prompt asks whether to export the table data.

Export table data (yes/no): yes> y

6. The next prompt asks whether to compress extents. For example, if
a table has 20 1MB extents and you choose yes, the extents will be compressed into a single 20MB extent; if you choose no, the 20 1MB extents will remain. The compress option reduces fragmentation in the tablespace by combining the extents into one larger initial extent. This can present a problem when the compressed size is large: if there is not enough contiguous free space in the tablespace, the import will fail. This can be remedied by resizing the tablespace.

Compress extents (yes/no): yes > y

7. The next prompt specifies which tables to export. In this example, you
have chosen the t1 table. When the export is complete, you are prompted for another table or partition; when you are done, press Enter.

Export done in US7ASCII character set and US7ASCII NCHAR character set
About to export specified tables via Conventional Path ...
Table(T) or Partition(T:P) to be exported: (RETURN to quit) > t1
. . exporting table                               T1          3 rows exported
Table(T) or Partition(T:P) to be exported: (RETURN to quit) >
Export terminated successfully without warnings.
[oracle@DS-LINUX exp]$

This concludes the interactive export of the individual table t1. The same process can be performed without prompts or user interaction by using a parameter file or by specifying all parameters on the command line. Specifying a PARFILE at the command line, for example, is one method of performing a non-interactive export. Here is an example of this technique. This example exports the full database, which includes all users and all objects; the same approach works on a table-by-table basis as well.

[oracle@DS-LINUX exp]$ exp parfile=brdbpar.pr file=brdb.dmp
[oracle@DS-LINUX exp]$ cat brdbpar.pr
USERID=system/manager
DIRECT=Y
FULL=Y
GRANTS=Y
ROWS=Y
CONSTRAINTS=Y
RECORD=Y

The following is an example of using command-line parameter specifications. This technique also requires no user interaction to respond to prompts.

[oracle@DS-LINUX exp]$ exp userid=system/manager full=y constraints=Y file=brdb1.dmp
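For scheduled backups it is convenient to have a wrapper script generate the parameter file before invoking exp. The sketch below reuses the book's example names; system/manager is the example credential from the text, not something to use verbatim, and the exp invocation is only echoed.

```shell
# Sketch: build the parameter file from the example above, then show
# the exp invocation. Credentials/file names are the book's examples.
cat > brdbpar.pr <<'EOF'
USERID=system/manager
DIRECT=Y
FULL=Y
GRANTS=Y
ROWS=Y
CONSTRAINTS=Y
RECORD=Y
EOF
echo "exp parfile=brdbpar.pr file=brdb.dmp"   # run this when ready
```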
A useful side effect of an export is that it can detect block corruption in the database: because the export reads the exported tables in full, it will fail when it encounters a corrupted or unreadable block. The corruption has to be fixed before the logical backup can be completed.
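Because a corrupted block aborts the export, it is worth scanning the export log for ORA- errors after each run. A sketch follows; the log content is fabricated for illustration (ORA-01578 is the data-block-corruption error).

```shell
# Sketch: detect a failed logical backup by scanning its log for ORA- errors.
# The log content below is fabricated for illustration.
cat > export.log <<'EOF'
. . exporting table                               T1
EXP-00056: ORACLE error 1578 encountered
ORA-01578: ORACLE data block corrupted (file # 4, block # 42)
EOF
if grep -q '^ORA-' export.log; then
  status="ORA- error detected"
else
  status="clean"
fi
echo "$status"
```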
Creating an Incremental Backup with the Export Utility
There are three types of exports: complete, cumulative, and incremental. Each type exports data in a different manner. The complete export exports the entire database while updating the export record-keeping tables. The cumulative and incremental exports capture the changed data, whether in new tables or within existing tables. Incremental and cumulative exports are much quicker than complete exports, because the changed data is much smaller than the complete data set.

The three types of exports can be combined to make an efficient backup strategy. A complete export is the baseline backup of the database, and incremental and cumulative exports back up the changes occurring between complete exports. An export backup strategy might include a complete export every Sunday, with incremental exports during the week. Incremental and cumulative exports are similar in that both back up changes since the complete (or previous cumulative) export. However, the incremental export is usually faster to take, yet means more work for you (as the DBA) during a restore, because several incremental backups may need to be applied; a much smaller number of cumulative backups would need to be applied on top of a complete backup.

The following example demonstrates the use of a complete and an incremental export to back up a database. It shows how the incremental method exports data modified and database objects created since the last complete export.

1. The database gets a baseline complete export. Run the complete
export with the following parameters executed at the command line: the SYSTEM user (or an account with the DBA role), the FULL database, an incremental type of COMPLETE, constraints selected, and the dump filename. With these parameters specified at the command line, you don’t have to respond to prompts as in the preceding example. Below is an example of this syntax and its associated output.

[oracle@DS-LINUX exp]$ exp userid=system/manager full=y inctype=complete constraints=Y file=complete1.dmp
Export: Release 8.1.5.0.0 - Production on Fri Apr 7 22:39:09 2000
(c) Copyright 1999 Oracle Corporation.  All rights reserved.

Connected to: Oracle8i Enterprise Edition Release 8.1.5.0.0 - Production
With the Partitioning and Java options
PL/SQL Release 8.1.5.0.0 - Production
Export done in US7ASCII character set and US7ASCII NCHAR character set
About to export the entire database ...
. exporting tablespace definitions
. exporting profiles
. exporting user definitions
. exporting roles
. exporting resource costs
. exporting rollback segment definitions
. exporting database links
. exporting sequence numbers
. exporting directory aliases
. exporting context namespaces
. exporting foreign function library names
. exporting object type definitions
. exporting system procedural objects and actions
. exporting pre-schema procedural objects and actions
. exporting cluster definitions
. about to export SYSTEM's tables via Conventional Path ...
. . exporting table                      DEF$_AQCALL          0 rows exported
. . exporting table                     DEF$_AQERROR          0 rows exported
. . exporting table                    DEF$_CALLDEST          0 rows exported
. . exporting table                 DEF$_DEFAULTDEST          0 rows exported
. . exporting table                 DEF$_DESTINATION          0 rows exported
. . exporting table                       DEF$_ERROR          0 rows exported
. . exporting table                         DEF$_LOB          0 rows exported
. . exporting table                      DEF$_ORIGIN          0 rows exported
. . exporting table                  DEF$_PROPAGATOR          0 rows exported
. . exporting table         DEF$_PUSHED_TRANSACTIONS          0 rows exported
. . exporting table                    DEF$_TEMP$LOB          0 rows exported
. . exporting table          SQLPLUS_PRODUCT_PROFILE          0 rows exported
. about to export OUTLN's tables via Conventional Path ...
. . exporting table                              OL$          0 rows exported
. . exporting table                         OL$HINTS          0 rows exported
. about to export DBSNMP's tables via Conventional Path ...
. about to export TEST's tables via Conventional Path ...
. . exporting table                               T1          3 rows exported
. exporting referential integrity constraints
. exporting synonyms
. exporting views
. exporting stored procedures
. exporting operators
. exporting indextypes
. exporting bitmap, functional and extensible indexes
. exporting posttables actions
. exporting triggers
. exporting snapshots
. exporting snapshot logs
. exporting job queues
. exporting refresh groups and children
. exporting dimensions
. exporting post-schema procedural objects and actions
. exporting user history table
. exporting default and system auditing options
Export terminated successfully without warnings.
[oracle@DS-LINUX exp]$

2. The next step is to enter some data into the existing t1 table to simulate
weekly changes. You will also add a new table called t2 to simulate a new table creation.

[oracle@DS-LINUX exp]$ svrmgrl
Oracle Server Manager Release 3.1.5.0.0 - Production
(c) Copyright 1997, Oracle Corporation.  All Rights Reserved.
Oracle8i Enterprise Edition Release 8.1.5.0.0 - Production
With the Partitioning and Java options
PL/SQL Release 8.1.5.0.0 - Production
SVRMGR> connect test/test
Connected.
SVRMGR> select * from t1;
C1         C2
---------- ----------------------------------------
1          This is a test one - before hot backup
3          This is a test three - after hot backup
2          This is a test two - after hot backup
3 rows selected.
SVRMGR> insert into t1
     2> values (4, 'This is test four - after complete export');
1 row processed.
SVRMGR> commit;
Statement processed.
SVRMGR> create table t2 (c1 number, c2 char(50));
Statement processed.
SVRMGR> exit
Server Manager complete.
[oracle@DS-LINUX exp]$

3. Perform an incremental backup, which should pick up all this new
activity in the database. This backup simulates an incremental export performed on weeknights, after a complete export on Sunday. You also execute this in command-line mode, with the same parameters as the complete export, except that INCTYPE is set to INCREMENTAL.

[oracle@DS-LINUX exp]$ exp userid=system/manager full=y inctype=incremental constraints file=incremental1.dmp
Export: Release 8.1.5.0.0 - Production on Fri Apr 7 22:47:19 2000
(c) Copyright 1999 Oracle Corporation.  All rights reserved.
Connected to: Oracle8i Enterprise Edition Release 8.1.5.0.0 - Production
With the Partitioning and Java options
PL/SQL Release 8.1.5.0.0 - Production
Export done in US7ASCII character set and US7ASCII NCHAR character set
About to export the entire database ...
. exporting tablespace definitions
. exporting profiles
. exporting user definitions
. exporting roles
. exporting resource costs
. exporting rollback segment definitions
. exporting database links
. exporting sequence numbers
. exporting directory aliases
. exporting context namespaces
. exporting foreign function library names
. exporting object type definitions
. exporting system procedural objects and actions
. exporting pre-schema procedural objects and actions
. exporting cluster definitions
. about to export SYSTEM's tables via Conventional Path ...
. about to export OUTLN's tables via Conventional Path ...
. about to export DBSNMP's tables via Conventional Path ...
. about to export TEST's tables via Conventional Path ...
. . exporting table                               T1          4 rows exported
. . exporting table                               T2          0 rows exported
. exporting referential integrity constraints
. exporting synonyms
. exporting views
. exporting stored procedures
. exporting operators
. exporting indextypes
. exporting bitmap, functional and extensible indexes
. exporting posttables actions
. exporting triggers
. exporting snapshots
. exporting snapshot logs
. exporting job queues
. exporting refresh groups and children
. exporting dimensions
. exporting post-schema procedural objects and actions
. exporting user history table
. exporting default and system auditing options
. exporting information about dropped objects
Export terminated successfully without warnings.
[oracle@DS-LINUX exp]$

Notice that only the changes to the existing database objects plus any new objects are exported to the dump file.

4. Perform a complete export again, which exports everything in the
database regardless of changes. The parameters are the same as in the first complete export. This also resets the export record-keeping tables.

[oracle@DS-LINUX /oracle]$ exp userid=system/manager full=y inctype=complete constraints=Y file=complete2.dmp
Export: Release 8.1.5.0.0 - Production on Sat Apr 8 14:36:58 2000
(c) Copyright 1999 Oracle Corporation.  All rights reserved.
Connected to: Oracle8i Enterprise Edition Release 8.1.5.0.0 - Production
With the Partitioning and Java options
PL/SQL Release 8.1.5.0.0 - Production
Export done in US7ASCII character set and US7ASCII NCHAR character set
About to export the entire database ...
. exporting tablespace definitions
. exporting profiles
. exporting user definitions
. exporting roles
. exporting resource costs
. exporting rollback segment definitions
. exporting database links
. exporting sequence numbers
. exporting directory aliases
. exporting context namespaces
. exporting foreign function library names
. exporting object type definitions
. exporting system procedural objects and actions
. exporting pre-schema procedural objects and actions
. exporting cluster definitions
. about to export SYSTEM's tables via Conventional Path ...
. . exporting table                      DEF$_AQCALL          0 rows exported
. . exporting table                     DEF$_AQERROR          0 rows exported
. . exporting table                    DEF$_CALLDEST          0 rows exported
. . exporting table                 DEF$_DEFAULTDEST          0 rows exported
. . exporting table                 DEF$_DESTINATION          0 rows exported
. . exporting table                       DEF$_ERROR          0 rows exported
. . exporting table                         DEF$_LOB          0 rows exported
. . exporting table                      DEF$_ORIGIN          0 rows exported
. . exporting table                  DEF$_PROPAGATOR          0 rows exported
. . exporting table         DEF$_PUSHED_TRANSACTIONS          0 rows exported
. . exporting table                    DEF$_TEMP$LOB          0 rows exported
. . exporting table          SQLPLUS_PRODUCT_PROFILE          0 rows exported
. about to export OUTLN's tables via Conventional Path ...
. . exporting table                              OL$          0 rows exported
. . exporting table                         OL$HINTS          0 rows exported
. about to export DBSNMP's tables via Conventional Path ...
. about to export TEST's tables via Conventional Path ...
. . exporting table                               T1          4 rows exported
. . exporting table                               T2          0 rows exported
. exporting referential integrity constraints
. exporting synonyms
. exporting views
. exporting stored procedures
. exporting operators
. exporting indextypes
. exporting bitmap, functional and extensible indexes
. exporting posttables actions
. exporting triggers
. exporting snapshots
. exporting snapshot logs
. exporting job queues
. exporting refresh groups and children
. exporting dimensions
. exporting post-schema procedural objects and actions
. exporting user history table
. exporting default and system auditing options
Export terminated successfully without warnings.
[oracle@DS-LINUX exp]$

5. Finally, you can perform another incremental export. Since the
record-keeping tables have been adjusted and there are no changes to the database, there will be nothing to export. This is because an incremental export includes only tables that have been modified or created since the last complete or cumulative export.

[oracle@DS-LINUX /oracle]$ exp userid=system/manager full=y inctype=incremental constraints=Y file=incremental2.dmp
Export: Release 8.1.5.0.0 - Production on Sat Apr 8 14:37:42 2000
(c) Copyright 1999 Oracle Corporation.  All rights reserved.
Connected to: Oracle8i Enterprise Edition Release 8.1.5.0.0 - Production
With the Partitioning and Java options
PL/SQL Release 8.1.5.0.0 - Production
Export done in US7ASCII character set and US7ASCII NCHAR character set
About to export the entire database ...
. exporting tablespace definitions
. exporting profiles
. exporting user definitions
. exporting roles
. exporting resource costs
. exporting rollback segment definitions
. exporting database links
. exporting sequence numbers
. exporting directory aliases
. exporting context namespaces
. exporting foreign function library names
. exporting object type definitions
. exporting system procedural objects and actions
. exporting pre-schema procedural objects and actions
. exporting cluster definitions
. about to export SYSTEM's tables via Conventional Path ...
. about to export OUTLN's tables via Conventional Path ...
. about to export DBSNMP's tables via Conventional Path ...
. about to export TEST's tables via Conventional Path ...
. exporting referential integrity constraints
. exporting synonyms
. exporting views
. exporting stored procedures
. exporting operators
. exporting indextypes
. exporting bitmap, functional and extensible indexes
. exporting posttables actions
. exporting triggers
. exporting snapshots
. exporting snapshot logs
. exporting job queues
. exporting refresh groups and children
. exporting dimensions
. exporting post-schema procedural objects and actions
. exporting user history table
. exporting default and system auditing options
. exporting information about dropped objects
Export terminated successfully without warnings.
[oracle@DS-LINUX /oracle]$
The export record-keeping tables, which record the incremental export information, are sys.incvid, sys.incfil, and sys.incexp.
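The Sunday-complete, weeknight-incremental strategy described earlier can be scripted by choosing INCTYPE from the day of the week. A sketch follows; the exp command is only echoed rather than run, the credentials are the book's example values, and `date +%A` is assumed to return English day names.

```shell
# Sketch: pick tonight's INCTYPE from the day of the week, implementing
# the complete-on-Sunday, incremental-on-weeknights strategy.
pick_inctype() {
  if [ "$1" = "Sunday" ]; then
    echo complete
  else
    echo incremental
  fi
}

day=$(date +%A)                 # assumes an English locale
inctype=$(pick_inctype "$day")
# Echo rather than run; credentials are the book's example values.
echo "exp userid=system/manager full=y inctype=$inctype file=${inctype}.dmp"
```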
Invoking the Direct-Path Export
An alternative to the conventional-path export is the direct-path export, enabled by specifying direct=y on the command line. This option is substantially faster than the conventional-path export: it bypasses the SQL evaluation layer as it generates the commands stored in the binary dump file, and this is where the performance improvement comes from. Figure 16.1 displays the execution path of each type of export.

FIGURE 16.1  The differences between direct-path and conventional-path exports

[Figure: two parallel diagrams showing the flow from the database through the buffer cache and evaluating buffer to the dump file, one for the conventional path and one for the direct path.]
Now, step through the following example of using the direct-path export method with the TEST user, which owns the t1 table used in the previous examples. The direct-path export is executed by adding the direct=y option on the export command line.

[oracle@DS-LINUX exp]$ exp direct=y
Export: Release 8.1.5.0.0 - Production on Wed Apr 5 19:31:38 2000
(c) Copyright 1999 Oracle Corporation.  All rights reserved.

Username: test
Password:
Connected to: Oracle8i Enterprise Edition Release 8.1.5.0.0 - Production
With the Partitioning and Java options
PL/SQL Release 8.1.5.0.0 - Production
Export done in US7ASCII character set and US7ASCII NCHAR character set
. exporting pre-schema procedural objects and actions
. exporting foreign function library names for user TEST
. exporting object type definitions for user TEST
About to export TEST's objects ...
. exporting database links
. exporting sequence numbers
. exporting cluster definitions
. about to export TEST's tables via Direct Path ...
. . exporting table                               T1          4 rows exported
. exporting synonyms
. exporting views
. exporting stored procedures
. exporting operators
. exporting referential integrity constraints
. exporting triggers
. exporting indextypes
. exporting bitmap, functional and extensible indexes
. exporting posttables actions
. exporting snapshots
. exporting snapshot logs
. exporting job queues
. exporting refresh groups and children
. exporting dimensions
. exporting post-schema procedural objects and actions
Export terminated successfully without warnings.
[oracle@DS-LINUX exp]$

In the middle of the export output, the reference to the direct-path option can be seen. The line About to export TEST's tables via Direct Path ... identifies that the export is running in direct-path mode, not conventional-path mode.
Direct exports can be imported only by versions of Oracle that support direct-path export and import; Oracle version 8 databases support direct-path exports. Direct exports have other limitations as well: you cannot export large objects (LOBs) via the direct path, and you cannot invoke a direct export from the interactive export method.
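The command-line options shown above can also be collected in a parameter file instead of being typed on the command line. The following is a minimal sketch of such a file; the file name and dump-file name are illustrative assumptions, not values from the text:

```text
# t1_direct.par -- hypothetical parameter file for a direct-path export
USERID=test/test
DIRECT=Y
TABLES=(t1)
FILE=t1.dmp
LOG=t1_exp.log
```

It would be invoked as exp parfile=t1_direct.par, which is equivalent to listing the same keywords directly on the exp command line.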
Recovering a Database Object with the Import Utility
The Import utility is used to perform a logical recovery of the Oracle database. This utility reads the dump file generated by the Export utility. As discussed earlier, the Export utility dumps the data in the table and the DDL commands necessary to re-create the table. The Import utility then plays back these commands, re-creates the table, and inserts the data stored in the binary dump file. This section provides a step-by-step outline of this process.

The Import utility also has numerous options. To display all the import options available, issue the command IMP -HELP from the command line:

[oracle@DS-LINUX /oracle]$ imp -help
Import: Release 8.1.5.0.0 - Production on Thu Apr 6 18:58:11 2000
(c) Copyright 1999 Oracle Corporation.  All rights reserved.

You can let Import prompt you for parameters by entering the IMP command followed by your username/password:

  Example: IMP SCOTT/TIGER

Or, you can control how Import runs by entering the IMP command followed by various arguments. To specify parameters, you use keywords:

  Format:  IMP KEYWORD=value or KEYWORD=(value1,value2,...,valueN)
  Example: IMP SCOTT/TIGER IGNORE=Y TABLES=(EMP,DEPT) FULL=N
           or TABLES=(T1:P1,T1:P2), if T1 is partitioned table

USERID must be the first parameter on the command line.

Keyword                Description (Default)
-------------------------------------------------------
USERID                 username/password
BUFFER                 size of data buffer
FILE                   input files (EXPDAT.DMP)
SHOW                   just list file contents (N)
IGNORE                 ignore create errors (N)
GRANTS                 import grants (Y)
INDEXES                import indexes (Y)
ROWS                   import data rows (Y)
LOG                    log file of screen output
DESTROY                overwrite tablespace data file (N)
INDEXFILE              write table/index info to specified file
SKIP_UNUSABLE_INDEXES  skip maintenance of unusable indexes (N)
ANALYZE                execute ANALYZE statements in dump file (Y)
FEEDBACK               display progress every x rows (0)
TOID_NOVALIDATE        skip validation of specified type ids
FILESIZE               maximum size of each dump file
RECALCULATE_STATISTICS recalculate statistics (N)
VOLSIZE                number of bytes in file on each volume of a file on tape
FULL                   import entire file (N)
FROMUSER               list of owner usernames
TOUSER                 list of usernames
TABLES                 list of table names
RECORDLENGTH           length of IO record
INCTYPE                incremental import type
COMMIT                 commit array insert (N)
PARFILE                parameter filename
CONSTRAINTS            import constraints (Y)
The following keywords only apply to transportable tablespaces
TRANSPORT_TABLESPACE   import transportable tablespace metadata (N)
TABLESPACES            tablespaces to be transported into database
DATAFILES              datafiles to be transported into database
TTS_OWNERS             users that own data in the transportable tablespace set

Import terminated successfully without warnings.
[oracle@DS-LINUX /oracle]$
The following steps indicate how to perform the recovery of a table with the Import utility. Here the Import utility is in interactive mode.

1. To recover the database, you must validate what should be recovered. In this case, use the t1.dmp export created in the earlier section titled "Creating a Complete Logical Backup with the Export Utility." There, you exported the complete t1 table, which contains four rows of data.

SVRMGR> select * from t1;
C1         C2
---------- ----------------------------------------------
         1 This is a test one - before hot backup
         3 This is a test three - after hot backup
         2 This is a test two - after hot backup
         4 This is test four - after complete export
4 rows selected.
SVRMGR>

2. Truncate the table to simulate a complete data loss in the table.
SVRMGR> truncate table t1;
Statement processed.
SVRMGR>

3. From the working directory of the exported dump file, execute the IMP command and connect as the user TEST.

[oracle@DS-LINUX exp]$ imp
Import: Release 8.1.5.0.0 - Production on Wed Apr 5 19:25:31 2000
(c) Copyright 1999 Oracle Corporation.  All rights reserved.

Username: test
Password:

Connected to: Oracle8i Enterprise Edition Release 8.1.5.0.0 - Production
With the Partitioning and Java options
PL/SQL Release 8.1.5.0.0 - Production

4. The next prompt is for the filename of the dump file. You can enter the fully qualified filename, or just the filename if you are in the working directory of the dump file.

Import file: expdat.dmp > t1.dmp

5. The next prompt is for the buffer size of the import data loads. Choose the minimum, which is 8KB.

Enter insert buffer size (minimum is 8192) 30720> 8192

6. The next prompt asks whether to list the contents of the dump file instead of actually doing the import. The default is no; choose that option for this example.

Export file created by EXPORT:V08.01.05 via conventional path
import done in US7ASCII character set and US7ASCII NCHAR character set
List contents of import file only (yes/no): no > n

7. The next prompt asks whether to ignore the create error if the object exists. For this example, choose yes.

Ignore create error due to object existence (yes/no): no > y

8. The next prompt asks whether to import the grants related to the object that is being imported. Choose yes.

Import grants (yes/no): yes > y

9. The next prompt asks whether to import the data in the table, instead of just the table definition. Choose yes.

Import table data (yes/no): yes > y

10. The next prompt asks whether to import the entire export file. In this case, choose yes because the dump file contains only the table you want to import. If the dump file held multiple objects, you would choose the default, no.

Import entire export file (yes/no): no > y
. importing TEST's objects into TEST
. . importing table                          "T1"          4 rows imported
Import terminated successfully without warnings.
[oracle@DS-LINUX exp]$

11. Next, you need to validate that the import successfully restored the table t1. Perform a query on the table to validate that there are four rows within the table.

SVRMGR> select * from t1;
C1         C2
---------- --------------------------------------------
         1 This is a test one - before hot backup
         2 This is a test two - after hot backup
         3 This is a test three - after hot backup
         4 This is test four - after complete export
4 rows selected.
SVRMGR>
The IGNORE CREATE ERROR DUE TO OBJECT EXISTENCE prompt can be confusing. It means that if the object already exists during the import, the CREATE errors raised while re-creating the object are ignored. Furthermore, if you specify IGNORE=Y, the Import utility continues its work without reporting the error. Even with IGNORE=Y, however, the Import utility still does not replace an existing object; it skips the object's creation.
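The interactive session above can also be run non-interactively by supplying the same answers as command-line keywords. The following one-liner is a sketch under that assumption (the username, password, and dump-file name mirror the earlier example; verify them in your own environment):

```text
[oracle@DS-LINUX exp]$ imp test/test file=t1.dmp ignore=y grants=y rows=y full=y buffer=8192
```

Here the IGNORE=Y keyword plays the role of the interactive "ignore create error" prompt, so the rows are loaded even though the truncated table T1 still exists in the schema.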
Performing a Tablespace Point-in-Time Recovery
The tablespace point-in-time recovery (TSPITR) is a fairly complicated recovery process. Both physical and logical recovery techniques are required to perform this type of recovery successfully. A clone database must be created and recovered, and exports and imports must be performed on the clone and primary databases.
The main use of this recovery feature is to recover erroneously dropped tables. It is most useful in large databases, where it would take a significant amount of time to restore all the tablespaces as opposed to just one. In this section, you will walk through a TSPITR in which the table t1 is erroneously dropped in the brdb (primary) database. You will create a clone database and recover it to a point in time prior to the drop of table t1. You will then export the data dictionary objects and import them back into the primary database. Here are the steps of the process:

1. Identify which archive logs will be needed in this recovery.

SVRMGR> archive log list
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            /oracle/admin/brdb/arch1
Oldest online log sequence     8
Next log sequence to archive   10
Current log sequence           10
SVRMGR>
2. Create a binary backup control file of the primary database. You will save this in the clone database ad hoc directory. This control file will be referenced in the start-up of the clone database.

SVRMGR> alter database backup controlfile to
     2> '/oracle/admin/clone/adhoc/backctl.ctl';

3. Perform a cold backup of the primary database. This will be the database from which you build the clone database. You will use the cold-backup shell script, which you have used in earlier examples to perform a cold backup. This shell script copies the cold backup to the /oracle/backup directory.

[oracle@DS-LINUX backup]$ . cold-backup
Shutting Down the Database!
Oracle Server Manager Release 3.1.5.0.0 - Production
(c) Copyright 1997, Oracle Corporation.  All Rights Reserved.
Oracle8i Enterprise Edition Release 8.1.5.0.0 - Production
With the Partitioning and Java options
PL/SQL Release 8.1.5.0.0 - Production
SVRMGR> Connected.
SVRMGR> Database closed.
Database dismounted.
ORACLE instance shut down.
SVRMGR> Server Manager complete.
Starting cold backup...
Removing existing backup files
Copying control files
Copying data files
Copying redo logs
Finishing cold backup...
Starting up the Database!
Oracle Server Manager Release 3.1.5.0.0 - Production
(c) Copyright 1997, Oracle Corporation.  All Rights Reserved.

Oracle8i Enterprise Edition Release 8.1.5.0.0 - Production
With the Partitioning and Java options
PL/SQL Release 8.1.5.0.0 - Production
SVRMGR> Connected.
SVRMGR> ORACLE instance started.
Total System Global Area      19504528 bytes
Fixed Size                       64912 bytes
Variable Size                 16908288 bytes
Database Buffers               2457600 bytes
Redo Buffers                     73728 bytes
Database mounted.
Database opened.
SVRMGR> Server Manager complete.
[oracle@DS-LINUX backup]$

4. Connect to the primary database as TEST and enter data in the t1 table. This is the table that will be erroneously dropped.

SVRMGR> connect test/test;
Connected.
SVRMGR> insert into t1 values (5, 'TSPITR Test');
1 row processed.
SVRMGR> insert into t1 values (6, 'TSPITR Test');
1 row processed.
SVRMGR> commit;
Statement processed.

5. Obtain the exact time just prior to the drop of t1. This information will be used as the recovery point in the clone database.

SVRMGR> select to_char(sysdate,'yy-mon-dd:hh24:mi:ss') before_drop from dual;
BEFORE_DROP
------------------
00-apr-09:22:22:55
1 row selected.

6. Drop the t1 table.

SVRMGR> drop table t1;
Statement processed.

7. Create another table, t3, after the recovery point target. You do this to emphasize that tables created after the recovery point will be lost in a TSPITR. This is always an important consideration.

SVRMGR> create table t3 (c1 number, c2 char(50));
Statement processed.

8. Take the tablespace containing the dropped table offline to prevent any more changes from occurring in it until recovery is complete.

SVRMGR> alter tablespace users offline immediate;
Statement processed.
9. Push the changes from the online redo logs to the archive logs so these changes can be applied to the clone database.

SVRMGR> alter system archive log current;
Statement processed.
SVRMGR>

10. Build the clone database by restoring the cold backup to the new clone destination. In this case, it is /db03/ORACLE/clone.

[oracle@DS-LINUX clone]$ cp /oracle/backup/* /db03/ORACLE/clone

11. Build the init.ora file for the clone from the primary database (brdb). Copying the file initbrdb.ora to initclone.ora will do this.

[oracle@DS-LINUX clone]$ cp /oracle/admin/brdb/pfile/initbrdb.ora /oracle/admin/clone/initclone.ora

12. Make the necessary changes in the file initclone.ora to start the clone database. These changes are the appropriate directory changes, name changes, and the LOG_FILE_NAME_CONVERT, DB_FILE_NAME_CONVERT, and LOCK_NAME_SPACE parameters. Also notice that the control file reference is to the backup control file you created earlier.

background_dump_dest          = /oracle/admin/clone/bdump
compatible                    = 8.1.5
control_files                 = /oracle/admin/clone/adhoc/backctl.ctl
core_dump_dest                = /oracle/admin/clone/cdump
db_block_buffers              = 300
db_block_size                 = 8192
db_file_multiblock_read_count = 8
db_files                      = 64
log_file_name_convert         = 'redo01/ORACLE/brdb','db03/ORACLE/clone'
db_file_name_convert          = 'db01/ORACLE/brdb','db03/ORACLE/clone'
lock_name_space               = clone
db_name                       = brdb
dml_locks                     = 100
global_names                  = false
log_archive_dest              = '/oracle/admin/brdb/arch1'
#log_archive_duplex_dest_2    = '/oracle/admin/clone/arch2'
log_archive_max_processes     = 2
log_archive_format            = archclone_%s.log
log_archive_start             = TRUE
log_buffer                    = 40960
log_checkpoint_interval       = 10000
max_dump_file_size            = 51200
max_rollback_segments         = 60
processes                     = 40
rollback_segments             = (rb01,rb02,rb03,rb04,rb05,rb06)
shared_pool_size              = 16000000
user_dump_dest                = /oracle/admin/clone/udump
utl_file_dir                  = /apps/web/master
utl_file_dir                  = /apps/web/upload
utl_file_dir                  = /apps/web/remove
utl_file_dir                  = /apps/web/archive
13. At this point, you can start up the clone database.

[oracle@DS-LINUX pfile]$ svrmgrl
Oracle Server Manager Release 3.1.5.0.0 - Production
(c) Copyright 1997, Oracle Corporation.  All Rights Reserved.

Oracle8i Enterprise Edition Release 8.1.5.0.0 - Production
With the Partitioning and Java options
PL/SQL Release 8.1.5.0.0 - Production
SVRMGR> connect internal
Connected.
SVRMGR> startup nomount pfile=initclone.ora
ORACLE instance started.
Total System Global Area      19504528 bytes
Fixed Size                       64912 bytes
Variable Size                 16908288 bytes
Database Buffers               2457600 bytes
Redo Buffers                     73728 bytes
SVRMGR> alter database mount clone database;
Statement processed.

14. Next, you need to perform some housekeeping on the clone database before you can use it to recover a tablespace or table. These chores consist of bringing all offline data files online and renaming the second set of log members. Remember that only one set of log file members was mentioned in the LOG_FILE_NAME_CONVERT parameter in the file initclone.ora.

SVRMGR> alter database datafile '/db03/ORACLE/clone/users01.dbf' online;
Statement processed.
SVRMGR> alter database datafile '/db03/ORACLE/clone/system01.dbf' online;
Statement processed.
SVRMGR> alter database datafile '/db03/ORACLE/clone/rbs01.dbf' online;
Statement processed.
SVRMGR> alter database datafile '/db03/ORACLE/clone/temp01.dbf' online;
Statement processed.
SVRMGR> alter database datafile '/db03/ORACLE/clone/tools01.dbf' online;
Statement processed.
SVRMGR> alter database datafile '/db03/ORACLE/clone/data01.dbf' online;
Statement processed.
SVRMGR> alter database datafile '/db03/ORACLE/clone/indx01.dbf' online;
Statement processed.
SVRMGR> select file#, status, name from v$datafile;
FILE#      STATUS  NAME
---------- ------- ---------------------------------
         1 SYSTEM  /db03/ORACLE/clone/system01.dbf
         2 ONLINE  /db03/ORACLE/clone/rbs01.dbf
         3 ONLINE  /db03/ORACLE/clone/temp01.dbf
         4 ONLINE  /db03/ORACLE/clone/users01.dbf
         5 ONLINE  /db03/ORACLE/clone/tools01.dbf
         6 ONLINE  /db03/ORACLE/clone/data01.dbf
         7 ONLINE  /db03/ORACLE/clone/indx01.dbf
7 rows selected.
SVRMGR> alter database rename file '/redo02/ORACLE/brdb/redo03b.log' to
     2> '/db03/ORACLE/clone/redo03b.log';
Statement processed.
SVRMGR> alter database rename file '/redo02/ORACLE/brdb/redo02b.log' to
     2> '/db03/ORACLE/clone/redo02b.log';
Statement processed.
SVRMGR> alter database rename file '/redo02/ORACLE/brdb/redo01b.log' to
     2> '/db03/ORACLE/clone/redo01b.log';
Statement processed.
15. Finally, you can recover the clone database to the point prior to the dropped table t1. Note that the date/time clause for RECOVER must always be in the format 'YYYY-MM-DD:HH24:MI:SS'.

SVRMGR> recover database using backup controlfile until time '2000-04-09:22:22:55';
ORA-00279: change 169732 generated at 04/08/00 17:37:47 needed for thread 1
ORA-00289: suggestion : /oracle/admin/brdb/arch1/archclone_10.log
ORA-00280: change 169732 for thread 1 is in sequence #10
Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
/oracle/admin/brdb/arch1/archbrdb_10.log
Log applied.
Media recovery complete.
SVRMGR> alter database open resetlogs;
Statement processed.
SVRMGR>

16. Take a TSPITR export of the clone database and apply it to the primary database (brdb). The SYS account must always be used in the TSPITR export.

[oracle@DS-LINUX exp]$ exp sys/change_on_install point_in_time_recover=y tablespaces=users

Export: Release 8.1.5.0.0 - Production on Sat Apr 8 20:17:39 2000
(c) Copyright 1999 Oracle Corporation.  All rights reserved.

Connected to: Oracle8i Enterprise Edition Release 8.1.5.0.0 - Production
With the Partitioning and Java options
PL/SQL Release 8.1.5.0.0 - Production
Export done in US7ASCII character set and US7ASCII NCHAR character set
Note: table data (rows) will not be exported
About to export Tablespace Point-in-time Recovery objects...
For tablespace USERS ...
. exporting cluster definitions
. exporting table definitions
. . exporting table                            T1
. . exporting table                            T2
. exporting referential integrity constraints
. exporting triggers
. end point-in-time recovery
Export terminated successfully without warnings.
[oracle@DS-LINUX exp]$

17. Go back to the primary database and take the USERS tablespace offline for recovery.

SVRMGR> alter tablespace users offline for recover;
Statement processed.

18. At this point, you can copy the data file users01.dbf from the clone database to the primary database. This is where the table t1 and its associated data are located.

[oracle@DS-LINUX brdb]$ cp /db03/ORACLE/clone/users01.dbf /db01/ORACLE/brdb/users01.dbf

19. Now you can perform the TSPITR import to synchronize the data dictionary objects in the primary database (brdb). The SYS account must always be used in the TSPITR import.

[oracle@DS-LINUX exp]$ imp sys/change_on_install point_in_time_recover=y
Import: Release 8.1.5.0.0 - Production on Sun Apr 9 23:05:36 2000
(c) Copyright 1999 Oracle Corporation.  All rights reserved.

Connected to: Oracle8i Enterprise Edition Release 8.1.5.0.0 - Production
With the Partitioning and Java options
PL/SQL Release 8.1.5.0.0 - Production
Export file created by EXPORT:V08.01.05 via conventional path
About to import Tablespace Point-in-time Recovery objects...
import done in US7ASCII character set and US7ASCII NCHAR character set
. importing TEST's objects into TEST
. . importing table "T1"
. . importing table "T2"
. importing SYS's objects into SYS
Import terminated successfully without warnings.
[oracle@DS-LINUX exp]$

20. Bring the USERS tablespace online so that the objects can be accessed.

SVRMGR> alter tablespace users online;
Statement processed.

21. Finally, you can see that all the rows are back in table t1 and the object is no longer dropped. The t3 table is still gone because you didn't recover to the point in time when it existed.

SVRMGR> select * from t1 order by c1;
C1         C2
---------- --------------------------------------------
         1 This is a test one - before hot backup
         2 This is a test two - after hot backup
         3 This is a test three - after hot backup
         4 This is test four - after complete export
         5 TSPITR Test
         6 TSPITR Test
6 rows selected.
SVRMGR>
SVRMGR> select * from t3;
select * from t3
              *
ORA-00942: table or view does not exist
SVRMGR>

Summary

This chapter demonstrated the capabilities of the Oracle logical backup and recovery utilities, the Export and Import utilities. You walked through a backup and recovery of a database object by using these tools. You learned how to display all the options available in the Export and Import utilities. You also saw the direct-path export, which is significantly faster than the standard conventional-path export. This chapter provided scenarios for which the Export and Import utilities are useful. One of these scenarios is the tablespace point-in-time recovery (TSPITR). In most environments, the Export and Import utilities serve not as the primary backup, but as a supplement to a physical backup. The Export and Import utilities provide extra protection and can eliminate the need for certain physical recoveries.
Key Terms

Before you take the exam, make sure you're familiar with the following terms:

logical backup
Export utility
conventional-path export
direct-path export
dump file
Import utility
interactive export
PARFILE
complete export
cumulative export
incremental export
tablespace point-in-time recovery (TSPITR)
Review Questions

1. What type of database utility performs a logical backup?
   A. Export
   B. Import
   C. Cold backup
   D. Hot backup

2. What type of database utility reads the dump file into the database?
   A. Export
   B. Import
   C. SQL*Loader
   D. Forms

3. What is the default dump file named?
   A. export.dmp
   B. expdat.dmp
   C. expdata.dmp
   D. export_data.dmp

4. What command provides help for the Export utility?
   A. EXP -HELP
   B. EXPORT -HELP
   C. EXPORT HELP
   D. EXP HELP
5. What types of backup are required for the TSPITR? Choose all that apply.
   A. Logical
   B. Physical
   C. Import
   D. Export

6. What is the name of the parameter that reads the parameter file?
   A. PARAMETER FILE
   B. PARFILE
   C. PARAFILE
   D. PAR-FILE

7. What is the name of the other database called in TSPITR?
   A. Secondary database
   B. Recovery database
   C. Clone database
   D. Backup database

8. Which database is physically recovered in the TSPITR?
   A. Secondary database
   B. Primary database
   C. Recovery database
   D. Clone database
9. What export changes the export record-keeping tables? Choose all that apply.
   A. Cumulative
   B. Complete
   C. Incremental
   D. Full

10. What export option must be used with the complete, incremental, or cumulative export? Choose all that apply.
   A. INCTYPE
   B. FULL
   C. ARRAYSIZE
   D. BUFFERSIZE

11. Which export type is the fastest?
   A. Complete
   B. Cumulative
   C. Direct-path
   D. Conventional-path

12. Which export type would be the fastest?
   A. Complete
   B. Cumulative with significant table changes since the last complete
   C. Incremental with significant table changes since the last complete
   D. Incremental without table changes since the last complete
13. Which table name is not an export record-keeping table?
   A. sys.expdata
   B. sys.incvid
   C. sys.incfil
   D. sys.incexp

14. Which user must perform the TSPITR export or import?
   A. SYSTEM
   B. SYS
   C. Any account with DBA privilege
   D. The user with the dropped table

15. What is the name of the parameter in the init.ora file that allows the database name to remain unchanged in the clone database of a TSPITR?
   A. NAME_SPACE
   B. LOCK_NAME_SPACE
   C. DB_NAME
   D. DBNAME

16. What is the name of the parameter in the init.ora file that allows the data file's location to be changed to the clone location?
   A. DB_FILE_NEW
   B. DB_FILE_NAME_CONVERT
   C. DB_FILE_CONVERT
   D. CONVERT_DB_FILE_NAME
17. What is the name of the parameter in the init.ora file that allows the log file's location to be changed to the clone location?
   A. LOG_FILE_CONVERT
   B. CONVERT_LOG_FILE
   C. LOG_FILE_NAME_CONVERT
   D. LOG_FILE_NAME

18. What command can be issued to convert log file or data file names in the event that LOG_FILE_NAME_CONVERT or DB_FILE_NAME_CONVERT is not used?
   A. ALTER DATABASE RENAME FILE 'FROM_FILE_NAME' TO 'TO_NEW_FILE_NAME';
   B. RENAME FILE 'FROM_FILE_NAME' TO 'TO_NEW_FILE_NAME';
   C. ALTER SYSTEM RENAME FILE 'FROM_FILE_NAME' TO 'TO_NEW_FILE_NAME';
   D. ALTER DATAFILE RENAME FILE 'FROM_FILE_NAME' TO 'TO_NEW_FILE_NAME';

19. What type of recovery can be performed in the TSPITR? Choose all that apply.
   A. Complete
   B. Incomplete
   C. Cancel
   D. Point-in-time
Answers to Review Questions

1. A. The Export utility performs the logical backup of the Oracle database.

2. B. The Import utility is responsible for reading in the dump file.

3. B. expdat.dmp is the default dump file name.

4. A. EXP -HELP generates the help information for the Export utility.

5. A, B, and D. Both logical and physical database backups are performed with the TSPITR, and exports are logical backups. Import is not a backup, but a recovery utility in this context.

6. B. PARFILE is the name of the parameter that specifies the parameter file.

7. C. The clone database is the other database that is recovered in the TSPITR.

8. D. The clone database is physically recovered by applying the necessary redo logs to rebuild the dropped objects up to the desired point in time.

9. A and B. The complete and cumulative exports change the record-keeping tables.

10. A and B. Both the INCTYPE and FULL parameters must be used with the complete, incremental, or cumulative exports.

11. C. A direct-path export is the fastest export. Complete and cumulative exports are data-volume dependent.

12. D. Incremental and cumulative exports will be faster than complete exports because of smaller volumes of data. The one with the least number of changes will be the quickest.

13. A. sys.expdata is not an export record-keeping table.

14. B. The SYS account must be used or the TSPITR option will fail.
15. B. The LOCK_NAME_SPACE parameter performs this task.

16. B. The DB_FILE_NAME_CONVERT parameter performs this task.

17. C. The LOG_FILE_NAME_CONVERT parameter performs this task.

18. A. The ALTER DATABASE RENAME FILE command renames data files and log files.

19. B, C, and D. All forms of incomplete recovery can be used in the TSPITR. Complete recovery can't, because it would roll forward into the failure or the undesired change in the database, which negates the value of a TSPITR.
Chapter 17
Additional Recovery Issues

ORACLE8i BACKUP AND RECOVERY EXAM OBJECTIVES OFFERED IN THIS CHAPTER:

List methods for minimizing downtime: perform parallel recovery, start recovering a database with missing data files, and re-create lost temporary or index tablespaces

Reconstruct lost or damaged control files

List recovery issues associated with read-only tablespaces
Exam objectives are subject to change at any time without prior notice and at Oracle's sole discretion. Please visit Oracle's Training and Certification Web site (http://education.oracle.com/certification/index.html) for the most current exam objectives listing.
This chapter covers the following recovery issues: minimizing downtime, reconstructing control files, and recovering read-only tablespaces. To minimize downtime, you need to understand not only the performance features associated with recovery, but also how to handle certain events in the most expeditious manner. Reconstructing control files is a technique that should be in every DBA's bag of tricks. This chapter demonstrates how to reconstruct control files with minimal effect on the database. Read-only tablespaces have special recovery procedures associated with them. This chapter identifies various recovery scenarios and demonstrates the recovery procedures required for read-only tablespaces.
Minimizing Downtime
Some of the methods used to minimize downtime during a failure situation are to increase the speed of the recovery process, to begin recovering a database before all data files have been restored, and to re-create lost temporary tablespaces. A temporary tablespace stores temporary objects used for sorting and for creating tables and indexes.

Performance increases in the recovery process can be attained by operating in parallel. This approach applies multiple redo logs to the database in parallel instead of serially and can dramatically reduce the recovery time if large numbers of archive logs need to be applied. Recovering data files before all data files have been restored speeds up the recovery process by applying redo logs to the available data files instead of waiting until all data files have been restored. Temporary tablespaces do not need to be restored because they do not contain permanent objects. Temporary tablespaces can simply be rebuilt, which reduces downtime significantly.
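As a sketch of the last point, a lost temporary tablespace can be re-created rather than restored. The tablespace name, file name, and size below are illustrative assumptions, not values from the text:

```text
-- Hypothetical Oracle8i example: re-create a lost temporary tablespace
-- instead of restoring it (names and sizes are illustrative).
SVRMGR> drop tablespace temp including contents;
SVRMGR> create tablespace temp
     2> datafile '/db01/ORACLE/brdb/temp01.dbf' size 100M reuse
     3> temporary;
```

Because no permanent objects live in the tablespace, nothing is lost by dropping and re-creating it, and no redo needs to be applied to it.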
Performing Parallel Recovery

Parallel recovery uses multiple recovery processes to read multiple redo logs at the same time instead of using one recovery process to read one log at a time. The parallel recovery process consists of a server process, which reads the redo logs, and a recovery process, which applies the changes back to the database. When parallel recovery is enabled, multiple recovery processes are performing the recovery. Figure 17.1 shows a diagram of parallel recovery.

FIGURE 17.1  Parallel recovery
[Figure: a SVRMGRL session issues RECOVER DATABASE PARALLEL 2; recovery processes 1 and 2 read redo logs 1 through 4 and apply the changes to the data files of the USERS tablespace (users01.dbf, users02.dbf on Disk1), the DATA1 tablespace (data01.dbf, data02.dbf, data03.dbf on Disk2), and the INDEX tablespace (index01.dbf on Disk3).]
To use parallel recovery, the database must be configured to run parallel query processes. Two init.ora parameters are required to initiate the parallel query processes. These are the same parameters that enable parallel query: PARALLEL_MIN_SERVERS and PARALLEL_MAX_SERVERS. You need to consider many hardware configuration factors to determine the proper parallel query configuration. In general, you should have your
PARALLEL_MAX_SERVERS equal to or less than your server’s total processors or the number of disks supporting the database’s data files. An additional parameter, called RECOVERY_PARALLELISM, can be used to set the recovery parallelism at the database level. This parameter statically sets the degree of parallelism for the recovery process until the database is restarted. An alternative to using this parameter is to set parallelism at the time of recovery. The degree of parallelism is the number of parallel processes you choose to enable for a particular parallel activity such as recovery. The following example demonstrates this approach.
Too many parallel query processes can overwhelm the server and cause poor performance and abnormal system behavior. A good rule of thumb is that PARALLEL_MAX_SERVERS should be equal to or less than the total number of CPUs, or the number of disk drives that your data files are spread across. If the system does not have I/O or CPU saturation, you can always increase your settings.
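The rule of thumb above can be sketched as a quick calculation. This is only an illustration, not an Oracle utility; the disk count of 8 is a made-up example value you would replace with your own.

```shell
#!/bin/sh
# Sketch only: pick the smaller of the CPU count and the number of disks
# holding data files as a starting value for PARALLEL_MAX_SERVERS.
cpus=$(getconf _NPROCESSORS_ONLN)   # online CPU count on Linux and most UNIX
disks=8                             # hypothetical disk count for this example
if [ "$cpus" -lt "$disks" ]; then
  suggested=$cpus
else
  suggested=$disks
fi
echo "parallel_max_servers = $suggested"
```

Starting at or below this value and raising it only while I/O and CPU remain unsaturated matches the guidance in the note above.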
Let's walk through a parallel recovery step by step. In this case, you will be performing media recovery on two tablespaces and their data files: the DATA tablespace (data01.dbf) and the USERS tablespace (users01.dbf). You will recover with a degree of parallelism of 4. The steps are as follows:

1. First, make sure that the database has parallel query processes enabled. You do this by setting the proper init.ora parameters, which enable parallel query. The first parameter is PARALLEL_MIN_SERVERS, which sets the minimum number of parallel query processes. Set this to 2, indicating that at least two parallel query processes will always be started. The next parameter is PARALLEL_MAX_SERVERS, which you set to 4. This means that the maximum number of parallel query processes at any time is four. These are the minimum parameter settings necessary to enable parallel recovery. The following is a sample of the initbrdb.ora parameter file with these parameters set.

[oracle@DS-LINUX pfile]$ more initbrdb.ora
background_dump_dest          = /oracle/admin/brdb/bdump
compatible                    = 8.1.5
control_files                 = /oracle/data/brdb/control1.ctl,/oracle/data/brdb/control2.ctl
core_dump_dest                = /oracle/admin/brdb/cdump
db_block_buffers              = 300
db_block_size                 = 8192
db_file_multiblock_read_count = 8
db_files                      = 64
db_name                       = brdb
dml_locks                     = 100
global_names                  = false
log_archive_dest              = '/oracle/admin/brdb/arch1'
#log_archive_duplex_dest_2    = '/oracle/admin/brdb/arch2'
log_archive_max_processes     = 2
log_archive_format            = archbrdb_%s.log
log_archive_start             = TRUE
log_buffer                    = 40960
log_checkpoint_interval       = 10000
max_dump_file_size            = 51200
max_rollback_segments         = 60
processes                     = 40
parallel_max_servers          = 4
parallel_min_servers          = 2
rollback_segments             = (rb01,rb02,rb03,rb04,rb05,rb06)
shared_pool_size              = 16000000
user_dump_dest                = /oracle/admin/brdb/udump
utl_file_dir                  = /apps/web/master
utl_file_dir                  = /apps/web/upload
utl_file_dir                  = /apps/web/remove
utl_file_dir                  = /apps/web/archive
2. Validate that the parameter PARALLEL_MIN_SERVERS is actually at 2; you will verify the degree 4 of parallelism later, during the recovery. You can see at this point that you are utilizing only two parallel query processes, because ora_p000_brdb and ora_p001_brdb are the only parallel query processes running.
[oracle@DS-LINUX pfile]$ ps -ef|grep ora
root       740   739  0 20:17 pts/0    00:00:00 su - oracle
oracle     741   740  0 20:17 pts/0    00:00:00 -bash
root       778   777  0 20:21 pts/1    00:00:00 su - oracle
oracle     779   778  0 20:21 pts/1    00:00:00 -bash
oracle     819     1  0 20:35 ?        00:00:00 ora_pmon_brdb
oracle     821     1  0 20:35 ?        00:00:00 ora_dbw0_brdb
oracle     823     1  0 20:35 ?        00:00:00 ora_lgwr_brdb
oracle     825     1  0 20:35 ?        00:00:00 ora_ckpt_brdb
oracle     827     1  0 20:35 ?        00:00:00 ora_smon_brdb
oracle     829     1  0 20:35 ?        00:00:00 ora_reco_brdb
oracle     831     1  0 20:35 ?        00:00:00 ora_arc0_brdb
oracle     833     1  0 20:35 ?        00:00:00 ora_arc1_brdb
oracle     835     1  0 20:35 ?        00:00:00 ora_p000_brdb
oracle     837     1  0 20:35 ?        00:00:00 ora_p001_brdb
root      1146  1145  0 22:36 pts/2    00:00:00 login -- oracle
oracle    1147  1146  0 22:36 pts/2    00:00:00 -bash
oracle    1165  1147  0 22:38 pts/2    00:00:00 ps -ef
oracle    1166  1147  0 22:38 pts/2    00:00:00 grep ora
3. Take a hot backup of the brdb database to use for the recovery. Execute the same hot-backup script you have used in earlier chapters. This will store the database copy in the directory /stage.

[oracle@DS-LINUX brdb]$ . hot-backup

4. Simulate a media failure by removing the data files data01.dbf and users01.dbf.

[oracle@DS-LINUX brdb]$ ls -ltr
total 257067
-rw-r-----   1 oracle   dba     5251072 Apr 10 23:27 users01.dbf
-rw-r-----   1 oracle   dba    20979712 Apr 10 23:27 temp01.dbf
-rw-r-----   1 oracle   dba    83894272 Apr 10 23:27 system01.dbf
-rw-r-----   1 oracle   dba    20979712 Apr 10 23:27 rbs01.dbf
-rw-r-----   1 oracle   dba     5251072 Apr 10 23:27 tools01.dbf
-rw-r-----   1 oracle   dba    20979712 Apr 10 23:27 indx01.dbf
-rw-r-----   1 oracle   dba   104865792 Apr 10 23:27 data01.dbf
[oracle@DS-LINUX brdb]$ rm data01.dbf
[oracle@DS-LINUX brdb]$ rm users01.dbf
5. Restore the data files from the /stage directory. (The small sizes of users01.dbf and data01.dbf below simply show the copies still in progress.)

[oracle@DS-LINUX brdb]$ cp /stage/data01* /db01/ORACLE/brdb
[oracle@DS-LINUX brdb]$ cp /stage/users01* /db01/ORACLE/brdb
[oracle@DS-LINUX brdb]$ ls -lr
total 149912
-rw-r-----   1 oracle   dba      152597 Apr 10 23:28 users01.dbf
-rw-r-----   1 oracle   dba     5251072 Apr 10 23:27 tools01.dbf
-rw-r-----   1 oracle   dba    20979712 Apr 10 23:27 temp01.dbf
-rw-r-----   1 oracle   dba    83894272 Apr 10 23:27 system01.dbf
-rw-r-----   1 oracle   dba    20979712 Apr 10 23:27 rbs01.dbf
-rw-r-----   1 oracle   dba    20979712 Apr 10 23:27 indx01.dbf
-rw-r-----   1 oracle   dba      664719 Apr 10 23:28 data01.dbf
6. Restart the database and perform media recovery. The logs are applied in groups of the parallelism factor, which is 4 in this case. Without parallel recovery, they are applied one at a time. The parallel processing of groups of log files can be observed only while the recovery is actively running.

[oracle@DS-LINUX brdb]$ svrmgrl

Oracle Server Manager Release 3.1.5.0.0 - Production

(c) Copyright 1997, Oracle Corporation.  All Rights Reserved.

Oracle8i Enterprise Edition Release 8.1.5.0.0 - Production
With the Partitioning and Java options
PL/SQL Release 8.1.5.0.0 - Production

SVRMGR> connect internal
Connected.
SVRMGR> startup mount
ORACLE instance started.
Total System Global Area   19926416 bytes
Fixed Size                    64912 bytes
Variable Size              17330176 bytes
Database Buffers            2457600 bytes
Redo Buffers                  73728 bytes
Database mounted.
SVRMGR> recover database parallel 4;
ORA-00279: change 170660 generated at 04/10/00 23:10:03 needed for thread 1
ORA-00289: suggestion : /oracle/admin/brdb/arch1/archbrdb_17.log
ORA-00280: change 170660 for thread 1 is in sequence #17
Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
auto
Log applied.
ORA-00279: change 170736 generated at 04/10/00 23:10:14 needed for thread 1
ORA-00289: suggestion : /oracle/admin/brdb/arch1/archbrdb_18.log
ORA-00280: change 170736 for thread 1 is in sequence #18
ORA-00278: log file '/oracle/admin/brdb/arch1/archbrdb_17.log' no longer needed for this recovery
Log applied.
ORA-00279: change 170863 generated at 04/10/00 23:19:33 needed for thread 1
ORA-00289: suggestion : /oracle/admin/brdb/arch1/archbrdb_19.log
ORA-00280: change 170863 for thread 1 is in sequence #19
ORA-00278: log file '/oracle/admin/brdb/arch1/archbrdb_18.log' no longer needed for this recovery
Log applied.
ORA-00279: change 170973 generated at 04/10/00 23:19:46 needed for thread 1
ORA-00289: suggestion : /oracle/admin/brdb/arch1/archbrdb_20.log
ORA-00280: change 170973 for thread 1 is in sequence #20
ORA-00278: log file '/oracle/admin/brdb/arch1/archbrdb_19.log' no longer needed for this recovery
Log applied.
ORA-00279: change 171052 generated at 04/10/00 23:20:00 needed for thread 1
ORA-00289: suggestion : /oracle/admin/brdb/arch1/archbrdb_21.log
ORA-00280: change 171052 for thread 1 is in sequence #21
ORA-00278: log file '/oracle/admin/brdb/arch1/archbrdb_20.log' no longer needed for this recovery
Log applied.
ORA-00279: change 171143 generated at 04/10/00 23:20:29 needed for thread 1
ORA-00289: suggestion : /oracle/admin/brdb/arch1/archbrdb_22.log
ORA-00280: change 171143 for thread 1 is in sequence #22
ORA-00278: log file '/oracle/admin/brdb/arch1/archbrdb_21.log' no longer needed for this recovery
Log applied.
Media recovery complete.
SVRMGR> alter database open;
Statement processed.
SVRMGR>
7. Validate how many parallel query processes are now active by checking all running Oracle processes. In this case, you can see four parallel query processes, ora_p000_brdb through ora_p003_brdb. This verifies that two additional parallel query processes were started when the RECOVER DATABASE PARALLEL 4 command was initiated.

[oracle@DS-LINUX arch1]$ ps -ef|grep ora
oracle     741   740  0 20:17 pts/0    00:00:00 -bash
oracle     779   778  0 20:21 pts/1    00:00:00 -bash
root      1146  1145  0 22:36 pts/2    00:00:00 login -- oracle
oracle    1147  1146  0 22:36 pts/2    00:00:00 -bash
root      1203  1202  0 22:59 pts/3    00:00:00 login -- oracle
oracle    1206  1203  0 23:00 pts/3    00:00:00 -bash
oracle    1328  1206  0 23:30 pts/3    00:00:00 svrmgrl
oracle    1329  1328  1 23:30 ?        00:00:07 oraclebrdb (DESCRIPTION=(LOCAL=Y
oracle    1355  1328  0 23:35 pts/3    00:00:00 /bin/bash
oracle    1371  1355  0 23:37 pts/3    00:00:00 svrmgrl
oracle    1372  1371  2 23:37 ?        00:00:02 oraclebrdb (DESCRIPTION=(LOCAL=Y
oracle    1374     1  0 23:37 ?        00:00:00 ora_pmon_brdb
oracle    1376     1  0 23:37 ?        00:00:00 ora_dbw0_brdb
oracle    1378     1  0 23:37 ?        00:00:00 ora_lgwr_brdb
oracle    1380     1  0 23:37 ?        00:00:00 ora_ckpt_brdb
oracle    1382     1  0 23:37 ?        00:00:00 ora_smon_brdb
oracle    1384     1  0 23:37 ?        00:00:00 ora_reco_brdb
oracle    1386     1  0 23:37 ?        00:00:00 ora_arc0_brdb
oracle    1388     1  0 23:37 ?        00:00:00 ora_arc1_brdb
oracle    1390     1  0 23:37 ?        00:00:00 ora_p000_brdb
oracle    1392     1  0 23:37 ?        00:00:00 ora_p001_brdb
oracle    1394     1  1 23:38 ?        00:00:00 ora_p002_brdb
oracle    1396     1  1 23:38 ?        00:00:00 ora_p003_brdb
oracle    1401  1147  0 23:39 pts/2    00:00:00 ps -ef
oracle    1402  1147  0 23:39 pts/2    00:00:00 grep ora
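The check in step 7 is easy to script. The snippet below counts parallel query slave processes in a process listing; the canned sample text here stands in for live ps -ef output on the server, and the process names are taken from this chapter's example instance.

```shell
#!/bin/sh
# Count parallel query slave processes (ora_pNNN_<sid>) in a ps listing.
# The sample text below stands in for live `ps -ef` output.
sample='oracle 1374 1 0 23:37 ? 00:00:00 ora_pmon_brdb
oracle 1390 1 0 23:37 ? 00:00:00 ora_p000_brdb
oracle 1392 1 0 23:37 ? 00:00:00 ora_p001_brdb
oracle 1394 1 1 23:38 ? 00:00:00 ora_p002_brdb
oracle 1396 1 1 23:38 ? 00:00:00 ora_p003_brdb'
# ora_p followed by a digit matches only the slaves, not ora_pmon.
slaves=$(printf '%s\n' "$sample" | grep -c 'ora_p[0-9]')
echo "parallel query slaves running: $slaves"
```

On a live server you would pipe ps -ef directly into the same grep, as in the step above.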
Recovering a Database with Missing Data Files

You can recover a database with missing data files by using either the RECOVER TABLESPACE or the RECOVER DATAFILE command. The following example uses the RECOVER DATAFILE command. In this case, assume that you have lost both data01.dbf and users01.dbf and have only data01.dbf restored from tape. The person responsible for restoring backups, usually the system administrator (SA), is still searching the backup tapes for users01.dbf. Therefore, you begin recovery without that file. The following steps show how to recover a database with a missing data file by specifying recovery on the remaining available data file:

1. With the database shut down, simulate media failure by removing the data01.dbf and users01.dbf data files.

[oracle@DS-LINUX brdb]$ rm users01.dbf
[oracle@DS-LINUX brdb]$ rm data01.dbf

2. Execute a STARTUP MOUNT command to prepare for recovery.

SVRMGR> startup mount
ORACLE instance started.
Total System Global Area   19926416 bytes
Fixed Size                    64912 bytes
Variable Size              17330176 bytes
Database Buffers            2457600 bytes
Redo Buffers                  73728 bytes
Database mounted.
3. Copy the available data file data01.dbf from the /stage directory to the /db01/ORACLE/brdb directory.

SVRMGR> ! cp /stage/data01.dbf /db01/ORACLE/brdb

4. Begin data file recovery on data01.dbf while the system administrators are still searching for users01.dbf on tape.

SVRMGR> recover datafile '/db01/ORACLE/brdb/data01.dbf';
ORA-00279: change 170660 generated at 04/10/00 23:10:03 needed for thread 1
ORA-00289: suggestion : /oracle/admin/brdb/arch1/archbrdb_17.log
ORA-00280: change 170660 for thread 1 is in sequence #17
Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
AUTO
Log applied.
ORA-00279: change 170736 generated at 04/10/00 23:10:14 needed for thread 1
ORA-00289: suggestion : /oracle/admin/brdb/arch1/archbrdb_18.log
ORA-00280: change 170736 for thread 1 is in sequence #18
ORA-00278: log file '/oracle/admin/brdb/arch1/archbrdb_17.log' no longer needed for this recovery
Log applied.
ORA-00279: change 170863 generated at 04/10/00 23:19:33 needed for thread 1
ORA-00289: suggestion : /oracle/admin/brdb/arch1/archbrdb_19.log
ORA-00280: change 170863 for thread 1 is in sequence #19
ORA-00278: log file '/oracle/admin/brdb/arch1/archbrdb_18.log' no longer needed for this recovery
Log applied.
ORA-00279: change 170973 generated at 04/10/00 23:19:46 needed for thread 1
ORA-00289: suggestion : /oracle/admin/brdb/arch1/archbrdb_20.log
ORA-00280: change 170973 for thread 1 is in sequence #20
ORA-00278: log file '/oracle/admin/brdb/arch1/archbrdb_19.log' no longer needed for this recovery
Log applied.
ORA-00279: change 171052 generated at 04/10/00 23:20:00 needed for thread 1
ORA-00289: suggestion : /oracle/admin/brdb/arch1/archbrdb_21.log
ORA-00280: change 171052 for thread 1 is in sequence #21
ORA-00278: log file '/oracle/admin/brdb/arch1/archbrdb_20.log' no longer needed for this recovery
Log applied.
ORA-00279: change 171143 generated at 04/10/00 23:20:29 needed for thread 1
ORA-00289: suggestion : /oracle/admin/brdb/arch1/archbrdb_22.log
ORA-00280: change 171143 for thread 1 is in sequence #22
ORA-00278: log file '/oracle/admin/brdb/arch1/archbrdb_21.log' no longer needed for this recovery
Log applied.
ORA-00279: change 171252 generated at 04/10/00 23:20:37 needed for thread 1
ORA-00289: suggestion : /oracle/admin/brdb/arch1/archbrdb_23.log
ORA-00280: change 171252 for thread 1 is in sequence #23
ORA-00278: log file '/oracle/admin/brdb/arch1/archbrdb_22.log' no longer needed for this recovery
Log applied.
Media recovery complete.

5. Try to open the database. It will not open, because of the missing data file users01.dbf.

SVRMGR> alter database open;
alter database open
*
ORA-01157: cannot identify/lock data file 4 - see DBWR trace file
ORA-01110: data file 4: '/db01/ORACLE/brdb/users01.dbf'
SVRMGR>

6. Copy the file users01.dbf, which the system administrators have found on another tape and restored to /stage.

SVRMGR> ! cp /stage/users01.dbf /db01/ORACLE/brdb
7. Recover the data file users01.dbf.

SVRMGR> recover datafile '/db01/ORACLE/brdb/users01.dbf';
ORA-00279: change 170660 generated at 04/10/00 23:10:03 needed for thread 1
ORA-00289: suggestion : /oracle/admin/brdb/arch1/archbrdb_17.log
ORA-00280: change 170660 for thread 1 is in sequence #17
Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
auto
Log applied.
ORA-00279: change 170736 generated at 04/10/00 23:10:14 needed for thread 1
ORA-00289: suggestion : /oracle/admin/brdb/arch1/archbrdb_18.log
ORA-00280: change 170736 for thread 1 is in sequence #18
ORA-00278: log file '/oracle/admin/brdb/arch1/archbrdb_17.log' no longer needed for this recovery
Log applied.
ORA-00279: change 170863 generated at 04/10/00 23:19:33 needed for thread 1
ORA-00289: suggestion : /oracle/admin/brdb/arch1/archbrdb_19.log
ORA-00280: change 170863 for thread 1 is in sequence #19
ORA-00278: log file '/oracle/admin/brdb/arch1/archbrdb_18.log' no longer needed for this recovery
Log applied.
ORA-00279: change 170973 generated at 04/10/00 23:19:46 needed for thread 1
ORA-00289: suggestion : /oracle/admin/brdb/arch1/archbrdb_20.log
ORA-00280: change 170973 for thread 1 is in sequence #20
ORA-00278: log file '/oracle/admin/brdb/arch1/archbrdb_19.log' no longer needed for this recovery
Log applied.
ORA-00279: change 171052 generated at 04/10/00 23:20:00 needed for thread 1
ORA-00289: suggestion : /oracle/admin/brdb/arch1/archbrdb_21.log
ORA-00280: change 171052 for thread 1 is in sequence #21
ORA-00278: log file '/oracle/admin/brdb/arch1/archbrdb_20.log' no longer needed for this recovery
Log applied.
ORA-00279: change 171143 generated at 04/10/00 23:20:29 needed for thread 1
ORA-00289: suggestion : /oracle/admin/brdb/arch1/archbrdb_22.log
ORA-00280: change 171143 for thread 1 is in sequence #22
ORA-00278: log file '/oracle/admin/brdb/arch1/archbrdb_21.log' no longer needed for this recovery
Log applied.
ORA-00279: change 171252 generated at 04/10/00 23:20:37 needed for thread 1
ORA-00289: suggestion : /oracle/admin/brdb/arch1/archbrdb_23.log
ORA-00280: change 171252 for thread 1 is in sequence #23
ORA-00278: log file '/oracle/admin/brdb/arch1/archbrdb_22.log' no longer needed for this recovery
Log applied.
Media recovery complete.
SVRMGR>

8. Finally, open the database after both data files have been recovered successfully.

SVRMGR> alter database open;
Statement processed.
SVRMGR>
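This section opened by noting that RECOVER TABLESPACE is an alternative to RECOVER DATAFILE. A sketch of the tablespace-level form, using this example's USERS tablespace, might look like the following; unlike the RECOVER DATAFILE sessions above, RECOVER TABLESPACE is issued with the database open and the affected tablespace offline.

```sql
-- Sketch only: tablespace-level recovery of the USERS tablespace.
SVRMGR> alter tablespace users offline immediate;
SVRMGR> recover tablespace users;
SVRMGR> alter tablespace users online;
```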
Re-Creating a Lost Temporary Tablespace

You can re-create a lost temporary tablespace with little harm to the database, because a temporary tablespace contains no permanent objects. Therefore, you can perform recovery simply by dropping the existing data file and tablespace and creating a new temporary tablespace. The following is a step-by-step example of re-creating a lost temporary tablespace:

1. Remove the existing data file temp01.dbf from the file system to simulate a lost temporary tablespace. Below you can see that the temp01.dbf data file is in the file system; you will remove it.

[oracle@DS-LINUX brdb]$ ls -ltr
total 257067
-rw-r-----   1 oracle   dba     5251072 Apr 10 23:40 users01.dbf
-rw-r-----   1 oracle   dba     5251072 Apr 10 23:40 tools01.dbf
-rw-r-----   1 oracle   dba    20979712 Apr 10 23:40 temp01.dbf
-rw-r-----   1 oracle   dba    20979712 Apr 10 23:40 indx01.dbf
-rw-r-----   1 oracle   dba   104865792 Apr 10 23:40 data01.dbf
-rw-r-----   1 oracle   dba    83894272 Apr 11 00:11 system01.dbf
-rw-r-----   1 oracle   dba    20979712 Apr 11 15:53 rbs01.dbf
[oracle@DS-LINUX brdb]$ rm temp01.dbf
2. Shut down the database. The database attempts to checkpoint the data files and does not find temp01.dbf, so you have to perform a SHUTDOWN ABORT to stop the instance.
SVRMGR> shutdown
ORA-01116: error in opening database file 3
ORA-01110: data file 3: '/db01/ORACLE/brdb/temp01.dbf'
ORA-27041: unable to open file
Linux Error: 2: No such file or directory
Additional information: 3
SVRMGR> shutdown abort
ORACLE instance shut down.
SVRMGR>

3. Execute a STARTUP MOUNT command on the database.

SVRMGR> startup mount
ORACLE instance started.
Total System Global Area   19926416 bytes
Fixed Size                    64912 bytes
Variable Size              17330176 bytes
Database Buffers            2457600 bytes
Redo Buffers                  73728 bytes
Database mounted.
SVRMGR>
4. Take the data file temp01.dbf offline to allow the database to be opened.

SVRMGR> alter database datafile '/db01/ORACLE/brdb/temp01.dbf' offline;
Statement processed.

5. Open the database.

SVRMGR> alter database open;
Statement processed.

6. Validate the status of the data files and tablespaces. The data file temp01.dbf needs recovery, as the data dictionary still contains the definition for the temporary tablespace.

SVRMGR> select file#,status,name from v$datafile;
FILE#      STATUS  NAME
---------- ------- ------------------------------------
1          SYSTEM  /db01/ORACLE/brdb/system01.dbf
2          ONLINE  /db01/ORACLE/brdb/rbs01.dbf
3          RECOVER /db01/ORACLE/brdb/temp01.dbf
4          ONLINE  /db01/ORACLE/brdb/users01.dbf
5          ONLINE  /db01/ORACLE/brdb/tools01.dbf
6          ONLINE  /db01/ORACLE/brdb/data01.dbf
7          ONLINE  /db01/ORACLE/brdb/indx01.dbf
7 rows selected.
SVRMGR> select * from v$tablespace;
TS#        NAME
---------- ------------------------------
0          SYSTEM
1          RBS
2          TEMP
3          USERS
4          TOOLS
5          DATA
6          INDX
7 rows selected.

7. Remove the TEMP tablespace from the data dictionary by dropping the tablespace.

SVRMGR> drop tablespace temp;
Statement processed.
SVRMGR>
8. Re-create the tablespace and data file at the same time.
SVRMGR> create tablespace temp datafile '/db01/ORACLE/brdb/temp01.dbf' size 10m;
Statement processed.

9. Validate the status of the data file and the tablespace.

SVRMGR> select file#,status,name from v$datafile;
FILE#      STATUS  NAME
---------- ------- ------------------------------------
1          SYSTEM  /db01/ORACLE/brdb/system01.dbf
2          ONLINE  /db01/ORACLE/brdb/rbs01.dbf
3          ONLINE  /db01/ORACLE/brdb/temp01.dbf
4          ONLINE  /db01/ORACLE/brdb/users01.dbf
5          ONLINE  /db01/ORACLE/brdb/tools01.dbf
6          ONLINE  /db01/ORACLE/brdb/data01.dbf
7          ONLINE  /db01/ORACLE/brdb/indx01.dbf
7 rows selected.
SVRMGR> select * from v$tablespace;
TS#        NAME
---------- ------------------------------
0          SYSTEM
1          RBS
2          TEMP
3          USERS
4          TOOLS
5          DATA
6          INDX
7 rows selected.
SVRMGR>
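Condensed, the statements from the walkthrough above amount to the following short sequence, using the same file name and the 10MB size from this example:

```sql
-- With the database mounted after the SHUTDOWN ABORT:
SVRMGR> alter database datafile '/db01/ORACLE/brdb/temp01.dbf' offline;
SVRMGR> alter database open;
SVRMGR> drop tablespace temp;
SVRMGR> create tablespace temp datafile '/db01/ORACLE/brdb/temp01.dbf' size 10m;
```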
Reconstructing Lost or Damaged Control Files

You can reconstruct lost or damaged control files in a couple of ways: you can copy an existing control file, or you can create a new control file by using the script generated by a BACKUP CONTROLFILE TO TRACE command (the ASCII control file). This section demonstrates each method.

The easiest way of reconstructing a control file is to copy an existing one that is not damaged. This method assumes that your database is multiplexing control files; that is, you keep at least one additional copy of the control file, preferably two, on separate disks. If this is the case, the database can be shut down, and the good control file can be copied to the location of the damaged control file. The following steps show how to perform this task:

1. Validate the location of the control files by viewing the parameter CONTROL_FILES in the parameter file initbrdb.ora.

[oracle@DS-LINUX pfile]$ more initbrdb.ora
background_dump_dest          = /oracle/admin/brdb/bdump
compatible                    = 8.1.5
control_files                 = /oracle/data/brdb/control1.ctl,/oracle/data/brdb/control2.ctl,/db01/ORACLE/brdb/control3.ctl
core_dump_dest                = /oracle/admin/brdb/cdump
db_block_buffers              = 300
db_block_size                 = 8192
db_file_multiblock_read_count = 8
db_files                      = 64
db_name                       = brdb
dml_locks                     = 100
global_names                  = false
log_archive_dest              = '/oracle/admin/brdb/arch1'
#log_archive_duplex_dest_2    = '/oracle/admin/brdb/arch2'
log_archive_max_processes     = 2
log_archive_format            = archbrdb_%s.log
log_archive_start             = TRUE
log_buffer                    = 40960
log_checkpoint_interval       = 10000
max_dump_file_size            = 51200
max_rollback_segments         = 60
processes                     = 40
parallel_max_servers          = 4
parallel_min_servers          = 2
parallel_automatic_tuning     = TRUE
rollback_segments             = (rb01,rb02,rb03,rb04,rb05,rb06)
shared_pool_size              = 16000000
user_dump_dest                = /oracle/admin/brdb/udump
utl_file_dir                  = /apps/web/master
utl_file_dir                  = /apps/web/upload
utl_file_dir                  = /apps/web/remove
utl_file_dir                  = /apps/web/archive
2. Remove one of the control files.
[oracle@DS-LINUX brdb]$ rm /oracle/data/brdb/control1.ctl

3. Start up the database. During the MOUNT portion of the database startup process, Oracle checks all copies of the control files to verify the physical validity of the database.

SVRMGR> startup
ORACLE instance started.
Total System Global Area   19926416 bytes
Fixed Size                    64912 bytes
Variable Size              17330176 bytes
Database Buffers            2457600 bytes
Redo Buffers                  73728 bytes
ORA-00205: error in identifying controlfile, check alert log for more info
SVRMGR>

4. Check for supplemental information in the file alert_brdb.log in the /oracle/admin/brdb/bdump directory.

[oracle@DS-LINUX bdump]$ tail alert_brdb.log
ORA-00202: controlfile: '/oracle/data/brdb/control1.ctl'
ORA-27037: unable to obtain file status
Linux Error: 2: No such file or directory
Additional information: 3
Wed Apr 12 00:13:57 2000
ORA-205 signalled during: alter database mount...

5. Now that you know which control file is missing, you can copy an existing duplicate to re-create the missing one.

[oracle@DS-LINUX bdump]$ cp /db01/ORACLE/brdb/control3.ctl /oracle/data/brdb/control1.ctl

6. Now that you have replaced the missing control file, you can mount and open the database.

SVRMGR> alter database mount;
Statement processed.
SVRMGR> alter database open;
Statement processed.
SVRMGR>

The second approach for repairing a damaged or missing control file is to use the backup control file (ASCII control file). This method is useful if you are missing all control files or are significantly changing the physical structure of your database. An example of a significant change is moving the data files to a different file system, for example, from /db01 to /db02. Walk through this technique by following the steps outlined here:

1. Execute the command that generates the ASCII control file. This should be done on a nightly basis as part of the backup script.

SVRMGR> alter database backup controlfile to trace;
Statement processed.
2. Display the trace file that this command generates. This trace file is the ASCII control file: a command script that re-creates the binary control file.

Dump file /oracle/admin/brdb/udump/ora_23302.trc
Oracle8i Enterprise Edition Release 8.1.5.0.0 - Production
With the Partitioning and Java options
PL/SQL Release 8.1.5.0.0 - Production
ORACLE_HOME = /oracle/product/8.1.5
System name:  Linux
Node name:    DS-LINUX
Release:      2.2.12-20
Version:      #1 Mon Sep 27 10:40:35 EDT 1999
Machine:      i686
Instance name: brdb
Redo thread mounted by this instance: 1
Oracle process number: 10
Unix process pid: 23302, image: oracle@DS-LINUX

*** SESSION ID:(9.106) 2000.03.06.23.44.56.665
*** 2000.03.06.23.44.56.665
# The following commands will create a new control file and use it
# to open the database.
# Data used by the recovery manager will be lost. Additional logs may
# be required for media recovery of offline data files. Use this
# only if the current versions of all online logs are available.
STARTUP NOMOUNT
CREATE CONTROLFILE REUSE DATABASE "BRDB" NORESETLOGS ARCHIVELOG
    MAXLOGFILES 16
    MAXLOGMEMBERS 2
    MAXDATAFILES 100
    MAXINSTANCES 1
    MAXLOGHISTORY 226
LOGFILE
    GROUP 1 (
        '/redo01/ORACLE/brdb/redo01a.log',
        '/redo02/ORACLE/brdb/redo01b.log'
    ) SIZE 2M,
    GROUP 2 (
        '/redo01/ORACLE/brdb/redo02a.log',
        '/redo02/ORACLE/brdb/redo02b.log'
    ) SIZE 2M,
    GROUP 3 (
        '/redo01/ORACLE/brdb/redo03a.log',
        '/redo02/ORACLE/brdb/redo03b.log'
    ) SIZE 2M
DATAFILE
    '/db01/ORACLE/brdb/system01.dbf',
    '/db01/ORACLE/brdb/rbs01.dbf',
    '/db01/ORACLE/brdb/temp01.dbf',
    '/db01/ORACLE/brdb/users01.dbf',
    '/db01/ORACLE/brdb/tools01.dbf',
    '/db01/ORACLE/brdb/data01.dbf',
    '/db01/ORACLE/brdb/indx01.dbf'
CHARACTER SET US7ASCII
;
# Recovery is required if any of the datafiles are restored backups,
# or if the last shutdown was not normal or immediate.
RECOVER DATABASE
# All logs need archiving and a log switch is needed.
ALTER SYSTEM ARCHIVE LOG ALL;
# Database can now be opened normally.
ALTER DATABASE OPEN;
# No tempfile entries found to add.
#

3. Rename this trace file to something more meaningful, with an SQL extension (for example, cr_control.sql). The top of this file has
trace information and comments that will cause the script to fail. You should remove this section before running the script.

[oracle@DS-LINUX brdb]$ cp /oracle/admin/brdb/udump/ora_23302.trc cr_control.sql

4. You can execute the file cr_control.sql and perform recovery. The recovery step can be modified to fit different situations by editing the RECOVER command in the file.

SVRMGR> @cr_control.sql
ORACLE instance started.
Total System Global Area   19504528 bytes
Fixed Size                    64912 bytes
Variable Size              16908288 bytes
Database Buffers            2457600 bytes
Redo Buffers                  73728 bytes
Statement processed.
ORA-00279: change 68378 generated at 03/29/00 22:32:12 needed for thread 1
ORA-00289: suggestion : /oracle/admin/brdb/arch1/archbrdb_91.log
ORA-00280: change 68378 for thread 1 is in sequence #91
Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
AUTO
Log applied.
ORA-00279: change 68449 generated at 03/30/00 19:27:11 needed for thread 1
ORA-00289: suggestion : /oracle/admin/brdb/arch1/archbrdb_92.log
ORA-00280: change 68449 for thread 1 is in sequence #92
ORA-00278: log file '/oracle/admin/brdb/arch1/archbrdb_91.log' no longer needed for this recovery
Log applied.
ORA-00279: change 68450 generated at 03/30/00 19:27:18 needed for thread 1
ORA-00289: suggestion : /oracle/admin/brdb/arch1/archbrdb_93.log
ORA-00280: change 68450 for thread 1 is in sequence #93
ORA-00278: log file '/oracle/admin/brdb/arch1/archbrdb_92.log' no longer needed for this recovery
Log applied.
ORA-00279: change 68451 generated at 03/30/00 19:27:20 needed for thread 1
ORA-00289: suggestion : /oracle/admin/brdb/arch1/archbrdb_94.log
ORA-00280: change 68451 for thread 1 is in sequence #94
ORA-00278: log file '/oracle/admin/brdb/arch1/archbrdb_93.log' no longer needed for this recovery
Log applied.
ORA-00279: change 68452 generated at 03/30/00 19:27:23 needed for thread 1
ORA-00289: suggestion : /oracle/admin/brdb/arch1/archbrdb_95.log
ORA-00280: change 68452 for thread 1 is in sequence #95
ORA-00278: log file '/oracle/admin/brdb/arch1/archbrdb_94.log' no longer needed for this recovery
Log applied.
ORA-00279: change 68453 generated at 03/30/00 19:27:25 needed for thread 1
ORA-00289: suggestion : /oracle/admin/brdb/arch1/archbrdb_96.log
ORA-00280: change 68453 for thread 1 is in sequence #96
ORA-00278: log file '/oracle/admin/brdb/arch1/archbrdb_95.log' no longer needed for this recovery

5. Open the database. This example executed the commands manually; these commands are also in the backup control file and will run without manual intervention, if desired.

SVRMGR> alter database open resetlogs;
Statement processed.
SVRMGR>
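Step 1 of this method said the trace command should run nightly, and step 3 said the trace header must be stripped before the script is usable. A sketch of that housekeeping follows. It is self-contained for illustration: the temporary directory stands in for user_dump_dest (/oracle/admin/brdb/udump in this example), the two generated files stand in for real trace files, and the svrmgrl call appears only as a comment because it needs a live instance.

```shell
#!/bin/sh
# Sketch of the nightly housekeeping: locate the newest trace file in
# user_dump_dest and keep only the runnable part (from STARTUP NOMOUNT
# onward) as cr_control.sql.
#
# On a real server the trace would first be produced with something like:
#   svrmgrl <<EOF
#   connect internal
#   alter database backup controlfile to trace;
#   EOF
udump=$(mktemp -d)        # stand-in for /oracle/admin/brdb/udump
printf 'Dump file old\nSTARTUP NOMOUNT\nOLD CONTENT\n' > "$udump/ora_100.trc"
printf 'Dump file new\n# header comments\nSTARTUP NOMOUNT\nCREATE CONTROLFILE REUSE DATABASE "BRDB" NORESETLOGS ARCHIVELOG\n' > "$udump/ora_200.trc"
touch -t 202001010000 "$udump/ora_100.trc"   # older trace
touch -t 202001020000 "$udump/ora_200.trc"   # newest trace wins
newest=$(ls -t "$udump"/*.trc | head -1)
# Strip everything before STARTUP NOMOUNT, as step 3 advises.
sed -n '/^STARTUP NOMOUNT/,$p' "$newest" > "$udump/cr_control.sql"
echo "kept $(basename "$newest")"
```

The resulting cr_control.sql begins at STARTUP NOMOUNT and can be run from Server Manager as in step 4.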
Recovering Read-Only Tablespaces
This section outlines four main scenarios dealing with recovery issues and read-only tablespaces. Before you explore these scenarios, you must have a solid understanding of read-only and read-write tablespaces. As you learned earlier in this book, a read-only tablespace is one that can only be read from; in other words, only SELECT statements can be performed on it. A read-write tablespace is one on which INSERT, UPDATE, and DELETE statements, as well as SELECT statements, can be performed.

These scenarios focus primarily on the state of the tablespace at the end of the recovery and on when a control file must be re-created. First, if a tablespace is read-only and is recovered to read-only, the control file should see the tablespace as read-only. Second, if the read-only tablespace becomes read-write when recovered, the control file should see the tablespace as read-write. Third, if a tablespace begins as read-write and is recovered to read-only, the control file should see the tablespace as read-only. Fourth, if the tablespace is read-only and you are using a backup-control-file-to-trace recovery, the read-only tablespace must be taken offline during the recovery and brought online after the recovery is complete. Table 17.1 summarizes these scenarios, which are detailed in the sections that follow.

TABLE 17.1
Read-Only Recovery Scenarios

Scenario                      Control File Status             Tablespace State    Tablespace State
                                                              before Recovery     after Recovery
1. Read-only to read-only     Current existing control file   Read-only           Read-only
2. Read-only to read-write    Current existing control file   Read-only           Read-write
3. Read-write to read-only    Current existing control file   Read-write          Read-only
4. Read-only to read-only     Backup control file to trace    Read-only           Read-only
Scenario 1: Read-Only to Read-Only

In this example, a read-only tablespace is recovered to read-only:

1. Validate that the USERS tablespace is read-only.
[oracle@DS-LINUX backup]$ svrmgrl
SVRMGR> select file#,name,enabled from v$datafile;
FILE#      NAME                             ENABLED
---------- -------------------------------- ----------
1          /db01/ORACLE/brdb/system01.dbf   READ WRITE
2          /db01/ORACLE/brdb/rbs01.dbf      READ WRITE
3          /db01/ORACLE/brdb/temp01.dbf     READ WRITE
4          /db01/ORACLE/brdb/users01.dbf    READ ONLY
5          /db01/ORACLE/brdb/tools01.dbf    READ WRITE
6          /db01/ORACLE/brdb/data01.dbf     READ WRITE
7          /db01/ORACLE/brdb/indx01.dbf     READ WRITE
7 rows selected.

2. Perform a hot backup.
[oracle@DS-LINUX backup]$ . hot-backup

3. Simulate a failure by removing the data file users01.dbf.

SVRMGR> ! rm /db01/ORACLE/brdb/users01.dbf

4. Shut down and restart the database.
SVRMGR> shutdown abort
ORACLE instance shut down.
SVRMGR> startup mount
ORACLE instance started.
Total System Global Area   19926416 bytes
Fixed Size                    64912 bytes
Variable Size              17330176 bytes
Database Buffers            2457600 bytes
Redo Buffers                  73728 bytes
Database mounted.
5. Restore the data file users01.dbf.

SVRMGR> ! cp /stage/users01.dbf* /db01/ORACLE/brdb

6. Recover the database.
SVRMGR> recover database;
ORA-00279: change 171252 generated at 04/10/00 23:20:37 needed for thread 1
ORA-00289: suggestion : /oracle/admin/brdb/arch1/archbrdb_24.log
ORA-00280: change 171252 for thread 1 is in sequence #25
ORA-00278: log file '/oracle/admin/brdb/arch1/archbrdb_24.log' no longer needed for this recovery
Log applied.
Media recovery complete.
SVRMGR>

7. Validate that the data file users01.dbf is read-only.
SVRMGR> select file#,name,enabled from v$datafile;
FILE#      NAME                             ENABLED
---------- -------------------------------- ----------
1          /db01/ORACLE/brdb/system01.dbf   READ WRITE
2          /db01/ORACLE/brdb/rbs01.dbf      READ WRITE
3          /db01/ORACLE/brdb/temp01.dbf     READ WRITE
4          /db01/ORACLE/brdb/users01.dbf    READ ONLY
5          /db01/ORACLE/brdb/tools01.dbf    READ WRITE
6          /db01/ORACLE/brdb/data01.dbf     READ WRITE
7          /db01/ORACLE/brdb/indx01.dbf     READ WRITE
7 rows selected.
Scenario 2: Read-Only to Read-Write

Here is the step-by-step process for recovering a read-only tablespace to read-write status:

1. Validate that the status of the data file is read-only.

SVRMGR> select name,enabled from v$datafile;
NAME                             ENABLED
-------------------------------- ----------
/db01/ORACLE/brdb/system01.dbf   READ WRITE
/db01/ORACLE/brdb/rbs01.dbf      READ WRITE
/db01/ORACLE/brdb/temp01.dbf     READ WRITE
/db01/ORACLE/brdb/users01.dbf    READ ONLY
/db01/ORACLE/brdb/tools01.dbf    READ WRITE
/db01/ORACLE/brdb/data01.dbf     READ WRITE
/db01/ORACLE/brdb/indx01.dbf     READ WRITE
7 rows selected.

2. Perform a hot backup.
[oracle@DS-LINUX backup]$ . hot-backup

3. Alter the tablespace to read-write status and switch out the logs for recovery.

SVRMGR> alter tablespace users read write;
Statement processed.
SVRMGR> alter system switch logfile;
Statement processed.
SVRMGR> alter system switch logfile;
Statement processed.
SVRMGR> alter system switch logfile;
Statement processed.
SVRMGR> alter system switch logfile;
Statement processed.

4. Simulate a failure by removing the data file users01.dbf.
SVRMGR> !rm /db01/ORACLE/brdb/users01.dbf

5. Restore the data file users01.dbf with the read-only backup copy and apply the logs that converted the data file to read-write.

SVRMGR> !cp /stage/users01* /db01/ORACLE/brdb
SVRMGR> shutdown abort
ORACLE instance shut down.
SVRMGR> startup mount
ORACLE instance started.
Total System Global Area   19926416 bytes
Fixed Size                    64912 bytes
Variable Size              17330176 bytes
Database Buffers            2457600 bytes
Redo Buffers                  73728 bytes
Database mounted.
SVRMGR> recover database;
ORA-00279: change 231891 generated at 04/13/00 23:02:22 needed for thread 1
ORA-00289: suggestion : /oracle/admin/brdb/arch1/archbrdb_39.log
ORA-00280: change 231891 for thread 1 is in sequence #39
Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
auto
Log applied.
ORA-00279: change 231897 generated at 04/13/00 23:02:31 needed for thread 1
ORA-00289: suggestion : /oracle/admin/brdb/arch1/archbrdb_40.log
ORA-00280: change 231897 for thread 1 is in sequence #40
ORA-00278: log file '/oracle/admin/brdb/arch1/archbrdb_39.log' no longer needed for this recovery
Log applied.
Media recovery complete.
SVRMGR> alter database open;
Statement processed.

6. Validate that the data file users01.dbf has read-write status.

SVRMGR> select name,enabled from v$datafile;
NAME                             ENABLED
-------------------------------- ----------
/db01/ORACLE/brdb/system01.dbf   READ WRITE
/db01/ORACLE/brdb/rbs01.dbf      READ WRITE
/db01/ORACLE/brdb/temp01.dbf     READ WRITE
/db01/ORACLE/brdb/users01.dbf    READ WRITE
/db01/ORACLE/brdb/tools01.dbf    READ WRITE
/db01/ORACLE/brdb/data01.dbf     READ WRITE
/db01/ORACLE/brdb/indx01.dbf     READ WRITE
7 rows selected.
SVRMGR>
Scenario 3: Read-Write to Read-Only

In this case, a tablespace begins in read-write mode, gets converted to read-only, and needs recovery after being converted to read-only:

1. Validate that the status of users01.dbf is read-write.
SVRMGR> select file#,name,enabled from v$datafile;
FILE#      NAME                             ENABLED
---------- -------------------------------- ----------
1          /db01/ORACLE/brdb/system01.dbf   READ WRITE
2          /db01/ORACLE/brdb/rbs01.dbf      READ WRITE
3          /db01/ORACLE/brdb/temp01.dbf     READ WRITE
4          /db01/ORACLE/brdb/users01.dbf    READ WRITE
5          /db01/ORACLE/brdb/tools01.dbf    READ WRITE
6          /db01/ORACLE/brdb/data01.dbf     READ WRITE
7          /db01/ORACLE/brdb/indx01.dbf     READ WRITE
7 rows selected.
SVRMGR>

2. Perform a hot backup of the database.
[oracle@DS-LINUX backup]$ . hot-backup

3. Alter the tablespace to read-only status and switch out the logs for recovery.

SVRMGR> alter tablespace users read only;
Statement processed.
SVRMGR> alter system switch logfile;
Statement processed.
SVRMGR> alter system switch logfile;
Statement processed.
SVRMGR> alter system switch logfile;
Statement processed.
SVRMGR> alter system switch logfile;

4. Simulate failure of the database by removing the data file users01.dbf.

SVRMGR> shutdown
Database closed.
Database dismounted.
ORACLE instance shut down.
SVRMGR> ! rm /db01/ORACLE/brdb/users01.dbf
5. Restore the data file users01.dbf and perform recovery.

SVRMGR> ! cp /stage/users01.dbf* /db01/ORACLE/brdb
SVRMGR> startup mount
ORACLE instance started.
Total System Global Area   19926416 bytes
Fixed Size                    64912 bytes
Variable Size              17330176 bytes
Database Buffers            2457600 bytes
Redo Buffers                  73728 bytes
Database mounted.
SVRMGR> recover database;
ORA-00279: change 231848 generated at 04/13/00 22:25:17 needed for thread 1
ORA-00289: suggestion : /oracle/admin/brdb/arch1/archbrdb_34.log
ORA-00280: change 231848 for thread 1 is in sequence #34
Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
auto
Log applied.
Media recovery complete.
SVRMGR> alter database open;
Statement processed.
SVRMGR>

6. Validate that the status of users01.dbf is read-only.
SVRMGR> select name,enabled from v$datafile;
NAME                             ENABLED
-------------------------------- ----------
/db01/ORACLE/brdb/system01.dbf   READ WRITE
/db01/ORACLE/brdb/rbs01.dbf      READ WRITE
/db01/ORACLE/brdb/temp01.dbf     READ WRITE
/db01/ORACLE/brdb/users01.dbf    READ ONLY
/db01/ORACLE/brdb/tools01.dbf    READ WRITE
/db01/ORACLE/brdb/data01.dbf     READ WRITE
/db01/ORACLE/brdb/indx01.dbf     READ WRITE
7 rows selected.
SVRMGR>
Scenario 4: Using a Backup Control File Recovery

In this final scenario, you use a backup control file recovery when the tablespace begins and ends in read-only mode:

1. Validate that the status of users01.dbf is read-only.
SVRMGR> select name,enabled from v$datafile;
NAME                             ENABLED
-------------------------------- ----------
/db01/ORACLE/brdb/system01.dbf   READ WRITE
/db01/ORACLE/brdb/rbs01.dbf      READ WRITE
/db01/ORACLE/brdb/temp01.dbf     READ WRITE
/db01/ORACLE/brdb/users01.dbf    READ ONLY
/db01/ORACLE/brdb/tools01.dbf    READ WRITE
/db01/ORACLE/brdb/data01.dbf     READ WRITE
/db01/ORACLE/brdb/indx01.dbf     READ WRITE
7 rows selected.
SVRMGR>

2. Perform a cold backup of the database.
[oracle@DS-LINUX backup]$ . cold-backup

3. Start up the database and run the script created from the BACKUP CONTROL FILE TO TRACE output. This is the same backup control file used in the previous "Reconstructing Lost or Damaged Control Files" section.

[oracle@DS-LINUX udump]$ svrmgrl
SVRMGR> connect internal
Connected.
SVRMGR> @cr_control
ORACLE instance started.
Total System Global Area   19926416 bytes
Fixed Size                    64912 bytes
Variable Size              17330176 bytes
Database Buffers            2457600 bytes
Redo Buffers                  73728 bytes
Statement processed.
4. Alter the read-only data file users01.dbf to offline status before you recover the database.

SVRMGR> alter database datafile '/db01/ORACLE/brdb/users01.dbf' offline;

5. Recover the database by using the BACKUP CONTROL FILE option and then open the database.

SVRMGR> recover database using backup controlfile until cancel;
ORA-00279: change 251964 generated at 04/13/00 23:08:03 needed for thread 1
ORA-00289: suggestion : /oracle/admin/brdb/arch1/archbrdb_44.log
ORA-00280: change 251964 for thread 1 is in sequence #44
Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
Log applied.
ORA-00279: change 252039 generated at 04/13/00 23:37:13 needed for thread 1
ORA-00289: suggestion : /oracle/admin/brdb/arch1/archbrdb_45.log
ORA-00280: change 252039 for thread 1 is in sequence #45
ORA-00278: log file '/oracle/admin/brdb/arch1/archbrdb_44.log' no longer needed for this recovery
Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
CANCEL
SVRMGR> alter database open resetlogs;
6. After the database is open, bring the USERS tablespace online.

SVRMGR> alter tablespace users online;
SVRMGR>

7. Validate that the tablespace USERS is read-only.
SVRMGR> select name,enabled from v$datafile;
NAME                             ENABLED
-------------------------------- ----------
/db01/ORACLE/brdb/system01.dbf   READ WRITE
/db01/ORACLE/brdb/rbs01.dbf      READ WRITE
/db01/ORACLE/brdb/temp01.dbf     READ WRITE
/db01/ORACLE/brdb/users01.dbf    READ ONLY
/db01/ORACLE/brdb/tools01.dbf    READ WRITE
/db01/ORACLE/brdb/data01.dbf     READ WRITE
/db01/ORACLE/brdb/indx01.dbf     READ WRITE
7 rows selected.
SVRMGR>
Summary

This chapter covered these recovery issues: minimizing downtime, reconstructing control files, and recovering read-only tablespaces. You learned about parallel recovery—how to set up the database for parallel activities and recover the database in parallel. You also learned two methods for reconstructing control files: you can copy an existing control file or re-create a control file with a BACKUP CONTROL FILE TO TRACE command. Finally, you walked through the major recovery scenarios pertaining to read-only tablespaces. The major point is that your control file should match the final status of the tablespace: read-only or read-write. If you must re-create a control file, the read-only tablespace must be taken offline to recover the database.
Key Terms

Before you take the exam, make sure you're familiar with the following terms:

temporary tablespace
parallel recovery
parallel query processes
PARALLEL_MIN_SERVERS
PARALLEL_MAX_SERVERS
RECOVERY_PARALLELISM
degree of parallelism
BACKUP CONTROL FILE TO TRACE
read-only tablespace
read-write tablespace
Review Questions

1. Which type of recovery utilizes multiple recovery processes?
   A. Multiple recovery
   B. Serial recovery
   C. Parallel recovery
   D. Dual recovery

2. What underlying database functionality is necessary to run parallel recovery?
   A. Parallel query
   B. Parallel Server option
   C. Standby database
   D. Parallel recovery option

3. What is a parallel query process?
   A. A background process that performs a portion of a query utilizing a separate CPU
   B. A background process that performs the whole query
   C. A background process that is started by the recovery process
   D. A background process that is started when a table has the parallel option enabled

4. Which entry most closely represents a parallel query background process in the Unix environment?
   A. ora_p001_brdb
   B. ora_parallel
   C. parallel_p001
   D. ora_para_brdb
5. What is the init.ora parameter for setting the minimum number of parallel servers?
   A. PARALLEL_MAX_SERVERS
   B. PARALLEL_MIN_SERVERS
   C. MIN_PARALLEL
   D. MINIMUM_PARALLEL_SERVERS

6. What is the init.ora parameter for setting the maximum number of parallel servers?
   A. PARALLEL_MIN_SERVERS
   B. PARALLEL_MAXIMUM
   C. PARALLEL_MAXIMUM_SERVERS
   D. PARALLEL_MAX_SERVERS

7. What is the name of the init.ora parameter that determines parallel recovery at the database level?
   A. PARALLEL_RECOVERY
   B. PARALLEL_RECOVERY_STATUS
   C. RECOVERY_PARALLELISM
   D. RECOVERY_PARALLELISM_LEVEL

8. Which type of tablespace allows data to be inserted, updated, deleted, and selected from the data files?
   A. Read-only
   B. Write-only
   C. Read-write
   D. Online
9. What type of tablespace allows only SELECT statements against its data?
   A. Read-only
   B. Write-only
   C. Select-only
   D. Read-write

10. What type of tablespace primarily stores objects that are not permanent?
   A. Not permanent
   B. Temporary
   C. System
   D. Rollback

11. An ALTER DATABASE BACKUP CONTROL FILE TO TRACE command produces which type of control file?
   A. Binary control file
   B. ASCII control file
   C. Damaged control file
   D. Repaired control file

12. What must you perform on a read-only tablespace before recovering with a backup control file?
   A. Take the data file online.
   B. Take the data file offline.
   C. Recover the data file and don't do anything.
   D. You can never recover a read-only tablespace.
13. What should be done to a read-only tablespace after recovery from BACKUP CONTROLFILE is complete?
   A. Take the data file offline
   B. Take the tablespace offline
   C. Put the tablespace online
   D. Put the data file online

14. Which is a good rule of thumb for judging the number of PARALLEL_MAX_SERVERS a server should have? Choose all that apply.
   A. Not greater than the number of CPUs in a server
   B. Not greater than the number of disk drives storing data files in the database
   C. Never more than four parallel query processes
   D. Never more than eight parallel query processes

15. What is the easiest way to restore a damaged control file?
   A. Issue the ALTER DATABASE BACKUP CONTROL FILE TO TRACE command
   B. Copy an existing current control file
   C. Restore a control file from tape
   D. Manually rebuild one

16. What must be done if all control files are damaged or lost? Choose all that apply.
   A. Use an ASCII control file.
   B. Use a control file generated by the ALTER DATABASE BACKUP CONTROL FILE TO TRACE command.
   C. Nothing can be done.
   D. Run the database without control files.
17. Which RECOVER command can recover one data file at a time?
   A. RECOVER DATABASE
   B. RECOVER TABLESPACE
   C. RECOVER BACKUP CONTROL FILE
   D. RECOVER DATAFILE

18. What type of recovery could be used to recover a data file when not all necessary data files are available for recovery?
   A. Using RECOVER TABLESPACE with the available data files
   B. Using RECOVER DATAFILE with the available data files
   C. Using RECOVER DATABASE with the available data files
   D. Using BACKUP CONTROL FILE with the available data files

19. What is a multiplexed control file? Choose all that apply.
   A. A copy of the control file on a separate disk specified at the Oracle parameter file level
   B. A copy of the control file on mirrored disk drives performed at the Oracle level
   C. A copy of a control file that has been backed up at least two times
   D. A copy of a control file that has been backed up at least three times

20. What type of recovery is performed by one recovery process?
   A. Standard, or serial, recovery
   B. Parallel recovery
   C. Parallel query
   D. Parallel database recovery only
Answers to Review Questions

1. C. Parallel recovery is the type of recovery that is not serial or single-process recovery. It uses more than one recovery process.

2. A. The parallel query is the underlying database functionality that runs parallel recovery; the Parallel Server option allows multiple instances to access one database.

3. A. A parallel query process is an Oracle process that performs a portion of a query. The query is divided by the number of parallel query processes used in the query, and a portion of the query is given to each parallel query process.

4. A. The ora_p001_brdb is an actual parallel background process. All parallel background processes begin with ora_p000_.

5. B. The init.ora parameter for minimum parallel servers is PARALLEL_MIN_SERVERS. This parameter is responsible for setting the minimum number of servers available to perform parallel query processes.

6. D. The init.ora parameter for maximum parallel servers is PARALLEL_MAX_SERVERS. This parameter is responsible for setting the maximum number of servers available to perform parallel query processes.

7. C. After this parameter is set, parallel recovery will run at this degree of parallelism, and it does not need to be set manually in the RECOVER statement.

8. C. A read-write tablespace allows data to be modified by inserts, updates, and deletes. This is the default state of a tablespace and a data file.

9. A. The read-only tablespace allows only SELECT statements, or queries, to be performed on the data in that tablespace.
10. B. The temporary tablespace's primary purpose is to store temporary objects used for such jobs as sorting and creating indexes and tables.

11. B. The ALTER DATABASE BACKUP CONTROL FILE TO TRACE command produces an ASCII control file that can rebuild the binary control file after some minor editing.

12. B. Take the data file offline and then recover by using BACKUP CONTROLFILE.

13. C. The tablespace must be brought online after the recovery from the backup control file is complete if the tablespace is to be accessed.

14. A and B. Either the number of CPUs or the number of disks containing data files for your database is a safe rule of thumb for CPU- and I/O-intensive queries.

15. B. The easiest way to restore a damaged control file is to copy and rename an existing control file from another location to the damaged location and then open the database.

16. A and B. Use an ASCII control file or BACKUP CONTROL FILE TO TRACE, which are the same thing.

17. D. The RECOVER DATAFILE command recovers one data file completely. The RECOVER TABLESPACE command can do this if only one data file makes up a tablespace.

18. B. The RECOVER DATAFILE command can recover one data file at a time even if there are missing data files from other tablespaces.

19. A and B. A copy of a control file on mirrored or non-mirrored drives done at the Oracle level (in init.ora) is a multiplexed control file.

20. A. Standard, or serial, recovery utilizes only one recovery process. This is the default recovery method, which is performed if the RECOVERY_PARALLELISM parameter is not set in the init.ora file or a parallelism factor is not added to the recovery command.
Chapter 18

Oracle Utilities for Troubleshooting

ORACLE8i BACKUP AND RECOVERY EXAM OBJECTIVES OFFERED IN THIS CHAPTER:

- Use log and trace files to diagnose backup and recovery problems
- Detect corruption by using different methods
- Detect and mark corrupted blocks by using the DBMS_REPAIR package
- Use the DBVERIFY utility
- Use the LogMiner utility to analyze redo log files to recover by undoing changes

Exam objectives are subject to change at any time without prior notice and at Oracle's sole discretion. Please visit Oracle's Training and Certification Web site (http://education.oracle.com/certification/index.html) for the most current exam objectives listing.
This chapter discusses and demonstrates utilities and methods that help diagnose backup and recovery problems. You will look at trace and log files, such as the alert log, to aid in the backup and recovery process. You will also learn various ways to detect block corruption. The tools used in the detection and repair of corrupt data blocks are the DBMS_REPAIR package, the ANALYZE TABLE VALIDATE STRUCTURE command, the DBVERIFY utility, and the init.ora parameter DB_BLOCK_CHECKING. Finally, this chapter shows you how to configure and use the Oracle LogMiner utility to undo changes in the database. You do this by extracting the SQL statements from the archive logs. These statements can then be redone or undone in the database, depending on the desired outcome.
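As a preview of the LogMiner workflow, the basic flow can be sketched as follows. This is a sketch only: the dictionary file name, directory, and archive log path are illustrative, and the DBMS_LOGMNR parameter names should be verified against your release's package documentation.

```sql
-- Extract a data dictionary file for LogMiner (the target directory
-- must be listed in UTL_FILE_DIR; names here are examples).
BEGIN
  DBMS_LOGMNR_D.BUILD(
    DICTIONARY_FILENAME => 'brdbdict.ora',
    DICTIONARY_LOCATION => '/oracle/admin/brdb/logmnr');
END;
/

-- Register an archived log and start the analysis session.
BEGIN
  DBMS_LOGMNR.ADD_LOGFILE(
    LOGFILENAME => '/oracle/admin/brdb/arch1/archbrdb_44.log',
    OPTIONS     => DBMS_LOGMNR.NEW);
  DBMS_LOGMNR.START_LOGMNR(
    DICTFILENAME => '/oracle/admin/brdb/logmnr/brdbdict.ora');
END;
/

-- The reconstructed statements, and the statements that reverse them,
-- are exposed through V$LOGMNR_CONTENTS.
SELECT sql_redo, sql_undo FROM v$logmnr_contents;
```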
Using Log and Trace Files to Diagnose Problems
Log and trace files can be an excellent aid for diagnosing backup and recovery problems. The most important log file is the alert_<SID>.log, generally referred to as the alert log. The <SID> is the Oracle system identifier for the instance in question. In our examples, this filename will be alert_brdb.log. The alert log file records all start-ups, shutdowns, log switches, some ALTER commands, storage problems, and a myriad of error-related statements.
The alert log's location is determined by the init.ora parameter BACKGROUND_DUMP_DEST. This file is located in the bdump directory in an OFA-compliant configuration. The examples in this chapter use the fully qualified pathname /oracle/admin/brdb/bdump. This directory will contain background trace files from the Oracle background processes, such as DBWR, in addition to the alert log. There is another type of trace file, called a user trace file. This trace is generated when there are problems with a user- or session-level operation. The location of these trace files is determined by the USER_DUMP_DEST parameter in the init.ora file. This is located in the udump directory in an OFA-compliant configuration. The examples in this chapter use the fully qualified location or pathname /oracle/admin/brdb/udump.

First, let's look at normal shutdown and start-up entries in the alert log.

Sun Apr 23 08:21:03 2000
Shutting down instance (normal)
License high water mark = 1
Sun Apr 23 08:21:03 2000
ALTER DATABASE CLOSE NORMAL
Sun Apr 23 08:21:03 2000
SMON: disabling tx recovery
SMON: disabling cache recovery
Sun Apr 23 08:21:04 2000
Thread 1 closed at log sequence 44
Sun Apr 23 08:21:04 2000
Completed: ALTER DATABASE CLOSE NORMAL
Sun Apr 23 08:21:04 2000
ALTER DATABASE DISMOUNT
Completed: ALTER DATABASE DISMOUNT
archiving is disabled
ARCH: changing ARC0 KCRRACTIVE->KCRRSHUTDN
ARCH: sending ARC0 shutdown message
Sun Apr 23 08:21:06 2000
ARCH shutting down
ARC0: Archival stopped
Sun Apr 23 08:21:06 2000
ARCH: changing ARC1 KCRRACTIVE->KCRRSHUTDN
ARCH: sending ARC1 shutdown message
Sun Apr 23 08:21:06 2000
ARCH shutting down
ARC1: Archival stopped
Sun Apr 23 08:21:06 2000
ARC0: changing ARC0 KCRRSHUTDN->KCRRDEAD
Sun Apr 23 08:21:06 2000
ARC1: changing ARC1 KCRRSHUTDN->KCRRDEAD
Sun Apr 23 08:21:10 2000
Starting ORACLE instance (normal)
LICENSE_MAX_SESSION = 0
LICENSE_SESSIONS_WARNING = 0
LICENSE_MAX_USERS = 0
Starting up ORACLE RDBMS Version: 8.1.5.0.0.
System parameters with non-default values:
  processes                = 40
  shared_pool_size         = 16000000
  control_files            = /oracle/data/brdb/control1.ctl, /oracle/data/brdb/control2.ctl
  db_block_buffers         = 300
  db_block_size            = 8192
  compatible               = 8.1.5
  log_archive_start        = TRUE
  log_archive_dest         = /oracle/admin/brdb/arch1
  log_archive_max_processes= 2
  log_archive_format       = archbrdb_%s.log
  log_buffer               = 40960
  log_checkpoint_interval  = 10000
  db_files                 = 64
  db_file_multiblock_read_count= 8
  dml_locks                = 100
  max_rollback_segments    = 60
  rollback_segments        = rb01, rb02, rb03, rb04, rb05, rb06
  global_names             = FALSE
  db_name                  = brdb
  parallel_automatic_tuning= TRUE
  utl_file_dir             = /apps/web/master, /apps/web/upload, /apps/web/remove, /apps/web/archive
  parallel_min_servers     = 2
  parallel_max_servers     = 4
  background_dump_dest     = /oracle/admin/brdb/bdump
  user_dump_dest           = /oracle/admin/brdb/udump
  max_dump_file_size       = 51200
  core_dump_dest           = /oracle/admin/brdb/cdump
PMON started with pid=2
OER 536879337 in Load Indicator : Error Code = 558095936 !
DBW0 started with pid=3
LGWR started with pid=4
CKPT started with pid=5
SMON started with pid=6
Sun Apr 23 08:21:11 2000
ARCH: changing ARC0 KCRRNOARCH->KCRRSCHED
ARCH: changing ARC1 KCRRNOARCH->KCRRSCHED
ARCH: STARTING ARCH PROCESSES
ARCH: changing ARC0 KCRRSCHED->KCRRSTART
ARCH: changing ARC1 KCRRSCHED->KCRRSTART
ARCH: invoking ARC0
RECO started with pid=7
Sun Apr 23 08:21:12 2000
ARC0: changing ARC0 KCRRSTART->KCRRACTIVE
Sun Apr 23 08:21:12 2000
ARCH: Initializing ARC0
ARCH: ARC0 invoked
ARCH: invoking ARC1
ARC0 started with pid=8
ARC0: Archival started
Sun Apr 23 08:21:12 2000
ARC1: changing ARC1 KCRRSTART->KCRRACTIVE
Sun Apr 23 08:21:12 2000
ARCH: Initializing ARC1
ARCH: ARC1 invoked
ARCH: STARTING ARCH PROCESSES COMPLETE
Sun Apr 23 08:21:12 2000
alter database mount
ARC1 started with pid=9
ARC1: Archival started
Sun Apr 23 08:21:16 2000
Successful mount of redo thread 1, with mount id 2065108860.
Sun Apr 23 08:21:16 2000
Database mounted in Exclusive Mode.
Completed: alter database mount
Sun Apr 23 08:21:16 2000
alter database open
Picked broadcast on commit scheme to generate SCNs
Sun Apr 23 08:21:16 2000
Thread 1 opened at log sequence 44
Current log# 1 seq# 44 mem# 0: /redo01/ORACLE/brdb/redo01a.log
Successful open of redo thread 1.
Sun Apr 23 08:21:17 2000
sql: prodding the archiver
Sun Apr 23 08:21:17 2000
ARC0: received prod
Sun Apr 23 08:21:17 2000
SMON: enabling cache recovery
SMON: enabling tx recovery
Sun Apr 23 08:21:18 2000
Completed: alter database open

Now let's look at a backup and recovery problem that can be resolved by careful examination of the alert logs.

1. The database alert log is displaying standard logging during the start-up sequence. The database is successfully mounted but fails to open because it is missing system01.dbf, the data file for the SYSTEM tablespace. If recovery of this missing data file occurred without viewing the additional trace information, a DBA may restore only the SYSTEM tablespace. However, the alert log refers the DBA to another trace file that is generated by DBWR; in this case it is called dbw0_2280.trc.

alter database mount
Wed Mar 29 17:53:29 2000
Successful mount of redo thread 1, with mount id 2062981145.
Wed Mar 29 17:53:29 2000
Database mounted in Exclusive Mode.
Completed: alter database mount
Wed Mar 29 17:53:29 2000
alter database open
Wed Mar 29 17:53:29 2000
Errors in file /oracle/admin/brdb/bdump/dbw0_2280.trc:
ORA-01157: cannot identify/lock data file 1 - see DBWR trace file
ORA-01110: data file 1: '/db01/ORACLE/brdb/system01.dbf'
ORA-27037: unable to obtain file status
Linux Error: 2: No such file or directory
Additional information: 3

2. This trace file has more information about the problem. The file identifies that data files 1 through 7 are missing. In this case, these files make up the whole database. All these data files would need to be restored to begin the recovery.

Dump file /oracle/admin/brdb/bdump/dbw0_2280.trc
Oracle8i Enterprise Edition Release 8.1.5.0.0 - Production
With the Partitioning and Java options
PL/SQL Release 8.1.5.0.0 - Production
ORACLE_HOME = /oracle/product/8.1.5
System name: Linux
Node name: DS-LINUX
Release: 2.2.12-20
Version: #1 Mon Sep 27 10:40:35 EDT 1999
Machine: i686
Instance name: brdb
Redo thread mounted by this instance: 1
Oracle process number: 3
Unix process pid: 2280, image: oracle@DS-LINUX (DBW0)
*** SESSION ID:(2.1) 2000.03.29.17.53.29.916
ORA-01157: cannot identify/lock data file 1 - see DBWR trace file
ORA-01110: data file 1: '/db01/ORACLE/brdb/system01.dbf'
ORA-27037: unable to obtain file status
Linux Error: 2: No such file or directory
Additional information: 3
ORA-01157: cannot identify/lock data file 2 - see DBWR trace file
ORA-01110: data file 2: '/db01/ORACLE/brdb/rbs01.dbf'
ORA-27037: unable to obtain file status
Linux Error: 2: No such file or directory
Additional information: 3
ORA-01157: cannot identify/lock data file 3 - see DBWR trace file
ORA-01110: data file 3: '/db01/ORACLE/brdb/temp01.dbf'
ORA-27037: unable to obtain file status
Linux Error: 2: No such file or directory
Additional information: 3
ORA-01157: cannot identify/lock data file 4 - see DBWR trace file
ORA-01110: data file 4: '/db01/ORACLE/brdb/users01.dbf'
ORA-27037: unable to obtain file status
Linux Error: 2: No such file or directory
Additional information: 3
ORA-01157: cannot identify/lock data file 5 - see DBWR trace file
ORA-01110: data file 5: '/db01/ORACLE/brdb/tools01.dbf'
ORA-27037: unable to obtain file status
Linux Error: 2: No such file or directory
Additional information: 3
ORA-01157: cannot identify/lock data file 6 - see DBWR trace file
ORA-01110: data file 6: '/db01/ORACLE/brdb/data01.dbf'
ORA-27037: unable to obtain file status
Linux Error: 2: No such file or directory
Additional information: 3
ORA-01157: cannot identify/lock data file 7 - see DBWR trace file
ORA-01110: data file 7: '/db01/ORACLE/brdb/indx01.dbf'
ORA-27037: unable to obtain file status
Linux Error: 2: No such file or directory
Additional information: 3
Using Various Methods to Detect Corruption
There are four methods for detecting corruption:
The DBMS_REPAIR procedure used against a table, index, or partition
The ANALYZE TABLE VALIDATE STRUCTURE command
The Oracle DBVERIFY utility used against the offline data files
The init.ora parameter DB_BLOCK_CHECKING, which checks data and index blocks each time they are created or modified.
Each method is described in the following sections.
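Of these four methods, only DB_BLOCK_CHECKING is enabled through the parameter file rather than run on demand. A minimal init.ora fragment might look like the following; the TRUE setting is shown for illustration, and note that block checking adds CPU overhead on every block change.

```
# init.ora fragment (illustrative): verify data and index blocks
# for internal consistency whenever they are created or modified.
db_block_checking = TRUE
```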
DBMS_REPAIR DBMS_REPAIR is a package in an Oracle database made up of PL/SQL procedures that provide information and perform tasks associated with fixing and identifying corrupt data blocks. The DBMS_REPAIR package is created when the bmprpr.sql script is executed in the database. This script builds the DBMS_REPAIR package and its associated procedures, CHECK_OBJECT,
FIX_CORRUPT_BLOCKS, DUMP_ORPHAN_KEYS, REBUILD_FREELISTS, SKIP_CORRUPT_BLOCKS, and ADMIN_TABLES. The script dbmsrpr.sql is run automatically when the script catproc.sql is run—usually right after database creation. The following list describes these procedures:
CHECK_OBJECT  Detects and displays table and index corruptions.
FIX_CORRUPT_BLOCKS  Marks corrupt blocks identified by the CHECK_OBJECT procedure.
DUMP_ORPHAN_KEYS  Displays index entries that have correlating corrupt blocks in the associated table.
REBUILD_FREELISTS  Rebuilds the free lists associated with an object.
SKIP_CORRUPT_BLOCKS  Skips corrupted blocks to avoid ORA-1578 errors. The ORA-1578 error indicates a corrupt data block in a database segment such as a table or index.
ADMIN_TABLES  Provides create, drop, and purge functions for the DBMS_REPAIR and orphan key tables.
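ADMIN_TABLES can also drop or purge the repair table once the repair work is finished. The following sketch uses the package's DROP_ACTION constant and the REPAIR_TABLE name from the walkthrough later in this chapter:

```sql
-- Drop the repair table created with CREATE_ACTION; using
-- dbms_repair.purge_action instead would delete only the rows that
-- no longer refer to existing objects.
begin
  dbms_repair.admin_tables (
    table_name => 'REPAIR_TABLE',
    table_type => dbms_repair.repair_table,
    action     => dbms_repair.drop_action);
end;
/
```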
ANALYZE TABLE VALIDATE STRUCTURE
The ANALYZE TABLE VALIDATE STRUCTURE command validates the integrity of the structure of the object being analyzed. The command either succeeds or fails at the object level. If it returns an error for the object being analyzed, you need to completely rebuild that object. If no error is returned, the object is free of corruption and does not need to be re-created. The following is an example of the command with and without error:
SVRMGR> analyze table test.t3 validate structure;
*
ERROR at line 1:
ORA-01498: block check failure - see trace file
SVRMGR> analyze table t1 validate structure;
Statement processed.
SVRMGR>
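ANALYZE also accepts a CASCADE option that validates a table's indexes together with the table itself; a sketch using the same table as above:

```sql
-- Validate the table and all of its indexes in one pass; an error
-- here means the table or one of its indexes is corrupt, or the two
-- are inconsistent with each other.
SVRMGR> analyze table test.t3 validate structure cascade;
```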
DBVERIFY
DBVERIFY is an Oracle utility used to see whether corruption exists in a particular data file. It is most often run against a backup of the database or when the database is not running. If necessary, however, the tool can be run while the database is online, which minimizes the availability impact on high-use databases. The output of this utility reports the index and data blocks that processed with and without error, the total number of blocks processed, the number of empty blocks, and the number of blocks already marked as corrupt.
The DBVERIFY utility uses the term pages instead of blocks. This term refers to blocks within the Oracle data file.
DB_BLOCK_CHECKING
The DB_BLOCK_CHECKING init.ora parameter sets block checking at the database level. The default is set to TRUE. This parameter forces checks for corrupt blocks each time blocks are modified in the tables, indexes, and partitions. The following is a sample of the parameter from an init.ora file.
[oracle@DS-LINUX pfile]$
db_block_checking = TRUE
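The parameter can also be changed for a single session, which is useful for checking the blocks touched by one batch job without paying the overhead database-wide. This is a hedged sketch, assuming the parameter is session-modifiable on your release:

```sql
-- Enable block checking for this session only; the init.ora
-- setting for the rest of the instance is untouched.
SVRMGR> alter session set db_block_checking = TRUE;
```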
Using the DBMS_REPAIR Package
The DBMS_REPAIR package is a set of utilities that enables you to detect and fix corrupt blocks in tables and indexes. The DBMS_REPAIR package is made up of multiple stored procedures, as we described earlier. Each of these procedures performs different actions. This section focuses on the CHECK_OBJECT procedure and the FIX_CORRUPT_BLOCKS procedure. The general procedure is to verify that you have corrupt data blocks. Next you need to put the list of corrupt data blocks in a holding table so the corrupt blocks can be identified. These blocks are then marked as corrupt so that they can be skipped over in a query or normal usage of the table. Let’s
Copyright ©2000 SYBEX , Inc., Alameda, CA
www.sybex.com
624
Chapter 18
Oracle Utilities for Troubleshooting
walk through a step-by-step example of how to detect and mark corrupt blocks.
1. Generate a trace file of the corrupt block, which is automatically created by the ANALYZE command.
SVRMGR> analyze table test.t3 validate structure;
*
ERROR at line 1:
ORA-01498: block check failure - see trace file
2. View the trace file to determine bad block information. In this example, the bad block is 5. This is indicated by the output line nrow=5, highlighted at the end of this code listing.
Dump file /oracle/admin/brdb/udump/brdb_ora_4587.trc
Oracle8 Enterprise Edition Release 8.1.5.0.0 - Beta
With the Partitioning option
*** 1998.12.16.15.53.02.000
*** SESSION ID:(11.9) 2000.05.08.11.51.09.000
kdbchk: row locked by non-existent transaction
table=0 slot=0 lockid=44 ktbbhitc=1
Block header dump: 0x01800005
Object id on Block? Y
seg/obj: 0xb6d  csc: 0x00.1cf5f  itc: 1  flg:  typ: 1 - DATA
fsl: 0  fnx: 0x0 ver: 0x01
Itl   Xid                     Uba                   Flag  Lck  Scn/Fsc
0x01  xid: 0x0003.011.00000151  uba: 0x008018fb.0645.0d  --U-  4  fsc 0x0000.0001cf60
data_block_dump
===============
tsiz: 0x6b8
hsiz: 0x18
pbl: 0x38088044
bdba: 0x01800008
flag=-----------
ntab=1
nrow=5
3. Create the repair tables to store and retrieve information from running the DBMS_REPAIR procedures. Below is example PL/SQL in a file called repair_tab.sql to perform this activity. This is a custom script that must be created by the DBA. After the script repair_tab.sql is run, query DBA_OBJECTS to verify that the REPAIR_TABLE has been created.
SVRMGR> ! more repair_tab.sql
-- Create DBMS Repair Table
declare
begin
dbms_repair.admin_tables (
table_name => 'REPAIR_TABLE',
table_type => dbms_repair.repair_table,
action => dbms_repair.create_action,
tablespace => 'USERS');
end;
/
SVRMGR>
SVRMGR> @repair_tab
Statement processed.
SVRMGR> select owner, object_name, object_type
2> from dba_objects
3> where object_name like '%REPAIR_TABLE';
OWNER           OBJECT_NAME       OBJECT_TYPE
--------------  ----------------  -----------
SYS             DBA_REPAIR_TABLE  VIEW
SYS             REPAIR_TABLE      TABLE
2 rows selected.
SVRMGR>
4. Check the object, or table T3, to determine whether there is a corrupt block in the table. Even though you know this from the ANALYZE TABLE VALIDATE STRUCTURE command, you need this information saved in the REPAIR_TABLE.
SVRMGR> ! more check_obj.sql
--determine what block is corrupt in a table
set serveroutput on size 100000;
declare
rpr_count int;
begin
rpr_count := 0;
dbms_repair.check_object(
schema_name => 'TEST',
object_name => 'T3',
repair_table_name => 'REPAIR_TABLE',
corrupt_count => rpr_count);
dbms_output.put_line('repair block count: '
||to_char(rpr_count));
end;
/
SVRMGR> @check_obj.sql
Server Output   ON
Statement processed.
repair block count: 1
SVRMGR>
5. Verify that the REPAIR_TABLE contains information about table T3
and the bad block. This query has been broken into three queries for display purposes.
SVRMGR> select object_name, block_id, corrupt_type, marked_corrupt,
2> corrupt_description, repair_description
3> from repair_table;
OBJECT_NAME  BLOCK_ID  CORRUPT_TYPE  MARKED_COR
-----------  --------  ------------  ----------
T1           3         1             FALSE
SVRMGR> select object_name, corrupt_description
2> from repair_table;
OBJECT_NAME  CORRUPT_DESCRIPTION
-----------  -------------------------------------------
T1           kdbchk: row locked by non-existent transaction
             table=0 slot=0 lockid=44 ktbbhitc=1
SVRMGR> select object_name, repair_description
2> from repair_table;
OBJECT_NAME  REPAIR_DESCRIPTION
-----------  ---------------------------
T1           mark block software corrupt
6. A backup of the table should be created before any attempts are made
to fix or mark the block as corrupt. Therefore, attempt to salvage any good data from the corrupted block before marking it as corrupt.
SVRMGR> connect test/test
Statement processed.
SVRMGR> create table t3_bak as
2> select * from t3
3> where dbms_rowid.rowid_block_number(rowid) = 5
4> and dbms_rowid.rowid_to_absolute_fno(rowid,'TEST','T3') = 4;
Table created.
SQL> select c1 from t3_bak;
C1
--------
1
2
3
5
7. Mark block 5 as corrupt. Note that full table scans of the table will still generate an ORA-1578 error until the corrupt block is marked skip enabled in step 9.
SVRMGR> ! more fix_blocks.sql
-- Create DBMS Fix Corrupt blocks
declare
fix_block_count int;
begin
fix_block_count := 0;
dbms_repair.fix_corrupt_blocks (
schema_name => 'TEST',
object_name => 'T3',
object_type => dbms_repair.table_object,
repair_table_name => 'REPAIR_TABLE',
fix_count => fix_block_count);
dbms_output.put_line('fix blocks count: '
|| to_char(fix_block_count));
end;
/
SVRMGR> @fix_blocks
fix blocks count: 1
Statement processed.
SQL> select object_name, block_id, marked_corrupt
2> from repair_table;
OBJECT_NAME                    BLOCK_ID   MARKED_COR
------------------------------ ---------- ----------
T3                             5          TRUE
SQL> select * from test.t3;
select * from test.t3
*
ERROR at line 1:
ORA-01578: ORACLE data block corrupted (file # 4, block # 5)
ORA-01110: data file 4: '/db01/ORACLE/brdb/users01.dbf'
8. Use the DUMP_ORPHAN_KEYS procedure to dump the index entries that point to the corrupt rows in the corrupt data blocks. This procedure displays the affected index entries. Therefore, the index will need to be rebuilt.
SVRMGR> ! more orphan_dump.sql
-- Create DBMS Dump orphan/Index entries
declare
orph_count int;
begin
orph_count := 0;
dbms_repair.dump_orphan_keys (
schema_name => 'TEST',
object_name => 'T3_PK',
object_type => dbms_repair.index_object,
repair_table_name => 'REPAIR_TABLE',
orphan_table_name => 'ORPHAN_KEY_TABLE',
key_count => orph_count);
dbms_output.put_line('orphan-index entries: '
|| to_char(orph_count));
end;
/
SVRMGR> @orphan_dump
orphan-index entries: 3
Statement processed.
SVRMGR> select index_name, count(*) from orphan_key_table
2> group by index_name;
INDEX_NAME                     COUNT(*)
------------------------------ ----------
T3_PK                          3
9. Mark the corrupt block as skip enabled. This allows for querying the table without retrieving the corrupt block, which would trigger an ORA-1578 error.
SVRMGR> ! more corrupt_block_skip.sql
-- Skips the corrupt blocks in the tables.
declare
begin
dbms_repair.skip_corrupt_blocks (
schema_name => 'TEST',
object_name => 'T3',
object_type => dbms_repair.table_object,
flags => dbms_repair.skip_flag);
end;
/
SVRMGR> @corrupt_block_skip
Statement processed.
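With skip enabled, the full table scan that failed in step 7 should now succeed; a minimal sketch continuing the same example (the number of rows returned depends on how many rows were lost with the corrupt block, so no output is shown):

```sql
-- The scan now skips the block marked corrupt instead of raising
-- ORA-1578; rows that lived in that block are simply not returned.
SVRMGR> select count(*) from test.t3;
```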
SVRMGR> select table_name, skip_corrupt from dba_tables
2> where table_name = 'T3';
TABLE_NAME                     SKIP_COR
------------------------------ --------
T3                             ENABLED
10. Rebuild the free lists so that the corrupt block is never added to the free lists of blocks. This will prevent the block from being used for future data entry. This is performed with the procedure DBMS_REPAIR.REBUILD_FREELISTS.
SVRMGR> ! more rebuild_freelists.sql
-- Removes the bad block from the freelist of blocks
declare
begin
dbms_repair.rebuild_freelists (
schema_name => 'TEST',
object_name => 'T3',
object_type => dbms_repair.table_object);
end;
/
SVRMGR> @rebuild_freelists
Statement processed.
11. Finally, you can rebuild the index, and the table T3 is ready for use.
SVRMGR> drop index t3_pk;
Index dropped.
SVRMGR> create index t3_pk on t3 (c1);
Index created.
Repairing corrupt blocks can result in the loss of the data in those blocks. Furthermore, the repairs can result in logical inconsistencies between certain relationships in your database. Thus, you should perform careful analysis before using the DBMS_REPAIR package to determine the overall effect on the database. You should use this tool with the assistance of Oracle support, if possible.
Using the DBVERIFY Utility
The Oracle DBVERIFY utility is executed by entering dbv at the command prompt. This utility has six parameters that can be specified at execution. The parameters are FILE, START, END, BLOCKSIZE, LOGFILE, and FEEDBACK. Table 18.1 describes these parameters.
TABLE 18.1  DBVERIFY Parameters

Parameter   Description                                       Default Value
---------   -----------------------------------------------   ----------------------------
FILE        Data file to be verified by the utility.          No default value
START       Starting block at which to begin verification.    First block in the data file
END         Ending block at which to end verification.        Last block in the data file
BLOCKSIZE   Block size of the database. This should be the    2048
            same as the init.ora parameter DB_BLOCK_SIZE.
LOGFILE     Log file to store the results of running the      No default value
            utility.
FEEDBACK    Displays the progress of the utility by           0
            printing a dot for each n blocks processed.
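For example, a subset of a file can be checked with progress feedback, which is handy on very large data files. A sketch (the file name and block range are illustrative):

```shell
# Verify blocks 1 through 1000 of users01.dbf, printing a dot for
# every 100 blocks processed and writing the results to a log file.
dbv file=users01.dbf blocksize=8192 start=1 end=1000 \
    feedback=100 logfile=users01.log
```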
This help information can also be seen by executing the DBV HELP=Y command. See the following example:
[oracle@DS-LINUX brdb]$ dbv help=y
DBVERIFY: Release 8.1.5.0.0 - Production on Mon Apr 24 19:17:53 2000
(c) Copyright 1999 Oracle Corporation. All rights reserved.
Keyword     Description                 (Default)
----------------------------------------------------
FILE        File to Verify              (NONE)
START       Start Block                 (First Block of File)
END         End Block                   (Last Block of File)
BLOCKSIZE   Logical Block Size          (2048)
LOGFILE     Output Log                  (NONE)
FEEDBACK    Display Progress            (0)
To run the DBVERIFY utility, the BLOCKSIZE parameter must match your database block size, or the following error will result:
[oracle@DS-LINUX brdb]$ dbv file=data01.dbf
DBVERIFY: Release 8.1.5.0.0 - Production on Mon Apr 24 19:18:48 2000
(c) Copyright 1999 Oracle Corporation. All rights reserved.
DBV-00103: Specified BLOCKSIZE (2048) differs from actual (8192)
Once the BLOCKSIZE parameter is set to match the database block size, the DBVERIFY utility can proceed. There are two ways to run this utility: without the LOGFILE parameter specified, and with it specified.
Let’s walk through each of these examples. First, without the LOGFILE parameter set:
[oracle@DS-LINUX brdb]$ dbv file=data01.dbf BLOCKSIZE=8192
DBVERIFY: Release 8.1.5.0.0 - Production on Mon Apr 24 19:20:49 2000
(c) Copyright 1999 Oracle Corporation. All rights reserved.
DBVERIFY - Verification starting : FILE = data01.dbf
DBVERIFY - Verification complete
Total Pages Examined         : 12800
Total Pages Processed (Data) : 319
Total Pages Failing (Data)   : 0
Total Pages Processed (Index): 0
Total Pages Failing (Index)  : 0
Total Pages Processed (Other): 2
Total Pages Empty            : 12479
Total Pages Marked Corrupt   : 0
Total Pages Influx           : 0
The following code demonstrates the DBVERIFY utility with the LOGFILE parameter set. The results of this command are written to the file data01.log and not to the screen. They can be displayed by editing the log file.
[oracle@DS-LINUX brdb]$ dbv file=data01.dbf BLOCKSIZE=8192 LOGFILE=data01.log
DBVERIFY: Release 8.1.5.0.0 - Production on Mon Apr 24 19:19:56 2000
(c) Copyright 1999 Oracle Corporation. All rights reserved.
In this second example, the previous output would appear in the file data01.log.
Analyzing Redo Log Files with LogMiner
LogMiner is an Oracle utility whose main purpose is to read redo log files to identify, and undo, logical corruption caused by DML in the database. LogMiner works by reading the redo log files and converting them back into the original SQL statements, along with corresponding SQL statements that will back out or reverse the statements that might have caused the logical corruption. In order to use LogMiner, the database must be properly configured. The database must have a UTL_FILE_DIR parameter set in the init.ora file. This will be the location of the dictionary file. The dictionary file makes the generated SQL statements use the data dictionary names, as opposed to the internal object identifiers. The data dictionary names are the table names and object names seen when accessing the database. The dictionary file is created by executing the procedure DBMS_LOGMNR_D.BUILD. Next, the redo logs to be analyzed need to be added, by executing the procedure DBMS_LOGMNR.ADD_LOGFILE. Once these steps are complete, the redo logs can be analyzed by executing the procedure DBMS_LOGMNR.START_LOGMNR. Finally, the results can be viewed by querying the V$LOGMNR_CONTENTS view. Here are the steps:
1. Edit the init.ora file and validate that there is a UTL_FILE_DIR location. If not, you will need to add one and restart the database. In this case, one is already present and active.
background_dump_dest = /oracle/admin/brdb/bdump
compatible = 8.1.5
control_files = /oracle/data/brdb/control1.ctl,/oracle/data/brdb/control2.ctl
core_dump_dest = /oracle/admin/brdb/cdump
db_block_buffers = 300
db_block_size = 8192
db_file_multiblock_read_count = 8
db_files = 64
db_name = brdb
dml_locks = 100
global_names = false
log_archive_dest_1 = 'location=/oracle/admin/brdb/arch1'
log_archive_dest_2 = 'location=/oracle/admin/brdb/arch2'
#log_archive_dest = /oracle/admin/brdb/arch1
#log_archive_duplex_dest = /oracle/admin/brdb/arch2
log_archive_max_processes = 2
log_archive_format = archbrdb_%s.log
log_archive_start = TRUE
log_buffer = 40960
log_checkpoint_interval = 10000
max_dump_file_size = 51200
max_rollback_segments = 60
processes = 40
parallel_max_servers = 4
parallel_min_servers = 2
parallel_automatic_tuning = TRUE
rollback_segments = (rb01,rb02,rb03,rb04,rb05,rb06)
shared_pool_size = 16000000
user_dump_dest = /oracle/admin/brdb/udump
utl_file_dir = /oracle/admin/brdb/adhoc
2. Build the dictionary file in the UTL_FILE_DIR location.
SVRMGR> EXECUTE dbms_logmnr_d.build(dictionary_filename =>'dictionary.ora', dictionary_location =>'/oracle/admin/brdb/adhoc');
Statement processed.
SVRMGR>
3. Add the redo logs you want to analyze.
SVRMGR> execute dbms_logmnr.add_logfile(LogFileName => '/oracle/admin/brdb/arch2/archbrdb_51.log', Options => dbms_logmnr.ADDFILE);
Statement processed.
SVRMGR> execute dbms_logmnr.add_logfile(LogFileName => '/oracle/admin/brdb/arch2/archbrdb_52.log', Options => dbms_logmnr.ADDFILE);
Statement processed.
SVRMGR>
4. Start LogMiner to analyze the redo log files you just added.
SVRMGR> execute dbms_logmnr.start_logmnr(DictFileName =>'/oracle/admin/brdb/adhoc/dictionary.ora');
Statement processed.
SVRMGR>
5. Query the view V$LOGMNR_CONTENTS to display the original operation, the redo operation, and the undo operation. In this example, if you wanted to undo the DELETE statement, you could run the INSERT statement to fix this logical corruption.
SVRMGR> select operation,sql_redo from v$logmnr_contents;
OPERATION  SQL_REDO
---------- ----------------------------------------------
DELETE     delete from TEST.T3 where C1 = 7410 and C2 = 'This is a test to fill redo logs!'
SVRMGR> select operation,sql_undo from v$logmnr_contents;
OPERATION  SQL_UNDO
---------- ----------------------------------------------
DELETE     insert into TEST.T3(C1,C2) values (7410,'This is a test to fill redo logs!')
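When the analysis is complete, the LogMiner session should be ended to release the memory it allocated. DBMS_LOGMNR.END_LOGMNR is part of the same package, although it is not shown in the steps above:

```sql
-- Close the LogMiner session started in step 4; V$LOGMNR_CONTENTS
-- is no longer queryable after this call.
SVRMGR> execute dbms_logmnr.end_logmnr;
```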
Another use for LogMiner is to move data from one database to another, for example, from an Online Transaction Processing (OLTP) database to a Decision Support System (DSS) or an Operational Data Store (ODS).
Summary
This chapter demonstrated uses for alert logs and trace files to diagnose backup and recovery situations. You learned four ways to detect block corruption: by using the DBMS_REPAIR package, the ANALYZE TABLE VALIDATE STRUCTURE command, the DBVERIFY utility, and the DB_BLOCK_CHECKING init.ora parameter. This chapter also showed you how to use the DBMS_REPAIR package and the DBVERIFY utility to identify block corruption. The DBMS_REPAIR example demonstrated how to fix corruption. You also saw the use of the Oracle LogMiner utility to fix logical corruption.
Key Terms
Before you take the exam, make sure you’re familiar with the following terms:
alert log
BACKGROUND_DUMP_DEST
USER_DUMP_DEST
DBMS_REPAIR
ANALYZE TABLE VALIDATE STRUCTURE
DBVERIFY
DB_BLOCK_CHECKING
LogMiner
V$LOGMNR_CONTENTS
Review Questions
1. What is the name of the log file that contains start-up and shutdown information, ALTER DATABASE and ALTER SYSTEM commands, and errors to the database?
A. Error log
B. Log errors
C. Alert log
D. Log alerts
2. What is the name of the init.ora parameter that determines the location of the alert log?
A. USER_DUMP_DEST
B. ALERT_DUMP_DEST
C. BACKGROUND_DUMP_DEST
D. LOG_DUMP_LOC
3. What init.ora parameter determines the location of the Oracle background processes’ trace files?
A. BACKGROUND_DUMP_DEST
B. PROCESS_DUMP_DEST
C. USER_DUMP_DEST
D. BACKGROUND_DEST
4. Which init.ora parameter determines the location of the user trace files?
A. BACKGROUND_DUMP_DEST
B. USER_DUMP_DEST
C. USER_DEST_DUMP
D. BACKGROUND_DEST_DUMP
5. In an OFA-compliant database configuration, what is the directory name of the BACKGROUND_DUMP_DEST?
A. udump
B. bdump
C. cdump
D. adump
6. What is a method of detecting and fixing block corruption comprised of PL/SQL procedures?
A. DBMS_REPAIR
B. DBVERIFY
C. ANALYZE TABLE
D. DB_BLOCK_CHECKING
7. What is the name of the utility that identifies corrupt blocks and uses the term pages? Choose all that apply.
A. DBVERIFY utility
B. dbv
C. DBMS_REPAIR
D. ANALYZE TABLE
8. What is the name of the init.ora parameter that checks for block
corruption each time a block is modified?
A. DB_BLOCK_CHECK
B. BLOCK_CHECKER
C. BLOCK_DB_CHECK
D. DB_BLOCK_CHECKING
9. What DBMS_REPAIR procedure detects corrupt blocks?
A. DBMS_REPAIR.CHECK_OBJECT
B. DBMS_REPAIR.DETECT_OBJECT_CORRUPTION
C. DBMS_REPAIR.VALIDATE_OBJECT
D. DBMS_REPAIR.DETECT_OBJECT
10. What init.ora parameter must be set and active to utilize the LogMiner utility?
A. LOGMINER_ENABLE
B. LOGMINER_STATUS
C. UTL_FILE_DIR
D. LOG_ENABLE
11. What table contains the operation, undo, and redo information from the LogMiner utility?
A. V$LOGMNR_UNDO
B. V$LOGMNR_INFO
C. V$LOGMNR_CONTENTS
D. V$LOGMNR_STATUS
12. What happens when a full table scan is performed with a block marked as corrupt? Choose the best answer.
A. Nothing; full table scans don’t select all blocks.
B. You receive an Oracle error.
C. You receive an ORA-1578 error.
D. You receive an ORA-1498 error.
13. Why must a DBA be cautious about repairing blocks with the DBMS_REPAIR tool? Choose all that apply.
A. Loss of data
B. Logical data inconsistencies
C. No reason to be cautious
D. Impacts to the whole database
Answers to Review Questions
1. C. The alert log records all start-up and shutdown information and various ALTER DATABASE and ALTER SYSTEM commands, as well as errors occurring in the database.
2. C. The init.ora parameter BACKGROUND_DUMP_DEST determines the location of the alert log.
3. A. The init.ora parameter that determines the location of the Oracle background processes’ trace files is BACKGROUND_DUMP_DEST.
4. B. USER_DUMP_DEST is the init.ora parameter that determines the location of user trace files.
5. B. The name of the directory is bdump. This stands for background dump.
6. A. The DBMS_REPAIR package not only detects block corruption, but also allows you to fix the corrupt blocks.
7. A and B. The DBVERIFY utility and dbv are the same thing; dbv is the binary executable. DBVERIFY refers to blocks as pages.
8. D. The DB_BLOCK_CHECKING init.ora parameter default is set to TRUE and causes block validation each time the blocks are modified.
9. A. The procedure DBMS_REPAIR.CHECK_OBJECT detects corrupt blocks.
10. C. The init.ora parameter UTL_FILE_DIR must be set, which enables the dictionary file to be written to a directory on the server’s file system.
11. C. The name of the table that contains the LogMiner output, including the operation, undo, and redo information, is V$LOGMNR_CONTENTS.
12. C. An Oracle error, ORA-1578, is generated, which indicates that a block is corrupt in a table and that the block cannot be accessed.
13. A, B, and D. The DBA must be cautious when using the DBMS_REPAIR tool because data in bad blocks may be lost and logical inconsistencies to other related objects may result. The DBA must consider the effect of repaired blocks on the whole database.
Chapter 19
Oracle Recovery Manager
ORACLE8i BACKUP AND RECOVERY EXAM OBJECTIVES OFFERED IN THIS CHAPTER:
List the capabilities of Oracle Recovery Manager (RMAN)
Describe the components of RMAN
Connect to RMAN without a recovery catalog
Start up and shut down a target database with RMAN
Exam objectives are subject to change at any time without prior notice and at Oracle’s sole discretion. Please visit Oracle’s Training and Certification Web site (http:// education.oracle.com/certification/index.html) for the most current exam objectives listing.
This chapter provides an overview of RMAN, including the capabilities and components of the RMAN tool. The RMAN utility attempts to move away from the highly customized OS backup scripts, as you have seen in earlier chapters, to a highly standardized backup and recovery process. Thus, starting with Oracle version 8, you can reduce backup and recovery mistakes associated with the highly customized OS backup scripts used before RMAN’s release. In this chapter, you will walk through a practical example of connecting to the RMAN utility without using the optional, but recommended, recovery catalog. Also, you will start and stop a target database with the RMAN tool.
The Capabilities of Oracle Recovery Manager (RMAN)
Oracle Recovery Manager (RMAN) has many capabilities to facilitate the backup and recovery process. The tool comes in both GUI and command-line versions. In general, RMAN performs and standardizes the backup and recovery process, which can reduce mistakes made by DBAs during this process. Below is a list of some of the major RMAN capabilities:
Back up databases, tablespaces, data files, control files, and archive logs
Compress backups by determining which blocks have changed and backing up only those blocks
Perform incremental backups
Provide scripting capabilities to combine tasks
Log backup operations
Integrate with third-party tape media software
Provide reports and lists of catalog information
Store information about backups in a catalog in an Oracle database
Offer performance benefits, such as parallel processing of backups and restores
Create duplicates of databases for testing and development purposes
Test whether backups can be restored successfully
Determine whether backups are still available in media libraries
Figure 19.1 illustrates some of the differences between RMAN and the customized backup scripts and commands used in the earlier chapters.
FIGURE 19.1  Differences between backup scripts and the RMAN utility
[The figure contrasts the two approaches. Custom backup scripts, run from a Telnet, DOS, or SVRMGR> session, combine SQL commands (ALTER TABLESPACE [BEGIN/END] BACKUP, ALTER DATABASE BACKUP CONTROL FILE TO TRACE, ARCHIVE LOG ALL) with OS commands (cp, tar, cpio, and dd in Unix) to make hot and cold custom backups of database files. The RMAN utility, run from Enterprise Manager or the RMAN> prompt, uses commands, scripts, and the recovery catalog to back up and restore database files, perform hot and cold backups, record backup information, validate backups, and integrate with third-party tape media hardware.]
Oracle bundles a third-party media management layer from Legato; the Legato media libraries are included in the software installation by default with RMAN. This media management layer, or one offered by a different tape management vendor, can be used to interface RMAN with tape hardware devices, allowing backup to tape.
The Components of RMAN
The main components of RMAN are GUI or command-line access, the optional recovery catalog, the RMAN commands and scripting, and tape media connectivity. These components enable a DBA to automate and standardize the backup and recovery process. Each component is described below:
GUI or command-line access method  Provides access to Recovery Manager. This process spawns server sessions that connect to the target database, that is, the database that will be backed up. The GUI access is provided through the Oracle Enterprise Manager (OEM) tool, a DBA tool that performs backups, exports/imports, data loads, performance monitoring/tuning, job and event scheduling, and standard DBA management, to name a few tasks. Within this tool is a Recovery Manager tool that provides access to RMAN. The command-line client can be run in a standard Unix Telnet or x-Windows session as well as in a DOS shell in the Windows environment.
Optional recovery catalog  A special data dictionary of backup information that is stored in a set of tables, much like the data dictionary stores information about databases. The recovery catalog provides a method for storing information about backups, restores, and recoveries. This information can provide status on the success or failure of backups, OS backups, data file copies, tablespace copies, control file copies, archive log copies, full database backups, and the physical structures of a database.
RMAN commands  Enable different actions to be performed to facilitate the backup and restore of the database. These commands can be organized logically into scripts, which can then be stored in the recovery
catalog database. The scripts can be reused for other backups, thus keeping consistency among different target database backups.
Tape media connectivity  Provides a method for interfacing with various third-party tape hardware vendors to store and track backups in automated tape libraries (ATLs). By default, Oracle provides libraries for Legato Storage Manager tape media libraries.
Figure 19.2 shows an example of how the RMAN utility’s components fit together to form a complete backup and recovery package.
FIGURE 19.2  The components of RMAN
[The figure shows RMAN, driven from Enterprise Manager or the RMAN> command line, reading stored scripts and information about backups from the recovery catalog database, spawning server sessions that back up and restore the target databases, and writing to or reading from third-party tape hardware or disk.]
Connecting to RMAN without a Recovery Catalog
The RMAN utility enables you to connect to a target database without utilizing the recovery catalog database. This is not the Oracle-recommended approach for using RMAN; it would be used if the overhead of creating and maintaining a recovery catalog were too great for an organization. The recovery catalog database is covered in more detail in Chapter 20, “Oracle Recovery Manager Catalog Creation and Maintenance.” The target database is the database targeted by RMAN for backup or recovery actions. If you use RMAN without the recovery catalog, you are storing most of the necessary information about each target database in that database’s control file, so you must manage the target database’s control file to support this. The init.ora parameter CONTROL_FILE_RECORD_KEEP_TIME determines how long information that can be used by RMAN is kept in the control file. The default value for this parameter is 7 days, and it can be set as high as 365 days. The greater the number, the larger the control file becomes in order to store more information. The information is stored in the reusable sections of the control file. These sections can grow if the value of the parameter CONTROL_FILE_RECORD_KEEP_TIME is 1 or more. The reusable sections consist of the following categories:
Archive log
Backup data file
Backup redo log
Copy corruption
Deleted object
Offline range
Backup corruption
Backup piece
Backup set
Data file copy
Log history
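As a sketch of the parameter discussed above — the file name, path, and chosen value here are illustrative assumptions, not from the book — an init.ora fragment raising the keep time from the 7-day default might look like this:

```shell
# Hypothetical init.ora fragment (path and value are assumptions):
# raise CONTROL_FILE_RECORD_KEEP_TIME so RMAN-usable records survive
# 30 days in the control file's reusable sections (default 7, max 365)
cat > /tmp/initbrdb.ora <<'EOF'
control_file_record_keep_time = 30
EOF
grep control_file_record_keep_time /tmp/initbrdb.ora
```

Remember the trade-off stated above: the higher the value, the larger the control file grows.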
In order to connect to the target database in RMAN, you must set the ORACLE_SID to the appropriate target database. In this example, it is brdb. This example uses the oraenv shell script, provided by Oracle with the 8i database software, to change database environments. Next, you initiate the RMAN utility. Once the RMAN utility is started, issue the CONNECT TARGET command with the SYS or SYSTEM account and password, or another DBA-privileged account. This performs the connection to the target database. Let's walk through this step by step:

1. Set the ORACLE_SID to the appropriate target database that you wish to connect to.

[oracle@DS-LINUX /oracle]$ . oraenv
ORACLE_SID = [rcdb] ? brdb

2. Execute the RMAN utility by typing rman and pressing the Enter key.

[oracle@DS-LINUX /oracle]$ rman
Recovery Manager: Release 8.1.5.0.0 - Production
RMAN>

3. Issue the CONNECT TARGET command with the appropriate DBA-privileged account.

RMAN> connect target sys/change_on_install
RMAN-06005: connected to target database: BRDB (DBID=2058500149)
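If the oraenv script is not available, the same environment can be set by hand. A minimal sketch of what oraenv does for this walkthrough — the ORACLE_HOME path is an assumption for illustration only:

```shell
# Manual equivalent of sourcing oraenv for the brdb target database;
# the ORACLE_HOME path below is a hypothetical install location
export ORACLE_SID=brdb
export ORACLE_HOME=/oracle/8.1.5
export PATH=$ORACLE_HOME/bin:$PATH
echo "ORACLE_SID is now $ORACLE_SID"
```

After this, running rman picks up the brdb instance exactly as in the oraenv-based steps above.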
Starting Up and Shutting Down a Target Database Using RMAN
A target database can be started or stopped with the RMAN utility. The OS account used should be the account used to start and stop the database, such as the Unix Oracle user account or an account with SYSDBA privilege. In our example, the Oracle OS account will be used to launch the RMAN utility.
As in the preceding example, you must first set the ORACLE_SID to the desired target database by using the oraenv shell script or by manually setting the ORACLE_SID environment variable. Next, launch RMAN and connect with a DBA-privileged account on the target database. Finally, execute the STARTUP command to start the database or the SHUTDOWN command to stop the database. Let's walk through this example:

1. Set the ORACLE_SID with the oraenv shell script. In this example, it will be the brdb database.

[oracle@DS-LINUX /oracle]$ . oraenv
ORACLE_SID = [rcdb] ? brdb

2. Launch RMAN and connect to the target database.

[oracle@DS-LINUX /oracle]$ rman
Recovery Manager: Release 8.1.5.0.0 - Production
RMAN> connect target sys/change_on_install
RMAN-06005: connected to target database: BRDB (DBID=2058500149)

3. Execute the STARTUP command.
RMAN> startup
RMAN-06193: connected to target database (not started)
RMAN-06196: Oracle instance started
RMAN-06199: database mounted
RMAN-06400: database opened

Total System Global Area    19926416 bytes
Fixed Size                     64912 bytes
Variable Size               17330176 bytes
Database Buffers             2457600 bytes
Redo Buffers                   73728 bytes
4. At this point, you can host out of the RMAN utility with the HOST command and verify that the Oracle database processes for the brdb database are running, by using the standard Unix process command ps -ef and then grepping for ora.

RMAN> host;
[oracle@DS-LINUX /oracle]$ ps -ef|grep ora
root       970   965  0 23:16 pts/0   00:00:00 login -- oracle
oracle     994   970  0 23:17 pts/0   00:00:00 -bash
oracle    1071   994  0 23:19 pts/0   00:00:00 rman
oracle    1101  1071  2 23:21 ?       00:00:01 oraclebrdb (DESCRIPTION=(LOCAL=Y
oracle    1103     1  0 23:21 ?       00:00:00 ora_pmon_brdb
oracle    1105     1  0 23:21 ?       00:00:00 ora_dbw0_brdb
oracle    1107     1  0 23:21 ?       00:00:00 ora_lgwr_brdb
oracle    1109     1  0 23:21 ?       00:00:00 ora_ckpt_brdb
oracle    1111     1  0 23:21 ?       00:00:00 ora_smon_brdb
oracle    1113     1  0 23:21 ?       00:00:00 ora_reco_brdb
oracle    1115     1  0 23:21 ?       00:00:00 ora_arc0_brdb
oracle    1117     1  0 23:21 ?       00:00:00 ora_arc1_brdb
oracle    1119     1  0 23:21 ?       00:00:00 ora_p000_brdb
oracle    1121     1  0 23:21 ?       00:00:00 ora_p001_brdb
oracle    1122  1071  0 23:22 pts/0   00:00:00 /bin/bash
oracle    1123  1122  0 23:22 pts/0   00:00:00 ps -ef
oracle    1124  1122  0 23:22 pts/0   00:00:00 /bin/bash
[oracle@DS-LINUX /oracle]$
5. Perform the database shutdown process by executing a SHUTDOWN command in the RMAN utility.

RMAN> shutdown
RMAN-06405: database closed
RMAN-06404: database dismounted
RMAN-06402: Oracle instance shut down

6. Again you can host out of the RMAN utility with the HOST command and verify that there are no Oracle database processes for the brdb database, using the standard Unix command ps -ef and then grepping for ora. In this example, you don't see the SMON, PMON, DBWR, LGWR (and so on) processes running.

RMAN> host;
[oracle@DS-LINUX /oracle]$ ps -ef|grep ora
root       970   965  0 23:16 pts/0   00:00:00 login -- oracle
oracle     994   970  0 23:17 pts/0   00:00:00 -bash
oracle    1071   994  0 23:19 pts/0   00:00:00 rman
oracle    1098  1071  0 23:20 pts/0   00:00:00 /bin/bash
oracle    1099  1098  0 23:21 pts/0   00:00:00 ps -ef
oracle    1100  1098  0 23:21 pts/0   00:00:00 /bin/bash
[oracle@DS-LINUX /oracle]$ exit
exit
RMAN-06134: host command complete
Summary
This chapter has discussed the components and capabilities of the RMAN utility. This chapter should give you a sense of some of the basic functions of the RMAN utility. We discussed the use of RMAN without the recovery catalog and described the environments in which this would be most beneficial. We also discussed the effects on the control file if you are not using the recovery catalog. One practical example you explored was connecting to RMAN without the recovery catalog. You also learned how to start and stop a target database with the RMAN tool.
Key Terms

Before you take the exam, make sure you're familiar with the following terms:

Oracle Recovery Manager (RMAN)
media management layer
target database
Oracle Enterprise Manager (OEM)
automated tape libraries (ATLs)
Review Questions

1. Does the RMAN utility require the use of the recovery catalog?

A. The recovery catalog is required.
B. The recovery catalog is not required.
C. The recovery catalog is required if stored in the same database as the target database.
D. The recovery catalog is not required if stored in the same database as the target database.

2. What are some of the capabilities of the RMAN utility? Choose all that apply.

A. Back up databases, tablespaces, data files, control files, and archive logs
B. Compress backups
C. Provide scripting capabilities
D. Test whether backups can be restored

3. What type of interface does the RMAN utility support? Choose all that apply.

A. GUI through Oracle Enterprise Manager
B. Command-line interface
C. Command line only
D. GUI through Oracle Enterprise Manager only
4. What actions can be performed within the RMAN utility? Choose all that apply.

A. Start up target database
B. Shut down target database
C. Grant roles to users
D. Create user accounts

5. The tape media library enables RMAN to perform which of the following?

A. Interface with third-party tape hardware vendors
B. Use third-party automated tape libraries (ATLs)
C. Write to any tape
D. Write to disk
Answers to Review Questions

1. B. The recovery catalog is optional regardless of the configuration of the target database. The recovery catalog is used to store information about the backup and recovery process, in much the same way that the data dictionary stores information about the database.

2. A, B, C, and D. All answers are capabilities of the RMAN utility.

3. A and B. The RMAN utility can be run in GUI mode via Oracle Enterprise Manager or through a command-line interface on the server.

4. A and B. The RMAN utility can start and stop a target database. Database objects and user accounts are not created with the RMAN utility.

5. A and B. The tape media library enables RMAN to interface with other tape hardware vendors and use their automated tape library systems. Writing to disk and tape can still be performed by using special tape libraries.
Chapter 20
Oracle Recovery Manager Catalog Creation and Maintenance

ORACLE8i BACKUP AND RECOVERY EXAM OBJECTIVES OFFERED IN THIS CHAPTER:

Describe the considerations for using the recovery catalog
Describe the components of the recovery catalog
Create a recovery catalog
Use RMAN commands to maintain the recovery catalog
Query the recovery catalog to generate reports and lists
Create, store, and run RMAN scripts
Exam objectives are subject to change at any time without prior notice and at Oracle's sole discretion. Please visit Oracle's Training and Certification Web site (http://education.oracle.com/certification/index.html) for the most current exam objectives listing.
This chapter discusses both practical and conceptual topics related to RMAN and the recovery catalog. You will learn the reasons and considerations for using the recovery catalog. You will also explore the components that make up the RMAN catalog. This chapter focuses on the practical aspects of the recovery catalog, such as the installation and configuration of the catalog. You will use RMAN commands to manage and maintain the recovery catalog. You will walk through the creation of reports and lists, which provide information about the backup process. You will also work with scripts to perform assorted backup tasks and activities, similar to the way you used OS hot and cold backup scripts.
Considerations for Using the Recovery Catalog
As you learned in Chapter 19, "Oracle Recovery Manager," whether to use the recovery catalog is one of the most significant decisions you make when using RMAN. The recovery catalog provides many more automated backup and recovery functions than are available without one. For this reason, Oracle recommends that you use the recovery catalog with RMAN whenever possible. The main considerations regarding the use of the recovery catalog are as follows:
Some functionality is not supported unless the recovery catalog exists.
You should create a separate catalog database.
You must administer the catalog database like any other database in areas such as data growth and stored database objects such as scripts.
You must back up the catalog database.
You must determine whether you will keep each target database in a separate recovery catalog within a database.
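The "back up the catalog database" point above can be handled like any other database backup. One hypothetical approach — the schedule, paths, and file names here are assumptions for illustration — is a nightly export of the catalog schema driven from cron:

```shell
# Hypothetical crontab entry: export the rman catalog schema from the
# rcdb catalog database at 2 a.m. with the exp utility (paths, plain-text
# password, and file names are illustrative assumptions only)
cat > /tmp/rman_catalog_cron <<'EOF'
0 2 * * * exp rman/rman@rcdb owner=rman file=/backup/rman_catalog.dmp log=/backup/rman_catalog.log
EOF
cat /tmp/rman_catalog_cron
```

In practice the dump file should land on storage separate from both the catalog database and the target databases, for the same isolation reasons discussed below.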
Oracle recommends that the recovery catalog should be used unless the maintenance and creation of the catalog database is too great for a site. (Any site that has experienced and qualified DBA and system administrator resources should be capable of the maintenance and creation of the catalog database.) If the database were small and not critical, using RMAN without the recovery catalog could be acceptable.
You should store the recovery catalog on a separate server and file system than the target databases it is responsible for backing up. This prevents failures on the target database server and file system from affecting the recovery catalog for backup and recovery purposes.
Figure 20.1 shows the recovery catalog's association with the whole RMAN backup process.

FIGURE 20.1 RMAN utility interacting with the Recovery Manager catalog
[Diagram: RMAN server sessions connect the recovery catalog database and the target database, writing backups to disk storage and tape storage.]
The Components of the Recovery Catalog

The main components of the RMAN recovery catalog support the logging of backup and recovery information in the catalog. This information is stored within tables, views, and other database objects within an Oracle database. Backups are compressed for optimal storage. Here is a list of the components contained in a recovery catalog:
Backup and recovery information that is logged for long-term use from the target databases
RMAN scripts that can be stored and reused
Backup information about data files and archive logs
Information about the physical makeup, or schema, of the target database
Create a Recovery Catalog
As noted earlier, the recovery catalog is an optional feature of RMAN. The catalog is similar to the standard database catalog, in that the recovery catalog stores information about the recovery process just as the database catalog stores information about the database. As mentioned in Chapter 19, RMAN can be run without the catalog. The recovery catalog must be stored in its own database, preferably on a server other than the one where the target database resides. To enable the catalog, an account with the CONNECT, RESOURCE, and RECOVERY_CATALOG_OWNER privileges must be created to hold the catalog tables. Next, the catalog creation command must be executed in the RMAN utility while connected as the user RMAN. Let's walk through the creation of the recovery catalog step by step. This example assumes that you have already built a database called rcdb to store the recovery catalog.

1. First, point to the database where the recovery catalog will reside. This is not the target database; the catalog database is called rcdb. The oraenv shell script is provided by Oracle to switch between databases on the same server.

[oracle@DS-HPUX 8.1.5]$ . oraenv
ORACLE_SID = [brdb] ? rcdb
[oracle@DS-HPUX 8.1.5]$

2. Create the user that will store the catalog. Use the name RMAN with the password RMAN. Make DATA the default tablespace and TEMP the temporary tablespace.

[oracle@DS-HPUX 8.1.5]$ svrmgrl
Oracle Server Manager Release 3.1.5.0.0 - Production
(c) Copyright 1997, Oracle Corporation. All Rights Reserved.
Oracle8i Enterprise Edition Release 8.1.5.0.0 - Production
With the Partitioning and Java options
PL/SQL Release 8.1.5.0.0 - Production
SVRMGR> connect internal
Connected.
SVRMGR> create user rman identified by rman
2> default tablespace data
3> temporary tablespace temp;
Statement processed.

3. Grant the appropriate permissions to the RMAN user.

SVRMGR> grant connect,resource,recovery_catalog_owner to rman;
Statement processed.
SVRMGR>
4. Launch the RMAN tool.

[oracle@DS-LINUX bin]$ rman
Recovery Manager: Release 8.1.5.0.0 - Production

5. Connect to the catalog with the user called RMAN that you created in step 2.

RMAN> connect catalog rman/rman
RMAN-06008: connected to recovery catalog database
RMAN-06428: recovery catalog is not installed

6. Finally, create the catalog by executing the following command, specifying the tablespace in which you want to store the catalog.

RMAN> create catalog tablespace data;
RMAN-06431: recovery catalog created
RMAN>
Oracle recommends the following space requirements for RMAN in tablespaces for one-year growth in the recovery catalog database: system tablespace, 50MB; rollback tablespace, 5MB; temp tablespace, 5MB; and the recovery catalog tablespace, 10MB.
Use RMAN Commands to Maintain the Recovery Catalog
There are many other RMAN commands available to maintain the recovery catalog. This section addresses most of these commands. These commands fall under the following maintenance categories:
Registering and unregistering the target database
Resetting the recovery catalog
Resynchronizing the recovery catalog
Changing the availability of backups
Deleting backup copies
Validating the restore of backup copies
Cataloging OS backups
As a DBA, you should be familiar with these maintenance categories and the associated commands. This section briefly explains each and demonstrates the associated RMAN commands.
Registering and Unregistering the Target Database

Registering the target database is required for RMAN to store information about the target database in the recovery catalog. This is the information that RMAN uses to properly back up the database. Here is an example of registering a target database:

[oracle@DS-HPUX 8.1.5]$ rman target brdb catalog rman/rman@rcdb
Recovery Manager: Release 8.1.5.0.0 - Production
target database Password:
RMAN-06005: connected to target database: BRDB (DBID=2058500149)
RMAN-06008: connected to recovery catalog database
RMAN> register database;
RMAN-03022: compiling command: register
RMAN-03023: executing command: register
RMAN-08006: database registered in recovery catalog
RMAN-03023: executing command: full resync
RMAN-08029: snapshot controlfile name set to default value: ?/dbs/snapcf_brdb.f
RMAN-08002: starting full resync of recovery catalog
RMAN-08004: full resync complete
RMAN>
Unregistering the target database removes the information necessary to back up the database. This task is not performed in the RMAN utility; it is performed by executing a stored procedure as the recovery catalog's schema owner. Here is an example of unregistering a target database:

1. Get the DB_KEY and DB_ID values from the DB table that resides in the Recovery Manager catalog.

[oracle@DS-HPUX 8.1.5]$ sqlplus rman/rman@rcdb
SQL*Plus: Release 8.1.5.0.0 - Production on Mon May 29 16:48:51 2000
(c) Copyright 1999 Oracle Corporation. All rights reserved.
Connected to:
Oracle8i Enterprise Edition Release 8.1.5.0.0 - Production
With the Partitioning and Java options
PL/SQL Release 8.1.5.0.0 - Production
SQL> select * from db;

    DB_KEY      DB_ID CURR_DBINC_KEY
---------- ---------- --------------
         1 2058500149              2

2. Run the stored procedure with the values obtained in the previous query.

SQL> execute dbms_rcvcat.unregisterdatabase(1,2058500149);
PL/SQL procedure successfully completed.
3. Finally, you can validate that there is no longer a value in the DB table referencing the database.

SQL> select * from db;
no rows selected
SQL>
Resetting the Recovery Catalog

Resetting the recovery catalog enables RMAN to work with a database that has been opened with the ALTER DATABASE OPEN RESETLOGS command. This command causes RMAN to make what is called a new incarnation of the target database. An incarnation of the target database is a new reference to the database in the recovery catalog. This incarnation is marked as the current reference for the target database, and all future backups are associated with it. Here is an example of connecting to the target database and recovery catalog and then resetting the database:

[oracle@DS-HPUX 8.1.5]$ rman target sys/change_on_install catalog rman/rman@rcdb
Recovery Manager: Release 8.1.5.0.0 - Production
RMAN-06005: connected to target database: BRDB (DBID=2058500149)
RMAN-06008: connected to recovery catalog database
RMAN> reset database;
RMAN-03022: compiling command: reset
RMAN-03023: executing command: reset
Resynchronizing the Recovery Catalog

Resynchronizing the recovery catalog enables RMAN to compare the control file of the target database to the information stored in the recovery catalog and to update the recovery catalog appropriately. Resynchronization can be full or partial. A partial resynchronization does not update the recovery catalog with any physical information such as data files, tablespaces, and redo logs. A full resynchronization captures all the previously mentioned physical information plus the changed records. Here is an example of connecting to the target database and recovery catalog and then resynchronizing the database:

[oracle@DS-HPUX 8.1.5]$ rman target sys/change_on_install catalog rman/rman@rcdb
Recovery Manager: Release 8.1.5.0.0 - Production
RMAN-06005: connected to target database: BRDB (DBID=2058500149)
RMAN-06008: connected to recovery catalog database
RMAN> resync catalog;
RMAN-03022: compiling command: resync
RMAN-03023: executing command: resync
RMAN-08002: starting full resync of recovery catalog
RMAN-08004: full resync complete
RMAN>
Changing the Availability of Backups

Changing the availability of backups enables RMAN to mark a backup or copy as unavailable or available. This is used primarily to designate backups or copies that have been moved offsite or brought back on site. Below is an example of making a data file copy and a backup set unavailable and then making them available again. As in the previous examples, you must be connected to the target database and recovery catalog before executing these commands.
1. Execute the CHANGE command, making a data file copy and a backup set unavailable.

RMAN> change datafilecopy '/db01/ORACLE/data01.dbf' unavailable;
RMAN> change backupset 1 unavailable;
RMAN>

2. Execute the CHANGE command, making the data file copy and backup set available again.

RMAN> change datafilecopy '/db01/ORACLE/data01.dbf' available;
RMAN> change backupset 1 available;
Deleting Backup Copies

Deleting backup copies enables RMAN to mark backups or copies with a status of deleted. These will not appear in list output when querying the recovery catalog. Here is an example of deleting part of a backup set and a control file copy:

1. Designate a channel to work with the storage medium, which in this example is tape.

RMAN> allocate channel for delete type 'sbt_tape';

2. Mark the backup piece and a control file copy as deleted.

RMAN> change backuppiece 21 delete;
RMAN> change controlfilecopy 10 delete;

3. When complete, release the channel so that it can be used by other jobs within RMAN.

RMAN> release channel;
Validating the Restore of Backup Copies Validating the restore of backup copies enables RMAN to validate the restore without actually restoring the files. This can be performed on the whole database, data files, tablespaces, or control files.
Here is an example of validating one tablespace and all archive logs within a backup:

1. Close the database (data files can be taken offline for a partial validation).

SVRMGR> shutdown;

2. Connect to the target database and recovery catalog.

[oracle@DS-HPUX 8.1.5]$ rman target sys/change_on_install catalog rman/rman@rcdb

3. Run the following RMAN commands to allocate channels and to validate the tablespace DATA1 and all archive logs.

RMAN> run
{
allocate channel ch1 type disk;
allocate channel ch2 type 'sbt_tape';
restore archivelog all validate;
restore tablespace 'data1' validate;
}

4. If there are no errors from the preceding VALIDATE commands, then validation was successful. If there are errors, such as the one shown in the following line, then validation was not successful.

RMAN> RMAN-06026: some targets not found - aborting restore
Cataloging OS Backups Cataloging OS backups enables RMAN to catalog or store information in the catalog about OS-based backups. In a traditional hot backup, you would enter the ALTER TABLESPACE BEGIN BACKUP command. Then you would perform a cp command in Unix to copy the file to another place on disk. This new location is then cataloged in RMAN. Cataloging means to store the information in the recovery catalog. Below is an example of cataloging the data01.dbf data file.
1. Make a copy of the data01.dbf data file to a backup location.

[oracle@DS-HPUX 8.1.5]$ cp /db01/ORACLE/brdb/data01.dbf /staging/data01.dbf

2. Connect to the target database and recovery catalog.

[oracle@DS-HPUX 8.1.5]$ rman target sys/change_on_install@brdb catalog rman/rman@rcat

3. Store the data file copy in the recovery catalog by executing the CATALOG DATAFILECOPY command.

RMAN> catalog datafilecopy '/staging/data01.dbf';
RMAN-03022: compiling command: catalog
RMAN-03023: executing command: catalog
RMAN-08050: cataloged datafile copy
RMAN-08513: datafile copy filename=/staging/data01.dbf recid=113 stamp=417672403
RMAN-03023: executing command: partial resync
RMAN-08003: starting partial resync of recovery catalog
RMAN-08005: partial resync complete
Generate Lists and Reports from the Recovery Catalog
RMAN has two commands for accessing the recovery catalog to provide the status of what you may need to back up, copy, or restore, as well as general information about your target database. Each of these commands is performed within the RMAN utility. List commands query the recovery catalog or control file to determine which backups or copies are available. List commands provide the most basic information from the recovery catalog. The information generated is
mainly what has been done up to this point in time, so that you know what is available or not available. In each of the examples below, you must first connect to the target database and recovery catalog. Let's walk through three examples using the LIST command.

The first example displays the incarnations of the database. This listing is useful for showing when the database was registered in the recovery catalog.

[oracle@DS-HPUX 8.1.5]$ rman target sys/change_on_install@brdb catalog rman/rman@rcat
RMAN> list incarnation of database;
RMAN-03022: compiling command: list
RMAN-06240: List of Database Incarnations
RMAN-06241: DB Key  Inc Key DB Name DB ID      CUR Reset SCN Reset Tim
RMAN-06242: ------- ------- ------- ---------- --- --------- ---------
RMAN-06243: 1       2       brdb    1729182204 YES 43472429  16-JAN-00

The next example lists the USERS tablespace backups that have occurred in the database. This listing is useful for showing when the USERS tablespace was last backed up.

[oracle@DS-HPUX 8.1.5]$ rman target sys/change_on_install@brdb catalog rman/rman@rcat
RMAN> list backup of tablespace users;
RMAN-03022: compiling command: list
RMAN-06230: List of Datafile Backups
RMAN-06231: Key  File Type L Completio SCN      Ckp Time
RMAN-06232: ---- ---- ---- - --------- -------- ---------
RMAN-06233: 5807 5    Full   25-JAN-00 64869996 25-JAN-00
RMAN-06233: 6013 5    Full   29-JAN-00 65206868 29-JAN-00
RMAN-06233: 6049 5    Full   30-JAN-00 65206885 30-JAN-00
RMAN-06233: 6179 5    Full   31-JAN-00 65432996 31-JAN-00
RMAN-06233: 6379 5    Full   06-FEB-00 65823365 06-FEB-00
RMAN-06233: 6512 5    Full   07-FEB-00 66090679 07-FEB-00
Finally, this example lists the full database backups that have occurred in the database. This listing is useful for showing when the full database was last backed up. Again, you must first connect to the target database and recovery catalog.

[oracle@DS-HPUX 8.1.5]$ rman target sys/change_on_install@brdb catalog rman/rman@rcat
RMAN> list backup of database;
RMAN-03022: compiling command: list
RMAN-06230: List of Datafile Backups
RMAN-06231: Key  File Type L Completio SCN      Ckp Time
RMAN-06232: ---- ---- ---- - --------- -------- ---------
RMAN-06233: 5808 1    Full   25-JAN-00 64869997 25-JAN-00
RMAN-06233: 6012 1    Full   29-JAN-00 65206869 29-JAN-00
RMAN-06233: 6048 1    Full   30-JAN-00 65206886 30-JAN-00
RMAN-06233: 6180 1    Full   31-JAN-00 65432997 31-JAN-00
RMAN-06233: 6378 1    Full   06-FEB-00 65823366 06-FEB-00
RMAN-06233: 6513 1    Full   07-FEB-00 66090680 07-FEB-00
Report commands provide more detailed information from the recovery catalog and are used for more sophisticated purposes than lists. Reports can provide information about what should be done. Some uses of reports are to determine what database files need to be backed up or what database files have been recently backed up. Let’s walk through some examples of report queries.
The first example displays all the physical structures that make up the database. This report is useful for determining every structure that should be backed up when performing a full database backup. Again, you must first connect to the target database and recovery catalog before running any report.

[oracle@DS-HPUX 8.1.5]$ rman target sys/change_on_install@brdb catalog rman/rman@rcat
RMAN> report schema;
RMAN-03022: compiling command: report
RMAN-06290: Report of database schema
RMAN-06291: File K-bytes  Tablespace RB  Name
RMAN-06292: ---- -------- ---------- --- ----------------
RMAN-06293: 1     102400  SYSTEM     YES /db01/ORACLE/brdb/system01.dbf
RMAN-06293: 2     225280  RBS        YES /db01/ORACLE/brdb/rbs01.dbf
RMAN-06293: 3     512000  TEMP       NO  /db01/ORACLE/brdb/temp01.dbf
RMAN-06293: 4      51200  TOOLS      NO  /db01/ORACLE/brdb/tools01.dbf
RMAN-06293: 5      76800  USERS      NO  /db01/ORACLE/brdb/users01.dbf
RMAN-06293: 6    1536000  DATA       NO  /db01/ORACLE/brdb/data01.dbf
RMAN-06293: 7     870400  INDEX      NO  /db01/ORACLE/brdb/index01.dbf
RMAN-06293: 8     512000  RBS        YES /db01/ORACLE/brdb/rbs02.dbf
RMAN-06293: 9    1024000  DATA       NO  /db01/ORACLE/brdb/data02.dbf
RMAN-06293: 10    768000  INDEX      NO  /db01/ORACLE/brdb/index02.dbf
RMAN-06293: 11   1024000  RBS        YES /db01/ORACLE/brdb/rbs03.dbf
RMAN-06293: 12    512000  DATA       NO  /db01/ORACLE/brdb/data03.dbf
RMAN-06293: 13    512000  INDEX      NO  /db01/ORACLE/brdb/index03.dbf
RMAN-06293: 14    512000  TEMP       NO  /db01/ORACLE/brdb/temp02.dbf

The second example displays the backup sets and data file copies that are obsolete at a redundancy level of 2, that is, those for which at least two more recent backups exist. Again, you must first connect to the target database and recovery catalog before running any report.

[oracle@DS-HPUX 8.1.5]$ rman target sys/change_on_install@brdb catalog rman/rman@rcat
RMAN> report obsolete redundancy = 2;
RMAN-03022: compiling command: report
RMAN-06280: Report of obsolete backup sets and datafile copies
RMAN-06281: Type                 Recid  Stamp     Filename
RMAN-06282: -------------------- ------ --------- --------
RMAN-06284: Backup Set           308    398683981
RMAN-06285: Backup Piece         308    398683978 df_t398683973_s320_s1
RMAN-06284: Backup Set           293    398416430
RMAN-06285: Backup Piece         293    398416429 df_t398416422_s299_s1
RMAN-06284: Backup Set           278    398327355
RMAN-06285: Backup Piece         278    398327353 df_t398327346_s284_s1
RMAN-06284: Backup Set           271    398323527
RMAN-06285: Backup Piece         271    398323458 df_t398323453_s278_s1
RMAN-06284: Backup Set           263    398235578
RMAN-06285: Backup Piece         263    398235577 df_t398235572_s269_s1
RMAN-06284: Backup Set           255    398233307
Create, Store, and Run RMAN Scripts
RMAN scripts group RMAN commands so they can be executed together. These scripts can be stored within the recovery catalog itself. Once stored in the recovery catalog, a script can be executed much like a stored PL/SQL procedure. Let's walk through an example of creating, storing, and running an RMAN script. In this example, you will back up the complete database:

1. Connect to the recovery catalog.

[oracle@DS-HPUX 8.1.5]$ rman catalog rman/rman@rcdb

2. While in the RMAN utility, create a script called complete_bac. This will create, compile, and store the script in the recovery catalog.

RMAN> create script complete_bac {
  allocate channel c1 type disk;
  allocate channel c2 type disk;
  backup database;
  sql 'ALTER SYSTEM ARCHIVE LOG ALL';
  backup archivelog all;
}
RMAN-03022: compiling command: create script
RMAN-03023: executing command: create script
RMAN-08085: created script complete_bac
3. Once a script is created and stored within the recovery catalog, it can be rerun as needed. This ensures that the same commands and functionality are reproduced in later jobs. Figure 20.2 shows how scripts are created and stored in the recovery catalog.
FIGURE 20.2  Create and store RMAN scripts in the recovery catalog (an rman session connected to the recovery catalog issues CREATE SCRIPT complete_bac, and the script is stored in the catalog)
4. Run the stored RMAN script complete_bac. Do this by executing the following command while connected to both the target database and the recovery catalog.

[oracle@DS-HPUX 8.1.5]$ rman target sys/change_on_install@brdb catalog rman/rman@rcat

RMAN> run { execute script complete_bac; }

Figure 20.3 shows how stored scripts can be run in RMAN.
FIGURE 20.3  RMAN scripts (the stored script complete_bac is retrieved from the recovery catalog and executed against the target database)
Summary
This chapter discussed the considerations for using the recovery catalog, as well as its major components. You walked through many practical examples of performing tasks in RMAN and the recovery catalog, including creating the recovery catalog in its own database. The chapter demonstrated various commands and methods for managing the recovery catalog, among them the LIST and REPORT commands for querying information in the catalog. This list and report information helps validate the status and schedule of backups. You also used the scripting capabilities within RMAN to group commands, store them in the catalog, and run the stored script. Stored scripts can reduce the errors that occur in scripts or programs that are not centrally stored in the RMAN schema.
Key Terms

Before you take the exam, make sure you're familiar with the following terms:

recovery catalog
target database
resetting
incarnation
resynchronizing
cataloging
list
report
Review Questions

1. The RMAN utility does not require the use of which of the following?
   A. Recovery catalog
   B. Server sessions
   C. Allocated channel for backup
   D. Allocated channel for restore

2. What are the features supported by the recovery catalog? Choose all that apply.
   A. Backup databases, tablespaces, data files, control files, and archive logs
   B. Compress backups
   C. Scripting capabilities
   D. Test whether backups can be restored

3. Where is the best place to store the database housing the recovery catalog?
   A. On the same server and different file system as the target database being backed up by the recovery catalog
   B. On the same server and file system as the target database that is being backed up by the recovery catalog
   C. On a different server than the target databases
   D. None of the above

4. Which privileges are required for the Recovery Manager catalog user account? Choose all that apply.
   A. DBA privilege
   B. Connect privilege
   C. Resource privilege
   D. RECOVERY_CATALOG_OWNER privilege

5. What is the one-year space requirement of the recovery catalog tablespace?
   A. 100MB
   B. 20MB
   C. 50MB
   D. 10MB

6. What command can be performed only once on a target database?
   A. CHANGE AVAILABILITY OF BACKUPS
   B. DELETE BACKUPS
   C. REGISTER THE DATABASE
   D. RESYNCHRONIZING THE DATABASE

7. The target database is
   A. Any database designated for backup by RMAN
   B. The database that stores the recovery catalog
   C. A database not targeted to be backed up by RMAN
   D. A special repository database for the RMAN utility

8. What type of backups can be stored in the recovery catalog of RMAN? Choose all that apply.
   A. Non-RMAN backups based on OS commands
   B. Full database backups
   C. Tablespace backups
   D. Control file backups

9. Which are methods of getting information from the recovery catalog?
   A. REPORT command
   B. Querying in SQL*Plus
   C. LIST command
   D. RETRIEVAL command

10. What must be done prior to running the REPORT or LIST commands? Choose the best answer.
    A. Determine log file
    B. Spool output
    C. Connect to target
    D. Connect to target and recovery catalog

11. What is the main difference between reports and lists?
    A. Lists have more output than reports.
    B. Reports have more output than lists.
    C. Reports provide more detailed information than lists.
    D. Lists provide more detailed information than reports.

12. What command stores scripts in the recovery catalog?
    A. CREATE SCRIPT <SCRIPT_NAME>
    B. STORE SCRIPT <SCRIPT_NAME>
    C. CREATE OR REPLACE <SCRIPT_NAME>
    D. Scripts cannot be stored in the recovery catalog.
Answers to Review Questions

1. A. The recovery catalog is optional. The recovery catalog is used to store information about the backup and recovery process, in much the same way that the data dictionary stores information about the database.

2. A, B, C, and D. All answers are capabilities of the RMAN utility.

3. C. The recovery catalog database should be on a different server than the target database, to eliminate the potential of a failure on one server affecting both the target database and the backup and restore capabilities of RMAN.

4. B, C, and D. The DBA privilege is not required for the recovery catalog user account. This user must be able to connect to the database, create objects within the database, and hold the RECOVERY_CATALOG_OWNER privilege.

5. D. Oracle recommends 10MB as the minimum one-year space requirement of the tablespace housing the Recovery Manager catalog.

6. C. Registering the database can be performed only once for each database, unless the database is unregistered.

7. A. The target database is any database that is targeted for backup by the RMAN utility.

8. A, B, C, and D. RMAN can catalog non-RMAN backups based on OS commands, as well as full database backups, tablespace backups, and control file backups.

9. A, B, and C. The RMAN utility provides the REPORT and LIST commands to generate output from the recovery catalog. SQL*Plus can also be used to manually query the recovery catalog in certain instances.

10. D. Before running any REPORT or LIST command, you must be connected to the target database and the recovery catalog.

11. C. The REPORT command provides more detailed information than the LIST command, and is used to answer "what if" or "what needs to be done" questions more than the LIST command.

12. A. CREATE SCRIPT <SCRIPT_NAME> stores the associated script in the recovery catalog. The script can then be run at a later date.
Chapter 21

Backups Using RMAN

ORACLE8i BACKUP AND RECOVERY EXAM OBJECTIVES OFFERED IN THIS CHAPTER:

Describe backup concepts using RMAN
Describe types of RMAN backups
Perform incremental and cumulative backups
Tune backup operations
View information from the data dictionary
Exam objectives are subject to change at any time without prior notice and at Oracle's sole discretion. Please visit Oracle's Training and Certification Web site (http://education.oracle.com/certification/index.html) for the most current exam objectives listing.
This chapter focuses on using the RMAN utility for backups. It covers the types of backups available in the RMAN utility and the considerations associated with each type. You will walk through an example of an incremental and a complete backup using RMAN. You will also learn how to tune backup operations performed in RMAN. Finally, you will use the data dictionary to view information about RMAN and the backup and recovery process.
Describing Backup Concepts Using RMAN
RMAN handles backups differently from the traditional OS command and SQL command backup script method. Backups in RMAN are performed in one or more backup sets. A backup set is a logical object that contains at least one backup piece and constitutes a full or incremental backup. A backup piece contains one or more physical files and is stored in a special RMAN format. Physical files cannot be split across backup pieces. Furthermore, archive logs must be placed in a separate backup set from data files. Backup sets can include data files, archive logs, and control files only. Figure 21.1 illustrates the relationship between backup sets and backup pieces. Backup sets are stored on tape or disk. Oracle optimizes the way backup sets are created by writing only blocks that have been used. There is one exception: image copies of data files, for which all blocks are copied, used or not.
FIGURE 21.1  Backup sets and their relationship to backup pieces (a whole database backup produces separate backup sets for the data files, control files, and archive logs; a size limit can force a backup set to span multiple backup pieces)
Backup sets are multiplexed and can be duplexed. Multiplexed backup sets are an inherent feature of RMAN. Multiplexing means that multiple data files are interspersed with each other in the backup set. In other words, there are no complete files like OS copies, but just blocks of data written in a random order inside the backup set. Figure 21.2 illustrates the multiplexing of backup sets. Duplexing backup sets provides protection against loss or corruption of the backup sets by making additional copies. This is performed with the SET DUPLEX command within the RMAN utility.
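Duplexing is requested from within an RMAN run block. The following is a minimal sketch, assuming disk channels and illustrative paths; the %c format variable (the copy number) distinguishes the duplexed pieces, and the exact syntax can vary by release and media manager:

```
run {
  set duplex = 2;
  allocate channel c1 type disk;
  # two format strings, one per copy; %c is the copy number
  backup database
    format '/backup1/df_s%s_p%p_c%c', '/backup2/df_s%s_p%p_c%c';
}
```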
RMAN multiplexing and duplexing is not the same as redo log or control file multiplexing or duplexing. RMAN multiplexing is combining input files such as data files, archive logs, or control files in the same backup set. The individual identity of each input file is lost because it is interspersed with other input files. RMAN duplexing is creating additional copies of an original backup set, which serves the same purpose as control file or redo log duplexing by adding redundancy.
FIGURE 21.2  Multiplexing of backup sets (blocks from data01.dbf, data02.dbf, and index01.dbf are interspersed within one backup piece; the data files cannot be seen individually as they can in a Unix tar or cpio copy to tape)
The naming convention of backup pieces is determined either by RMAN or by the FORMAT parameter in the BACKUP command. You use this parameter to specify your own naming convention. RMAN uses the %U substitution
FORMAT variable for the default naming convention. There are other values for the FORMAT variables used with the BACKUP command. These values are as follows:
%D TARGET DATABASE NAME
%N PADDED TARGET DATABASE NAME
%P BACKUP PIECE NUMBER
%S BACKUP SET NUMBER
%T BACKUP SET STAMP
Backup pieces have a size limitation, which is enforced by OS or media manager library limits. You should limit the size of the backups to disk based on these constraints, which are OS dependent. Once you place a size limit on a backup set, an additional backup piece will be created when the SIZE LIMIT value is exceeded.
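Naming backup pieces with FORMAT and capping their size can be combined in one run block. This is a hedged sketch (the path and size value are illustrative, and SETLIMIT syntax can vary slightly by release):

```
run {
  allocate channel c1 type disk;
  # cap each backup piece at roughly 2GB (value is in kilobytes)
  setlimit channel c1 kbytes 2097150;
  backup database format '/backup/%d_s%s_p%p';
}
```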
Describing Types of RMAN Backups
There are three categories of backups supported by the RMAN utility:
Full or incremental
Opened or closed
Consistent or inconsistent
Each is described in the following sections.
Full or Incremental Backups

The full and incremental backups are differentiated by how the data blocks are backed up in the target database. A full backup backs up all the data blocks in the data files, modified or not. An incremental backup backs up only the data blocks in the data files that were modified since the last incremental backup.
The full backup cannot be used as part of an incremental backup strategy. The baseline backup for an incremental strategy is a level 0 backup. A level 0 backup is a full backup at that point in time: all blocks, modified or not, are backed up, allowing the level 0 backup to serve as a baseline for future incremental backups. The incremental backups can then be applied to the baseline, or level 0, backup to form a full backup at some point in the future. The benefit of the incremental backup is speed, because not all data blocks need to be backed up. There are two types of incremental backups: differential and cumulative. Both back up only modified blocks; they differ in the baseline backup used to identify which modified blocks need to be backed up. The differential incremental backup backs up only data blocks modified since the most recent backup at the same level or lower. For example, a level 2 differential backup determines which level 1 or level 2 backup occurred most recently and backs up only blocks that have changed since that backup. The differential incremental backup is the default incremental backup. The cumulative incremental backup backs up only the data blocks that have changed since the most recent backup at the next lowest level, n – 1 or lower (with n being the level of the backup being taken). For example, if you are performing a level 2 cumulative incremental backup, it will back up all data blocks changed since the most recent level 1 backup. If no level 1 backup is available, it will back up all data blocks that have changed since the most recent level 0 backup.
Full backups do not mean the whole or complete database was backed up. In other words, a full backup can back up only part of the database and not all data files, control files, and logs.
Opened or Closed Backups

The opened and closed backups are differentiated by the state of the target database being backed up. An opened backup occurs when the target database is backed up while it is opened, or available for use. This is similar to the non-RMAN hot backup that was demonstrated in earlier chapters.
The closed backup occurs when the target database is mounted but not opened. This means the target database is not available for use during this type of backup. This is similar to the non-RMAN cold backup that was demonstrated in earlier chapters.
Consistent or Inconsistent Backups

The consistent and inconsistent backups are differentiated by the state of the SCN in the data file headers and in the control files. A consistent backup is a backup of a target database that is mounted but not opened and was shut down with either the SHUTDOWN IMMEDIATE or SHUTDOWN NORMAL option (not SHUTDOWN ABORT), and that did not crash prior to being mounted. In this case, the SCN information in the data files matches the SCN information in the control files. An inconsistent backup is a backup of a target database that crashed while open, or that was shut down with the SHUTDOWN ABORT option, prior to being mounted. In this case, the SCN information in the data files does not match the SCN information in the control files.
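A consistent (closed) RMAN backup therefore follows this sequence: shut the database down cleanly, mount it without opening, and then back it up. A hedged sketch, with illustrative connect strings:

```
SQL> shutdown immediate
SQL> startup mount

[oracle@DS-HPUX 8.1.5]$ rman target / catalog rman/rman@rcdb
RMAN> run {
  allocate channel c1 type disk;
  backup database;
}
```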
Performing Incremental and Cumulative Backups
The incremental and cumulative incremental backups are used together to reduce both the time it takes to back up a database and the size of the backup. As mentioned in earlier sections, incremental backups copy only blocks that have changed or been modified since the most recent backup. Incremental backups are organized by level: level 0, level 1, and level 2. A level 0 incremental backup is the base for all incremental backups; it copies all blocks containing data. A level 1 incremental backup copies all changed data since the last level 0 incremental backup. A level 2 incremental backup copies all changed data since the last level 1 backup, or level 0 if no level 1 is present. To perform incremental and cumulative incremental backups, you must first take a level 0 incremental backup as a baseline. Next, you must perform a
level 1 cumulative backup, which will back up all the data blocks since the level 0 backup. Last, you can perform the level 2 cumulative backup, which will back up all the data blocks changed since the incremental backup level 1. Let’s walk through this example: 1. First, you must point to the target database on which you want to per-
form the backups. In this example, it is brdb. [oracle@DS-HPUX 8.1.5]$ oraenv ORACLE_SID = [brdb] ? brdb [oracle@DS-HPUX 8.1.5]$ 2. Connect within RMAN to the target database you will back up and
the recovery catalog. [oracle@DS-HPUX bin]$ rman target / catalog rman/ rman@rcdb 3. Create a level 0 incremental backup. Again, this backup backs up all
data blocks with data. This is the baseline backup. RMAN>
run { allocate channel chnl1 type disk; backup incremental level = 0 database; }
4. Create a level 1 incremental cumulative backup. Again, this backup
backs up all data blocks that have been modified since the last backup. RMAN>
rman target / catalog rman/rman@rcdb
RMAN>
run { allocate channel chnl1 type disk; backup incremental level 1 cumulative database;
}
5. Create a level 2 incremental cumulative backup. Again, this backup
backs up all data blocks that have been modified since the last backup n – 1, with n being the current-level backup. In this example, the current backup that you are performing is n = level 2, so n – 1 = level 1. RMAN>
rman target / catalog rman/rman@rcdb
RMAN>
run { allocate channel chnl1 type disk; backup incremental level 2 cumulative database;
}
Tuning Backup Operations
The primary goal in tuning backup operations is to make the tape device the only bottleneck when writing to tape; that is, the tape device should be constantly streaming. Streaming means that the tape device is continuously reading or writing; it should never sit idle waiting for data. When writing to disk, much more data can usually be written, because most disk subsystems have faster write capabilities than tape devices. Since tape devices are the main performance concern, this section focuses on the performance of writing to tape devices. Backup and restore operations are performed in three steps:

1. Reading the data (backup or restore) from disk or tape
2. Processing the data blocks in the memory buffers
3. Writing the data (backup or restore) to disk or tape
The reading and writing of the data (steps 1 and 3) can be performed synchronously or asynchronously. With synchronous I/O, only one read or write to disk or tape is in progress at a time. With asynchronous I/O, multiple I/Os to disk or tape can be in progress at once.
There are four major methods used to improve throughput of the overall backup and restore process, which will keep the tape device streaming. These methods are as follows:
Improve synchronous and asynchronous I/O operations
Increase disk reads by reading from multiple disk volumes
Adjust throughput for differing file densities
Parallelize channel allocation
Improving I/O Operations

Improving synchronous and asynchronous I/O operations starts with two dynamic performance views, which can be used to monitor and identify bottlenecks in the synchronous and asynchronous I/O of backups and restores: V$BACKUP_SYNCH_IO and V$BACKUP_ASYNCH_IO. For synchronous I/O, the column V$BACKUP_SYNCH_IO.DISCRETE_BYTES_PER_SECOND can be compared to the tape device's maximum throughput. If the column's value is significantly less than the device's maximum throughput, this may indicate a bottleneck. For asynchronous I/O, the columns V$BACKUP_ASYNCH_IO.LONG_WAITS and V$BACKUP_ASYNCH_IO.SHORT_WAITS can be used to identify bottlenecks. As a general rule, if the value in the SHORT_WAITS or LONG_WAITS column is a significant percentage of the IO_COUNT column, then the file currently being copied is causing a bottleneck. Three init.ora parameters control asynchronous I/O, if it is supported by your operating system; asynchronous I/O is the desired setting. These parameters are TAPE_ASYNCH_IO, DISK_ASYNCH_IO, and BACKUP_TAPE_IO_SLAVES. When each of these parameters is set to TRUE, the tape and disk I/O resources associated with RMAN operate asynchronously.
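In init.ora, the asynchronous settings look like the following fragment (availability of each parameter depends on platform support for asynchronous I/O):

```
# init.ora: enable asynchronous I/O for RMAN, where supported
tape_asynch_io = true
disk_asynch_io = true
backup_tape_io_slaves = true
```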
Increasing Disk Reads

Increasing disk reads by reading from multiple disk volumes can increase throughput to the tape device. This is accomplished with the DISKRATIO parameter of the BACKUP statement, which feeds the tape device enough data to keep it streaming. Setting the DISKRATIO
equal to 3 would spread the backup reads to three separate disk volumes at the same time. Therefore, if disks supplied data at 1000 bytes per second, then with DISKRATIO equal to 3 you could generate 3000 bytes to the tape device.
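As a sketch, DISKRATIO is given on the BACKUP command itself (the channel type and ratio are illustrative; check your release's exact syntax):

```
run {
  allocate channel t1 type 'sbt_tape';
  # read concurrently from three disk volumes to keep the tape streaming
  backup database diskratio = 3;
}
```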
Adjusting Throughput

By adjusting throughput for differing file densities, you can feed the tape device the amount of data required to keep it streaming. In some scenarios, the files you are backing up or restoring are sparsely packed, or less dense. To keep the tape device streaming in that case, you can raise the MAXOPENFILES parameter in the RMAN SET LIMIT CHANNEL statement. If you are backing up densely packed files, performance can be improved by setting the init.ora parameter BACKUP_TAPE_IO_SLAVES to TRUE and sizing LARGE_POOL_SIZE appropriately. Also, increasing DB_FILE_DIRECT_IO_COUNT increases the buffer size used for backup and restore disk I/O.
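For sparsely packed files, the MAXOPENFILES limit can be raised on the channel so that enough input files are read concurrently to keep the multiplexed stream dense. A hedged sketch (the value 32 is illustrative):

```
run {
  allocate channel t1 type 'sbt_tape';
  # allow more input files to be read at once so the multiplexed
  # stream stays dense enough to keep the tape device busy
  setlimit channel t1 maxopenfiles 32;
  backup database;
}
```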
Many hardware vendors or media library vendors implement data compression as it writes to tape. If this is being utilized, then the amount of data that is being sent to the tape device should be increased by the factor of compression to keep the tape streaming. So, if your files are being compressed to 25 percent of the original size, or to 1/4, then four times the data should be sent to the tape drive to keep it streaming.
Parallelizing Channel Allocation

One method for improving throughput of the overall backup and restore process is increasing channel allocation. Channel allocation is the association of a physical device with a server session for backup or restore purposes. Because disk drives have better I/O capabilities than tape devices, you can typically increase throughput by parallelizing channel allocation, that is, by allocating multiple channels during the backup and restore process. This method can improve I/O
performance to disk by causing multiple channels to access files in parallel. Below is an RMAN command that performs parallel channel allocation.

run {
  allocate channel chnl_1 type disk;
  allocate channel chnl_2 type disk;
  copy datafile 3 to '/backup/users01.dbf',
       datafile 4 to '/backup/data01.dbf';
}

In the preceding example, each data file is written to the disk volume at the same time. With two channels allocated, the two data files are written simultaneously rather than one at a time, as they would be with a single channel. Figure 21.3 illustrates parallel channel allocation.

FIGURE 21.3  Parallel channel allocation (RMAN allocates channels chn1 and chn2, each with its own server session against the target database, writing data01.dbf and users01.dbf to the /backup disk volume in parallel)
Viewing Information from the Data Dictionary
All information that is stored in the recovery catalog resides in the data dictionary of the recovery catalog database. You can access this data dictionary by querying any of the RMAN RC_ views; there are 24 of these views in the 8i recovery catalog. To get a list of these views, you can point to the recovery catalog database rcdb and perform the following query:

[oracle@DS-LINUX /oracle]$ . oraenv
ORACLE_SID = [rcdb] ?
[oracle@DS-LINUX /oracle]$ sqlplus system/manager

SQL*Plus: Release 8.1.5.0.0 - Production on Mon Jul 24 21:29:16 2000

(c) Copyright 1999 Oracle Corporation.  All rights reserved.

Connected to:
Oracle8i Enterprise Edition Release 8.1.5.0.0 - Production
With the Partitioning and Java options
PL/SQL Release 8.1.5.0.0 - Production

SQL> select object_name from dba_objects
  2  where object_type = 'VIEW'
  3  and object_name like 'RC%';

RC_ARCHIVED_LOG
RC_BACKUP_CONTROLFILE
RC_BACKUP_CORRUPTION
RC_BACKUP_DATAFILE
RC_BACKUP_PIECE
RC_BACKUP_REDOLOG
RC_BACKUP_SET
RC_CHECKPOINT
RC_CONTROLFILE_COPY
RC_COPY_CORRUPTION
RC_DATABASE
RC_DATABASE_INCARNATION
RC_DATAFILE
RC_DATAFILE_COPY
RC_LOG_HISTORY
RC_OFFLINE_RANGE
RC_PROXY_CONTROLFILE
RC_PROXY_DATAFILE
RC_REDO_LOG
RC_REDO_THREAD
RC_RESYNC
RC_STORED_SCRIPT
RC_STORED_SCRIPT_LINE
RC_TABLESPACE

24 rows selected.

SQL>
These views exist only if you have created the recovery catalog. Furthermore, the views are not normalized and in some cases contain redundant data.
Let's look at one of these views in more detail. The RC_STORED_SCRIPT view shows information about the scripts stored within the recovery catalog, one row per stored script. Here is an example query against this view in the rcdb recovery catalog database:

SQL> select * from rman.rc_stored_script;

    DB_KEY DB_NAME SCRIPT_NAME
---------- ------- ------------
         7 BRDB    back_comp

SQL>
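The companion view RC_STORED_SCRIPT_LINE holds the body of each stored script, one row per line. As a sketch (verify the column names against your catalog before relying on them):

```
SQL> select line, text
  2  from rman.rc_stored_script_line
  3  where script_name = 'back_comp'
  4  order by line;
```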
Summary
This chapter discussed RMAN backup concepts and showed hands-on backup tasks with RMAN. It discussed the backup set and backup piece architecture of RMAN and the options and uses of these backup structures. It also covered the three categories of backups that are supported by the RMAN utility: the full or incremental, opened or closed, and consistent or inconsistent backups. You walked through an example of incremental and incremental cumulative backups using RMAN. You learned the backup options for different levels and the effects that different incremental backups have on these levels. This chapter demonstrated how to tune backup operations performed in RMAN by using dynamic performance views, RMAN parameters, and init.ora parameters. Last, it showed how to use the data dictionary elements to view information about RMAN and the backup and recovery process.
Key Terms

Before you take the exam, make sure you're familiar with the following terms:

backup set
backup piece
multiplexing
duplexing
full backup
incremental backup
differential incremental backup
cumulative incremental backup
opened backup
closed backup
consistent backup
inconsistent backup
streaming
synchronous I/O
asynchronous I/O
channel allocation
Review Questions

1. A backup set has at least one of what?
   A. Data file
   B. Data file and archive log
   C. Control file and archive log
   D. Backup piece

2. What logical object within RMAN is stored in a proprietary format and needs to be accessed via the RMAN RESTORE command?
   A. Data file
   B. Image copy
   C. Backup piece
   D. Backup set

3. Data blocks from different data files are written to backup sets in an interspersed manner. What is the term for this method?
   A. Duplexing
   B. Serializing
   C. Parallelizing
   D. Multiplexing

4. What type of backup is performed when the database is not opened? Choose the best answer.
   A. Open backup
   B. Incremental backup
   C. Closed backup
   D. Full backup

5. In what state must a database have been shut down to perform a consistent backup following the shutdown? Choose all that apply.
   A. SHUTDOWN ABORT
   B. Abrupt shutdown due to power outage
   C. SHUTDOWN NORMAL
   D. SHUTDOWN

6. For a baseline backup, what type of backup must be performed prior to an incremental level 2 backup?
   A. Incremental level 0
   B. Incremental level 1
   C. Incremental level 2
   D. Full backup

7. What is the default naming convention variable value for a backup piece?
   A. %N
   B. %D
   C. %U
   D. %B

8. What type of backups can be stored in the recovery catalog of RMAN? Choose all that apply.
   A. Non-RMAN backups based on OS commands
   B. Full database backups
   C. Tablespace backups
   D. Control file backups

9. What is the name of the dynamic performance view that can assist in RMAN performance tuning? Choose all that apply.
   A. V$BACKUP_SYNCH_IO
   B. V$BACKUP_ASYNCH_IO
   C. V$SYNCH_BACKUP_IO
   D. V$ASYNCH_BACKUP_IO

10. What is the name of the init.ora parameters that can improve the backup of densely packed files? Choose all that apply.
    A. LARGE_POOL_SIZE
    B. BACKUP_ASYNC_IO
    C. BACKUP_TAPE_IO_SLAVES
    D. BACKUP_IO_SYNCH

11. What is the primary goal of performance tuning RMAN backups and restores?
    A. To read and write as much data from disk as possible
    B. To read and write as much data from tape as possible
    C. To read and write large amounts of data in bursts
    D. To keep the tape drive streaming at all times during a backup or restore until complete
Answers to Review Questions
701
Answers to Review Questions

1. D. Each backup set is made up of at least one backup piece.

2. D. A backup set is a logical object made up of physical objects, which are backup pieces. The backup set is stored in a special RMAN format and needs to be unformatted with the RMAN RESTORE command.

3. D. Multiplexing is the term that describes the way that multiple data files are interspersed within a backup set.

4. C. A closed backup is performed when the database is mounted but not opened.

5. C and D. A SHUTDOWN NORMAL or a plain SHUTDOWN both shut down the database cleanly, ensuring that the SCN in the control file and the data file headers are the same.

6. A. An incremental level 0 backup is the baseline for all successive incremental backups.

7. C. %U is the default naming convention for the backup piece if the format option is not used to name the backup.

8. A, B, C, and D. RMAN can catalog non-RMAN backups based on OS commands as well as full database backups, tablespace backups, and control file backups.
9. A and B. The dynamic performance view V$BACKUP_SYNCH_IO helps detect synchronous backup bottlenecks, and V$BACKUP_ASYNCH_IO helps detect asynchronous backup bottlenecks.

10. A and C. The LARGE_POOL_SIZE and BACKUP_TAPE_IO_SLAVES parameters, when used together, help optimize throughput for densely packed files such as full database backups.

11. D. The goal of an RMAN backup is to keep the tape drive constantly rotating or streaming for backups and restores. This means that throughput may need to increase or decrease for different types of backups or restores.
Chapter 22
Restoration and Recovery Using RMAN

ORACLE8i BACKUP AND RECOVERY EXAM OBJECTIVES OFFERED IN THIS CHAPTER:
Restore and recovery considerations using RMAN
Restore a database in NOARCHIVELOG mode
Restore and recover a tablespace in an opened database
Restore and recover a data file
Incomplete recovery using RMAN
Exam objectives are subject to change at any time without prior notice and at Oracle's sole discretion. Please visit Oracle's Training and Certification Web site (http://education.oracle.com/certification/index.html) for the most current exam objectives listing.
This chapter covers the recovery considerations when using the RMAN utility. It demonstrates restores and recoveries using the RMAN utility for specific situations. You will perform restores and recoveries for the following examples:
A restore of a whole database from RMAN while the database is in NOARCHIVELOG mode
A restore and recovery of a tablespace using RMAN
A restore and recovery of a data file using RMAN
An incomplete database recovery while using RMAN
For each of these examples, you will be given a general explanation before the step-by-step procedure.
Restore and Recovery Considerations Using RMAN
The restore and recovery considerations for using RMAN consist of how you will restore databases, tablespaces, data files, control files, and archive logs from RMAN. Restores and recoveries can be performed from backups on both disk and tape devices. There are two main backup sources that can be the basis for the RMAN recovery process. These sources are image copies and backup sets. Image copies can be stored only on disk. Image copies are actual copies of the database files, archive logs, or control files and are not stored in a special RMAN format. An image copy in RMAN is equivalent to an OS copy command
such as cp or dd in Unix, or COPY in Windows NT/2000. Thus, no RMAN restore processing is necessary to make image copies usable in a recovery situation. This can improve the speed and efficiency of the restore and recovery process in some cases. On the other hand, database files in backup sets are stored in a special RMAN format and must be processed with the RESTORE command before these files are usable. This can take more time and effort during the recovery process.
Let's look at the main difference in RMAN commands between a backup copy using a backup set and an image copy, with the commands and syntax required to perform each type. The first example is the command necessary to perform an image copy. In this example, you are backing up the system, data, and users data files.
RMAN> run {
  allocate channel ch1 type disk;
  copy datafile 1 to '/staging/system01.dbf',
       datafile 2 to '/staging/data01.dbf',
       datafile 3 to '/staging/users01.dbf',
       current controlfile to '/staging/control1.ctl';
}
The next RMAN command is used to perform the backup set backup process. The backup set backup process consists of using the BACKUP command instead of the COPY command. Below is an example of the backup set command. In this example, you are backing up the USERS tablespace and current control file to a backup set.
RMAN> run {
  allocate channel ch1 type disk;
  backup tablespace users include current controlfile;
}
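Either kind of backup can later be confirmed from the RMAN prompt. As a quick sketch (the exact LIST output varies by release):

```
RMAN> list copy of database;    # image copies recorded in the catalog
RMAN> list backup of database;  # backup sets recorded in the catalog
```

Comparing the two listings is a simple way to see which recovery source RMAN will have available before you need it.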
Restore a Database in NOARCHIVELOG Mode
As the first example of using RMAN for restores and recoveries, you will restore a database in NOARCHIVELOG mode. To restore a database in NOARCHIVELOG mode, you must first make sure that the database was shut
down cleanly to get a consistent backup. This means the database should be shut down with a SHUTDOWN NORMAL, IMMEDIATE, or TRANSACTIONAL, but not ABORT. The database should then be started in MOUNT mode, but not opened. This is because the database files cannot be backed up when the database is opened and not in ARCHIVELOG mode. Next, while in the RMAN utility, you must connect to the target database, which in our example is ORCL in the Windows NT OS. Then you can connect to the recovery catalog in the RCDB database. Once connected to the proper target and catalog, you can execute the appropriate RMAN backup script. This script will back up the entire database. Next, the database can be restored with the appropriate RMAN script. Finally, the database can be opened for use. Let's walk through this example step by step:
1. Set the ORACLE_SID to ORCL, which is your target database, so that the database can be started in MOUNT mode with SVRMGRL.
set ORACLE_SID=ORCL
SVRMGRL> startup mount
2. Start the RMAN utility at the DOS command prompt and connect to the target and the recovery catalog database RCDB.
C:\> C:\backup\RMAN
RMAN> connect target
RMAN> connect catalog rman/rman@rcdb
3. Once connected to the target and recovery catalog, you can back up the target database to disk or tape. In this example, you choose disk. You give the database name a format of %U, which means that a backup set name will be generated by the system.
RMAN> run {
  allocate channel c1 type disk;
  backup database format 'c:\backup\%U';
  release channel c1;
}
4. Once the backup is complete, the database may be restored. The database must be mounted but not opened. In the restore script, choose
three disk channels to utilize parallelization of the restore process. The RESTORE DATABASE command is responsible for the restore process within RMAN. No recovery is required because the database was in NOARCHIVELOG mode and the complete database was restored.
RMAN> run {
  allocate channel c1 type disk;
  allocate channel c2 type disk;
  allocate channel c3 type disk;
  restore database;
}
5. Once the database has been restored, it can be opened and then shut down normally. Then a start-up should be performed to make sure the restore process was successful.
SVRMGRL> alter database open;
SVRMGRL> shutdown
SVRMGRL> startup
The most unusual aspect of a NOARCHIVELOG mode recovery in RMAN is that the database must be mounted so that RMAN server sessions can connect and perform the recovery. In an OS backup, the database would most likely be closed.
Restore and Recover a Tablespace
As a second example, you will restore and recover a tablespace by using RMAN. In this case, the database will be in ARCHIVELOG mode because an individual tablespace will be backed up. Thus, the database will be backed up while it is opened. Within RMAN, you must perform the appropriate tablespace backup script. In this example, you will select the USERS tablespace. Also, you will back up the current control file as an extra precaution.
Once the tablespace is backed up, the restore and recovery process can begin. With the database open, you can execute the appropriate RMAN script to restore and recover the tablespace. The tablespace must first be taken offline, and then brought online when the restore and recovery is complete. Here are the steps:
1. Set ORACLE_SID to ORCL, which is your target database, so that the database can be started completely with SVRMGRL.
set ORACLE_SID=ORCL
SVRMGRL> startup
2. Connect to RMAN, the target database, and the recovery catalog in one step. In this example, you utilize the log option to write the output to a log file.
C:\> rman target / catalog rman/rman@rcdb log "c:\backup\users.log"
3. Run the appropriate RMAN script to back up the USERS tablespace to disk.
RMAN> run {
  allocate channel ch1 type disk;
  backup tablespace users include current controlfile;
}
4. Once the tablespace has been backed up, you can restore and recover the tablespace with the appropriate RMAN script. The script first takes the tablespace offline before the restore and recovery. The RESTORE TABLESPACE and RECOVER TABLESPACE commands are responsible for the restore and recovery process within RMAN. Once this part is complete, you can bring the tablespace back online.
RMAN> run {
  sql 'ALTER TABLESPACE users OFFLINE TEMPORARY';
  allocate channel ch1 type disk;
  restore tablespace users;
  recover tablespace users;
  sql 'ALTER TABLESPACE users ONLINE';
}
5. Once the database has been restored, it can be opened and then shut down normally. Then you should perform a start-up to make sure the restore process was completed successfully.
SVRMGRL> shutdown
SVRMGRL> startup
Restore and Recover a Data File
As the third example, you will restore and recover a data file by using RMAN. In this case, the database will also be in ARCHIVELOG mode because an individual data file will be backed up. As in the previous tablespace example, the database will be backed up while it is opened. Next, within RMAN, you must perform the appropriate data file backup script. For this example, you will select the data file for the USERS tablespace. You will back up the current control file as an extra precaution. Once the data file is backed up, you can begin the restore and recovery process. With the database open, you can execute the appropriate RMAN script to restore and recover the data file. The tablespace must first be taken offline, and then brought online when the restore and recovery is complete. The steps are as follows:
1. Set ORACLE_SID to ORCL, which is your target database, so that the database can be started completely with SVRMGRL.
set ORACLE_SID=ORCL
SVRMGRL> startup
2. Connect to RMAN, the target database, and the recovery catalog in one step. Utilize the log option to write the output to a log file.
C:\> rman target / catalog rman/rman@rcdb log "c:\backup\users01.log"
3. Run the appropriate RMAN script to back up the USERS data file to disk.
RMAN> run {
  allocate channel ch1 type disk;
  backup format '%d_%u'
    (datafile 'C:\oracle\oradata\ORCL\users01.dbf');
  release channel ch1;
}
4. Once the data file has been backed up, you can restore and recover the data file with the appropriate RMAN script. The script takes the tablespace offline before the data file restore and recovery. The RESTORE DATAFILE and RECOVER DATAFILE commands are responsible for the restore and recovery process within RMAN. Once this part is complete, the tablespace is brought back online. Data file 3 corresponds to the USERS tablespace, which can be seen by querying V$DATAFILE.
RMAN> run {
  allocate channel ch1 type disk;
  sql 'ALTER TABLESPACE users OFFLINE IMMEDIATE';
  restore datafile 3;
  recover datafile 3;
  sql 'ALTER TABLESPACE users ONLINE';
  release channel ch1;
}
5. Once the database has been restored, you can open it and then shut it down normally. Then a start-up should be performed to make sure the restore process was completed.
SVRMGRL> shutdown
SVRMGRL> startup
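The file number used in step 4 can be verified before taking the tablespace offline. A query along these lines, run as a privileged user, maps file numbers to tablespaces (the output depends entirely on your database, so none is shown here):

```
SVRMGR> select file_id, tablespace_name, file_name
     2> from dba_data_files
     3> order by file_id;
```

Checking this mapping first avoids restoring or recovering the wrong data file number.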
Incomplete Recovery Using RMAN
As the final example, you will perform an incomplete recovery using RMAN. In this example, you will recover the database to a particular point in time. You will create a user called TEST and two tables with date and time data stored in them. You will perform a database backup in ARCHIVELOG mode and then recover to a time between 2:31, when the data was stored in the first table, and 3:59, when the data was stored in the second table. This is accomplished with the SET UNTIL TIME clause in RMAN. The SET UNTIL [TIME/CHANGE/CANCEL] clause is required to perform incomplete recovery. Thus, when the database is recovered, you should not see the second table's data. The database will need to be registered in the recovery catalog upon completion and validation of the incomplete recovery. Let's walk through this example:
1. Set ORACLE_SID to ORCL, which is your target database, so that the database can be started in MOUNT mode with SVRMGRL.
set ORACLE_SID=ORCL
SVRMGRL> startup mount
2. Connect to RMAN, the target database, and the recovery catalog in one step. Utilize the log option to write the output to a log file.
C:\> rman target / catalog rman/rman@rcdb log "c:\backup\incomplete.log"
3. Create a user TEST and the two tables, which will be used throughout this example. Data will be added to the first table. The results of this data insertion can be seen in the SELECT statement.
SVRMGR> create user test identified by test
     2> default tablespace users
     3> temporary tablespace temp;
Statement processed.
SVRMGR> grant connect,resource to test;
Statement processed.
SVRMGR> connect test/test
Connected.
SVRMGR> create table t1 (c1 number, c2 char(50));
Statement processed.
SVRMGR> insert into t1 values (1, to_char(sysdate, 'HH:MI DD-MON-YYYY'));
SVRMGR> commit;
SVRMGR> create table t2 (c1 number, c2 char(50));
Statement processed.
SVRMGR> connect system/manager
SVRMGR> alter system switch logfile;
Statement processed.
SVRMGR> select * from t1;
C1         C2
---------- --------------------------------------------
         1 02:31 28-JUN-2000
1 row selected.
4. Back up the database and archive the log files.
RMAN> run {
  allocate channel ch1 type disk;
  backup database;
  sql 'ALTER SYSTEM ARCHIVE LOG CURRENT';
  sql 'ALTER SYSTEM ARCHIVE LOG ALL';
}
5. Add the date-time data to the second table, t2, which was created in step 3. This date-time is the same day, but at 3:59 in the afternoon. Assume that you have run some log switches to move the data to the archive logs.
SVRMGR> connect test/test
Connected.
SVRMGR> insert into t2 values (2, to_char(sysdate, 'HH:MI DD-MON-YYYY'));
1 row processed.
SVRMGR> commit;
SVRMGR> select * from t2;
C1         C2
---------- --------------------------------------------
         2 03:59 28-JUN-2000
1 row selected.
SVRMGR> connect system/manager
SVRMGR> alter system switch logfile;
SVRMGR> alter system switch logfile;
6. Back up the archive logs to disk with the appropriate RMAN script.
RMAN> run {
  allocate channel ch1 type disk;
  backup archivelog all;
}
7. Restore the database to a point in time between 2:31 and 3:59. Then validate that you do not see the second table's data.
RMAN> run {
  allocate channel ch1 type disk;
  set until time 'JUN 28 2000 15:58:00';
  restore database;
  recover database;
  sql 'alter database open resetlogs';
}
SVRMGR> select * from t1;
C1         C2
---------- --------------------------------------------
         1 02:31 28-JUN-2000
1 row selected.
SVRMGR> select * from t2;
No rows selected.
SVRMGR>
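One practical detail behind step 7: RMAN parses the string given to SET UNTIL TIME according to the client's NLS date settings, so those are commonly set in the environment before starting RMAN. A sketch for the Windows NT session used here (the format mask must match the string you pass to SET UNTIL TIME):

```
C:\> set NLS_LANG=american
C:\> set NLS_DATE_FORMAT=Mon DD YYYY HH24:MI:SS
C:\> rman target / catalog rman/rman@rcdb
```

If the mask and the string do not agree, the SET UNTIL TIME command fails with a date-conversion error before any restore begins.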
8. Once the database has been restored, it can be opened and then shut down normally. Then a start-up should be performed to make sure the restore process was completed successfully.
SVRMGRL> alter database open;
SVRMGRL> shutdown
SVRMGRL> startup
9. Once the database has been validated, the database should be reregistered in the RMAN catalog. This must be done every time there is an incomplete recovery and the SQL 'ALTER DATABASE OPEN RESETLOGS'; command is performed. You must be connected to the target database and the recovery catalog.
RMAN> reset database;
A complete backup should be performed on any database that is opened with the RESETLOGS option. All previous archive logs are invalid for a database opened with the RESETLOGS option. Thus, a new complete backup must be performed.
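Following this note, the new baseline can be taken immediately after the RESET DATABASE command, reusing the whole-database backup script from earlier in the chapter (a sketch; the format string is illustrative):

```
RMAN> run {
  allocate channel c1 type disk;
  backup database format 'c:\backup\%U';
  release channel c1;
}
```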
Summary
In this chapter, we have discussed recovery considerations when using the RMAN utility. These considerations surround the backup type used, either a backup set or an image copy. The image copy retains the database files in their existing format and must be stored on disk only. The backup set stores the files in an RMAN format. This type of backup may take longer to recover under certain situations because of the required unformatting performed by the RESTORE command. You performed restores and recoveries for the following situations: NOARCHIVELOG mode, tablespace, data file, and an incomplete recovery. You walked through each of these recoveries step by step. Each of these examples was performed with an RMAN backup set.
Key Terms

Before you take the exam, make sure you're familiar with the following terms:
image copies
backup sets
RESTORE DATABASE
RESTORE TABLESPACE
RECOVER TABLESPACE
RESTORE DATAFILE
RECOVER DATAFILE
SET UNTIL [TIME/CHANGE/CANCEL]
Review Questions

1. What type of restoration and recovery does not require the use of the RMAN RESTORE command?
A. RESTORE FROM BACKUP SET
B. RESTORE FROM BACKUP PIECE
C. RESTORE FROM IMAGE COPY
D. RESTORE DATABASE

2. Recovery from what backup source requires the use of the RESTORE command to process the contents of the backup?
A. Image copy
B. Backup set
C. OS copy
D. Import

3. What type of recovery is needed on a full database restore if the database is operating in NOARCHIVELOG mode and full backups are being used for the backup strategy?
A. RECOVER DATAFILE
B. None
C. RECOVER DATABASE
D. RECOVER TABLESPACE
4. What command needs to be issued before recovering a data file or tablespace?
A. RECOVER DATABASE
B. SQL 'ALTER TABLESPACE OFFLINE IMMEDIATE';
C. RECOVER DATAFILE
D. RECOVER TABLESPACE

5. What command is required to perform an incomplete recovery? Choose all that apply.
A. SET UNTIL [TIME/CHANGE/CANCEL]
B. SQL 'ALTER DATABASE OPEN RESETLOGS';
C. ARCHIVE LOG ALL
D. ARCHIVE LOG LIST
Answers to Review Questions

1. C. An image copy does not require the use of the RMAN RESTORE command because the image copy is an exact representation of the file from which it is copied.

2. B. A backup set requires the use of the RESTORE command to process the backup, because backup sets are stored in an RMAN format.

3. B. None; if the database is completely restored, no recovery is needed.

4. B. The SQL 'ALTER TABLESPACE OFFLINE IMMEDIATE'; command must be performed before restoring and recovering a data file or tablespace. This allows the data file and tablespace to be restored and recovered without affecting the rest of the database. This information is not explicitly covered in the text, but you can find out more in the Oracle8i Recovery Manager User's Guide and Reference, Release 2 (8.1.6).

5. A and B. Both the SET UNTIL [TIME/CHANGE/CANCEL] and SQL 'ALTER DATABASE OPEN RESETLOGS'; commands are necessary in an incomplete recovery. The SET UNTIL [TIME/CHANGE/CANCEL] clause is necessary to determine the stopping point in the recovery process, and SQL 'ALTER DATABASE OPEN RESETLOGS'; resets the log sequence numbers so that the old logs will not be used by the new database. This information is not explicitly covered in the text, but you can find out more in the Oracle8i Recovery Manager User's Guide and Reference, Release 2 (8.1.6).
Chapter 23
Oracle Standby Database

ORACLE8i BACKUP AND RECOVERY EXAM OBJECTIVES OFFERED IN THIS CHAPTER:
Explain the use of a standby database
Configure initialization parameters
Create, maintain, and activate a standby database
Describe managed recovery mode
Set a standby database in read-only mode
Describe the process of propagating structural changes to a standby database
Describe the impact of Nologging actions on the primary database
Exam objectives are subject to change at any time without prior notice and at Oracle's sole discretion. Please visit Oracle's Training and Certification Web site (http://education.oracle.com/certification/index.html) for the most current exam objectives listing.
This chapter explains and demonstrates the standby database as well as its configuration. The topics in this chapter include configuring the initialization parameters as well as creating, maintaining, and activating the standby database. You will also walk through specific scenarios of using the standby database. These scenarios include using managed recovery mode and setting the standby database in read-only mode. In addition, this chapter describes the concepts of propagating structural changes and the effects of Nologging actions on the primary database. These descriptions include detailed discussions of each concept.
Using a Standby Database
The standby database's primary purpose is to support high availability for environments that need 24-hour, 7-days-a-week uptime. The standby database must have a primary database to exist. Furthermore, one primary database can support up to four standby databases, which can be used for different purposes and are constantly in a state of recovery. The standby databases are receiving archived logs from the primary database, and these archive logs are being applied to the standby databases. The standby databases can be in different geographic regions or in one location on one server. Figure 23.1 shows a standby database on remote servers, and Figure 23.2 shows a standby database on the same server.
FIGURE 23.1   Standby database on remote servers
[Figure: the primary server writes archive logs (arc01.log, arc02.log, through arc05.log) to the primary database archive location; the logs are transferred to a remote archive directory on the remote standby server and applied to the standby database. Archive logs are transferred across the network by tools like ftp, Unix rsh commands, or by utilizing managed recovery mode.]
The standby is utilized when the primary database suffers a failure. At this point, the standby database can be completely recovered and opened, or made available, to the users. Thus, recovery time can be nearly immediate. The standby database serves as a high-availability solution for database environments that have stringent uptime requirements. The standby database also serves as protection against natural disasters or damage to a primary business site.
FIGURE 23.2   Standby database on the same server
[Figure: the primary and standby databases run on the same server; archive logs (arc01.log, arc02.log, through arc05.log) are copied from the primary archive directory to the standby archive directory on the local file system and applied to the standby database. Archive files are copied using an OS copy command such as the Unix cp or mv commands.]
Configuring Initialization Parameters
The initialization parameters in a standby database and a primary database should, in most cases, be identical. Certain initialization parameters, however, must be different. The parameters that should be different are
DB_FILE_NAME_CONVERT, LOG_FILE_NAME_CONVERT, STANDBY_ARCHIVE_DEST, and CONTROL_FILES. All other parameters should be changed with caution because parameters that are inconsistent between the standby and primary databases could cause severe performance problems or cause the database to stop functioning entirely.
The following parameters must be identical in the standby and primary databases:
COMPATIBLE   Determines the compatibility level of the features that can be used in a database. This parameter must be identical in the standby and primary databases, or log files run the risk of not working with the primary and standby databases.
DB_FILES   Determines the MAXDATAFILES in a database. These should be the same in the primary and standby databases to allow equal numbers of data files in each as your database grows and data files need to be added.
The following parameters must be different in the standby and primary databases, because each identifies database-specific attributes:
DB_FILE_NAME_CONVERT   Used when you want to give the standby and primary database data files different naming conventions to distinguish between the two.
LOG_FILE_NAME_CONVERT   Used when you want to give the standby and primary database redo logs different naming conventions to distinguish between the two.
STANDBY_ARCHIVE_DEST   Used by the remote file server (RFS) process to determine the directory in which to place the archive logs on the standby database server. This parameter is used with the managed recovery option to perform recovery.
CONTROL_FILES   Used to differentiate the control files on the primary and standby databases.
To open and use the standby database in read-only mode, the COMPATIBLE parameter must be 8.1 or higher.
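To make the differences concrete, a standby instance's init.ora might override only entries like the following, with everything else copied from the primary. All paths and values here are hypothetical:

```
# Hypothetical standby overrides; all other parameters match the primary.
control_files          = '/stby01/ORACLE/brdb/control_stby.ctl'
standby_archive_dest   = '/stby01/ORACLE/brdb/arch'
db_file_name_convert   = ('/db01/ORACLE/brdb', '/stby01/ORACLE/brdb')
log_file_name_convert  = ('/db01/ORACLE/brdb', '/stby01/ORACLE/brdb')
compatible             = 8.1.6   # must match the primary database
```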
Creating, Maintaining, and Activating a Standby Database
Three main tasks get the standby database up and running: creating the standby database, maintaining the standby database, and activating the standby database. First, we will describe each of these categories in general. Then we will walk you through each process step by step.
Creating the Standby Database

Creating the standby database consists of creating a database that matches the primary database exactly from a physical standpoint, with the exception of control files. To accomplish this task, you must first make a copy of the primary database's init.ora file and move that copy to the standby server location. The appropriate changes must be made to the init.ora parameters described in the preceding section. Then, you should validate the data files in the database. This can be done by selecting from the V$DATAFILE view. The primary database should be shut down cleanly, a full backup should be taken, and the database then started back up. Next, you must create the standby control file, specifying the standby file location. Once this is complete, the database should be checkpointed and made consistent by archiving all the online redo log files. Finally, all the necessary physical files (standby control files, archive log files, and backed-up data files from when the database was shut down earlier) can be transferred to the standby server location. Let's walk through this example one step at a time:
1. Create the standby init.ora parameter file by making a copy of the primary database's init.ora file. This init file should have the appropriate changes mentioned in the previous section, "Configuring Initialization Parameters," and then be moved to the standby database.
2. Verify the data files that need to be copied from backup to the standby location. This can be done by selecting from the V$DATAFILE view.
SVRMGR> select file#,status,name from v$datafile;
FILE#      STATUS  NAME
---------- ------- ------------------------------------
         1 SYSTEM  /db01/ORACLE/brdb/system01.dbf
         2 ONLINE  /db01/ORACLE/brdb/rbs01.dbf
         3 ONLINE  /db01/ORACLE/brdb/temp01.dbf
         4 ONLINE  /db01/ORACLE/brdb/users01.dbf
         5 ONLINE  /db01/ORACLE/brdb/tools01.dbf
         6 ONLINE  /db01/ORACLE/brdb/data01.dbf
         7 ONLINE  /db01/ORACLE/brdb/indx01.dbf
7 rows selected.
SVRMGR>
3. Shut down the primary database cleanly with the SHUTDOWN
IMMEDIATE, NORMAL, or TRANSACTIONAL options.
SVRMGR> shutdown
4. Take a full backup of the data files in the primary database. This backup should be a consistent backup if the database has been shut down cleanly.
5. Start up the primary database.
SVRMGR> startup
6. Create a standby database control file.
SVRMGR> ALTER DATABASE CREATE STANDBY CONTROLFILE AS 'control_stby.ctl';
7. Archive the online redo files to assure that the database is consistent.
SVRMGR> ALTER SYSTEM ARCHIVE LOG CURRENT;
8. Transfer all necessary physical files to the standby database location. This can be performed by using an ftp command or a remote shell command in Unix. If the standby database is local, simple OS copy commands will suffice.
Maintaining the Standby Database

Maintaining the standby database requires that you determine a plan of action for several maintenance issues. These maintenance issues are converting the names of data files and archived redo logs, determining the most recently applied archive redo log, clearing out online redo logs, and backing up the standby database. Table 23.1 describes the action necessary for each maintenance issue.

TABLE 23.1   Maintaining the Standby Database

Maintenance issue: Converting the names of data files and archived redo logs
Action: Make sure that DB_FILE_NAME_CONVERT and LOG_FILE_NAME_CONVERT are set such that when data files are updated or added on the standby database, the data files and redo logs will be named accordingly. The conversion init.ora parameters will not be applied to the ALTER TABLESPACE RENAME DATAFILE, ALTER DATABASE RENAME FILE, and ALTER DATABASE CREATE DATAFILE AS commands.

Maintenance issue: Determining the most recently applied archive redo log
Action: Query the V$LOG_HISTORY view or view the alert log.

Maintenance issue: Clearing out online redo logs in the standby database
Action: Run the ALTER DATABASE CLEAR LOGFILE GROUP N command, which can improve performance.
TABLE 23.1   Maintaining the Standby Database (continued)

Maintenance issue: Backing up the standby database
Action: The standby database can be backed up when the database is not in manual or managed recovery mode. It should be done when the database has been shut down or is in read-only mode. When backing up in read-only mode, the primary database control file should also be copied.
Backups of the standby database can affect the primary database. The primary database may have to wait to archive the online redo logs during the downtime on the standby database.
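For the second maintenance issue in Table 23.1, a query such as the following (a sketch, run on the standby) reports the highest archive log sequence recorded in V$LOG_HISTORY; output depends on your database:

```
SVRMGR> select thread#, max(sequence#) last_applied
     2> from v$log_history
     3> group by thread#;
```

Comparing this value against the primary's current log sequence shows how far the standby lags behind.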
Activating the Standby Database

Activating the standby database means making the standby database the primary production database for your site. You should do this only in case of an emergency. To activate the standby database, you should first attempt to archive the primary database's online redo logs one last time. If this is possible, the online redo logs should be archived and transferred to the standby site. These logs should be applied to the standby database so that the final changes in the primary database are applied to the standby database. Once the last archive logs are applied to the standby database, the ALTER DATABASE ACTIVATE STANDBY DATABASE command should be executed. Next, the standby database should be shut down. The former standby database, which is now the new primary production database, should be backed up as soon as possible. This new production database will not be able to be recovered until a backup is made. The standby database can then be opened in read-write or read-only mode. This is now the primary production database. Let's walk through these steps one at a time:
1. Make sure that the standby database is mounted in EXCLUSIVE mode.
Chapter 23
Oracle Standby Database
2. Attempt to archive the primary database's online redo logs and apply them to the standby database one last time. Transfer these last archive logs to the standby database and validate that they are applied.

   SVRMGR> alter system archive log current;

3. Activate the standby database.

   SVRMGR> alter database activate standby database;

4. Shut down the standby database.

   SVRMGR> shutdown

5. Perform a backup of the standby database as soon as possible. The standby database, which has now become the primary production database, is vulnerable to a failure because there is no good backup of this database.

6. Start up the standby database, which is now the primary production database. It can be opened in read-write or read-only mode; most likely it will be opened read-write, since it is the new primary production database.

   SVRMGR> startup mount
   SVRMGR> alter database open;
Make sure that you attempt to transfer all the available archive logs from the primary database and apply them to the standby database before the standby database is activated. All redo log entries that are not applied to the standby database will potentially be lost and would need to be reentered manually.
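One way to validate that the last archive logs have been applied is to compare the highest log sequence number recorded on each database. This is a sketch only; V$LOG_HISTORY is a standard dynamic performance view, but verify the column names on your release.

```sql
-- Run on both the primary and the standby; the results should match
-- before the standby is activated.
SELECT MAX(sequence#) FROM v$log_history;
```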
Using Managed Recovery Mode
Managed recovery mode is a state of the standby database that is activated by issuing the RECOVER command. The standby database waits for archive logs from the primary database and then automatically applies them to the standby database. Thus, the standby database recovery process is automatically kept in synch with the primary database.
The standby and primary databases must be configured as normal standby and primary databases by using the steps outlined in the previous section, "Creating the Standby Database." The main difference is that LOG_ARCHIVE_DEST_N is configured to transmit the archive logs to a local file system, or to a remote file system if the standby database is on another file server. When placed in managed recovery mode, the standby database will wait indefinitely, or for a designated period of time, for the next archive log to be received and applied. Let's walk through the configuration of a standby database in managed recovery mode:

1. Configure the init.ora for the appropriate LOG_ARCHIVE_DEST_N parameters, with N being an integer from 1 to 5. The LOCATION entry defines the parameter as a local file system, whereas the SERVICE entry defines the parameter as a remote location or different database server.

   Local file location:
   LOG_ARCHIVE_DEST_1='LOCATION=/oracle/admin/standby/arch1'

   Remote file location:
   LOG_ARCHIVE_DEST_2='SERVICE=standby_db'
There are also MANDATORY and OPTIONAL clauses used with the LOG_ARCHIVE_DEST_N parameter. These clauses are used with the LOG_ARCHIVE_MIN_SUCCEED_DEST = N parameter, where N is an integer value. The LOG_ARCHIVE_MIN_SUCCEED_DEST parameter uses all MANDATORY and some OPTIONAL nonstandby archive destinations to determine whether the LGWR can recycle the online redo logs.
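Putting these clauses together, an init.ora fragment might look like the following sketch. The paths, service name, and values are hypothetical. Here the local destination is MANDATORY, the standby destination is OPTIONAL, and LOG_ARCHIVE_MIN_SUCCEED_DEST = 1 means the online redo logs can be recycled as long as the local archive succeeds.

```
LOG_ARCHIVE_START = true
LOG_ARCHIVE_DEST_1 = 'LOCATION=/oracle/admin/prod/arch MANDATORY'
LOG_ARCHIVE_DEST_2 = 'SERVICE=standby_db OPTIONAL'
LOG_ARCHIVE_MIN_SUCCEED_DEST = 1
```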
2. Configure tnsnames.ora on the primary server if you are using a remote standby database server. In this example, you are configuring for a remote server.

   standby_db =
     (DESCRIPTION=
       (ADDRESS=
         (PROTOCOL=tcp)
         (HOST=different_server)
         (PORT=1512)
       )
       (CONNECT_DATA=
         (SID=stby)
         (GLOBAL_NAME=standby_db)
         (SERVER=DEDICATED)))

3. Configure listener.ora on the standby server. In this example, you are configuring for a remote server. The listener should be restarted or recycled to pick up these changes.

   LISTENER =
     (ADDRESS_LIST=
       (ADDRESS=
         (PROTOCOL=tcp)
         (KEY=stdby)
         (HOST=different_server)
         (PORT=1512)))

   SID_LIST_LISTENER =
     (SID_LIST=
       (SID_DESC=
         (SID_NAME=stby)
         (ORACLE_HOME=/oracle/product/8.1.5)))

4. Start up the standby database in NOMOUNT mode.

   SVRMGR> startup nomount

5. Mount the database in standby database mode.

   SVRMGR> alter database mount standby database;

6. Place the standby database in managed recovery mode. The default is to wait indefinitely, or you can set a time-out period in minutes.

   SVRMGR> recover managed standby database
   or
   SVRMGR> recover managed standby database timeout 30
The REOPEN clause of the LOG_ARCHIVE_DEST_N parameter determines how long, in seconds, the archiver process waits before retrying archiving to a failed destination following an error. If this clause is omitted, the archiver will never retry that destination after an error; failed logs would then need to be transmitted to the standby server manually.
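For example, the following hypothetical entry tells the archiver to retry the standby destination 60 seconds after a failure:

```
LOG_ARCHIVE_DEST_2 = 'SERVICE=standby_db OPTIONAL REOPEN=60'
```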
Setting the Standby Database in Read-Only Mode
Read-only mode enables users to query the standby database without writing or making modifications. This can reduce resource usage on the primary database by allowing the standby database to serve as a reporting database; resource-intensive queries can be run on the standby database instead. The standby database can be configured to operate in read-only mode by simply stopping and restarting the database in read-only mode. Let's walk through an example of this procedure:

1. Start the database in NOMOUNT mode.

   SVRMGR> startup nomount

2. Mount the standby database.

   SVRMGR> alter database mount standby database;

3. Open the database in read-only mode.

   SVRMGR> alter database open read only;

If the database is in manual recovery mode or managed recovery mode, you must first cancel recovery before opening the database in read-only mode. Let's walk through an example of this procedure:

1. Cancel the manual recovery.

   SVRMGR> recover cancel;
Or, in managed recovery mode:

   SVRMGR> recover managed standby database cancel;

2. Open the database in read-only mode.

   SVRMGR> alter database open read only;

To take the standby database out of read-only mode and move to manual recovery or managed recovery, you should make sure all users are out of the database and then place the database back in the desired recovery mode. In manual mode:

   SVRMGR> recover standby database;

Or in managed recovery mode:

   SVRMGR> recover managed standby database;
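To confirm which mode the standby is currently running in, you can query V$DATABASE. This is a sketch; the OPEN_MODE column exists in Oracle8i, but verify it on your release.

```sql
-- Shows whether the database is open READ ONLY or READ WRITE.
SELECT open_mode FROM v$database;
```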
When in read-only mode, your standby database will take longer to fail over in the event of a disaster on the primary database. You will need to take the database out of read-only mode and place the standby database back in manual or managed recovery mode before it can be activated. Then, all archive logs that have not been applied since it was put into read-only mode will need to be applied to make the standby database consistent with the primary database. If you want to take advantage of query access without compromising disaster recovery failover time, you could have multiple standby databases. One database could be placed in read-only mode while the other was in managed or manual recovery mode. Then, in the off-hours, resynchronize the read-only databases by applying all archive logs that have been generated while the database was in read-only mode.
Propagating Structural Changes to a Standby Database
Changing the physical structure of the primary database can increase the maintenance activities and procedures needed to propagate these changes to the standby database. Four of the most common areas where physical changes in the primary database occur are as follows:
Adding new data files as the data in the primary database grows
Renaming data files in the primary database
Modifying the online redo logs to support increases and decreases of the activity in the database
Changing the control file on the primary database
Adding New Data Files

Adding new data files as the data in the primary database grows is a necessary part of database management. This activity must be handled with special procedures in the standby database, or the standby database will be out of synch with the primary database. When a new data file is added, the control file in the standby database is updated to record the physical change as a normal process of reading the archive logs. The catch is that the data file must be in place on the standby database, or the recovery process will terminate on the standby database. To deal with this special circumstance, you must follow a special procedure on the standby database. Let's walk through this procedure:

1. Make sure that the standby database is shut down.

   SVRMGR> shutdown

2. Create the tablespace and the associated data file(s) in the primary database.

   SVRMGR> create tablespace test datafile
           '/db01/ORACLE/brdb/test01.dbf' size 10m;
3. Copy the new data file to the standby server and file location. This could be done using an ftp command if the standby is remote, or an OS command such as cp on Unix if the standby database is local.

4. Start the standby database in NOMOUNT mode and then change it to MOUNT mode.

   SVRMGR> startup nomount
   SVRMGR> alter database mount standby database;

5. Place the standby database in managed or manual recovery mode.

   SVRMGR> recover managed standby database

6. Archive all the log files on the primary database and validate that the logs are transmitted to the standby database.

   SVRMGR> alter system switch logfile;

7. Recover the standby database so that all archive logs with new control file changes regarding the new data file are recorded in the standby database.

   SVRMGR> recover managed standby database cancel;

8. Now the new data file can be added to the standby database. Note that a new tablespace does not need to be created, because this information was applied in the archive log recovery activity.

   SVRMGR> alter database create datafile
           '/db01/ORACLE/stby/test01.dbf' as
           '/db01/ORACLE/stby/test01.dbf';

9. Finally, place the standby database in recovery mode again.

   SVRMGR> recover managed standby database;
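To double-check that the standby's control file now includes the new data file, you can list the data files it knows about (a sketch):

```sql
SELECT file#, name FROM v$datafile ORDER BY file#;
```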
Renaming Data Files

Renaming data files in the primary database is much less involved than adding a new data file or tablespace. All that is required for this activity is to perform the same command on the standby database as on the primary database.
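For example, if a data file were moved on the primary with the statement below, the same statement would be issued on the standby while it is mounted. The paths are hypothetical.

```sql
ALTER DATABASE RENAME FILE
  '/db01/ORACLE/brdb/users01.dbf'
  TO '/db02/ORACLE/brdb/users01.dbf';
```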
Modifying the Online Redo Logs

Modifying the online redo logs to support increases and decreases of the activity in the database can be required on the primary database. This can be performed as it normally would be on the primary database, and no changes will be made on the standby database.
Changing the Control File

Changing the control file on the primary database may be required to make significant changes in the structure of that database. This has a serious effect on the standby database's control file and may invalidate it. Also, opening the primary database after executing the CREATE CONTROLFILE command (generated from the ALTER DATABASE BACKUP CONTROLFILE TO TRACE command) will require the ALTER DATABASE OPEN RESETLOGS command to be executed. This resets the logs and invalidates the standby database. If this action is required on the primary database, special procedures will need to be performed on the standby database to resynchronize the primary and standby databases. Let's walk through this special procedure:

1. First, cancel recovery on the standby database.

   SVRMGR> recover cancel
   or
   SVRMGR> recover managed standby database cancel

2. Shut down the standby database.

   SVRMGR> shutdown

3. On the primary database, create a standby control file.

   SVRMGR> alter database create standby controlfile as 'stby_control.ctl';

4. Archive the current online redo log on the primary database.

   SVRMGR> alter system archive log current;

5. Move the standby control file and archive logs to the standby database server. This can be done by ftp or a remote shell command on Unix.
6. Mount the standby database.

   SVRMGR> alter database mount standby database;

7. Begin the recovery process on the standby database.

   SVRMGR> recover standby database;
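Once the standby is mounted with the new standby control file, you can confirm that Oracle sees it as a standby control file. This is a sketch; the CONTROLFILE_TYPE column of V$DATABASE should report STANDBY here, but verify the column on your release.

```sql
SELECT controlfile_type FROM v$database;
```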
Understanding the Impact of Nologging Actions on the Primary Database
Nologging or unrecoverable actions on the primary database are not recorded in the standby database. This is because, by definition, Nologging or unrecoverable actions do not record information in the redo logs. Thus, the recovery activity on the standby database has no information to apply. To record these changes in the standby database, you must perform backup activities on the primary database just as you would in a nonstandby database environment to support the Nologging or unrecoverable activities. Tablespace backups can be used on the standby database in some circumstances. Two of the most common techniques to record unrecoverable or Nologging changes in the standby database are as follows:
Re-creating the standby database from a backup of the primary database
Backing up tablespaces affected by unrecoverable and Nologging in the primary database and recovering the tablespaces in the standby database
In general, it is best to avoid these unrecoverable and Nologging operations for permanent objects in the primary database unless absolutely necessary. In most cases, the maintenance work on the standby databases is greater than the extra work saved by using these options in the primary database. If the unrecoverable and Nologging operations are used for temporary objects that support maintenance operations, such as creating temporary or backup tables, then the unrecoverable and Nologging commands should not cause any additional maintenance on the standby database.
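As an illustration of the kind of statement that bypasses redo generation (the table names are hypothetical):

```sql
-- Rows loaded by this direct-load operation generate no redo,
-- so log apply can never reproduce them on the standby.
CREATE TABLE sales_backup NOLOGGING
AS SELECT * FROM sales;
```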
Summary
In this chapter, we have discussed and explained the standby database in detail. We also walked through the configuration of initialization parameters as well as creating, maintaining, and activating the standby database. We looked at some specific scenarios using the standby database. These scenarios included using managed recovery mode and setting the standby database to read-only mode. In addition, we described the concepts of propagating structural changes to a standby database, as well as the impact of unrecoverable and Nologging actions on the primary database. Each of these concepts requires special maintenance procedures to be performed on the primary and standby databases.
Key Terms

Before you take the exam, make sure you're familiar with the following terms:

managed recovery mode
read-only mode
manual recovery mode
Review Questions

1. What init.ora parameters must be the same in the primary database and the standby database?

   A. COMPATIBLE
   B. DB_BLOCK_BUFFERS
   C. DB_FILES
   D. SHARED_POOL

2. What init.ora parameters must be different in the primary database and the standby database?

   A. CONTROL_FILES
   B. BACKUP SET
   C. OS COPY
   D. IMPORT

3. Select all of the maintenance issues associated with maintaining the standby database.

   A. Converting data file names
   B. Backing up of the primary database
   C. Backing up of the standby database
   D. Determining the most recently applied archive log

4. What is the standby database vulnerable to once it is activated to the primary production database? Choose the best answer.

   A. Data file corruption
   B. Any failure requiring a recovery
   C. User access
   D. Long-running queries
5. What type of recovery automatically transfers archive logs to the standby database?

   A. Manual recovery
   B. Auto recovery
   C. Managed recovery
   D. Automatic recovery

6. What standby database option allows query access of the standby database?

   A. Query only
   B. Read-write mode
   C. Read-only mode
   D. Read mode

7. How do the archive logs get applied to the standby database when it is in read-only mode? Choose the best answer.

   A. Archive logs are applied serially, or one at a time.
   B. No archive logs get applied.
   C. Archive logs get applied without change.
   D. Archive logs get applied in parallel.
Answers to Review Questions

1. A and C. COMPATIBLE and DB_FILES must be the same in the primary and standby databases. The COMPATIBLE parameter ensures that the log files will work between the standby and primary databases. The DB_FILES parameter ensures that the number of data files between the primary and standby databases is consistent.

2. A. The control files on the standby and primary databases must be in different locations and have different names for the standby database to function.

3. A, C, and D. All three are maintenance issues associated with the standby database.

4. B. As soon as the standby database is activated and becomes the primary production database, it should be backed up. Since its main purpose is to support a failed production database, it will most likely be brought online immediately. Therefore, it is vulnerable to any failure requiring a recovery until a backup can be performed.

5. C. Managed recovery automatically transfers archive logs from the primary database to the standby database.

6. C. Read-only mode allows the standby database to be used as a query-only database.

7. B. Read-only mode requires the standby database to be taken out of recovery mode. Therefore, no archive logs get applied to the database while it is in read-only mode. The logs must be applied when the database is placed back in recovery mode.
Appendix A

Oracle8i Architecture & Administration Practice Exam
Practice Exam Questions

1. Choose two answers that are true. When a table is created with the NOLOGGING option,

   A. Direct-path loads using the SQL*Loader utility are not recorded in the redo log file.
   B. Direct-load inserts are not recorded in the redo log file.
   C. Inserts and updates to the table are not recorded in the redo log file.
   D. Conventional-path loads using the SQL*Loader utility are not recorded in the redo log file.

2. What happens if you issue the EXP command without specifying any arguments?

   A. The Export utility returns an error.
   B. The Export utility prompts for values to perform the export.
   C. The Export utility displays the Help screen.
   D. The Export utility performs a full database export.

3. The tablespace that contains rollback segments can be taken offline if

   A. All the rollback segments are online.
   B. All the rollback segments are offline.
   C. There are no active transactions.
   D. A tablespace with rollback segments cannot be taken offline.

4. Which action increases the size of the control file?

   A. Adding a new data file
   B. Adding a new redo log group
   C. Adding a new redo log member
   D. Adding a new tablespace
   E. None of the above
5. What is the value of the parameter LOGICAL_READS_PER_SESSION for the DEFAULT profile?

   A. 0
   B. DEFAULT
   C. UNLIMITED
   D. NULL

6. Choose two factors that increase the overhead in a data block.

   A. More rows stored in the block
   B. More free space in the block
   C. A higher INITRANS value
   D. A higher MAXTRANS value

7. To grant or revoke a system privilege, you must have (choose two)

   A. GRANT ANY system privilege
   B. GRANT ANY PRIVILEGE system privilege
   C. Granted the privilege with the WITH GRANT OPTION
   D. Granted the privilege with the WITH ADMIN OPTION

8. Which area holds the parsed (compiled) SQL statements in memory?

   A. Buffer cache
   B. Library cache
   C. Dictionary cache
   D. Log buffer

9. What information is available in the view V$PWFILE_USERS?

   A. Users granted the DBA privilege
   B. Users with the SYSDBA or SYSOPER privilege
   C. The encrypted password for SYS and INTERNAL
   D. Options A and B
10. Which procedure in the DBMS_TTS package is used to verify that the tablespaces are self-contained?

   A. TRANSPORT_SET_VIOLATIONS
   B. TRANSPORT_SET_CHECK
   C. CHECK_SELF_CONTAINED
   D. CHECK_VIOLATIONS

11. Bitmap indexes are best suited for columns with

   A. High selectivity
   B. Low selectivity
   C. High inserts
   D. High updates

12. When you query the DBA_DATA_FILES view, you notice that certain filenames are MISSINGnnnn (where nnnn is a number). What would have caused this?

   A. The data files were dropped.
   B. The control file was re-created without these files.
   C. The data dictionary is corrupted.
   D. The data files are not accessible.

13. The database should be started in which mode to perform maintenance operations?

   A. READ ONLY
   B. RESTRICT
   C. MOUNT
   D. OPEN
14. Oracle cannot find a free extent big enough in a tablespace to satisfy a new extent request. What action is performed next?

   A. Oracle attempts to extend a data file belonging to the tablespace, if the auto-extensible feature is enabled.
   B. Oracle returns an error to the user.
   C. Oracle moves the segment to a new tablespace.
   D. Oracle attempts to coalesce adjacent free extents to satisfy the new extent request.

15. Which data dictionary view can be queried to see the objects that are accessible to the user?

   A. USER_OBJECTS
   B. ALL_OBJECTS
   C. DBA_TABLES
   D. ALL_ALL_OBJECTS

16. What schema objects can PUBLIC own? Choose two.

   A. Database links
   B. Rollback segments
   C. Synonyms
   D. Tables

17. Which environment variable specifies the location of NLS data files?

   A. ORA_NLS
   B. ORA_NLS33
   C. NLS_LANG
   D. ORA_NLS81
18. The ALTER TABLE statement cannot be used to

   A. Move the table from one tablespace to another
   B. Change the initial extent size of the table
   C. Rename the table
   D. Disable triggers
   E. Resize the table below the HWM

19. Choose two correct answers. The database instance crashes when

   A. A redo log member in the current log group is not accessible.
   B. All members of the next redo log group are not available at a log switch.
   C. The next redo log group is not archived (database is in ARCHIVELOG mode).
   D. All members of the current redo log group become inaccessible.

20. After creating the database, how do you change its name?

   A. Change the DB_NAME parameter in the initialization parameter file
   B. Re-create the control file
   C. Change the ORACLE_SID environment variable to the new database name
   D. Set the DB_NAME parameter and use the ALTER DATABASE RENAME command

21. What happens if you add one more filename to the CONTROL_FILES parameter in the initialization file when the database is up and running?

   A. The instance crashes.
   B. When the database shuts down, Oracle creates the new control file.
   C. Oracle creates the new control file immediately.
   D. You get an error when the database shuts down.
   E. None of the above is true.
22. During instance recovery, committed transactions that are not yet written to the data files are read from ___________.

   A. Rollback segments
   B. Online redo log files
   C. Archived log files
   D. Temporary segments

23. When using a password file for authentication, which parameter must be set in the initialization parameter file?

   A. OS_AUTHENT_PREFIX
   B. REMOTE_OS_AUTHENT
   C. REMOTE_LOGIN_PASSWORDFILE
   D. LOGIN_PASSWORD_FILE

24. To drop a log file group, you must

   A. Re-create the control file.
   B. Use the ALTER DATABASE command with the DROP LOGFILE clause.
   C. Remove the files using an OS command; Oracle updates the control file automatically when it cannot find the redo log files on the disk.
   D. Restart the database with the LOG_FILES parameter changed.

25. Which two shutdown options wait for the transactions to complete?

   A. NORMAL
   B. IMMEDIATE
   C. TRANSACTIONAL
   D. NOMOUNT
26. Which storage parameter cannot be specified when you create a tablespace?

   A. INITIAL
   B. MAXEXTENTS
   C. PCTFREE
   D. MINEXTENTS

27. Choose the best answer. In temporary tablespaces, a sort segment is created

   A. For an instance
   B. For every sort operation
   C. For a session
   D. Per user

28. What action is not performed by the CREATE DATABASE statement?

   A. Creating the SYSTEM tablespace
   B. Creating the user SYS
   C. Creating the V$ views
   D. Creating the DUAL table
   E. Creating indexes on dictionary tables

29. When sizing extents, the best performance for a full table scan can be achieved if _____.

   A. The extent size is a multiple of the database block size.
   B. The data file size of the tablespace is a multiple of the extent size, plus one block for the overhead.
   C. The extent size is a multiple of the DB_FILE_MULTIBLOCK_READ_COUNT parameter.
   D. Oracle reads data from data files in units of database blocks, not extents. Therefore, the extent size does not affect the performance.
30. Which parameter does not affect the size of the SGA?

   A. SHARED_POOL_SIZE
   B. LOG_BUFFER
   C. BUFFER_POOL_KEEP
   D. DB_BLOCK_BUFFERS

31. When a row is updated, what information is written to the redo logs?

   A. The changes to the data block.
   B. The changes to the rollback block.
   C. The changes to the data block and the rollback block.
   D. No changes are written to the redo log until the transaction is committed.

32. Which export parameter is useful for reorganizing tables by combining all the extents into one large extent?

   A. ROWS
   B. INITIAL
   C. COMPRESS
   D. BUFFER

33. Which option of the ANALYZE TABLE command can be used to obtain the exact count of chained rows in a table?

   A. ESTIMATE STATISTICS
   B. VALIDATE STRUCTURE
   C. COMPUTE STATISTICS
   D. LIST CHAINED ROWS
34. Which data dictionary view can be queried to see whether a trigger owned by SCOTT is enabled?

   A. DBA_TRIGGERS
   B. USER_TRIGGERS
   C. DBA_OBJECTS
   D. ALL_OBJECTS

35. Which step is not required to drop a control file?

   A. Shut down the database
   B. Start up the database
   C. ALTER DATABASE DROP CONTROLFILE
   D. Change the initialization parameter file

36. When you create a profile, if you do not specify values for a certain parameter, what will be the value used for that parameter?

   A. UNLIMITED
   B. 0
   C. DEFAULT
   D. NULL

37. Which data dictionary view has information about the users currently connected to the database?

   A. V$DATABASE
   B. DBA_USERS
   C. V$SESSION
   D. USER_USERS
38. Which process processes user requests?

   A. Server process
   B. PMON
   C. Dispatcher process
   D. SMON

39. When is a block removed from the free list?

   A. The block is more than PCTUSED percent full.
   B. The block is less than PCTUSED percent free.
   C. The block is more than PCTFREE percent full.
   D. The block is less than PCTFREE percent free.

40. Which event is not a valid database event that can fire a trigger?

   A. AUDIT
   B. DROP
   C. COMMENT
   D. ROLLBACK

41. What privilege does the DELETE_CATALOG_ROLE provide?

   A. To update or replace the DBMS packages
   B. To drop and create the dictionary views
   C. To delete rows from dictionary base tables
   D. To delete rows from the SYS.AUD$ table

42. To enable direct-load insert, the table must

   A. Be empty
   B. Be defined with NOLOGGING
   C. Not have any triggers defined
   D. Be defined with PARALLEL
43. Which two statements can be used to determine whether the database is in ARCHIVELOG mode?

   A. SELECT * FROM V$DATABASE;
   B. SELECT * FROM V$PARAMETER;
   C. ARCHIVE LOG LIST
   D. ARCHIVELOG LIST

44. How do you relocate a tablespace?

   A. Use TRANSPORT_TABLESPACE=Y in exports and imports and copy the data files.
   B. Re-create the control file with the new location of tablespace file names.
   C. Copy the files to the new location and update the control file by using ALTER TABLESPACE.
   D. Tablespaces cannot be relocated.

45. Which is not a valid way to specify an NLS parameter?

   A. Environment variables
   B. Initialization parameters
   C. Using ALTER SYSTEM
   D. SQL function parameter

46. A control file is not required when you

   A. Start the instance.
   B. Open the database.
   C. Mount the database.
   D. The control file stores information about the structure of the database, so it is required at all times.
47. Which tablespace offline option requires media recovery?

   A. NORMAL
   B. IMMEDIATE
   C. TEMPORARY

48. What is the maximum number of online redo log groups you can have in a database?

   A. The value of the MAXLOGFILES parameter specified in the CREATE DATABASE command
   B. The value of the MAXLOGMEMBERS parameter specified in the CREATE DATABASE command
   C. 32
   D. The value of the MAXLOGGROUPS parameter specified in the CREATE DATABASE command

49. What type of auditing option is not available in Oracle?

   A. Auditing successful and unsuccessful statements
   B. Auditing the privileges used by users
   C. Auditing the statements issued by users
   D. Auditing the rows updated or deleted by users
   E. Auditing actions of a specific user

50. Choose the best answer regarding the following two statements.

   Statement A: SET CONSTRAINTS ALL DEFERRED
   Statement B: ALTER SESSION SET CONSTRAINTS DEFERRED

   A. Statements A and B have the same result.
   B. Statement A is applicable only for the transaction, whereas statement B is applicable to the entire session.
   C. Statements A and B are applicable only to a transaction.
   D. Statement B defers checking only foreign key constraints, whereas statement A defers checking all constraints.
Answers to Practice Exam 1. A and B. NOLOGGING is useful when loading bulk data or creating a
table with a large number of rows. The speed of the operation is increased when no redo entries are generated. See Chapter 7 for more information. 2. B. When the Export and Import utilities are invoked without speci-
fying any arguments, the utility prompts for values. The SQL*Loader utility displays the Help screen when invoked without any arguments. See Chapter 9 for more information. 3. B. To take a tablespace offline, it must not have any active rollback
segments. See Chapter 5 for more information. 4. E. The control file size is determined by the values specified for the
MAXDATAFILES, MAXLOGFILES, MAXLOGMEMBERS, MAXLOGHISTORY, and MAXINSTANCES parameters in the CREATE DATABASE statement or the CREATE CONTROLFILE statement. Oracle pre-allocates space in the control file based on the values of these parameters. See Chapter 4 for more information. 5. C. Initially, the DEFAULT profile has UNLIMTED value for all parame-
ters; the DEFAULT profile can be altered using the ALTER PROFILE statement. See Chapter 8 for more information. 6. A and C. The block overhead consists of the common and variable
header, the table directory, and the row directory. A higher INITRANS increases the size of the variable header, and more rows stored in the block increases the space used for the row directory. See Chapter 6 for more information. 7. B and D. To grant a system privilege to another user, you must have
the privilege with the WITH ADMIN OPTION; to grant an object privilege to another user, you must have the privilege with the WITH GRANT OPTION. See Chapter 8 for more information. 8. B. The library cache has the parsed SQL statements and execution
plans along with the SQL text. See Chapter 1 for more information.
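The privilege-passing distinction in answer 7 can be sketched with a hedged example (the user and table names here are illustrative only, not from the exam):

```sql
-- System privilege: WITH ADMIN OPTION lets the grantee re-grant it.
GRANT CREATE TABLE TO scott WITH ADMIN OPTION;

-- Object privilege: WITH GRANT OPTION lets the grantee re-grant it.
GRANT SELECT ON hr.employees TO scott WITH GRANT OPTION;
```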
9. B. The V$PWFILE_USERS view shows the users who have been granted
SYSDBA or SYSOPER privilege. See Chapter 2 for more information. 10. B. The TRANSPORT_SET_CHECK procedure is used to verify that the
tablespaces are self-contained. The violations can be queried from the TRANSPORT_SET_VIOLATIONS view. See Chapter 9 for more information. 11. B. Bitmap indexes are suitable for columns with low selectivity (car-
dinality) and infrequent updates. They are best suited for data warehouse applications. See Chapter 7 for more information. 12. B. Oracle puts placeholders in the control file if a data file recorded in the
data dictionary is not listed in the CREATE CONTROLFILE statement. See Chapter 5 for more information. 13. B. Starting the database in restricted mode allows only users with
the RESTRICTED SESSION privilege to connect to the database. This mode is ideal for making a consistent logical backup or for performing data or structure maintenance operations. See Chapter 2 for more information. 14. D. When allocating extents, Oracle looks for extent sizes that match
the requested size (one block is added if the extent size is more than five blocks). If it cannot find one, Oracle searches for extents that are bigger than the new extent. If Oracle cannot find one, it coalesces the tablespace and searches again. If it still cannot find an extent large enough, Oracle tries to extend the data files if the auto-extensible feature is enabled and searches again. An error is returned if all of the preceding steps fail. See Chapter 6 for more information. 15. B. The dictionary views that begin with DBA_ contain information
about all the objects; the views that begin with ALL_ contain information about the objects that are accessible to the user; and the views that begin with USER_ contain information about the objects owned by the user. The USER_ views do not have an OWNER column. See Chapter 3 for more information.
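The three families of dictionary views described in answer 15 can be compared with a quick sketch:

```sql
-- DBA_ views: every object in the database; include an OWNER column.
SELECT owner, table_name FROM dba_tables;

-- ALL_ views: objects accessible to the current user; include OWNER.
SELECT owner, table_name FROM all_tables;

-- USER_ views: only the user's own objects; no OWNER column.
SELECT table_name FROM user_tables;
```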
16. A and C. The PUBLIC user group can own database links and syn-
onyms. Even though you can create a PUBLIC ROLLBACK SEGMENT, all rollback segments are owned by SYS. See Chapter 8 for more information. 17. B. The ORA_NLS33 environment variable should be specified when
using any character set other than US7ASCII. See Chapter 9 for more information. 18. E. The MOVE clause can be used to specify new storage parameters for
the table, including INITIAL and MINEXTENTS. To resize the table below the HWM, you must first truncate the table. See Chapter 7 for more information. 19. B and D. When all the redo log group members are not available due
to media failure or other reasons, Oracle aborts the instance. Media recovery may be required if you cannot recover from the disk error. See Chapter 4 for more information. 20. B. To change the database name, the control file must be re-created.
Specifying the INSTANCE_NAME parameter and restarting the instance after setting up the value of ORACLE_SID can change the instance name. See Chapter 3 for more information. 21. E. Nothing happens to the current database or instance. You may get
an error when the database starts the next time if you have not copied an existing control file to the new location after the database shut down. See Chapter 4 for more information. 22. B. The redo log buffer records all changes to the database. When a
user commits a transaction, Oracle writes the redo log buffer to the online redo log files. The changes are not immediately written to the data files to improve I/O performance. During instance start-up after a crash, Oracle reads the blocks from the online redo log files and applies the changes to the data files; this is known as rolling forward. See Chapter 1 for more information. 23. C. To make use of the password file authentication method, the
parameter REMOTE_LOGIN_PASSWORDFILE must be set to SHARED or EXCLUSIVE. EXCLUSIVE specifies that the password file can be used for only one database. See Chapter 2 for more information.
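Password file authentication (answers 9 and 23) is enabled with an initialization parameter and can be verified from the dictionary; a minimal sketch:

```sql
-- init.ora: restrict the password file to this one database
-- REMOTE_LOGIN_PASSWORDFILE = EXCLUSIVE

-- List users granted SYSDBA or SYSOPER through the password file:
SELECT username, sysdba, sysoper FROM v$pwfile_users;
```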
24. B. When you use the DROP LOGFILE clause in the ALTER DATABASE
command, Oracle updates the control file. You must use an OS command to remove the files from the disk. See Chapter 4 for more information. 25. A and C. SHUTDOWN NORMAL waits for all users to disconnect from the
database after ending their transactions. SHUTDOWN TRANSACTIONAL waits for all active transactions to either commit or roll back. See Chapter 2 for more information. 26. C. PCTFREE and PCTUSED are parameters that manage the free space
in the data blocks. The DEFAULT STORAGE clause allows only the extent sizing parameters such as INITIAL, NEXT, PCTINCREASE, MINEXTENTS, and MAXEXTENTS. See Chapter 5 for more information. 27. A. All sort operations in a tablespace for an instance share the same
sort segment. The sort segment is created when the first statement that uses a temporary tablespace is issued. Sort segment(s) in the temporary tablespace are removed only when the instance is shut down. See Chapter 5 for more information. 28. C. The V$ views are created in the database when you execute the
script catalog.sql. See Chapter 3 for more information. 29. C. The DB_FILE_MULTIBLOCK_READ_COUNT parameter specifies the
number of blocks to read from the data file in a single I/O operation for a full table scan. If the extents are sized as a multiple of this parameter, the I/O pipe is well utilized. See Chapter 6 for more information. 30. C. The KEEP buffer pool and the RECYCLE buffer pool are areas inside
the database buffer cache. The SGA is calculated as SHARED_POOL_ SIZE + LOG_BUFFER + (DB_BLOCK_SIZE × DB_BLOCK_BUFFERS). See Chapter 1 for more information. 31. C. All changes to the data blocks, including rollback segment blocks,
are recorded in the redo log file. The rollback entries are written to the redo log file to ensure that the transaction can be rolled back in case of a system crash and the transaction has not yet been committed. See Chapter 6 for more information.
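The scope restriction in answer 26 can be illustrated with a hedged example (the file path and sizes are arbitrary):

```sql
-- DEFAULT STORAGE accepts only extent-sizing parameters;
-- PCTFREE or PCTUSED would raise an error here.
CREATE TABLESPACE app_data
  DATAFILE '/u01/oradata/brdb/app_data01.dbf' SIZE 100M
  DEFAULT STORAGE (INITIAL 64K NEXT 64K PCTINCREASE 0
                   MINEXTENTS 1 MAXEXTENTS 100);
```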
32. C. The parameter COMPRESS=Y creates a single big extent to accom-
modate all the table’s data into one extent. See Chapter 9 for more information. 33. C. The COMPUTE STATISTICS option should be used to get the exact
count of chained rows. The number of chained rows can be queried from the DBA_TABLES view. See Chapter 7 for more information. 34. A. The DBA_TRIGGERS view shows information about all the triggers
in the database; the STATUS column in the DBA_TRIGGERS view shows whether the trigger is enabled or disabled. The STATUS column in the DBA_OBJECTS view shows whether the trigger is valid. See Chapter 3 for more information. 35. C. To drop a control file, you need to shut down the database,
change the parameter CONTROL_FILES in the initialization parameter file to remove the control file name, and then start up the database. See Chapter 4 for more information. 36. C. A parameter that is not explicitly defined will have DEFAULT as its
value, which means that the value will be taken from the DEFAULT profile. See Chapter 8 for more information. 37. C. The V$SESSION view has information about the sessions con-
nected to the Oracle database. The view contains information such as the username, SID, SERIAL#, logon time, status, etc. See Chapter 2 for more information. 38. A. In a dedicated server configuration, the server process communi-
cates directly with the user process and processes the user requests. In a multithreaded configuration, the dispatcher process communicates with the user process. The dispatcher process communicates the requests to the shared server process, which processes the request and communicates the result to the dispatcher process. See Chapter 1 for more information. 39. D. PCTFREE specifies the amount of free space available for updates
to rows in the block. The block is removed from the free list when the free space falls below PCTFREE percent. The block is again added to the free list when the used space in the block falls below PCTUSED percent (due to updates or deletes). See Chapter 6 for more information.
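The block-level parameters in answer 39 are set per segment, as in this illustrative table definition (table and column names are assumptions):

```sql
CREATE TABLE orders (
  order_id NUMBER,
  status   VARCHAR2(10)
)
PCTFREE 20   -- reserve 20% of each block for updates to existing rows
PCTUSED 40;  -- block returns to the free list below 40% usage
```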
40. D. ROLLBACK is not a valid database event. The DDL triggers can be
created to fire before or after the operation. See Chapter 3 for more information. 41. D. The SYS.AUD$ table is used to record auditing information. Over
time, this table can become large if auditing is enabled. The DELETE_ CATALOG_ROLE role enables the user to delete rows from the SYS.AUD$ table. This role also provides privileges to delete all dictionary packages. See Chapter 8 for more information. 42. C. Oracle uses direct-load insert for INSERT INTO … SELECT state-
ments only if the table does not have any triggers or referential integrity constraints. See Chapter 9 for more information. 43. A and C. The ARCHIVELOG/NOARCHIVELOG mode of database opera-
tion can be determined by using the ARCHIVE LOG LIST command or by querying the V$DATABASE view. See Chapter 1 for more information. 44. C. To relocate a tablespace, you must take the tablespace offline,
copy or move the data files belonging to the tablespace to the new location, and issue ALTER TABLESPACE … RENAME DATAFILE '…' TO '…' for each data file. See Chapter 5 for more information. 45. C. NLS parameters can be specified for a session by using the ALTER
SESSION command. See Chapter 9 for more information. 46. A. To start an instance, a control file is not needed. The instance con-
sists of the shared memory area and the background processes. When creating the control file, you should start the instance (using the STARTUP NOMOUNT command). See Chapter 1 for more information. 47. B. The IMMEDIATE option requires media recovery. When a
tablespace is taken offline with the IMMEDIATE option, Oracle does not perform a checkpoint on the tablespace’s data files. See Chapter 5 for more information. 48. A. The MAXLOGFILES and MAXLOGMEMBERS parameters specify the
maximum number of log groups and log group members, respectively, allowed for the database. See Chapter 4 for more information.
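The relocation steps in answer 44 look roughly like this (paths are illustrative; the copy itself happens at the operating system level):

```sql
ALTER TABLESPACE users OFFLINE;
-- Copy or move the data file with operating system commands, then:
ALTER TABLESPACE users
  RENAME DATAFILE '/u01/oradata/users01.dbf'
  TO '/u02/oradata/users01.dbf';
ALTER TABLESPACE users ONLINE;
```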
49. D. Auditing does not show the rows inserted, updated, or deleted by
a user, but you can identify who performed the operation and what privilege was used. See Chapter 8 for more information. 50. B. The SET CONSTRAINTS statement is effective only for the current
transaction. To verify that the rows satisfy the constraints, you can issue SET CONSTRAINTS ALL IMMEDIATE; an error is returned if the rows do not satisfy the constraints. If you issue a COMMIT and the rows fail the constraint check, the transaction is rolled back immediately. See Chapter 7 for more information.
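The deferred-constraint behavior in answer 50 can be sketched as:

```sql
SET CONSTRAINTS ALL DEFERRED;   -- effective for this transaction only
-- DML that temporarily violates constraints can run here.
SET CONSTRAINTS ALL IMMEDIATE;  -- checks now; violations raise an error
COMMIT;                         -- a failed check at commit rolls back
```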
Appendix B
Oracle8i Backup & Recovery Practice Exam
Practice Exam Questions
1. What is usually the most serious type of database failure for a DBA? A. Instance failure B. Media failure C. User process D. Statement failure 2. What type of failure is remedied by the PMON process rolling back a
transaction? A. User process failure B. Media failure C. Instance failure D. Statement failure 3. What is the main purpose of database checkpoints? A. Decrease free memory buffers in the SGA B. Write non-modified database buffers to the database files C. Write modified database buffers to the database files D. Increase free memory buffers in the SGA 4. What Oracle process is responsible for performing the roll-forward in
instance recovery? A. PMON B. LGWR C. SMON D. DBWn
5. What database configuration lets you recover to a specific system
change number (SCN) during recovery? A. NOARCHIVELOG mode B. ARCHIVELOG mode C. EXPORT D. IMPORT 6. What statement lets you know that the database is in ARCHIVELOG mode? A. Cold backups can be performed. B. Offline backups can be performed. C. Online backups can be performed. D. Exports can be performed. 7. Your database has a media failure, losing a nonsystem tablespace con-
taining necessary data. The database is configured for NOARCHIVELOG mode. What must be done to restore the nonsystem tablespace? A. Restore the lost nonsystem tablespace B. Restore the lost nonsystem tablespace and apply archive logs C. Restore all tablespaces and apply archive logs D. Restore the entire database 8. You want to make a backup of a database that is configured for
NOARCHIVELOG mode. What must be done? A. Back up all data files B. Back up all redo logs and data files C. Back up all control files, redo logs, and data files D. Back up all control files and data files
9. Which mode must a tablespace be in to perform a hot backup? A. ARCHIVELOG B. NOARCHIVELOG C. NOMOUNT D. BACKUP 10. You alter a tablespace from read-write to read-only. Which database
file gets modified to reflect the change? A. Offline redo log B. Archive logs C. init.ora D. Control file 11. What is the implication of using Nologging mode? A. Changes will not be written to the redo logs. B. Changes will not be written to the archive logs but will be written
to the redo logs. C. Changes will not be written to the data files. D. Changes will not be written to the control files. 12. Which view displays which data files are actively being backed up? A. V$DATAFILE B. V$ACTIVE_BACKUP C. V$BACKUP D. V$TABLESPACE
13. What command could you issue to recover all the archive logs with
limited intervention? A. RECOVER DATABASE AUTOMATIC B. SET AUTORECOVERY ON C. SET AUTORECOVER TRUE D. RECOVER DATAFILE AUTOMATIC 14. Due to a media failure of your database, which type of shutdown will
most likely be necessary to prepare for the restore? A. SHUTDOWN IMMEDIATE B. SHUTDOWN NORMAL C. SHUTDOWN TRANSACTIONAL D. SHUTDOWN ABORT 15. What backup type will enable you to recover to the point of failure? A. Import without archive logs B. Operating system backup with archiving C. Operating system backup without archiving D. Export with archive logs 16. If media failure occurred, under which scenario could you restore only
part of the data files of a database operating in NOARCHIVELOG mode? A. If the archive logs were not overwritten since the last backup B. If the online redo logs were not overwritten since the last backup C. If the control file was not overwritten since the last backup D. If the init.ora file was unchanged since the last backup
17. What example would force an incomplete recovery situation? A. A data file is lost or destroyed. B. All data files are lost or destroyed. C. All control files are lost or destroyed. D. All archive logs are lost or destroyed. 18. What type of recovery would you perform if a user drops a table? A. Complete B. Data file C. Incomplete D. Tablespace 19. What recovery operations enable you to perform a recovery prior to
the point of failure? Choose all that apply. A. Cancel-based B. Time-based C. Control-based D. Change-based E. Log-based 20. What clause is required to open a database after incomplete recovery? A. ALTER DATABASE OPEN IMMEDIATE B. ALTER DATABASE MOUNT EXCLUSIVE C. ALTER DATABASE OPEN RESETLOGS D. ALTER DATABASE MOUNT PARALLEL
21. What parameter causes the Import utility to import the transportable
tablespace’s metadata from the export file? A. IGNORE B. TRANSPORT_TABLESPACE C. BUFFER_SIZE D. ARRAYSIZE 22. What are the primary tasks of the Export and Import utilities? Choose
all that apply. A. Database reorganization B. Point-in-time recovery C. Complete recovery D. Historical archive at a point in time 23. What is the first task performed by the Import utility when importing
a dump file? A. Create sequence B. Create procedures C. Grant privileges D. Create tables 24. Which parameter overlooks creation errors if an object already exists
during an import? A. INCTYPE B. PARFILE C. IGNORE D. CREATE
25. How can you recover a database that has had many transactions since
the last backup? A. Import the transactions separately B. Parallel recovery of the redo logs C. Recover multiple data files at the same time D. Import the transactions with the transportable tablespace option 26. What is the best way to protect your control file from being destroyed? A. Have only one control file on mirrored disks B. Have multiplexed control files on different mirrored disks C. Have multiplexed control files on different disks on non-fault-
tolerant disks D. Don’t have control files 27. What information is required to re-create a control file with the
CREATE CONTROLFILE command? A. The location of the data files and redo logs B. The name of the data files and redo logs C. The name and location of the data files and redo logs D. The size of the data files 28. What must be done to restore a read-only tablespace that was backed
up when the tablespace was read-only? A. Restore the data file B. Restore and recover the data file C. Recover the data file D. Recover the tablespace
29. Which task will the following recovery command accomplish if the
database is in ARCHIVELOG mode? SVRMGR> recover datafile '/db01/ORACLE/brdb/users01.dbf';
A. Recover the associated tablespace USERS B. Recover the data file using archive logs if necessary C. Recover the data file past the point of failure D. Restore the associated tablespace 30. What process checks for corrupt blocks each time data and index
blocks are changed? A. DBMS_REPAIR B. EXPORT C. DB_BLOCK_CHECKING D. DBVERIFY 31. Which Oracle utility could you use to determine the user responsible
for deleting necessary data? Assume auditing is not enabled. A. DBVERIFY. B. Export. C. LogMiner. D. This is not possible in Oracle8i. 32. What block size does DBVERIFY use by default? A. 2KB B. 4KB C. 8KB D. 16KB
33. What happens to the standby database if the primary database is
opened with the RESETLOGS option? A. The standby database is activated automatically. B. The standby database is activated manually. C. The standby database is invalidated. D. Nothing happens to the standby database. 34. What utility can you use to verify corruption of both the backup and
online data files? A. DBMS_REPAIR B. DBVERIFY C. ANALYZE D. DB_CHECKSUM 35. What is the location of the trace file generated from the Oracle PMON
process? A. USER_DUMP_DEST B. BACKGROUND_DUMP_DEST C. CORE_DUMP_DEST D. ARCH_DUMP_DEST 36. Which RMAN component requires a server process to be initiated on
the target database? A. SVRMGR B. Channel C. RMAN D. Recovery Manager initiated from Enterprise Manager
37. What command would you use to connect to RMAN without the
recovery catalog? A. RMAN TARGET SYSTEM/MANAGER NOCATALOG B. RMAN NOCATALOG SYSTEM/MANAGER @TARGET C. RMAN SYSTEM/MANAGER NOCATALOG D. RMAN SYSTEM/MANAGER TARGET 38. What must be done before executing RMAN to connect to the target
database? A. Set ORACLE_SID to target database name B. Shut down dedicated server process C. Start target database nomount D. Start up target database 39. What must be allocated before you can begin any backup or recovery
commands? A. Channel B. Server session C. DBMS pipe D. Dedicate connection 40. Why would you use a recovery catalog with RMAN? A. To improve performance of the RMAN backup process B. To improve performance of the RMAN restore process C. To store long-term information D. To reduce the amount of disk space
41. What action would cause you to resynch the recovery catalog? A. Drop a table B. Truncate a table C. Add a new tablespace D. Add new users 42. What is a Recovery Manager backup set? Choose the best answer. A. A set of OS database files in an OS format B. At least one file stored in an RMAN format C. No more than one file stored in an RMAN format D. Only control files and data files stored in an RMAN format 43. Why would you use the UNCATALOG command? A. Undo a database resynch B. Remove a database reset C. Undo the most recent database resynch only D. Remove the reference to a file that is no longer needed 44. Which RMAN command will combine incremental, cumulative,
archives, and redo logs to synchronize the restored file? A. RESTORE B. RECOVER C. REGISTER D. COMBINE
45. The primary database server is down due to a failure of some sort.
Which command would you perform to start using the standby database? A. ACTIVATE STANDBY DATABASE IMMEDIATE; B. ALTER DATABASE ACTIVATE STANDBY DATABASE; C. ALTER DATABASE ENABLE STANDBY DATABASE; D. RECOVER STANDBY DATABASE ENABLE; 46. An incomplete recovery is performed using Recovery Manager. What
is the last command that must be executed? A. RMAN> SQL “ALTER DATABASE OPEN”; B. RMAN> ALTER DATABASE OPEN; C. RMAN> SQL “ALTER DATABASE OPEN RESETLOGS”; D. RMAN> ALTER DATABASE OPEN RESETLOGS; 47. After performing an incomplete recovery, what command must you
perform in RMAN to register the new incarnation of the database? A. REGISTER DATABASE B. RESET DATABASE C. RESYNCH DATABASE D. INCARNATE DATABASE 48. What state must the database be in for RMAN to restore data files if
the database is not in ARCHIVELOG mode? Choose all that apply. A. MOUNT B. NOMOUNT C. Opened D. Closed E. RECOVER
49. You are getting ready to perform an incomplete recovery on a data-
base and you notice that the database is in a mounted state. What state must the database be in for RMAN to begin recovery? A. MOUNT B. Opened C. Closed D. NOMOUNT 50. What does the following command perform? RMAN> recover managed standby database timeout 30;
A. The standby database will wait 30 seconds before initiating
recovery. B. The standby database will wait 30 minutes before initiating
recovery. C. The standby database will wait 30 minutes for archive logs before
timing out. D. The standby database will activate the recovery process on the pri-
mary server in 30 minutes.
Answers to Practice Exam
1. B. Media failure is usually the most serious type of failure because it
almost always requires the DBA to perform some form of recovery. See Chapter 10 for more information. 2. A. User process failure is corrected by the PMON process, undoing, or
rolling back, the incomplete transaction and releasing the resources held by the transaction. See Chapter 11 for more information. 3. C. The main purpose of the database checkpoint is to write modified
buffers to the data files. See Chapter 11 for more information. 4. C. The SMON process is responsible for applying all of the commit-
ted or uncommitted changes recorded in the online redo logs. See Chapter 11 for more information. 5. B. ARCHIVELOG mode enables incomplete recovery to be based on
cancel, time, or change (SCN). See Chapter 11 for more information. 6. C. Online (hot) backups can be performed only when the database is
configured in ARCHIVELOG mode. See Chapter 12 for more information. 7. D. You must restore the entire database because the database is con-
figured for NOARCHIVELOG mode. Thus, no online backups could be performed, only complete offline database backups. See Chapter 12 for more information. 8. C. You must back up the entire database if you are operating the data-
base in NOARCHIVELOG mode. The entire database includes data files, redo logs, and control files. See Chapter 12 for more information. 9. D. You must place a tablespace in BACKUP mode by issuing the com-
mand ALTER TABLESPACE BEGIN BACKUP. The database must be in ARCHIVELOG mode; individual tablespaces cannot be in ARCHIVELOG mode. See Chapter 12 for more information.
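The hot backup sequence implied by answers 6 through 9 can be sketched as follows (the tablespace name is illustrative):

```sql
-- Confirm the database is in ARCHIVELOG mode:
SELECT log_mode FROM v$database;

ALTER TABLESPACE users BEGIN BACKUP;
-- Copy the tablespace's data files with operating system commands.
ALTER TABLESPACE users END BACKUP;
```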
10. D. The control file is modified to record information indicating
whether a tablespace is read-write or read-only. See Chapter 13 for more information. 11. A. Changes will not be written to any redo logs. Thus, the changes
are vulnerable to loss until the data files are backed up. See Chapter 13 for more information. 12. C. The V$BACKUP view displays which data files are currently being
backed up by indicating a status of ACTIVE. The status is NOT ACTIVE when the data files are not being backed up. See Chapter 13 for more information. 13. B. The SET AUTORECOVERY ON command will apply all available
archive logs to the database that is being recovered. The DBA will not be required to specify individual redo logs. See Chapter 14 for more information. 14. D. The SHUTDOWN ABORT will most likely be necessary to shut down
the database, which has lost one or more database files due to media failure. See Chapter 14 for more information. 15. B. The only backup that will enable you to recover up to the point of
failure is a backup including archive logs. This could be an operating system backup with archive logs or an RMAN backup with archive logs. See Chapter 14 for more information. 16. B. If a database is operating in NOARCHIVELOG mode, and a media
failure destroys a data file, under most circumstances the complete database will need to be restored. However, if the online redo logs have not been overwritten or cycled through, then the destroyed data file can be replaced and the online redo logs can be applied to the database. See Chapter 14 for more information. 17. C. If all control files are lost or destroyed, an incomplete recovery
will need to be performed to rebuild the control file with the CREATE CONTROLFILE command, ideally generated previously from the ALTER DATABASE BACKUP CONTROLFILE TO TRACE command. See Chapter 15 for more information.
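The trace backup mentioned in answer 17 takes a single statement; a minimal sketch:

```sql
-- Writes a script containing a CREATE CONTROLFILE command to a
-- trace file in the USER_DUMP_DEST directory:
ALTER DATABASE BACKUP CONTROLFILE TO TRACE;
```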
18. C. An incomplete recovery could be performed to recover the data-
base to a point in the past just before the user error occurred. See Chapter 15 for more information. 19. A, B, and D. Incomplete recovery enables you to recover to a point
prior to the failure of the database or to a point in the past. There are three types of incomplete recovery: cancel-, time-, and change-based recoveries. See Chapter 15 for more information. 20. C. The ALTER DATABASE OPEN RESETLOGS must be performed after
every incomplete recovery. This protects against using the redo logs from before recovery by resetting the redo log sequence number back to 1. See Chapter 15 for more information. 21. B. The TRANSPORT_TABLESPACE keyword is responsible for extracting
the metadata from the export file. See Chapter 16 for more information. 22. A and D. The primary purpose of the Export and Import utilities is to
reorganize database structures and make historical archives of the database at a certain point in time. The main purpose is not to serve as a backup method. See Chapter 16 for more information. 23. D. Creating tables is the first task performed by the Import utility.
See Chapter 16 for more information. 24. C. The IGNORE=Y parameter will ignore the errors associated with the
creation of objects that may already exist. See Chapter 16 for more information. 25. B. Parallel recovery of redo log files enables you to improve the per-
formance of the recovery process by recovering multiple redo logs at the same time. See Chapter 17 for more information. 26. B. Multiplexed control files on mirrored disks offer the best protec-
tion for control files. The mirrored disk protects against hardware failure, and the multiplexing of control files assures that multiple copies are stored in different locations. Each location should be on a separate disk or volume of disk than the other control files. See Chapter 17 for more information.
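The multiplexing recommended in answer 26 is configured in the parameter file; the paths here are illustrative:

```sql
-- init.ora: multiplexed control files on separate disks or volumes
CONTROL_FILES = ('/u01/oradata/brdb/control01.ctl',
                 '/u02/oradata/brdb/control02.ctl',
                 '/u03/oradata/brdb/control03.ctl')
```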
27. C. The names and locations of the data files and redo logs are
required to write the CREATE CONTROLFILE command by hand. This information is not necessary if you are building the CREATE CONTROLFILE command using the output from a previous ALTER DATABASE BACKUP CONTROLFILE TO TRACE command. See Chapter 17 for more information.
necessary because the SCNs will not be changed in the control file and data file header. The data file can simply be restored. See Chapter 17 for more information. 29. B. This command will recover the data file using archive logs if nec-
essary to recover the database until the point of failure. The sequence of events necessary to perform this recovery is as follows: start the database in MOUNT mode, take the tablespace USERS offline, open the database, restore the data file, recover the data file (as shown in the preceding code), and, finally, bring the tablespace back online. See Chapter 17 for more information. 30. C. The init.ora parameter DB_BLOCK_CHECKING=TRUE will enable
this feature within Oracle8i. See Chapter 18 for more information. 31. C. The LogMiner utility will enable you to determine the USERNAME
responsible for deletes by querying the V$LOGMNR_CONTENTS view. See Chapter 18 for more information. 32. A. The DBVERIFY utility assumes the blocks analyzed will be 2KB
unless you specify a different block size. See Chapter 18 for more information. 33. C. The standby database is invalidated because the archive logs’
sequence is reset and the recovery process must end. See Chapter 23 for more information. 34. B. The DBVERIFY utility can verify both online data files and copies
of online data files. See Chapter 18 for more information.
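The recovery sequence described in answer 29 looks roughly like this sketch (it reuses the file path from the question; note that in the MOUNT state the data file, rather than the tablespace, is taken offline):

```sql
STARTUP MOUNT;
ALTER DATABASE DATAFILE '/db01/ORACLE/brdb/users01.dbf' OFFLINE;
ALTER DATABASE OPEN;
-- Restore the data file from backup at the OS level, then:
RECOVER DATAFILE '/db01/ORACLE/brdb/users01.dbf';
ALTER TABLESPACE users ONLINE;
```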
35. B. The Oracle PMON process is a background process. All trace files generated from background processes go into the BACKGROUND_DUMP_DEST location. See Chapter 18 for more information.
36. B. Each allocated channel requires an associated server process to communicate with the target database that is being backed up or restored. See Chapter 21 for more information.
37. A. The proper connect sequence is RMAN TARGET SYSTEM/MANAGER NOCATALOG. See Chapter 19 for more information.
38. A. The ORACLE_SID must be pointing to the appropriate target database before you can connect. See Chapter 19 for more information.
39. A. A channel must be allocated before any backup or recovery processes can begin in RMAN. See Chapter 21 for more information.
40. C. The recovery catalog is used to store long-term information about the backup and restore processes that have occurred in the past. See Chapter 20 for more information.
41. C. When you modify the physical structure of the database, you need to resynch the recovery catalog. See Chapter 20 for more information.
42. B. A backup set is created by the RMAN backup command. The backup set stores data files, control files, and archive logs in an RMAN format. See Chapter 21 for more information.
43. D. The UNCATALOG command will remove from the catalog a file that is no longer needed for recovery operations to the database. See Chapter 20 for more information on RMAN commands.
44. B. The RECOVER command will combine all the incremental, cumulative, archives, and redo logs to the restored file in the RMAN recovery process. See Chapter 22 for more information.
45. B. To activate the standby database, you must issue the ALTER DATABASE ACTIVATE STANDBY DATABASE; command. See Chapter 23 for more information.
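Answers 36, 39, and 42 come together in a typical RMAN run block: a channel is allocated first, and the backup command then writes a backup set through that channel's server process. This is a minimal sketch; the channel name and format string are illustrative:

```sql
run {
  allocate channel c1 type disk;
  backup database format '/backup/db_%U.bks';
  release channel c1;
}
```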
Appendix B
Oracle8i Backup & Recovery Practice Exam
46. C. Any incomplete recovery with RMAN or traditional recovery must be followed by the ALTER DATABASE OPEN RESETLOGS; command. The proper syntax in RMAN is SQL “ALTER DATABASE OPEN RESETLOGS”;. See Chapter 22 for more information.
47. B. When you perform an incomplete recovery, you must reset the database to recognize the new incarnation of the database. You resynch the database only after the database’s metadata has changed to assure that the RMAN catalog matches the contents of the latest control file. See Chapter 20 for more information.
48. A and B. To restore data files, the database must be in the MOUNT or NOMOUNT state if the database is in NOARCHIVELOG mode. This enables the RMAN-initiated server sessions to connect to the database but not have the database open when the data files are restored. See Chapter 22 for more information.
49. A. The database must be in a mounted state to begin an incomplete recovery using RMAN. See Chapter 22 for more information.
50. C. The standby database will wait 30 minutes to receive archive logs from the primary database; otherwise, the database exits managed recovery mode. See Chapter 23 for more information.
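The points in answers 46 through 49 can be combined into one hedged RMAN incomplete-recovery sketch. The timestamp format depends on the session's NLS_DATE_FORMAT, and the channel name and date are illustrative:

```sql
run {
  set until time 'JUN 20 2000 18:00:00';
  allocate channel c1 type disk;
  restore database;
  recover database;
  sql "ALTER DATABASE OPEN RESETLOGS";
}
```

Because the open with RESETLOGS creates a new incarnation, the recovery catalog must then be reset, as answer 47 notes.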
Glossary
Numbers
24 × 7 databases  High-availability databases running 24 hours a day, 7 days a week.
A
alert log  A log that is written to the BACKGROUND_DUMP_DEST, which shows start-ups, shutdowns, ALTER DATABASE and ALTER SYSTEM commands, and a variety of error statements.
alert log file  A text file that logs significant database events and messages. The alert log file stores information about block corruption errors, internal errors, and the nondefault initialization parameters used at instance start-up.
ANALYZE TABLE VALIDATE STRUCTURE  A command that checks a table for corrupt blocks.
archive logs  Logs that are copies of the online redo logs and that are saved to another location before the online copies are reused.
ARCHIVELOG  A mode of database operation. When the Oracle database is run in ARCHIVELOG mode, the online redo log files are copied to another location before they are overwritten. These archived log files can be used for point-in-time recovery of the database. They can also be used for analysis.
archiver process (ARCn)  Performs the copying of the online redo log files to archive log files.
asynchronous I/O Multiple I/O activities performed at the same time without any dependencies. audit trail Records generated by auditing, which are stored in the database in the table SYS.AUD$. Auditing enables the DBA to monitor suspicious database activity. automated tape library (ATL) A tape device that can interface with RMAN and can automatically store and retrieve tapes via tape media software.
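As a brief illustration of the ANALYZE TABLE VALIDATE STRUCTURE entry above, the command is issued from SQL*Plus; the schema and table names here are hypothetical:

```sql
ANALYZE TABLE scott.emp VALIDATE STRUCTURE;
-- Adding CASCADE also validates the table's associated indexes:
ANALYZE TABLE scott.emp VALIDATE STRUCTURE CASCADE;
```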
automatic archiving The automatic creation of archive logs after the appropriate redo logs have been switched. The LOG_ARCHIVE_START parameter must be set to TRUE in the init.ora file for automatic archiving to take place. automatic log recovery Using the AUTO command during the recovery process to automatically apply archive logs.
B
BACKGROUND_DUMP_DEST  An init.ora parameter that determines the location of the alert log and Oracle background process trace files.
BACKUP CONTROLFILE TO TRACE (ASCII control file)  A create control file command, this makes an ASCII backup of the binary control file, which can be executed to re-create the binary control file. It is dumped as a user trace file. This file can be viewed, edited, and run as a script after editing the comments and miscellaneous trace information.
backup piece  A physical object that stores data files, control files, or archive logs and resides within a backup set.
backup script  A script written in different OS scripting languages, such as Korn shell or C shell in the Unix environments.
backup set  A logical object that stores one or more physical backup pieces containing either data files, control files, or archive logs. Backup sets must be processed with the RESTORE command before these files are usable.
before image  Image of the transaction data before the transaction occurred.
bitmap index  An indexing method used by Oracle to create the index by using bitmaps. Used for low-cardinality columns.
block  The smallest unit of storage in an Oracle database. Data is stored in the database in blocks. The block size is defined at the time of creating the database and is a multiple of the operating system block size.
b-tree  An algorithm used for creating indexes.
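The two control file backup forms mentioned in the BACKUP CONTROLFILE TO TRACE entry can be sketched as follows; the backup path is illustrative:

```sql
-- Binary copy of the control file:
ALTER DATABASE BACKUP CONTROLFILE TO '/u01/backup/control.bkp';
-- ASCII CREATE CONTROLFILE script, dumped as a user trace file
-- in the USER_DUMP_DEST directory:
ALTER DATABASE BACKUP CONTROLFILE TO TRACE;
```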
C
cache recovery  The part of instance recovery where all the data that is not in the data files gets reapplied to the data files from the online redo logs.
cancel-based recovery  A type of incomplete recovery that is stopped by the DBA executing a CANCEL command during a manual recovery operation.
cataloging  Storing information in the Recovery Manager catalog.
change vectors  A description of a change made to a single block in the database.
change-based recovery  A type of incomplete recovery that is stopped by a change number designated at the time when the recovery is initiated.
channel allocation  Allocating a physical device to be associated with the server session.
checkpoint process (CKPT)  A checkpoint is an event that flushes the modified data from the buffer cache to the disk and updates the control file and data files. The checkpoint process updates the headers of data files and control files; the actual blocks are written to the file by the DBWn process.
checkpointing  The process of updating the SCN in all the data files and control files in the database in conjunction with all necessary data blocks in the data buffers being written to disk. This is done for the purposes of ensuring database consistency and synchronization.
CKPT  See checkpoint process.
closed backup A backup that occurs when the target database is mounted but not opened. This means that the target database is not available for use during this type of backup. This is also referred to as an offline or cold backup. cold backup Also called a closed, or offline, backup. Occurs when the database is shut down and a physical file copy of all database files is made to another location on disk or a tape drive. commit To save or permanently store the results of a transaction to the database.
complete export  An export that exports all the database objects and resets the export record-keeping tables.
consistent backup  A backup of a target database that is mounted but not opened and was shut down with either a SHUTDOWN IMMEDIATE or SHUTDOWN NORMAL option, but not the SHUTDOWN ABORT option. The database files are stamped with the same SCN at the same point in time. This occurs during a cold backup of the database. No recovery is needed.
control file  Maintains information about the physical structure of the database. The control file contains the database name and timestamp of database creation, along with the name and location of every data file and redo log file.
conventional-path export  The standard export that goes through the SQL-evaluation layer.
cumulative export  An export that performs a cumulative export of all the modified tables or created tables since the last cumulative or complete export.
cumulative incremental backup  A type of backup that backs up only the data blocks that have changed since the most recent backup of the next lowest level, or n – 1 or lower (with n being the existing level of backup).
current online redo logs  Logs that are actively being written to by the LGWR process.
D data block The smallest unit of data storage in Oracle. The block size is specified at the time of database creation. data block buffers Memory buffers containing data blocks that get flushed to disk if modified and committed. data dictionary A collection of database tables and views containing metadata about the database, its structures, its privileges, and its users. Oracle accesses the data dictionary frequently during the parsing of SQL statements.
data file  The data files in a database contain all the database data. One data file can belong to only one database and to one tablespace.
data segment  A segment used to store table or cluster data.
data types  Used to specify certain characteristics for table columns, such as numeric, alphanumeric, date, etc.
database  The physical structure that stores the actual data. The Oracle server consists of the database and the instance.
database buffer cache  The area of memory that caches the database data. It holds the recent blocks that are read from the database data files.
database buffers  See data block buffers.
database event triggers  Triggers created on database events. The events can be server events such as STARTUP, SHUTDOWN, or SERVERERROR, or user events such as LOGON, LOGOFF, or DDL statements.
database writer process (DBWn)  The DBWn process is responsible for writing the changed database blocks from the SGA to the data file. There can be up to 10 database writer processes (DBW0 through DBW9).
DB_BLOCK_CHECKING  An init.ora parameter that forces checks for corrupt blocks while they are being created or modified.
DBMS_REPAIR  An Oracle package procedure that assists in the identification and repair of corrupted blocks.
DBVERIFY  An Oracle utility used to determine whether data files have corrupt blocks.
DBWn  See database writer process.
dead transactions  Transactions that were active during an instance failure.
deferred transaction recovery concept The process of rolling forward dead transactions after instance recovery if a new transaction accesses the dead transaction’s data. degree of parallelism The number of parallel processes you choose to enable for a particular parallel activity such as recovery.
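The DBVERIFY utility described above is invoked from the OS prompt rather than from SQL*Plus. A hedged example follows; the file path is hypothetical, and BLOCKSIZE defaults to 2KB if omitted:

```
dbv file=/u01/oradata/orcl/users01.dbf blocksize=8192 logfile=dbv_users.log
```

The executable name can vary by platform and release, so check your Oracle installation's bin directory.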
DICTIONARY  A data dictionary view that has the name and description of all the data dictionary views in the database.
dictionary-managed tablespace  A tablespace in which the extent allocation and de-allocation information is managed through the data dictionary.
differential incremental backup  A type of backup that backs up only data blocks modified since the most recent backup at the same level or lower.
direct-load insert  A faster method to add rows to a table from existing tables by using the INSERT INTO … SELECT … statement. Direct-load insert bypasses the buffer cache and writes the data blocks directly to the data files.
direct-path export  The type of export that bypasses the SQL-evaluation layer, creating significant performance gains.
dirty buffers  The blocks in the database buffer cache that are changed, but are not yet written to the disk.
disaster recovery  Recovery of a database that has been entirely destroyed due to fire, earthquake, flood, or some other disastrous situation.
distributed transactions  Transactions that occur in remote databases.
DUAL  A dummy table owned by SYS; it has one column and one row and is useful for computing a constant expression with a SELECT statement.
dump file  The file where the logical backup is stored. This file is created by the export and read by the Import utilities.
duplexing  The duplicating of backup sets.
E execute The stage in SQL processing that executes the parsed SQL code from the library cache. Export (EXP) utility A utility from Oracle to unload (export) data to external files in a binary format. The Export utility can export the definitions of all objects in the database. It makes logical backups of the database. extent A contiguous allocation of blocks for data/index storage. An extent has multiple blocks, and a segment can have multiple extents.
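A direct-path export of the kind the glossary describes is requested with the DIRECT parameter on the Export utility's command line. This sketch uses illustrative credentials and file names:

```
exp system/manager FULL=Y DIRECT=Y FILE=full.dmp LOG=full_exp.log
```

Omitting DIRECT=Y (or setting DIRECT=N) performs a conventional-path export through the SQL-evaluation layer.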
extent management Extents allocated to tablespaces can be managed locally in the data files or through the data dictionary. The extent management clause can be specified when creating a tablespace. The default is dictionary managed.
F
fetch  The stage in SQL query processing that returns the data to the user process.
free buffers  The blocks in the database buffer cache that may be overwritten.
full backup A type of backup that backs up all the data blocks in the data files, modified or not. function-based index Indexes created on functions or expressions, to speed up queries containing WHERE clauses with a particular function or expression.
H
header block  The first block in a data file; it contains information about the data file, such as freelist and checkpoint information.
high availability  Describes a database system that has stringent uptime requirements.
high-water mark (HWM) The maximum number of blocks used by the table. The high-water mark is not reset when you delete rows. hot backup Also called an opened, or online, backup. Occurs when the database is open and a physical file copy of the data files associated with each tablespace is made (placed into backup mode with the ALTER TABLESPACE [BEGIN/END] BACKUP commands). See online backup.
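The hot backup sequence from the entry above can be sketched as follows; the tablespace name is illustrative, and the OS copy command depends on your platform:

```sql
ALTER TABLESPACE users BEGIN BACKUP;
-- Copy the tablespace's data files with an OS utility (e.g., cp on Unix),
-- then take the tablespace out of backup mode:
ALTER TABLESPACE users END BACKUP;
```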
I
image copies  Image copies of data files, control files, or archive logs, either individually or as a whole database. These copies are not stored in an RMAN format.
Import (IMP) utility  An Oracle utility to read and import data from the file created by the Export utility. A selective import can be performed using the appropriate parameters. It reads the logical backups generated by the export.
incarnation  A reference of a target database in the recovery catalog.
incomplete recovery  A form of recovery that doesn’t completely recover the database to the point of failure. There are three types of incomplete recovery: cancel-based, time-based, and change-based.
inconsistent backup  A backup of the target database when it is opened but has crashed prior to mounting, or when it was shut down with the SHUTDOWN ABORT option prior to mounting. In this type of backup, the database files are stamped with different SCNs. This occurs during a hot backup of the database. Recovery is needed.
incremental backup  A type of backup that backs up only the data blocks in the data files that were modified since the last incremental backup. There are two types of incremental backups: differential and cumulative.
incremental export  An export that performs an incremental export of all the existing tables and new tables in the database since the last incremental, cumulative, or complete export.
index segment  A segment used to store index information.
index-organized table (IOT)  Table rows stored in a b-tree index, using a primary key. Avoids duplication of storage for table data and index information.
instance  The memory structures and background processes of the Oracle server.
instance failure  An abnormal shutdown of the Oracle database that then requires applying the latest online redo log to the database when it restarts to assure database consistency.
integrity constraints  Structures built in the database to enforce business rules.
interactive export An export whereby the user responds to prompts from the Export utility to perform various actions.
L
large pool  An optional area in the SGA used for specific database operations, such as backup, recovery, or the User Global Area (UGA) space when using an MTS configuration.
LGWR  See log writer process.
library cache  An area in the shared pool of the SGA that stores the parsed SQL statements. When a SQL statement is submitted, the server process searches the library cache for a matching SQL statement; if it finds one, parsing is not done on the SQL again.
list  A simple query of the catalog to tell what has been done to date.
locally managed tablespace  A tablespace that has the extent allocation and de-allocation information managed through bitmaps in the associated data files of the tablespace.
log buffers  Memory buffers containing the entries that get written to the log files.
log file  When using utilities such as SQL*Loader or Export or Import, the status of the operation is written to the log file.
log sequence number  A sequence number assigned to each redo log file.
log writer process (LGWR) The LGWR process is responsible for writing the redo log buffer entries (change vectors) to the online redo log files. A redo log entry is any change, or transaction, that has been applied to the database, committed or not.
LOG_ARCHIVE_DEST An init.ora parameter that determines the destination of the archive logs. LOG_ARCHIVE_DEST_N An init.ora parameter that determines the other destinations of the archive logs, remote or local. This parameter supports up to five locations, N being a number 1 through 5. Only one of these destinations can be remote. LOG_ARCHIVE_DUPLEX_DEST An init.ora parameter that determines the duplexed, or second, destination of archive logs in a two-location archive log configuration. LOG_ARCHIVE_START An init.ora parameter that enables automatic archiving. Logging The recording of DML statements, creation of new objects, and other changes in the redo logs. logical attributes For tables and indexes, logical attributes are the columns, data types, constraints, etc. logical backup Entails reading certain data and writing the data to a file without concern for the physical location. logical structures The database structures as seen by the user. Tablespaces, segments, extents, blocks, tables, and indexes are all examples of logical structures. LogMiner A utility that can be used to analyze the redo log files. It can provide a fix for logical corruption by building redo and undo SQL statements from the contents of the redo logs. LogMiner is a set of PL/SQL packages and dynamic performance views.
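A LogMiner session of the kind described above can be sketched with the DBMS_LOGMNR package; this is a hedged outline that assumes a dictionary file was already created with DBMS_LOGMNR_D.BUILD, and all file names are illustrative:

```sql
-- Register an archived log and start the LogMiner session:
EXECUTE DBMS_LOGMNR.ADD_LOGFILE('/u01/arch/arch_1_100.arc', DBMS_LOGMNR.NEW);
EXECUTE DBMS_LOGMNR.START_LOGMNR(DICTFILENAME => '/u01/logmnr/dict.ora');
-- Find who issued deletes, with the redo and undo SQL for each:
SELECT username, sql_redo, sql_undo
  FROM v$logmnr_contents
 WHERE operation = 'DELETE';
EXECUTE DBMS_LOGMNR.END_LOGMNR;
```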
M
managed recovery mode  Recovering the standby database automatically. This includes transmitting the archive logs automatically to the remote locations specified in the LOG_ARCHIVE_DEST_N parameter and configuring the tnsnames.ora and listener.ora files and their associated utilities.
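Placing a standby database in managed recovery mode can be sketched as follows; this is an outline only, and the TIMEOUT value (in minutes) is illustrative:

```sql
-- On the standby instance:
STARTUP NOMOUNT;
ALTER DATABASE MOUNT STANDBY DATABASE;
RECOVER MANAGED STANDBY DATABASE TIMEOUT 30;
```

With a TIMEOUT of 30, the standby waits up to 30 minutes for the next archive log from the primary before exiting managed recovery mode.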
Management Server The middle tier between the console GUI and managed nodes in the Enterprise Manager setup. It processes and coordinates all system management tasks and distributes these tasks to Intelligent Agents on the nodes. manual archiving The execution of commands to create archive logs. Archive logs are not automatically created after redo log switching. manual log recovery Manually accepting each log file (by pressing the Enter key) to be applied in the recovery process. manual recovery mode Recovering the standby database manually. This includes transmitting the archive logs from the primary database by a manual method such as ftp or a Unix rcp command and applying the logs manually. mean time to recovery (MTTR) The mean (average) time to recover a database from a certain type of failure. media (disk) failures Physical disk failures, or those that occur when the database files cannot be accessed by the instance. media management layer The tape media libraries that allow RMAN to interface with various tape hardware vendors’ tape backup devices. metadata Information about the objects that are available in the database. The data dictionary information is the metadata for the Oracle database. mount A stage in starting up the database. When the instance started mounts a database, the control file is opened to read the data file and redo log file information before the database can be opened. multiplexing Oracle’s mechanism for writing to more than one copy of the redo log file or control file. It involves mirroring, or making duplicate copies. Multiplexing ensures that even if you lose one member of the redo log group or one control file, you can recover using the other one. It intersperses blocks from Oracle data files within a backup set. multithreaded configuration A database configuration whereby one shared server process takes requests from multiple user processes. In a dedicated server configuration, there will be one server process for one user process.
N National Language Support (NLS) Enables Oracle to store and retrieve information in a format and language understandable by users anywhere in the world. The database character set and various other parameters are used to enhance this capability. NOARCHIVELOG Mode of database operation, whereby the redo log files are not preserved for recovery or analysis purposes. Nologging Not recording DML statements, creation of new objects, and other changes in the redo logs—therefore making the changes unrecoverable until the next physical backup. non-current online redo logs Online redo logs that are not in the current or active group being written to. non-media failures Failures that occur for reasons other than disk failure. This type of failure consists of statement failure, process failure, instance failure, and user error.
O object privileges Privileges granted on an object for users other than the object owner. They allow these users to manipulate data in the object or to modify the object. online backup Backup of the database when it is open, or running. Also called a hot backup. online redo logs Redo logs that are being written to by the LGWR process at some point in time. See archive logs. opened backup A backup of the database made when the database is opened for use. This is also referred to as an online or hot backup. Optimal Flexible Architecture (OFA) A standard presenting the optimal way to set up an Oracle database. It includes guidelines for creating database file locations for better performance and management.
ORA_NLS33  An environment variable to set if using a character set other than US7ASCII.
Oracle Enterprise Manager (OEM)  A DBA system management tool that performs a wide variety of DBA tasks, including running the RMAN utility in GUI mode, managing different components of Oracle, and administering the databases at one location.
Oracle Parallel Server  An Oracle database that consists of at least two servers, or nodes, each with an instance but sharing one database.
Oracle Recovery Manager (RMAN)  The Recovery Manager utility, which is responsible for the backup and recovery of Oracle databases.
Oracle Universal Installer (OUI)  A Java-based GUI tool used to install all Oracle products.
ORACLE_HOME The environment variable that defines the location where the Oracle software is installed. ORACLE_SID The environment variable that defines the database instance name. If not using Net8, connections are made to this database instance by default. OS authentication An authentication method used to connect administrators and operators to the database to perform administrative tasks. Connection is made to the database by verifying the operating system privilege.
P package A stored PL/SQL program that holds a set of other programs such as procedures, functions, cursors, variables, and so on. parallel query processes Oracle background processes that process a portion of a query. Each parallel query process runs on a separate CPU. parallel recovery Recovery performed by multiple recovery processes (as opposed to serial recovery, which uses only one recovery process). PARALLEL_MAX_SERVERS An init.ora parameter that determines the maximum number of parallel query processes at any given time.
parameter file  A file with parameters to configure memory, database file locations, and limits for the database. This file is read when the database is started. When using utilities such as SQL*Loader or Export or Import, the command-line parameters can be specified in a parameter file, which can be reused for other exports or imports.
PARFILE  The parameter file that stores export options.
parsing  A stage in SQL processing wherein the syntax of the SQL statement, object names, and user access are verified. Oracle also prepares an execution plan for the statement.
partitioning  Breaking the table or index into multiple smaller, more manageable chunks.
password file authentication  An authentication method used to connect administrators and operators to the database to perform administrative tasks. Oracle creates a file on the server with the SYS password; users are added to this file when they are granted SYSOPER or SYSDBA privilege.
PGA  See Program Global Area.
physical attributes  For tables and indexes, physical attributes mean the physical storage characteristics, such as the extent size, tablespace name, etc.
physical backup  A copy of all the Oracle database files, including the data files, control files, redo logs, and init.ora files.
physical structures  The database structures used to store the actual data and operation of the database. Data files, control files, and redo log files constitute the physical structure of the database.
pinned buffers  The blocks in the database buffer cache that are being accessed.
PMON  See process monitor process.
Private Global Area  See Program Global Area.
private rollback segment A rollback segment available to only the instance that acquires it—by making it online. privileges Authorization granted on an object in the database or an authorization to perform an activity.
procedure or function  A PL/SQL program that is stored in the database in a compiled form. A function always returns one value; a procedure does not. You can pass a parameter to and from the procedure or function.
process  A daemon, or background program, that performs certain tasks.
process failure  The abnormal termination of an Oracle process.
process monitor process (PMON)  Performs recovery of failed user processes. This is a mandatory process and is started by default when the database is started. It frees up all the resources held by the failed processes.
profiles  A set of named parameters used to control the use of resources and to manage passwords.
Program Global Area (PGA)  A non-shared memory area allocated to the server process. Also known as the Process Global Area or Private Global Area.
public rollback segment  A rollback segment available to all instances; public rollback segments are brought online when the first instance starts.
PUBLIC  A user group available in all databases of which all users are members. When a privilege is granted to PUBLIC, it is available to all users in the database.
R read-consistent image Image of the transaction data before the transaction occurred. This image is available to all other users not executing the transaction. read-only mode A standby database mode that enables users to query the standby database without writing or making modifications. read-only tablespace A tablespace that allows only read activity, such as SELECT statements. It is available only for querying. The data is static and doesn’t change. No write activity (for example, INSERT, UPDATE, and DELETE statements) is allowed. Read-only tablespaces need to be backed up only once. read-write tablespace A tablespace that allows both read and write activity, including SELECT, INSERT, UPDATE, and DELETE statements. This is the default tablespace mode.
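Switching a tablespace between the two modes described above is a single statement; the tablespace name here is hypothetical:

```sql
ALTER TABLESPACE hist_data READ ONLY;   -- one final backup is then sufficient
ALTER TABLESPACE hist_data READ WRITE;  -- returns to the default mode
```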
RECOVER DATAFILE  The RMAN command responsible for recovering a data file.
RECOVER TABLESPACE  The RMAN command responsible for recovering a tablespace.
recovery catalog  Information stored in a database used by the RMAN utility to back up and restore databases.
Recovery Manager (RMAN)  An automated tool from Oracle that can perform and manage the backup and recovery process.
RECOVERY_PARALLELISM  An init.ora parameter that determines the recovery parallelism at the database level.
redo buffers  See log buffers.
redo entry  See redo record.
redo log buffer The area in the SGA that records all changes to the database. The changes are known as redo entries, or change vectors, and are used to reapply the changes to the database in case of a failure. redo log file The redo log buffers from the SGA are periodically copied to the redo log files. Redo log files are critical to database recovery. redo logs Record all changes to the database, whether the transactions are committed or rolled back. Redo logs are classified as online redo logs or offline redo logs (also called archive logs), which are simply copies of online redo logs. redo record A group of change vectors. Redo entries record data that you can use to reconstruct all changes made to the database, including the rollback segments. Redundant Array of Inexpensive Disks (RAID) The storage of data on multiple disks for fault tolerance, to protect against individual disk crashes. If one disk fails, then that disk can be rebuilt from the other disks. RAID has many variations of how to redundantly store the data on separate disks, termed RAID 0 through 5 in most cases. report A query of the catalog that is more detailed than a list and that tells what may need to be done.
RESETLOGS  The process that resets the redo log files’ sequence number.
resetting  Updating the recovery catalog for a target database that has been opened with the ALTER DATABASE OPEN RESETLOGS command.
restore  To copy backup files to disk from the backup location.
RESTORE DATABASE  The RMAN command responsible for retrieving the database backup and converting it from the RMAN format back to the OS-specific file format on disk.
RESTORE DATAFILE  The RMAN command responsible for retrieving the data file and converting it from the RMAN format back to the OS-specific file format on disk.
RESTORE TABLESPACE  The RMAN command responsible for retrieving the tablespace’s associated data file and converting it from the RMAN format back to the OS-specific file format on disk.
resynchronizing  Updating the recovery catalog with either physical or logical information (or both) about the target database.
reverse key index  An index in which column values are reversed before adding to the index entry.
role  A named group of system and object privileges used to ease the administration of privileges to users.
roll back  To undo a transaction from the database.
rollback entries  The block and row information used to undo the changes made to a row.
roll-forward-and-roll-back process Applying all the transactions, committed or not committed, to the database and then undoing all uncommitted transactions. row chaining Storing a row in multiple blocks because the entire row cannot fit in one block. Usually this happens when the table has LONG or LOB columns. Oracle recommends using CLOB instead of LONG, since the LONG data type is being phased out. row migration Moving a row from one block to another due to an update operation, because there is not enough free space available to accommodate the updated row.
ROWID Exact physical location of the row on disk. ROWID is a pseudocolumn in all tables.
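For example, ROWID can be selected like any other column; scott.emp here is Oracle's familiar sample schema, used purely for illustration:

```sql
SELECT rowid, ename
FROM   scott.emp
WHERE  ename = 'KING';
```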
S

schema  A logical structure used to group a set of database objects owned by a user.

segment  A logical structure that holds data. Every object created to store data is allocated a segment. A segment has one or more extents.

self-contained tablespace  A tablespace (or set of tablespaces) whose objects do not reference any objects outside of the tablespace. To transport a tablespace to another database, the tablespace must be self-contained.

server process  A process that takes requests from the user process and applies them to the Oracle database.

session  A job or task that Oracle manages. When you log in to the database by using SQL*Plus or any tool, you start a session.

SET UNTIL [TIME/CHANGE/CANCEL]  The clause in RMAN that is necessary to perform an incomplete recovery, causing the recovery process to terminate at a timestamp or SCN, or to be manually canceled.

SGA  See System Global Area.

Shared Global Area  See System Global Area.

shared pool  An area in the SGA that holds information such as parsed SQL, PL/SQL procedures and packages, the data dictionary, locks, character set information, security attributes, and so on.

single point of failure  A point of failure that can bring the whole database down.

SMON  See system monitor process.

sort area  An area in the PGA that is used for sorting data during query processing.

space quota  The maximum space allowed for a user in a tablespace for creating objects.
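A hedged sketch of SET UNTIL inside an RMAN run block (the date literal must match the session's NLS_DATE_FORMAT, and the channel name is illustrative; the database must be mounted, not open):

```sql
run {
  set until time 'Jun 01 2000 18:00:00';
  allocate channel c1 type disk;
  restore database;
  recover database;
  sql 'ALTER DATABASE OPEN RESETLOGS';
  release channel c1;
}
```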
SQL*Loader  A utility used to load data into Oracle tables from text files.

standby database  A duplicate copy of your production database, usually kept in a different geographic location and in a state of constant recovery.

statement failure  Syntactic errors in the construction of a SQL statement.
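For illustration, a minimal SQL*Loader control file; the table emp, its columns, and the file names are assumptions, not from the text:

```text
LOAD DATA
INFILE 'emp.dat'
INTO TABLE emp
FIELDS TERMINATED BY ','
(empno, ename, sal)
```

It would typically be invoked from the operating system prompt with something like sqlldr userid=scott/tiger control=emp.ctl log=emp.log.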
streaming  Keeping a tape device constantly reading or writing during the desired activity.

structure  Either a physical or logical object that is part of the database, such as files or the database objects themselves.

synchronous I/O  Single I/O activities, each dependent upon completion of the previous activity.

system change number (SCN)  A unique number generated at the time of a COMMIT, acting as an internal counter to the Oracle database, and used for recovery and read consistency.

System Global Area  A memory area in the Oracle instance that is shared by all users.

system monitor process (SMON)  Performs instance recovery at database start-up by using the online redo log files. It is also responsible for cleaning up temporary segments that are no longer used and for coalescing contiguous free space in the tablespaces.

system privileges  Privileges granted to perform an action, as opposed to privileges on an object.

T

tablespace  A logical storage structure at the highest level. A tablespace can have many segments that may be used for data, index, sorting (temporary), or rollback information. The data files are directly related to tablespaces. A segment can belong to only one tablespace.

tablespace point-in-time recovery (TSPITR)  A type of recovery whereby logical and physical backups are combined to recover a tablespace to a different point in time from the rest of the database.
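A sketch of the transportable tablespace steps (tablespace and file names are illustrative; the exp/imp lines run at the OS prompt, not in SQL*Plus):

```sql
-- 1. Confirm the set is self-contained (see self-contained tablespace):
EXECUTE DBMS_TTS.TRANSPORT_SET_CHECK('USERS', TRUE);
SELECT * FROM transport_set_violations;

-- 2. Freeze the tablespace, then export only its metadata:
ALTER TABLESPACE users READ ONLY;
-- OS: exp userid=sys transport_tablespace=y tablespaces=users file=users.dmp

-- 3. Copy users.dmp plus the tablespace's data files to the target, then:
-- OS: imp userid=sys transport_tablespace=y file=users.dmp
--        datafiles='/u02/oradata/users01.dbf'
```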
target database  The database that will be backed up.

temporary segment  A segment created for sorting data. Temporary segments are also created when an index is built or a table is created using CREATE TABLE AS.

temporary tablespace  A tablespace that stores temporary segments for sorting and for creating tables and indexes.

time-based recovery  A type of incomplete recovery that is stopped at a point in time designated when the recovery is initiated.

transaction  Any change, addition, or deletion of data.

transaction recovery  The part of instance recovery in which dead transactions are rolled back by the SMON process.

transportable tablespace  A feature of Oracle8i whereby a tablespace belonging to one database can be copied to another database.

U

UNTIL CANCEL  The clause in the RECOVER command that designates cancel-based recovery.

UNTIL CHANGE  The clause in the RECOVER command that designates change-based recovery.

UNTIL TIME  The clause in the RECOVER command that designates time-based recovery.

user error  An unintentional, harmful action on a database by a user, such as deleting data or dropping tables.

user process  A process started by the application tool that communicates with the Oracle server process.

USER_DUMP_DEST  An init.ora parameter that determines the location of the user process trace files.
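The three clauses as issued from SQL*Plus (the UNTIL TIME literal uses Oracle's fixed 'YYYY-MM-DD:HH24:MI:SS' format; the SCN value is illustrative):

```sql
RECOVER DATABASE UNTIL TIME '2000-06-01:18:00:00';  -- time-based
RECOVER DATABASE UNTIL CHANGE 309121;               -- change-based (SCN)
RECOVER DATABASE UNTIL CANCEL;                      -- cancel-based

-- All three forms of incomplete recovery end with:
ALTER DATABASE OPEN RESETLOGS;
```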
V V$BACKUP The table providing backup information of the data files in the database. Primarily used to identify which data files are in hot backup mode. V$CONTROLFILE The dictionary view that gives the names of the control files. It provides information about the control files that can be useful in the backup and recovery process. V$DATAFILE The view providing information about the data files that can be useful in the backup and recovery process. V$DATAFILE_HEADER The view used to identify data files that are being backed up. V$LOG_HISTORY A V$ view that displays history regarding the redo log files. V$LOGFILE The dictionary view that gives the redo log file names and status of each redo log file. It provides information about the online redo log files that can be useful in the backup and recovery process. V$LOGMNR_CONTENTS A table that stores the output from the LogMiner utility. This table contains the undo and redo commands for the redo log information. V$RECOVER_FILE A V$ view that shows the data file needing recovery. V$TABLESPACE The view providing information about the tablespaces that can be useful in the backup and recovery process.
W

whole database backup  A backup that captures the complete physical image of an Oracle database: the data files, control files, redo log files, and the init.ora file.